Never underestimate the Transformer from any angle. It is important enough to say four times:
Never underestimate the Transformer from any angle.
Never underestimate the Transformer from any angle.
Never underestimate the Transformer from any angle.
The whole "Attention Is All You Need" project is an extraordinary piece of work, a genuine stroke of genius.
It is even more remarkable than the later works that approached the problem from other angles, such as BERT, the GPT series, and FlashAttention.
This article follows the paper "Attention Is All You Need" and focuses on the mathematical formulation of the algorithm.
Let the sentence length be $n$; for example $n = 1024$ or $n = 2048$, i.e., a sentence may contain at most 1024 or 2048 words.
1. Positional encoding
Following the paper, position $pos$ (i.e., column number $pos$) is encoded with sinusoids of different frequencies:

$$PE_{(2i,\ pos)} = \sin\!\left(\frac{pos}{10000^{2i/d}}\right), \qquad PE_{(2i+1,\ pos)} = \cos\!\left(\frac{pos}{10000^{2i/d}}\right),$$

where $d$ is the embedding dimension (e.g., $d = 512$). As can be seen, $PE \in \mathbb{R}^{d \times n}$ is a matrix made up of $n$ column vectors $p_1, p_2, \dots, p_n$, where the column vector $p_j$ is the positional-encoding vector for position (column number) $j$.
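For concreteness, here is a minimal NumPy sketch (an illustration for these notes, not code from the paper; the function name positional_encoding and the sizes are assumptions) that builds the $d \times n$ matrix $PE$ described above:

```python
import numpy as np

def positional_encoding(d: int, n: int) -> np.ndarray:
    """Return the d x n sinusoidal positional-encoding matrix PE.

    Rows are embedding dimensions, columns are positions, matching the
    column-vector convention used in these notes.
    """
    pos = np.arange(n)[np.newaxis, :]            # shape (1, n): positions 0 .. n-1
    i = np.arange(d // 2)[:, np.newaxis]         # shape (d/2, 1): dimension-pair index
    angle = pos / np.power(10000.0, 2 * i / d)   # shape (d/2, n)

    pe = np.zeros((d, n))
    pe[0::2, :] = np.sin(angle)                  # even rows get the sine
    pe[1::2, :] = np.cos(angle)                  # odd rows get the cosine
    return pe

# Example: d = 512, n = 1024 as in the running example of these notes.
PE = positional_encoding(512, 1024)
print(PE.shape)   # (512, 1024)
```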
2. Input vectors
Suppose the word-embedding vector of the first word of this sentence is $x_1$, that of the second word is $x_2$, and so on, up to at most $x_n$.
If the sentence contains fewer than $n$ words, the trailing positions have no corresponding word, and their $x_j$ are simply padding (e.g., zero vectors).
Let $X = [x_1, x_2, \dots, x_n] \in \mathbb{R}^{d \times n}$ denote the word-embedding matrix of the sentence. To make every word carry position information, the positional-encoding vector is added directly onto each word's embedding vector:

$$x'_j = x_j + p_j, \qquad j = 1, 2, \dots, n.$$

In matrix form:

$$X' = X + PE.$$

$X'$ serves as the input to the self-attention module of the first encoder layer.
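As a small illustration (assumed code, reusing the positional_encoding sketch above), forming $X'$ is just an element-wise sum:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 512, 1024
X = rng.normal(size=(d, n))              # stand-in for the word-embedding matrix (one column per word)
X_prime = X + positional_encoding(d, n)  # X' = X + PE: the input to the first encoder layer
print(X_prime.shape)                     # (512, 1024)
```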
3. The complete computation of one encoder layer
For each head $i$ ($i = 1, \dots, 8$) of the multi-head self-attention, the input $X'$ is projected into queries, keys and values, and scaled dot-product attention is computed (column-vector convention, $d_k = d/8 = 64$; the softmax is taken over each column):

$$Q_i = W_i^Q X', \quad K_i = W_i^K X', \quad V_i = W_i^V X', \qquad Z_i = V_i\,\mathrm{softmax}\!\left(\frac{K_i^\top Q_i}{\sqrt{d_k}}\right).$$

Here $Z_i$ is the output matrix produced by $\mathrm{head}_i$ of the multi-head attention. The outputs of the 8 heads are concatenated along the embedding (row) dimension, projected by $W^O$, and combined with the residual connection and the first Norm():

$$Z = \mathrm{Norm}\!\left(X' + W^O\,[Z_1; Z_2; \dots; Z_8]\right).$$

The result then passes through this layer's feed-forward network (FFN), again followed by a residual connection and a Norm():

$$\mathrm{FFN}(Z) = W_2\,\max(0,\ W_1 Z + b_1) + b_2, \qquad Y = \mathrm{Norm}\!\left(Z + \mathrm{FFN}(Z)\right).$$

$Y$ is then sent into the next encoder layer, which performs exactly the same computation, only with its own set of weights $W_i^Q, W_i^K, W_i^V, W^O, W_1, W_2$.
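To make the data flow concrete, here is a minimal NumPy sketch of one encoder layer under the conventions above (an illustration, not the article's code; the helper names softmax_cols, layer_norm, encoder_layer and the random toy weights are assumptions):

```python
import numpy as np

def softmax_cols(a):
    """Numerically stable softmax over each column."""
    a = a - a.max(axis=0, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=0, keepdims=True)

def layer_norm(A, eps=1e-6):
    """Normalise every column of A to zero mean and unit variance (details in section 4)."""
    mu = A.mean(axis=0, keepdims=True)
    var = A.var(axis=0, keepdims=True)
    return (A - mu) / np.sqrt(var + eps)

def encoder_layer(X, p, h=8):
    """One encoder layer in the column-vector convention: X is (d, n), the output Y is (d, n)."""
    d, n = X.shape
    d_k = d // h

    # Multi-head self-attention: h heads, each producing a (d_k, n) output Z_i.
    heads = []
    for i in range(h):
        Q, K, V = p["WQ"][i] @ X, p["WK"][i] @ X, p["WV"][i] @ X
        A = softmax_cols(K.T @ Q / np.sqrt(d_k))   # (n, n) attention weights, one column per query
        heads.append(V @ A)                        # Z_i, shape (d_k, n)
    Z = np.concatenate(heads, axis=0)              # stack along the row (embedding) dimension -> (d, n)
    Z = layer_norm(X + p["WO"] @ Z)                # residual connection + first Norm()

    # Position-wise feed-forward network, then residual connection + second Norm().
    F = p["W2"] @ np.maximum(0.0, p["W1"] @ Z + p["b1"]) + p["b2"]
    return layer_norm(Z + F)

# Toy usage with random weights (learned in a real model): d = 512, d_ff = 2048, n = 16.
rng = np.random.default_rng(0)
d, d_ff, n, h = 512, 2048, 16, 8
p = {
    "WQ": rng.normal(size=(h, d // h, d)) * 0.02,
    "WK": rng.normal(size=(h, d // h, d)) * 0.02,
    "WV": rng.normal(size=(h, d // h, d)) * 0.02,
    "WO": rng.normal(size=(d, d)) * 0.02,
    "W1": rng.normal(size=(d_ff, d)) * 0.02, "b1": np.zeros((d_ff, 1)),
    "W2": rng.normal(size=(d, d_ff)) * 0.02, "b2": np.zeros((d, 1)),
}
Y = encoder_layer(rng.normal(size=(d, n)), p)
print(Y.shape)   # (512, 16)
```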
4. Details of the Norm() operation
Two normalize() operations appear in every layer, namely the two shown above:

$$Z = \mathrm{Norm}\!\left(X' + W^O\,[Z_1; \dots; Z_8]\right), \qquad Y = \mathrm{Norm}\!\left(Z + \mathrm{FFN}(Z)\right).$$
The matrices fed into and produced by Norm() here, $Y$ and $Z$ among them, all have the embedding dimension $d$ as their number of rows;
the number of columns of $Y$ is the maximum sentence length $n$ (max_sentence_length);
$Z$ likewise has $n$ columns: although it is built from the result matrices of the 8 heads, those are concatenated along the row (embedding) dimension, so after the concatenation and the $W^O$ projection the result is again a $d \times n$ matrix.
In any case, the normalize operation acts on each column of $Y$ and $Z$ separately, rescaling that column so that it looks like a standard normal distribution (zero mean, unit variance). This is said to speed up training and make the model converge faster.
The concrete computation is as follows.
Suppose the matrix to be processed by Norm() is denoted $A \in \mathbb{R}^{d \times n}$, with elements $a_{ij}$ ($i$ indexes the $d$ rows, $j$ the $n$ columns).
Step 1: working column by column, compute the mean of the elements in each column:

$$\mu_j = \frac{1}{d}\sum_{i=1}^{d} a_{ij}.$$

Step 2: still column by column, compute the variance of each column:

$$\sigma_j^2 = \frac{1}{d}\sum_{i=1}^{d} \left(a_{ij} - \mu_j\right)^2.$$

Step 3: normalize every element: subtract the column mean from it, then divide by the column standard deviation:

$$\hat{a}_{ij} = \frac{a_{ij} - \mu_j}{\sqrt{\sigma_j^2 + \epsilon}}.$$

The $\epsilon$ added to the denominator only guards against the extremely unlikely case $\sigma_j^2 = 0$, which would otherwise make the denominator zero.
Summary:
The overall effect of the three steps above is

$$\mathrm{Norm}(A)_{:,\,j} = \frac{A_{:,\,j} - \mu_j}{\sqrt{\sigma_j^2 + \epsilon}}, \qquad j = 1, \dots, n,$$

i.e., every column of $A$ is shifted and rescaled to zero mean and (approximately) unit variance.
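A minimal NumPy sketch of the three steps (illustrative, not the article's own code); note that LayerNorm as used in practice additionally applies a learned per-dimension gain $\gamma$ and bias $\beta$, which are omitted here to mirror exactly the three steps above:

```python
import numpy as np

def norm_columns(A: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Normalise each column of A: subtract the column mean, divide by the column std."""
    mu = A.mean(axis=0, keepdims=True)    # step 1: column means, shape (1, n)
    var = A.var(axis=0, keepdims=True)    # step 2: column variances, shape (1, n)
    return (A - mu) / np.sqrt(var + eps)  # step 3: normalise; eps guards against division by zero

A = np.random.default_rng(0).normal(loc=3.0, scale=5.0, size=(512, 16))
A_hat = norm_columns(A)
print(A_hat.mean(axis=0)[:3], A_hat.std(axis=0)[:3])   # each column: mean ≈ 0, std ≈ 1
```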
5. The concrete computation of the FFN
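A minimal sketch of this computation (assumed shapes following the paper: $d = 512$, inner dimension $d_{ff} = 2048$; the random weights stand in for learned parameters):

```python
import numpy as np

def ffn(Z: np.ndarray, W1, b1, W2, b2) -> np.ndarray:
    """Position-wise FFN: W2 @ max(0, W1 @ Z + b1) + b2, applied to every column of Z."""
    return W2 @ np.maximum(0.0, W1 @ Z + b1) + b2

d, d_ff, n = 512, 2048, 16
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(d_ff, d)) * 0.02, np.zeros((d_ff, 1))
W2, b2 = rng.normal(size=(d, d_ff)) * 0.02, np.zeros((d, 1))
F = ffn(rng.normal(size=(d, n)), W1, b1, W2, b2)
print(F.shape)   # (512, 16)
```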
6. Further references
Original paper:
https://arxiv.org/abs/1706.03762
The Illustrated Transformer – Jay Alammar – Visualizing machine learning one concept at a time: http://jalammar.github.io/illustrated-transformer/
图解Transformer(完整版) ("The Illustrated Transformer", complete Chinese edition) – the best explanation of the Transformer this author has read: https://mp.weixin.qq.com/s?__biz=MzI4MDYzNzg4Mw==&mid=2247515317&idx=3&sn=d06f49715290c8f8c56144031d1e60b3&chksm=ebb78461dcc00d77b57d12d4ec9388054ffa0e06fa1b2454e9c7f4b785f114983fe4708ecf0a&scene=27
自然语言处理Transformer模型最详细讲解(图解版) ("The most detailed, illustrated explanation of the Transformer for NLP") – CSDN blog: https://blog.csdn.net/m0_47256162/article/details/127339899
Transformer详解 ("Transformer explained in detail") – mathor, with a companion Bilibili video walkthrough: https://wmathor.com/index.php/archives/1438/
Transformers from scratch – peterbloem.nl: https://peterbloem.nl/blog/transformers
To be continued ...