This post is a collection of organized reading materials.
AlphaGo Zero
Paper: Silver D, Schrittwieser J, Simonyan K, et al. Mastering the game of Go without human knowledge[J]. Nature, 2017, 550(7676): 354-359.
Reference articles:
An In-Depth Analysis of AlphaGo Zero and Deep Reinforcement Learning
AlphaGo Zero Paper Explained
Transformer - Attention is all you need
Paper: Vaswani A, Shazeer N, Parmar N, et al. Attention is all you need[C]//Advances in Neural Information Processing Systems. 2017: 5998-6008.
Reference articles:
Self-Attention and Transformer
Line-by-Line Reading of the Transformer Paper [Paper Reading] -- Mu Li
Plain-Language Machine Learning: The Encoder-Decoder Framework
Transformer abandons the traditional CNN and RNN entirely: the network uses no recurrence or convolution and is built from attention mechanisms alone. It is composed of Multi-Head Attention layers and feed-forward neural networks, which allows sequential (temporal) information to be processed fully in parallel.
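The core operation behind the Multi-Head Attention mentioned above is scaled dot-product attention, softmax(QKᵀ/√d_k)V. A minimal NumPy sketch (the function name and toy shapes are illustrative, not taken from the paper's code; real multi-head attention additionally splits Q, K, V into heads and applies learned projections):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Compute softmax(Q K^T / sqrt(d_k)) V for a single attention head."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                        # (seq_q, seq_k) similarity scores
    scores -= scores.max(axis=-1, keepdims=True)           # stabilize softmax numerically
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)         # row-wise softmax: each query's
                                                           # weights over keys sum to 1
    return weights @ V                                     # weighted sum of value vectors

# Toy example: 3 query positions attend over 4 key/value positions, d_k = 8.
rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 8))
K = rng.standard_normal((4, 8))
V = rng.standard_normal((4, 8))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 8): one output vector per query position
```

Because every output row depends only on matrix products over the whole sequence, all positions are computed at once, which is exactly the parallelism over time steps that RNNs lack.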
GoogLeNet - Going deeper with convolutions
Paper: Szegedy C, Liu W, Jia Y, et al. Going deeper with convolutions[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015: 1-9.
Reference articles:
[Paper Study] GoogLeNet - Going deeper with convolutions
GoogLeNet / Inception V3 - Mu Li
[In-Depth AI Paper Reading] GoogLeNet (Inception V1) Deep Learning Image Classification Algorithm - Tongji Zihao
Dropout: A Simple Way to Prevent Neural Networks from Overfitting
Paper: Srivastava N, Hinton G, Krizhevsky A, et al. Dropout: a simple way to prevent neural networks from overfitting[J]. The Journal of Machine Learning Research, 2014, 15(1): 1929-1958.
Reference articles:
A Deep Dive into Dropout
[In-Depth AI Paper Reading] Dropout
ReLU - Deep sparse rectifier neural networks
Paper: Glorot X, Bordes A, Bengio Y. Deep sparse rectifier neural networks[C]//Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics. JMLR Workshop and Conference Proceedings, 2011: 315-323.
Reference articles:
Deep Learning Paper Notes (ReLU)
DL Paper Notes (ReLU)