Notes on a Simple Derivation of the Backpropagation Algorithm
1. Fully Connected Neural Network
The forward pass of this structure can be written as:
$$z^{(1)} = W^{(1)}x + b^{(1)}$$
$$a^{(1)} = \sigma(z^{(1)})$$
$$z^{(2)} = W^{(2)}a^{(1)} + b^{(2)}$$
$$a^{(2)} = \sigma(z^{(2)})$$
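As a concrete reference, here is a minimal NumPy sketch of this forward pass. The variable names, the random parameter values, and the choice of the sigmoid for $\sigma$ are our own assumptions for illustration:

```python
import numpy as np

def sigmoid(z):
    # A common choice for the activation sigma; assumed here.
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical parameters for the 2-layer, 2-neurons-per-layer network.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 2)), rng.normal(size=(2, 1))
W2, b2 = rng.normal(size=(2, 2)), rng.normal(size=(2, 1))

x = rng.normal(size=(2, 1))   # input column vector
z1 = W1 @ x + b1              # z^(1) = W^(1) x + b^(1)
a1 = sigmoid(z1)              # a^(1) = sigma(z^(1))
z2 = W2 @ a1 + b2             # z^(2) = W^(2) a^(1) + b^(2)
a2 = sigmoid(z2)              # a^(2) = sigma(z^(2))
```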
2. Notation
- $\delta^{(x)}_{i}$: the scalar $\frac{\partial Loss}{\partial z_i^{(x)}}$.
- $\delta^{(x)}$: the vector with components $\delta^{(x)}_{1}, \delta^{(x)}_{2}, \dots, \delta^{(x)}_{n}$.
- $\odot$: element-wise multiplication, e.g.

$$\begin{bmatrix} a\\b \end{bmatrix} \odot \begin{bmatrix} c\\d \end{bmatrix} = \begin{bmatrix} ac\\bd \end{bmatrix}, \qquad \begin{bmatrix} a&b\\c&d \end{bmatrix} \odot \begin{bmatrix} e\\f \end{bmatrix} = \begin{bmatrix} ae&be\\cf&df \end{bmatrix}$$
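In NumPy, the `*` operator with broadcasting behaves exactly like this $\odot$; a quick sketch reproducing both examples above:

```python
import numpy as np

# Column vector ⊙ column vector: element-wise product.
u = np.array([[1.0], [2.0]])            # [a; b]
v = np.array([[3.0], [4.0]])            # [c; d]
print(u * v)                            # [[3.], [8.]]  i.e. [ac; bd]

# Matrix ⊙ column vector: each row is scaled by the matching element.
M = np.array([[1.0, 2.0], [3.0, 4.0]])  # [a b; c d]
w = np.array([[10.0], [100.0]])         # [e; f]
print(M * w)                            # [[10., 20.], [300., 400.]]  i.e. [ae be; cf df]
```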
3. Derivation of the Fully Connected Backpropagation Formulas
For the 2-layer, 2-neurons-per-layer fully connected network given in this post, we derive the backpropagation equations.
We want to find $\frac{\partial Loss}{\partial W^{(1)}}, \frac{\partial Loss}{\partial W^{(2)}}, \frac{\partial Loss}{\partial b^{(1)}}, \frac{\partial Loss}{\partial b^{(2)}}, \delta^{(1)}, \delta^{(2)}$.
From the model diagram and the chain rule we obtain the equations below.
To find $\delta^{(2)}$, first consider:
$$\delta^{(2)}_1 = \frac{\partial Loss}{\partial a^{(2)}_1} \cdot \frac{\partial a^{(2)}_1}{\partial z^{(2)}_1} = \frac{\partial Loss}{\partial a^{(2)}_1} \cdot \sigma'(z^{(2)}_1)$$
$$\delta^{(2)}_2 = \frac{\partial Loss}{\partial a^{(2)}_2} \cdot \frac{\partial a^{(2)}_2}{\partial z^{(2)}_2} = \frac{\partial Loss}{\partial a^{(2)}_2} \cdot \sigma'(z^{(2)}_2)$$
Then
$$\delta^{(2)} = \begin{bmatrix} \delta^{(2)}_1 \\ \delta^{(2)}_2 \end{bmatrix} = \begin{bmatrix} \frac{\partial Loss}{\partial a^{(2)}_1} \\ \frac{\partial Loss}{\partial a^{(2)}_2} \end{bmatrix} \odot \begin{bmatrix} \sigma'(z^{(2)}_1) \\ \sigma'(z^{(2)}_2) \end{bmatrix}$$
To find $\frac{\partial Loss}{\partial W^{(2)}}$, first consider:
$$\frac{\partial Loss}{\partial w^{(2)}_{11}} = \delta^{(2)}_1 \cdot \frac{\partial z^{(2)}_1}{\partial w^{(2)}_{11}} = \delta^{(2)}_1 \cdot a^{(1)}_1$$
$$\frac{\partial Loss}{\partial w^{(2)}_{12}} = \delta^{(2)}_1 \cdot \frac{\partial z^{(2)}_1}{\partial w^{(2)}_{12}} = \delta^{(2)}_1 \cdot a^{(1)}_2$$
$$\frac{\partial Loss}{\partial w^{(2)}_{21}} = \delta^{(2)}_2 \cdot \frac{\partial z^{(2)}_2}{\partial w^{(2)}_{21}} = \delta^{(2)}_2 \cdot a^{(1)}_1$$
$$\frac{\partial Loss}{\partial w^{(2)}_{22}} = \delta^{(2)}_2 \cdot \frac{\partial z^{(2)}_2}{\partial w^{(2)}_{22}} = \delta^{(2)}_2 \cdot a^{(1)}_2$$
Then
$$\frac{\partial Loss}{\partial W^{(2)}} = \begin{bmatrix} \delta^{(2)}_1 a^{(1)}_1 & \delta^{(2)}_1 a^{(1)}_2 \\ \delta^{(2)}_2 a^{(1)}_1 & \delta^{(2)}_2 a^{(1)}_2 \end{bmatrix} = \begin{bmatrix} \delta^{(2)}_1 \\ \delta^{(2)}_2 \end{bmatrix} \cdot \begin{bmatrix} a^{(1)}_1 & a^{(1)}_2 \end{bmatrix} = \delta^{(2)} \cdot a^{(1)T}$$
To find $\frac{\partial Loss}{\partial b^{(2)}}$, first consider:
$$\frac{\partial Loss}{\partial b^{(2)}_1} = \delta^{(2)}_1 \cdot 1$$
$$\frac{\partial Loss}{\partial b^{(2)}_2} = \delta^{(2)}_2 \cdot 1$$
Then
$$\frac{\partial Loss}{\partial b^{(2)}} = \begin{bmatrix} \delta^{(2)}_1 \\ \delta^{(2)}_2 \end{bmatrix} \odot \begin{bmatrix} 1 \end{bmatrix} = \delta^{(2)}$$
To find $\delta^{(1)}$, first consider:
$$\delta^{(1)}_1 = \delta^{(2)}_1 \cdot \frac{\partial z^{(2)}_1}{\partial a^{(1)}_1} \cdot \frac{\partial a^{(1)}_1}{\partial z^{(1)}_1} + \delta^{(2)}_2 \cdot \frac{\partial z^{(2)}_2}{\partial a^{(1)}_1} \cdot \frac{\partial a^{(1)}_1}{\partial z^{(1)}_1} = \delta^{(2)}_1 w^{(2)}_{11} \sigma'(z^{(1)}_1) + \delta^{(2)}_2 w^{(2)}_{21} \sigma'(z^{(1)}_1)$$
$$\delta^{(1)}_2 = \delta^{(2)}_1 \cdot \frac{\partial z^{(2)}_1}{\partial a^{(1)}_2} \cdot \frac{\partial a^{(1)}_2}{\partial z^{(1)}_2} + \delta^{(2)}_2 \cdot \frac{\partial z^{(2)}_2}{\partial a^{(1)}_2} \cdot \frac{\partial a^{(1)}_2}{\partial z^{(1)}_2} = \delta^{(2)}_1 w^{(2)}_{12} \sigma'(z^{(1)}_2) + \delta^{(2)}_2 w^{(2)}_{22} \sigma'(z^{(1)}_2)$$
Then
$$\delta^{(1)} = \begin{bmatrix} w^{(2)}_{11} & w^{(2)}_{21} \\ w^{(2)}_{12} & w^{(2)}_{22} \end{bmatrix} \cdot \begin{bmatrix} \delta^{(2)}_1 \\ \delta^{(2)}_2 \end{bmatrix} \odot \begin{bmatrix} \sigma'(z^{(1)}_1) \\ \sigma'(z^{(1)}_2) \end{bmatrix} = W^{(2)T} \cdot \delta^{(2)} \odot \sigma'(z^{(1)})$$
Finding $\frac{\partial Loss}{\partial W^{(1)}}$ and $\frac{\partial Loss}{\partial b^{(1)}}$ works exactly the same way as finding $\frac{\partial Loss}{\partial W^{(2)}}$ and $\frac{\partial Loss}{\partial b^{(2)}}$ above (the same pattern gives $\frac{\partial Loss}{\partial W^{(1)}} = \delta^{(1)} \cdot x^{T}$ and $\frac{\partial Loss}{\partial b^{(1)}} = \delta^{(1)}$), so we will not repeat it here.
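Collecting these formulas, here is a minimal backward-pass sketch continuing the forward-pass snippet above. The squared-error loss $Loss = \frac{1}{2}\lVert a^{(2)} - t\rVert^2$ (so that $\partial Loss/\partial a^{(2)} = a^{(2)} - t$) and the target `t` are our own assumptions for illustration:

```python
def sigmoid_prime(z):
    s = sigmoid(z)
    return s * (1.0 - s)   # derivative of the sigmoid

t = np.array([[0.0], [1.0]])             # hypothetical target

dLoss_da2 = a2 - t                       # from Loss = 0.5 * ||a2 - t||^2
delta2 = dLoss_da2 * sigmoid_prime(z2)   # delta^(2) = dLoss/da^(2) ⊙ sigma'(z^(2))

grad_W2 = delta2 @ a1.T                  # dLoss/dW^(2) = delta^(2) a^(1)T
grad_b2 = delta2                         # dLoss/db^(2) = delta^(2)

delta1 = (W2.T @ delta2) * sigmoid_prime(z1)  # delta^(1) = W^(2)T delta^(2) ⊙ sigma'(z^(1))
grad_W1 = delta1 @ x.T                   # same pattern, with the input x
grad_b1 = delta1
```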
Simplifying the Derivation with Vector Calculus
The derivation above focuses on breaking everything into scalars, applying the chain rule, and then reassembling the results into matrix products. This is mathematically rigorous, but tedious.
In practice there is a simpler (if somewhat hand-wavy) approach: apply the chain rule directly to the vectors, then reorder and transpose the factors so that the matrix dimensions match.
We want to find
$$\delta^{(2)} = \frac{\partial Loss}{\partial z^{(2)}} = \frac{\partial Loss}{\partial a^{(2)}} \cdot \frac{\partial a^{(2)}}{\partial z^{(2)}}$$
Since $\dim\{\delta^{(2)}\} = 2 \times 1$ and $\dim\{\frac{\partial Loss}{\partial a^{(2)}}\} = 2 \times 1$, the rules of matrix multiplication say we should have $\dim\{\frac{\partial a^{(2)}}{\partial z^{(2)}}\} = 1 \times 1$.
In practice, however, when a column vector ($a^{(2)}$) is differentiated with respect to a column vector ($z^{(2)}$), it is usually done element by element, e.g.
$$\frac{\partial a^{(2)}}{\partial z^{(2)}} = \begin{bmatrix} \sigma'(z^{(2)}_1) \\ \sigma'(z^{(2)}_2) \end{bmatrix} = \sigma'(z^{(2)}),$$
which means $\dim\{\frac{\partial a^{(2)}}{\partial z^{(2)}}\} = 2 \times 1$.
Now what? Two $2 \times 1$ matrices cannot be multiplied together, but the $\odot$ operation fits perfectly, so:
$$\delta^{(2)} = \frac{\partial Loss}{\partial z^{(2)}} = \frac{\partial Loss}{\partial a^{(2)}} \cdot \frac{\partial a^{(2)}}{\partial z^{(2)}} = \frac{\partial Loss}{\partial a^{(2)}} \odot \sigma'(z^{(2)})$$
At this point a sharp reader will surely object: this is nonsense, with no justification at all.
True. Come at me.
Let's look at another example:
We want to find
$$\frac{\partial Loss}{\partial W^{(2)}} = \left(\frac{\partial Loss}{\partial a^{(2)}} \cdot \frac{\partial a^{(2)}}{\partial z^{(2)}}\right) \cdot \frac{\partial z^{(2)}}{\partial W^{(2)}} = \delta^{(2)} \cdot \frac{\partial z^{(2)}}{\partial W^{(2)}}$$
Note that $\dim\{\frac{\partial Loss}{\partial W^{(2)}}\} = 2 \times 2$ and $\dim\{\delta^{(2)}\} = 2 \times 1$,
so in theory we should have $\dim\{\frac{\partial z^{(2)}}{\partial W^{(2)}}\} = 1 \times 2$.
From $z^{(2)} = W^{(2)}a^{(1)} + b^{(2)}$ we know $\dim\{a^{(1)}\} = 2 \times 1$, so we set $\frac{\partial z^{(2)}}{\partial W^{(2)}} = a^{(1)T}$.
Hence $\frac{\partial Loss}{\partial W^{(2)}} = \delta^{(2)} \cdot a^{(1)T}$.
Hand-wavy as this is, it yields exactly the same formula as the rigorous derivation earlier.
Let's look at one more example:
$$\delta^{(1)} = \frac{\partial Loss}{\partial z^{(1)}} = \frac{\partial Loss}{\partial z^{(2)}} \cdot \frac{\partial z^{(2)}}{\partial a^{(1)}} \cdot \frac{\partial a^{(1)}}{\partial z^{(1)}}$$
Here $\dim\{\frac{\partial Loss}{\partial z^{(1)}}\} = n \times 1$, $\dim\{\frac{\partial Loss}{\partial z^{(2)}}\} = m \times 1$, $\dim\{\frac{\partial z^{(2)}}{\partial a^{(1)}}\} = {?}$, and $\dim\{\frac{\partial a^{(1)}}{\partial z^{(1)}}\} = n \times 1$.
From $z^{(2)} = W^{(2)}a^{(1)} + b^{(2)}$, the factor $\frac{\partial z^{(2)}}{\partial a^{(1)}}$ must come from $W^{(2)}$, and $\dim\{W^{(2)}\} = m \times n$; so after a bit of dimension juggling (reordering and transposing) we should get
(with $n = m = 2$ here):
$$\delta^{(1)} = W^{(2)T} \cdot \delta^{(2)} \odot \sigma'(z^{(1)})$$
This agrees with the result of the rigorous derivation above.
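If the dimension juggling feels too suspicious, a finite-difference check can confirm the result. A sketch reusing the variables from the snippets above (same assumed squared-error loss):

```python
def loss_fn(W1_, b1_, W2_, b2_, x_, t_):
    a1_ = sigmoid(W1_ @ x_ + b1_)
    a2_ = sigmoid(W2_ @ a1_ + b2_)
    return 0.5 * np.sum((a2_ - t_) ** 2)

# Central differences on W^(2), compared against grad_W2 from the formula.
eps = 1e-6
num_grad = np.zeros_like(W2)
for i in range(2):
    for j in range(2):
        Wp, Wm = W2.copy(), W2.copy()
        Wp[i, j] += eps
        Wm[i, j] -= eps
        num_grad[i, j] = (loss_fn(W1, b1, Wp, b2, x, t)
                          - loss_fn(W1, b1, Wm, b2, x, t)) / (2 * eps)

print(np.max(np.abs(num_grad - grad_W2)))  # agrees to ~1e-10
```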
4. Backpropagation Through a Convolutional Layer
Let the input be $X$, the convolution kernel $K$, and the output $Y$, so that $X \otimes K = Y$.
If $X$ has width $x$, $K$ has width $k$, and $Y$ has width $y$, then $y = x - k + 1$.
$$\begin{bmatrix} x_{11} & x_{12} & x_{13} \\ x_{21} & x_{22} & x_{23} \\ x_{31} & x_{32} & x_{33} \end{bmatrix} \otimes \begin{bmatrix} k_{11} & k_{12} \\ k_{21} & k_{22} \end{bmatrix} = \begin{bmatrix} y_{11} & y_{12} \\ y_{21} & y_{22} \end{bmatrix}$$
Expanding, we can write:
$$y_{11} = x_{11}k_{11} + x_{12}k_{12} + x_{21}k_{21} + x_{22}k_{22}$$
$$y_{12} = x_{12}k_{11} + x_{13}k_{12} + x_{22}k_{21} + x_{23}k_{22}$$
$$y_{21} = x_{21}k_{11} + x_{22}k_{12} + x_{31}k_{21} + x_{32}k_{22}$$
$$y_{22} = x_{22}k_{11} + x_{23}k_{12} + x_{32}k_{21} + x_{33}k_{22}$$
Write $\delta_{ij} = \frac{\partial Loss}{\partial y_{ij}} = \nabla y_{ij}$. Then by the chain rule:
$$\frac{\partial Loss}{\partial k_{11}} = \delta_{11}x_{11} + \delta_{12}x_{12} + \delta_{21}x_{21} + \delta_{22}x_{22}$$
$$\frac{\partial Loss}{\partial k_{12}} = \delta_{11}x_{12} + \delta_{12}x_{13} + \delta_{21}x_{22} + \delta_{22}x_{23}$$
$$\frac{\partial Loss}{\partial k_{21}} = \delta_{11}x_{21} + \delta_{12}x_{22} + \delta_{21}x_{31} + \delta_{22}x_{32}$$
$$\frac{\partial Loss}{\partial k_{22}} = \delta_{11}x_{22} + \delta_{12}x_{23} + \delta_{21}x_{32} + \delta_{22}x_{33}$$
We find that
$$\begin{bmatrix} \nabla k_{11} & \nabla k_{12} \\ \nabla k_{21} & \nabla k_{22} \end{bmatrix} = \begin{bmatrix} x_{11} & x_{12} & x_{13} \\ x_{21} & x_{22} & x_{23} \\ x_{31} & x_{32} & x_{33} \end{bmatrix} \otimes \begin{bmatrix} \nabla y_{11} & \nabla y_{12} \\ \nabla y_{21} & \nabla y_{22} \end{bmatrix}$$
which is itself exactly a convolution, so it can be written as
$$\frac{\partial Loss}{\partial K} = \frac{\partial Loss}{\partial Y} \cdot \frac{\partial Y}{\partial K} = X \otimes \nabla Y,$$
or simply $\nabla K = X \otimes \nabla Y$.
This shows that propagating gradients through a convolutional layer is still a convolution.
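In code, $\nabla K = X \otimes \nabla Y$ is just a "valid" sliding-window product (the $\otimes$ in this post slides the kernel without flipping it, i.e. cross-correlation). A minimal sketch, with helper names and example values of our own:

```python
import numpy as np

def conv2d_valid(X, K):
    """'Valid' sliding-window product, matching the ⊗ used in this post."""
    xh, xw = X.shape
    kh, kw = K.shape
    Y = np.zeros((xh - kh + 1, xw - kw + 1))
    for i in range(Y.shape[0]):
        for j in range(Y.shape[1]):
            Y[i, j] = np.sum(X[i:i + kh, j:j + kw] * K)
    return Y

X = np.arange(9, dtype=float).reshape(3, 3)   # a 3x3 input as in the example
dY = np.ones((2, 2))                          # hypothetical upstream gradient ∇Y
grad_K = conv2d_valid(X, dY)                  # ∇K = X ⊗ ∇Y, shape (2, 2)
```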
Having found the kernel gradient $\nabla K$, we skip the derivation of the gradient with respect to $X$ and state the formula directly:
$$\nabla X = pad(\nabla Y) \otimes rot_{180}(K)$$
Here $rot_{180}$ rotates the matrix by $180$ degrees, which can be understood as flipping it left-right and then up-down.
The $pad$ operation pads the border of the matrix with zeros. Why pad with zeros? To make the final dimensions match. If $\nabla X$ has width $r$ and $K$ has width $k$, then the $Y$ obtained in the forward pass has width $r-k+1$. Suppose $pad(\nabla Y)$ has width $?$; then we must have $r = {?}-k+1$, so ${?} = r+k-1$. Thus $pad$ grows a matrix of width $r-k+1$ to width $r+k-1$.
For example, in our running example $pad(\nabla Y)$ is
$$\begin{bmatrix} 0&0&0&0 \\ 0&\nabla y_{11} & \nabla y_{12} & 0 \\ 0&\nabla y_{21} & \nabla y_{22} & 0 \\ 0&0&0&0 \end{bmatrix}$$
and $rot_{180}(K)$ is
$$\begin{bmatrix} k_{22} & k_{21} \\ k_{12} & k_{11} \end{bmatrix}$$
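In code, this formula becomes pad-then-correlate; a sketch reusing `conv2d_valid` and `dY` from the previous snippet (the kernel values are our own example):

```python
def rot180(K):
    return K[::-1, ::-1]   # flip left-right, then up-down

def grad_input(dY, K):
    kh, kw = K.shape
    # Pad ∇Y with k-1 zeros on every side: width r-k+1 grows to r+k-1.
    dY_pad = np.pad(dY, ((kh - 1, kh - 1), (kw - 1, kw - 1)))
    return conv2d_valid(dY_pad, rot180(K))   # ∇X = pad(∇Y) ⊗ rot180(K)

K = np.array([[1.0, 2.0], [3.0, 4.0]])       # hypothetical kernel values
grad_X = grad_input(dY, K)                   # shape (3, 3), matching X
```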
5. Implementing the Convolution Operation
The convolution operation can also be implemented as a matrix multiplication.
For example, the following expression can be rewritten in matrix-product form:
$$\begin{bmatrix} x_{11} & x_{12} & x_{13} \\ x_{21} & x_{22} & x_{23} \\ x_{31} & x_{32} & x_{33} \end{bmatrix} \otimes \begin{bmatrix} k_{11} & k_{12} \\ k_{21} & k_{22} \end{bmatrix} = \begin{bmatrix} y_{11} & y_{12} \\ y_{21} & y_{22} \end{bmatrix}$$
That is,
$$y_{11} = x_{11}k_{11} + x_{12}k_{12} + x_{21}k_{21} + x_{22}k_{22}$$
$$y_{12} = x_{12}k_{11} + x_{13}k_{12} + x_{22}k_{21} + x_{23}k_{22}$$
$$y_{21} = x_{21}k_{11} + x_{22}k_{12} + x_{31}k_{21} + x_{32}k_{22}$$
$$y_{22} = x_{22}k_{11} + x_{23}k_{12} + x_{32}k_{21} + x_{33}k_{22}$$
or, in matrix form,
$$\begin{bmatrix} y_{11}\\y_{12}\\y_{21}\\y_{22} \end{bmatrix} = \begin{bmatrix} x_{11}&x_{12}&x_{21}&x_{22}\\ x_{12}&x_{13}&x_{22}&x_{23}\\ x_{21}&x_{22}&x_{31}&x_{32}\\ x_{22}&x_{23}&x_{32}&x_{33} \end{bmatrix} \cdot \begin{bmatrix} k_{11}\\ k_{12}\\k_{21}\\k_{22} \end{bmatrix}$$
From the example above we can see that to turn a convolution into a matrix multiplication, we flatten both $Y$ and $K$ into column vectors, and unroll $X$ into a $y^2 \times k^2$ matrix (one row per output position, one column per kernel element; here $y^2 = k^2 = 4$) by extracting the patch of $X$ under each kernel position.
After the matrix multiplication, simply reshape the resulting $y$ vector back into a matrix.
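This unroll-flatten-multiply-reshape recipe is the classic im2col trick; a minimal sketch (helper names are our own):

```python
import numpy as np

def im2col(X, kh, kw):
    """One row per output position, one column per kernel element."""
    yh, yw = X.shape[0] - kh + 1, X.shape[1] - kw + 1
    cols = np.zeros((yh * yw, kh * kw))
    for i in range(yh):
        for j in range(yw):
            cols[i * yw + j] = X[i:i + kh, j:j + kw].ravel()
    return cols

X = np.arange(9, dtype=float).reshape(3, 3)
K = np.array([[1.0, 2.0], [3.0, 4.0]])

y = im2col(X, 2, 2) @ K.ravel()   # unrolled-X matrix times the flattened kernel
Y = y.reshape(2, 2)               # reshape the y vector back into a matrix
```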