Dimension calculation of the convolution layer
Suppose the input of the convolution layer is x*x = 5*5, the convolution kernel is k*k = 3*3, the stride is 2, and there is no padding. The output size is then (x-k)/stride+1, here (5-3)/2+1 = 2, so the output is 2*2; if the stride were 1, the output would be 3*3. Derivations of forward and backward propagation for stride 1 are widely available, so they are not repeated here.
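As a quick sanity check of the formula, here is a minimal Python sketch; the function name conv_output_size is just for illustration and assumes a square input, a square kernel, and no padding.

# Output size of a convolution with no padding: (x - k) / stride + 1
def conv_output_size(x: int, k: int, stride: int) -> int:
    return (x - k) // stride + 1

print(conv_output_size(5, 3, 2))  # 2 -> the output is 2*2
print(conv_output_size(5, 3, 1))  # 3 -> the output is 3*3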
Forward propagation
Suppose the input is the following 5*5 matrix, with entries numbered row by row:
x1  x2  x3  x4  x5
x6  x7  x8  x9  x10
x11 x12 x13 x14 x15
x16 x17 x18 x19 x20
x21 x22 x23 x24 x25
The convolution kernel is the 3*3 matrix:
w1 w2 w3
w4 w5 w6
w7 w8 w9
With stride 2, the convolution result is the 2*2 matrix:
z1 z2
z3 z4
Carrying out the convolution gives:
z1=w1*x1+w2*x2+w3*x3+w4*x6+w5*x7+w6*x8+w7*x11+w8*x12+w9*x13;
z2=w1*x3+w2*x4+w3*x5+w4*x8+w5*x9+w6*x10+w7*x13+w8*x14+w9*x15;
z3 and z4 are obtained in the same way (z3 starts at x11 and z4 at x13).
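To make the forward pass concrete, here is a small NumPy sketch of the stride-2 convolution; the arrays x, w and out are placeholders filled with random numbers, and the final check simply verifies the expanded formula for z1 given above.

import numpy as np

# 5*5 input (entries x1..x25, row-major) and 3*3 kernel (w1..w9, row-major)
rng = np.random.default_rng(0)
x = rng.standard_normal((5, 5))
w = rng.standard_normal((3, 3))
stride = 2

# stride-2 convolution, no padding, linear output -> 2*2 result (z1..z4)
out = np.zeros((2, 2))
for i in range(2):
    for j in range(2):
        patch = x[i * stride:i * stride + 3, j * stride:j * stride + 3]
        out[i, j] = np.sum(patch * w)

# z1 = w1*x1 + w2*x2 + w3*x3 + w4*x6 + w5*x7 + w6*x8 + w7*x11 + w8*x12 + w9*x13
z1 = (w[0, 0]*x[0, 0] + w[0, 1]*x[0, 1] + w[0, 2]*x[0, 2]
      + w[1, 0]*x[1, 0] + w[1, 1]*x[1, 1] + w[1, 2]*x[1, 2]
      + w[2, 0]*x[2, 0] + w[2, 1]*x[2, 1] + w[2, 2]*x[2, 2])
assert np.isclose(out[0, 0], z1)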
Suppose the sensitivity (error term) passed back from the next layer is the 2*2 matrix:
δ1 δ2
δ3 δ4
Since w1 multiplies x1, x3, x11 and x13 in z1, z2, z3 and z4 respectively, ∂J/∂w1 = δ1*x1 + δ2*x3 + δ3*x11 + δ4*x13.
Similarly, w2 is associated with x2, x4, x12 and x14, so ∂J/∂w2 = δ1*x2 + δ2*x4 + δ3*x12 + δ4*x14; the same pattern holds for w3, w4 and so on.
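A small NumPy sketch of this gradient computation: each output position contributes its sensitivity times the input patch it saw, and the result is checked against the expanded form of ∂J/∂w1. The array delta stands for the 2*2 sensitivity matrix δ1..δ4 and, like x, is filled with random numbers for illustration.

import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((5, 5))      # 5*5 input x1..x25
delta = rng.standard_normal((2, 2))  # 2*2 sensitivity delta1..delta4

# accumulate delta_ij times the stride-2 patch that produced z_ij
grad_w = np.zeros((3, 3))
for i in range(2):
    for j in range(2):
        grad_w += delta[i, j] * x[2 * i:2 * i + 3, 2 * j:2 * j + 3]

# dJ/dw1 = delta1*x1 + delta2*x3 + delta3*x11 + delta4*x13
dJ_dw1 = (delta[0, 0] * x[0, 0] + delta[0, 1] * x[0, 2]
          + delta[1, 0] * x[2, 0] + delta[1, 1] * x[2, 2])
assert np.isclose(grad_w[0, 0], dJ_dw1)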
According to the backpropagation rule for convolution, we simply insert a row of zeros and a column of zeros into the sensitivity matrix (expanding it to the layout a stride-1 convolution would have produced) and then convolve it with the input at stride 1; this gives the gradient of the convolution kernel.
That is, the zero-inserted sensitivity matrix is
δ1 0 δ2
0  0 0
δ3 0 δ4
and convolving the 5*5 input with this 3*3 matrix at stride 1 gives a 3*3 result that is exactly the kernel gradient derived above; for example, its top-left entry is δ1*x1 + δ2*x3 + δ3*x11 + δ4*x13 = ∂J/∂w1.
This can be understood as follows: the forward pass behaves like a stride-1 convolution whose outputs at the positions skipped by the stride are zero, so the sensitivities at those positions are also zero, which is exactly what the zero insertion expresses. The derivation above ignores the activation function, i.e. a linear output is assumed.
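To close the loop, here is a short NumPy check of the zero-insertion trick under the same illustrative assumptions (random x and delta): dilating the 2*2 sensitivity map to 3*3 and correlating it with the input at stride 1 gives the same 3*3 kernel gradient as the direct sum over output positions.

import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((5, 5))
delta = rng.standard_normal((2, 2))

# direct gradient: sum of delta_ij times the stride-2 patch it saw
grad_direct = np.zeros((3, 3))
for i in range(2):
    for j in range(2):
        grad_direct += delta[i, j] * x[2 * i:2 * i + 3, 2 * j:2 * j + 3]

# zero-inserted sensitivity: [[d1, 0, d2], [0, 0, 0], [d3, 0, d4]]
delta_dilated = np.zeros((3, 3))
delta_dilated[::2, ::2] = delta

# stride-1 correlation of the input with the dilated sensitivity
grad_dilated = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        grad_dilated[i, j] = np.sum(x[i:i + 3, j:j + 3] * delta_dilated)

assert np.allclose(grad_direct, grad_dilated)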