One of the key steps in error backpropagation for a CNN (Convolutional Neural Network) is passing the error of a convolution layer back to the pooling layer that precedes it. Because the deltas in a CNN are 2D, the details differ slightly from the 1D case in conventional neural networks. The following is a simple example that decomposes this backward step in detail.
Suppose that in a CNN, P denotes a pooling layer, K the convolution kernel, and C the convolution layer. First look at the forward (feed-forward) computation: applying the kernel K to the pooling layer P yields the convolution layer C.
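To make the setup concrete (the specific sizes are an assumption, chosen to match the nine P elements discussed below), take a 3×3 pooling layer P with elements numbered P_1 through P_9 row by row, a 2×2 kernel K, and hence a 2×2 convolution layer C obtained by a "valid" convolution:

```latex
P = \begin{pmatrix} P_1 & P_2 & P_3 \\ P_4 & P_5 & P_6 \\ P_7 & P_8 & P_9 \end{pmatrix},
\quad
K = \begin{pmatrix} K_{11} & K_{12} \\ K_{21} & K_{22} \end{pmatrix},
\quad
C = \begin{pmatrix} C_{11} & C_{12} \\ C_{21} & C_{22} \end{pmatrix}
```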
Decomposing this forward computation element by element gives the following formulas:
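Assuming a 3×3 pooling layer P (elements P_1 to P_9, row by row), a 2×2 kernel K, and a "valid" convolution that does not rotate K (as the note at the end of this article points out), the element-wise forward formulas are:

```latex
C_{11} = P_1 K_{11} + P_2 K_{12} + P_4 K_{21} + P_5 K_{22} \\
C_{12} = P_2 K_{11} + P_3 K_{12} + P_5 K_{21} + P_6 K_{22} \\
C_{21} = P_4 K_{11} + P_5 K_{12} + P_7 K_{21} + P_8 K_{22} \\
C_{22} = P_5 K_{11} + P_6 K_{12} + P_8 K_{21} + P_9 K_{22}
```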
The following steps decompose the backpropagation based on this forward computation.
The first step is to determine where the error propagates: from δ_C to δ_P. So start with the analysis of δ_P1.
From the forward computation above, we can identify which elements of C the element P1 participates in computing, and derive the backpropagation formula from the corresponding forward step:
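Under the assumed 3×3/2×2 sizes, P_1 appears in the forward pass only in C_{11}, where it is multiplied by K_{11}, so the chain rule gives:

```latex
\delta_{P_1} = \frac{\partial L}{\partial P_1}
             = \delta_{C_{11}} \frac{\partial C_{11}}{\partial P_1}
             = \delta_{C_{11}} K_{11}
```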
By the same reasoning, analogous formulas follow for P2, P3, P4, P5, and so on, all the way up to P9.
Collecting these nine backpropagation formulas together:
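Under the assumed 3×3/2×2 sizes, tracing each P element through the forward formulas it appears in gives the full set of nine:

```latex
\delta_{P_1} = \delta_{C_{11}} K_{11} \\
\delta_{P_2} = \delta_{C_{11}} K_{12} + \delta_{C_{12}} K_{11} \\
\delta_{P_3} = \delta_{C_{12}} K_{12} \\
\delta_{P_4} = \delta_{C_{11}} K_{21} + \delta_{C_{21}} K_{11} \\
\delta_{P_5} = \delta_{C_{11}} K_{22} + \delta_{C_{12}} K_{21} + \delta_{C_{21}} K_{12} + \delta_{C_{22}} K_{11} \\
\delta_{P_6} = \delta_{C_{12}} K_{22} + \delta_{C_{22}} K_{12} \\
\delta_{P_7} = \delta_{C_{21}} K_{21} \\
\delta_{P_8} = \delta_{C_{21}} K_{22} + \delta_{C_{22}} K_{21} \\
\delta_{P_9} = \delta_{C_{22}} K_{22}
```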
It can further be found that these nine formulas are implemented by a single convolution operation: a "full" convolution of δ_C with the kernel K rotated by 180°.
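A minimal NumPy sketch of this step, assuming the 3×3 pooling layer / 2×2 kernel example above (sizes and all function names here are illustrative, not from the original). It implements the non-rotating "valid" convolution used in the forward pass, then recovers δ_P as a "full" convolution of δ_C with K rotated 180°, and checks the result against a numerical gradient:

```python
import numpy as np

def corr2_valid(A, K):
    """'Valid' convolution without the 180-degree rotation (i.e. correlation),
    matching the forward computation described in the text."""
    kh, kw = K.shape
    oh, ow = A.shape[0] - kh + 1, A.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(A[i:i+kh, j:j+kw] * K)
    return out

def backprop_to_pool(delta_C, K):
    """delta_P = 'full' correlation of delta_C with rot180(K):
    zero-pad delta_C by (kh-1, kw-1) on each side, then slide the
    rotated kernel over it."""
    kh, kw = K.shape
    padded = np.pad(delta_C, ((kh - 1, kh - 1), (kw - 1, kw - 1)))
    return corr2_valid(padded, np.rot90(K, 2))

rng = np.random.default_rng(0)
P = rng.standard_normal((3, 3))        # pooling layer (3x3, assumed size)
K = rng.standard_normal((2, 2))        # convolution kernel (2x2, assumed size)
C = corr2_valid(P, K)                  # forward pass: 2x2 convolution layer
delta_C = rng.standard_normal((2, 2))  # error arriving at the conv layer

delta_P = backprop_to_pool(delta_C, K)

# Spot-check two of the nine element-wise formulas:
assert np.isclose(delta_P[0, 0], delta_C[0, 0] * K[0, 0])        # delta_P1
assert np.isclose(delta_P[1, 1], (delta_C * np.rot90(K, 2)).sum())  # delta_P5

# Full check: with loss L = <delta_C, C>, the numerical gradient of L
# w.r.t. P must equal delta_P.
eps = 1e-6
num_grad = np.zeros_like(P)
base = np.sum(delta_C * corr2_valid(P, K))
for i in range(3):
    for j in range(3):
        Pp = P.copy()
        Pp[i, j] += eps
        num_grad[i, j] = (np.sum(delta_C * corr2_valid(Pp, K)) - base) / eps
assert np.allclose(num_grad, delta_P, atol=1e-4)
```

Note that `np.rot90(K, 2)` is the explicit 180° rotation; a library routine that performs true convolution (such as MATLAB's conv2, mentioned in the note below) would do this flip internally.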
At this point, the computational details explain why error backpropagation through a convolution layer uses the kernel rotated by 180°, and why it uses the "full" form of the convolution operation.
(Note: the "convolution" mentioned above refers to a computation that does not rotate the second operand by 180°. In fact, conv2(A, B) in MATLAB rotates it by 180° automatically; in other words, in MATLAB this step does not require rotating in advance — leave the rotation to the conv2 function itself.)
(Reposted: A Concise Analysis of Kernel Rotation in CNN Error Backpropagation)