Reposted from Zhihu
Author: Zhihu user
Original question: How to understand the backpropagation algorithm in neural networks?
Link: https://www.zhihu.com/question/24827633/answer/91489990
Source: Zhihu
Copyright belongs to the author. For commercial reprints, please contact the author for authorization; for non-commercial reprints, please credit the source.
Backpropagation is usually explained with the chain rule. Take the following neural network as an example: inputs $x_i$, a hidden layer of nodes indexed by $j$, and an output layer of nodes indexed by $k$, connected by weights $w_{ij}$ and $w_{jk}$ with sigmoid activations.

[Figure: a small three-layer feed-forward network]
For a hidden node $j$, the net input is:

$z_j = \sum_i w_{ij} x_i + b_j$

Passing it through a sigmoid function gives the output of the node:

$a_j = \sigma(z_j) = \dfrac{1}{1 + e^{-z_j}}$

Similarly, we can compute the net inputs $z_k = \sum_j w_{jk} a_j + b_k$ and outputs $a_k = \sigma(z_k)$ of the remaining hidden nodes and of the output nodes.
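As a concrete illustration, here is a minimal NumPy sketch of this forward pass. The layer sizes, the names `W1`/`W2`/`b1`/`b2`, and the input values are illustrative assumptions, not from the original answer:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# Illustrative shapes: 2 inputs -> 3 hidden nodes -> 2 output nodes.
W1, b1 = rng.normal(size=(3, 2)), np.zeros(3)   # input -> hidden weights w_ij
W2, b2 = rng.normal(size=(2, 3)), np.zeros(2)   # hidden -> output weights w_jk

x = np.array([0.5, 0.1])        # input vector x_i

z_h = W1 @ x + b1               # net inputs z_j of the hidden nodes
a_h = sigmoid(z_h)              # hidden outputs a_j
z_o = W2 @ a_h + b2             # net inputs z_k of the output nodes
a_o = sigmoid(z_o)              # network outputs a_k
```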
Once the outputs are obtained, the output error of the whole network can be expressed as:

$E = \dfrac{1}{2} \sum_k (t_k - a_k)^2$

Here $t_k$ is the target value of output node $k$, and $a_k$ is the value just computed by forward propagation; $E$ measures the error between the two. It can also be thought of as the cost function, except that the regularization term used to prevent overfitting (e.g. $\frac{\lambda}{2} \sum w^2$) is omitted here. Expanding the sum over the output nodes gives:

$E = \dfrac{1}{2}\left[(t_1 - a_1)^2 + (t_2 - a_2)^2 + \cdots\right]$
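Continuing the sketch above, the error is one line (the target vector `t` is an assumed example):

```python
t = np.array([0.9, 0.2])                 # target values t_k (assumed)
E = 0.5 * np.sum((t - a_o) ** 2)         # E = 1/2 * sum_k (t_k - a_k)^2
```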
Weights of the output layer
To adjust a weight $w_{jk}$ by gradient descent, we need $\partial E / \partial w_{jk}$, which by the chain rule decomposes as:

$\dfrac{\partial E}{\partial w_{jk}} = \dfrac{\partial E}{\partial a_k} \cdot \dfrac{\partial a_k}{\partial z_k} \cdot \dfrac{\partial z_k}{\partial w_{jk}}$

[Figure: the three chain-rule factors along the path from $w_{jk}$ through $z_k$ and $a_k$ to $E$]
The three factors evaluate to $\frac{\partial E}{\partial a_k} = -(t_k - a_k)$, $\frac{\partial a_k}{\partial z_k} = a_k (1 - a_k)$, and $\frac{\partial z_k}{\partial w_{jk}} = a_j$. Multiplying them gives the gradient, which can then be used for training with learning rate $\eta$:

$w_{jk} \leftarrow w_{jk} - \eta \, \dfrac{\partial E}{\partial w_{jk}} = w_{jk} + \eta \, (t_k - a_k)\, a_k (1 - a_k)\, a_j$
Many teaching materials, such as Stanford's UFLDL course, record the intermediate result

$\delta_k = \dfrac{\partial E}{\partial z_k} = -(t_k - a_k)\, a_k (1 - a_k),$

which indicates how much responsibility node $k$ bears for the final error. With it, the gradient is simply $\frac{\partial E}{\partial w_{jk}} = \delta_k \, a_j$.
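In the running NumPy sketch, the output-layer deltas and gradients might look as follows; the learning rate `eta` is an assumed hyperparameter:

```python
eta = 0.5                                  # learning rate (assumed)

delta_o = -(t - a_o) * a_o * (1.0 - a_o)   # delta_k = dE/dz_k
grad_W2 = np.outer(delta_o, a_h)           # dE/dw_jk = delta_k * a_j
# (the update W2 -= eta * grad_W2 is applied after the hidden-layer
#  deltas below are computed, since those still need the old W2)
```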
Weights of the hidden layer
Adjusting a hidden-layer weight $w_{ij}$ by gradient descent again goes through the chain rule:

$\dfrac{\partial E}{\partial w_{ij}} = \dfrac{\partial E}{\partial a_j} \cdot \dfrac{\partial a_j}{\partial z_j} \cdot \dfrac{\partial z_j}{\partial w_{ij}}$

[Figure: the path from $w_{ij}$ through $z_j$ and $a_j$ to the output layer and $E$]

The parameter $w_{ij}$ affects $z_j$, which affects $a_j$, which in turn affects every $z_k$ in the next layer and finally $E$.
Solve each part. Because $a_j$ feeds into every output node, the first factor sums over all of them:

$\dfrac{\partial E}{\partial a_j} = \sum_k \dfrac{\partial E}{\partial z_k} \cdot \dfrac{\partial z_k}{\partial a_j}$

Here $\frac{\partial E}{\partial z_k} = \delta_k$, as calculated above for the output layer, and $\frac{\partial z_k}{\partial a_j} = w_{jk}$. The calculation is the same for every output node $k$, so we get

$\dfrac{\partial E}{\partial a_j} = \sum_k \delta_k \, w_{jk}.$
The other two factors in the chain are as before:

$\dfrac{\partial a_j}{\partial z_j} = a_j (1 - a_j), \qquad \dfrac{\partial z_j}{\partial w_{ij}} = x_i$

Multiplying the three gives:

$\dfrac{\partial E}{\partial w_{ij}} = \left(\sum_k \delta_k \, w_{jk}\right) a_j (1 - a_j)\, x_i$
Once the gradient is obtained, the weight can be updated iteratively:

$w_{ij} \leftarrow w_{ij} - \eta \, \dfrac{\partial E}{\partial w_{ij}}$

A $\delta_j$ can be defined for the hidden node in the same way as in the previous section,

$\delta_j = \left(\sum_k \delta_k \, w_{jk}\right) a_j (1 - a_j),$

so that the entire gradient can be written $\frac{\partial E}{\partial w_{ij}} = \delta_j \, x_i$.
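The corresponding hidden-layer step in the same sketch; note that `delta_h` must be computed with the old `W2`, so both weight updates are applied only at the end:

```python
# Responsibility of each hidden node: (sum_k delta_k * w_jk) * sigma'(z_j).
delta_h = (W2.T @ delta_o) * a_h * (1.0 - a_h)   # delta_j
grad_W1 = np.outer(delta_h, x)                   # dE/dw_ij = delta_j * x_i

# Gradient-descent updates for both layers, with learning rate eta.
W2 -= eta * grad_W2
b2 -= eta * delta_o
W1 -= eta * grad_W1
b1 -= eta * delta_h
```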
=======================
The above is the origin of the third step in Stanford's Unsupervised Feature Learning and Deep Learning Tutorial, which states the same rule in vectorized form for each layer $l$:

$\delta^{(l)} = \left( (W^{(l)})^T \delta^{(l+1)} \right) \bullet f'(z^{(l)})$
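As a sketch of that vectorized step for a whole stack of layers (the list-of-matrices representation and the function name `backward_deltas` are assumptions for illustration):

```python
def backward_deltas(Ws, zs, delta_out, fprime):
    # Ws[l] connects layer l to layer l+1; zs[l] are the net inputs of layer l+1.
    # delta_out is the delta of the last layer. Returns the deltas of every
    # layer, computed as delta_l = (W_l^T @ delta_{l+1}) * f'(z_l).
    deltas = [delta_out]
    for W, z in zip(reversed(Ws[1:]), reversed(zs[:-1])):
        deltas.append((W.T @ deltas[-1]) * fprime(z))
    return list(reversed(deltas))
```

For the two-layer example above, `backward_deltas([W1, W2], [z_h, z_o], delta_o, lambda z: sigmoid(z) * (1 - sigmoid(z)))` returns `[delta_h, delta_o]`.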
So-called backpropagation, then, really just says: "everyone who contributed to the error produced during propagation has to take responsibility for it!" Each node's $\delta$ represents the amount of error it is responsible for, and the amount a hidden node is responsible for is obtained by conducting the output nodes' responsibilities back one layer, weighted by the connecting weights.
References:
[1] A Step by Step Backpropagation Example
[2] Unsupervised Feature Learning and Deep Learning Tutorial