http://www.cnblogs.com/python27/p/MachineLearningWeek05.html
This may be the most confusing chapter of Andrew Ng's course. Why? The chapter focuses on the backpropagation (BP) algorithm, and Ng spends most of the time on mechanics: how to compute the error term δ, how to compute the Δ matrix, and how to implement backpropagation in MATLAB. But the most critical question, why we compute these quantities this way and what they actually represent, is barely explained, and no mathematical derivation or worked example is given. So this time I will not follow the open course. After consulting a lot of material, I want to start by deriving the gradient for a simple neural network, in order to understand the basic principle of backpropagation and the actual meaning of each symbol, and then walk through the concrete BP computation steps given in the course, which should make them easier to understand.

Backpropagation (BP) for a simple neural network

1. Review of the forward propagation (FP) algorithm
The FP algorithm is actually quite simple: starting from the values of the neurons in the previous layer, take a weighted sum and then apply the sigmoid function to obtain the values of the neurons in the next layer. Written mathematically:
a^{(1)} = x
z^{(2)} = \Theta^{(1)} a^{(1)}
a^{(2)} = g(z^{(2)})
z^{(3)} = \Theta^{(2)} a^{(2)}
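To make the formulas concrete, here is a minimal sketch of FP in Python with NumPy. The network shape, the random weight initialization, and the omission of bias units are my own simplifying assumptions for illustration, not part of the course material.

```python
import numpy as np

def sigmoid(z):
    """The activation function g(z) = 1 / (1 + exp(-z))."""
    return 1.0 / (1.0 + np.exp(-z))

def forward_propagate(x, Theta1, Theta2):
    """One FP pass through a 3-layer network (bias units omitted for brevity).

    x      : input vector, shape (n,)
    Theta1 : weight matrix from layer 1 to layer 2
    Theta2 : weight matrix from layer 2 to layer 3
    """
    a1 = x                # a^{(1)} = x
    z2 = Theta1 @ a1      # z^{(2)} = Theta^{(1)} a^{(1)}
    a2 = sigmoid(z2)      # a^{(2)} = g(z^{(2)})
    z3 = Theta2 @ a2      # z^{(3)} = Theta^{(2)} a^{(2)}
    a3 = sigmoid(z3)      # a^{(3)} = g(z^{(3)}), the network output h(x)
    return a3

# Hypothetical example: 3 inputs, 4 hidden units, 1 output
rng = np.random.default_rng(0)
Theta1 = rng.standard_normal((4, 3))
Theta2 = rng.standard_normal((1, 4))
print(forward_propagate(np.array([1.0, 0.5, -0.2]), Theta1, Theta2))
```

Each line of the function mirrors one of the equations above: weight, sum, then squash through the sigmoid, layer by layer, until the output layer is reached.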