Principles of deep convolutional networks: the necessity of nonlinear activation

Consider the example neural network shown in the figure.
The network is described as follows:
1) There are 2 inputs, labeled x1 and x2 in the figure.
2) There are 3 neurons, labeled B1, B2, and B3 in the figure.
3) The network has 2 layers in total: the neurons in the 1st layer are B1 and B2, and the 2nd layer consists of B3. The middle layer is called the hidden layer; here, B1 and B2 belong to the hidden layer.
4) There are 6 weights (w11 through w23), and each neuron Bi has a bias bi. The final output is out.
The output of neuron B1 is: x1w11 + x2w21 + b1
The output of neuron B2 is: x1w12 + x2w22 + b2
If there is no nonlinear activation, the final output is:
out = (x1w11 + x2w21 + b1) * w13 + (x1w12 + x2w22 + b2) * w23 + b3
    = x1 * (w11w13 + w12w23) + x2 * (w21w13 + w22w23) + (b1w13 + b2w23 + b3)
As the formula shows, although 3 neurons are used, the network is still linear in x1 and x2, and is exactly equivalent to a single neuron.
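To make this concrete, here is a minimal Python sketch of the two-layer network above without any activation; the input and weight values are arbitrary placeholders, not taken from the figure. It checks numerically that the network collapses into one linear neuron.

```python
# Minimal sketch: 2-layer network with NO activation vs. the collapsed single neuron.
# All numeric values below are arbitrary placeholders.
x1, x2 = 0.5, -1.2                       # inputs
w11, w21, b1 = 0.3, 0.7, 0.1             # neuron B1: weights and bias
w12, w22, b2 = -0.4, 0.2, 0.05           # neuron B2: weights and bias
w13, w23, b3 = 0.6, -0.9, 0.2            # neuron B3: weights and bias

h1 = x1 * w11 + x2 * w21 + b1            # output of B1
h2 = x1 * w12 + x2 * w22 + b2            # output of B2
out_network = h1 * w13 + h2 * w23 + b3   # output of B3

# Single neuron with the collapsed weights from the expansion above
out_single = (x1 * (w11 * w13 + w12 * w23)
              + x2 * (w21 * w13 + w22 * w23)
              + (b1 * w13 + b2 * w23 + b3))

assert abs(out_network - out_single) < 1e-12  # identical up to float rounding
print(out_network, out_single)
```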
Therefore, if you simply connect neurons together without adding any nonlinear processing, the final result is still a linear function, which cannot describe complex phenomena. This is why a nonlinear function at each neuron's output is necessary.
If the hidden-layer neurons use the nonlinear activation function f and the final output neuron uses g, then for the network above the final output becomes:
out = g( f(x1w11 + x2w21 + b1) * w13 + f(x1w12 + x2w22 + b2) * w23 + b3 )
Since both f and g are nonlinear, the resulting network output is nonlinear and can be used to fit complex data.
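As a sketch of the activated network, assuming tanh for f and a sigmoid for g (the text does not fix particular activation functions) and reusing the same placeholder weights:

```python
import math

def f(z):                                  # hidden-layer activation; tanh is an assumed example
    return math.tanh(z)

def g(z):                                  # output activation; sigmoid is an assumed example
    return 1.0 / (1.0 + math.exp(-z))

def forward(x1, x2):
    # Same placeholder weights and biases as the linear sketch above
    w11, w21, b1 = 0.3, 0.7, 0.1
    w12, w22, b2 = -0.4, 0.2, 0.05
    w13, w23, b3 = 0.6, -0.9, 0.2
    h1 = f(x1 * w11 + x2 * w21 + b1)       # B1 now applies f
    h2 = f(x1 * w12 + x2 * w22 + b2)       # B2 now applies f
    return g(h1 * w13 + h2 * w23 + b3)     # B3 applies g

# The output is no longer a linear function of x1 and x2
print(forward(0.5, -1.2))
```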