First, Introduction
AlexNet uses ReLU instead of the sigmoid activation function; the authors found that SGD converges much faster with ReLU than with sigmoid/tanh.
Second, advantages
1. Sigmoid and tanh have saturation regions, whereas the derivative of ReLU is always 1 for x > 0. This helps alleviate vanishing gradients and thus speeds up training.
2. In both forward and backward propagation, ReLU requires significantly less computation than sigmoid and tanh (a simple threshold versus an exponential), as the sketch below illustrates.
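A minimal NumPy sketch (my own illustration, not code from the original post) of both points: the ReLU derivative is a constant 1 for positive inputs while the sigmoid derivative shrinks toward 0 in its saturation regions, and the ReLU forward pass is a single comparison rather than an exponential.

```python
import numpy as np

def sigmoid(x):
    # Forward pass needs an exponential; the derivative s * (1 - s)
    # approaches 0 in the saturation regions (|x| large).
    s = 1.0 / (1.0 + np.exp(-x))
    return s, s * (1.0 - s)

def relu(x):
    # Forward pass is a single comparison; the derivative is exactly 1
    # wherever x > 0, so gradients pass through that region unchanged.
    return np.maximum(0.0, x), (x > 0).astype(x.dtype)

x = np.array([-5.0, -1.0, 0.5, 5.0])
print(sigmoid(x)[1])  # ~[0.0066 0.1966 0.2350 0.0066] -- saturates at both ends
print(relu(x)[1])     # [0. 0. 1. 1.]                  -- constant 1 for x > 0
```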
Third, shortcomings
When x < 0, the gradient is 0, so no gradient propagates back through those units. If the learning rate is too large, a large number of neurons may "die" (get stuck outputting 0). A common remedy is Leaky ReLU, which gives a small non-zero slope for x < 0, as sketched below.
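A minimal sketch of Leaky ReLU (my own illustration; the 0.01 slope is a common default, not a value from the original post), showing that the gradient for x < 0 is small but never exactly 0, so those units can still update.

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    # For x < 0 the output is alpha * x instead of 0.
    return np.where(x > 0, x, alpha * x)

def leaky_relu_grad(x, alpha=0.01):
    # Derivative: 1 for x > 0, alpha otherwise -- never exactly 0,
    # so a "dead" unit can still receive gradient and recover.
    return np.where(x > 0, 1.0, alpha)

x = np.array([-3.0, -0.5, 2.0])
print(leaky_relu(x))       # [-0.03  -0.005  2.   ]
print(leaky_relu_grad(x))  # [0.01 0.01 1.  ]
```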
Fourth, a note
A non-zero output mean is counted as a disadvantage of sigmoid, which also suffers from its saturation regions. The ReLU output mean is not 0 either, but this is not treated as a disadvantage, because ReLU has no saturation region for x > 0; the check below shows the output means.
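To make the "output mean is not 0" point concrete, a small NumPy check (my own illustration, not from the original post): feed zero-mean inputs through both activations and compare the output means.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)   # zero-mean inputs

relu_out = np.maximum(0.0, x)
sigmoid_out = 1.0 / (1.0 + np.exp(-x))

print(x.mean())            # ~0.0
print(relu_out.mean())     # ~0.4  -> ReLU outputs are not zero-centered
print(sigmoid_out.mean())  # ~0.5  -> sigmoid outputs are not zero-centered either
```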