AlexNet: ReLU

First, Introduction

AlexNet uses ReLU instead of the sigmoid activation function; the authors found that SGD converges much faster with ReLU than with sigmoid/tanh.
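
As a quick illustration (a PyTorch-style sketch, not the original AlexNet implementation), swapping in ReLU is a one-line change in a conv block:

    import torch.nn as nn

    # First conv block of an AlexNet-style network; nn.ReLU() replaces
    # what would otherwise be nn.Sigmoid() or nn.Tanh().
    block = nn.Sequential(
        nn.Conv2d(3, 96, kernel_size=11, stride=4),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(kernel_size=3, stride=2),
    )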

Second, Role of ReLU

1. Sigmoid and tanh have saturation zones, whereas ReLU's derivative is exactly 1 for x > 0; this helps alleviate the vanishing-gradient problem and speeds up training (see the derivative sketch after this list).

2. In both forward and backward propagation, ReLU requires noticeably less computation than sigmoid and tanh.
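
A minimal NumPy sketch (illustrative, not from the original article) makes both points concrete: ReLU's gradient is exactly 1 for x > 0 and costs only a comparison, while sigmoid/tanh involve exponentials and saturate:

    import numpy as np

    def d_sigmoid(x):
        s = 1.0 / (1.0 + np.exp(-x))
        return s * (1.0 - s)            # at most 0.25; near 0 when |x| is large (saturation)

    def d_tanh(x):
        return 1.0 - np.tanh(x) ** 2    # near 0 when |x| is large (saturation)

    def d_relu(x):
        return (x > 0).astype(float)    # exactly 1 for x > 0, no saturation

    x = np.array([-5.0, -1.0, 0.5, 5.0])
    print(d_sigmoid(x))   # approx [0.0066 0.1966 0.2350 0.0066]
    print(d_relu(x))      # [0. 0. 1. 1.]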

Third, Shortcomings

When x < 0 the gradient is 0, so no gradient propagates back through the unit. With a large learning rate, many neurons may "die" (get stuck outputting 0). One remedy is Leaky ReLU, which assigns a small non-zero slope for x < 0, as sketched below.
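
A minimal NumPy sketch of Leaky ReLU (the 0.01 slope is a common default, not a value specified by the article):

    import numpy as np

    def leaky_relu(x, alpha=0.01):
        # keep a small slope alpha for x < 0 instead of clamping to 0
        return np.where(x > 0, x, alpha * x)

    def d_leaky_relu(x, alpha=0.01):
        # the backward pass sees alpha rather than 0 for negative inputs,
        # so "dead" neurons can still receive updates
        return np.where(x > 0, 1.0, alpha)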

Fourth, Notes

A non-zero output mean is counted as a disadvantage of sigmoid because sigmoid also has saturation zones. ReLU's output mean is not zero either, but this is not considered a disadvantage, since ReLU has no saturation zone.
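
A small NumPy check (a sketch, not part of the original article) showing that both activations indeed produce a non-zero output mean on zero-mean inputs:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.standard_normal(100_000)          # zero-mean inputs

    relu_out = np.maximum(0.0, x)
    sigmoid_out = 1.0 / (1.0 + np.exp(-x))

    print(relu_out.mean())     # ~0.40 (non-zero)
    print(sigmoid_out.mean())  # ~0.50 (non-zero)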
