Tips for CNN Training

Source: Internet
Author: User

Reposted from:

http://weibo.com/p/1001603816330729006673

I think much of this has never been seen by a large part of the internet, so I translated it; if anything is wrong, corrections are welcome.

1. Prepare the data: make sure you have a large amount of high-quality data with clean labels. Without such data, learning is hopeless.

2. Preprocessing: not much to say here; normalize the inputs to zero mean and unit variance (sketched below).

3. Minibatch size: a recommended value is 128. A size of 1 gives the best result per update but is inefficient; do not use a value that is too large either, or it becomes easy to overfit.

4. Gradient normalization: after computing the gradient, divide it by the minibatch size. Not much more to it (sketched below, together with tip 8).

5. The learning rate deserves the most attention:

5.1. In general, start with a reasonable learning rate and gradually reduce it.

5.2. A recommended starting value is 0.1. It suits many NN problems, though people generally tend toward something smaller.

5.3. A suggestion for scheduling the learning rate: whenever performance on the validation set stops improving, divide the learning rate by 2 or 5 and continue. The learning rate will eventually become very small, at which point you can stop training (sketched below).

5.4. Many people design the learning rate by monitoring a ratio: the norm of each gradient update divided by the norm of the current weights. This ratio should be around 1e-3. If it is much smaller than that, learning will be very slow; if it is much larger, learning will be very unstable and will fail (sketched below).

6. Use a validation set: it tells you when to start lowering the learning rate and when to stop training.

7. Some suggestions on weight initialization:

7.1. If you are lazy, initialize directly with 0.02 * randn(num_params); of course, other values are also worth trying.

7.2. If that does not work well, initialize each weight matrix with init_scale / sqrt(layer_width) * randn, where init_scale can be set to 0.1 or 1 (sketched below).

7.3. The choice of initialization is critical to the result and should be taken seriously.

7.4. In deep networks, random initialization of the weights plus SGD generally works poorly, because the weights are initialized too small. This is fine for shallow networks, but deep ones die: the signal passes through many multiplications by small weights and shrinks further and further, a bit like the vanishing-gradient problem (this last comparison is my own addition, not the original author's).

8. If you train an RNN or LSTM, make sure to constrain the norm of the gradient to 15 or 5 (assuming you have already normalized the gradient as in tip 4). This matters a great deal for RNNs and LSTMs (sketched below).

9. Check your gradients if you computed them yourself (sketched below).

10. If you use an LSTM to handle long-range dependencies, remember to initialize the bias (sketched below).

12. Find ways to expand the training data whenever possible. With image data, flips, rotations, and similar transformations enlarge the training set (sketched below).

13. Use dropout (sketched below).

14. When evaluating the final result, run the evaluation several times and average the outcomes.
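A minimal sketch of tip 2, assuming the training set is a numpy array of shape (N, C, H, W); computing the statistics per channel is my own illustrative choice:

    import numpy as np

    def standardize(train_x, eps=1e-8):
        # Compute mean and std over the training set only,
        # then reuse the same statistics for any other split.
        mean = train_x.mean(axis=(0, 2, 3), keepdims=True)  # per-channel mean
        std = train_x.std(axis=(0, 2, 3), keepdims=True)    # per-channel std
        return (train_x - mean) / (std + eps), mean, std

    # Usage: normalize the training set, then apply mean/std to the test set.
    train_x = np.random.rand(100, 3, 32, 32).astype(np.float32)
    train_norm, mu, sigma = standardize(train_x)
    # test_norm = (test_x - mu) / (sigma + 1e-8)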
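Tip 5.3 maps directly onto a plateau-based schedule. A sketch using PyTorch's ReduceLROnPlateau; the model, the loop body, and the stopping threshold are placeholders:

    import torch

    model = torch.nn.Linear(10, 2)  # stand-in for a real CNN
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    # factor=0.5 divides the LR by 2 when the validation metric stalls;
    # use factor=0.2 to divide by 5 instead.
    sched = torch.optim.lr_scheduler.ReduceLROnPlateau(
        opt, mode='min', factor=0.5, patience=3)

    for epoch in range(100):
        # ... train for one epoch, then compute val_loss ...
        val_loss = 1.0 / (epoch + 1)          # dummy value for illustration
        sched.step(val_loss)                  # halve LR if val_loss stopped improving
        if opt.param_groups[0]['lr'] < 1e-5:  # tip 5.3: stop once the LR is tiny
            break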
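Tip 5.4 can be checked with a few lines inside the training loop. A sketch, assuming plain SGD so that each update is lr * grad; the helper name is mine:

    import torch

    def update_to_weight_ratio(model, lr):
        # Ratio of the SGD update norm to the weight norm;
        # tip 5.4 says to aim for roughly 1e-3.
        ratios = []
        for p in model.parameters():
            if p.grad is None:
                continue
            update_norm = (lr * p.grad).norm()
            weight_norm = p.data.norm()
            if weight_norm > 0:
                ratios.append((update_norm / weight_norm).item())
        return sum(ratios) / len(ratios) if ratios else 0.0

    # Usage (after loss.backward(), before opt.step()):
    # r = update_to_weight_ratio(model, lr=0.1)
    # r << 1e-3 suggests learning is too slow; r >> 1e-3 suggests instability.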
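The two initialization recipes of tips 7.1 and 7.2, sketched with numpy; the shapes are illustrative, and I read layer_width as the fan-in here:

    import numpy as np

    rng = np.random.default_rng(0)

    # Tip 7.1: lazy version -- every parameter drawn as 0.02 * randn.
    def init_lazy(shape):
        return 0.02 * rng.standard_normal(shape)

    # Tip 7.2: scale each weight matrix by init_scale / sqrt(layer_width),
    # with init_scale set to 0.1 or 1 (layer_width taken as fan-in here).
    def init_scaled(fan_in, fan_out, init_scale=1.0):
        return (init_scale / np.sqrt(fan_in)) * rng.standard_normal((fan_in, fan_out))

    W1 = init_lazy((784, 256))
    W2 = init_scaled(256, 10, init_scale=0.1)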
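Tips 4 and 8 together, in PyTorch form: averaging the loss over the minibatch normalizes the gradient by the batch size, and the gradient norm is then clipped before the update. The threshold of 5 is one of the two values the tip suggests; the toy LSTM and data are placeholders:

    import torch

    model = torch.nn.LSTM(input_size=8, hidden_size=16)  # stand-in RNN
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    x = torch.randn(20, 4, 8)        # (seq_len, batch, features)
    target = torch.randn(20, 4, 16)

    out, _ = model(x)
    # reduction='mean' divides the loss (hence the gradient) by the number
    # of elements, covering tip 4's per-minibatch normalization.
    loss = torch.nn.functional.mse_loss(out, target, reduction='mean')

    opt.zero_grad()
    loss.backward()
    # Tip 8: constrain the gradient norm to 5 (or 15) before stepping.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=5.0)
    opt.step()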
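Tip 9, as a central-difference gradient check; f is any scalar loss, and the tolerance mentioned in the comment is a common rule of thumb rather than part of the original tip:

    import numpy as np

    def grad_check(f, w, analytic_grad, eps=1e-5):
        # Compare an analytic gradient against central finite differences.
        num_grad = np.zeros_like(w)
        for i in range(w.size):
            old = w.flat[i]
            w.flat[i] = old + eps
            f_plus = f(w)
            w.flat[i] = old - eps
            f_minus = f(w)
            w.flat[i] = old
            num_grad.flat[i] = (f_plus - f_minus) / (2 * eps)
        rel_err = np.abs(num_grad - analytic_grad) / (
            np.abs(num_grad) + np.abs(analytic_grad) + 1e-12)
        return rel_err.max()  # values well below ~1e-5 usually mean the gradient is right

    # Example: f(w) = sum(w**2), whose analytic gradient is 2w.
    w = np.random.randn(5)
    print(grad_check(lambda v: np.sum(v ** 2), w, 2 * w))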
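Tip 10 does not say which bias. A common reading, and a common practice, is to initialize the LSTM forget-gate bias to a positive value such as 1 so the cell state is retained early in training; that interpretation is mine, not the original author's. A PyTorch sketch, relying on PyTorch's (input, forget, cell, output) gate packing:

    import torch

    lstm = torch.nn.LSTM(input_size=8, hidden_size=16)
    hidden = lstm.hidden_size
    for name, param in lstm.named_parameters():
        if 'bias' in name:
            # Gates are packed as [input | forget | cell | output],
            # so the forget-gate slice is the second quarter.
            with torch.no_grad():
                param[hidden:2 * hidden].fill_(1.0)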
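Tip 12, using torchvision's standard transforms as one way to flip and rotate images on the fly; the parameters and the dataset path in the comment are placeholders:

    from torchvision import transforms

    # Random flips and small rotations enlarge the effective training set.
    augment = transforms.Compose([
        transforms.RandomHorizontalFlip(p=0.5),
        transforms.RandomRotation(degrees=10),
        transforms.ToTensor(),
    ])
    # Usage: pass `augment` as the transform of an image dataset, e.g.
    # datasets.ImageFolder('path/to/train', transform=augment)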
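Tip 13 in PyTorch form; placing dropout after the hidden layer with p=0.5 is just one common choice, not something the tip specifies:

    import torch

    net = torch.nn.Sequential(
        torch.nn.Linear(784, 256),
        torch.nn.ReLU(),
        torch.nn.Dropout(p=0.5),  # tip 13: randomly zero activations during training
        torch.nn.Linear(256, 10),
    )
    net.train()  # dropout active during training
    net.eval()   # dropout disabled at evaluation time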
