Learning MXNet from Scratch (5): MXNet's Black Tech for Saving GPU Memory


The name I came up with is a bit of a mouthful. - -#

Everyone doing deep learning has probably hit the moment where GPU memory runs out, and you are forced to painfully shrink the batch size or trim your network structure, only to end up with unsatisfactory results and the feeling that the whole world is against you. What do you do when that happens? Use MXNet's miracle trick for saving GPU memory! As Lu Xun once said: "If you don't try, how do you know your idea is really that bad?"

First, the link: mxnet-memonger. The corresponding paper, Training Deep Nets with Sublinear Memory Cost, is also well worth a read.

Actually, the repo and the paper explain everything very clearly; here I will just go over it briefly.

First, why?

  What is the principle behind saving memory? We know that when training a network, GPU memory is used to store the intermediate results. Why do the intermediate results need to be stored? Because when backpropagation computes the gradient of a layer, it needs that layer's output together with the gradient coming from the layer above. So it sounds like no memory can be saved at all? Of course it can. A simple example: in a 3-layer neural network we can choose not to store the output of the second layer; when backpropagation reaches the second layer and needs that output, it can be recomputed from the output of the first layer. Trading this extra computation for storage saves a lot of memory. A reminder: this is only my personal understanding; I have not yet read the paper carefully and will take proper notes when I have time. But the general idea is roughly this.
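  To make the idea concrete, here is a minimal sketch in plain NumPy (made-up shapes and a toy squared-sum loss; it illustrates the recompute-instead-of-store idea, it is not the memonger implementation):

import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def relu_grad(x):
    return (x > 0.0).astype(x.dtype)

rng = np.random.default_rng(0)
W1, W2, W3 = [rng.standard_normal((4, 4)) for _ in range(3)]
x = rng.standard_normal(4)

# forward pass: keep a1, but deliberately drop a2 to save memory
a1 = relu(W1 @ x)
a2 = relu(W2 @ a1)              # used once, then discarded
a3 = W3 @ a2
del a2                          # pretend it was never stored

# backward pass for the toy loss L = sum(a3 ** 2)
grad_a3 = 2.0 * a3
a2 = relu(W2 @ a1)              # recompute a2 from a1 exactly when it is needed
grad_W3 = np.outer(grad_a3, a2)
grad_a2 = W3.T @ grad_a3
grad_pre2 = grad_a2 * relu_grad(W2 @ a1)
grad_W2 = np.outer(grad_pre2, a1)

  The cost is one extra forward computation of the dropped layer; that compute-for-memory trade is exactly what the paper formalizes.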

  

Second, how?

  How do you actually do it? Let me share my trick: in the symbol definition, right after a line such as data = data + data0, I add a line data._set_attr(force_mirroring='True'). For why this works, go read the repo's README. Once the symbol has been processed this way, all you need is the code below: search_plan will hand you back a memory-saving symbol, and everything else stays exactly the same.

  

import mxnet as mx
import memonger

# configure your network
net = my_symbol()

# call memory optimizer to search possible memory plan.
net_planned = memonger.search_plan(net)

# use as normal
model = mx.FeedForward(net_planned, ...)
model.fit(...)

PS: Be careful when using this: do not add the mirror attribute after random layers such as Dropout. If you do, the recomputed result will differ from the one produced in the original forward pass, and the loss of your symbol will start behaving very strangely.
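To tie the trick and the PS together, here is a minimal sketch of where I would and would not set the attribute (the two-layer MLP, its layer names, and its sizes are made up purely for illustration; search_plan is called exactly as in the README snippet above):

import mxnet as mx
import memonger

data = mx.sym.Variable('data')
fc1 = mx.sym.FullyConnected(data=data, num_hidden=512, name='fc1')
act1 = mx.sym.Activation(data=fc1, act_type='relu', name='relu1')
act1._set_attr(force_mirroring='True')                 # deterministic and cheap to recompute: mark it

drop = mx.sym.Dropout(data=act1, p=0.5, name='drop1')  # random layer: leave it alone, do NOT mirror

fc2 = mx.sym.FullyConnected(data=drop, num_hidden=10, name='fc2')
net = mx.sym.SoftmaxOutput(data=fc2, name='softmax')

net_planned = memonger.search_plan(net)                # training code stays unchanged from here on

Marking only deterministic layers keeps the recomputed activations identical to the ones from the original forward pass, which is exactly why the Dropout warning above matters.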

 

Third, summary

  Oh, my God!
