How can I solve a memory leak in Python training data?

Source: Internet
Author: User

Python is increasingly widely used in machine learning and deep learning; it is currently the most popular language in the field, and all of the major deep-learning frameworks provide a Python interface. When training a model with TensorFlow, data loading generally falls into one of two scenarios:

1. The data fits entirely in memory. This is the most convenient case: load everything up front, then train.
2. The data does not fit entirely in memory. Two common approaches here are reading data while training (which can be optimized with multithreading), and shuffling the indices of all training samples, then randomly selecting a subset for each training or test step.

In the second scenario, an incomplete understanding of Python's garbage-collection and memory-allocation behavior (guilty as charged...) can lead to a memory leak. In the example below, the list of tuples built by the uncommented `dataset` line cannot be fully freed, while the commented-out version that builds a plain list of arrays releases cleanly:

```python
import gc
import os

import numpy as np
import psutil


def get_memory_usage():
    """Resident set size of the current process, in MB."""
    process = psutil.Process(os.getpid())
    return process.memory_info().rss / (1024 * 1024)


print('Before load, Memory: %d MB' % get_memory_usage())
# dataset = [np.zeros((6000, 6000), dtype=np.float32) for _ in range(30000)]  # releases cleanly
dataset = [(np.zeros((6000, 6000), dtype=np.float32),
            np.zeros((200, 200), dtype=np.int32))
           for _ in range(10000)]  # leaks
print('After load, Memory: %d MB' % get_memory_usage())

print('Before release, Memory: %d MB' % get_memory_usage())
del dataset
dataset = None
gc.collect()
print('After release, Memory: %d MB' % get_memory_usage())
```

Memory results for the plain list of arrays (the commented-out line):

```
Before load, Memory: 25 MB
After load, Memory: 220 MB
Before release, Memory: 220 MB
After release, Memory: 27 MB
```

Memory results for the list of tuples:

```
Before load, Memory: 23 MB
After load, Memory: 1406 MB
Before release, Memory: 1406 MB
After release, Memory: 1170 MB
```

As you can see, the list-of-tuples version leaks badly: over 1 GB stays resident even after deleting the dataset and forcing a collection.
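The index-shuffling approach described above can be sketched as follows. This is a minimal illustration, not the author's code; `load_sample` is a hypothetical helper standing in for whatever reads one training example from disk:

```python
import random

import numpy as np


def load_sample(index):
    # Hypothetical loader: in a real project this would read one training
    # example from disk by its index; here it returns a dummy array.
    return np.full((4, 4), index, dtype=np.float32)


def iterate_minibatches(num_samples, batch_size):
    """Shuffle all sample indices, then yield minibatches of loaded samples."""
    indices = list(range(num_samples))
    random.shuffle(indices)
    for start in range(0, num_samples, batch_size):
        batch_indices = indices[start:start + batch_size]
        # Only batch_size samples are resident at a time, instead of the
        # whole dataset.
        yield np.stack([load_sample(i) for i in batch_indices])


for batch in iterate_minibatches(num_samples=10, batch_size=4):
    print(batch.shape)
```

Because only one minibatch of samples lives in memory at a time, this pattern avoids holding the full dataset in a giant Python list in the first place.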
When this leak occurs, model training hits OOM after only a few epochs. The best solution is of course to fix the root cause: study Python's GC and memory-allocation mechanisms and eliminate the leak. But when time is tight or the problem is stubborn, a rough-and-ready workaround is also a valid choice. The goal of training is to optimize the model parameters given the training data and the model structure; as long as that goal is achieved, the exact process matters little. In this example, you can train just one epoch per process, then restart the Python process, reload the saved model, and fine-tune it for another epoch. The result is close to training the same number of epochs in a single process.

```shell
# Train for 50 epochs, one process per epoch
seq 50 | xargs -i python train.py --model-path=./model
```
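A minimal sketch of what such a `train.py` might look like, assuming a toy NumPy "model" saved to disk. The file layout, the `.npy` checkpoint, and the one-epoch update are placeholders for illustration, not the original author's code:

```python
import argparse
import os

import numpy as np


def train_one_epoch(weights):
    # Placeholder for a real training epoch: here we just nudge the weights.
    return weights + 0.1


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument('--model-path', default='./model')
    args = parser.parse_known_args()[0]

    weights_file = args.model_path + '.npy'
    if os.path.exists(weights_file):
        # Later epochs: resume from the previous process's checkpoint.
        weights = np.load(weights_file)
    else:
        # First epoch: start from a fresh model.
        weights = np.zeros(3)

    weights = train_one_epoch(weights)
    # Save and exit; the dying process returns all leaked memory to the OS.
    np.save(weights_file, weights)


if __name__ == '__main__':
    main()
```

Each invocation trains exactly one epoch and exits, so any memory the process leaked is reclaimed by the operating system, and the `seq 50 | xargs` loop strings 50 such invocations together.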

