This article introduces, with sample code, how to use meliae to analyze a Python program's memory usage; refer to it if you find it useful. The DHT protocol search program I wrote has become noticeably faster after the optimizations of the last few days, but a new problem appeared: memory usage soared. Ten crawler instances were taking 800 MB of memory. At first I assumed there were simply too many nodes; I fixed a few minor issues, but it made no difference. I then went looking for a Python memory analysis tool and, after reading up a bit, found that Python has the meliae library, which is very convenient to use. Analysis showed that the cause was not too many nodes at all: it was the dictionary that stores the t_id of every sent query (used to match returned messages to the queries that were sent), which had grown far too large.
From the analysis results it is very easy to see the count and total size of each kind of object, which makes the problem easy to locate. I suspected that many entries in that dictionary were never released after the query messages were sent, so I added an expiration time and deleted stale entries. That helped, but only a little; the saving seemed to be less than 100 MB. Next I reduced how often a random hash is searched: it used to run once a minute, and I changed it to run only the first time, yet memory did not drop at all, and I don't know why. The search works by asking a node for the hash, getting more nodes back, and then asking those nodes in turn, so the number of nodes keeps growing; still, I don't understand how 600,000 entries accumulated after running for only a few minutes. In other words, that many objects were sitting in memory unreleased at that point. Once memory usage reaches that level it stays essentially flat, with only a very small and very slow increase, and since other programs were also running I am not sure whether that increase comes from other objects. So I did a staged dump test.
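For reference, here is a minimal sketch of the kind of expiration-based cleanup described above. The sent_tids dict, the helper names, and the 60-second window are all illustrative assumptions, not the crawler's actual code:

import time

SENT_TID_TTL = 60  # seconds an outstanding t_id is kept; illustrative value

sent_tids = {}  # t_id -> timestamp of when the query was sent

def remember_tid(t_id):
    # Record the transaction id of a query we just sent
    sent_tids[t_id] = time.time()

def expire_tids():
    # Drop t_ids whose responses never arrived within the TTL,
    # so the dict does not grow without bound
    now = time.time()
    for t_id in [t for t, ts in sent_tids.items() if now - ts > SENT_TID_TTL]:
        del sent_tids[t_id]

Calling expire_tids() periodically (for example from the crawler's main loop) keeps the dictionary bounded by the query rate times the TTL.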
Installation is simply pip install meliae. The project has not been updated for a long time, and I don't know whether there are better alternatives by now, but it works well enough.
Dump memory to a file
The code is as follows:
import time
from meliae import scanner
# Write all live objects to a timestamped dump file
scanner.dump_all_objects('/tmp/dump%s.txt' % time.time())
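To take dumps at different stages of a long-running process (as in the staged test mentioned above), one possible approach, sketched here as an assumption rather than part of the original program, is to hook dump_all_objects to a signal so a dump can be triggered from outside at any moment:

import signal
import time
from meliae import scanner

def dump_on_signal(signum, frame):
    # Write a timestamped dump each time the process receives SIGUSR1
    scanner.dump_all_objects('/tmp/dump%s.txt' % time.time())

# Register the handler; `kill -USR1 <pid>` then triggers a dump (Unix only)
signal.signal(signal.SIGUSR1, dump_on_signal)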
Analyzing the dump file:
The code is as follows:
from meliae import loader
# Load the dump file
om = loader.load('/opt/log/dump.txt')
# Compute the reference (parent) relationships of every object
om.compute_parents()
# Fold each instance's __dict__ into the instance itself
om.collapse_instance_dicts()
# Summarize memory usage
om.summarize()
The fields in the summary have the following meanings:
Index: row index number
Count: number of objects of this type
%(Count): this type's object count as a percentage of the total object count
Size: total bytes used by objects of this type
%(Size): this type's bytes as a percentage of the total bytes of all objects
Cum: cumulative %(Size) up to and including this row
Max: size in bytes of the largest single object of this type (see the sketch after this list)
Kind: the object type
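As a follow-up to the Max column, here is a small sketch of how to pull out the single biggest object of a given type (dict here, since that was the culprit above) using get_all, which is shown in more detail below. It assumes the proxy objects expose size and type_str attributes, as the summary columns suggest:

# Get proxies for every dict in the dump and pick out the largest one
dicts = om.get_all('dict')
biggest = max(dicts, key=lambda obj: obj.size)
print(biggest.size, biggest.type_str)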
Analyze an object and find its reference relationships.
The code is as follows:
# Get all POP3ClientProtocol objects
p = om.get_all('POP3ClientProtocol')
# Look at the first object
p[0]
# See all objects that this object references
p[0].c
# See all objects that reference this object
p[0].p
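In a case like the t_id dictionary described at the beginning, the useful direction is usually upwards: start from one of the many small leaked objects and follow .p until you reach the container that keeps them all alive. A rough sketch, assuming the proxies returned by .p expose a type_str attribute (the str starting type and the 10-step limit are only illustrative):

# Start from one suspect object and walk up the chain of referrers
obj = om.get_all('str')[0]
for _ in range(10):  # limit the walk so reference cycles cannot loop forever
    parents = obj.p
    if not parents:
        break
    obj = parents[0]  # follow the first referrer; a real hunt may need to inspect all of them
    print(obj.type_str)

If the walk keeps landing on the same large dict, that dict is very likely the one holding everything in memory.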