Getting Started with Python Performance Analysis


Note: The original author of this article is Huy Nguyen; the original post is titled "A guide to analyzing Python performance".

While not every Python program you write requires rigorous performance analysis, it is reassuring to know that the Python ecosystem offers a variety of tools for when such problems do arise.

Profiling a program's performance comes down to answering four basic questions:

How fast is it running?
Where are the speed bottlenecks?
How much memory is it using?
Where is the memory leaking?

Below, we will dig into the answers to these questions with some excellent tools.
Coarse-grained timing with Unix time

Let's start with a quick and rough way to time our code: the traditional Unix time utility.

$ time python yourprogram.py

real    0m1.028s
user    0m0.001s
sys     0m0.003s

The detailed meaning of these three measurements is described in this StackOverflow article, but in short:

real - the actual wall-clock time the program took
user - the CPU time spent outside the kernel
sys - the CPU time spent in kernel-specific functions

Adding sys and user time together gives you a sense of how many CPU cycles your program actually consumed, regardless of the other programs running on the system.

If the sum of sys and user time is much less than real time, you can guess that most of your program's performance problems are likely related to IO waits.
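
As a rough illustration of that heuristic (a sketch of my own, not from the original article), you can compare wall-clock time against your process's own CPU time from inside Python; on POSIX systems, os.times() reports the user and sys components:

import os
import time

wall_start = time.time()
cpu_start = sum(os.times()[:2])   # user + sys CPU time consumed so far

time.sleep(1)                     # stand-in for IO-heavy work

wall_elapsed = time.time() - wall_start
cpu_elapsed = sum(os.times()[:2]) - cpu_start

# if cpu_elapsed is much smaller than wall_elapsed, this block spent
# most of its life waiting on IO rather than computing
print("wall: %.3f s, cpu: %.3f s" % (wall_elapsed, cpu_elapsed))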
Fine-grained timing with a timing context manager

Our next technique involves instrumenting the code directly to get fine-grained timing information. Here is a small snippet I have found invaluable for ad-hoc time measurements:

timer.py

import time

class Timer(object):
    def __init__(self, verbose=False):
        self.verbose = verbose

    def __enter__(self):
        self.start = time.time()
        return self

    def __exit__(self, *args):
        self.end = time.time()
        self.secs = self.end - self.start
        self.msecs = self.secs * 1000  # millisecs
        if self.verbose:
            print 'elapsed time: %f ms' % self.msecs

To use it, wrap the block of code you want to time with Python's with keyword and this Timer context manager. It will take care of starting the timer when your code block begins executing and stopping it when your code block ends.

Here is an example of the snippet in use:

from timer import Timer
from redis import Redis

rdb = Redis()

with Timer() as t:
    rdb.lpush("foo", "bar")
print "=> elapsed lpush: %s s" % t.secs

with Timer() as t:
    rdb.lpop("foo")
print "=> elapsed lpop: %s s" % t.secs

To watch how my program's performance evolves over time, I often log the output of these timers to a file.
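
A minimal sketch of that logging habit might look like this; the timings.log filename and the do_work() function are illustrative placeholders, not part of the original snippet:

from timer import Timer

def do_work():
    pass   # hypothetical placeholder for the code being measured

with Timer() as t:
    do_work()

# append each measurement so the file accumulates a performance history
with open("timings.log", "a") as f:
    f.write("do_work took %f s\n" % t.secs)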
Line-by-line timing and execution frequency with a profiler

Robert Kern has a nice project called line_profiler, and I often use it to see how fast, and how often, each line in my scripts is executed.

To use it, install it with pip:

pip install line_profiler

Once installation is complete, you will have a new module called line_profiler as well as an executable script, kernprof.py.

To use this tool, first decorate the function you want to measure with @profile. Don't worry, you don't have to import anything to use this decorator; the kernprof.py script injects it into your script's runtime automatically during execution.

primes.py

@profile
def primes(n):
    if n==2:
        return [2]
    elif n<2:
        return []
    s=range(3,n+1,2)
    mroot = n ** 0.5
    half=(n+1)/2-1
    i=0
    m=3
    while m <= mroot:
        if s[i]:
            j=(m*m-3)/2
            s[j]=0
            while j<half:
                s[j]=0
                j+=m
        i=i+1
        m=2*i+3
    return [2]+[x for x in s if x]

primes(100)

Once you've decorated your code with @profile, run the script with kernprof.py:

kernprof.py -l -v primes.py

The -l option tells kernprof to inject the @profile decorator into your script's builtins, and the -v option tells it to display timing information once your script finishes. Here is the output for the script above:

Wrote profile results to primes.py.lprof
Timer unit: 1e-06 s

File: primes.py
Function: primes at line 2
Total time: 0.00019 s

Line #      Hits         Time  Per Hit   % Time  Line Contents
==============================================================
     2                                           @profile
     3                                           def primes(n):
     4         1            2      2.0      1.1      if n==2:
     5                                                   return [2]
     6         1            1      1.0      0.5      elif n<2:
     7                                                   return []
     8         1            4      4.0      2.1      s=range(3,n+1,2)
     9         1           10     10.0      5.3      mroot = n ** 0.5
    10         1            2      2.0      1.1      half=(n+1)/2-1
    11         1            1      1.0      0.5      i=0
    12         1            1      1.0      0.5      m=3
    13         5            7      1.4      3.7      while m <= mroot:
    14         4            4      1.0      2.1          if s[i]:
    15         3            4      1.3      2.1              j=(m*m-3)/2
    16         3            4      1.3      2.1              s[j]=0
    17        31           31      1.0     16.3              while j<half:

Look for lines with a high Hits count or a high Time value. These are the lines where optimization will yield the greatest improvement.
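
Note that the -l run also saved the raw results to primes.py.lprof; per the line_profiler documentation, you should be able to view them again later without re-running your script:

$ python -m line_profiler primes.py.lprof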
How much memory does it use?

Now that we have good timing information about our code, let's move on to figuring out how much memory our program uses. Luckily for us, Fabian Pedregosa has implemented a nice memory profiler, memory_profiler, modeled after Robert Kern's line_profiler.

First install it via PIP:

$ pip install -U memory_profiler
$ pip install psutil

(Installing psutil is recommended here because it improves the performance of memory_profiler.)

Like line_profiler, memory_profiler requires you to decorate your function of interest with @profile:

@profile
def primes(n):
    ...

To see how much memory your function uses, run the following command:

$ python -m memory_profiler primes.py

Once your program exits, you should see output like this:

Filename: primes.py

Line #    Mem usage   Increment   Line Contents
===============================================
     2                            @profile
     3    7.9219 MB   0.0000 MB   def primes(n):
     4    7.9219 MB   0.0000 MB       if n==2:
     5                                    return [2]
     6    7.9219 MB   0.0000 MB       elif n<2:
     7                                    return []
     8    7.9219 MB   0.0000 MB       s=range(3,n+1,2)
     9    7.9258 MB   0.0039 MB       mroot = n ** 0.5
    10    7.9258 MB   0.0000 MB       half=(n+1)/2-1
    11    7.9258 MB   0.0000 MB       i=0
    12    7.9258 MB   0.0000 MB       m=3
    13    7.9297 MB   0.0039 MB       while m <= mroot:
    14    7.9297 MB   0.0000 MB           if s[i]:
    15    7.9297 MB   0.0000 MB               j=(m*m-3)/2
    16    7.9258 MB  -0.0039 MB               s[j]=0
    17    7.9297 MB   0.0039 MB               while j<half:

IPython shortcut commands for line_profiler and memory_profiler

A little-known feature of line_profiler and memory_profiler is that both have shortcut magic commands available in IPython. All you have to do is type the following in an IPython session:

%load_ext memory_profiler
%load_ext line_profiler

After doing so, you have access to the magic commands %lprun and %mprun, which behave like their command-line counterparts. The major difference is that you don't need to decorate the function you want to profile with @profile; you can run the analysis directly in your IPython session:

In [1]: from primes import primes
In [2]: %mprun -f primes primes(1000)
In [3]: %lprun -f primes primes(1000)

This can save you a lot of time and effort, because none of your source files need to be modified in order to use these profiling commands.
Where is the memory leaking?

The CPython interpreter uses reference counting as its main way of tracking memory. This means that every object holds a counter: when a reference to the object is stored somewhere, the counter is incremented, and when a reference to it is deleted, the counter is decremented. When the counter reaches zero, the CPython interpreter knows the object is no longer in use, so it deletes the object and frees the memory it occupied.
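
A quick way to watch this counter in action (a small sketch assuming CPython, not part of the original article) is sys.getrefcount, which reports an object's counter plus one extra temporary reference held by the call itself:

import sys

x = object()
print(sys.getrefcount(x))   # 2: the name x, plus getrefcount's own argument

y = x                       # storing another reference increments the counter
print(sys.getrefcount(x))   # 3

del y                       # deleting a reference decrements it again
print(sys.getrefcount(x))   # back to 2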

A memory leak typically happens when your program keeps references to objects even though they are no longer in use.
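
As a hypothetical example of such a lingering reference (my illustration, not from the original article), consider a module-level cache that is appended to but never pruned:

_cache = []

def handle(request):
    result = compute(request)   # compute() is a hypothetical helper
    _cache.append(result)       # this reference outlives the request,
    return result               # so result is never garbage collected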

The quickest way to find memory leaks is with a very nice tool called objgraph, written by Marius Gedminas. This tool lets you see the number of objects in memory, and also locate all the different places in your code that hold references to those objects.

To get started, first install objgraph:

pip install objgraph

Once installed, insert a statement into your code that drops you into the debugger:

import pdb; pdb.set_trace()

Which objects are the most common?

At runtime, you can inspect the top 20 most prevalent objects in your program:

(Pdb) import objgraph
(Pdb) objgraph.show_most_common_types()

MyBigFatObject             20000
tuple                      16938
function                   4310
dict                       2790
wrapper_descriptor         1181
builtin_function_or_method 934
weakref                    764
list                       634
method_descriptor          507
getset_descriptor          451
type                       439

Which objects have been added or deleted?

We can also see which objects have been added or deleted between two points in time:

(Pdb) import objgraph
(Pdb) objgraph.show_growth()
.
.
.
(Pdb) objgraph.show_growth()   # this only shows objects that have been added or deleted since the last show_growth() call

traceback                4        +2
KeyboardInterrupt        1        +1
frame                   24        +1
list                   667        +1
tuple                16969        +1

What is referencing this leaky object?

Going a step further, we can also see where references to any given object are held. Let's take this simple program as an example:

x = [1]
y = [x, [x], {"a": x}]
import pdb; pdb.set_trace()

To see what is holding a reference to the variable x, run the objgraph.show_backrefs() function:

(Pdb) import objgraph
(Pdb) objgraph.show_backrefs([x], filename="/tmp/backrefs.png")

The output of this command is a PNG image stored at /tmp/backrefs.png, and it should look something like this:

(Figure: the back-reference graph that objgraph writes to /tmp/backrefs.png.)

The box at the bottom with red lettering is the object we are interested in; we can see that it is referenced once by the symbol x and three times by the list y. If x is the object causing a memory leak, we can use this method to trace all of its references in order to see why it has not been automatically collected.

To recap, objgraph allows us to:

show the top N objects occupying a Python program's memory
show which objects have been added and which have been deleted over a period of time
show all references to a given object in our script

Effort vs Precision

In this article, I showed you how to use a handful of tools to analyze a Python program's performance. Armed with these tools and techniques, you should have all the information you need to track down most memory leaks and quickly spot bottlenecks in your Python programs.

As with many other subjects, performance analysis involves a tradeoff between effort and precision. When in doubt, use the simplest solution that meets your current needs.

Related reading:

Stack Overflow - time explained
line_profiler
memory_profiler
objgraph
