When you run a complex Python program, it can take a long time to execute, and you may want to improve its execution efficiency. But how?
First, you need a tool that can detect the bottlenecks in your code, for example, find out which parts take a relatively long time to execute, so you can optimize those parts.
At the same time, you also want to keep memory and CPU usage under control, so you can optimize your code from that angle as well.
So in this article, I'll introduce 7 different Python tools for checking the execution time of functions and the memory and CPU usage of your code.
1. Use a decorator to measure function execution time
An easy way is to define a decorator that measures the execution time of a function and prints the result:
import time
from functools import wraps

def fn_timer(function):
    @wraps(function)
    def function_timer(*args, **kwargs):
        t0 = time.time()
        result = function(*args, **kwargs)
        t1 = time.time()
        print("Total time running %s: %s seconds" %
              (function.__name__, str(t1 - t0)))
        return result
    return function_timer
Next, add the decorator to the function you want to measure, as follows:
@fn_timer
def myfunction(...):
    ...
For example, let's measure how long it takes to sort an array containing 2 million random numbers:

import random

@fn_timer
def random_sort(n):
    return sorted([random.random() for i in range(n)])

if __name__ == "__main__":
    random_sort(2000000)
When you execute the script, you see the following results:
Total time running random_sort: 1.41124916077 seconds
2. Using the timeit module
Another approach is to use the timeit module to measure the average time consumption.
Run it against the script with the following command:
$ python -m timeit -n 4 -r 5 -s "import timing_functions" "timing_functions.random_sort(2000000)"
Here, timing_functions is the name of the Python script file.
At the end of the output, you can see the following results:
4 loops, best of 5: 2.08 sec per loop
This means that the statement was executed 4 times per measurement, the measurement was repeated 5 times, and the best result was 2.08 seconds per loop.
If you do not specify the number of loops or repetitions, the defaults of 10 loops and 5 repetitions are used.
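The same measurement can also be made from inside Python instead of the command line. Here is a minimal sketch using timeit.repeat, assuming random_sort lives in timing_functions.py as defined above:

import timeit

# Time timing_functions.random_sort(2000000): 4 executions per
# measurement, repeated 5 times, keeping the best time per loop.
times = timeit.repeat(
    stmt="timing_functions.random_sort(2000000)",
    setup="import timing_functions",
    number=4,
    repeat=5,
)
print("best of 5: %.2f sec per loop" % (min(times) / 4))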
3. Use the time command on Unix systems
However, the decorator and the timeit module are both Python-based. The Unix time utility is useful for measuring a Python script from outside of Python.
Run the time utility:
$ time -p python timing_functions.py
The output is:
Total time running random_sort: 1.3931210041 seconds
real 1.49
user 1.40
sys 0.08
The first line comes from the decorator we defined earlier; the other three lines are:
- real: the total time spent executing the script
- user: the CPU time spent executing the script's own code
- sys: the CPU time spent in kernel functions on behalf of the script
Note: According to Wikipedia, the kernel is a computer program that manages input/output requests from software and translates them into data-processing instructions for the CPU and the other electronic components of the computer.
Therefore, the difference between the real time and the user + sys time can indicate the time spent waiting on input/output, or the time the system spent running other tasks.
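In the sample output above, user + sys = 1.40 + 0.08 = 1.48 seconds, so only about 0.01 of the 1.49 seconds of real time went to I/O waits or other processes.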
4. Using the cProfile module
If you want to know how much time each function and method consumes, and how many times each of them is called, you can use the cProfile module.
$ python -m cProfile -s cumulative timing_functions.py
You will now see a detailed breakdown of the functions in the code, including how many times each function is called; because the -s cumulative option is used, the output is sorted by the cumulative execution time of each function.
You will notice that the total time needed to execute the script is longer than before. This is because measuring the execution time of each function itself takes time.
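cProfile can also be driven from inside a script, which is handy when you only want to profile a single call rather than the whole program. A minimal sketch, assuming random_sort can be imported from timing_functions.py as defined above:

import cProfile
import pstats

import timing_functions  # the script that defines random_sort

# Profile a single call and collect the statistics.
profiler = cProfile.Profile()
profiler.enable()
timing_functions.random_sort(2000000)
profiler.disable()

# Sort the report by cumulative time, like the -s cumulative option.
stats = pstats.Stats(profiler)
stats.sort_stats("cumulative").print_stats(10)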
5. Using the line_profiler module
The line_profiler module gives you the CPU time spent on each individual line of code.
First, install the module:
$ pip install line_profiler
Next, mark the function you want to profile with @profile (you do not need to import anything for this; kernprof makes the decorator available when it runs the script):
import random

@profile
def random_sort2(n):
    l = [random.random() for i in range(n)]
    l.sort()
    return l

if __name__ == "__main__":
    random_sort2(2000000)
Finally, obtain a line-by-line profile of the random_sort2 function with the following command:
$ kernprof -l -v timing_functions.py
Here -l stands for line-by-line profiling, and -v tells kernprof to display the results immediately. In this example, we see that building the array consumes 44% of the computation time, while the sort() method consumes the remaining 56%.
As before, the script takes longer to execute because of the profiling overhead.
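If you prefer not to go through kernprof, line_profiler can also be used directly from Python. A minimal sketch, reusing the random_sort2 function from above (with the @profile marker removed, since the profiler is created explicitly here):

import random

from line_profiler import LineProfiler

def random_sort2(n):
    l = [random.random() for i in range(n)]
    l.sort()
    return l

# Wrap the function in a LineProfiler, run it, then print the
# line-by-line timings, much like kernprof -l -v does.
lp = LineProfiler()
wrapped = lp(random_sort2)
wrapped(2000000)
lp.print_stats()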
6. Using the memory_profiler module
The memory_profiler module measures the memory usage of your code on a line-by-line basis. Using this module makes the code run much more slowly.
The installation method is as follows:
$ pip install memory_profiler
It is also recommended to install the psutil package so that memory_profiler runs faster:
$ pip install psutil
As with line_profiler, use the @profile decorator to mark the function that needs to be traced. Next, run:
$ python -m memory_profiler timing_functions.py
The script now takes 1 or 2 seconds longer to execute than before, and if the psutil package is not installed it may take even longer.
As you can see from the results, memory usage is reported in MiB (mebibytes, where 1 MiB ≈ 1.05 MB).
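memory_profiler also has a small Python API, so you can measure a call without running the whole script through python -m memory_profiler. A minimal sketch using memory_usage, reusing the random_sort2 function from above:

import random

from memory_profiler import memory_usage

def random_sort2(n):
    l = [random.random() for i in range(n)]
    l.sort()
    return l

# Sample the memory consumed while random_sort2(2000000) runs;
# the result is a list of measurements in MiB.
usage = memory_usage((random_sort2, (2000000,), {}))
print("Peak memory usage: %.1f MiB" % max(usage))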
7. Using the guppy package
Finally, this package lets you track how many objects of each type (str, tuple, dict, and so on) are created at each stage of the code's execution.
The installation method is as follows:
$ pip install guppy
Next, add it to your code:
import random
from guppy import hpy

def random_sort3(n):
    hp = hpy()
    print("Heap at the beginning of the function\n", hp.heap())
    l = [random.random() for i in range(n)]
    l.sort()
    print("Heap at the end of the function\n", hp.heap())
    return l

if __name__ == "__main__":
    random_sort3(2000000)
To run the code:
$ python timing_functions.py
You will see the heap statistics printed at both points.
By placing hp.heap() calls at different places in your code, you can follow how objects are created and deleted over the course of the script.
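If you only care about the objects allocated inside the function itself, guppy's heap objects also provide setrelheap(), which makes subsequent heap() calls report allocations relative to that point. A minimal sketch (random_sort4 is just a hypothetical name for this variant):

import random
from guppy import hpy

def random_sort4(n):
    hp = hpy()
    hp.setrelheap()  # only count objects allocated from here on
    l = [random.random() for i in range(n)]
    l.sort()
    print(hp.heap())  # objects created inside this function
    return l

if __name__ == "__main__":
    random_sort4(2000000)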
If you want to learn more about speeding up Python code, I suggest reading the book High Performance Python: Practical Performant Programming for Humans, published in September 2014.
I hope this article can help you! ^_^