Python Performance Tuning Recommendations

Reference:

1190000000666603

http://blog.csdn.net/zhoudaxia/article/details/23853609  # using CPython / PyPy for improved performance

http://www.ibm.com/developerworks/cn/linux/l-cn-python-optim/

  1. Optimize the time complexity of algorithms

    An algorithm's time complexity has the greatest impact on a program's execution efficiency. In Python you can often improve it by choosing the appropriate data structure: for example, looking up an element in a list is O(n), while looking it up in a set is O(1). Different scenarios call for different optimizations; in general, common approaches include divide and conquer, branch and bound, greedy algorithms, and dynamic programming.
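
    As a small illustration (a hypothetical example, not from the original article), the sketch below finds which items of one sequence also appear in another; switching the lookup container from a list to a set turns an O(n*m) loop into roughly O(n + m):

    # Hypothetical illustration: the same task with two different lookup structures.
    import timeit

    haystack_list = list(range(100000))
    haystack_set = set(haystack_list)        # building the set is a one-off O(n) cost
    needles = list(range(0, 100000, 7))

    def common_with_list():
        # each `in` test scans the list: O(n) per lookup
        return [x for x in needles if x in haystack_list]

    def common_with_set():
        # each `in` test is a hash lookup: O(1) on average
        return [x for x in needles if x in haystack_set]

    print(timeit.timeit(common_with_list, number=1))   # orders of magnitude slower
    print(timeit.timeit(common_with_set, number=1))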

  2. Reduce redundant data

    Store a large symmetric matrix in upper- or lower-triangular form, and use a sparse representation for matrices in which most elements are zero.
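
    As a rough sketch (this assumes NumPy and SciPy are available; neither is prescribed by the original text), a mostly-zero matrix can be built directly in sparse form instead of as a dense array:

    # Sketch under the assumption that NumPy/SciPy are installed.
    import numpy as np
    from scipy import sparse

    # Build the matrix directly from its non-zero entries instead of
    # materialising a dense 1000 x 1000 array first.
    rows = np.array([0, 42, 999])
    cols = np.array([0, 7, 500])
    vals = np.array([1.0, 2.0, 3.0])
    m = sparse.csr_matrix((vals, (rows, cols)), shape=(1000, 1000))

    sparse_bytes = m.data.nbytes + m.indices.nbytes + m.indptr.nbytes
    print(sparse_bytes)                        # a few KB: only the non-zeros plus index arrays
    print(np.zeros((1000, 1000)).nbytes)       # 8,000,000 bytes for the equivalent dense float64 array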

  3. Use copy and deepcopy appropriately

    For data structures such as dicts and lists, direct assignment only binds a new reference. When you need to copy the whole object, you can use copy() and deepcopy() from the copy module; the difference is that deepcopy copies recursively. Their efficiency also differs (the following runs in IPython):

    import copy
    a = range(100000)
    %timeit -n 10 copy.copy(a)        # run copy.copy(a) 10 times
    %timeit -n 10 copy.deepcopy(a)
    10 loops, best of 3: 1.55 ms per loop
    10 loops, best of 3: 151 ms per loop

    The -n option after %timeit specifies the number of runs, and the last two lines are the output of the two %timeit calls. They show that deepcopy is roughly two orders of magnitude (about 100x) slower here.
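
    To make the "recursive copy" distinction concrete, here is a small sketch (not from the original article) showing that a shallow copy still shares nested objects while a deep copy does not:

    import copy

    nested = [[1, 2], [3, 4]]
    shallow = copy.copy(nested)       # new outer list, but the inner lists are shared
    deep = copy.deepcopy(nested)      # inner lists are copied recursively as well

    nested[0].append(99)
    print(shallow[0])                 # [1, 2, 99] -- still shares the mutated inner list
    print(deep[0])                    # [1, 2]     -- unaffected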

  4. Use dict or set to look up elements

    Python's dict and set are implemented with hash tables (similar to unordered_map in the C++11 standard library), so the time complexity of looking up an element is O(1):

    a = range(1000)
    s = set(a)
    d = dict((i, 1) for i in a)
    %timeit -n 10000 100 in d
    %timeit -n 10000 100 in s
    10000 loops, best of 3: 43.5 ns per loop
    10000 loops, best of 3: 49.6 ns per loop

    dict lookup is slightly faster here (at the cost of more memory).

  5. Use generators and yield appropriately

    %timeit -n 100 a = (i for i in range(100000))
    %timeit -n 100 b = [i for i in range(100000)]
    100 loops, best of 3: 1.54 ms per loop
    100 loops, best of 3: 4.56 ms per loop

    Using () produces a generator object, and the memory it needs is independent of the size of the list, so it is more memory-efficient. For specific applications, for example, set(i for i in range(100000)) is faster than set([i for i in range(100000)]).

    But for situations where loop traversal is required:

    %timeit -n 10 for x in (i for i in range(100000)): pass
    %timeit -n 10 for x in [i for i in range(100000)]: pass
    10 loops, best of 3: 6.51 ms per loop
    10 loops, best of 3: 5.54 ms per loop

    The latter (the list comprehension) is more efficient here, but if the loop may break early, the benefits of a generator are obvious (see the sketch at the end of this section). yield can also be used to create a generator:

    def yield_func(ls):
        for i in ls:
            yield i+1

    def not_yield_func(ls):
        return [i+1 for i in ls]

    ls = range(1000000)
    %timeit -n 10 for i in yield_func(ls): pass
    %timeit -n 10 for i in not_yield_func(ls): pass
    10 loops, best of 3: 63.8 ms per loop
    10 loops, best of 3: 62.9 ms per loop

    For a list that fits comfortably in memory, returning a list directly is fine, though yield can be more readable (a matter of personal preference).

    Python 2.x has built-in generator facilities such as the xrange function and the itertools module.
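
    As a sketch of the early-break case mentioned above (the data and threshold are made up for illustration), a generator stops producing values as soon as the consumer stops, whereas a list comprehension builds the entire list first:

    data = range(1000000)

    def first_over_generator(threshold):
        for x in (i * i for i in data):     # squares are produced lazily, one at a time
            if x > threshold:
                return x                    # generator is abandoned here; remaining work is skipped

    def first_over_list(threshold):
        for x in [i * i for i in data]:     # all 1,000,000 squares are built up front
            if x > threshold:
                return x

    print(first_over_generator(100))        # both return 121, but the generator version
    print(first_over_list(100))             # does only a handful of multiplications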

  6. Optimize loops

    Whatever can be done outside the loop should not be done inside it. For example, hoisting the len() call out of the loop below makes it roughly twice as fast:

    a = range(10000)
    size_a = len(a)
    %timeit -n 1000 for i in a: k = len(a)
    %timeit -n 1000 for i in a: k = size_a
    1000 loops, best of 3: 569 μs per loop
    1000 loops, best of 3: 256 μs per loop

  7. Optimize the order of conditions in compound expressions

    For and, put the condition that is satisfied least often first; for or, put the condition that is satisfied most often first, so that short-circuit evaluation can skip the remaining tests as early as possible. For example:

    a = range(2000)
    %timeit -n 100 [i for i in a if 10 < i < 20 or 1000 < i < 2000]
    %timeit -n 100 [i for i in a if 1000 < i < 2000 or 100 < i < 20]
    %timeit -n 100 [i for i in a if i % 2 == 0 and i > 1900]
    %timeit -n 100 [i for i in a if i > 1900 and i % 2 == 0]
    100 loops, best of 3: 287 μs per loop
    100 loops, best of 3: 214 μs per loop
    100 loops, best of 3: 128 μs per loop
    100 loops, best of 3: 56.1 μs per loop

  8. Use join to concatenate strings from an iterable

    In [1]: %%timeit
       ...: s = ''
       ...: for i in a:
       ...:     s += i
       ...:
    10000 loops, best of 3: 59.8 μs per loop

    In [2]: %%timeit
       ...: s = ''.join(a)
       ...:
    100000 loops, best of 3: 11.8 μs per loop

    Compared with cumulative += concatenation, join gives roughly a 5x speedup (here a is a list of strings).

  9. Choose an appropriate string formatting method

    s1, s2 = 'ax', 'bx'
    %timeit -n 100000 'abc%s%s' % (s1, s2)
    %timeit -n 100000 'abc{0}{1}'.format(s1, s2)
    %timeit -n 100000 'abc' + s1 + s2
    100000 loops, best of 3: 183 ns per loop
    100000 loops, best of 3: 169 ns per loop
    100000 loops, best of 3: 103 ns per loop

    Of the three, the % style is the slowest, but the differences are small (all three are very fast). (Personally I find % the most readable.)

  10. Swap two variables without an intermediate variable

    In [3]: %%timeit -n 10000
       ...: a, b = 1, 2
       ...: c = a; a = b; b = c
       ...:
    10000 loops, best of 3: 172 ns per loop

    In [4]: %%timeit -n 10000
       ...: a, b = 1, 2
       ...: a, b = b, a
       ...:
    10000 loops, best of 3: 86 ns per loop

    Using a, b = b, a instead of c = a; a = b; b = c to swap a and b is about twice as fast.

  11. Use if is

    a = range(10000)
    %timeit -n 100 [i for i in a if i == True]
    %timeit -n 100 [i for i in a if i is True]
    100 loops, best of 3: 531 μs per loop
    100 loops, best of 3: 362 μs per loop

    Using if ... is True is about 1.5x faster than if ... == True here.

  12. Use cascading comparisons: x < y < z

    x, y, z = 1, 2, 3
    %timeit -n 1000000 if x < y < z: pass
    %timeit -n 1000000 if x < y and y < z: pass
    1000000 loops, best of 3: 101 ns per loop
    1000000 loops, best of 3: 121 ns per loop

    x < y < z is slightly more efficient and also more readable.

  13. while 1 is faster than while True

    def while_1():
        n = 100000
        while 1:
            n -= 1
            if n <= 0: break

    def while_true():
        n = 100000
        while True:
            n -= 1
            if n <= 0: break

    m, n = 1000000, 1000000
    %timeit -n 100 while_1()
    %timeit -n 100 while_true()
    100 loops, best of 3: 3.69 ms per loop
    100 loops, best of 3: 5.61 ms per loop

    while 1 is noticeably faster than while True because in Python 2.x True is a global name that must be looked up on each test, not a keyword.

  14. Use ** rather than pow

    %timeit -n 10000 c = pow(2, 20)
    %timeit -n 10000 c = 2 ** 20
    10000 loops, best of 3: 284 ns per loop
    10000 loops, best of 3: 16.9 ns per loop

    ** is more than 10x faster!

  15. Use cProfile, cStringIO, and cPickle, the C implementations of the same functionality (corresponding to profile, StringIO, and pickle)

    import cPickle
    import pickle
    a = range(10000)
    %timeit -n 100 x = cPickle.dumps(a)
    %timeit -n 100 x = pickle.dumps(a)
    100 loops, best of 3: 1.58 ms per loop
    100 loops, best of 3: 17 ms per loop

    The C implementation of the module is more than 10x faster!

  16. Use the best way to deserialize

    The following compares the efficiency of eval, cPickle, and json when deserializing the corresponding strings:

    import json
    import cPickle
    a = range(10000)
    s1 = str(a)
    s2 = cPickle.dumps(a)
    s3 = json.dumps(a)
    %timeit -n 100 x = eval(s1)
    %timeit -n 100 x = cPickle.loads(s2)
    %timeit -n 100 x = json.loads(s3)
    100 loops, best of 3: 16.8 ms per loop
    100 loops, best of 3: 2.02 ms per loop
    100 loops, best of 3: 798 μs per loop

    json is nearly 3x faster than cPickle and more than 20x faster than eval.

  17. Use C extensions

    There are currently several options: the native API of CPython (the most common Python implementation), ctypes, Cython, and cffi. They all let a Python program call into dynamic link libraries compiled from C. Their characteristics are:

    CPython native API: by including the Python.h header file, Python data structures can be used directly in the corresponding C code. The process is relatively cumbersome, but it has the widest applicability.

    ctypes: usually used to wrap C libraries, allowing pure Python code to call functions in a dynamic link library (a DLL on Windows or a .so file on Unix). If you want to use an existing C library in Python, ctypes is a good choice; in some benchmarks, Python 2 + ctypes is the best-performing combination. (A minimal ctypes sketch follows this list.)

    Cython: Cython is a superset of the Python language that simplifies writing C extensions. Its advantages are concise syntax and good compatibility with NumPy and other libraries that contain many C extensions. Cython is typically used to optimize a particular algorithm or process within a project. In some tests it yields a performance boost of hundreds of times.

    cffi: cffi is the ctypes equivalent in PyPy (see below) and is also compatible with CPython. It provides a way to use C libraries from Python: C declarations can be written directly in Python code, and it supports linking against existing C libraries.

    These techniques are generally applied to the modules that are performance bottlenecks in an existing project; with only minor changes to the original code, they can significantly improve the efficiency of the whole program.
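
    As a minimal ctypes sketch (my own illustration, assuming a standard C library can be located on the platform; on Windows find_library("c") may return None):

    import ctypes
    import ctypes.util

    # Load the platform C library; the exact file name differs per OS,
    # so we let ctypes.util resolve it (this may fail on some platforms).
    libc_name = ctypes.util.find_library("c")
    libc = ctypes.CDLL(libc_name)

    # Declare the C signature so ctypes converts arguments correctly:
    # size_t strlen(const char *s);
    libc.strlen.argtypes = [ctypes.c_char_p]
    libc.strlen.restype = ctypes.c_size_t

    print(libc.strlen(b"hello world"))   # 11, computed by the C library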

  18. Parallel programming

    Because of the GIL, it is difficult for Python to take full advantage of multi-core CPUs. However, the built-in multiprocessing module supports several parallel models:

    Multi-process: for CPU-bound programs, the Process, Pool, and other classes provided by multiprocessing implement parallel computation with multiple processes. However, because inter-process communication is relatively expensive, programs that need to exchange a lot of data between processes may not improve much.

    Multi-threading: for IO-bound programs, the multiprocessing.dummy module wraps threading with the multiprocessing interface, which makes multithreaded programming very easy (for example, Pool.map is simple and efficient).

    Distributed: the Managers class in multiprocessing provides a way to share data between different processes, on top of which distributed programs can be built.

    Depending on the business scenario, you can choose one of these, or a combination of several, to optimize program performance; a minimal sketch follows.
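
    A minimal multiprocessing sketch (the workload function is a made-up CPU-bound placeholder; written without the with-statement so it also runs on Python 2.x):

    from multiprocessing import Pool   # swap in multiprocessing.dummy.Pool for an IO-bound, thread-based pool

    def cpu_task(n):
        # placeholder CPU-bound workload (illustrative only)
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        pool = Pool(4)                                 # 4 worker processes
        results = pool.map(cpu_task, [10 ** 5] * 8)    # same call shape as the built-in map
        pool.close()
        pool.join()
        print(results[:2])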

  19. The ultimate weapon: PyPy

    PyPy is a Python implementation written in RPython (a restricted subset of Python); according to the benchmark data on its official website it is more than 6x faster than CPython. The reason is its just-in-time (JIT) compiler, a dynamic compiler which, unlike static compilers (such as gcc or javac), optimizes using data gathered while the program is running. For historical reasons the GIL is still present in PyPy, but the ongoing STM project is attempting to turn PyPy into a Python without the GIL.

    If a Python program contains C extensions (non-cffi), the JIT's optimization effect is greatly reduced, and it can even be slower than CPython (for example with NumPy). So in PyPy it is best to use pure Python or cffi extensions.

    As STM, NumPy support, and other projects mature, I believe PyPy will replace CPython.

  20. Use profiling tools

    Besides the timeit module (via the %timeit magic) used above in IPython, there is also cProfile. cProfile is very easy to use: run python -m cProfile filename.py, where filename.py is the file to profile; standard output then shows how many times each function was called and how long it took, which lets you find the program's performance bottlenecks and optimize them specifically.
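
    Besides the command-line form, cProfile can also be driven from code; a short sketch (the profiled function here is a made-up placeholder):

    import cProfile
    import pstats

    def work():
        # placeholder workload to profile
        return sum(i * i for i in range(100000))

    profiler = cProfile.Profile()
    profiler.enable()
    work()
    profiler.disable()

    # Print the ten most expensive entries, sorted by cumulative time.
    pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)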
