I saw this article on Weibo and tried to translate it. There are a few places I did not fully understand, so I pasted the original wording directly; I hope readers who do understand them will point them out.
# Fast Python Performance Tuning #
1. Use %timeit (per line) and %prun (cProfile) in the IPython interactive shell.
Profile your code while working on it, and try to find the bottleneck. This does not contradict the fact that premature optimization is the root of all evil; it is meant as a first, lightweight pass of optimization, not a heavy optimization sequence.
For profiling Python code, you should read this: http://www.huyng.com/posts/python-performance-analysis/
Another interesting library, line_profiler, does line-by-line profiling: https://bitbucket.org/robertkern/line_profiler
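Outside IPython, the same measurement can be done with the standard cProfile module. A minimal sketch (slow_sum is just a made-up example workload):

```python
import cProfile


def slow_sum(n):
    # Deliberately naive loop so the profiler has something to report.
    total = 0
    for i in range(n):
        total += i * i
    return total


if __name__ == "__main__":
    # Prints call counts and cumulative times per function, like %prun does.
    cProfile.run("slow_sum(10 ** 6)")
```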
2. Reduce the number of function calls
If you need to operate on a list, passing a list to a function is faster than calling a function once for each element.
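For example (square_each and square_all are hypothetical helpers, shown only to illustrate the call overhead):

```python
def square_each(item):
    return item * item


def square_all(items):
    # One call that loops internally, instead of one call per element.
    return [item * item for item in items]


data = list(range(1000))

# Slower: the function-call overhead is paid 1000 times.
result_a = [square_each(x) for x in data]

# Faster: the overhead is paid once, for the single call.
result_b = square_all(data)
```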
3. Use xrange instead of range
xrange is a C implementation of range that produces the numbers lazily, so it uses memory more efficiently.
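A minimal sketch, assuming Python 2 (in Python 3, range already behaves like xrange):

```python
# Python 2: range() builds the entire list in memory up front.
total = 0
for i in range(10 ** 6):
    total += i

# Python 2: xrange() yields one number at a time, so memory use stays flat.
total = 0
for i in xrange(10 ** 6):
    total += i
```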
4. For large amounts of data, NumPy is faster than the standard data structures
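For instance, summing a million numbers with a NumPy array keeps both the data and the loop in optimized C code:

```python
import numpy as np

# Plain Python list: sum() has to unbox each Python int object one by one.
values = list(range(10 ** 6))
py_total = sum(values)

# NumPy array: the same sum runs in C over a compact, typed buffer.
arr = np.arange(10 ** 6)
np_total = arr.sum()
```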
5. "". Join (string) is better than + or + =
6. while 1 is faster than while True (in Python 2, True is an ordinary name that has to be looked up on every iteration, whereas 1 is a constant; in Python 3 the two compile to the same thing)
7. List comprehension > for loop > while loop
A list comprehension is faster than a for loop, and a while loop is the slowest because it maintains an external counter.
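The three variants below compute the same result; the comprehension avoids the repeated lookup and call of list.append and runs in a specialized bytecode path:

```python
data = list(range(10 ** 5))

# Fastest: list comprehension.
squares = [x * x for x in data]

# Slower: explicit for loop with a repeated append call.
squares = []
for x in data:
    squares.append(x * x)

# Slowest: while loop with a manually maintained counter.
squares = []
i = 0
n = len(data)
while i < n:
    squares.append(data[i] * data[i])
    i += 1
```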
8. Use cProfile, cStringIO and cPickle
Always use the available C version of the module.
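A common Python 2 idiom is to try the C module first and fall back to the pure-Python one (in Python 3 these modules were merged and the C accelerators are picked automatically):

```python
# Prefer the C implementation; fall back to the pure-Python module if it is missing.
try:
    import cPickle as pickle   # Python 2 C extension
except ImportError:
    import pickle              # Python 3 uses its C accelerator automatically

payload = pickle.dumps({"a": 1, "b": 2})
print(pickle.loads(payload))
```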
9. Use local variables. Local variables are faster than global variables, builtins and attribute lookups.
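A classic trick is to bind a frequently used attribute or function to a local name before a tight loop (distances_global and distances_local are hypothetical names used only for the comparison):

```python
import math


def distances_global(points):
    # math.sqrt is resolved through the global and module namespaces on every iteration.
    return [math.sqrt(x * x + y * y) for x, y in points]


def distances_local(points):
    # Bind the function to a local name once; local lookups are a fast array access.
    sqrt = math.sqrt
    return [sqrt(x * x + y * y) for x, y in points]


points = [(3.0, 4.0)] * 1000
assert distances_global(points) == distances_local(points)
```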
10. Where both list and iterator versions exist, prefer the iterator: iterators are memory efficient and scalable. Use itertools.
Create generators and use yield as much as possible. They are faster compared to building the whole list up front.
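A minimal sketch of the same computation done with a list and with a generator, plus itertools consuming the generator lazily:

```python
import itertools


def squares_list(n):
    # Builds the whole result in memory before returning it.
    return [i * i for i in range(n)]


def squares_gen(n):
    # Yields one value at a time; memory use stays constant.
    for i in range(n):
        yield i * i


# Both can be consumed the same way, but the generator never holds 10**6 items at once.
total = sum(squares_gen(10 ** 6))

# itertools works on any iterator, e.g. take just the first five squares lazily.
first_five = list(itertools.islice(squares_gen(10 ** 6), 5))
```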
11. Wherever possible, use map, reduce and filter instead of loops.
12. A dict or set is better than a list/tuple for 'a in b' membership checks.
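Both points in one short sketch: filter/map drive the loop from C, and a set turns the linear membership scan into a hash lookup:

```python
data = list(range(10 ** 5))

# Item 11: filter/map drive the loop from C (the lambda body is still Python).
evens = filter(lambda x: x % 2 == 0, data)
doubled = map(lambda x: x * 2, data)

# Item 12: membership tests are O(n) on a list but O(1) on average on a set/dict.
blacklist_list = list(range(10000))
blacklist_set = set(blacklist_list)

in_list = 9999 in blacklist_list   # scans the list element by element
in_set = 9999 in blacklist_set     # single hash lookup
```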
13. For large data, use immutable types as much as possible; they are faster: tuple > list
14. Insertion into a list is O(n).
15. If you need to operate at both ends of a list, use a double-ended queue (collections.deque)
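A deque gives O(1) appends and pops at both ends, whereas list.insert(0, x) and list.pop(0) are O(n):

```python
from collections import deque

queue = deque()

# Appending and popping at either end of a deque is O(1).
queue.append("right")
queue.appendleft("left")
queue.pop()        # removes "right"
queue.popleft()    # removes "left"

# The equivalent list operations at the front, insert(0, x) and pop(0), are O(n).
```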
16. Use del to delete objects when you are done with them
Python does this by itself, but you can make sure of it with the gc module, or by writing an __del__ magic method, or, the simplest way, by calling del after use.
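A minimal sketch of these options (the Resource class is a made-up example):

```python
import gc


class Resource(object):
    def __del__(self):
        # Runs when the object is garbage collected.
        print("resource released")


r = Resource()
del r            # the simplest way: drop the reference as soon as you are done

gc.collect()     # force a collection pass to confirm nothing is lingering
```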
17. Use time.clock()
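Presumably this refers to timing code by hand; a minimal sketch (note that time.clock() measures CPU time on Unix and wall-clock time on Windows, and was removed in Python 3.8, where time.perf_counter() is the replacement):

```python
import time

start = time.clock()   # CPU time on Unix, wall-clock time on Windows
total = sum(i * i for i in range(10 ** 6))
elapsed = time.clock() - start
print("took %.4f seconds" % elapsed)
```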
18. GIL (http://wiki.python.org/moin/GlobalInterpreterLock): the GIL is a demon.
The GIL allows only one Python native thread to run per process, preventing CPU-level parallelism. Try using ctypes and native C libraries to overcome this. Even when you reach the end of what can be optimized in Python, there is always the option of rewriting terribly slow functions in native C and using them through Python C bindings. Other libraries like gevent are also attacking the problem, and are successful to some extent.
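A minimal ctypes sketch, assuming a Unix-like system where the C math library can be found; ctypes releases the GIL around the foreign call, so several threads can run such calls in parallel:

```python
import ctypes
import ctypes.util

# Load the C math library and describe the signature of its sqrt function.
libm = ctypes.CDLL(ctypes.util.find_library("m"))
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

print(libm.sqrt(2.0))   # 1.4142135623730951
```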
TL;DR: while you write code, give one round of thought to the data structures, the iteration constructs and the builtins, and create C extensions to get around the GIL if needed.