Some techniques for improving Python performance.
I. Function call optimization (reduce the span of operations, avoid memory access)
The key point of program optimization is to minimize the span of each operation, both the span of code execution time and the spatial span of data in memory.
1. Summing large data: use sum
a = range(100000)
%timeit sum(a)
best of 3: 3.15 ms per loop
%%timeit
s = 0
for i in a:
    s += i
best of 3: 6.93 ms per loop
2. Summing a few values: avoid sum
%timeit -n 1000 s = a + b + c + d + e + f + g + h + i + j + k  # direct addition is faster
1000 loops, best of 3: 571 ns per loop
%timeit -n 1000 s = sum([a, b, c, d, e, f, g, h, i, j, k])  # for a handful of values, the call to sum (and the list it needs) adds overhead
1000 loops, best of 3: 669 ns per loop
Conclusion: for large data, sum is more efficient; for a few values, direct addition is more efficient.
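The same comparison can be reproduced as a standalone script with the timeit module; the loop counts and values below are illustrative assumptions, not the article's original benchmark settings:

```python
import timeit

# Large data: the built-in sum runs its loop in C, beating a Python-level loop.
setup_big = "a = range(100000)"
t_sum_big = timeit.timeit("sum(a)", setup=setup_big, number=100)
t_loop_big = timeit.timeit("s = 0\nfor i in a:\n    s += i", setup=setup_big, number=100)
print("large data -> sum: %.4fs, loop: %.4fs" % (t_sum_big, t_loop_big))

# Small data: the function call and the list it needs dominate the cost.
setup_small = "a = b = c = d = e = 1"
t_add_small = timeit.timeit("s = a + b + c + d + e", setup=setup_small, number=100000)
t_sum_small = timeit.timeit("s = sum([a, b, c, d, e])", setup=setup_small, number=100000)
print("small data -> add: %.4fs, sum: %.4fs" % (t_add_small, t_sum_small))
```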
II. For-loop element unpacking (use the stack or registers, avoid memory access)
for lst in [(1, 2, 3), (4, 5, 6)]:  # indexing lst inside the loop adds overhead
    pass
Avoid indexing as much as possible:
for a, b, c in [(1, 2, 3), (4, 5, 6)]:  # better
    pass
Unpacking is equivalent to assigning each element directly to a name.
def force():
    lst = range(4)
    for a1 in [1, 2]:
        for a2 in lst:
            for a3 in lst:
                for b1 in lst:
                    for b2 in lst:
                        for b3 in lst:
                            for c1 in lst:
                                for c2 in lst:
                                    for c3 in lst:
                                        for d1 in lst:
                                            yield (a1, a2, a3, b1, b2, b3, c1, c2, c3, d1)
%%timeit
for t in force():
    sum([t[0], t[1], t[2], t[3], t[4], t[5], t[6], t[7], t[8], t[9]])
best of 3: 465 ms per loop
%%timeit
for a1, a2, a3, b1, b2, b3, c1, c2, c3, d1 in force():
    sum([a1, a2, a3, b1, b2, b3, c1, c2, c3, d1])
best of 3: 360 ms per loop
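A minimal self-contained sketch of the indexing-versus-unpacking difference; the data in `pairs` and the repeat counts are my choices, not from the article:

```python
import timeit

pairs = [(1, 2, 3), (4, 5, 6)] * 1000   # illustrative data

def with_index():
    total = 0
    for t in pairs:
        total += t[0] + t[1] + t[2]     # every t[i] is a separate subscript operation
    return total

def with_unpack():
    total = 0
    for a, b, c in pairs:               # the tuple is unpacked once into local names
        total += a + b + c
    return total

assert with_index() == with_unpack() == 1000 * (6 + 15)
print("index:  %.4fs" % timeit.timeit(with_index, number=200))
print("unpack: %.4fs" % timeit.timeit(with_unpack, number=200))
```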
III. Generator optimization (replace computation with a lookup table)
def force(start, end):  # from a password brute-force program
    for i in range(start, end):
        now = i
        sublst = []
        for j in range(10):
            sublst.append(i % 10)  # division and modulo are expensive, costlier than multiplication
            i //= 10
        sublst.reverse()
        yield (tuple(sublst), now)
def force():  # better: enumerate the digit tuples directly instead of computing them
    lst = range(5)
    for a1 in [1]:
        for a2 in lst:
            for a3 in lst:
                for b1 in lst:
                    for b2 in lst:
                        for b3 in lst:
                            for c1 in lst:
                                for c2 in lst:
                                    for c3 in lst:
                                        for d1 in lst:
                                            yield (a1, a2, a3, b1, b2, b3, c1, c2, c3, d1)
r0 = [1, 2]  # for readability and flexibility
r1 = r2 = r3 = r4 = r5 = r6 = r7 = r8 = r9 = range(10)
force = ((a0, a1, a2, a3, a4, a5, a6, a7, a8, a9)
         for a0 in r0 for a1 in r1 for a2 in r2 for a3 in r3 for a4 in r4
         for a5 in r5 for a6 in r6 for a7 in r7 for a8 in r8 for a9 in r9)
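As an aside, the standard library's itertools.product (not used in the article) builds the same kind of digit-tuple stream without hand-written nesting; a three-position sketch:

```python
from itertools import product

# product(r0, digits, digits) yields the same tuples the nested loops would,
# in the same order; the three-position size here is only for a quick demo.
r0 = [1, 2]
digits = range(10)
combos = list(product(r0, digits, digits))

print(combos[0])     # (1, 0, 0)
print(len(combos))   # 2 * 10 * 10 = 200
```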
IV. Power operation optimization (pow(x, y, z))
from random import randint

def isprime(n):  # find_kq, which factors n - 1 as 2**k * q, is assumed defined elsewhere
    if (n & 1) == 0:
        return False
    k, q = find_kq(n)
    a = randint(1, n - 1)
    if pow(a, q, n) == 1:  # better than a ** q % n
        return True
    for j in range(k):
        if pow(a, pow(2, j) * q, n) == n - 1:  # i.e. a ** ((2 ** j) * q) % n
            return True
    return False
Conclusion: pow(x, y, z) is faster than x ** y % z.
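A self-contained check of this conclusion; the operand values are arbitrary illustrations:

```python
import timeit

# Three-argument pow reduces intermediates modulo z at every multiplication,
# so the full x ** y integer is never built.
x, y, z = 1234567, 4567, 987654321

assert pow(x, y, z) == x ** y % z   # same result, very different cost

ns = {"x": x, "y": y, "z": z}
t_pow3 = timeit.timeit("pow(x, y, z)", globals=ns, number=100)
t_naive = timeit.timeit("x ** y % z", globals=ns, number=100)
print("pow(x, y, z): %.4fs   x ** y %% z: %.4fs" % (t_pow3, t_naive))
```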
V. Division operation optimization (divmod)
In [1]: from random import getrandbits
In [2]: x = getrandbits(4096)
In [3]: y = getrandbits(2048)
In [4]: %timeit -n 10000 q, r = divmod(x, y)
10000 loops, best of 3: 10.7 us per loop
In [5]: %timeit -n 10000 q, r = x // y, x % y
10000 loops, best of 3: 21.2 us per loop
Conclusion: divmod is faster than separate // and %.
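divmod's single call can be verified against the separate operators; the `| 1` guard below is my addition to rule out a zero divisor:

```python
from random import getrandbits

x = getrandbits(4096)
y = getrandbits(2048) | 1    # force y odd so it is never zero

q, r = divmod(x, y)          # one C-level call computes quotient and remainder together
assert (q, r) == (x // y, x % y)
assert q * y + r == x        # the defining identity of integer division
print("quotient bits:", q.bit_length())
```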
VI. Optimize the algorithm's time complexity
The time complexity of the algorithm has the greatest impact on the program's execution efficiency. In Python you can often reduce it by choosing the appropriate data structure: for example, finding an element in a list is O(n), while finding it in a set is O(1). Different scenarios call for different optimizations; in general the toolbox includes divide and conquer, branch and bound, greedy algorithms, and dynamic programming.
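As a minimal illustration of a complexity-level optimization, memoization turns the classic exponential Fibonacci recursion into a linear dynamic-programming version (this example is mine, not the article's):

```python
from functools import lru_cache

def fib_naive(n):                 # O(2**n): recomputes the same subproblems over and over
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_dp(n):                    # O(n): dynamic programming via memoization
    return n if n < 2 else fib_dp(n - 1) + fib_dp(n - 2)

assert fib_naive(20) == fib_dp(20) == 6765
print(fib_dp(200))                # instant; fib_naive(200) would never finish
```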
VII. Use copy and deepcopy appropriately
For objects such as dicts and lists, plain assignment only copies a reference. When the whole object must be duplicated, use copy.copy or copy.deepcopy from the copy module; the difference is that deepcopy copies recursively, and their efficiency differs:
In [1]: import copy
In [2]: %timeit copy.copy(a)
best of 3: 606 ns per loop
In [3]: %timeit copy.deepcopy(a)
best of 3: 1.17 us per loop
The -n after timeit specifies the number of runs, and the line after each timeit is its output (the same applies below). The figures show that deepcopy is roughly twice as slow here.
One gotcha with reference copies:
>>> lists = [[]] * 3
>>> lists
[[], [], []]
>>> lists[0].append (3)
>>> lists
[[3], [3], [3]]
What happens is that [[]] is a one-element list containing an empty list, so all three elements of [[]] * 3 point to that same empty list. Modifying any element of lists therefore modifies the shared inner list.
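A sketch of the fix for this gotcha: build each inner list separately with a comprehension, so the three elements are distinct objects instead of three references to one:

```python
shared = [[]] * 3                      # three references to one inner list
independent = [[] for _ in range(3)]   # three distinct inner lists

shared[0].append(3)
independent[0].append(3)

print(shared)        # [[3], [3], [3]] -- one list seen three times
print(independent)   # [[3], [], []]  -- only the first list changed
```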
VIII. Use dict or set to find elements
Python dicts and sets are implemented with hash tables (similar to unordered_map in the C++ standard library), so finding an element takes O(1) time.
In [1]: r = range(10**7)
In [2]: s = set(r)  # occupies 588 MB of memory
In [3]: d = dict((i, 1) for i in r)  # occupies 716 MB of memory
In [4]: %timeit -n 10000 (10**7) - 1 in r
10000 loops, best of 3: 291 ns per loop
In [5]: %timeit -n 10000 (10**7) - 1 in s
10000 loops, best of 3: 121 ns per loop
In [6]: %timeit -n 10000 (10**7) - 1 in d
10000 loops, best of 3: 111 ns per loop
Conclusion: the set has the smallest memory footprint, and the dict has the shortest lookup time.
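A scaled-down sketch of the same lookup comparison; it deliberately uses an explicit list, because in Python 3 a range object answers membership tests in O(1) and would hide the O(n) scan:

```python
import timeit

n = 10**5                         # smaller than the article's 10**7 to keep this quick
lst = list(range(n))
s = set(lst)
d = dict.fromkeys(lst, 1)

target = n - 1                    # worst case for the list: the last element
t_list = timeit.timeit(lambda: target in lst, number=100)
t_set = timeit.timeit(lambda: target in s, number=100)
t_dict = timeit.timeit(lambda: target in d, number=100)
print("list: %.5fs  set: %.6fs  dict: %.6fs" % (t_list, t_set, t_dict))
```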
IX. Use generators and yield appropriately (to save memory)
In [1]: %timeit a = (i for i in range(10**7))  # generators are usually more efficient to traverse
best of 3: 933 ns per loop
In [2]: %timeit a = [i for i in range(10**7)]
best of 3: 916 ms per loop
In [3]: %timeit for x in (i for i in range(10**7)): pass
best of 3: 749 ms per loop
In [4]: %timeit for x in [i for i in range(10**7)]: pass
best of 3: 1.05 s per loop
Conclusion: prefer generators for traversal.
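The memory difference can be made visible with sys.getsizeof; the element count here is an assumption chosen for a quick run:

```python
import sys

n = 10**6
gen = (i for i in range(n))       # lazy: a small fixed-size object
lst = [i for i in range(n)]       # eager: all n ints are stored up front

print(sys.getsizeof(gen))         # on the order of 100 bytes, independent of n
print(sys.getsizeof(lst))         # several megabytes

assert sys.getsizeof(gen) < sys.getsizeof(lst)
assert sum(gen) == sum(lst) == n * (n - 1) // 2   # same values when consumed
```

Note that getsizeof only measures the container itself, which is exactly why the generator's figure stays constant: it holds pending state, not elements.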
These are some of the ways to improve Python performance; more will be added later as needed.