Before moving to Python, the team wrote projects in JS using a component-based structure, and it worked well. Recent new projects are in Python and keep the same component structure. I had never looked into the performance of function calls there, so today, on a whim, I wrote some test code.
First, define a few classes:
class A(object):
    def test(self):
        pass

class B(object):
    def __init__(self):
        self.a = A()
    def test(self):
        pass

class C(object):
    def __init__(self):
        self.b = B()
    def test(self):
        pass

class D(object):
    def __init__(self):
        self.c = C()
    def test(self):
        pass
Comparison 1:
Call the method on the instance directly, versus caching the bound method in a variable and calling through the variable.
import timeit

n = 10000000
a = A()

t_direct = timeit.Timer('a.test()', 'from __main__ import a').timeit(n)
print 'direct call func:', t_direct

cache = a.test
t_cache = timeit.Timer('cache()', 'from __main__ import cache').timeit(n)
print 'cache func call:', t_cache

print 'performance:', (t_cache / t_direct)
Averaging a few runs:
direct call func: 1.14136314392
cache func call: 0.745277881622
performance: 0.652971743123
Caching the bound method and calling through the variable improves performance by roughly 35%.
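For reference, the same comparison can be reproduced on Python 3. A minimal sketch (the absolute numbers and the ratio will differ by interpreter version, and recent CPython releases optimize the method-call path, so the gap may be smaller than the figures above):

```python
import timeit

class A(object):
    def test(self):
        pass

a = A()
n = 1000000  # fewer iterations than the original, just to keep it quick

# Call through attribute lookup each time.
t_direct = timeit.timeit('a.test()', globals={'a': a}, number=n)

# Bind the method once, then reuse the bound-method object.
cache = a.test
t_cache = timeit.timeit('cache()', globals={'cache': cache}, number=n)

print('direct:', t_direct)
print('cached:', t_cache)
print('ratio :', t_cache / t_direct)
```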
When a method is called through attribute lookup, Python creates a temporary bound-method object. A direct call like a.test() roughly performs:

1: temp = a.test
2: temp()
3: del temp
So for frequently called methods this adds up. Memory churn may also be an issue: the stream of temporary objects presumably makes the GC run more often, though I have not measured this.
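The temporary object is easy to observe: every attribute access on the instance builds a fresh bound-method object. A minimal Python 3 sketch (note that modern CPython can skip creating the temporary for a direct call like a.test(), but an explicit access like m = a.test still builds one):

```python
class A(object):
    def test(self):
        pass

a = A()

# Each attribute access builds a new bound-method object.
m1 = a.test
m2 = a.test

print(m1 is m2)   # False: two distinct temporary objects
print(m1 == m2)   # True: both wrap the same function and the same instance
```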
Comparison 2:
Calling a method through a chain of member attributes is slower than calling it directly on the object.
d = D()

t0 = timeit.Timer('d.test()', 'from __main__ import d').timeit(n)
print '0 level:', t0
t1 = timeit.Timer('d.c.test()', 'from __main__ import d').timeit(n)
print '1 level:', t1, ':', (t1 / t0) * 100
t2 = timeit.Timer('d.c.b.test()', 'from __main__ import d').timeit(n)
print '2 level:', t2, ':', (t2 / t1) * 100, (t2 / t0) * 100
t3 = timeit.Timer('d.c.b.a.test()', 'from __main__ import d').timeit(n)
print '3 level:', t3, ':', (t3 / t2) * 100, (t3 / t0) * 100
Averaging a few runs:
0 level: 1.26769399643
1 level: 1.50338602066 : 118.592185882
2 level: 1.74297595024 : 115.936687337  137.491851752
3 level: 1.87865877151 : 107.784549251  148.194972667
Basically, each additional level in the attribute chain adds roughly 8% to 18% to the cost of the call.
I have not looked into the details yet, and I do not have JS test data at hand, so I cannot say whether the JS projects we wrote back then had the same performance problem.
Some earlier projects had a structure where the code was written across several files, but at runtime the attributes and functions defined in those files were all aggregated into one giant class. I never understood the point and it felt bloated; now my guess is that it was meant to reduce the depth of attribute lookups on function calls and improve performance.
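A hypothetical sketch of that aggregation pattern (the class and method names here are made up for illustration): methods defined in separate helper classes are copied onto one flat class, so callers write obj.move() instead of going through a nested component like obj.movement.move().

```python
# Hypothetical helper classes, each defined in its own file in practice.
class MoveLogic(object):
    def move(self):
        return 'moving'

class AttackLogic(object):
    def attack(self):
        return 'attacking'

class Player(object):
    pass

# Aggregate the helpers' methods onto Player. In Python, methods are just
# functions stored in the class dict, so copying them over works for
# instances of Player as well.
for helper in (MoveLogic, AttackLogic):
    for name, attr in vars(helper).items():
        if callable(attr) and not name.startswith('__'):
            setattr(Player, name, attr)

p = Player()
print(p.move(), p.attack())  # one-level lookups, no nested components
```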
Notes on Python function call performance