While optimizing our performance testing platform, we noticed that the start-up task runs far more often than the other test tasks, and a cross-comparison test of two packages (30 runs each) currently takes 30 minutes. So we decided to optimize this test flow first, aiming to cut the time down to 20 minutes.
An optimistic back-of-the-envelope estimate: starting a device should theoretically take at most 1 s, so 1 s × 2 packages × 30 runs is only 60 s; even with other overhead, 5 minutes ought to be plenty. Getting down to 20 minutes looked like half a day's work.
So I took the first step:
1. Review the code flow
(1) Review every sleep in the start-up flow
This did help somewhat: there are sleeps in the task-execution phase, and any fixed sleep multiplied by 60 executions is scary. Removing some of them brought the total down to 23 minutes.
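Beyond deleting sleeps outright, a fixed sleep that is really waiting for some readiness condition can usually be replaced by a short polling loop, so the happy path returns early instead of always paying the worst case. A minimal sketch (`wait_until` and the predicate are my own illustration, not the platform's actual code):

```python
import time

def wait_until(predicate, timeout=10.0, interval=0.1):
    """Poll `predicate` every `interval` seconds instead of sleeping a
    fixed worst-case duration; return True as soon as it holds, or
    False once `timeout` seconds have elapsed."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False
```

For example, `wait_until(lambda: device_is_booted(), timeout=5)` in place of `time.sleep(5)` (here `device_is_booted` is a hypothetical readiness check).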
For a while I couldn't think of a second step. The methods are heavily coupled and nested, and the code has to support multiple product versions, so touching one spot ripples through everything. The next idea was to instrument the suspicious methods.
2. Monitor the time spent in suspicious methods
To make monitoring easy, I added two decorators for timing:
```python
import time

def costs(fn):
    def _wrapper(*args, **kwargs):
        start = time.time()
        result = fn(*args, **kwargs)
        print("%s function cost %s seconds" % (fn.__name__, time.time() - start))
        return result  # forward the wrapped function's return value
    return _wrapper

def costs_with_info(info):
    def _wrapper(fn):
        print("Info: " + info)
        return costs(fn)
    return _wrapper
```
When a method needs monitoring, just add @costs or @costs_with_info("some information") above it:
```python
@costs
def configurequickstart(self, pkg_name):
    if quick_start_disabled:  # placeholder: the original condition was garbled
        self.logger.info("Disable Quick Start: %s" % pkg_name)
        self.disablequickstartsnapshot(pkg_name)
    else:
        self.logger.info("Quick Start is enabled: %s" % pkg_name)
```
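As a self-contained sanity check (repeating the `costs` decorator so it runs on its own; `slow_step` is just a made-up stand-in for a real method), decorating any function makes it report its wall-clock cost when called:

```python
import time

def costs(fn):
    def _wrapper(*args, **kwargs):
        start = time.time()
        result = fn(*args, **kwargs)
        print("%s function cost %s seconds" % (fn.__name__, time.time() - start))
        return result
    return _wrapper

@costs
def slow_step():
    time.sleep(0.05)  # pretend to do some work
    return "done"
```

Calling `slow_step()` prints a line like `slow_step function cost 0.05... seconds` and still returns its normal result.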
In hindsight I'd advise against this approach: it is genuinely time-consuming and the payoff is poor. I spent half a day on it and shaved off only about a minute.
Then I remembered Android's TraceView, which can report the performance cost of the entire call stack, including per-method time. Python ought to have something similar, and indeed I found cProfile, so I happily moved on to the third step.
3. Using cProfile for analysis
(1) Add profiling directly at the entry point, write the results to a result.prof file, and print the stats sorted by tottime (the time spent in each function itself, excluding sub-calls):
```python
import cProfile
import pstats

cProfile.run('main()', filename='result.prof', sort='tottime')
p = pstats.Stats('result.prof')
p.sort_stats('time').print_stats()
```
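The same stats can also be collected and inspected in-process, which is handy for checking a single suspect phase without profiling the whole run. A sketch against a toy workload (`busy` is purely illustrative):

```python
import cProfile
import io
import pstats

def busy():
    # Toy workload so the profile has something to report.
    total = 0
    for i in range(100000):
        total += i * i
    return total

profiler = cProfile.Profile()
profiler.enable()
busy()
profiler.disable()

# tottime = time spent inside a function itself, excluding sub-calls.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("tottime").print_stats(5)
report = stream.getvalue()
print(report)
```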
The stats printed in the log mix system methods with our own methods and are not very intuitive to read, so I turned to another great tool: Graphviz.
First you need to install it:

```shell
sudo apt-get install graphviz   # or: brew install graphviz on macOS
```
Then download gprof2dot and run:
```shell
python gprof2dot.py -f pstats result.prof | dot -Tpng -o result.png
```
This finally gave me a chart of how the time is spent across the start-up test. A partial view is shown below:
The chart makes it very clear how much time each function consumes, though I was shocked to find that 95.29% of the 30-minute start-up test is spent in sleep! Still, that's fine: now I know exactly which methods are sleeping and what can be optimized.
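A finding like "95% of the run is sleep" can also be confirmed cheaply, without a full profile, by temporarily wrapping `time.sleep` to tally the seconds requested. A monkey-patching sketch (my own illustration, not part of the platform's code):

```python
import time

class SleepAccountant:
    """Context manager that patches time.sleep to record the total
    number of seconds requested, restoring the original on exit."""

    def __init__(self):
        self.total = 0.0
        self._original = None

    def __enter__(self):
        self._original = time.sleep

        def counting_sleep(seconds):
            self.total += seconds
            self._original(seconds)

        time.sleep = counting_sleep
        return self

    def __exit__(self, exc_type, exc, tb):
        time.sleep = self._original
        return False
```

Wrap the suspect phase in `with SleepAccountant() as acct: ...` and compare `acct.total` against the phase's wall-clock time.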
An experience of optimizing performance test platform efficiency (Python version)