Using xrange
When we need to loop a fixed number of times, we usually reach for a for loop with the range function, for example:
    for i in range(10):
        print i
Here range(10) generates a list of length 10 holding the integers 0 through 9, so the for loop is actually iterating over the elements of that list.
If the number of loops is very large, range generates a correspondingly huge list in memory, which degrades the program's performance.
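We can see this memory cost directly. A minimal sketch: note that in Python 3 xrange was removed and range itself became lazy, so to mimic Python 2's eager range we materialize the list explicitly.

```python
import sys

n = 1000000
eager = list(range(n))   # like Python 2 range(n): all n entries exist at once
lazy = range(n)          # like Python 2 xrange(n): a small lazy object

print(sys.getsizeof(eager))  # several megabytes for the list alone
print(sys.getsizeof(lazy))   # a small constant size, independent of n
```

The sizes reported will vary by interpreter, but the eager list grows linearly with n while the lazy object does not.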
The solution is to use xrange, whose basic usage is the same as range:
    for i in xrange(10):
        print i
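Despite being lazy, the object returned still supports len() and indexing like a list. Since xrange only exists in Python 2, the small demonstration below uses Python 3's range, which inherited this lazy behavior (an illustration, not part of the benchmark):

```python
r = range(10)   # Python 3 range, equivalent in spirit to Python 2 xrange

print(len(r))   # 10 - the length is known without building a list
print(r[3])     # 3  - indexing is computed arithmetically
print(list(r))  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] - values appear only on demand
```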
But how big is the difference in performance?
Performance evaluation
We use the following program to run a test:
    from time import time
    import sys

    def count_time():
        def tmp(func):
            def wrapped(*args, **kwargs):
                begin_time = time()
                result = func(*args, **kwargs)
                end_time = time()
                cost_time = end_time - begin_time
                print '%s called cost time: %s ms' % (func.__name__, float(cost_time) * 1000)
                return result
            return wrapped
        return tmp

    @count_time()
    def test1(length):
        for i in range(length):
            pass

    @count_time()
    def test2(length):
        for i in xrange(length):
            pass

    if __name__ == '__main__':
        length = int(sys.argv[1])
        test1(length)
        test2(length)
In the above code, count_time is a decorator used to measure how long a function takes to run.
Now let's run the test:
    $ python 10.py 100000
    test1 called cost time: 13.8590335846 ms
    test2 called cost time: 3.76796722412 ms
    $ python 10.py 100000
    test1 called cost time: 16.725063324 ms
    test2 called cost time: 3.08418273926 ms
    $ python 10.py 200000
    test1 called cost time: 34.875869751 ms
    test2 called cost time: 7.85899162292 ms
    $ python 10.py 500000
    test1 called cost time: 41.6638851166 ms
    test2 called cost time: 17.1940326691 ms
    $ python 10.py 500000
    test1 called cost time: 59.8731040955 ms
    test2 called cost time: 14.0538215637 ms
    $ python 10.py 500000
    test1 called cost time: 94.1109657288 ms
    test2 called cost time: 8.5780620575 ms
    $ python 10.py 500000
    test1 called cost time: 61.615228653 ms
    test2 called cost time: 7.21502304077 ms
The results are surprising: the gap between the two is very clear, with the largest difference exceeding tenfold.
Let's try a few smaller values:
    $ python 10.py 10
    test1 called cost time: 0.00596046447754 ms
    test2 called cost time: 0.0109672546387 ms
    $ python 10.py 20
    test1 called cost time: 0.00619888305664 ms
    test2 called cost time: 0.159025192261 ms
    $ python 10.py 50
    test1 called cost time: 0.00786781311035 ms
    test2 called cost time: 0.00405311584473 ms
    $ python 10.py 100
    test1 called cost time: 0.00786781311035 ms
    test2 called cost time: 0.00309944152832 ms
Here range's performance is not bad at all; at first it is even slightly faster than xrange.
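For micro-benchmarks like this, the standard timeit module gives more reliable numbers than hand-rolled timing. A sketch using Python 3, where an explicit list stands in for Python 2's eager range (exact timings will vary by machine):

```python
import timeit

# Time an empty loop over an eagerly built list vs. a lazy range.
eager = timeit.timeit('for i in list(range(100000)): pass', number=20)
lazy = timeit.timeit('for i in range(100000): pass', number=20)

print('eager: %f s, lazy: %f s' % (eager, lazy))
```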
We can conclude that when n is small, range is fine, but once n exceeds a certain size, we should consider using xrange.
But what causes this performance gap? We analyze it below.
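As a preview of that analysis: xrange hands out values one at a time instead of building a list up front. A toy, generator-based stand-in (a hypothetical reimplementation for illustration, not the real C-level one) makes this laziness concrete:

```python
def my_xrange(stop):
    """A toy, generator-based stand-in for xrange (illustration only)."""
    i = 0
    while i < stop:
        yield i   # hand out one value, then pause: no list is ever allocated
        i += 1

print(list(my_xrange(5)))  # [0, 1, 2, 3, 4]
```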
Performance comparison: from range and xrange to the yield keyword (Part 1)