The illusion that Twisted uses multiple CPU cores
Twisted provides a "defer to thread" call model. Twisted itself is event-driven: you call something, processing is deferred, and a callback fires when it completes. All of that event-driven work, however, runs in a single thread, the event loop that reactor.run() starts in the main thread. The defer-to-thread model instead runs the call in a separate thread and fires the callback once that thread finishes; that is the difference between staying single-threaded and actually using multiple threads.
In one of my programs I needed this thread-based call to query a database. The function is twisted.internet.threads.deferToThread; see the Twisted manual for details.
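A minimal sketch of the pattern (the blocking_query function below is a placeholder standing in for the real database call, not code from the original program): the blocking work runs in a thread from the reactor's thread pool, and the callback fires back in the reactor thread.

import time
from twisted.internet import reactor
from twisted.internet.threads import deferToThread

def blocking_query(sql):
    # Placeholder for a real blocking DB-API call.
    time.sleep(1)
    return "rows for: " + sql

def on_result(rows):
    # Runs in the reactor (main) thread after the worker thread returns.
    print "got:", rows
    reactor.stop()

d = deferToThread(blocking_query, "SELECT 1")
d.addCallback(on_result)
reactor.run()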
With this programming style, CPU usage reached about 140% on a dual-CPU server (four cores in total). I was delighted, thinking this approach really did make use of multiple CPU cores.
Later I wrote a script to test the actual computation, and the result was disappointing: at any given moment only one CPU core was doing work. The threads serialized by the GIL are not pinned to a single core, though, so when the OS briefly migrates a thread to another core, two cores appear busy at the same time and the reported CPU usage climbs above 100%. That is what created the illusion of multi-core use. Of course, this is only speculation; readers who know better are welcome to leave a comment. It has been a long time since I did SMP programming in Python.
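One way to check that speculation is to sample per-core load while CPU-bound threads run. A quick sketch (my own illustration, and it assumes the third-party psutil package, which the original script does not use):

import threading
import psutil  # third-party; an assumption, not used by the original script

def burn():
    # Pure-Python busy loop; it holds the GIL while it runs.
    total = 0
    for i in xrange(50 * 1000 * 1000):
        total += i

workers = [threading.Thread(target=burn) for _ in range(2)]
for w in workers:
    w.start()

# The per-core numbers jump around as the OS migrates the threads,
# but their sum should stay close to 100% because of the GIL.
for _ in range(5):
    print psutil.cpu_percent(interval=1, percpu=True)

for w in workers:
    w.join()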
The script for testing parallel computing is as follows:
"""
The defer thread of twisted is used for parallel computing to check whether python can use multi-core CPU in this way.
"""
Import time
Import thread
From twisted. Internet import reactor, Protocol
From twisted. Python import log
From twisted. Internet. threads import defertothread
All_timelong = []
All_timelong_lock = thread. allocate_lock ()
Def calc (max ):
"" A self-added computing cannot optimize parallel computing, and the python interpreter cannot optimize it """
Start = Time. Time ()
Result = 1
For I in range (2, Max + 1 ):
Result + = I
Return result, (Time. Time ()-start)
Def callback (Infopack ):
"Callback function """
Timelong = Infopack [1]
Try:
All_timelong_lock.acquire ()
All_timelong.append (timelong)
Print "time long: %. 04f" % timelong
Finally:
All_timelong_lock.release ()
Return
Def main (threadcount ):
"Multi-thread computing """
For I in range (0, threadcount ):
D = defertothread (calc, 2000000)
D. addcallback (callback)
# Reactor. calllater (60, reactor. Stop)
Reactor. Run ()
If _ name __= = "_ main __":
Main (4)
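For comparison, running the same calculation with plain threads and no Twisted at all behaves the same way, which points at the GIL rather than at anything specific to Twisted's thread pool. A minimal sketch:

import threading
import time

def calc(max):
    result = 1
    for i in range(2, max + 1):
        result += i
    return result

def timed(label, fn):
    start = time.time()
    fn()
    print "%s: %.2f seconds" % (label, time.time() - start)

def sequential():
    calc(2000000)
    calc(2000000)

def threaded():
    threads = [threading.Thread(target=calc, args=(2000000,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

# Under the GIL the two runs take roughly the same wall-clock time.
timed("sequential", sequential)
timed("two threads", threaded)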