Coroutines
Coroutines are also known as micro-threads or fibers; the English name is coroutine. In one sentence: a coroutine is a lightweight thread that runs in user space.
A coroutine has its own register context and stack. When the scheduler switches away from it, the register context and stack are saved elsewhere; when it switches back, the saved register context and stack are restored. So:
A coroutine retains the state of its last invocation (that is, a particular combination of all its local state). Each time it is re-entered, it resumes from the state of the previous call; in other words, it picks up at the position in the logical flow where it last left off.
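A minimal sketch of this state retention, using a plain Python generator (the function name counter is my own illustration):

```python
def counter():
    n = 0                # local state lives in the coroutine's own frame
    while True:
        n += 1
        yield n          # leave here; on re-entry, resume on the next line

c = counter()
print(next(c))  # 1
print(next(c))  # 2 -- resumed exactly where it left off, n preserved
```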
Benefits of coroutines:
No thread context-switch overhead.
No locking or synchronization overhead for atomic operations.
An "atomic operation" is one that cannot be interrupted by the thread scheduling mechanism: once it starts, it runs to completion, with no context switch to another thread in between. An atomic operation may consist of one step or several, but their order cannot be disrupted and they cannot be partially executed; the operation is indivisible as a whole, which is the essence of atomicity.
Easy switching of control flow, which simplifies the programming model.
High concurrency + high scalability + low cost: a single CPU supporting tens of thousands of coroutines is not a problem, so coroutines are well suited to high-concurrency workloads.
Disadvantages:
Cannot take advantage of multi-core resources: a coroutine is by nature single-threaded, so it cannot use multiple cores of a CPU at once; to run on multiple CPUs, coroutines must be combined with processes. For most everyday applications this does not matter, except for CPU-intensive ones.
A blocking operation (such as IO) blocks the entire program.
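The blocking drawback is easy to demonstrate: with two generator-based coroutines scheduled round-robin in one thread (a sketch of my own; the names are illustrative), a time.sleep in one of them stalls the other as well.

```python
import time

def task(name, delay):
    for i in range(2):
        print(name, i)
        time.sleep(delay)   # blocking call: the whole thread stops here
        yield

# round-robin "scheduler" in a single thread
slow, fast = task("slow", 0.2), task("fast", 0.0)
start = time.time()
for _ in range(2):
    next(slow)
    next(fast)
elapsed = time.time() - start
print("elapsed: %.2fs" % elapsed)   # ~0.4s: "fast" had to wait behind "slow"
```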
Example of using yield to implement a coroutine:

import time

def consumer(name):
    print("--->starting eating baozi ...")
    while True:
        new_baozi = yield
        print("[%s] is eating Baozi %s" % (name, new_baozi))
        # time.sleep(1)

def producer():
    con.__next__()       # prime both consumers, advancing them to the yield
    con2.__next__()
    n = 0
    while n < 5:
        n += 1
        con.send(n)      # resume the consumer: its yield evaluates to n
        con2.send(n)
        print("\033[32;1m[producer]\033[0m is making Baozi %s" % n)

if __name__ == '__main__':
    con = consumer("C1")
    con2 = consumer("C2")
    p = producer()
The results of the program execution are:
--->starting eating baozi ...
--->starting eating baozi ...
[C1] is eating Baozi 1
[C2] is eating Baozi 1
[Producer] is making Baozi 1
[C1] is eating Baozi 2
[C2] is eating Baozi 2
[Producer] is making Baozi 2
[C1] is eating Baozi 3
[C2] is eating Baozi 3
[Producer] is making Baozi 3
[C1] is eating Baozi 4
[C2] is eating Baozi 4
[Producer] is making Baozi 4
[C1] is eating Baozi 5
[C2] is eating Baozi 5
[Producer] is making Baozi 5
The problem: this appears to achieve concurrency, but only because the producer's code does not spend any time on real work, so nothing ever blocks. If we put a time.sleep(1) in the producer, the whole program would slow down. Now look at the following functions:

import time

def home():
    print("in func 1")
    time.sleep(5)
    print("home exec done")

def bbs():
    print("in func 2")
    time.sleep(2)

def login():
    print("in func 3")
Suppose, as in nginx, each request is handled by one function, but everything runs in a single thread. If three requests (home, bbs, login) come in at the same time, a single thread must execute them serially; to create the feel of concurrency, we have to switch between the coroutines. But when should we switch? If a handler only does a print, there is no blocking and nothing gets stuck, so there is no need to switch away. But if home takes 5 seconds, a single thread stays serial even with coroutines. To get a concurrency effect, the switch should happen at time.sleep(5): switch from home to the bbs request; if bbs also sleeps, switch on to login. That is the switching strategy.

So how can a single thread give the program above a concurrent effect? In one sentence: switch on IO. Coroutines can handle large amounts of concurrency precisely because they squeeze out the IO time; whenever an operation would block on IO, switch to another coroutine, so the CPU in this program is always computing, which is why it is fast. That settles when to switch away, but when do we switch back? In other words, how does the program automatically detect that an IO operation has completed? Let's look at the next piece of knowledge.
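To make "switch on IO" concrete, here is a toy sketch (entirely my own construction, not a real nginx or event-loop implementation): each handler yields the duration of the IO it is about to do, and a tiny scheduler resumes it once that deadline passes, so the three handlers overlap their waits instead of running serially.

```python
import time

def home():
    print("home: start")
    yield 0.5            # "about to block for 0.5s of IO -- switch away"
    print("home: done")

def bbs():
    print("bbs: start")
    yield 0.2
    print("bbs: done")

def login():
    print("login: start")
    yield 0.1
    print("login: done")

def run(tasks):
    # toy event loop: start every task, record when its "IO" would finish,
    # then resume each one after its deadline -- switching on IO
    waiting = []
    for t in tasks:
        delay = next(t)                       # run until the first IO point
        waiting.append((time.time() + delay, t))
    for deadline, t in sorted(waiting, key=lambda w: w[0]):
        time.sleep(max(0.0, deadline - time.time()))
        try:
            next(t)                           # IO "finished": switch back
        except StopIteration:
            pass

start = time.time()
run([home(), bbs(), login()])
total = time.time() - start
print("total: %.1fs" % total)   # ~0.5s instead of 0.8s serial
```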
Greenlet
greenlet is a coroutine module implemented in C. Compared with Python's yield, it is a well-encapsulated coroutine: you can switch between arbitrary functions at will, without declaring any function as a generator.

from greenlet import greenlet

def test1():
    print(12)
    gr2.switch()  # switch to gr2
    print(34)
    gr2.switch()  # switch to gr2

def test2():
    print(56)
    gr1.switch()  # switch to gr1
    print(78)

gr1 = greenlet(test1)  # create a greenlet
gr2 = greenlet(test2)
gr1.switch()           # switch to gr1
After the execution of the program, the result is:
12
56
34
78
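One more greenlet detail worth knowing: switch() can also carry a value, which becomes the return value of the switch() call the target greenlet was suspended in. A small sketch (the names gr_ping and gr_pong are my own):

```python
from greenlet import greenlet

out = []

def ping(msg):
    out.append(msg)
    msg = gr_pong.switch("to pong")   # control and a value go to pong
    out.append(msg)                   # resumes here with pong's value

def pong(msg):
    out.append(msg)
    gr_ping.switch("back to ping")    # resumes ping's pending switch call

gr_ping = greenlet(ping)
gr_pong = greenlet(pong)
gr_ping.switch("start")
print(out)  # ['start', 'to pong', 'back to ping']
```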
Big Talk Python: Coroutines