Coroutines, also known as micro-threads or fibers (English name: Coroutine), are lightweight, user-space "threads".
"User-space" means a coroutine is scheduled by the user program rather than by the operating system: the CPU and kernel know nothing about it, and it runs inside an ordinary thread.
A coroutine has its own register context and stack. When a coroutine switches away, its register context and stack are saved elsewhere; when it is switched back, the previously saved register context and stack are restored.
A coroutine can therefore retain the state of its last invocation (that is, a particular combination of all its local state). Each re-entry is equivalent to resuming the previous call, continuing from the point in the logical flow where it last left off.
By contrast, when the operating system switches threads, it must save and restore the whole thread's register context and stack through the kernel, which is far more expensive.
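The "resume where it left off" behavior is easy to see with a plain Python generator (a minimal sketch; the function name is illustrative):

```python
def counter():
    # A generator-based coroutine: n is local state that survives suspension.
    n = 0
    while True:
        n += 1
        yield n  # suspend here; execution resumes at this exact point on the next next()/send()

c = counter()
print(next(c))  # 1
print(next(c))  # 2 -- resumed exactly where it left off, with n preserved
```

Each call to `next()` re-enters the generator at the `yield`, with all local variables intact, which is precisely the state-retention property described above.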
A standard definition of a coroutine — something qualifies as a coroutine only if it satisfies all of the following:
1. Concurrency is implemented within a single thread
2. Shared data can be modified without locks
3. The user program itself holds the context stacks of multiple control flows
4. When one coroutine encounters an IO operation, control automatically switches to another coroutine
Benefits of coroutines:
No thread context-switch overhead
No locking or synchronization overhead for atomic operations
(An atomic operation is one that cannot be interrupted by the thread scheduler: once started, it runs to completion.)
Easy switching of control flow and a simplified programming model
High concurrency + high scalability + low cost: a single CPU can support tens of thousands of coroutines, which is well suited to high-concurrency workloads.
Disadvantages of coroutines:
Cannot take advantage of multi-core resources:
A coroutine is in essence a single thread, so by itself it cannot use multiple cores of a CPU at the same time;
coroutines must be combined with processes in order to run on multiple CPUs.
A blocking operation (such as IO) blocks the entire program.
An example of implementing a coroutine with yield:

#!/usr/bin/python
# author: sean
import time

def consumer(name):
    print("---> Start eating baozi...")
    while True:
        new_baozi = yield  # suspend until the producer send()s a value
        print("[%s] is eating baozi %s" % (name, new_baozi))
        # time.sleep(2)

def producer():
    r = tom.__next__()    # prime both generators up to their first yield
    r = jerry.__next__()
    n = 0
    while n < 5:
        n += 1
        tom.send(n)       # resume each consumer with a new value
        jerry.send(n)
        print("\033[32;1m[producer]\033[0m is making baozi %s" % n)

if __name__ == '__main__':
    tom = consumer("Tom")
    jerry = consumer("Jerry")
    p = producer()
How do we implement concurrency on a single thread?
The answer is to switch on IO operations, because IO operations take a long time.
The reason coroutines can handle high concurrency is precisely that they seize on IO: whenever an IO operation is encountered, they switch to another task.
As a result, the only thing the program ever spends time on is CPU work.
We switch away when we hit an IO operation — so when do we switch back?
The answer: we switch back when the IO operation has finished.
Which raises the next question: how does Python know when an IO operation has finished? Let's look at a couple of examples.
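Before the library examples, here is a hand-rolled sketch of "switch at IO" using nothing but generators (the scheduler and all names are illustrative, not real library code): each `yield` stands in for an IO wait, and a round-robin loop resumes whichever task is next — concurrency in a single thread.

```python
log = []

def task(name, steps):
    for i in range(steps):
        log.append("%s%d" % (name, i))
        yield  # pretend this is an IO wait: hand control back to the scheduler

def run(tasks):
    # Round-robin scheduler: resume each task until it "blocks" or finishes.
    while tasks:
        t = tasks.pop(0)
        try:
            next(t)
            tasks.append(t)  # task yielded at "IO": requeue it, run the next one
        except StopIteration:
            pass  # task finished, drop it

run([task("A", 2), task("B", 2)])
print(log)  # the two tasks interleave within one thread: ['A0', 'B0', 'A1', 'B1']
```

This is the core trick greenlet and gevent implement properly: save the suspended task's state, run another, and come back later.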
The greenlet module:
Greenlet provides a nicely encapsulated coroutine; you switch between greenlets manually with the switch() method.

#!/usr/bin/python
# author: sean
from greenlet import greenlet

def func1():
    print("Haha11")
    gr2.switch()   # hand control to func2
    print("Haha22")
    gr2.switch()

def func2():
    print("Haha33")
    gr1.switch()   # hand control back to func1
    print("Haha44")

gr1 = greenlet(func1)
gr2 = greenlet(func2)
gr1.switch()
The gevent module:
Gevent is a third-party library that makes it easy to write concurrent and asynchronous programs.
The main primitive in gevent is the greenlet, a lightweight coroutine made available to Python as a C extension module.
Greenlets all run inside the operating-system process of the main program, but they are scheduled cooperatively.
Gevent switches automatically on IO.

#!/usr/bin/python
# author: sean
import gevent

def foo():
    print("Running in foo")
    gevent.sleep(0)  # simulate an IO operation
    print("Explicit context switch to foo again")

def bar():
    print("Explicit context to bar")
    gevent.sleep(0)  # simulate an IO operation
    print("Implicit context switch back to bar")

gevent.joinall([gevent.spawn(foo), gevent.spawn(bar)])
The difference between synchronous and asynchronous:
#!/usr/bin/python
# author: sean
import gevent

def task(pid):
    """Some non-deterministic task"""
    gevent.sleep(0.5)
    print('Task %s done' % pid)

def synchronous():
    for i in range(1, 10):
        task(i)  # runs one at a time: ~0.5s each

def asynchronous():
    threads = [gevent.spawn(task, i) for i in range(10)]
    gevent.joinall(threads)  # all sleeps overlap: ~0.5s total

print('Synchronous:')
synchronous()
print('Asynchronous:')
asynchronous()
Crawling web sites concurrently with coroutines:

#!/usr/bin/python
# author: sean
from urllib import request
import gevent
# By default, gevent has no idea when urllib or a socket performs IO.
# Without help, the greenlets gain nothing: the downloads are effectively serial.
# To let gevent detect IO inside urllib/socket, apply the monkey patch:
from gevent import monkey
monkey.patch_all()  # mark every IO operation in the current program

def f(url):
    print('GET: %s' % url)
    resp = request.urlopen(url)
    data = resp.read()
    # f = open("url.html", "wb")
    # f.write(data)
    # f.close()
    print('%d bytes received from %s.' % (len(data), url))

gevent.joinall([
    gevent.spawn(f, 'https://www.python.org'),
    gevent.spawn(f, 'https://yahoo.com'),
    gevent.spawn(f, 'https://github.com'),
])
Writing a single-threaded, high-concurrency socket server with gevent:
Server side:

#!/usr/bin/python
# author: sean
import socket
import gevent
from gevent import socket, monkey
monkey.patch_all()  # mark every IO operation in the current program

def server(host, port):
    s = socket.socket()
    s.bind((host, port))
    s.listen(5)
    while True:
        cli, addr = s.accept()
        gevent.spawn(handle_request, cli)  # one greenlet per connection

def handle_request(conn):
    try:
        while True:
            data = conn.recv(1024)
            print("recv:", data)
            conn.send(data)
            if not data:
                conn.shutdown(socket.SHUT_WR)
    except Exception as e:
        print(e)
    finally:
        conn.close()

if __name__ == '__main__':
    server('0.0.0.0', 8001)
Client:
#!/usr/bin/python
# author: sean
import socket

HOST = 'localhost'  # the remote host
PORT = 8001         # the same port as used by the server
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((HOST, PORT))
while True:
    msg = bytes(input(">>:"), encoding="utf-8")
    s.sendall(msg)
    data = s.recv(1024)
    print('Received', repr(data))
s.close()
Launching 100 concurrent socket connections:

#!/usr/bin/python
# author: sean
import socket
import threading

def sock_conn():
    client = socket.socket()
    client.connect(("localhost", 8001))
    count = 0
    while True:
        # msg = input(">>:").strip()
        # if len(msg) == 0: continue
        client.send(("hello %s" % count).encode("utf-8"))
        data = client.recv(1024)
        print("[%s] recv from server:" % threading.get_ident(), data.decode())
        count += 1
    client.close()

for i in range(100):
    t = threading.Thread(target=sock_conn)
    t.start()
Event-driven and asynchronous IO are covered in a separate article.
Now we can answer the earlier question: how does Python know when an IO operation has finished?
IO operations are handled by the operating system, and the coroutine switches away when it encounters one.
When the IO operation finishes, a callback notifies the coroutine that its IO is complete.
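As an illustration of that callback mechanism, here is a minimal sketch using the standard-library selectors module (the names are illustrative): the OS tells us via select/epoll when a socket is ready, and a registered callback then runs — the same idea an event loop uses to resume a coroutine whose IO has completed.

```python
import selectors
import socket

sel = selectors.DefaultSelector()
r, w = socket.socketpair()  # a connected pair standing in for real network IO
results = []

def on_readable(sock):
    # The OS reported readiness, so this recv() will not block.
    results.append(sock.recv(1024))
    sel.unregister(sock)

# Register the callback to fire when r becomes readable.
sel.register(r, selectors.EVENT_READ, on_readable)

w.send(b"done")  # simulate the IO operation finishing

for key, _ in sel.select(timeout=1):
    key.data(key.fileobj)  # invoke the callback registered for this socket

print(results)  # [b'done']
r.close(); w.close(); sel.close()
```

Libraries like gevent wrap exactly this loop: instead of calling a plain function, the readiness callback switches back into the greenlet that was waiting on that socket.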
This article is from the "Home" blog, please make sure to keep this source http://itchentao.blog.51cto.com/5168625/1895251