Python Full Stack Road Series: Coroutines
What is a coroutine?
Like a subroutine, a coroutine is a program component. Coroutines are more general and flexible than subroutines, but in practice they are not nearly as widely used. Coroutines originated in the Simula and Modula-2 languages, though many other languages support them as well. Coroutines are well suited to implementing program components that cooperate closely, such as cooperative multitasking, iterators, infinite lists, and pipelines.
From Wikipedia https://zh.wikipedia.org/wiki/
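The iterator/pipeline use case mentioned above can be sketched with plain Python generators. This is a minimal hypothetical example (the stage names are mine, not from the original text): each stage pulls values from the previous one on demand, so the whole chain runs lazily in a single thread.

```python
def numbers(limit):
    # source stage: produce 0 .. limit-1 lazily
    for n in range(limit):
        yield n

def squares(seq):
    # transform stage: square each incoming value
    for n in seq:
        yield n * n

def evens(seq):
    # filter stage: keep only even values
    for n in seq:
        if n % 2 == 0:
            yield n

# Compose the stages into a pipeline; nothing runs until values are pulled.
result = list(evens(squares(numbers(10))))
print(result)  # even squares of 0..9
```

Each generator suspends at its `yield` and resumes exactly where it left off, which is the coroutine behavior the paragraph above describes.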
A coroutine has its own register context and stack. When a coroutine switch occurs, the register context and stack are saved elsewhere; when the coroutine is switched back, the saved register context and stack are restored. A coroutine therefore retains the state of its last invocation (a specific combination of all its local state): each re-entry is equivalent to resuming the previous call, in other words, continuing from the position in the logical flow where it last left off.
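The "retains the state of the last invocation" behavior can be seen with a plain generator. This is a minimal sketch (the `counter` function is a hypothetical example, not from the original text):

```python
def counter():
    # local state that survives across suspensions
    total = 0
    while True:
        step = yield total  # suspend; resume here with the value sent in
        total += step

c = counter()
print(next(c))     # prime the generator: runs to the first yield, -> 0
print(c.send(5))   # resumes after the yield with step=5, -> 5
print(c.send(3))   # local state survived between calls, -> 8
```

Each `send()` resumes the function body exactly where it was suspended, with `total` intact, which is precisely the state retention described above.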
Advantages and disadvantages of coroutines:
Advantages
No overhead for thread context switching
No cost of locking and synchronization for atomic operations (e.g. changing a shared variable)
Easy switching of control flow and a simplified programming model
High concurrency + high scalability + low cost: a single CPU can support tens of thousands of coroutines, so coroutines are well suited to high-concurrency workloads
Disadvantages:
Unable to take advantage of multicore resources: a coroutine is by nature single-threaded, so it cannot use multiple cores on its own. To run on multiple CPUs, coroutines must be combined with processes. Most applications we write do not need this, unless they are CPU-intensive.
Blocking operations (such as IO) block the entire program
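The blocking drawback can be illustrated with a minimal sketch that uses plain generators and `time.sleep` as a stand-in for blocking IO (the round-robin scheduler below is a hypothetical illustration, not part of the original text):

```python
import time

def worker(name, delay):
    # a generator-based "coroutine" whose body makes a blocking call
    for step in range(2):
        time.sleep(delay)  # blocking call: freezes the whole thread
        yield (name, step)

start = time.time()
# naive round-robin scheduler interleaving two coroutines in one thread
pending = [worker('a', 0.1), worker('b', 0.1)]
while pending:
    for w in list(pending):
        try:
            next(w)
        except StopIteration:
            pending.remove(w)
elapsed = time.time() - start

# The four blocking sleeps run back to back (~0.4s total, not ~0.2s):
# a blocking call in one coroutine stalls every other coroutine too.
print('elapsed: %.2fs' % elapsed)
```

This is why libraries like gevent monkey-patch blocking IO calls: the patched versions yield control back to the scheduler instead of freezing the thread.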
Implementing coroutines by example
yield
def consumer(name):
    print("--->starting eating baozi...")
    while True:
        new_baozi = yield  # suspend here until a value is sent in
        print("[%s] is eating baozi %s" % (name, new_baozi))

def producer():
    con.__next__()   # prime the generators: run them to the first yield
    con2.__next__()
    n = 0
    while n < 5:
        n += 1
        con.send(n)  # wake up the generator and pass in a value
        con2.send(n)
        print("\033[32;1m[producer]\033[0m is making baozi %s" % n)

if __name__ == '__main__':
    con = consumer("C1")
    con2 = consumer("C2")
    p = producer()
Greenlet
Installing Greenlet
pip3 install greenlet
# -*- coding: utf-8 -*-
from greenlet import greenlet

def func1():
    print(12)
    gr2.switch()
    print(34)
    gr2.switch()

def func2():
    print(56)
    gr1.switch()
    print(78)

# create two coroutines
gr1 = greenlet(func1)
gr2 = greenlet(func2)
gr1.switch()  # switch manually
Gevent
Gevent supports both synchronous and asynchronous concurrent programming. The main pattern used in gevent is the greenlet, a lightweight coroutine provided to Python as a C extension module. Greenlets all run inside the OS process of the main program, but they are scheduled cooperatively.
Installing Gevent
pip3 install gevent
import gevent

def foo():
    print('Running in foo')
    gevent.sleep(2)
    print('Explicit context switch to foo again')

def bar():
    print('Explicit context to bar')
    gevent.sleep(3)
    print('Implicit context switch back to bar')

# switch automatically on gevent.sleep
gevent.joinall([
    gevent.spawn(foo),  # start a coroutine
    gevent.spawn(bar),
])
Page crawling
from urllib import request
from gevent import monkey
import gevent
import time

monkey.patch_all()  # patch blocking IO calls in the current program so gevent can switch on them

def wget(url):
    print('GET: %s' % url)
    resp = request.urlopen(url)
    data = resp.read()
    print('%d bytes received from %s.' % (len(data), url))

urls = [
    'https://www.python.org/',
    'https://www.python.org/',
    'https://github.com/',
    'https://blog.ansheng.me/',
]

# serial crawl
start_time = time.time()
for url in urls:
    wget(url)
print("Serial crawl usage time:", time.time() - start_time)

# parallel crawl
coroutine_time = time.time()
gevent.joinall([
    gevent.spawn(wget, 'https://www.python.org/'),
    gevent.spawn(wget, 'https://www.python.org/'),
    gevent.spawn(wget, 'https://github.com/'),
    gevent.spawn(wget, 'https://blog.ansheng.me/'),
])
print("Parallel crawl usage time:", time.time() - coroutine_time)
Output:
c:\python\python35\python.exe e:/mycodeprojects/co-process/s4.py
GET: https://www.python.org/
47424 bytes received from https://www.python.org/.
GET: https://www.python.org/
47424 bytes received from https://www.python.org/.
GET: https://github.com/
25735 bytes received from https://github.com/.
GET: https://blog.ansheng.me/
82693 bytes received from https://blog.ansheng.me/.
Serial crawl usage time: 15.143015384674072
GET: https://www.python.org/
GET: https://www.python.org/
GET: https://github.com/
GET: https://blog.ansheng.me/
25736 bytes received from https://github.com/.
47424 bytes received from https://www.python.org/.
82693 bytes received from https://blog.ansheng.me/.
47424 bytes received from https://www.python.org/.
Parallel crawl usage time: 3.781306266784668

Process finished with exit code 0
#Python Full Stack Road #Coroutines
This article is from the "Eden" blog; please keep this source: http://edeny.blog.51cto.com/10733491/1924915