Coroutines in Python

Source: Internet
Author: User
Tags: mutex

Coroutines in Python

Before introducing coroutines, let's first talk about Python processes and threads.

Process:

A process is a running instance of a program. While the program executes, the kernel loads its code into virtual memory, allocates space for its variables, and sets up bookkeeping data structures that record process-related information such as the process ID and user ID. When a process is created, the kernel allocates a certain amount of resources to it and keeps adjusting them while the process is alive; memory, for example, is partly allocated at creation time. When the process ends, its resources are released for others to use. You can think of a process as a container: the resources inside the container may vary, but the program in it can only use what belongs to its own container. Starting a process is therefore slow, especially on Windows, and especially when starting many processes (multiprocessing is best reserved for CPU-intensive work).
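To illustrate that last point, here is a minimal sketch (not from the original article) of using a process pool for CPU-intensive work so that each worker can run on its own core; the cpu_bound function, the pool size, and the inputs are illustrative placeholders.

#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Sketch: a process pool for CPU-bound work; each worker process runs on its
# own core, so the startup cost of processes pays off for heavy computation.
from multiprocessing import Pool

def cpu_bound(n):
    # deliberately heavy, pure-CPU loop (illustrative placeholder)
    total = 0
    for i in xrange(n):
        total += i * i
    return total

if __name__ == '__main__':
    pool = Pool(processes=4)                      # roughly one worker per core
    results = pool.map(cpu_bound, [10 ** 6] * 4)  # the four calls run in parallel
    pool.close()
    pool.join()
    print results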

Thread:

A process can run multiple threads, and those threads share the resources of the process. Threads can therefore be viewed as lightweight processes that share the same virtual memory and other attributes.
The advantage of threads over processes is that sharing data between threads of the same process is much easier.

Speaking of threads, we have to mention the GIL (Global Interpreter Lock). The GIL exists to implement mutual exclusion for shared resource access in Python, and it is a rather heavy-handed interpreter-level mutex: once one thread holds the interpreter, other threads must wait for it to be released before they can run. On a single processor this is not a problem, since a single processor executes serially anyway, but on multiple processors it prevents Python threads from taking advantage of multiple cores. Python's thread scheduling is similar to the operating system's process scheduling and is preemptive: after a process runs for a certain amount of time, a signal is raised, the operating system responds to the clock interrupt, and scheduling takes place. Python simulates this interruption in software to schedule threads. For example, if threads add to a global num until it should reach 100, a thread may be interrupted in the middle of an increment (say, when num is 11) and the CPU handed to another thread, so the final result may be smaller than 100 or greater than 100.
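To make the num example above concrete, here is a small sketch (not from the original article) in which several threads increment a shared counter: the bare increment can be interrupted mid-way and lose updates, while the variant that takes a threading.Lock is safe. The thread count and iteration count are made-up illustrative values.

#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Sketch of the race condition described above: num += 1 is a read-modify-write
# that can be preempted in the middle, so without a lock some increments are lost.
import threading

num = 0
lock = threading.Lock()

def add_unsafe():
    global num
    for _ in xrange(100000):
        num += 1              # read, add, write back: may be interrupted in between

def add_safe():
    global num
    for _ in xrange(100000):
        with lock:            # only one thread at a time runs the increment
            num += 1

if __name__ == '__main__':
    threads = [threading.Thread(target=add_unsafe) for _ in range(5)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print num                 # frequently less than 500000; use add_safe to fix it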

Briefly, the relationship between processes and threads:

1. A process starts with at least one thread.

2. Modifying data in the main thread affects the data seen by child threads, because threads share memory; modifying data in the main process does not affect child processes, and sub-processes are independent of each other. If sub-processes need to communicate, use an intermediary such as a multiprocessing Queue (example below; the shared-memory behaviour of threads is sketched after this list).

For example:

#!/usr/bin/env python
# -*- coding: utf-8 -*-

# communication between processes
from multiprocessing import Process, Queue

def f(qq):
    # set a value in the child process; essentially the child process pickles
    # the data and puts it in a shared place
    qq.put(['Hello', None, 123])

if __name__ == '__main__':
    q = Queue()
    t = Process(target=f, args=(q,))
    t.start()
    # take it out in the parent process; essentially the parent process
    # unpickles the data from the shared place
    print q.get()
    t.join()

3. New threads are easy to create, but creating a new process requires making a copy of its parent process.

4. A thread can manipulate and control other threads in the same process, but a process can only manipulate its own child processes.
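As a counterpart to the multiprocessing Queue example in point 2, here is a small sketch (not from the original article) showing that threads in the same process really do share memory: a list appended to in a child thread is immediately visible to the main thread. The variable names are illustrative.

#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Sketch: threads share the memory of their process, so the main thread sees
# the modification made by the child thread on the very same list object.
import threading

shared = []

def child():
    shared.append('written by the child thread')

if __name__ == '__main__':
    t = threading.Thread(target=child)
    t.start()
    t.join()
    print shared              # ['written by the child thread'] -- same object in memory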

Now that processes and threads are clear, let's talk about coroutines.

Coroutine:

A coroutine is also known as a micro-thread.

The greatest advantage of coroutines is their very high execution efficiency. Switching between subroutines is not a thread switch but is controlled by the program itself, so there is no thread-switching overhead; compared with multithreading, the more concurrent tasks there are, the more obvious the performance advantage of coroutines becomes.

The second big advantage is that no multithreaded locking mechanism is needed. Since there is only one thread, there are no conflicts from writing a variable simultaneously; shared resources can be managed without locks, simply by checking state, so execution efficiency is much higher than with multithreading.

Since a coroutine runs in a single thread, how do you take advantage of multicore CPUs? The simplest approach is multi-process + coroutines: this makes full use of multiple cores while keeping the high efficiency of coroutines, and can achieve very good performance.
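Here is a minimal sketch (not from the original article) of the multi-process + coroutine idea: a process pool spreads work across cores, and inside each worker process a batch of gevent greenlets overlaps the (simulated) IO waits. The io_task function, the batches, and the pool size are illustrative assumptions.

#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Sketch of multi-process + coroutines: each worker process runs its own batch
# of greenlets, so the cores work in parallel while IO waits are overlapped.
import multiprocessing
import gevent

def io_task(n):
    gevent.sleep(1)           # stands in for a blocking IO operation
    return n * n

def worker(batch):
    jobs = [gevent.spawn(io_task, n) for n in batch]   # run the batch concurrently
    gevent.joinall(jobs)
    return [job.value for job in jobs]

if __name__ == '__main__':
    batches = [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]]
    pool = multiprocessing.Pool(processes=4)   # roughly one process per core
    print pool.map(worker, batches)            # each batch finishes in about 1 second
    pool.close()
    pool.join()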

We can use yield to implement the traditional producer-consumer model. In the traditional version one thread writes messages and another fetches them, with a queue and a lock mechanism coordinating the waiting; with yield a single thread is enough:

#!/usr/bin/env python
# -*- coding: utf-8 -*-

import time

def consumer():
    r = ''
    while True:
        n = yield r
        if not n:
            return
        print('consume running %s...' % n)
        time.sleep(1)         # simulate time-consuming processing
        r = '200 OK'

def produce(c):
    c.next()                  # start the generator
    n = 0
    while n < 5:
        n = n + 1
        print('[produce] running %s...' % n)
        r = c.send(n)         # switch into consumer and run until its next yield
        print('[consumer] return: %s' % r)
    c.close()

if __name__ == '__main__':
    c = consumer()            # create the generator
    produce(c)

Execution result:

[produce] running 1...
consume running 1...
[consumer] return: 200 OK
[produce] running 2...
consume running 2...
[consumer] return: 200 OK
[produce] running 3...
consume running 3...
[consumer] return: 200 OK
[produce] running 4...
consume running 4...
[consumer] return: 200 OK
[produce] running 5...
consume running 5...
[consumer] return: 200 OK

In fact, Python has a package that encapsulates coroutines: greenlet. Look at the code.

#!/usr/bin/env python
# -*- coding: utf-8 -*-

# coroutines using the greenlet package
from greenlet import greenlet

def test1():
    print "test1:", 11
    gr2.switch()
    print "test1:", 12
    gr2.switch()

def test2():
    print "test2:", 13
    gr1.switch()
    print "test2:", 14

gr1 = greenlet(test1)
gr2 = greenlet(test2)
gr1.switch()

Execution result:

test1: 11
test2: 13
test1: 12
test2: 14

This still requires switching manually, which feels a bit tedious. Don't worry: Python also has gevent, which is more powerful than greenlet and switches automatically.

The principle is that when a greenlet encounters an IO operation, such as accessing the network, it automatically switches to another greenlet, and switches back to continue execution at an appropriate time once the IO operation has finished.

Because IO operations are very time-consuming and often leave the program waiting, letting gevent switch coroutines for us automatically guarantees that some greenlet is always running instead of waiting on IO. Straight to the code:

#!/usr/bin/env python
# -*- coding: utf-8 -*-

# automatic switching between coroutines
import gevent
import time

def func1():
    print('\033[31;1mexecuting 111...\033[0m')
    gevent.sleep(2)
    print('\033[31;1mexecuting 444...\033[0m')

def func2():
    print('\033[32;1mexecuting 222...\033[0m')
    gevent.sleep(3)           # block for 3 seconds, so it automatically switches to
                              # func1; when func1 has finished, switch back here
    print('\033[32;1mexecuting 333...\033[0m')

start_time = time.time()
gevent.joinall([
    gevent.spawn(func1),
    gevent.spawn(func2),
])
end_time = time.time()
# the whole program takes about 3 seconds to execute
print "spend", (end_time - start_time), "second"

Execution result:

executing 111...
executing 222...
executing 444...
executing 333...
spend 3.0... second

Below we use gevent to implement concurrent handling of socket data. Note that it relies on gevent's monkey patch (gevent.monkey), which patches the blocking calls in the standard library so they cooperate with the greenlets.

Client side:

#!/usr/bin/env python
# -*- coding: utf-8 -*-

from socket import *

ADDR, PORT = 'localhost', 8001
client = socket(AF_INET, SOCK_STREAM)
client.connect((ADDR, PORT))

while 1:
    cmd = raw_input('>>: ').strip()
    if len(cmd) == 0:
        continue
    client.send(cmd)
    data = client.recv(1024)
    print data
    # print('Received', repr(data))

client.close()

Server side:

#!/usr/bin/env python
# -*- coding: utf-8 -*-

import sys
import socket
import gevent
from gevent import monkey
monkey.patch_all()            # patch blocking socket calls so they yield to gevent

def server(port):
    sock = socket.socket()
    sock.bind(('127.0.0.1', port))
    sock.listen(500)
    while 1:
        conn, addr = sock.accept()
        # handle_request(conn)          # serial version, one client at a time
        gevent.spawn(handle_request, conn)

def handle_request(conn):
    try:
        while 1:
            data = conn.recv(1024)
            if not data:
                break
            print("recv:", data)
            conn.send(data)
    except Exception as ex:
        print(ex)
    finally:
        conn.close()

if __name__ == '__main__':
    server(8001)

With the code above you can start several clients yourself and give it a try: no matter what the clients type, the server receives it in real time. Pretty cool, isn't it?

over!

Category: Python
