Python Learning Notes (14): Processes & Coroutines


I. Processes

1. Multi-process programming with multiprocessing

The multiprocessing package is Python's cross-platform module for managing multiple processes. Much like threading.Thread, multiprocessing.Process lets you create a process that runs a function defined in your Python program, and the Process object is used in much the same way as a Thread object.

After creating a Process instance, start it with the start() method.

The join() method waits for the child process to finish before the parent continues, and is typically used for inter-process synchronization.

from multiprocessing import Process
import time

def f(name):
    time.sleep(2)
    print('Hello', name)

if __name__ == '__main__':
    p = Process(target=f, args=('Bob',))
    p.start()
    p.join()

Write a program that compares the IDs of the main process and the child process:

from multiprocessing import Process
import os

def info(title):
    print(title)
    print('process name:', __name__)
    print('parent process ID:', os.getppid())
    print('process ID:', os.getpid())
    print("\n")

def f(name):
    info('\033[31;1mcalled from child process function f\033[0m')
    print('Hello', name)

if __name__ == '__main__':
    info('\033[32;1mmain process line\033[0m')
    p = Process(target=f, args=('Bob',))
    p.start()

2. Inter-process communication

Memory is not shared between processes. To exchange data between two processes, you can use Queue, Pipe, or Manager, where:

1) Queue and Pipe only transfer data between processes;

2) Manager shares data between processes, that is, multiple processes can modify the same data.

2.1 Queue

A Queue lets multiple processes put objects in and multiple processes fetch them out, first in, first out. (Its methods are the same as those of the queue used with threading.)

from multiprocessing import Process, Queue

def f(qq):
    qq.put([42, None, "Hello"])
    qq.put([43, None, "Hi"])

if __name__ == '__main__':
    q = Queue()
    p = Process(target=f, args=(q,))
    p.start()
    print(q.get())
    print(q.get())
    p.join()

2.2 Pipe

Pipe is also first in, first out. Pipe() returns a pair of connected Connection objects, one for each end, and each end can both send and receive.

from multiprocessing import Process, Pipe

def f(conn):
    conn.send([42, None, 'son sends a message'])
    conn.send([42, None, 'son sends a message again'])
    print("Received father's message:", conn.recv())
    conn.close()

if __name__ == '__main__':
    parent_conn, child_conn = Pipe()
    p = Process(target=f, args=(child_conn,))
    p.start()
    print(parent_conn.recv())  # first message from the child
    print(parent_conn.recv())  # second message from the child
    parent_conn.send("Go home for dinner!")  # send a message back to the child
    p.join()

2.3 Manager

The Manager object works like a server-client pair, much like our activity on the Internet. One process acts as the server, builds the Manager, and actually stores the resources. Other processes can reach the Manager through passed-in arguments or by its address, and once connected they can manipulate the resources on the server. Firewall permitting, a Manager can even be used across multiple computers, mimicking a real network scenario.

From multiprocessing import Process,managerimport  osdef F (d,l):    d[os.getpid ()] = Os.getpid ()    l.append ( Os.getpid ())    print (l) if __name__ = = "__main__":    with manager () as manager:        d = manager.dict () #生成一个字典, The        L = manager.list (range (5)) #生成一个列表 can be shared and passed across multiple processes, sharing and passing between multiple processes        p_list = [] for        i in range:            p = Process (target=f,args= (d,l))            P.start ()            p_list.append (p) for        res in p_list: #等待结果            res.join ()

3. Process Pool

A process pool creates multiple processes. These processes are like soldiers on standby, ready to perform tasks; a process pool can hold many such standby processes at once.

A process pool offers two ways to submit tasks:

1) Serial: apply

2) Parallel: apply_async

from multiprocessing import Pool
import time
import os

def foo(i):
    time.sleep(2)
    print("in process", os.getpid())
    return i + 100

def bar(arg):
    """Callback function, executed in the parent process."""
    print("-->> exec done:", arg, os.getpid())

if __name__ == "__main__":
    pool = Pool(processes=3)  # allow at most 3 processes in the pool at the same time
    print("main process", os.getpid())
    for i in range(10):
        pool.apply_async(func=foo, args=(i,), callback=bar)
    print('end')
    pool.close()
    pool.join()  # wait until the processes in the pool have finished; without it, the program exits immediately

The callback function runs in the parent process, which improves efficiency. For example, when results need to be written to a database, putting the write in the callback means the parent process connects to the database only once; if each child process wrote its own results, multiple connections would be needed.
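For comparison with the apply_async example above, here is a minimal sketch of the serial apply variant (the foo below and its numbers are illustrative, not from the original notes). Each apply call blocks until the worker returns, so the tasks run one after another:

from multiprocessing import Pool
import time
import os

def foo(i):
    time.sleep(1)
    print("in process", os.getpid())
    return i + 100

if __name__ == "__main__":
    pool = Pool(processes=3)
    for i in range(3):
        result = pool.apply(foo, args=(i,))  # blocks until foo returns, so the tasks run serially
        print("got", result)
    pool.close()
    pool.join()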

4. Other (lock)

Lock: a lock around printing to the screen, to keep output from different processes from getting scrambled together.

from multiprocessing import Process, Lock

def f(l, i):
    l.acquire()  # acquire the lock
    try:
        print('hello world', i)
    finally:
        l.release()  # release the lock
    # The screen is shared between processes; the lock keeps each printed line
    # intact, although the order of the lines may still vary.

if __name__ == '__main__':
    lock = Lock()  # define the lock
    for num in range(10):  # spawn several processes
        Process(target=f, args=(lock, num)).start()

II. Coroutines

A coroutine is also known as a micro-thread or a fiber; the English term is coroutine.

A coroutine has its own register context and stack. When it is switched out by the scheduler, the register context and stack are saved elsewhere; when it is switched back in, the previously saved register context and stack are restored. A coroutine therefore keeps the state of its last invocation (that is, a specific combination of all its local state), and each time it re-enters the procedure it resumes from that state, in other words from the position in the logic flow where it last left off.
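As a minimal illustration of this state keeping (the counter generator below is just an example, not from the original notes), a Python generator resumes exactly where it last yielded:

def counter():
    n = 0
    while True:
        n += 1
        yield n      # pause here; the local variable n is preserved

c = counter()
print(next(c))   # 1
print(next(c))   # 2 -- execution resumes after the yield with n intact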

Benefits:

    • No overhead for thread context switching
    • No locking or synchronization overhead for atomic operations
    • Easy switching of control flow and a simplified programming model
    • High concurrency + high scalability + low cost: one CPU can easily support tens of thousands of coroutines, so they are well suited to high-concurrency workloads.

Disadvantages:

    • Cannot use multi-core resources: a coroutine is essentially a single thread, so it cannot use several cores of a CPU at once; to run on multiple CPUs, coroutines must be combined with processes. For most everyday applications this is not needed, except for CPU-intensive ones.
    • A blocking operation (such as I/O) blocks the entire program.
1. Example

In the traditional producer-consumer model, one thread writes messages and another thread takes them; the queue is controlled with locks and waiting, and a careless design can deadlock.

With coroutines, the producer produces a message and jumps, via yield, directly to the consumer, which starts executing; once the consumer finishes, control switches back to the producer, which continues producing. This is very efficient.

code example:

def consumer():
    r = ''
    while True:
        n = yield r
        if not n:
            return
        print('[Consumer] Consuming %s...' % n)
        r = 'OK'

def produce(c):
    c.send(None)
    n = 0
    while n < 5:
        n = n + 1
        print('[Producer] Producing %s...' % n)
        r = c.send(n)
        print('[Producer] Consumer return status code: %s' % r)
    c.close()

c = consumer()
produce(c)

Output Result:

[Producer] Producing 1...
[Consumer] Consuming 1...
[Producer] Consumer return status code: OK
[Producer] Producing 2...
[Consumer] Consuming 2...
[Producer] Consumer return status code: OK
[Producer] Producing 3...
[Consumer] Consuming 3...
[Producer] Consumer return status code: OK
[Producer] Producing 4...
[Consumer] Consuming 4...
[Producer] Consumer return status code: OK
[Producer] Producing 5...
[Consumer] Consuming 5...
[Producer] Consumer return status code: OK

Notice that consumer is a generator. After passing a consumer into produce:

    1. First call c.send(None) to start the generator;
    2. Then, once something is produced, switch to consumer for execution via c.send(n);
    3. consumer gets the message through yield, processes it, and passes the result back through yield;
    4. produce receives the result of consumer's processing and goes on to produce the next message;
    5. When produce decides to stop producing, it shuts down consumer with c.close(), and the whole flow ends.

The whole flow is lock-free and runs in a single thread; the producer and the consumer cooperate to complete the task, hence "coroutine", as opposed to preemptive multitasking between threads. (Principle: switch on I/O operations, leaving only CPU work, and CPU operations are very fast.)

One sentence sums up coroutines: a subroutine is a special case of a coroutine.

Python supports coroutines through the following two modules: greenlet and gevent.

2. Greenlet

The greenlet package uses .switch() to switch between coroutines manually.

from greenlet import greenlet

def test1():
    print("test1: first")
    gr3.switch()          # jump to test3
    print("test1: second")
    gr2.switch()          # jump to test2
    print("test1: third")

def test2():
    print("test2: first")
    gr1.switch()          # jump back to test1

def test3():
    print("test3: first")
    gr1.switch()          # jump back to test1

gr1 = greenlet(test1)  # create the coroutines; nothing runs until switch() is called
gr2 = greenlet(test2)
gr3 = greenlet(test3)
gr1.switch()           # start test1

3. Gevent

gevent is a third-party library that makes concurrent and asynchronous programming easy. The main primitive in gevent is the greenlet, a lightweight coroutine exposed to Python as a C extension module. All greenlets run inside the operating-system process of the main program, but they are scheduled cooperatively.

import gevent

def foo():
    print("Run foo")
    gevent.sleep(2)
    print("Back to foo")

def bar():
    print("Here is bar")
    gevent.sleep(1)
    print("Back to bar")

def func3():
    print("Run func3")
    gevent.sleep(0)
    print("Back to func3")

gevent.joinall([
    gevent.spawn(foo),
    gevent.spawn(bar),
    gevent.spawn(func3),
])

Performance differences between synchronous and asynchronous:

1) Sync:

from gevent import monkey
# monkey.patch_all()   # not patched: this is the plain synchronous version
import gevent
from urllib.request import urlopen
import time

def f(url):
    print('GET: %s' % url)
    resp = urlopen(url)
    data = resp.read()
    print('%d bytes received from %s.' % (len(data), url))

urls = ['https://www.python.org/',
        'https://www.yahoo.com/',
        'https://github.com/']

time_start = time.time()
for url in urls:
    f(url)
print("Sync cost", time.time() - time_start)

2) Async:

from gevent import monkey
monkey.patch_all()   # patch the standard library so gevent can switch on urllib's I/O
import gevent
from urllib.request import urlopen
import time

def f(url):
    print('GET: %s' % url)
    resp = urlopen(url)
    data = resp.read()
    print('%d bytes received from %s.' % (len(data), url))

async_time_start = time.time()
gevent.joinall([
    gevent.spawn(f, 'https://www.python.org/'),
    gevent.spawn(f, 'https://www.yahoo.com/'),
    gevent.spawn(f, 'https://github.com/'),
])
print("Async cost", time.time() - async_time_start)

Conclusion: the synchronous version took about 4 seconds, while the asynchronous version took about 2.5 seconds, a big saving; this is the charm of coroutines. monkey.patch_all() is what lets gevent recognize the I/O operations inside urllib.

Using gevent to serve multiple sockets concurrently in a single thread:

import sys
import socket
import time
import gevent
from gevent import socket, monkey
monkey.patch_all()

def server(port):
    s = socket.socket()
    s.bind(('0.0.0.0', port))
    s.listen(500)
    while True:
        cli, addr = s.accept()
        gevent.spawn(handle_request, cli)

def handle_request(conn):
    try:
        while True:
            data = conn.recv(1024)
            print("recv:", data)
            conn.send(data)
            if not data:
                conn.shutdown(socket.SHUT_WR)
    except Exception as ex:
        print(ex)
    finally:
        conn.close()

if __name__ == '__main__':
    server(8001)
Server Side
import socket

HOST = 'localhost'    # the remote host
PORT = 8001           # the same port as used by the server
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((HOST, PORT))
while True:
    msg = bytes(input(">>:"), encoding="utf8")
    s.sendall(msg)
    data = s.recv(1024)
    # print(data)
    print('Received', repr(data))
s.close()
Client Side
