The multiprocessing module in Python (reprinted)

Source: Internet
Author: User
Tags: semaphore

Overview: multiprocessing

Multithreading in Python is not true parallel execution: because of the global interpreter lock, fully using the resources of a multicore CPU usually requires multiple processes. Python provides the very convenient multiprocessing package: you only need to define a function, and Python takes care of everything else, making it easy to move from a single process to concurrent execution. multiprocessing supports subprocesses, communication and data sharing between them, and different forms of synchronization, and it provides components such as Process, Queue, Pipe, and Lock.

1.Process

The class that creates a process: Process([group [, target [, name [, args [, kwargs]]]]]). target is the callable object, args is the tuple of positional arguments for the callable, and kwargs is its dictionary of keyword arguments. name is an alias for the process; group is not actually used.
Methods: is_alive(), join([timeout]), run(), start(), terminate(). A process is launched with start().

Properties: authkey, daemon (must be set before start()), exitcode (None while the process is running; -N means it was terminated by signal N), name, pid. A daemon process is terminated automatically when its parent process ends and cannot create child processes of its own.

Example 1.1: Create a function and use it as a single process

import multiprocessing
import time

def worker(interval):
    n = 5
    while n > 0:
        print("The time is {0}".format(time.ctime()))
        time.sleep(interval)
        n -= 1

if __name__ == "__main__":
    p = multiprocessing.Process(target=worker, args=(3,))
    p.start()
    print("p.pid:", p.pid)
    print("p.name:", p.name)
    print("p.is_alive:", p.is_alive())
p.pid: 8736
p.name: Process-1
p.is_alive: True
The time is Tue Apr 21 20:55:12 2015
The time is Tue Apr 21 20:55:15 2015
The time is Tue Apr 21 20:55:18 2015
The time is Tue Apr 21 20:55:21 2015
The time is Tue Apr 21 20:55:24 2015

Example 1.2: Create a function and use it as multiple processes

import multiprocessing
import time

def worker_1(interval):
    print("worker_1")
    time.sleep(interval)
    print("end worker_1")

def worker_2(interval):
    print("worker_2")
    time.sleep(interval)
    print("end worker_2")

def worker_3(interval):
    print("worker_3")
    time.sleep(interval)
    print("end worker_3")

if __name__ == "__main__":
    p1 = multiprocessing.Process(target=worker_1, args=(2,))
    p2 = multiprocessing.Process(target=worker_2, args=(3,))
    p3 = multiprocessing.Process(target=worker_3, args=(4,))

    p1.start()
    p2.start()
    p3.start()

    print("The number of CPUs is: " + str(multiprocessing.cpu_count()))
    for p in multiprocessing.active_children():
        print("child p.name: " + p.name + "\tp.id: " + str(p.pid))
    print("END!!!!!!!!!!!!!!!!!")

Results

Example 1.3: Defining a process as a class  

import multiprocessing
import time

class ClockProcess(multiprocessing.Process):
    def __init__(self, interval):
        multiprocessing.Process.__init__(self)
        self.interval = interval

    def run(self):
        n = 5
        while n > 0:
            print("The time is {0}".format(time.ctime()))
            time.sleep(self.interval)
            n -= 1

if __name__ == '__main__':
    p = ClockProcess(3)
    p.start()

Note: when process p calls start(), its run() method is called automatically.

Results

The time is Tue Apr 21 20:31:30 2015
The time is Tue Apr 21 20:31:33 2015
The time is Tue Apr 21 20:31:36 2015
The time is Tue Apr 21 20:31:39 2015
The time is Tue Apr 21 20:31:42 2015

Example 1.4: comparing the effect of the daemon attribute

#1.4-1 without the daemon attribute

import multiprocessing
import time

def worker(interval):
    print("work start: {0}".format(time.ctime()))
    time.sleep(interval)
    print("work end: {0}".format(time.ctime()))

if __name__ == "__main__":
    p = multiprocessing.Process(target=worker, args=(3,))
    p.start()
    print("end!")

Results

end!
work start: Tue Apr 21 21:29:10 2015
work end: Tue Apr 21 21:29:13 2015

#1.4-2 with the daemon attribute

import multiprocessing
import time

def worker(interval):
    print("work start: {0}".format(time.ctime()))
    time.sleep(interval)
    print("work end: {0}".format(time.ctime()))

if __name__ == "__main__":
    p = multiprocessing.Process(target=worker, args=(3,))
    p.daemon = True
    p.start()
    print("end!")

Results

end!

Note: after a child process is given the daemon attribute, it ends as soon as the main process ends.

#1.4-3 letting a daemon process finish with join()

import multiprocessing
import time

def worker(interval):
    print("work start: {0}".format(time.ctime()))
    time.sleep(interval)
    print("work end: {0}".format(time.ctime()))

if __name__ == "__main__":
    p = multiprocessing.Process(target=worker, args=(3,))
    p.daemon = True
    p.start()
    p.join()
    print("end!")

Results

work start: Tue Apr 21 22:16:32 2015
work end: Tue Apr 21 22:16:35 2015
end!
2.Lock

When multiple processes need to access a shared resource, a Lock can be used to avoid conflicting accesses.

import multiprocessing

def worker_with(lock, f):
    with lock:
        fs = open(f, 'a+')
        n = 10
        while n > 1:
            fs.write("Lock acquired via with\n")
            n -= 1
        fs.close()

def worker_no_with(lock, f):
    lock.acquire()
    try:
        fs = open(f, 'a+')
        n = 10
        while n > 1:
            fs.write("Lock acquired directly\n")
            n -= 1
        fs.close()
    finally:
        lock.release()

if __name__ == "__main__":
    lock = multiprocessing.Lock()
    f = "file.txt"
    w = multiprocessing.Process(target=worker_with, args=(lock, f))
    nw = multiprocessing.Process(target=worker_no_with, args=(lock, f))
    w.start()
    nw.start()
    print("end")

Results (output file)

Lock acquired via with
Lock acquired via with
Lock acquired via with
Lock acquired via with
Lock acquired via with
Lock acquired via with
Lock acquired via with
Lock acquired via with
Lock acquired via with
Lock acquired directly
Lock acquired directly
Lock acquired directly
Lock acquired directly
Lock acquired directly
Lock acquired directly
Lock acquired directly
Lock acquired directly
Lock acquired directly
3.Semaphore

Semaphore is used to control the number of accesses to a shared resource, such as the maximum number of connections to a pool.

import multiprocessing
import time

def worker(s, i):
    s.acquire()
    print(multiprocessing.current_process().name + " acquire")
    time.sleep(i)
    print(multiprocessing.current_process().name + " release\n")
    s.release()

if __name__ == "__main__":
    s = multiprocessing.Semaphore(2)
    for i in range(5):
        p = multiprocessing.Process(target=worker, args=(s, i * 2))
        p.start()

Results:

Process-1 acquire
Process-1 release

Process-2 acquire
Process-3 acquire
Process-2 release

Process-5 acquire
Process-3 release

Process-4 acquire
Process-5 release

Process-4 release
4.Event

Event is used to implement inter-process synchronous communication.

import multiprocessing
import time

def wait_for_event(e):
    print("wait_for_event: starting")
    e.wait()
    print("wait_for_event: e.is_set() -> " + str(e.is_set()))

def wait_for_event_timeout(e, t):
    print("wait_for_event_timeout: starting")
    e.wait(t)
    print("wait_for_event_timeout: e.is_set() -> " + str(e.is_set()))

if __name__ == "__main__":
    e = multiprocessing.Event()
    w1 = multiprocessing.Process(name="block",
                                 target=wait_for_event,
                                 args=(e,))
    w2 = multiprocessing.Process(name="non-block",
                                 target=wait_for_event_timeout,
                                 args=(e, 2))
    w1.start()
    w2.start()

    time.sleep(3)

    e.set()
    print("main: event is set")

Results

wait_for_event: starting
wait_for_event_timeout: starting
wait_for_event_timeout: e.is_set() -> False
main: event is set
wait_for_event: e.is_set() -> True
5.Queue

A Queue is a multi-process-safe queue that can be used to transfer data between processes. The put method inserts data into the queue; it has two optional parameters, block and timeout. If block is True (the default) and timeout is a positive value, the method blocks for at most the time specified by timeout, waiting for the queue to have free space; if it times out, a queue.Full exception is raised. If block is False and the queue is full, a queue.Full exception is raised immediately.

The get method reads and removes one element from the queue. It likewise has the two optional parameters block and timeout. If block is True (the default) and timeout is a positive value, and no element becomes available within the wait time, a queue.Empty exception is raised. If block is False, there are two cases: if a value is available, it is returned immediately; otherwise a queue.Empty exception is raised immediately. Sample code for Queue:
import multiprocessing
import queue

def writer_proc(q):
    try:
        q.put(1, block=False)
    except queue.Full:
        pass

def reader_proc(q):
    try:
        print(q.get(block=False))
    except queue.Empty:
        pass

if __name__ == "__main__":
    q = multiprocessing.Queue()

    writer = multiprocessing.Process(target=writer_proc, args=(q,))
    writer.start()

    reader = multiprocessing.Process(target=reader_proc, args=(q,))
    reader.start()

    reader.join()
    writer.join()

Results

1
6.Pipe

The Pipe method returns a pair (conn1, conn2) representing the two ends of a pipe. Pipe has a duplex parameter: if duplex is True (the default), the pipe is full duplex, i.e. both conn1 and conn2 can send and receive. If duplex is False, conn1 can only receive messages and conn2 can only send them.

The send and recv methods send and receive messages, respectively. For example, in full-duplex mode you can call conn1.send to send a message and conn1.recv to receive one. If there is no message to receive, recv blocks; if the pipe has been closed, recv raises EOFError.
import multiprocessing
import time

def proc1(pipe):
    while True:
        for i in range(10000):
            print("send: %s" % i)
            pipe.send(i)
            time.sleep(1)

def proc2(pipe):
    while True:
        print("proc2 rev:", pipe.recv())
        time.sleep(1)

def proc3(pipe):
    while True:
        print("proc3 rev:", pipe.recv())
        time.sleep(1)

if __name__ == "__main__":
    pipe = multiprocessing.Pipe()
    p1 = multiprocessing.Process(target=proc1, args=(pipe[0],))
    p2 = multiprocessing.Process(target=proc2, args=(pipe[1],))
    #p3 = multiprocessing.Process(target=proc3, args=(pipe[1],))

    p1.start()
    p2.start()
    #p3.start()

    p1.join()
    p2.join()
    #p3.join()

Results

7.Pool

When using Python for system administration, especially when operating on many file directories at once or controlling several hosts remotely, parallel execution can save a lot of time. When the number of targets is small, you can use multiprocessing.Process directly to spawn the processes dynamically; a dozen or so is fine. But with hundreds or thousands of targets, limiting the number of processes by hand becomes too cumbersome, and this is where a process pool comes in.
Pool provides a specified number of processes for the user to invoke. When a new request is submitted to the pool, a new process is created to execute it if the pool is not yet full; if the number of processes in the pool has already reached the specified maximum, the request waits until some process in the pool finishes, and a process then becomes available to run it.

Example 7.1: Using a process pool

#coding: utf-8
import multiprocessing
import time

def func(msg):
    print("msg:", msg)
    time.sleep(3)
    print("end")

if __name__ == "__main__":
    pool = multiprocessing.Pool(processes=3)
    for i in range(4):
        msg = "hello %d" % i
        pool.apply_async(func, (msg,))   # keeps at most `processes` workers running; a new task starts when one finishes

    print("mark~ mark~ mark~~~~~~~~~~~~~~~~~~~~~~")
    pool.close()
    pool.join()   # call close() before join(), or an error is raised; after close() no new task may be submitted, and join() waits for all child processes to finish
    print("Sub-process(es) done.")

One execution result

mmsg: hark~ mark~ mark~~~~~~~~~~~~~~~~~~~~~~ello 0
msg: hello 1
msg: hello 2
end
msg: hello 3
end
end
end
Sub-process(es) done.

Function Explanation:

    • apply_async(func[, args[, kwds[, callback]]]) is non-blocking, while apply(func[, args[, kwds]]) is blocking (to understand the difference, compare the results of examples 7.1 and 7.2).
    • close() closes the pool so that it accepts no new tasks.
    • terminate() ends the worker processes immediately, without finishing outstanding tasks.
    • join() blocks the main process until the child processes exit; join() must be called after close() or terminate().

Execution note: a process pool is created with 3 processes. The loop over range(4) produces four values [0, 1, 2, 3], so four tasks are submitted to the pool. Because the pool allows only 3 processes, tasks 0, 1, and 2 are dispatched immediately, while task 3 runs only after one of them finishes and frees a process, which is why "msg: hello 3" appears after an "end". Because apply_async is non-blocking, the main process does not wait for the workers: it prints its "mark~ mark~ mark~..." line right after the for loop (interleaved with worker output), and only waits for every child process at pool.join().

Example 7.2: Using a process pool (blocking)

#coding: utf-8
import multiprocessing
import time

def func(msg):
    print("msg:", msg)
    time.sleep(3)
    print("end")

if __name__ == "__main__":
    pool = multiprocessing.Pool(processes=3)
    for i in range(4):
        msg = "hello %d" % i
        pool.apply(func, (msg,))   # apply blocks until each task completes

    print("mark~ mark~ mark~~~~~~~~~~~~~~~~~~~~~~")
    pool.close()
    pool.join()   # call close() before join(), or an error is raised; join() waits for all child processes to finish
    print("Sub-process(es) done.")

Results of one execution

msg: hello 0
end
msg: hello 1
end
msg: hello 2
end
msg: hello 3
end
mark~ mark~ mark~~~~~~~~~~~~~~~~~~~~~~
Sub-process(es) done.

Example 7.3: Using a process pool and focusing on the results

import multiprocessing
import time

def func(msg):
    print("msg:", msg)
    time.sleep(3)
    print("end")
    return "done " + msg

if __name__ == "__main__":
    pool = multiprocessing.Pool(processes=4)
    result = []
    for i in range(3):
        msg = "hello %d" % i
        result.append(pool.apply_async(func, (msg,)))
    pool.close()
    pool.join()
    for res in result:
        print(":::", res.get())
    print("Sub-process(es) done.")

One execution result

msg: hello 0
msg: hello 1
msg: hello 2
end
end
end
::: done hello 0
::: done hello 1
::: done hello 2
Sub-process(es) done.

Example 7.4: Using multiple process pools

#coding: utf-8
import multiprocessing
import os, time, random

def lee():
    print("\nRun task lee-%s" % os.getpid())   # os.getpid() returns the current process id
    start = time.time()
    time.sleep(random.random() * 10)           # random.random() returns a float in [0, 1)
    end = time.time()
    print('Task lee runs %0.2f seconds.' % (end - start))

def marlon():
    print("\nRun task marlon-%s" % os.getpid())
    start = time.time()
    time.sleep(random.random() * 10)
    end = time.time()
    print('Task marlon runs %0.2f seconds.' % (end - start))

def allen():
    print("\nRun task allen-%s" % os.getpid())
    start = time.time()
    time.sleep(random.random() * 20)
    end = time.time()
    print('Task allen runs %0.2f seconds.' % (end - start))

def frank():
    print("\nRun task frank-%s" % os.getpid())
    start = time.time()
    time.sleep(random.random() * 20)
    end = time.time()
    print('Task frank runs %0.2f seconds.' % (end - start))

if __name__ == '__main__':
    function_list = [lee, marlon, allen, frank]
    print("Parent process %s" % os.getpid())
    pool = multiprocessing.Pool(4)
    for func in function_list:
        pool.apply_async(func)   # submit each function to the pool; a new task starts when a worker frees up
    print('Waiting for all subprocesses done...')
    pool.close()
    pool.join()   # call close() before join(), or an error is raised; join() waits for all child processes to finish
    print('All subprocesses done.')

Results:

Parent process 7704
Waiting for all subprocesses done...

Run task lee-6948

Run task marlon-2896

Run task allen-7304

Run task frank-3052
Task lee runs 1.59 seconds.
Task marlon runs 8.48 seconds.
Task frank runs 15.68 seconds.
Task allen runs 18.08 seconds.
All subprocesses done.