Pipelines, inter-process data sharing, process pooling

Source: Internet
Author: User

One: Pipes (Learn)

Use: from multiprocessing import Process, Pipe

Knowledge:

1. Create a pipe: Pipe() is full-duplex by default; if duplex is set to False, conn1 can only receive and conn2 can only send.

conn1, conn2 = Pipe()

2. A pipe sends Python objects directly: strings go through as str, with no need to encode them to bytes.

Pipe([duplex]): creates a pipe between processes and returns a tuple (conn1, conn2), where conn1 and conn2 are the connection objects at the two ends of the pipe. Note that the pipe must be created before the process objects are created.

Parameter:
duplex: the pipe is full-duplex by default; if duplex is set to False, conn1 can only be used for receiving and conn2 can only be used for sending.

Main methods:
conn1.recv(): receives the object sent by conn2.send(obj). If there is no message to receive, recv() blocks. If the other end of the connection is closed, recv() raises EOFError.
conn1.send(obj): sends an object to the other end.

Other methods:
conn1.close(): closes the connection. This method is called automatically if conn1 is garbage collected.
conn1.fileno(): returns the integer file descriptor used by the connection.
conn1.poll([timeout]): returns True if data is available on the connection. timeout specifies the maximum time to wait; if omitted, the method returns immediately; if timeout is None, the operation waits indefinitely for data to arrive.
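The poll() behavior can be sketched as follows (a minimal sketch; the child's 0.2-second delay and the 1-second timeout are arbitrary choices for illustration):

```python
import time
from multiprocessing import Pipe, Process

def slow_sender(conn):
    time.sleep(0.2)           # delay so the parent sees poll() return False first
    conn.send('late hello')
    conn.close()

if __name__ == '__main__':
    conn1, conn2 = Pipe()
    Process(target=slow_sender, args=(conn2,)).start()
    print(conn1.poll())       # False: nothing has arrived yet
    print(conn1.poll(1))      # waits up to 1 second for data, then True
    print(conn1.recv())       # 'late hello'
    conn1.close()
```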

Basic usage: one process receives messages, the other sends them.

from multiprocessing import Process, Pipe

def func(conn):
    conn.send('hello')
    conn.close()

if __name__ == '__main__':    # only run when this file is executed directly
    conn1, conn2 = Pipe()
    p = Process(target=func, args=(conn1,))
    p.start()
    print(conn2.recv())
    p.join()

Output: hello

To pass many messages between processes rather than just one, hand both connection objects to the child (the function takes two parameters); each process uses one end and closes its copy of the other.

from multiprocessing import Process, Pipe

def func(conn1, conn2):
    conn2.close()              # the child only receives, so close its copy of the send end
    while True:
        try:
            msg = conn1.recv()
            print(msg)
        except EOFError:       # raised once every send end has been closed
            conn1.close()
            break

if __name__ == '__main__':
    conn1, conn2 = Pipe()
    Process(target=func, args=(conn1, conn2)).start()
    conn1.close()
    for i in range(20):
        conn2.send('hello')
    conn2.close()

Output: 'hello' printed 20 times.

Producer-consumer model implemented with a pipe (the main process starts a producer and several consumer children):

# producer-consumer model implemented with a pipe
from multiprocessing import Lock, Pipe, Process

def producer(con, pro, name, food):
    con.close()
    for i in range(10):
        f = '%s produced %s %s' % (name, food, i)
        print(f)
        pro.send(f)
    pro.send(None)    # one None per consumer signals that production is finished
    pro.send(None)
    pro.send(None)
    pro.close()

def consumer(con, pro, name, lock):
    pro.close()
    while True:
        lock.acquire()
        food = con.recv()
        lock.release()
        if food is None:
            con.close()
            break
        print('%s ate %s' % (name, food))

if __name__ == '__main__':
    con, pro = Pipe()
    lock = Lock()
    p = Process(target=producer, args=(con, pro, 'Egon', 'swill'))
    c1 = Process(target=consumer, args=(con, pro, 'Alex', lock))
    c2 = Process(target=consumer, args=(con, pro, 'Bossjin', lock))
    c3 = Process(target=consumer, args=(con, pro, 'Wusir', lock))
    c1.start()
    c2.start()
    c3.start()
    p.start()
    con.close()
    pro.close()

Note: be sure to use a lock to coordinate the consumers.

# data-safety problem caused by competition among multiple consumers
from multiprocessing import Process, Pipe, Lock

def consumer(produce, consume, name, lock):
    produce.close()
    while True:
        lock.acquire()
        baozi = consume.recv()
        lock.release()
        if baozi is not None:    # compare against None: bun number 0 is falsy but still valid
            print('%s received bun: %s' % (name, baozi))
        else:
            consume.close()
            break

def producer(produce, consume, n):
    consume.close()
    for i in range(n):
        produce.send(i)
    produce.send(None)    # one None per consumer
    produce.send(None)
    produce.close()

if __name__ == '__main__':
    produce, consume = Pipe()
    lock = Lock()
    c1 = Process(target=consumer, args=(produce, consume, 'c1', lock))
    c2 = Process(target=consumer, args=(produce, consume, 'c2', lock))
    p1 = Process(target=producer, args=(produce, consume, 10))
    c1.start()
    c2.start()
    p1.start()
    produce.close()
    consume.close()

Note: data in a pipe is not safe by itself; in practice we usually use JoinableQueue to implement the producer-consumer model.

# Pipe data is not safe by itself (IPC)
# A lock controls how processes operate on the pipe, avoiding the data corruption caused by processes competing for it
# A Queue is safe between processes: queue = pipe + lock
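Since the note above points to JoinableQueue, here is a minimal sketch of the same producer-consumer model built on it (the names and item count are illustrative); the queue handles the locking internally:

```python
from multiprocessing import JoinableQueue, Process

def producer(q, name, food):
    for i in range(5):
        q.put('%s produced %s %s' % (name, food, i))
    q.join()                 # block until every item has been marked task_done()

def consumer(q, name):
    while True:
        item = q.get()
        print('%s ate %s' % (name, item))
        q.task_done()        # tell the queue this item is fully processed

if __name__ == '__main__':
    q = JoinableQueue()
    p = Process(target=producer, args=(q, 'Egon', 'swill'))
    c = Process(target=consumer, args=(q, 'Alex'))
    c.daemon = True          # the consumer dies with the main process
    p.start()
    c.start()
    p.join()                 # the producer returns only after the queue is drained
```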

Two: Process Pool (Focus)

Process pools and the multiprocessing.Pool module (key points)

Why do you have a process pool?

While a program runs, it may need to perform thousands of tasks at busy times and only a few sporadic ones when idle. When thousands of tasks need to be executed, do we need to create thousands of processes? First, creating a process costs time, and destroying one costs time too. Second, even with thousands of processes started, the operating system cannot let them all execute at once, which hurts the program's efficiency. We therefore cannot start and end processes without limit as tasks come and go. So what do we do?

This is where the process pool comes in: define a pool holding a fixed number of processes. When a task arrives, a process is taken from the pool to handle it; when the task finishes, the process is not closed but returned to the pool to wait for the next task. If there are more tasks than processes in the pool, a task simply waits until a previous task completes and an idle process becomes available. In other words, the number of processes in the pool is fixed, so at most that many processes run at any moment. This keeps the operating system's scheduling burden down, saves the time of opening and closing processes, and still achieves a degree of concurrency.

In short: create a process pool to handle many tasks in batches (for example, with a pool of 5, at most 5 tasks are processed at a time, and the next task runs only when one of them finishes). The number of processes in the pool is fixed, so at most that fixed number run simultaneously, which eases OS scheduling, saves process start-up and shutdown time, and achieves concurrency to a certain extent.

How do I create it?

Pool([numprocess [, initializer [, initargs]]]): creates a process pool.

Parameter description:

1. numprocess: the number of worker processes to create; if omitted, the value of os.cpu_count() is used by default
2. initializer: a callable executed by each worker process when it starts; defaults to None
3. initargs: the argument tuple passed to initializer
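A minimal sketch of initializer/initargs (the init_worker function, the log_prefix global, and the doubling task are illustrative, not part of the API):

```python
import os
from multiprocessing import Pool

def init_worker(prefix):
    # runs once in each worker process as it starts
    global log_prefix
    log_prefix = prefix

def work(n):
    print('%s pid=%s n=%s' % (log_prefix, os.getpid(), n))
    return n * 2

if __name__ == '__main__':
    p = Pool(2, initializer=init_worker, initargs=('worker',))
    print(p.map(work, range(4)))   # [0, 2, 4, 6]
    p.close()
    p.join()
```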

Main methods

p.apply() is generally used for synchronous calls.

p.apply_async() is generally used for asynchronous calls.

# Method introduction
# 1. p.apply(func[, args[, kwargs]]): executes func(*args, **kwargs) in a pool worker process and returns the result.
#    Emphasis: this operation does not execute func in every pool worker. To execute func concurrently with
#    different arguments, call p.apply() from different threads or use p.apply_async().
# 2. p.apply_async(func[, args[, kwargs[, callback]]]): executes func(*args, **kwargs) in a pool worker process.
#    The result of this method is an AsyncResult instance. callback is a callable that accepts a single argument;
#    when the result of func becomes available, it is passed to callback. callback must not perform blocking
#    operations, or it will delay the delivery of results from other asynchronous operations.
# 3. p.close(): closes the process pool, preventing further tasks from being submitted. Tasks already submitted
#    are completed before the worker processes terminate.
# 4. p.join(): waits for all worker processes to exit. May only be called after close() or terminate().

Other methods (Learn)

# The return value of apply_async() and map_async() is an AsyncResult instance obj, which has the following methods:
# obj.get([timeout]): returns the result, waiting for it to arrive if necessary. timeout is optional; if the result
#   has not arrived within the given time, multiprocessing.TimeoutError is raised. If the remote operation raised
#   an exception, it is re-raised when this method is called.
# obj.ready(): returns True if the call has completed.
# obj.successful(): returns True if the call completed without raising an exception; raises an exception itself if
#   called before the result is ready.
# obj.wait([timeout]): waits for the result to become available.
# p.terminate(): immediately terminates all worker processes without performing any cleanup or finishing pending
#   work; called automatically if the pool is garbage collected. (Note: terminate() belongs to the pool, not to obj.)
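These AsyncResult methods can be exercised as follows (a minimal sketch; the square task and its 0.2-second delay are arbitrary):

```python
import time
from multiprocessing import Pool

def square(n):
    time.sleep(0.2)
    return n ** 2

if __name__ == '__main__':
    p = Pool(2)
    obj = p.apply_async(square, args=(4,))
    print(obj.ready())        # False: the task is still running
    obj.wait()                # block until the result is available
    print(obj.ready())        # True
    print(obj.successful())   # True: no exception was raised
    print(obj.get())          # 16
    p.close()
    p.join()
```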

Example:

Synchronous call with a process pool:

# synchronous call with a process pool
import os, time                   # system and time modules
from multiprocessing import Pool  # process pool module

def work(n):
    print('%s run' % os.getpid())  # print the worker's pid
    time.sleep(3)
    return n ** 2                  # return n squared

if __name__ == '__main__':
    p = Pool(3)      # create three processes in the pool; these three execute every task
    res_l = []
    for i in range(10):
        # synchronous call: waits until the task finishes and res is available. The task may or
        # may not block inside its process, but either way the synchronous call waits in place.
        res = p.apply(work, args=(i,))
    print(res_l)     # the results were never appended, so this prints []

Printing results:

9656 Run
6536 Run
1492 Run
9656 Run
6536 Run
1492 Run
9656 Run
6536 Run
1492 Run
9656 Run
[]

To collect the return values, just append each res to the list; print(res_l) then gives [0, 1, 4, 9, 16, 25, 36, 49, 64, 81].

# asynchronous call with a process pool
import os
import time
import random
from multiprocessing import Pool

def work(n):
    print('%s run' % os.getpid())
    time.sleep(random.random())
    return n ** 2

if __name__ == '__main__':
    p = Pool(3)      # create three processes in the pool; these three execute every task
    res_l = []
    for i in range(10):
        # asynchronous submit: with 3 processes in the pool, at most 3 tasks run at once.
        # The three workers do not start or finish together; as soon as one finishes a
        # task it is freed and picks up a new one.
        res = p.apply_async(work, args=(i,))
        res_l.append(res)
    # With asynchronous submission, the main process must call close() and join() to wait for
    # the pool to finish, then use get() to collect the results; otherwise the main process may
    # end before the pool has executed its tasks.
    p.close()
    p.join()
    for res in res_l:
        # get() fetches apply_async's result. apply has no get() method, because apply executes
        # synchronously and returns its result immediately.
        print(res.get())

Note: both synchronous and asynchronous calls produce return values.

