The multiprocessing module supports two main forms of interprocess communication: pipes and queues. Both are implemented using message passing, but the queue interface deliberately mimics the way queues are commonly used in threaded programs.
For examples of queue programming, see the code later in this post.
Queue([maxsize])
Creates a shared process queue. maxsize is the maximum number of items allowed in the queue; if this argument is omitted, there is no size limit. The underlying queue is implemented with pipes and locks. In addition, a support thread is started to transfer data from the queue's buffer into the underlying pipe.
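As a minimal sketch (not from the original text), creating a queue and sharing it with a child process might look like this; the worker function name is made up for illustration:

import multiprocessing

def worker(q):
    # Illustrative worker: take one item off the shared queue and print it
    print(q.get())

if __name__ == '__main__':
    q = multiprocessing.Queue()                       # no maxsize, so unlimited
    p = multiprocessing.Process(target=worker, args=(q,))
    p.start()
    q.put('hello')                                    # delivered through the underlying pipe
    p.join()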
An instance q of Queue has the following methods:
q.cancel_join_thread()
Prevents the background thread from being joined automatically when the process exits. This keeps the join_thread() method from blocking.
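For example, a hypothetical producer that can tolerate losing undelivered data on exit might use it like this (a sketch; fire_and_forget is an illustrative name, not a real API):

def fire_and_forget(q, items):
    # Hypothetical helper: push items and exit without waiting for delivery
    for item in items:
        q.put(item)
    # Do not block process exit waiting for the feeder thread to flush;
    # any data still buffered may be lost
    q.cancel_join_thread()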
q.close()
Closes the queue, preventing any more data from being added to it. When this method is called, the background thread continues writing any data that has already been queued but not yet written, and then shuts down as soon as it finishes. This method is called automatically if q is garbage collected. Closing a queue does not generate any kind of end-of-data signal or exception for queue consumers. For example, if a consumer is blocked on a get() operation, closing the queue in the producer does not cause get() to return with an error.
q.empty()
Returns True if q is empty at the time of the call. If other processes or threads are adding items to the queue, the result is unreliable; that is, new items may have been added to the queue between the time the result is returned and the time it is used.
q.full()
Returns True if q is full. Because of threading, the result can also be unreliable (see the q.empty() method).
q.get([block [, timeout]])
Returns an item from q. If q is empty, this method blocks until an item becomes available. block controls the blocking behavior and is True by default. If it is set to False, a queue.Empty exception (defined in the queue module) is raised instead. timeout is an optional timeout used in blocking mode; if no item becomes available within the specified interval, a queue.Empty exception is raised.
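A small sketch of both forms; since nothing else is putting items on this queue, both calls raise queue.Empty:

import multiprocessing
import queue  # the Empty exception is defined in the standard queue module

q = multiprocessing.Queue()

try:
    item = q.get(block=False)        # the queue is empty, so this raises immediately
except queue.Empty:
    item = None

try:
    item = q.get(True, timeout=2.0)  # block for at most 2 seconds
except queue.Empty:
    print("no item arrived within 2 seconds")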
q.get_nowait()
Equivalent to q.get(False).
q.join_thread()
Joins the queue's background thread. This method is used after q.close() has been called to wait until all queued data has been flushed into the pipe. By default, this method is called on exit by every process that is not the original creator of q. Calling the q.cancel_join_thread() method disables this behavior.
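As a sketch (the produce function name is made up for illustration), a child process can use close() and join_thread() to make sure everything it queued has reached the pipe before it exits:

import multiprocessing

def produce(q):
    # Illustrative producer: queue a few items, then flush before exiting
    for i in range(5):
        q.put(i)
    q.close()        # no more items will be added from this process
    q.join_thread()  # block until the feeder thread has written everything to the pipe

if __name__ == '__main__':
    q = multiprocessing.Queue()
    p = multiprocessing.Process(target=produce, args=(q,))
    p.start()
    for _ in range(5):
        print(q.get())
    p.join()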
q.put(item [, block [, timeout]])
Puts item on the queue. If the queue is full, this method blocks until space becomes available. block controls the blocking behavior and defaults to True. If it is set to False, a queue.Full exception (defined in the queue module) is raised instead. timeout specifies how long to wait for free space in blocking mode; a queue.Full exception is raised after the timeout expires.
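A minimal sketch of non-blocking and timed puts on a bounded queue; since nothing consumes the items here, both calls raise queue.Full:

import multiprocessing
import queue  # the Full exception is defined in the standard queue module

q = multiprocessing.Queue(maxsize=2)
q.put('a')
q.put('b')                           # the queue is now full

try:
    q.put('c', block=False)          # raises immediately because no space is free
except queue.Full:
    print("queue is full")

try:
    q.put('c', True, timeout=1.0)    # wait at most 1 second for free space
except queue.Full:
    print("still full after 1 second")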
q.qsize()
Returns the number of items currently in the queue. The result is unreliable because items may be added to or removed from the queue between the time the result is returned and the time it is used later in the program. On some systems, this method may raise a NotImplementedError exception.
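A defensive sketch for this caveat (approximate_size is an illustrative helper, not part of the API):

import multiprocessing

def approximate_size(q):
    # qsize() raises NotImplementedError on platforms such as macOS where
    # the underlying sem_getvalue() is unavailable; treat that as "unknown"
    try:
        return q.qsize()
    except NotImplementedError:
        return None

if __name__ == '__main__':
    q = multiprocessing.Queue()
    q.put('x')
    print(approximate_size(q))   # typically 1, or None where qsize() is unsupported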
JoinableQueue([maxsize])
Creates a joinable shared process queue. This is just like a Queue, except that the queue allows a consumer of an item to notify the producer that the item has been processed successfully. The notification mechanism is implemented using a shared semaphore and condition variable.
In addition to the methods of a Queue object, a JoinableQueue instance q has the following methods:
q.task_done()
Used by a consumer to signal that an item returned by q.get() has been processed. A ValueError exception is raised if this method is called more times than items have been removed from the queue.
q.join()
Used by a producer to block until all items placed on the queue have been processed. The call blocks until q.task_done() has been called for every item put on the queue.
The following example shows how to set up a process that runs forever, consuming and processing items from a queue. The producer puts items onto the queue and waits for them to be processed.
import multiprocessing

def consumer(input_q):
    while True:
        item = input_q.get()
        # Process the item
        print(item)              # Replace this with useful work
        # Signal that the task is complete
        input_q.task_done()

def producer(sequence, output_q):
    for item in sequence:
        # Put the item on the queue
        output_q.put(item)

# Set up the processes
if __name__ == '__main__':
    q = multiprocessing.JoinableQueue()

    # Run the consumer process
    cons_p = multiprocessing.Process(target=consumer, args=(q,))
    cons_p.daemon = True         # Make this process run in the background
    cons_p.start()

    # Produce items. sequence represents the sequence of items to be sent
    # to the consumer. In practice, this could be the output of a generator
    # or produced in some other way.
    sequence = [1, 2, 3, 4]
    producer(sequence, q)

    # Wait for all items to be processed
    q.join()
If you want, multiple processes can put items on the same queue, and multiple processes can get items from the same queue. For example, to build a pool of consumer processes, you could write code like the following:
# Only the main section has been modified here
if __name__ == '__main__':
    q = multiprocessing.JoinableQueue()

    # Create some consumer processes
    cons_p1 = multiprocessing.Process(target=consumer, args=(q,))
    cons_p1.daemon = True
    cons_p1.start()

    cons_p2 = multiprocessing.Process(target=consumer, args=(q,))
    cons_p2.daemon = True
    cons_p2.start()

    # Produce items. sequence represents the sequence of items to be sent
    # to the consumers. In practice, this could be the output of a generator
    # or produced in some other way.
    sequence = [1, 2, 3, 4]
    producer(sequence, q)

    # Wait for all items to be processed
    q.join()
In some applications, a producer needs to notify consumers that it will produce no more items and that they should shut down. Code for this should use a sentinel, a special value that signals completion. The following example illustrates the concept using None as the sentinel:
import multiprocessing

def consumer(input_q):
    while True:
        item = input_q.get()
        if item is None:
            break
        # Process the item
        print(item)
    # Shut down
    print("Consumer done")

def producer(sequence, output_q):
    for item in sequence:
        # Put the item on the queue
        output_q.put(item)

if __name__ == '__main__':
    q = multiprocessing.Queue()

    # Start the consumer process
    cons_p = multiprocessing.Process(target=consumer, args=(q,))
    cons_p.start()

    # Produce items
    sequence = [1, 2, 3, 4]
    producer(sequence, q)

    # Put the sentinel on the queue to signal completion
    q.put(None)

    # Wait for the consumer process to shut down
    cons_p.join()
If you use a sentinel as in the example above, be sure to put one sentinel on the queue for every consumer. For example, if three consumer processes are taking items from the queue, the producer needs to put three sentinels on the queue to get all of the consumers to shut down, as the following sketch shows.
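Reusing the consumer() and producer() functions from the sentinel example above, a main section with three consumers might look like this (a sketch, not from the original):

if __name__ == '__main__':
    q = multiprocessing.Queue()

    # Start three consumer processes
    consumers = [multiprocessing.Process(target=consumer, args=(q,))
                 for _ in range(3)]
    for c in consumers:
        c.start()

    # Produce items
    producer([1, 2, 3, 4], q)

    # One sentinel per consumer, so every consumer eventually sees a None
    for _ in consumers:
        q.put(None)

    # Wait for all consumers to shut down
    for c in consumers:
        c.join()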