Producer consumer model and queue, process pool


Producer Consumer Model

The main idea is decoupling: the producer and the consumer never interact directly; they only exchange data through a shared buffer. A producer-consumer model can therefore be implemented with queues.

Stack: first in, last out (abbreviated FILO)
Queue: first in, first out (abbreviated FIFO)
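To make the two orderings concrete, here is a minimal sketch of my own using the standard-library queue module (single-process only; it is not from the original post):

import queue

q = queue.Queue()        # FIFO queue
s = queue.LifoQueue()    # LIFO queue, i.e. a stack
for i in (1, 2, 3):
    q.put(i)
    s.put(i)
print([q.get() for _ in range(3)])   # [1, 2, 3] -> first in, first out
print([s.get() for _ in range(3)])   # [3, 2, 1] -> first in, last out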

import queue                 # the standard-library queue cannot pass data between processes
(1) from multiprocessing import Queue    # Queue solves the producer-consumer model; it is process-safe
q = Queue(num)
num: maximum length of the queue
q.get()         # blocking: if there is data, it is returned immediately; if not, block and wait
q.put()         # blocking: if the queue still has room, the item is put immediately; if not, block and wait
q.get_nowait()  # non-blocking: if there is data, it is returned immediately; if not, an error (queue.Empty) is raised
q.put_nowait()  # non-blocking: if the queue still has room, the item is put immediately; if not, an error (queue.Full) is raised
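A minimal sketch of the non-blocking variants (my own example; the Empty/Full exception classes live in the standard queue module):

from multiprocessing import Queue
import queue   # only for the queue.Empty / queue.Full exception classes

q = Queue(2)                 # maximum length 2
q.put(1)
q.put(2)
try:
    q.put_nowait(3)          # queue is full: raise instead of blocking
except queue.Full:
    print('queue is full')

print(q.get())               # 1
print(q.get())               # 2
try:
    q.get_nowait()           # queue is empty: raise instead of blocking
except queue.Empty:
    print('queue is empty')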

(2) from multiprocessing import JoinableQueue    # a joinable queue
JoinableQueue inherits from Queue, so all of the Queue methods above are available.
q.join()        # used by the producer: blocks until q.task_done() has been called for every item, which tells the producer how much of its data the consumers have consumed
q.task_done()   # used by the consumer: called once for each item consumed from the queue; it sends an acknowledgement back to join()
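A minimal sketch of the join()/task_done() handshake, assuming one producer and one consumer (the function names are illustrative):

from multiprocessing import JoinableQueue, Process

def consumer(q, name):
    while True:
        item = q.get()
        print('%s consumed %s' % (name, item))
        q.task_done()                    # acknowledge one item back to join()

def producer(q):
    for i in range(5):
        q.put('doll No. %s' % i)
    q.join()                             # block until every item has been task_done()'d

if __name__ == '__main__':
    q = JoinableQueue()
    c = Process(target=consumer, args=(q, 'Alex'))
    c.daemon = True                      # the consumer loops forever, so let it end with the main process
    c.start()
    p = Process(target=producer, args=(q,))
    p.start()
    p.join()                             # once the producer returns, everything has been consumed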

Use of callback functions:
The return value of a process's task function is received as the argument of the callback function for further processing.
The callback function is called by the main process, not the child process; the child process is only responsible for passing the result to the callback (see the apply_async sketch further below).
from multiprocessing import Queue, Process

# consumer
def con(q, name):
    while 1:
        info = q.get()                # consume
        if info:                      # print if there is data, otherwise break
            print('%s took %s' % (name, info))
        else:
            break

# producer
def sh(q, product):
    for i in range(10):
        info = product + ' doll No. %s' % str(i)
        print(info)
        q.put(info)                   # produce
    q.put(None)                       # closing flag for the consumer

if __name__ == '__main__':
    q = Queue(10)                     # queue length 10 (you can set it yourself)
    p = Process(target=sh, args=(q, 'Nobita'))
    p_1 = Process(target=con, args=(q, 'Alex'))
    p.start()                         # start the producer child process
    p_1.start()                       # start the consumer child process

Producer-consumer model implemented with a Queue
Module: from multiprocessing import Queue, Process
from multiprocessing import Queue, Process

def xiao(q, name, color):             # color here is the colour escape code passed in
    while 1:
        ret = q.get()                 # consume: q.get()
        if ret:
            print('%s%s took %s\033[0m' % (color, name, ret))   # the colour code goes at the front of the printed text
        else:
            break                     # when the consumer gets None from the queue, that is the producer's sign that no more data will be produced, so the consumer stops consuming

def sheng(q, ban):
    for i in range(0, 12):
        ret = ban + ' doll No. %s' % str(i)
        # print(ret)
        q.put(ret)                    # produce: q.put(variable)

if __name__ == '__main__':
    q = Queue(15)                     # queue length
    p2 = Process(target=sheng, args=(q, 'Little Bear'))      # producer child process
    p1 = Process(target=xiao, args=(q, 'Ko', '\033[31m'))
    p1_1 = Process(target=xiao, args=(q, 'LP', '\033[33m'))   # the colour start code goes in front of the printed text; \033[0m at the end resets it
    p_p = [p1_1, p1, p2]
    [i.start() for i in p_p]          # let the two consumers take turns consuming
    p2.join()                         # the main process blocks until the producer child process finishes producing, then continues
    q.put(None)                       # one closing flag (None) per consumer
    q.put(None)

Shared memory between processes
The value seen by the main process is the same as the value seen by the child process.
Usage: m = Manager()
num = m.dict({key: value})
num = m.list([...])
from multiprocessing import Process, Manager

def func(num):
    num[0] -= 1
    print('the num value in the child process is', num)

if __name__ == '__main__':
    m = Manager()
    num = m.list([10])                # shared memory, so the main process sees the same value as the child process (any starting value works)
    p = Process(target=func, args=(num,))
    p.start()
    p.join()
    print('the num value in the main process is', num)
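The example above only demonstrates m.list(); here is a minimal sketch of m.dict() under the same idea (the key name 'count' and the Lock are my own additions, the Lock guarding against lost updates when several children increment at once):

from multiprocessing import Process, Manager, Lock

def work(d, lock):
    with lock:                 # guard the read-modify-write so increments are not lost
        d['count'] += 1

if __name__ == '__main__':
    m = Manager()
    d = m.dict({'count': 0})   # dict shared between the main process and the children
    lock = Lock()
    ps = [Process(target=work, args=(d, lock)) for _ in range(5)]
    [p.start() for p in ps]
    [p.join() for p in ps]
    print(d)                   # {'count': 5}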

Process Pool
Three methods of the process pool
(1) map(func, iterable)
func: the task function executed by a process in the pool
iterable: an iterable whose elements are passed, one at a time, to the task function as its argument
(2) apply(func, args=()): synchronous, i.e. the processes in the pool execute the tasks one at a time
func: the task function executed by a process in the pool
args: a tuple of arguments passed to the task function
close and join are not required when tasks are handled synchronously
When tasks are handled synchronously, all processes in the pool are ordinary (non-daemon) processes, and the main process waits for them to finish executing

(3) apply_async(func, args=(), callback=None): asynchronous, i.e. the processes in the pool work on tasks concurrently
func: the task function executed by a process in the pool
args: a tuple of arguments passed to the task function
callback: callback function; whenever a process in the pool finishes a task, the returned result can be passed to the callback function for further processing
The callback only applies to asynchronous calls, not synchronous ones. When tasks are handled asynchronously, all processes in the pool are daemon processes (they are terminated once the main process code finishes executing),
so when working with tasks asynchronously you must call close and join.
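A minimal sketch of apply_async with a callback (the function names task and collect are my own):

from multiprocessing import Pool
import os

def task(num):
    return num * 2                       # runs in a pool (child) process

def collect(result):
    # runs back in the main process and receives task()'s return value
    print('callback got %s in pid %s' % (result, os.getpid()))

if __name__ == '__main__':
    p = Pool(4)
    for i in range(5):
        p.apply_async(task, args=(i,), callback=collect)
    p.close()                            # no more tasks will be submitted
    p.join()                             # wait for the pool to finish (required for async)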


map return value:

from multiprocessing import Pool

def func(num):
    num += 1
    print(num)
    return num

if __name__ == '__main__':
    p = Pool()
    res = p.map(func, [i for i in range(10)])
    p.close()
    p.join()
    print('map return values in the main process', res)

Process pool handling tasks asynchronously (async: multiple processes are opened and handle multiple tasks at the same time)
from multiprocessing import Pool
import time

def func(num):
    num += 1
    return num

if __name__ == '__main__':
    p = Pool(5)                       # set the number of processes (preferably one more than your computer's core count)
    start = time.time()
    l = []
    for i in range(10000):
        res = p.apply_async(func, args=(i,))   # handle the 10,000 tasks asynchronously: the 5 pool processes each take a task, and as soon as one finishes it immediately picks up the next
        l.append(res)                          # res is an AsyncResult object; the value itself is read with res.get()
    p.close()
    p.join()
    print(time.time() - start)
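The list l above holds AsyncResult objects rather than the values themselves. A minimal, self-contained sketch of my own showing how the results come back through .get():

from multiprocessing import Pool

def square(n):
    return n * n

if __name__ == '__main__':
    p = Pool(3)
    results = [p.apply_async(square, args=(i,)) for i in range(5)]
    p.close()
    p.join()
    print([r.get() for r in results])    # .get() returns each task's return value: [0, 1, 4, 9, 16]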

Process pool handling tasks synchronously (sync: although there are multiple processes, the tasks are still handled one at a time, one process after another)
from multiprocessing import Pool
import time

def func(num):
    num += 1
    return num

if __name__ == '__main__':
    p = Pool(5)                       # open 5 processes
    start = time.time()               # note the time before submitting the tasks
    l = []
    for i in range(10000):
        res = p.apply(func, args=(i,))   # synchronous: although there are five processes, the tasks are still handled one at a time
        l.append(res)                    # put the 10,000 results in a list
    print(l)
    print(time.time() - start)           # finish time minus the time before the tasks were submitted

Comparison of the efficiency of synchronous and asynchronous processing
from multiprocessing import Pool
import requests
import time

def func(url):
    res = requests.get(url)
    print(res.text)
    if res.status_code == 200:
        return 'OK'

if __name__ == '__main__':
    p = Pool(5)
    l = ['https://www.baidu.com',
         'http://www.jd.com',
         'http://www.taobao.com',
         'http://www.mi.com',
         'http://www.cnblogs.com',
         'https://www.bilibili.com',]
    start = time.time()
    for i in l:
        p.apply(func, args=(i,))
    apply_ = time.time() - start

    start = time.time()
    for i in l:
        p.apply_async(func, args=(i,))
    p.close()
    p.join()
    print('the synchronous time is %s, and the asynchronous time is %s' % (apply_, time.time() - start))
