Python: producer-consumer models and pipes

Source: Internet
Author: User

I. Why use producers and consumers?

In threaded programs, a producer is a thread that generates data and a consumer is a thread that processes it. In multi-threaded development, if the producer is much faster than the consumer, the producer must wait for the consumer to catch up before it can continue producing; likewise, if the consumer is faster than the producer, the consumer must wait for the producer. The producer-consumer model was introduced to solve this problem.

II. What is the producer-consumer model?

The producer-consumer model uses a container to break the tight coupling between producers and consumers. Producers and consumers never communicate with each other directly; they communicate through a blocking queue. A producer does not wait for a consumer to process its data: it simply puts the data into the blocking queue. A consumer does not ask a producer for data: it simply takes data from the blocking queue. The blocking queue acts as a buffer that balances the processing capacity of producers and consumers.
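The buffering behavior described above can be seen with a bounded queue in a single process. This is a minimal sketch; the capacity of 3 and the item names are chosen purely for illustration:

```python
from multiprocessing import Queue

# put() blocks when the queue is full and get() blocks when it is
# empty, so neither side has to busy-wait on the other.
q = Queue(maxsize=3)        # capacity chosen for illustration
for i in range(3):
    q.put('bun %s' % i)     # a fourth put() here would block until a get()
print(q.get())              # prints: bun 0 -- items leave in FIFO order
print(q.get())              # prints: bun 1
```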

```python
from multiprocessing import Process, Queue
import time, random, os

def consumer(q):
    while True:
        res = q.get()
        if res is None:
            break  # the end signal was received
        time.sleep(random.randint(1, 3))
        print('\033[45m%s ate %s\033[0m' % (os.getpid(), res))

def producer(q):
    for i in range(10):
        time.sleep(random.randint(1, 3))
        res = 'bun %s' % i
        q.put(res)
        print('\033[44m%s produced %s\033[0m' % (os.getpid(), res))
    q.put(None)  # send the end signal

if __name__ == '__main__':
    q = Queue()
    p1 = Process(target=producer, args=(q,))  # producer: the chef
    c1 = Process(target=consumer, args=(q,))  # consumer: the foodie
    p1.start()
    c1.start()
    print('Master')
```
A producer-consumer model implemented with a queue

Note: the end signal None does not have to be sent by the producer; the main process can send it too, but the main process must wait until the producer has finished before sending it.

```python
from multiprocessing import Process, Queue
import time, random, os

def consumer(q):
    while True:
        res = q.get()
        if res is None:
            break  # the end signal was received
        time.sleep(random.randint(1, 3))
        print('\033[45m%s ate %s\033[0m' % (os.getpid(), res))

def producer(q):
    for i in range(2):
        time.sleep(random.randint(1, 3))
        res = 'bun %s' % i
        q.put(res)
        print('\033[44m%s produced %s\033[0m' % (os.getpid(), res))

if __name__ == '__main__':
    q = Queue()
    p1 = Process(target=producer, args=(q,))  # producer: the chef
    c1 = Process(target=consumer, args=(q,))  # consumer: the foodie
    p1.start()
    c1.start()
    p1.join()    # wait until the producer has finished
    q.put(None)  # the main process sends the end signal
    print('Master')
```
The main process sends the None end signal after the producer has finished producing
```python
from multiprocessing import Process, Queue
import time, random, os

def consumer(q):
    while True:
        res = q.get()
        if res is None:
            break  # the end signal was received
        time.sleep(random.randint(1, 3))
        print('\033[45m%s ate %s\033[0m' % (os.getpid(), res))

def producer(name, q):
    for i in range(2):
        time.sleep(random.randint(1, 3))
        res = '%s %s' % (name, i)
        q.put(res)
        print('\033[44m%s produced %s\033[0m' % (os.getpid(), res))

if __name__ == '__main__':
    q = Queue()
    # producers: the chefs
    p1 = Process(target=producer, args=('steamed bun', q))
    p2 = Process(target=producer, args=('bones', q))
    p3 = Process(target=producer, args=('swill', q))
    # consumers: the foodies
    c1 = Process(target=consumer, args=(q,))
    c2 = Process(target=consumer, args=(q,))
    p1.start()
    p2.start()
    p3.start()
    c1.start()
    c2.start()
    # all production must be finished before the end signals are sent
    p1.join()
    p2.join()
    p3.join()
    # one end signal per consumer
    q.put(None)
    q.put(None)
    print('Master')
```
Multiple consumers: one end signal must be sent per consumer
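Not part of the original walkthrough, but worth noting: multiprocessing also provides JoinableQueue, which avoids counting sentinels entirely. Consumers acknowledge each item with task_done(), producers block in q.join() until all their items are acknowledged, and the consumer processes are made daemons so they exit with the main process. A sketch of the same chefs-and-foodies setup under those assumptions:

```python
from multiprocessing import Process, JoinableQueue

def consumer(q):
    while True:
        res = q.get()
        print('ate %s' % res)
        q.task_done()  # tell the queue this item is fully processed

def producer(name, q):
    for i in range(2):
        q.put('%s %s' % (name, i))
    q.join()  # block until every item put here has been task_done()'d

if __name__ == '__main__':
    q = JoinableQueue()
    producers = [Process(target=producer, args=(n, q))
                 for n in ('steamed bun', 'bones', 'swill')]
    consumers = [Process(target=consumer, args=(q,)) for _ in range(2)]
    for c in consumers:
        c.daemon = True  # consumers die with the main process; no sentinels
        c.start()
    for p in producers:
        p.start()
    for p in producers:
        p.join()  # producers return only after all their items are consumed
    print('Master')
```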

III. Pipes

To create a pipeline class:

Pipe([duplex]): creates a pipe between processes and returns a tuple (conn1, conn2), where conn1 and conn2 are the connection objects at the two ends of the pipe. Note that the pipe must be created before the Process objects are created.

Parameters:

duplex: the pipe is full-duplex by default. If duplex is set to False, conn1 can only be used for receiving and conn2 can only be used for sending.
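A minimal sketch of the half-duplex case; the payload string is made up for illustration:

```python
from multiprocessing import Pipe

# With duplex=False the first connection can only receive and the
# second can only send: a one-way pipe.
recv_end, send_end = Pipe(duplex=False)
send_end.send('bun')
print(recv_end.recv())  # prints: bun
```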

Main methods:

conn1.recv(): receives the object sent with conn2.send(obj). If there is no message to receive, recv() blocks. If the other end of the connection is closed, recv() raises EOFError.

conn1.send(obj): sends an object through the connection. obj can be any picklable object.

conn1.close(): closes the connection. This method is called automatically when conn1 is garbage collected.

conn1.fileno(): returns the integer file descriptor used by the connection.

conn1.poll([timeout]): returns True if data is available on the connection. timeout specifies the maximum time to wait. If the argument is omitted, the method returns a result immediately. If timeout is None, the operation waits indefinitely for data to arrive.

conn1.recv_bytes([maxlength]): receives a complete byte message sent with conn2.send_bytes(). maxlength specifies the maximum number of bytes to receive. If the incoming message exceeds this maximum, IOError is raised and no further reads can be made on the connection. If the other end of the connection is closed and no more data remains, EOFError is raised.

conn1.send_bytes(buffer[, offset[, size]]): sends a buffer of bytes through the connection. buffer is any object supporting the buffer interface, offset is the byte offset into the buffer, and size is the number of bytes to send. The data is sent as a single message, which can then be received with conn2.recv_bytes().

conn1.recv_bytes_into(buffer[, offset]): receives a complete byte message and stores it in buffer, which must support the writable buffer interface (e.g. a bytearray or similar object). offset specifies the byte offset at which to place the message in the buffer. The return value is the number of bytes received. If the message is longer than the available buffer space, BufferTooShort is raised.
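The core methods above can be exercised inside a single process. A small sketch; the payload values are made up for illustration:

```python
from multiprocessing import Pipe

a, b = Pipe()            # full-duplex: both ends can send and receive
print(a.poll())          # prints: False -- nothing has been sent yet
b.send({'bun': 1})       # send() accepts any picklable object
print(a.poll(1))         # prints: True -- waits up to 1 s for data
print(a.recv())          # prints: {'bun': 1}
b.send_bytes(b'raw')     # raw bytes skip the pickling step
print(a.recv_bytes())    # prints: b'raw'
b.close()
try:
    a.recv()             # the other end is closed and drained
except EOFError:
    print('EOFError')
```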

Pay particular attention to correct management of the pipe endpoints: if a process does not use one end of the pipe, that end should be closed in that process. This is why the producer closes the receiving end and the consumer closes the sending end.

```python
from multiprocessing import Process, Pipe

def consumer(p, name):
    produce, consume = p
    produce.close()  # the consumer does not use the sending end
    while True:
        try:
            baozi = consume.recv()
            print('%s received bun: %s' % (name, baozi))
        except EOFError:
            break  # every sending end is closed: no more data

def producer(seq, p):
    produce, consume = p
    consume.close()  # the producer does not use the receiving end
    for i in seq:
        produce.send(i)

if __name__ == '__main__':
    produce, consume = Pipe()
    c1 = Process(target=consumer, args=((produce, consume), 'c1'))
    c1.start()
    seq = (i for i in range(10))
    producer(seq, (produce, consume))  # the main process acts as producer
    produce.close()
    consume.close()
    c1.join()
    print('Main Process')
```
A producer-consumer model implemented with a pipe
```python
from multiprocessing import Process, Pipe, Lock

def consumer(p, name, lock):
    produce, consume = p
    produce.close()
    while True:
        lock.acquire()  # only one consumer may read at a time
        baozi = consume.recv()
        lock.release()
        if baozi is not None:
            print('%s received bun: %s' % (name, baozi))
        else:
            consume.close()
            break

def producer(p, n):
    produce, consume = p
    consume.close()
    for i in range(n):
        produce.send(i)
    produce.send(None)  # one end signal per consumer
    produce.send(None)
    produce.close()

if __name__ == '__main__':
    produce, consume = Pipe()
    lock = Lock()
    c1 = Process(target=consumer, args=((produce, consume), 'c1', lock))
    c2 = Process(target=consumer, args=((produce, consume), 'c2', lock))
    p1 = Process(target=producer, args=((produce, consume), 10))
    c1.start()
    c2.start()
    p1.start()
    produce.close()
    consume.close()
    c1.join()
    c2.join()
    p1.join()
    print('Main Process')
```
Competition between multiple consumers makes pipe data unsafe, hence the lock

IV. Data sharing between processes

Even with threads, it is recommended to design the program as a collection of independent workers that exchange data through message queues. This greatly reduces the need for locks and other synchronization primitives, and the design extends naturally to distributed systems. Processes, however, should avoid communicating where possible; when they must communicate, prefer process-safe tools so that locking problems are avoided.

Data between processes is independent, and processes can communicate using queues or pipes, both of which are message-based. Although inter-process data is independent, data can also be shared through a Manager; in fact, a Manager can do much more than that.

```python
from multiprocessing import Manager, Process, Lock

def work(d, lock):
    with lock:  # without the lock, concurrent updates to shared data are lost
        d['count'] -= 1

if __name__ == '__main__':
    lock = Lock()
    with Manager() as m:
        dic = m.dict({'count': 100})
        p_l = []
        for i in range(100):
            p = Process(target=work, args=(dic, lock))
            p_l.append(p)
            p.start()
        for p in p_l:
            p.join()
        print(dic)  # final value: {'count': 0}
```
Sharing data between processes with a Manager
