Mutex and interprocess communication


I. Mutual exclusion lock (mutex)

Processes are isolated from one another, but they share the same file system, so processes can communicate directly through files. The problem is that this access must be protected with a lock.

Note: the purpose of the lock is to ensure that when multiple processes modify the same piece of data, only one of them modifies it at a time; in other words, the modifications are serialized. Yes, this is slower: we sacrifice speed to guarantee data safety.
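To make this concrete, here is a minimal sketch (not from the original article) of serializing a read-modify-write with a Lock; the file name counter.txt and the helper names add_one/run are arbitrary:

```python
from multiprocessing import Process, Lock

def add_one(lock, path):
    with lock:  # only one process at a time may read-modify-write the file
        with open(path) as f:
            n = int(f.read())
        with open(path, 'w') as f:
            f.write(str(n + 1))

def run(n_procs, path='counter.txt'):
    with open(path, 'w') as f:
        f.write('0')
    lock = Lock()
    procs = [Process(target=add_one, args=(lock, path)) for _ in range(n_procs)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    with open(path) as f:
        return int(f.read())

if __name__ == '__main__':
    # with the lock the final count is always n_procs; without it, updates can be lost
    print(run(10))
```

Without the `with lock:` block, two processes can read the same value and both write back `n + 1`, losing an increment.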

1. The toilet analogy

First, an easy-to-understand example: the toilet at home. You lock the door after you go in, so the toilet door is effectively a mutex. While you are inside, anyone else who needs the toilet can only wait at the door.

from multiprocessing import Process, Lock
import os
import time

def work(mutex):
    mutex.acquire()  # lock
    print('task[%s] entering the toilet' % os.getpid())
    time.sleep(3)
    print('task[%s] leaving the toilet' % os.getpid())
    mutex.release()  # unlock

if __name__ == '__main__':
    mutex = Lock()  # instantiate the mutex
    p1 = Process(target=work, args=(mutex,))
    p2 = Process(target=work, args=(mutex,))
    p3 = Process(target=work, args=(mutex,))
    p1.start()
    p2.start()
    p3.start()
    print('start...')

2. Simulating a ticket grab

# The file db.txt contains: {"count": 1}  # the ticket count can be anything you like
# Note: be sure to use double quotes, otherwise json will not parse it

from multiprocessing import Process, Lock
import json
import time
import random
import os

def search():  # check the remaining tickets
    dic = json.load(open('db.txt'))
    print('tickets left: %s' % dic['count'])

def get_ticket():  # buy a ticket
    dic = json.load(open('db.txt'))
    if dic['count'] > 0:
        dic['count'] -= 1
        json.dump(dic, open('db.txt', 'w'))
        print('%s purchased a ticket' % os.getpid())

def task(mutex):  # the whole purchase flow
    search()
    time.sleep(random.randint(1, 3))  # simulate the time the tedious purchase steps take
    mutex.acquire()
    get_ticket()
    mutex.release()

if __name__ == '__main__':
    mutex = Lock()
    for i in range(10):  # number of concurrent buyers; any value works
        p = Process(target=task, args=(mutex,))
        p.start()

II. Other attributes of the Process object

1. daemon: daemon processes

p.daemon: defaults to False. If set to True, p becomes a daemon process running in the background: when p's parent process terminates, p terminates with it. A daemon process cannot create child processes of its own, and the attribute must be set before p.start().

Example:

from multiprocessing import Process
import os
import time

def work():
    print('%s is working' % os.getpid())
    time.sleep(3)
    print('%s is ending' % os.getpid())

if __name__ == '__main__':
    p1 = Process(target=work)
    p2 = Process(target=work)
    p3 = Process(target=work)
    p1.daemon = True
    p2.daemon = True
    p3.daemon = True
    p1.start()
    p2.start()
    p3.start()
    time.sleep(2)
    print('start...')  # the daemons are killed here, before they can print their ending lines

2. join: waiting for child processes

p.join([timeout]): the main process waits for p to terminate (emphasis: the main process blocks while p keeps running). timeout is an optional timeout in seconds. Note that you can only join a process launched with p.start(), not one invoked directly via p.run().

Example:

from multiprocessing import Process
import os
import time

def work():
    print('%s is working' % os.getpid())
    time.sleep(3)

if __name__ == '__main__':
    p1 = Process(target=work)
    p2 = Process(target=work)
    p3 = Process(target=work)
    p1.daemon = True
    p2.daemon = True
    p3.daemon = True
    p1.start()  # start 1
    p2.start()  # start 2
    p3.start()  # start 3
    p3.join()
    p1.join()
    p2.join()
    print('continue running with the results of the child processes')

3. terminate, is_alive, name, pid

p.terminate(): forcibly terminates p without performing any cleanup. If p created child processes, they become zombie processes, so use this method with special care. If p holds a lock, the lock is never released, which can cause a deadlock.
p.is_alive(): returns True if p is still running.
p.name: the name of the process.
p.pid: the pid of the process.

Example:

from multiprocessing import Process
import os
import time

def work():
    print('%s is working' % os.getpid())
    time.sleep(3)

if __name__ == '__main__':
    p1 = Process(target=work)
    p2 = Process(target=work)
    p3 = Process(target=work)
    p1.start()  # start 1
    p2.start()  # start 2
    p3.start()  # start 3
    p1.terminate()  # not recommended
    print(p1.is_alive())  # the process has been told to terminate, but the OS needs time to do it, so this is still True
    print(p1.name)  # if no name is given, processes default to Process-1, Process-2, ... in creation order
    print(p2.name)
    print(p1.pid)  # the child's pid (the parent's own pid is os.getpid())
    print('continue running')

III. Inter-process communication

We have already seen one way for processes to share data directly: shared files. Sharing data this way requires handling synchronization and locking yourself. Moreover, a file is an abstraction provided by the operating system, which serves as the medium for this kind of communication.

But in fact the multiprocessing module provides us with message-based IPC mechanisms: queues and pipes. The queue is itself implemented on top of a pipe plus a lock, and it frees us from complex locking problems. We should avoid shared state and prefer message passing and queues, sidestepping complex synchronization and locking; as the number of processes grows, this usually scales better, too.

1. Interprocess communication (IPC), method one: Queue (recommended)

Processes are isolated from each other. To implement interprocess communication (IPC), the multiprocessing module supports two forms, queues and pipes, both of which use message passing.

Any (picklable) type of data can be put into a queue.

Queue: first in, first out (FIFO).

1) Import

from multiprocessing import Queue

2) Instantiation

q = Queue(3)  # 3 is the maximum number of items allowed in the queue; omit it for no size limit

3) Main methods

q.put(item, block=True, timeout=None): inserts an item into the queue. If block is True (the default) and timeout is a positive number, the call blocks for at most timeout seconds until the queue has free space; if it times out, a queue.Full exception is raised. If block is False and the queue is full, queue.Full is raised immediately.
q.get(block=True, timeout=None): reads and removes an item from the queue. If block is True (the default) and timeout is a positive number, the call blocks for at most timeout seconds waiting for an item; if none arrives, a queue.Empty exception is raised. If block is False, an available item is returned immediately; otherwise queue.Empty is raised immediately.
q.get_nowait(): same as q.get(False).
q.put_nowait(item): same as q.put(item, False).
q.empty(): returns True if q is empty at the moment of the call. The result is not reliable: an item may be added while True is being returned.
q.full(): returns True if q is full at the moment of the call. The result is unreliable for the same reason: an item may be removed while True is being returned.
q.qsize(): returns the approximate number of items currently in the queue; unreliable for the same reasons as q.empty() and q.full().
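A quick sketch (not from the original article) of the non-blocking behavior described above; note that the Full and Empty exceptions come from the standard-library queue module:

```python
import queue  # multiprocessing raises queue.Full and queue.Empty
from multiprocessing import Queue

q = Queue(1)
q.put('only item')
try:
    q.put('one too many', block=False)
except queue.Full:
    print('full')      # the single slot is already taken
print(q.get())         # a blocking get returns 'only item'
try:
    q.get(block=False)
except queue.Empty:
    print('empty')     # the queue has been drained
```

A non-blocking `get` right after a `put` can also raise `queue.Empty`, because a background feeder thread delivers the data with a slight delay; this is part of why `q.empty()` and friends are documented as unreliable.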

4) Other methods (for reference)

q.close(): closes the queue, preventing more data from being added. The background feeder thread continues writing data that has already been enqueued but not yet flushed, then exits as soon as this method completes. This method is called automatically when q is garbage collected. Closing a queue does not produce any end-of-data signal or exception for consumers: if a consumer is blocked in get(), closing the queue in the producer does not cause get() to return an error.
q.join_thread(): joins the queue's background feeder thread, waiting until all queued items have been flushed. It can only be called after q.close(). By default it is called on exit by every process that is not the original creator of q; calling q.cancel_join_thread() disables this behavior.
q.cancel_join_thread(): do not automatically join the feeder thread when the process exits. This prevents join_thread() from blocking, at the risk of losing queued data.
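As a sketch of how these fit together (worker and run are illustrative names, not from the article), a producer can make its shutdown explicit with close() and join_thread():

```python
from multiprocessing import Process, Queue

def worker(q):
    for i in range(3):
        q.put(i)
    q.close()        # no more items will be put from this process
    q.join_thread()  # wait for the feeder thread to flush everything into the pipe

def run():
    q = Queue()
    p = Process(target=worker, args=(q,))
    p.start()
    p.join()  # safe here: join_thread() in the child guaranteed all items were written
    return [q.get() for _ in range(3)]

if __name__ == '__main__':
    print(run())  # [0, 1, 2]
```

Be careful with the reverse order: joining a child that has put many items on a queue before the parent drains it can deadlock once the underlying pipe buffer fills up.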

5) Application

from multiprocessing import Process, Queue
# 1: any type of data can be put in the queue  2: the queue is FIFO

q = Queue(3)
q.put('first')
q.put('second')
q.put('third')
# q.put('fourth')  # would block: the queue is full
print(q.full())  # full
print(q.get())
print(q.get())
print(q.get())
# print(q.get())  # would block: the queue is empty
print(q.empty())  # empty

# q = Queue(3)
# q.put('first', block=False)
# q.put('second', block=False)
# q.put('third', block=False)
# q.put('fourth', block=True, timeout=3)
# q.get(block=False)
# q.get(block=True, timeout=3)
# q.get_nowait()  # same as q.get(block=False)

6) Producer-consumer model

Using the producer-consumer pattern in concurrent programming solves most concurrency problems. The pattern improves the overall throughput of a program by balancing the production rate against the consumption rate.

In the world of threads, a producer is a thread that produces data and a consumer is a thread that consumes it. In multithreaded development, if the producer is fast and the consumer slow, the producer must wait for the consumer before producing more data; likewise, the consumer must wait when its processing capacity exceeds the producer's. The producer-consumer pattern was introduced to solve this problem.

The producer-consumer pattern breaks the tight coupling between producers and consumers through a container: producers put data into it and consumers take data out of it, without calling each other directly.

Producer-consumer model implemented with a Queue:

from multiprocessing import Process, Queue
import time
import random
import os

def consumer(q):
    while True:
        res = q.get()
        if res is None:
            break  # the producer is done
        time.sleep(random.randint(1, 3))
        print('%s ate %s' % (os.getpid(), res))

def producer(q):
    for i in range(5):
        time.sleep(2)
        res = 'bun %s' % i
        q.put(res)
        print('%s produced %s' % (os.getpid(), res))
    q.put(None)  # signal the consumer to stop

if __name__ == '__main__':
    q = Queue()
    # producer: the cook
    p1 = Process(target=producer, args=(q,))
    # consumer: the eater
    p2 = Process(target=consumer, args=(q,))
    p1.start()
    p2.start()
    p1.join()
    p2.join()
    print('main')

7) Another queue class: JoinableQueue

JoinableQueue([maxsize]): behaves like a Queue object, but the queue allows consumers to notify the producer that an item has been successfully processed. The notification process is implemented with shared semaphores and condition variables.

maxsize is the maximum number of items allowed in the queue; omit it for no size limit.

A JoinableQueue instance q has the same methods as a Queue object, plus:
q.task_done(): the consumer calls this to signal that an item returned by q.get() has been processed. If it is called more times than there were items in the queue, a ValueError is raised.
q.join(): the producer calls this to block until every item in the queue has been processed, i.e. until q.task_done() has been called for each item that was put.
from multiprocessing import Process, JoinableQueue
import time
import random
import os

def consumer(q):
    while True:
        res = q.get()
        time.sleep(random.randint(1, 3))
        print('%s ate %s' % (os.getpid(), res))
        q.task_done()

def product_baozi(q):
    for i in range(5):
        time.sleep(2)
        res = 'bun %s' % i
        q.put(res)
        print('%s produced %s' % (os.getpid(), res))
    q.join()  # block until the consumer has called task_done() for every item

if __name__ == '__main__':
    q = JoinableQueue()
    # producer: the cook
    p1 = Process(target=product_baozi, args=(q,))
    # consumer: the eater
    p4 = Process(target=consumer, args=(q,))
    p4.daemon = True  # the consumer loops forever, so let it die with the main process
    p1.start()
    p4.start()
    p1.join()
    print('main')
from multiprocessing import Process, JoinableQueue
import time
import random
import os

def consumer(q):
    while True:
        res = q.get()
        time.sleep(random.randint(1, 3))
        print('%s ate %s' % (os.getpid(), res))
        q.task_done()

def product_baozi(q):
    for i in range(3):
        time.sleep(2)
        res = 'bun %s' % i
        q.put(res)
        print('%s produced %s' % (os.getpid(), res))
    q.join()

def product_gutou(q):
    for i in range(3):
        time.sleep(2)
        res = 'bone %s' % i
        q.put(res)
        print('%s produced %s' % (os.getpid(), res))
    q.join()

def product_ganshui(q):
    for i in range(3):
        time.sleep(2)
        res = 'swill %s' % i
        q.put(res)
        print('%s produced %s' % (os.getpid(), res))
    q.join()

if __name__ == '__main__':
    q = JoinableQueue()
    # producers: the cooks
    p1 = Process(target=product_baozi, args=(q,))
    p2 = Process(target=product_gutou, args=(q,))
    p3 = Process(target=product_ganshui, args=(q,))
    # consumers: the eaters
    p4 = Process(target=consumer, args=(q,))
    p5 = Process(target=consumer, args=(q,))
    p4.daemon = True
    p5.daemon = True  # daemons stop with the main process, but each producer's q.join() guarantees the consumers have finished every item first
    p_l = [p1, p2, p3, p4, p5]
    for p in p_l:
        p.start()
    p1.join()
    p2.join()
    p3.join()
    print('main')

2. Interprocess communication (IPC), method two: Pipe (not recommended; just know it exists)
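The article stops at the heading, so here is a minimal sketch of a Pipe for reference: Pipe() returns two connection objects, and whatever one end sends, the other end can receive.

```python
from multiprocessing import Process, Pipe

def child(conn):
    conn.send('hello from the child')  # any picklable object can be sent
    conn.close()

if __name__ == '__main__':
    parent_conn, child_conn = Pipe()
    p = Process(target=child, args=(child_conn,))
    p.start()
    print(parent_conn.recv())
    p.join()
```

Unlike a Queue, a Pipe has only two endpoints and no built-in locking, which is part of why the queue is the recommended mechanism above.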

3. Interprocess communication (IPC), method three: shared data (not recommended; just know it exists)
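This heading likewise has no body in the source; a minimal sketch of shared data via a Manager follows (the 'count' key and the helper names work/run are illustrative). Note that even shared data still needs a lock around read-modify-write operations:

```python
from multiprocessing import Process, Manager, Lock

def work(d, lock):
    with lock:  # protect the read-modify-write on the shared dict
        d['count'] -= 1

def run(n=10):
    lock = Lock()
    with Manager() as m:
        d = m.dict({'count': n})
        procs = [Process(target=work, args=(d, lock)) for _ in range(n)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        return d['count']

if __name__ == '__main__':
    print(run())  # 0
```

This is exactly the shared-state style the article advises avoiding in favor of queues: the Manager proxies every access through a server process, and correctness still depends on you locking correctly.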
