One: mutex (mutual exclusion lock, also called a synchronization lock)
Data is not shared between processes, but they do share the same file system, so it is possible for multiple processes to access the same file or the same print terminal.
Unrestricted competition for such shared resources leads to disorder; the way to bring it under control is to use locks.
Part1: Multiple processes sharing the same print terminal
# Concurrent execution: efficient, but the processes compete for the same
# print terminal, so the output gets interleaved
from multiprocessing import Process
import os, time

def work():
    print('%s is running' % os.getpid())
    time.sleep(2)
    print('%s is done' % os.getpid())

if __name__ == '__main__':
    for i in range(3):
        p = Process(target=work)
        p.start()
Concurrent execution: efficient, but competing for the same print terminal makes the output interleaved
# Concurrency becomes serial: we sacrifice efficiency but avoid the competition
from multiprocessing import Process, Lock
import os, time

def work(lock):
    lock.acquire()
    print('%s is running' % os.getpid())
    time.sleep(2)
    print('%s is done' % os.getpid())
    lock.release()

if __name__ == '__main__':
    lock = Lock()
    for i in range(3):
        # note: args must be a tuple, (lock,) not (lock)
        p = Process(target=work, args=(lock,))
        p.start()
Locking: concurrency becomes serial, sacrificing efficiency but avoiding the competition
Part2: Multiple processes sharing the same file
Use a file as the database to simulate ticket grabbing
# The content of the file db.txt is: {"count": 1}
# Note: double quotes are required, or json will not parse it
from multiprocessing import Process, Lock
import time, json, random

def search():
    dic = json.load(open('db.txt'))
    print('\033[43mremaining tickets: %s\033[0m' % dic['count'])

def get():
    dic = json.load(open('db.txt'))
    time.sleep(0.1)  # simulate network latency when reading
    if dic['count'] > 0:
        dic['count'] -= 1
        time.sleep(0.2)  # simulate network latency when writing
        json.dump(dic, open('db.txt', 'w'))
        print('\033[43mticket purchased successfully\033[0m')

def task(lock):
    search()
    get()

if __name__ == '__main__':
    lock = Lock()
    for i in range(100):  # simulate 100 clients grabbing tickets concurrently
        p = Process(target=task, args=(lock,))
        p.start()
Concurrent execution: efficient, but competing to write the same file corrupts the data
# The content of the file db.txt is: {"count": 1}
# Note: double quotes are required, or json will not parse it
from multiprocessing import Process, Lock
import time, json, random

def search():
    dic = json.load(open('db.txt'))
    print('\033[43mremaining tickets: %s\033[0m' % dic['count'])

def get():
    dic = json.load(open('db.txt'))
    time.sleep(0.1)  # simulate network latency when reading
    if dic['count'] > 0:
        dic['count'] -= 1
        time.sleep(0.2)  # simulate network latency when writing
        json.dump(dic, open('db.txt', 'w'))
        print('\033[43mticket purchased successfully\033[0m')

def task(lock):
    search()
    lock.acquire()
    get()
    lock.release()

if __name__ == '__main__':
    lock = Lock()
    for i in range(100):  # simulate 100 clients grabbing tickets concurrently
        p = Process(target=task, args=(lock,))
        p.start()
Locking: the ticket-buying step goes from concurrent to serial, sacrificing efficiency but guaranteeing data safety
Summarize:
A lock guarantees that when multiple processes modify the same piece of data, only one task modifies it at a time, i.e. the modifications are serialized. Yes, this is slower, but it trades speed for data safety.
Although shared files can be used to implement inter-process communication, this approach has two problems:
1. It is inefficient.
2. You have to handle the locking yourself.
For this reason the multiprocessing module provides a message-based IPC mechanism: queues and pipes.
1. Both queues and pipes store the data in memory.
2. A queue is itself implemented on top of a pipe plus locks, which frees us from dealing with complex lock problems ourselves.
We should try to avoid shared state and use message passing and queues whenever possible; this avoids complicated synchronization and locking problems, and usually scales better as the number of processes grows.
Two: other properties of the Process object
Note: on Windows, Process() must be used under the if __name__ == '__main__': guard
As the official documentation explains: "Since Windows has no fork, the multiprocessing module starts a new Python process and imports the calling module. If Process() gets called upon import, then this sets off an infinite succession of new processes (or until your machine runs out of resources). This is the reason for hiding calls to Process() inside if __name__ == '__main__', since statements inside this if-statement will not get called upon import."
Detailed explanation
Two ways to create and start a child process
# Way 1 to start a process:
import time
import random
from multiprocessing import Process

def piao(name):
    print('%s piaoing' % name)
    time.sleep(random.randrange(1, 5))
    print('%s piao end' % name)

if __name__ == '__main__':
    p1 = Process(target=piao, args=('egon',))  # the trailing comma is required
    p2 = Process(target=piao, args=('alex',))
    p3 = Process(target=piao, args=('wupeqi',))
    p4 = Process(target=piao, args=('yuanhao',))
    p1.start()
    p2.start()
    p3.start()
    p4.start()
    print('main process')
Method One
# Way 2 to start a process:
import time
import random
from multiprocessing import Process

class Piao(Process):
    def __init__(self, name):
        super().__init__()
        self.name = name

    def run(self):
        print('%s piaoing' % self.name)
        time.sleep(random.randrange(1, 5))
        print('%s piao end' % self.name)

if __name__ == '__main__':
    p1 = Piao('egon')
    p2 = Piao('alex')
    p3 = Piao('wupeiqi')
    p4 = Piao('yuanhao')
    p1.start()  # start() automatically calls run()
    p2.start()
    p3.start()
    p4.start()
    print('main process')
Method Two
Exercise 1: turn the socket communication program you wrote last week into a concurrent version
from socket import *
from multiprocessing import Process

server = socket(AF_INET, SOCK_STREAM)
server.setsockopt(SOL_SOCKET, SO_REUSEADDR, 1)
server.bind(('127.0.0.1', 8080))
server.listen(5)

def talk(conn, client_addr):
    while True:
        try:
            msg = conn.recv(1024)
            if not msg:
                break
            conn.send(msg.upper())
        except Exception:
            break

if __name__ == '__main__':  # on Windows, processes must be started below this line
    while True:
        conn, client_addr = server.accept()
        p = Process(target=talk, args=(conn, client_addr))
        p.start()
Server Side
from socket import *

client = socket(AF_INET, SOCK_STREAM)
client.connect(('127.0.0.1', 8080))

while True:
    msg = input('>>: ').strip()
    if not msg:
        continue
    client.send(msg.encode('utf-8'))
    msg = client.recv(1024)
    print(msg.decode('utf-8'))
Client side (run multiple clients)
Every client gets its own process on the server. If 10,000 clients connect concurrently, the server has to start 10,000 processes; try starting 10,000, or 100,000, processes on your own machine and see what happens. Workaround: a process pool.
Is there a problem with this implementation???
The join method of the Process object
from multiprocessing import Process
import time
import random

class Piao(Process):
    def __init__(self, name):
        super().__init__()
        self.name = name

    def run(self):
        print('%s is piaoing' % self.name)
        time.sleep(random.randrange(1, 3))
        print('%s is piao end' % self.name)

if __name__ == '__main__':
    p = Piao('egon')
    p.start()
    p.join(0.0001)  # wait for p to stop, but give up after 0.0001 seconds
    print('start')
join: the main process waits for the child process to finish
from multiprocessing import Process
import time
import random

def piao(name):
    print('%s is piaoing' % name)
    time.sleep(random.randint(1, 3))
    print('%s is piao end' % name)

if __name__ == '__main__':
    p1 = Process(target=piao, args=('egon',))
    p2 = Process(target=piao, args=('alex',))
    p3 = Process(target=piao, args=('yuanhao',))
    p4 = Process(target=piao, args=('wupeiqi',))

    p1.start()
    p2.start()
    p3.start()
    p4.start()

    # Some students may wonder: since join waits for a process to end,
    # doesn't the code below make the processes run serially again?
    # Of course not. Be clear about who p.join() makes wait:
    # p1.join() makes the MAIN process wait for p1 to end; what blocks is
    # the main process, not process p1.
    # In detail:
    # - A process starts running as soon as start() is called, so after
    #   p1.start() ... p4.start() the system already has four concurrent
    #   processes.
    # - p1.join() blocks the main process until p1 ends; meanwhile p2, p3
    #   and p4 are still running concurrently.
    # - By the time p1.join() returns, p2, p3 and p4 may already be done,
    #   so p2.join(), p3.join(), p4.join() pass straight through.
    # - So the total time spent in the 4 joins is just the runtime of the
    #   slowest process.
    p1.join()
    p2.join()
    p3.join()
    p4.join()
    print('main process')

    # The start/join calls above can be shortened to:
    # p_l = [p1, p2, p3, p4]
    # for p in p_l:
    #     p.start()
    # for p in p_l:
    #     p.join()
With join, doesn't the program become serial?
Other methods and attributes of the Process object (for reference)
# Other methods of the Process object (1): terminate and is_alive
from multiprocessing import Process
import time
import random

class Piao(Process):
    def __init__(self, name):
        super().__init__()
        self.name = name

    def run(self):
        print('%s is piaoing' % self.name)
        time.sleep(random.randrange(1, 5))
        print('%s is piao end' % self.name)

if __name__ == '__main__':
    p1 = Piao('egon1')
    p1.start()
    p1.terminate()  # ask the process to stop; it does not die immediately,
                    # so is_alive() right afterwards may still see it alive
    print(p1.is_alive())  # likely True
    print('start')
    print(p1.is_alive())  # False by now
terminate and is_alive
from multiprocessing import Process
import time
import random

class Piao(Process):
    def __init__(self, name):
        # self.name = name
        # super().__init__()  # Process's __init__ sets self.name to 'Piao-1',
        #                     # so in this order it would overwrite our
        #                     # self.name = name.
        # To set a name for the process we start, assign after super():
        super().__init__()
        self.name = name

    def run(self):
        print('%s is piaoing' % self.name)
        time.sleep(random.randrange(1, 3))
        print('%s is piao end' % self.name)

if __name__ == '__main__':
    p = Piao('egon')
    p.start()
    print('start')
    print(p.pid)  # view the pid
name and PID
Three: queues (Queue)
Processes are isolated from one another. To implement inter-process communication (IPC), the multiprocessing module supports two forms: queues and pipes, both based on message passing.
Creating the queue class (implemented underneath as a pipe plus locks):
Queue([maxsize]): creates a shared process queue. It is a multi-process-safe queue that can be used to pass data between multiple processes.
parameter Description:
maxsize is the maximum number of items allowed in the queue; if omitted, there is no size limit.
Method Description:
Main methods:
q.put(obj, block=True, timeout=None): inserts obj into the queue. If block is True (the default) and timeout is a positive number, the method blocks for at most timeout seconds waiting for free space; if it times out, a queue.Full exception is raised. If block is False and the queue is full, queue.Full is raised immediately.
q.get(block=True, timeout=None): reads and removes an element from the queue. If block is True (the default) and timeout is a positive number, and no element arrives within the wait time, a queue.Empty exception is raised. If block is False, it either returns a value immediately if one is available, or raises queue.Empty immediately if the queue is empty.
q.get_nowait(): same as q.get(False).
q.put_nowait(obj): same as q.put(obj, False).
q.empty(): returns True if q is empty at the moment of the call. The result is unreliable: items may be added to the queue while True is being returned.
q.full(): returns True if q is full at the moment of the call. The result is unreliable for the same reason: items may be taken out of the queue while True is being returned.
q.qsize(): returns the number of items currently in the queue. Also unreliable, for the same reason as q.empty() and q.full().
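The queue.Full and queue.Empty exceptions mentioned above live in the standard-library queue module, even when they are raised by a multiprocessing.Queue. A minimal sketch:

```python
import queue  # Full and Empty are defined here
from multiprocessing import Queue

q = Queue(2)
q.put('a')
q.put('b')

full_hit = False
try:
    q.put_nowait('c')   # maxsize=2 is reached, so this raises immediately
except queue.Full:
    full_hit = True

got = [q.get(), q.get()]

empty_hit = False
try:
    q.get_nowait()      # nothing left: raises immediately instead of blocking
except queue.Empty:
    empty_hit = True

print(full_hit, got, empty_hit)  # True ['a', 'b'] True
```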
Other methods (Learn):
q.cancel_join_thread(): do not automatically join the background thread when the process exits. This can prevent the join_thread() method from blocking.
q.close(): closes the queue, preventing more data from being added. When called, the background thread keeps flushing the data that has already been enqueued but not yet written, then shuts down as soon as that completes. This method is called automatically if q is garbage collected. Closing a queue does not produce any end-of-data signal or exception for queue consumers: if a consumer is blocked on a get() operation, closing the queue in the producer does not cause get() to return an error.
q.join_thread(): joins the queue's background thread. Used, after q.close() has been called, to wait until all queued items have been flushed. By default this method is called by all processes that are not the original creator of q. Calling q.cancel_join_thread() disables this behavior.
Application:
from multiprocessing import Process, Queue
import time

q = Queue(3)

# put, get, put_nowait, get_nowait, full, empty
q.put(3)
q.put(3)
q.put(3)
print(q.full())  # the queue is full now

print(q.get())
print(q.get())
print(q.get())
print(q.empty())  # the queue is empty now
Four: the producer-consumer model
Producer Consumer Model
Using the producer-consumer pattern in concurrent programming can solve most concurrency problems. The pattern improves a program's overall throughput by balancing the processing power of the producing and consuming threads.
Why use the producer-consumer pattern
In the world of threads, the producer is the thread that produces data and the consumer is the thread that consumes it. In multithreaded development, if the producer is fast and the consumer is slow, the producer must wait for the consumer to catch up before it can continue producing. Likewise, if the consumer's processing power exceeds the producer's, the consumer must wait for the producer. The producer-consumer pattern was introduced to solve this problem.
What is the producer-consumer pattern
The producer-consumer pattern uses a container to break the tight coupling between producers and consumers. They do not communicate with each other directly; instead they communicate through a blocking queue. The producer does not wait for the consumer to finish processing: it simply drops the data into the blocking queue. The consumer does not ask the producer for data: it takes it directly from the blocking queue. The blocking queue acts as a buffer that balances the processing power of producers and consumers.
Implementation of producer consumer model based on queue
From multiprocessing import Process,queueimport time,random,osdef consumer (q): While True: res=q.get () Time.sleep (Random.randint (1,3)) print (' \033[45m%s eat%s\033[0m '% (Os.getpid (), Res)) def producer (q): For I in Range: time.sleep (Random.randint (1,3)) res= ' bun%s '%i q.put (res) print (' \033[44m%s produced%s\033[ 0m '% (Os.getpid (), res)) if __name__ = = ' __main__ ': q=queue () #生产者们: That is , chefs p1=process (Target=producer, args= (q,)) #消费者们: The foodie c1=process (target=consumer,args= (q,)) #开始 p1.start () C1.start () print (' master ')
The problem now is that the main program never ends: producer p finishes producing and exits, but consumer c, once q has been emptied, stays in its infinite loop, stuck on the q.get() step.
The solution is to have the producer put an end signal into the queue once it has finished producing, so that the consumer can break out of its loop upon receiving it.
From multiprocessing import Process,queueimport time,random,osdef consumer (q): While True: res=q.get () If res is none:break #收到结束信号则结束 time.sleep (Random.randint (1,3)) print (' \033[45m%s eat%s\033[0m '% (Os.getpid (), RES)) def producer (q): For i in range: time.sleep (Random.randint (1,3)) res= ' bun%s '%i q.put (RES) print (' \033[44m%s produced%s\033[0m '% (Os.getpid (), res)) q.put (None) #发送结束信号if __name__ = = ' __main__ ': q= Queue () #生产者们: The chefs p1=process (target=producer,args= (q)) #消费者们: The foodie c1=process (target= consumer,args= (q,)) #开始 p1.start () c1.start () print (' master ')
The producer sends the end signal None after production is complete
Note: the end signal None does not have to be sent by the producer; the main process can send it too, but the main process must wait until the producer has finished before sending it.
From multiprocessing import Process,queueimport time,random,osdef consumer (q): While True: res=q.get () If res is none:break #收到结束信号则结束 time.sleep (Random.randint (1,3)) print (' \033[45m%s eat%s\033[0m '% (Os.getpid (), RES)) def producer (q): For I in range (2): time.sleep (Random.randint (1,3)) res= ' bun%s '%i q.put (RES) print (' \033[44m%s produced%s\033[0m '% (Os.getpid (), res)) if __name__ = = ' __main__ ': q=queue () #生产者们: That's the chefs p1=process (target=producer,args= (q)) #消费者们: The foodie c1=process (target=consumer,args= (q)) # Start P1.start () C1.start () p1.join () q.put (None) #发送结束信号 print (' master ')
The main process sends the end signal None after the producer has finished producing
But with the above solution, when there are multiple producers and multiple consumers, we end up with a rather clumsy fix:
from multiprocessing import Process, Queue
import time, random, os

def consumer(q):
    while True:
        res = q.get()
        if res is None:
            break  # end on receiving the end signal
        time.sleep(random.randint(1, 3))
        print('\033[45m%s ate %s\033[0m' % (os.getpid(), res))

def producer(name, q):
    for i in range(2):
        time.sleep(random.randint(1, 3))
        res = '%s %s' % (name, i)
        q.put(res)
        print('\033[44m%s produced %s\033[0m' % (os.getpid(), res))

if __name__ == '__main__':
    q = Queue()
    # the producers: i.e. the chefs
    p1 = Process(target=producer, args=('bun', q))
    p2 = Process(target=producer, args=('bone', q))
    p3 = Process(target=producer, args=('swill', q))
    # the consumers: i.e. the foodies
    c1 = Process(target=consumer, args=(q,))
    c2 = Process(target=consumer, args=(q,))
    # start
    p1.start()
    p2.start()
    p3.start()
    c1.start()
    c2.start()

    p1.join()  # all producers must be done before sending the end signals
    p2.join()
    p3.join()
    # one end signal is needed per consumer; the original sends one per
    # producer, which also works here since there are more producers (3)
    # than consumers (2)
    q.put(None)  # send an end signal
    q.put(None)  # send an end signal
    q.put(None)  # send an end signal
    print('master')
With multiple consumers, one end signal must be sent per consumer: quite clumsy
What we really want is just to send an end signal; another kind of queue, JoinableQueue, provides exactly this mechanism.
JoinableQueue([maxsize]): like a Queue, but it also lets consumers notify the producer that an item has been fully processed. The notification mechanism is implemented with shared semaphores and condition variables.
Parameters: maxsize is the maximum number of items allowed in the queue; if omitted, there is no size limit.
Methods: a JoinableQueue instance q has the same methods as a Queue object, plus:
q.task_done(): called by a consumer to signal that an item returned by q.get() has been processed. If this method is called more times than there were items removed from the queue, a ValueError is raised.
q.join(): called by the producer to block until every item in the queue has been processed. Blocking continues until q.task_done() has been called for each item that was put into the queue.
From multiprocessing import Process,joinablequeueimport time,random,osdef consumer (q): While True:res=q.get () Time.sleep (Random.randint (1,3)) print (' \033[45m%s eat%s\033[0m '% (Os.getpid (), res)) Q.task_done () #向q. J Oin () sends a signal to prove that a data has been taken away Def producer (NAME,Q): For I in range: Time.sleep (Random.randint (1,3)) res= '%s %s '% (name,i) q.put (res) print (' \033[44m%s produced%s\033[0m '% (Os.getpid (), res)) Q.join () if __name__ = = ' __m Ain__ ': Q=joinablequeue () #生产者们: That is, the chefs p1=process (target=producer,args= (' bun ', q)) p2=process (Target=producer,args = (' Bones ', q)) p3=process (target=producer,args= (' swill ', q)) #消费者们: The Foodie c1=process (target=consumer,args= (q)) C2=proce SS (Target=consumer,args= (Q,)) C1.daemon=true c2.daemon=true #开始 p_l=[p1,p2,p3,c1,c2] for P in P_l:p . Start () P1.join () P2.join () P3.join () print (' master ') #主进程等--->p1,p2,p3---->c1,c2 #p1, P2,P3 ended, certificate Ming C1,c2 must be all finished P1,P2,P3 sent to the teamColumn data #因而c1, C2 also has no value, it should end with the end of the main process, so set up as a daemon
Python concurrent programming Multi-process (ii): Mutex (Sync Lock) & process other properties & interprocess communication (Queue) & producer Consumer Model