One Daemon processes
1 What is a daemon process:
A daemon process runs code that assists the main process, and its lifetime is tied to the running cycle of the main process's code. Once the main process's code has finished executing, the daemon is no longer needed, so it terminates automatically at that point.
A daemon process cannot create child processes of its own; attempting to do so raises an exception.
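This restriction is easy to demonstrate. The sketch below is my own illustration (the function names grandchild and child are made up for the example): a daemon process tries to start a grandchild process, which fails with AssertionError: daemonic processes are not allowed to have children.

import multiprocessing
import time

def grandchild():
    pass

def child():
    # A daemon may not create children of its own: starting this
    # process raises AssertionError inside the daemon.
    p = multiprocessing.Process(target=grandchild)
    p.start()

if __name__ == '__main__':
    p = multiprocessing.Process(target=child)
    p.daemon = True
    p.start()
    time.sleep(1)   # keep the main code alive long enough to see the error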
2 Creating a daemon:
daemon=True: makes the process a daemon; daemon defaults to False and must be set before start() is called.
Example 1:
import multiprocessing
import time

def walk(n):
    print('>>>>>>start', n)
    time.sleep(3)
    print('>>>>>>end', n)

if __name__ == '__main__':
    p1 = multiprocessing.Process(target=walk, args=(2,))
    p2 = multiprocessing.Process(target=walk, args=(4,))
    p1.daemon = True   # p1 is a daemon; it is terminated once the main code finishes
    p1.start()
    p2.start()
    print('Done')      # the main code ends here, so p1 typically never gets to run
Example 2:
import multiprocessing
import time

def walk(n):
    print('>>>>>>start', n)
    time.sleep(3)
    print('>>>>>>end', n)

if __name__ == '__main__':
    p1 = multiprocessing.Process(target=walk, args=(2,))
    p2 = multiprocessing.Process(target=walk, args=(4,))
    p1.daemon = True
    p1.start()
    p2.start()
    time.sleep(2)      # the main code is still running, so p1 can print its start line
    print('Done')      # p1 is terminated here, before it can print its end line
Example 3:
import multiprocessing
import time

def walk(n):
    print('>>>>>>start', n)
    time.sleep(3)
    print('>>>>>>end', n)

if __name__ == '__main__':
    p1 = multiprocessing.Process(target=walk, args=(2,))
    p2 = multiprocessing.Process(target=walk, args=(4,))
    p1.daemon = True
    p1.start()
    p2.start()
    print('Done')
    time.sleep(0.1)    # the main code is not finished until this sleep ends,
                       # so p1 may still get a brief moment to print its start line
Note: processes run independently of one another; the daemon's code keeps executing only as long as the main process's code has not yet finished.
Two Synchronization locks (also called mutexes)
The role of a synchronization lock is to turn concurrent access into serial access, imposing order on processes competing for a shared resource.
Lock(): creates a lock
acquire(): acquires (locks) the lock
release(): releases (unlocks) the lock
As follows:
import multiprocessing
import json
import random
import time

json.dump({'count': 10}, open('a.txt', 'w'))
time.sleep(1)   # give the dump time to hit disk before the children read it

def wolk(n):
    ret = json.load(open('a.txt'))
    print('<%s> sees %s tickets left' % (n, ret['count']))

def foo(n):
    ret = json.load(open('a.txt'))
    if ret['count'] > 0:
        x = random.randint(1, 5)
        if x > ret['count']:
            x = ret['count']
        ret['count'] -= x
        time.sleep(0.2)   # simulate latency before writing the file back
        json.dump(ret, open('a.txt', 'w'))
        print('%s purchased %s tickets' % (n, x))

def wolk_foo(n, lock):
    wolk(n)           # everyone may look at the count concurrently
    lock.acquire()    # but only one process at a time may buy
    foo(n)
    lock.release()

if __name__ == '__main__':
    lock = multiprocessing.Lock()
    for j in range(10):   # the buyer count was elided in the source; 10 is assumed
        p = multiprocessing.Process(target=wolk_foo, args=(j, lock))
        p.start()
with lock: acquires the lock automatically and releases it automatically when the block finishes. As follows:
import multiprocessing
import json
import random
import time

json.dump({'count': 10}, open('a.txt', 'w'))
time.sleep(1)

def wolk(n):
    ret = json.load(open('a.txt'))
    print('<%s> sees %s tickets left' % (n, ret['count']))

def foo(n):
    ret = json.load(open('a.txt'))
    if ret['count'] > 0:
        x = random.randint(1, 5)
        if x > ret['count']:
            x = ret['count']
        ret['count'] -= x
        time.sleep(0.2)
        json.dump(ret, open('a.txt', 'w'))
        print('%s purchased %s tickets' % (n, x))

def wolk_foo(n, lock):
    wolk(n)
    with lock:        # acquires the lock, and releases it when the block exits
        foo(n)

if __name__ == '__main__':
    lock = multiprocessing.Lock()
    for j in range(10):   # the buyer count was elided in the source; 10 is assumed
        p = multiprocessing.Process(target=wolk_foo, args=(j, lock))
        p.start()
The lock serializes access and thereby keeps the code safe, but it also reduces execution efficiency.
Note: in both examples, the initial dump to the file takes some time (I/O blocking), which is why the code sleeps for a moment after writing the file. Without the sleep, child processes may read the file before it has been fully written, producing garbled data or an error.
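To see why the lock matters, here is a minimal sketch of my own (not from the original lesson) that swaps the shared ticket file for a multiprocessing.Value: several processes increment a shared counter, and the read-modify-write step is only safe inside the lock.

import multiprocessing

def add(counter, lock):
    for _ in range(1000):
        with lock:                 # remove this line and updates can be lost
            counter.value += 1     # read-modify-write is not atomic on its own

if __name__ == '__main__':
    counter = multiprocessing.Value('i', 0)
    lock = multiprocessing.Lock()
    ps = [multiprocessing.Process(target=add, args=(counter, lock)) for _ in range(4)]
    for p in ps:
        p.start()
    for p in ps:
        p.join()
    print(counter.value)           # with the lock this is always 4000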
Three IPC mechanisms
The IPC (inter-process communication) mechanism gives processes two ways to talk to each other: one is the queue, the other is the pipe.
The principle of the queue is FIFO (first in, first out): data can only be put in at one end and read out at the other. A queue is essentially a pipe plus a lock.
The principle of the pipe is a two-ended channel: data written at one end is read at the other, but there is no lock, so concurrent access is not safe on its own.
Queues and pipes give multiple processes a shared space through which to exchange data.
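The text above never shows a pipe on its own, so here is a minimal sketch of multiprocessing.Pipe (my own example; the names parent_conn and child_conn are illustrative): data sent into one end comes out of the other.

import multiprocessing

def child(conn):
    conn.send('hello from child')   # write at one end
    conn.close()

if __name__ == '__main__':
    parent_conn, child_conn = multiprocessing.Pipe()  # two connected ends
    p = multiprocessing.Process(target=child, args=(child_conn,))
    p.start()
    print(parent_conn.recv())       # read at the other end
    p.join()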
Queue(n): creates a queue; the argument specifies the maximum number of items the queue can hold.
put(): writes data into the queue.
get(): reads data out of the queue.
import multiprocessing

q = multiprocessing.Queue(5)   # at most 5 items may sit in the queue
q.put(111)
q.put(222)
q.put(333)
q.put(444)
q.put(555)
print(q.get())
print(q.get())
print(q.get())
print(q.get())
print(q.get())
Note: if put is called on a full queue, or get on an empty one, the call blocks.
put_nowait(): writes data; raises an exception if the queue is already full.
get_nowait(): reads data; raises an exception if the queue is empty.
import multiprocessing

q = multiprocessing.Queue(5)
q.put_nowait(1)
q.put_nowait(2)
q.put_nowait(3)
q.put_nowait(4)
q.put_nowait(5)
print(q.get_nowait())
print(q.get_nowait())
print(q.get_nowait())
print(q.get_nowait())
print(q.get_nowait())
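A sketch of how those exceptions can be handled (my own addition, not from the original lesson): the exceptions raised by put_nowait and get_nowait are queue.Full and queue.Empty from the standard queue module.

import multiprocessing
import queue   # the Full and Empty exceptions live in the standard queue module
import time

q = multiprocessing.Queue(1)
q.put_nowait('only item')
try:
    q.put_nowait('one too many')   # capacity is 1, so this raises queue.Full
except queue.Full:
    print('queue is full')

time.sleep(0.1)                    # give the queue's feeder thread time to flush
print(q.get_nowait())              # 'only item'
try:
    q.get_nowait()                 # nothing left, so this raises queue.Empty
except queue.Empty:
    print('queue is empty')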
Four producers and consumers
Producer: creates data.
Consumer: consumes (reads) the data.
The producer and the consumer are decoupled from each other by a container (the queue).
The producer creates data and puts it into the queue; the consumer then reads the data out of the queue.
import multiprocessing
import random
import time

def producer(q):
    for i in range(1, 30):
        time.sleep(random.random())
        q.put('bun %s' % i)
        print('chef %s made bun %s' % ('Egon' + str(random.randint(1, 5)), i))

def consumer(q):
    while True:
        ret = q.get()
        if ret is None:   # sentinel: the producer is done, stop consuming
            break
        time.sleep(random.random())
        print('%s ate %s' % ('Alex' + str(random.randint(1, 5)), ret))

if __name__ == '__main__':
    q = multiprocessing.Queue()
    c_s = multiprocessing.Process(target=producer, args=(q,))
    x_f = multiprocessing.Process(target=consumer, args=(q,))
    c_s.start()
    x_f.start()
    c_s.join()        # wait for the producer to finish
    q.put(None)       # then send the termination sentinel
Note: a termination condition (here, the None sentinel) must be put into the queue after the producer finishes; otherwise the consumer blocks on q.get() forever.
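An alternative to the None sentinel, sketched below on my own initiative, is multiprocessing.JoinableQueue: the consumer calls task_done() for every item it processes, the producer blocks on q.join() until everything has been processed, and the consumer is made a daemon so it dies together with the main process.

import multiprocessing
import random
import time

def producer(q):
    for i in range(1, 6):
        q.put('bun %s' % i)
    q.join()                       # block until every item has been task_done()

def consumer(q):
    while True:
        ret = q.get()
        time.sleep(random.random())
        print('ate %s' % ret)
        q.task_done()              # mark one item as processed

if __name__ == '__main__':
    q = multiprocessing.JoinableQueue()
    p = multiprocessing.Process(target=producer, args=(q,))
    c = multiprocessing.Process(target=consumer, args=(q,))
    c.daemon = True                # the consumer dies with the main process
    p.start()
    c.start()
    p.join()                       # the producer returns only after q.join() passes
    print('all buns consumed')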