31. Mutex Locks and Inter-Process Communication


We have covered multi-process concurrency before; did you notice a problem with it? What happens when multiple processes share the same data, for example when many clients try to view and buy the same ticket at the same time? Today we will talk about process locks and inter-process communication. Processes are isolated from each other, so to share data they need a third party.

 

I. Mutex lock

Data is isolated between processes, but they share a file system, so processes can communicate directly through files. The catch is that access to the file must be locked.

Note: the purpose of locking is to ensure that when multiple processes modify the same piece of data, only one of them can modify it at a time; in other words, modifications are serialized. Yes, this is slow: data safety is bought at the expense of speed.

1. Toilet

Let's start with an easy-to-understand example. When you go to the toilet, you lock the door first. The toilet door is equivalent to a mutex lock: while you are inside, anyone else who comes along can only wait at the door.

from multiprocessing import Process, Lock
import os
import time

def work(mutex):
    mutex.acquire()  # lock
    print('Task [%s] entering the toilet' % os.getpid())
    time.sleep(3)
    print('Task [%s] leaving the toilet' % os.getpid())
    mutex.release()  # unlock

if __name__ == '__main__':
    mutex = Lock()  # instantiate the mutex lock
    p1 = Process(target=work, args=(mutex,))
    p2 = Process(target=work, args=(mutex,))
    p3 = Process(target=work, args=(mutex,))
    p1.start()
    p2.start()
    p3.start()
    print('start...')

2. Simulate ticket snatching

# The file db.txt contains: {"count": 1}
# The number of tickets can be anything you like.
# Be sure to use double quotes, otherwise json cannot parse the file.
from multiprocessing import Process, Lock
import json
import time
import random
import os

def search():  # view the remaining tickets
    dic = json.load(open('db.txt'))
    print('remaining tickets: %s' % dic['count'])

def get_ticket():  # purchase a ticket
    dic = json.load(open('db.txt'))
    if dic['count'] > 0:
        dic['count'] -= 1
        json.dump(dic, open('db.txt', 'w'))
        print('%s purchased a ticket successfully' % os.getpid())

def task(mutex):  # the full purchase flow
    search()
    time.sleep(random.randint(1, 3))  # simulate the time spent on the tedious ticketing steps
    mutex.acquire()
    get_ticket()
    mutex.release()

if __name__ == '__main__':
    mutex = Lock()
    for i in range(50):
        p = Process(target=task, args=(mutex,))
        p.start()

 

II. Supplementary use cases for other attributes of the Process object

1. daemon (daemon process)

p.daemon: defaults to False. If set to True, p becomes a daemon process running in the background: when p's parent process terminates, p terminates with it, and a daemon p cannot create child processes of its own. It must be set before p.start().

Ps:

from multiprocessing import Process
import os
import time

def work():
    print('%s is working' % os.getpid())
    time.sleep(10)
    print('%s is ending' % os.getpid())

if __name__ == '__main__':
    p1 = Process(target=work)
    p2 = Process(target=work)
    p3 = Process(target=work)
    p1.daemon = True
    p2.daemon = True
    p3.daemon = True
    p1.start()
    p2.start()
    p3.start()
    time.sleep(2)
    print('start...')

2. join: wait for the child process

p.join([timeout]): the main process waits for p to terminate (to stress: the main process is in a blocked, waiting state while p keeps running). timeout is an optional timeout. It must be emphasized that p.join can only join processes started with start(), not ones invoked directly via run().

Ps:

from multiprocessing import Process
import os
import time

def work():
    print('%s is working' % os.getpid())
    time.sleep(3)

if __name__ == '__main__':
    p1 = Process(target=work)
    p2 = Process(target=work)
    p3 = Process(target=work)
    p1.daemon = True
    p2.daemon = True
    p3.daemon = True
    p1.start()  # initialize 1
    p2.start()  # initialize 2
    p3.start()  # initialize 3
    p3.join()
    p1.join()
    p2.join()
    print('continue running based on the initialization results')

3. terminate, is_alive, name, pid

 

p.terminate(): force-terminate p without performing any cleanup. If p has created child processes, they become zombie processes, so use this method with special care. If p holds a lock, the lock is never released, which can cause a deadlock.
p.is_alive(): returns True if p is still running.
p.name: the process name.
p.pid: the process pid.

 

Ps:

from multiprocessing import Process
import os
import time

def work():
    print('%s is working' % os.getpid())
    time.sleep(3)

if __name__ == '__main__':
    p1 = Process(target=work)
    p2 = Process(target=work)
    p3 = Process(target=work)
    p1.start()  # initialize 1
    p2.start()  # initialize 2
    p3.start()  # initialize 3
    p1.terminate()  # force termination; not recommended
    print(p1.is_alive())  # the process has been told to terminate, but it takes the OS a moment to actually kill it, so this may still print True
    print(p1.name)  # if no name is given, processes are named Process-1, Process-2, ... by default
    print(p2.name)
    print(p1.pid)  # note: this is p1's pid, not os.getpid()
    print('continue running based on the initialization results')

 

III. Inter-process communication

We have learned to use a shared file to let processes share data directly, but with this approach you must fully consider synchronization and locking issues yourself. Files are an abstraction provided by the operating system and can serve as a medium for direct communication between processes; they have nothing to do with the multiprocessing module.

In fact, the multiprocessing module provides message-based IPC mechanisms: queues and pipes. The queue in this IPC mechanism is implemented on top of (pipe + lock), which frees us from complicated locking problems. We should avoid shared state as much as possible and prefer message passing and queues: this sidesteps complicated synchronization and locking problems and scales better as the number of processes grows.

1. Inter-process communication (IPC) method 1: Queue (recommended)

Processes are isolated from each other. To implement inter-process communication (IPC), the multiprocessing module supports two forms: queues and pipes. Both use message passing.

Any type of data can be stored in the queue.

Queue: FIFO

1) Import

from multiprocessing import Queue

2) instantiation

q = Queue(3)  # 3 is the maximum number of items allowed in the queue

3) Main Methods

q.put(item, block=True, timeout=None): insert data into the queue. If block is True (the default) and timeout is a positive number, the method blocks for at most timeout seconds waiting for free space; if it times out, a Queue.Full exception is raised. If block is False and the queue is full, Queue.Full is raised immediately.
q.get(block=True, timeout=None): read and remove one element from the queue. If block is True (the default) and timeout is a positive number, and no element arrives within the waiting time, a Queue.Empty exception is raised. If block is False there are two cases: if a value is available, it is returned immediately; otherwise Queue.Empty is raised immediately.
q.get_nowait(): same as q.get(False).
q.put_nowait(item): same as q.put(item, False).
q.empty(): returns True if q is empty at the moment of the call. The result is unreliable: an item may be added to the queue right after True is returned.
q.full(): returns True if q is full at the moment of the call. The result is unreliable for the same reason: an item may be removed right after True is returned.
q.qsize(): returns the number of items currently in the queue; the result is likewise unreliable, for the same reason as q.empty() and q.full().

4) Other methods (understanding)

q.cancel_join_thread(): do not automatically join the background thread when the process exits; this prevents the join_thread() method from blocking.
q.close(): close the queue to prevent more data from being added. When this method is called, the background feeder thread keeps flushing data that was already enqueued but not yet written, then exits as soon as that is done. This method is also called automatically when q is garbage collected. Closing a queue does not generate any end-of-data signal or exception on the consumer side: if a consumer is blocked in get(), closing the queue in the producer will not cause get() to return with an error.
q.join_thread(): join the queue's background thread. It can only be used after q.close() has been called, and it waits until all buffered items have been flushed. By default it is called by every process that is not the original creator of q; this behaviour can be disabled by calling q.cancel_join_thread().

5) Applications

from multiprocessing import Queue
# 1. any type of data can be placed in the queue
# 2. a queue is first-in, first-out
q = Queue(3)
q.put('first')
q.put('second')
q.put('third')
# q.put('fourth')  # would block: the queue is full
print(q.full())
print(q.get())
print(q.get())
print(q.get())
# print(q.get())  # would block: the queue is empty
print(q.empty())

# q = Queue(3)
# q.put('first', block=False)
# q.put('second', block=False)
# q.put('third', block=False)
# q.put('fourth', block=True, timeout=3)
# q.get(block=False)
# q.get(block=True, timeout=3)
# q.get_nowait()  # same as q.get(block=False)

6) Producer-consumer model

In concurrent programming, the producer-consumer pattern solves the vast majority of concurrency problems. It improves the overall throughput of a program by balancing the working speeds of the producing and consuming threads.

In the thread world, the producer is the thread that produces data and the consumer is the thread that consumes it. In multi-threaded development, if the producer is very fast and the consumer very slow, the producer must wait for the consumer to catch up before producing more data. Likewise, if the consumer outpaces the producer, the consumer must wait for the producer. The producer-consumer model was introduced to solve this problem.

The producer consumer model uses a container to solve the strong coupling problem between producers and consumers.

Producer and consumer model based on queues

from multiprocessing import Process, Queue
import time
import random
import os

def consumer(q):
    while True:
        res = q.get()
        if res is None:
            break
        time.sleep(random.randint(1, 3))
        print('%s ate %s' % (os.getpid(), res))

def producer(q):
    for i in range(5):
        time.sleep(2)
        res = 'steamed bun %s' % i
        q.put(res)
        print('%s made %s' % (os.getpid(), res))
    q.put(None)  # signal the consumer to stop

if __name__ == '__main__':
    q = Queue()
    # producer: the chef
    p1 = Process(target=producer, args=(q,))
    # consumer: the foodie
    p2 = Process(target=consumer, args=(q,))
    p1.start()
    p2.start()
    p1.join()
    p2.join()
    print('main')
Producer and consumer model

7) Another queue class: JoinableQueue

JoinableQueue([maxsize]): like a Queue object, but the queue also lets the consumer notify the producer that an item has been successfully processed. The notification is implemented with shared semaphores and condition variables.

maxsize is the maximum number of items allowed in the queue. If it is omitted, there is no size limit.

In addition to the methods of a Queue object, a JoinableQueue instance q also has:
q.task_done(): the consumer calls this method to signal that an item returned by q.get() has been processed. A ValueError is raised if the method is called more times than there were items in the queue.
q.join(): the producer calls this method to block until every item in the queue has been processed. Blocking continues until q.task_done() has been called for every item that was put into the queue.
from multiprocessing import Process, JoinableQueue
import time
import random
import os

def consumer(q):
    while True:
        res = q.get()
        time.sleep(random.randint(1, 3))
        print('%s ate %s' % (os.getpid(), res))
        q.task_done()

def product_baozi(q):
    for i in range(5):
        time.sleep(2)
        res = 'steamed bun %s' % i
        q.put(res)
        print('%s made %s' % (os.getpid(), res))
    q.join()

if __name__ == '__main__':
    q = JoinableQueue()
    # producer: the chef
    p1 = Process(target=product_baozi, args=(q,))
    # consumer: the foodie
    p4 = Process(target=consumer, args=(q,))
    p4.daemon = True
    p1.start()
    p4.start()
    p1.join()
    print('main')  # p1 has finished here, so q.join() returned and every item was consumed
Producer consumer model 2
from multiprocessing import Process, JoinableQueue
import time
import random
import os

def consumer(q):
    while True:
        res = q.get()
        time.sleep(random.randint(1, 3))
        print('%s ate %s' % (os.getpid(), res))
        q.task_done()

def product_baozi(q):
    for i in range(3):
        time.sleep(2)
        res = 'steamed bun %s' % i
        q.put(res)
        print('%s made %s' % (os.getpid(), res))
    q.join()

def product_gutou(q):
    for i in range(3):
        time.sleep(2)
        res = 'bone %s' % i
        q.put(res)
        print('%s made %s' % (os.getpid(), res))
    q.join()

def product_ganshui(q):
    for i in range(3):
        time.sleep(2)
        res = 'swill %s' % i
        q.put(res)
        print('%s made %s' % (os.getpid(), res))
    q.join()

if __name__ == '__main__':
    q = JoinableQueue()
    # producers: the chefs
    p1 = Process(target=product_baozi, args=(q,))
    p2 = Process(target=product_gutou, args=(q,))
    p3 = Process(target=product_ganshui, args=(q,))
    # consumers: the foodies
    p4 = Process(target=consumer, args=(q,))
    p5 = Process(target=consumer, args=(q,))
    # The consumers are daemons, so they stop when the main process stops.
    # That is safe here: each producer calls q.join(), which guarantees the
    # consumers have processed every element in the queue first.
    p4.daemon = True
    p5.daemon = True
    p_l = [p1, p2, p3, p4, p5]
    for p in p_l:
        p.start()
    p1.join()
    p2.join()
    p3.join()
    print('main')
Producer consumer model 3

2. Inter-process communication (IPC) method 2: Pipe (understand it only; not recommended)

3. Inter-process communication (IPC) method 3: shared data (understand it only; not recommended)

(If you are not familiar with these two methods, you can look up related articles yourself.)

 
