Tenth day of Learning Python

Source: Internet
Author: User
Tags: mutex, semaphore, terminate, ticket

One. Python concurrent programming: multiprocessing

1. Introduction to the multiprocessing module

Multithreading in Python cannot take advantage of multiple cores. If you want to make full use of the resources of a multicore CPU (check the core count with os.cpu_count()), in most cases you need multiple processes, and for this Python provides the multiprocessing module.
The multiprocessing module is used to start subprocesses and run our custom tasks (such as functions) in them; its programming interface is similar to that of the threading module used for multithreading.

The multiprocessing module offers many features: starting subprocesses, communicating and sharing data between them, performing different forms of synchronization, and components such as Process, Queue, Pipe, and Lock.

One thing that bears repeating: unlike threads, processes share no state. Data modified by a process is changed only within that process.

2. Introduction to the Process class

Process is a class in the multiprocessing module used to start a subprocess.

Process(group=None, target=None, name=None, args=(), kwargs={}) instantiates an object that represents a task to run in a child process (not yet started). Two points to emphasize: 1. parameters must be passed by keyword; 2. args specifies the positional arguments to pass to the target function and must be a tuple, so a single argument needs a trailing comma.

The group parameter is unused; its value is always None.

target is the callable object, i.e. the task the child process will run.

args is the tuple of positional arguments for the callable, e.g. args=('egon',).

kwargs is the dictionary of keyword arguments for the callable, e.g. kwargs={'name': 'egon', 'age': 18}.

name is the name of the child process.

p.start(): starts the process, which calls p.run() in the child process.

p.run(): the method that runs when the process starts; it is exactly what calls the function specified by target. When subclassing Process, we must implement this method in our custom class.

p.terminate(): forcibly terminates process p without any cleanup. If p created children of its own, they become zombie processes, so use this method with special care. If p also holds a lock, the lock will never be released, resulting in a deadlock.
p.is_alive(): returns True if p is still running.

p.join([timeout]): the main process waits for p to terminate (to emphasize: the main process is the one waiting, while p keeps running). timeout is an optional timeout. Note also that join() can only be applied to processes launched with start(), not to tasks invoked directly through run().

p.daemon: defaults to False. Set to True, it makes p a daemon process running in the background: when p's parent process terminates, p terminates with it. A process with daemon=True cannot create children of its own, and the attribute must be set before p.start().

p.name: the name of the process.

p.pid: the PID of the process.

p.exitcode: None while the process is running; a value of -N means the process was terminated by signal N (good to know).

p.authkey: the process's authentication key, by default a 32-character string generated by os.urandom(). The purpose of this key is to secure low-level interprocess communication over network connections: such a connection succeeds only if both ends hold the same authentication key (good to know).

On Windows, code that creates a Process must be placed under the if __name__ == '__main__' guard; placing it elsewhere causes the code to run again when the module is imported.

Since Windows has no fork, the multiprocessing module starts a new Python process and imports the calling module.
If Process() got called upon import, this would set off an infinite succession of new processes (or run until your machine runs out of resources).
That is the reason for hiding calls to Process() inside

if __name__ == '__main__':

since statements inside this if-block do not get executed upon import.

Two ways to create a child process

1

# Method one for starting processes:
import time
import random
from multiprocessing import Process

def piao(name):
    print('%s piaoing' % name)
    time.sleep(random.randrange(1, 5))
    print('%s piao end' % name)

if __name__ == '__main__':
    # args must be passed as a tuple, so a single argument needs a trailing comma
    p1 = Process(target=piao, args=('egon',))
    p2 = Process(target=piao, args=('alex',))
    p3 = Process(target=piao, args=('wupeqi',))
    p4 = Process(target=piao, args=('yuanhao',))

    p1.start()
    p2.start()
    p3.start()
    p4.start()
    print('main process')

2.

# Method two for starting processes (wrap the task in a class: inherit Process
# as the parent class, reuse all its methods, and override run(), which is
# what p.start() ends up calling):
import time
import random
from multiprocessing import Process

class Piao(Process):
    def __init__(self, name):
        super().__init__()
        self.name = name

    def run(self):
        print('%s piaoing' % self.name)
        time.sleep(random.randrange(1, 5))
        print('%s piao end' % self.name)

if __name__ == '__main__':
    p1 = Piao('egon')
    p2 = Piao('alex')
    p3 = Piao('wupeiqi')
    p4 = Piao('yuanhao')

    p1.start()  # start() automatically calls run()
    p2.start()
    p3.start()
    p4.start()
    print('main process')

Key point: the memory spaces of different processes are isolated from each other.

The join method of Process:

p.join() means the main process may continue executing only after process p has finished running; the main process stays blocked in place until p ends.
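A minimal sketch of that behavior (the worker function and the 0.5-second delay are made up for illustration): join() makes the parent block until the child is done.

```python
import time
from multiprocessing import Process

def worker():
    time.sleep(0.5)  # pretend to do half a second of work

if __name__ == '__main__':
    p = Process(target=worker)
    start = time.time()
    p.start()
    p.join()  # the parent blocks here until worker() returns
    print('parent waited at least %.1f seconds' % (time.time() - start))
```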

Daemon processes:

First: a daemon process terminates as soon as the main process's code finishes executing.

Second: a daemon process cannot open child processes of its own, otherwise it raises: AssertionError: daemonic processes are not allowed to have children

Note: processes are independent of each other; the daemon terminates as soon as the main process's code finishes running.
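Both points can be seen in a small sketch (the sleeping background task is hypothetical): the daemon child is killed the moment the parent's code finishes, so its final print never appears.

```python
import time
from multiprocessing import Process

def background():
    time.sleep(10)
    print('never printed: the daemon dies with the parent')

if __name__ == '__main__':
    p = Process(target=background)
    p.daemon = True   # must be set before start()
    p.start()
    time.sleep(0.2)
    print('parent code done; the daemon child is terminated now')
```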

Process Synchronization (Lock)

When multiple processes run concurrently and several of them modify the same file at the same time, the data can become garbled and incorrect:

# Concurrent execution: efficient, but the processes compete for the same
# print terminal, which garbles the output
from multiprocessing import Process
import os, time

def work():
    print('%s is running' % os.getpid())
    time.sleep(2)
    print('%s is done' % os.getpid())

if __name__ == '__main__':
    for i in range(3):
        p = Process(target=work)
        p.start()

Concurrent execution is efficient, but the processes compete for the same print terminal, which garbles the output.

This problem can be solved by locking: wrap the code that must run exclusively in a lock, so that only one process at a time can execute it.

from multiprocessing import Lock

# Concurrency becomes serial execution: some efficiency is sacrificed,
# but the race is avoided
from multiprocessing import Process, Lock
import os, time

def work(lock):
    lock.acquire()
    print('%s is running' % os.getpid())
    time.sleep(2)
    print('%s is done' % os.getpid())
    lock.release()

if __name__ == '__main__':
    lock = Lock()
    for i in range(3):
        p = Process(target=work, args=(lock,))
        p.start()

With the lock, concurrency becomes serial execution: some efficiency is sacrificed, but the race is avoided.

A detailed case, simulating ticket grabbing, with multiple processes modifying the same file at once:

# The file db.txt contains: {"count": 1}
# Note: be sure to use double quotes, or json cannot parse it
from multiprocessing import Process, Lock
import time, json, random

def search():
    dic = json.load(open('db.txt'))
    print('\033[43mremaining tickets: %s\033[0m' % dic['count'])

def get():
    dic = json.load(open('db.txt'))
    time.sleep(0.1)  # simulate network latency while reading
    if dic['count'] > 0:
        dic['count'] -= 1
        time.sleep(0.2)  # simulate network latency while writing
        json.dump(dic, open('db.txt', 'w'))
        print('\033[43mticket purchased successfully\033[0m')

def task(lock):
    search()
    # After acquire() the lock must be released, or a deadlock results;
    # if you are afraid of forgetting, use "with lock:" instead
    lock.acquire()
    get()
    lock.release()

if __name__ == '__main__':
    lock = Lock()
    for i in range(100):  # simulate 100 concurrent clients grabbing tickets
        p = Process(target=task, args=(lock,))
        p.start()

With the lock, the ticket-buying behavior goes from concurrent to serial: some efficiency is sacrificed, but data safety is guaranteed.

Summary:

# Locking guarantees that when multiple processes modify the same block of
# data, only one task can modify it at a time, i.e. the modifications become
# serial. Yes, this is slower, but speed is traded away for data safety.
Although interprocess communication can be achieved by sharing data through files, the problems are:
1. It is inefficient (the shared data is file-based, and files are data on disk).
2. You need to handle the locking yourself.

# We would therefore prefer a solution that offers both: 1. high efficiency
# (multiple processes sharing a block of in-memory data), and 2. locking
# handled for us. That is the message-based IPC mechanism the multiprocessing
# module provides: queues and pipes.
Both queues and pipes store their data in memory.
A queue is itself implemented on top of a pipe plus locks, and frees us from dealing with complex lock problems ourselves.
We should avoid shared state as far as possible and use message passing and queues instead; this avoids complex synchronization and locking problems, and usually also scales better as the number of processes grows.

Message queues:

Processes are isolated from each other. To implement interprocess communication (IPC), the multiprocessing module supports two forms, queues and pipes, both of which use message passing.

Queue([maxsize]): creates a shared process queue. It is a multi-process-safe queue that can be used to transfer data between multiple processes.
maxsize is the maximum number of items the queue accepts; if omitted, there is no size limit.

The q.put method inserts data into the queue. It has two optional parameters: block and timeout. If block is True (the default) and timeout is a positive value, the method blocks for at most timeout seconds waiting for free space in the queue; if it times out, a queue.Full exception is raised. If block is False and the queue is full, a queue.Full exception is raised immediately.
The q.get method reads and removes one element from the queue. It likewise has two optional parameters: block and timeout. If block is True (the default) and timeout is a positive value, and no element becomes available within the wait time, a queue.Empty exception is raised. If block is False there are two cases: if the queue has a value available, it is returned immediately; otherwise, if the queue is empty, a queue.Empty exception is raised immediately.

q.get_nowait(): the same as q.get(False)
q.put_nowait(): the same as q.put(False)

q.empty(): returns True if q is empty at the moment of the call. The result is unreliable: for example, an item may be added to the queue while True is being returned.
q.full(): returns True if q is full at the moment of the call. The result is likewise unreliable: for example, items may be taken out of the queue while True is being returned.
q.qsize(): returns the current number of items in the queue. The result is unreliable for the same reason as q.empty() and q.full().

1. q.cancel_join_thread(): do not automatically join the background thread when the process exits. This prevents the join_thread() method from blocking.
2. q.close(): closes the queue, preventing more data from being added. When this method is called, the background thread continues writing data that has already been enqueued but not yet flushed, then shuts down as soon as that completes. This method is called automatically if q is garbage collected. Closing a queue does not produce any kind of end-of-data signal or exception for queue consumers: if a consumer is blocked on a get() operation, closing the queue in the producer does not cause get() to return with an error.
3. q.join_thread(): joins the queue's background thread. It is used to wait for all queue items to be flushed, and may only be called after q.close(). By default it is called by all processes that are not the original creator of q; calling q.cancel_join_thread() disables this behavior.
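The blocking and exception behavior described above can be sketched like this (note that the Full and Empty exceptions live in the standard queue module):

```python
import queue  # queue.Full / queue.Empty are the exceptions raised
from multiprocessing import Queue

q = Queue(2)  # maxsize of 2
q.put('a')
q.put('b')
try:
    q.put('c', block=False)  # queue is full: raises immediately
except queue.Full:
    print('queue.Full raised, as described above')

print(q.get())  # 'a'
print(q.get())  # 'b'
try:
    q.get(block=False)  # queue is empty again
except queue.Empty:
    print('queue.Empty raised, as described above')
```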

Application Examples:

'''
The multiprocessing module supports two main forms of interprocess
communication: pipes and queues.
Both are implemented on top of message passing; the queue simply provides
a higher-level interface.
'''

from multiprocessing import Queue

q = Queue(3)


# put, get, put_nowait, get_nowait, full, empty
q.put(3)
q.put(3)
q.put(3)
print(q.full())  # full now

print(q.get())
print(q.get())
print(q.get())
print(q.empty())  # empty now

Key topic: the producer-consumer model

Using the producer-consumer pattern in concurrent programming can solve most concurrency problems. The pattern improves a program's overall throughput by balancing the processing capacity of the producing and consuming threads.

Why use the producer-consumer model

In the world of threads, the producer is the thread that produces data and the consumer is the thread that consumes it. In multithreaded development, if the producer processes quickly while the consumer processes slowly, the producer must wait for the consumer to catch up before it can continue producing data. Likewise, the consumer must wait for the producer if the consumer's processing capacity exceeds the producer's. The producer-consumer model was introduced to solve this problem.

What is the producer-consumer model

The producer-consumer model removes the strong coupling between producers and consumers by means of a container. Producers and consumers do not communicate with each other directly; they communicate through a blocking queue. After producing data, a producer does not wait for a consumer to process it, but throws it straight into the blocking queue; a consumer does not ask a producer for data, but takes it straight from the blocking queue. The blocking queue acts as a buffer that balances the processing capacity of producers and consumers.

A queue-based implementation of the producer-consumer model

from multiprocessing import Process, Queue
import time, random, os

def consumer(q):
    while True:
        res = q.get()
        time.sleep(random.randint(1, 3))
        print('\033[45m%s ate %s\033[0m' % (os.getpid(), res))

def producer(q):
    for i in range(10):
        time.sleep(random.randint(1, 3))
        res = 'bun %s' % i
        q.put(res)
        print('\033[44m%s produced %s\033[0m' % (os.getpid(), res))

if __name__ == '__main__':
    q = Queue()
    # the producers: chefs
    p1 = Process(target=producer, args=(q,))

    # the consumers: foodies
    # arguments must be passed into the class as a tuple
    c1 = Process(target=consumer, args=(q,))

    # start
    p1.start()
    c1.start()
    print('main process')

# Producer-consumer model summary

# There are two kinds of roles in the program:
#     one kind responsible for producing data (producers)
#     one kind responsible for processing data (consumers)

# The problem the producer-consumer model is introduced to solve:
#     balancing the speed difference between producers and consumers

# How to implement it:
#     producer -> queue -> consumer
# The producer-consumer model also decouples the two kinds of code in the program

But now there is a problem: once the producer stops producing data, the consumer cannot obtain any more and the program gets stuck at q.get(), so it never terminates properly.

Workaround:

The issue here is that the main process never ends: producer p finishes after producing, but consumer c remains in its infinite loop, stuck at the q.get() step once q has been emptied.

The solution is to have the producer send an end signal into the queue after it has finished producing, so that the consumer can break out of its loop upon receiving it:

from multiprocessing import Process, Queue
import time, random, os

def consumer(q):
    while True:
        res = q.get()
        if res is None: break  # end signal received: stop
        time.sleep(random.randint(1, 3))
        print('\033[45m%s ate %s\033[0m' % (os.getpid(), res))

def producer(q):
    for i in range(10):
        time.sleep(random.randint(1, 3))
        res = 'bun %s' % i
        q.put(res)
        print('\033[44m%s produced %s\033[0m' % (os.getpid(), res))
    q.put(None)  # send the end signal

if __name__ == '__main__':
    q = Queue()
    # the producers: chefs
    p1 = Process(target=producer, args=(q,))

    # the consumers: foodies
    c1 = Process(target=consumer, args=(q,))

    # start
    p1.start()
    c1.start()
    print('main process')

After production is complete, the producer sends the end signal, None.

However, with multiple consumer subprocesses we would have to send as many None values as there are consumers, so this is not the final solution.

JoinableQueue([maxsize]): like a Queue, but the queue additionally lets the consumer of an item notify the producer that the item has been processed successfully. The notification mechanism is implemented with shared semaphores and condition variables.

# Parameters:
maxsize is the maximum number of items allowed in the queue; omit it for no size limit.
# Methods:
A JoinableQueue instance q has the same methods as a Queue object, plus:
q.task_done(): the consumer uses this method to signal that an item returned by q.get() has been processed. If it is called more times than there were items removed from the queue, a ValueError is raised.
q.join(): the producer calls this method to block until all items in the queue have been processed. Blocking lasts until q.task_done() has been called for every item that was put into the queue.

from multiprocessing import Process, JoinableQueue
import time, random, os

def consumer(q):
    while True:
        res = q.get()
        time.sleep(random.randint(1, 3))
        print('\033[45m%s ate %s\033[0m' % (os.getpid(), res))

        q.task_done()  # signal q.join() that one item has been taken and processed

def producer(name, q):
    for i in range(10):
        time.sleep(random.randint(1, 3))
        res = '%s %s' % (name, i)
        q.put(res)
        print('\033[44m%s produced %s\033[0m' % (os.getpid(), res))
    q.join()


if __name__ == '__main__':
    q = JoinableQueue()
    # the producers: chefs
    p1 = Process(target=producer, args=('bun', q))
    p2 = Process(target=producer, args=('bones', q))
    p3 = Process(target=producer, args=('swill', q))

    # the consumers: foodies
    c1 = Process(target=consumer, args=(q,))
    c2 = Process(target=consumer, args=(q,))
    c1.daemon = True
    c2.daemon = True

    # start
    p_l = [p1, p2, p3, c1, c2]
    for p in p_l:
        p.start()

    p1.join()
    p2.join()
    p3.join()
    print('main process')

# The main process waits for p1, p2, p3, and p1, p2, p3 in turn wait on c1, c2.
# When p1, p2, p3 end, all the data they sent into the queue has been
# fetched and processed, so c1 and c2 have nothing left to get.
# They should therefore end along with the main process, which is why
# they are set up as daemons.

Pipes:

from multiprocessing import Process, Pipe

def adder(p, name):
    server, client = p
    client.close()
    while True:
        try:
            x, y = server.recv()
        except EOFError:
            server.close()
            break
        res = x + y
        server.send(res)
    print('server done')

if __name__ == '__main__':
    server, client = Pipe()

    c1 = Process(target=adder, args=((server, client), 'c1'))
    c1.start()

    server.close()

    client.send((10, 20))
    print(client.recv())
    client.close()

    c1.join()
    print('main process')
# Note: the send() and recv() methods use the pickle module to serialize objects.

Pipes can be used for two-way communication; the request/response model typically used in a client/server, or a remote procedure call, can be written as a program of interacting processes.
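A minimal request/response sketch over a duplex pipe (the echo_server function is invented for illustration); as in the adder example above, each side closes the end it does not use so that recv() can see end-of-file:

```python
from multiprocessing import Process, Pipe

def echo_server(server, client):
    client.close()  # close the inherited copy of the client end
    while True:
        try:
            msg = server.recv()
        except EOFError:  # raised once every client end is closed
            break
        server.send(msg.upper())  # the "response"
    server.close()

if __name__ == '__main__':
    server, client = Pipe()  # duplex by default
    p = Process(target=echo_server, args=(server, client))
    p.start()
    server.close()           # the parent only plays the client role
    client.send('ping')      # the "request"
    print(client.recv())     # PING
    client.close()           # lets the child's recv() raise EOFError
    p.join()
```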

Sharing data:

from multiprocessing import Manager, Process, Lock

def work(d, lock):
    with lock:  # operating on the shared data without the lock corrupts it
        d['count'] -= 1

if __name__ == '__main__':
    lock = Lock()
    with Manager() as m:
        dic = m.dict({'count': 100})
        p_l = []
        for i in range(100):
            p = Process(target=work, args=(dic, lock))
            p_l.append(p)
            p.start()
        for p in p_l:
            p.join()
        print(dic)
        # with the lock: {'count': 0}; without it, updates get lost,
        # e.g. {'count': 94}

Operating on shared data between processes via a Manager.

Semaphores:

A mutex allows only one thread at a time to change data, whereas a Semaphore allows a fixed number of threads to change data at once. For example, if a toilet has 3 stalls, at most 3 people can use it at the same time, and those behind can enter only when someone inside comes out. If the semaphore is set to 3, each person acquiring the lock increments a count; when the count reaches 3, the next person has to wait. As soon as someone releases, another person can acquire the lock.

A semaphore looks a lot like the concept of a process pool, but a semaphore involves the concept of locking.

from multiprocessing import Process, Semaphore
import time, random

def go_wc(sem, user):
    sem.acquire()
    print('%s takes up a stall' % user)
    # simulate everyone taking a different amount of time;
    # 0 means someone squats down and gets right back up
    time.sleep(random.randint(0, 3))
    sem.release()

if __name__ == '__main__':
    sem = Semaphore(5)
    p_l = []
    for i in range(13):
        p = Process(target=go_wc, args=(sem, 'user%s' % i,))
        p.start()
        p_l.append(p)

    for i in p_l:
        i.join()
    print('============')

Semaphore (same as for threads)

Events:

A Python thread event is used by the main thread to control the execution of other threads; an event provides the methods set, wait, and clear.

Event handling mechanism: a global "flag" is defined. If the flag's value is False, the program blocks when it executes event.wait(); if the flag's value is True, event.wait() no longer blocks.

clear: sets the "flag" to False
set: sets the "flag" to True

#!/usr/bin/env python
# _*_ coding: utf-8 _*_

from multiprocessing import Process, Event
import time, random

def car(e, n):
    while True:
        if not e.is_set():  # False
            print('\033[31mred light\033[0m, car%s is waiting' % n)
            e.wait()
            print('\033[32mcar%s saw the green light\033[0m' % n)
            time.sleep(random.randint(3, 6))
            if not e.is_set():
                continue
            print('off you go, car', n)
            break

def police_car(e, n):
    while True:
        if not e.is_set():
            print('\033[31mred light\033[0m, car%s is waiting' % n)
            e.wait(1)
            print('the light is %s, the police car goes anyway, car%s' % (e.is_set(), n))
            break

def traffic_lights(e, interval):
    while True:
        time.sleep(interval)
        if e.is_set():
            e.clear()  # e.is_set() ----> False
        else:
            e.set()

if __name__ == '__main__':
    e = Event()
    # for i in range(10):
    #     p = Process(target=car, args=(e, i,))
    #     p.start()

    for i in range(5):
        p = Process(target=police_car, args=(e, i,))
        p.start()
    t = Process(target=traffic_lights, args=(e, 10))
    t.start()

    print('============')

Event (same as for threads)

Two. Python concurrent programming: multithreading

1. The threading module

The multiprocessing module mimics the interface of the threading module, so there is little difference in how the two are used.

Two ways to start threads:

# Method one
from threading import Thread
import time

def sayhi(name):
    time.sleep(2)
    print('%s say hello' % name)

if __name__ == '__main__':
    # usage matches the multiprocessing module: target is the function name,
    # args is a tuple, so a single argument needs a trailing comma
    t = Thread(target=sayhi, args=('egon',))
    t.start()
    print('main thread')

# Method two: wrap the thread in a class, inheriting Thread as the parent
# class and reusing all of the parent class's methods
from threading import Thread
import time

class Sayhi(Thread):
    def __init__(self, name):
        super().__init__()
        self.name = name

    def run(self):
        time.sleep(2)
        print('%s say hello' % self.name)


if __name__ == '__main__':
    t = Sayhi('egon')
    t.start()
    print('main thread')

Method two

Summary: starting a thread is faster than starting a child process, because a child thread does not need to request its own memory space; all child threads run inside the same process as the main thread and therefore share its PID.
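A small sketch of that point (the list and function names are just for illustration): a child thread appends to a list created in the main thread and records its PID, showing that threads share both memory and the process ID.

```python
import os
import threading

pids = []  # created in the main thread

def record():
    # runs in a child thread of the SAME process
    pids.append(os.getpid())

t = threading.Thread(target=record)
t.start()
t.join()
print(pids == [os.getpid()])  # True: shared memory, same PID
```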

Methods on a Thread instance:
# t.is_alive(): returns whether the thread is active (the old isAlive() spelling has been removed in recent Python versions).
# t.name: gets or sets the thread name (the old getName()/setName() accessors are deprecated).

Some of the functions provided by the threading module:
# threading.current_thread(): returns the current Thread object.
# threading.enumerate(): returns a list of the threads currently running; "running" excludes threads before they are started and after they have terminated.
# threading.active_count(): returns the number of running threads; the result equals len(threading.enumerate()).
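These helpers can be exercised in a short sketch (the idle worker and its Event are made up so the thread stays alive long enough to inspect):

```python
import threading

def idle(stop):
    stop.wait()  # park until told to finish

stop = threading.Event()
t = threading.Thread(target=idle, args=(stop,), name='worker-1')
t.start()

print(t.name, t.is_alive())  # worker-1 True
print(threading.active_count() == len(threading.enumerate()))  # True

stop.set()
t.join()
print(t.is_alive())  # False
```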

Daemon threads:

What is a daemon thread (daemon):

Both daemon processes and daemon threads watch the main program: once the main program's code finishes running, the system reclaims the daemons in memory. But there is a difference between the two:

#1 The main process is considered finished once its own code is done (the daemon processes are reclaimed at that point), but it then waits for all non-daemon child processes to finish so it can reclaim their resources (otherwise zombie processes would result) before it actually ends.
#2 The main thread is finished only after all other non-daemon threads have finished (the daemon threads are reclaimed at that point). Because the end of the main thread means the end of the process, and the process's resources are then reclaimed as a whole, the process must ensure all non-daemon threads have finished before it ends.
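The thread side of the difference can be sketched as follows (the endless background loop is invented): because the thread is a daemon, the process exits as soon as the main thread's code is done rather than waiting for it.

```python
import threading
import time

def background():
    while True:          # never finishes on its own
        time.sleep(0.1)

t = threading.Thread(target=background, daemon=True)
t.start()
print('main thread code done; the process exits without waiting for the daemon')
```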

Python's GIL (Global Interpreter Lock)

Introduction to the GIL

The GIL is in essence a mutex, and like every mutex it turns concurrent execution into serial execution, ensuring that shared data can be modified by only one task at a time, thereby keeping the data safe.

One thing is certain: to protect the safety of different data, you should use different locks.

To understand the GIL, first be clear on one point: every time you execute a python program, a separate process is created. For example, python test.py, python aaa.py and python bbb.py produce 3 different Python processes.

#1 Within a process all data is shared, and in particular the code itself is shared by all threads as data (all the code of test.py as well as all the code of the CPython interpreter).
For example: test.py defines a function work (its code content); all threads in the process can access that code, so we can start three threads whose target all point at that code, which means all of them can execute it.

#2 Every thread's task requires passing the task's code as an argument to the interpreter's code for execution. In other words, for any thread to run its own task, the first thing it needs is access to the interpreter's code.

Synchronization locks:

Three points to note:
#1. What threads grab is the GIL. The GIL amounts to execute permission: the thread that gets execute permission then acquires the mutex Lock. Other threads can still grab the GIL, but if they find the Lock has not been released they block, and must immediately hand back the execute permission (the GIL) they just obtained.

#2. join waits for everything, i.e. the whole run becomes serial, whereas a lock serializes only the part that modifies shared data, i.e. partial serialization. The fundamental way to guarantee data safety is to turn concurrency into serial execution; both join and a mutex achieve that, but the partial serialization of a mutex is without doubt more efficient.

#3. Be sure to read the classic analysis of the GIL and mutex locks at the end of this section.

GIL vs Lock

A sharp student may ask: since, as said before, Python already has a GIL that ensures only one thread can execute at any given moment, why do we need a Lock here?

First we need to agree on this: the purpose of a lock is to protect shared data, so that only one thread at a time may modify it.

From that we can conclude: different data should be protected by different locks.

Now the problem is clear: the GIL and Lock are two different locks protecting different data. The former works at the interpreter level (protecting interpreter-level data such as garbage-collection state), while the latter protects the data of the application you develop yourself. The GIL is obviously not responsible for that, so only a user-defined lock, i.e. Lock, can handle it.

Process analysis: all threads grab the GIL, or in other words, all threads grab execute permission:

Thread 1 grabs the GIL, gains execute permission, starts executing, and then acquires a Lock. Before it finishes, i.e. before thread 1 releases the Lock, thread 2 may grab the GIL and start executing; during execution it finds the Lock has not yet been released by thread 1, so thread 2 blocks and gives up its execute permission. Thread 1 may then reacquire the GIL and continue normal execution until it releases the Lock... This produces the effect of serial execution.

Since everything is serial anyway, why not just do this:

t1.start()

t1.join()

t2.start()

t2.join()

This is serial execution too, so why add a Lock? Understand that join waits for all of t1's code to finish, which amounts to locking all of t1's code, whereas a Lock locks only the portion of code that operates on the shared data.
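The difference is easy to see with a shared counter (all names are illustrative): the lock serializes only the increment, yet still guarantees no lost updates, while the rest of each thread's work can run concurrently.

```python
import threading

n = 0
lock = threading.Lock()

def add(times):
    global n
    for _ in range(times):
        with lock:   # only this critical section is serialized
            n += 1

threads = [threading.Thread(target=add, args=(50000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(n)  # 200000: no updates were lost
```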

Deadlock:

When writing code, an overly complex use of locks can sometimes produce a deadlock. This generally causes the program to hang at runtime, unable to continue executing, while also wasting system memory on resources that are no longer needed.
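A common cause is two locks acquired in opposite orders by different threads. A minimal sketch of the standard fix (all names invented): make every thread acquire the locks in the same global order, which removes the circular wait that produces deadlock.

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()

def worker(name):
    # every thread takes lock_a first, then lock_b: no circular wait
    with lock_a:
        with lock_b:
            print('%s holds both locks' % name)

t1 = threading.Thread(target=worker, args=('t1',))
t2 = threading.Thread(target=worker, args=('t2',))
t1.start(); t2.start()
t1.join(); t2.join()
```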

