This article mainly introduces deadlocks, reentrant locks, and mutex locks in Python. The GIL issues that come up in Python thread programming are a commonplace topic and are not covered here; readers can refer to other material on that subject.
I. deadlock
Simply put, a deadlock occurs when a resource is requested repeatedly but is never released by the callers holding it, so every caller ends up waiting forever. Two common deadlock scenarios are illustrated below with examples.
1. iterative deadlock
In this case, a single thread "iteratively" requests the same resource again while it is still holding it, and a deadlock occurs:
import threading
import time

class MyThread(threading.Thread):
    def run(self):
        global num
        time.sleep(1)
        if mutex.acquire(1):
            num = num + 1
            msg = self.name + ' set num to ' + str(num)
            print(msg)
            mutex.acquire()   # second acquire on a plain Lock: blocks forever
            mutex.release()
            mutex.release()

num = 0
mutex = threading.Lock()

def test():
    for i in range(5):
        t = MyThread()
        t.start()

if __name__ == '__main__':
    test()
In the example above, the lock is acquired for the first time in the if test inside run, and it is acquired again before that first acquire has been released, so the lock can never be released and a deadlock results. In this example you can comment out the two lines below print (the extra acquire and its matching release) and the program runs normally. Alternatively, the problem can be solved with a reentrant lock, which is covered later.
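As a side note, one way to observe this self-deadlock instead of letting the program hang is to acquire with a timeout; in Python 3.2+ threading.Lock.acquire accepts a timeout argument. A minimal sketch, separate from the example above:

import threading

mutex = threading.Lock()
mutex.acquire()                                   # first acquisition succeeds
got_it = mutex.acquire(timeout=2)                 # same thread tries again: times out after ~2 seconds
print('second acquire succeeded? %s' % got_it)    # prints False
if got_it:
    mutex.release()
mutex.release()                                   # release the first acquisition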
2. Mutual call deadlock
The deadlock in the example above is caused by acquiring the same lock repeatedly within a single function. In another case, two functions each hold one of the resources and wait for the other to finish: if two threads each occupy part of the resources and at the same time wait for the resources held by the other, a deadlock occurs.
import threading
import time

class MyThread(threading.Thread):
    def do1(self):
        global resA, resB
        if mutexA.acquire():
            msg = self.name + ' got resA'
            print(msg)
            if mutexB.acquire(1):
                msg = self.name + ' got resB'
                print(msg)
                mutexB.release()
            mutexA.release()

    def do2(self):
        global resA, resB
        if mutexB.acquire():
            msg = self.name + ' got resB'
            print(msg)
            if mutexA.acquire(1):
                msg = self.name + ' got resA'
                print(msg)
                mutexA.release()
            mutexB.release()

    def run(self):
        self.do1()
        self.do2()

resA = 0
resB = 0
mutexA = threading.Lock()
mutexB = threading.Lock()

def test():
    for i in range(5):
        t = MyThread()
        t.start()

if __name__ == '__main__':
    test()
This deadlock example is a little more complicated: do1 acquires mutexA and then mutexB, while do2 acquires them in the opposite order, so two threads can each grab their first lock and then wait forever for the lock the other one is holding.
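A common way to avoid this mutual-wait pattern, not covered in the article itself, is to agree on one global order in which locks are taken, so no thread ever holds the second lock while waiting for the first. A minimal sketch of that idea:

import threading

mutexA = threading.Lock()
mutexB = threading.Lock()

def do_both(name):
    # every code path takes mutexA first and mutexB second, so two threads
    # can never each hold one lock while waiting for the other
    with mutexA:
        print(name + ' got resA')
        with mutexB:
            print(name + ' got resB')

threads = [threading.Thread(target=do_both, args=('thread-%d' % i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()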
II. reentrant locks
To support multiple requests for the same resource from the same thread, Python provides the "reentrant lock": threading.RLock. An RLock internally maintains a Lock and a counter variable: the counter records how many times acquire has been called, so the same thread can acquire the resource repeatedly, and other threads can obtain the resource only after every acquire by the owning thread has been matched by a release. Taking example 1, if RLock is used instead of Lock, no deadlock occurs:
import threading
import time

class MyThread(threading.Thread):
    def run(self):
        global num
        time.sleep(1)
        if mutex.acquire(1):
            num = num + 1
            msg = self.name + ' set num to ' + str(num)
            print(msg)
            mutex.acquire()   # reacquiring an RLock in the same thread just bumps its counter
            mutex.release()
            mutex.release()

num = 0
mutex = threading.RLock()

def test():
    for i in range(5):
        t = MyThread()
        t.start()

if __name__ == '__main__':
    test()
The only difference from the earlier example is that threading.Lock() has been replaced by threading.RLock().
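To make the counter behaviour concrete, here is a small sketch of nested acquires on an RLock inside one thread (my own illustration, not code from the original article):

import threading

rlock = threading.RLock()

def nested():
    with rlock:                      # first acquire: counter goes to 1
        with rlock:                  # same thread acquires again: counter goes to 2, no deadlock
            print('inside the nested acquire')
        # inner release: counter back to 1, lock still held by this thread
    # outer release: counter 0, other threads may now acquire it

t = threading.Thread(target=nested)
t.start()
t.join()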
III. mutex lock
The Python threading module provides two kinds of locks: threading.Lock and threading.RLock. The usage of the two is basically the same, as follows:
lock = threading.Lock()
lock.acquire()
# do something ...
lock.release()
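In practice the usual idiom (standard library behaviour, not something shown in the original article) is to use the lock as a context manager, so that it is released even if the protected code raises an exception:

import threading

lock = threading.Lock()

with lock:          # acquire() on entry, release() on exit, even on exceptions
    pass            # do something while holding the lock

The same with-statement form works for RLock as well.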
To use an RLock, simply change threading.Lock() to threading.RLock(); that part is easy to understand. Consider the following code:
[root@361way lock]# cat lock1.py
#!/usr/bin/env python
# coding=utf-8
import threading                     # import the threading module
import time                          # import the time module

class mythread(threading.Thread):    # create a thread class through inheritance
    def __init__(self, threadname):  # initialization method
        # call the parent class's initialization method
        threading.Thread.__init__(self, name=threadname)

    def run(self):                   # override the run method
        global x                     # declare that x is the global variable
        for i in range(3):
            x = x + 1
        time.sleep(5)                # sleep the thread for 5 seconds
        print(x)

tl = []                              # define a list of threads
for i in range(10):
    t = mythread(str(i))             # instantiate the class
    tl.append(t)                     # add the thread object to the list

x = 0                                # initialize x to 0
for i in tl:
    i.start()                        # start the threads
The execution result here is probably different from what you expect. It is as follows:
[root@361way lock]# python lock1.py
30
30
30
30
30
30
30
30
30
30
Why is every result 30? The key lies in the global declaration and the time.sleep line.
1. because x is a global variable, every thread keeps adding to the same x, so the value each thread sees is the result of all the increments executed so far;
2. because the code runs in multiple threads, while the earlier threads sit in their sleep the remaining threads also complete their increments within those five seconds, and all of them end up waiting at the print; since x is global, it has already reached its final value by then, so every thread prints 30;
3. to convince yourself, try commenting out the sleep and similar lines and run the code again; you will see that the results differ.
In practical applications, such as crawler programs, similar sleep-and-wait situations also arise. When calls must happen in a given order, or output must be printed in order, concurrent competition produces out-of-order results or output. This is where the concept of a lock comes in; the code above is modified as follows:
[root@361way lock]# cat lock2.py
#!/usr/bin/env python
# coding=utf-8
import threading                     # import the threading module
import time                          # import the time module

class mythread(threading.Thread):    # create a thread class through inheritance
    def __init__(self, threadname):  # initialization method
        threading.Thread.__init__(self, name=threadname)

    def run(self):                   # override the run method
        global x                     # declare that x is the global variable
        lock.acquire()               # call the lock's acquire method
        for i in range(3):
            x = x + 1
        time.sleep(5)                # sleep the thread for five seconds
        print(x)
        lock.release()               # call the lock's release method

lock = threading.Lock()              # instantiate the lock

tl = []                              # define a list of threads
for i in range(10):
    t = mythread(str(i))             # instantiate the class
    tl.append(t)                     # add the thread object to the list

x = 0                                # initialize x to 0
for i in tl:
    i.start()                        # start the threads in turn
The execution result is as follows:
[root@361way lock]# python lock2.py
3
6
9
12
15
18
21
24
27
30
Locking causes blocking: the concurrently running threads now produce their output in order, and even if a later thread runs quickly it must wait for the previous one to finish before it can proceed. Written this way it looks a lot like a queue, and indeed, in many situations where locks are used, a queue can be used to solve the problem instead.
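As a rough illustration of that last point (my own sketch, not code from the original article), the standard queue module can express the same ordered hand-off: worker threads put their results into a Queue and a single consumer prints them, so the output never interleaves:

import threading
import queue   # named Queue in Python 2

q = queue.Queue()

def worker(n):
    # each worker does its piece of work and hands the result to the queue
    q.put('worker %d done' % n)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(10)]
for t in threads:
    t.start()

# a single consumer drains the queue in FIFO order, so printing stays orderly
for _ in range(10):
    print(q.get())

for t in threads:
    t.join()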