python -- Synchronization Lock / Recursive Lock / Coroutine

1 Synchronization Lock
Locks are often used to synchronize access to shared resources. Create a Lock object for each shared resource and call its acquire() method when you need to access the resource (if another thread has already acquired the lock, the current thread waits until it is released). When you are done with the resource, call release() to release the lock:
import threading
import time

def sub_num():
    global num
    lock.acquire()        # acquire the lock before touching the shared data
    temp = num
    time.sleep(0.01)
    num = temp - 1        # perform the -1 operation on the shared variable
    lock.release()        # release the lock only after the update is finished

num = 100
l = []
lock = threading.Lock()
for i in range(100):
    t = threading.Thread(target=sub_num)
    t.start()
    l.append(t)
for t in l:
    t.join()
print("Result:", num)

Execution result: Result: 0
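For contrast, here is a minimal sketch (added here, not part of the original example) of the same program with the lock calls removed; because each thread reads num, sleeps, and then writes its result back, updates are lost and the final value is normally far greater than 0:

import threading
import time

def sub_num():
    global num
    # no lock: read, sleep, then write back -- a classic lost-update race
    temp = num
    time.sleep(0.01)
    num = temp - 1

num = 100
threads = []
for i in range(100):
    t = threading.Thread(target=sub_num)
    t.start()
    threads.append(t)
for t in threads:
    t.join()
print("Result:", num)   # typically prints something like 99 instead of 0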
2 Deadlock
A deadlock occurs when two or more processes or threads, in the course of execution, end up waiting for each other because they are competing for resources; without outside intervention, none of them can make progress. The system is then said to be in a deadlock state, and the processes that wait forever for one another are called deadlocked processes.
import threading
import time

mutexA = threading.Lock()
mutexB = threading.Lock()

class MyThread(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)

    def run(self):
        self.fun1()
        self.fun2()

    def fun1(self):
        mutexA.acquire()
        print("I am %s, get res: %s --- %s" % (self.name, "ResA", time.time()))
        mutexB.acquire()
        print("I am %s, get res: %s --- %s" % (self.name, "ResB", time.time()))
        mutexB.release()
        mutexA.release()

    def fun2(self):
        mutexB.acquire()
        print("I am %s, get res: %s --- %s" % (self.name, "ResB", time.time()))
        time.sleep(0.2)
        mutexA.acquire()
        print("I am %s, get res: %s --- %s" % (self.name, "ResA", time.time()))
        mutexA.release()
        mutexB.release()

if __name__ == '__main__':
    print("Start ------------ %s" % time.time())
    for i in range(0, 10):
        my_thread = MyThread()
        my_thread.start()

Execution result (the program then hangs in a deadlock):
Start ------------ 1494311688.1136441
I am Thread-1, get res: ResA --- 1494311688.1136441
I am Thread-1, get res: ResB --- 1494311688.1136441
I am Thread-1, get res: ResB --- 1494311688.1136441
I am Thread-2, get res: ResA --- 1494311688.1136441
3 Recursive lock
In Python, to let the same thread request the same resource more than once, Python provides the reentrant lock RLock. An RLock internally maintains a Lock and a counter variable; the counter records how many times acquire() has been called, so the resource can be acquired repeatedly by the same thread. Only when every acquire() by that thread has been matched by a release() can other threads obtain the resource.
import threading
import time

rlock = threading.RLock()

class MyThread(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)

    def run(self):
        self.fun1()
        self.fun2()

    def fun1(self):
        rlock.acquire()   # if the lock is held by another thread, block here until it is released
        print("I am %s, get res: %s --- %s" % (self.name, "ResA", time.time()))
        rlock.acquire()   # counter = 2
        print("I am %s, get res: %s --- %s" % (self.name, "ResB", time.time()))
        rlock.release()   # counter - 1
        rlock.release()   # counter - 1 = 0

    def fun2(self):
        rlock.acquire()   # counter = 1
        print("I am %s, get res: %s --- %s" % (self.name, "ResB", time.time()))
        time.sleep(0.2)
        rlock.acquire()   # counter = 2
        print("I am %s, get res: %s --- %s" % (self.name, "ResA", time.time()))
        rlock.release()   # counter - 1
        rlock.release()   # counter - 1

if __name__ == '__main__':
    print("Start ----------- %s" % time.time())
    for i in range(0, 10):
        my_thread = MyThread()
        my_thread.start()
4 Event Object
A key feature of threads is that each thread runs independently and its state is unpredictable. For coordination we can use the Event object from the threading library, which contains a signal flag that threads can set; this lets a thread wait until something has happened. Initially the flag in an Event object is set to False. If a thread waits on an Event whose flag is False, the thread blocks until the flag becomes True. When a thread sets the flag of an Event object to True, all threads waiting on that Event are woken up. A thread that waits on an Event whose flag is already True returns immediately and continues execution.
event.is_set(): returns the current value of the event's flag.
event.wait(): blocks the calling thread if event.is_set() == False.
event.set(): sets the event's flag to True; all threads blocked on the event are woken up and become ready, waiting for the operating system to schedule them.
event.clear(): resets the event's flag to False.
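As a tiny illustration (a sketch added here, not from the original text) of these four calls on a single Event object:

import threading

event = threading.Event()
print(event.is_set())   # False: the flag starts out cleared
event.set()             # flag becomes True; any waiting threads are woken up
event.wait()            # returns immediately because the flag is already True
print(event.is_set())   # True
event.clear()           # flag is reset to False; wait() would block again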
For example, suppose several worker threads read data from a Redis queue and therefore need to connect to the Redis service. Normally, each thread's code would have to attempt the connection (and any reconnection) on its own. Instead, we can use the threading.Event mechanism to coordinate the worker threads' connection attempts: the main thread tries to connect to the Redis service, and only if that succeeds does it trigger the event, after which the worker threads attempt their own connections.
# An Event is used for communication between threads.
# event.wait() blocks while the flag is False (the default).
# event.set() changes the flag to True, and waiting threads continue execution.
import threading
import time
import logging

logging.basicConfig(level=logging.DEBUG,
                    format="(%(threadName)-10s) %(message)s",)

def worker(event):
    logging.debug("Waiting for redis ready...")
    event.wait()   # if flag == False, block here until flag == True
    logging.debug("redis ready, and connect to redis server and do some work [%s]", time.time())
    time.sleep(1)

def main():
    redis_ready = threading.Event()   # flag = False
    t1 = threading.Thread(target=worker, args=(redis_ready,), name="t1")
    t1.start()
    t2 = threading.Thread(target=worker, args=(redis_ready,), name="t2")
    t2.start()
    logging.debug("first of all, check redis server, make sure it is OK, and then trigger the redis ready event")
    time.sleep(3)   # simulate the check process
    redis_ready.set()

if __name__ == '__main__':
    main()

Execution result:
(t1) Waiting for redis ready...
(t2) Waiting for redis ready...
(MainThread) first of all, check redis server, make sure it is OK, and then trigger the redis ready event
(t1) redis ready, and connect to redis server and do some work [1494314141.0479438]
(t2) redis ready, and connect to redis server and do some work [1494314141.0479438]
The wait() method of threading.Event also accepts a timeout parameter. By default, if the event never occurs, wait() blocks indefinitely; with the timeout parameter, wait() returns once the blocking time exceeds the given value. In the scenario above, if the Redis server never starts, we would like the worker threads to print a log message periodically to remind us that there is currently no Redis service to connect to, and the timeout parameter lets us achieve exactly that.
import threading
import time
import logging

logging.basicConfig(level=logging.DEBUG,
                    format="(%(threadName)-10s) %(message)s",)

def worker(event):
    logging.debug("Waiting for redis ready...")
    while not event.is_set():   # while the flag is still False
        event.wait(3)           # wake up every three seconds and print a reminder
        logging.debug("wait...")
    logging.debug("redis ready, and connect to redis server and do some work [%s]", time.ctime())
    time.sleep(1)

def main():
    redis_ready = threading.Event()   # flag = False
    t1 = threading.Thread(target=worker, args=(redis_ready,), name="t1")
    t1.start()
    t2 = threading.Thread(target=worker, args=(redis_ready,), name="t2")
    t2.start()
    logging.debug("first of all, check redis server, make sure it is OK, and then trigger the redis ready event")
    time.sleep(3)   # simulate the check process
    redis_ready.set()

if __name__ == '__main__':
    main()

Execution result:
(t1) Waiting for redis ready...
(t2) Waiting for redis ready...
(MainThread) first of all, check redis server, make sure it is OK, and then trigger the redis ready event
(t1) wait...
(t1) redis ready, and connect to redis server and do some work [Tue May  9 16:56:18 2017]
(t2) wait...
(t2) redis ready, and connect to redis server and do some work [Tue May  9 16:56:18 2017]
5 Semaphore
Semaphore manages a built-in counter:
The built-in counter is decremented by 1 whenever acquire() is called.
The built-in counter is incremented by 1 whenever release() is called.
The counter can never go below 0; when the counter is 0, acquire() blocks the thread until some other thread calls release().
Example (only 5 threads can hold the semaphore at the same time, i.e. the maximum number of concurrent connections is limited to 5):
import threading
import time

semaphore = threading.Semaphore(5)

def func():
    semaphore.acquire()   # works like a lock, but up to five threads may hold it at once
    print(threading.current_thread().name + " get semaphore")
    time.sleep(3)         # wait 3 seconds
    semaphore.release()   # then the five threads release the semaphore

for i in range(20):
    t1 = threading.Thread(target=func)
    t1.start()
6 multiprocessing Module
Because of the GIL, multithreading in Python is not true multithreading; in most cases, fully exploiting the resources of a multi-core CPU in Python requires multiple processes.
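To make this concrete, here is a rough sketch (added here; the count() function and the numbers are only an illustration) that times the same CPU-bound loop in two threads versus two processes; on a multi-core machine the process version normally finishes in roughly half the time, while the thread version does not, because the GIL lets only one thread execute Python bytecode at a time:

from multiprocessing import Process
from threading import Thread
import time

def count(n):
    # pure CPU-bound work; the GIL is never released for long
    while n > 0:
        n -= 1

if __name__ == '__main__':
    start = time.time()
    jobs = [Thread(target=count, args=(10000000,)) for _ in range(2)]
    for j in jobs:
        j.start()
    for j in jobs:
        j.join()
    print("two threads:  ", time.time() - start)

    start = time.time()
    jobs = [Process(target=count, args=(10000000,)) for _ in range(2)]
    for j in jobs:
        j.start()
    for j in jobs:
        j.join()
    print("two processes:", time.time() - start)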
The multiprocessing package is Python's multi-process management package. Like threading.Thread, it provides a multiprocessing.Process object for creating a process, which can run a function defined in the Python program. The Process object is used the same way as the Thread object and also has start(), run(), and join() methods. In addition, the multiprocessing package contains Lock, Event, Semaphore, and Condition classes (these objects can be passed to each process through arguments, just as in multithreading) for synchronizing processes, with the same names as their counterparts in the threading package. As a result, a large part of multiprocessing uses the same API as threading, just applied to a multi-process context.
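As a small added sketch (not part of the original article) of passing one of these synchronization objects to each process as an argument, the following hands a multiprocessing.Lock to several worker processes so that their output does not interleave:

from multiprocessing import Process, Lock

def printer(lock, n):
    lock.acquire()          # only one process prints at a time
    try:
        print("hello from process", n)
    finally:
        lock.release()

if __name__ == '__main__':
    lock = Lock()           # the Lock object is handed to every child process via args
    workers = [Process(target=printer, args=(lock, i)) for i in range(5)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()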
Using Process in Python: calling the Process class directly
from multiprocessing import Process
import time

def f(name):
    print("hello", name, time.ctime())
    time.sleep(5)

if __name__ == '__main__':
    p_l = []
    for i in range(3):
        p = Process(target=f, args=("alvin:%s" % i,))
        p_l.append(p)
        p.start()
    for p in p_l:
        p.join()
    print("end")

Execution result:
hello alvin:0 Tue May  9 17:49:10 2017
hello alvin:1 Tue May  9 17:49:10 2017
hello alvin:2 Tue May  9 17:49:10 2017
end
Inheriting from the Process class:
from multiprocessing import Process
import time

class MyProcess(Process):
    def __init__(self):
        super().__init__()

    def run(self):
        print("hello", self.name, time.ctime())
        time.sleep(1)

if __name__ == '__main__':
    p_l = []
    for i in range(3):
        p = MyProcess()
        p.start()
        p_l.append(p)
    for p in p_l:
        p.join()
    print("end")
Process class
Constructor:
Process([group [, target [, name [, args [, kwargs]]]]])
group: the thread group; not yet implemented, and the library documentation requires it to be None.
target: the callable object to be executed.
name: the process name.
args/kwargs: the arguments to pass to the callable.
Instance methods:
is_alive(): returns whether the process is still running.
join([timeout]): blocks the calling process until the process whose join() method is called terminates or until the optional timeout is reached.
start(): marks the process as ready, waiting to be scheduled by the CPU.
run(): start() calls the run() method; if the instance was created without a target, start() executes this default run() method.
terminate(): stops the worker process immediately, whether or not its task is finished.
Attributes:
daemon: the same idea as a thread's setDaemon() setting.
name: the process name.
pid: the process ID.
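A short sketch (added for illustration; the sleeping worker function is hypothetical) showing these methods and attributes in use:

from multiprocessing import Process
import time

def worker():
    time.sleep(10)

if __name__ == '__main__':
    p = Process(target=worker, name="demo-worker")
    p.daemon = True                       # must be set before start(); the child dies with the parent
    p.start()
    print(p.name, p.pid, p.is_alive())    # e.g. demo-worker 12345 True
    p.terminate()                         # stop the worker immediately, finished or not
    p.join()                              # wait for the termination to take effect
    print(p.is_alive())                   # False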
from multiprocessing import Process
import os
import time

def info(name):
    print("name:", name)
    print("parent process:", os.getppid())
    print("process id:", os.getpid())
    print("-----------")
    time.sleep(2)

def foo(name):
    info(name)

if __name__ == '__main__':
    info("main process line")
    p1 = Process(target=info, args=("alvin",))
    p2 = Process(target=info, args=("egon",))
    p1.start()
    p2.start()
    p1.join()
    p2.join()
    print("ending")

Execution result:
name: main process line
parent process: 8032
process id: 9400
-----------
name: alvin
parent process: 9400
process id: 9252
-----------
name: egon
parent process: 9400
process id: 10556
-----------
ending
Using the tasklist command you can look up the process name corresponding to each of the process IDs (PIDs) printed above.
7 Coroutine
Advantages:
1 Everything runs in a single thread, so there is no thread-switching overhead.
2 There is no need for locks at all.
yield and coroutines: yield can be used to produce concurrency, as in the following producer/consumer example.
import time

def consumer():       # the consumer
    r = ""
    while True:
        # 3. the consumer receives the message via yield, processes it,
        #    and returns the result through yield
        n = yield r
        if not n:
            return
        print("[CONSUMER] --consuming %s..." % n)
        time.sleep(1)
        r = "200 OK"

def produce(c):       # the producer
    next(c)           # 1. start the generator (advance to the first yield)
    n = 0
    while n < 5:
        n = n + 1
        print("[PRODUCER] --producing %s..." % n)
        # 2. once something has been produced, switch to the consumer via c.send(n)
        cr = c.send(n)
        # 4. the producer gets the consumer's result and produces the next message
        print("[PRODUCER] consumer return: %s" % cr)
    # 5. when the producer decides to stop producing, it closes the consumer
    #    with c.close(), and the whole process ends
    c.close()

if __name__ == '__main__':
    c = consumer()    # create the generator
    produce(c)

Execution result:
[PRODUCER] --producing 1...
[CONSUMER] --consuming 1...
[PRODUCER] consumer return: 200 OK
[PRODUCER] --producing 2...
[CONSUMER] --consuming 2...
[PRODUCER] consumer return: 200 OK
[PRODUCER] --producing 3...
[CONSUMER] --consuming 3...
[PRODUCER] consumer return: 200 OK
[PRODUCER] --producing 4...
[CONSUMER] --consuming 4...
[PRODUCER] consumer return: 200 OK
[PRODUCER] --producing 5...
[CONSUMER] --consuming 5...
[PRODUCER] consumer return: 200 OK
The greenlet module is the foundation of coroutines.
The main idea of the greenlet mechanism is the same as that of the yield statement in a generator or coroutine function: execution of the function is suspended and later resumed. greenlet is a basic Python library that implements what we call coroutines.
from greenlet import greenlet

def test1():
    print(12)
    gr2.switch()
    print(34)
    gr2.switch()

def test2():
    print(56)
    gr1.switch()
    print(78)

gr1 = greenlet(test1)
gr2 = greenlet(test2)
gr1.switch()
A framework based on greenlet: the gevent module implements coroutines.
Python provides basic support for coroutines through yield, but it is not complete; the third-party gevent package gives Python fairly complete coroutine support.
When a greenlet encounters an IO operation, such as accessing the network, it automatically switches to another greenlet, and switches back to continue at the appropriate time once the IO operation completes. Because IO operations are very time-consuming and often leave the program in a waiting state, having gevent switch coroutines for us automatically guarantees that some greenlet is always running rather than waiting for IO.
import gevent
import time

def foo():
    print("running in foo")
    gevent.sleep(2)
    print("switch to foo again")

def bar():
    print("switch to bar")
    gevent.sleep(5)
    print("switch to bar again")

start = time.time()
gevent.joinall([gevent.spawn(foo),
                gevent.spawn(bar)])
print(time.time() - start)
Of course, in real code we would not call gevent.sleep() to switch coroutines; instead, gevent switches automatically whenever an IO operation is performed, as in the following code:
from gevent import monkey
monkey.patch_all()

import gevent
from urllib import request
import time

def f(url):
    print("GET: %s" % url)
    resp = request.urlopen(url)
    data = resp.read()
    print("%d bytes received from %s." % (len(data), url))

start = time.time()
gevent.joinall([
    gevent.spawn(f, 'https://itk.org/'),
    gevent.spawn(f, 'https://www.github.com/'),
    gevent.spawn(f, 'https://zhihu.com/'),
])
# Compare with calling the same functions sequentially:
# f('https://itk.org/')
# f('https://www.github.com/')
# f('https://zhihu.com/')
print(time.time() - start)