One: The concurrent.futures module
This module provides both process pools and thread pools.
The difference between a thread pool and a process pool: the number of workers in a process pool should generally stay within the number of CPU cores, while the number of workers in a thread pool can be set more freely; a common default is five times the number of cores.
Related usage:
ThreadPoolExecutor: creates a thread pool.
ProcessPoolExecutor: creates a process pool.
Executor: an abstract base class; the two classes above inherit from it.
submit: submits a task asynchronously.
map: a simplified way to submit, like submit combined with a loop; it takes a function and an iterable, and yields the results themselves rather than Future objects.
shutdown: frees the pool's resources once execution finishes. With wait=True it blocks until all tasks have completed before returning; with wait=False it returns immediately, and the resources are reclaimed once the tasks finish. Either way, no new tasks can be submitted afterwards.
result: retrieves a task's result from its Future, blocking until the result is ready.
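A minimal sketch of the shutdown behaviour described above, using wait=False (the doubling worker is illustrative): the call returns immediately, already-submitted tasks still run to completion, and new submissions are rejected.

```python
import concurrent.futures
import time

def work(n):
    # Simulate a short task.
    time.sleep(0.2)
    return n * 2

executor = concurrent.futures.ThreadPoolExecutor(2)
futures = [executor.submit(work, i) for i in range(4)]

executor.shutdown(wait=False)      # returns at once; pending tasks keep running

try:
    executor.submit(work, 99)      # submitting after shutdown is not allowed
except RuntimeError as e:
    print('rejected:', e)

# result() still blocks until each task finishes.
print([f.result() for f in futures])   # [0, 2, 4, 6]
```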
import concurrent.futures
import os
import time
import random

def work(n):
    print('%s is working' % os.getpid())
    time.sleep(random.random())
    return n

if __name__ == '__main__':
    executor = concurrent.futures.ProcessPoolExecutor(4)
    futures = []
    for i in range(10):
        future = executor.submit(work, i)
        futures.append(future)
    executor.shutdown(wait=True)
    print('%s is main' % os.getpid())
    for future in futures:
        print(future.result())
A more compact version, with a thread pool:
import concurrent.futures
import os
import time
import random

def work(n):
    print('%s is working' % os.getpid())
    time.sleep(random.random())
    return n

if __name__ == '__main__':
    executor = concurrent.futures.ThreadPoolExecutor(4)
    futures = [executor.submit(work, i) for i in range(10)]
    executor.shutdown(wait=True)
    print('%s is main' % os.getpid())
    for future in futures:
        print(future.result())
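With the map shorthand described earlier, the same pattern gets shorter still; a minimal sketch (the squaring worker is illustrative):

```python
import concurrent.futures
import time
import random

def work(n):
    # Simulate a short task, then return a computed value.
    time.sleep(random.random() / 10)
    return n * n

with concurrent.futures.ThreadPoolExecutor(4) as executor:
    # map submits work(i) for each i and yields plain results in
    # submission order, rather than Future objects.
    results = list(executor.map(work, range(5)))

print(results)  # [0, 1, 4, 9, 16]
```

Note that using the executor as a context manager calls shutdown(wait=True) automatically on exit.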
Callback function: add_done_callback takes a callback function in parentheses. The callback receives the Future object of the submitted task; inside the callback, call result() on that Future to get the task's return value. It is usually chained directly onto the submit call.
For example:
import concurrent.futures
import requests
import time
import random
import os

def work(url):
    print('%s get %s' % (os.getpid(), url))
    ret = requests.get(url)
    time.sleep(3)
    if ret.status_code == 200:
        print('%s done %s' % (os.getpid(), url))
        return {'url': url, 'text': ret.text}

def foo(ret):
    ret = ret.result()
    print('%s foo %s' % (os.getpid(), ret['url']))
    time.sleep(1)
    res = '%s; length: %s' % (ret['url'], len(ret['text']))
    with open('a.txt', 'a', encoding='utf-8') as f:
        f.write(res + '\n')

if __name__ == '__main__':
    url_list = [
        'http://tool.chinaz.com/regex/',
        'http://www.cnblogs.com/fangjie0410/',
        'http://www.cnblogs.com/xuanan',
        'http://www.cnblogs.com/bg0131/p/6430943.html',
        'http://www.cnblogs.com/wupeiqi/',
        'http://www.cnblogs.com/linhaifeng/',
        'http://www.cnblogs.com/Eva-J/articles/7125925.html',
        'http://www.cnblogs.com/Eva-J/articles/6993515.html',
    ]
    executor = concurrent.futures.ProcessPoolExecutor()
    for i in url_list:
        executor.submit(work, i).add_done_callback(foo)
    print('main', os.getpid())
Exception handling:
Exception: the base exception class.
concurrent.futures: an exception raised inside a submitted task does not surface on its own; it is stored on the Future and only re-raised when result() is called, so uncaught task exceptions can go unnoticed.
raise: throws an exception.
import concurrent.futures
import os
import time
import random

def work(n):
    print('%s is working' % os.getpid())
    time.sleep(random.random())
    raise Exception
    return n

if __name__ == '__main__':
    executor = concurrent.futures.ThreadPoolExecutor(4)
    futures = []
    for i in range(10):
        # result() re-raises the exception from the task here.
        future = executor.submit(work, i).result()
        futures.append(future)
    executor.shutdown(wait=True)
    print('%s is main' % os.getpid())
    for future in futures:
        print(future)
cancel: cancels a task that has not started yet. A task that is already running cannot be cancelled; cancel() then returns False.
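A minimal sketch of the cancel behaviour: with a single-worker pool, a queued task can still be cancelled, but a running one cannot.

```python
import concurrent.futures
import time

def work(n):
    time.sleep(0.5)
    return n

executor = concurrent.futures.ThreadPoolExecutor(1)   # one worker: later tasks queue up
first = executor.submit(work, 0)
second = executor.submit(work, 1)
time.sleep(0.1)                  # give the worker time to start the first task

print(second.cancel())           # True: still queued, so it can be cancelled
print(first.cancel())            # False: already running, cannot be cancelled
print(second.cancelled())        # True
executor.shutdown(wait=True)
```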
Two: Deadlock and recursive locks
What is a deadlock: two (or more) threads each hold a lock the other needs; each is missing one lock and cannot release the one it holds, so both block forever.
Lock: a mutex, but prone to deadlock when nested. It can only be acquired once: as long as the lock has not been released (release), it cannot be acquired again.
The deadlock phenomenon is as follows:
import threading
import time
import random

l1 = threading.Lock()
l2 = threading.Lock()

class Func(threading.Thread):
    def run(self):
        self.aa()
        self.bb()

    def aa(self):
        l1.acquire()
        print(111)
        l2.acquire()
        print(222)
        l2.release()
        l1.release()

    def bb(self):
        l2.acquire()
        print(333)
        time.sleep(random.random())
        l1.acquire()
        print(444)
        l1.release()
        l2.release()

if __name__ == '__main__':
    for i in range(10):
        ret = Func()
        ret.start()
RLock: a recursive (reentrant) lock; the same thread can acquire it multiple times, and one RLock can be bound to several names (r1 = r2 = threading.RLock()).
The same pattern with a recursive lock (no deadlock):
import threading
import time
import random

r1 = r2 = threading.RLock()

class Func(threading.Thread):
    def run(self):
        self.aa()
        self.bb()

    def aa(self):
        r1.acquire()
        print(111)
        r2.acquire()
        print(222)
        r2.release()
        r1.release()

    def bb(self):
        r1.acquire()
        print(333)
        time.sleep(random.random())
        r2.acquire()
        print(444)
        r2.release()
        r1.release()

if __name__ == '__main__':
    for i in range(10):
        ret = Func()
        ret.start()
Recursive lock: each acquire increments an internal reference count by 1, and each release decrements it by 1. The same thread can acquire it any number of times, and as long as the count is not 0, no other thread can grab the lock.
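A minimal sketch of that counter behaviour: the owning thread can re-acquire freely, and the lock only becomes free again once every acquire has been matched by a release.

```python
import threading

rlock = threading.RLock()

rlock.acquire()          # count becomes 1
rlock.acquire()          # same thread re-acquires: count becomes 2
rlock.release()          # count back to 1; the lock is still held
rlock.release()          # count 0: other threads may now acquire it

# A plain Lock would have deadlocked on the second acquire above.
acquired = rlock.acquire(blocking=False)
print(acquired)          # True: the lock is free again
rlock.release()
```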
Three: Semaphores
What is a semaphore: essentially a lock that allows a fixed number of holders at once, producing a bounded form of concurrency. Tasks beyond that number can only grab the lock after one of the holders releases it. Unlike a pool, all the threads or processes are still created; the semaphore only limits how many of them run inside the guarded section at a time.
Semaphore: creates a semaphore and manages a built-in counter.
import threading
import time
import random

def task(n):
    with sm:
        time.sleep(random.randint(1, 5))
        print('%s is tasking' % threading.current_thread().name)

if __name__ == '__main__':
    sm = threading.Semaphore(5)
    for i in range(20):
        t = threading.Thread(target=task, args=(i,))
        t.start()
The number passed to Semaphore specifies how many threads can hold it at once.
Four: Events (Event)
Event: creates an event object, used for communication between threads. The Event object implements a simple inter-thread communication mechanism: one thread sets the signal, clears it, or waits on it, and other threads coordinate through it.
set: sets the event's internal flag to True.
wait: returns immediately if the internal flag is True; if the flag is False, wait blocks until it becomes True (or until the timeout expires).
timeout: an optional parameter of wait giving the maximum time to block.
clear: clears the event's internal flag, setting it back to False; once clear() has been called, is_set() returns False.
is_set: checks whether the internal flag is currently set.
isSet: an older alias for is_set; returns the event's status value.
import threading
import time
import random

e = threading.Event()

def work():
    print('%s is detecting' % threading.current_thread().name)
    time.sleep(random.randint(1, 5))
    e.set()

def foo():
    count = 1
    while not e.is_set():
        if count > 3:
            raise TimeoutError('wait timeout')
        print('%s is waiting, attempt %s' % (threading.current_thread().name, count))
        e.wait(timeout=random.randint(1, 5))
        count += 1
    print('%s is connecting' % threading.current_thread().name)

if __name__ == '__main__':
    t1 = threading.Thread(target=work)
    t2 = threading.Thread(target=foo)
    t1.start()
    t2.start()
Five: Timers (Timer)
Timer: a timer; a subclass of Thread used to invoke a function after a specified delay.
import threading
import random

def hello(n):
    print('hello', n)

if __name__ == '__main__':
    for i in range(20):
        t = threading.Timer(random.random(), hello, args=(i,))
        t.start()
Six: Thread queues (queue)
Queue: first in, first out; the data put into the queue first is read out first.
import queue
import time
import random
import threading

q = queue.Queue(5)

def work(n):
    time.sleep(random.randint(1, 5))
    q.put('%s is working' % n)
    print(n)

def foo():
    time.sleep(random.randint(1, 3))
    print(q.get())

if __name__ == '__main__':
    for i in range(20):
        t1 = threading.Thread(target=work, args=(i,))
        t2 = threading.Thread(target=foo)
        t1.start()
        t2.start()
PriorityQueue: the entry with the highest priority is read first. Items are put as tuples: the number first, then the content. Smaller numbers mean higher priority; if the numbers are equal, the contents are compared and sorted from small to large (for strings, effectively ASCII order).
import queue
import time
import random
import threading

q = queue.PriorityQueue()

def work(n):
    time.sleep(random.randint(1, 5))
    q.put((random.randint(1, 20), 'jie_%s' % n))   # (priority, data)
    print(n)

def foo():
    time.sleep(random.randint(1, 3))
    print(q.get())

if __name__ == '__main__':
    for i in range(20):
        t1 = threading.Thread(target=work, args=(i,))
        t2 = threading.Thread(target=foo)
        t1.start()
        t2.start()
LifoQueue: last in, first out, i.e. a stack. When reading, the most recently put data comes out first.
import queue
import random
import time
import threading

q = queue.LifoQueue()

def work(n):
    time.sleep(random.randint(1, 5))
    q.put('%s is working' % n)
    print(n)

def foo():
    print(q.get())
    time.sleep(random.randint(1, 3))

if __name__ == '__main__':
    for i in range(20):
        t1 = threading.Thread(target=work, args=(i,))
        t2 = threading.Thread(target=foo)
        t1.start()
        t2.start()
Review of course day 34 (Network Programming VIII: Threads II)