Threads are the smallest unit of operating-system scheduling. On multicore processors, multithreaded programming has become a popular and powerful tool for squeezing the most out of the CPU (with the exception of Python, where the GIL limits CPU-bound parallelism). Threads share memory, so the overhead of creating a thread is much smaller than the cost of creating a process. Beyond the hardware level, multithreading also gives us a way to perform several tasks at once (concurrent programming), which opens up another style of programming. Python's threads are encapsulated in the threading module.
The threading module
- threading.active_count(): Returns the number of currently active (alive) threads; this equals the length of the list returned by the module's enumerate() function.
- threading.current_thread(): Returns the current thread object. If the caller's thread of control was not created through the threading module, a dummy thread object with limited functionality is returned (a case I do not fully understand).
- threading.get_ident(): Returns the "thread identifier" of the current thread, a nonzero integer. Its value has no direct meaning, and it may be recycled when one thread exits and another thread is created.
- threading.enumerate(): Returns a list of all active threads, including daemon (background) threads, dummy threads, and the main thread. It excludes terminated threads and threads that have not yet been started.
- threading.main_thread(): Returns the main thread object, which is normally the thread the interpreter was started from.
- threading.settrace(func): Sets a trace function (useful for debugging) for all threads started from the threading module; func is passed to sys.settrace() for each thread.
- threading.setprofile(func): Sets a profile function (useful for debugging) for all threads started from the threading module; func is passed to sys.setprofile() for each thread.
- threading.TIMEOUT_MAX: The maximum value allowed for the timeout parameter of blocking calls (Lock.acquire(), RLock.acquire(), Condition.wait(), etc.). A short sketch exercising a few of these functions follows the list.
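A minimal sketch (the worker function name is just illustrative) showing several of these module-level functions in use:

```python
import threading
import time

def worker():
    # each worker reports its own identity using module-level functions
    print("running in:", threading.current_thread().name,
          "ident:", threading.get_ident())
    time.sleep(1)

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()

print("main thread:", threading.main_thread().name)
print("active threads:", threading.active_count())            # main + 2 workers (typically 3)
print("enumerated:", [t.name for t in threading.enumerate()])

for t in threads:
    t.join()
```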
Thread Object
There are two ways to create a thread object: pass a callable (such as a function) as the target argument, or subclass Thread and override the run() method (apart from the constructor and run(), it is best not to override any other methods). A short sketch of the callable-based style appears after the method list below; the subclassing style is shown in the example that follows it.
class threading.Thread(group=None, target=None, name=None, args=(), kwargs={}, *, daemon=None)
Parameters:
- group: The thread group; currently unused and reserved for future extension.
- target: The callable to be invoked by the run() method.
- name: The thread name, which defaults to the "Thread-N" format.
- args, kwargs: The positional and keyword arguments passed to the target callable.
- daemon: Whether the thread is a daemon (background) thread; it must be set before start() is called, otherwise an error is raised.
Methods and properties:
- start(): Starts the thread; calling it more than once on the same thread raises a RuntimeError.
- run(): The method representing the thread's activity; subclasses should override it.
- join(timeout=None): The calling thread waits until this thread ends; an optional timeout (in seconds) can be given. It should be called after start().
- name: The name of the thread.
- getName(): Gets the thread name.
- setName(): Sets the thread name.
- ident: The thread identifier, or None if the thread has not been started.
- is_alive(): Returns whether the thread is alive.
- daemon: A flag indicating whether this thread is a daemon (background) thread; True means it is.
- isDaemon(), setDaemon(): Methods for getting and setting daemon.
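A minimal sketch of the callable-based creation style (the say_hello function and the "greeter" name are just illustrative); the author's subclassing example follows it:

```python
import threading
import time

def say_hello(delay):
    time.sleep(delay)
    print("hello from", threading.current_thread().name)

t = threading.Thread(target=say_hello, args=(1,), name="greeter")
t.start()
print(t.name, t.ident, t.is_alive())   # the name, a nonzero ident, and True while running
t.join(timeout=5)                      # wait for the thread to finish (at most 5 seconds)
print(t.is_alive())                    # False once the thread has ended
```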
```python
import threading
import time

class A(threading.Thread):   # implementing a thread by inheriting from the Thread class
    def __init__(self, *args, **kwargs):
        super().__init__()
        print("Thread %s created, state (alive): %s, id: %s" % (self.name, self.is_alive(), self.ident))

    def run(self):
        print("Thread %s starts execution" % self.name)
        print("Thread %s in execution, state (alive): %s, id: %s" % (self.name, self.is_alive(), self.ident))
        time.sleep(5)
        print("Thread %s has ended" % self.name)
```
Execute:
```python
a = A()
a.start()            # start the thread
print(a.isDaemon())
print(a.is_alive())  # the thread is still running (it sleeps for 5 seconds)
```
Output:
```
Thread Thread-4 created, state (alive): False, id: None
Thread Thread-4 starts execution
Thread Thread-4 in execution, state (alive): True, id: 139913777370880
False
True
Thread Thread-4 has ended
```
Execute:
```python
t = []                     # all threads are placed in this list
for _ in range(3):
    t.append(A())
for i in t:                # start all threads
    i.start()
for i in t:                # wait for all threads to end
    i.join()
print("All tasks completed")
```
The output is:
```
Thread Thread-6 starts execution
Thread Thread-5 starts execution
Thread Thread-7 starts execution
Thread Thread-5 created, state (alive): False, id: None
Thread Thread-6 in execution, state (alive): True, id: 139913768978176
Thread Thread-5 in execution, state (alive): True, id: 139913777370880
Thread Thread-7 in execution, state (alive): True, id: 139913760585472
Thread Thread-6 created, state (alive): False, id: None
Thread Thread-7 created, state (alive): False, id: None
Thread Thread-6 has ended
Thread Thread-7 has ended
Thread Thread-5 has ended
All tasks completed
```
Thread Locks
Threads execute independently of one another (asynchronously), but their data (memory) is shared. When one thread reads the value of a variable, performs a series of operations on it, and then assigns the result back, another thread may be doing the same thing in the meantime, and the result is corrupted data. In other words, we sometimes need to operate on some data without allowing any other thread to touch it at the same time; the operation must be atomic. So we need a tool to synchronize threads, so that access to that data is serialized: that tool is the lock. The hardest problem in multithreaded programming is exactly this kind of data corruption. Unfortunately, the logic of real-world problems is complex, and once there are many locks to maintain, problems inevitably appear. Being able to use locks well therefore reflects a programmer's concurrent-programming ability, much like pointers do in C.
class threading.Lock
Methods:
- acquire(blocking=True, timeout=-1): Acquires the lock. With blocking set to True, the call blocks until the lock is released by another thread, then takes the lock and returns True; with blocking set to False, the call does not block and returns False immediately if the lock cannot be taken. The timeout parameter accepts a floating-point number giving the maximum time to wait while blocking; if that time is exceeded, the call returns False. A value of -1 means block and wait indefinitely.
- release(): Releases the lock. Releasing an unlocked lock raises an error. There is no return value.
Note: release() does not have to be called by the thread that called acquire(); it can be called from any thread. When using locks in practice, the with statement is recommended: the code reads better and has a clear structure, and it avoids forgetting to call release() and ending up with a deadlock. A small sketch of both styles follows.
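A minimal sketch of the two equivalent styles (the variable names here are just illustrative):

```python
import threading

lock = threading.Lock()

# explicit acquire / release, here with a bounded wait
if lock.acquire(timeout=2):      # returns False if the lock is not obtained within 2 seconds
    try:
        pass                     # ... operate on the shared data ...
    finally:
        lock.release()           # always release, even if the work raises an exception

# equivalent, and harder to get wrong: the with statement
with lock:
    pass                         # ... operate on the shared data ...
```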
Continuing from the code above, execute:
```python
t = 0
threads_list = []

def fun():
    global t
    a = t
    time.sleep(0.5)
    t = a + 1

for _ in range(100):
    threads_list.append(threading.Thread(target=fun))
for i in threads_list:
    i.start()
for i in threads_list:
    i.join()
print("The result is: %s" % t)
```
The output is:
The result is: 1
Execute:
```python
t = 0                          # a global variable that can be changed by multiple threads
threads_list = []
lock = threading.Lock()

def fun():
    global t
    with lock:
        a = t
        time.sleep(0.5)
        t = a + 1

for _ in range(100):
    threads_list.append(threading.Thread(target=fun))
for i in threads_list:
    i.start()
for i in threads_list:
    i.join()
print("The result is: %s" % t)
```
The output is:
The result is: 100
class threading.RLock
A reentrant lock. Its distinguishing feature is that once a thread has acquired an RLock, that same thread can acquire it again without waiting.
Methods:
- acquire(blocking=True, timeout=-1): Used the same way as Lock.acquire(). The difference is that every time the owning thread calls acquire() again, an internal recursion level is incremented by 1. To release the lock completely, release() must be called the same number of times, each call decrementing the recursion level by 1; until then, other threads that try to acquire the lock will block. A small sketch of this recursion level follows the list.
- release(): Used as described above.
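A minimal sketch of the recursion level described above (independent of the author's example below):

```python
import threading

rlock = threading.RLock()

rlock.acquire()                        # recursion level 1
rlock.acquire()                        # same thread, does not block: level 2
print(rlock.acquire(blocking=False))   # True: still the owning thread, level 3

rlock.release()                        # back to level 2
rlock.release()                        # back to level 1
rlock.release()                        # level 0: only now can another thread acquire it
```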
Execute:
```python
num = 0                        # maintain a global variable
rlock = threading.RLock()

class A(threading.Thread):
    def fun_1(self):
        global num
        num += 1
        time.sleep(0.5)
        print("Thread %s executed method fun_1, num is: %s" % (self.name, num))

    def fun_2(self):
        global num
        num += 1
        time.sleep(0.5)
        print("Thread %s executed method fun_2, num is: %s" % (self.name, num))

    def run(self):
        with rlock:            # a reentrant lock can be nested within the same thread
            self.fun_1()
            with rlock:
                self.fun_2()

t_list = []
for i in range(3):
    t_list.append(A())
for i in t_list:
    i.start()
```
The output is:
```
Thread Thread-7 executed method fun_1, num is: 1
Thread Thread-7 executed method fun_2, num is: 2
Thread Thread-8 executed method fun_1, num is: 3
Thread Thread-8 executed method fun_2, num is: 4
Thread Thread-9 executed method fun_1, num is: 5
Thread Thread-9 executed method fun_2, num is: 6
```
Why does RLock exist?
I was puzzled when I first saw the reentrant lock. Once a thread has acquired a lock, it can do whatever it likes to the data until it releases the lock, and other threads cannot interfere at all; there seems to be no need to acquire the lock again after already holding it. Even if that is possible, does it match any real-world need? I thought about this for a long time and finally came up with a usage scenario, shown below:
```python
credit = 1000

class AccountHandle(threading.Thread):
    rlock = threading.RLock()            # all instances use the same lock

    def withdraw(self, amount):          # withdraw money
        global credit
        with self.rlock:
            a = credit
            time.sleep(0.5)
            credit = a - amount
            print("Withdrew %s, balance: %s" % (amount, credit))

    def save(self, amount):              # deposit money
        global credit
        with self.rlock:
            a = credit
            time.sleep(0.5)
            credit = a + amount
            print("Deposited %s, balance: %s" % (amount, credit))

    def withdraw_fee(self, amount):      # cross-bank withdrawal, charges an extra fee of 2 dollars
        global credit
        # withdrawing the money and charging the fee are made one atomic operation
        with self.rlock:
            time.sleep(0.5)
            self.withdraw(amount)
            credit = credit - 2
            print("Handling charge %s, balance: %s" % (2, credit))

thread_1 = threading.Thread(target=AccountHandle().withdraw, args=(100,))
thread_2 = threading.Thread(target=AccountHandle().save, args=(200,))
thread_3 = threading.Thread(target=AccountHandle().withdraw_fee, args=(50,))
for i in [thread_3, thread_2, thread_1]:     # three different instances in three threads
    i.start()                                # perform three different operations
```
The output is:
```
Withdrew 50, balance: 950
Handling charge 2, balance: 948
Deposited 200, balance: 1148
Withdrew 100, balance: 1048
```
In the code above, the class AccountHandle has two basic methods: depositing money and withdrawing money. The lock is of course used when executing them, and that is fine. But at this point my business expands: I add a cross-bank withdrawal that charges a handling fee at the same time. For this new business I call the withdraw method and I also operate on the account balance. There is no doubt that both actions must be done under the lock; the withdraw method already uses the lock internally, so only the fee deduction still needs to be locked. Note that if I replaced the RLock in the code above with a plain Lock, then after withdraw_fee enters its with statement and calls withdraw, which tries to acquire the lock again, it would wait forever: the thread already acquired the lock earlier and has now locked itself out, which is a deadlock. A small sketch of that failure mode follows.
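A minimal sketch of that deadlock, assuming a plain Lock were used instead of the RLock (the 2-second timeout is only there so the demonstration terminates instead of hanging forever):

```python
import threading

lock = threading.Lock()              # a non-reentrant lock

def withdraw_fee():
    with lock:                       # outer acquire, as in withdraw_fee above
        # a nested withdraw would try to take the same lock again;
        # with a plain Lock the owning thread blocks on itself
        got_it = lock.acquire(timeout=2)
        print("inner acquire succeeded:", got_it)   # False: this is the deadlock
        if got_it:
            lock.release()

withdraw_fee()
```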
Of course one might think: in the withdraw_fee method I could avoid nesting locks and instead lock the withdrawal and the fee deduction separately, at the same level. That would indeed achieve what I want. But a problem appears: right after my withdrawal finishes, someone else may also be withdrawing money, immediately grab the lock, and complete their action. Only after that can my fee be deducted, and when that other person later sees a balance with the fee already deducted, they are dumbfounded (a sum of money was taken for no apparent reason). So the withdrawal and the fee deduction must be made into one atomic operation. Note again that all the locks mentioned above are the same lock: all instances share it.
Not finished, to be continued ~~~~~~~