GIL (Global Interpreter Lock), deadlock and recursive lock, Semaphore, Event, thread queue

Source: Internet
Author: User
Tags: mutex, semaphore

GIL (Global Interpreter Lock)

The GIL is essentially a mutex, and like every mutex its nature is to turn concurrent execution into serial execution, so that shared data can only be modified by one task at a time, guaranteeing data safety.

To protect different data, different locks must be added. As a rule of thumb: use multithreading for I/O-bound work (opening files, time.sleep, input/output, etc.) and use multiprocessing for computation-bound work.

Within a Python process there is not only the main thread running your .py file, but also any threads that thread starts, plus interpreter-level threads such as the one the interpreter enables for garbage collection; all of these threads run inside the same process. When multiple threads each have a target to run, each must first gain access to the interpreter code, i.e. obtain execute permission, and then hand its target code to the interpreter code to execute. The interpreter's code is shared by all threads, so the garbage-collection thread can also run interpreter code, which leads to a problem: for the same object 100, thread 1 might be executing x=100 at the very moment the garbage collector is reclaiming 100. There is no clever way around this other than locking: guaranteeing that the Python interpreter executes only one task's code at a time.

The GIL protects data at the interpreter level; to protect your own data you must add your own locks.
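To see what "protecting your own data" means, here is a minimal sketch (the variable names are my own, not from the original): ten threads each decrement a shared counter, with a short sleep forcing a thread switch between the read and the write-back. The GIL alone does not prevent the lost updates; only the user-level Lock does.

```python
from threading import Thread, Lock
import time

n = 100
mutex = Lock()

def work():
    global n
    with mutex:           # remove this lock and updates get lost
        temp = n
        time.sleep(0.1)   # force a thread switch between read and write-back
        n = temp - 1

threads = [Thread(target=work) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(n)  # 90: every decrement survived because read+write happened under the lock
```

With the `with mutex:` line removed, every thread reads 100 before any thread writes back, and the final value is typically 99 instead of 90.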

With the GIL present, only one thread executes at any given moment within the same process.

For computation, the more CPUs the better; but for I/O, more CPUs are useless.

Suppose we have four tasks to handle and we want a concurrency effect. There are two options:

Scenario one: start four processes

Scenario two: start four threads within one process

Single-core case: if the four tasks are compute-intensive, there are no extra cores for parallel computing, and scenario one only adds the overhead of creating processes, so scenario two wins. If the four tasks are I/O-intensive, scenario one's process-creation cost is high, and processes switch no faster than threads, so scenario two wins again.

Multicore case: if the four tasks are compute-intensive, multiple cores mean parallel computing, but in Python only one thread per process executes at a time, so threads cannot use multiple cores, and scenario one wins. If the four tasks are I/O-intensive, extra cores cannot solve the I/O problem, so scenario two wins.

Conclusion: today's computers are mostly multicore. For compute-intensive tasks, Python's multithreading brings little performance improvement, and may even be slower than serial execution; but for I/O-intensive tasks the efficiency gain is significant.

# Compute-intensive: should use multiprocessing
from multiprocessing import Process
from threading import Thread
import os, time

def work():
    res = 0
    for i in range(100000000):
        res *= i

if __name__ == '__main__':
    l = []
    start = time.time()
    for i in range(6):
        p = Process(target=work)   # with processes: 18.202494621276855 s
        # p = Thread(target=work)  # with threads: 32.08451318740845 s
        l.append(p)
        p.start()
    for p in l:
        p.join()
    stop = time.time()
    print('Run time is %s' % (stop - start))

# IO-intensive: should use multithreading
from threading import Thread
from multiprocessing import Process
import time, random

def task():
    time.sleep(2)

if __name__ == '__main__':
    l = []
    start = time.time()
    for i in range(400):             # loop count was lost in formatting; 400 assumed
        # t = Process(target=task)   # with processes: 3.76550555229187 s
        t = Thread(target=task)      # with threads: 2.002195119857788 s
        l.append(t)
        t.start()
    for n in l:
        n.join()
    stop = time.time()
    print(stop - start)

Deadlock and Recursive lock

A deadlock occurs when two or more processes or threads, while executing, end up waiting on each other because of contention over resources; without outside intervention, none of them can proceed. The system is then said to be in a deadlocked state, and these processes, forever waiting on each other, are called deadlocked processes.

 

from threading import Thread, Lock, RLock
import time

mutexA = Lock()
mutexB = Lock()
# mutexB = mutexA = RLock()  # the fix: one recursive lock for both

class MyThread(Thread):
    def run(self):
        self.f1()
        self.f2()

    def f1(self):
        mutexA.acquire()
        print('%s grabbed the A lock' % self.name)
        mutexB.acquire()
        print('%s grabbed the B lock' % self.name)
        mutexB.release()
        mutexA.release()

    def f2(self):
        mutexB.acquire()
        print('%s grabbed the B lock' % self.name)
        time.sleep(2)
        mutexA.acquire()
        print('%s grabbed the A lock' % self.name)
        mutexA.release()
        mutexB.release()

if __name__ == '__main__':
    for i in range(10):  # loop count was lost in formatting; 10 assumed
        t = MyThread()
        t.start()
"Thread-1 get a lock Thread-1 get B lock Thread-1 get B lock Thread-2 get a lock and then stuck, dead lock " 

The solution is a recursive lock, RLock(). A recursive lock can be acquired multiple times by the thread or process that already holds it: internally it keeps a counter that is incremented each time the lock is acquired, and only when every acquire has been matched by a release can other threads compete for the lock.
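A minimal sketch of the re-acquire behaviour (function and variable names are my own): a single RLock stands in for both locks, so the same thread can take it twice without blocking itself.

```python
from threading import Thread, RLock

rlock = RLock()  # a single recursive lock standing in for both mutexA and mutexB

def f1(name):
    rlock.acquire()    # counter: 1
    print('%s grabbed the A lock' % name)
    rlock.acquire()    # same thread re-acquires without blocking; counter: 2
    print('%s grabbed the B lock' % name)
    rlock.release()    # counter: 1
    rlock.release()    # counter: 0 -- only now can another thread acquire it

threads = [Thread(target=f1, args=('Thread-%s' % (i + 1),)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

With a plain Lock, the second acquire() inside f1 would block the thread on itself forever; the RLock's per-holder counter is exactly what avoids the deadlock above.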

Semaphore

Same as for processes.

A semaphore manages a built-in counter: each call to acquire() decrements the counter by 1, and each call to release() increments it by 1. The counter can never go below 0; when it is 0, acquire() blocks the thread until some other thread calls release().
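The counter behaviour can be seen without any threads at all; a minimal sketch (the counter values in the comments follow the description above):

```python
from threading import Semaphore

sm = Semaphore(2)                           # internal counter starts at 2
assert sm.acquire()                         # counter 2 -> 1
assert sm.acquire()                         # counter 1 -> 0
assert sm.acquire(blocking=False) is False  # counter is 0: would block, so a non-blocking attempt fails
sm.release()                                # counter 0 -> 1
assert sm.acquire(blocking=False) is True   # counter 1 -> 0: succeeds again
```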

from threading import Thread, Semaphore
import time, random

sm = Semaphore(5)

def task(name):
    sm.acquire()
    print('%s is in the toilet' % name)
    time.sleep(random.randint(1, 3))
    sm.release()

if __name__ == '__main__':
    for i in range(20):
        t = Thread(target=task, args=('passer-by %s' % i,))
        t.start()

In this code, the Semaphore sets a maximum of 5, so only 5 threads run at first; when two threads call release, two more waiting threads can acquire.

Event

A key feature of threads is that each one runs independently and its state is unpredictable. If other threads in the program need to know a thread's state in order to decide what to do next, synchronization becomes tricky. To solve this, the threading library provides the Event object. An Event contains a signal flag that threads can set, which lets threads wait for certain events to occur. Initially, the flag is set to False. If a thread waits on an Event whose flag is False, it blocks until the flag becomes True. When a thread sets the flag to True, it wakes up all threads waiting on that Event. A thread that waits on an Event whose flag is already True returns immediately and continues executing.

event.is_set(): returns the Event's flag value;

event.wait(): blocks the thread if event.is_set() == False;

event.set(): sets the Event's flag to True; all threads blocked on it are moved to the ready state and wait for the operating system to schedule them;

event.clear(): resets the Event's flag to False.

from threading import Thread, Event
import time

event = Event()

def light():
    print('The red light is on')
    time.sleep(3)
    event.set()  # the light turns green

def car(name):
    print('Car %s is waiting for the green light' % name)
    event.wait()  # wait for green
    print('Car %s passes' % name)

if __name__ == '__main__':
    # the traffic light
    t1 = Thread(target=light)
    t1.start()
    # the cars
    for i in range(10):  # loop count was lost in formatting; 10 assumed
        t = Thread(target=car, args=(i,))
        t.start()

Thread queue

Same as the process queue.

import queue

# queue.Queue(): first in, first out
q = queue.Queue(3)
q.put(1)
q.put(2)
q.put(3)
print(q.get())
print(q.get())
print(q.get())

# queue.LifoQueue(): last in, first out (a stack)
q = queue.LifoQueue(3)
q.put(1)
q.put(2)
q.put(3)
print(q.get())
print(q.get())
print(q.get())

# queue.PriorityQueue(): priority is given as a number; the smaller the number, the higher the priority
q = queue.PriorityQueue(3)
q.put((10, 'a'))
q.put((-1, 'a'))
q.put((100, 'b'))  # value garbled in the source; 100 assumed
print(q.get())
print(q.get())
print(q.get())

