Python GIL (Global Interpreter Lock)

Source: Internet
Author: User
Tags: mutex, semaphore

One, Introduction

Definition: In CPython, the global interpreter lock, or GIL, is a mutex that prevents multiple native threads from executing Python bytecodes at once. This lock is necessary mainly because CPython's memory management is not thread-safe. (However, since the GIL exists, other features have grown to depend on the guarantees that it enforces.)

Conclusion: In the CPython interpreter, among the multiple threads opened within the same process, only one thread can execute at a time, so multithreading cannot take advantage of multiple cores.

  

The first thing to make clear is that the GIL is not a feature of the Python language; it is a concept introduced by one implementation of the Python interpreter (CPython). An analogy: C++ is a set of language (syntax) standards that can be compiled into executable code by different compilers, with well-known examples such as GCC, Intel C++, and Visual C++. Python is the same: the same piece of code can be executed by different Python execution environments such as CPython, PyPy, and Psyco. Jython, for example, has no GIL. However, because CPython is the default Python execution environment in most settings, many people equate CPython with Python and take for granted that the GIL is a flaw of the Python language. So let us be clear here: the GIL is not a Python feature, and Python can exist entirely without the GIL.
Two, GIL Introduction

The GIL is in essence a mutex, and like all mutexes it has the same nature: it turns concurrent execution into serial execution, so that at any given time the shared data can be modified by only one task, thereby guaranteeing data safety.

  One thing is certain: to protect different pieces of data, you should use different locks.

To understand the GIL, first be clear on one point: each time you execute a python program, you create a separate process. For example, python test.py, python aaa.py, and python bbb.py will produce 3 different Python processes.

Verifying that python test.py produces only a single process:

    # Verify that python test.py produces only one process
    # Contents of test.py:
    import os, time
    print(os.getpid())
    time.sleep(1000)

    # Run it: python3 test.py
    # On Windows: tasklist | findstr python
    # On Linux:   ps aux | grep python

Within one python process live not only the main thread of test.py (and any other threads opened by that thread), but also interpreter-level threads such as the garbage-collection thread started by the interpreter. In short, all of these threads run inside this one process, without a doubt.

    1. All data is shared, and code, as a kind of data, is shared by all threads (all the code of test.py and all the code of the CPython interpreter). For example, test.py defines a function work (a piece of code content); all threads in the process can access work's code, so we can open three threads whose target points to that code, which means all of them can execute it.
    2. To run its task, every thread must hand the task's code as a parameter to the interpreter's code for execution. In other words, before any thread can run its own task, it must first obtain access to the interpreter's code.
In summary:

If multiple threads all have target=work, then the execution flow is:

Multiple threads first compete for access to the interpreter's code, that is, they grab execute permission, and then hand the target's code to the interpreter's code to execute.

The interpreter's code is shared by all threads, so the garbage-collection thread may also gain access to the interpreter's code and execute it. This leads to a problem: for the same piece of data, say 100, thread 1 may execute x=100 at the same moment garbage collection is recycling 100. There is no clever way around this problem other than locking, which is what the GIL does: it ensures that the Python interpreter can only execute one task's code at any one time.

Three, GIL and Lock

A witty classmate may ask this question: Python already has the GIL to ensure that only one thread can execute at a time, so why is there any need for Lock?

First, we need to agree that the purpose of a lock is to protect shared data, so that only one thread at a time can modify the shared data.

From this we can conclude that protecting different data requires different locks.

Finally, the problem becomes clear: the GIL and Lock are two different locks protecting different data. The former is interpreter-level (it protects interpreter-level data, such as garbage-collection data), while the latter protects the data of the user's own application. The GIL is obviously not responsible for that, so only a user-defined Lock can handle it.

The GIL protects interpreter-level data; to protect your own application's data you must lock it yourself, as follows:

Analysis:

    1. 100 threads compete for the GIL lock, that is, they compete for execute permission.
    2. Some thread is bound to grab the GIL first (call it thread 1); it starts executing, and once executing it will call lock.acquire().
    3. Very likely thread 1 has not finished running when another thread, thread 2, grabs the GIL and starts running; but thread 2 finds that the mutex lock has not yet been released by thread 1, so it blocks, is forced to surrender execute permission, and releases the GIL.
    4. Eventually thread 1 grabs the GIL again and continues from where it paused, until it releases the mutex lock normally; the other threads then repeat steps 2, 3, and 4.

  

code example:

    # _*_ coding: utf-8 _*_
    from threading import Thread
    from threading import Lock
    import time

    n = 100

    def task():
        global n
        mutex.acquire()
        temp = n
        time.sleep(0.1)
        n = temp - 1
        mutex.release()

    if __name__ == '__main__':
        mutex = Lock()
        t_l = []
        for i in range(100):
            t = Thread(target=task)
            t_l.append(t)
            t.start()
        for t in t_l:
            t.join()
        print('primary', n)

Result: it must be 0. Concurrent execution has been turned into serial execution, sacrificing execution efficiency to guarantee data safety; without the lock the result might be 99.

primary 0

  

Four, GIL and Multithreading

With the GIL present, at any given moment only one thread within the same process is executing.

Hearing this, some students immediately question it: processes can take advantage of multiple cores but have a large overhead, while Python's threads have a small overhead but cannot take advantage of multiple cores. Does that mean Python is useless?

So to solve this problem, we need to agree on a few points:
    1. Is the CPU being used for computation, or for I/O?
    2. Multiple CPUs mean multiple cores can compute in parallel, so multicore improves compute performance.

  

A worker is equivalent to a CPU. Computation is the worker doing work; I/O blocking is the process of supplying the worker with the raw materials needed for that work. If no raw materials are available while the worker is working, the worker must stop until the raw materials arrive.

If most of your factory's tasks consist of waiting for raw materials (I/O-intensive), then having more workers means little; you might as well have a single worker who does other jobs while the materials are on the way.

Conversely, if your factory has a full supply of raw materials (compute-intensive), then the more workers, the higher the efficiency.

Conclusion:
For computation, the more CPUs the better; but for I/O, any number of CPUs is useless. Of course, for running a program as a whole, execution efficiency will certainly improve as CPUs increase (however small the gain, there is always some), because a program is almost never purely computation or purely I/O. So we can only judge, relatively, whether a program is compute-intensive or I/O-intensive, and from there analyze whether Python's multithreading is actually useful.
Suppose we have four tasks to handle, and we definitely want a concurrent effect. The solutions can be:
    Scenario one: open four processes
    Scenario two: one process, open four threads
Single-core case, analysis:
    If the four tasks are compute-intensive, there are no extra cores to compute in parallel, so scenario one only adds the overhead of creating processes: scenario two wins. If the four tasks are I/O-intensive, scenario one's process-creation cost is large and process switching is far slower than thread switching: scenario two wins.
Multi-core case, analysis:
    If the four tasks are compute-intensive, multiple cores mean parallel computation; but in Python only one thread in a process executes at any moment, so threads cannot use the cores: scenario one wins. If the four tasks are I/O-intensive, no number of cores can solve the I/O problem: scenario two wins.
Conclusion:
    Today's computers are mostly multicore. For compute-intensive tasks, Python's multithreading does not bring much performance gain and may even be worse than serial execution (which has no heavy switching); but for I/O-intensive tasks, the efficiency gain is significant.

  

Five, Multithreaded Performance Testing

If multiple concurrent tasks are compute-intensive: multiprocessing is more efficient.

    # _*_ coding: utf-8 _*_
    # Compute-intensive: use multiprocessing
    from multiprocessing import Process
    from threading import Thread
    import os
    import time

    def work():
        res = 0
        for i in range(100000000):
            res *= 1

    if __name__ == '__main__':
        l = []
        print(os.cpu_count())
        start = time.time()
        for i in range(8):
            # p = Process(target=work)  # run time is: 43.40110874176025
            t = Thread(target=work)     # run time is: 62.395447731018066
            # l.append(p)
            # p.start()
            l.append(t)
            t.start()
        for t in l:
            t.join()
        # for p in l:
        #     p.join()
        stop = time.time()
        print('run time is:', (stop - start))

  

If multiple concurrent tasks are I/O-intensive: multithreading is more efficient.

    # I/O-intensive: use multithreading
    from multiprocessing import Process
    from threading import Thread
    import os
    import time

    def work():
        time.sleep(0.5)

    if __name__ == '__main__':
        l = []
        print(os.cpu_count())
        start = time.time()
        for i in range(400):
            # p = Process(target=work)  # run time is: 39.320624113082886
            p = Thread(target=work)     # run time is: 0.5927295684814453
            l.append(p)
            p.start()
        for p in l:
            p.join()
        stop = time.time()
        print('run time is:', (stop - start))

  

Applications:
    Multithreading for I/O-intensive tasks, such as sockets, crawlers, and the web
    Multiprocessing for compute-intensive tasks, such as financial analysis

  

Six, Deadlock

A deadlock refers to two or more processes or threads that, during execution, end up waiting for each other because of contention for resources. Without outside intervention, none of them can proceed; the system is then said to be in a deadlock state, and the processes that wait on each other forever are called deadlocked processes.

    from threading import Thread, Lock
    import time

    mutexA = Lock()
    mutexB = Lock()

    class MyThread(Thread):
        def run(self):
            self.func1()
            self.func2()

        def func1(self):
            mutexA.acquire()
            print('\033[41m%s get A lock\033[0m' % self.name)
            mutexB.acquire()
            print('\033[42m%s get B lock\033[0m' % self.name)
            mutexB.release()
            mutexA.release()

        def func2(self):
            mutexB.acquire()
            print('\033[43m%s get B lock\033[0m' % self.name)
            time.sleep(2)
            mutexA.acquire()
            print('\033[44m%s get A lock\033[0m' % self.name)
            mutexA.release()
            mutexB.release()

    if __name__ == '__main__':
        for i in range(10):
            t = MyThread()
            t.start()

Execution effect

Thread-1 get A lock
Thread-1 get B lock
Thread-1 get B lock
Thread-2 get A lock  # deadlock occurs, the whole program blocks
Seven, Recursive Lock

The workaround for deadlocks is the recursive lock. To support multiple requests for the same resource within the same thread, Python provides the reentrant lock RLock.

This RLock internally maintains a Lock and a counter variable; the counter records the number of acquire calls, so the resource can be acquired multiple times. Until all of one thread's acquires have been released, no other thread can obtain the resource. If RLock replaces Lock in the example above, no deadlock occurs. The difference is that a recursive lock can be acquired multiple times by the same thread, while a mutex can be acquired only once.

    from threading import Thread, RLock
    import time

    mutexA = mutexB = RLock()
    # When a thread grabs the lock, counter += 1; if that same thread meets a
    # lock again, counter += 1 again. All other threads can only wait until the
    # thread releases all its locks, i.e. until counter drops back to 0.

    class MyThread(Thread):
        def run(self):
            self.func1()
            self.func2()

        def func1(self):
            mutexA.acquire()
            print('\033[41m%s get A lock\033[0m' % self.name)
            mutexB.acquire()
            print('\033[42m%s get B lock\033[0m' % self.name)
            mutexB.release()
            mutexA.release()

        def func2(self):
            mutexB.acquire()
            print('\033[43m%s get B lock\033[0m' % self.name)
            time.sleep(2)
            mutexA.acquire()
            print('\033[44m%s get A lock\033[0m' % self.name)
            mutexA.release()
            mutexB.release()

    if __name__ == '__main__':
        for i in range(10):
            t = MyThread()
            t.start()

Results:

Thread-1 get A lock
Thread-1 get B lock
Thread-1 get B lock
Thread-1 get A lock
Thread-2 get A lock
Thread-2 get B lock
Thread-2 get B lock
Thread-2 get A lock
Thread-4 get A lock
Thread-4 get B lock
Thread-4 get B lock
Thread-4 get A lock
Thread-6 get A lock
Thread-6 get B lock
Thread-6 get B lock
Thread-6 get A lock
Thread-8 get A lock
Thread-8 get B lock
Thread-8 get B lock
Thread-8 get A lock
Thread-10 get A lock
Thread-10 get B lock
Thread-10 get B lock
Thread-10 get A lock
Thread-5 get A lock
Thread-5 get B lock
Thread-5 get B lock
Thread-5 get A lock
Thread-9 get A lock
Thread-9 get B lock
Thread-9 get B lock
Thread-9 get A lock
Thread-7 get A lock
Thread-7 get B lock
Thread-7 get B lock
Thread-7 get A lock
Thread-3 get A lock
Thread-3 get B lock
Thread-3 get B lock
Thread-3 get A lock
Eight, Semaphore

A semaphore is also a lock. You can specify a semaphore of, say, 5: whereas with a mutex only one task at a time can grab the lock and execute, with this semaphore five tasks at a time can grab the lock and execute. If a mutex is like the people of a house competing for a single toilet, then a semaphore is like a crowd of passers-by competing for the stalls of a public toilet: several people can use the public toilet at the same time, but the number of stalls is fixed, and that number is the size of the semaphore.

    from threading import Thread, Semaphore
    import threading
    import time

    def func():
        sm.acquire()
        print('%s get sm' % threading.current_thread().getName())
        time.sleep(3)
        sm.release()

    if __name__ == '__main__':
        sm = Semaphore(5)
        for i in range(23):
            t = Thread(target=func)
            t.start()

Analysis:

The semaphore manages a built-in counter: calling acquire() decrements it by 1, calling release() increments it by 1, and the counter can never go below 0. When the counter is 0, acquire() blocks the thread until some other thread calls release().

  This is a completely different concept from a process pool: Pool(4) can generate at most 4 processes, and from beginning to end they are just those same four, never producing new ones; a semaphore, by contrast, simply limits how many of a whole bunch of threads/processes can run at once.

Nine, Event

Same as for processes.

A key feature of threads is that each thread runs independently and its state is unpredictable. If other threads in a program need to determine a thread's state in order to decide what to do next, thread synchronization becomes tricky. To solve such problems we use the Event object from the threading library. The object contains a signal flag that can be set by threads, allowing threads to wait for certain events to occur. Initially, the signal flag in the Event object is set to False. If a thread waits on an Event object whose flag is False, the thread will block until the flag becomes True. When a thread sets the flag of an Event object to True, it wakes up all threads waiting on that Event object. A thread that waits on an Event object whose flag is already True does not block and simply continues execution.

event.is_set(): returns the event's status value. event.wait(): blocks the calling thread if event.is_set() == False. event.set(): sets the event's status to True; all threads in the blocking pool are activated into the ready state and wait to be scheduled by the operating system. event.clear(): resets the event's status to False.

For example, several worker threads try to connect to MySQL. We want to make sure the MySQL service is running normally before the workers connect to the MySQL server, and to retry if a connection attempt fails. We can use the threading.Event mechanism to coordinate the connection attempts of the individual worker threads.

    from threading import Thread, Event
    import threading
    import time
    import random

    def conn_mysql():
        count = 1
        while not event.is_set():
            if count > 3:
                raise TimeoutError('link timeout')
            print('<%s> attempt %s to link' % (threading.current_thread().getName(), count))
            event.wait(0.5)
            count += 1
        print('<%s> link success' % threading.current_thread().getName())

    def check_mysql():
        print('\033[45m[%s] checking mysql\033[0m' % threading.current_thread().getName())
        time.sleep(random.randint(2, 4))
        event.set()

    if __name__ == '__main__':
        event = Event()
        conn1 = Thread(target=conn_mysql)
        conn2 = Thread(target=conn_mysql)
        check = Thread(target=check_mysql)
        conn1.start()
        conn2.start()
        check.start()
Ten, Condition (for understanding)

Makes threads wait; only when a condition is met are n waiting threads released.

    import threading

    def run(n):
        con.acquire()
        con.wait()
        print("run the thread: %s" % n)
        con.release()

    if __name__ == '__main__':
        con = threading.Condition()
        for i in range(10):
            t = threading.Thread(target=run, args=(i,))
            t.start()

        while True:
            inp = input('>>> ')
            if inp == 'q':
                break
            con.acquire()
            con.notify(int(inp))
            con.release()

    import threading

    def condition_func():
        ret = False
        inp = input('>>> ')
        if inp == '1':
            ret = True
        return ret

    def run(n):
        con.acquire()
        con.wait_for(condition_func)
        print("run the thread: %s" % n)
        con.release()

    if __name__ == '__main__':
        con = threading.Condition()
        for i in range(10):
            t = threading.Thread(target=run, args=(i,))
            t.start()

  

Eleven, Timer

A timer performs an operation after a specified n seconds, like a time bomb.

    from threading import Timer

    def hello():
        print('hello, world')

    t = Timer(1, hello)
    t.start()  # after 1 second, "hello, world" will be printed
A verification-code timer:

    from threading import Timer
    import random

    class Code:
        def __init__(self):
            self.make_cache()

        def make_cache(self, interval=5):
            self.cache = self.make_code()
            print(self.cache)
            self.t = Timer(interval, self.make_cache)
            self.t.start()

        def make_code(self, n=4):
            res = ''
            for i in range(n):
                s1 = str(random.randint(0, 9))
                s2 = chr(random.randint(65, 90))
                res += random.choice([s1, s2])
            return res

        def check(self):
            while True:
                inp = input('>>: ').strip()
                if inp.upper() == self.cache:
                    print('verification successful', end='\n')
                    self.t.cancel()
                    break

    if __name__ == '__main__':
        obj = Code()
        obj.check()

  
