Python GIL (Global Interpreter Lock)

Source: Internet
Author: User

One, GIL introduction

The GIL is essentially a mutex. Like all mutexes, it works the same way: it turns concurrent execution into serial execution, so that at any given moment shared data can be modified by only one task, which guarantees data safety. One thing is certain: to protect different pieces of data, you should use different locks.

To understand the GIL, first be clear on one point: every time you run a Python program, an independent process is created. For example, python test.py, python aaa.py, and python bbb.py produce three different Python processes. Within a single Python process there is not only the main thread of test.py (and any threads it starts), but also interpreter-level threads such as the garbage collector started by the interpreter. All of these threads run inside that one process.

1. All data is shared. Code, being a kind of data, is also shared by all threads (all of test.py's code as well as all of the CPython interpreter's code).
2. Every thread's task must be handed, as input, to the interpreter's code in order to run. In other words, before a thread can run its own task, it must first gain access to the interpreter's code.

In summary: if multiple threads have target=work, the execution flow is that each thread first gains access to the interpreter's code (i.e., obtains execution permission) and then hands the target's code to the interpreter's code to execute.

Since the interpreter's code is shared by all threads, the garbage-collection thread may also be running it. This creates a problem: for the same piece of data, say 100, thread 1 might be executing x=100 at the very moment the garbage collector is reclaiming 100. There is no clever solution to this; you simply add a lock, which is what the GIL is, guaranteeing that the Python interpreter executes only one task's code at a time.

Two, GIL and Lock

The GIL protects interpreter-level data. To protect your own user-level data, you still need to add your own locks.
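The last point, that the GIL protects interpreter internals but not your own data, can be sketched with a small example. The names `counter` and `add` here are illustrative, not from the original article:

```python
import threading

counter = 0
lock = threading.Lock()

def add():
    global counter
    for _ in range(100_000):
        # counter += 1 is a read-modify-write: the GIL may switch
        # threads between its bytecodes, so a user-level lock is
        # still needed to keep the update atomic.
        with lock:
            counter += 1

threads = [threading.Thread(target=add) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000 with the lock; possibly less without it
```

The GIL serializes bytecode execution, but it does not make multi-step Python operations atomic, which is why the explicit `Lock` is required here.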


Three, GIL and multithreading

Because the GIL exists, only one thread in a process can execute at any given time. Multiprocessing can exploit multiple cores, but its overhead is large; Python multithreading has small overhead, but cannot use multiple cores. To decide between them, we need to agree on a few points:

1. Is the CPU being used for computation, or for I/O?
2. Multiple CPUs mean multiple cores can compute in parallel, so multi-core improves computational performance.
3. Whenever a CPU hits blocking I/O, it still has to wait, so extra cores are useless for I/O-heavy work.

An analogy: a worker is a CPU. Computation is the worker working; blocking I/O is waiting for the raw materials the worker needs, during which the worker stands idle until they arrive. If your factory spends most of its time waiting for raw materials (I/O-intensive), hiring more workers means little; it is better to have one person do other work while the materials are on their way. Conversely, if your factory always has materials on hand (compute-intensive), then the more workers, the higher the output.

Conclusion: for computation, the more CPUs the better; for I/O, no number of CPUs helps. Of course, when running any real program, efficiency does improve as CPUs are added (by however little), because a program is almost never pure computation or pure I/O. So in practice we can only judge whether a program is relatively compute-intensive or relatively I/O-intensive, and from there analyze whether Python multithreading is actually useful.
Suppose we now have four tasks to handle and want a concurrency effect. The options are:

Scenario one: start four processes.
Scenario two: start four threads within one process.

Single-core case:
- If the four tasks are compute-intensive, there are no extra cores for parallel computation anyway; scenario one only adds the cost of creating processes, so scenario two wins.
- If the four tasks are I/O-intensive, scenario one's process-creation cost is large, and switching between processes is far slower than switching between threads, so scenario two wins.

Multi-core case:
- If the four tasks are compute-intensive, multiple cores mean parallel computation, but in Python only one thread per process executes at a time, so threads cannot use the extra cores; scenario one wins.
- If the four tasks are I/O-intensive, extra cores cannot shorten I/O waits, so scenario two wins.

Conclusion: today's computers are basically all multi-core. For compute-intensive tasks, opening multiple Python threads brings little performance improvement; it may even be worse than running serially (which avoids heavy switching). For I/O-intensive tasks, however, multithreading brings a significant efficiency improvement.
Four, multi-threaded performance test

1. Compute-intensive: multiprocessing is more efficient

```python
from multiprocessing import Process
from threading import Thread
import os, time

def work():
    res = 0
    for i in range(100000000):
        res *= i

if __name__ == '__main__':
    l = []
    print(os.cpu_count())  # this machine has 4 cores
    start = time.time()
    for i in range(4):
        p = Process(target=work)  # takes about 18s
        # p = Thread(target=work)  # takes about 26s
        l.append(p)
        p.start()
    for p in l:
        p.join()
    stop = time.time()
    print('run time is %s' % (stop - start))
```

2. I/O-intensive: multithreading is more efficient

```python
from multiprocessing import Process
from threading import Thread
import os, time

def work():
    time.sleep(2)
    print('===>')

if __name__ == '__main__':
    l = []
    print(os.cpu_count())  # this machine has 4 cores
    start = time.time()
    for i in range(400):  # number of tasks
        p = Process(target=work)  # takes about 4s+, most of it spent creating processes
        # p = Thread(target=work)  # takes about 2s+
        l.append(p)
        p.start()
    for p in l:
        p.join()
    stop = time.time()
    print('run time is %s' % (stop - start))
```

Application: use multithreading for I/O-intensive work, such as sockets, crawlers, and web servers; use multiprocessing for compute-intensive work, such as financial analysis.

