Python threading module: synchronization lock (Lock)


Threads in Python are native operating-system threads, and the Python virtual machine uses a Global Interpreter Lock (GIL) to serialize access to the interpreter. To support multithreading, a basic requirement is mutual exclusion when different threads access shared resources, and the GIL was introduced for exactly this purpose.
GIL: once one thread has gained access to the interpreter, all other threads must wait for it to release that access, even if their next instructions would not affect each other.
Before calling any Python C API, a thread must first acquire the GIL.
Disadvantage of the GIL: on a multiprocessor machine, multithreaded Python degenerates to single-processor performance. Advantage: it avoids a large number of fine-grained lock/unlock operations.

2.3.1 Early design of the GIL

Python supports multithreading, and the simplest way to guarantee data integrity and state synchronization between threads is, naturally, to lock. Hence the GIL, this one super-lock. As more and more library developers accepted this setting, they began to rely heavily on it (that is, they assumed Python's built-in objects are thread-safe by default, with no need for extra memory locks or synchronization in their own implementations). Gradually this arrangement turned out to be painful and inefficient. But by the time people tried to split up or remove the GIL, it was very hard to do, precisely because so much library code had come to depend on it. How hard? By way of analogy: even a "small project" like MySQL took nearly five years, across the major versions from 5.5 through 5.6 to 5.7, just to split the single large buffer pool mutex into smaller locks, and that work continues. MySQL is backed by a company with a fixed development team; how much harder is it for Python, whose core developers and contributors form a highly community-driven team?

2.3.2 The influence of the GIL

No matter how many threads you start or how many CPUs you have, Python only ever lets a single thread run at any given moment within one process.
As a result, Python cannot use a multi-core CPU to speed up multithreaded code.
For CPU-bound tasks, multithreaded Python can therefore be even less efficient than serial code (because of the extra switching), while for I/O-bound tasks multithreading still brings a significant efficiency improvement.
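
A rough, minimal sketch of this effect (not a rigorous benchmark; the functions cpu_bound, io_bound and timed are only illustrative):

import threading
import time

def cpu_bound(n=10_000_000):
    while n > 0:                     # pure Python loop: holds the GIL while running
        n -= 1

def io_bound(seconds=1):
    time.sleep(seconds)              # sleeping releases the GIL, so threads overlap

def timed(target, workers):
    start = time.time()
    threads = [threading.Thread(target=target) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.time() - start

if __name__ == "__main__":
    print("CPU-bound, 2 threads:", timed(cpu_bound, 2))   # about as slow as running serially
    print("I/O-bound, 2 threads:", timed(io_bound, 2))    # about 1 second, not 2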

So as far as the GIL is concerned: since we cannot fight it, we had better learn to live with it!

Sync Lock

Locks are often used to synchronize access to shared resources. Create a Lock object for each shared resource; when a thread needs to access the resource, it calls acquire() to obtain the lock (if another thread already holds the lock, the current thread waits until it is released), and calls release() to release the lock once it is done with the resource.

import threading

r = threading.Lock()

r.acquire()
# ... operations on the shared data ...
r.release()

A so-called deadlock occurs when two or more processes or threads, in the course of execution, end up waiting for each other because of contention for resources; without outside intervention none of them can make progress. The system is then said to be in a deadlock state, and the processes stuck waiting for one another are called deadlocked processes.
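
As a minimal sketch (the function names outer and inner are only for illustration), a thread can even deadlock against itself by trying to acquire an ordinary Lock it already holds:

import threading

lock = threading.Lock()

def outer():
    with lock:              # first acquire succeeds
        inner()

def inner():
    with lock:              # second acquire blocks forever: Lock is not reentrant
        print("never reached")

t = threading.Thread(target=outer)
t.start()
t.join(timeout=1)
print("still deadlocked:", t.is_alive())   # prints True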

In Python, to allow the same thread to request the same resource more than once, Python provides the reentrant lock RLock. An RLock internally maintains a Lock and a counter variable; the counter records the number of acquire() calls, so the owning thread can acquire the resource repeatedly. Only when all of that thread's acquire() calls have been matched by release() calls can other threads obtain the resource. In the example above, using RLock instead of Lock avoids the deadlock.
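
Rewriting the sketch above with an RLock (same assumed structure, only the lock type changes):

import threading

rlock = threading.RLock()

def outer():
    with rlock:             # internal counter goes to 1
        inner()

def inner():
    with rlock:             # same thread: counter goes to 2, no blocking
        print("reached safely")
    # counter drops back to 1 here; the lock is fully released when outer() exits

t = threading.Thread(target=outer)
t.start()
t.join()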

A Semaphore manages a built-in counter:
every call to acquire() decrements the counter by 1;
every call to release() increments the counter by 1;
the counter can never go below 0, and when it reaches 0, acquire() blocks the thread until some other thread calls release().

Example (at most 5 threads can hold the semaphore at any one time, i.e. the maximum number of concurrent connections is limited to 5):
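A minimal sketch consistent with that description (the connect function and its sleep time are assumptions for illustration):

import threading
import time

semaphore = threading.Semaphore(5)   # at most 5 threads may hold it at once

def connect(worker_id):
    with semaphore:                  # acquire(): counter -1, blocks once it reaches 0
        print("worker %d connected" % worker_id)
        time.sleep(1)                # pretend to use the connection
    # leaving the with-block calls release(): counter +1

threads = [threading.Thread(target=connect, args=(i,)) for i in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()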

The difference between Lock and RLock

The main difference between the two kinds of locks is that RLock allows the same thread to call acquire() multiple times, whereas Lock does not. Note that if you use RLock, acquire() and release() must appear in pairs: if acquire() is called n times, release() must also be called n times before the lock is truly released.
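
A small sketch of that pairing rule (the comments assume CPython's RLock behavior):

import threading

r = threading.RLock()

r.acquire()
r.acquire()          # same thread: allowed, internal counter is now 2
r.release()          # counter back to 1, the lock is still held by this thread
r.release()          # counter 0, the lock is actually released

# With a plain Lock, the second acquire() above would block this thread forever.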
