Here's a brief description of the GIL from the Python 2.7.9 glossary:
The mechanism used by the CPython interpreter to assure that only one thread executes Python bytecode at a time. This simplifies the CPython implementation by making the object model (including critical built-in types such as dict) implicitly safe against concurrent access. Locking the entire interpreter makes it easier for the interpreter to be multi-threaded, at the expense of much of the parallelism afforded by multi-processor machines.
However, some extension modules, either standard or third-party, are designed so as to release the GIL when doing computationally-intensive tasks such as compression or hashing. Also, the GIL is always released when doing I/O.
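The "GIL is released during I/O" point is easy to demonstrate. The sketch below uses `time.sleep` as a stand-in for a blocking I/O call (like a socket read); since sleeping releases the GIL, four 0.2-second waits running in separate threads overlap and finish in roughly 0.2 seconds of wall-clock time, not 0.8:

```python
import threading
import time

def fake_io(delay):
    # time.sleep releases the GIL while blocking,
    # just as a blocking socket read or disk read would
    time.sleep(delay)

start = time.perf_counter()
threads = [threading.Thread(target=fake_io, args=(0.2,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start

# The four waits overlap, so total time is close to one wait, not four
print("4 overlapping 0.2s waits finished in %.2fs" % elapsed)
```

If the GIL were held during the sleep, the threads would run one after another and the total would be about 0.8 seconds instead.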
Past efforts to create a "free-threaded" interpreter (one which locks shared data at a much finer granularity) have not been successful because performance suffered in the common single-processor case. It is believed that overcoming this performance issue would make the implementation much more complicated and therefore costlier to maintain.
Personal understanding: I/O divides into network I/O and disk I/O, and in general an I/O operation has two phases: sending data (output) and receiving data (input). Take a browser as an example: the browser sends a request to the server (output), and the server returns the result to the browser (input).

Python releases the GIL (Global Interpreter Lock) when a thread blocks on I/O. So while the current thread waits for a response (blocking), another thread can run and send its own request (output). While that second thread waits for its response, a third thread can send its request, and so on. Within the same time slice, one thread is waiting for data while another is sending data, which reduces the total time spent on I/O.

However, while Python is performing a computational task on the CPU, the GIL is not released, so Python multithreading effectively uses a single core for CPU work. A CPU time slice is assigned to only one thread at a time, so in CPU-intensive situations multithreading does not speed up the computation.

In addition, suppose a computational thread is blocked waiting on a lock. After each check interval (CPython 2 defaults to checking every 100 bytecode instructions), the running thread releases the GIL so the scheduler can see whether another thread can run. Because the other thread is still blocked on its lock, the next time slice is handed back to the first thread, and the scheduling work is wasted. This overhead is why CPU-bound multithreaded code can take longer than the equivalent single-threaded code.
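The CPU-bound case above can be sketched with a pure-Python countdown loop, timed once in a single thread and once split across two threads. This is a minimal illustration, not a benchmark; exact numbers depend on the machine, but on a GIL-based CPython the two-thread version does not come close to halving the time:

```python
import threading
import time

def countdown(n):
    # pure-Python CPU-bound loop: holds the GIL except at switch points
    while n > 0:
        n -= 1

N = 5_000_000

# all the work in a single thread
start = time.perf_counter()
countdown(N)
single = time.perf_counter() - start

# the same total work split across two threads
start = time.perf_counter()
workers = [threading.Thread(target=countdown, args=(N // 2,)) for _ in range(2)]
for t in workers:
    t.start()
for t in workers:
    t.join()
multi = time.perf_counter() - start

print("single thread: %.2fs, two threads: %.2fs" % (single, multi))
```

Note that the check interval described above is a Python 2 mechanism (`sys.setcheckinterval`, counted in bytecode instructions); Python 3.2+ replaced it with a time-based switch interval (`sys.setswitchinterval`, 5 ms by default), but the conclusion for CPU-bound threads is the same.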