GIL, multi-process vs. multi-thread usage scenarios, thread mutex vs. the GIL, concurrent socket communication based on multithreading, process pools and thread pools, synchronous, asynchronous, blocking, non-blocking

Source: Internet
Author: User

APR 18
One, Global interpreter lock (GIL)
To run `python test.py`, three things happen:
A. The Python interpreter's code is read from disk into memory
B. The code of test.py is read from disk into memory (both now live in the same process)
C. The test.py code is handed, as ordinary text, to the Python interpreter, which parses and executes it

1. GIL: Global Interpreter Lock (a CPython interpreter feature)
In CPython, the global interpreter lock, or GIL, is a mutex that prevents multiple native threads from executing Python bytecode at once. This lock is necessary mainly because CPython's memory management (including the garbage collector, which the interpreter runs periodically) is not thread-safe. Without serialized access to shared data, an object could be reclaimed mid-operation: in `x = 10`, the value 10 is first created in memory, and if the garbage collector ran before the name `x` was bound to it, the object could be collected. However, since the GIL exists, many other features have grown to depend on the guarantees it enforces.
In essence, the GIL is a mutex (an execute permission) clamped onto the interpreter: all threads within the same process must acquire the GIL before executing interpreter code.
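For intuition, CPython's memory management is built on per-object reference counts, and those counter updates are part of the shared state the GIL keeps consistent. A minimal illustration (`sys.getrefcount` is a real CPython function; the exact counts are implementation details, shown here only as a sketch):

```python
import sys

x = []
# getrefcount reports one extra reference: its own argument
print(sys.getrefcount(x))   # typically 2: the name x, plus the function argument
y = x                       # binding a second name to the same list object
print(sys.getrefcount(x))   # one higher than before
```

If two threads updated such a counter at the same time without the GIL, the count could be corrupted and the object freed too early or never freed.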
2. Advantages and disadvantages of the GIL:
Pros: ensures thread safety for the CPython interpreter's memory management
Cons: within one process under CPython, only one thread can execute Python bytecode at any moment; CPython multithreading therefore cannot run in parallel and cannot exploit multiple cores
Attention:
A. The GIL rules out parallelism, but concurrency is still possible; execution is not necessarily serial. Serial means one task fully completes before the next begins, whereas in CPython a thread that blocks on IO releases the CPU and is forced to give up the GIL
B. The advantage of multiple cores (multiple CPUs) is higher computational throughput
C. Compute-intensive work -> use multiple processes to exploit multiple cores
D. IO-intensive work -> use multithreading
Two, Verifying CPython interpreter concurrency efficiency
1. Compute-intensive work should use multiple processes
from multiprocessing import Process
from threading import Thread
import time
# import os
# print(os.cpu_count())  # check the number of CPUs

def task1():
    res = 0
    for i in range(1, 100000000):
        res += i

def task2():
    res = 0
    for i in range(1, 100000000):
        res += i

def task3():
    res = 0
    for i in range(1, 100000000):
        res += i

def task4():
    res = 0
    for i in range(1, 100000000):
        res += i

if __name__ == '__main__':
    # p1 = Process(target=task1)
    # p2 = Process(target=task2)
    # p3 = Process(target=task3)
    # p4 = Process(target=task4)
    p1 = Thread(target=task1)
    p2 = Thread(target=task2)
    p3 = Thread(target=task3)
    p4 = Thread(target=task4)
    start_time = time.time()
    p1.start()
    p2.start()
    p3.start()
    p4.start()
    p1.join()
    p2.join()
    p3.join()
    p4.join()
    stop_time = time.time()
    print(stop_time - start_time)  # on a multi-core machine, the Process version finishes much faster
2. IO-intensive work should use multithreading
from multiprocessing import Process
from threading import Thread
import time

def task1():
    time.sleep(3)

def task2():
    time.sleep(3)

def task3():
    time.sleep(3)

def task4():
    time.sleep(3)

if __name__ == '__main__':
    # p1 = Process(target=task1)
    # p2 = Process(target=task2)
    # p3 = Process(target=task3)
    # p4 = Process(target=task4)
    # p1 = Thread(target=task1)
    # p2 = Thread(target=task2)
    # p3 = Thread(target=task3)
    # p4 = Thread(target=task4)
    # start_time = time.time()
    # p1.start()
    # p2.start()
    # p3.start()
    # p4.start()
    # p1.join()
    # p2.join()
    # p3.join()
    # p4.join()
    # stop_time = time.time()
    # print(stop_time - start_time)  # 3.138049364089966
    p_l = []
    start_time = time.time()
    for i in range(500):
        p = Thread(target=task1)
        p_l.append(p)
        p.start()
    for p in p_l:
        p.join()
    print(time.time() - start_time)  # 500 sleeping threads finish in just over 3 seconds
Three, Thread mutex vs. the GIL
The GIL protects interpreter-level code (the data touched by the garbage collection mechanism) but does not protect other shared data, such as the data your own code manipulates. Data that needs protection in your program must therefore be locked by you.
from threading import Thread, Lock
import time

mutex = Lock()
count = 0

def task():
    global count
    mutex.acquire()
    temp = count
    time.sleep(0.1)  # simulate work; without the lock, both threads would read the same count
    count = temp + 1
    mutex.release()

if __name__ == '__main__':
    t_l = []
    for i in range(2):
        t = Thread(target=task)
        t_l.append(t)
        t.start()
    for t in t_l:
        t.join()
    print('master', count)  # 2 with the lock; without it the result would be 1
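As an aside (not in the original text), `Lock` also works as a context manager, which guarantees the release even if the locked block raises an exception. A sketch of the same counter using `with`, scaled to 100 threads:

```python
from threading import Thread, Lock

mutex = Lock()
count = 0

def task():
    global count
    with mutex:          # acquires on entry, releases on exit, even on exceptions
        temp = count
        count = temp + 1

threads = [Thread(target=task) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print('master', count)  # 100: every read-modify-write ran under the lock
```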
Four, Concurrent socket communication implemented with multithreading
Server:
from socket import *
from threading import Thread
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

# Processes and threads cannot be opened without limit; a pool caps their number.
# The pool classes wrap the functionality of the multiprocessing/threading modules.
tpool = ThreadPoolExecutor(3)

def communicate(conn, client_addr):
    while True:  # communication loop
        try:
            data = conn.recv(1024)
            if not data:
                break
            conn.send(data.upper())
        except ConnectionResetError:
            break
    conn.close()

def server():
    server = socket(AF_INET, SOCK_STREAM)
    server.bind(('127.0.0.1', 8080))  # bind takes a single (host, port) tuple
    server.listen(5)
    while True:  # connection loop
        conn, client_addr = server.accept()
        print(client_addr)
        # t = Thread(target=communicate, args=(conn, client_addr))
        # t.start()
        tpool.submit(communicate, conn, client_addr)
    server.close()

if __name__ == '__main__':
    server()
Client:
from socket import *

client = socket(AF_INET, SOCK_STREAM)
client.connect(('127.0.0.1', 8080))  # connect also takes a (host, port) tuple
while True:
    msg = input('>>>: ').strip()
    if not msg:
        continue
    client.send(msg.encode('utf-8'))
    data = client.recv(1024)
    print(data.decode('utf-8'))
client.close()
Five, Process pools and thread pools
Why use a pool: a pool caps the number of concurrent tasks, keeping concurrency within what the machine can afford
When to put processes in a pool: when the concurrent tasks are compute-intensive
When to put threads in a pool: when the concurrent tasks are IO-intensive
1. Process Pool
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor
import time, os, random

def task(x):
    print('%s is serving a request' % os.getpid())
    time.sleep(random.randint(2, 5))
    return x ** 2

if __name__ == '__main__':
    p = ProcessPoolExecutor()  # defaults to as many processes as the CPU has cores
    for i in range(20):
        p.submit(task, i)
2. Thread pool
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor
import time, os, random

def task(x):
    print('%s is serving a request' % x)
    time.sleep(random.randint(2, 5))
    return x ** 2

if __name__ == '__main__':
    p = ThreadPoolExecutor(4)  # a pool of at most 4 threads (left unspecified, the default is a multiple of the CPU core count)
    for i in range(20):
        p.submit(task, i)
Six, Synchronous, asynchronous, blocking, non-blocking
1. Blocking and non-blocking describe two running states of a program:
Blocking: the program hits an IO operation, stops in place, and immediately releases the CPU
Non-blocking (the ready or running state): there is no IO, or some mechanism lets the program keep doing other work even when it hits IO, occupying as much CPU time as possible
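The distinction can be seen directly on a socket. Below is a small sketch (using `socket.socketpair` purely for demonstration) where `setblocking(False)` makes `recv` return immediately instead of stopping in place, so the program can keep doing other work and poll for data:

```python
import socket
import time

a, b = socket.socketpair()   # a connected pair of sockets, for demonstration only
a.setblocking(False)         # recv on a will no longer block

try:
    a.recv(1024)             # nothing sent yet: a blocking socket would stop here
except BlockingIOError:
    print('no data ready; the program keeps running')

b.send(b'hi')
received = None
for _ in range(100):         # poll instead of blocking; real code would do other work here
    try:
        received = a.recv(1024)
        break
    except BlockingIOError:
        time.sleep(0.01)
print(received)              # the data sent by the peer

a.close()
b.close()
```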
2. Synchronous and asynchronous describe two ways of submitting a task:
Synchronous call: after submitting the task, wait in place until it has finished and its return value is available, then continue with the next line of code
Asynchronous call: after submitting the task, do not wait; execute the next line of code immediately and collect the results once everything is done
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor
import time, os, random

def task(x):
    print('%s is serving a request' % x)
    time.sleep(random.randint(1, 3))
    return x ** 2

if __name__ == '__main__':
    # Asynchronous call
    p = ThreadPoolExecutor(4)
    obj_l = []
    for i in range(10):
        obj = p.submit(task, i)  # submit returns immediately with a future object
        obj_l.append(obj)
    # shutdown(wait=True) is the pool's close (no new tasks may be submitted) plus join
    p.shutdown(wait=True)
    print(obj_l[3].result())
    print('master')

    # Synchronous call
    p = ThreadPoolExecutor(4)
    for i in range(10):
        res = p.submit(task, i).result()  # .result() blocks, so tasks run one at a time
    print('master')
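Besides keeping a list of future objects and calling `shutdown`, results can also be collected as tasks finish with `concurrent.futures.as_completed`, a standard-library helper shown here as an alternative sketch:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def task(x):
    return x ** 2

# the with block calls shutdown(wait=True) on exit
with ThreadPoolExecutor(4) as pool:
    futures = [pool.submit(task, i) for i in range(10)]
    # as_completed yields each future as soon as its task finishes
    results = [f.result() for f in as_completed(futures)]

print(sorted(results))  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```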
