GIL, timers, thread queue, process pool, and thread pool

Source: Internet
Author: User
Tags: garbage collection, mutex

First, the GIL
1. What is the GIL (a feature of the CPython interpreter)
The GIL (Global Interpreter Lock) is essentially a mutex. Like any mutex, its principle is the same: of multiple concurrently running threads, only one
may execute at a time.
That is: because of the GIL, only one thread within a given process can run at any moment, meaning that in CPython
multiple threads in one process cannot run in parallel ==> they cannot take advantage of multiple cores.
Concurrency, however, is unaffected.

The GIL can be likened to an execute permission: before running, every thread in the same process must first acquire this permission.

2. Why the GIL exists
Because the CPython interpreter's built-in garbage collection mechanism is not thread-safe (if threads concurrently modified shared interpreter data, the GC could not tell which thread changed what).

3. How to use it

01. GIL vs. a custom mutex
The GIL is like an execute permission, and it is forcibly released whenever the holding thread cannot continue executing (e.g. it blocks on I/O).
A custom mutex is never released automatically, even when its holder cannot make progress.

The GIL protects interpreter-level code (related to the garbage collection mechanism) but does not protect other shared data (such as the data in your own code).
Therefore, data that your program needs protected must be locked by your own code.
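A minimal sketch of that last point (the counter and function names are illustrative): the GIL does not make your own `+=` on shared data safe, so application code takes its own `Lock`:

```python
from threading import Thread, Lock

counter = 0
lock = Lock()

def add_many(n):
    global counter
    for _ in range(n):
        with lock:          # without this lock, concurrent += can lose updates
            counter += 1

threads = [Thread(target=add_many, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000 with the lock; possibly less without it
```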

02. Pros and cons of the GIL:
Pros: guarantees thread safety for the CPython interpreter's memory management
Cons: in the CPython interpreter, of the multiple threads started within one process, only one thread can execute at a time;
that is, CPython multithreading cannot run in parallel and cannot take advantage of multiple cores

Notes:
a. The GIL prevents parallelism, but concurrency is still possible; it is not necessarily serial execution. Serial means one task runs fully to completion before the next starts,
whereas in CPython a thread that blocks on I/O gives up the CPU and has its hold on the GIL forcibly revoked.
b. The advantage of multiple cores (multiple CPUs) is improved computing efficiency
c. Compute-intensive work ==> use multiple processes to exploit multiple cores
d. I/O-intensive work ==> use multithreading

4. Two concurrency solutions:
Multiprocessing: for compute-intensive work
Multithreading: for I/O-intensive work



Compute-intensive: multiple processes should be used

from multiprocessing import Process
from threading import Thread
import os, time

def work1():
    res = 0
    for i in range(100000000):
        res *= i

def work2():
    res = 0
    for i in range(100000000):
        res *= i

def work3():
    res = 0
    for i in range(100000000):
        res *= i

def work4():
    res = 0
    for i in range(100000000):
        res *= i

if __name__ == '__main__':
    l = []
    # print(os.cpu_count())  # number of CPUs; this machine has 4 cores
    start = time.time()
    # p1 = Process(target=work1)
    # p2 = Process(target=work2)
    # p3 = Process(target=work3)
    # p4 = Process(target=work4)

    p1 = Thread(target=work1)
    p2 = Thread(target=work2)
    p3 = Thread(target=work3)
    p4 = Thread(target=work4)

    p1.start()
    p2.start()
    p3.start()
    p4.start()
    p1.join()
    p2.join()
    p3.join()
    p4.join()
    stop = time.time()
    print('Run time is %s' % (stop - start))


I/O-intensive: multithreading should be used

from multiprocessing import Process
from threading import Thread
import os, time

def work1():
    time.sleep(5)

def work2():
    time.sleep(5)

def work3():
    time.sleep(5)

def work4():
    time.sleep(5)

if __name__ == '__main__':
    l = []
    # print(os.cpu_count())  # this machine has 4 cores
    start = time.time()
    # p1 = Process(target=work1)
    # p2 = Process(target=work2)
    # p3 = Process(target=work3)
    # p4 = Process(target=work4)

    p1 = Thread(target=work1)
    p2 = Thread(target=work2)
    p3 = Thread(target=work3)
    p4 = Thread(target=work4)

    p1.start()
    p2.start()
    p3.start()
    p4.start()
    p1.join()
    p2.join()
    p3.join()
    p4.join()
    stop = time.time()
    print('Run time is %s' % (stop - start))


Second, the timer
Timer: run an action after a delay of n seconds

from threading import Timer, current_thread


def task(x):
    print('%s run...' % x)
    print(current_thread().name)


if __name__ == '__main__':
    t = Timer(3, task, args=(10,))
    t.start()
    print('master')

Third, the thread queue

import queue

Queue: first in, first out (FIFO)
q = queue.Queue(3)
q.put(1)
q.put(2)
q.put(3)

print(q.get())
print(q.get())
print(q.get())

Stack: first in, last out (LIFO)
q = queue.LifoQueue()
q.put(1)
q.put(2)
q.put(3)
print(q.get())
print(q.get())
print(q.get())

Priority queue: the highest priority comes out first; the smaller the number, the higher the priority
q = queue.PriorityQueue()
q.put((3, 'data1'))
q.put((-10, 'data2'))
q.put((11, 'data3'))

print(q.get())
print(q.get())
print(q.get())
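Beyond the three orderings above, a thread queue is typically used to hand work from one thread to another. A minimal producer/consumer sketch (the names and the `None` sentinel convention are illustrative, not from the original):

```python
import queue
from threading import Thread

q = queue.Queue(maxsize=3)   # bounded: put() blocks when the queue is full
results = []

def consumer():
    while True:
        item = q.get()
        if item is None:     # sentinel value: stop signal
            q.task_done()
            break
        results.append(item * 10)
        q.task_done()        # mark this item as fully processed

t = Thread(target=consumer)
t.start()
for i in range(5):
    q.put(i)
q.put(None)                  # tell the consumer to exit
q.join()                     # block until every put() item was task_done()-ed
t.join()
print(results)  # [0, 10, 20, 30, 40]
```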



Fourth, multithreaded concurrent socket communication
Server side:

from socket import *
from threading import Thread

def talk(conn):
    while True:
        try:
            data = conn.recv(1024)
            if len(data) == 0: break
            conn.send(data.upper())
        except ConnectionResetError:
            break
    conn.close()

def server(ip, port, backlog=5):
    server = socket(AF_INET, SOCK_STREAM)
    server.bind((ip, port))
    server.listen(backlog)

    print('starting...')
    while True:
        conn, addr = server.accept()

        t = Thread(target=talk, args=(conn,))
        t.start()

if __name__ == '__main__':
    server('127.0.0.1', 8080)


Client side:

from socket import *
import os

client = socket(AF_INET, SOCK_STREAM)
client.connect(('127.0.0.1', 8080))  # connect takes a single (ip, port) tuple

while True:
    msg = '%s say hello' % os.getpid()
    client.send(msg.encode('utf-8'))
    data = client.recv(1024)
    print(data.decode('utf-8'))





Fifth, the process pool and thread pool

1. When to use a pool:
The function of a pool is to limit the number of processes or threads that are started.

When should they be limited?
When the number of concurrent tasks far exceeds what the computer can bear, i.e. when a process or thread cannot be started for every task at once,
a pool should be used to cap the number of processes or threads at a level the computer can tolerate.

2. Two ways to submit a task:
synchronous vs. asynchronous
"Synchronous" and "asynchronous" refer to two ways of submitting a task.

Sync: after submitting a task, wait in place until the task finishes and its return value is obtained, then continue with the next line of code
Async: after submitting a task (optionally binding a callback function), do not wait in place; run the next line of code immediately, and once the task has a return value the callback function is triggered automatically
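The two submission styles can be contrasted in a few lines with a thread pool (the task body and sleep time here are illustrative):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def task(n):
    time.sleep(0.1)
    return n * 2

pool = ThreadPoolExecutor(2)

# Synchronous: result() blocks in place until the task is done
res = pool.submit(task, 1).result()

# Asynchronous: submit, bind a callback, and keep going without waiting
done = []
future = pool.submit(task, 2)
future.add_done_callback(lambda f: done.append(f.result()))
# ...code here continues running while task(2) is still sleeping...

pool.shutdown(wait=True)  # wait for all tasks (and callbacks) to finish
print(res, done)  # 2 [4]
```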

Running states of a program (blocked vs. non-blocked)
1. Blocked:
blocked on I/O
2. Non-blocked:
running
ready

2.1 Basic methods
submit(fn, *args, **kwargs)
submit a task asynchronously

map(func, *iterables, timeout=None, chunksize=1)
a map operation that replaces a for loop of submit calls

shutdown(wait=True)
the equivalent of pool.close() + pool.join() for a process pool
wait=True: block until all tasks in the pool are done and resources are reclaimed, then continue
wait=False: return immediately, without waiting for the tasks in the pool to finish
but regardless of the wait value, the whole program still waits for all tasks to complete
submit and map must be called before shutdown

result(timeout=None)
get the result

add_done_callback(fn)
bind a callback function
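A short sketch tying the methods above together (the `square` function is illustrative; sleeps are omitted for brevity):

```python
from concurrent.futures import ThreadPoolExecutor

def square(n):
    return n ** 2

pool = ThreadPoolExecutor(4)

# map: one task per item of the iterable; results come back in input order
squares = list(pool.map(square, range(1, 5)))

# submit: returns a Future; result() retrieves the return value
future = pool.submit(square, 10)

pool.shutdown(wait=True)  # like pool.close() + pool.join(); submit/map must come first
print(squares, future.result())  # [1, 4, 9, 16] 100
```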



Process pool:

from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor
from threading import current_thread
import os, time, random

def task(n):
    print('%s is running' % os.getpid())
    time.sleep(5)
    return n**2

def parse(future):
    time.sleep(1)
    res = future.result()
    print('%s processed %s' % (os.getpid(), res))

if __name__ == '__main__':
    pool = ProcessPoolExecutor(4)
    start = time.time()
    for i in range(1, 5):
        future = pool.submit(task, i)  # submit the task asynchronously
        future.add_done_callback(parse)  # parse triggers as soon as the future has a return value, receiving the future as its argument
    pool.shutdown(wait=True)
    stop = time.time()
    print('master', os.getpid(), (stop - start))

'''
4340 is running
6572 is running
6652 is running
392 is running
5148 processed 1
5148 processed 4
5148 processed 9
5148 processed 16
master 5148 9.330533742904663
'''
In the end, all the results are handled by one process: the main process



Thread pool:

from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor
from threading import current_thread
import time, os, random

def task(n):
    print('%s is running' % current_thread().name)
    time.sleep(5)
    return n**2

def parse(future):
    time.sleep(1)
    res = future.result()
    print('%s processed %s' % (current_thread().name, res))

if __name__ == '__main__':
    pool = ThreadPoolExecutor(4)
    start = time.time()
    for i in range(1, 5):
        future = pool.submit(task, i)
        future.add_done_callback(parse)
    pool.shutdown(wait=True)
    stop = time.time()
    print('master', current_thread().name, (stop - start))

'''
ThreadPoolExecutor-0_0 is running
ThreadPoolExecutor-0_1 is running
ThreadPoolExecutor-0_2 is running
ThreadPoolExecutor-0_3 is running
ThreadPoolExecutor-0_2 processed 9
ThreadPoolExecutor-0_1 processed 4
ThreadPoolExecutor-0_3 processed 16
ThreadPoolExecutor-0_0 processed 1
master MainThread 6.002343416213989
'''
In the end, whichever thread becomes idle handles a result
