Python threads (queue, thread pool) and coroutines (theory, greenlet, gevent module)

Source: Internet
Author: User

Thread queues:

Queues: use import queue; the usage is the same as that of the process queue.

Queue is especially useful in threaded programming when information must be exchanged safely between multiple threads.
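Though not in the original notes, the safe exchange described above can be sketched with a tiny producer/consumer pair (the function names and the None sentinel are illustrative choices): two threads share a queue.Queue without any explicit locking.

```python
import queue
import threading

q = queue.Queue()
results = []

def producer():
    for i in range(3):
        q.put(i)        # put is thread-safe
    q.put(None)         # sentinel value: tells the consumer to stop

def consumer():
    while True:
        item = q.get()  # blocks until an item is available
        if item is None:
            break
        results.append(item)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(results)
```

The sentinel pattern is one common way to shut a consumer down cleanly; the queue itself handles all the locking internally.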

class queue.Queue(maxsize=0)  # first in, first out (FIFO)
import queue

q = queue.Queue()
q.put('first')
q.put('second')
q.put('third')
print(q.get())
print(q.get())
print(q.get())

'''
Results (FIFO):
first
second
third
'''

class queue.LifoQueue(maxsize=0)  # last in, first out (LIFO)

import queue

q = queue.LifoQueue()
q.put('first')
q.put('second')
q.put('third')
print(q.get())
print(q.get())
print(q.get())

'''
Results (LIFO):
third
second
first
'''

class queue.PriorityQueue(maxsize=0)  # a queue whose items can be stored with a priority

import queue

q = queue.PriorityQueue()
# put takes a tuple; the first element of the tuple is the priority (usually a
# number, though any mutually comparable values work), and the smaller the
# number, the higher the priority
# (the priority values below are illustrative; the digits were lost in the original)
q.put((20, 'a'))
q.put((10, 'b'))
q.put((30, 'c'))
print(q.get())
print(q.get())
print(q.get())

'''
Results (the smaller the number, the higher the priority, the earlier out of the queue):
(10, 'b')
(20, 'a')
(30, 'c')
'''

Thread pools:

#1 Introduction
The concurrent.futures module provides a highly encapsulated asynchronous calling interface:
ThreadPoolExecutor: a thread pool, providing asynchronous calls
ProcessPoolExecutor: a process pool, providing asynchronous calls
Both implement the same interface, which is defined by the abstract Executor class.

#2 Basic methods
submit(fn, *args, **kwargs): submit a task asynchronously
map(func, *iterables, timeout=None, chunksize=1): replaces a for loop of submit calls
shutdown(wait=True): equivalent to the process pool's pool.close() + pool.join(). wait=True waits for all tasks in the pool to finish and resources to be reclaimed before continuing; wait=False returns immediately without waiting for the tasks to complete. Regardless of the wait value, the whole program waits until all tasks have completed. submit and map must be called before shutdown.
result(timeout=None): get the result
add_done_callback(fn): callback function
import time
from threading import currentThread, get_ident
from concurrent.futures import ThreadPoolExecutor   # thread pool class
from concurrent.futures import ProcessPoolExecutor  # process pool class

# note: the pool sizes and range() bounds below are illustrative; the digits
# were lost in the original text

def func(i):
    time.sleep(1)
    print('in %s %s' % (i, currentThread()))
    return i ** 2

def back(fn):
    print(fn.result(), currentThread())

# map: start multithreaded tasks
# t = ThreadPoolExecutor(5)
# t.map(func, range(20))

# submit: commit tasks asynchronously
# t = ThreadPoolExecutor(5)
# for i in range(20):
#     t.submit(func, i)
# t.shutdown()
# print('main:', currentThread())

# size of the thread pool: usually 5 * number of CPUs

# get task results
# t = ThreadPoolExecutor(5)
# ret_l = []
# for i in range(20):
#     ret = t.submit(func, i)
#     ret_l.append(ret)
# t.shutdown()
# for ret in ret_l:
#     print(ret.result())
# print('main:', currentThread())

# callback function
t = ThreadPoolExecutor(5)
for i in range(20):
    t.submit(func, i).add_done_callback(back)
# callback function (process version)
import os
import time
from concurrent.futures import ProcessPoolExecutor  # process pool class

def func(i):
    time.sleep(1)
    print('in %s %s' % (i, os.getpid()))
    return i ** 2

def back(fn):
    print(fn.result(), os.getpid())

if __name__ == '__main__':
    print('main:', os.getpid())
    t = ProcessPoolExecutor()
    for i in range(20):   # the range bound is illustrative; the digit was lost
        t.submit(func, i).add_done_callback(back)

The multiprocessing module comes with a process pool
The threading module does not have a thread pool.

concurrent.futures: process pool and thread pool

Create a thread pool / process pool: ThreadPoolExecutor / ProcessPoolExecutor

ret = t.submit(func, arg1, arg2, ...): submits a task asynchronously

ret.result(): gets the result; to keep the submissions asynchronous, collect the Future objects in a list and call result() only after all tasks have been submitted

map(func, iterable)

shutdown(): close + join, synchronization control

add_done_callback(fn): callback function; the parameter the callback receives is a Future object, and you get the task's return value from it via result()

The callback function still executes in the main process.
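A minimal sketch tying the methods above together, using the with-statement form of ThreadPoolExecutor (the square task and the pool size are illustrative):

```python
from concurrent.futures import ThreadPoolExecutor

def square(i):          # illustrative task
    return i ** 2

# the with-block calls shutdown(wait=True) on exit
with ThreadPoolExecutor(max_workers=4) as pool:
    # submit returns Future objects; collecting them in a list first keeps all
    # tasks running asynchronously before we ask for any result
    futures = [pool.submit(square, i) for i in range(5)]
    results = [f.result() for f in futures]
    # map is shorthand for the submit loop above
    mapped = list(pool.map(square, range(5)))

print(results)
print(mapped)
```

Calling result() immediately inside the submit loop would serialize the tasks, which is exactly the pitfall the note about "using a list" warns against.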


Earlier we learned the concepts of threads and processes: in the operating system, the process is the smallest unit of resource allocation and the thread is the smallest unit of CPU scheduling. Following this logic, we have already increased CPU utilization considerably. But we know that whether we create multiple processes or multiple threads to solve a problem, it takes time to create them and to manage the switching between them.

As our pursuit of efficiency continues, implementing concurrency on the basis of a single thread has become a new goal: concurrency with only one main thread, which obviously means only one CPU. This saves the time spent creating threads and processes.

For this we need to review the nature of concurrency: switching + saving state.

While the CPU is running a task, it will switch away to execute other tasks in two situations (the switching is controlled by the operating system): the task blocks, or the task's computation takes too long.

PS: when introducing process theory, we mentioned the three execution states of a process; the thread is the execution unit, so these can also be understood as the three states of a thread.

Note that the second situation does not improve efficiency; it only lets the CPU be shared fairly, achieving the appearance that all tasks run "simultaneously". If the tasks are pure computation, this switching actually reduces efficiency.

We can verify this using yield. yield is itself a way to save a task's running state within a single thread, so let's review it briefly:

#1 yield can save state; the saved state is similar to the operating system saving a thread's state, but yield is controlled at the code level and is far more lightweight
#2 send can pass the result of one function to another function, which makes it possible to switch between programs within a single thread

Process: the smallest unit of resource allocation

Thread: the smallest unit of CPU scheduling

What is a coroutine: the ability to switch between multiple tasks on the basis of a single thread

Saves the overhead of opening extra threads

Scheduled from the Python code level

An ordinary thread is the smallest unit of CPU scheduling

The scheduling of coroutines is not done by the operating system

Learning coroutines:

# Ways you've already learned to switch between two tasks:

# switching with a generator
# def func():
#     print(1)
#     x = yield 'aaa'
#     print(x)
#     yield 'bbb'
#
# g = func()
# print(next(g))
# print(g.send('***'))   # the value sent was lost in the original; '***' is a placeholder

# switching between functions -- a coroutine-style producer/consumer
# def consumer():
#     while True:
#         x = yield
#         print(x)
#
# def producer():
#     g = consumer()
#     next(g)   # prime the generator
#     for i in range(10):   # the range bound is illustrative; the digit was lost
#         g.send(i)
#
# producer()

# yield only switches between programs; it cannot reuse the time spent in any IO operation
Introduction to coroutines

Coroutine: concurrency within a single thread, also known as a micro-thread or fiber. English name: coroutine. In one sentence, what a coroutine is: a coroutine is a lightweight user-mode thread, i.e. one that is scheduled by the user program itself.

It should be emphasized that:

#1. Python threads are kernel-level, i.e. controlled and scheduled by the operating system (for example, a single thread that hits IO, or runs too long, is forced to give up the CPU so another thread can run)
#2. Coroutines are opened within a single thread; once IO is encountered, the switch is controlled from the application level (not by the operating system) to increase efficiency (!!! switching on non-IO operations has nothing to do with efficiency)

Compared with the operating system controlling thread switching, the user controls coroutine switching within a single thread.

The advantages are as follows:

#1. The switching overhead of coroutines is smaller; it is program-level switching that the operating system is completely unaware of, and is therefore more lightweight
#2. Concurrency can be achieved within a single thread, maximizing CPU utilization

Disadvantages are as follows:

#1. The essence of a coroutine is a single thread, so it cannot use multiple cores; a single program can, however, open multiple processes, each process multiple threads, and each thread multiple coroutines
#2. A coroutine lives in a single thread, so once a coroutine blocks, the entire thread is blocked
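The second drawback can be sketched without gevent at all, using plain generators and a naive round-robin scheduler (all names here are illustrative): a genuinely blocking call in one task stalls every task, because there is only one thread.

```python
import time

def counter(name, n):
    # a cooperative task: does a little work, then yields control
    for i in range(n):
        yield
        print(name, i)

def blocker():
    yield
    time.sleep(0.2)   # a real blocking call: nothing else can run meanwhile
    print('blocker done')

tasks = [counter('a', 2), blocker(), counter('b', 2)]
start = time.time()
while tasks:                      # naive round-robin scheduler
    for t in tasks[:]:
        try:
            next(t)
        except StopIteration:
            tasks.remove(t)
elapsed = time.time() - start
print('whole thread stalled for at least 0.2s:', elapsed >= 0.2)
```

While blocker() sleeps, neither counter makes progress; the whole run takes at least 0.2 seconds even though the counters need almost no CPU time.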

Summary of the characteristics of coroutines:

    1. Concurrency is implemented within a single thread
    2. No lock is required to modify shared data
    3. The user program itself keeps the context and stack of multiple control flows
    4. Bonus: a coroutine automatically switches to another task when it encounters an IO operation (yield and greenlet cannot detect IO; the gevent module can, using the select mechanism)

Install the greenlet module with: pip3 install greenlet


from greenlet import greenlet
import time

def eat():
    print('eat')
    time.sleep(1)
    g2.switch()   # switch
    print('finished eating')
    time.sleep(1)
    g2.switch()

def play():
    print('play')
    time.sleep(1)
    g1.switch()
    print('played happily')
    time.sleep(1)

g1 = greenlet(eat)
g2 = greenlet(play)
g1.switch()   # switch

# switch on IO?  gevent: pip3 install gevent
# greenlet is the underlying layer of gevent
# gevent is implemented on top of greenlet
# the switching between programs is controlled in Python code

greenlet only provides a more convenient way to switch than a generator. When execution is cut to a task and that task encounters IO, it blocks in place; greenlet still does not solve the problem of automatically switching on IO to improve efficiency.

Single-threaded code for these 20 tasks usually contains both computation and blocking operations. While task 1 is blocked, we can use the blocking time to execute task 2, and so on. This is exactly what the gevent module does to improve efficiency.


# sequential execution
import time

def f1():
    res = 1
    for i in range(100000000):
        res += i

def f2():
    res = 1
    for i in range(100000000):
        res *= i

start = time.time()
f1()
f2()
stop = time.time()
print('run time is %s' % (stop - start))  # 10.985628366470337

# switching
from greenlet import greenlet
import time

def f1():
    res = 1
    for i in range(100000000):
        res += i
        g2.switch()

def f2():
    res = 1
    for i in range(100000000):
        res *= i
        g1.switch()

start = time.time()
g1 = greenlet(f1)
g2 = greenlet(f2)
g1.switch()
stop = time.time()
print('run time is %s' % (stop - start))  # 52.763017892837524

# efficiency comparison: switching between pure-computation tasks is slower
The gevent module

Installation: pip3 install gevent

gevent is a third-party library that makes it easy to implement concurrent or asynchronous programming. The main pattern used in gevent is the greenlet, a lightweight coroutine that plugs into Python in the form of a C extension module. Greenlets all run inside the operating-system process of the main program, but they are scheduled cooperatively.

# usage introduction
g1 = gevent.spawn(func, 1, 2, 3, x=4, y=5)
# creates a coroutine object g1; the first argument of spawn() is the function
# name, e.g. eat, followed by any number of arguments -- positional or keyword --
# all of which are passed to eat
g2 = gevent.spawn(func2)
g1.join()  # wait for g1 to finish
g2.join()  # wait for g2 to finish
# or replace the two lines above with: gevent.joinall([g1, g2])
g1.value   # get the return value of func1
# example: switching actively when IO is encountered
import gevent

def eat(name):
    print('%s eat 1' % name)
    gevent.sleep(2)
    print('%s eat 2' % name)

def play(name):
    print('%s play 1' % name)
    gevent.sleep(1)
    print('%s play 2' % name)

g1 = gevent.spawn(eat, 'egon')
g2 = gevent.spawn(play, name='egon')
g1.join()
g2.join()
# or: gevent.joinall([g1, g2])
print('main')

In the example above, gevent.sleep(2) simulates an IO block that gevent can recognize. time.sleep(2) and other blocking calls are not recognized by gevent directly; you need the following line of code to patch them, after which they can be recognized:

from gevent import monkey;monkey.patch_all() must be placed before the modules being patched are imported, such as the time and socket modules

Or simply remember: to use gevent, put from gevent import monkey;monkey.patch_all() at the beginning of the file.

# use coroutines to reduce the time consumed by IO operations
from gevent import monkey;monkey.patch_all()
import gevent
import time

def eat():
    print('eat')
    time.sleep(2)
    print('finished eating')

def play():
    print('play')
    time.sleep(1)
    print('played happily')

g1 = gevent.spawn(eat)
g2 = gevent.spawn(play)
gevent.joinall([g1, g2])
# g1.join()
# g2.join()

# Why does nothing run without join? Does the coroutine need to be started?
#   It is not "not started" -- it is never switched to.
#   gevent does the switching for you, but switching is conditional: it only
#   happens on IO, and gevent does not recognize IO operations outside its own
#   modules. Use join to block until the coroutine task completes.
# To help gevent recognize blocking calls in other modules:
#   from gevent import monkey;monkey.patch_all()  -- write it before the other modules are imported
from gevent import monkey;monkey.patch_all()
import threading
import gevent
import time

def eat():
    print(threading.current_thread().getName())
    print('eat food 1')
    time.sleep(2)
    print('eat food 2')

def play():
    print(threading.current_thread().getName())
    print('play 1')
    time.sleep(1)
    print('play 2')

g1 = gevent.spawn(eat)
g2 = gevent.spawn(play)
gevent.joinall([g1, g2])
print('main')

We can use threading.current_thread().getName() to inspect each of g1 and g2; the result is DummyThread-n, i.e. a dummy thread.

gevent: synchronous vs. asynchronous

from gevent import spawn, joinall, monkey;monkey.patch_all()
import time

def task(pid):
    """
    Some non-deterministic task
    """
    time.sleep(0.5)
    print('Task %s done' % pid)

def synchronous():   # synchronous
    for i in range(10):   # the range bound is illustrative; the digit was lost
        task(i)

def asynchronous():  # asynchronous
    g_l = [spawn(task, i) for i in range(10)]
    joinall(g_l)
    print('DONE')

if __name__ == '__main__':
    print('Synchronous:')
    synchronous()
    print('Asynchronous:')
    asynchronous()

# The important part of the program above is wrapping the task function into a
# greenlet with spawn. The initialized greenlets are stored in the list g_l,
# which is passed to joinall; joinall blocks the current flow of execution and
# runs all the given greenlet tasks. Execution continues only after all
# greenlets have finished.

Using coroutines to implement a concurrent socket server

# server
from gevent import monkey;monkey.patch_all()
import socket
import gevent

def talk(conn):
    while True:
        conn.send(b'hello')
        print(conn.recv(1024))

sk = socket.socket()
sk.bind(('127.0.0.1', 9090))
sk.listen()
while True:
    conn, addr = sk.accept()
    gevent.spawn(talk, conn)

# client
import socket
from threading import Thread

def client():
    sk = socket.socket()
    sk.connect(('127.0.0.1', 9090))
    while True:
        print(sk.recv(1024))
        sk.send(b'bye')

for i in range(500):   # the client count is illustrative; the digit was lost
    Thread(target=client).start()



# 4 cores handling 50000 QPS of concurrency:
#   5 processes
#   20 threads per process
#   500 coroutines per thread

Coroutines: can significantly increase CPU utilization in the single-core case

No data-safety issues

No time overhead for creating or switching threads

Switching happens at the user level, so the program does not block the entire thread just because one task in the coroutine enters a blocking state

Thread switching:

Switching on time slices reduces CPU efficiency

Switching on IO increases CPU efficiency










