Thread, Process, Queue Basic Usage Summary

Source: Internet
Author: User

One, Threads (a thread is the smallest unit of execution, and threads within the same process share the process's resources)

Creating threads: the threading module

Create a thread: threading.Thread(target=function_name, args=(arg,)) * Note that args must be a tuple; when there is only one argument, a trailing comma must follow it.

The threads we create with the threading module are child threads, because the interpreter already runs a main thread to do its work, and the threads we create ourselves run alongside it as child threads.

import threading
import time

def f1(a1, a2):
    time.sleep(2)
    print("456")

t = threading.Thread(target=f1, args=(124, 111,))
t.setDaemon(True)   # do not wait for this child thread to finish
t.start()

t1 = threading.Thread(target=f1, args=(124, 111,))
t1.setDaemon(True)  # do not wait for this child thread to finish
t1.start()

t2 = threading.Thread(target=f1, args=(124, 111,))
t2.setDaemon(True)  # do not wait for this child thread to finish
t2.start()


Note: The main thread is the code that runs from top to bottom (we do not see it explicitly). setDaemon defaults to False, which means the main thread waits for the child threads to finish and then they exit together. If we set t2.setDaemon(True), the main thread does not wait for that child thread after its own code has finished; it exits right away.

t.join(): makes the main thread wait for the thread to finish before execution continues; if every thread is joined immediately after it is started, the threads run one by one, which makes multithreading meaningless.
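
A minimal sketch of the pattern described above (the worker function and timings are illustrative, not taken from the original article): starting each thread and joining it immediately makes the threads run one at a time.

import threading
import time

def work(n):
    time.sleep(1)
    print("worker", n, "done")

for n in range(3):
    t = threading.Thread(target=work, args=(n,))
    t.start()
    t.join()    # the main thread waits here for this thread to finish, so the threads run one by one
print("all child threads finished")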

t.join(timeout): timeout is the maximum waiting time. If the function finishes in less than the timeout, join() returns as soon as the function finishes; if the function takes longer than the timeout, join() waits only for the timeout and then stops waiting.

t.setDaemon(True) decides whether to wait for the child thread to finish before exiting; with True, the program exits without waiting for the child thread to finish.

import threading
import time

def f1(a1, a2):
    time.sleep(3)
    print("456")

t = threading.Thread(target=f1, args=(124, 111,))
t.setDaemon(True)   # do not wait for this child thread to finish
t.start()
t.join(2)
print("222")

222

Note: the child thread needs 3 seconds while join() waits only 2, so the child thread has not finished within the waiting time; because it is a daemon thread, only the main thread's output appears before the program exits.

import threading
import time

def f1(a1, a2):
    time.sleep(3)
    print("456")

t = threading.Thread(target=f1, args=(124, 111,))
t.start()
t.join(2)
print("222")

222
456

Note: the child thread needs 3 seconds while join() waits only 2, so the child thread has not finished within the waiting time; the main thread prints first, and the child thread (not a daemon this time) prints afterwards when it completes.

Second, threading.Event

Event is one of the simplest mechanisms for communication between threads: one thread sends an event signal while the other threads wait for that signal. It is typically used by the main thread to control the execution of other threads. An Event manages a flag that can be set to True with set() or reset to False with clear(); wait() blocks until the flag is True. The flag defaults to False.

    • event.wait([timeout]): blocks the thread until the event's internal flag is set to True or the timeout expires (if the timeout parameter is provided).
    • event.set(): sets the flag to True.
    • event.clear(): sets the flag to False.
    • event.is_set(): returns whether the flag is True.

import threading
import time

def do(event):
    time.sleep(1)
    print('start')
    event.wait()        # block until the flag is set to True
    print('execute')

event_obj = threading.Event()
for i in range(10):     # thread count assumed; the original value was lost
    t = threading.Thread(target=do, args=(event_obj,))
    t.start()

event_obj.clear()       # set the flag to False
inp = input('input: ')
if inp == 'True':
    event_obj.set()     # set the flag to True

Third, the thread lock

Because multiple threads within the same process share the same memory, different threads can modify the same data at the same time, which can produce dirty (incorrect) data. To avoid this, a thread should acquire a lock before modifying shared data and release it when it is done, so that other threads can only modify the data afterwards. This is the idea behind the thread lock.

import threading
import time

globals_num = 0
lock = threading.RLock()

def func():
    lock.acquire()    # acquire the lock
    global globals_num
    globals_num += 1
    time.sleep(1)
    print(globals_num)
    lock.release()    # release the lock

for i in range(5):
    t = threading.Thread(target=func)
    t.start()

Note: every thread acquires the lock before modifying the variable, so there is no dirty data; a later thread can only read and modify the variable after the previous thread has finished and released the lock. As a result, each value is printed about 1 second after the previous one, and the printed values increase one by one.

4. Queue

Features: FIFO (first in, first out); a single queue lets items in at one end and out at the other, one at a time.

Create a queue: queue.Queue(maxsize), where maxsize is the maximum number of items the queue can hold.

q = queue.Queue(maxsize=0)  # build a FIFO queue; maxsize sets the queue length, and 0 means the length is unlimited
q.join()      # block until every item that was put into the queue has been retrieved and processed
q.qsize()     # return the size of the queue (not reliable)
q.empty()     # return True if the queue is empty, otherwise False (not reliable)
q.full()      # return True if the queue is full, otherwise False (not reliable)
q.put(item, block=True, timeout=None)   # put an item into the queue
q.get(block=True, timeout=None)         # get a value from the queue; if the queue is empty it keeps waiting (blocks)
q.put_nowait(item)   # equivalent to put(item, block=False)
q.get_nowait()       # equivalent to get(block=False); raises immediately if there is no value in the queue
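
As a small illustration of how q.join() pairs with q.task_done(), here is a sketch (the worker function and item values are illustrative, not from the original article):

import queue
import threading

q = queue.Queue()

def worker():
    while True:
        item = q.get()      # blocks until an item is available
        print("processing", item)
        q.task_done()       # tell the queue this item has been fully handled

threading.Thread(target=worker, daemon=True).start()

for item in range(3):
    q.put(item)             # FIFO: 0 goes in first and comes out first

q.join()                    # blocks until task_done() has been called for every item that was put
print("all items processed")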

Producer-Consumer model

import queue
import threading

message = queue.Queue(10)   # queue capacity assumed; the original value was lost

def producer(i):
    # every producer thread keeps putting its own value into the queue
    while True:
        message.put(i)

def consumer(i):
    # every consumer thread keeps getting values from the queue
    while True:
        msg = message.get()
        print(msg)

for i in range(12):   # start 12 threads that all run the producer function
    t = threading.Thread(target=producer, args=(i,))
    t.start()

for i in range(10):   # start 10 threads that all run the consumer function
    t = threading.Thread(target=consumer, args=(i,))
    t.start()

import queue
import threading

message = queue.Queue(10)   # queue capacity assumed; the original value was lost

def producer(i):
    message.put(i)   # each producer thread puts its own value into the queue, 5 values in total

def consumer(i):
    msg = message.get()   # 10 threads come to fetch; only 5 get a value and the rest keep waiting
    print(msg)

for i in range(5):
    t = threading.Thread(target=producer, args=(i,))
    t.start()

for i in range(10):
    t = threading.Thread(target=consumer, args=(i,))
    t.start()

import queue
import threading

message = queue.Queue(10)   # queue capacity assumed; the original value was lost

def producer(i):
    message.put(i)   # each producer thread puts its own value into the queue, 5 values in total

def consumer(i):
    # 10 threads come to fetch, but only 5 get a value; the others do not wait,
    # get_nowait() raises queue.Empty immediately and those threads exit
    msg = message.get_nowait()
    print(msg)

for i in range(5):
    t = threading.Thread(target=producer, args=(i,))
    t.start()

for i in range(10):
    t = threading.Thread(target=consumer, args=(i,))
    t.start()

V. Process

Creating processes: the multiprocessing module

Create a process: multiprocessing.Process(target=function_name, args=(arg,)) * Here again args must be a tuple, and when there is only one argument a trailing comma must follow it.

fork() is not supported on Windows, so if you want to use processes there, the process-creating code must be written under if __name__ == "__main__":; under Linux this is not required.

The processes we create with the multiprocessing module are child processes, because the interpreter already runs a main thread (inside the main process) to do its work, and any process we create ourselves is a child process.

import time
import multiprocessing

def f1(a1, a2):
    time.sleep(2)
    print("456")

if __name__ == '__main__':
    t = multiprocessing.Process(target=f1, args=(124, 111,))
    t.daemon = True    # do not wait for this child process to finish
    t.start()
    t1 = multiprocessing.Process(target=f1, args=(124, 111,))
    t1.daemon = True   # do not wait for this child process to finish
    t1.start()
    print("End")


End

Note: the main process runs the code from top to bottom (we do not see a main thread explicitly). daemon defaults to False, which means the calling process waits for the child process to finish and then they exit together. If we set t1.daemon = True, the main process does not wait for that child process to finish before exiting.

import time
import multiprocessing

def f1(a1, a2):
    time.sleep(2)
    print("456")

if __name__ == '__main__':
    t = multiprocessing.Process(target=f1, args=(124, 111,))
    t.daemon = True    # do not wait for this child process to finish
    t.start()
    print("111")
    t.join(1)
    print("End")

111
End

Note: print("111") runs first, then join(1) waits at most 1 second; the child process needs 2 seconds, so when the wait expires the main process prints "End" and exits without waiting any longer (the daemon child never gets to print).

Six, data is not shared between processes

Because each process has its own memory, a process's modifications only change its own data, so there is no thread-lock concept between processes.

import multiprocessing

li = []

def f(a):
    li.append(a)   # each process appends to the li that lives in its own memory
    print(li)

if __name__ == "__main__":
    for a in range(5):
        t = multiprocessing.Process(target=f, args=(a,))
        t.start()

[0]
[2]
[1]
[3]
[4]

Data sharing between processes

If we want two different processes to work together on and modify the same content, that is, if we really need to share data, multiprocessing provides two ways.

1. Array

from multiprocessing import Process, Array

temp = Array('i', [11, 22, 33, 44])   # a shared array of C ints

def foo(i):
    temp[i] = 100 + i
    for item in temp:
        print(i, '----->', item)

for i in range(2):
    p = Process(target=foo, args=(i,))
    p.start()

2. Manager

import time
from multiprocessing import Process, Manager

def f(i, dic):
    dic[i] = 100 + i
    print(dic)

if __name__ == "__main__":
    manage = Manager()
    dic = manage.dict()
    for i in range(5):
        p = Process(target=f, args=(i, dic,))
        p.start()
        p.join()   # must be added, otherwise the child processes cannot find the manager's connection file after the main process exits

{0:100}
{0:100, 1:101}
{0:100, 1:101, 2:102}
{0:100, 1:101, 2:102, 3:103}
{0:100, 1:101, 2:102, 3:103, 4:104}

Process Pool

Because we should not simply create as many threads or processes as possible but rather an appropriate number of them, the concept of a process pool was introduced: instead of starting a new process for every task, a process is taken from the pool when one is needed.

Internally the process pool maintains a set of processes. When it is used, a process is taken from the pool; if no process in the pool is currently available, the program waits until one becomes available.

There are two methods in a process pool:

    • apply
    • apply_async
from multiprocessing import Process, Pool
import time

def foo(i):
    time.sleep(2)
    return i + 100

def bar(arg):
    print(arg)

if __name__ == "__main__":
    pool = Pool(5)
    # print(pool.apply(foo, (1,)))
    # print(pool.apply_async(func=foo, args=(1,)).get())
    for i in range(10):   # task count assumed; the original value was lost
        pool.apply_async(func=foo, args=(i,), callback=bar)
    print('end')
    pool.close()
    pool.join()   # close the pool only after its processes have finished; if this is commented out, the program exits right away


Note: a process is requested from the pool to run the foo method; as long as foo has not finished, the bar method never fires. Only after foo finishes is bar triggered, and foo's return value is passed to bar as its argument.

Callback function: used to confirm that the task function has finished executing (it receives the task's return value).
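
A small sketch of how the return value reaches the callback (the function names here are illustrative, not from the original article); apply_async() also returns an AsyncResult whose get() method yields the same return value directly:

from multiprocessing import Pool

def foo(i):
    return i + 100

def bar(arg):
    print("callback got", arg)    # arg is foo's return value

if __name__ == "__main__":
    pool = Pool(2)
    result = pool.apply_async(func=foo, args=(1,), callback=bar)
    print("get() returned", result.get())   # blocks until foo finishes; prints 101
    pool.close()
    pool.join()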

Apply: 1. Requests a process from the process pool.

2. Every submitted task comes with an implicit join, so the caller waits for it to finish.

3. Its characteristic is: submit one task, let it run to completion, and only then submit the next one.

4. Because every submission is joined, the tasks are executed one after another, in order.

5. The number of submissions has nothing to do with the number of processes in the pool, because each submission waits for the previous one to finish before the next one starts.

from multiprocessing import Pool
import time

def f1(a):
    time.sleep(1)
    print(a)

if __name__ == "__main__":
    pool = Pool(5)
    for i in range(6):
        t = pool.apply(func=f1, args=(i,))
        print("222222222222")

0
222222222222
1
222222222222
2
222222222222
3
222222222222
4
222222222222
5
222222222222

Apply_async: 1. Requests a process from the process pool.

2. Submitted tasks are not joined, so the caller does not wait for one task to finish before submitting the next.

3. If the number of submitted tasks is greater than the number of processes in the pool, at most pool-size tasks run at the same time; the rest wait until a process finishes and returns to the pool before they start.

4. The tasks run concurrently, and a callback function can be set.

5. The process pool does not make the main process wait for the child processes, so if the main process finishes while the children have not, it exits directly. Therefore we should add close() and join() after submitting the tasks, so that the main process waits for the children to finish before exiting (the author's own guess).

from multiprocessing import Pool
import time

def f1(a):
    time.sleep(1)
    print(a)

if __name__ == "__main__":
    pool = Pool(5)
    for i in range(9):
        pool.apply_async(func=f1, args=(i,))
        print("222222222222")
    pool.close()
    pool.join()   # methods of the process pool: wait for the worker processes to finish before exiting

222222222222
222222222222
222222222222
222222222222
222222222222
222222222222
222222222222
222222222222
222222222222
0
1
2
3
4
5
6
7
8
