Python Full-Stack Development 11: Processes and Threads


I. Threads

Multitasking can be handled by multiple processes, or by multiple threads within a single process. All threads within a process share the same block of memory. Creating a thread in Python is simple: import the threading module. Let's look at the code for creating multiple threads.

import threading
import time

def f1(i):
    time.sleep(1)
    print(i)

if __name__ == '__main__':
    for i in range(5):
        t = threading.Thread(target=f1, args=(i,))
        t.start()
    print('start')

# The main thread waits for the child threads to complete; the child threads
# run concurrently, e.g.: start 2 1 3 0 4

The main thread executes from top to bottom: it creates 5 child threads, prints 'start', and then waits for the child threads to finish executing. If you want the threads to execute one after another instead of concurrently, use the join method. Here's the code:

import threading
import time

def f1(i):
    time.sleep(1)
    print(i)

if __name__ == '__main__':
    for i in range(5):
        t = threading.Thread(target=f1, args=(i,))
        t.start()
        t.join()
    print('start')

# The threads execute sequentially from top to bottom, and 'start' is printed last:
# 0 1 2 3 4 start

In the code above, without join, the main thread still waits for the child threads to end by default. If you do not want the main thread to wait, you can mark a thread as a background (daemon) thread before it is started. Background threads run alongside the main thread, but when the main thread finishes, background threads are stopped whether they have completed or not; foreground threads are the opposite. If not specified, a thread defaults to the foreground. The code below shows how to set a background thread: the main thread prints 'start' directly, finishes, and exits without waiting for the child threads, so the child threads' output is never printed.

import threading
import time

def f1(i):
    time.sleep(1)
    print(i)

if __name__ == '__main__':
    for i in range(5):
        t = threading.Thread(target=f1, args=(i,))
        t.setDaemon(True)
        t.start()
    print('start')

# The main thread does not wait for the child threads; the only output is:
# start

In addition, you can customize the name of a thread via the name parameter: t = threading.Thread(target=f1, args=(i,), name='mythread{}'.format(i)). Besides that, a thread has some other useful methods and attributes:

    • t.getName(): gets the name of the thread
    • t.setName(): sets the name of the thread
    • t.name: gets or sets the name of the thread (preferred; getName/setName are deprecated)
    • t.is_alive(): determines whether the thread is alive
    • t.isAlive(): camelCase alias of is_alive() (removed in Python 3.9)
    • t.isDaemon(): determines whether it is a daemon thread
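A short sketch of the methods listed above; the thread name and the work function used here are made up for illustration:

```python
import threading
import time

def work():
    time.sleep(0.2)

t = threading.Thread(target=work, name='mythread-0')
t.daemon = True            # preferred over the deprecated t.setDaemon(True)
t.start()

print(t.name)              # mythread-0
print(t.is_alive())        # True while work() is still sleeping
t.join()
print(t.is_alive())        # False once the thread has finished
```

Note that is_alive() only returns True between start() and the end of the thread's run.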
II. Thread Locks

Because threads share the same memory, manipulating the same data from multiple threads can easily create conflicts. In that case you can add a lock for the threads. Here we use RLock instead of Lock, because a plain Lock will deadlock if the same thread tries to acquire it more than once, while an RLock may be acquired multiple times within the same thread, but then requires the same number of release calls before the lock is actually freed. Once one thread has acquired the lock and before it releases it, the other threads wait.

import threading

g = 1
lock = threading.RLock()

def fun():
    lock.acquire()    # acquire the lock
    global g
    g += 2
    print(g, threading.current_thread().name)
    lock.release()    # release the lock

for i in range(10):
    t = threading.Thread(target=fun, name='t-{}'.format(i))
    t.start()

# Output:
# 3 t-0
# 5 t-1
# 7 t-2
# ...
# 21 t-9
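The reentrant behavior described above, n acquires needing n releases, can be sketched in a few lines:

```python
import threading

lock = threading.RLock()

lock.acquire()          # first acquisition
lock.acquire()          # the same thread may acquire again without deadlocking
print('acquired twice')

lock.release()          # one release is not enough...
lock.release()          # ...the lock is only freed after the second release
```

With a plain threading.Lock, the second acquire() above would block forever, which is why RLock is used when the same thread may re-enter a locked section.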
III. Inter-Thread Communication: Event

Event is one of the simplest inter-thread communication mechanisms. It is mainly used by the main thread to control the execution of other threads, through three methods: wait, clear, and set. Here is a simple example:

import threading
import time

def f1(event):
    print('start:')
    event.wait()            # block here, waiting for set
    print('end:')

if __name__ == '__main__':
    event_obj = threading.Event()
    for i in range(5):
        t = threading.Thread(target=f1, args=(event_obj,))
        t.start()
    event_obj.clear()       # clear the flag
    inp = input('>>>>: ')
    if inp == 'true':
        event_obj.set()     # set the flag
IV. Queues

A queue can be simply understood as a first-in, first-out data structure. It is used, for example, in the producer-consumer model, in writing a thread pool, and (as in the earlier select example) to store data for reads and writes. Queues come up in many places, so they are worth mastering. Let's start by looking at what the queue module provides:

q = queue.Queue(maxsize=0)  # construct a FIFO queue; maxsize sets the queue
                            # length, and 0 means the length is unlimited

q.join()       # block until every item put in the queue has been fetched and
               # marked processed with task_done()
q.task_done()  # mark a previously fetched item as processed (used with join)
q.qsize()      # return the size of the queue (unreliable)
q.empty()      # return True if the queue is empty, otherwise False (unreliable)
q.full()       # return True if the queue is full, otherwise False (unreliable)

q.put(item, block=True, timeout=None)
# Put item at the tail of the queue; item is required. block defaults to True,
# meaning the call waits while the queue is full. With block=False the call is
# non-blocking and raises queue.Full if the queue is full. The optional timeout
# sets how long to block, raising queue.Full if the item cannot be placed in time.

q.get(block=True, timeout=None)
# Remove and return the value at the head of the queue. block defaults to True,
# meaning the call blocks while the queue is empty. With block=False it raises
# queue.Empty when the queue is empty. The optional timeout sets how long to
# block before raising queue.Empty.

q.get_nowait()  # equivalent to get(block=False)
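A minimal sketch of put/get together with join() and task_done(); the worker function and item values here are made up for illustration:

```python
import queue
import threading

q = queue.Queue(maxsize=0)   # unlimited length

results = []

def worker():
    while True:
        item = q.get()           # blocks until an item is available
        results.append(item * 2)
        q.task_done()            # tell the queue this item is fully processed

t = threading.Thread(target=worker, daemon=True)
t.start()

for n in [1, 2, 3]:
    q.put(n)

q.join()                         # blocks until every put item got a task_done()
print(results)                   # [2, 4, 6]
```

Because there is a single worker and the queue is FIFO, the items come out in the order they were put in.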

The code below briefly demonstrates the producer-consumer model:

import queue
import random
import threading
import time

message = queue.Queue(10)

def product(num):
    for i in range(num):
        message.put(i)
        print('Add {} to queue'.format(i))
        time.sleep(random.randrange(0, 1))

def consume(num):
    count = 0
    while count < num:
        i = message.get()
        print('Remove {} from queue'.format(i))
        time.sleep(random.randrange(1, 2))
        count += 1

t1 = threading.Thread(target=product, args=(10,))
t1.start()
t2 = threading.Thread(target=consume, args=(10,))
t2.start()
V. Processes

One level above threads is the process: a process can contain many threads. The difference between processes and threads is that data is not shared between processes. Multiprocessing can also handle multitasking, but processes consume more resources, so computational (CPU-bound) tasks are best handed to multiple processes, while IO-bound tasks are best handled with multithreading. In addition, the number of processes should match the number of CPU cores.
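To size a pool to the machine, the core count can be read from the standard library. A small sketch; the one-worker-per-core rule above is a guideline, not a hard requirement:

```python
import multiprocessing

cores = multiprocessing.cpu_count()   # number of CPU cores visible to Python
print(cores)

# A common choice is one worker process per core for CPU-bound work:
# pool = multiprocessing.Pool(cores)
```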

On Windows you cannot use fork to create child processes, so you use the multiprocessing module instead. Let's first see how to create a process; try to guess what the result of the following will be:

import multiprocessing

l = []

def f(i):
    l.append(i)
    print('Hi', l)

if __name__ == '__main__':
    for i in range(10):
        p = multiprocessing.Process(target=f, args=(i,))
        # Data is not shared: each of the 10 processes gets its own copy of l
        p.start()
VI. Inter-Process Data Sharing

Data is not shared between processes, but if you do need to share data, there are a few ways to do it.

1. Value and Array
from multiprocessing import Process, Value, Array

def f(a, b):
    a.value = 3.111
    for i in range(len(b)):
        b[i] += 100

if __name__ == '__main__':
    num = Value('f', 3.333)     # a float, as in the C type system
    l = Array('i', range(10))   # similar to a C array, with a fixed length
    print(num.value)
    print(l[:])
    p = Process(target=f, args=(num, l))
    p.start()
    p.join()
    print(num.value)  # run it and see whether the two printed values are the same
    print(l[:])
2. Manager

Method one uses the data structures of the C language; if you are not familiar with C, it is somewhat troublesome to use. Method two can support Python's own data types. See below:

from multiprocessing import Process, Manager

def foo(dic, i):
    dic[i] = 100 + i   # illustrative value; the exact constant was garbled in the source
    print(dic.values())

if __name__ == '__main__':
    manage = Manager()
    dic = manage.dict()
    for i in range(2):
        p = Process(target=foo, args=(dic, i))
        p.start()
        p.join()
VII. Process Pools

In practice, you don't create new processes every time a task executes; instead you maintain a process pool. Each execution takes a process from the pool, and if every process in the pool is already taken, the task blocks until one becomes available. Let's first look at what the process pool provides:

    • apply(func[, args[, kwds]]): calls func with the args and kwds arguments, blocking until the result is ready. Because of this, apply_async() is better suited to concurrent execution; also, func is run by only one of the pool's worker processes.

    • apply_async(func[, args[, kwds[, callback[, error_callback]]]]): a variant of apply() that returns a result object. If callback is specified it must accept a single argument: it is invoked when the result becomes ready, or error_callback is invoked instead if the call fails. Callbacks should complete immediately, otherwise the thread that handles results is blocked.

    • close(): prevents any more tasks from being submitted to the pool; the worker processes exit once all outstanding tasks are complete.

    • terminate(): stops the worker processes immediately, whether or not tasks are complete. When the pool object is garbage-collected, terminate() is called immediately.

    • join(): waits for the worker processes to exit; you must call close() or terminate() before calling join(). This is because terminated processes need the parent process to wait on them (join is equivalent to wait), otherwise they become zombie processes.

Here's a quick look at how this is used in code:

from multiprocessing import Pool
import time

def f1(i):
    time.sleep(1)
    # print(i)
    return i

def cb(i):
    print(i)

if __name__ == '__main__':
    poo = Pool(5)
    for i in range(10):
        # poo.apply(func=f1, args=(i,))   # serial execution: queued, as if joined
        # apply_async executes concurrently; without join, the main process
        # does not wait for the children
        poo.apply_async(func=f1, args=(i,), callback=cb)
    print('**********')
    poo.close()
    poo.join()
VIII. Thread Pools

For process pools, Python gives us the Pool module to use, but it does not provide a thread pool, so we need to write one ourselves. To write it, we need a queue. Let's look at how to implement a thread pool, starting with the simplest version.

import threading
import time
import queue

class ThreadPool:
    def __init__(self, max_num=20):
        self.queue = queue.Queue(max_num)
        for i in range(max_num):
            self.add()

    def add(self):
        self.queue.put(threading.Thread)

    def get(self):
        return self.queue.get()

def f(tp, i):
    time.sleep(1)
    print(i)
    tp.add()

p = ThreadPool(10)
for i in range(20):
    thread = p.get()
    t = thread(target=f, args=(p, i))
    t.start()

The code above implements a basic thread pool class, but it has many shortcomings: there is no callback mechanism; after each task is executed, the task handler itself has to call the pool object's add method to put the Thread class back in the queue; and at class initialization, all the Thread classes are added to the queue at once. So although the thread pool above is simple, it has real problems. Let's look at a thread pool in the true sense.

Before writing the code, let's think about how to design such a thread pool. In the pool above, the queue stores the Thread class; we instantiate a new thread for each task and discard the thread after it finishes, which is a bit wasteful. In the new design:

    1. The queue stores tasks, not a thread class; what we fetch from the queue is a task.
    2. Each task does not build a new thread; if a previously created thread is idle, it is reused.
    3. The pool supports a callback mechanism, and supports close and terminate.

Here's how the code is implemented

import threading
import queue
import time
import contextlib

class ThreadingPool:
    def __init__(self, num):
        self.max = num
        self.terminal = False
        self.q = queue.Queue()
        self.generate_list = []   # save the threads that have been generated
        self.free_list = []       # save the threads that have finished their task

    def run(self, func, args=None, callbk=None):
        self.q.put((func, args, callbk))  # put the task info in the queue as a tuple
        if len(self.free_list) == 0 and len(self.generate_list) < self.max:
            self.threadstart()

    def threadstart(self):
        t = threading.Thread(target=self.handel)
        t.start()

    def handel(self):
        current_thread = threading.current_thread()
        self.generate_list.append(current_thread)
        event = self.q.get()
        while event != 'Stop':
            func, args, callbk = event
            flag = True
            try:
                ret = func(*args)
            except Exception as e:
                flag = False
                ret = e
            if callbk is not None:
                try:
                    callbk(ret)
                except Exception as e:
                    pass
            if not self.terminal:
                with self.auto_append_remove(current_thread):
                    event = self.q.get()
            else:
                event = 'Stop'
        else:
            self.generate_list.remove(current_thread)

    def terminate(self):
        self.terminal = True
        while self.generate_list:
            self.q.put('Stop')
        self.q.empty()

    def close(self):
        num = len(self.generate_list)
        while num:
            self.q.put('Stop')
            num -= 1

    @contextlib.contextmanager
    def auto_append_remove(self, thread):
        self.free_list.append(thread)
        try:
            yield
        finally:
            self.free_list.remove(thread)

def f(i):
    # time.sleep(1)
    return i

def f1(i):
    print(i)

p = ThreadingPool(5)
for i in range(20):
    p.run(func=f, args=(i,), callbk=f1)
p.close()
IX. Coroutines

A coroutine, also called a micro-thread, looks a bit like multithreading in execution, but in fact coroutines run in a single thread, so there is no thread-switching overhead. Compared with multithreading, the more threads you would otherwise need, the more obvious the coroutine's performance advantage. And because there is only one thread, no locking mechanism is needed. Typical application scenario: programs with a large number of operations that do not require the CPU (IO). Take a look at an example of using coroutines:

from gevent import monkey

# Patch the thread/socket in the standard library, before importing requests,
# so that sockets used later become non-blocking without modifying any code.
monkey.patch_all()   # monkey patch

import gevent
import requests

def f(url):
    print('GET: %s' % url)
    resp = requests.get(url)
    data = resp.text
    print('%d bytes received from %s.' % (len(data), url))

gevent.joinall([
    gevent.spawn(f, 'https://www.python.org/'),
    gevent.spawn(f, 'https://www.yahoo.com/'),
    gevent.spawn(f, 'https://github.com/'),
])

In the example above, using coroutines, one thread completes all the requests. When a request is issued, the coroutine does not wait for the reply; instead, all requests are issued first, and replies are handled as they come in. One thread handles everything, which is very efficient.
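The cooperative idea behind this, one thread with tasks voluntarily yielding control to each other, can be sketched without gevent using plain generators. This is only an illustration of the switching model (all names here are made up), not how gevent is implemented:

```python
# A minimal single-threaded scheduler: each task is a generator that yields
# whenever it is willing to let another task run.
def task(name, steps, log):
    for n in range(steps):
        log.append('%s step %d' % (name, n))
        yield                      # hand control back to the scheduler

def run_all(tasks):
    while tasks:
        for t in list(tasks):
            try:
                next(t)            # resume the task until its next yield
            except StopIteration:
                tasks.remove(t)    # task finished

log = []
run_all([task('a', 2, log), task('b', 2, log)])
print(log)   # ['a step 0', 'b step 0', 'a step 1', 'b step 1']
```

The interleaved output shows the two tasks taking turns on a single thread, with no locks and no thread switching.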

X. Summary

This blog post is the last article on Python basics; the following posts will begin to cover front-end knowledge. The table of contents is here: http://www.cnblogs.com/Wxtrkbc/p/5606048.html, and it will continue to be updated.
