Python full stack development 11: processes and threads

1. Threads

Multiple tasks can be handled by multiple processes, or by multiple threads within a single process. All threads in a process share the same memory. Creating a thread in Python is easy: just import the threading module. The following code shows how to create several threads.

import threading
import time

def f1(i):
    time.sleep(1)
    print(i)

if __name__ == '__main__':
    for i in range(5):
        t = threading.Thread(target=f1, args=(i,))
        t.start()
    print('start')

# The main thread prints 'start' without waiting; the sub-threads run concurrently:
# > start
# > 2
# > 1
# > 3
# > 0
# > 4

The main thread runs from top to bottom: it creates five sub-threads, prints 'start', and then waits for the sub-threads to finish. If you want the threads to run one after another rather than concurrently, the join method is required. Let's take a look at the code.

import threading
import time

def f1(i):
    time.sleep(1)
    print(i)

if __name__ == '__main__':
    for i in range(5):
        t = threading.Thread(target=f1, args=(i,))
        t.start()
        t.join()
    print('start')

# The threads run one after another, and 'start' is printed last:
# > 0
# > 1
# > 2
# > 3
# > 4
# > start

Even without join, as in the first example, the main thread waits for the sub-threads to end by default. If you do not want that, you can mark a sub-thread as a background (daemon) thread before it starts. A background thread runs alongside the main thread, but once the main thread finishes, the background thread is stopped whether or not it has completed. A foreground thread is the opposite, and threads are foreground by default. The following code shows how to set a background thread: the main thread prints 'start' and exits immediately instead of waiting for the sub-threads, so the sub-threads never get to print their data.

import threading
import time

def f1(i):
    time.sleep(1)
    print(i)

if __name__ == '__main__':
    for i in range(5):
        t = threading.Thread(target=f1, args=(i,))
        t.setDaemon(True)   # mark as a background (daemon) thread before start()
        t.start()
    print('start')

# The main thread does not wait for the sub-threads:
# > start

In addition, you can give a thread a custom name via the name parameter: t = threading.Thread(target=f1, args=(i,), name='mythread{}'.format(i)). Thread also provides the following methods and attributes:

  • t.getName(): get the thread name.
  • t.setName(): set the thread name.
  • t.name: get or set the thread name.
  • t.is_alive(): return whether the thread is alive.
  • t.isAlive(): an older alias for is_alive().
  • t.isDaemon(): return whether the thread is a daemon thread.
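As a quick sketch of these attributes in action (the thread name 'demo-thread' and the worker function are made up for illustration):

```python
import threading
import time

def worker():
    time.sleep(0.2)

t = threading.Thread(target=worker, name='demo-thread')
print(t.name)        # 'demo-thread'
print(t.is_alive())  # False: not started yet
t.start()
print(t.is_alive())  # True while the target is still running
t.join()
print(t.is_alive())  # False again after the thread finishes
print(t.daemon)      # False: threads are foreground by default
```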
2. Thread locks

Since threads share the same memory, operating on the same data from several threads can easily cause conflicts. In that case you can protect the data with a lock. Here we use RLock rather than Lock: a Lock gets stuck if the same thread tries to acquire it more than once, while an RLock allows multiple acquire calls from the same thread, although it takes the same number of release calls to actually free the lock. While one thread holds the lock, other threads can only wait.
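A minimal sketch of the difference: the same thread can acquire an RLock twice without getting stuck, while a second non-blocking acquire on a plain Lock from the same thread simply fails.

```python
import threading

rlock = threading.RLock()
rlock.acquire()
rlock.acquire()   # same thread may re-acquire an RLock; a plain Lock would block here
rlock.release()
rlock.release()   # the lock is free only after the matching number of releases

lock = threading.Lock()
lock.acquire()
print(lock.acquire(blocking=False))  # False: an ordinary Lock is not reentrant
lock.release()
```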

import threading

G = 1
lock = threading.RLock()

def fun():
    lock.acquire()   # acquire the lock
    global G
    G += 2
    print(G, threading.current_thread().name)
    lock.release()   # release the lock

for i in range(10):
    t = threading.Thread(target=fun, name='t-{}'.format(i))
    t.start()

# Output:
# 3 t-0
# 5 t-1
# 7 t-2
# 9 t-3
# 11 t-4
# 13 t-5
# 15 t-6
# 17 t-7
# 19 t-8
# 21 t-9
3. Inter-thread communication: Event

Event is one of the simplest inter-thread communication mechanisms. It is mainly used by the main thread to control the execution of other threads, and it works through three methods: wait, clear, and set. The following is a simple example.

import threading

def f1(event):
    print('start:')
    event.wait()    # block until the flag is set
    print('end:')

if __name__ == '__main__':
    event_obj = threading.Event()
    for i in range(5):
        t = threading.Thread(target=f1, args=(event_obj,))
        t.start()
    event_obj.clear()          # clear the flag, so wait() blocks
    inp = input('>>>>:')
    if inp == 'true':
        event_obj.set()        # set the flag; all waiting threads resume
4. Queues

A queue can be simply understood as a first-in, first-out data structure. It appears in the producer-consumer model, in thread pools, and for buffering data during database read/write splitting. Queues will come up in many places later, so you should be familiar with how they are used. Let's take a look.

q = queue.Queue(maxsize=0)  # create a FIFO queue; maxsize=0 means the length is unlimited
q.join()       # block until every item put into the queue has been processed
q.qsize()      # return the approximate size of the queue (unreliable)
q.empty()      # return True if the queue is empty, otherwise False (unreliable)
q.full()       # return True if the queue is full, otherwise False (unreliable)
q.put(item, block=True, timeout=None)
               # Put item at the end of the queue. With the default block=True,
               # wait if the queue is full; with block=False, raise queue.Full
               # immediately. timeout limits how long to block before raising
               # queue.Full.
q.get(block=True, timeout=None)
               # Remove and return the item at the head of the queue. With the
               # default block=True, wait if the queue is empty; with
               # block=False, raise queue.Empty immediately. timeout limits how
               # long to block before raising queue.Empty.
q.get_nowait() # equivalent to get(block=False)
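A small sketch of how q.join() behaves: it only returns once task_done() has been called for every item that was put() into the queue (the worker function here is made up for illustration).

```python
import queue
import threading

q = queue.Queue()
results = []

def worker():
    while True:
        item = q.get()
        results.append(item * 2)
        q.task_done()    # tell the queue this item is fully processed

t = threading.Thread(target=worker, daemon=True)
t.start()

for i in range(5):
    q.put(i)
q.join()                 # blocks until task_done() was called for every item
print(results)           # [0, 2, 4, 6, 8]
```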

The following code is a simple demonstration of the producer-consumer model.

import queue
import random
import threading
import time

message = queue.Queue(10)

def product(num):
    for i in range(num):
        message.put(i)
        print('add {} to the queue'.format(i))
        time.sleep(random.randrange(0, 1))

def consume(num):
    count = 0
    while count < num:
        i = message.get()
        print('remove {} from the queue'.format(i))
        time.sleep(random.randrange(1, 2))
        count += 1

t1 = threading.Thread(target=product, args=(10,))
t1.start()
t2 = threading.Thread(target=consume, args=(10,))
t2.start()
5. Processes

A process can contain multiple threads. The difference between a process and a thread is that data is not shared between processes. Multiple processes can also handle multiple tasks, but they consume more resources, so CPU-bound computing tasks should go to multiple processes, IO-intensive tasks should go to multiple threads, and the number of processes should match the number of CPU cores.

On Windows you cannot use fork to create processes, so the multiprocessing module is used instead. Next, let's look at how to create a process. Try to guess what the following code prints:

import multiprocessing

l = []

def f(i):
    l.append(i)
    print('hi', l)

if __name__ == '__main__':
    for i in range(10):
        # each of the 10 processes gets its own copy of the list l
        p = multiprocessing.Process(target=f, args=(i,))
        p.start()
6. Data sharing between processes

Data is not shared between processes, but if you do want to share data, other mechanisms are required.

1. Value, Array
from multiprocessing import Process, Value, Array

def f(a, b):
    a.value = 3.111
    for i in range(len(b)):
        b[i] += 100

if __name__ == '__main__':
    num = Value('f', 3.333)    # similar to a float in the C language
    l = Array('i', range(10))  # similar to a C integer array of length 10
    print(num.value)
    print(l[:])
    p = Process(target=f, args=(num, l))
    p.start()
    p.join()
    print(num.value)  # run it yourself and compare the printed results
    print(l[:])
2. Manager

Method 1 uses C-style data structures, which is troublesome if you are not familiar with C. Method 2 supports Python's built-in data types. Let's take a look.

from multiprocessing import Process, Manager

def Foo(dic, i):
    dic[i] = 100 + i
    print(dic.values())

if __name__ == '__main__':
    manage = Manager()
    dic = manage.dict()
    for i in range(2):
        p = Process(target=Foo, args=(dic, i))
        p.start()
        p.join()
7. Process pool

In practice, instead of creating new processes each time a task is executed, a process pool is maintained and tasks draw processes from it. If every process in the pool is busy, the caller blocks until one becomes available. First, let's look at the methods the process pool provides.

  • apply(func[, args[, kwds]]): call func with the args and kwds arguments. It blocks until the result is returned, so apply_async() is better suited for concurrent execution. Also, func is run by only one of the pool's worker processes.

  • apply_async(func[, args[, kwds[, callback[, error_callback]]]]): a variant of apply() that returns a result object. If callback is specified, it must accept a single argument; it is called as soon as the result is ready. If the call fails, error_callback is invoked instead of callback. Callbacks should complete immediately, otherwise the thread that handles results is blocked.

  • close(): stop more tasks from being submitted to the pool; the worker processes exit once the outstanding tasks are finished.

  • terminate(): stop the worker processes immediately, whether or not their tasks are complete. When the pool object is garbage-collected, terminate() is called immediately.

  • join(): wait for the worker processes to exit. close() or terminate() must be called before join(), because terminated processes need to be waited on by the parent (join plays the role of wait); otherwise they become zombie processes.

Next, let's look at a simple example of how to use it.

from multiprocessing import Pool
import time

def f1(i):
    time.sleep(1)
    # print(i)
    return i

def cb(i):
    print(i)

if __name__ == '__main__':
    poo = Pool(5)
    for i in range(20):
        # poo.apply(func=f1, args=(i,))  # serial execution; no join needed
        poo.apply_async(func=f1, args=(i,), callback=cb)  # concurrent; join required
    print('************')
    poo.close()
    poo.join()
8. Thread pool

For the process pool, Python provides the ready-made Pool module, but it does not provide one for threads, so we have to write a thread pool ourselves, using a queue. Let's see how to implement one, starting with the simplest possible version.

import threading
import time
import queue

class ThreadPool:
    def __init__(self, max_num=20):
        self.queue = queue.Queue(max_num)
        for i in range(max_num):
            self.add()

    def add(self):
        self.queue.put(threading.Thread)

    def get(self):
        return self.queue.get()

def f(tp, i):
    time.sleep(1)
    print(i)
    tp.add()

p = ThreadPool(10)
for i in range(20):
    thread = p.get()
    t = thread(target=f, args=(p, i))
    t.start()

The code above implements a basic thread pool class, but it has many shortcomings: there is no callback support; the task function must call the pool's add method itself after each run to put a thread class back into the queue; and at initialization all the thread classes are added to the queue at once. In short, although this thread pool is simple to implement, it has real problems. Let's look at a more realistic thread pool.

Before writing the code, let's think about how to design such a pool. In the thread pool above, the queue stores the Thread class itself: each task instantiates a new thread, which is discarded once it finishes. That is wasteful. In this design, the queue stores tasks instead, and a bounded set of worker threads pulls tasks from the queue and is reused.

Here is the implementation.

import threading
import queue
import contextlib

class ThreadingPool:
    def __init__(self, num):
        self.max = num
        self.terminal = False
        self.q = queue.Queue()
        self.generate_list = []  # threads created so far
        self.free_list = []      # threads currently idle

    def run(self, func, args=None, callbk=None):
        self.q.put((func, args, callbk))  # put the task info into the queue as a tuple
        if len(self.free_list) == 0 and len(self.generate_list) < self.max:
            self.threadstart()

    def threadstart(self):
        t = threading.Thread(target=self.handel)
        t.start()

    def handel(self):
        current_thread = threading.current_thread()
        self.generate_list.append(current_thread)
        event = self.q.get()
        while event != 'stop':
            func, args, callbk = event
            flag = True
            try:
                ret = func(*args)
            except Exception as e:
                flag = False
                ret = e
            if callbk is not None:
                try:
                    callbk(ret)
                except Exception as e:
                    pass
            if not self.terminal:
                with self.auto_append_remove(current_thread):
                    event = self.q.get()
            else:
                event = 'stop'
        else:
            self.generate_list.remove(current_thread)

    def terminate(self):
        self.terminal = True
        while self.generate_list:
            self.q.put('stop')
        self.q.empty()

    def close(self):
        num = len(self.generate_list)
        while num:
            self.q.put('stop')
            num -= 1

    @contextlib.contextmanager
    def auto_append_remove(self, thread):
        self.free_list.append(thread)
        try:
            yield
        finally:
            self.free_list.remove(thread)

def f(i):
    # time.sleep(1)
    return i

def f1(i):
    print(i)

p = ThreadingPool(5)
for i in range(20):
    p.run(func=f, args=(i,), callbk=f1)
p.close()
9. Coroutines

A coroutine, also known as a micro-thread, looks a bit like multithreading in execution, but in fact it runs in a single thread, so there is no thread-switching overhead. The more threads a multithreaded program would need, the more obvious the performance advantage of coroutines becomes. In addition, since there is only one thread, no lock mechanism is needed and there are no conflicts from writing variables simultaneously. Coroutines suit programs with a large number of operations that do not need the CPU (I/O). Let's look at an example using coroutines.

from gevent import monkey
# Patch the standard library's thread/socket modules, so that the sockets used
# later become non-blocking without modifying any other code (a monkey patch).
monkey.patch_all()

import gevent
import requests

def f(url):
    print('GET: %s' % url)
    resp = requests.get(url)
    data = resp.text
    print('%d bytes received from %s.' % (len(data), url))

gevent.joinall([
    gevent.spawn(f, 'https://www.python.org/'),
    gevent.spawn(f, 'https://www.yahoo.com/'),
    gevent.spawn(f, 'https://github.com/'),
])

In the example above, a single thread completes all the requests using coroutines. When a request is sent, the coroutine does not wait for the response: all requests go out first, and each reply is processed as it arrives. In this way one thread handles everything efficiently.

10. Summary

This blog post is the last article on Python basics. I will cover front-end knowledge later. The directory is here: http://www.cnblogs.com/wxtrkbc/p/5606048.html, and it will be updated from time to time.

 
