Python Concurrent Programming & Multithreading (ii)

Source: Internet
Author: User

Prerequisite theory: Python Concurrent Programming & Multithreading (i)

One Introduction to the threading module

The multiprocessing module closely imitates the interface of the threading module, so the two are very similar at the usage level.

Official documentation: https://docs.python.org/3/library/threading.html?highlight=threading#

Two Two ways to start a thread: way one, way two
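The code for "way one" and "way two" is collapsed in the source. A minimal sketch of both styles (the function name sayhi and the 'egon'/'alex' arguments follow the surrounding examples):

```python
from threading import Thread
import time

# Way one: pass a target function to Thread
def sayhi(name):
    time.sleep(0.1)
    print('%s say hello' % name)

# Way two: subclass Thread and override run()
class Sayhi(Thread):
    def __init__(self, name):
        super().__init__()
        self.who = name  # avoid clobbering Thread's own `name` attribute

    def run(self):
        time.sleep(0.1)
        print('%s say hello' % self.who)

if __name__ == '__main__':
    t1 = Thread(target=sayhi, args=('egon',))
    t2 = Sayhi('alex')
    t1.start()
    t2.start()
    t1.join()
    t2.join()
    print('main thread')
```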

Three The difference between starting multiple threads in one process and starting multiple child processes in one process
  1 Which starts faster
  2 A look at the PIDs
  3 Do threads in the same process share the process's data?

Four Exercises

Practice 1: a multi-threaded concurrent socket server and client

Practice 2: three tasks: one receives user input, one formats the input into uppercase, and one saves the formatted results to a file
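The solution code for Practice 2 is collapsed in the source. A minimal sketch of one possible approach, wiring the three tasks together with queue.Queue; a canned list stands in for interactive user input, and the output goes to a temporary file (both are assumptions made for the sake of a runnable example):

```python
import os
import queue
import tempfile
from threading import Thread

q_raw = queue.Queue()   # task 1 -> task 2
q_fmt = queue.Queue()   # task 2 -> task 3

def talk(lines):
    # Task 1: "receive user input" (a canned list stands in for input())
    for line in lines:
        q_raw.put(line)
    q_raw.put(None)  # sentinel: no more input

def fmt():
    # Task 2: format the input into uppercase
    while True:
        line = q_raw.get()
        if line is None:
            q_fmt.put(None)  # pass the sentinel downstream
            break
        q_fmt.put(line.upper())

def save(path):
    # Task 3: save the formatted results to a file
    with open(path, 'w') as f:
        while True:
            line = q_fmt.get()
            if line is None:
                break
            f.write(line + '\n')

if __name__ == '__main__':
    fd, path = tempfile.mkstemp()
    os.close(fd)
    threads = [Thread(target=talk, args=(['hello', 'world'],)),
               Thread(target=fmt),
               Thread(target=save, args=(path,))]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(open(path).read())  # HELLO and WORLD, one per line
```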

Five Other thread-related methods
Methods of a Thread instance object:
  # is_alive(): returns whether the thread is alive (isAlive() is the legacy spelling, removed in Python 3.9)
  # getName(): returns the thread name
  # setName(): sets the thread name
Some of the methods provided by the threading module:
  # threading.current_thread(): returns the current Thread object
  # threading.enumerate(): returns a list of the threads that are alive, i.e. started and not yet finished
  # threading.active_count(): returns the number of alive threads; same result as len(threading.enumerate())
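The collapsed example likely exercised the methods listed above; a small sketch using the modern spellings (is_alive, current_thread, active_count):

```python
import threading
import time

def work():
    time.sleep(0.2)

if __name__ == '__main__':
    t = threading.Thread(target=work, name='worker-1')
    t.start()
    print(t.name)                           # 'worker-1' (getName/setName are legacy)
    print(t.is_alive())                     # True while work() is still sleeping
    print(threading.current_thread().name)  # 'MainThread'
    print(threading.active_count())         # at least 2: main thread + worker
    print(threading.enumerate())            # the list of alive Thread objects
    t.join()
    print(t.is_alive())                     # False once the thread has finished
```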

The main thread waits for the child thread to end

from threading import Thread
import time

def sayhi(name):
    time.sleep(2)
    print('%s say hello' % name)

if __name__ == '__main__':
    t = Thread(target=sayhi, args=('egon',))
    t.start()
    t.join()
    print('main thread')
    print(t.is_alive())
    '''
    egon say hello
    main thread
    False
    '''
Six Daemon threads

Whether for processes or threads, the rule is the same: a daemon xxx waits for the main xxx to finish running, and is then destroyed.

It should be emphasized that "finished running" is not the same as "terminated".

#1 For the main process, "finished running" means the main process's code has finished running.
#2 For the main thread, "finished running" means all non-daemon threads in the main thread's process have finished; only then is the main thread considered finished.

Detailed Explanation:

#1 The main process is "finished" once its code has run to completion (the daemon process is reclaimed at that point). The main process then waits until all non-daemon child processes have finished and their resources have been reclaimed (otherwise zombie processes would be produced) before it ends.
#2 The main thread is "finished" only after all other non-daemon threads have finished (the daemon thread is reclaimed at that point). Because the end of the main thread means the end of the process, and the process's resources are then reclaimed as a whole, the process must ensure that all non-daemon threads have finished before it ends.
from threading import Thread
import time

def sayhi(name):
    time.sleep(2)
    print('%s say hello' % name)

if __name__ == '__main__':
    t = Thread(target=sayhi, args=('egon',))
    t.setDaemon(True)  # must be set before t.start(); t.daemon = True is the modern spelling
    t.start()
    print('main thread')
    print(t.is_alive())
    '''
    main thread
    True
    '''
A confusing example

Seven Python GIL (Global Interpreter Lock)

Links: http://www.cnblogs.com/linhaifeng/articles/7449853.html

Eight Synchronization locks
Three points to note:
#1 Threads compete for the GIL; holding the GIL is equivalent to holding execute permission, and a thread must obtain execute permission before it can grab the mutex lock. Other threads can still grab the GIL, but if they find the mutex has not been released they block, handing the GIL back even though they had acquired execute permission.
#2 join waits for everything, i.e. the whole run becomes serial, while a lock only serializes the part of the code that modifies shared data, i.e. partial serialization. The fundamental way to keep data safe is to turn concurrency into serial execution; both join and the mutex achieve this, but the mutex's partial serialization is clearly more efficient.
#3 Be sure to read the classic analysis of the GIL and the mutex at the end of this section.

GIL VS Lock

A sharp-eyed student may ask: since Python already has the GIL to ensure that only one thread can execute at a time, why do we need a lock here?

First, we need to agree on a premise: the purpose of a lock is to protect shared data, so that only one thread at a time can modify that shared data.

From this we can conclude that protecting different pieces of data requires different locks.

Now the problem is clear: the GIL and a Lock are two different locks protecting different data. The former works at the interpreter level (it protects interpreter-level data such as garbage-collection state), while the latter protects data in the user's own application. The GIL is clearly not responsible for that; only a user-defined lock can handle it.

Process analysis: all threads compete for the GIL lock, i.e. for execute permission.

Thread 1 grabs the GIL, obtains execute permission, starts running, and then acquires the lock. Before it finishes, i.e. before thread 1 releases the lock, thread 2 may grab the GIL and start running, only to find that the lock has not yet been released by thread 1. Thread 2 then blocks and gives up its execute permission. Thread 1 may then reacquire the GIL, continue running normally, and finally release the lock... This produces the effect of serial execution.

Since the effect is serial anyway, we might as well write:

t1.start()
t1.join()
t2.start()
t2.join()

This is serial execution too, so why bother with a lock? The point is that join waits for all of t1's code to finish, which is equivalent to locking all of t1's code, whereas a lock only locks the part of the code that operates on shared data.

In detail:
from threading import Thread
import time

def work():
    global n
    temp = n
    time.sleep(0.1)
    n = temp - 1

if __name__ == '__main__':
    n = 100
    l = []
    for i in range(100):  # the thread count was garbled in the source; 100 is assumed from n = 100
        p = Thread(target=work)
        l.append(p)
        p.start()
    for p in l:
        p.join()
    print(n)  # the result may be 99: every thread read n == 100 before any thread wrote it back

Locks are often used to synchronize access to shared resources. Create a Lock object for each shared resource. When a thread needs to access the resource, it calls the acquire method to obtain the lock (if another thread has already acquired it, the current thread waits until it is released), and calls the release method to release the lock once it is done with the resource:

import threading

r = threading.Lock()
r.acquire()
'''
operations on the shared data
'''
r.release()
A comprehensive analysis of the GIL lock and the mutex (important!)
The difference between the mutex and join (important!)

Nine Deadlock and recursive locks

Processes can also deadlock, and also have recursive locks; this was forgotten in the process chapter, so both are covered here.

The so-called deadlock refers to two or more processes or threads that, during execution, end up waiting on each other because they are competing for resources; without outside intervention, none of them can proceed. The system is then said to be in a deadlocked state, and the processes that are forever waiting on each other are called deadlocked processes. For example:


The solution is a recursive lock: to support multiple acquisitions of the same resource within a single thread, Python provides the reentrant lock RLock.

Internally, RLock maintains a Lock and a counter variable; the counter records the number of acquire calls, so the resource can be acquired multiple times by the same thread. Only once all of a thread's acquire calls have been matched by release calls can other threads obtain the resource. In the example above, replacing Lock with RLock prevents the deadlock:

mutexA = mutexB = threading.RLock()
# When a thread acquires the lock, the counter becomes 1. If that same thread
# acquires it again before releasing, the counter becomes 2, and so on. All
# other threads can only wait until the owning thread has released every
# acquisition and the counter has dropped back to 0.
Ten Semaphore

Same as for processes.

Semaphore manages a built-in counter:
  calling acquire() decrements the counter by 1;
  calling release() increments the counter by 1;
  the counter can never go below 0: when it is 0, acquire() blocks the thread until another thread calls release().

Example (only 5 threads at a time can obtain the semaphore, i.e. the maximum number of concurrent connections is limited to 5):


This is different from a process pool. With a pool such as Pool(4), at most 4 processes are ever created, and those same four processes run every task from start to finish; no new ones are spawned. A semaphore, by contrast, does not limit how many threads/processes are created, only how many of them can hold the resource at the same time.

11 Event

Same as for processes.

A key characteristic of threads is that each thread runs independently and its state is unpredictable. If other threads in a program need to know the state of some thread in order to decide what to do next, thread synchronization can become tricky. To solve such problems we use the Event object from the threading library. An Event contains a signal flag that threads can set, allowing threads to wait for a particular event to occur. Initially the flag is set to False. If a thread waits on an Event whose flag is False, it blocks until the flag becomes True. When a thread sets the flag to True, it wakes up all threads waiting on that Event. A thread that waits on an Event whose flag is already True returns immediately and continues executing.

event.is_set(): returns the Event's flag value (isSet() is the legacy spelling).
event.wait(): blocks the thread if event.is_set() == False.
event.set(): sets the flag to True; all threads blocked in the wait pool are woken into the ready state, waiting to be scheduled by the operating system.
event.clear(): resets the flag to False.

For example, several worker threads try to connect to MySQL. We want to make sure the MySQL service is running normally before the workers connect, and to retry if a connection attempt fails. We can use the threading.Event mechanism to coordinate the workers' connection attempts:

12 Condition (for reference)

Makes threads wait, and releases n of the waiting threads only when they are notified.

import threading

def run(n):
    con.acquire()
    con.wait()
    print('run the thread: %s' % n)
    con.release()

if __name__ == '__main__':
    con = threading.Condition()
    for i in range(10):  # the thread count was garbled in the source; 10 is assumed
        t = threading.Thread(target=run, args=(i,))
        t.start()

    while True:
        inp = input('>>> ')
        if inp == 'q':
            break
        con.acquire()
        con.notify(int(inp))  # wake up that many waiting threads
        con.release()
13 Timer

Timer: run an action after a delay of n seconds.

from threading import Timer

def hello():
    print('hello, world')

t = Timer(1, hello)
t.start()  # after 1 second, "hello, world" will be printed
14 Thread Queue

Thread queues: use import queue; the usage is the same as the process Queue.

Queue is especially useful in threaded programming when information must be exchanged safely between multiple threads.

class queue.Queue(maxsize=0)  # first in, first out (FIFO)
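A minimal FIFO sketch:

```python
import queue

q = queue.Queue(maxsize=3)  # FIFO; maxsize=0 means an unbounded queue
q.put('first')
q.put('second')
q.put('third')
print(q.get())  # first
print(q.get())  # second
print(q.get())  # third
```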

class queue.LifoQueue(maxsize=0)  # last in, first out (LIFO)

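A minimal LIFO sketch:

```python
import queue

q = queue.LifoQueue()  # last in, first out: behaves like a stack
q.put('first')
q.put('second')
q.put('third')
print(q.get())  # third
print(q.get())  # second
print(q.get())  # first
```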

class queue.PriorityQueue(maxsize=0)  # a queue that lets you set a priority when storing data

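A minimal priority-queue sketch; items here are (priority, data) tuples, and a lower number means a higher priority:

```python
import queue

q = queue.PriorityQueue()
q.put((20, 'b'))  # (priority, data); lower numbers come out first
q.put((10, 'a'))
q.put((30, 'c'))
print(q.get())  # (10, 'a')
print(q.get())  # (20, 'b')
print(q.get())  # (30, 'c')
```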

Other

Python standard module: concurrent.futures

https://docs.python.org/dev/library/concurrent.futures.html

#1 Introduction
The concurrent.futures module provides a highly encapsulated asynchronous-call interface:
  ThreadPoolExecutor: a thread pool that provides asynchronous calls
  ProcessPoolExecutor: a process pool that provides asynchronous calls
Both implement the same interface, which is defined by the abstract Executor class.

#2 Basic methods
  submit(fn, *args, **kwargs): submit a task asynchronously
  map(func, *iterables, timeout=None, chunksize=1): replaces the for-loop-plus-submit pattern
  shutdown(wait=True): equivalent to the process pool's pool.close() + pool.join(). wait=True waits until all tasks in the pool are done and resources have been reclaimed before continuing; wait=False returns immediately without waiting for the pool's tasks to finish. Either way, the whole program waits until all tasks have completed. submit and map must be called before shutdown.
  result(timeout=None): get the result
  add_done_callback(fn): attach a callback function
ProcessPoolExecutor
ThreadPoolExecutor
Usage of map
Callback functions
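A sketch covering submit, result, map, add_done_callback, and shutdown with ThreadPoolExecutor (a ProcessPoolExecutor would be used identically; task and log_result are illustrative names, not from the original):

```python
from concurrent.futures import ThreadPoolExecutor
import time

def task(n):
    time.sleep(0.01)
    return n * n

def log_result(future):
    # the callback receives the finished Future, not the raw result
    print('got', future.result())

if __name__ == '__main__':
    with ThreadPoolExecutor(max_workers=4) as pool:
        # submit() returns a Future immediately (asynchronous call)
        f = pool.submit(task, 3)
        f.add_done_callback(log_result)
        print(f.result())  # 9 -- result() blocks until the task finishes

        # map() replaces the for-loop-plus-submit pattern
        print(list(pool.map(task, range(5))))  # [0, 1, 4, 9, 16]
    # leaving the with-block calls pool.shutdown(wait=True)
```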
