Python network programming: threads and processes

Source: Internet
Author: User
Tags: mutex, semaphore

The difference between a thread and a process

Threads are part of a process: they run inside the process's address space, and threads spawned by the same process share that memory space. When the process exits, all of its threads are forcibly terminated and cleaned up. A thread can share all of the resources owned by its process with the other threads of that process, but it owns essentially no system resources of its own, only the minimal state needed to run (a program counter, a set of registers, and a stack).

A process is an instance of a program in execution. A process contains at least one thread, and the thread is the smallest unit of execution.

The threading module

Common methods on a Thread object t:

```python
t.start()      # start the thread
t.getName()    # get the thread's name
t.setName()    # set the thread's name
t.name         # property form: gets or sets the name
t.is_alive()   # whether the thread is running (isAlive() is the old alias)
t.setDaemon(True)  # mark t as a daemon (background) thread; default is False,
                   # and it must be set before start(). A daemon thread runs
                   # alongside the main thread, but once the main thread
                   # finishes, daemon threads are stopped whether or not they
                   # have completed. A foreground (non-daemon) thread keeps the
                   # program alive: after the main thread finishes, the program
                   # waits for foreground threads to finish before exiting.
t.isDaemon()   # whether t is a daemon thread
t.ident        # the thread identifier: a nonzero integer, valid only after
               # start() has been called; before that it is None
t.join()       # block until t finishes; calling join() immediately after each
               # start() serializes the threads and defeats multithreading
t.run()        # the method the thread executes once the CPU schedules it
```

Two ways to invoke a thread:

1. Call directly
```python
# -*- coding: utf-8 -*-
import threading
import time

def run(num):
    print("running on number: %s" % num)
    time.sleep(1)

if __name__ == '__main__':
    t1 = threading.Thread(target=run, args=(1,))
    t2 = threading.Thread(target=run, args=(2,))
    t1.start()  # start the threads
    t2.start()
    start_time = time.time()
    t1.join()   # wait for t1 to finish
    t2.join()
    end_time = time.time()
    cost_time = end_time - start_time
    print(cost_time)
    print(t1.getName())     # get the thread name
    print(t2.getName())
    t1.setName('number 1')  # set the thread name
    t2.setName('number 2')
    print(t1.getName())
    print(t2.getName())
    print(t1.ident)         # get the thread identifier
    print(t2.ident)
```
2. Call by inheritance
```python
# -*- coding: utf-8 -*-
import threading
import time

class MyThread(threading.Thread):
    def __init__(self, num):
        super(MyThread, self).__init__()
        self.num = num

    def run(self):
        print("running on number: %s" % self.num)
        time.sleep(1)

if __name__ == '__main__':
    t1 = MyThread(1)
    t2 = MyThread(2)
    t1.start()
    t2.start()
```
Daemon threads
```python
# _*_ coding: utf-8 _*_
import time
import threading

def run(n):
    print('[%s]------running----\n' % n)
    time.sleep(2)
    print('--done--')

def main():
    for i in range(5):
        t = threading.Thread(target=run, args=[i,])
        t.start()
        t.join(1)
        print('starting thread', t.getName())

m = threading.Thread(target=main, args=[])
m.setDaemon(True)  # make m a daemon thread: when the main thread exits, m exits,
                   # and the sub-threads m started exit with it, finished or not
m.start()
m.join(timeout=2)
print("---main thread done----")
```

Here main() runs in its own thread m. Because m is marked as a daemon thread, when the main thread exits, m exits as well, and the child threads started inside main() are terminated with it, regardless of whether run() has finished executing.

Thread lock (mutex)

A process can start multiple threads, and they all share the parent process's memory space, which means every thread can access the same data. So what happens if two threads modify the same data at the same time? In Python 2.x, without a lock, the final result is not necessarily 0. In CPython 3.x this particular example usually does come out to 0 even without a lock, but that is a side effect of how the interpreter schedules threads, not an automatic lock: unsynchronized concurrent modification is still unsafe, so you must lock.

```python
# _*_ coding: utf-8 _*_
import time
import threading

def addNum():
    global num  # fetch the shared global variable in each thread
    print('--get num:', num)
    time.sleep(1)
    lock.acquire()   # acquire the lock before changing the data
    num -= 1         # decrement the shared variable
    lock.release()   # release the lock after the change

num = 100  # shared variable (the value was lost in the original; 100 matches the loop below)
thread_list = []
lock = threading.Lock()  # create a global lock

for i in range(100):
    t = threading.Thread(target=addNum)
    t.start()
    thread_list.append(t)

for t in thread_list:  # wait for all threads to finish
    t.join()

print('final num:', num)
```
GIL VS Lock

In CPython, the global interpreter lock, or GIL, is a mutex that prevents multiple native threads from executing Python bytecodes at once. This lock is necessary mainly because CPython's memory management is not thread-safe. (However, since the GIL exists, other features have grown to depend on the guarantees that it enforces.)

The core point of the above: no matter how many threads you start or how many CPUs you have, CPython allows only one thread to execute Python bytecode at any given moment.

The first thing to be clear about is that the GIL is not a feature of the Python language; it is a detail of one Python interpreter implementation (CPython). Just as C++ is a language (syntax) standard that can be compiled to executable code by different compilers (GCC, Intel C++, Visual C++, and so on), the same Python code can run on different execution environments such as CPython, PyPy, or Psyco, and Jython, for example, has no GIL. However, because CPython is the default Python execution environment almost everywhere, many people equate CPython with Python and take it for granted that the GIL is a defect of the Python language. So to be clear: the GIL is not a Python feature, and Python does not depend on the GIL at all.

If the GIL already guarantees that only one thread executes at a time, why do we still need Lock? Note that the lock here is a user-level lock protecting your data, which is a different job from the GIL's: the GIL only protects interpreter internals, and it can be released between bytecodes, so a compound operation like `num -= 1` (read, subtract, write back) can still be interleaved across threads.
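To see why the GIL alone is not enough, here is a minimal sketch (an illustration, not from the original article): several threads each perform a read-modify-write on a shared counter. With the Lock the result is exact; remove it and updates can be lost.

```python
import threading

def worker(lock, state, n):
    for _ in range(n):
        with lock:               # remove this `with` to risk lost updates
            state['count'] += 1  # read-modify-write: several bytecode steps

lock = threading.Lock()
state = {'count': 0}
threads = [threading.Thread(target=worker, args=(lock, state, 100_000))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(state['count'])  # 400000 -- exact, because the lock serializes the updates
```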

RLock (recursive lock)

An RLock is useful when a "large" lock contains a "small" lock inside it: an RLock may be acquired multiple times by the same thread, which a plain Lock does not allow. With an RLock, acquire() and release() must appear in pairs, that is, after calling acquire() n times you must call release() n times before the lock is truly released.

```python
# _*_ coding: utf-8 _*_
import threading

def run1():
    print("grab the first part data")
    lock.acquire()   # 5. acquire the lock again (small lock)
    global num
    num += 1
    lock.release()   # 6. release the small lock
    return num

def run2():
    print("grab the second part data")
    lock.acquire()
    global num2
    num2 += 1
    lock.release()
    return num2

def run3():
    lock.acquire()   # 3. acquire the lock (large lock)
    res = run1()     # 4. run1() re-acquires the same lock -- fine with an RLock
    print('--------between run1 and run2-----')
    res2 = run2()
    lock.release()   # 7. release the large lock
    print(res, res2)

if __name__ == '__main__':
    num, num2 = 0, 0
    lock = threading.RLock()   # 1. create a recursive lock
    for i in range(10):        # 2. start the threads (the count was lost in the original; 10 assumed)
        t = threading.Thread(target=run3)
        t.start()
    while threading.active_count() != 1:   # number of Thread objects still alive
        print(threading.active_count())
    else:
        print('----all threads done---')
        print(num, num2)
```
Semaphore

A mutex allows only one thread at a time to change the data, while a Semaphore allows a fixed number of threads to change it simultaneously.

```python
# _*_ coding: utf-8 _*_
import threading
import time

def run(n):
    semaphore.acquire()
    time.sleep(1)
    print("run the thread: %s\n" % n)
    semaphore.release()

if __name__ == '__main__':
    semaphore = threading.BoundedSemaphore(5)  # allow at most 5 threads to run at once
    for i in range(20):  # the count was lost in the original; 20 shows the batches of 5
        t = threading.Thread(target=run, args=(i,))
        t.start()
    while threading.active_count() != 1:
        pass  # print(threading.active_count())
    else:
        print('----all threads done---')
```
Event

A Python threading Event lets the main thread control the execution of other threads; the Event object provides the methods set(), wait(), clear(), and is_set().

The event mechanism defines a global "flag": if the flag is False, event.wait() blocks the calling thread; if the flag is True, event.wait() no longer blocks.

```python
# _*_ coding: utf-8 _*_
import threading

def do(event):
    print('start')
    event.wait()    # block until the flag is set to True
    print('execute')

event_obj = threading.Event()   # create the event
for i in range(10):             # the thread count was lost in the original; 10 assumed
    t = threading.Thread(target=do, args=(event_obj,))  # pass the event into do()
    t.start()

event_obj.clear()               # set the flag to False
inp = input('input: ')
if inp == 'true':
    event_obj.set()             # set the flag to True; all waiting threads wake up
```

Example of traffic lights:

```python
# _*_ coding: utf-8 _*_
import threading
import time
import random

def light():
    if not event.isSet():
        event.set()  # set means green: wait() does not block the cars
    count = 0
    while True:
        if count < 10:
            print('--green light on---')
        elif count < 13:
            print('--yellow light on---')
        elif count < 20:
            if event.isSet():
                event.clear()  # red: cars will block on wait()
            print('--red light on---')
        else:
            count = 0
            event.set()  # back to green
        time.sleep(1)
        count += 1

def car(n):
    while 1:
        time.sleep(random.randrange(10))
        if event.isSet():  # green light
            print("car [%s] is running.." % n)
        else:
            print("car [%s] is waiting for the red light.." % n)

if __name__ == '__main__':
    event = threading.Event()
    light_thread = threading.Thread(target=light)
    light_thread.start()
    for i in range(3):
        t = threading.Thread(target=car, args=(i,))
        t.start()
```

Here is another Event example: employees swipe a card to enter the company. We set up one thread as the "door" and several threads as "staff". A staff member who sees that the door is closed swipes the card; the door opens, and the staff member can pass through.

```python
# _*_ coding: utf-8 _*_
import threading
import time
import random

def door():
    door_open_time_counter = 0
    while True:
        if door_swiping_event.is_set():   # flag set means the door is open
            print("door opening....")
            door_open_time_counter += 1
        else:
            print("door closed...., swipe to open.")
            door_open_time_counter = 0    # reset the timer
            door_swiping_event.wait()     # block until someone swipes
        if door_open_time_counter > 3:    # the door has been open ~3 ticks; close it
            door_swiping_event.clear()
        time.sleep(0.5)

def staff(n):
    print("staff [%s] is coming..." % n)
    while True:
        if door_swiping_event.is_set():
            print("door is opened, passing.....")
            break
        else:
            print("staff [%s] sees door got closed, swiping the card..." % n)
            door_swiping_event.set()
        time.sleep(0.5)

door_swiping_event = threading.Event()  # create the event

door_thread = threading.Thread(target=door)
door_thread.start()

for i in range(5):
    p = threading.Thread(target=staff, args=(i,))
    time.sleep(random.randrange(3))
    p.start()
```
Queue queues

In Python 3, queues are the most common way to exchange data between threads. The queue module provides thread-safe queue operations; it is easy to use, but a careless design can still produce surprises.

The queue module provides three queue classes:
1. FIFO: first in, first out. `queue.Queue(maxsize)`
2. LIFO: like a stack, last in, first out. `queue.LifoQueue(maxsize)`
3. Priority queue: the lower the priority value, the earlier the item comes out. `queue.PriorityQueue(maxsize)`
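The three orderings are easy to contrast side by side (a minimal sketch, not from the original article): put the same items into each queue type and watch the order in which they come back out.

```python
import queue

fifo = queue.Queue()
lifo = queue.LifoQueue()
prio = queue.PriorityQueue()

for q in (fifo, lifo, prio):
    q.put(2)
    q.put(1)
    q.put(3)

print([fifo.get() for _ in range(3)])  # [2, 1, 3] -- insertion order
print([lifo.get() for _ in range(3)])  # [3, 1, 2] -- reverse insertion order
print([prio.get() for _ in range(3)])  # [1, 2, 3] -- lowest value first
```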

```python
# _*_ coding: utf-8 _*_
import queue

# create a Queue object; maxsize is optional, and maxsize < 1 means unbounded
q = queue.Queue(maxsize=10)
print(q.maxsize)

# put 10 values into the queue
for i in range(10):
    q.put(i)
    # put(item, block=True, timeout=None) inserts an item at the tail. If the
    # queue is full and block is True, the calling thread pauses until a slot
    # frees up; with block=False a full queue raises queue.Full instead.

# take the 10 values back out
for i in range(10):
    print(q.get())
    # get(block=True, timeout=None) removes and returns the head item. If the
    # queue is empty and block is True, the caller pauses until an item is
    # available; with block=False an empty queue raises queue.Empty.

# other common methods on q = queue.Queue():
# q.qsize()        # approximate size of the queue
# q.empty()        # True if the queue is empty, else False
# q.full()         # True if the queue is full (relative to maxsize), else False
# q.get_nowait()   # equivalent to q.get(False)
# q.put_nowait(x)  # equivalent to q.put(x, False)
# q.task_done()    # signal the queue that one previously fetched task is done
# q.join()         # block until every item has been marked with task_done()
```
Producer Consumer Model

Using the producer-consumer pattern in concurrent programming solves most concurrency problems. The pattern improves the program's overall throughput by balancing the processing power of producing threads against consuming threads.

  Why use producer and consumer models?

In the world of threads, the producer is the thread that produces data and the consumer is the thread that consumes it. In multithreaded development, if the producer is fast and the consumer slow, the producer must wait for the consumer before producing more data; likewise, if the consumer's processing power exceeds the producer's, the consumer must wait for the producer. The producer-consumer model was introduced to solve this problem.

  What is the producer-consumer model?

The producer-consumer model removes the tight coupling between producers and consumers by placing a container between them. The two sides never communicate directly; they communicate through a blocking queue: the producer puts data into the queue without waiting for the consumer to process it, and the consumer takes data from the queue without asking the producer for it. The blocking queue acts as a buffer that balances the processing power of the two sides.

Here's an example of a most basic producer consumer model:

```python
# _*_ coding: utf-8 _*_
import threading
import queue

def producer():
    for i in range(100):  # the count was lost in the original; 100 assumed
        q.put("bone %s" % i)
    print("start waiting for all the bones to be taken away...")
    q.join()
    print("all bones are gone...")

def consumer(name):
    while q.qsize() > 0:
        print("%s fetched" % name, q.get())
        q.task_done()  # tell the queue this item has been handled

q = queue.Queue()
p = threading.Thread(target=producer)
p.start()
c1 = consumer("Dog")

Another example of eating buns:

```python
# _*_ coding: utf-8 _*_
import time
import random
import queue
import threading

q = queue.Queue()

def producer(name):
    count = 0
    while count < 20:  # the count was lost in the original; 20 assumed
        time.sleep(random.randrange(3))
        q.put(count)
        print('producer %s has produced %s baozi..' % (name, count))
        count += 1

def consumer(name):
    count = 0
    while count < 20:
        time.sleep(random.randrange(4))
        if not q.empty():
            data = q.get()
            print(data)
            print('\033[32;1mconsumer %s has eaten %s baozi...\033[0m' % (name, data))
        else:
            print("-----no baozi anymore----")
        count += 1

p1 = threading.Thread(target=producer, args=('A',))
c1 = threading.Thread(target=consumer, args=('B',))
p1.start()
c1.start()
```
