Python threads, processes, coroutines, and queues

Source: Internet
Author: User
Tags: semaphore, thread, stop

I. Threads

The threading module provides thread-related operations. A thread is the smallest unit of work in an application.

#!/usr/bin/env python
# -*- coding: utf-8 -*-
import threading
import time

def show(arg):
    time.sleep(1)
    print 'thread ' + str(arg)

for i in range(10):
    t = threading.Thread(target=show, args=(i,))
    t.start()

print 'main thread stop'

The code above creates 10 "foreground" threads and then hands control to the CPU, which schedules them according to its scheduling algorithm and executes them in time slices.

More methods:

      • start: makes the thread ready and waits for CPU scheduling
      • setName: sets the thread's name
      • getName: gets the thread's name
      • setDaemon: sets the thread as a background (daemon) thread or a foreground thread (the default); see the daemon/join sketch below
        If it is a background thread, it runs while the main thread executes, and once the main thread finishes, the background thread is stopped whether it has finished or not.
        If it is a foreground thread, it runs while the main thread executes, and after the main thread finishes, the program waits for the foreground thread to finish before stopping.
      • join: makes the caller wait for the thread to finish before continuing; calling it right after each start makes the threads run one by one, which defeats the purpose of multithreading
      • run: the method of the thread object that executes automatically once the thread is scheduled by the CPU
import threading
import time

class MyThread(threading.Thread):
    def __init__(self, num):
        threading.Thread.__init__(self)
        self.num = num

    def run(self):  # define what each thread runs
        print("running on number: %s" % self.num)
        time.sleep(3)

if __name__ == '__main__':
    t1 = MyThread(1)
    t2 = MyThread(2)
    t1.start()
    t2.start()
Custom Threading Classes
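As a rough illustration of the setDaemon and join behaviors listed above, here is a minimal sketch (not part of the original examples; the worker function and sleep times are arbitrary):

import threading
import time

def worker(name):
    time.sleep(1)
    print('worker %s done' % name)

# Background (daemon) thread: it is killed as soon as the main thread exits,
# so its message may never be printed.
d = threading.Thread(target=worker, args=('daemon',))
d.setDaemon(True)
d.start()

# Foreground thread with join: the main thread blocks here until it finishes.
f = threading.Thread(target=worker, args=('joined',))
f.start()
f.join()

print('main thread stop')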

Thread locks (Lock, RLock)

Because threads are scheduled at random and a thread may execute only a few instructions before being switched out, dirty data can appear when multiple threads modify the same piece of data at the same time. Thread locks exist for this reason: they allow only one thread to perform the operation at a time.

#!/usr/bin/env python
# -*- coding: utf-8 -*-
import threading
import time

NUM = 10

def func(i, l):
    global NUM
    # acquire the lock
    l.acquire()
    NUM -= 1
    time.sleep(5)
    print(NUM)
    # release the lock
    l.release()

# lock = threading.Lock()
# lock = threading.RLock()
lock = threading.BoundedSemaphore(5)
for i in range(10):
    # t = threading.Thread(target=func, args=(lock,))
    t = threading.Thread(target=func, args=(i, lock,))
    t.start()
Lock, RLock
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import threading

def func(i, e):
    print(i)
    e.wait()  # check the state of the "traffic light"
    print(i + 100)

event = threading.Event()
for i in range(10):
    t = threading.Thread(target=func, args=(i, event,))
    t.start()
# --------------
event.clear()  # set to red light
inp = input('>>>')
if inp == "1":
    event.set()  # set to green light
Event
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# First kind
import threading

def condition():
    ret = False
    r = input('>>>')
    if r == 'true':
        ret = True
    else:
        ret = False
    return ret

def func(i, con):
    print(i)
    con.acquire()
    con.wait_for(condition)
    print(i + 100)
    con.release()

c = threading.Condition()
for i in range(10):
    t = threading.Thread(target=func, args=(i, c,))
    t.start()

# Second kind
import threading

def func(i, con):
    print(i)
    con.acquire()
    con.wait()
    print(i + 100)
    con.release()

c = threading.Condition()
for i in range(10):
    t = threading.Thread(target=func, args=(i, c,))
    t.start()

while True:
    inp = input('>>>')
    if inp == 'q':
        break
    c.acquire()
    c.notify(int(inp))
    c.release()

from threading import Timer

def hello():
    print("hello, world")

t = Timer(1, hello)
t.start()  # after 1 second, "hello, world" will be printed
Condition
# A mutex allows only one thread to modify the data at a time, while a Semaphore allows a fixed number of
# threads to modify it at once -- like a toilet with 3 stalls that admits at most 3 people; everyone else
# must wait for someone to come out.
import threading, time

def run(n):
    semaphore.acquire()
    time.sleep(1)
    print("run the thread: %s" % n)
    semaphore.release()

if __name__ == '__main__':
    num = 0
    semaphore = threading.BoundedSemaphore(5)  # allow up to 5 threads to run at a time
    for i in range(20):
        t = threading.Thread(target=run, args=(i,))
        t.start()
Semaphore

II. Processes
from multiprocessing import Process
import threading
import time

def foo(i):
    print 'say hi', i

for i in range(10):
    p = Process(target=foo, args=(i,))
    p.start()

Note: because each process holds its own copy of the data, creating a process carries a very large overhead.
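A minimal sketch (not part of the original post) of what "held separately" means in practice: a child process modifies its own copy of a list, and the parent's copy is unchanged.

from multiprocessing import Process

li = []

def foo(i):
    li.append(i)
    print('child sees: %s' % li)   # the child's own copy now contains i

if __name__ == '__main__':
    for i in range(3):
        p = Process(target=foo, args=(i,))
        p.start()
        p.join()
    print('parent still sees: %s' % li)  # [] -- the parent's list was never touched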

Data sharing between processes

 

#!/usr/bin/env python
# -*- coding: utf-8 -*-
# First kind: multiprocessing queues
# from multiprocessing import Process
# from multiprocessing import queues
# import multiprocessing
#
# def foo(i, arg):
#     arg.put(i)
#     print('say hi', i, arg.qsize())
#
# if __name__ == '__main__':
#     li = queues.Queue(20, ctx=multiprocessing)
#     for i in range(10):
#         p = Process(target=foo, args=(i, li,))
#         p.start()

# Second kind: Array
# from multiprocessing import Process
# from multiprocessing import Array
#
# def foo(i, arg):
#     arg[i] = i + 100
#     for item in arg:
#         print(item)
#     print('==========')
#
# if __name__ == '__main__':
#     li = Array('i', 10)
#     for i in range(10):
#         p = Process(target=foo, args=(i, li))
#         p.start()

# Third kind: Manager
from multiprocessing import Process
from multiprocessing import Manager

def foo(i, arg):
    arg[i] = i + 100
    print(arg.values())

if __name__ == '__main__':
    obj = Manager()
    li = obj.dict()
    for i in range(10):
        p = Process(target=foo, args=(i, li))
        # p.daemon = True
        p.start()
        # p.join()
    import time
    time.sleep(0.1)
    'c': ctypes.c_char,    'u': ctypes.c_wchar,
    'b': ctypes.c_byte,    'B': ctypes.c_ubyte,
    'h': ctypes.c_short,   'H': ctypes.c_ushort,
    'i': ctypes.c_int,     'I': ctypes.c_uint,
    'l': ctypes.c_long,    'L': ctypes.c_ulong,
    'f': ctypes.c_float,   'd': ctypes.c_double
Array type code correspondence table
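As a quick illustration (not from the original post) of how a type code from the table is used, 'i' makes a shared array of C ints:

from multiprocessing import Array

arr = Array('i', 10)   # ten c_int slots shared between processes, initialized to 0
arr[0] = 42
print(arr[:])          # [42, 0, 0, 0, 0, 0, 0, 0, 0, 0]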

Process lock example:

#!/usr/bin/env python
# -*- coding: utf-8 -*-
from multiprocessing import Process, Array, RLock

def foo(lock, temp, i):
    """modify element 0"""
    lock.acquire()
    temp[0] = 100 + i
    for item in temp:
        print i, '----->', item
    lock.release()

lock = RLock()
temp = Array('i', [11, 22, 33, 44])
for i in range(10):
    p = Process(target=foo, args=(lock, temp, i,))
    p.start()

Process Pool

The process pool maintains a sequence of processes internally. When a process is needed, one is fetched from the pool; if no process in the pool is currently available, the program waits until one becomes available.

There are two methods in a process pool:

    • apply
    • apply_async
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from multiprocessing import Process, Pool
import time

def foo(i):
    time.sleep(2)
    return i + 100

def bar(arg):
    print arg

pool = Pool(5)
# print pool.apply(foo, (1,))
# print pool.apply_async(func=foo, args=(1,)).get()
for i in range(10):
    pool.apply_async(func=foo, args=(i,), callback=bar)

print 'end'
pool.close()
pool.join()  # close the pool only after its processes finish; if these lines are commented out, the program exits immediately
III. Coroutines

Thread and process operations are triggered by the program through system interfaces, and the operating system is the final executor, whereas coroutine operations are controlled entirely by the programmer.

Why coroutines exist: in a multi-threaded application the CPU switches between threads by time slicing, and each switch takes time (saving the state, then resuming later). A coroutine uses only one thread and specifies the execution order of code blocks within that single thread.

Typical scenario: coroutines are a good fit when a program performs a large number of operations that do not need the CPU (i.e., IO).

#!/usr/bin/env python
# -*- coding: utf-8 -*-
from greenlet import greenlet

def test1():
    print 12
    gr2.switch()
    print 34
    gr2.switch()

def test2():
    print 56
    gr1.switch()
    print 78

gr1 = greenlet(test1)
gr2 = greenlet(test2)
gr1.switch()

import gevent

def foo():
    print('Running in foo')
    gevent.sleep(0)
    print('Explicit context switch to foo again')

def bar():
    print('Explicit context to bar')
    gevent.sleep(0)
    print('Implicit context switch back to bar')

gevent.joinall([
    gevent.spawn(foo),
    gevent.spawn(bar),
])
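Since the stated use case is IO-heavy code, a common follow-up is to monkey-patch the standard library so that gevent switches coroutines whenever a blocking IO call is made. This is a minimal sketch, not part of the original post; the URLs are arbitrary and urllib2 assumes Python 2.

from gevent import monkey
monkey.patch_all()  # replace blocking stdlib IO with gevent-aware versions

import gevent
import urllib2  # Python 2; use urllib.request on Python 3

def fetch(url):
    print('GET: %s' % url)
    data = urllib2.urlopen(url).read()
    print('%d bytes received from %s' % (len(data), url))

gevent.joinall([
    gevent.spawn(fetch, 'https://www.python.org/'),
    gevent.spawn(fetch, 'https://github.com/'),
])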
IV. Queues
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import queue, time

# FIFO queue
# put: add data; may block, with an optional timeout while blocking
# get: fetch data (blocks by default); may block, with an optional timeout while blocking
# the queue has a maximum length
# qsize(): the actual number of items
# maxsize: the maximum number of items supported
# join, task_done: block until the tasks in the queue are done, then stop blocking

# q = queue.Queue(2)
# print(q.empty())            # check whether the queue has elements; returns True if empty
# q.put(11)
# q.put(22)
# print(q.empty())
# print(q.qsize())
# q.put(22)
# q.put(33, block=False)
# q.put(33, block=False, timeout=2)
# print(q.get())
# print(q.get())
# print(q.get(timeout=2))

import queue

# q = queue.LifoQueue()       # last in, first out
# q.put(123)
# q.put(456)
# print(q.get())

# q = queue.PriorityQueue()   # processed by priority
# q.put((1, "alex1"))
# q.put((2, "alex2"))
# q.put((3, "alex3"))
# print(q.get())

# q = queue.deque()           # double-ended queue
# q.append(123)
# q.append(234)
# q.appendleft(456)
# print(q.pop())
# print(q.popleft())

# q = queue.Queue(5)
# q.put(123)
# q.put(456)
# q.get()
# q.task_done()
# q.get()
# time.sleep(5)
# q.task_done()
# q.join()
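To show how put, get, task_done and join fit together in practice, here is a minimal producer/consumer sketch (not from the original post; the item count and names are arbitrary):

import queue
import threading

q = queue.Queue()

def producer():
    for i in range(5):
        q.put(i)

def consumer():
    while True:
        item = q.get()
        print('consumed %s' % item)
        q.task_done()  # mark this item as fully processed

threading.Thread(target=producer).start()

t = threading.Thread(target=consumer)
t.daemon = True   # let the program exit even though the consumer loops forever
t.start()

q.join()  # blocks until task_done() has been called once for every put() item
print('all items processed')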
