Python learning: threads (part two)

One, the threading module

The threading module is used in much the same way as the multiprocessing module. There are two ways to start a thread.

Example (passing a target function to Thread):

from threading import Thread
from multiprocessing import Process

def work(name):
    print('%s say hello' % name)

if __name__ == '__main__':
    t = Thread(target=work, args=('Hyh',))
    t.start()
    print('main thread')

Example (subclassing Thread):

from threading import Thread

class Work(Thread):
    def __init__(self, name):
        super().__init__()
        self.name = name

    def run(self):
        print('%s say hello' % self.name)

if __name__ == '__main__':
    t = Work('Hyh')
    t.start()
    print('main thread')

Two, Thread methods

Queue, example:

import queue

q = queue.Queue(3)          # FIFO: first in, first out
q.put(1)
q.put('Hyh')
q.put([1, 2, 3, 4])
print(q.get())
print(q.get())
print(q.get())

q = queue.LifoQueue()       # LIFO: last in, first out
q.put(1)
q.put('Hyh')
q.put([1, 2, 3, 4])
print(q.get())
print(q.get())
print(q.get())

q = queue.PriorityQueue()   # priority queue: the smaller the number, the higher the priority
q.put((10, 'a'))
q.put((9, 'b'))
q.put((11, 'c'))
print(q.get())
print(q.get())
print(q.get())

Other Thread methods, example:

import time
from threading import Thread
import threading

def work():
    time.sleep(2)
    print('%s say hello' % threading.current_thread().getName())

if __name__ == '__main__':
    t = Thread(target=work)
    t.setDaemon(True)                   # make it a daemon thread
    t.start()
    t.join()
    print(threading.enumerate())        # currently active Thread objects, as a list
    print(threading.active_count())     # number of currently active threads
    print('main thread', threading.current_thread().getName())   # thread name

The Python global interpreter lock (GIL)

Python threads within the same process cannot take advantage of multiple cores, because a thread acquires the GIL when it runs and only releases it when it finishes, after which the other threads compete for the GIL again. Today's computers are mostly multi-core, so for compute-intensive tasks Python multithreading brings little performance gain and may even be slower than plain serial execution (which has no switching overhead). For IO-intensive tasks, however, the improvement is significant.

Compute-intensive:

from threading import Thread
from multiprocessing import Process
import os
import time

def work():
    res = 0
    for i in range(1000000):
        res += i

if __name__ == '__main__':
    t_l = []
    start_time = time.time()
    for i in range(100):        # number of threads; the original value was lost in formatting
        t = Thread(target=work)
        t_l.append(t)
        t.start()
    for t in t_l:
        t.join()
    stop_time = time.time()
    print('run time is %s' % (stop_time - start_time))
    print('main thread')

IO-intensive:

from threading import Thread
from multiprocessing import Process
import time
import os

def work():
    time.sleep(2)
    print(os.getpid())

if __name__ == '__main__':
    t_l = []
    start_time = time.time()
    for i in range(100):        # number of threads; the original value was lost in formatting
        t = Thread(target=work)
        t_l.append(t)
        t.start()
    for t in t_l:
        t.join()
    stop_time = time.time()
    print('run time is %s' % (stop_time - start_time))

Thread lock (Lock)

import threading

r = threading.Lock()
r.acquire()
# ... operate on the shared data ...
r.release()
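The skeleton above only shows the acquire/release pattern. Here is a minimal sketch (not part of the original post) of a Lock protecting a shared counter that many threads increment; the names n and add are made up for illustration:

from threading import Thread, Lock

n = 0
mutex = Lock()

def add():
    global n
    for i in range(100000):
        with mutex:              # equivalent to mutex.acquire() ... mutex.release()
            n += 1

if __name__ == '__main__':
    threads = [Thread(target=add) for i in range(10)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(n)                     # 1000000 with the lock; without it the result can come out smaller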
Deadlock, example:

from threading import Thread, Lock
import time

mutexA = Lock()
mutexB = Lock()

class MyThread(Thread):
    def run(self):
        self.func1()
        self.func2()

    def func1(self):
        mutexA.acquire()
        print('\033[41m%s got A lock\033[0m' % self.name)
        mutexB.acquire()
        print('\033[42m%s got B lock\033[0m' % self.name)
        mutexB.release()
        mutexA.release()

    def func2(self):
        mutexB.acquire()
        print('\033[43m%s got B lock\033[0m' % self.name)
        time.sleep(2)
        mutexA.acquire()
        print('\033[44m%s got A lock\033[0m' % self.name)
        mutexA.release()
        mutexB.release()

if __name__ == '__main__':
    for i in range(10):          # number of threads; the original value was lost in formatting
        t = MyThread()
        t.start()

Output:

Thread-1 got A lock
Thread-1 got B lock
Thread-1 got B lock
Thread-2 got A lock
... and then it hangs (deadlock).

Recursive lock (RLock)

An RLock maintains a Lock and a counter internally. The counter records the number of acquire calls, so the same thread can acquire the lock several times; only after all of that thread's acquires have been released can other threads get the lock. If RLock is used instead of Lock in the example above, no deadlock occurs:

from threading import Thread, RLock
import time

mutex = RLock()

class MyThread(Thread):
    def run(self):
        self.func1()
        self.func2()

    def func1(self):
        mutex.acquire()
        print('\033[41m%s got A lock\033[0m' % self.name)
        mutex.acquire()
        print('\033[42m%s got B lock\033[0m' % self.name)
        mutex.release()
        mutex.release()

    def func2(self):
        mutex.acquire()
        print('\033[43m%s got B lock\033[0m' % self.name)
        time.sleep(2)
        mutex.acquire()
        print('\033[44m%s got A lock\033[0m' % self.name)
        mutex.release()
        mutex.release()

if __name__ == '__main__':
    for i in range(10):
        t = MyThread()
        t.start()

Semaphore

A Semaphore manages a built-in counter. Every call to acquire() decrements the counter by 1 and every call to release() increments it by 1; the counter can never go below 0. When the counter is 0, acquire() blocks the thread until some other thread calls release(). Example:

import threading
import time

semaphore = threading.Semaphore(5)

def func():
    if semaphore.acquire():
        print(threading.current_thread().getName() + ' get semaphore')
        time.sleep(2)
        semaphore.release()

for i in range(20):              # number of threads; the original value was lost in formatting
    t1 = threading.Thread(target=func)
    t1.start()

Event objects

A key feature of threads is that each thread runs independently and its state is unpredictable. If other threads in a program need to decide what to do next based on the state of some thread, thread synchronization becomes tricky. The Event object in the threading library solves this. An Event contains a signal flag that threads can set, which lets threads wait for a certain event to occur. Initially the flag is False; a thread that waits on an Event whose flag is False blocks until the flag becomes True.
When a thread sets the flag to True, it wakes up all the threads waiting on that Event. A thread that waits on an Event whose flag is already True simply ignores the wait and continues execution.

event.isSet(): returns the Event's status flag.
event.wait(): blocks the thread if event.isSet() == False.
event.set(): sets the flag to True; all blocked threads are moved to the ready state and wait to be scheduled by the operating system.
event.clear(): resets the flag to False.

Example:

from threading import Thread, Event
import threading
import time, random

def conn_mysql():
    print('\033[42m%s is waiting to connect to MySQL...\033[0m' % threading.current_thread().getName())
    event.wait()
    print('\033[42mMySQL initialized, %s starts connecting...\033[0m' % threading.current_thread().getName())

def check_mysql():
    print('\033[41mchecking MySQL...\033[0m')
    time.sleep(random.randint(1, 3))
    event.set()
    time.sleep(random.randint(1, 3))

if __name__ == '__main__':
    event = Event()
    t1 = Thread(target=conn_mysql)
    t2 = Thread(target=conn_mysql)
    t3 = Thread(target=check_mysql)
    t1.start()
    t2.start()
    t3.start()

wait(timeout) sets a timeout:

from threading import Thread, Event
import threading
import time, random

def conn_mysql():
    while not event.is_set():
        print('\033[42m%s is waiting to connect to MySQL...\033[0m' % threading.current_thread().getName())
        event.wait(0.1)
    print('\033[42mMySQL initialized, %s starts connecting...\033[0m' % threading.current_thread().getName())

def check_mysql():
    print('\033[41mchecking MySQL...\033[0m')
    time.sleep(random.randint(1, 3))
    event.set()
    time.sleep(random.randint(1, 3))

if __name__ == '__main__':
    event = Event()
    t1 = Thread(target=conn_mysql)
    t2 = Thread(target=conn_mysql)
    t3 = Thread(target=check_mysql)
    t1.start()
    t2.start()
    t3.start()

Timer

A Timer performs an operation after n seconds. Example:

from threading import Timer

def hello():
    print("hello, world")

t = Timer(3, hello)
t.start()

Four, coroutines

A coroutine gives single-threaded concurrency and is also called a micro-thread. It is a lightweight user-space thread: the user program itself controls the scheduling and performs the switching. Before switching, the user program must save the state of the current call so that the next time it resumes it can continue from where it left off. We have already learned one way to save a program's running state within a single thread: yield.

Without yield:

import time

def consumer(item):
    x = 1111111111111
    y = 222222222222222
    z = 3333333333333333
    x1 = 122324234534534
    x2 = 21324354654654
    x3 = 3243565432435

def producer(target, seq):
    for item in seq:
        target(item)

# every call to consumer() temporarily creates a namespace and releases it when the call returns;
# repeating that creation and release 100 million times makes the overhead very large
start_time = time.time()
producer(consumer, range(100000000))
stop_time = time.time()
print('run time is: %s' % (stop_time - start_time))

Printed result: run time is: 14.8908851146698

Using yield:

import time

def init(func):
    def wrapper(*args, **kwargs):
        g = func(*args, **kwargs)
        next(g)
        return g
    return wrapper
@init
def consumer():
    x = 1111111111111
    y = 222222222222222
    z = 3333333333333333
    x1 = 122324234534534
    x2 = 21324354654654
    x3 = 3243565432435
    while True:
        item = yield

def producer(target, seq):
    for item in seq:
        target.send(item)

start_time = time.time()
producer(consumer(), range(100000000))
stop_time = time.time()
print('run time is: %s' % (stop_time - start_time))

greenlet

greenlet implements switching between coroutines. Example:

from greenlet import greenlet

def test1():
    print('test1, first')
    gr2.switch()
    print('test1, second')
    gr2.switch()

def test2():
    print('test2, first')
    gr1.switch()
    print('test2, second')

gr1 = greenlet(test1)
gr2 = greenlet(test2)
gr1.switch()

switch() can pass parameters:

import time
from greenlet import greenlet

def eat(name):
    print('%s eat food 1' % name)
    gr2.switch('alex fly fly fly')
    print('%s eat food 2' % name)
    gr2.switch()

def play_phone(name):
    print('%s play 1' % name)
    gr1.switch()
    print('%s play 2' % name)

gr1 = greenlet(eat)
gr2 = greenlet(play_phone)
gr1.switch(name='egon cheerleaders')

gevent (third-party library)

gevent is a third-party library that makes it easy to write concurrent synchronous or asynchronous code. The main pattern used in gevent is the greenlet, a lightweight coroutine that plugs into Python as a C extension module. All greenlets run inside the operating-system process of the main program, but they are scheduled cooperatively.

g1 = gevent.spawn() creates a coroutine object g1. Switching on IO blocking, example:

import gevent
import time

def eat():
    print('eat food 1')
    gevent.sleep(2)
    print('eat food 2')

def play_phone():
    print('play phone 1')
    gevent.sleep(1)
    print('play phone 2')

g1 = gevent.spawn(eat)
g2 = gevent.spawn(play_phone)
gevent.joinall([g1, g2])
print('master')

gevent.sleep(2) simulates an IO block that gevent can recognize. time.sleep(2) and other blocking calls are not directly recognized by gevent; they require the following line of code:

from gevent import monkey; monkey.patch_all()

Example:

from gevent import monkey; monkey.patch_all()
import gevent
import time

def eat():
    print('eat food 1')
    time.sleep(2)
    print('eat food 2')

def play_phone():
    print('play phone 1')
    time.sleep(1)
    print('play phone 2')

g1 = gevent.spawn(eat)
g2 = gevent.spawn(play_phone)
gevent.joinall([g1, g2])
print('master')

gevent implements single-threaded socket concurrency, example:

Server:

from gevent import monkey; monkey.patch_all()
from socket import *
import gevent

def server(server_ip, port):
    s = socket(AF_INET, SOCK_STREAM)
    s.setsockopt(SOL_SOCKET, SO_REUSEADDR, 1)
    s.bind((server_ip, port))
    s.listen(5)
    while True:
        conn, addr = s.accept()
        gevent.spawn(talk, conn, addr)

def talk(conn, addr):
    try:
        while True:
            res = conn.recv(1024)
            print('client %s:%s msg: %s' % (addr[0], addr[1], res))
            conn.send(res.upper())
    except Exception as e:
        print(e)
    finally:
        conn.close()

if __name__ == '__main__':
    server('127.0.0.1', 8080)

Client:
#!/usr/bin/python
# -*- coding: utf-8 -*-
from socket import *

client = socket(AF_INET, SOCK_STREAM)
client.connect(('127.0.0.1', 8080))

while True:
    msg = input('>>: ').strip()
    if not msg:
        continue
    client.send(msg.encode('utf-8'))
    msg = client.recv(1024)
    print(msg.decode('utf-8'))
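To exercise the server's single-threaded concurrency, a small driver can spawn several gevent greenlets that each act as a client. This is only a sketch under assumptions, not part of the original post: it assumes the server above is already running on 127.0.0.1:8080, and the helper name one_client is made up for illustration.

from gevent import monkey; monkey.patch_all()
from socket import *
import gevent

def one_client(i):
    # hypothetical helper: each greenlet opens its own connection,
    # sends one message, and prints the echoed (upper-cased) reply
    c = socket(AF_INET, SOCK_STREAM)
    c.connect(('127.0.0.1', 8080))
    c.send(('hello from client %s' % i).encode('utf-8'))
    print(c.recv(1024).decode('utf-8'))
    c.close()

if __name__ == '__main__':
    gevent.joinall([gevent.spawn(one_client, i) for i in range(10)])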

This article is from the "Linux Technology" blog; please keep the original source: http://haoyonghui.blog.51cto.com/4278020/1944191
