I. Threads
1. Basic use
There are two ways to create a thread.
The first way:
import threading

def f1(arg):
    print(arg)

t = threading.Thread(target=f1, args=(123,))
t.start()
Results:
123
The second way:
import threading

class MyClass(threading.Thread):
    def __init__(self, func, args):
        self.func = func
        self.args = args
        super(MyClass, self).__init__()

    def run(self):
        self.func(self.args)

def f2(arg):
    print(arg)

obj = MyClass(f2, 123)
obj.start()
Results:
123
In the second way we define our own class that inherits from the threading.Thread parent class, call the parent's __init__() via super(), and put the thread's work in run().
2. The producer-consumer model (a queue; unlike RabbitMQ, this one is built into Python)
The queue module provides four queue types: the FIFO queue, the LIFO queue, the priority queue, and the double-ended queue (deque). A minimal producer-consumer sketch appears at the end of this section.
import queue

q = queue.Queue(2)
print(q.empty())
q.put(1)
q.put(1)
print(q.qsize())
print(q.get())
q.task_done()
print(q.get())
q.task_done()
# q.join()
Results:
True
2
1
1
This is the ordinary FIFO (first in, first out) queue.
put() inserts data; you can choose whether it blocks, and when blocking you can give it a timeout (see the sketch after this list).
get() fetches data (blocking by default); when blocking, a timeout can also be given.
queue.Queue(2): creates the queue with a maximum length of 2.
qsize(): the actual number of items currently in the queue.
maxsize: the maximum number of items the queue supports.
join() and task_done(): join() blocks until every task that was put into the queue has been marked finished with task_done().
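A minimal sketch of the blocking and timeout behaviour described above, assuming a queue with maxsize=1 (values are illustrative, not from the original):

import queue

q = queue.Queue(maxsize=1)
q.put("a")                                # fits, returns immediately
try:
    q.put("b", block=True, timeout=1)     # queue is full: blocks up to 1 s, then raises
except queue.Full:
    print("put timed out, queue is full")

print(q.get())                            # "a" (blocking get, returns at once here)
try:
    q.get(block=True, timeout=1)          # queue is empty: blocks up to 1 s, then raises
except queue.Empty:
    print("get timed out, queue is empty")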
import queue
import collections

q = queue.LifoQueue()
q.put(123)
q.put(345)
print(q.get())

q1 = queue.PriorityQueue()
q1.put((1, "wzc1"))
q1.put((1, "wzc11"))
q1.put((1, "wzc1111"))
q1.put((1, "wzc111"))
q1.put((3, "wzc3"))
q1.put((2, "wzc4"))
print(q1.get())

q2 = collections.deque()    # the double-ended queue lives in collections
q2.append(123)
q2.append(333)
q2.appendleft(456)
print(q2.pop())
print(q2.popleft())
Results:
345
(1, 'wzc1')
333
456
Last in, first out queue == LifoQueue
Priority queue == PriorityQueue
Double-ended queue == deque
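The heading of this section mentions the producer-consumer model, but the examples above only exercise the queue API itself. A minimal sketch of the model with threads and queue.Queue (thread counts and item names are illustrative, not from the original):

import queue
import threading
import time

q = queue.Queue(maxsize=5)

def producer(name):
    for i in range(3):
        q.put("%s-item%d" % (name, i))   # blocks if the queue is full
        time.sleep(0.1)

def consumer(name):
    while True:
        item = q.get()                   # blocks until an item is available
        print(name, "got", item)
        q.task_done()                    # pairs with q.join() below

producers = []
for n in range(2):
    t = threading.Thread(target=producer, args=("producer%d" % n,))
    t.start()
    producers.append(t)

for n in range(3):
    threading.Thread(target=consumer, args=("consumer%d" % n,), daemon=True).start()

for t in producers:
    t.join()      # all items have been put once the producers finish
q.join()          # then wait until every item has been marked done with task_done()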
3. Thread locks
The difference between Lock and RLock: Lock supports only a single level of locking, while RLock supports nested (re-entrant) locking by the same thread (see the RLock sketch after the example below).
import threading
import time

num = 10

def func(lock):
    global num
    lock.acquire()
    num -= 1
    time.sleep(1)
    print(num)
    lock.release()

lock = threading.Lock()
for i in range(10):
    t = threading.Thread(target=func, args=(lock,))
    t.start()
Results:
9 8 7 6 5 4 3 2 1 0
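The example above only takes a Lock once. A minimal sketch of the nested acquisition that RLock allows, which a plain Lock would deadlock on (function names are illustrative):

import threading

lock = threading.RLock()

def outer():
    with lock:              # first acquisition
        inner()

def inner():
    with lock:              # same thread acquires again: fine with RLock,
        print("inner ran")  # a plain Lock would deadlock here

t = threading.Thread(target=outer)
t.start()
t.join()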
Signalling between threads with threading.Event:

import threading

def func(i, e):
    print(i)
    e.wait()             # block until the event flag is set
    print(i + 100)

event = threading.Event()
for i in range(20):
    t = threading.Thread(target=func, args=(i, event,))
    t.start()

event.clear()            # clear the flag: threads calling wait() will block
inp = input(">>>")
if inp == "1":
    event.set()          # set the flag: all waiting threads proceed
Results:
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19
>>>1
103 102 106 107 108 109 112 113 115 118 101 104 110 114 117 100 116 105 119 111
Another way to coordinate threads, with threading.Condition:
import threading

def func(i, con):
    print(i)
    con.acquire()
    con.wait()           # block until notified
    print(i + 100)
    con.release()

c = threading.Condition()
for i in range(20):
    t = threading.Thread(target=func, args=(i, c,))
    t.start()

while True:
    inp = input(">>>")
    if inp == "q":
        break
    c.acquire()
    c.notify(int(inp))   # wake up that many waiting threads
    c.release()
Results:
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19
>>>3
101 100 102
>>>2
103 104
>>>5
105 108 106 109 107
II. Processes
1. Basic use
from multiprocessing import Process
from multiprocessing import queues
import multiprocessing
import time

def foo(i, arg):
    arg.put(i)
    print("say hi", i, arg.qsize())

li = queues.Queue(20, ctx=multiprocessing)
for i in range(10):
    p = Process(target=foo, args=(i, li,))
    p.start()
Note: without the if __name__ == '__main__': guard, this process example will not run on Windows, but it does run on Linux and macOS; with the guard added (as sketched below) it runs everywhere.
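A minimal sketch of the same example with the guard added, so it should also run on Windows:

from multiprocessing import Process
from multiprocessing import queues
import multiprocessing

def foo(i, arg):
    arg.put(i)
    print("say hi", i, arg.qsize())

if __name__ == '__main__':          # required on Windows, which spawns fresh interpreters
    li = queues.Queue(20, ctx=multiprocessing)
    for i in range(10):
        p = Process(target=foo, args=(i, li,))
        p.start()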
Topics for this section:
Basic use
Data is not shared between processes by default (see the sketch right after this list); ways to share it:
    Queue
    Array
    Manager.dict
    Pipe (see the sketch after the Manager.dict example)
Process pool
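Before the sharing examples, a minimal sketch of the default behaviour, showing that a plain Python list modified in child processes is not shared back to the parent (names are illustrative):

from multiprocessing import Process

li = []

def foo(i):
    li.append(i)
    print("child li:", li)      # each child only sees its own copy with one element

if __name__ == '__main__':
    for i in range(3):
        p = Process(target=foo, args=(i,))
        p.start()
        p.join()
    print("parent li:", li)     # still []: the children's changes are not shared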
Array
from multiprocessing import Process
from multiprocessing import Array

def foo(i, arg):
    arg[i] = i + 100
    for item in arg:
        print(item)
    print("========")

li = Array("i", 10)     # a shared array of 10 C ints
for i in range(10):
    p = Process(target=foo, args=(i, li,))
    p.start()
Results:
100 0 0 0 0 0 0 0 0 0
========
100 101 0 0 0 0 0 0 0 0
========
100 101 102 0 0 0 0 0 0 0
========
100 101 102 103 0 0 0 0 0 0
========
100 101 102 103 104 0 0 0 0 0
========
100 101 102 103 104 105 0 0 0 0
========
100 101 102 103 104 105 106 0 0 0
========
100 101 102 103 104 105 106 107 0 0
========
100 101 102 103 104 105 106 107 108 0
========
100 101 102 103 104 105 106 107 108 109
========
Manager.dict
from multiprocessing import Process
from multiprocessing import Manager
import time

def foo(i, arg):
    arg[i] = i + 100
    print(arg.values())

obj = Manager()
li = obj.dict()
for i in range(10):
    p = Process(target=foo, args=(i, li,))
    p.start()
    time.sleep(0.1)
Results:
[100]
[100, 101]
[100, 101, 102]
[100, 101, 102, 103]
[100, 101, 102, 103, 104]
[100, 101, 102, 103, 104, 105]
[100, 101, 102, 103, 104, 105, 106]
[100, 101, 102, 103, 104, 105, 106, 107]
[100, 101, 102, 103, 104, 105, 106, 107, 108, 109]
[100, 101, 102, 103, 104, 105, 106, 107, 108, 109]
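Pipe is listed in the overview above but not demonstrated. A minimal sketch, assuming we only need to send one message from child to parent (names are illustrative):

from multiprocessing import Process, Pipe

def foo(conn):
    conn.send([100, "from child"])   # send a picklable object through the pipe
    conn.close()

if __name__ == '__main__':
    parent_conn, child_conn = Pipe()
    p = Process(target=foo, args=(child_conn,))
    p.start()
    print(parent_conn.recv())        # [100, 'from child']
    p.join()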
Process Pool
pool.close(): stop accepting new tasks and wait for the submitted ones to finish.
pool.terminate(): terminate the pool immediately.
from multiprocessing import Pool
import time

def f1(arg):
    time.sleep(1)
    print(arg)

pool = Pool(5)
for i in range(30):
    # pool.apply(func=f1, args=(i,))
    pool.apply_async(func=f1, args=(i,))
pool.close()
# time.sleep(1)
# pool.terminate()
pool.join()
Results:
0 1 2 3 4 5 6 7 8 9 11 10 12 13 14 16 15 17 18 19 21 22 20 23 24 25 27 26 28 29
III. Coroutines
1. When to use coroutines
2. greenlet
3. gevent
Greenlet:
from greenlet import greenlet

def test1():
    print(12)
    gr2.switch()        # jump to gr2
    print(34)

def test2():
    print(56)
    gr1.switch()        # jump back to gr1
    print(78)

gr1 = greenlet(test1)
gr2 = greenlet(test2)
gr1.switch()
Gevent:
from gevent import monkey; monkey.patch_all()
import gevent
import requests

def f(url):
    print("GET: %s" % url)
    resp = requests.get(url)
    data = resp.text
    print("%d bytes received from %s" % (len(data), url))

gevent.joinall([
    gevent.spawn(f, 'https://www.baidu.com'),
    gevent.spawn(f, 'https://www.yahoo.com'),
    gevent.spawn(f, 'https://www.github.com'),
])
Python notes - day 11