Processes and Threads

Source: Internet
Author: User
Tags: epoll, mutex, semaphore, redis, server

I. Processes and Threads

1. Processes

All the applications on our computer are processes. Suppose the machine is single-core: the CPU can execute only one process at a time. When a program blocks on I/O, having the CPU sit and wait with it would be wasteful, so the CPU goes off to run another program. That involves switching: before switching, the state of the running program must be saved so it can later be restored, so something is needed to record all of this. That is what leads to the concept of a process.

A process is a dynamic execution of a program over a data set. It consists of three parts: the program, the data set, and the process control block. The program describes what the process does and how it does it; the data set is the resources the program uses while executing; and the process control block saves the state of the running program.

2. Threads

A process can start multiple threads. Why have processes at all, rather than only threads? Because several subtasks of one program need to share one set of data. If each subtask became its own process, with every process owning its own block of memory, that data set would have to be copied into each one of them, which is unreasonable. Hence threads.

A thread, also called a lightweight process, is a basic unit of CPU execution and the smallest unit during program execution. A process has at least one main thread, and sub-threads are started from the main thread through the threading module.

3. The relationship between processes and threads

(1) A thread can belong to only one process, while a process can have multiple threads, but at least one.

(2) Resources are allocated to a process; the process is the main body of the program, and all threads of the same process share all of that process's resources.

(3) The CPU is allocated to threads; that is, it is threads that actually run on the CPU.

(4) The thread is the smallest unit of execution and the process is the smallest unit of resource management.

4. Parallelism and concurrency
Parallel processing is a computing method in which two or more tasks execute simultaneously in a computer system; parallel processing can work on different aspects of the same program at the same time.

Concurrent processing means several programs run on one CPU within the same time period, but at any single moment only one of them is actually on the CPU.

Concurrency is about having the ability to handle multiple tasks, not necessarily simultaneously, while parallelism is about handling multiple tasks at the same moment. Parallelism is a subset of concurrency.

What makes Python special is its GIL (Global Interpreter Lock), which restricts a process to having only one thread using the CPU at any given moment.

II. The threading Module

This module is used to create new threads. There are two ways to create a thread:

1. Creating a thread directly

```python
import threading
import time

def foo(n):
    print('>>>>>>>>>>>>>>>%s' % n)
    time.sleep(3)
    print('thread 1')

t1 = threading.Thread(target=foo, args=(2,))  # args must be a tuple; t1 is the child thread object
t1.start()   # start the child thread
print('ending')
```

The code above creates a child thread inside the main thread.

The result: it prints >>>>>>>>>>>>>>>2 first, then ending, and after 3 seconds prints thread 1.

2. Creating a thread object by inheriting the Thread class

```python
import threading
import time

class MyThread(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)

    def run(self):
        print('ok')
        time.sleep(2)
        print('end')

t1 = MyThread()  # create the thread object
t1.start()       # activate the thread object
print('end again')
```

3. The join() method

This method makes the parent of a child thread block until that child thread has finished running.

```python
import threading
import time

def foo(n):
    print('>>>>>>>>>>>>>>>%s' % n)
    time.sleep(n)
    print('thread 1')

def bar(n):
    print('>>>>>>>>>>>>>>>%s' % n)
    time.sleep(n)
    print('thread 2')

s = time.time()
t1 = threading.Thread(target=foo, args=(2,))
t1.start()   # start the child thread
t2 = threading.Thread(target=bar, args=(5,))
t2.start()
t1.join()    # only blocks the main thread; t2 is unaffected
t2.join()
print(time.time() - s)
print('ending')

'''
Run result:
>>>>>>>>>>>>>>>2
>>>>>>>>>>>>>>>5
thread 1
thread 2
5.001286268234253
ending
'''
```

4. The setDaemon() method

This method declares the thread to be a daemon thread, and it must be called before the start() method.

By default, when the main thread finishes it checks whether its child threads are done; if not, it waits for them to finish before exiting. Call setDaemon(True) if the program should exit as soon as the main thread is finished, without waiting for the daemon threads.

```python
import threading
import time

class MyThread(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)

    def run(self):
        print('ok')
        time.sleep(2)
        print('end')

t1 = MyThread()      # create the thread object
t1.setDaemon(True)   # declare it a daemon thread
t1.start()           # activate the thread object
print('end again')
# The result: 'ok' and 'end again' print immediately,
# then the program terminates without ever printing 'end'.
```

The main thread is a non-daemon thread by default, and child threads inherit from the main thread, so they are non-daemon by default as well.
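In current Python the same effect is normally achieved by passing daemon=True to the Thread constructor rather than calling setDaemon(); a minimal sketch (the timings are arbitrary):

```python
import threading
import time

results = []

def worker():
    results.append('started')
    time.sleep(5)                 # the daemon never gets past this sleep
    results.append('finished')    # never reached before the program exits

# daemon=True at construction time replaces the older setDaemon(True) call
t = threading.Thread(target=worker, daemon=True)
t.start()
time.sleep(0.2)                   # give the daemon a moment to start
print(results)                    # ['started']: the daemon dies with the main thread
```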

5. Other methods

isAlive(): returns whether the thread is active (spelled is_alive() in Python 3)

getName(): returns the thread's name

setName(): sets the thread's name

threading.currentThread(): returns the current thread object (current_thread() in Python 3)

threading.enumerate(): returns a list of the threads that are running

threading.activeCount(): returns the number of running threads (active_count() in Python 3)
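A small sketch exercising these calls with their modern snake_case spellings (the thread name here is made up):

```python
import threading
import time

def napper():
    time.sleep(0.5)

t = threading.Thread(target=napper, name='napper-1')
t.start()

print(t.is_alive())                     # True while napper() is still sleeping
print(t.name)                           # 'napper-1'
print(threading.current_thread().name)  # 'MainThread' here
print(threading.active_count() >= 2)    # True: the main thread plus napper
t.join()
print(t.is_alive())                     # False once join() has returned
```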

III. Various Locks

1. Synchronization lock (user lock, mutex)

Let's look at an example:

The requirement: there is a global variable whose value is 100; we start 100 threads, and each thread decrements that global variable by one, so the final result should be 0.

```python
import threading
import time

def sub():
    global num
    temp = num
    num = temp - 1
    time.sleep(2)

num = 100
l = []
for i in range(100):
    t = threading.Thread(target=sub, args=())
    t.start()
    l.append(t)
for t in l:
    t.join()
print(num)
```

Everything seems fine: it prints 0. Now change it: insert time.sleep(0.1) between temp=num and num=temp-1 in sub, and a problem appears; after about two seconds it prints 99. Change it to time.sleep(0.0001) and the result is nondeterministic, but it comes out around 90. What is going on?

This has to do with Python's GIL. Let's walk through it:

We define the global variable num=100 and start 100 threads, but Python's GIL restricts the CPU to one thread at a time, so the 100 threads are all competing for the lock; whichever thread grabs it runs its own code. In the first version, each thread grabbed the CPU and immediately decremented the global variable, so there was no problem. But in the changed version, each thread sleeps 0.1 seconds before decrementing. While a thread sleeps, it is I/O-blocked; the CPU cannot wait for it, so other threads grab the CPU and start executing their code. For a CPU, 0.1 seconds is a very long time, long enough for every other thread to get a turn before the first thread wakes up. Each of them read num as 100, and when they woke they all computed 100 - 1, so the final result is 99. Likewise, with the shorter 0.0001-second sleep, the first thread may wake up and modify the global variable before, say, the 91st thread first grabs the CPU; that thread then reads num as 99, then the second and third threads wake up and modify the variable in turn, so the final result is unpredictable.

This is a thread-safety problem, and it can arise whenever threads share data. The solution is a lock.

We create a lock at global scope and wrap the data-manipulation section in it, turning that stretch of code serial:

```python
import threading
import time

def sub():
    global num
    lock.acquire()   # acquire the lock
    temp = num
    time.sleep(0.001)
    num = temp - 1
    lock.release()   # release the lock
    time.sleep(2)

num = 100
l = []
lock = threading.Lock()
for i in range(100):
    t = threading.Thread(target=sub, args=())
    t.start()
    l.append(t)
for t in l:
    t.join()
print(num)
```

Once the lock is acquired, it must be released before it can be acquired again. This kind of lock is called a user lock, or mutex.
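As an aside, the acquire()/release() pair is usually written as a with statement, which releases the lock even if the body raises an exception; the same counter sketched in that form:

```python
import threading

num = 100
lock = threading.Lock()

def sub():
    global num
    # 'with lock' acquires on entry and releases on exit,
    # even when the body raises an exception
    with lock:
        temp = num
        num = temp - 1

threads = [threading.Thread(target=sub) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(num)   # always 0
```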

2. Deadlock and recursive locks

A deadlock occurs when two or more processes or threads block each other during execution, each waiting for a resource the other holds; without outside intervention, they stay stuck forever. An example:

```python
import threading
import time

class MyThread(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)

    def run(self):
        self.foo()
        self.bar()

    def foo(self):
        lockA.acquire()
        # every thread has a default name; self.name retrieves it
        print('I am %s GET LOCKA------%s' % (self.name, time.ctime()))
        lockB.acquire()
        print('I am %s GET LOCKB-----%s' % (self.name, time.ctime()))
        lockB.release()
        time.sleep(1)
        lockA.release()

    def bar(self):
        lockB.acquire()
        print('I am %s GET LOCKB------%s' % (self.name, time.ctime()))
        lockA.acquire()
        print('I am %s GET LOCKA-----%s' % (self.name, time.ctime()))
        lockA.release()
        lockB.release()

lockA = threading.Lock()
lockB = threading.Lock()

for i in range(10):
    t = MyThread()
    t.start()

'''
Run result:
I am Thread-1 GET LOCKA------Sun Jul 23 11:25:48 2017
I am Thread-1 GET LOCKB-----Sun Jul 23 11:25:48 2017
I am Thread-1 GET LOCKB------Sun Jul 23 11:25:49 2017
I am Thread-2 GET LOCKA------Sun Jul 23 11:25:49 2017
...and then it hangs.
'''
```

In this example, thread 2 waits for thread 1 to release lock B while thread 1 waits for thread 2 to release lock A; they constrain each other.

When we use mutexes, as soon as several locks are involved it is easy to run into this problem.

To solve this, Python provides the reentrant lock, RLock. An RLock maintains a lock together with a counter: each acquire() increments the counter by 1, each release() decrements it by 1, and only when the counter is 0 can other threads acquire the resource. Replace the Locks below with a single RLock, and the program no longer hangs when run:

```python
import threading
import time

class MyThread(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)

    def run(self):
        self.foo()
        self.bar()

    def foo(self):
        rlock.acquire()
        print('I am %s GET LOCKA------%s' % (self.name, time.ctime()))
        rlock.acquire()
        print('I am %s GET LOCKB-----%s' % (self.name, time.ctime()))
        rlock.release()
        time.sleep(1)
        rlock.release()

    def bar(self):
        rlock.acquire()
        print('I am %s GET LOCKB------%s' % (self.name, time.ctime()))
        rlock.acquire()
        print('I am %s GET LOCKA-----%s' % (self.name, time.ctime()))
        rlock.release()
        rlock.release()

rlock = threading.RLock()

for i in range(10):
    t = MyThread()
    t.start()
```

This lock is also called a recursive lock.
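A minimal demonstration of the difference between Lock and RLock, using acquire(blocking=False) so that nothing actually hangs:

```python
import threading

lock = threading.Lock()
rlock = threading.RLock()

# A plain Lock is not reentrant: a second acquire by the same thread
# would block forever, so we probe it non-blockingly instead.
lock.acquire()
print(lock.acquire(blocking=False))   # False: already held

rlock.acquire()
print(rlock.acquire(blocking=False))  # True: the owning thread may re-enter
rlock.release()
rlock.release()                       # counter back to 0; others may acquire now
lock.release()
```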

3. Semaphore
This is also a lock; it lets you specify how many threads may hold it at the same time, here up to 5 (the mutex described earlier allows only one thread to hold it).

```python
import threading
import time

semaphore = threading.Semaphore(5)   # at most 5 threads may hold it at once

def foo():
    semaphore.acquire()
    time.sleep(2)
    print('ok')
    semaphore.release()

for i in range(100):
    t = threading.Thread(target=foo, args=())
    t.start()
```

The result: 'ok' prints five at a time, one batch every two seconds.

4. Event objects
Threads run independently. If one thread needs to communicate with another, or needs to take its next action based on another thread's state, the Event object is needed. An Event can be thought of as a flag whose default value is False. If a thread waits on the Event while the flag is False, the thread blocks until the flag becomes True, at which point the threads waiting on the Event are woken up.

event.is_set(): returns the event's status value (spelled isSet() in older code); the flag defaults to False when the object is created. event.wait(): blocks the thread if event.is_set() == False. event.set(): sets the status value to True; all blocked threads are activated into the ready state and wait to be scheduled by the operating system. event.clear(): resets the status value to False.

Use an example to demonstrate the use of the event object:

```python
import threading
import time

event = threading.Event()   # create an event object

def foo():
    print('wait...')
    event.wait()            # blocks while the event's flag is False
    # event.wait(1) would block at most 1 second: if the flag is still
    # False after that, execution continues anyway
    print('connect to redis server')

for i in range(5):
    t = threading.Thread(target=foo, args=())
    t.start()

print('attempt to start redis server')
time.sleep(3)
event.set()
# After 3 seconds the main thread sets the flag to True with event.set().
# The child threads are not daemons, so even after the main thread's code
# finishes, the program does not end until they do.
```

5. Queues

The official documentation says queues are very useful for ensuring data safety across multiple threads.

A queue can be understood as a data structure for storing and retrieving data; essentially, a list with a lock added.

5.1 The put and get methods

```python
import queue

# A queue supports only put and get for its data; the list methods are absent
q = queue.Queue()      # create a queue object, FIFO (first in, first out)
# q = queue.Queue(20)  # the optional argument caps the amount of stored data,
#                        like a maximum number of slots
# With the argument set to 20, the 21st put() blocks until a slot frees up,
# that is, until some item is taken out with get()
q.put(11)              # store values
q.put('hello')
q.put(3.14)
print(q.get())         # 11
print(q.get())         # hello
print(q.get())         # 3.14
print(q.get())         # blocks, waiting for someone to put another value
```

The get method has a default parameter block=True. Change it to False, and when there is no value to take, the queue raises queue.Empty.

That is equivalent to writing q.get_nowait().
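A short sketch of the non-blocking get (the stored values are arbitrary):

```python
import queue

q = queue.Queue()
q.put('hello')
print(q.get())              # hello

# a non-blocking get on an empty queue raises queue.Empty
try:
    q.get(block=False)      # same as q.get_nowait()
    result = 'got a value'
except queue.Empty:
    result = 'queue was empty'
print(result)               # queue was empty
```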

5.2 The join and task_done methods

join() is used to block the calling thread, and it only makes sense used together with task_done(). You can understand it through a counter: every put() increments the counter by 1, every task_done() decrements it by 1, and join() only returns when the counter is 0.

Note that task_done() must be called after every get().

```python
import queue
import threading

# A queue supports only put and get; the list methods are absent
q = queue.Queue()

def foo():       # store data
    q.put(111)
    q.put(222)
    q.put(333)
    q.join()     # with this join(), the thread stops here until every
                 # item has been marked done
    print('ok')

def bar():
    print(q.get())
    q.task_done()
    print(q.get())
    q.task_done()
    print(q.get())
    q.task_done()   # task_done() must follow every get() statement

t1 = threading.Thread(target=foo, args=())
t1.start()
t2 = threading.Thread(target=bar, args=())
t2.start()
# It doesn't matter whether t1 or t2 runs first, because each blocks
# and waits for the other's signal
```

5.3 Other methods

q.qsize(): returns the size of the queue
q.empty(): returns True if the queue is empty, otherwise False
q.full(): returns True if the queue is full, otherwise False (full corresponds to the maxsize setting)

5.4 Other modes

The queues above are all FIFO (first in, first out); there are also a LIFO (last in, first out) mode and the priority queue.

A LIFO queue is created with: queue.LifoQueue(maxsize)

A priority queue is created with: queue.PriorityQueue(maxsize)

```python
import queue

q = queue.PriorityQueue()
# The brackets just denote a sequence type; a tuple or a list both work,
# but every entry must use the same type.
# The first position in the sequence is the priority.
q.put([5, 100])
q.put([7, 200])
q.put([3, 'hello'])
q.put([4, {'name': 'alex'}])
```

5.5 The producer-consumer model

The producer is the thread that produces data, and the consumer is the thread that takes data. When writing a program, we must consider whether the capacity to produce data matches the capacity to consume it; if not, one side has to wait, which is why the producer-consumer model was introduced. This model decouples producer and consumer through a container: instead of communicating directly, they go through the container, which is a blocking queue acting as a buffer that balances the capabilities of producers and consumers. (Organizing a program with a directory structure is the same idea of reducing coupling.) Besides solving the strong-coupling problem, the producer-consumer model also enables concurrency. When producer and consumer capabilities are mismatched, consider adding a constraint, something like if q.qsize() < 20.
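The model just described can be sketched with a bounded queue.Queue as the buffer (the sizes and item counts here are arbitrary):

```python
import queue
import threading

buffer = queue.Queue(maxsize=5)   # the bounded buffer that balances both sides
consumed = []

def producer():
    for i in range(10):
        buffer.put(i)             # blocks whenever 5 items are already waiting

def consumer():
    for _ in range(10):
        consumed.append(buffer.get())
        buffer.task_done()

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start()
c.start()
p.join()
c.join()
buffer.join()                     # returns once every item was marked done
print(consumed)                   # [0, 1, 2, ..., 9]
```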

IV. Multiprocessing

Python's global interpreter lock (GIL) keeps multithreading from exploiting multiple cores, but the GIL does not constrain multiple processes. To start multiple processes, import the multiprocessing module.

```python
import multiprocessing
import time

def foo():
    print('ok')
    time.sleep(2)

if __name__ == '__main__':   # this guard is required
    p = multiprocessing.Process(target=foo, args=())
    p.start()
    print('ending')
```

Although you can start many processes, be careful not to start too many: switching between processes consumes a great deal of system resources, and if thousands of child processes are started, the system will crash. Interprocess communication is also a problem. So avoid processes when you can, and use as few as possible.

1. Inter-process communication

There are two ways for processes to communicate: queues and pipes.

1.1 Inter-process queues

Each process occupies its own separate block of memory, and their threads cannot share data, so a parent process can hand a queue to a child process only by passing it as a parameter.

```python
import multiprocessing

def foo(q):
    q.put([11, 'hello', True])

if __name__ == '__main__':
    q = multiprocessing.Queue()   # create a process queue
    # create a child process, passing it the queue object as an argument
    p = multiprocessing.Process(target=foo, args=(q,))
    p.start()
    print(q.get())
```

1.2 Pipes

The sockets we used before are actually pipes: the client's sock and the server's conn are the two ends of the pipe. It's the same game with processes; you work with the two ends of the pipe.

```python
from multiprocessing import Pipe, Process

def foo(sk):
    sk.send('hello')     # the child process sends a message
    print(sk.recv())     # the child process receives a message

if __name__ == '__main__':
    sock, conn = Pipe()  # create the two ends of the pipe
    p = Process(target=foo, args=(sock,))
    p.start()
    print(conn.recv())   # the parent process receives a message
    conn.send('hi son')  # the parent process sends a message
```

2. Inter-process data sharing

We have implemented interprocess communication in two ways, process queues and pipes, but data sharing between processes has not yet been achieved.

Data sharing between processes requires a Manager object; every shared data type used is created through the manager.

```python
from multiprocessing import Process
from multiprocessing import Manager

def foo(l, i):
    l.append(i * i)

if __name__ == '__main__':
    manager = Manager()
    mlist = manager.list([11, 22, 33])   # create a shared list
    l = []
    for i in range(5):
        # start 5 child processes
        p = Process(target=foo, args=(mlist, i))
        p.start()
        l.append(p)
    for p in l:
        p.join()   # join() waits for each process to finish before moving on
    print(mlist)
```

3. Process Pool

The process pool maintains a maximum number of processes; if the maximum is exceeded, requests block until a process becomes available.

```python
from multiprocessing import Pool
import time

def foo(n):
    print(n)
    time.sleep(2)

if __name__ == '__main__':
    pool_obj = Pool(5)   # create the process pool
    # create processes through the pool
    for i in range(5):
        p = pool_obj.apply_async(func=foo, args=(i,))
        # p is the result object the pool hands back
    # using a pool: call close() first, then join(); just remember the order
    pool_obj.close()
    pool_obj.join()
    print('ending')
```

There are several methods in the process pool:

1. apply: take a process from the pool and run a task synchronously
2. apply_async: the asynchronous version of apply
3. terminate: shut down the pool immediately
4. join: the main process waits for all child processes to finish; it must come after close() or terminate()
5. close: stop accepting new tasks; the pool shuts down once the outstanding work finishes
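A minimal sketch of apply versus apply_async (the square function and its arguments are made up for illustration):

```python
from multiprocessing import Pool

def square(n):
    # runs inside one of the pool's worker processes
    return n * n

if __name__ == '__main__':
    with Pool(3) as pool:                   # at most 3 worker processes
        r = pool.apply_async(square, (4,))  # submit without blocking
        print(r.get(timeout=10))            # 16: fetch the result later
        print(pool.apply(square, (5,)))     # 25: the blocking variant
```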

V. Coroutines

With coroutines in hand, the world is mine. Once you know coroutines, you can forget the processes and threads we talked about earlier.

Coroutines can be opened in great numbers, with no upper limit, and the cost of switching between them is negligible.

1. yield

First, recall the word yield. Familiar? Yes, the one generators use. yield is a magical thing, and it is one of Python's distinctive features.

An ordinary function stops when it hits return and hands back a value, None by default. yield is like return, except the function does not stop for good: it pauses until it is resumed by next() (the for loop also works by calling next()), and then continues executing. A variable can also stand in front of yield: the send() function passes a value into the generator, and that value is stored in the variable in front of yield.
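A minimal illustration of send() resuming the generator and binding the sent value to the variable in front of yield (the counter function is made up):

```python
def counter():
    total = 0
    while True:
        x = yield total   # pause here; send(value) resumes and binds x
        total += x

c = counter()
print(next(c))     # 0: runs up to the first yield
print(c.send(10))  # 10
print(c.send(5))   # 15
```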

```python
import time

def consumer():   # contains yield, so it is a generator
    r = ''
    while True:
        n = yield r   # the program pauses here, waiting for a next()/send() signal
        # if not n:
        #     return
        print('consumer <--%s..' % n)
        time.sleep(1)
        r = 'ok'

def producer(c):
    next(c)   # activate the generator c
    n = 0
    while n < 5:
        n = n + 1
        print('producer -->%s..' % n)
        cr = c.send(n)   # send data into the generator
        print('consumer return:', cr)
    c.close()   # production finished, close the generator

if __name__ == '__main__':
    c = consumer()
    producer(c)
```

Look at the example above: the whole process runs without any lock, yet it keeps the data safe and even controls the ordering; an elegant implementation of concurrency that leaves multithreading several streets behind.

A thread is called a lightweight process, and a coroutine is called a microthread. A coroutine has its own register context and stack, so it can retain the state of its last call.

2. The greenlet module

This module wraps yield and makes switching between routines very convenient, but it does not provide a way to pass values.

```python
from greenlet import greenlet

def foo():
    print('ok1')
    gr2.switch()
    print('ok3')
    gr2.switch()

def bar():
    print('ok2')
    gr1.switch()
    print('ok4')

gr1 = greenlet(foo)
gr2 = greenlet(bar)
gr1.switch()   # start
```

3. The gevent module

On top of the greenlet module, an even more powerful module was developed: gevent.

gevent provides more complete coroutine support for Python; its basic principle is:

When a greenlet encounters an IO operation, it automatically switches to another greenlet, and switches back once the IO operation completes. This ensures that some greenlet is always running rather than waiting.

```python
from gevent import monkey
monkey.patch_all()   # patch the standard library so blocking IO becomes cooperative
import requests
import gevent
import time

def foo(url):
    response = requests.get(url)
    response_str = response.text
    print('get data %s' % len(response_str))

s = time.time()
gevent.joinall([gevent.spawn(foo, "https://itk.org/"),
                gevent.spawn(foo, "https://www.github.com/"),
                gevent.spawn(foo, "https://zhihu.com/"),
                ])
# foo("https://itk.org/")
# foo("https://www.github.com/")
# foo("https://zhihu.com/")
print(time.time() - s)
```

4. Advantages and disadvantages of coroutines:

Advantages:

Context switching consumes less

Easy switching of control flow and simplified programming model

High concurrency, high scalability, low cost

Disadvantages:

Cannot take advantage of multi-core

When one coroutine blocks, the entire program blocks

VI. IO Models

Below we compare four IO models:

1.blocking IO

2.nonblocking IO

3.IO multiplexing

4.asynchronous IO

Take network data transfer as the IO example. It involves two system objects: the thread or process that calls the IO, and the system kernel. Reading data goes through two stages:

1. Waiting for the data to be ready

2. Copying the data from kernel space into user space (network data transfer is carried out by a physical device, which only the operating system kernel can drive, but the data is used by a program in user space, hence this switching step)

1. blocking IO

Consider a typical read operation.

Under Linux, sockets are blocking by default. Recall the socket programs we wrote: sock and conn are two connections, and the server can listen on only one connection at a time, so while the server waits for one client to send a message, other clients cannot connect to it.

In this model, both waiting for the data and copying the data require waiting, so the whole process blocks.

2. nonblocking IO

After the server establishes the connection, a setblocking(False) call switches it into non-blocking IO mode.

In this mode, if data is there you take it; if not, an error is raised, which you can wrap in an exception handler. The thread is not blocked while waiting for the data, but it still blocks while the data is being copied.

The advantage is that the waiting time can be used for other work, but the drawbacks are obvious: there are many system calls, which is very costly, and when data arrives while the program is off doing something else, the data is not lost, but the program does not receive it in real time.
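The exception capture mentioned above can be sketched with a non-blocking socket pair (socketpair() stands in here for a real client connection):

```python
import socket

# a non-blocking socket returns immediately instead of waiting for data;
# when nothing has arrived yet, recv() raises BlockingIOError
a, b = socket.socketpair()
a.setblocking(False)

try:
    data = a.recv(1024)
except BlockingIOError:       # the exception capture the text mentions
    data = None
print(data)                   # None: nothing was sent yet

b.send(b'ping')               # put something on the wire
data = a.recv(1024)           # now the non-blocking recv succeeds at once
print(data)                   # b'ping'
```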

3. IO multiplexing

This model is the most commonly used. The accept() we used before has two functions:

1. Listening, waiting for connection

2. Establish a connection

Now we use select to replace accept's first function. The advantage of select is that it can monitor many objects; no matter which object becomes active, it reacts and collects the active objects into a list.

```python
import socket
import select

sock = socket.socket()
sock.bind(('127.0.0.1', 8080))
sock.listen(5)

inp = [sock, ]
while True:
    r = select.select(inp, [], [])
    print('r', r[0])
    for obj in r[0]:
        if obj == sock:
            conn, addr = obj.accept()
```

But establishing the connection is still accept's job. With this, we can implement TCP chat in a concurrent way:

```python
# server
import socket
import time
import select

sock = socket.socket()
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(('127.0.0.1', 8080))
sock.listen(5)

inp = [sock, ]   # the list of monitored socket objects
while True:
    r = select.select(inp, [], [])
    print('r', r[0])
    for obj in r[0]:
        if obj == sock:
            conn, addr = obj.accept()
            inp.append(conn)
        else:
            data = obj.recv(1024)
            print(data.decode('utf8'))
            response = input('>>>>: ')
            obj.send(response.encode('utf8'))
```

Only while a connection is being established is sock the active object in the list; once the connection exists and messages are being sent and received, the active object is conn rather than sock. That is why the loop checks whether the object in the list is sock.

In this model, the process still blocks while waiting for data and while copying data, so it is sometimes also described as fully blocking; its advantage over the blocking IO model is that it can handle multiple connections.

Besides select, IO multiplexing comes in two more flavors: poll and epoll.

Windows supports only select; Linux has all three. Epoll is the best of them. The only advantage of select is that it works across platforms, but its weakness is obvious: poor efficiency. Poll is a transition between select and epoll; compared with select, it places no limit on the number of monitored objects. Epoll has no maximum connection limit either, and its monitoring mechanism is completely different: select works by polling (checking every object on every pass, whether or not anything changed), while epoll uses callback functions: whichever object changes, its callback function is invoked.
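As a quick check of which mechanism your platform offers, the selectors module from the next section exposes the best available one as DefaultSelector:

```python
import selectors

# DefaultSelector picks the most efficient mechanism the OS provides:
# EpollSelector on Linux, KqueueSelector on BSD/macOS, SelectSelector otherwise
sel = selectors.DefaultSelector()
print(type(sel).__name__)
sel.close()
```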

4. asynchronous IO

This model is non-blocking all the way through; only fully non-blocking IO can be called asynchronous. Although the model looks good, in practice, when requests are very heavy, efficiency can be very low, and the burden on the operating system is heavy.

VII. The selectors Module

With this module, we don't need to care whether the underlying mechanism is select, poll, or epoll; this module is their common interface. We just need to know how to use the interface; what it encapsulates doesn't need to be considered.

In this module, the register() method binds a socket to a function. Using the module is quite formulaic; a server example follows:

```python
import selectors
import socket

sel = selectors.DefaultSelector()

sock = socket.socket()
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(('127.0.0.1', 8080))
sock.listen(5)
sock.setblocking(False)

def read(conn, mask):
    data = conn.recv(1024)
    print(data.decode('utf8'))
    res = input('>>>>>>: ')
    conn.send(res.encode('utf8'))

def accept(sock, mask):
    conn, addr = sock.accept()
    # bind (register) the socket object conn and the read function:
    # when conn changes, the bound function can run
    sel.register(conn, selectors.EVENT_READ, read)

sel.register(sock, selectors.EVENT_READ, accept)  # the middle argument is fixed

while True:
    events = sel.select()   # monitor the registered socket objects
    # the lines below are basically boilerplate
    # print('events', events)
    for key, mask in events:
        callback = key.data            # the bound function
        # key.fileobj is the active socket object; mask is fixed
        callback(key.fileobj, mask)    # callback is the callback function
```

