Processes, Threads, and Coroutines in Python


I. Processes and Threads

1. Process

Every application on our computers is a process. Assume the machine is single-core: the CPU can execute only one process at a time. When a program hits I/O and blocks, it would be wasteful for the CPU to sit and wait with it, so the CPU switches to another program. Switching means the state of the previous program must be saved so it can be restored later; something is needed to record that state, and this is where the concept of a process comes in.

A process is the dynamic execution of a program over a dataset. It consists of three parts: the program, the dataset, and the process control block. The program describes the process's functions and how to carry them out; the dataset is the resources the program uses during execution; the process control block saves the running state.

2. Threads

Multiple threads can be opened inside one process. Why do we need threads rather than just processes? Within a program, threads share one set of data. If everything were done with processes, each process would occupy its own memory, and that set of data would have to be copied into every one of them. That is unreasonable, hence threads.

A thread, also called a lightweight process, is the basic unit of CPU execution and the smallest unit in program execution. A process has at least one main thread; inside the main thread, the threading module is used to open child threads.

3. The relationship between processes and threads

(1) A thread can belong to only one process; a process can have multiple threads, but at least one.

(2) Resources are allocated to processes. The process is the main body of the program, and all threads of the same process share all of its resources.

(3) The CPU is allocated to threads; that is, what really runs on the CPU is the thread.

(4) The thread is the smallest execution unit; the process is the smallest resource-management unit.

4. Parallelism and concurrency

Parallel processing is a computing method in which two or more tasks execute simultaneously; a parallel system can work on different parts of the same program at the same moment.

In concurrent processing, several programs run on one CPU within the same period of time, but at any single instant only one of them is actually on the CPU.

The focus of concurrency is the ability to handle multiple tasks over the same period of time, while the focus of parallelism is the ability to execute multiple tasks at the same instant. Parallelism is a subset of concurrency.


A note here: Python has the GIL, a global lock that allows only one thread in a process to use the CPU at any given time.

II. The threading Module

This module is used to create new threads. There are two ways to create one:

1. Create directly

import threading
import time

def foo(n):
    print('>>>>>>>>>>>>>>>>>> %s' % n)
    time.sleep(3)
    print('thread 1')

t1 = threading.Thread(target=foo, args=(2,))  # args must be a tuple; t1 is the child thread object
t1.start()  # run the child thread
print('ending')

The code above creates a child thread inside the main thread.

The result: >>>>>>>>>>>>>>>>>> 2 is printed first, then ending, and after a 3-second wait, thread 1.

2. The other way: create the thread object by inheriting from the Thread class.

import threading
import time

class MyThread(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)

    def run(self):
        print('ok')
        time.sleep(2)
        print('end')

t1 = MyThread()  # create the thread object
t1.start()       # activate the thread object
print('end again')

3. The join() method

This method makes the caller wait until the child thread has finished running.

import threading
import time

def foo(n):
    print('>>>>>>>>>>>>>>>>>> %s' % n)
    time.sleep(n)
    print('thread 1')

def bar(n):
    print('>>>>>>>>>>>>>>>>>> %s' % n)
    time.sleep(n)
    print('thread 2')

s = time.time()
t1 = threading.Thread(target=foo, args=(2,))
t1.start()  # run the child thread
t2 = threading.Thread(target=bar, args=(5,))
t2.start()
t1.join()  # blocks only the main thread
t2.join()
print(time.time() - s)
print('ending')
'''
Result:
>>>>>>>>>>>>>>>>>> 2
>>>>>>>>>>>>>>>>>> 5
thread 1
thread 2
5.001286268234253
ending
'''

4. The setDaemon() method

This method declares a thread to be a daemon thread; it must be set before start() is called.

By default, after the main thread finishes its own code it checks whether the child threads are complete; if not, it waits for them before exiting. If instead you want the main thread to exit as soon as it finishes, without waiting for the child threads, set setDaemon(True).

import threading
import time

class MyThread(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)

    def run(self):
        print('ok')
        time.sleep(2)
        print('end')

t1 = MyThread()     # create a thread object
t1.setDaemon(True)  # must come before start()
t1.start()          # activate the thread object
print('end again')

# Result: "ok" and "end again" are printed immediately,
# then the program terminates; "end" is never printed.

The main thread is a non-daemon thread by default, and child threads inherit this from the main thread, so by default both are non-daemon.

5. Other Methods

isAlive(): returns whether the thread is active.

getName(): returns the thread's name.

setName(): sets the thread's name.

threading.currentThread(): returns the current Thread object.

threading.enumerate(): returns a list of the threads that are running.

threading.activeCount(): returns the number of threads that are running.
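
A minimal sketch of these methods in action. Current Python spells them in snake_case, so the code below uses the modern names, with the camelCase names above shown in the comments:

import threading
import time

def worker():
    time.sleep(1)

t = threading.Thread(target=worker, name='worker-1')  # equivalent to setName()
t.start()
print(t.name)                      # getName(): 'worker-1'
print(t.is_alive())                # isAlive(): True while worker() is still sleeping
print(threading.current_thread())  # currentThread(): the main thread here
print(threading.enumerate())       # the running Thread objects: main + worker-1
print(threading.active_count())    # activeCount(): 2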

III. Various Locks

1. Synchronization lock (user lock / mutex)

Let's look at an example:

The requirement: a global variable starts at 100, and we open 100 threads, each of which subtracts one from it. The final result should be 0.

import threading
import time

def sub():
    global num
    temp = num
    num = temp - 1
    time.sleep(2)

num = 100
l = []
for i in range(100):
    t = threading.Thread(target=sub, args=())
    t.start()
    l.append(t)
for t in l:
    t.join()
print(num)

Everything seems normal. Now change it: inside sub(), between temp = num and num = temp - 1, add time.sleep(0.1). A problem appears: the result, printed after about two seconds, becomes 99. What about time.sleep(0.0001)? The result varies from run to run, but it hovers around 90. What is going on?

This brings us back to Python's GIL. Let's analyze what happens:

We defined the global variable num = 100 and opened 100 child threads, but the GIL allows only one thread to use the CPU at any moment, so all 100 threads compete for that lock, and whichever grabs it runs its code. In the original version, each thread that got the CPU immediately performed the decrement, so nothing went wrong. After the modification, a thread sleeps for 0.1 seconds before decrementing. Sleeping counts as I/O blocking, and the CPU will not wait around for it: while one thread sleeps, the others grab the CPU in turn. For the CPU, 0.1 seconds is a very long time, long enough for every other thread to grab it once before the first thread wakes up, so each of them reads num as 100. When they wake up, every one of them computes 100 - 1, and the final result is 99. By the same reasoning, with a much shorter sleep such as 0.0001 seconds, the first thread may already be awake and have written its result while only 90 or so threads have had their turn; the later threads then read 99, and as the second, third, and subsequent threads wake up and each write their own result, the final value becomes unpredictable.

This is a thread-safety problem, and it comes up whenever threads are involved. The solution is to lock.

We apply a global lock, lock the operations that touch the data, and the code effectively becomes serial:

import threading
import time

def sub():
    global num
    lock.acquire()  # acquire the lock
    temp = num
    time.sleep(0.001)
    num = temp - 1
    lock.release()  # release the lock
    time.sleep(2)

num = 100
l = []
lock = threading.Lock()
for i in range(100):
    t = threading.Thread(target=sub, args=())
    t.start()
    l.append(t)
for t in l:
    t.join()
print(num)

Once the lock is acquired, it must be released before it can be acquired again. Such a lock is called a user lock (mutex).

2. Deadlocks and recursive locks

A deadlock occurs when two or more processes or threads wait on each other during execution; without outside intervention they will be stuck there forever. For example:

Deadlock example

import threading, time

class MyThread(threading.Thread):
    def run(self):
        self.foo()
        self.bar()

    def foo(self):
        LockA.acquire()
        # every thread has a default name; self.name retrieves it
        print('I am %s GET LOCKA ------ %s' % (self.name, time.ctime()))
        LockB.acquire()
        print('I am %s GET LOCKB ----- %s' % (self.name, time.ctime()))
        LockB.release()
        time.sleep(1)
        LockA.release()

    def bar(self):
        LockB.acquire()
        print('I am %s GET LOCKB ------ %s' % (self.name, time.ctime()))
        LockA.acquire()
        print('I am %s GET LOCKA ----- %s' % (self.name, time.ctime()))
        LockA.release()
        LockB.release()

LockA = threading.Lock()
LockB = threading.Lock()

for i in range(10):
    t = MyThread()
    t.start()

# Result:
# I am Thread-1 GET LOCKA ------ Sun Jul 23 11:25:48 2017
# I am Thread-1 GET LOCKB ----- Sun Jul 23 11:25:48 2017
# I am Thread-1 GET LOCKB ------ Sun Jul 23 11:25:49 2017
# I am Thread-2 GET LOCKA ------ Sun Jul 23 11:25:49 2017
# then it hangs

In the example above, thread 2 is waiting for thread 1 to release lock B, while thread 1 is waiting for thread 2 to release lock A; each blocks the other.

When we use mutex locks, this problem is very likely to occur as soon as several locks are in play.

To solve this, Python provides the RLock (recursive lock). An RLock maintains a Lock and a counter: each acquire() increments the counter by 1 and each release() decrements it by 1; only when the counter is 0 can another thread acquire the resource. Replacing the Lock with an RLock below, the program no longer hangs:

Recursive lock example

import threading, time

class MyThread(threading.Thread):
    def run(self):
        self.foo()
        self.bar()

    def foo(self):
        RLock.acquire()
        print('I am %s GET LOCKA ------ %s' % (self.name, time.ctime()))
        RLock.acquire()
        print('I am %s GET LOCKB ----- %s' % (self.name, time.ctime()))
        RLock.release()
        time.sleep(1)
        RLock.release()

    def bar(self):
        RLock.acquire()
        print('I am %s GET LOCKB ------ %s' % (self.name, time.ctime()))
        RLock.acquire()
        print('I am %s GET LOCKA ----- %s' % (self.name, time.ctime()))
        RLock.release()
        RLock.release()

RLock = threading.RLock()

for i in range(10):
    t = MyThread()
    t.start()

This lock is also called a recursive lock.

3. Semaphore

A semaphore is also a lock, but you can specify how many threads may hold it at the same time; here it is five (the mutex above can be held by only one thread at a time).

import threading
import time

semaphore = threading.Semaphore(5)

def foo():
    semaphore.acquire()
    time.sleep(2)
    print('ok')
    semaphore.release()

for i in range(10):
    t = threading.Thread(target=foo, args=())
    t.start()

The result: five ok lines are printed every two seconds.

4. Event object

Threads run independently. If threads need to communicate, or one thread needs to act based on the state of another, the Event object is used. Think of an Event object as a flag whose default value is False: any thread that waits on the Event while its flag is False will block; once the flag becomes True, every thread waiting on the Event is awakened.

event.isSet(): returns the event's status value.
event.wait(): blocks the thread if event.isSet() == False.
event.set(): sets the status to True; all threads blocked on the event are activated and wait to be scheduled by the operating system.
event.clear(): resets the status to False.

Use an example to demonstrate the usage of the Event object:

import threading, time

event = threading.Event()  # create an Event object

def foo():
    print('wait.......')
    event.wait()
    # event.wait(1)
    # if the flag in the Event object is False, block here. The argument to
    # wait() means: wait at most 1 second; if the flag still has not changed
    # by then, stop waiting and run the code below.
    print('connect to redis server')

for i in range(5):
    t = threading.Thread(target=foo, args=())
    t.start()

print('attempt to start redis server')
time.sleep(3)
event.set()  # three seconds later, set the flag to True

# After three seconds the main thread finishes, but the child threads are not
# daemon threads and have not finished yet, so the program does not end until
# event.set() releases them.

5. Queue

The official documentation says queues are very useful for ensuring data safety across multiple threads.

A queue is a data structure for storing and reading/writing data, like a list with a lock on it.

5.1 get and put Methods

import queue
# a queue has only the put and get methods for reading and writing data;
# none of the list methods exist on it

q = queue.Queue()  # create a queue object, FIFO (first in, first out)
# q = queue.Queue(20)
# an optional argument sets the maximum size, which you can picture as the
# number of slots; if it is 20, the 21st put() blocks until a slot frees up,
# that is, until some data is gotten

q.put(11)        # put a value in
q.put('hello')
q.put(3.14)

print(q.get())   # 11
print(q.get())   # hello
print(q.get())   # 3.14
print(q.get())   # blocks, waiting for a put

The get method has a default parameter block=True; if you change it to False and the queue has no value, it raises queue.Empty.

That is equivalent to q.get_nowait().
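
A small sketch of the non-blocking get, assuming an empty queue:

import queue

q = queue.Queue()
try:
    q.get(block=False)   # same as q.get_nowait()
except queue.Empty:
    print('the queue is empty')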

5.2 join and task_done Methods

import queue
import threading

q = queue.Queue()

def foo():  # store data
    q.put(111)
    q.put(222)
    q.put(333)
    q.join()  # with join, the program stops here until every item is marked done
    print('ok')

def bar():
    print(q.get())
    q.task_done()
    print(q.get())
    q.task_done()
    print(q.get())
    q.task_done()  # add task_done after every get()

t1 = threading.Thread(target=foo, args=())
t1.start()
t2 = threading.Thread(target=bar, args=())
t2.start()
# it doesn't matter which of t1 and t2 runs first,
# because join blocks while waiting for the signals

5.3 Other Methods

q.qsize(): returns the size of the queue.
q.empty(): returns True if the queue is empty, otherwise False.
q.full(): returns True if the queue is full, otherwise False.
q.full corresponds to maxsize.
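
A quick sketch of these three methods (the maxsize of 2 is illustrative):

import queue

q = queue.Queue(maxsize=2)
q.put(1)
print(q.qsize())   # 1
print(q.empty())   # False
q.put(2)
print(q.full())    # True: qsize has reached maxsize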

5.4 Other modes

The queues above are all first-in-first-out (FIFO). There are also last-in-first-out (LIFO) queues and priority queues.

A LIFO queue is created with: class queue.LifoQueue(maxsize)

A priority queue is created with: class queue.PriorityQueue(maxsize)

q = queue.PriorityQueue()
q.put([5, 100])   # the brackets just denote a sequence type; tuples work too, but all entries must use the same type
q.put([7, 200])
q.put([3, "hello"])
q.put([4, {"name": "alex"}])

The first element in the brackets is the priority.
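
Getting the items back shows the ordering; continuing the snippet above, the lowest number comes out first:

print(q.get())   # [3, 'hello'] -- the lowest number has the highest priority
print(q.get())   # [4, {'name': 'alex'}]
print(q.get())   # [5, 100]
print(q.get())   # [7, 200]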

5.5 The producer-consumer model

A producer is a thread that generates data; a consumer is a thread that takes data. When writing a program we must consider whether the capacity to produce data matches the capacity to consume it. If they do not match, one side has to wait, which is why the producer-consumer model was introduced.

The model uses a container to solve the strong coupling between producer and consumer: with the container, they do not need to communicate directly. That container is a blocking queue, which acts as a buffer and balances the capabilities of both sides. (Isn't the directory structure we use when writing programs also a form of decoupling?)

Besides solving the strong-coupling problem, the producer-consumer model also enables concurrency.

When the producer's and consumer's capabilities do not match, add a restriction such as if q.qsize() < 20, as in the sketch below.
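
Here is a minimal sketch of the model; the item counts, sleep time, and maxsize of 20 are illustrative, not from the original post:

import threading
import queue
import time

q = queue.Queue(maxsize=20)   # the blocking queue acting as the buffer

def producer():
    for n in range(1, 11):
        if q.qsize() < 20:    # the kind of restriction mentioned above
            q.put(n)
            print('produced %s' % n)
        time.sleep(0.1)

def consumer():
    for _ in range(10):
        data = q.get()        # blocks until the producer puts something
        print('consumed %s' % data)
        q.task_done()

threading.Thread(target=producer).start()
threading.Thread(target=consumer).start()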

IV. Multi-Process

Python's global interpreter lock (GIL) prevents multiple threads from using multiple cores, but it cannot restrict multiple processes. How do we start multiple processes? Import the multiprocessing module.

import multiprocessing
import time

def foo():
    print('ok')
    time.sleep(2)

if __name__ == '__main__':  # this guard is required
    p = multiprocessing.Process(target=foo, args=())
    p.start()
    print('ending')

Although we can now start multiple processes, note that we must not start too many: switching between processes consumes a lot of system resources, and starting thousands of child processes would bring the system down. Inter-process communication is also a problem. So do not use processes where they are not needed.

1. Inter-process communication

There are two ways for processes to communicate: queues and pipes.

1.1 Inter-process queues

Each process occupies an independent block of memory; unlike threads, processes do not share data. Therefore, the parent process can only hand a queue to the child process by passing it as a parameter.

import multiprocessing

def foo(q):
    q.put([12, 'hello', True])

if __name__ == '__main__':
    q = multiprocessing.Queue()  # create a process queue
    # create a child process, handing it the queue object as a parameter
    p = multiprocessing.Process(target=foo, args=(q,))
    p.start()
    print(q.get())

1.2 Pipelines

The sockets we learned earlier are actually pipes: the client's sock and the server's conn are the two ends of one pipe. Between processes the same idea applies; we need the two ends of a pipe.

from multiprocessing import Pipe, Process

def foo(sk):
    sk.send('hello')     # the child process sends a message
    print(sk.recv())     # the child process receives a message

sock, conn = Pipe()      # create the two ends of the pipe

if __name__ == '__main__':
    p = Process(target=foo, args=(sock,))
    p.start()
    print(conn.recv())   # receive the child process's message
    conn.send('hi son')  # send a message to the child process

2. Data sharing between processes

We have achieved inter-process communication through process queues and pipes, but we have not yet implemented data sharing.

To share data between processes, a Manager object must be used; every shared data type must be created through that manager.

from multiprocessing import Process
from multiprocessing import Manager

def foo(l, i):
    l.append(i * i)

if __name__ == '__main__':
    manager = Manager()
    Mlist = manager.list([11, 22, 33])  # create a shared list
    l = []
    for i in range(5):  # open 5 child processes
        p = Process(target=foo, args=(Mlist, i))
        p.start()
        l.append(p)
    for p in l:
        p.join()  # wait for each process to finish before moving on
    print(Mlist)

3. Process pool

A process pool maintains a maximum number of processes; when the maximum is exceeded, the program blocks until a process becomes available.

from multiprocessing import Pool
import time

def foo(n):
    print(n)
    time.sleep(2)

if __name__ == '__main__':
    pool_obj = Pool(5)  # create a process pool
    # create processes through the pool
    for i in range(5):
        p = pool_obj.apply_async(func=foo, args=(i,))
        # p is the object created through the pool
    # usage: close() first, then join(); remember this order
    pool_obj.close()
    pool_obj.join()
    print('ending')

The process pool has several methods:

1. apply: take a process from the pool and execute it.
2. apply_async: the asynchronous version of apply.
3. terminate: shut the pool down immediately.
4. join: the main process waits for all child processes to finish; it must come after close or terminate.
5. close: the pool closes once all processes have finished.
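
A small sketch of how these methods fit together (the square function is illustrative):

from multiprocessing import Pool

def square(n):
    return n * n

if __name__ == '__main__':
    pool = Pool(3)
    print(pool.apply(square, (4,)))       # apply blocks and returns 16 directly
    res = pool.apply_async(square, (5,))  # apply_async returns an AsyncResult immediately
    pool.close()                          # no more tasks may be submitted
    pool.join()                           # must come after close() or terminate()
    print(res.get())                      # 25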

V. Coroutines

With coroutines in hand, the world is mine. If you learn coroutines, you can forget all the processes and threads discussed above.

You can open as many coroutines as you want; there is no upper limit, and the cost of switching between them is negligible.

1. yield

Let's first look at the word yield; if you are familiar with it, it is what generators use. yield is a magical thing, and a distinctive feature of Python.

An ordinary function stops when it hits return and hands back the value after return (None by default). yield is like return, except the function does not stop; it pauses, and resumes only when next() arrives (the for loop works via next() too). Before resuming, you can also use send() to pass a value in: it is assigned to the variable on the left of the yield.

import time

def consumer():  # contains yield, so it is a generator
    r = ''
    while True:
        n = yield r  # the function pauses here, waiting for the next()/send() signal
        # if not n:
        #     return
        print('consumer <-- %s..' % n)
        time.sleep(1)
        r = '200 OK'

def producer(c):
    next(c)  # activate the generator c
    n = 0
    while n < 5:
        n = n + 1
        print('producer --> %s..' % n)
        cr = c.send(n)  # send data to the consumer
        print('consumer return:', cr)
    c.close()  # production is over; close the generator

if __name__ == '__main__':
    c = consumer()
    producer(c)

Look at the example above: there is no lock anywhere in the process, yet data safety is guaranteed. Better still, the execution order can be controlled, achieving concurrency elegantly.

A thread is called a lightweight process, and likewise a coroutine is called a micro-thread. A coroutine has its own register context and stack, so it can retain the state of its last call.

2. greenlet Module

This module wraps yield and makes switching very convenient, but it cannot pass values between switches.

from greenlet import greenlet

def foo():
    print('ok1')
    gr2.switch()
    print('ok3')
    gr2.switch()

def bar():
    print('ok2')
    gr1.switch()
    print('ok4')

gr1 = greenlet(foo)
gr2 = greenlet(bar)
gr1.switch()  # start

3. gevent Module

Building on the greenlet module, the better module gevent was developed.

gevent provides more complete coroutine support for Python. The basic principle:

When a greenlet encounters an I/O operation, it automatically switches to another greenlet, then switches back once the I/O completes. This ensures a greenlet is always running instead of waiting.

import requests
import gevent
import time

def foo(url):
    response = requests.get(url)
    response_str = response.text
    print('get data %s' % len(response_str))

s = time.time()
gevent.joinall([
    gevent.spawn(foo, "https://itk.org/"),
    gevent.spawn(foo, "https://www.github.com/"),
    gevent.spawn(foo, "https://zhihu.com/"),
])
# foo("https://itk.org/")
# foo("https://www.github.com/")
# foo("https://zhihu.com/")
print(time.time() - s)

4. Advantages and Disadvantages of coroutine:

Advantages:

Low context switching consumption

Easy to switch control flow and simplify programming model

High concurrency, high scalability, and low cost

Disadvantages:

Coroutines cannot use multiple cores

When a blocking operation occurs, the entire program blocks

VI. IO Models

We will compare the following four IO models:

1. blocking IO

2. nonblocking IO

3. IO multiplexing

4. asynchronous IO

Take the I/O of data transmitted over the network as the example. It involves two system objects: the thread or process that invokes the IO, and the system kernel. A read operation goes through two phases:

Waiting for data preparation

Copying the data from kernel space to user space (network transmission is done by physical devices, and hardware can only be handled in the operating system's kernel state, while the data is ultimately used by a user program, so this copy step is unavoidable)

1. Blocking IO


Under Linux, sockets are blocking by default. Recall the sockets we used before: sock and conn were two connections, and the server could only listen to one at a time; while the server waited for a client to send a message, no other connection could get through.

In this mode, both waiting for the data and copying the data require waiting, so the whole process is blocked.
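
As a minimal sketch (the address and buffer size are illustrative; this mirrors the earlier socket examples the post refers to):

import socket

sock = socket.socket()
sock.bind(('127.0.0.1', 8080))
sock.listen(5)

conn, addr = sock.accept()   # blocks until a client connects
data = conn.recv(1024)       # blocks again: waiting for data, then copying it
print(data.decode('utf8'))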

2. Non-blocking IO

After the server creates its socket, it is switched to non-blocking IO mode by adding one call: sock.setblocking(False) (the same call appears in the selectors example later).

In this mode, if you ask for data and none has arrived, an error is raised, which you can catch with exception handling. Waiting for the data no longer blocks, but copying the data still does.
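
A minimal sketch of this pattern (the loop structure is an assumption; the original post does not show this code):

import socket

sock = socket.socket()
sock.bind(('127.0.0.1', 8080))
sock.listen(5)
sock.setblocking(False)  # switch to non-blocking mode

while True:
    try:
        conn, addr = sock.accept()  # raises BlockingIOError when no client is waiting
        print('connected by', addr)
        conn.close()
    except BlockingIOError:
        pass  # nothing ready yet; the program is free to do other work here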

The advantage is that the waiting time can be used for other work. The disadvantages are also obvious: there are many system calls, which is costly; and while the program is off doing something else, the data, though not lost, is not received in real time.

3. IO multiplexing

This is the model usually used. The accept() we used before has two roles:

1. Listen and wait for connection

2. Establish a connection

Now we use select to take over the first role of accept(). The advantage of select is that it can monitor many objects at once, respond to whichever becomes active, and collect the active objects into a list.

import socket
import select

sock = socket.socket()
sock.bind(('127.0.0.1', 8080))
sock.listen(5)

inp = [sock, ]
while True:
    r = select.select(inp, [], [])
    print('r', r[0])
    for obj in r[0]:
        if obj == sock:
            conn, addr = obj.accept()

However, establishing the connection is still done by accept(). With select, we can implement a concurrent TCP chat:

# server
import socket
import select

sock = socket.socket()
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(('127.0.0.1', 8080))
sock.listen(5)

inp = [sock, ]  # the list of socket objects to monitor
while True:
    r = select.select(inp, [], [])
    print('r', r[0])
    for obj in r[0]:
        if obj == sock:
            conn, addr = obj.accept()
            inp.append(conn)
        else:
            data = obj.recv(1024)
            print(data.decode('utf8'))
            response = input('>>>>:')
            obj.send(response.encode('utf8'))

sock is active only when a new connection arrives. Once a connection is established, the active object during sending and receiving is the conn, not sock; so in practice you must check whether each object in the list is sock.

In this model, waiting for the data and copying the data still both block, so it is also called full-process blocking. Compared with the blocking IO model, its advantage is that it can handle multiple connections.

Besides select, IO multiplexing has two other implementations: poll and epoll.

Windows supports only select; Linux supports all three, and epoll is the best of them. The only advantage of select is that it works on multiple platforms; its drawback is obvious: efficiency is poor. poll is a transition between select and epoll; unlike select, it places no limit on the number of monitored objects. epoll has no maximum connection limit, and its monitoring mechanism is completely different: select polls (it checks every object on each pass, and keeps checking even after finding a change), whereas epoll registers a callback function for each object, and when an object changes, its callback is invoked.

4. Asynchronous IO

This model is non-blocking from start to finish; only a model with no blocking anywhere can be called asynchronous. Although it sounds good, in practice it is inefficient under heavy request volume and places a heavy burden on the operating system.

VII. selectors Module

Once you learn this module, you no longer need to care whether select, poll, or epoll is used underneath: this module is the common interface to all of them. We only need to know how to use the interface; what it encapsulates is not our concern.

In this module, a socket and a function are bound with the register() method. The usage pattern of the module is fixed. A server example:

import selectors
import socket

sel = selectors.DefaultSelector()

sock = socket.socket()
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(('127.0.0.1', 8080))
sock.listen(5)
sock.setblocking(False)

def read(conn, mask):
    data = conn.recv(1024)
    print(data.decode('utf8'))
    res = input('>>>>>:')
    conn.send(res.encode('utf8'))

def accept(sock, mask):
    conn, addr = sock.accept()
    # bind conn to the read function: registering means that when conn
    # becomes active, the bound function will be called
    sel.register(conn, selectors.EVENT_READ, read)

# bind (register) sock to the accept function
sel.register(sock, selectors.EVENT_READ, accept)

while True:
    events = sel.select()  # monitor the registered socket objects
    # the lines below are essentially fixed boilerplate
    for key, mask in events:
        callback = key.data          # key.data is the bound function
        callback(key.fileobj, mask)  # key.fileobj is the active socket object

That is the whole story of processes, threads, and coroutines. I hope it gives you a useful reference.
