Python IO Model


IO Model Introduction

To understand the IO models better, we first need to review four terms: synchronous, asynchronous, blocking, and non-blocking.

What is the difference between synchronous and asynchronous IO, and between blocking and non-blocking IO? Different people may give different answers; for example, some wikis treat asynchronous IO and non-blocking IO as the same thing. This is because people come from different backgrounds and discuss the question in different contexts. So, in order to answer the question properly, this article first fixes its context.

This article discusses network IO in a Linux environment. The most important reference is Richard Stevens, "UNIX Network Programming Volume 1, Third Edition: The Sockets Networking API", Section 6.2 "I/O Models", in which Stevens explains in detail the features and differences of the various IO models. If your English is good enough, it is recommended to read it directly; Stevens's writing style is famously clear, so there is nothing to worry about. The flowcharts referred to in this article are also taken from that reference.

Stevens compares five IO models in the book:
* blocking IO
* non-blocking IO
* IO multiplexing
* signal driven IO
* asynchronous IO
Since signal driven IO is not commonly used in practice, this article mainly introduces the other four IO models.

Let us restate the objects and steps involved when an IO operation occurs. For a network IO (take read as the example), two system objects are involved: the process (or thread) that calls the IO, and the system kernel. When a read operation occurs, it goes through two stages:

#1) waiting for the data to be ready
#2) copying the data from the kernel into the process

It is important to remember these two stages, because the differences between the IO models come from how each model behaves in these two stages.

Blocking IO (blocking IO)

In Linux, all sockets are blocking by default. A typical read operation flows roughly like this:

(Figure: blocking IO read flow)

When the user process invokes the recvfrom system call, the kernel begins the first phase of IO: preparing the data. For network IO, the data often has not arrived yet at the beginning (for example, a complete UDP packet has not been received), so the kernel must wait until enough data arrives.

On the user-process side, the entire process is blocked. When the kernel has waited until the data is ready, it copies the data from the kernel into user memory and returns the result; only then does the user process leave the blocked state and run again.
Therefore, blocking IO is characterised by being blocked in both phases of the IO operation (waiting for the data and copying the data).

Almost all programmers first meet network programming through interfaces such as listen(), send(), and recv(), which make it very convenient to build a server/client model. However, most socket interfaces are of the blocking type.

PS: A so-called blocking interface is a system call (typically an IO interface) that does not return until the call produces a result or a timeout error occurs, keeping the current thread blocked in the meantime.

Virtually all IO interfaces (including socket interfaces) are blocking unless specified otherwise. This poses a big problem for network programming: while a call such as recv(1024) is blocked, the thread cannot perform any other operation or respond to any other network request.
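For concreteness, here is a minimal blocking echo server in Python (the address and port are placeholders, not from the original text); both accept() and recv() block the only thread, so a second client cannot be served while the server is waiting on the first one.

import socket

# Minimal blocking echo server: every socket call below blocks the single thread.
sk = socket.socket()
sk.bind(('127.0.0.1', 9090))        # placeholder address/port
sk.listen()

while True:
    conn, addr = sk.accept()        # blocks until a client connects
    data = conn.recv(1024)          # blocks until this client sends data
    conn.send(data.upper())
    conn.close()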

A simple solution:

# Use multithreading (or multiprocessing) on the server side. The purpose of multithreading (or multiprocessing) is to give each connection its own thread (or process), so that the blocking of any one connection does not affect the other connections.
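A rough sketch of the thread-per-connection idea (the handler name and address are illustrative, not from the original): each accepted connection gets its own thread, so one blocked recv() only stalls that thread rather than the whole server.

import socket
import threading

def handle(conn):
    # Serve one client; a blocked recv() here only stalls this thread.
    while True:
        data = conn.recv(1024)
        if not data:                # peer closed the connection
            break
        conn.send(data.upper())
    conn.close()

sk = socket.socket()
sk.bind(('127.0.0.1', 9090))        # placeholder address/port
sk.listen()
while True:
    conn, addr = sk.accept()
    threading.Thread(target=handle, args=(conn,), daemon=True).start()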

The problem with this approach is:

# Spawning a process or thread per connection: when hundreds or thousands of connection requests must be answered at the same time, the many processes or threads occupy system resources heavily, lowering the system's responsiveness to the outside world, and the threads and processes themselves are more likely to hang or freeze.

Improved scheme:

# Many programmers will consider using a thread pool or a connection pool. A thread pool reduces the frequency of creating and destroying threads by maintaining a reasonable number of threads and letting idle threads take on new tasks. A connection pool maintains a cache of connections, reusing existing connections as much as possible and reducing the frequency with which connections are created and closed. Both techniques reduce system overhead and are widely used in many large systems, such as WebSphere, Tomcat, and various databases.
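As a sketch of the thread-pool idea in Python (the pool size of 20 and the address are arbitrary choices, not from the original), concurrent.futures.ThreadPoolExecutor keeps a fixed number of worker threads and reuses them for new connections instead of creating one thread per client:

import socket
from concurrent.futures import ThreadPoolExecutor

def handle(conn):
    data = conn.recv(1024)
    conn.send(data.upper())
    conn.close()

sk = socket.socket()
sk.bind(('127.0.0.1', 9090))        # placeholder address/port
sk.listen()

# At most 20 worker threads ever exist; idle workers are reused for new clients.
with ThreadPoolExecutor(max_workers=20) as pool:
    while True:
        conn, addr = sk.accept()
        pool.submit(handle, conn)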

Even the improved scheme has problems:

# Thread pools and connection pools only mitigate, to some extent, the resource consumption caused by frequently invoking the IO interface. Moreover, a "pool" always has an upper limit; when requests greatly exceed that limit, the system built on the pool responds to the outside world little better than one without a pool. So a pool must be sized with the expected scale of requests in mind.

The "thread pool" or "Connection pool" may alleviate some of the stress, but not all of them, in response to the thousands or even thousands of client requests that may appear in the previous example. In short, multithreaded models can easily and efficiently solve small-scale service requests, but in the face of large-scale service requests, multithreading model will encounter bottlenecks, you can use non-blocking interface to try to solve the problem.

Non-blocking IO (non-blocking io)

Under Linux, a socket can be made non-blocking by setting it so (in Python, setblocking(False)). When a read operation is performed on a non-blocking socket, the flow looks like this:

(Figure: non-blocking IO read flow)

As you can see, when the user process issues a read operation and the data in the kernel is not yet ready, the kernel does not block the user process but immediately returns an error. From the user process's point of view, it initiates a read and gets a result at once without waiting. When the user process sees that the result is an error, it knows the data is not ready, so it can do something else in the interval before the next read query, or simply issue the read again. Once the data in the kernel is ready and the kernel receives the user process's system call again, it copies the data to user memory (this phase still blocks) and returns.

That is, after a non-blocking recvfrom system call the process is not blocked: the kernel returns to the process immediately, and if the data is not ready it returns an error. The process can then do something else before issuing the next recvfrom system call, repeating this cycle of recvfrom calls. This is usually called polling: the process polls the kernel until the data is ready, and only then is the data copied to the process for processing. Note that the data-copying phase is still blocking.

Therefore, in non-blocking IO the user process has to keep actively asking the kernel whether the data is ready.

import socket

sk = socket.socket()
sk.bind(('127.0.0.1', 9090))
sk.setblocking(False)            # put the socket into non-blocking mode
sk.listen()

conn_l = []                      # currently open connections
del_l = []                       # connections to remove after each pass

while True:
    try:
        conn, addr = sk.accept()            # raises BlockingIOError if no client is waiting
        conn_l.append(conn)
    except BlockingIOError:
        for conn in conn_l:
            try:
                ret = conn.recv(1024)       # raises BlockingIOError if no data has arrived yet
                if ret:
                    print(ret)
                    conn.send(b'hello')
                else:                       # empty bytes: the peer closed the connection
                    conn.close()
                    del_l.append(conn)
            except (BlockingIOError, OSError):
                pass
        for conn in del_l:
            conn_l.remove(conn)
        del_l.clear()

However, the non-blocking IO model is never recommended on its own.

We cannot deny its advantage of being able to do other things while waiting for the task to complete (including submitting other tasks; that is, several tasks can be running "in the background" at the same time).

But it's also hard to hide its drawbacks:

#1. Calling recv() in a loop drives CPU usage up significantly; this is why a short time.sleep() is often left in such code, otherwise a low-spec host can easily appear to hang.
#2. The response latency for task completion increases, because the task may complete at any moment between two polls of the read operation. This lowers the overall data throughput.
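A minimal sketch of that mitigation (the 0.1-second interval is arbitrary): sleeping between polls keeps the CPU from spinning, at the cost of the extra latency described in point 2.

import socket
import time

sk = socket.socket()
sk.bind(('127.0.0.1', 9090))        # placeholder address/port
sk.setblocking(False)
sk.listen()

while True:
    try:
        conn, addr = sk.accept()
        conn.send(b'hello')
        conn.close()
    except BlockingIOError:
        time.sleep(0.1)             # yield the CPU between polls; adds up to 0.1 s of latency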

In addition, in this scheme recv() mostly plays the role of checking whether "the operation has completed"; the operating system actually provides more efficient interfaces for that check, such as select() multiplexing, which can detect whether multiple connections are active at once.

Multiplexed IO (IO multiplexing)

The term IO multiplexing may sound unfamiliar, but if I say select/epoll you will probably understand. Some places also call this IO mode event-driven IO. As we all know, the benefit of select/epoll is that a single process can handle the IO of multiple network connections at the same time. The basic principle is that the select/epoll function constantly polls all the sockets it is responsible for, and when data arrives on any socket it notifies the user process. Its flow:

When the user process calls select, the whole process is blocked; at the same time the kernel "monitors" all the sockets select is responsible for, and as soon as the data in any one socket is ready, select returns. The user process then calls the read operation to copy the data from the kernel to the user process.
This figure does not look much different from the blocking IO diagram; in fact it is even slightly worse, because two system calls (select and recvfrom) are needed, whereas blocking IO uses only one (recvfrom). However, the advantage of select is that it can handle multiple connections at the same time.

Two points to emphasize:

1. If the number of connections handled is not very high, a web server using select/epoll does not necessarily perform better than one using multithreading plus blocking IO, and its latency may even be higher. The advantage of select/epoll is not that it handles a single connection faster, but that it can handle more connections.

2. In the multiplexing model, each socket is usually set to non-blocking; however, as the figure shows, the whole user process is in fact blocked the entire time, only it is blocked by the select function rather than by socket IO.

Conclusion: the strength of select is handling many connections, not speeding up a single one.

# IO multiplexing - provided by the OS
# 1. the program cannot intervene in the monitoring itself
# 2. the behaviour differs between operating systems
import select
import socket

sk = socket.socket()
sk.bind(('127.0.0.1', 9090))
sk.listen()
sk.setblocking(False)

rlst = [sk]
while True:
    rl, wl, xl = select.select(rlst, [], [])   # e.g. [sk, conn1, conn2]
    # Why is sk suddenly returned?  Because the sk object has data that can be read (a pending connection).
    # Why return three lists?  A read-event list, a write-event list, and an error-event list.
    # Why lists at all?  Several monitored objects may have read events at the same time.
    for obj in rl:
        if obj is sk:            # "is" is more precise here: check that obj is exactly sk
            conn, addr = obj.accept()
            rlst.append(conn)
        else:
            try:
                ret = obj.recv(1024)
                print(ret)
                obj.send(b'hello')
            except ConnectionResetError:
                obj.close()
                rlst.remove(obj)

With the TCP protocol, if the other side shuts down the connection, this side may keep receiving an empty message (b'') or get an error.
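In Python this looks roughly as follows (the function is illustrative, not part of the original code): an orderly shutdown by the peer shows up as recv() returning b'', while an abrupt reset raises ConnectionResetError, and both cases should end with closing the socket.

import socket

def serve_once(conn):
    # Handle one read event on an already connected TCP socket.
    try:
        data = conn.recv(1024)
        if not data:                  # b'' means the peer closed the connection cleanly
            conn.close()
            return
        conn.send(data.upper())
    except ConnectionResetError:      # the peer closed the connection abruptly
        conn.close()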

What is IO multiplexing

IO multiplexing is a mechanism provided by the operating system for monitoring network IO operations.

The application hands the operating system lists of objects to watch (a read list, a write list, and an error list).

When a corresponding event occurs on an object in one of the lists, the operating system notifies the application, and the application then performs the concrete operation based on what was returned.

If only a single object needs to be monitored, IO multiplexing does nothing for concurrency.

For the scenario of receiving network requests concurrently, IO multiplexing can help get this done while saving CPU time and operating-system calls.


The three IO multiplexing mechanisms

select: the mechanism used on Windows (and available on other platforms as well); it polls each monitored object to see whether the corresponding event has occurred, so the more objects there are, the greater the delay, and the number of objects it can handle is limited.

poll: on Linux; its mechanism is basically the same as select's, but the underlying data structure that stores the monitored objects is optimised, so more objects can be handled (see the Python sketch after this list).

epoll: on Linux; uses a callback function to notify the application that an event has occurred, instead of polling every object.
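A minimal sketch using Python's select.poll wrapper (available on Linux/Unix; the address is a placeholder): descriptors are registered once with the events of interest, and poll() returns (fd, event) pairs when something is ready, without the 1024-descriptor limit of select.

import select
import socket

sk = socket.socket()
sk.bind(('127.0.0.1', 9090))            # placeholder address/port
sk.listen()
sk.setblocking(False)

poller = select.poll()
poller.register(sk, select.POLLIN)      # register once; events of interest go in the mask
fd_to_sock = {sk.fileno(): sk}

while True:
    for fd, event in poller.poll():     # (fd, event) pairs that are ready
        obj = fd_to_sock[fd]
        if obj is sk:                   # listening socket readable: a client is connecting
            conn, addr = sk.accept()
            poller.register(conn, select.POLLIN)
            fd_to_sock[conn.fileno()] = conn
        elif event & select.POLLIN:
            data = obj.recv(1024)
            if data:
                obj.send(data.upper())
            else:                       # peer closed the connection
                poller.unregister(fd)
                fd_to_sock.pop(fd)
                obj.close()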

Analysis of the process by which select monitors fd changes:

# The user process creates a socket object and copies the fds to be monitored into kernel space; each fd corresponds to an entry in the system file table. When data arrives for an fd in kernel space, a signal is sent to the user process that data has arrived.
# The user process then issues another system call (for example accept or recv) to copy the data from kernel space into user space and to clear the data from kernel space, so that the fd, when monitored again, can respond to new data (with TCP, the sender clears its copy only after receiving an acknowledgement).

Advantages of this model:

# Compared with other models, the event-driven model using select() runs in a single thread (process), consumes few resources, does not use too much CPU, and can still serve multiple clients. If you are trying to build a simple event-driven server program, this model has some reference value.

Disadvantages of the Model:

# First, the select() interface is not the best choice for implementing event-driven programs, because when the number of handles to probe is large, select() itself spends a great deal of time polling each handle.
# Many operating systems provide more efficient interfaces, such as epoll on Linux, kqueue on BSD, and /dev/poll on Solaris.
# If you need a more efficient server program, an epoll-like interface is recommended. Unfortunately, the epoll-like interfaces of different operating systems differ greatly,
# so it is difficult to use them to implement a server with good cross-platform support.
# Second, this model couples event detection with event response; if a single event response is expensive, it is catastrophic for the whole model.

Asynchronous IO (asynchronous I/O)

Asynchronous IO is not used much on Linux and was only introduced in kernel version 2.6. Let's look at its flow:

After the user process initiates the read operation, it can immediately start doing other things. From the kernel's perspective, when it receives an asynchronous read it first returns immediately, so no block is produced for the user process. The kernel then waits for the data to be ready and copies it into user memory, and when all of this is done the kernel sends a signal to the user process telling it that the read operation is complete.
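Python's standard library does not expose Linux kernel asynchronous IO directly; the closest everyday analogue is asyncio, which is in fact built on IO multiplexing (the selectors module, hence epoll on Linux by default) rather than true kernel AIO, but it gives the programming experience described above: start the operation, do other things, and get notified on completion. A minimal sketch (the address is a placeholder):

import asyncio

async def handle(reader, writer):
    data = await reader.read(1024)      # only this coroutine is suspended here, not the thread
    writer.write(data.upper())
    await writer.drain()
    writer.close()

async def main():
    server = await asyncio.start_server(handle, '127.0.0.1', 9090)   # placeholder address/port
    async with server:
        await server.serve_forever()

asyncio.run(main())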

IO Model comparison and analysis

So far, four IO models have been introduced. Now back to the earlier questions: what is the difference between blocking and non-blocking, and what is the difference between synchronous IO and asynchronous IO?
First the simpler one: blocking vs non-blocking. The previous introduction has made the difference clear. Calling blocking IO blocks the corresponding process until the operation completes, whereas non-blocking IO returns immediately even while the kernel is still preparing the data.

Before explaining the difference between synchronous IO and asynchronous IO, we need definitions of both. The definitions given by Stevens (in fact, the POSIX definitions) are:
A synchronous I/O operation causes the requesting process to be blocked until that I/O operation completes;
An asynchronous I/O operation does not cause the requesting process to be blocked;
The difference is that synchronous IO blocks the process while it performs the "IO operation". By this definition the four IO models fall into two categories: blocking IO, non-blocking IO, and IO multiplexing described earlier all belong to synchronous IO, while asynchronous IO belongs to the second category.

Someone might object that non-blocking IO is not blocked. Here is the subtle point: the "IO operation" in the definition refers to the real IO operation, the recvfrom system call in our example. With non-blocking IO, if the kernel data is not ready, executing recvfrom does not block the process; but when the kernel data is ready, recvfrom copies the data from the kernel into user memory, and during that time the process is blocked. Asynchronous IO is different: when the process initiates the IO operation it returns immediately and can ignore the IO until the kernel sends a signal telling the process that the IO is complete. Throughout the whole process, the process is never blocked.

Comparison of each IO model:

(Figure: comparison of the IO models)

As described above, the difference between non-blocking IO and asynchronous IO is clear. In non-blocking IO, although the process is not blocked most of the time, it still has to check actively, and once the data is ready it must itself call recvfrom to copy the data into user memory. Asynchronous IO is completely different: it is as if the user process hands the whole IO operation over to someone else (the kernel), who sends a signal when it is finished. During that time the user process neither needs to check the status of the IO operation nor needs to copy the data itself.

Selectors module

IO multiplexing: to explain this term, first understand the concept of multiplexing, i.e. sharing. That may still sound abstract, so consider how multiplexing is used in the communications field: to make full use of the physical medium of a network link, time-division or frequency-division multiplexing is often used to carry several signals over the same link. So multiplexing basically means making one shared "medium" do as much work of the same kind as possible. What, then, is the "medium" in IO multiplexing? Look at the server programming model: for each client request the server creates a process to serve it, but processes cannot be created without limit, so to handle large numbers of clients IO multiplexing was introduced, that is, a single process serves multiple client requests at the same time. The "medium" of IO multiplexing is therefore a process (more precisely select and poll, since the process does its work by calling select and poll): one process (via select and poll) is reused to serve multiple IO streams. Although the IO issued by clients is concurrent, the data each IO needs to read or write is in most cases not ready yet, so a single function (select or poll) can be used to monitor the state of the data each IO requires; as soon as some IO has data to read or write, the process serves that IO.

With IO multiplexing understood, look at the differences and connections between the three APIs that implement it: select, poll, and epoll. All three are IO multiplexing mechanisms. I/O multiplexing is a mechanism by which multiple descriptors can be monitored; once a descriptor is ready (usually read-ready or write-ready), the application is notified so it can perform the corresponding read or write. But select, poll, and epoll are all essentially synchronous I/O, because the application still has to do the read or write itself once the event is ready, i.e. the read/write itself blocks the process; asynchronous I/O, by contrast, does not make the application do the read/write at all: the asynchronous I/O implementation is responsible for copying the data from the kernel into user space.

The three prototypes are as follows:

int select(int nfds, fd_set *readfds, fd_set *writefds, fd_set *exceptfds, struct timeval *timeout);
int poll(struct pollfd *fds, nfds_t nfds, int timeout);
int epoll_wait(int epfd, struct epoll_event *events, int maxevents, int timeout);

1. select. The first parameter, nfds, is the largest descriptor value in the fd_set plus 1. An fd_set is a bit array whose size is limited to __FD_SETSIZE (1024); each bit indicates whether the corresponding descriptor should be checked. The second, third, and fourth parameters are the descriptor bit arrays for read, write, and error events; they are both input and output parameters and may be modified by the kernel to indicate which descriptors have events of interest, so the fd_sets must be reinitialised before every call to select. The timeout parameter is the timeout; the kernel modifies it to the amount of time remaining.
The call steps of select are as follows:
(1) Copy the fd_set from user space into kernel space with copy_from_user.
(2) Register the callback function __pollwait.
(3) Traverse all fds and call each one's poll method (for a socket this poll method is sock_poll, which depending on the situation dispatches to tcp_poll, udp_poll, or datagram_poll).
(4) Taking tcp_poll as an example, its core implementation is __pollwait, the callback function registered above.
(5) The main job of __pollwait is to hang the current process onto the device's wait queue. Different devices have different wait queues; for tcp_poll the wait queue is sk->sk_sleep. (Note that hanging the process on the wait queue does not mean the process is already asleep.) When the device receives a message (network device) or finishes filling in file data (disk device), it wakes the processes sleeping on its wait queue, and current is woken.
(6) The poll method returns a mask describing whether read/write is ready, and fd_set is filled in according to this mask.
(7) If after traversing all fds no readable or writable mask has been returned, schedule_timeout is called to put the process that called select (that is, current) to sleep. When a device driver finds its resource readable or writable, it wakes the processes sleeping on its wait queue. If nothing wakes it within the timeout given to schedule_timeout, the select process is woken anyway, regains the CPU, and traverses the fds again to check whether any are ready.
(8) Copy fd_set from kernel space back to user space.

To summarise, the main disadvantages of select are:
(1) Every call to select copies the fd set from user space to kernel space, which is very costly when there are many fds.
(2) Every call to select also traverses all the fds passed in inside the kernel, which is likewise very costly when there are many fds.
(3) The number of file descriptors select supports is too small; the default is 1024.

2. poll. Unlike select, poll passes the kernel a pollfd array describing the events of interest, so there is no limit on the number of descriptors. The events and revents fields of pollfd indicate the events of interest and the events that occurred, so the pollfd array needs to be initialised only once. poll's implementation mechanism is similar to select's (it corresponds to sys_poll in the kernel), except that it passes the kernel a pollfd array and then polls each descriptor in that array, which is more efficient than fd_set. After poll returns, the revents value of every element in the pollfd array must be checked to see whether its event occurred.

3. epoll. Not until Linux 2.6 did the kernel directly support an implementation, namely epoll, which is recognised as the best-performing multiplexed I/O readiness notification method on Linux 2.6. epoll supports both level triggering and edge triggering (edge triggering tells the process only which file descriptors have just become ready, and it says so only once; if we take no action, it will not tell us again). Edge triggering performs better in theory, but the code implementation is considerably more complex.
epoll also reports only the file descriptors that are ready. When we call epoll_wait() to get the ready descriptors, the return value is not the descriptors themselves but the number of ready descriptors; we only need to fetch that many file descriptors from an array that epoll specifies, and memory mapping (mmap) is used there, which eliminates the overhead of copying these file descriptors on the system call. Another essential improvement is that epoll uses an event-based readiness notification method. With select/poll, the kernel scans all monitored file descriptors only after the call is made; epoll registers a file descriptor in advance with epoll_ctl(), and once that descriptor becomes ready, the kernel uses a callback mechanism to activate it quickly, so the process is notified when it calls epoll_wait().

Since epoll is an improvement on select and poll, it should avoid the three drawbacks above. How does epoll solve them? Before that, look at the difference in calling interfaces: select and poll each provide a single function (select or poll), whereas epoll provides three: epoll_create, epoll_ctl, and epoll_wait. epoll_create creates an epoll handle, epoll_ctl registers the event types to listen for, and epoll_wait waits for events to occur.

For the first drawback, epoll's solution lies in epoll_ctl. Each time a new event is registered in the epoll handle (EPOLL_CTL_ADD in epoll_ctl), the fd is copied into the kernel then, not again at every epoll_wait; epoll guarantees that each fd is copied only once over the whole process.

For the second drawback, epoll does not add current to the fd's device wait queue on every call the way select or poll does. It hangs current only once, at epoll_ctl time (which is unavoidable), and specifies a callback function for each fd. When the device becomes ready and wakes the waiters on its queue, this callback is invoked, and it adds the ready fd to a ready list. The job of epoll_wait is really just to check whether there is a ready fd in that list (it sleeps for a while with schedule_timeout() and then checks, similar to step 7 of the select implementation).

For the third drawback, epoll has no such limit: the maximum number of fds it supports is the maximum number of files that can be opened, generally far greater than 2048. On a machine with 1 GB of memory it is roughly 100,000; the exact number can be seen with cat /proc/sys/fs/file-max, and in general it depends heavily on system memory.

Summary:
(1) select and poll have to keep polling all fds themselves until a device is ready, possibly alternating between sleeping and waking many times. epoll, inside epoll_wait, also polls the ready list and may alternate between sleeping and waking, but when a device becomes ready its callback puts the ready fd on the ready list and wakes the process sleeping in epoll_wait. Although all of them sleep and wake alternately, select and poll traverse the entire fd set while "awake", whereas epoll, while "awake", only has to check whether the ready list is empty, which saves a great deal of CPU time; this is the performance improvement brought by the callback mechanism.
(2) select and poll copy the fd set from user space to kernel space once per call, and hang current on each device wait queue once per call; epoll copies only once and hangs current on the wait queue only once (at the start of epoll_wait, and note that this wait queue is not a device wait queue but one defined internally by epoll), which also saves a lot of overhead. A minimal Python sketch of the epoll interface follows.
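Python exposes epoll directly as select.epoll on Linux; the following sketch (the address is a placeholder) maps onto the three calls described above: creating the epoll object corresponds to epoll_create, register() to epoll_ctl, and poll() to epoll_wait.

import select
import socket

sk = socket.socket()
sk.bind(('127.0.0.1', 9090))              # placeholder address/port
sk.listen()
sk.setblocking(False)

ep = select.epoll()                        # ~ epoll_create
ep.register(sk.fileno(), select.EPOLLIN)   # ~ epoll_ctl(EPOLL_CTL_ADD): fd copied to the kernel once
fd_to_sock = {sk.fileno(): sk}

while True:
    for fd, event in ep.poll():            # ~ epoll_wait: only ready descriptors are returned
        obj = fd_to_sock[fd]
        if obj is sk:                      # listening socket readable: a client is connecting
            conn, addr = sk.accept()
            conn.setblocking(False)
            ep.register(conn.fileno(), select.EPOLLIN)
            fd_to_sock[conn.fileno()] = conn
        elif event & select.EPOLLIN:
            data = obj.recv(1024)
            if data:
                obj.send(data.upper())
            else:                          # peer closed the connection
                ep.unregister(fd)
                fd_to_sock.pop(fd)
                obj.close()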

(Figure: comparison of select, poll, and epoll)

These three IO multiplexing mechanisms have different support on different platforms; epoll, for example, is not available on Windows. Fortunately we have the selectors module, which by default chooses the most suitable mechanism for the current platform for us.
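A quick way to see which implementation the selectors module picks on the current platform (the class printed depends on the OS; the comment below reflects typical behaviour):

import selectors

# DefaultSelector is an alias for the most efficient selector available on this platform:
# typically EpollSelector on Linux, KqueueSelector on BSD/macOS, SelectSelector on Windows.
print(selectors.DefaultSelector)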

# Server
from socket import *
import selectors

sel = selectors.DefaultSelector()

def accept(server_fileobj, mask):
    conn, addr = server_fileobj.accept()
    sel.register(conn, selectors.EVENT_READ, read)

def read(conn, mask):
    try:
        data = conn.recv(1024)
        if not data:
            print('closing', conn)
            sel.unregister(conn)
            conn.close()
            return
        conn.send(data.upper() + b'_SB')
    except Exception:
        print('closing', conn)
        sel.unregister(conn)
        conn.close()

server_fileobj = socket(AF_INET, SOCK_STREAM)
server_fileobj.setsockopt(SOL_SOCKET, SO_REUSEADDR, 1)
server_fileobj.bind(('127.0.0.1', 8088))
server_fileobj.listen(5)
server_fileobj.setblocking(False)      # set the socket interface to non-blocking
# equivalent to appending the file handle server_fileobj to select's read list
# and binding the callback function accept to it
sel.register(server_fileobj, selectors.EVENT_READ, accept)

while True:
    events = sel.select()              # check all fileobjs; returns those whose "wait for data" phase is done
    for sel_obj, mask in events:
        callback = sel_obj.data        # callback = accept (or read)
        callback(sel_obj.fileobj, mask)   # e.g. accept(server_fileobj, 1)

# Client
from socket import *

c = socket(AF_INET, SOCK_STREAM)
c.connect(('127.0.0.1', 8088))
while True:
    msg = input('>>: ')
    if not msg:
        continue
    c.send(msg.encode('utf-8'))
    data = c.recv(1024)
    print(data.decode('utf-8'))

A simple chat example based on the selectors module.


