Python_day11_ Synchronous IO and asynchronous IO

Source: Internet
Author: User
Tags: epoll

What is the difference between synchronous IO and asynchronous IO, and what are blocking IO and non-blocking IO? Different people give different answers in different contexts, so let us first limit the context of this article.

The background of this article is network IO in a Linux environment.

Part 1: Concept notes

Before going further, a few concepts need to be explained:
- User space and kernel space
- Process switching
- Blocking of a process
- File descriptor
- Cache I/O

User space and kernel space

Operating systems now use virtual memory. For a 32-bit operating system, the addressing space (virtual address space) is 4 GB (2 to the 32nd power). The core of the operating system is the kernel, which is independent of ordinary applications: it can access the protected memory space and has full permission to access the underlying hardware devices. To ensure that user processes cannot directly manipulate the kernel and to keep the kernel safe, the operating system divides the virtual space into two parts, kernel space and user space. For the Linux operating system, the highest 1 GB (from virtual address 0xC0000000 to 0xFFFFFFFF) is used by the kernel and is called kernel space, while the lower 3 GB (from virtual address 0x00000000 to 0xBFFFFFFF) is used by the individual processes and is called user space.

Process switching

To control the execution of a process, the kernel must have the ability to suspend a process that is running on the CPU and resume execution of a previously suspended process. This behavior is called process switching. So it can be said that any process that runs under the support of the operating system kernel is closely related to the kernel.

Switching from one running process to another goes through the following changes:
1. Save the processor context, including the program counter and other registers.
2. Update the PCB information.
3. Move the PCB of the process into the appropriate queue, such as the ready queue or a blocked-on-event queue.
4. Select another process to execute and update its PCB.
5. Update the memory management data structures.
6. Restore the processor context.

In short, process switching is very resource-intensive; for details, see the article on process switching.

Note: The Process Control Block (PCB) is a data structure in the operating system kernel that mainly represents the state of a process. Its purpose is to turn a program (with its data), which cannot run independently in a multiprogramming environment, into a basic unit that can run independently and execute concurrently with other processes. In other words, the OS controls and manages concurrently executing processes through their PCBs. The PCB is usually a contiguous storage area in system memory that holds all the information the operating system needs to describe the process and to control it.

Blocking of processes

A running process, because some expected event has not occurred (for example, a request for a system resource failed, it is waiting for an operation to complete, new data has not arrived, or there is no new work to do), automatically executes the blocking primitive (block) and changes itself from the running state to the blocked state. It can be seen that blocking is an active behavior of the process itself, and therefore only a process in the running state (one that has acquired the CPU) can enter the blocked state. While a process is in the blocked state, it does not occupy CPU resources.

File descriptor (FD)

A file descriptor is a term from computer science: an abstraction used to describe a reference to a file.

Formally, a file descriptor is a non-negative integer. In practice, it is an index into a table maintained by the kernel for each process that records the files the process has opened. When a program opens an existing file or creates a new file, the kernel returns a file descriptor to the process. Much low-level programming revolves around file descriptors. However, the concept of a file descriptor usually applies only to operating systems such as UNIX and Linux.
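A small illustration in Python (the path /etc/hostname is just an assumed example): the fileno() method of a socket or file object exposes the non-negative integer that the kernel handed back when the object was opened.

import socket

# A file descriptor is just a small non-negative integer that indexes the
# kernel's per-process table of open files.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
print("socket fd:", sock.fileno())      # e.g. 3

with open("/etc/hostname") as f:        # assumed path, for illustration only
    print("file fd:", f.fileno())       # e.g. 4

sock.close()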

Cache I/O

Cache I/O is also known as standard I/O, and the default I/O operations of most file systems are cache I/O. In the Linux cache I/O mechanism, the operating system caches the I/O data in the file system's page cache; that is, the data is first copied into the operating system kernel's buffer and then copied from the kernel buffer into the application's address space.

Disadvantages of cache I/O:
During transmission the data must be copied multiple times between the application's address space and the kernel, and the CPU and memory overhead of these copy operations is very large.

Part 2: IO models

As noted above, for an IO access (take read as an example), the data is first copied into the operating system kernel's buffer and then copied from the kernel buffer into the application's address space. So when a read operation occurs, it goes through two stages:
1. Waiting for the data to be ready
2. Copying the data from the kernel to the process

Precisely because of these two stages, Linux systems produce the following five network IO models:
- Blocking I/O (blocking IO)
- Non-blocking I/O (nonblocking IO)
- I/O multiplexing (IO multiplexing)
- Signal-driven I/O (signal driven IO)
- Asynchronous I/O (asynchronous IO)

Note: Since signal-driven IO is not commonly used in practice, only the remaining four IO models are covered here.

Blocking I/O (blocking IO)

In Linux, all sockets are blocking by default. A typical read operation flow looks roughly like this:

When the user process invokes the recvfrom system call, the kernel begins the first stage of IO: preparing the data (for network IO, the data often has not arrived yet; for example, a complete UDP packet has not been received, and the kernel has to wait for enough data to arrive). This waiting means the data must first be copied into the operating system kernel's buffer, which takes time. On the user-process side, the whole process is blocked (by the process's own choice, of course). When the kernel has waited until the data is ready, it copies the data from the kernel to user memory and then returns the result; only then does the user process leave the blocked state and run again.

Therefore, the characteristic of blocking IO is that the process is blocked in both stages of the IO execution.
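A minimal sketch of this flow in Python, assuming a UDP peer sending to 127.0.0.1:9999 (the address is illustrative, not from the article): recvfrom blocks through both stages until a datagram has arrived and been copied into user memory.

import socket

# Blocking IO: Python sockets are blocking by default.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 9999))          # assumed address for illustration

# recvfrom blocks here: first while the kernel waits for a datagram to arrive,
# then while the data is copied from kernel space into user memory.
data, addr = sock.recvfrom(4096)
print("received", len(data), "bytes from", addr)
sock.close()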

Non-blocking I/O (nonblocking IO)

Under Linux, a socket can be made non-blocking by setting it so. When a read operation is performed on a non-blocking socket, the flow looks like this:

When the user process issues a read operation, if the data in the kernel is not yet ready, the kernel does not block the user process but immediately returns an error. From the user process's point of view, it initiates a read and gets a result immediately, without waiting. When the user process sees that the result is an error, it knows the data is not ready yet, so it can issue the read again. Once the data in the kernel is ready and a system call from the user process arrives again, the kernel immediately copies the data to user memory and then returns.

Therefore, the characteristic of non-blocking IO is that the user process must keep actively asking the kernel whether the data is ready.
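A minimal sketch in Python (the address and polling interval are illustrative assumptions): setblocking(False) makes recvfrom return immediately with BlockingIOError while the kernel has no data ready, so the process keeps asking.

import socket
import time

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 9999))          # assumed address for illustration
sock.setblocking(False)                 # switch the socket to non-blocking mode

while True:
    try:
        # Returns at once; raises BlockingIOError if the kernel has no data yet.
        data, addr = sock.recvfrom(4096)
        print("received", len(data), "bytes from", addr)
        break
    except BlockingIOError:
        # Data not ready: do something else for a while, then ask again.
        time.sleep(0.1)
sock.close()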

I/O multiplexing (IO multiplexing)

IO multiplexing is what we call select, poll, and epoll; in some places this IO model is also called event-driven IO. The benefit of select/epoll is that a single process can handle the IO of multiple network connections at the same time. The basic principle is that select, poll, or epoll continuously polls all the sockets it is responsible for and notifies the user process when data arrives on some socket.

When the user process calls select, the whole process is blocked; at the same time, the kernel "monitors" all the sockets that select is responsible for, and when the data in any one socket is ready, select returns. At this point the user process invokes the read operation to copy the data from the kernel to the user process.

Therefore, the characteristic of I/O multiplexing is that through one mechanism a process can wait on multiple file descriptors at the same time, and as soon as any one of these file descriptors (socket descriptors) becomes read-ready, the select() function returns.

This flow is not much different from that of blocking IO; in fact, it is slightly worse, because it requires two system calls (select and recvfrom) whereas blocking IO invokes only one system call (recvfrom). However, the advantage of using select is that it can handle multiple connections at the same time.

Therefore, if the number of connections handled is not high, a web server using select/epoll does not necessarily perform better than a web server using multi-threading plus blocking IO, and it may even have higher latency. The advantage of select/epoll is not that it handles a single connection faster, but that it can handle more connections.

In the IO multiplexing model, in practice each socket is generally set to non-blocking; however, the whole user process is in fact blocked the entire time. It is just that the process is blocked by the select function rather than by socket IO.

Asynchronous I/O (asynchronous IO)

Asynchronous IO is actually rarely used under Linux. Let's take a look at its flow:

After the user process initiates the read operation, it can immediately start doing other things. From the kernel's perspective, when it receives an asynchronous read, it returns immediately, so no block is produced for the user process. The kernel then waits for the data to be ready and copies the data to user memory; when all of this is done, the kernel sends a signal to the user process to tell it that the read operation is complete.
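Python's standard library does not expose Linux kernel asynchronous IO directly, so the sketch below uses asyncio only to illustrate the programming model (start an operation, keep doing other work, be notified when it completes). Under the hood the default Linux event loop is built on epoll, i.e. IO multiplexing in this article's terms, and the host name is an assumption for illustration.

import asyncio

async def fetch():
    # Open a TCP connection and read a little data; while this coroutine is
    # suspended awaiting IO, the event loop can run other work.
    reader, writer = await asyncio.open_connection("example.com", 80)  # assumed host
    writer.write(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    await writer.drain()
    data = await reader.read(4096)
    print("got", len(data), "bytes")
    writer.close()
    await writer.wait_closed()

async def main():
    task = asyncio.create_task(fetch())
    print("doing other work while the read is in flight")
    await task

asyncio.run(main())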

Summary of the difference between blocking and non-blocking

Calling a blocking IO operation blocks the corresponding process until the operation completes, whereas non-blocking IO returns immediately even while the kernel is still preparing the data.

The difference between synchronous IO and asynchronous IO

Before explaining the difference between synchronous IO and asynchronous IO, we need a definition of both. The POSIX definitions are:
- A synchronous I/O operation causes the requesting process to be blocked until that I/O operation completes;
- An asynchronous I/O operation does not cause the requesting process to be blocked;

The difference is that synchronous IO blocks the process while it performs the "IO operation". By this definition, the blocking IO, non-blocking IO, and IO multiplexing described above are all synchronous IO.

Someone may say that non-blocking IO is not blocked. There is a very "tricky" point here: the "IO operation" in the definition refers to the real IO operation, which in our example is the recvfrom system call. With non-blocking IO, if the kernel data is not ready, the recvfrom system call does not block the process. But when the data in the kernel is ready, recvfrom copies the data from the kernel to user memory, and during that time the process is blocked.

Asynchronous IO is different: when the process initiates an IO operation, the call returns immediately, and the process can ignore it until the kernel sends a signal telling the process that the IO has completed. Throughout this whole procedure the process is never blocked.

Comparison of the IO models (the comparison diagram from the original article is not reproduced here).

Part 3: select, poll, epoll in detail

select, poll, and epoll are all mechanisms for IO multiplexing. I/O multiplexing is a mechanism by which a process can monitor multiple descriptors and, once a descriptor becomes ready (usually read-ready or write-ready), notify the program to perform the corresponding read or write. However, select, poll, and epoll are all essentially synchronous I/O, because the process itself must do the reading and writing after the read/write event is ready, i.e., the read/write step is blocking; asynchronous I/O, by contrast, does not make the process responsible for reading and writing, because the asynchronous I/O implementation takes care of copying the data from the kernel to user space.

select

select(rlist, wlist, xlist, timeout=None)

The select function monitors three categories of file descriptors: writefds, readfds, and exceptfds. After the call, select blocks until some descriptor becomes ready (readable, writable, or with an exceptional condition) or the timeout expires (timeout specifies the wait time; set it to zero to return immediately), and then the function returns. When select returns, the ready descriptors can be found by traversing the fd sets.

select is currently supported on almost all platforms, and good cross-platform support is one of its advantages. One disadvantage of select is that there is a limit on the number of file descriptors a single process can monitor, which is 1024 on Linux; the limit can be raised by modifying the macro definition and even recompiling the kernel, but this also tends to reduce efficiency.
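A minimal echo-server sketch using Python's select module (the port and echo behaviour are illustrative assumptions): one process waits on the listening socket plus every accepted connection, and only touches the descriptors that select reports as read-ready.

import select
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("127.0.0.1", 8888))        # assumed address for illustration
server.listen(5)

inputs = [server]                       # descriptors select should watch
while inputs:
    # Blocks until at least one descriptor is read-ready.
    readable, _, _ = select.select(inputs, [], [])
    for sock in readable:
        if sock is server:
            conn, addr = server.accept()
            inputs.append(conn)         # watch the new connection as well
        else:
            data = sock.recv(4096)
            if data:
                sock.sendall(data)      # echo the data back
            else:
                inputs.remove(sock)     # peer closed the connection
                sock.close()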

poll

int poll(struct pollfd *fds, unsigned int nfds, int timeout);

Unlike select, which uses three bitmaps to represent three fd sets, poll uses a pointer to an array of pollfd structures.

struct pollfd {
    int fd;         /* file descriptor */
    short events;   /* requested events to watch */
    short revents;  /* returned events witnessed */
};

The pollfd structure contains both the events to monitor and the events that occurred, so the "parameter-value" passing style of select is no longer needed. At the same time, pollfd has no limit on the maximum number of descriptors (although performance degrades if the number is too large). As with select, after poll returns, the pollfd array must be traversed to find the ready descriptors.
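The same echo loop sketched with Python's select.poll wrapper (port and behaviour are again illustrative assumptions): descriptors are registered once, and every call to poll() still returns a list of (fd, event) pairs that has to be walked.

import select
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("127.0.0.1", 8888))        # assumed address for illustration
server.listen(5)

poller = select.poll()
poller.register(server.fileno(), select.POLLIN)
fd_to_socket = {server.fileno(): server}

while True:
    # Blocks until at least one registered descriptor has an event.
    for fd, event in poller.poll():
        sock = fd_to_socket[fd]
        if sock is server:
            conn, addr = server.accept()
            fd_to_socket[conn.fileno()] = conn
            poller.register(conn.fileno(), select.POLLIN)
        elif event & select.POLLIN:
            data = sock.recv(4096)
            if data:
                sock.sendall(data)          # echo the data back
            else:
                poller.unregister(fd)       # peer closed the connection
                del fd_to_socket[fd]
                sock.close()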

From the above, after select and poll return, the ready sockets must be obtained by traversing the file descriptors. In fact, of a large number of simultaneously connected clients, only very few may be ready at any one time, so the efficiency of select and poll decreases linearly as the number of monitored descriptors grows.

epoll

epoll was introduced in the 2.6 kernel and is an enhanced version of select and poll. Compared with select and poll, epoll is more flexible and has no descriptor limit. epoll uses one file descriptor to manage many descriptors: it stores the events for the file descriptors the user cares about in an event table inside the kernel, so that the copy between user space and kernel space only needs to happen once.

The epoll operation process

epoll is operated through three interfaces, as follows:

int epoll_create(int size);   // create an epoll handle; size tells the kernel roughly how many descriptors will be monitored
int epoll_ctl(int epfd, int op, int fd, struct epoll_event *event);
int epoll_wait(int epfd, struct epoll_event *events, int maxevents, int timeout);

1. int epoll_create(int size);
Creates an epoll handle. The size parameter tells the kernel roughly how many descriptors will be monitored; unlike the first parameter of select(), which is the maximum listened fd plus 1, size does not limit the maximum number of descriptors epoll can monitor. It is only a hint for the kernel's initial allocation of internal data structures.
After the epoll handle is created, it occupies an fd value of its own; on Linux this fd can be seen under /proc/<process-id>/fd/. Therefore, close() must be called once epoll is no longer needed, otherwise fds may be exhausted.

2. int epoll_ctl(int epfd, int op, int fd, struct epoll_event *event);
This function performs the operation op on the specified descriptor fd.
- epfd: the return value of epoll_create().
- op: the operation, represented by three macros: EPOLL_CTL_ADD (add), EPOLL_CTL_DEL (delete), and EPOLL_CTL_MOD (modify). They add, delete, and modify the monitored events for fd respectively.
- fd: the file descriptor to be monitored.
- event: tells the kernel which events to monitor.

3. int epoll_wait(int epfd, struct epoll_event *events, int maxevents, int timeout);
Waits for IO events on epfd and returns at most maxevents events.
The parameter events is used to receive the set of events from the kernel; maxevents tells the kernel how large the events array is (this value must not be greater than the size passed to epoll_create()). The parameter timeout is the timeout in milliseconds: 0 returns immediately, and -1 blocks indefinitely. The function returns the number of events that need to be handled; a return value of 0 indicates that the call timed out.
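The same echo loop once more, using Python's select.epoll wrapper, whose register/poll/close calls map onto epoll_ctl, epoll_wait and closing the epoll fd (port and behaviour remain illustrative assumptions):

import select
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("127.0.0.1", 8888))        # assumed address for illustration
server.listen(5)

epoll = select.epoll()                  # wraps epoll_create
epoll.register(server.fileno(), select.EPOLLIN)   # epoll_ctl(EPOLL_CTL_ADD)
fd_to_socket = {server.fileno(): server}

try:
    while True:
        # Wraps epoll_wait: blocks until at least one registered fd is ready.
        for fd, event in epoll.poll():
            sock = fd_to_socket[fd]
            if sock is server:
                conn, addr = server.accept()
                fd_to_socket[conn.fileno()] = conn
                epoll.register(conn.fileno(), select.EPOLLIN)
            elif event & select.EPOLLIN:
                data = sock.recv(4096)
                if data:
                    sock.sendall(data)          # echo the data back
                else:
                    epoll.unregister(fd)        # epoll_ctl(EPOLL_CTL_DEL)
                    del fd_to_socket[fd]
                    sock.close()
finally:
    epoll.close()       # the epoll handle itself occupies an fd and must be closed
    server.close()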

