High-performance server framework: the I/O model

Source: Internet
Author: User

A socket is blocking by default when it is created. We can make it non-blocking by passing the SOCK_NONBLOCK flag in the second argument of the socket system call, or by setting the O_NONBLOCK flag with the F_SETFL command of the fcntl system call. The concepts of blocking and non-blocking apply to all file descriptors, not just sockets. We call I/O performed on a blocking file descriptor blocking I/O, and I/O performed on a non-blocking file descriptor non-blocking I/O.
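As a minimal sketch (assuming Linux, where the SOCK_NONBLOCK flag is available), the two approaches look roughly like this:

    #include <fcntl.h>
    #include <sys/socket.h>

    /* Two ways to obtain a non-blocking TCP socket (sketch, Linux assumed). */
    int make_nonblocking_socket(void)
    {
        /* Way 1 (Linux-specific): ask for non-blocking at creation time. */
        int fd = socket(AF_INET, SOCK_STREAM | SOCK_NONBLOCK, 0);
        if (fd >= 0)
            return fd;

        /* Way 2 (portable): create normally, then switch with fcntl. */
        fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        int flags = fcntl(fd, F_GETFL, 0);       /* read the current flags */
        fcntl(fd, F_SETFL, flags | O_NONBLOCK);  /* add O_NONBLOCK, keep the rest */
        return fd;
    }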

System calls executed against blocking I/O may be suspended by the operating system until the event they are waiting for occurs, because they cannot complete immediately. For example, when a client initiates a connection to a server with connect, connect first sends a SYN segment to the server and then waits for the server's acknowledgment segment. If that acknowledgment does not reach the client immediately, the connect call is suspended until the client receives the acknowledgment segment and is woken up. Among the basic socket APIs, the system calls that may block include accept, send, recv, and connect.
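To make that blocking behavior concrete, here is a sketch of a plain blocking connect; the address 127.0.0.1 and port 8080 are only placeholders:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);   /* blocking by default */

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port   = htons(8080);              /* example port */
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

        /* connect() sends the SYN segment and then sleeps inside the kernel
         * until the acknowledgment arrives (or the attempt fails or times out). */
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0)
            printf("handshake finished, connection established\n");
        else
            perror("connect");
        return 0;
    }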

System calls executed against non-blocking I/O always return immediately, whether or not the event has occurred. If the event has not occurred yet, these system calls return -1, just as they do on error, so we must distinguish the two cases by errno. For accept, send, and recv, errno is usually set to EAGAIN (try again) or EWOULDBLOCK (the operation would otherwise block) when the event has not occurred; for connect, errno is set to EINPROGRESS (the connection is in progress).
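A sketch of what that errno check typically looks like for recv on a non-blocking socket (the helper name try_recv and its return convention are just for illustration; a non-blocking connect would follow the same pattern with EINPROGRESS):

    #include <errno.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    /* Read from a non-blocking socket and distinguish "no data yet" from a real error.
     * Returns bytes read, 0 on peer close, -1 on "would block", -2 on real error. */
    ssize_t try_recv(int fd, char *buf, size_t len)
    {
        ssize_t n = recv(fd, buf, len, 0);
        if (n >= 0)
            return n;                               /* data, or 0 meaning peer closed */
        if (errno == EAGAIN || errno == EWOULDBLOCK)
            return -1;                              /* the event has not occurred yet */
        return -2;                                  /* a genuine error */
    }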

Clearly, performing non-blocking I/O operations (read, write, and so on) improves program efficiency only when the event has already occurred, so non-blocking I/O is usually combined with other I/O notification mechanisms, such as I/O multiplexing and the SIGIO signal.

I/O multiplexing is the most common I/O notification mechanism. It means that the application registers a set of events it cares about through an I/O multiplexing function, and the kernel uses that function to notify the application of the events that are ready. The common I/O multiplexing functions on Linux are select, poll, and epoll_wait. Note that the I/O multiplexing functions themselves are blocking; they improve program efficiency because they can monitor multiple I/O events at the same time.
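As an illustration, a minimal epoll-based event loop might look like the sketch below (listen_fd is assumed to be a non-blocking socket that is already listening; error handling is omitted):

    #include <sys/epoll.h>

    #define MAX_EVENTS 64

    /* Event loop sketch: block in epoll_wait (the multiplexing call itself blocks),
     * then perform non-blocking I/O only on the descriptors reported as ready. */
    void event_loop(int listen_fd)
    {
        int epfd = epoll_create1(0);
        struct epoll_event ev = { .events = EPOLLIN, .data.fd = listen_fd };
        epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev);

        struct epoll_event ready[MAX_EVENTS];
        for (;;) {
            int n = epoll_wait(epfd, ready, MAX_EVENTS, -1);  /* blocks here */
            for (int i = 0; i < n; i++) {
                int fd = ready[i].data.fd;
                if (fd == listen_fd) {
                    /* accept new connections, set them non-blocking, and
                     * register them with epoll_ctl(... EPOLL_CTL_ADD ...) */
                } else {
                    /* the event has occurred, so recv/send here will not block */
                }
            }
        }
    }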

The SIGIO signal can also be used to report I/O events. When an event occurs on the target file descriptor, the handler registered for the SIGIO signal is triggered, and we can then perform non-blocking I/O operations on the target file descriptor in that signal handler.
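A sketch of how signal-driven I/O is typically enabled on a connected socket (the function name enable_sigio and the parameter connfd are only for illustration; here the handler merely records the event and a main loop is expected to do the non-blocking read/write):

    #include <fcntl.h>
    #include <signal.h>
    #include <unistd.h>

    static volatile sig_atomic_t io_ready = 0;

    /* Keep the handler trivial: just record that an I/O event happened. */
    static void sigio_handler(int sig)
    {
        (void)sig;
        io_ready = 1;   /* the main loop then does the non-blocking read/write */
    }

    /* Ask the kernel to deliver SIGIO to this process when connfd becomes ready. */
    void enable_sigio(int connfd)
    {
        signal(SIGIO, sigio_handler);
        fcntl(connfd, F_SETOWN, getpid());                      /* who receives the signal */
        int flags = fcntl(connfd, F_GETFL, 0);
        fcntl(connfd, F_SETFL, flags | O_ASYNC | O_NONBLOCK);   /* signal-driven + non-blocking */
    }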

In theory, blocking I/O, I/O multiplexing, and signal-driven I/O are all synchronous I/O models, because in all three the I/O read/write operation is performed by the application itself after an I/O event has occurred. The asynchronous I/O defined by the POSIX specification is different: with asynchronous I/O, the user can start a read/write operation directly, and the call tells the kernel the location of the user's read/write buffer and how the kernel should notify the application when the I/O operation has completed. Asynchronous read and write calls always return immediately, regardless of whether the I/O is blocking, because the actual read/write is taken over by the kernel. In other words, in the synchronous I/O model, user code performs the I/O operation itself (reading data from the kernel buffer into the user buffer, or writing data from the user buffer into the kernel buffer), whereas in the asynchronous I/O model the kernel performs the I/O operation (data movement between the kernel buffer and the user buffer is completed by the kernel in the background). You can think of it this way: synchronous I/O notifies the application of I/O readiness events, while asynchronous I/O notifies the application of I/O completion events. On Linux, the aio.h header file defines the functions that provide asynchronous I/O support.
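A minimal sketch of the aio.h interface (the file name example.txt is a placeholder; on Linux, link with -lrt; the busy-wait loop stands in for a real completion notification):

    #include <aio.h>
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>

    int main(void)
    {
        int fd = open("example.txt", O_RDONLY);   /* example file name */
        char buf[4096];

        /* Describe the request: which fd, which user buffer, how many bytes, at which offset. */
        struct aiocb cb;
        memset(&cb, 0, sizeof(cb));
        cb.aio_fildes = fd;
        cb.aio_buf    = buf;
        cb.aio_nbytes = sizeof(buf);
        cb.aio_offset = 0;

        aio_read(&cb);                  /* returns immediately; the kernel performs the read */

        while (aio_error(&cb) == EINPROGRESS) {
            /* the application is free to do other work while the read completes */
        }

        ssize_t n = aio_return(&cb);    /* completion: how many bytes landed in buf */
        printf("read %zd bytes\n", n);
        return 0;
    }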

Summary

For each I/O model, who performs the read/write and where the program blocks:

  • Blocking I/O: the program performs the read/write and blocks in the read/write call itself.

  • I/O multiplexing: the program blocks in the I/O multiplexing system call, but that call can monitor multiple I/O events at once; the read/write operations on the I/O itself are non-blocking.

  • SIGIO signal: the signal reports the read/write readiness event, the user program then performs the read/write, and the program has no blocking phase.

  • Asynchronous I/O: the kernel performs the read/write and reports the read/write completion event; the program has no blocking phase.


Note that the terms synchronous and asynchronous also appear in concurrency models, but there they mean something different from the concepts discussed here.

In the context of I/O models, synchronous and asynchronous distinguish which I/O event the kernel notifies the application of (a readiness event or a completion event) and who performs the I/O read/write (the application or the kernel). In the context of concurrency models, synchronous means the program executes entirely in the order of its code, while asynchronous means execution is driven by system events; common system events include interrupts and signals.


What is the use of the I/O model in socket programming? How does it relate to the server model?

The I/O model mainly addresses the mismatch between I/O speed and CPU processing speed. I/O speed is often only a hundredth or a thousandth of the CPU speed, not even the same order of magnitude. If the CPU simply waits on I/O, its computing capability is largely wasted. That is why so many I/O models were invented.

Also, the I/O model and the I/O port are not the same concept.

Socket programming is just one concrete setting in which the I/O models appear; in fact, the I/O models are also reflected in other device drivers.

You can search for them online, for example: synchronous blocking, synchronous non-blocking, asynchronous blocking (I/O multiplexing), and asynchronous non-blocking.
 
What is the I/O model?

I/O is the abbreviation of input/output; here it refers to the input/output port. Each device has a dedicated I/O address for handling its own input and output. The connection and data exchange between the CPU and external devices or memory must go through interface devices: the former is called an I/O interface and the latter a memory interface. Memory usually works under the synchronous control of the CPU, so its interface circuit is relatively simple, whereas I/O devices come in a wide variety and their interface circuits differ accordingly. Therefore, in practice, "interface" usually refers only to the I/O interface.
