Original: http://www.cnblogs.com/kunhu/p/3624000.html
1. Conceptual understanding
In network programming we often encounter four calling modes: synchronous (sync), asynchronous (async), blocking (block), and non-blocking (non-block):
Synchronous:
Synchronization means that when a function call is issued, the call does not return until the result is obtained. In other words, things are done one at a time: the next task starts only after the previous one has finished.
For example, the ordinary B/S mode is synchronous: submit a request, wait for the server to process it, and receive the result when processing completes; during this period the client browser can do nothing else.
Asynchronous:
Asynchrony is the opposite of synchrony. When an asynchronous procedure call is issued, the caller does not get the result immediately; the component that actually handles the call notifies the caller via status, notification, or a callback after it completes.
For example, an AJAX request is asynchronous: the request is triggered by an event and handled by the server, and while the server is processing it the browser can still do other things.
Blocking
A blocking call means that the current thread is suspended until the result of the call is returned (the thread enters a non-runnable state in which the CPU gives it no time slice; in other words, the thread pauses). The function returns only after the result is obtained.
Some people equate blocking calls with synchronous calls, but in fact they are different. With a synchronous call, the current thread is often still active; it is only that, logically, the current function has not yet returned. For example, when we call the recv function on a socket, if there is no data in the buffer the function waits until data arrives before returning, while the current thread may still go on handling all kinds of other messages.
Non-blocking
The concept of non-blocking is the opposite of blocking: if the result cannot be obtained immediately, the function does not block the current thread but returns right away.
Blocking mode of an object vs. blocking function calls
Whether an object is in blocking mode and whether a function call blocks are strongly correlated, but they do not correspond one to one. A blocking object can be used with non-blocking calls: we can poll its state through some API and call the blocking function only at the right moment, thereby avoiding blocking. Conversely, a particular function can make a blocking call even on a non-blocking object; the select function is an example of this.
1. Synchronous: I call a function, and the function does not return until I have the result.
2. Asynchronous: I call a function without needing to know its result; when the function has a result it notifies me (callback notification).
3. Blocking: someone calls me (the function); if I have not received the data or obtained the result, I do not return.
4. Non-blocking: someone calls me (the function) and I return immediately; the caller is then notified, for example via select, when the data is ready.
The difference between synchronous IO and asynchronous IO is whether the process is blocked while the data is being copied!
The difference between blocking IO and non-blocking IO is whether the application's call returns immediately!
For a simple C/S model:
Synchronous: submit a request, wait for the server to process it, and receive the result when processing completes; during this period the client can do nothing else.
Asynchronous: the request is triggered by an event and processed by the server, and during that time the client can still do other things. Both synchronous and asynchronous here describe the local socket.
Synchronous/asynchronous and blocking/non-blocking are often mixed up, but they are not the same thing at all, and the things they describe are different.
Blocking and non-blocking refer to whether the process has to wait when the data it wants to access is not yet ready. Put simply, this is a difference inside the function's implementation: when the data is not ready, does the call return immediately or wait for readiness?
Synchronous and asynchronous refer to the mechanism for accessing the data. Synchronous generally means the process actively issues the request and waits for the I/O operation to complete; once the data is ready, the read or write must block (distinguish the two stages, becoming ready and reading/writing: a synchronous read or write must block). Asynchronous means that after actively issuing the request the process can go on with other work and later waits for the notification that the I/O operation has finished, which lets it read and write the data without blocking (it waits for the "notification").
2. Five I/O models under Linux
1) Blocking I/O
2) Non-blocking I/O
3) I/O multiplexing (select and poll)
4) Signal-driven I/O (SIGIO)
5) Asynchronous I/O (the POSIX aio_ functions)
The first four types are synchronous, and only the last is asynchronous IO.
Blocking I/O model:
Summary: The process will block until the data copy is complete
The application calls an IO function, which causes the application to block and wait for the data to become ready. If the data is not ready, it keeps waiting. Once the data is ready, it is copied from the kernel into user space and the IO function returns a success indication.
Blocking I/O model diagram: when recv()/recvfrom() is called, both the wait for data and the copying of the data take place in the kernel.
When recv() is called, the system first checks whether there is data ready. If not, it waits. When the data is ready, it is copied from the system buffer into user space and the function returns. In a socket application, when recv() is called the data may not yet be present in user space, in which case recv() stays in the waiting state.
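A minimal sketch of this behavior follows, assuming `sockfd` is an already connected TCP socket in the default blocking mode (the helper name and buffer size are illustrative, not from the original post):

```c
/* Sketch of a blocking read: recv() puts the calling thread to sleep until
 * the kernel has data and has copied it into buf. */
#include <stdio.h>
#include <sys/socket.h>
#include <sys/types.h>

ssize_t read_blocking(int sockfd)
{
    char buf[1024];

    /* The thread is suspended here through both stages: waiting for data
     * to arrive and copying it from kernel space into buf. */
    ssize_t n = recv(sockfd, buf, sizeof(buf), 0);

    if (n > 0)
        printf("received %zd bytes\n", n);
    else if (n == 0)
        printf("peer closed the connection\n");
    else
        perror("recv");
    return n;
}
```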
When you create a socket with the socket() or WSASocket() function, it is blocking by default. This means that when a Windows Sockets API call cannot complete immediately, the thread waits until the operation has finished.
Not every Windows Sockets API call will block just because a blocking socket is passed as a parameter. For example, bind() and listen() return immediately even on a blocking-mode socket. The Windows Sockets API calls that may block on a blocking socket fall into four categories:
1. Input operations: the recv(), recvfrom(), WSARecv(), and WSARecvFrom() functions. They are called with a blocking socket as a parameter to receive data. If there is no readable data in the socket buffer at that moment, the calling thread sleeps until data arrives.
2. Output operations: the send(), sendto(), WSASend(), and WSASendTo() functions. They are called with a blocking socket as a parameter to send data. If there is no free space in the socket buffer, the thread sleeps until space becomes available.
3. Accepting connections: the accept() and WSAAccept() functions. They are called on a blocking socket and wait for a connection request. If no connection request is pending, the thread goes to sleep.
4. Outgoing connections: the connect() and WSAConnect() functions. For a TCP connection the client calls one of these, with a blocking socket as the parameter, to initiate a connection to the server. The function does not return until it receives the server's reply, which means a TCP connect always waits for at least one round trip to the server.
With blocking-mode sockets, network programs are relatively simple to develop and easy to implement. When you want to send and receive data right away and the number of sockets to handle is small, blocking mode is a suitable choice.
The shortcoming of blocking-mode sockets is that it is hard to coordinate communication among a large number of established sockets and threads. When a network program is built on the producer-consumer model, each socket gets its own read thread, data-processing thread, and synchronization event, which clearly increases system overhead. The biggest drawback is that it cannot cope with handling a large number of sockets at the same time; its scalability is poor.
Non-blocking IO model
Summary: with non-blocking IO the process calls the IO function repeatedly (many system calls, each returning immediately); during the data-copy phase, the process is blocked.
We set a socket to non-blocking to tell the kernel: when the requested I/O operation cannot be completed, do not put the process to sleep; return an error instead. Our I/O function then keeps testing whether the data is ready and, if not, tests again until it is. This continuous testing wastes a great deal of CPU time.
Setting the socket to non-blocking mode means telling the system kernel: when a Windows Sockets API is called, do not put the thread to sleep; make the function return immediately instead. On such a return the function reports an error code. As shown in the figure, a non-blocking socket calls recv() several times. For the first three calls the kernel data is not yet ready, so the function immediately returns the WSAEWOULDBLOCK error code. On the fourth recv() call the data is ready; it is copied into the application's buffer, recv() returns a success indication, and the application starts processing the data.
A socket created with socket() or WSASocket() is blocking by default. After creating it, you switch it to non-blocking mode by calling ioctlsocket(); the corresponding function under Linux is fcntl().
Once the socket is in non-blocking mode, Windows Sockets API calls on it return immediately. In most cases these calls "fail" and return the WSAEWOULDBLOCK error code, which means the requested operation did not have time to complete during the call. Typically the application has to call the function repeatedly until it gets a successful return code.
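The article's discussion uses the Windows API; a Linux-flavored sketch of the same pattern, using the fcntl() call mentioned above, might look like the following (the socket, helper names, and 1024-byte buffer are assumptions for illustration; EWOULDBLOCK/EAGAIN plays the role of WSAEWOULDBLOCK):

```c
/* Sketch: switch a socket to non-blocking mode with fcntl(), then call
 * recv() in a loop until data arrives. This is the busy-wait pattern the
 * text describes; it works, but wastes CPU time. */
#include <errno.h>
#include <fcntl.h>
#include <sys/socket.h>
#include <sys/types.h>

int set_nonblocking(int sockfd)
{
    int flags = fcntl(sockfd, F_GETFL, 0);   /* read current flags */
    if (flags == -1)
        return -1;
    return fcntl(sockfd, F_SETFL, flags | O_NONBLOCK);
}

ssize_t recv_busy_wait(int sockfd, char *buf, size_t len)
{
    for (;;) {
        ssize_t n = recv(sockfd, buf, len, 0);
        if (n >= 0)
            return n;                        /* got data (or peer closed) */
        if (errno != EWOULDBLOCK && errno != EAGAIN)
            return -1;                       /* a real error */
        /* Data not ready: recv() returned immediately, so just try again.
         * select()/poll()/epoll exist to avoid exactly this spinning. */
    }
}
```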
Note that not every Windows Sockets API called in non-blocking mode returns a WSAEWOULDBLOCK error. For example, bind() called with a non-blocking socket as a parameter does not return that error code; neither, of course, does WSAStartup(), since it is the application's very first call and certainly does not return such an error code.
Besides ioctlsocket(), you can also set a socket to non-blocking mode with the WSAAsyncSelect() and WSAEventSelect() functions: calling either of them automatically switches the socket to non-blocking mode.
Because functions are being called on a non-blocking socket, WSAEWOULDBLOCK errors come back frequently, so always check the return code carefully and be prepared for "failure". The application keeps calling the function until it returns a success indication. In a loop like the one sketched above, recv() is called over and over in a while loop to read 1024 bytes of data; this approach wastes system resources.
Because of this, some people call recv() with the MSG_PEEK flag to check whether there is readable data in the buffer. That method is not good either: it is costly to the system, since the application then has to call recv() at least twice to actually read the data. The better practice is to use one of the socket "I/O models" to determine whether a non-blocking socket is readable or writable.
Compared with blocking-mode sockets, non-blocking sockets are not as easy to use: more code has to be written to handle the WSAEWOULDBLOCK errors that every Windows Sockets API call may return, so non-blocking sockets can seem difficult to work with.
However, non-blocking sockets have a clear advantage when managing many connections at once, when the amount of data sent and received is uneven, or when the timing is unpredictable. They are harder to use, but once that difficulty is overcome they are functionally very powerful. In general, consider using one of the socket "I/O models", which help an application manage communication on one or more sockets asynchronously.
IO multiplexing Model:
Introduction: mainly select and epoll. For a single IO port there are still two calls and two returns, which is no better than blocking IO; the key benefit is that many IO ports can be monitored at the same time.
The I/O multiplexing model uses the select, poll, or epoll functions, which also block the process. Unlike blocking I/O, however, these functions can block on multiple I/O operations at once: they can check several read and write operations at the same time, and the actual I/O functions are called only when there is data that is readable or writable.
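A minimal sketch of this model with select() follows (assuming `sockfds` is an array of `nfds` already-connected sockets; the names are illustrative):

```c
/* Sketch of I/O multiplexing with select(): block once on many sockets
 * and call recv() only on those reported readable. */
#include <stdio.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <sys/types.h>

void serve_with_select(const int *sockfds, int nfds)
{
    char buf[1024];
    fd_set readfds;

    for (;;) {
        FD_ZERO(&readfds);
        int maxfd = -1;
        for (int i = 0; i < nfds; i++) {          /* re-register every fd */
            FD_SET(sockfds[i], &readfds);
            if (sockfds[i] > maxfd)
                maxfd = sockfds[i];
        }

        /* Blocks here until at least one socket becomes readable. */
        if (select(maxfd + 1, &readfds, NULL, NULL, NULL) < 0) {
            perror("select");
            return;
        }

        for (int i = 0; i < nfds; i++)            /* linear scan of all fds */
            if (FD_ISSET(sockfds[i], &readfds)) {
                ssize_t n = recv(sockfds[i], buf, sizeof(buf), 0);
                if (n > 0)
                    printf("fd %d: read %zd bytes\n", sockfds[i], n);
            }
    }
}
```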
Signal-driven IO
Introduction: two calls, two returns.
First we enable signal-driven I/O on the socket and install a signal handler; the process keeps running without blocking. When the data is ready, the process receives a SIGIO signal, and the I/O function can be called from within the signal handler to process the data.
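A rough sketch of how this is typically set up on Linux (the socket, the handler, and the flag-only design are illustrative assumptions; F_SETOWN and O_ASYNC are the usual fcntl() switches for SIGIO delivery):

```c
/* Sketch: ask the kernel to send SIGIO when sockfd becomes readable.
 * The process keeps running; the handler only records that data is ready,
 * and the normal code path then calls recv()/recvfrom(). */
#include <fcntl.h>
#include <signal.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t data_ready = 0;

static void sigio_handler(int signo)
{
    (void)signo;
    data_ready = 1;   /* keep the handler minimal; read outside it */
}

int enable_sigio(int sockfd)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = sigio_handler;
    sigemptyset(&sa.sa_mask);
    if (sigaction(SIGIO, &sa, NULL) == -1)
        return -1;

    if (fcntl(sockfd, F_SETOWN, getpid()) == -1)    /* deliver SIGIO to us */
        return -1;

    int flags = fcntl(sockfd, F_GETFL, 0);
    return fcntl(sockfd, F_SETFL, flags | O_ASYNC); /* enable signal-driven I/O */
}
```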
Asynchronous IO Model
Summary: the process does not need to block even while the data is being copied.
When an asynchronous procedure call is issued, the caller does not get the result immediately; after the input/output operation completes, the component that actually handles the call informs the caller through status, notification, or a callback.
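A minimal sketch with the POSIX aio_* interface named earlier (the descriptor, buffer size, and the polling of aio_error() are illustrative; a real program would usually rely on a completion signal or thread notification instead, and older glibc needs -lrt at link time):

```c
/* Sketch: aio_read() queues the request and returns at once; the process
 * is not blocked during either the wait-for-data or the copy stage. */
#include <aio.h>
#include <errno.h>
#include <string.h>
#include <sys/types.h>

ssize_t read_async(int fd)
{
    static char buf[1024];
    struct aiocb cb;
    memset(&cb, 0, sizeof(cb));
    cb.aio_fildes = fd;
    cb.aio_buf    = buf;
    cb.aio_nbytes = sizeof(buf);
    cb.aio_offset = 0;

    if (aio_read(&cb) == -1)          /* request queued; call returns now */
        return -1;

    /* The process could do any other work here. For brevity we just poll
     * for completion, which a real program would avoid. */
    while (aio_error(&cb) == EINPROGRESS)
        ;

    return aio_return(&cb);           /* bytes read, or -1 on error */
}
```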
Synchronous IO causes the process to block until the IO operation is complete.
Asynchronous IO does not cause the process to block.
IO multiplexing first blocks on the select call.
Comparison of 5 I/O models:
3. Introduction to select, poll, and epoll
Both epoll and select provide a mechanism for multiplexed I/O and are supported by current Linux kernels. epoll is specific to Linux, whereas select is specified by POSIX and is implemented by operating systems in general.
Select
select essentially works by setting or checking a data structure of fd flag bits and then acting on it. The disadvantages of this are:
1. The number of fds a single process can monitor is limited, i.e. the number of connections it can watch is limited.
In general this limit is closely related to system memory; the exact number can be seen with cat /proc/sys/fs/file-max. On a 32-bit machine the default is 1024, and on a 64-bit machine it is 2048.
2. Sockets are scanned linearly, i.e. by polling, which is inefficient:
When there are many sockets, each select() call has to traverse up to FD_SETSIZE sockets to complete its scheduling, and it traverses all of them no matter which ones are active. This wastes a lot of CPU time. If a callback function could instead be registered on each socket so that the relevant action runs automatically when the socket becomes active, polling would be avoided; that is exactly what epoll and kqueue do.
3. A data structure holding a large number of fds has to be maintained, and copying it between user space and kernel space is expensive.
Poll
poll is essentially no different from select: it copies the array passed in by the user into kernel space and then queries the device state behind each fd. If a device is ready it adds an entry to the result and keeps traversing; if no ready device is found after traversing all the fds, the current process is suspended until a device becomes ready or the call times out, after which it is woken up and traverses the fds all over again. This process goes through many pointless loops.
It has no limit on the maximum number of connections because it stores the descriptors in a linked list, but it still has drawbacks:
1. The whole array of fds is copied between user space and the kernel address space on every call, whether or not the copy is meaningful.
2. poll is also "level-triggered": if an fd is reported but not handled, the next poll will report that fd again.
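For comparison with the select() sketch earlier, the same readiness loop with poll() might look like this (again, `sockfds`/`nfds` and the size cap are illustrative assumptions):

```c
/* Sketch of the poll() loop: the watched descriptors live in an array of
 * struct pollfd that is handed to the kernel on every call. */
#include <poll.h>
#include <stdio.h>
#include <sys/socket.h>
#include <sys/types.h>

#define MAX_FDS 256

void serve_with_poll(const int *sockfds, int nfds)
{
    char buf[1024];
    struct pollfd pfds[MAX_FDS];

    if (nfds > MAX_FDS)
        nfds = MAX_FDS;
    for (int i = 0; i < nfds; i++) {
        pfds[i].fd = sockfds[i];
        pfds[i].events = POLLIN;              /* interested in readability */
    }

    for (;;) {
        /* The whole pfds array is copied to the kernel on each call. */
        if (poll(pfds, nfds, -1) < 0) {
            perror("poll");
            return;
        }
        for (int i = 0; i < nfds; i++)        /* still a linear scan */
            if (pfds[i].revents & POLLIN) {
                ssize_t n = recv(pfds[i].fd, buf, sizeof(buf), 0);
                if (n > 0)
                    printf("fd %d: read %zd bytes\n", pfds[i].fd, n);
            }
    }
}
```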
Epoll:
epoll supports both level triggering and edge triggering. Its most distinctive feature is edge triggering, which only tells the process which fds have just changed to the desired state, and notifies only once. Another feature is that epoll uses an "event" readiness-notification scheme: fds are registered through epoll_ctl(), and once an fd becomes ready the kernel uses a callback-like mechanism to activate it, so that epoll_wait() receives the notification.
Advantages of Epoll:
1. There is no limit on the maximum number of concurrent connections; the upper bound on the number of fds that can be opened is far larger than 1024 (roughly 100,000 connections can be watched with 1 GB of memory).
2. Efficiency is improved: epoll does not poll, so efficiency does not fall as the number of fds grows; only active fds invoke their callback function. In other words, epoll's biggest advantage is that it only cares about your "active" connections and is unaffected by the total number of connections, so in a real network environment epoll is far more efficient than select and poll.
3. Memory copying: epoll maps memory with mmap() to speed up message delivery to and from kernel space; that is, epoll uses mmap to reduce copying overhead.
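A minimal sketch of the epoll usage pattern these points describe (assuming `listenfd` is a listening TCP socket; error handling is trimmed for brevity):

```c
/* Sketch: register descriptors once with epoll_ctl(); epoll_wait() then
 * returns only the ready ones, so there is no per-call scan of every
 * connection. */
#include <stdio.h>
#include <sys/epoll.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

void serve_with_epoll(int listenfd)
{
    char buf[1024];
    struct epoll_event ev, events[64];

    int epfd = epoll_create1(0);
    if (epfd < 0) {
        perror("epoll_create1");
        return;
    }

    ev.events  = EPOLLIN;           /* level-triggered; OR in EPOLLET for edge */
    ev.data.fd = listenfd;
    epoll_ctl(epfd, EPOLL_CTL_ADD, listenfd, &ev);      /* register once */

    for (;;) {
        int n = epoll_wait(epfd, events, 64, -1);       /* only ready fds return */
        for (int i = 0; i < n; i++) {
            int fd = events[i].data.fd;
            if (fd == listenfd) {                       /* new connection */
                int conn = accept(listenfd, NULL, NULL);
                if (conn < 0)
                    continue;
                ev.events  = EPOLLIN;
                ev.data.fd = conn;
                epoll_ctl(epfd, EPOLL_CTL_ADD, conn, &ev);
            } else {                                    /* readable client socket */
                ssize_t r = recv(fd, buf, sizeof(buf), 0);
                if (r <= 0) {                           /* closed or error */
                    epoll_ctl(epfd, EPOLL_CTL_DEL, fd, NULL);
                    close(fd);
                }
            }
        }
    }
}
```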
Summary of the differences between select, poll, and epoll:
1. Maximum number of connections a single process can open
select: The maximum number of connections a single process can open is defined by the FD_SETSIZE macro, whose size is that of 32 integers (on a 32-bit machine that is 32*32; on a 64-bit machine FD_SETSIZE is 32*64). We can of course modify it and recompile the kernel, but performance may suffer, which would need further testing.
poll: Essentially the same as select, except that it has no maximum-connection limit because it is stored in a linked list.
epoll: The number of connections does have an upper bound, but it is large: a machine with 1 GB of memory can open roughly 100,000 connections, and one with 2 GB roughly 200,000.
2. IO efficiency as the number of fds grows sharply
select: Because every call traverses the connections linearly, an increase in fds causes a "linear drop in performance" due to the slow traversal.
poll: Same as select.
epoll: Because the kernel implementation of epoll is based on a callback attached to each fd, only active sockets trigger the callback, so when few sockets are active epoll does not show the linear performance drop of the previous two. When all sockets are active, however, there can still be performance problems.
3. Message delivery method
select: The kernel has to deliver the result to user space, which requires a copy from the kernel.
poll: Same as select.
epoll: epoll is implemented by sharing a piece of memory between the kernel and user space.
Summary:
In short, the choice among select, poll, and epoll should be based on the specific use case and on the characteristics of each of the three mechanisms.
1. On the surface epoll performs best, but when the number of connections is small and the connections are all very active, select and poll may actually outperform epoll, since epoll's notification mechanism involves many function callbacks.
2. select is inefficient because it has to poll every time. But inefficiency is relative: it depends on the situation and can also be mitigated by good design.