Generalized synchronous/asynchronous, blocking/non-blocking
Synchronous vs. Asynchronous
Synchronous and asynchronous describe the message communication mechanism: how a party learns that a message (event) of interest has occurred. With synchronous handling, the caller must actively wait for the message and check its state itself; with asynchronous handling, the caller is notified of the message through a triggering mechanism (such as a callback or a signal).
Blocking vs. Non-blocking
Blocking and non-blocking describe the state of the program while it waits for the result of a call (a message or a return value).
A blocking call suspends the current thread until the result is available; the calling thread returns only after the result is obtained.
A non-blocking call does not suspend the current thread: if the result is not immediately available, the call returns at once (possibly with an error telling the caller that no result is available yet).
Synchronous/asynchronous and blocking/non-blocking in the Linux/Unix network I/O model
For a network I/O operation, two system objects are involved: the process (or thread) that issues the I/O call, and the system kernel. An input operation usually consists of two distinct phases:

Phase 1: waiting for the data to be ready
Phase 2: copying the data from the kernel to the process

For an input operation on a socket, the first phase usually involves waiting for data to arrive from the network; when the awaited data arrives, it is copied into a buffer inside the kernel. The second phase copies the data from the kernel buffer into the application buffer.
The 5 I/O models available on Unix

Blocking I/O model
By default, all sockets are blocking.
When the user process invokes the recvfrom system call, the kernel begins the first phase of the I/O: preparing the data. For network I/O, the data often has not arrived yet when the call is made (for example, a complete UDP datagram has not been received), so the kernel must wait for enough data to arrive; on the user-process side, the entire process is blocked. When the kernel has the data ready, it copies the data from kernel space into user memory and returns the result; only then does the user process leave the blocked state and resume running.

The defining feature of blocking I/O, then, is that the process is blocked during both phases of the I/O operation: it remains blocked from the moment recvfrom is called until recvfrom returns.
non-blocking I/O model
Under Linux, a socket can be made non-blocking by setting a flag on it.
Setting a socket to non-blocking tells the kernel: when a requested I/O operation would have to put the process to sleep to complete, do not put the process to sleep; return an error instead.
When the user process issues a read operation and the data in the kernel is not yet ready, the kernel does not block the user process but immediately returns an error. From the user's point of view, the read operation returns a result right away, without waiting. Seeing that the result is an error, the process knows the data is not ready and can issue the read again. Once the data in the kernel is ready and the user process's system call arrives again, the kernel copies the data into user memory and returns. The user process therefore has to poll: it must keep actively asking the kernel whether the data is ready.
Note that recvfrom always returns immediately while the data is not ready.
I/O multiplexing model
The benefit of select/epoll is that a single process can handle the I/O of multiple network connections at the same time. The basic principle is that select/epoll continually polls all the sockets it is responsible for and notifies the user process when any socket has data available.
When a user process invokes select, the entire process is blocked while the kernel monitors all the sockets select is responsible for; as soon as the data in any one of those sockets is ready, select returns. The user process then invokes the read operation to copy the data from the kernel to user space.
The advantage of select is not that it handles a single connection faster, but that it can handle many connections. (So when the number of connections is not very high, a select/epoll web server is not necessarily faster than a multi-threaded, blocking-I/O web server, and may even have higher latency.)
(With select, the blocking now happens in the select call itself. Internally, select loops over and over, querying whether any socket's data is ready. In effect, select encapsulates the polling work of non-blocking I/O; it is still essentially a synchronous operation.)
Signal-driven I/O
Rarely used in practice.
I/O multiplexing and signal-driven I/O
Both of these approaches are asynchronous at the level of business-logic processing, but they are still synchronous at the I/O level.
asynchronous I/O model
These functions work by telling the kernel to start an operation and to notify us when the entire operation is complete, including the copy of the data from the kernel into user space.
Once the user process initiates the read, it can immediately go on to do other things. From the kernel's point of view, when it receives an asynchronous read request it returns immediately, so the user process is never blocked. The kernel then waits for the data to be ready, copies the data into user memory, and when everything is done sends a signal to the user process telling it that the read has completed.
Note that, as the red line in the diagram indicates, the call returns immediately; we are notified only when the whole operation has completed.
Summary
The first four I/O models are all synchronous I/O operations. They differ only in phase 1; their phase 2 is the same: while data is copied from the kernel to the application buffer (user space), the process is blocked in the recvfrom call.
In contrast, the asynchronous I/O model hands both phases to the kernel.
POSIX's strict definition of asynchronous I/O requires no blocking at all. The first four models all block to varying degrees, and they share one common blocking point: the process must wait while the kernel copies data into the process's address space.
POSIX defines the two terms as follows:
Synchronous I/O operation: causes the requesting process to block until the I/O operation completes. Asynchronous I/O operation: does not cause the requesting process to block.
Blocking vs. non-blocking: whether the process/thread must wait for the data to become ready before it can access it (phase 1).
Synchronous vs. asynchronous: how the data is accessed. With synchronous I/O the process must actively read or write the data itself, and it still blocks while doing so; with asynchronous I/O the process only needs a notification that the I/O operation is complete, it does not read or write the data itself, the operating-system kernel completes the read/write on its behalf (phase 1 + phase 2).
When handling I/O, both blocking and non-blocking calls are synchronous I/O; asynchronous I/O requires special APIs.
POSIX defines these two terms as follows:
· A synchronous I/O operation causes the requesting process to be blocked until that I/O operation completes.
· An asynchronous I/O operation does not cause the requesting process to be blocked.
Using these definitions, the first four I/O models (blocking, non-blocking, I/O multiplexing, and signal-driven I/O) are all synchronous, because the actual I/O operation (recvfrom) blocks the process. Only the asynchronous I/O model matches the asynchronous I/O definition.
From UNIX Network Programming, Third Edition, Section 6.2.
The difference between the two is that with synchronous I/O the process blocks during the I/O operation. By this definition, the first four models above are all synchronous I/O. One might object that non-blocking I/O was never blocked. Here lies a subtle point: the "I/O operation" in the definition refers to the real I/O operation, namely the recvfrom system call. With non-blocking I/O, if the kernel data is not ready when recvfrom is executed, the call does not block the process. But when the data in the kernel is ready, recvfrom copies it from the kernel into user memory, and during that time the process is blocked. Asynchronous I/O is different: once the process initiates the I/O operation it simply returns and ignores it until the kernel sends a signal telling the process the I/O is complete; throughout, the process is never blocked.
The comparison of the I/O models is shown in the following illustration:
For Unix: blocking I/O (the default), non-blocking I/O (nonblock), and I/O multiplexing (select/poll/epoll) are all synchronous I/O, because in the second phase, when data is copied from kernel space back to the process buffer (recvfrom), the process blocks and cannot do anything else. Only the asynchronous I/O model (AIO) matches the asynchronous I/O definition: both (1) waiting for the data to be ready and (2) copying it from kernel space back to the process buffer are done by the kernel, and the process can do other things while waiting for the completion notification.
Sending and receiving data on blocking/non-blocking sockets
Send: send (TCP), sendto (UDP)
First note that, whether the socket is blocking or non-blocking, sending means copying data from the application buffer into the kernel send buffer (unless the send buffer size has been set to 0 with the SO_SNDBUF option).
A send operation in blocking mode waits until all the data has been copied into the send buffer before returning.
For example, suppose the send buffer's total size is 8192 bytes, 8000 bytes have already been copied into it, and 192 bytes remain free. If we now need to send 2000 bytes, a blocking send waits until there is enough buffer space to hold all 2000 bytes: it first copies in 192 bytes, and once the buffer has successfully sent out 1808 bytes of the earlier data, it copies the remaining 1808 bytes from the application buffer into the kernel buffer; only then does the send operation return the number of bytes successfully copied.
From this it is easy to see that a blocking send always returns exactly the length you asked it to send.
A sendto operation in blocking mode does not block.
The reason is that UDP has no true send buffer: all it does is copy the application buffer down into the lower protocol stack, adding the UDP and IP headers along the way, so there is nothing to actually block on.
A send call in non-blocking mode returns immediately.
"Returns immediately" does not mean it did nothing. Taking the blocking-send example again: if the buffer has only 192 bytes free but we need to send 2000 bytes, the call returns immediately with a return value of 192. A non-blocking send simply copies as much data as it can into the buffer, so it may return a value smaller than the length you asked it to send.
What if the buffer has no space at all? The call still returns immediately, but with a WSAEWOULDBLOCK/EWOULDBLOCK error: no data could be copied into the buffer, and you should back off briefly and try sending again.
A sendto operation in non-blocking mode does not block either (same behavior as in blocking mode, not repeated here).
Receive: recv (TCP), recvfrom (UDP)
In blocking mode, recv/recvfrom block until the buffer contains at least one byte (TCP) or one complete UDP datagram, and only then return.
When no data has arrived, a call to them sleeps and does not return.
In non-blocking mode, recv/recvfrom return immediately.
If the buffer holds any data at all (at least one byte for TCP, or one complete UDP datagram), they return the size of the received data. If there is no data, they return the WSAEWOULDBLOCK/EWOULDBLOCK error.