Original author: Haizi
Source: http://www.cnblogs.com/dolphin0520/
This article belongs to the author Haizi and to cnblogs.com. Reprinting is welcome, but without the author's consent this notice must be retained and a link to the original must be placed prominently on the page; otherwise the author reserves the right to pursue legal responsibility.
Many readers find Java NIO a little hard to learn because many of the concepts behind it are not that clear. Before we get into Java NIO programming, let's cover some of the basics today: I/O models. This article starts with the concepts of synchronous and asynchronous, then explains the difference between blocking and non-blocking, then the difference between blocking IO and non-blocking IO, then the difference between synchronous IO and asynchronous IO, then introduces the five IO models, and finally introduces two design patterns related to high-performance IO design (Reactor and Proactor).
The following is the directory outline for this article:
I. What is synchronization? What is asynchrony?
II. What is blocking? What is non-blocking?
III. What is blocking IO? What is non-blocking IO?
IV. What is synchronous IO? What is asynchronous IO?
V. The five IO models
VI. Two high-performance IO design patterns
If there are any mistakes, please bear with me; criticism and corrections are welcome.
Please respect the author's labor results, reproduced please indicate the original link:
http://www.cnblogs.com/dolphin0520/p/3916526.html
I. What is synchronization? What is asynchrony?
The concepts of synchronous and asynchronous have been around for a long time, and there are many different explanations of them online. Here is my personal understanding:
Synchronous: if multiple tasks or events are to occur, they must be carried out one at a time; the execution of one event or task causes the whole process to wait temporarily, so these events cannot run in parallel.
Asynchronous: if multiple tasks or events are to occur, they can execute concurrently; the execution of one event or task does not cause the whole process to wait temporarily.
That is the difference between synchronous and asynchronous. As a simple example, suppose a task includes two subtasks A and B. Under synchronization, while A is executing, B waits until A finishes before it can run. Under asynchrony, A and B execute concurrently: B does not have to wait for A to finish, so the execution of A does not make the whole task wait temporarily.
If that is still unclear, first look at the following two pieces of code:
void fun1() {
}

void fun2() {
}

void function() {
    fun1();
    fun2();
    ......
}
This code is typical synchronization: inside the method function, fun1 prevents the subsequent fun2 from running while it executes; fun2 must wait for fun1 to finish before it can execute.
Then look at the following code:
void fun1() {
}

void fun2() {
}

void function() {
    new Thread() {
        public void run() {
            fun1();
        }
    }.start();

    new Thread() {
        public void run() {
            fun2();
        }
    }.start();
    ......
}
This code is typically asynchronous: the execution of fun1 does not affect the execution of fun2, and neither fun1 nor fun2 causes the subsequent execution flow to wait temporarily.
In fact, synchronous and asynchronous are very broad concepts. Their focus is on whether, when multiple tasks and events occur, the occurrence or execution of one event causes the whole process to wait temporarily. I think it helps to relate synchronous and asynchronous to the synchronized keyword in Java. When multiple threads access a variable at the same time, each thread's access to the variable is an event. For synchronization, the threads must access the variable one at a time: while one thread accesses it, the others must wait. For asynchrony, multiple threads do not need to take turns and can access the variable at the same time.
So synchronization and asynchrony can manifest themselves in many ways, but the key point to remember is: when multiple tasks and events occur, does the occurrence or execution of one event cause the whole process to wait temporarily? In general, asynchrony can be implemented with multiple threads, but remember not to put an equals sign between multithreading and asynchrony. Asynchrony is only a macro-level pattern; using multithreading to achieve it is just one means, and asynchrony can also be achieved with multiple processes.
II. What is blocking? What is non-blocking?
The previous section described the difference between synchronous and asynchronous; this section looks at the difference between blocking and non-blocking.
Blocking: while a task or event is executing, it issues a request operation, and because the condition the request needs is not satisfied, it waits there until the condition is satisfied.
Non-blocking: while a task or event is executing, it issues a request operation, and if the condition the request needs is not satisfied, it immediately returns a flag message indicating that the condition is not met, and does not wait there.
That is the difference between blocking and non-blocking: when a request is made and its condition is not met, does it wait, or does it return a flag message?
To give a simple example:
Suppose I want to read the contents of a file. If the file has no readable content yet, then with blocking, I wait there until the file has content to read; with non-blocking, a flag is returned directly, informing me that the file has no readable content for now.
Some people online equate synchronous/asynchronous with blocking/non-blocking; in fact, they are two completely different pairs of concepts. Note that understanding the difference between these two pairs is important for understanding the IO models that follow.
Synchronous and asynchronous focus on the execution of multiple tasks: whether the execution of one task causes the whole process to wait temporarily.
Blocking and non-blocking focus on a single request operation: when the condition for the operation is not met, does it wait, or does it return a flag message saying the condition is not satisfied?
Blocking and non-blocking can be understood the same way as thread blocking: when a thread issues a request operation and the condition is not met, it blocks, that is, it waits for the condition to be satisfied.
III. What is blocking IO? What is non-blocking IO?
Before understanding blocking IO and non-blocking IO, let's first look at how a concrete IO operation proceeds.
Typically, IO operations include reading and writing the disk, reading and writing sockets, and reading and writing peripherals.
When a user thread initiates an IO request operation (this article takes a read request as the example), the kernel checks whether the data to be read is ready. For blocking IO, if the data is not ready, it waits there until the data is ready; for non-blocking IO, if the data is not ready, a flag message is returned informing the user thread that the data to be read is not ready yet. When the data is ready, it is copied to the user thread, which completes a full IO read request. In other words, a complete IO read request consists of two phases:
1) checking whether the data is ready;
2) copying the data (the kernel copies the data to the user thread).
The difference between blocking IO and non-blocking IO lies in the first phase: while checking whether the data is ready, if it is not, does the operation wait there, or does it directly return a flag message?
In Java, traditional IO is blocking IO. For example, when reading data through a socket, after the read() method is called, if the data is not ready, the current thread blocks at the read() call until there is data to return. With non-blocking IO, when the data is not ready, read() should return a flag informing the current thread that the data is not ready, rather than waiting there the whole time.
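A minimal sketch of this difference, using java.nio. A Pipe stands in for a real socket so the example is self-contained; a SocketChannel behaves the same way once configureBlocking(false) is called. The class name NonBlockingReadDemo is just for illustration.

```java
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;

public class NonBlockingReadDemo {
    public static void main(String[] args) throws Exception {
        Pipe pipe = Pipe.open();
        Pipe.SourceChannel source = pipe.source();
        source.configureBlocking(false);           // switch the channel to non-blocking mode

        ByteBuffer buf = ByteBuffer.allocate(64);
        int n = source.read(buf);                  // no data yet: returns 0 instead of blocking
        System.out.println(n);

        pipe.sink().write(ByteBuffer.wrap("hi".getBytes()));
        while ((n = source.read(buf)) == 0) {
            // poll until the data arrives; a blocking read would simply wait here instead
        }
        System.out.println(n);
    }
}
```

The first read returns immediately with 0 because nothing has been written yet; after the sink writes two bytes, the polling loop picks them up.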
IV. What is synchronous IO? What is asynchronous IO?
Let's first look at the definitions of synchronous IO and asynchronous IO given in the book UNIX Network Programming:
A synchronous I/O operation causes the requesting process to be blocked until that I/O operation completes.
An asynchronous I/O operation does not cause the requesting process to be blocked.
From the literal meaning: with synchronous IO, if a thread requests an IO operation, that thread is blocked until the IO operation completes;
with asynchronous IO, a thread requesting an IO operation is not blocked.
In fact, the synchronous and asynchronous IO models concern the interaction between the user thread and the kernel:
Synchronous IO: after the user thread issues an IO request operation, if the data is not ready, either the user thread or the kernel must continually poll whether the data is ready; when the data is ready, it is copied from the kernel to the user thread.
Asynchronous IO: only issuing the IO request operation is done by the user thread; both phases of the IO operation are completed automatically by the kernel, which then sends a notification informing the user thread that the IO operation is complete. That is, in asynchronous IO, the user thread is never blocked.
This is the key difference between synchronous IO and asynchronous IO: it shows up in whether the data-copy phase is carried out by the user thread or by the kernel. Therefore, asynchronous IO requires underlying support from the operating system.
Note that synchronous IO and asynchronous IO are a different pair of concepts from blocking IO and non-blocking IO.
Blocking IO and non-blocking IO show up when the user requests an IO operation: if the data is not ready, does the user thread keep waiting for the data, or does it receive a flag message instead? That is, blocking IO and non-blocking IO show up in the first phase of the IO operation, in how the readiness check is handled.
V. The five IO models
UNIX Network Programming mentions five IO models: blocking IO, non-blocking IO, multiplexed IO, signal-driven IO, and asynchronous IO.
Let's look at the differences between these five IO models.
1. Blocking IO Model
This is the most traditional IO model: blocking occurs both while reading and while writing data.
When the user thread issues an IO request, the kernel checks whether the data is ready. If not, it waits for the data to become ready, and the user thread blocks, surrendering the CPU. When the data is ready, the kernel copies it to the user thread and returns the result to it; only then does the user thread leave the blocked state.
A typical example of the blocking IO model is:
data = socket.read();
If the data is not ready, the thread stays blocked in the read() method.
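A runnable sketch of that blocking behavior, using a PipedInputStream in place of a socket so the example is self-contained (the stream names and the BlockingReadDemo class are only for illustration). The reader genuinely blocks until a writer thread produces a byte:

```java
import java.io.PipedInputStream;
import java.io.PipedOutputStream;

public class BlockingReadDemo {
    public static void main(String[] args) throws Exception {
        PipedOutputStream out = new PipedOutputStream();
        PipedInputStream in = new PipedInputStream(out);

        // A writer thread supplies the data only after a short delay,
        // so the reader below has nothing to read at first.
        new Thread(() -> {
            try {
                Thread.sleep(200);
                out.write('A');
            } catch (Exception ignored) { }
        }).start();

        long start = System.nanoTime();
        int b = in.read();                        // blocks here until the writer produces a byte
        long waitedMs = (System.nanoTime() - start) / 1_000_000;

        System.out.println((char) b);
        System.out.println(waitedMs >= 100);      // the read really did wait for the data
    }
}
```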
2. Non-blocking IO model
When the user thread initiates a read operation, it does not need to wait: it gets a result immediately. If the result is an error, it knows the data is not ready yet, so it can issue the read operation again. Once the kernel's data is ready and it receives another request from the user thread, it immediately copies the data to the user thread and returns.
So in the non-blocking IO model, the user thread must constantly ask whether the kernel's data is ready; that is, non-blocking IO never hands over the CPU and keeps consuming it.
A typical non-blocking IO model generally looks like this:
while (true) {
    data = socket.read();
    if (data != error) {
        // process the data
        break;
    }
}
However, non-blocking IO has a very serious problem: the while loop must constantly ask the kernel whether the data is ready, which leads to very high CPU usage. So in practice a while loop is rarely used to read data this way.
3. Multiplexed IO model
The multiplexed IO model is the model most used nowadays. Java NIO is in fact multiplexed IO.
In the multiplexed IO model, a single thread continually polls the state of multiple sockets, and the actual IO read/write operation is invoked only when a socket really has a read or write event. Because in the multiplexed IO model one thread is enough to manage multiple sockets, the system does not need to create or maintain new processes or threads, and IO resources are used only when a real socket read/write event occurs, so resource consumption is greatly reduced.
In Java NIO, Selector.select() is used to query whether each channel has an event pending; if no event has arrived, the call stays blocked there, so this approach can still cause the user thread to block.
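A minimal sketch of that Selector flow. A Pipe source stands in for client sockets (real code would register SocketChannels); the class name SelectorDemo is only for illustration:

```java
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

public class SelectorDemo {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();

        Pipe pipe = Pipe.open();
        pipe.source().configureBlocking(false);    // channels must be non-blocking to register
        pipe.source().register(selector, SelectionKey.OP_READ);

        pipe.sink().write(ByteBuffer.wrap("ready".getBytes()));  // make a read event arrive

        int readyChannels = selector.select();     // blocks until at least one channel is ready
        System.out.println(readyChannels);

        for (SelectionKey key : selector.selectedKeys()) {
            if (key.isReadable()) {
                // only now do we pay for an actual read, on the channel that has data
                ByteBuffer buf = ByteBuffer.allocate(64);
                int n = ((Pipe.SourceChannel) key.channel()).read(buf);
                System.out.println(n);
            }
        }
        selector.close();
    }
}
```

One thread could register many channels with the same Selector and handle whichever ones select() reports as ready.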
Some may say that a similar effect can be achieved with multithreading plus blocking IO. But with multithreading plus blocking IO, each socket corresponds to one thread, which causes heavy resource consumption, especially for long-lived connections, whose threads are never released; if many connections arrive later, this causes a performance bottleneck.
In the multiplexed IO model, multiple sockets can be managed through a single thread, and resources are consumed for actual read and write operations only when a socket really has a read/write event. So multiplexed IO is better suited to situations with a large number of connections.
The reason multiplexed IO is more efficient than the non-blocking IO model is that in non-blocking IO it is the user thread that constantly queries the socket state, whereas in multiplexed IO the polling of each socket's state is done by the kernel, which is much more efficient than the user thread.
Note, however, that the multiplexed IO model detects arriving events by polling and responds to them one by one. Therefore, once the response to one event takes a long time, subsequent events are delayed, and polling for new events is affected.
4. Signal-driven IO model
In the signal-driven IO model, when the user thread initiates an IO request operation, it registers a signal handler for the corresponding socket and then continues executing. When the kernel's data is ready, a signal is sent to the user thread; after receiving the signal, the user thread invokes the IO read/write operation inside the signal handler to carry out the actual IO request.
5. Asynchronous IO Model
The asynchronous IO model is the most ideal IO model. In it, when the user thread initiates a read operation, it can immediately start doing other things. From the kernel's point of view, when it receives an asynchronous read, it returns immediately, indicating that the read request has been successfully initiated, so the user thread is not blocked. The kernel then waits for the data to be ready and copies it to the user thread; when all of this is done, the kernel sends the user thread a signal that the read operation is complete. In other words, the user thread does not need to care how the whole IO operation is actually carried out: it only needs to initiate a request, and when it receives the kernel's success signal, the IO operation is complete and it can use the data directly.
That is, in the asynchronous IO model, neither phase of the IO operation blocks the user thread; both are completed automatically by the kernel, which then sends a signal informing the user thread that the operation is complete. The user thread does not need to call an IO function to perform the concrete read or write. This differs from the signal-driven model: there, the signal tells the user thread that the data is ready, and the user thread must then call an IO function itself to do the actual read/write; in the asynchronous IO model, the signal means the IO operation has already been completed, and no IO call is needed in the user thread for the actual read/write.
Note that asynchronous IO requires underlying support from the operating system; Java 7 introduced asynchronous IO.
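A minimal sketch of Java 7's asynchronous IO, using AsynchronousFileChannel. The temp-file setup and the AsyncReadDemo class name are only for illustration; note that whether the work happens in true kernel AIO or in a JVM-managed thread pool depends on the platform:

```java
import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousFileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.concurrent.Future;

public class AsyncReadDemo {
    public static void main(String[] args) throws Exception {
        Path path = Files.createTempFile("aio-demo", ".txt");
        Files.write(path, "hello".getBytes());

        AsynchronousFileChannel channel =
                AsynchronousFileChannel.open(path, StandardOpenOption.READ);
        ByteBuffer buf = ByteBuffer.allocate(16);

        Future<Integer> pending = channel.read(buf, 0);  // returns immediately; the read proceeds in the background
        // ... the user thread is free to do other work here ...
        int n = pending.get();                           // collect the result only when we actually need it

        System.out.println(n);
        System.out.println(new String(buf.array(), 0, n));

        channel.close();
        Files.delete(path);
    }
}
```

Instead of a Future, a CompletionHandler callback can be passed to read(), which is even closer to the "kernel notifies the user thread" description above.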
The first four IO models are actually all synchronous IO; only the last is truly asynchronous IO, because whether it is multiplexed IO or the signal-driven model, the second phase of the IO operation, copying the data from the kernel, still causes the user thread to block.
VI. Two high-performance IO design patterns
In traditional network service design, there are two classic patterns:
one is multithreading, the other is the thread pool.
In the multithreaded pattern, whenever a client arrives, the server creates a new thread to handle that client's read and write events, as shown in the following figure:
This pattern is simple to handle, but because the server uses one thread for each client connection, resource usage is very large. So when the number of connections reaches the upper limit and more users request connections, it directly leads to a resource bottleneck and, in serious cases, may crash the server outright.
Therefore, to solve the problem caused by one thread per client, the thread pool was proposed: create a fixed-size thread pool; when a client arrives, take an idle thread from the pool to handle it, and when the client's read and write operations are finished, hand the thread back. This avoids the resource waste of creating a thread for every client, so that threads can be reused.
But the thread pool has its drawbacks too: if the connections are mostly long-lived, the pool's threads may all be occupied for a long period, and then when another user requests a connection, the client connection fails because no idle thread is available to handle it, which hurts the user experience. So the thread pool is better suited to applications with large numbers of short-lived connections.
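A minimal sketch of the thread-pool idea using java.util.concurrent. The "clients" here are just print tasks standing in for connection handlers; the class name and pool size are only for illustration:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ThreadPoolServerSketch {
    public static void main(String[] args) throws Exception {
        // A fixed pool of 2 worker threads; "clients" beyond that wait in the
        // queue until a worker becomes idle, mirroring the behavior described above.
        ExecutorService pool = Executors.newFixedThreadPool(2);

        for (int client = 1; client <= 4; client++) {
            final int id = client;
            pool.submit(() -> System.out.println("handled client " + id));
        }

        pool.shutdown();                       // accept no new clients
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

If all workers are busy with long-lived connections, new tasks simply sit in the queue, which is exactly the drawback the paragraph above describes.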
As a result, the following two high-performance IO design patterns emerged: Reactor and Proactor.
In the Reactor pattern, each client first registers the events it is interested in; then a dedicated thread polls each client for events. When events occur, that thread handles them one by one, and when all events have been handled, it goes back to polling, as shown in the following figure:
It can be seen from this that the multiplexed IO among the five models above adopts the Reactor pattern. Note that the figure above shows each event being handled sequentially; of course, to increase event-handling speed, events can also be processed with multiple threads or a thread pool.
In the Proactor pattern, when an event is detected, a new asynchronous operation is initiated and handed to the kernel thread to process; when the kernel thread completes the IO operation, it sends a notification that the operation is finished. It can be seen that the asynchronous IO model adopts the Proactor pattern.
Resources:
"UNIX Network Programming"
http://blog.csdn.net/goldensuny/article/details/30717107
http://my.oschina.net/XYleung/blog/295122
http://xmuzyq.iteye.com/blog/783218
http://www.cnblogs.com/ccdev/p/3542669.html
http://alicsd.iteye.com/blog/868702
http://www.smithfox.com/?e=191
http://www.cnblogs.com/Anker/p/3254269.html
http://blog.csdn.net/hguisu/article/details/7453390
http://www.cnblogs.com/dawen/archive/2011/05/18/2050358.html