Java Network I/O model

Source: Internet
Author: User
Tags: epoll

Network I/O model

Many people wonder why I/O models matter. When the web first appeared, traffic was light. In recent years the scale of network applications has grown steadily, and application architectures have had to change with it. The C10K problem forced engineers to think hard about service performance and application concurrency.

Network applications deal with essentially two kinds of work: network I/O and data computation. Compared with computation, network I/O has higher latency and is more often the application's performance bottleneck. The network I/O models fall broadly into the following categories:

    • Synchronous models (synchronous I/O)
      • Blocking I/O
      • Non-blocking I/O
      • Multiplexed I/O (I/O multiplexing)
      • Signal-driven I/O
    • Asynchronous I/O

The essence of network I/O is reading from a socket. In Linux a socket is abstracted as a stream, and I/O can be understood as operations on that stream. Such an operation has two phases:

    1. Waiting for the stream's data to be ready (waiting for the data to be ready).
    2. Copying the data from the kernel to the process (copying the data from the kernel to the process).

For a socket stream specifically:

    • The first phase usually means waiting for a packet to arrive over the network and be copied into a kernel buffer.
    • The second phase is copying the data from the kernel buffer into the application process's buffer.
I/O model

A simple metaphor helps with these models. Network I/O is like fishing: waiting for a bite is the wait for the data to be ready, and pulling the fish ashore is the kernel-copy phase. The person fishing is the application process.

Blocking I/O

Blocking I/O is the most prevalent I/O model, and it matches the way most people naturally think: a blocked process is put to sleep while the CPU goes off to run other processes. For network I/O, the process issues a recvfrom system call and then blocks, doing nothing until the data is ready and has been copied from the kernel into the user process; only then does it handle the data. The process is blocked through both phases and can handle no other network I/O in the meantime. Roughly:


[Figure 1: blocking I/O]

This is like casting the rod and then standing on the shore doing nothing until a fish bites, then casting again and waiting for the next one. While waiting we do nothing at all, and presumably let our minds wander.

Blocking I/O is characterized by blocking in both phases of the I/O operation.
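The thread-per-call blocking pattern above can be sketched with classic java.net sockets. This is a minimal illustrative echo exchange (the class name, port choice, and messages are our assumptions, not from the original), run entirely over loopback so it is self-contained:

```java
import java.io.*;
import java.net.*;

public class BlockingEcho {
    // Returns the line the client received back from the server.
    static String run() throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {   // ephemeral port
            int port = server.getLocalPort();

            // Client in another thread so the demo is self-contained.
            final String[] got = new String[1];
            Thread client = new Thread(() -> {
                try (Socket s = new Socket("127.0.0.1", port)) {
                    s.getOutputStream().write("ping\n".getBytes("UTF-8"));
                    BufferedReader in = new BufferedReader(
                            new InputStreamReader(s.getInputStream(), "UTF-8"));
                    got[0] = in.readLine();                 // blocks until the echo
                } catch (IOException e) { throw new UncheckedIOException(e); }
            });
            client.start();

            // accept() blocks until a connection arrives; readLine() blocks
            // until the data has been copied into our buffer (the two phases).
            try (Socket conn = server.accept()) {
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(conn.getInputStream(), "UTF-8"));
                String line = in.readLine();
                conn.getOutputStream().write(("echo:" + line + "\n").getBytes("UTF-8"));
            }
            client.join();
            return got[0];
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run());   // prints "echo:ping"
    }
}
```

While `accept()` and `readLine()` are blocked, this thread can do nothing else, which is exactly the limitation the model describes.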

Non-blocking I/O

For network I/O, non-blocking I/O also issues a recvfrom system call and checks whether the data is ready. Unlike blocking I/O, non-blocking I/O divides one large chunk of waiting into many small checks, so the process keeps getting chances to be scheduled on the CPU.

That is, after a non-blocking recvfrom call the process is not blocked: the kernel returns to the process immediately, and if the data is not ready it returns an error. The process can then do something else before issuing recvfrom again, repeating the cycle. This is commonly called polling: the process polls the kernel until the data is ready, at which point the data is copied to the process for handling. Note that the copy phase itself is still blocking.


[Figure 2: non-blocking I/O]

In the fishing analogy, we cast the rod and then glance at the bobber; if nothing is biting we go do something else, such as digging a few earthworms, then come back shortly to check the bobber again, repeating until a fish is hooked and we deal with it.

Non-blocking I/O is characterized by the user process repeatedly and actively asking the kernel whether the data is ready.
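A minimal sketch of this polling style using java.nio channels (the class name, sleep interval, and message are illustrative assumptions): `configureBlocking(false)` makes `read()` return 0 immediately when no data is ready, so the process can do other work between checks:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.*;
import java.nio.charset.StandardCharsets;

public class NonBlockingPoll {
    // Polls a non-blocking channel until data arrives; returns what was read.
    static String run() throws Exception {
        try (ServerSocketChannel server = ServerSocketChannel.open()) {
            server.bind(new InetSocketAddress("127.0.0.1", 0));
            int port = ((InetSocketAddress) server.getLocalAddress()).getPort();

            // A client that connects and writes after a short delay.
            Thread client = new Thread(() -> {
                try (SocketChannel c = SocketChannel.open(
                        new InetSocketAddress("127.0.0.1", port))) {
                    Thread.sleep(100);
                    c.write(ByteBuffer.wrap("hi".getBytes(StandardCharsets.UTF_8)));
                } catch (Exception e) { throw new RuntimeException(e); }
            });
            client.start();

            SocketChannel conn = server.accept();   // blocking accept, for brevity
            conn.configureBlocking(false);          // reads now return immediately

            ByteBuffer buf = ByteBuffer.allocate(16);
            // Busy-poll: read() returns 0 while no data is ready.
            while (conn.read(buf) == 0) {
                Thread.sleep(10);                   // do "something else" here
            }
            client.join();
            conn.close();
            buf.flip();
            return StandardCharsets.UTF_8.decode(buf).toString();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run());                  // prints "hi"
    }
}
```

Each pass through the `while` loop is one "check the bobber" in the analogy; the final successful `read()` is still the blocking copy phase.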

Multiplexed I/O (I/O multiplexing)

As you can see, with non-blocking calls polling takes up a large share of the process's time and burns a lot of CPU. It would be better if the polling were done by something other than the user process. Multiplexing addresses exactly that.

Multiplexing relies on special system calls such as select or poll. Here the polling happens at the kernel level, which is the difference from non-blocking user-space polling: select can wait on many sockets at once, and when any one of them has data ready it returns that socket as readable. The process then issues a recvfrom call, and the copy from kernel to user process is, of course, still blocking. Multiplexing thus blocks in two places: after calling select or poll the process blocks, but unlike plain blocking I/O, select does not wait for one socket's data to arrive in full; as soon as some socket has data ready it returns so the user process can handle it. How does it know some data has arrived? The monitoring is delegated to the kernel, which is responsible for detecting data arrival. In that sense each individual socket can be treated as non-blocking.


[Figure 3: multiplexed I/O]

Multiplexing, then, is polling over multiple sockets. In the fishing analogy, we hire a helper who can cast several rods at once; if any fish bites, he pulls that rod up. He only watches the rods for us and does nothing more, so we still have to wait beside him for a rod to come up, and then we deal with the fish ourselves. Because multiplexing can handle many I/O streams, it introduces a new problem: the order among those streams becomes nondeterministic, and of course their number can vary.

Multiplexing is characterized by a mechanism in which one process can wait on many I/O file descriptors at once: the kernel monitors these (socket) descriptors, and as soon as any of them becomes read-ready, the select/poll/epoll call returns. By monitoring mechanism, this breaks down into the three variants select, poll, and epoll.
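In Java this model is exposed through the `Selector` API, which wraps select/poll/epoll depending on the platform. A minimal sketch (the class name, client count, and message are illustrative assumptions): one thread `select()`s over an accept channel and several read channels, handling whichever becomes ready:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.*;
import java.nio.charset.StandardCharsets;
import java.util.Iterator;

public class SelectDemo {
    // One selector thread watches several sockets; returns how many messages it saw.
    static int run(int clients) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("127.0.0.1", 0));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);
        int port = ((InetSocketAddress) server.getLocalAddress()).getPort();

        // Several clients, each writing one message.
        for (int i = 0; i < clients; i++) {
            new Thread(() -> {
                try (SocketChannel c = SocketChannel.open(
                        new InetSocketAddress("127.0.0.1", port))) {
                    c.write(ByteBuffer.wrap("msg".getBytes(StandardCharsets.UTF_8)));
                } catch (IOException e) { throw new RuntimeException(e); }
            }).start();
        }

        int received = 0;
        while (received < clients) {
            selector.select();                         // blocks until some key is ready
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {
                    SocketChannel conn = server.accept();
                    conn.configureBlocking(false);
                    conn.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel conn = (SocketChannel) key.channel();
                    ByteBuffer buf = ByteBuffer.allocate(16);
                    if (conn.read(buf) > 0) received++;  // the copy phase still blocks
                    conn.close();
                }
            }
        }
        server.close();
        selector.close();
        return received;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run(3));
    }
}
```

Note the two blocking points the text describes: the thread blocks in `select()`, and again (briefly) in each `read()` that copies ready data out of the kernel.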

In all three models so far, when the user process makes its system call it must wait for the data to arrive; only the way it waits differs: waiting directly, polling itself, or having select/poll wait on its behalf. In the first phase some of them block and some do not; in the second phase all of them block. Viewed over the whole I/O operation they execute sequentially, so they can all be classified as synchronous models: the process actively checks with the kernel.

Asynchronous I/O

Asynchronous I/O, in contrast to synchronous I/O, does not execute sequentially. After the user process issues an aio_read system call, the kernel returns to the process immediately whether or not the data is ready, and the user-space process can go off and do other things. When the socket's data is ready, the kernel copies the data directly into the process and then sends the process a notification. The process is non-blocking in both phases of the I/O.


[Figure 4: asynchronous I/O]

This is yet another way to fish: this time we hire a fishing ace. He not only catches the fish but also texts us once it is landed, telling us the fish is ready. We only have to hand him the rod; then we can run off and do something else until his message arrives, and come back to deal with the fish already ashore.
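JDK 7's `java.nio.channels` AIO classes expose this model in Java. A minimal sketch (class name and message are illustrative assumptions): the accept and read are handed over with `CompletionHandler` callbacks, and the caller is free until the "notification" (the callback) fires:

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.*;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.*;

public class AioDemo {
    // The runtime completes the read and calls us back; returns the message.
    static String run() throws Exception {
        AsynchronousServerSocketChannel server =
                AsynchronousServerSocketChannel.open()
                        .bind(new InetSocketAddress("127.0.0.1", 0));
        int port = ((InetSocketAddress) server.getLocalAddress()).getPort();
        CompletableFuture<String> result = new CompletableFuture<>();

        // Analogue of aio_read: register handlers and return immediately.
        server.accept(null, new CompletionHandler<AsynchronousSocketChannel, Void>() {
            public void completed(AsynchronousSocketChannel conn, Void att) {
                ByteBuffer buf = ByteBuffer.allocate(16);
                conn.read(buf, buf, new CompletionHandler<Integer, ByteBuffer>() {
                    public void completed(Integer n, ByteBuffer b) {
                        b.flip();   // data was already copied in for us
                        result.complete(StandardCharsets.UTF_8.decode(b).toString());
                    }
                    public void failed(Throwable t, ByteBuffer b) {
                        result.completeExceptionally(t);
                    }
                });
            }
            public void failed(Throwable t, Void att) {
                result.completeExceptionally(t);
            }
        });

        // Client writes one message; meanwhile the main thread is free.
        try (AsynchronousSocketChannel c = AsynchronousSocketChannel.open()) {
            c.connect(new InetSocketAddress("127.0.0.1", port)).get();
            c.write(ByteBuffer.wrap("done".getBytes(StandardCharsets.UTF_8))).get();
            String s = result.get(5, TimeUnit.SECONDS);   // the "text message" arrives
            server.close();
            return s;
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run());
    }
}
```

Between issuing `accept`/`read` and the callback firing, the initiating thread never blocks on the I/O, which is exactly the two-phase non-blocking behavior described above.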

The difference between synchronous and asynchronous

With the models above in hand, we can distinguish blocking from non-blocking and synchronous from asynchronous. These are really two separate pairs of concepts. The first pair is the easier one; the second is often conflated with it. In my view, synchronous means that over the whole I/O operation, and in particular during the data-copy phase, the process blocks, and it is the application process that checks with the kernel. Asynchronous means the user process is non-blocking for the entire I/O operation, and when the copy finishes the kernel sends a notification to the user process.


[Figure 5: comparison of the I/O models]

The synchronous models differ from one another in how they handle the first phase, while the asynchronous model differs from them in both phases. (Signal-driven I/O is ignored here.) These terms remain easy to confuse; blocking versus non-blocking is only worth distinguishing within the synchronous models, since asynchronous I/O is by definition non-blocking, and the phrase "asynchronous non-blocking" feels redundant.

The I/O models discussed in this article come from the classic UNIX Network Programming, Volume 1: The Sockets Networking API, and apply to a single Linux server; a distributed environment may differ. These are personal study notes that draw on many articles on the web, plus a little testing of my own.


Original link: http://www.jianshu.com/p/55eb83d60ab1

The corresponding Java I/O approaches are described below.

Synchronous blocking (the classic I/O, or BIO, approach)

The server starts one thread per connection; each thread performs its own I/O and waits until the I/O completes. In other words, whenever a client requests a connection, the server must start a thread to handle it, and if that connection then does nothing, the thread is pure overhead (the thread-pool mechanism can mitigate, though not eliminate, this). The limitation of classic I/O is that it is stream-oriented, blocking, and serial: each client's socket needs a thread that is tied up until the socket closes, and during that time the TCP connection setup, data reads, and data writes all block. That wastes CPU time slices and ties up memory for the threads, and every new socket means creating a new thread to talk to it (in a blocking fashion). The approach is responsive and easy to reason about, and it works well when the number of connections is low; but spawning a thread per connection wastes system resources, and with many connections the resources run out.

Synchronous non-blocking (the NIO approach, introduced in JDK 1.4)

The server starts one thread per request, but instead of each thread waiting for its own I/O to complete, a separate thread polls to check whether the I/O is ready: client connection requests are registered with a multiplexer, and a thread is started only when the multiplexer polls and finds a connection with an actual I/O request. NIO is buffer-oriented, non-blocking, and selector-based: one polling thread monitors many transport channels and processes whichever channel is ready (that is, has a batch of data available). The server holds a list of socket connections and polls it: if a socket has data readable, it invokes that connection's read operation; if a socket is writable, it invokes the write operation; if a socket has been disconnected, it invokes the appropriate teardown and closes the port. This makes much fuller use of server resources and greatly improves efficiency.

Asynchronous non-blocking (the AIO approach, released with JDK 7)

The server starts a thread only for a valid request. The client's I/O request is completed by the operating system first, which then notifies the server application to start a thread to process it; no thread performs the I/O itself, and none waits for it to complete. The I/O is delegated to the operating system, which notifies the application once it finishes. On Linux this model is built on epoll.
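The thread-pool improvement mentioned for the blocking model can be sketched like this (pool size, class name, and messages are illustrative assumptions): each connection still blocks a thread while being handled, but threads are reused from a bounded pool rather than created per socket, which caps the resource cost:

```java
import java.io.*;
import java.net.*;
import java.util.concurrent.*;

public class PooledBioServer {
    // Thread-per-connection, with threads drawn from a bounded pool.
    static String run() throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try (ServerSocket server = new ServerSocket(0)) {
            int port = server.getLocalPort();

            // Client task: send a line, read the echo back.
            Future<String> reply = pool.submit(() -> {
                try (Socket s = new Socket("127.0.0.1", port)) {
                    s.getOutputStream().write("hello\n".getBytes("UTF-8"));
                    BufferedReader in = new BufferedReader(
                            new InputStreamReader(s.getInputStream(), "UTF-8"));
                    return in.readLine();
                }
            });

            Socket conn = server.accept();
            pool.submit(() -> {                       // handler runs on a pool thread
                try (Socket c = conn) {
                    BufferedReader in = new BufferedReader(
                            new InputStreamReader(c.getInputStream(), "UTF-8"));
                    String line = in.readLine();      // blocks this pool thread only
                    c.getOutputStream().write(("echo:" + line + "\n").getBytes("UTF-8"));
                } catch (IOException e) { throw new UncheckedIOException(e); }
            });

            String s = reply.get(5, TimeUnit.SECONDS);
            pool.shutdown();
            return s;
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run());
    }
}
```

As the text notes, this only bounds the thread count; it does not remove the underlying limitation that each in-flight connection still occupies one blocked thread.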


Source: Http://www.ibm.com/developerworks/cn/java/j-lo-io-optimize/index.html?hmsr=toutiao.io


