The pros and cons of blocking and non-blocking IO in Java


The cornerstone of NIO's design is the reactor pattern, an architectural pattern for event multiplexing and dispatch: a single loop waits for readiness events on many channels and dispatches each event to its handler.

The file or device behind a file descriptor can typically be operated in one of two modes: blocking and non-blocking. In blocking mode, an attempt to read or write the descriptor waits until there is something to read or the descriptor becomes writable; the program simply sits in the call until it can proceed. In non-blocking mode, if nothing is readable or the descriptor is not currently writable, the read or write call returns immediately instead of waiting.
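
To make the distinction concrete, here is a minimal sketch using java.nio; the host, port, and buffer size are arbitrary assumptions. In blocking mode (the default) a read waits for data, while after configureBlocking(false) the same read returns immediately with 0 when nothing has arrived yet.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

public class NonBlockingReadSketch {
    public static void main(String[] args) throws IOException {
        // Hypothetical endpoint, used only for illustration.
        SocketChannel channel = SocketChannel.open(new InetSocketAddress("example.com", 80));

        // Blocking mode (the default): read() waits until at least one byte arrives.
        // Non-blocking mode: read() returns immediately; 0 means "nothing to read yet".
        channel.configureBlocking(false);

        ByteBuffer buf = ByteBuffer.allocate(1024);
        int n = channel.read(buf);   // may legitimately return 0 in non-blocking mode
        if (n == 0) {
            System.out.println("No data available right now; the call did not block.");
        } else if (n == -1) {
            System.out.println("Peer closed the connection.");
        }
        channel.close();
    }
}
```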

A common practice is to create a new thread for every established socket connection and let that thread communicate with its socket in blocking mode. Response time is good and the control flow is simple, and this works well while the number of connections is small; but spawning one thread per connection wastes system resources, and with a large number of connections the server simply runs out of them.
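
A minimal sketch of this thread-per-connection style, assuming a simple echo protocol and an arbitrary port (9090):

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class ThreadPerConnectionServer {
    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(9090)) {
            while (true) {
                Socket socket = server.accept();          // blocks until a client connects
                new Thread(() -> handle(socket)).start(); // one dedicated thread per connection
            }
        }
    }

    private static void handle(Socket socket) {
        try (socket) {
            byte[] buf = new byte[1024];
            int n;
            // read() blocks this thread whenever the client has nothing to send.
            while ((n = socket.getInputStream().read(buf)) != -1) {
                socket.getOutputStream().write(buf, 0, n); // simple echo
            }
        } catch (IOException ignored) {
        }
    }
}
```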

A more efficient approach is for the server to keep a list of sockets and poll it: when data becomes readable on a socket (read-ready), call that connection's read handler; when a socket becomes writable (write-ready), call its write handler; and when a connection is broken, call the appropriate cleanup code to close it. This makes much better use of server resources and greatly improves efficiency.
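
In Java NIO this readiness polling is exposed through a Selector: channels register the events they care about, and one loop dispatches whatever becomes ready. A minimal sketch, assuming an echo server on an arbitrary port:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.*;
import java.util.Iterator;

public class SelectorEchoServer {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(9090));   // arbitrary port for this sketch
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        while (true) {
            selector.select();                       // blocks until at least one channel is ready
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();                         // selected keys must be removed manually
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {       // "read-ready": data is waiting on this socket
                    SocketChannel client = (SocketChannel) key.channel();
                    ByteBuffer buf = ByteBuffer.allocate(1024);
                    int n = client.read(buf);
                    if (n == -1) {                   // peer closed: clean up instead of looping
                        key.cancel();
                        client.close();
                    } else {
                        buf.flip();
                        client.write(buf);           // naive echo; a real server tracks partial writes
                    }
                }
            }
        }
    }
}
```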

With traditional blocking IO, every connection must be handled by its own thread, and that thread cannot be released until the connection has been fully processed.

Non-blocking IO, because it is built on the reactor pattern (event multiplexing and dispatch), can be serviced by a thread pool: when an event arrives, a thread handles it and is then returned to the pool. The traditional blocking approach cannot use a thread pool this way. With 10,000 concurrent connections, the non-blocking model might get by with a pool of 1,000 threads, while the blocking model needs 10,000 threads, one per connection. When the number of connections is large enough, that exhausts system resources; this is the core advantage of non-blocking IO.

Why is this so? The following analyzes both models in more detail.

First, consider where the bottleneck of traditional blocking IO lies. With a small number of connections, traditional IO is easy to write and works well. But as the number of connections grows, its problems surface: as noted above, traditional IO consumes one thread per connection, and while throughput initially rises with the number of threads, beyond a certain point it falls as the thread count keeps increasing. The conclusion is that the bottleneck of traditional blocking IO is its inability to handle a very large number of connections.

The purpose of non-blocking IO is to remove this bottleneck. How is it implemented? The number of threads used to service connections is decoupled from the number of connections: handling 10,000 connections does not require 10,000 threads; 1,000 or 2,000 may be enough, because connection handling is asynchronous. When a connection sends a request, the server treats it as an "event" and dispatches that event to the corresponding handler function. The handler runs on a pool thread and the thread is returned as soon as it finishes, so one thread can serve many events over time, whereas blocking IO threads spend most of their time idle, waiting for requests.
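
A sketch of that dispatch step under these assumptions: the selector thread drains the readable bytes, hands the (possibly slow) processing to a fixed pool of worker threads, and immediately returns to the event loop. The pool size and the process() method are illustrative, not part of any particular framework.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.SocketChannel;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch of the dispatch step only; the surrounding Selector loop is the same as above.
public class ReactorDispatch {
    // Pool size is an assumption: far fewer worker threads than connections.
    private static final ExecutorService workers = Executors.newFixedThreadPool(16);

    static void onReadable(SelectionKey key) throws IOException {
        SocketChannel client = (SocketChannel) key.channel();
        ByteBuffer buf = ByteBuffer.allocate(1024);
        int n = client.read(buf);          // drain the bytes on the selector thread
        if (n == -1) {
            key.cancel();
            client.close();
            return;
        }
        buf.flip();
        // Hand the business logic to the pool and return the selector thread to the
        // event loop immediately; the worker thread is released when it finishes.
        workers.submit(() -> process(buf));
    }

    private static void process(ByteBuffer request) {
        // Placeholder for application-level handling.
    }
}
```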

Common problems with Java NIO:

Registered events that are not deregistered in time keep firing continuously and drive the CPU to 100%.

1. The read event is not deregistered: when the client closes the connection, channel.read(buf) returns -1 and the read event keeps being triggered. If the key is not cancelled at this point, the loop spins. This return value is also easy to confuse with an ordinary read spanning multiple packets, and apart from the read result there is no other place to detect that the client has disconnected. See the sketch after this list.

2. The write event is not deregistered in time: once the pending data has been written, interest in the write event should be removed immediately; otherwise the event fires again every time the network card's output buffer is writable (see the sketch after this list).
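
A minimal sketch of handling both pitfalls inside a Selector loop; the method names and buffer sizes are illustrative:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.SocketChannel;

public class InterestOpsHygiene {

    // Pitfall 1: on end-of-stream, cancel the key and close the channel,
    // otherwise OP_READ keeps firing on a dead connection and spins the CPU.
    static void onReadable(SelectionKey key) throws IOException {
        SocketChannel ch = (SocketChannel) key.channel();
        ByteBuffer buf = ByteBuffer.allocate(1024);
        int n = ch.read(buf);
        if (n == -1) {
            key.cancel();   // deregister the channel from the selector
            ch.close();
            return;
        }
        // ... handle the bytes that were read ...
    }

    // Pitfall 2: keep OP_WRITE registered only while there is pending output, and
    // remove it as soon as the buffer is drained; a healthy socket is writable almost
    // all the time, so a permanently registered OP_WRITE also fires continuously.
    static void onWritable(SelectionKey key, ByteBuffer pending) throws IOException {
        SocketChannel ch = (SocketChannel) key.channel();
        ch.write(pending);
        if (!pending.hasRemaining()) {
            key.interestOps(key.interestOps() & ~SelectionKey.OP_WRITE);
        }
    }
}
```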

