Extracting the classic NIO web server architecture from Jetty, Tomcat, and Mina


Reposted from: http://blog.csdn.net/cutesource/article/details/6192016

http://blog.csdn.net/cutesource/article/details/6192145

http://blog.csdn.net/cutesource/article/details/6192163

How to structure a network server on top of NIO is a problem I have been thinking about recently, so I analyzed the NIO-related source code of Jetty, Tomcat, and Mina and found that all three are built in a similar way. I believe this should be regarded as the classic pattern for NIO-based network servers. Based on this pattern I wrote a small network server and stress-tested it, and the results were good. Without further ado, let's first look at how the three of them use NIO.

How Jetty's connector is implemented

First look at the class diagram:

In the diagram:

SelectChannelConnector assembles the components;

SelectSet listens for client requests;

SelectChannelEndPoint handles the I/O reads and writes;

HttpConnection handles the logical processing.

Processing a request on the server can be divided into three phases, as shown in the sequence diagram below:

Phase one: Listen and establish the connection

This phase starts a single thread that is responsible for accepting new connections; once a connection is accepted, it is assigned to one of the SelectSets, and the assignment policy is round-robin polling.
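As a rough sketch of this assignment policy, an acceptor can hand each new channel to one of N SelectSets by simple round-robin. The class and method names here are mine for illustration, not Jetty's actual code:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of the acceptor's round-robin dispatch (phase one).
// A single acceptor thread blocks on accept() and asks this dispatcher
// which SelectSet should own each newly accepted channel.
public class RoundRobinDispatcher {
    private final int setCount;
    private final AtomicInteger next = new AtomicInteger();

    public RoundRobinDispatcher(int setCount) {
        this.setCount = setCount;
    }

    // Index of the SelectSet that gets the next accepted channel.
    // floorMod keeps the result non-negative even after the counter
    // eventually overflows.
    public int nextSet() {
        return Math.floorMod(next.getAndIncrement(), setCount);
    }
}
```

The point of the atomic counter is that the policy stays correct even if more than one acceptor thread were ever used.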

Phase two: Listen for client requests

This phase starts multiple threads (generally as many as the server has CPU cores) and lets each SelectSet listen on the channel queue under its control. Each SelectSet maintains one selector, and that selector watches every channel in the queue; as soon as a read event arrives, a thread is taken from the thread pool to handle the request.
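A minimal, hypothetical version of such a selector loop might look like the following. The class name, the `Consumer` stand-in for business processing, and the buffer size are all my assumptions, not Jetty's actual code:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;
import java.util.Iterator;
import java.util.concurrent.ExecutorService;
import java.util.function.Consumer;

// Sketch of phase two: one selector thread watches its channel queue,
// and as soon as a channel turns readable it hands the actual read off
// to a worker pool, so the selector thread never blocks on I/O or
// business logic.
public class SelectSetSketch implements Runnable {
    private final Selector selector;
    private final ExecutorService workers;
    private final Consumer<ByteBuffer> onData;   // stand-in for business processing
    private volatile boolean running = true;

    public SelectSetSketch(Selector selector, ExecutorService workers,
                           Consumer<ByteBuffer> onData) {
        this.selector = selector;
        this.workers = workers;
        this.onData = onData;
    }

    @Override
    public void run() {
        while (running) {
            try {
                selector.select(500);            // block until an event or timeout
            } catch (IOException e) {
                break;
            }
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();                     // selected keys must be removed by hand
                if (key.isValid() && key.isReadable()) {
                    // Drop read interest while a worker owns the channel,
                    // so the selector does not re-fire on the same data.
                    key.interestOps(key.interestOps() & ~SelectionKey.OP_READ);
                    workers.execute(() -> handleRead(key));
                }
            }
        }
    }

    private void handleRead(SelectionKey key) {
        SocketChannel ch = (SocketChannel) key.channel();
        ByteBuffer buf = ByteBuffer.allocate(4096);
        try {
            int n = ch.read(buf);
            if (n < 0) { key.cancel(); ch.close(); return; }
            buf.flip();
            onData.accept(buf);                  // hand the data to "business" code
            key.interestOps(key.interestOps() | SelectionKey.OP_READ);
            key.selector().wakeup();             // re-arm read interest promptly
        } catch (IOException e) {
            key.cancel();
        }
    }

    public void stop() {
        running = false;
        selector.wakeup();
    }
}
```

Clearing `OP_READ` before handing the key to a worker is the important detail: it keeps the selector from firing repeatedly on the same readable channel while a worker is still reading it.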

Phase three: Processing requests

This phase is where each client request's data is actually processed. Notably, to keep back-end business processing from blocking the selector's ability to pick up new requests, the listening and processing phases are separated onto different threads.

From this we can summarize Jetty's NIO pattern, as shown below:

The core idea is to isolate these three concerns and serve each with a differently sized set of threads, making the most of NIO's asynchronous, event-driven notification model.

========================================================================

Next, let's look at how Tomcat uses NIO to build its connector.

First look at the class diagram of the Tomcat connector:

In the diagram:

NioEndpoint assembles the components;

Acceptor listens for new connections and hands each one to a Poller;

Poller watches the channel queue under its control and hands requests to a SocketProcessor;

SocketProcessor processes the data and passes the request on to the back-end business module.

Processing a request on the server can be divided into three phases, as shown in the sequence diagram below:

Phase one: Listen and establish the connection

In this phase the Acceptor listens for new connections and, by round-robin polling, picks a Poller to hand each connection to.
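One detail worth spelling out is how that handoff can work when the poller thread is already blocked inside `select()`: rather than registering the channel directly, the acceptor can enqueue it and wake the selector, letting the poller register it on its own thread. This sketch mirrors Tomcat's roles but is not its real API:

```java
import java.io.IOException;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// Hypothetical sketch of the Acceptor-to-Poller handoff. The acceptor
// thread enqueues each accepted channel and wakes the selector; the
// poller drains the queue on its own thread before its next select.
public class PollerSketch {
    private final Selector selector;
    private final Queue<SocketChannel> pending = new ConcurrentLinkedQueue<>();

    public PollerSketch(Selector selector) {
        this.selector = selector;
    }

    // Called by the acceptor thread after accept().
    public void register(SocketChannel ch) {
        pending.add(ch);
        selector.wakeup();          // break out of select() to pick up the new channel
    }

    // Called by the poller thread at the top of each select loop;
    // returns how many channels were newly registered.
    public int drainEvents() throws IOException {
        int n = 0;
        SocketChannel ch;
        while ((ch = pending.poll()) != null) {
            ch.configureBlocking(false);
            ch.register(selector, SelectionKey.OP_READ);
            n++;
        }
        return n;
    }
}
```

The queue-plus-wakeup dance avoids the classic pitfall of one thread calling `register()` while another is blocked in `select()` on the same selector.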

Phase two: Listen for client requests

In this phase each Poller listens on the channel queue under its control; when a request is selected, it is handed to a SocketProcessor for processing.

Phase three: Processing requests

In this phase SocketProcessor instances run on multiple threads, performing the data and business processing.

So, setting code specifics aside, Tomcat and Jetty turn out to be highly consistent in how they use NIO, and the pattern remains:

================================================================

Finally, let's look at Mina, the best-known NIO framework. Setting aside Mina's sessions, processing chain, and so on, and picking out only the front-end network layer, it uses a pattern similar to Jetty's and Tomcat's, but with one simplification: it does not separate request listening from request processing, so at the macro level there are only two phases.

Let's look at its class diagram first:

In the diagram:

the SocketAcceptor thread runs SocketAcceptor's work loop, which listens for new connections and hands them to a SocketIoProcessor;

the SocketIoProcessor thread runs SocketIoProcessor's work loop, which watches the channel queue under its control and hands selected requests to the IoFilterChain;

IoFilterChain assembles Mina's processing chain.
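The filter-chain idea itself can be illustrated in a few lines. This is only the concept, using plain functions over strings rather than Mina's real IoFilter API: a received message passes through an ordered list of filters (decoding, logging, and so on) before reaching the business handler.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.UnaryOperator;

// Bare-bones sketch of the chain-of-responsibility idea behind a
// filter chain: each filter transforms the message and passes it on.
public class FilterChainSketch {
    private final List<UnaryOperator<String>> filters = new ArrayList<>();

    // Append a filter to the end of the chain (mirrors the addLast
    // naming convention of chain-style APIs).
    public FilterChainSketch addLast(UnaryOperator<String> filter) {
        filters.add(filter);
        return this;
    }

    // Run an inbound message through every filter in order.
    public String messageReceived(String msg) {
        for (UnaryOperator<String> f : filters) {
            msg = f.apply(msg);
        }
        return msg;
    }
}
```

The benefit is that protocol decoding, logging, and business dispatch stay decoupled: each is a filter that can be added, removed, or reordered without touching the I/O layer.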

Processing a request on the server can be divided into two phases, as shown in the sequence diagram below:

Phase one: Listen and establish the connection

Phase two: Listen for and process client requests

Summing up Jetty, Tomcat, and Mina, we now have a fair idea of how to build a server on NIO. Using this extracted pattern, I wrote a very simple NIO server. With connections kept alive, it could easily sustain about 60,000 concurrent connections (close to the 65,535 limit) and handle 30,000 to 40,000 TPS on 4 cores with a load average of only about 3 (the processing was deliberately trivial: decode the buffer into a custom protocol packet, then encode the packet back into a buffer and write it to the client). This simple exercise confirms the validity of the pattern. The diagram below is worth a look, and I hope it proves useful when you write servers of your own:
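The "decode the buffer into a packet, encode the packet back into a buffer" step can be sketched with a simple length-prefixed frame. The author does not specify the custom protocol, so this frame format is purely my assumption:

```java
import java.nio.ByteBuffer;

// Hypothetical codec for a minimal custom protocol: each packet is a
// 4-byte big-endian length followed by that many payload bytes.
public class FrameCodec {
    // Encode a payload as [4-byte length][payload], flipped ready to write.
    public static ByteBuffer encode(byte[] payload) {
        ByteBuffer buf = ByteBuffer.allocate(4 + payload.length);
        buf.putInt(payload.length).put(payload).flip();
        return buf;
    }

    // Decode one frame if it is fully buffered; otherwise return null
    // and restore the buffer position so more bytes can be appended.
    public static byte[] decode(ByteBuffer buf) {
        if (buf.remaining() < 4) return null;   // length header not yet complete
        buf.mark();
        int len = buf.getInt();
        if (buf.remaining() < len) {            // payload not yet complete
            buf.reset();
            return null;
        }
        byte[] payload = new byte[len];
        buf.get(payload);
        return payload;
    }
}
```

Returning null on a partial frame (instead of throwing) matters in an NIO server, because a single read may deliver half a packet and the decoder must be able to resume when the rest arrives.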

