A High-Concurrency Web Server Based on Tomcat's Response-Processing Model


In a previous post I walked through a simple AIO web-server example. As you could see there, AIO's asynchronous Proactor model, which delegates the I/O operations to the operating system, is genuinely powerful and a good choice for a server that needs high concurrency and fast response. Yet the processing model of Tomcat's connector is still based on NIO. That may well change in later versions, but on the other hand I suspect load control is harder with AIO: the AIO API does not let us manage how work is assigned within the thread group ourselves; it simply hands a thread group over to the operating system to solve the I/O dispatching. That can make more sophisticated load-balancing logic difficult to control.
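To make the point concrete, here is a minimal sketch (not the author's code) of what the AIO API does offer: you can supply a fixed thread pool to an `AsynchronousChannelGroup`, but the runtime decides which pool thread runs each completion handler, so you cannot pin a connection to a particular thread.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.AsynchronousChannelGroup;
import java.nio.channels.AsynchronousServerSocketChannel;
import java.util.concurrent.Executors;

public class AioGroupSketch {
    public static void main(String[] args) throws IOException {
        // We can hand the group a thread pool, but not control which
        // thread the runtime picks to run a given completion handler.
        AsynchronousChannelGroup group = AsynchronousChannelGroup
                .withFixedThreadPool(4, Executors.defaultThreadFactory());
        AsynchronousServerSocketChannel server =
                AsynchronousServerSocketChannel.open(group)
                        .bind(new InetSocketAddress(0)); // ephemeral port
        System.out.println("bound: " + server.isOpen());
        server.close();
        group.shutdownNow();
    }
}
```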


For an analysis of Tomcat's connector processing, I recommend this blog post, which covers it well: http://liudeh-009.iteye.com/blog/1561638





Tomcat's processing model works like this: an acceptor accepts the channel and registers it with a poller's selector; the poller selects the channel when it becomes ready and hands it off for business I/O processing. After that comes the work of the servlet engine, which produces the response for the request on that channel.
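The accept-then-poll flow can be sketched end to end with plain NIO (this is an illustration of the pattern, not Tomcat's actual classes): the "acceptor" step accepts the channel and registers it with a selector, and the "poller" step waits for the channel to become readable before any business work runs.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;

public class NioFlowSketch {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open()
                .bind(new InetSocketAddress("127.0.0.1", 0));

        // a loopback client connects and sends a request line
        SocketChannel client = SocketChannel.open(server.getLocalAddress());
        client.write(ByteBuffer.wrap(
                "GET / HTTP/1.0\r\n\r\n".getBytes(StandardCharsets.US_ASCII)));

        // "acceptor" step: accept the channel, hand it to the "poller"
        SocketChannel ch = server.accept();
        ch.configureBlocking(false);
        ch.register(selector, SelectionKey.OP_READ);

        // "poller" step: block until the channel is readable; at this
        // point Tomcat would dispatch to the servlet engine
        selector.select();
        for (SelectionKey key : selector.selectedKeys()) {
            if (key.isReadable()) {
                ByteBuffer buf = ByteBuffer.allocate(256);
                ((SocketChannel) key.channel()).read(buf);
                buf.flip();
                System.out.println("readable: " + (buf.remaining() > 0));
            }
        }
        client.close();
        ch.close();
        server.close();
        selector.close();
    }
}
```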


Now let me introduce my implementation. In my example, a static page serves as the response; if any of you know how to plug in servlets and a servlet engine the way Tomcat does, please advise.

Here is the class structure diagram. I won't paste the code here; interested readers can download it from http://download.csdn.net/detail/chenxuegui123/7330269


My processing model is much the same, based on separating the acceptor from the business-I/O worker threads, which yields better response times, but it differs from Tomcat's implementation in a few ways. In the acceptor thread group, each acceptor does not spin on a non-blocking ServerSocketChannel, calling accept repeatedly and then handling each SocketChannel. Instead, each acceptor has its own selector, so the operating system notifies us when an accept event of interest occurs (see the source code above, or my download link). In my testing, adding a selector to the acceptor makes concurrency noticeably higher.

In the worker thread group, each worker maintains a blocking queue that buffers the channels whose I/O needs processing, rather than joining a selector that would notify us of events of interest. Because this example only serves a static page, every request gets the same simple response; there is no need for anything as complicated as Tomcat.
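That worker pattern can be sketched like this (again illustrative, assuming the acceptor simply hands accepted channels to the queue): the worker blocks on the queue and answers every channel with the same static page.

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class WorkerQueueSketch {
    static final String RESPONSE =
            "HTTP/1.0 200 OK\r\nContent-Type: text/html\r\n\r\n<html>hi</html>";

    public static void main(String[] args) throws Exception {
        BlockingQueue<SocketChannel> queue = new LinkedBlockingQueue<>();

        // worker: take a channel from the queue, write the static page, close
        Thread worker = new Thread(() -> {
            try {
                SocketChannel ch = queue.take();
                ByteBuffer out = ByteBuffer.wrap(
                        RESPONSE.getBytes(StandardCharsets.US_ASCII));
                while (out.hasRemaining()) {
                    ch.write(out);
                }
                ch.close();
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
        worker.start();

        ServerSocketChannel server = ServerSocketChannel.open()
                .bind(new InetSocketAddress("127.0.0.1", 0));
        SocketChannel client = SocketChannel.open(server.getLocalAddress());
        queue.put(server.accept()); // the acceptor hands the channel over

        ByteBuffer buf = ByteBuffer.allocate(256);
        while (client.read(buf) != -1) { } // read until the worker closes
        buf.flip();
        String reply = StandardCharsets.US_ASCII.decode(buf).toString();
        System.out.println("ok: " + reply.startsWith("HTTP/1.0 200 OK"));
        worker.join();
        client.close();
        server.close();
    }
}
```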

Under a test with 100,000 concurrent requests, this processing model's throughput still held up well.



