High-Concurrency Web Server Based on the Tomcat Response Processing Model


In the previous blog, a simple AIO web processing example showed that AIO asynchronous processing relies on the operating system to complete the IO operations. The Proactor processing model is indeed powerful; it can achieve high concurrency and is a good choice for a highly responsive server. Still, the connector processing model in Tomcat is based on NIO. This may well change in future versions, but on the other hand I think load control with AIO may be difficult, because the AIO API does not let us assign our own thread groups for processing: it only provides a single thread group and hands the IO processing over to the operating system. This may make load balancing that requires more complex control harder to implement.
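To make the single-thread-group point concrete, here is a minimal AIO (Proactor-style) accept sketch of my own, assuming Java 8+; it is only an illustration, and handleRequest is a hypothetical placeholder, not code from this post:

// Minimal AIO accept sketch: every completion handler runs on the one
// thread group bound to the AsynchronousChannelGroup.
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.AsynchronousChannelGroup;
import java.nio.channels.AsynchronousServerSocketChannel;
import java.nio.channels.AsynchronousSocketChannel;
import java.nio.channels.CompletionHandler;
import java.util.concurrent.Executors;

public class AioStaticServer {
    public static void main(String[] args) throws IOException, InterruptedException {
        // The only knob AIO gives us: a single channel group backed by one thread pool.
        AsynchronousChannelGroup group =
                AsynchronousChannelGroup.withFixedThreadPool(8, Executors.defaultThreadFactory());
        AsynchronousServerSocketChannel server =
                AsynchronousServerSocketChannel.open(group).bind(new InetSocketAddress(8080));

        server.accept(null, new CompletionHandler<AsynchronousSocketChannel, Void>() {
            @Override
            public void completed(AsynchronousSocketChannel channel, Void att) {
                server.accept(null, this);   // keep accepting new connections
                handleRequest(channel);      // hypothetical request handler
            }
            @Override
            public void failed(Throwable exc, Void att) {
                exc.printStackTrace();
            }
        });

        Thread.currentThread().join();       // keep the JVM alive
    }

    private static void handleRequest(AsynchronousSocketChannel channel) {
        // read/write with channel.read(...) / channel.write(...) and CompletionHandlers
    }
}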


For Tomcat connector processing, I recommend that you take a look at this blog, which has a thorough analysis: http://liudeh-009.iteye.com/blog/1561638





Tomcat's processing model works like this: the Acceptor accepts the connection channel and registers it with a Selector in one of the Pollers, which later selects the ready channels and performs the business IO processing; after that, the servlet engine does its work and responds to the channel's request.
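The following is a simplified sketch of that Acceptor-to-Poller handoff using plain Java NIO; these are not Tomcat's actual classes, just an illustration of the flow described above:

// Sketch of the Acceptor -> Poller handoff (illustrative names, not Tomcat source).
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.concurrent.ConcurrentLinkedQueue;

class Poller implements Runnable {
    private final Selector selector;
    private final ConcurrentLinkedQueue<SocketChannel> events = new ConcurrentLinkedQueue<>();

    Poller() throws IOException { this.selector = Selector.open(); }

    // Called by the Acceptor thread: queue the channel and wake up the selector.
    void register(SocketChannel channel) {
        events.offer(channel);
        selector.wakeup();
    }

    @Override
    public void run() {
        while (true) {
            try {
                // Register any channels handed over by the Acceptor.
                SocketChannel ch;
                while ((ch = events.poll()) != null) {
                    ch.configureBlocking(false);
                    ch.register(selector, SelectionKey.OP_READ);
                }
                if (selector.select(1000) == 0) continue;
                for (SelectionKey key : selector.selectedKeys()) {
                    if (key.isReadable()) {
                        // hand the ready channel to a worker / the servlet engine here
                    }
                }
                selector.selectedKeys().clear();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}

class Acceptor implements Runnable {
    private final ServerSocketChannel server;
    private final Poller poller;

    Acceptor(int port, Poller poller) throws IOException {
        this.server = ServerSocketChannel.open();
        this.server.bind(new InetSocketAddress(port));
        this.poller = poller;
    }

    @Override
    public void run() {
        while (true) {
            try {
                SocketChannel channel = server.accept(); // blocking accept
                poller.register(channel);                // hand off to the Poller
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}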


Okay, let me introduce my implementation. In my example, static pages are used as the responses. If any of you know how to add a servlet engine like Tomcat's, please advise.

Here is the class structure. No code is posted here; if you are interested, you can download it at http://download.csdn.net/detail/chenxuegui123/7330269.


My processing model is similar. It is based on splitting the acceptor from the worker that does the business IO processing, in order to achieve a higher response speed. It differs from the Tomcat implementation in that each acceptor in the acceptor thread group does not simply keep calling accept() on a non-blocking ServerSocketChannel and then hand off the SocketChannel for processing; instead, a Selector is added to the acceptor, so that the operating system notifies us of the events we are interested in. (You can compare the Tomcat source referenced above with my source download.) In my tests, adding a Selector to each acceptor increased the concurrency.
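Below is a rough sketch of the acceptor-with-its-own-Selector idea as I understand it from the description above; it is a reconstruction, not the downloadable source, and dispatch() is a placeholder for handing the channel to a worker:

// Acceptor that owns a Selector and waits for OP_ACCEPT notifications from the OS.
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

class SelectingAcceptor implements Runnable {
    private final Selector selector;

    SelectingAcceptor(int port) throws IOException {
        selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.configureBlocking(false);
        server.bind(new InetSocketAddress(port));
        // The acceptor owns a Selector; the OS tells us when a connection is ready.
        server.register(selector, SelectionKey.OP_ACCEPT);
    }

    @Override
    public void run() {
        while (true) {
            try {
                if (selector.select() == 0) continue;
                Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                while (it.hasNext()) {
                    SelectionKey key = it.next();
                    it.remove();
                    if (key.isAcceptable()) {
                        ServerSocketChannel server = (ServerSocketChannel) key.channel();
                        SocketChannel channel = server.accept();
                        if (channel != null) {
                            dispatch(channel); // hand off to a worker (placeholder)
                        }
                    }
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }

    private void dispatch(SocketChannel channel) {
        // e.g. put the channel into a worker's blocking queue (see the worker sketch below)
    }
}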

In the worker thread group, each worker maintains a blocking queue that caches the IO work to be processed, rather than adding a Selector and letting the operating system notify us when the events of interest occur. Because the response in this example is a static page, every request gets the same simple response, so it is not as complicated as Tomcat.
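A sketch of such a worker might look like the following; the static response text and the queue wiring are my assumptions for illustration, not the author's code:

// Worker that drains a blocking queue and answers every request with one static page.
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

class StaticPageWorker implements Runnable {
    private static final String BODY = "<html>hello</html>";
    private static final byte[] RESPONSE =
            ("HTTP/1.1 200 OK\r\nContent-Type: text/html\r\nContent-Length: "
                    + BODY.length() + "\r\n\r\n" + BODY).getBytes(StandardCharsets.US_ASCII);

    private final BlockingQueue<SocketChannel> queue = new LinkedBlockingQueue<>();

    // Called by an acceptor thread to hand a connection to this worker.
    void offer(SocketChannel channel) {
        queue.offer(channel);
    }

    @Override
    public void run() {
        while (true) {
            try {
                SocketChannel channel = queue.take();   // block until work arrives
                ByteBuffer response = ByteBuffer.wrap(RESPONSE);
                while (response.hasRemaining()) {
                    channel.write(response);
                }
                channel.close();                        // one static response per request
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}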

After a test with 100,000 concurrent requests, the response speed of this processing model still held up well.



