In the previous blog post, a simple AIO web-processing example, we saw that AIO's asynchronous model, which relies on the operating system to complete the I/O operations (the Proactor model), is genuinely powerful: it is a good choice for a high-concurrency, highly responsive server. Tomcat's connector, however, is still based on NIO; perhaps that will change in later versions. On the other hand, I think load control is harder with AIO, because the AIO API does not let us assign work to particular threads; it only lets us hand a thread group to the operating system, which then decides how I/O completions are dispatched. That can make more sophisticated load-balancing logic difficult to control.
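To make that limitation concrete, here is a minimal sketch (class and method names are mine, not from the original example) of the AIO API's thread-control surface: all we can do is hand `AsynchronousChannelGroup` a thread pool; which pool thread runs a given completion handler is decided for us. The sketch accepts one connection, writes a greeting from a completion handler, and returns what a plain client socket read back.

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousChannelGroup;
import java.nio.channels.AsynchronousServerSocketChannel;
import java.nio.channels.AsynchronousSocketChannel;
import java.nio.channels.CompletionHandler;
import java.util.concurrent.Executors;

public class AioGroupSketch {
    // Accept one connection via AIO, write "hi" from the completion handler,
    // and return what a plain blocking client socket reads back.
    public static String roundTrip() throws Exception {
        // The fixed pool is the only thread-level control the AIO API offers;
        // we cannot pin a given connection to a given thread.
        AsynchronousChannelGroup group = AsynchronousChannelGroup
                .withFixedThreadPool(2, Executors.defaultThreadFactory());
        AsynchronousServerSocketChannel server = AsynchronousServerSocketChannel
                .open(group).bind(new InetSocketAddress("127.0.0.1", 0));
        int port = ((InetSocketAddress) server.getLocalAddress()).getPort();
        server.accept(null, new CompletionHandler<AsynchronousSocketChannel, Void>() {
            @Override public void completed(AsynchronousSocketChannel ch, Void att) {
                // The JVM/OS picked the pool thread running this handler for us.
                ch.write(ByteBuffer.wrap("hi".getBytes()), null,
                        new CompletionHandler<Integer, Void>() {
                            @Override public void completed(Integer n, Void a) {
                                try { ch.close(); } catch (IOException ignored) {}
                            }
                            @Override public void failed(Throwable t, Void a) {}
                        });
            }
            @Override public void failed(Throwable t, Void att) {}
        });
        try (Socket s = new Socket("127.0.0.1", port)) {
            InputStream in = s.getInputStream();
            byte[] buf = new byte[2];
            int total = 0, n;
            while (total < 2 && (n = in.read(buf, total, 2 - total)) != -1) total += n;
            return new String(buf, 0, total);
        } finally {
            server.close();
            group.shutdownNow();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip());
    }
}
```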
For Tomcat's connector processing, I recommend this blog post, whose analysis is quite thorough: http://liudeh-009.iteye.com/blog/1561638
Tomcat's processing model works like this: an acceptor accepts the channel and registers it with a poller's selector; the poller selects ready channels and hands them off for the subsequent business I/O processing. After that comes the work of the servlet engine, which produces the response for the request on that channel.
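That acceptor-to-poller handoff can be sketched roughly as below. This is my own simplified illustration of the pattern, not Tomcat's actual code (the names `Acceptor`/`Poller` mirror the roles only): the acceptor blocks in `accept()`, pushes each channel onto a queue, and wakes the poller, which registers the channel with its selector and dispatches ready reads (here it simply echoes).

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// Simplified illustration of the Tomcat-style split: an acceptor thread
// blocks in accept() and hands channels to a poller, which owns a Selector.
public class AcceptorPollerSketch {
    final ServerSocketChannel server;
    final Selector pollerSelector;
    final Queue<SocketChannel> pending = new ConcurrentLinkedQueue<>();

    public AcceptorPollerSketch() throws IOException {
        server = ServerSocketChannel.open().bind(new InetSocketAddress("127.0.0.1", 0));
        pollerSelector = Selector.open();
    }

    public int port() throws IOException {
        return ((InetSocketAddress) server.getLocalAddress()).getPort();
    }

    public void start() {
        Thread acceptor = new Thread(() -> {
            try {
                while (true) {
                    SocketChannel ch = server.accept();   // blocking accept
                    ch.configureBlocking(false);
                    pending.add(ch);                      // hand off to the poller
                    pollerSelector.wakeup();              // let it register the channel
                }
            } catch (IOException ignored) {}
        });
        Thread poller = new Thread(() -> {
            ByteBuffer buf = ByteBuffer.allocate(1024);
            try {
                while (true) {
                    // Register newly handed-off channels on the poller thread,
                    // so we never call register() while blocked in select().
                    SocketChannel ch;
                    while ((ch = pending.poll()) != null)
                        ch.register(pollerSelector, SelectionKey.OP_READ);
                    pollerSelector.select();
                    Iterator<SelectionKey> it = pollerSelector.selectedKeys().iterator();
                    while (it.hasNext()) {
                        SelectionKey key = it.next(); it.remove();
                        if (key.isReadable()) {
                            SocketChannel c = (SocketChannel) key.channel();
                            buf.clear();
                            if (c.read(buf) > 0) { buf.flip(); c.write(buf); } // echo
                            else { key.cancel(); c.close(); }
                        }
                    }
                }
            } catch (IOException ignored) {}
        });
        acceptor.setDaemon(true); poller.setDaemon(true);
        acceptor.start(); poller.start();
    }
}
```

In the real Tomcat connector the poller hands ready channels to a worker pool rather than doing the I/O itself; the queue-plus-wakeup handoff is the essential part.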
Now let me introduce my implementation. In my example, a static page serves as the response; if any readers know how to plug in servlets and a servlet engine like Tomcat's, please advise.
Here is the class structure diagram. The code is not pasted here; interested readers can download it from http://download.csdn.net/detail/chenxuegui123/7330269
My processing model is much the same: separating the acceptor from the business I/O worker threads yields better responsiveness. There are some differences from the Tomcat implementation, though. In each acceptor of the acceptor thread group, rather than letting a non-blocking ServerSocketChannel spin on accept() and then handling the SocketChannel, I add a selector to the acceptor so that the operating system notifies us when an event of interest occurs (see the source code above, or my download link). In my testing, adding a selector to the acceptor does give higher concurrency.
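A minimal sketch of that selector-driven acceptor follows. It is my reconstruction of the idea, not the author's downloadable source: the `ServerSocketChannel` is registered for `OP_ACCEPT`, the acceptor sleeps in `select()` until the OS reports a pending connection, and accepted channels are handed off through a queue (standing in for whatever handoff the real code uses).

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Acceptor that waits on a Selector for OP_ACCEPT instead of spinning on a
// non-blocking accept(); accepted channels are handed to workers via a queue.
public class SelectorAcceptor implements Runnable {
    final ServerSocketChannel server;
    final Selector selector;
    final BlockingQueue<SocketChannel> accepted = new LinkedBlockingQueue<>();

    public SelectorAcceptor() throws IOException {
        server = ServerSocketChannel.open();
        server.configureBlocking(false);
        server.bind(new InetSocketAddress("127.0.0.1", 0));
        selector = Selector.open();
        server.register(selector, SelectionKey.OP_ACCEPT);
    }

    public int port() throws IOException {
        return ((InetSocketAddress) server.getLocalAddress()).getPort();
    }

    @Override public void run() {
        try {
            while (server.isOpen()) {
                // Sleep until the OS reports a connection, instead of busy-looping.
                if (selector.select(500) == 0) continue;
                Iterator<SelectionKey> it = selector.selectedKeys().iterator();
                while (it.hasNext()) {
                    SelectionKey key = it.next(); it.remove();
                    if (key.isAcceptable()) {
                        SocketChannel ch = server.accept(); // non-blocking, may be null
                        if (ch != null) accepted.add(ch);   // hand off to a worker
                    }
                }
            }
        } catch (IOException ignored) {}
    }
}
```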
In the worker thread group, each of my workers maintains a blocking queue that caches the I/O work to be processed, rather than joining a selector that notifies us of events of interest. Since this example only responds with a static page, every request gets the same simple response, and nothing as complicated as Tomcat is needed.
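A worker of that shape can be sketched as below. Again this is my own minimal reconstruction under the stated assumptions (the `StaticPageWorker` name and the hard-coded page are mine): the worker blocks on its queue, and for each channel writes one fixed HTTP response and closes the connection.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Each worker owns a blocking queue of channels awaiting a response; the
// acceptor side enqueues channels, and take() parks the thread until work arrives.
public class StaticPageWorker implements Runnable {
    static final String BODY = "<html><body>hello</body></html>";
    static final String RESPONSE =
            "HTTP/1.1 200 OK\r\n"
            + "Content-Type: text/html\r\n"
            + "Content-Length: " + BODY.length() + "\r\n"
            + "Connection: close\r\n\r\n" + BODY;

    final BlockingQueue<SocketChannel> queue = new LinkedBlockingQueue<>();

    @Override public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            try {
                SocketChannel ch = queue.take();   // block until a channel is queued
                ByteBuffer out = ByteBuffer.wrap(RESPONSE.getBytes(StandardCharsets.US_ASCII));
                while (out.hasRemaining()) ch.write(out); // loop: channel may be non-blocking
                ch.close();                        // static page, so close after one response
            } catch (InterruptedException e) {
                return;
            } catch (IOException ignored) {
                // drop the broken connection and keep serving the queue
            }
        }
    }
}
```

The design trade-off is as described in the text: a per-worker queue is simpler than a per-worker selector, and it is sufficient precisely because every request gets the same one-shot response.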
Of course, in a test with 100,000 concurrent requests, the throughput of this processing model still held up well.