First of all, a server does not have to be implemented in either of these two ways. Let's start with the two server models the OP describes:
1. The server receives a request and processes it, and while it is doing so it cannot accept any new request. This is blocking.
This is a single-threaded model with no concurrency at all: until one request finishes processing, the server blocks and will not handle the next one. Real-world servers are generally not implemented this way (see the sketch right after this point).
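Just for illustration, here is a minimal sketch of such a single-threaded blocking server. The class name SingleThreadServer, port 8080 and the hard-coded response are placeholders of my own, not code from any framework:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class SingleThreadServer {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(8080)) {
            while (true) {
                // accept() blocks until a client connects
                try (Socket client = server.accept()) {
                    // While this request is being handled, no other connection
                    // can be accepted -- the whole server is blocked here.
                    BufferedReader in = new BufferedReader(
                            new InputStreamReader(client.getInputStream(), StandardCharsets.UTF_8));
                    in.readLine(); // read (and ignore) the request line
                    OutputStream out = client.getOutputStream();
                    out.write("HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok"
                            .getBytes(StandardCharsets.UTF_8));
                    out.flush();
                }
            }
        }
    }
}
```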
2. The server receives a request and opens a new thread to handle it; the main thread returns immediately and goes on to accept the next request. This is "non-blocking".
First, let me correct a mistake: this is not non-blocking I/O; it is still blocking. Compared with the first model it does solve the problem of the main thread blocking and gives some concurrency, but each new thread still blocks on its own I/O. If 100 people visit at the same time, 100 threads are opened; what about 1,000, or 10,000? Creating threads and switching between them frequently consumes resources, so server performance is still low (a rough sketch of this thread-per-request model follows below).
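Here is a rough sketch of the thread-per-request model, again with made-up names and a trivial hard-coded response, just to show where the blocking still happens:

```java
import java.net.ServerSocket;
import java.net.Socket;

public class ThreadPerRequestServer {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(8080)) {
            while (true) {
                Socket client = server.accept(); // main thread only accepts connections
                // A brand-new thread is created for every connection;
                // each of those threads still blocks on its own socket I/O.
                new Thread(() -> handle(client)).start();
            }
        }
    }

    private static void handle(Socket client) {
        try (Socket c = client) {
            // blocking read/write for this one client would go here
            c.getOutputStream().write(
                    "HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n".getBytes());
        } catch (Exception ignored) {
        }
    }
}
```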
Besides the two models above, here are some better approaches:
3. Similar to model 2, but instead of opening a new thread for every request, a thread pool is used (see the sketch after this point).
If you are not familiar with thread pools, think of a database connection pool: because frequently creating and closing database connections is expensive, a connection pool keeps a certain number of connections alive; when one is needed it is taken from the pool, and when it is no longer needed it is returned rather than closed, so connections are not created over and over. A thread pool works the same way: it manages a set of reusable threads, which performs much better than constantly creating new ones. A server built this way performs better than model 2, but it is still blocking. A thread pool usually has an upper limit on the number of threads, and if all of them are blocked (for example by slow clients or malicious connections), the next request has to wait in a queue.
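A minimal sketch of the thread-pool variant using the standard ExecutorService; the pool size of 200 is just an example value (it happens to match Tomcat's default maxThreads), not something required:

```java
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThreadPoolServer {
    public static void main(String[] args) throws Exception {
        // A fixed pool of 200 reusable worker threads (example value).
        ExecutorService pool = Executors.newFixedThreadPool(200);
        try (ServerSocket server = new ServerSocket(8080)) {
            while (true) {
                Socket client = server.accept();
                // Reuse an existing worker thread instead of creating a new one.
                // If all workers are blocked, this task waits in the pool's queue.
                pool.execute(() -> handle(client));
            }
        }
    }

    private static void handle(Socket client) {
        try (Socket c = client) {
            c.getOutputStream().write(
                    "HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n".getBytes());
        } catch (Exception ignored) {
        }
    }
}
```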
4. A server model based on Java NIO.
All of the models above are based on BIO (blocking I/O). NIO is non-blocking I/O, built on I/O multiplexing (e.g. the Reactor pattern), so a single thread or a small number of threads can handle a large number of requests. In terms of performance, the concurrency an NIO server can sustain is generally far higher than a BIO one, which is why high-performance servers are implemented this way. If you are interested, have a look at NIO-based network programming frameworks such as Netty and MINA. A bare-bones sketch of the Selector-based approach follows below.
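For the flavour of it, here is a bare-bones single-threaded Selector loop using only the JDK's java.nio classes. It does not parse HTTP and skips error handling; real servers (Netty, Tomcat NIO) layer a lot more on top of this idea:

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class NioServer {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(8080));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buffer = ByteBuffer.allocate(1024);
        while (true) {
            selector.select(); // one thread waits for events on all channels
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isAcceptable()) {
                    // New connection: register it for read events, no new thread needed.
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    buffer.clear();
                    if (client.read(buffer) == -1) { // non-blocking read
                        client.close();
                    } else {
                        client.write(ByteBuffer.wrap(
                                "HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n".getBytes()));
                        client.close();
                    }
                }
            }
        }
    }
}
```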
Finally, to answer the OP's question about Tomcat: Tomcat can run with either the BIO or the NIO model, which correspond to approaches 3 and 4 above. Tomcat runs in BIO mode by default (in older versions; Tomcat 8 and later default to NIO). If you want to switch to NIO, configure the Connector in server.xml:
<Connector port="8080" protocol="org.apache.coyote.http11.Http11NioProtocol" .../>
From a performance standpoint, NIO is recommended.
Here is my own series of MINA/Netty learning tutorials:
http://xxgblog.com/categories/%E5%BC%82%E6%AD%A5%E7%BD%91%E7%BB%9C%E7% ...
You can also preview the Netty Authoritative Guide (especially Chapter 2, which covers the various I/O models):
http://cread.e.jd.com/read/startRead.action?bookId=30186249&readTy ...