Let's take a look at this situation. You and thousands of other players are devoted fans of World of Warcraft. Every weekend you team up to raid bosses, and every weekend the game server comes under enormous pressure, because hundreds of thousands of users are online at the same time. Is our multi-threaded blocking server feasible as a game server? First, let's analyze the characteristics of a game server:
① An online game is not like a web page, where the connection can be closed once the page has been delivered. An online game needs a persistent, stateful connection: each client must keep a long-lived connection to the server so that messages can be exchanged quickly and in a timely manner. As the number of concurrent users grows, a multi-threaded blocking server can no longer afford to allocate a thread to each client.
② Unlike an ordinary application server, an online game with a client/server (CS) architecture generally puts the complex logic on the client, while the game server handles only simple logic, or even just relays messages. For such lightweight processing, allocating a whole thread to each request is seriously out of proportion to the actual workload.
③ Online games require fast responses, timely message exchange, and bidirectional communication, so requests and responses are frequent. Even with persistent connections, the server often has no new data that needs to be sent to a given client at the moment, yet the corresponding thread remains occupied. Isn't that a waste?
From the above analysis, for online games and similar scenarios, our traditional multi-threaded server is clearly inadequate. A thread pool can alleviate, to a certain extent, the resource cost of frequent I/O calls, but a pool has a size limit, and in the face of thousands of concurrent client requests it is still not the best solution. Is it possible to maintain many persistent connections with only one or a few threads? The following describes a new server model: the non-blocking server model.
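To make the limitation concrete, here is a minimal Python sketch (not from the book) of the traditional thread-per-connection blocking server being criticized: every accepted client gets a dedicated thread that spends most of its life parked in `recv()`. The echo behavior and the helper names `serve_blocking`/`handle_client` are illustrative assumptions, not an API from the text.

```python
import socket
import threading

def handle_client(conn: socket.socket) -> None:
    # One dedicated thread per client: it blocks on recv() even when
    # the client is idle, tying up the thread the whole time.
    with conn:
        while True:
            data = conn.recv(1024)   # blocks until data arrives or peer closes
            if not data:
                break                # client closed the connection
            conn.sendall(data)       # echo the message back

def serve_blocking(host: str = "127.0.0.1", port: int = 0) -> socket.socket:
    # Classic multi-threaded blocking server: the accept() loop spawns
    # one thread per connection, so N clients cost N threads.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen()

    def accept_loop() -> None:
        while True:
            try:
                conn, _ = srv.accept()   # blocks waiting for a new client
            except OSError:
                break                    # listening socket was closed
            threading.Thread(target=handle_client,
                             args=(conn,), daemon=True).start()

    threading.Thread(target=accept_loop, daemon=True).start()
    return srv
```

With hundreds of thousands of mostly-idle game clients, this design exhausts threads (and their stacks) long before it exhausts the network.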
One of the most important features of the non-blocking server model is that a call to an interface returns immediately instead of blocking and waiting. As shown in Figure 2-6-2-1, when multiple clients connect, the server keeps a list of socket connections, and a dedicated thread polls that list. If it finds a socket with readable data, it calls that socket's read operation; if it finds a socket ready for writing, it calls the write operation; and if a socket has been interrupted, it calls the close operation. For better performance, this model can also be combined with a thread pool: once a socket needing attention (a read, a write, or a close) is detected, another thread is dispatched to handle it.
Figure 2-6-2-1 The non-blocking server model
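The polling model in the figure can be sketched in a few lines of Python (an illustration under my own assumptions, not the book's code): every socket is put into non-blocking mode, so `accept()` and `recv()` return immediately, raising `BlockingIOError` when nothing is ready, and a single thread sweeps the whole client list on each pass. The function name `poll_loop_once` is invented for this sketch.

```python
import socket

def poll_loop_once(server: socket.socket, clients: list) -> list:
    # One polling pass: accept any pending connection, then try to read
    # from every client socket. All sockets are non-blocking, so every
    # call returns immediately instead of parking the thread.
    try:
        conn, _ = server.accept()    # non-blocking accept
        conn.setblocking(False)
        clients.append(conn)
    except BlockingIOError:
        pass                         # no pending connection on this pass
    alive = []
    for sock in clients:
        try:
            data = sock.recv(1024)   # returns immediately, data or not
        except BlockingIOError:
            alive.append(sock)       # nothing to read yet; poll again later
            continue
        if data:
            sock.sendall(data)       # readable: echo the message back
            alive.append(sock)
        else:
            sock.close()             # peer closed: drop it from the list
    return alive
```

A real server would call `poll_loop_once` in a `while True` loop (or hand the ready sockets to a thread pool, as the text suggests); note that the loop burns CPU scanning idle sockets, which is exactly the weakness discussed next.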
In this way, no matter how many socket connections there are, a single thread can manage them all: one thread traverses the socket list and hands ready sockets over to the thread pool. Time that would otherwise be wasted blocking is put to good use, and processing capability improves. However, this model must traverse the entire socket list on every pass, so it consumes considerable CPU even when the sockets are idle, and it is still not well suited to high-concurrency scenarios. One more improvement yields the event-driven model. Its core idea is that the thread no longer traverses the socket list; instead, it waits for events and responds to each detected event as it occurs. This greatly improves detection efficiency and, with it, processing capability.
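The event-driven idea can be sketched with Python's standard `selectors` module, which wraps OS readiness-notification facilities such as epoll and kqueue (again, an illustrative sketch with invented names, not the book's implementation): sockets are registered with a callback, and the thread wakes only when the OS reports an actual event, instead of scanning idle sockets.

```python
import selectors
import socket

sel = selectors.DefaultSelector()    # epoll/kqueue/select under the hood

def accept(server: socket.socket) -> None:
    # Event callback: the listening socket is readable, i.e. a new
    # client is waiting to be accepted.
    conn, _ = server.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, echo)  # watch for readability

def echo(conn: socket.socket) -> None:
    # Event callback: this client socket has data (or was closed).
    data = conn.recv(1024)
    if data:
        conn.sendall(data)           # echo the message back
    else:
        sel.unregister(conn)         # peer closed: stop watching it
        conn.close()

def dispatch_events(timeout: float) -> None:
    # Sleep until the OS reports ready sockets, then run their callbacks.
    # Idle sockets cost nothing: the thread is only woken by real events.
    for key, _ in sel.select(timeout):
        key.data(key.fileobj)        # callback stored at registration time
```

Compared with the polling sketch, the per-pass cost here depends on the number of *ready* sockets, not the total number of connections, which is what makes this model viable for very high concurrency.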