The c10k problem
Today's hardware and bandwidth let a server easily handle tens of thousands of connections; the real problem is how to write software that can serve that many clients concurrently.
The following benchmark measures fork() performance: how long each operating system takes to create a child process. Linux 2.6 performs best.
As the figure above shows, if creating each child process costs about 500 microseconds, then only around 2,000 processes can be created per second; and if each process must also run heavy business logic, a multi-process design cannot deliver high concurrency. We therefore need to take advantage of more efficient I/O models.
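To get a feel for that number, here is a minimal sketch (my own illustration, not the benchmark behind the figure above) that measures the average cost of fork() plus wait() on the local machine:

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    const int iterations = 1000;
    struct timeval start, end;

    gettimeofday(&start, NULL);
    for (int i = 0; i < iterations; i++) {
        pid_t pid = fork();
        if (pid < 0) { perror("fork"); exit(1); }
        if (pid == 0)
            _exit(0);                 /* child exits immediately */
        waitpid(pid, NULL, 0);        /* parent reaps the child */
    }
    gettimeofday(&end, NULL);

    double usec = (end.tv_sec - start.tv_sec) * 1e6
                + (end.tv_usec - start.tv_usec);
    printf("average fork+wait cost: %.1f microseconds\n", usec / iterations);
    return 0;
}
```

At roughly 500 microseconds per child, this loop tops out near 2,000 forks per second, which matches the estimate above.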
Some I/O frameworks: ACE (a large C++ I/O framework with an object-oriented interface), ASIO (the C++ I/O framework in the Boost library), libevent (a lightweight I/O framework written in C), Poller, rn, and others.
I/O strategies -- when developing a network program there are several design choices:
1. Whether and how to issue multiple I/O requests from a single thread:
   - Don't: use blocking/synchronous calls, possibly with multiple threads or processes to get concurrency.
   - Use non-blocking calls (e.g., set the non-blocking flag on a socket descriptor and detect readiness through status bits); mainly useful for network I/O rather than disk I/O.
   - Use asynchronous calls (e.g., aio_write()) that report completion afterwards via signals or completion ports; usable for both network I/O and disk I/O.
2. How to control the code that serves each client:
   - One process serves one client.
   - One OS-level thread serves all clients, where each client is handled by a user-level thread, a state machine, or a continuation (I'm not sure what that is).
   - One OS-level thread serves one client.
   - One OS-level thread serves one active client (e.g., completion ports, thread pools).
3. Whether to use standard operating-system services, or to embed code into the kernel (a driver, kernel module, VxD, etc.).
The following five combinations are the most common:
- One thread serves multiple clients, using non-blocking I/O and level-triggered readiness notification.
- One thread serves multiple clients, using non-blocking I/O and edge-triggered readiness change notification.
- One thread serves multiple clients, using asynchronous I/O.
- One thread serves one client, using blocking I/O.
- Build the server code into the kernel.
Here's a look at each of their features.
1. One thread serves multiple clients, using non-blocking I/O and level-triggered readiness notification
Set the non-blocking flag on all network handles and use select() or poll() to detect which handles are ready for reading or writing; this is the traditional design choice. In this mode the system tells you whether a file descriptor is ready to make progress since the last time it told you. ("Level triggering" and its counterpart "edge triggering" are terms borrowed from hardware design; edge triggering is the more immediate, more advanced of the two and is what IOCP and AIO use.) Level-triggered readiness notification is provided by interfaces such as select, poll, /dev/poll, and kqueue.
Note: readiness reported by select and poll is not entirely reliable; a descriptor may no longer be ready when we actually try to read or write, which is why the descriptors must be non-blocking and why we need to keep track of their state.
The bottleneck of this model is reading and writing disk files (for example, when a required page is not in memory): setting a descriptor non-blocking has no effect on disk I/O, so whenever servicing a request involves disk I/O the whole process stalls and every client has to wait, throwing away the advantage of the single-threaded design.
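As a concrete illustration, here is a minimal sketch of this model (my own, not from the original article), assuming a listening socket listen_fd has already been created, bound, marked listening, and set non-blocking; it simply echoes whatever each client sends:

```c
#include <fcntl.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

void serve_level_triggered(int listen_fd) {
    fd_set master;
    FD_ZERO(&master);
    FD_SET(listen_fd, &master);
    int max_fd = listen_fd;

    for (;;) {
        fd_set readable = master;                 /* select() overwrites its argument */
        if (select(max_fd + 1, &readable, NULL, NULL, NULL) < 0)
            continue;                             /* interrupted; just retry */

        for (int fd = 0; fd <= max_fd; fd++) {
            if (!FD_ISSET(fd, &readable))
                continue;
            if (fd == listen_fd) {                /* new connection ready to accept */
                int client = accept(listen_fd, NULL, NULL);
                if (client < 0)
                    continue;                     /* readiness was only a hint */
                fcntl(client, F_SETFL, O_NONBLOCK);
                FD_SET(client, &master);
                if (client > max_fd)
                    max_fd = client;
            } else {                              /* existing client has data (probably) */
                char buf[4096];
                ssize_t n = read(fd, buf, sizeof(buf));
                if (n > 0) {
                    write(fd, buf, (size_t)n);    /* echo it back */
                } else if (n == 0) {              /* client closed the connection */
                    close(fd);
                    FD_CLR(fd, &master);
                }
                /* n < 0 with EAGAIN: the hint was wrong, wait for the next round */
            }
        }
    }
}
```

Note how the EAGAIN case is simply ignored: with level triggering select() will report the descriptor again the next time it really is ready, so no per-descriptor bookkeeping beyond the fd_set is required here.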
2. One thread serves multiple clients, using non-blocking I/O and readiness change notification (edge triggering)
"Edge triggering" means we hand a file descriptor to the kernel, and the kernel notifies us when the descriptor changes from not ready to ready. After that the kernel assumes we know the descriptor is ready and will not notify us again until we operate on it and it becomes not ready once more, so we do not need to record its readiness state ourselves. This pattern includes the kqueue and epoll interfaces (kqueue can be used in either level-triggered or edge-triggered mode, and epoll likewise via the EPOLLET flag).
3. One thread serves multiple clients, using asynchronous I/O
This approach has never really caught on under Unix, not least because the few systems that support asynchronous I/O require the code to be restructured around it. In standard Unix, asynchronous I/O is provided by the aio interface, which associates a signal and a value with each I/O operation; the signals and values are queued and delivered to the user process when operations complete. AIO is generally edge-triggered (the completion is reported once); on Windows the corresponding mechanism is IOCP (I/O completion ports).
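For reference, here is a minimal sketch of the POSIX aio interface (my own illustration; the file path is just a placeholder): queue an asynchronous read, then collect the result. A real server would take the completion via a signal or other notification rather than polling in a loop:

```c
#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    int fd = open("/etc/hostname", O_RDONLY);   /* placeholder file */
    if (fd < 0) return 1;

    char buf[256];
    struct aiocb cb;
    memset(&cb, 0, sizeof(cb));
    cb.aio_fildes = fd;
    cb.aio_buf    = buf;
    cb.aio_nbytes = sizeof(buf);
    cb.aio_offset = 0;

    if (aio_read(&cb) < 0) return 1;            /* queue the operation */

    while (aio_error(&cb) == EINPROGRESS)       /* do other work; here we just wait */
        usleep(1000);

    ssize_t n = aio_return(&cb);                /* collect the result */
    if (n > 0)
        fwrite(buf, 1, (size_t)n, stdout);
    close(fd);
    return 0;
}
```

(On older glibc this needs to be linked with -lrt.)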
4. One thread serves one client, using blocking I/O
Although a one-thread-per-client blocking design performs poorly at this scale today, threading implementations keep improving, and it is not impossible that this model will be able to serve 10,000 concurrent clients in the future.
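A minimal sketch of this model (my own illustration; port 8080 is an arbitrary choice): each accepted connection gets its own detached thread that uses plain blocking reads and writes:

```c
#include <netinet/in.h>
#include <pthread.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static void *handle_client(void *arg) {
    int fd = *(int *)arg;
    free(arg);
    char buf[4096];
    ssize_t n;
    while ((n = read(fd, buf, sizeof(buf))) > 0)   /* blocking read */
        write(fd, buf, (size_t)n);                 /* echo back */
    close(fd);
    return NULL;
}

int main(void) {
    int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(8080);
    bind(listen_fd, (struct sockaddr *)&addr, sizeof(addr));
    listen(listen_fd, 128);

    for (;;) {
        int *client = malloc(sizeof(int));
        *client = accept(listen_fd, NULL, NULL);   /* blocks until a client connects */
        if (*client < 0) { free(client); continue; }
        pthread_t tid;
        pthread_create(&tid, NULL, handle_client, client);
        pthread_detach(tid);                       /* one thread per client */
    }
}
```

The cost of this design is one OS thread (stack memory plus scheduling overhead) per connection, which is exactly what limits it at c10k scale.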
5. Build the server code into the kernel (I don't understand this part well, so I won't cover it)