Scheme 0: accept + read/write, blocking, serves one client at a time (iterative server).
Scheme 1: accept + fork, blocking, multi-process, suits long connections and low concurrency, high per-connection overhead, process-per-connection.
Scheme 2: accept + thread, blocking, multi-threaded, long connections, higher concurrency and lower overhead than Scheme 1, thread-per-connection.
Scheme 3: prefork, a variant of Scheme 1 that forks the worker processes in advance.
Scheme 4: prethread, a variant of Scheme 2 that creates the worker threads in advance.
From here on, the schemes use I/O multiplexing:
Scheme 5: poll (reactor), non-blocking, high concurrency, low overhead, single-threaded reactor.
Scheme 6: reactor + thread-per-task, non-blocking, concurrent, uses multiple cores, but high overhead: a new thread is spawned for each request, thread-per-request.
Scheme 7: reactor + worker thread, non-blocking, concurrent, low overhead; a single dedicated worker thread handles the requests.
Scheme 8: reactor + thread pool, non-blocking, high concurrency, low overhead; a refinement of Scheme 7.
Scheme 9: multiple reactors, non-blocking, high concurrency, low overhead; the model adopted by the muduo network library, one loop per thread.
Scheme 10: multiple reactors + thread pool.
Scheme 9 is muduo's built-in multithreaded model: a main reactor is responsible for accept(), then distributes each accepted connection to one of the sub-reactors chosen round-robin, spreading the load and making full use of multiple cores. One loop per thread; the number of threads is typically computed from the core count and stays fixed.
Before studying muduo itself, it helps to understand these server models and where the muduo model fits among them.