8 High-Performance Server Program Framework

Source: Internet
Author: User
Tags: epoll, lock, queue

8.1 Server Model

C/S (client/server) model

P2P (peer-to-peer) model

In practice, the P2P model usually includes a dedicated discovery server that provides a lookup service

8.2 Server Programming Framework

The I/O handling unit is the module through which the server manages client connections

A logical unit is typically a process or thread; a server usually contains multiple logical units so that it can process multiple client tasks in parallel

8.3 I/O model

Sockets are blocking by default when created. They can be made non-blocking either by passing the SOCK_NONBLOCK flag in the second argument of the socket system call, or by using the F_SETFL command of the fcntl system call
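Both options can be sketched as follows (setnonblocking is the common fcntl idiom; the names here are illustrative):

```c
#include <fcntl.h>
#include <sys/socket.h>

/* Make an existing fd non-blocking via fcntl's F_SETFL command,
 * returning the old flags so the caller can restore them later. */
int setnonblocking(int fd)
{
    int old_flags = fcntl(fd, F_GETFL);
    fcntl(fd, F_SETFL, old_flags | O_NONBLOCK);
    return old_flags;
}

/* Alternatively, create the socket non-blocking from the start by
 * OR-ing SOCK_NONBLOCK into the type argument of socket(). */
int nonblocking_socket(void)
{
    return socket(AF_INET, SOCK_STREAM | SOCK_NONBLOCK, 0);
}
```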

System calls on blocking I/O may be suspended by the operating system when they cannot complete immediately; system calls that may block include accept, send, recv, and connect

System calls on non-blocking I/O always return immediately; if the event has not yet occurred, they return -1 with errno set to EAGAIN or EWOULDBLOCK (EINPROGRESS for connect)

Non-blocking I/O is typically used together with other I/O notification mechanisms, such as I/O multiplexing and SIGIO signals
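A minimal sketch of how a non-blocking call "returns immediately": with no pending data, recv fails at once with EAGAIN/EWOULDBLOCK instead of suspending the caller (try_recv is a hypothetical helper):

```c
#include <errno.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Attempt a recv on a non-blocking socket. Returns the byte count,
 * or 0 if the call would have blocked (no data yet) -- the caller is
 * expected to retry after an I/O notification such as epoll. */
ssize_t try_recv(int fd, char *buf, size_t len)
{
    ssize_t n = recv(fd, buf, len, 0);
    if (n == -1 && (errno == EAGAIN || errno == EWOULDBLOCK))
        return 0;   /* would block: the call returned immediately instead */
    return n;
}
```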

The most commonly used I/O multiplexing functions on Linux are select, poll, and epoll_wait

These are all synchronous I/O models, because the actual I/O reads and writes are performed by the application itself after the I/O event occurs
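That division of labour can be sketched in a few lines (ready_then_read is an illustrative helper): epoll_wait only reports readiness, and the application then performs the read itself:

```c
#include <sys/epoll.h>
#include <unistd.h>

/* Synchronous I/O with epoll: wait until fd is readable, then do the
 * read in the application. Returns bytes read, or -1 on error/timeout. */
ssize_t ready_then_read(int fd, char *buf, size_t len, int timeout_ms)
{
    int epfd = epoll_create1(0);
    struct epoll_event ev = { .events = EPOLLIN, .data = { .fd = fd } };
    epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev);

    struct epoll_event out;
    int n = epoll_wait(epfd, &out, 1, timeout_ms); /* kernel reports readiness only */
    close(epfd);
    if (n <= 0)
        return -1;
    return read(fd, buf, len);  /* the application performs the actual I/O */
}
```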

With asynchronous I/O, read and write calls always return immediately regardless of whether the I/O is ready, because the actual reads and writes are taken over by the kernel

Summary:

In the synchronous I/O model, user code performs the I/O operations; in the asynchronous I/O model, the kernel performs them

Synchronous I/O notifies the application of I/O ready events; asynchronous I/O notifies the application of I/O completion events

8.4 Two efficient event-handling patterns

A server typically has to handle three classes of events: I/O events, signals, and timer events

The synchronous I/O model is typically used to implement the Reactor pattern, and the asynchronous I/O model is typically used to implement the Proactor pattern

Reactor pattern

Workflow of the Reactor pattern implemented with the synchronous I/O model (using epoll_wait as an example):

1. The main thread registers a read-ready event for the socket in the epoll kernel event table.

2. The main thread calls epoll_wait to wait for data to become readable on the socket.

3. When data becomes readable, epoll_wait notifies the main thread. The main thread puts the socket-readable event on the request queue.

4. A worker thread sleeping on the request queue is awakened; it reads the data from the socket, processes the client request, and then registers a write-ready event for the socket in the epoll kernel event table.

5. The main thread calls epoll_wait to wait for the socket to become writable.

6. When the socket becomes writable, epoll_wait notifies the main thread. The main thread puts the socket-writable event on the request queue.

7. A worker thread sleeping on the request queue is awakened; it writes the server's response to the client request to the socket.

After a worker thread takes an event off the queue, whether it reads or writes and how it processes the request depend on the event type. In the Reactor pattern there is therefore no need to distinguish between "read worker threads" and "write worker threads".
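Steps 4 and 5 above hinge on re-registering the socket's event. A minimal sketch of that switch, assuming the fd is already in the epoll set (await_writable is an illustrative name):

```c
#include <sys/epoll.h>

/* After a worker has read and processed the request, the socket's
 * registration is switched from read-ready to write-ready, and the
 * next epoll_wait reports when the socket can be written. */
int await_writable(int epfd, int fd, int timeout_ms)
{
    struct epoll_event ev = { .events = EPOLLOUT, .data = { .fd = fd } };
    epoll_ctl(epfd, EPOLL_CTL_MOD, fd, &ev);   /* re-register for EPOLLOUT */

    struct epoll_event out;
    return epoll_wait(epfd, &out, 1, timeout_ms);  /* 1 when writable */
}
```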

Proactor pattern

Unlike the Reactor pattern, the Proactor pattern hands all I/O operations to the main thread and the kernel; worker threads are responsible only for the business logic.

Workflow (using aio_read and aio_write as an example):
1. The main thread calls aio_read to register a read-completion event for the socket with the kernel, telling the kernel where the user read buffer is and how to notify the application when the read completes (with a signal, for example; see the sigevent man page for details).
2. The main thread continues handling other logic.
3. Once the data on the socket has been read into the user buffer, the kernel sends the application a signal to notify it that the data is available.
4. The application's pre-registered signal handler picks a worker thread to process the client request. When the worker thread finishes, it calls aio_write to register a write-completion event for the socket with the kernel, telling the kernel where the user write buffer is and how to notify the application when the write completes (again with a signal, for example).
5. The main thread continues handling other logic.
6. After the data in the user buffer has been written to the socket, the kernel sends the application a signal to notify it that the data has been sent.
7. The application's pre-registered signal handler picks a worker thread to handle the cleanup, such as deciding whether to close the socket.
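A minimal sketch of step 1's mechanics, assuming POSIX AIO is available; completion is polled with aio_error here instead of a signal, purely to keep the example self-contained:

```c
#include <aio.h>
#include <errno.h>
#include <signal.h>
#include <string.h>
#include <sys/types.h>

/* Asynchronous read with aio_read: the kernel copies the data into the
 * user buffer itself, and the application only learns about the
 * *completion* event. Returns bytes read, or -1 on error. */
ssize_t aio_read_all(int fd, void *buf, size_t len)
{
    struct aiocb cb;
    memset(&cb, 0, sizeof cb);
    cb.aio_fildes = fd;
    cb.aio_buf = buf;                          /* where the kernel should put the data */
    cb.aio_nbytes = len;
    cb.aio_offset = 0;
    cb.aio_sigevent.sigev_notify = SIGEV_NONE; /* poll below instead of a signal */

    if (aio_read(&cb) == -1)      /* returns immediately; the kernel does the read */
        return -1;
    while (aio_error(&cb) == EINPROGRESS)
        ;                         /* busy-wait purely for demonstration */
    return aio_return(&cb);       /* completion result, analogous to read()'s */
}
```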

Simulated Proactor pattern

The main thread performs all data reads and writes, and notifies the worker threads of completion events:

1. The main thread registers a read-ready event for the socket in the epoll kernel event table.
2. The main thread calls epoll_wait to wait for data to become readable on the socket.
3. When data becomes readable, epoll_wait notifies the main thread. The main thread reads from the socket in a loop until no more data is available, then encapsulates the data in a request object and inserts it into the request queue.
4. A worker thread sleeping on the request queue is awakened; it obtains the request object, processes the client request, and then registers a write-ready event for the socket in the epoll kernel event table.
5. The main thread calls epoll_wait to wait for the socket to become writable.
6. When the socket becomes writable, epoll_wait notifies the main thread. The main thread writes the server's response to the client request to the socket.
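Step 3's read-until-drained loop might look like this, assuming a non-blocking socket (drain_fd is an illustrative name):

```c
#include <errno.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Simulated-Proactor read: the main thread drains the non-blocking
 * socket until EAGAIN, so the worker later receives a complete buffer
 * (a "completed" event) rather than a readiness notification. */
ssize_t drain_fd(int fd, char *buf, size_t cap)
{
    size_t total = 0;
    while (total < cap) {
        ssize_t n = recv(fd, buf + total, cap - total, 0);
        if (n > 0) {
            total += (size_t)n;
        } else if (n == 0) {
            break;                    /* peer closed the connection */
        } else if (errno == EAGAIN || errno == EWOULDBLOCK) {
            break;                    /* socket fully drained for now */
        } else {
            return -1;                /* real error */
        }
    }
    return (ssize_t)total;
}
```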

8.5 Two efficient concurrency modes

The purpose of concurrent programming is to let a program perform multiple tasks simultaneously. It suits I/O-intensive programs, not compute-intensive ones

Semi-synchronous/semi-asynchronous mode

The "synchronous" and "asynchronous" here are completely different concepts from the "synchronous" and "asynchronous" of the I/O models above. In the I/O models, they distinguish which kind of I/O event the kernel notifies the application of (ready or completed) and who performs the actual I/O reads and writes (the application or the kernel).

In concurrency modes, "synchronous" means the program executes strictly in the order of its code; "asynchronous" means program execution is driven by system events, such as interrupts and signals.

Asynchronous threads are efficient and have good real-time behaviour, which is why many embedded systems adopt this model. However, programs that execute asynchronously are relatively complex to write, hard to debug and extend, and unsuited to heavy concurrency. Synchronous threads, by contrast, are less efficient and have poorer real-time behaviour, but their logic is simple.

In semi-synchronous/semi-asynchronous mode, synchronous threads handle the client logic and asynchronous threads handle the I/O events. When an asynchronous thread detects a client request, it encapsulates it in a request object and inserts it into the request queue. The request queue then notifies a worker thread running in synchronous mode to read and process the request object.

Disadvantages:

1. The main thread and the worker threads share the request queue: the main thread must lock the queue when adding tasks, and the worker threads must lock it when removing tasks

2. Each worker thread can handle only one client request at a time, and switching between worker threads consumes a lot of CPU time
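Disadvantage 1 stems from a structure like this minimal locked-queue sketch (names are illustrative): the main thread and the workers must take the same mutex on every push and pop:

```c
#include <pthread.h>

#define QCAP 64

/* A fixed-size ring of socket fds shared between the main thread
 * (producer) and worker threads (consumers). */
typedef struct {
    int items[QCAP];
    int head, tail, count;
    pthread_mutex_t lock;
    pthread_cond_t not_empty;
} req_queue;

void rq_init(req_queue *q)
{
    q->head = q->tail = q->count = 0;
    pthread_mutex_init(&q->lock, NULL);
    pthread_cond_init(&q->not_empty, NULL);
}

void rq_push(req_queue *q, int fd)       /* called by the main thread */
{
    pthread_mutex_lock(&q->lock);        /* every push contends on this lock */
    q->items[q->tail] = fd;
    q->tail = (q->tail + 1) % QCAP;
    q->count++;
    pthread_cond_signal(&q->not_empty);  /* wake one sleeping worker */
    pthread_mutex_unlock(&q->lock);
}

int rq_pop(req_queue *q)                 /* called by worker threads */
{
    pthread_mutex_lock(&q->lock);        /* ...and so does every pop */
    while (q->count == 0)
        pthread_cond_wait(&q->not_empty, &q->lock);  /* sleep on the queue */
    int fd = q->items[q->head];
    q->head = (q->head + 1) % QCAP;
    q->count--;
    pthread_mutex_unlock(&q->lock);
    return fd;
}
```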

An efficient semi-synchronous/semi-asynchronous mode (Nginx appears to use this mode)

Each worker thread can handle multiple client connections at the same time: the main thread only listens on the listening socket, while connection sockets are managed by the worker threads themselves.

