High Concurrency with epoll + a Thread Pool: Business Logic in Worker Threads

Source: Internet
Author: User
Tags: epoll

Server concurrency models are usually divided into single-threaded and multithreaded designs. "Threads" here usually means I/O threads: the threads that perform I/O and coordinate task assignment (sometimes called "management threads"), while the actual requests are handled by so-called "worker threads". In a typical multithreaded model, each thread is both an I/O thread and a worker thread. What we discuss here is the single-I/O-thread + multiple-worker-threads model, which is the most common server concurrency model; it appears throughout the server code in my projects. It is also known as the "half-sync/half-async" model, and it is an instance of the producer/consumer pattern (specifically, a single producer with multiple consumers).

This architecture is built on I/O multiplexing (mainly epoll; select/poll are largely obsolete). A single thread multiplexing all I/O achieves efficient concurrency while avoiding the overhead of switching among many I/O threads, and it keeps the design clear and easy to manage. The worker threads exploit the advantages of multithreading, while drawing them from a thread pool improves resource reuse and avoids the cost of constantly creating and destroying threads.

The bottleneck is I/O density.
With a plain thread pool you can open, say, 10 threads and have them all block in accept(), so each arriving client automatically wakes one thread to serve it. But if all 10 threads are busy, the 11th client is simply dropped. To achieve "high concurrency" this way you have to keep growing the pool, which leads to serious memory consumption and thread-switching latency.
Hence the idea of a front-end event-polling facility:
the main thread polls for I/O, and the work is handed to the thread pool.
Under high concurrency, if 100,000 clients connect, the main thread handles every accept() and puts the connection into a queue, so no connection is dropped for lack of a timely handshake. Worker threads claim jobs from the queue, and when finished hand the result back to the main thread, which is responsible for the write. This way a huge number of connections can be handled with minimal system resources.
Under low concurrency, say 2 clients, you no longer have 100 idle threads sitting around wasting system resources.

The core of a correctly implemented thread-pool model:
The main thread performs all I/O. Once all the data for a request has been received, it hands the request, if necessary, to a worker thread for processing. When processing completes, the worker returns the response data to the main thread, which writes it back (writing until the call would block, then returning to the event loop to continue later).
"If necessary" here means: measurement has confirmed that the CPU time consumed by the processing (excluding any I/O waits that epoll can take over) is significant, or that the processing involves blocking waits epoll cannot take over. If neither holds, the request can be handled directly in the main thread.
The prerequisite for judging this "necessity" comes down to three words: hypothesize, analyze, measure.


So, in a correctly implemented thread-pool setup, the advantage of epoll + non-blocking I/O over select + blocking I/O is that when handling a large number of sockets, epoll does not have to rescan all file descriptors after each wakeup to determine which ones have become readable or writable.

Key Points

1. Single I/O thread with epoll

Implementing the epoll model on a single I/O thread is the first technical point of this architecture. The main idea:

A single thread creates the epoll instance and waits for I/O events. When a request (socket) arrives, add it to epoll, take a free worker thread from the pool, and leave the actual business processing to that worker.

Pseudo code:

    create an epoll instance;
    while (server running) {
        wait for epoll events;
        if (a new connection arrives and is valid) {
            accept the connection;
            set the connection non-blocking;
            set the events for this connection (EPOLLIN | EPOLLET ...);
            add the connection to the epoll listening queue;
            take a free worker thread from the pool and process the connection;
        } else if (read request) {
            take a free worker thread from the pool and process the read;
        } else if (write request) {
            take a free worker thread from the pool and process the write;
        } else {
            handle other events;
        }
    }

2. Thread Pool Implementation

When the server starts, a certain number of worker threads (e.g. 20) are created and added to the pool for the I/O thread to draw from;

Whenever the I/O thread requests an idle worker, one is taken from the pool to process the request;

When the request has been processed and the corresponding I/O connection closed, the thread is recycled back into the pool for later use;

When a worker thread is requested but none is idle, proceed as follows:

(1) If the total number of threads managed by the pool has not reached the maximum allowed, create a batch of new worker threads, add them to the pool, and return one to the I/O thread;

(2) If the total number of threads managed by the pool has reached the maximum, do not create more; wait a short time and retry. Note that because the I/O thread is single-threaded, it must not block waiting here; pool management, including creating new workers, should be done by a dedicated management thread. That management thread can block (e.g. on a condition variable, waiting to be woken), and after a short wait there should be idle workers in the pool again. If there are not, the server's load estimate was simply wrong.

epoll is an excellent fit for high-concurrency servers on Linux. Because it is event-triggered, it is faster than select by more than an order of magnitude. A single-threaded epoll loop can reach a trigger volume of about 15,000, but once business logic is added — most of it talking to a database, and therefore blocking — multithreading is needed to keep up. With the business logic in the thread pool, locking is required there. Test result: 2,300/s. Test tool: Stressmark; since ab-compatible response code was added, ab can also be used for stress testing:

char buf[1000] = {0};
sprintf(buf, "HTTP/1.0 200 OK\r\nContent-Type: text/plain\r\n\r\n%s", "Hello world!\n");
send(socketfd, buf, strlen(buf), 0);
