High concurrency with epoll + a thread pool, with the business logic in the thread pool

Source: Internet
Author: User
Tags: epoll, mutex

As we know, server concurrency models are usually divided into single-threaded and multithreaded models. Here "threads" usually refers to "I/O threads": the management threads responsible for I/O operations and for coordinating and dispatching tasks, while the threads that actually process the requests and tasks are called "worker threads". In a typical multithreaded model, each thread is both an I/O thread and a worker thread. What we discuss here is the single I/O thread + multiple worker threads model, which is one of the most common server concurrency models; it appears throughout the server code in my project. It also goes by the name "half-sync/half-async" model, and it is essentially a form of the producer/consumer pattern (with the worker threads as the consumers).

This architecture is built on the idea of I/O multiplexing (mainly epoll; select/poll are largely obsolete). Multiplexing all I/O in a single thread achieves efficient concurrency while avoiding the overhead of switching I/O between multiple threads, and it keeps the design clear and easy to manage. The worker threads, drawn from a thread pool, bring the full advantages of multithreading into play, and the pool further improves resource reuse and avoids repeatedly creating and destroying threads.

1 Model architecture

2 Implementation points

2.1 Single I/O thread epoll

Implementing epoll in a single I/O thread is the first technical point of this architecture. The main idea is as follows:

A single thread creates the epoll instance and waits on it. When an I/O request (socket) arrives, the thread adds it to epoll, takes an idle worker thread from the thread pool, and hands the actual task to that worker thread.

Pseudo code:

create an epoll instance;
while (server running)
{
    wait for epoll events;
    if (a new connection arrives and is a valid connection)
    {
        accept the connection;
        set the connection to non-blocking;
        set the events for the connection (EPOLLIN | EPOLLET ...);
        add the connection to the epoll listening queue;
        take an idle worker thread from the thread pool to handle the connection;
    }
    else if (read request)
    {
        take an idle worker thread from the thread pool to process the read request;
    }
    else if (write request)
    {
        take an idle worker thread from the thread pool to process the write request;
    }
    else
        handle other events;
}

The pseudo code is rough, but it is just the basic usage of epoll.

However, attention must be paid to the use of the thread pool: if no idle worker thread can be obtained from the pool, some extra handling is still required (see section 2.2).

2.2 Thread pool implementation essentials

When the server starts, create a certain number of worker threads (for example, 20) and add them to the thread pool, ready for the I/O thread to take;

Every time an I/O thread requests an idle worker thread, a free worker thread is fetched from the pool to handle the corresponding request;

When the request has been processed and the corresponding I/O connection is closed, the corresponding thread is reclaimed and put back into the thread pool for later reuse;

When an idle worker thread is requested but the pool has none available, you can proceed as follows (a sketch follows this list):

(1) If the total number of "managed" threads in the pool has not reached the maximum allowed, create a batch of new worker threads, add them to the pool, and return one of them for the I/O thread to use;

(2) If the total number of "managed" threads in the pool has already reached the maximum, you should not keep creating new threads; wait a short while and try again. Note that because the I/O thread is a single thread, it must not block waiting here. The management of the thread pool, including creating new worker threads, should be done by a dedicated management thread. While the management thread blocks and waits (for example, on a condition variable waiting to be woken), idle worker threads should become available in the pool after a short time; otherwise the estimate of the server load is simply wrong.
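To make these points concrete, here is a minimal sketch of such a pool using pthreads. The names (struct pool_task, struct thread_pool, worker_main, pool_submit, pool_init) and the details are illustrative assumptions, not code from this article; the key point is that the I/O thread only enqueues a task under the mutex and signals a condition variable, so it never blocks waiting for a worker.

#include <pthread.h>
#include <stdlib.h>

/* Hypothetical task: a connection fd plus the handler to run on it. */
struct pool_task {
    int fd;
    void (*handler)(int fd);
    struct pool_task *next;
};

struct thread_pool {
    pthread_mutex_t lock;       /* protects the task queue */
    pthread_cond_t  not_empty;  /* signalled when a task is enqueued */
    struct pool_task *head, *tail;
    int shutting_down;
};

/* Worker thread: take one task at a time and run it. */
static void *worker_main(void *arg)
{
    struct thread_pool *pool = (struct thread_pool *)arg;
    for (;;) {
        pthread_mutex_lock(&pool->lock);
        while (pool->head == NULL && !pool->shutting_down)
            pthread_cond_wait(&pool->not_empty, &pool->lock);
        if (pool->shutting_down && pool->head == NULL) {
            pthread_mutex_unlock(&pool->lock);
            return NULL;
        }
        struct pool_task *t = pool->head;
        pool->head = t->next;
        if (pool->head == NULL)
            pool->tail = NULL;
        pthread_mutex_unlock(&pool->lock);

        t->handler(t->fd);      /* run the business logic outside the lock */
        free(t);
    }
}

/* Called by the I/O thread: enqueue the task and return immediately. */
static int pool_submit(struct thread_pool *pool, int fd, void (*handler)(int))
{
    struct pool_task *t = (struct pool_task *)malloc(sizeof(*t));
    if (t == NULL)
        return -1;
    t->fd = fd;
    t->handler = handler;
    t->next = NULL;

    pthread_mutex_lock(&pool->lock);
    if (pool->tail)
        pool->tail->next = t;
    else
        pool->head = t;
    pool->tail = t;
    pthread_cond_signal(&pool->not_empty);
    pthread_mutex_unlock(&pool->lock);
    return 0;
}

/* Create the initial batch of worker threads, e.g. 20 as suggested above. */
static int pool_init(struct thread_pool *pool, int nthreads)
{
    pool->head = pool->tail = NULL;
    pool->shutting_down = 0;
    pthread_mutex_init(&pool->lock, NULL);
    pthread_cond_init(&pool->not_empty, NULL);
    for (int i = 0; i < nthreads; i++) {
        pthread_t tid;
        if (pthread_create(&tid, NULL, worker_main, pool) != 0)
            return -1;
        pthread_detach(tid);
    }
    return 0;
}

With this shape, the "no idle worker" case simply means the task waits in the queue until a worker frees up, so the I/O thread never blocks; growing the pool up to the maximum, as point (1) suggests, would be the job of the dedicated management thread described in point (2).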



Under Linux, epoll is the solution of choice for high-concurrency servers: because it is event-driven, it is more than an order of magnitude faster than select. With single-threaded epoll alone, the trigger rate can reach about 15,000, but once the business logic is added there will be blocking, because most business logic talks to a database, and at that point multithreading must be used to speed things up. Inside the business thread pool, the task queue has to be protected with a lock. Test result: about 2,300 requests/s. Test tool: Stressmark. Since code suitable for ab has been added, you can also use ab (ApacheBench) for stress testing:

char buf[1000] = {0};
sprintf(buf, "HTTP/1.0 200 OK\r\nContent-Type: text/plain\r\n\r\n%s", "Hello world!\n");
send(socketfd, buf, strlen(buf), 0);
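As a usage example, an ApacheBench run against the port used in the listing below (SERV_PORT, 8006) could look like the following; the request count and concurrency level are arbitrary values chosen for illustration:

ab -n 10000 -c 100 http://127.0.0.1:8006/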

#include <iostream>
#include <sys/socket.h>
#include <sys/epoll.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>
#include <pthread.h>

#include <errno.h>
#include <stdlib.h>
#include <strings.h>

#define MAXLINE 10
#define OPEN_MAX 100
#define LISTENQ 20
#define SERV_PORT 8006
#define INFTIM 1000

// Thread pool task queue structure
struct task{
    int fd;                 // file descriptor to read or write
    struct task *next;      // next task
};

// Used to pass parameters to the read and write handlers
struct user_data{
    int fd;
    unsigned int n_size;
    char line[MAXLINE];
};

// Task functions run by the threads
void * readtask(void *args);
void * writetask(void *args);

// epoll_event variables: ev is used to register events, the array receives the events to be processed
struct epoll_event ev, events[20];
int epfd;
pthread_mutex_t mutex;
pthread_cond_t cond1;
struct task *readhead = NULL, *readtail = NULL, *writehead = NULL;

void setnonblocking(int sock)
{
    int opts;
    opts = fcntl(sock, F_GETFL);
    if (opts < 0)
    {
        perror("fcntl(sock, GETFL)");
        exit(1);
    }
    opts = opts | O_NONBLOCK;
    if (fcntl(sock, F_SETFL, opts) < 0)
    {
        perror("fcntl(sock, SETFL, opts)");
        exit(1);
    }
}

int main()
{
    int i, maxi, listenfd, connfd, sockfd, nfds;
    pthread_t tid1, tid2;

    struct task *new_task = NULL;
    struct user_data *rdata = NULL;
    socklen_t clilen;

    pthread_mutex_init(&mutex, NULL);
    pthread_cond_init(&cond1, NULL);
    // Initialize the threads used for the read thread pool
    pthread_create(&tid1, NULL, readtask, NULL);
    pthread_create(&tid2, NULL, readtask, NULL);

    // Create an epoll file descriptor for handling accept
    epfd = epoll_create(256);

    struct sockaddr_in clientaddr;
    struct sockaddr_in serveraddr;
    listenfd = socket(AF_INET, SOCK_STREAM, 0);
    // Set the socket to non-blocking
    setnonblocking(listenfd);
    // Set the file descriptor associated with the event to be processed
    ev.data.fd = listenfd;
    // Set the type of events to be handled
    ev.events = EPOLLIN | EPOLLET;
    // Register the epoll event
    epoll_ctl(epfd, EPOLL_CTL_ADD, listenfd, &ev);

    bzero(&serveraddr, sizeof(serveraddr));
    serveraddr.sin_family = AF_INET;
    serveraddr.sin_port = htons(SERV_PORT);
    serveraddr.sin_addr.s_addr = INADDR_ANY;
    bind(listenfd, (struct sockaddr *)&serveraddr, sizeof(serveraddr));
    listen(listenfd, LISTENQ);

    maxi = 0;
    for (;;) {
        // Wait for epoll events to occur
        nfds = epoll_wait(epfd, events, 20, 500);
        // Handle all events that occurred
        for (i = 0; i < nfds; ++i)
        {
            if (events[i].data.fd == listenfd)
            {
                // ...
