This article mainly looks at how nginx processes keepalive and pipelined requests.
Original article; please credit the source when reprinting: pagefault
Link: nginx analysis of keepalive and pipeline request processing
This time we mainly look at how nginx handles keepalive and pipelined requests. The concepts themselves are not introduced here; let's go straight to how nginx deals with them.
First, let's look at the handling of keepalive. In HTTP/1.1 keepalive is the default, unless the client explicitly sends a Connection: close header. The following is the code nginx uses to decide whether keepalive is needed.
void
ngx_http_handler(ngx_http_request_t *r)
{
.........................................
switch (r->headers_in.connection_type) {
case 0:
// If the version is later than 1.0, the default value is keepalive.
r->keepalive = (r->http_version > NGX_HTTP_VERSION_10);
break;
case NGX_HTTP_CONNECTION_CLOSE:
// Keepalive is not required if the specified connection header is close
r->keepalive = 0;
break;
case NGX_HTTP_CONNECTION_KEEP_ALIVE:
r->keepalive = 1;
break ;
}
..................................
}
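The switch above only consumes r->headers_in.connection_type; the value itself is filled in earlier, while the request headers are parsed (nginx checks the Connection header value for "close" and "keep-alive" in ngx_http_process_connection). Below is a rough sketch of that mapping, not nginx's actual code; the demo_ names are made up for illustration.

#include <strings.h>

#define DEMO_CONNECTION_CLOSE      1
#define DEMO_CONNECTION_KEEP_ALIVE 2

/* toy version: nginx really does a case-insensitive substring match on the header value */
static int
demo_connection_type(const char *value)
{
    if (value == NULL) {
        return 0;                    /* no Connection header: fall back to the HTTP version */
    }

    if (strcasecmp(value, "close") == 0) {
        return DEMO_CONNECTION_CLOSE;
    }

    if (strcasecmp(value, "keep-alive") == 0) {
        return DEMO_CONNECTION_KEEP_ALIVE;
    }

    return 0;
}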
So with keepalive, the connection is not closed right after the current HTTP request has been handled. Accordingly, nginx's keepalive handling sits in the request cleanup path.
The nginx request cleanup function is ngx_http_finalize_request. It calls ngx_http_finalize_connection to release the connection, and that is where the keepalive decision is made.
static void
ngx_http_finalize_connection(ngx_http_request_t *r)
{
ngx_http_core_loc_conf_t *clcf;
clcf = ngx_http_get_module_loc_conf(r, ngx_http_core_module);
.....................................................................
// You can see that if keepalive is set and timeout is greater than 0, the process of keepalive is started.
if (!ngx_terminate
&& !ngx_exiting
&& r->keepalive
&& clcf->keepalive_timeout > 0)
{
ngx_http_set_keepalive(r);
return;
} else if (r->lingering_close && clcf->lingering_timeout > 0) {
ngx_http_set_lingering_close(r);
return;
}
ngx_http_close_request(r, 0);
}
From the above we can see that keepalive is set up through ngx_http_set_keepalive, so next we will look at this function in detail.
This function also deals with pipelined requests, so let's first see how nginx recognizes them: if the data read from the client still contains bytes left over after the current request has been parsed, nginx treats the remainder as a pipelined request.
Another important piece is http_connection. From the previous post we know that when a large header buffer has to be allocated, it is taken from hc->free first; if none is available, a new one is created and then tracked in hc->busy. Those buffers are recycled here, so that the next request needing a large buffer does not have to allocate a fresh one.
hc = r->http_connection;
b = r->header_in;
// Normally, after header_in has been parsed, pos has caught up with last, i.e. exactly one complete http request was read. If pos is still smaller than last, the extra data belongs to a pipelined request.
if (b->pos < b->last) {
/* the pipelined request */
if (b != c->buffer) {
/*
* If the large header buffers were allocated while the previous
* request processing then we do not use c->buffer for
* the pipelined request (see ngx_http_init_request()).
*
* Now we would move the large header buffers to the free list.
*/
cscf = ngx_http_get_module_srv_conf(r, ngx_http_core_module);
// If the free list does not exist yet, allocate it
if (hc->free == NULL) {
// The array holds large_client_header_buffers.num buffer pointers
hc->free = ngx_palloc(c->pool,
cscf->large_client_header_buffers.num * sizeof(ngx_buf_t *));
if (hc->free == NULL) {
ngx_http_close_request(r, 0);
return;
}
}
// Move the previous request's busy buffers (all but the last) to the free list
for (i = 0; i < hc->nbusy - 1; i++) {
f = hc->busy[i];
hc->free[hc->nfree++] = f;
f->pos = f->start;
f->last = f->start;
}
// Keep the current header_in buf in busy[0]; it still holds the pipelined data
hc->busy[0] = b;
hc->nbusy = 1;
}
}
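To make the free/busy recycling above easier to picture, here is a small self-contained sketch of the same idea. It is not nginx code: the demo_ types, the fixed array sizes and plain malloc are invented for illustration, while nginx does this on top of its connection pool and ngx_buf_t (error checks are omitted).

#include <stdio.h>
#include <stdlib.h>

#define DEMO_NBUFS    4
#define DEMO_BUF_SIZE 8192

typedef struct {
    unsigned char  *start, *pos, *last, *end;
} demo_buf_t;

typedef struct {
    demo_buf_t  *busy[DEMO_NBUFS];   /* buffers still holding request data */
    demo_buf_t  *free[DEMO_NBUFS];   /* buffers ready to be reused         */
    int          nbusy, nfree;
} demo_conn_t;

/* take a buffer from the free list if possible, otherwise allocate one */
static demo_buf_t *
demo_get_buf(demo_conn_t *hc)
{
    demo_buf_t  *b;

    if (hc->nfree > 0) {
        return hc->free[--hc->nfree];
    }

    b = malloc(sizeof(demo_buf_t));
    b->start = malloc(DEMO_BUF_SIZE);
    b->pos = b->last = b->start;
    b->end = b->start + DEMO_BUF_SIZE;
    return b;
}

/* when a request is finished, reset its busy buffers and move them to the free list */
static void
demo_release_bufs(demo_conn_t *hc)
{
    int  i;

    for (i = 0; i < hc->nbusy; i++) {
        demo_buf_t  *b = hc->busy[i];
        b->pos = b->last = b->start;          /* reset, but keep the memory */
        hc->free[hc->nfree++] = b;
    }
    hc->nbusy = 0;
}

int
main(void)
{
    demo_conn_t  hc = { 0 };

    demo_buf_t  *b = demo_get_buf(&hc);       /* first request: fresh malloc */
    hc.busy[hc.nbusy++] = b;

    demo_release_bufs(&hc);                   /* request done: recycle       */

    demo_buf_t  *b2 = demo_get_buf(&hc);      /* next request: reused buffer */
    printf("buffer reused: %s\n", b2 == b ? "yes" : "no");

    return 0;
}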
Next, the request is freed and the keepalive timer is set.
r->keepalive = 0;
ngx_http_free_request(r, 0);
c->data = hc;
// Set the timer
ngx_add_timer(rev, clcf->keepalive_timeout);
// Then set the readable event
if (ngx_handle_read_event(rev, 0) != NGX_OK) {
ngx_http_close_connection(c);
return;
}
wev = c->write;
wev->handler = ngx_http_empty_handler;
Then comes the handling of pipelined requests.
if (b->pos < b->last) {
ngx_log_debug0(NGX_LOG_DEBUG_HTTP, c->log, 0, "pipelined request");
#if (NGX_STAT_STUB)
(void) ngx_atomic_fetch_add(ngx_stat_reading, 1);
#endif
// Mark this connection as handling a pipelined request
hc->pipeline = 1;
c->log->action = "reading client pipelined request line";
// Post the read event so the next request is handled from the posted-events queue
rev->handler = ngx_http_init_request;
ngx_post_event(rev, &ngx_posted_events);
return;
}
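In case pipelining on the wire is hard to picture, the toy program below shows why b->pos can still be smaller than b->last here: two requests sent back to back may arrive in a single read, and after the first one is parsed the rest is still sitting in the buffer. This is only an illustration, not nginx code; the requests and names are made up.

#include <stdio.h>
#include <string.h>

int
main(void)
{
    const char *data =
        "GET /a HTTP/1.1\r\nHost: example.com\r\n\r\n"   /* first request  */
        "GET /b HTTP/1.1\r\nHost: example.com\r\n\r\n";   /* second request */

    const char *last = data + strlen(data);

    /* pretend the parser consumed the first request up to its final CRLF CRLF */
    const char *pos = strstr(data, "\r\n\r\n") + 4;

    if (pos < last) {
        printf("pipelined: %zu bytes of the next request already read\n",
               (size_t) (last - pos));
    }

    return 0;
}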
If execution reaches the code below, the request is not pipelined, so the request and the buffers tracked by http_connection are cleaned up.
if (ngx_pfree(c->pool, r) == NGX_OK) {
hc->request = NULL;
}
b = c->buffer;
if (ngx_pfree(c->pool, b->start) == NGX_OK) {
/*
* the special note for ngx_http_keepalive_handler() that
* c->buffer's memory was freed
*/
b->pos = NULL;
} else {
b->pos = b->start;
b->last = b->start;
}
.....................................................................
if (hc->busy) {
for (i = 0; i < hc->nbusy; i++) {
ngx_pfree(c->pool, hc->busy[i]->start);
hc->busy[i] = NULL;
}
hc->nbusy = 0;
}
Finally, the keepalive handler is installed.
// This function will be analyzed in detail later
rev->handler = ngx_http_keepalive_handler;
if (wev->active && (ngx_event_flags & NGX_USE_LEVEL_EVENT)) {
if (ngx_del_event(wev, NGX_WRITE_EVENT, 0) != NGX_OK) {
ngx_http_close_connection(c);
return;
}
}
The last step handles tcp push, which I will not cover here; a later post will be devoted to nginx's tcp push handling.
Now let's look at the ngx_http_keepalive_handler function, which services keepalive connections: it is called whenever another readable event arrives on the connection.
The handler is fairly simple: it prepares a buffer and starts a new HTTP request cycle (by calling ngx_http_init_request).
b = c->buffer;
size = b->end - b->start;
if (b->pos == NULL) {
/*
* The c->buffer's memory was freed by ngx_http_set_keepalive().
* However, the c->buffer->start and c->buffer->end were not changed
* to keep the buffer size.
*/
// Re-allocate the buf
b->pos = ngx_palloc(c->pool, size);
if (b->pos == NULL) {
ngx_http_close_connection(c);
return;
}
b->start = b->pos;
b->last = b->pos;
b->end = b->pos + size;
}
Then it tries to read data. If nothing is readable yet, the read event is registered again and the handler waits for the next event.
n = c->recv(c, b->last, size);
c->log_error = NGX_ERROR_INFO;
if (n == NGX_AGAIN) {
if (ngx_handle_read_event(rev, 0) != NGX_OK) {
ngx_http_close_connection(c);
}
return;
}
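For reference, the NGX_AGAIN case above corresponds to the usual non-blocking socket behaviour: recv() returning -1 with errno set to EAGAIN/EWOULDBLOCK. Below is a rough sketch of that mapping, not nginx's actual recv wrapper; the demo_ name and the -2 sentinel are invented.

#include <errno.h>
#include <sys/types.h>
#include <sys/socket.h>

/* roughly what c->recv boils down to on a non-blocking socket */
static ssize_t
demo_try_recv(int fd, void *buf, size_t size)
{
    ssize_t  n = recv(fd, buf, size, 0);

    if (n == -1 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
        return -2;      /* "no data yet, try again later" -- the NGX_AGAIN situation */
    }

    return n;           /* > 0: bytes read, 0: peer closed, -1: real error */
}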
If data has been read, the request is processed.
ngx_http_init_request(rev);
Finally, let's look at the ngx_http_init_request function, focusing this time on how nginx reuses the request structure for pipelined requests.
Pay attention to hc->busy[0]. As we saw above, for a pipelined request the not-yet-parsed header_in buffer is saved there, because part of the second request's headers may already have been read together with the first request.
// Get the request saved in http_connection; for a pipelined request the previous request structure is still there and can be reused.
r = hc->request;
if (r) {
// If yes, we will reuse the previous request.
ngx_memzero(r, sizeof(ngx_http_request_t));
r->pipeline = hc->pipeline;
// If there are busy buffers left over from the previous request
if (hc->nbusy) {
// Take the saved buffer as header_in; the pipelined data in it is parsed directly below
r->header_in = hc->busy[0];
}
} else {
r = ngx_pcalloc(c->pool, sizeof(ngx_http_request_t));
if (r == NULL) {
ngx_http_close_connection(c);
return;
}
hc->request = r;
}
// Save the request
c->data = r;
From the code above, combined with my previous post, we can see that the large header buffers mainly matter for pipelining: when reading the previous request also pulls in part of the next request's headers, parsing the next request may overflow the originally allocated client_header_buffer_size, and a large header buffer has to be allocated. So http_connection exists mainly for the pipelined case, whereas a keepalive connection without pipelining simply releases the previous request to save memory.
This concludes the analysis of how nginx processes keepalive and pipelined requests.