Keepalive connections from nginx 1.1.4+ to backend servers


Reposted from: http://zhangxugg-163-com.iteye.com/blog/1551269

Nginx Upstream keepalive connections

Starting with version 1.1.4, nginx supports keepalive (long-lived) connections to backend servers. This is an exciting improvement: nginx communicates with backends more efficiently, and the backends carry less connection-handling overhead.

For example, a backend without keepalive connections accumulates a large number of connections in the TIME_WAIT state, which can be verified with the following command:

netstat -n | grep TIME_WAIT
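A fuller picture, on a typical Linux box, is to tally every TCP state at once; the awk one-liner below (a sketch, assuming standard netstat output where the state is the last field) counts connections per state:

```shell
# Summarize all TCP connections grouped by state (TIME_WAIT,
# ESTABLISHED, ...); the state is the last field of each "tcp" line:
netstat -n | awk '/^tcp/ {++count[$NF]} END {for (s in count) print s, count[s]}'
```

A backend suffering from short-lived connections will show TIME_WAIT dominating this summary.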

According to the official documentation, keepalive connections are now supported for the HTTP, FastCGI, and memcached protocols; earlier versions supported only memcached.

1. Enable keepalive connections to memcached servers
Add the keepalive n directive to the upstream configuration block:
upstream memcached_backend {
    server 127.0.0.1:11211;
    server 10.0.0.2:11211;
    keepalive 32;
}

server {
    ...
    location /memcached/ {
        set $memcached_key $uri;
        memcached_pass memcached_backend;
    }
}

2. Enable FastCGI keepalive support
In addition to configuring keepalive n in the upstream block, fastcgi_keep_conn on; is also required in the location block:

upstream fastcgi_backend {
    server 127.0.0.1:9000;
    keepalive 8;
}

server {
    ...
    location /fastcgi/ {
        fastcgi_pass fastcgi_backend;
        fastcgi_keep_conn on;
        ...
    }
}

3. Enable HTTP keepalive connections to backend servers

upstream http_backend {
    server 127.0.0.1:8080;
    keepalive 16;
}

server {
    ...
    location /http/ {
        proxy_pass http://http_backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        ...
    }
}

Note: the proxied request's HTTP protocol version must be set to 1.1, and the Connection request header must be cleared, as described in the official documentation:

For HTTP, the proxy_http_version directive should be set to "1.1" and the "Connection" header field should be cleared.

The connections parameter should be set low enough to allow upstream servers to process additional new incoming connections as well.

In other words, the value of n in the keepalive n directive should be kept small enough that backend servers can still accept new connections alongside the idle keepalive ones.
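To confirm keepalive is actually being used, you can watch the connections from nginx to the backend: with keepalive working, a small, stable pool of ESTABLISHED connections should replace the churn of TIME_WAIT ones. A sketch, assuming the backend listens on port 8080 as in the HTTP example above:

```shell
# Count connections to the backend port (8080 assumed) that are in the
# ESTABLISHED state; with keepalive, this number stays small and stable:
netstat -n | grep ':8080 ' | grep -c ESTABLISHED
```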

In my production environment, nginx sits in front and varnish caches static files. After enabling keepalive connections, the connection count on the varnish machine dropped from more than 8,000 to about 200, and its load average also fell noticeably.

However, with a FastCGI backend running PHP-FPM, the following error appeared in the nginx log:

upstream sent unsupported FastCGI protocol version: 0 while reading.

Despite extensive searching, this is not yet resolved. If you encounter the same problem and solve it, please contact the author at zhangxugg@163.com. Thank you very much.
