nginx 1.1.4+: keepalive (long) connections to backend servers


Reposted from: http://zhangxugg-163-com.iteye.com/blog/1551269

Nginx Upstream keepalive connections

Starting with version 1.1.4, nginx supports keepalive (long) connections to backend servers. This is an exciting improvement: nginx communicates with backends more efficiently and places a lighter load on them.

For example, on a backend reached without keepalive connections, a large number of connections in the TIME_WAIT state can be observed with the following command:

netstat -n | grep TIME_WAIT
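As a rough illustration (not from the original post), the following one-liner summarizes connections by TCP state, which makes the proportion of TIME_WAIT connections easy to see:

# Count connections grouped by TCP state (TIME_WAIT, ESTABLISHED, ...)
netstat -n | awk '/^tcp/ {state[$NF]++} END {for (s in state) print s, state[s]}'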

According to the official documentation, keepalive support is now implemented for the HTTP, FastCGI, and memcached protocols; in earlier versions only the memcached protocol was supported.

1. Enabling keepalive connections to memcached servers
Add the keepalive N directive to the upstream configuration block:
upstream memcached_backend {
    server 127.0.0.1:11211;
    server 10.0.0.2:11211;
    keepalive 32;
}

server {
    ...
    location /memcached/ {
        set $memcached_key $uri;
        memcached_pass memcached_backend;
    }
}
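To check whether connections are actually being reused, a simple sketch (assuming the memcached addresses from the example above) is to look at the state of connections to port 11211; with keepalive working they should mostly stay ESTABLISHED rather than piling up in TIME_WAIT:

# Connections from nginx to the memcached backends in the example
netstat -n | grep ':11211'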

2. Enabling FastCGI keepalive support
In addition to keepalive N in the upstream block, you also need to add fastcgi_keep_conn on; in the location:

upstream fastcgi_backend {
    server 127.0.0.1:9000;
    keepalive 8;
}

server {
    ...
    location /fastcgi/ {
        fastcgi_pass fastcgi_backend;
        fastcgi_keep_conn on;
        ...
    }
}
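Since fastcgi_keep_conn and upstream keepalive for FastCGI require nginx 1.1.4 or later, it is worth confirming the running version first; a minimal check:

# Print the nginx version (should be 1.1.4 or newer for this feature)
nginx -v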

3. Enabling HTTP keepalive connections to backend servers

upstream http_backend {
    server 127.0.0.1:8080;
    keepalive 16;
}

server {
    ...
    location /http/ {
        proxy_pass http://http_backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        ...
    }
}
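Keepalive between nginx and the backend also requires the backend itself to keep the connection open. As a rough check (assuming the example backend on 127.0.0.1:8080 from above), you can inspect the response headers it returns for an HTTP/1.1 request:

# If the backend supports keepalive, it should not answer with "Connection: close"
curl -sv -o /dev/null http://127.0.0.1:8080/ 2>&1 | grep -i '^< connection'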

Note: you need to set the HTTP protocol version of the proxied request to 1.1 and clear the Connection request header. The official documentation says:

For HTTP, the proxy_http_version directive should be set to "1.1" and the "Connection" header field should be cleared.

The connections parameter should be set low enough to allow upstream servers to process additional new incoming connections as well.

In other words: for the keepalive N directive, N should be kept reasonably small so that the backend servers can still accept new incoming connections.

In a production environment I am responsible for, nginx is the front end and varnish serves as the static file cache. After enabling keepalive connections, the connection count on the varnish machine dropped from about 8,000 to just over 200, and the load average also dropped noticeably.

However, for FastCGI, where the backend runs the PHP-FPM service, the following error appeared in the nginx error log:

upstream sent unsupported FastCGI protocol version: 0 while reading upstream

I have searched widely but have not yet resolved this. If you encounter the same problem and solve it, please contact the author at [email protected]; thank you very much.
