Nginx Configuration for High Concurrency

Source: Internet
Author: User
Tags: epoll

One. The directives in the Nginx configuration file that generally matter most for optimization are the following:

1. worker_processes 8;

The number of nginx worker processes. It is recommended to set this according to the number of CPU cores, usually equal to it or a multiple of it (for example, two quad-core CPUs give 8).
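On newer nginx releases (1.2.5 / 1.3.8 and later) the worker count can also be derived automatically from the detected CPU cores; a minimal sketch of that variant:

worker_processes auto;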

2. worker_cpu_affinity 00000001 00000010 00000100 00001000 00010000 00100000 01000000 10000000;

Binds a CPU to each worker process. The example above assigns 8 processes to 8 CPUs; you can also write fewer masks, or bind a single process to more than one CPU.
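As an illustration, a hypothetical layout that runs only 4 worker processes on an 8-core machine and binds each one to two CPUs (the masks here are illustrative, not values from the original text):

worker_processes 4;
worker_cpu_affinity 00000011 00001100 00110000 11000000;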

3. worker_rlimit_nofile 65535;

This directive sets the maximum number of file descriptors an nginx process may open. The theoretical value is the maximum number of open files (ulimit -n) divided by the number of nginx processes, but nginx does not distribute requests evenly across processes, so it is best to keep this value consistent with ulimit -n.

On a Linux 2.6 kernel the open-file limit is 65535, so worker_rlimit_nofile should be set to 65535 accordingly.

This is because nginx does not balance requests across processes perfectly: if you set the value to 10240 and total concurrency reaches 30,000 to 40,000, some process may exceed 10240 open files and a 502 error will be returned.

How to view Linux system file descriptors:

# sysctl -a | grep fs.file

fs.file-max = 789972

fs.file-nr = 510 0 789972

4. use epoll;

Use the epoll I/O event model.

(

Additional notes:

Similar to Apache, nginx has different event models for different operating systems.

A) Standard event model
select and poll belong to the standard event model; if the current system has no more efficient method, nginx chooses select or poll.
B) Efficient Event model
kqueue: used on FreeBSD 4.1+, OpenBSD 2.9+, NetBSD 2.0, and Mac OS X. Using kqueue on a dual-processor Mac OS X system can cause the kernel to crash.
epoll: used in Linux kernel version 2.6 and later systems.

/dev/poll: used on Solaris 7 11/99+, HP-UX 11.22+ (eventport), IRIX 6.5.15+, and Tru64 UNIX 5.1A+.

eventport: used on Solaris 10. To prevent kernel crashes, it is necessary to install security patches.

)

5. worker_connections 65535;

The maximum number of connections allowed per worker process. In theory, the maximum number of connections an nginx server can handle is worker_processes * worker_connections.
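Items 4 and 5 live in the events block; a minimal sketch combining the values discussed above:

events {
    use epoll;
    worker_connections 65535;
}

With worker_processes 8 this gives a theoretical ceiling of 8 * 65535 = 524280 connections per server.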

6. keepalive_timeout;

The keepalive timeout period.
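A minimal sketch with an illustrative value (the 60 seconds below is an assumption, not a value given above):

keepalive_timeout 60;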

7. client_header_buffer_size 4k;

The buffer size for the client request header. This can be set according to your system's page size: a request header generally does not exceed 1k, but because the typical system page is larger than 1k, it is set to the page size here.

The page size can be obtained with the command getconf PAGESIZE.

# getconf PAGESIZE

4096

There are also cases where the request header exceeds 4k; in any case, the client_header_buffer_size value must be set to an integer multiple of the system page size.
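For example, if larger headers are expected, the buffer could be raised to two pages on a system with 4096-byte pages (the 8k value is illustrative):

client_header_buffer_size 8k;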

8. open_file_cache max=65535 inactive=60s;

This enables a cache for open files, which is disabled by default. max specifies the number of cached entries and should match the maximum number of open files; inactive specifies how long a file may go unrequested before its cache entry is removed.

9. open_file_cache_valid 80s;

This specifies how often to check whether the cached information is still valid.

10. open_file_cache_min_uses 1;

The minimum number of times a file must be used within the inactive period of the open_file_cache directive for its descriptor to remain open in the cache. As in the example above, if a file is not used even once within the inactive time, it is removed.
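Putting items 8 through 10 together inside the http block, using the values from the examples above:

http {
    open_file_cache max=65535 inactive=60s;
    open_file_cache_valid 80s;
    open_file_cache_min_uses 1;
}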

Two. Optimization of kernel parameters:

net.ipv4.tcp_max_tw_buckets = 6000

The maximum number of TIME_WAIT sockets; the default is 180000.

net.ipv4.ip_local_port_range = 1024 65000

The range of local ports the system is allowed to open.

net.ipv4.tcp_tw_recycle = 1

Enables fast recycling of TIME_WAIT sockets.

net.ipv4.tcp_tw_reuse = 1

Enables reuse, allowing TIME_WAIT sockets to be reused for new TCP connections.

net.ipv4.tcp_syncookies = 1

Enables SYN cookies, so that cookies are used when the SYN wait queue overflows.

net.core.somaxconn = 262144

The backlog of the listen() call in a web application is capped by the kernel parameter net.core.somaxconn, which defaults to 128, while nginx's NGX_LISTEN_BACKLOG defaults to 511, so this value needs to be raised.
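To let nginx itself request a larger accept queue to match, the listen directive accepts a backlog parameter; the value below simply mirrors the sysctl setting and is illustrative:

listen 80 backlog=262144;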

net.core.netdev_max_backlog = 262144

The maximum number of packets allowed to queue when a network interface receives packets faster than the kernel can process them.

net.ipv4.tcp_max_orphans = 262144

The maximum number of TCP sockets in the system that are not associated with any user file handle. If this number is exceeded, the orphan connection is reset immediately and a warning is printed. This limit only exists to prevent simple DoS attacks; do not rely on it too much or lower it artificially, but rather increase it (if memory allows).

net.ipv4.tcp_max_syn_backlog = 262144

The maximum number of recorded connection requests that have not yet received an acknowledgment from the client. The default is 1024 for systems with 128 MB of memory, and 128 for systems with less.

net.ipv4.tcp_timestamps = 0

Timestamps guard against sequence-number wraparound. On a 1 Gbps link you will certainly encounter sequence numbers that were used before; timestamps let the kernel accept such "abnormal" packets. Here it should be turned off.

net.ipv4.tcp_synack_retries = 1

To open a connection to the peer, the kernel sends a SYN with an ACK responding to the previous SYN: the second step of the so-called three-way handshake. This setting determines how many SYN+ACK packets the kernel sends before giving up on the connection.

net.ipv4.tcp_syn_retries = 1

The number of SYN packets sent before the kernel gives up on establishing the connection.

net.ipv4.tcp_fin_timeout = 1

If the socket is closed by the local side, this parameter determines how long it stays in the FIN-WAIT-2 state. The peer may misbehave and never close the connection, or even crash unexpectedly. The default value is 60 seconds; the usual value on 2.2 kernels was 180 seconds. You can use that setting, but remember that even on a lightly loaded web server there is a risk of memory exhaustion from a large number of dead sockets. FIN-WAIT-2 is less dangerous than FIN-WAIT-1 because it consumes at most 1.5 KB of memory, but those sockets live longer.

net.ipv4.tcp_keepalive_time = 30

How often TCP sends keepalive messages when keepalive is enabled. The default is 2 hours.

Three. A complete set of kernel optimization settings:

Edit /etc/sysctl.conf (vi /etc/sysctl.conf). On CentOS 5.5 all existing content can be cleared and replaced directly with the following:

net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
net.ipv4.tcp_max_tw_buckets = 6000
net.ipv4.tcp_sack = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.tcp_wmem = 4096 16384 4194304
net.core.wmem_default = 8388608
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.netdev_max_backlog = 262144
net.core.somaxconn = 262144
net.ipv4.tcp_max_orphans = 3276800
net.ipv4.tcp_max_syn_backlog = 262144
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_synack_retries = 1
net.ipv4.tcp_syn_retries = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_mem = 94500000 915000000 927000000
net.ipv4.tcp_fin_timeout = 1
net.ipv4.tcp_keepalive_time = 30
net.ipv4.ip_local_port_range = 1024 65000

To make the configuration effective immediately, use the following command:
/sbin/sysctl -p

Four. Optimization of the system connection limits

Linux defaults both open files and max user processes to 1024:

# ulimit -n

1024

# ulimit -u

1024

This means the server only allows 1024 files to be open at the same time and can run 1024 user processes.

Use ulimit -a to view all limits of the current system, and ulimit -n to view the current maximum number of open files.

A freshly installed Linux system defaults to only 1024, so a heavily loaded server easily runs into "error: too many open files". The limit therefore needs to be raised.

Solution:

Running ulimit -n 65535 changes the limit immediately, but the change does not survive a reboot. (Note that ulimit -SHn 65535 is equivalent to ulimit -n 65535; -S refers to the soft limit, -H to the hard limit.)

There are three ways to make the change persistent:

1. Add the line ulimit -SHn 65535 to /etc/rc.local
2. Add the line ulimit -SHn 65535 to /etc/profile
3. Add the following at the end of /etc/security/limits.conf:

* soft nofile 65535
* hard nofile 65535
* soft nproc 65535
* hard nproc 65535

On CentOS the first method is ineffective; use the third method there. On Debian the second method works.

# ulimit -n

65535

# ulimit -u

65535

Note: the ulimit command distinguishes soft and hard settings; -H shows the hard limit, -S shows the soft limit, and by default the soft limit is displayed.

The soft limit is the value currently in effect on the system. Ordinary users can lower the hard limit but cannot raise it, and the soft limit cannot be set higher than the hard limit. Only root can increase the hard limit.

Recommended blog: http://my.oschina.net/fqing/blog?catalog=232290
