Nginx Server High-Performance Optimization Configuration Summary


Typically, an optimized Nginx server on Linux can handle 500,000–600,000 requests per second, while my Nginx server sustains 904,000 requests per second; I ran it under this load for over 12 hours and the server remained stable.

To be clear, all of the configurations listed in this article were validated in my test environment; you will need to adjust them according to your own server:

Install Nginx from the EPEL repository:

yum -y install nginx
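If the EPEL repository is not yet enabled on the machine, it can usually be added first (assuming a CentOS/RHEL system):

yum -y install epel-release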

Back up the configuration file and configure it according to your needs:

cp /etc/nginx/nginx.conf /etc/nginx/nginx.conf.orig
  vim /etc/nginx/nginx.conf

# This number should be, at most, the number of CPU cores on your system,
# since Nginx doesn't benefit from more than one worker per CPU.
worker_processes 24;

# Number of file descriptors used for Nginx. This is set in the OS with 'ulimit -n 200000'
# or in /etc/security/limits.conf.
worker_rlimit_nofile 200000;

# Only log critical-level errors.
error_log /var/log/nginx/error.log crit;

# Determines how many clients will be served by each worker process.
# (Max clients = worker_connections * worker_processes)
# "Max clients" is also limited by the number of socket connections available on the system (~64k).
worker_connections 4000;

# Essential for Linux; optimized so that a single thread can serve many clients.
use epoll;

# Accept as many connections as possible after Nginx gets notification about a new connection.
# May flood worker_connections if that option is set too low, producing many failed connection requests.
multi_accept on;

# Caches information about open FDs (file descriptors) for frequently accessed files.
# Changing this setting, in my environment, brought performance up from 560k req/sec to 904k req/sec.
# I recommend trying some variant of these options, rather than using these exact values directly.
open_file_cache max=200000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;

# Buffer log writes to speed up IO, or disable access logging altogether.
# access_log /var/log/nginx/access.log main buffer=16k;
access_log off;

# Sendfile copies data between one FD and another from within the kernel.
# More efficient than read() + write(), which requires transferring data to and from user space.
sendfile on;

# tcp_nopush causes Nginx to attempt to send its HTTP response headers in one packet,
# instead of using partial frames. This is useful for prepending headers before calling sendfile,
# and for throughput.
tcp_nopush on;

# Don't buffer data sends (disable the Nagle algorithm).
# Good for sending frequent small bursts of data in real time.
tcp_nodelay on;

# Timeout for keep-alive connections. The server will close connections after this time.
keepalive_timeout 30;

# Number of requests a client can make over one keep-alive connection. Set high here for testing.
keepalive_requests 100000;

# Allow the server to close the connection after a client stops responding.
# Frees up socket-associated memory.
reset_timedout_connection on;

# Send the client a "request timed out" error if the request body is not received within this time. Default 60.
client_body_timeout 10;

# If the client stops reading data, free up the stale client connection after this much time. Default 60.
send_timeout 2;

# Compression. Reduces the amount of data that needs to be transferred over the network.
gzip on;
gzip_min_length 10240;
gzip_proxied expired no-cache no-store private auth;
gzip_types text/plain text/css text/xml text/javascript application/x-javascript;
gzip_disable "MSIE [1-6]\.";
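Since worker_rlimit_nofile 200000 above assumes the operating system limit has been raised to match, a minimal sketch of the corresponding /etc/security/limits.conf entries might look like this (the nginx user name and the values are assumptions mirroring the configuration above):

nginx  soft  nofile  200000
nginx  hard  nofile  200000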

Start Nginx and configure it to start automatically at boot:

service nginx start
  chkconfig nginx on
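On newer systemd-based distributions, the equivalent commands would typically be:

systemctl start nginx
systemctl enable nginx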

Configure Tsung and start the test. It takes roughly 10 minutes or so to find the peak capacity of the server; the exact time depends on your Tsung configuration.

[root@loadnode1 ~]# vim ~/.tsung/tsung.xml
   <server host="YourWebServer" port="80" type="tcp"/>
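For reference, a minimal sketch of a complete ~/.tsung/tsung.xml built around that server line might look like the following; the DTD path, client host, arrival rate, phase duration, and request URL are illustrative assumptions rather than values from the original test:

<?xml version="1.0"?>
<!DOCTYPE tsung SYSTEM "/usr/share/tsung/tsung-1.0.dtd">
<tsung loglevel="notice">
  <clients>
    <client host="localhost" use_controller_vm="true"/>
  </clients>
  <servers>
    <server host="YourWebServer" port="80" type="tcp"/>
  </servers>
  <load>
    <arrivalphase phase="1" duration="10" unit="minute">
      <users arrivalrate="100" unit="second"/>
    </arrivalphase>
  </load>
  <sessions>
    <session name="http-bench" probability="100" type="ts_http">
      <request> <http url="/" method="GET"/> </request>
    </session>
  </sessions>
</tsung>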

tsung start

If you think the test results are sufficient, you can exit with Ctrl+C and then use the treport alias we configured earlier to view the test report.

Web Server Tuning, Part 2: TCP Protocol Stack Tuning

This section applies not only to Nginx but to any web server. Optimizing the kernel TCP configuration can increase the server's usable network bandwidth.

The following configuration worked perfectly on my 10GBASE-T server: bandwidth increased from 8Gbps under the default configuration to 9.3Gbps.

Of course, the conclusions on your server may be different.

For the following configuration items, I recommend revising only one of them at a time, then re-testing the server with a network performance tool such as netperf or iperf, or with my similar test script cluster-netbench.pl.

yum -y install netperf iperf
vim /etc/sysctl.conf

# Increase system IP port limits to allow for more connections
net.ipv4.ip_local_port_range = 1024 65000

net.ipv4.tcp_window_scaling = 1

# Number of packets to keep in the backlog before the kernel starts dropping them
net.ipv4.tcp_max_syn_backlog = 3240000

# Increase the socket listen backlog
net.core.somaxconn = 3240000
net.ipv4.tcp_max_tw_buckets = 1440000

# Increase TCP buffer sizes
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_congestion_control = cubic

Each time you revise the configuration, you need to run the following command for the changes to take effect.

sysctl -p /etc/sysctl.conf

Don't forget to run a network benchmark after each configuration revision, so you can see which revisions bring the most obvious improvement. This kind of methodical testing can save you a lot of time.
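For example, a quick throughput check with iperf between the load-generation host and the web server might look like this (the host name and options are illustrative):

# on the web server under test
iperf -s

# on the load-generation host: a 30-second test with 4 parallel streams
iperf -c YourWebServer -t 30 -P 4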

Common Optimization Configuration Items

Generally speaking, the following items in the Nginx configuration file are useful for optimization:
1. worker_processes 8;
The number of Nginx worker processes. It is recommended to set this according to the number of CPUs, typically a multiple of it (for example, 8 for two quad-core CPUs).
2. worker_cpu_affinity 00000001 00000010 00000100 00001000 00010000 00100000 01000000 10000000;
Binds a CPU to each worker process. In the example above, 8 processes are bound to 8 CPUs; you can also write multiple bits, or assign one process to several CPUs.
3. worker_rlimit_nofile 65535;
This directive sets the maximum number of file descriptors an Nginx process may open. The theoretical value is the maximum number of open files (ulimit -n) divided by the number of Nginx processes, but since Nginx does not distribute requests that evenly, it is best to keep it consistent with the ulimit -n value.
Under the current Linux 2.6 kernel the open-file limit is 65535, so worker_rlimit_nofile should be set to 65535.

This is because Nginx does not schedule requests to worker processes perfectly evenly, so if you set 10240, some process may exceed 10240 descriptors once total concurrency reaches 30,000–40,000, and 502 errors will then be returned.
To view the Linux system file descriptor limits:

[root@web001 ~]# sysctl -a | grep fs.file
fs.file-max = 789972
fs.file-nr = 510 0 789972

4. use epoll;
Use the epoll I/O event model.
(
Supplementary Note:
Like Apache, Nginx has different event models for different operating systems.
A) Standard event models
Select and poll belong to the standard event models. If the current system has no more efficient method available, Nginx will choose select or poll.
B) High-efficiency event models
Kqueue: used on FreeBSD 4.1+, OpenBSD 2.9+, NetBSD 2.0, and Mac OS X. Note that using kqueue on a dual-processor Mac OS X system can cause a kernel panic.
Epoll: used on Linux kernel 2.6 and later systems.

/dev/poll: used on Solaris 7 11/99+, HP/UX 11.22+ (eventport), IRIX 6.5.15+, and Tru64 UNIX 5.1A+.
Eventport: used on Solaris 10. To prevent kernel crashes, it is necessary to install the relevant security patches.
)
5. worker_connections 65535;
The maximum number of connections allowed per worker process. In theory, the maximum number of connections an Nginx server can handle is worker_processes * worker_connections.
6. keepalive_timeout 60;
Keep-alive timeout, in seconds.
7. client_header_buffer_size 4k;
The buffer size for the client request header. This can be set according to your system page size: a request header generally does not exceed 1k, but since the system page size is usually larger than 1k, it is set to the page size here.
The page size can be obtained with the command getconf PAGESIZE.

[root@web001 ~]# getconf PAGESIZE
4096

There are cases where the request header exceeds 4k, but in any case client_header_buffer_size must be set to an integer multiple of the system page size.
8. open_file_cache max=65535 inactive=60s;
This enables caching for open files, which is not enabled by default. max specifies the number of cache entries and is recommended to match the number of open files; inactive specifies how long a file may go unrequested before its cache entry is removed.
9. open_file_cache_valid 80s;
How often to check the cached entries for validity.
10. open_file_cache_min_uses 1;
The minimum number of times a file must be used within the inactive period of the open_file_cache directive for its descriptor to remain open in the cache. As in the example above, if a file is not used even once within the inactive time, it will be removed.
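For reference, a minimal sketch of where the directives above sit in nginx.conf; the contexts are the ones Nginx accepts them in, and the values simply repeat the examples above:

worker_processes 8;
worker_rlimit_nofile 65535;
worker_cpu_affinity 00000001 00000010 00000100 00001000 00010000 00100000 01000000 10000000;

events {
    use epoll;
    worker_connections 65535;
}

http {
    keepalive_timeout 60;
    client_header_buffer_size 4k;
    open_file_cache max=65535 inactive=60s;
    open_file_cache_valid 80s;
    open_file_cache_min_uses 1;
}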

About optimization of kernel parameters:

net.ipv4.tcp_max_tw_buckets = 6000

The maximum number of sockets kept in TIME_WAIT state; the default is 180000.

net.ipv4.ip_local_port_range = 1024 65000

The range of local ports the system is allowed to open.

net.ipv4.tcp_tw_recycle = 1

Enable fast recycling of TIME_WAIT sockets.

net.ipv4.tcp_tw_reuse = 1

Enable reuse, which allows TIME_WAIT sockets to be reused for new TCP connections.

net.ipv4.tcp_syncookies = 1

Enable SYN cookies: when the SYN backlog queue overflows, handle connections using cookies.

net.core.somaxconn = 262144

The backlog of the listen() call in a web application is limited by default to our kernel parameter net.core.somaxconn, which is 128, while Nginx's NGX_LISTEN_BACKLOG default is 511, so it is necessary to raise this value.
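Note that raising net.core.somaxconn alone is not enough on the Nginx side: the larger backlog must also be requested on the listen directive, for example (the port and value here simply mirror the settings above):

listen 80 backlog=262144;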

net.core.netdev_max_backlog = 262144

The maximum number of packets allowed to queue per network interface when packets arrive faster than the kernel can process them.

net.ipv4.tcp_max_orphans = 262144

The maximum number of TCP sockets in the system that are not attached to any user file handle. If this number is exceeded, orphaned connections are immediately reset and a warning is printed. This limit exists only to prevent simple DoS attacks; do not rely on it too much or lower it artificially. If anything, increase it (if you add memory).

net.ipv4.tcp_max_syn_backlog = 262144

The maximum number of recorded connection requests that have not yet received an acknowledgment from the client. The default is 1024 for systems with 128MB of memory, and 128 for low-memory systems.

net.ipv4.tcp_timestamps = 0

Timestamps protect against sequence-number wraparound. A 1Gbps link is certain to encounter previously used sequence numbers, and the timestamp lets the kernel accept such "abnormal" packets. Here it should be turned off.

net.ipv4.tcp_synack_retries = 1

To open a connection to the remote end, the kernel must send a SYN with an ACK acknowledging the earlier SYN: the second step of the so-called three-way handshake. This setting determines how many SYN+ACK packets the kernel sends before giving up on the connection.

net.ipv4.tcp_syn_retries = 1

The number of SYN packets the kernel sends before giving up on establishing the connection.

net.ipv4.tcp_fin_timeout = 1

This parameter determines how long a socket remains in the FIN-WAIT-2 state when the connection was closed from our side. The peer may misbehave and never close its end, or even crash unexpectedly. The default value is 60 seconds; 2.2 kernels normally used 180 seconds. You can keep this setting, but bear in mind that even on a lightly loaded web server there is a risk of memory exhaustion from a large number of dead sockets. FIN-WAIT-2 is less dangerous than FIN-WAIT-1 because each such socket consumes at most 1.5K of memory, but they tend to live longer.

net.ipv4.tcp_keepalive_time = 30

How frequently TCP sends keepalive messages when keepalive is enabled. The default is 2 hours.
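To check the current value of any of these parameters before and after changing it, sysctl can be queried directly, for example:

sysctl net.ipv4.tcp_keepalive_time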
