Nginx high-concurrency parameter configuration and Linux kernel parameter optimization

One, the directives in the Nginx configuration file that generally play the biggest role in optimization are the following:

1. worker_processes 8;

The number of nginx worker processes. It is recommended to set this according to the number of CPUs, usually equal to it or a multiple of it (for example, two quad-core CPUs count as 8).
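
A quick way to check the CPU count before choosing this value (a minimal sketch; nproc and /proc/cpuinfo are standard on Linux):

# nproc
8
# grep -c processor /proc/cpuinfo
8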

2. worker_cpu_affinity 00000001 00000010 00000100 00001000 00010000 00100000 01000000 10000000;

Binds each worker process to a CPU. The example above binds 8 processes to 8 CPUs, one each; you can also write fewer masks, or bind one process to more than one CPU, as in the sketch below.
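
For example, a hedged sketch that binds 4 worker processes to 8 CPUs, two CPUs each (each mask bit is one CPU, the rightmost bit being CPU 0):

worker_processes 4;
worker_cpu_affinity 00000011 00001100 00110000 11000000;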

3. worker_rlimit_nofile 65535;

This directive sets the maximum number of file descriptors an nginx process may open. In theory the value should be the system's maximum number of open files (ulimit -n) divided by the number of nginx processes, but nginx does not distribute requests evenly across processes, so it is best to keep the value consistent with ulimit -n.

With the maximum number of open files on a Linux 2.6 kernel set to 65535, worker_rlimit_nofile should accordingly be set to 65535.

This is because nginx does not balance requests across processes perfectly: if you set it to 10240 and total concurrency reaches the 30,000-40,000 range, some process may exceed 10240 descriptors, and a 502 error will be returned.

How to view the system's file-descriptor limits on Linux:

# sysctl -a | grep fs.file

fs.file-max = 789972

fs.file-nr = 510 0 789972
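
To confirm the limit actually applied to a running worker process (a minimal sketch; it assumes pgrep is available and an nginx worker is running):

# cat /proc/$(pgrep -f "nginx: worker" | head -1)/limits | grep "open files"
Max open files            65535                65535                files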

4. use epoll;

Use the epoll I/O event model.

(

Additional notes:

Similar to Apache, nginx has different event models for different operating systems:

A) Standard event models
select and poll belong to the standard event models; if the current system has no more efficient method, nginx will choose select or poll.
B) Efficient event models
kqueue: used on FreeBSD 4.1+, OpenBSD 2.9+, NetBSD 2.0 and Mac OS X. Using kqueue on a dual-processor Mac OS X system can cause a kernel panic.
epoll: used on Linux kernel 2.6 and later systems.

/dev/poll: used on Solaris 7 11/99+, HP/UX 11.22+ (eventport), IRIX 6.5.15+, and Tru64 UNIX 5.1A+.

eventport: used on Solaris 10. To prevent kernel panics, the necessary security patches must be installed.

)

5. worker_connections 65535;

The maximum number of connections allowed per worker process. In theory, the maximum number of connections a single nginx server can handle is worker_processes * worker_connections.
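
As a worked example: with worker_processes 8 and worker_connections 65535, the theoretical ceiling is 8 * 65535 = 524,280 concurrent connections; when nginx acts as a reverse proxy, each client request also occupies an upstream connection, so the practical ceiling is roughly half of that.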

6. keepalive_timeout 60;

The keep-alive timeout, in seconds.

7. client_header_buffer_size 4k;

The buffer size for client request headers. This can be set according to your system's page size: a request header normally will not exceed 1k, but since the system page size is generally larger than 1k, this is set to the page size.

The page size can be obtained with the command getconf PAGESIZE:

# getconf PAGESIZE

4096

There are also cases where the request header exceeds 4k, but the value of client_header_buffer_size must always be set to an integer multiple of the system page size.
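
A hedged sketch that sizes both header buffers as page-size multiples (4k equals the 4096-byte page shown above; large_client_header_buffers catches the occasional oversized header):

client_header_buffer_size 4k;
large_client_header_buffers 4 8k;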

8. open_file_cache max=65535 inactive=60s;

This enables a cache for open file descriptors; it is not enabled by default. max specifies the number of cache entries (recommended to match the maximum number of open files), and inactive specifies how long a file can go without being requested before its cache entry is removed.

9. open_file_cache_valid 80s;

This specifies how often the cached entries are checked for still-valid information.

10. open_file_cache_min_uses 1;

The minimum number of times a file must be used within the inactive period of the open_file_cache directive for its descriptor to stay open in the cache. As in the example above, if a file is not used even once within the inactive time, it is removed.
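
Putting the three cache directives together (a minimal sketch; open_file_cache_errors is an optional extra directive that also caches file-lookup errors):

open_file_cache max=65535 inactive=60s;
open_file_cache_valid 80s;
open_file_cache_min_uses 1;
open_file_cache_errors on;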


Two, on the optimization of kernel parameters:

net.ipv4.tcp_max_tw_buckets = 6000

The maximum number of TIME_WAIT sockets the system keeps at the same time; the default is 180000.

net.ipv4.ip_local_port_range = 1024 65000

The range of local ports the system is allowed to use for outbound connections.

net.ipv4.tcp_tw_recycle = 1

Enables fast recycling of TIME_WAIT sockets. (Note that this option is known to cause problems for clients behind NAT and was removed in Linux 4.12.)

net.ipv4.tcp_tw_reuse = 1

Enables reuse, allowing TIME_WAIT sockets to be reused for new TCP connections.
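
To see how many TIME_WAIT sockets the machine is actually holding before and after tuning (a minimal sketch; ss ships with the iproute2 package):

# ss -tan | grep -c TIME-WAIT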

net.ipv4.tcp_syncookies = 1

Enables SYN cookies, so that when the SYN backlog queue overflows, cookies are used to handle the excess connection attempts.

net.core.somaxconn = 262144

The backlog argument of the listen() call in a web application is capped by the kernel parameter net.core.somaxconn, which defaults to 128, while nginx's NGX_LISTEN_BACKLOG defaults to 511, so this value needs to be raised.
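
Raising somaxconn alone does not help if the application still requests a small backlog; in nginx the listen backlog can also be raised explicitly (a hedged sketch, the number is illustrative):

listen 8080 backlog=16384;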

net.core.netdev_max_backlog = 262144

The maximum number of packets allowed to queue on a network interface when packets arrive faster than the kernel can process them.

net.ipv4.tcp_max_orphans = 262144

The maximum number of TCP sockets in the system that are not attached to any user file handle. If this number is exceeded, the orphaned connections are immediately reset and a warning is printed. This limit exists only to prevent simple DoS attacks; do not rely on it too much or lower it artificially. If anything, increase this value (if memory is added).

net.ipv4.tcp_max_syn_backlog = 262144

The maximum number of remembered connection requests that have not yet received an acknowledgment from the client. The default is 1024 for systems with 128 MB of memory and 128 for systems with little memory.
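
Whether the SYN backlog is actually overflowing can be read from the protocol counters (a minimal sketch; the exact counter wording varies between kernel versions):

# netstat -s | grep -i "SYNs to LISTEN"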

net.ipv4.tcp_timestamps = 0

Timestamps guard against sequence-number wraparound: on a 1 Gbps link you are bound to encounter sequence numbers that have been used before, and the timestamp lets the kernel accept such "abnormal" packets. Here it is turned off.

net.ipv4.tcp_synack_retries = 1

To open a connection to the remote end, the kernel sends a SYN together with an ACK acknowledging the earlier SYN, the second step of the three-way handshake. This setting determines how many SYN+ACK packets the kernel sends before giving up on the connection.

net.ipv4.tcp_syn_retries = 1

The number of SYN packets the kernel sends before giving up on establishing the connection.

net.ipv4.tcp_fin_timeout = 1

If a socket is closed by the local end, this parameter determines how long it stays in the FIN-WAIT-2 state. The peer may misbehave and never close its side of the connection, or even crash unexpectedly. The default is 60 seconds; the usual value on 2.2 kernels was 180 seconds. You can use this lower setting, but remember that even on a lightly loaded web server, a large backlog of dead sockets carries a risk of memory exhaustion. FIN-WAIT-2 is less dangerous than FIN-WAIT-1 because each such socket consumes at most about 1.5 KB of memory, but it can live longer.

net.ipv4.tcp_keepalive_time = 30

How frequently TCP sends keep-alive probes when keep-alive is enabled. The default is 2 hours.


Three, a complete set of kernel optimization settings:

Edit /etc/sysctl.conf (vi /etc/sysctl.conf). On CentOS 5.5 all of the existing content can simply be replaced with the following:

net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
net.ipv4.tcp_max_tw_buckets = 6000
net.ipv4.tcp_sack = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.tcp_wmem = 4096 16384 4194304
net.core.wmem_default = 8388608
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.netdev_max_backlog = 262144
net.core.somaxconn = 262144
net.ipv4.tcp_max_orphans = 3276800
net.ipv4.tcp_max_syn_backlog = 262144
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_synack_retries = 1
net.ipv4.tcp_syn_retries = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_mem = 94500000 915000000 927000000
net.ipv4.tcp_fin_timeout = 1
net.ipv4.tcp_keepalive_time = 30
net.ipv4.ip_local_port_range = 1024 65000

To make the configuration effective immediately, use the following command:
/sbin/sysctl -p
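
To confirm that a value was picked up, query one of the keys set above (a minimal sketch):

# sysctl net.core.somaxconn
net.core.somaxconn = 262144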

Four, optimization of the system's connection limits

By default, Linux limits both open files and max user processes to 1024:

# ulimit -n

1024

# ulimit -u

1024

Description: the server only allows 1024 files to be open at the same time and at most 1024 user processes.

Use ulimit -a to view all of the current system's limits, and ulimit -n to view the current maximum number of open files.

On a freshly installed Linux system the default is only 1024, so a heavily loaded server will easily run into "error: too many open files". It therefore needs to be raised.

Workaround:

Running ulimit -n 65535 changes the limit immediately, but the change does not survive a reboot. (Note that ulimit -SHn 65535 is equivalent to ulimit -n 65535; -S refers to the soft limit and -H to the hard limit.)

There are three ways to make the change persistent:

1. Add the line ulimit -SHn 65535 to /etc/rc.local
2. Add the line ulimit -SHn 65535 to /etc/profile
3. Add the following at the end of /etc/security/limits.conf:

* soft nofile 65535
* hard nofile 65535
* soft nproc 65535
* hard nproc 65535

On CentOS the 1st method does not work; use the 3rd method there. On Debian, the 2nd method works.

# ulimit -n

65535

# ulimit -u

65535

Note: the ulimit command itself distinguishes soft and hard settings; -H shows or sets the hard limit, -S the soft limit, and by default the soft limit is displayed.

The soft limit is the value currently in effect on the system. An ordinary user can lower the hard limit but cannot raise it, and the soft limit cannot be set higher than the hard limit. Only the root user can raise the hard limit.
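
To inspect the two limits separately (a minimal sketch):

# ulimit -Sn
65535
# ulimit -Hn
65535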


Five, the following is a simple nginx configuration file:

user www www;
worker_processes 8;
worker_cpu_affinity 00000001 00000010 00000100 00001000 00010000 00100000 01000000 10000000;
error_log /www/log/nginx_error.log crit;
pid /usr/local/nginx/nginx.pid;
worker_rlimit_nofile 204800;
events
{
use epoll;
worker_connections 204800;
}
http
{
include mime.types;
default_type application/octet-stream;
charset utf-8;
server_names_hash_bucket_size 128;
client_header_buffer_size 2k;
large_client_header_buffers 4 4k;
client_max_body_size 8m;
sendfile on;
tcp_nopush on;
keepalive_timeout 60;
fastcgi_cache_path /usr/local/nginx/fastcgi_cache levels=1:2 keys_zone=TEST:10m inactive=5m;
fastcgi_connect_timeout 300;
fastcgi_send_timeout 300;
fastcgi_read_timeout 300;
fastcgi_buffer_size 4k;
fastcgi_buffers 8 4k;
fastcgi_busy_buffers_size 8k;
fastcgi_temp_file_write_size 8k;
fastcgi_cache TEST;
fastcgi_cache_valid 200 302 1h;
fastcgi_cache_valid 301 1d;
fastcgi_cache_valid any 1m;
fastcgi_cache_min_uses 1;
fastcgi_cache_use_stale error timeout invalid_header http_500;
open_file_cache max=204800 inactive=20s;
open_file_cache_min_uses 1;
open_file_cache_valid 30s;
tcp_nodelay on;
gzip on;
gzip_min_length 1k;
gzip_buffers 4 16k;
gzip_http_version 1.0;
gzip_comp_level 2;
gzip_types text/plain application/x-javascript text/css application/xml;
gzip_vary on;
server
{
listen 8080;
server_name backup.aiju.com;
index index.php index.htm;
root /www/html/;
location /status
{
stub_status on;
}
location ~ .*\.(php|php5)?$
{
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
include fcgi.conf;
}
location ~ .*\.(gif|jpg|jpeg|png|bmp|swf|js|css)$
{
expires 30d;
}
log_format access '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" $http_x_forwarded_for';
access_log /www/log/access.log access;
}
}

Six, several directives related to FastCGI:

fastcgi_cache_path /usr/local/nginx/fastcgi_cache levels=1:2 keys_zone=TEST:10m inactive=5m;

This directive specifies a path for the FastCGI cache, the directory structure levels, the keyword zone name and storage size, and the inactive removal time.

fastcgi_connect_timeout 300;

Specifies the timeout for connecting to the backend FastCGI server.

fastcgi_send_timeout 300;

The timeout for sending a request to FastCGI, i.e. the timeout for transmitting the request after the handshake with the FastCGI server has completed.

fastcgi_read_timeout 300;

The timeout for receiving the FastCGI response, i.e. the timeout for reading the response after the handshake has completed.

fastcgi_buffer_size 4k;

Specifies the buffer size used to read the first part of the FastCGI response. Normally the first part of the response does not exceed 1k, but because the page size is 4k, this is set to 4k.

fastcgi_buffers 8 4k;

Specifies how many buffers, and of what size, are used locally to buffer the FastCGI response.

fastcgi_busy_buffers_size 8k;

It is not clear to me what this directive does; the only thing I know is that its default value is twice fastcgi_buffers.

fastcgi_temp_file_write_size 8k;

The size of the data blocks used when writing to fastcgi_temp_path; the default value is twice fastcgi_buffers.

fastcgi_cache TEST;

Turns on the FastCGI cache and gives it a name. In my experience enabling the cache is very useful: it reduces CPU load and prevents 502 errors.

fastcgi_cache_valid 200 302 1h;
fastcgi_cache_valid 301 1d;
fastcgi_cache_valid any 1m;

Specifies the cache time for the given response codes: in the example above, 200 and 302 responses are cached for 1 hour, 301 responses for 1 day, and everything else for 1 minute.

fastcgi_cache_min_uses 1;

The minimum number of times a response must be requested within the inactive time of the fastcgi_cache_path directive to stay cached; as in the example above, if a file is not requested even once within 5 minutes, it is removed.

fastcgi_cache_use_stale error timeout invalid_header http_500;

I do not know exactly what this parameter does; my guess is that it tells nginx in which situations a stale cached response may still be served. The above are the nginx FastCGI-related parameters. In addition, FastCGI itself has configuration that needs optimizing: if you use PHP-FPM to manage FastCGI, you can modify the following values in its configuration file:

<value name="max_children">60</value>

The number of requests processed concurrently, i.e. PHP-FPM will spawn at most 60 child processes to handle concurrent connections.

<value name="rlimit_files">102400</value>

The maximum number of open file descriptors per process.

<value name="max_requests">204800</value>

The maximum number of requests each process serves before being respawned.
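
The XML format above belongs to older PHP-FPM releases; since PHP 5.3.3 the same settings live in an INI-style pool file, roughly as follows (a hedged sketch using the documented directive names):

pm.max_children = 60
rlimit_files = 102400
pm.max_requests = 204800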


This article is from the "Dream to Reality" blog; please keep this source reference: http://lookingdream.blog.51cto.com/5177800/1836128
