Nginx Explained: Configuration, Deployment, and High-Concurrency Optimization

I. Common Nginx commands:

1. Start Nginx (/usr/local/nginx/sbin/nginx)

poechant@ubuntu: sudo ./sbin/nginx

2. Stop Nginx

poechant@ubuntu: sudo ./sbin/nginx -s stop
poechant@ubuntu: sudo ./sbin/nginx -s quit

-s sends a signal to Nginx: stop shuts it down immediately, quit shuts it down gracefully.

3. Reload the Nginx configuration

poechant@ubuntu: sudo ./sbin/nginx -s reload

The command above sends a signal to Nginx; alternatively, use:

poechant@ubuntu: service nginx reload
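
The same signals can also be sent directly with kill; a minimal sketch, assuming the default PID file location (check the pid directive in your nginx.conf, since the path varies between builds):

# Send signals to the nginx master process directly.
sudo kill -HUP  $(cat /usr/local/nginx/logs/nginx.pid)   # reload configuration (same as -s reload)
sudo kill -QUIT $(cat /usr/local/nginx/logs/nginx.pid)   # graceful shutdown (same as -s quit)
sudo kill -TERM $(cat /usr/local/nginx/logs/nginx.pid)   # fast shutdown (same as -s stop)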

4. Specify the configuration file

poechant@ubuntu: sudo ./sbin/nginx -c /usr/local/nginx/conf/nginx.conf

-c specifies which configuration file to use.
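
-c can also be combined with -t (described below) to validate an alternative configuration before starting with it; a hedged sketch, with the path only as an example:

# Test a specific configuration file, then start nginx with it only if the test passes.
sudo ./sbin/nginx -t -c /usr/local/nginx/conf/nginx.conf \
  && sudo ./sbin/nginx -c /usr/local/nginx/conf/nginx.conf
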
5. View Nginx version

There are two options for viewing Nginx version information. The first, -v, prints only the version:

poechant@ubuntu:/usr/local/nginx$ ./sbin/nginx -v
nginx: nginx version: nginx/1.0.0

The second, -V, also prints the build details:

poechant@ubuntu:/usr/local/nginx$ ./sbin/nginx -V
nginx: nginx version: nginx/1.0.0
nginx: built by gcc 4.3.3 (Ubuntu 4.3.3-5ubuntu4)
nginx: TLS SNI support enabled
nginx: configure arguments: --with-http_ssl_module --with-openssl=/home/luming/openssl-1.0.0d/

6. Check whether the configuration file is correct

poechant@ubuntu:/usr/local/nginx$ ./sbin/nginx -t
nginx: [alert] could not open error log file: open() "/usr/local/nginx/logs/error.log" failed (13: Permission denied)
nginx: the configuration file /usr/local/nginx/conf/nginx.conf syntax is ok
16:45:09 [emerg] 23898#0: open() "/usr/local/nginx/logs/nginx.pid" failed (13: Permission denied)
nginx: configuration file /usr/local/nginx/conf/nginx.conf test failed

If the message above appears, the current user does not have permission to access the error log file and the PID file; run the command again with sudo (super user do):

poechant@ubuntu:/usr/local/nginx$ sudo ./sbin/nginx -t
nginx: the configuration file /usr/local/nginx/conf/nginx.conf syntax is ok
nginx: configuration file /usr/local/nginx/conf/nginx.conf test is successful

If this output appears, the configuration file is correct; otherwise, the errors are reported.
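
In practice it is common to chain the test with a reload so that a broken configuration is never loaded; a minimal sketch:

# Only reload the running nginx if the configuration test passes.
sudo ./sbin/nginx -t && sudo ./sbin/nginx -s reload
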
7. Display help information

poechant@ubuntu:/usr/local/nginx$ ./sbin/nginx -h

Or:

poechant@ubuntu:/usr/local/nginx$ ./sbin/nginx -?

II. A simple nginx configuration file, annotated:

# Define the user and group that the worker processes run as
user www www;
# Set this to the total number of CPU cores
worker_processes 8;
# Global error log; levels: [debug | info | notice | warn | error | crit]
error_log /var/log/nginx/error.log info;
# PID file
pid /var/run/nginx.pid;
# Maximum number of file descriptors an nginx process may open. In theory this is the
# maximum number of open files (ulimit -n) divided by the number of processes, but
# nginx does not distribute requests evenly, so keep it consistent with ulimit -n.
worker_rlimit_nofile 65535;

# Working mode and maximum number of connections
events {
    # Event model: [kqueue | rtsig | epoll | /dev/poll | select | poll].
    # epoll is the high-performance network I/O model on Linux 2.6+; on FreeBSD use kqueue.
    use epoll;
    # Maximum connections per worker (total connections = worker_connections * worker_processes)
    worker_connections 65535;
}

# HTTP server settings
http {
    include mime.types;
    default_type application/octet-stream;  # default file type
    #charset utf-8;                          # default character set
    server_names_hash_bucket_size 128;      # hash table size for server names
    client_header_buffer_size 32k;          # buffer for client request headers
    large_client_header_buffers 4 64k;      # buffers for large request headers
    client_max_body_size 8m;                # limit on the size of uploaded files

    # Efficient file transfer: sendfile tells nginx whether to use the sendfile() call to
    # output files. Set it to on for typical applications; for download sites and other
    # disk-I/O-heavy workloads it can be set to off to balance disk and network I/O and
    # reduce system load. Note: if images do not display properly, change it to off.
    sendfile on;
    autoindex on;          # enable directory listings (useful for download servers; off by default)
    tcp_nopush on;         # helps prevent network congestion
    tcp_nodelay on;        # helps prevent network congestion
    keepalive_timeout 120; # keep-alive timeout, in seconds

    # FastCGI parameters; they improve site performance by reducing resource usage and
    # increasing access speed. The names can be read literally.
    fastcgi_connect_timeout 300;
    fastcgi_send_timeout 300;
    fastcgi_read_timeout 300;
    fastcgi_buffer_size 64k;
    fastcgi_buffers 4 64k;
    fastcgi_busy_buffers_size 128k;
    fastcgi_temp_file_write_size 128k;

    # gzip module settings
    gzip on;                # enable gzip-compressed output
    gzip_min_length 1k;     # minimum size of a response to compress
    gzip_buffers 4 16k;     # compression buffers
    gzip_http_version 1.0;  # protocol version (default 1.1; use 1.0 when squid 2.5 sits in front)
    gzip_comp_level 2;      # compression level
    # Types to compress. text/html is always included by default, so it does not need to be
    # listed again; listing it does no harm but produces a warning.
    gzip_types text/plain application/x-javascript text/css application/xml;
    gzip_vary on;
    #limit_zone crawler $binary_remote_addr 10m;  # used when limiting connections per IP

    # upstream for load balancing: weight is the weight, which can be set according to
    # machine configuration. A higher weight means a higher probability of being chosen.
    upstream blog.ha97.com {
        server 192.168.80.121:80 weight=3;
        server 192.168.80.122:80 weight=2;
        server 192.168.80.123:80 weight=3;
    }

    # Virtual host configuration
    server {
        listen 80;
        # Multiple domain names are separated by spaces
        server_name www.ha97.com ha97.com;
        index index.html index.htm index.php;
        root /data/www/ha97;

        location ~ .*\.(php|php5)?$ {
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            include fastcgi.conf;
        }

        # Image cache time
        location ~ .*\.(gif|jpg|jpeg|png|bmp|swf)$ {
            expires 10d;
        }

        # JS and CSS cache time
        location ~ .*\.(js|css)?$ {
            expires 1h;
        }

        # Log format
        log_format access '$remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" $http_x_forwarded_for';
        # Access log for this virtual host
        access_log /var/log/nginx/ha97access.log access;

        # Reverse proxy
        location / {
            proxy_pass http://127.0.0.1:88;
            proxy_redirect off;
            proxy_set_header X-Real-IP $remote_addr;
            # The backend web server can read the client's real IP from X-Forwarded-For
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            # The following reverse proxy settings are optional
            proxy_set_header Host $host;
            client_max_body_size 10m;       # maximum single-file size allowed in client requests
            client_body_buffer_size 128k;   # maximum bytes buffered for client request bodies
            proxy_connect_timeout 90;       # timeout for connecting to the backend (proxy connect timeout)
            proxy_send_timeout 90;          # time for the backend to return data (proxy send timeout)
            proxy_read_timeout 90;          # backend response time after the connection succeeds (proxy read timeout)
            proxy_buffer_size 4k;           # buffer for the first part of the backend response (headers)
            proxy_buffers 4 32k;            # buffers, sized for pages under 32k on average
            proxy_busy_buffers_size 64k;    # buffer size under high load (proxy_buffers * 2)
            proxy_temp_file_write_size 64k; # threshold above which responses are written to temporary files
        }

        # Address for viewing Nginx status
        location /NginxStatus {
            stub_status on;
            access_log on;
            auth_basic "NginxStatus";
            # The htpasswd file can be generated with the htpasswd tool shipped with Apache
            auth_basic_user_file conf/htpasswd;
        }

        # Reverse proxy configuration for local dynamic/static separation:
        # all JSP pages are handled by tomcat or resin
        location ~ .(jsp|jspx|do)?$ {
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://127.0.0.1:8080;
        }

        # All static files are served directly by nginx without going through tomcat or resin
        location ~ .*.(htm|html|gif|jpg|jpeg|png|bmp|swf|ioc|rar|zip|txt|flv|mid|doc|ppt|pdf|xls|mp3|wma)$ {
            expires 15d;
        }
        location ~ .*.(js|css)?$ {
            expires 1h;
        }
    }
}
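
The /NginxStatus location above requires a password file. A hedged sketch of creating it with the htpasswd tool from Apache's utilities (the package name and the "admin" username are only examples):

# On Debian/Ubuntu the tool ships in apache2-utils (httpd-tools on RHEL/CentOS).
sudo apt-get install apache2-utils
# -c creates the file; you will be prompted for the password interactively.
sudo htpasswd -c /usr/local/nginx/conf/htpasswd admin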

III. Optimization of the nginx configuration file, directive by directive:

worker_processes 8;

The number of nginx worker processes. It is recommended to set it to the number of CPU cores, or a multiple of it.

worker_cpu_affinity 00000001 00000010 00000100 00001000 00010000 00100000 01000000 10000000;

Bind each worker process to a CPU. The example above binds eight processes to eight CPUs. You can also bind several processes to one CPU, or one process to several CPUs.
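
A small sketch for picking these values: count the logical cores first. These are standard Linux commands, not part of nginx:

# Number of logical CPU cores, for worker_processes and the number of affinity masks
nproc
grep -c ^processor /proc/cpuinfo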

worker_rlimit_nofile 102400;

The maximum number of file descriptors an nginx process may open. In theory it should be the maximum number of open files (ulimit -n) divided by the number of nginx processes, but nginx does not distribute requests evenly, so it is best to keep it equal to the value of ulimit -n.
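
A hedged sketch for checking and raising the limit; the 102400 figure simply mirrors the directive above, and persistent limits are normally configured in /etc/security/limits.conf:

# Show the current per-process open-file limit
ulimit -n
# Raise it for the current shell session (raising the hard limit may require editing /etc/security/limits.conf first)
ulimit -n 102400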

use epoll;

Use the epoll I/O event model.

worker_connections 102400;

The maximum number of connections allowed for each worker process. In theory, the maximum number of connections for the whole nginx server is worker_processes * worker_connections.
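
For the values used here that gives, as a quick check:

# 8 worker processes * 102400 connections per worker
echo $((8 * 102400))    # 819200 theoretical maximum connections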

keepalive_timeout 60;

Keep-alive timeout, in seconds.

client_header_buffer_size 4k;

The buffer size for client request headers. Set it according to your system's page size: a request header is normally under 1k, but because a system page is usually larger than 1k, set this to the page size. The page size can be obtained with the command getconf PAGESIZE.
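
For example:

# Print the memory page size in bytes (typically 4096, i.e. 4k)
getconf PAGESIZE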

open_file_cache max=102400 inactive=20s;

Enables the cache of open file descriptors; it is off by default. max sets the maximum number of cached entries (it is recommended to match the number of open files), and inactive sets how long a file may go unrequested before its cache entry is removed.

open_file_cache_valid 30s;

How often to check the cached entries for validity.

open_file_cache_min_uses 1;

The minimum number of times a file must be used within the inactive period of the open_file_cache directive for its descriptor to stay open in the cache. As in the example above, if a file is not used even once within the inactive period, it is removed.

Kernel Parameter Optimization:

net.ipv4.tcp_max_tw_buckets = 6000

The maximum number of TIME_WAIT sockets kept by the system. The default is 180000.

net.ipv4.ip_local_port_range = 1024 65000

The range of local ports the system may use for connections.

net.ipv4.tcp_tw_recycle = 1

Enable fast recycling of TIME_WAIT sockets. (Note that this option is known to break clients behind NAT and has been removed from newer kernels.)

net.ipv4.tcp_tw_reuse = 1

Enable reuse: allow TIME-WAIT sockets to be reused for new TCP connections.

net.ipv4.tcp_syncookies = 1

Enable SYN cookies: when the SYN wait queue overflows, handle new connection attempts with cookies.

net.core.somaxconn = 262144

The backlog passed to the listen() call in a web application is capped by the kernel parameter net.core.somaxconn, which defaults to 128. nginx defines NGX_LISTEN_BACKLOG as 511 by default, so this value needs to be raised.
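
A hedged way to verify the effective listen backlog of a running server: for listening sockets, ss reports the configured backlog in the Send-Q column.

# Show listening TCP sockets; Send-Q is the backlog of each listener
ss -lnt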

net.core.netdev_max_backlog = 262144

The maximum number of packets allowed to queue when a network interface receives packets faster than the kernel can process them.

net.ipv4.tcp_max_orphans = 262144

The maximum number of TCP sockets in the system that are not attached to any user file handle. If this number is exceeded, orphaned connections are reset immediately and a warning is printed. This limit exists only to guard against simple DoS attacks; do not rely on it or lower it artificially. If anything, increase it (after adding memory).

net.ipv4.tcp_max_syn_backlog = 262144

The maximum number of remembered connection requests that have not yet received an acknowledgment from the client. The default is 1024 for systems with 128 MB of memory and 128 for systems with little memory.

net.ipv4.tcp_timestamps = 0

Timestamps help avoid sequence-number wraparound: on a 1 Gbit/s link, previously used sequence numbers reappear quickly, and timestamps let the kernel accept such "abnormal" packets. Here they are disabled.

net.ipv4.tcp_synack_retries = 1

To establish a connection with the remote end, the kernel sends a SYN carrying an ACK for the previous SYN; this is the second packet of the three-way handshake. This setting determines how many SYN+ACK packets the kernel sends before giving up on the connection.

net.ipv4.tcp_syn_retries = 1

The number of SYN packets the kernel sends before giving up on establishing the connection.

net.ipv4.tcp_fin_timeout = 1

If a socket is closed by the local end, this parameter determines how long it stays in FIN-WAIT-2. The peer may misbehave and never close its side, or may even crash unexpectedly. The default is 60 seconds; the 2.2 kernel typically used 180 seconds. You can keep that setting, but remember that even on a lightly loaded web server a large number of dead sockets carries a risk of memory exhaustion. FIN-WAIT-2 is less dangerous than FIN-WAIT-1 because it consumes at most 1.5 KB of memory, but it lives longer.

net.ipv4.tcp_keepalive_time = 30

How often TCP sends keepalive messages when keepalive is enabled. The default is 2 hours.

The following is a complete kernel optimization setting:

net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
net.ipv4.tcp_max_tw_buckets = 6000
net.ipv4.tcp_sack = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.tcp_wmem = 4096 16384 4194304
net.core.wmem_default = 8388608
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.netdev_max_backlog = 262144
net.core.somaxconn = 262144
net.ipv4.tcp_max_orphans = 3276800
net.ipv4.tcp_max_syn_backlog = 262144
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_synack_retries = 1
net.ipv4.tcp_syn_retries = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_mem = 94500000 915000000 927000000
net.ipv4.tcp_fin_timeout = 1
net.ipv4.tcp_keepalive_time = 30
net.ipv4.ip_local_port_range = 1024 65000
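
A minimal sketch of applying these settings, assuming they have been added to /etc/sysctl.conf (or a file under /etc/sysctl.d/):

# Reload kernel parameters from /etc/sysctl.conf
sudo sysctl -p
# Check an individual value afterwards, for example:
sysctl net.core.somaxconn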

The following is a simple nginx configuration file:

user www www;
worker_processes 8;
worker_cpu_affinity 00000001 00000010 00000100 00001000 00010000 00100000 01000000 10000000;
error_log /www/log/nginx_error.log crit;
pid /usr/local/nginx/nginx.pid;
worker_rlimit_nofile 204800;
events
{
use epoll;
worker_connections 204800;
}
http
{
include mime.types;
default_type application/octet-stream;
charset utf-8;
server_names_hash_bucket_size 128;
client_header_buffer_size 2k;
large_client_header_buffers 4 4k;
client_max_body_size 8m;
sendfile on;
tcp_nopush on;
keepalive_timeout 60;
fastcgi_cache_path /usr/local/nginx/fastcgi_cache levels=1:2
keys_zone=TEST:10m
inactive=5m;
fastcgi_connect_timeout 300;
fastcgi_send_timeout 300;
fastcgi_read_timeout 300;
fastcgi_buffer_size 4k;
fastcgi_buffers 8 4k;
fastcgi_busy_buffers_size 8k;
fastcgi_temp_file_write_size 8k;
fastcgi_cache TEST;
fastcgi_cache_valid 200 302 1h;
fastcgi_cache_valid 301 1d;
fastcgi_cache_valid any 1m;
fastcgi_cache_min_uses 1;
fastcgi_cache_use_stale error timeout invalid_header http_500;
open_file_cache max=204800 inactive=20s;
open_file_cache_min_uses 1;
open_file_cache_valid 30s;
tcp_nodelay on;
gzip on;
gzip_min_length 1k;
gzip_buffers 4 16k;
gzip_http_version 1.0;
gzip_comp_level 2;
gzip_types text/plain application/x-javascript text/css application/xml;
gzip_vary on;
server
{
listen 8080;
server_name backup.aiju.com;
index index.php index.htm;
root /www/html/;
location /status
{
stub_status on;
}
location ~ .*\.(php|php5)?$
{
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
include fcgi.conf;
}
location ~ .*\.(gif|jpg|jpeg|png|bmp|swf|js|css)$
{
expires 30d;
}
log_format access '$remote_addr -- $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" $http_x_forwarded_for';
access_log /www/log/access.log access;
}
}
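
Once nginx is running with this configuration, the stub_status page can be checked with curl; a hedged example assuming nginx is reachable locally on port 8080:

# Query the status page defined in the "location /status" block above
curl http://127.0.0.1:8080/status
# Typical output: active connections plus accepted/handled/request counters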
