Linux Learning: nginx--common feature configuration fragments and optimizations-06

Source: Internet
Author: User
Tags: sendfile, temporary file storage, browser cache

1. Common fragments

1.1 Server configured to listen on IP and port

server { listen 127.0.0.1:9080; server_name 127.0.0.1; }

1.2 Server configured to listen on domain names and a port

server { listen 80; server_name www.sishuok.com sishuok.com *.sishuok.com; }

1.3 Pass the client's real IP to the backend server

location ~ \.(jsp|action|mvc)$ {
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_pass http://xiaoliu.com;
}

1.4 Configuration of backend server failover in load balancing

location ~ \.(jsp|action|mvc)$ {
    proxy_next_upstream http_502 http_504 timeout;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_pass http://server_pool;
}

1.5 Simple hotlink protection

location / {
    ...
    valid_referers blocked sishuok.com *.sishuok.com;
    if ($invalid_referer) {
        rewrite ^/ http://sishuok.com;
    }
}

1.6 Simple download speed limit

location / { limit_rate 256k; }

1.7 Using the proxy_cache configuration

http {
    # The paths specified by these two directives must be on the same partition
    proxy_temp_path /cachetemp/proxy_temp_path;
    # Cache named mycache: 100m of shared memory for keys, content unused for 1 day is removed, 1g of disk cache
    proxy_cache_path /cachetemp/proxy_cache_path levels=1:2 keys_zone=mycache:100m inactive=1d max_size=1g;

    server {
        location ~ .*\.(gif|jpg|html|js|css)$ {
            proxy_cache mycache;                    # use the cache named mycache
            # Set different cache times for different HTTP status codes
            proxy_cache_valid 200 304 24h;
            proxy_cache_valid 301 302 10m;
            proxy_cache_valid any 1m;
            # Set the cache key
            proxy_cache_key $host$uri$is_args$args;
        }
    }
}

2. Optimization

2.1 Configuration 1

A. Unless you have both the ability and the need to rewrite nginx itself, nginx optimization mainly comes down to:

Configuring and using nginx reasonably and efficiently.

B. The direction and goals of optimization are essentially:

Maximize single-machine processing efficiency

Minimize the load on a single machine

Reduce disk I/O as much as possible

Minimize network I/O

Minimize memory usage

Use the CPU as efficiently as possible

C. In a production environment, keep the compiled-in nginx modules to a minimum: enable only the ones you actually use. This has to be decided when compiling and installing nginx, as sketched below.
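
A minimal build sketch; the prefix and the exact module selection here are assumptions, keep only what your site needs:

./configure --prefix=/usr/local/nginx \
    --user=nginx --group=nginx \
    --with-http_ssl_module \
    --without-http_autoindex_module \
    --without-http_ssi_module \
    --without-mail_pop3_module \
    --without-mail_imap_module \
    --without-mail_smtp_module
make && make install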

D. Users and groups: in a production environment it is best to create a dedicated user and group for nginx and set their permissions separately, which is more secure. Example: user nginx nginx;

E. worker_processes: usually configured to the total number of CPU cores, or twice that, for better performance; this reduces the cost of switching between processes.

You can also use worker_cpu_affinity to bind workers to CPUs, letting each worker process own a single CPU for full concurrency and better performance; this is only valid on Linux.
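
A minimal sketch assuming a 4-core machine; adjust the worker count and the affinity masks to your CPU:

worker_processes 4;
# one mask per worker, one bit per core: worker 1 -> core 0, worker 2 -> core 1, ...
worker_cpu_affinity 0001 0010 0100 1000;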

F. Event model in the events block: on Linux the epoll model is recommended, on FreeBSD kqueue.
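
A minimal sketch for Linux:

events {
    use epoll;   # on FreeBSD: use kqueue;
}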

G. worker_rlimit_nofile: the maximum number of files an nginx process may open; configure it to be consistent with the Linux kernel's open-file limit.

The current limit can be viewed with ulimit -n. On a freshly installed system the default is 1024; on CentOS it can be raised by adding the following at the end of /etc/security/limits.conf:

* soft nofile 65535
* hard nofile 65535
* soft nproc 65535
* hard nproc 65535
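
In nginx.conf, the matching directive would then be:

worker_rlimit_nofile 65535;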

H. worker_connections: the maximum number of connections allowed per worker process; the default is 1024 and it can be set larger.

The total concurrency is the product of worker_processes and worker_connections. The worker_connections value is related to the amount of physical memory, because the maximum number of files the system can open grows with memory: a machine with 1GB of memory can open roughly 100,000 files. So worker_connections should be set appropriately based on the number of worker_processes and the maximum number of files the system can open.
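
A minimal sketch; the value is an assumption, sized comfortably under the 65535 open-file limit configured above:

events {
    worker_connections 10240;   # total concurrency = worker_processes * worker_connections
}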

I. keepalive_timeout: a value around 65 is fine, e.g. keepalive_timeout 65;

2.2 Configuration 2

A. client_header_buffer_size: the buffer for the client request header; set it to 4k. It should usually be an integer multiple of the system page size, which you can check with getconf PAGESIZE.
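
A minimal sketch; 4k matches the usual 4096-byte page size, check yours first:

# getconf PAGESIZE   -> typically 4096
client_header_buffer_size 4k;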

B. Set a cache for open files. The max value of open_file_cache is recommended to match the maximum number of files each process may open (65535 as configured above):

open_file_cache max=65535 inactive=60s;
open_file_cache_valid 90s;
open_file_cache_min_uses 2;
open_file_cache_errors on;

C. Enable gzip compression wherever possible; gzip_comp_level is usually set to 3-5, higher levels waste CPU.
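
A minimal sketch; the MIME type list is an assumption, extend it with the types your site actually serves:

gzip on;
gzip_comp_level 4;
gzip_types text/plain text/css application/javascript application/json;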

D. Error log optimization: set the level to crit at runtime, which reduces I/O.

E. Access log optimization: if you use other statistics software you can turn logging off to reduce disk writes, or write the log to a file in memory to improve I/O efficiency.
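
A minimal sketch covering both log items; the paths are assumptions:

error_log /var/log/nginx/error.log crit;
access_log off;
# or keep access logs but write them to a memory-backed (tmpfs) path, e.g.:
# access_log /dev/shm/nginx_access.log;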

F. The sendfile directive specifies whether nginx calls the sendfile() system call (zero-copy) to send out files; it should usually be set to on (sendfile on;). For download-style applications with heavy disk I/O it can be set to off.

G. Buffer size optimization: if the buffers are too small, nginx falls back to temporary files to store responses, which causes disk read/write I/O; the heavier the traffic, the more obvious the problem becomes.

client_body_buffer_size sets the buffer for the client request body and is used for POST data, file uploads, and so on; it needs to be large enough to hold the POST data being uploaded. In the same way, there are buffers for data coming back from the backend. See the sketch below.
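
A minimal sketch; every value here is an assumption to be sized against your actual POST bodies and backend responses:

client_body_buffer_size 128k;
proxy_buffer_size 16k;
proxy_buffers 4 64k;
proxy_busy_buffers_size 128k;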

H. worker_priority, the process priority: on Linux, higher-priority processes get more system resources. This directive sets the static (nice) priority of the worker processes, with a range of -20 to +19, where -20 is the highest priority. The value can therefore be set somewhat low (for example worker_priority -5;), but it is not recommended to go below the priority of kernel processes (typically -5).

I. Set reasonable browser cache times for static resources and make use of the browser cache as much as possible, as sketched below.
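
A minimal sketch; the 30-day expiry and the extension list are assumptions:

location ~ .*\.(gif|jpg|css|js)$ {
    expires 30d;
}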

J. The load-balancing lock accept_mutex: recommended to keep it enabled; it is on by default (accept_mutex on; in the events block).

K. If you use SSL and the server has an SSL hardware acceleration device, enable the hardware acceleration.
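
A minimal sketch using the ssl_engine directive; the engine name is a placeholder that depends on the OpenSSL engine your acceleration hardware provides:

ssl_engine <engine_name>;   # hypothetical name; list available engines with: openssl engine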


This article is from the "I Love Big Gold" blog; please be sure to keep this source: http://1754966750.blog.51cto.com/7455444/1913198
