Nginx optimization for high concurrency
Original English version: Optimizing Nginx for High Traffic Loads
I have answered quite a few frequently asked questions about nginx in the past, and many of them are about how to optimize it. Many new nginx users migrate from Apache, where they were used to tweaking the configuration and performing magic tricks to keep the server running efficiently.
The bad news is that you cannot optimize nginx the way you optimized Apache: there is no magic configuration that will halve the load or double PHP speed. The good news is that nginx is already very well optimized. It is optimized the moment you decide to use it and install it with apt-get, yum, or make. (Note that the packages in those repositories are often outdated; the latest versions are usually listed on the Wiki installation page.)
That said, many of the parameters that affect nginx behavior have default values that are not entirely suitable for high concurrency. We also have to consider the platform nginx runs on and tune the operating system where it imposes limits of its own.
In general, we cannot optimize the load time of a single connection, but we can make sure nginx has an environment fit for handling high concurrency. By high concurrency I mean hundreds of request connections per second; most people do not need to worry about this, but if you are curious or want to be prepared, read on.
First, we need to realize that nginx can be used on almost every platform: MacOS, Linux, FreeBSD, Solaris, Windows, and even some more obscure systems. Most of them implement a high-performance event-based polling method, but unfortunately nginx only supports four of them. Of the four I prefer FreeBSD, but you will not see much of a performance difference, so choosing an operating system you are comfortable administering matters more than choosing the theoretically optimal one.
You may have guessed that Windows is not among the four. There is really no reason to run nginx on Windows: it has its own event polling mechanism, which the nginx author chose not to support, so nginx falls back to the default select(), which is not very efficient and costs a lot of performance.
The second limitation most people will run into is also operating-system related. Open a shell, su to the user nginx runs as, and run the command 'ulimit -a'. Those values apply to nginx while it runs, too. On many operating systems the "open files" value is rather limited; on mine it is 1024. If nginx hits that limit it will log an error (24: Too many open files) and return an error to the client. Naturally nginx can handle far more open files than that, and the operating system can too, so you can safely increase this value.
There are two ways to do it: you can raise the operating system's "open files" limit through ulimit, or you can declare the value you expect through the nginx directive worker_rlimit_nofile.
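A rough sketch of both routes; the 65535 figure is only an illustrative value, not a recommendation from the article:

# In the shell, for the user nginx runs as:
#   ulimit -n 65535
# Or in the main context of nginx.conf:
worker_rlimit_nofile 65535;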
Nginx restrictions
Besides the operating-system limits, let's dig into nginx itself and look at some directives and methods we can use to tune it.
Worker processes
worker_processes is the backbone of nginx. Once the master process has bound to the configured IP addresses and ports, it spawns worker processes as the user specified in the configuration, and those workers then handle all the work. Workers are not multi-threaded, so they do not scale beyond the number of CPU cores. That is the rationale for running more than one worker: as a rule of thumb, run one worker per CPU core. Going beyond 2-4 workers gains you nothing; nginx will hit other bottlenecks long before the CPU becomes a problem, and you will usually just see idle processes.
The exception is when you have a lot of blocking disk I/O; in that situation it can make sense to raise worker_processes. Test your configuration and check the wait time for static files: if it is high, increase worker_processes accordingly.
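For reference, a minimal sketch of the directive in nginx.conf; 4 simply stands in for the number of CPU cores on the machine (newer nginx versions also accept the value auto):

# main context of nginx.conf, one worker per CPU core
worker_processes 4;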
Worker connections
worker_connections is a slightly odd concept. I am not entirely sure of the intent behind this directive, but it effectively limits how many connections each worker can maintain at the same time. If I had to guess, it exists as a safeguard: when a keep-alive timeout is misconfigured, it keeps connections from piling up until the ports you are using run out.
The default value is 1024. If we assume a browser opens two pipelined connections to fetch a site's resources, each worker can serve at most 512 users at once. That sounds low, but consider that the default keepalive_timeout is 65 (65 is the value shipped in the default configuration file; if the directive is not set at all, the built-in default is 75, see the wiki entry for keepalive_timeout), which means we really only take on about 8 new users per second. Obviously that is still more than most people need,
especially considering that we usually run 2-4 workers; even so, keep-alive is well worth it for high-traffic websites.
We also have to account for reverse proxying, which opens an extra connection to the backend for each request. Since nginx does not keep persistent connections to the backend, this is not a big problem unless you have long-running backend processes.
All in all, the right worker_connections value should be fairly obvious: as your traffic grows, increase it accordingly. 2048 is enough for most people, but honestly, if you have that much traffic you probably already have a good idea of what this number needs to be.
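A minimal sketch; the directive lives in the events block, and 2048 is simply the value the article calls sufficient for most people:

events {
    worker_connections 2048;
}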
CPU affinity
Setting CPU affinity basically means telling each worker which CPU core to use, so that it uses only that core. I do not want to say much about this one, except that you must be very careful if you go down this road. Your operating system's CPU scheduler is almost certainly far better at load balancing than you are. If you really think you have a CPU load-balancing problem, optimize at the scheduler level, perhaps by trying an alternative scheduler. Do not touch this unless you know what you are doing.
Keep alive
Keep-alive is an HTTP feature that lets a client hold its connection to the server open and reuse it for a batch of requests, until the configured timeout is reached. It does not actually change the performance of our nginx server much, because nginx handles idle connections very well. The nginx author claims that 10,000 idle connections use only 2.5 MB of memory, and in my own experience that figure is reliable.
The reason I mention it in a performance article at all is simple: for end users, keep-alive has a huge impact on load time, which is one of the most important metrics and the reason we keep optimizing. If your website loads quickly, users are happy; Amazon and other large online retailers have done plenty of research showing that page load time correlates directly with completed orders.
Why keep-alive has such a big impact should be obvious: it avoids setting up a separate connection for every HTTP request, which is very inefficient. You probably do not need keepalive_timeout set to 65; 10-20 is a common choice, and as noted above, nginx will handle it just fine.
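A one-line sketch; 15 is simply a value inside the 10-20 range suggested above:

# http, server, or location context
keepalive_timeout 15;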
Tcp_nodelay and tcp_nopush
These two directives are probably among the hardest nginx options to understand, because they operate at a fairly low level of the networking stack. In short, they determine how the operating system handles its network buffers and when it flushes them to the end user (the client). My only advice is that if you do not already understand these concepts, do not touch them. They will not noticeably improve or change performance, so it is best to leave them at their defaults.
Hardware restrictions
Now that we have dealt with the limits imposed by nginx itself, let's figure out how to use the server as effectively as possible. To do that we need to look at the hardware layer, because that is where most server bottlenecks actually occur.
A server generally has three potential bottlenecks: CPU, memory, and I/O. nginx is very efficient with CPU, so I will say frankly that it will not be the bottleneck. It is equally efficient with memory, so that will not be the bottleneck either. That leaves I/O as the root cause of most server bottlenecks.
If you work with servers regularly you have probably experienced this already: hard drives are really, really slow. Reading from a hard drive is probably the most expensive operation a server performs, so the natural conclusion is that to avoid an I/O bottleneck we need to drastically reduce the amount of reading and writing nginx does to disk.
To do this we can change nginx's behavior to cut down on disk writes, and make sure its memory limits are generous enough that it is not forced onto the disk.
Access logs
By default nginx writes every request to a log file on disk. That is useful for statistics and security audits, but it comes at a certain I/O cost. If you do not actually use the access logs for anything, simply disable them and avoid the disk writes altogether. If you do need them, consider buffering the log writes in memory first: it is much faster than writing each entry straight to disk and significantly reduces I/O.
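Both options below use standard access_log parameters; the buffer and flush values are illustrative assumptions, not figures from the article:

# Option 1: drop access logging entirely
access_log off;
# Option 2: buffer log writes in memory and flush them periodically
access_log /var/log/nginx/access.log combined buffer=32k flush=1m;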
If you only want access logs for statistics, consider alternatives such as Google Analytics (though GA and access logs measure different things and are not straight substitutes), or log only the parts of the request you actually care about rather than everything.
Error logs
I struggled over whether to cover the error_log directive here at all. You probably do not want to turn error logging off, especially since the volume of error logs in real applications is usually small. There is one small thing worth knowing, though: the error_log directive takes a level parameter, and if you set it too low it will record 404 errors and even debug information. In practice, setting it to the warn level is more than enough and keeps I/O down.
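A minimal sketch using the warn level recommended above; the log path is just a common default and may differ on your system:

error_log /var/log/nginx/error.log warn;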
Open File Cache
Reading a file from the file system also involves opening and closing it, and since those are blocking operations they should not be ignored. This is why caching open file descriptors is a good idea, and it is exactly what the open_file_cache directive is for. The linked wiki page gives excellent instructions on using and configuring it, so I suggest you read it.
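A hedged sketch of what such a configuration can look like; the directives are standard, but the sizes and timings are illustrative assumptions rather than values from the article:

open_file_cache max=1000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;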
Buffers
Getting the nginx buffer sizes right is very important. If the buffers are too small, nginx has to spill responses from upstreams into temporary files on disk, causing extra read and write I/O; the more traffic you have, the worse this gets.
The client_body_buffer_size directive sets the buffer used for the client request body, i.e. the body of the incoming request. It handles POST data: form submissions, file uploads, and so on. If you need to process many large POST requests, make sure this buffer is big enough.
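A one-line sketch; 128k is an assumed example size to be matched to your typical POST payloads:

client_body_buffer_size 128k;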
The fastcgi_buffers and proxy_buffers directives handle responses from upstreams such as PHP or Apache. The idea is the same as above: if the buffers are too small, large responses are written to disk before being returned to the user. Note that there is a cap on how much nginx will buffer before it starts sending data to the client, and a cap on how much it will write to disk; the latter is set through fastcgi_max_temp_file_size and proxy_max_temp_file_size. You can also set proxy_buffering to off to disable buffering for proxied connections entirely, though that is usually not a good idea.
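A hedged sketch of the directives just mentioned; the sizes are illustrative assumptions and should be matched to the typical size of your upstream responses (1024m happens to be the nginx default for the temp-file limits):

fastcgi_buffers 8 16k;
fastcgi_buffer_size 16k;
fastcgi_max_temp_file_size 1024m;
proxy_buffers 8 16k;
proxy_max_temp_file_size 1024m;
# proxy_buffering off;  # disables buffering for proxied connections entirely; usually not a good idea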
Completely removing disk I/O
The best way to reduce disk I/O is, of course, not to use the disk at all. If your application moves only a modest amount of data, you may be able to fit all of it in memory and stop worrying about blocking disk I/O altogether. Your operating system already caches frequently accessed disk sectors by default, so the more memory you have, the less disk I/O you will do. That means you can often buy your way out of an I/O bottleneck with more RAM; how much you need depends on how much data you have.
Network I/O
For fun, let's suppose you have enough memory to cache all of your data. In theory your read speed is now 3-6 Gbps, but you do not have a network link that fast. Unfortunately there is only so much we can do about network I/O: the data has to go over the wire, so we are bound by it. The only really effective options are to send less data or to compress it.
Fortunately nginx ships with the gzip module, which compresses data before it is sent to the client and can shrink it considerably. Generally a gzip_comp_level of 4-5 is the sweet spot; raising it further makes little difference to the output size and just wastes CPU cycles.
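A short sketch; gzip_comp_level 4 follows the 4-5 suggestion above, while the minimum length and MIME type list are illustrative assumptions:

gzip on;
gzip_comp_level 4;
gzip_min_length 256;
gzip_types text/plain text/css application/javascript application/json;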
You can also shrink files with JavaScript and CSS minification tools, but that has little to do with nginx, so I will trust Google to tell you more.
Some basic knowledge about nginx load balancing:
Nginx's upstream module currently supports the following allocation methods:
1) Round Robin (default)
Each request is assigned to a different backend server in turn, in order of arrival; if a backend server goes down, it is removed automatically.
2) Weight
Specifies the polling weight: the weight is proportional to the share of requests a server receives, which is useful when backend server performance is uneven.
3) ip_hash
Each request is assigned according to a hash of the client IP address, so each visitor consistently hits the same backend server, which can solve the session problem.
4) Fair (third party)
Requests are assigned based on backend server response time: servers with shorter response times are preferred.
5) url_hash (third party)
Requests are assigned according to a hash of the requested URL, so each URL is directed to the same backend server (most useful when the backend is a cache).
Configuration:
In the http block, add:
# Define the IP addresses and states of the load-balanced devices
upstream myserver {
    server 127.0.0.1:9090 down;
    server 127.0.0.1:8080 weight=2;
    server 127.0.0.1:6060;
    server 127.0.0.1:7070 backup;
}
Then, where you want to use load balancing, add:
proxy_pass http://myserver;
The status flags available for each server in the upstream block:
down: the current server temporarily does not take part in the load balancing.
weight: defaults to 1; the larger the weight, the larger the share of the load.
max_fails: the number of allowed failed requests, 1 by default. When the maximum is exceeded, the error defined by the proxy_next_upstream module is returned.
fail_timeout: how long the server is paused after max_fails failures.
backup: requests are sent to this machine only when all the other non-backup machines are down or busy, so it carries the least load.
Nginx also supports multiple load-balancing groups: you can configure several upstream blocks to serve different sets of servers.
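A hedged sketch of what two independent groups might look like; the upstream names, addresses, and locations are all hypothetical:

upstream app_a {
    server 10.0.0.1:8080;
    server 10.0.0.2:8080;
}
upstream app_b {
    server 10.0.1.1:8080;
    server 10.0.1.2:8080;
}
server {
    listen 80;
    location /a/ { proxy_pass http://app_a; }
    location /b/ { proxy_pass http://app_b; }
}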
Configuring load balancing itself is simple; the most critical issue is how to share sessions among multiple servers.
There are several approaches, listed below (the following content comes from the Internet; I have not tried the fourth method in practice).
1) Use cookies instead of sessions
Turning sessions into cookies avoids some of the drawbacks of server-side sessions. A J2EE book I read earlier also pointed out that sessions should not be relied on in a clustered system, or the resulting problems become hard to solve. If the system is not complex, consider getting rid of sessions first; if that change is too disruptive, use one of the methods below.
2) Let the application server share session data itself
ASP.NET, for example, can store sessions in a database or in memcached, effectively building a session cluster within ASP.NET itself. This keeps sessions stable: even if a node fails, the session is not lost. It suits scenarios with strict session requirements but modest request volume; however, it is not very efficient and is unsuitable where high throughput is required.
The two methods above have nothing to do with nginx. The following are the nginx-side options:
3) ip_hash
nginx's ip_hash feature directs requests from the same IP address to the same backend, so a client at that IP and one backend can maintain a stable session. ip_hash is declared in the upstream configuration:
upstream backend {
    server 127.0.0.1:8080;
    server 127.0.0.1:9090;
    ip_hash;
}
ip_hash is easy to understand, but because the client IP is the only factor it can use to pick a backend, it has shortcomings and cannot be used in some situations:
1) When nginx is not the front-most server. ip_hash requires nginx to be the front-most server; otherwise nginx cannot obtain the real client IP, and hashing on the IP is meaningless. For example, if squid sits in front, nginx only ever sees the squid server's IP, and distributing traffic based on that address is obviously wrong.
2) When there is another load-balancing layer behind nginx. If the nginx backend is itself another load balancer that distributes requests by some other method, then requests from one client cannot be pinned to the same session application server. In that case the nginx backend can only point directly at the application servers, or you add another squid layer that then points at them. The best approach is to split traffic once with location: requests that need sessions are distributed via ip_hash, and the rest go through the other backends.
4) upstream_hash
To work around the problems with ip_hash, you can use the third-party module upstream_hash. It is generally used for url_hash, but nothing prevents it from being used for session sharing:
If the front end is squid, it adds the client IP to the X-Forwarded-For HTTP header, and with upstream_hash that header can be used as the hash factor to route the request to a specific backend:
See this document: http://www.sudone.com/nginx/nginx_url_hash.html
That document uses $request_uri as the factor; change it slightly:
hash $http_x_forwarded_for;
This uses the X-Forwarded-For header as the factor. Newer nginx versions can also read cookie values, so you could change it to:
hash $cookie_jsessionid;
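For context, a hedged sketch of how that hash directive sits inside an upstream block; it assumes the third-party upstream_hash module is compiled in (or an nginx version whose built-in hash directive accepts a variable), and the server addresses are placeholders:

upstream backend {
    server 127.0.0.1:8080;
    server 127.0.0.1:9090;
    hash $cookie_jsessionid;
}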
If PHP sessions are configured to work without cookies, you can have nginx generate a cookie itself using its userid module. For details, see the userid module's English documentation:
http://wiki.nginx.org/NginxHttpUserIdModule
Alternatively, use the upstream_jvm_route module written by Yao Weibin: http://code.google.com/p/nginx-upstream-jvm-route/