I recently spent some time load testing NGINX and Tomcat 7 clusters. The following describes how to optimize server performance through answers to some common questions. To avoid leaking anything, no actual measurements are included; please forgive the omission.
Background: Tomcat 7 was configured with the APR or NIO connector, and JConsole was installed to monitor basic server information such as memory and threads.
Question 1: What is an appropriate maxThreads setting for a Tomcat server?
A good maxThreads value is one that makes rational use of the available resources.
Resource pool:
Before anything else, we should introduce the concept of a resource pool. In Tomcat 7, HTTP request processing also uses a pool: each request is handled by one thread from the thread pool, whose size is capped by maxThreads. See the Tomcat connector configuration documentation for details.
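For concreteness, here is a minimal sketch of what such a configuration might look like in server.xml; the port, protocol, and sizing values below are illustrative assumptions, not recommendations from this test:
<!-- server.xml: shared executor (thread pool) plus an NIO connector; values are examples -->
<Executor name="tomcatThreadPool" namePrefix="catalina-exec-"
          maxThreads="300" minSpareThreads="25"/>
<Connector port="8080" protocol="org.apache.coyote.http11.Http11NioProtocol"
           executor="tomcatThreadPool"
           maxConnections="10000" connectionTimeout="20000"/>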
Asynchronous I/O:
Currently, Tomcat supports performance optimization through asynchronous I/O, such as Java NIO or the Apache Portable Runtime (APR). With asynchronous I/O, when the application needs a time-consuming I/O operation, the request is handed to the kernel and the thread moves on to other requests without waiting for the I/O to complete; when the I/O actually finishes, a callback or notification mechanism completes the remaining work. With ordinary synchronous I/O, by contrast, the I/O request is sent to the operating system and the application blocks, performing subsequent operations only after the I/O returns. From this we can see that asynchronous I/O effectively processes requests in parallel with I/O, which naturally increases system throughput.
maxThreads size:
First: given the asynchronous I/O mechanism above, we may be able to handle a large number of connections with a small thread pool. If 100 requests are to be processed and 50 of them are waiting on I/O at any moment, then roughly 50 threads may be enough to serve the requests that are not in the I/O wait state. Note that in Tomcat, maxConnections configures the number of concurrent connections.
Second: blindly increasing the number of threads has side effects. Each Tomcat processing thread is backed by an actual operating-system thread, which means corresponding resource consumption (memory, sockets, and so on). Another effect is that handling more requests at once changes Java garbage collection behavior: different concurrency levels have different memory profiles. In practice, some 90% of the memory consists of temporary variables that can be reclaimed quickly, but high concurrency also allocates a large volume of temporaries, which can easily fill the young generation, push objects into the old generation, cause more stop-the-world pauses, and even trigger OOM, hurting JVM performance. Other impacts include higher CPU usage and more disk reads and writes; these depend on the hardware.
Third: with a sensibly sized resource pool, ample resources let individual requests complete quickly, achieving optimal overall efficiency. But this is not always what we want. For downloads, for example, the response time of a single request is bounded by the network: downloading a large package may take 20 minutes regardless. Rather than using a small resource pool to maximize per-request efficiency, we should configure a large pool so that more users can connect and download. Otherwise, most users would be rejected by timeouts: connections would fail fast and be refused outright.
Fourth: allocating a very large amount of memory to a single JVM lengthens Full GC (stop-the-world) pauses, which hurts responsiveness. During a pause of more than 10 seconds, everything is suspended.
Configuration optimization ideas:
The configuration should be based on the actual profile of your application, whether it is mostly bound by CPU, memory, or I/O, and should ultimately strike a balance. The ideas are as follows.
1. Ensure that the server resources are sufficient, such as I/O, CPU, and memory.
2. With ample hardware, try maxThreads values such as 300, 600, 1200, and 1800, and analyze Tomcat's connection time, request time, throughput, and other metrics. During the test, watch closely whether disk, bandwidth, CPU, or memory becomes a bottleneck.
3. In the end, everything is limited by hardware. Applications are CPU-, I/O-, or memory-intensive, and that characteristic becomes your final limiting factor. Applications are usually grouped into clusters by their own characteristics; for example, CPU-intensive applications go to a cluster with better CPUs, so resources are fully utilized. Let us take memory as the final limiting factor and assume the CPU is good enough and I/O is rarely used. With some stress-testing tools, we can easily find a performance inflection point somewhere between 300 and 8000 concurrent threads by comparing connection time, average response time per request, and overall throughput. Such an inflection point often means that memory reclamation has become abnormal and the JVM is spending more time in garbage collection. We can enable GC logging and analyze it, driving the load with tools such as JMeter. At that point, try optimizing the memory structure or adding memory; if that does not solve it, your earlier configuration may already be a good choice. Of course, these limits can convert into one another: after you add memory, the memory may be fine but the CPU hits 100% and performance drops, in which case the CPU becomes the final limiting factor.
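As a sketch of the GC-log step, these are the classic Java 7-era flags one might append in Tomcat's bin/setenv.sh; the heap sizes and log path are assumptions for illustration:
# bin/setenv.sh -- enable GC logging while running the stress test (example values)
CATALINA_OPTS="$CATALINA_OPTS -Xms2g -Xmx2g \
  -verbose:gc -Xloggc:$CATALINA_BASE/logs/gc.log \
  -XX:+PrintGCDetails -XX:+PrintGCDateStamps"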
Optimization test traps:
Here is an example with a download server. Suppose we test by downloading a 10 MB package. You will find that the throughput of the whole server is poor and response times are slow, but a careful observer will notice that connections to the server are established very quickly; that is, the server accepts your request promptly even though throughput is low and processing time is long. Why? Your bandwidth is already saturated: you will find that even 10 concurrent downloads can occupy all of it. So in this test, the more reasonable thing to compare becomes the connection time.
Of course, you can also shrink the package, say to 1 KB, so that bandwidth is no longer the bottleneck. This may measure your server's maximum concurrency, but that concurrency may not reflect the real download scenario, where bandwidth is easily saturated and the download server holds a large number of connections.
Question 2: What performance improvement can NGINX bring, and what are its benefits?
1. Testing showed that NGINX does not speed up responses. Why? Because NGINX proxies your requests to the backend: where a request previously needed only one connection to the server, it now needs a connection to NGINX and then another from NGINX to the backend. Introducing NGINX therefore adds time and doubles the socket connection overhead.
2. The benefits of introducing it are as follows.
1) Overall performance improves. Testing showed that the maximum response time can drop substantially, and responses become more stable.
2) Backend resource consumption drops. Previously, the backend stayed busy while returning data to slow clients, tying up resources. With NGINX proxying the backend, NGINX's buffering mechanism lets the backend return quickly and spend its resources on processing requests, so backend capacity is put to full use. NGINX itself excels at maintaining large numbers of connections with very little memory and CPU usage.
3) It enables very convenient scalability and high availability.
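For reference, a minimal reverse-proxy sketch of the setup described above; the upstream name, backend addresses, and ports are placeholders, not taken from the tested environment:
# nginx.conf: proxy requests to a pool of Tomcat backends (addresses are examples)
upstream tomcat_backend {
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
}
server {
    listen 80;
    location / {
        proxy_pass http://tomcat_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}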
Basic (optimized) configuration
The only file to modify is nginx.conf, which contains the settings for all the different Nginx modules. You should find nginx.conf in the /etc/nginx directory on the server. We will first cover some global settings, then go through the file module by module, discussing which settings give you good performance when serving a large number of clients and why they improve it. A complete configuration file appears at the end of this article.
The number of nginx worker processes is generally set equal to the total number of CPU cores; in practice, opening four or eight workers is more than enough under normal circumstances. Each nginx worker process consumes roughly 10 MB of memory.
worker_cpu_affinity
This option applies only to Linux. Use it to bind each worker process to a specific CPU.
For an 8-core CPU, the allocation would be:
worker_cpu_affinity 00000001 00000010 00000100 00001000 00010000 00100000 01000000 10000000;
Nginx can use multiple worker processes for the following reasons:
to use SMP;
to decrease latency when workers are blocked on disk I/O;
to limit the number of connections per process when select()/poll() is used.
The worker_processes and worker_connections settings from the events section allow you to calculate the max_clients value:
max_clients = worker_processes * worker_connections
worker_rlimit_nofile 102400;
The maximum number of file descriptors each nginx process may open should match the system's per-process open-file limit. For example, on a Linux 2.6 kernel that limit is 65535, so worker_rlimit_nofile would be set to 65535 accordingly. Nginx does not distribute requests across processes perfectly evenly, so if the limit is exceeded the server returns a 502 error. I set it somewhat higher here.
use epoll;
Nginx uses the newest epoll (Linux 2.6 kernel) and kqueue (FreeBSD) network I/O models, while Apache uses the traditional select model. The select model is very inefficient at handling large numbers of connections and reads/writes. In highly concurrent servers, polling I/O is the most time-consuming operation; Squid and Memcached, which both withstand highly concurrent access on Linux, use the epoll network I/O model.
worker_connections 65535;
The maximum number of concurrent connections allowed per worker process (max_clients = worker_processes * worker_connections).
keepalive_timeout 75;
Keepalive timeout. Note the following official statement: the parameters can differ between browsers. The "Keep-Alive: timeout=time" header line is understood by Mozilla and Konqueror; MSIE itself shuts down a keep-alive connection after approximately 60 seconds.
client_header_buffer_size 16k;
large_client_header_buffers 4 32k;
Client request header buffer sizes. By default, nginx reads header values into the client_header_buffer_size buffer; if a header is too large, it falls back to large_client_header_buffers. If an HTTP header or cookie is too large even for those buffers, nginx reports a 400 error (400 Bad Request); if the request line exceeds the buffer, an HTTP 414 error (URI Too Long) is reported. An HTTP header accepted by nginx must fit within one of its buffers; otherwise nginx returns a 400 HTTP error (Bad Request).
open_file_cache max=102400;
Scope: http, server, location. This directive specifies whether the cache is enabled. If enabled, it records: opened file descriptors with their sizes and modification times; directory existence information; and errors encountered during file lookup, such as a file that cannot be read correctly (see the open_file_cache_errors directive). Options:
max: specifies the maximum number of cache entries; if the cache overflows, the least recently used (LRU) entries are removed.
Example:
open_file_cache max=1000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;
open_file_cache_errors
Syntax: open_file_cache_errors on | off. Default: open_file_cache_errors off. Scope: http, server, location. This directive specifies whether errors encountered while searching for a file are cached.
open_file_cache_min_uses
Syntax: open_file_cache_min_uses number. Default: open_file_cache_min_uses 1. Scope: http, server, location. This directive specifies the minimum number of times a file must be used within the inactive period of the open_file_cache directive for its descriptor to stay cached; with a larger value, only descriptors for genuinely hot files remain open in the cache.
open_file_cache_valid
Syntax: open_file_cache_valid time. Default: open_file_cache_valid 60. Scope: http, server, location. This directive specifies when to re-check the validity of cached open_file_cache entries.
Enable gzip:
gzip on;
gzip_min_length 1k;
gzip_buffers 4 16k;
gzip_http_version 1.0;
gzip_comp_level 2;
gzip_types text/plain application/x-javascript text/css application/xml;
gzip_vary on;
Cache static files:
Location ~ * ^. + \. (Swf | gif | png | jpg | js | css) $ {
Root/usr/local/ku6/ktv/show.ku6.com /;
Expires 1 m;
}
Optimize Linux kernel parameters
vi /etc/sysctl.conf
# Add:
net.ipv4.tcp_max_syn_backlog = 65536
net.core.netdev_max_backlog = 32768
net.core.somaxconn = 32768
net.core.wmem_default = 8388608
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_synack_retries = 2
net.ipv4.tcp_syn_retries = 2
net.ipv4.tcp_tw_recycle = 1
# net.ipv4.tcp_tw_len = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_mem = 94500000 915000000 927000000
net.ipv4.tcp_max_orphans = 3276800
# net.ipv4.tcp_fin_timeout = 30
# net.ipv4.tcp_keepalive_time = 120
net.ipv4.ip_local_port_range = 1024 65535
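After saving the file, the standard way to apply these kernel settings without rebooting is:
sysctl -p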
Appendix: Troubleshooting
Error 502
Typical causes: not enough php-cgi processes, PHP execution taking too long (for example, slow MySQL queries), or a php-cgi process that has died. In general, Nginx 502 Bad Gateway is related to the php-fpm.conf settings, while Nginx 504 Gateway Time-out is related to the nginx.conf settings.
1. Check whether the current number of PHP FastCGI processes is sufficient:
netstat -anpo | grep "php-cgi" | wc -l
If the number of FastCGI processes actually in use is close to the preset number of FastCGI processes, the process count is insufficient and needs to be increased.
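To find the preset limit for comparison, one can inspect the FastCGI process-manager configuration; the file path and directive name below vary by PHP version and are assumptions for illustration:
# older php-fpm uses max_children; newer php-fpm pools use pm.max_children
grep -i "max_children" /etc/php-fpm.conf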
2. If the execution time of some PHP programs exceeds Nginx's waiting time, increase the FastCGI timeouts in the nginx.conf configuration file, for example:
http
{
......
fastcgi_connect_timeout 300;
fastcgi_send_timeout 300;
fastcgi_read_timeout 300;
......
}
413 Request Entity Too Large
Increase client_max_body_size.
client_max_body_size specifies the maximum request body size allowed on a client connection, as declared by the Content-Length field in the request header. If a request exceeds the specified value, the client receives a "Request Entity Too Large" (413) error. Remember that browsers do not know how to display this error.
Also increase post_max_size and upload_max_filesize in php.ini.
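A minimal sketch of both changes together; the 20 MB limits are example values, not recommendations:
# nginx.conf (valid in an http, server, or location block)
client_max_body_size 20m;
# php.ini
post_max_size = 20M
upload_max_filesize = 20M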
High-level configuration
In the nginx.conf file, a handful of Nginx's advanced settings sit above the module sections.
user www-data;
pid /var/run/nginx.pid;
worker_processes auto;
worker_rlimit_nofile 100000;
The user and pid settings should be left at their defaults; we will not change them, because changing them makes no difference.
worker_processes defines the number of worker processes nginx uses to serve the web externally. The optimal value depends on many factors, including (but not limited to) the number of CPU cores, the number of disks storing data, and the load pattern. If unsure, setting it to the number of available CPU cores is a good start ("auto" tries to detect this automatically).
worker_rlimit_nofile changes the maximum number of open files for worker processes. If unset, the operating system's limit applies. Once configured, your operating system and nginx can open more files than "ulimit -a" reports, so set this value high and nginx will not hit a "too many open files" problem.
Events Module
The events module contains the settings for all connection handling in nginx.
events {
worker_connections 2048;
multi_accept on;
use epoll;
}
worker_connections sets the maximum number of connections a worker process can have open at once. Since we set worker_rlimit_nofile above, we can safely set this value quite high.
Remember that the maximum number of clients is also limited by the number of available socket connections (~64K), so setting it impractically high gains nothing.
multi_accept tells nginx to accept as many connections as possible after it is notified of a new connection.
The use setting selects the polling method used to multiplex client connections. If you use Linux 2.6+, you should use epoll; if you use *BSD, you should use kqueue.
(Notably, if you do not specify one, Nginx will choose the polling method best suited to your operating system.)
HTTP Module
The HTTP module controls all the core features of nginx's HTTP processing. Since only a few settings matter here, we extract just a small part of the configuration. All of these settings should be placed in the http module; you might not even particularly notice this block.
http {
server_tokens off;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
...
}
server_tokens does not make nginx run faster, but it disables the nginx version number on error pages, which is good for security.
sendfile enables the sendfile() system call, which can copy data between a disk file and a TCP socket (or between any two file descriptors). Before sendfile, sending a file required allocating a data buffer in user space, then using read() to copy data from the file into the buffer and write() to write the buffer out to the network. sendfile() instead reads the data straight from disk into the OS cache; because the copy happens in the kernel, sendfile() is more efficient than the read()/write() combination with its buffer shuffling (see the sendfile documentation for more).
tcp_nopush tells nginx to send all header data in one packet rather than piece by piece.
tcp_nodelay tells nginx not to buffer data but to send it in small chunks right away. Set this for applications that need to send data promptly, so that a small piece of data goes out immediately instead of waiting.
access_log off;
error_log /var/log/nginx/error.log crit;
access_log sets whether nginx stores access logs. Turning it off makes disk read/write I/O faster (aka, YOLO).
error_log tells nginx to log only serious errors:
keepalive_timeout 10;
client_header_timeout 10;
client_body_timeout 10;
reset_timedout_connection on;
send_timeout 10;
keepalive_timeout assigns the keep-alive timeout granted to the client; the server closes the connection after this timeout. We set it low so that nginx can keep working longer.
client_header_timeout and client_body_timeout set the timeouts for the request header and request body, respectively. We can also set these low.
reset_timedout_connection tells nginx to close connections from unresponsive clients, which frees the memory they occupy.
send_timeout specifies the response timeout to the client. It applies not to the entire transfer but to the interval between two successive client read operations; if the client reads no data within this period, nginx shuts down the connection.
limit_conn_zone $binary_remote_addr zone=addr:5m;
limit_conn addr 100;
limit_conn_zone sets the shared-memory zone used to store the state for each key (such as the current connection count). Here 5m means 5 megabytes, which should be large enough to store (32K*5) 32-byte states or (16K*5) 64-byte states.
limit_conn sets the maximum number of connections for a given key. Here the key is addr and the value is 100, meaning each IP address may hold at most 100 simultaneous connections.
include /etc/nginx/mime.types;
default_type text/html;
charset UTF-8;
include is simply a directive that inlines the contents of another file into the current one. Here we use it to load a list of MIME types for later use.
default_type sets the default MIME type used for files.
charset sets the default character set in our response headers.
gzip on;
gzip_disable "msie6";
# gzip_static on;
gzip_proxied any;
gzip_min_length 1000;
gzip_comp_level 4;
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml+rss text/javascript;
gzip tells nginx to send data compressed with gzip, which reduces the amount of data sent.
gzip_disable disables gzip for the specified clients; here we disable it for IE6, which handles gzip poorly, keeping our setup broadly compatible.
gzip_static tells nginx to check for a pre-gzipped version of a resource before compressing it on the fly. This requires pre-compressing your files (it is commented out in this example), which lets you use the highest compression ratio while nginx no longer needs to compress those files itself (see the gzip_static documentation for details).
gzip_proxied allows or disallows compressing response streams based on the request and response. We set it to any, meaning all requests will be compressed.
gzip_min_length sets the minimum number of bytes for compression to kick in. Requests smaller than 1000 bytes are better left uncompressed, since compressing such small payloads slows down every process handling the request.
gzip_comp_level sets the compression level, any number from 1 to 9; 9 is the slowest but compresses the most. We set it to 4, a reasonable compromise.
gzip_types sets the data formats to compress. The examples above cover common cases; you can add more formats.
# Caching information about file descriptors for frequently accessed files
# can boost performance, but you need to test these values
open_file_cache max=100000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;
##
# Virtual Host Configs
# aka our settings for specific servers
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
open_file_cache specifies the maximum number of cache entries and how long to cache them when the cache is enabled. We can set the maximum fairly high and let entries be cleared once they have been inactive for more than 20 seconds.
open_file_cache_valid specifies the interval at which the information in open_file_cache is revalidated.
open_file_cache_min_uses defines the minimum number of times a file must be accessed during the inactive period for it to remain in open_file_cache.
open_file_cache_errors specifies whether error results of file lookups are cached, including for files newly added to the configuration. We also include the virtual-server configurations, which are defined in separate files; if yours are not in these locations, you must edit these lines to point at the correct ones.
A complete configuration
user www-data;
pid /var/run/nginx.pid;
worker_processes auto;
worker_rlimit_nofile 100000;
events {
worker_connections 2048;
multi_accept on;
use epoll;
}
http {
server_tokens off;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
access_log off;
error_log /var/log/nginx/error.log crit;
keepalive_timeout 10;
client_header_timeout 10;
client_body_timeout 10;
reset_timedout_connection on;
send_timeout 10;
limit_conn_zone $binary_remote_addr zone=addr:5m;
limit_conn addr 100;
include /etc/nginx/mime.types;
default_type text/html;
charset UTF-8;
gzip on;
gzip_disable "msie6";
gzip_proxied any;
gzip_min_length 1000;
gzip_comp_level 6;
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml+rss text/javascript;
open_file_cache max=100000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
After editing the configuration, restart nginx to make the configuration take effect.
sudo service nginx restart
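Before restarting, it is also prudent to validate the edited file; nginx's standard syntax check is:
sudo nginx -t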