Pressure Testing and PHP-FPM Optimization

Source: Internet
Author: User
Tags: ack, benchmark, epoll, fpm, php, source code, sendfile, vps, nginx, server

Webbench can simulate up to 30,000 concurrent connections to test a website's load capacity. Personally I find it better than ab, the benchmarking tool that ships with Apache, and it is particularly easy to install and use.

1. Applicable system: Linux

2. Compile and install:
wget http://blog.s135.com/soft/linux/webbench/webbench-1.5.tar.gz
tar zxvf webbench-1.5.tar.gz
cd webbench-1.5
make && make install


3. Usage:
webbench -c 10000 -t 60 http://127.0.0.1/test.jpg

Parameter description: -c is the number of concurrent connections, -t is the run time in seconds.

4. Example of test results (quoted for everyone to see): roughly 1.5 million requests per minute, and this was still only a load test against the local machine.



Server configuration:
CPU: 24 cores
Memory: 32 GB
Disk: RAID5 array of 146 GB 15,000 RPM SAS disks
The Nginx and PHP-FPM optimizations are posted further below.

Webbench - Simple Web Benchmark 1.5
Copyright (c) Radim Kolar 1997-2004, GPL Open Source software.

Benchmarking: GET
10000 clients, running 60 sec.

Speed=1377219 pages/min, -30844808 bytes/sec.
Requests: 1376697 susceed, 522 failed.

# webbench -c 10000 -t 60
Webbench - Simple Web Benchmark 1.5
Copyright (c) Radim Kolar 1997-2004, GPL Open Source software.

Benchmarking: GET
10000 clients, running 60 sec.

Speed=1483864 pages/min, -8322775 bytes/sec.
Requests: 1483397 susceed, 467 failed.
lqp518 posted on 2014-5-25 18:42:
Optimizing the Nginx directives (configuration file)
worker_processes 8;
Number of Nginx worker processes. It is recommended to set this according to the number of CPUs, usually equal to it or a multiple of it.
worker_cpu_affinity 00000001 00000010 00000100 00001000 00010000 00100000 01000000 10000000;
Binds each worker process to a CPU. The example above binds 8 processes to 8 CPUs; you can also set more than one bit per mask, binding a single process to several CPUs.
worker_rlimit_nofile 102400;
This directive sets the maximum number of file descriptors an Nginx worker process may open. In theory it should be the system open-file limit (ulimit -n) divided by the number of Nginx processes, but Nginx does not distribute requests that evenly, so it is best to keep it equal to the value of ulimit -n.
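As a quick check, the current per-process limit can be read with ulimit before setting worker_rlimit_nofile; a minimal sketch (run as root, and the 102400 value simply mirrors the directive above):
# Show the current open-file limit for this shell
ulimit -n
# Raise it for the current shell to match worker_rlimit_nofile above
ulimit -n 102400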
use epoll;
Use the epoll I/O event model; on Linux this goes without saying.
worker_connections 102400;
Maximum number of connections allowed per worker process. In theory the maximum number of connections for the whole Nginx server is worker_processes * worker_connections (with the settings above, 8 * 102400 = 819200).
keepalive_timeout 60;
Keep-alive timeout, in seconds.
client_header_buffer_size 4k;
Buffer size for the client request header. This can be set according to your system's page size; a request header is usually under 1k, but since the system page size is generally larger than 1k, set it to the page size. The page size can be obtained with the command getconf PAGESIZE.
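For reference, this is how the page size can be checked; 4096 bytes (4k) is the typical result on x86 Linux, which is why 4k is used above:
# Print the system memory page size in bytes
getconf PAGESIZE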
open_file_cache max=102400 inactive=20s;
Enables the cache of open files, which is off by default. max specifies the number of cache entries (recommended to match the number of open files), and inactive specifies how long a file may go unrequested before its cache entry is removed.
open_file_cache_valid 30s;
How often to re-validate the cached open-file information.
open_file_cache_min_uses 1;
The minimum number of times a file must be used within the inactive period of the open_file_cache directive for its descriptor to stay open in the cache. With the setting above, if a file is not used even once within the inactive time, it is removed.
Kernel parameter optimization
net.ipv4.tcp_max_tw_buckets = 6000
Maximum number of TIME_WAIT sockets; the default is 180000.
net.ipv4.ip_local_port_range = 1024 65000
The range of local ports the system is allowed to use.
net.ipv4.tcp_tw_recycle = 1
Enables fast recycling of TIME_WAIT sockets.
net.ipv4.tcp_tw_reuse = 1
Enables reuse, allowing TIME_WAIT sockets to be reused for new TCP connections.
net.ipv4.tcp_syncookies = 1
Enables SYN cookies, so that connections can still be handled when the SYN wait queue overflows.
net.core.somaxconn = 262144
The backlog passed to listen() by a web application is capped by the kernel parameter net.core.somaxconn, which defaults to 128, while Nginx's NGX_LISTEN_BACKLOG defaults to 511, so this value needs to be raised.
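A rough way to tell whether the listen queue is actually overflowing is to look at the kernel's TCP statistics; a minimal sketch (the exact counter wording varies between kernel versions):
# Look for counters such as "times the listen queue of a socket overflowed"
netstat -s | grep -i listen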
net.core.netdev_max_backlog = 262144
Maximum number of packets allowed to queue on a network interface when it receives packets faster than the kernel can process them.
net.ipv4.tcp_max_orphans = 262144
Maximum number of TCP sockets in the system that are not attached to any user file handle. If this number is exceeded, orphaned connections are reset immediately and a warning is printed. This limit exists only to guard against simple DoS attacks; do not rely on it too much or lower it artificially. If anything, increase it (when memory allows).
net.ipv4.tcp_max_syn_backlog = 262144
Maximum number of remembered connection requests that have not yet received an acknowledgment from the client. The default is 1024 for systems with 128 MB of memory and 128 for small-memory systems.
net.ipv4.tcp_timestamps = 0
Timestamps guard against sequence-number wraparound; a 1 Gbps link will certainly encounter previously used sequence numbers, and timestamps let the kernel accept such "abnormal" packets. Here they are turned off.
net.ipv4.tcp_synack_retries = 1
To open a connection from the remote end, the kernel sends a SYN together with an ACK for the earlier SYN, i.e. the second step of the three-way handshake. This setting determines how many SYN+ACK packets the kernel sends before giving up on the connection.
net.ipv4.tcp_syn_retries = 1
Number of SYN packets sent before the kernel gives up on establishing the connection.
net.ipv4.tcp_fin_timeout = 1
If the socket was closed by the local side, this parameter determines how long it stays in the FIN-WAIT-2 state. The peer may misbehave and never close its side, or may even crash unexpectedly. The default is 60 seconds; the usual value on 2.2 kernels was 180 seconds. You can lower it, but remember that even a lightly loaded web server then risks accumulating large numbers of dead sockets and running out of memory. FIN-WAIT-2 is less dangerous than FIN-WAIT-1 because it consumes at most about 1.5 KB of memory, but such sockets can live longer.
net.ipv4.tcp_keepalive_time = 30
How often TCP sends keepalive messages when keepalive is enabled. The default is 2 hours.
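Before and after applying these settings it helps to count sockets by state, for example to watch the number of TIME_WAIT and FIN_WAIT2 entries; a minimal sketch (ss -s gives a similar summary on newer systems):
# Count TCP connections grouped by state
netstat -ant | awk '/^tcp/ {count[$NF]++} END {for (s in count) print s, count[s]}'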
A complete kernel-optimized configuration
net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
net.ipv4.tcp_max_tw_buckets = 6000
net.ipv4.tcp_sack = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.tcp_wmem = 4096 16384 4194304
net.core.wmem_default = 8388608
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.netdev_max_backlog = 262144
net.core.somaxconn = 262144
net.ipv4.tcp_max_orphans = 3276800
net.ipv4.tcp_max_syn_backlog = 262144
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_synack_retries = 1
net.ipv4.tcp_syn_retries = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_mem = 94500000 915000000 927000000
net.ipv4.tcp_fin_timeout = 1
net.ipv4.tcp_keepalive_time = 30
net.ipv4.ip_local_port_range = 1024 65000
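These settings are normally placed in /etc/sysctl.conf and applied with sysctl; a minimal sketch:
# Reload kernel parameters from /etc/sysctl.conf
sysctl -p
# Verify a single value
sysctl net.core.somaxconn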
A simple optimized Nginx configuration file
user www www;
worker_processes 8;
worker_cpu_affinity 00000001 00000010 00000100 00001000 00010000 00100000 01000000 10000000;
error_log /www/log/nginx_error.log crit;
pid /usr/local/nginx/nginx.pid;
worker_rlimit_nofile 204800;

events
{
use epoll;
worker_connections 204800;
}

http
{
include mime.types;
default_type application/octet-stream;

charset utf-8;

server_names_hash_bucket_size 128;
client_header_buffer_size 2k;
large_client_header_buffers 4 4k;
client_max_body_size 8m;

sendfile on;
tcp_nopush on;

keepalive_timeout 60;

fastcgi_cache_path /usr/local/nginx/fastcgi_cache levels=1:2
keys_zone=TEST:10m
inactive=5m;
fastcgi_connect_timeout 300;
fastcgi_send_timeout 300;
fastcgi_read_timeout 300;
fastcgi_buffer_size 16k;
fastcgi_buffers 16 16k;
fastcgi_busy_buffers_size 16k;
fastcgi_temp_file_write_size 16k;
fastcgi_cache TEST;
fastcgi_cache_valid 200 302 1h;
fastcgi_cache_valid 301 1d;
fastcgi_cache_valid any 1m;
fastcgi_cache_min_uses 1;
fastcgi_cache_use_stale error timeout invalid_header http_500;

open_file_cache max=204800 inactive=20s;
open_file_cache_min_uses 1;
open_file_cache_valid 30s;

tcp_nodelay on;

gzip on;
gzip_min_length 1k;
gzip_buffers 4 16k;
gzip_http_version 1.0;
gzip_comp_level 2;
gzip_types text/plain application/x-javascript text/css application/xml;
gzip_vary on;


server
{
listen 8080;
server_name ad.test.com;
index index.php index.htm;
root /www/html/;

location /status
{
stub_status on;
}

location ~ .*\.(php|php5)?$
{
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
include fcgi.conf;
}

location ~ .*\.(gif|jpg|jpeg|png|bmp|swf|js|css)$
{
expires 30d;
}

log_format access '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" $http_x_forwarded_for';
access_log /www/log/access.log access;
}
}
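After editing the configuration it is worth validating the syntax and reloading; a minimal sketch, assuming Nginx is installed under /usr/local/nginx as the pid path above suggests:
# Test the configuration, then reload workers without dropping connections
/usr/local/nginx/sbin/nginx -t
/usr/local/nginx/sbin/nginx -s reload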
A few notes on the FastCGI directives.
fastcgi_cache_path /usr/local/nginx/fastcgi_cache levels=1:2 keys_zone=TEST:10m inactive=5m;
This directive sets the path for the FastCGI cache, the directory hierarchy levels, the name and size of the shared-memory key zone, and the inactive time after which unused entries are deleted.
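Depending on how Nginx was installed, you may need to create this cache directory yourself and make it writable by the worker user (www in the configuration above); a minimal sketch:
# Create the FastCGI cache directory and hand it to the Nginx worker user
mkdir -p /usr/local/nginx/fastcgi_cache
chown -R www:www /usr/local/nginx/fastcgi_cache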
fastcgi_connect_timeout 300;
Timeout for connecting to the backend FastCGI server.
fastcgi_send_timeout 300;
Timeout for sending a request to FastCGI, i.e. the time allowed for transmitting the request to FastCGI after the handshake has completed.
fastcgi_read_timeout 300;
Timeout for receiving the FastCGI response, i.e. the time allowed for receiving the response from FastCGI after the handshake has completed.
fastcgi_buffer_size 16k;
Specifies the size of the buffer used to read the first part of the FastCGI response. It can be set to the buffer size given by the fastcgi_buffers directive; the setting above means one 16k buffer is used to read the first part of the response, which is the response header. In practice the response header is very small (under 1k), but if you specify a buffer size in fastcgi_buffers, Nginx will also allocate a buffer of the size given by fastcgi_buffers here.
fastcgi_buffers 16 16k;
Specifies how many buffers of what size are used locally to buffer the FastCGI response. With the setting above, if a PHP script produces a page of 256k, sixteen 16k buffers are allocated to hold it; if the page is larger than 256k, the part beyond 256k is written to the path given by fastcgi_temp_path, which is of course unwise for a loaded server, since handling data in memory is faster than on disk. In general this value should be set around the median page size produced by the PHP scripts on your site: if most pages are about 256k, you could set it to 16 16k, 4 64k or 64 4k, but the latter two are clearly not good choices. If a generated page is only 32k, with 4 64k one 64k buffer is allocated, and with 64 4k eight 4k buffers are allocated, whereas with 16 16k two 16k buffers are used to cache the page, which seems more reasonable.
fastcgi_busy_buffers_size 32k;
I do not know what this directive is for; I only know that the default value is twice fastcgi_buffers.
fastcgi_temp_file_write_size 32k;
The block size used when writing to fastcgi_temp_path; the default value is twice fastcgi_buffers.
fastcgi_cache TEST;
Enables the FastCGI cache and gives it a name. Personally I find that enabling the cache is very useful for lowering CPU load and preventing 502 errors, but it can also cause many problems because it caches dynamic pages; whether to use it depends on your own needs.
fastcgi_cache_valid 200 302 1h;
fastcgi_cache_valid 301 1d;
fastcgi_cache_valid any 1m;
Specifies the cache time for the given response codes: in the example above, 200 and 302 responses are cached for one hour, 301 responses for one day, and everything else for one minute.
fastcgi_cache_min_uses 1;
The minimum number of times a response must be requested within the inactive time of the fastcgi_cache_path directive to stay cached. With the example above, if a file is not requested at least once within 5 minutes, it is removed.
fastcgi_cache_use_stale error timeout invalid_header http_500;
Tells Nginx in which cases it may serve a stale cached response, for example when the backend errors, times out, sends an invalid header, or returns a 500.

lqp518 posted on 2014-5-25 18:43:
PHP-FPM is a FastCGI process manager for PHP, and it is only for PHP.
PHP-FPM was originally a patch against the PHP source code, designed to integrate FastCGI process management into PHP itself. It had to be patched into your PHP source tree, and could be used after compiling and installing PHP.
Now the PHP 5.3.2 source tree can be downloaded with the PHP-FPM branch already integrated, and it was said that the next version would be merged into the PHP mainline. Compared with spawn-fcgi, PHP-FPM is better at controlling CPU and memory; spawn-fcgi crashes easily and has to be monitored with crontab, while PHP-FPM has no such annoyance.
PHP 5.3.3 integrates php-fpm, so it is no longer a third-party package. PHP-FPM provides a better way to manage PHP processes: it controls memory and processes effectively and can reload the PHP configuration smoothly. It has more advantages than spawn-fcgi, which is why it was officially adopted by PHP. PHP-FPM is enabled by passing the --enable-fpm parameter to ./configure.
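A minimal build sketch, assuming a PHP 5.3.3+ source tree and an install prefix of /usr/local/php to match the php-fpm path used below; all other configure options are omitted:
# Build PHP with the built-in FPM SAPI enabled
./configure --prefix=/usr/local/php --enable-fpm
make && make install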
Using PHP-FPM to control PHP's FastCGI (php-cgi) processes:
/usr/local/php/sbin/php-fpm {start|stop|quit|restart|reload|logrotate}
--start: start PHP's FastCGI processes
--stop: forcibly terminate PHP's FastCGI processes
--quit: terminate PHP's FastCGI processes gracefully
--restart: restart PHP's FastCGI processes
--reload: reload PHP's php.ini
--logrotate: reopen the log files
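For example, after changing pool settings, the manager shown above can be asked to reload so it picks up the new configuration; a sketch, with the path depending on your install prefix:
# Gracefully reload PHP-FPM with the new configuration
/usr/local/php/sbin/php-fpm reload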

PHP-FPM has two modes of execution. Like Apache, its number of processes can be static or dynamic depending on the settings: one mode starts a fixed number of PHP-FPM processes that is never increased or decreased; the other starts a certain number of PHP-FPM processes, dynamically increases the count up to an upper limit when requests are heavy, and releases idle processes back down to a lower limit when the load is idle.

You can choose between these two execution modes according to the actual needs of the server.

The parameters involved are pm, pm.max_children, pm.start_servers, pm.min_spare_servers and pm.max_spare_servers.

pm specifies the mode and can take one of two values: static or dynamic. In older versions, dynamic was called apache-like; pay attention to the notes given in the configuration file.

The meanings of the other four parameters are:

pm.max_children: the number of PHP-FPM processes started in static mode.
pm.start_servers: the number of PHP-FPM processes started initially in dynamic mode.
pm.min_spare_servers: the minimum number of PHP-FPM processes in dynamic mode.
pm.max_spare_servers: the maximum number of PHP-FPM processes in dynamic mode.

If pm is set to static, only the pm.max_children parameter takes effect: the system starts the configured number of PHP-FPM processes.

If pm is set to dynamic, the pm.max_children parameter is ignored and the following three parameters take effect: the system starts pm.start_servers PHP-FPM processes when PHP-FPM starts, and then dynamically adjusts the number of processes between pm.min_spare_servers and pm.max_spare_servers according to the system's demand.

So, which execution mode is better for our server? In fact, just like Apache, a PHP program leaks some memory after each run. That is why a PHP-FPM process that consumes only about 3 MB of memory at startup grows to 20-30 MB after running for a while. The dynamic mode ends surplus processes and so reclaims and frees some memory, which is why it is recommended for servers or VPSes with little memory. The specific maximum number can be derived from memory / 20 MB; for example, on a 512 MB VPS, pm.max_spare_servers is recommended to be 20. As for pm.min_spare_servers, set it according to the server's load; a value between 5 and 10 is appropriate.

For servers with more memory, setting the mode to static is more efficient, because frequently starting and stopping PHP-FPM processes also introduces lag; when memory is large enough, static works better. The number can likewise be derived from memory / 30 MB; for example, a server with 2 GB of memory can be set to 50, one with 4 GB to 100, and so on.
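As a rough illustration of these rules of thumb, the per-process memory and the resulting process counts can be estimated on the box itself; a minimal sketch (the 20 MB and 30 MB divisors are only the estimates from the paragraphs above, and real usage should be measured):
# Average resident memory per php-fpm process, in MB
ps -ylC php-fpm --sort=rss | awk 'NR>1 {sum+=$8; n++} END {if (n) printf "%.1f MB average over %d processes\n", sum/n/1024, n}'
# Rough process-count estimates from total memory, per the 20 MB / 30 MB rules above
free -m | awk '/^Mem:/ {printf "dynamic: ~%d processes, static: ~%d processes\n", $2/20, $2/30}'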

The demo parameters are as follows:

pm = dynamic
pm.max_children = 20
pm.start_servers = 5
pm.min_spare_servers = 5
pm.max_spare_servers = 20

This allows for maximum memory savings and increased execution efficiency.

lqp518 posted on 2014-5-25 18:46:
My own machine's configuration file is as follows.
Server configuration:
CPU: 24 cores
Memory: 32 GB
Disk: RAID5 array of 146 GB 15,000 RPM SAS disks
Tip: if your hardware configuration is not this high, do not set the values this high!


# Nginx conf conf/nginx.conf
# Created by http://www.wdlinux.cn
# Last Updated 2010.06.01
user www www;
worker_processes 48;
error_log logs/error.log notice;
pid logs/nginx.pid;
worker_rlimit_nofile 409600;
events {
use epoll;
worker_connections 409600;
}

http {
include mime.types;
default_type application/octet-stream;

server_names_hash_bucket_size 128;
client_header_buffer_size 32k;
large_client_header_buffers 4 32k;
client_max_body_size 8m;
limit_conn_zone $binary_remote_addr zone=one:32k;
############ newly added ############################
fastcgi_cache_path /www/wdlinux/nginx/fastcgi_cache levels=1:2
keys_zone=TEST:10m
inactive=5m;
fastcgi_connect_timeout 300;
fastcgi_send_timeout 300;
fastcgi_read_timeout 300;
fastcgi_buffer_size 16k;
fastcgi_buffers 16 16k;
fastcgi_busy_buffers_size 16k;
fastcgi_temp_file_write_size 16k;
fastcgi_cache TEST;
fastcgi_cache_valid 200 302 1h;
fastcgi_cache_valid 301 1d;
fastcgi_cache_valid any 1m;
fastcgi_cache_min_uses 1;
fastcgi_cache_use_stale error timeout invalid_header http_500;

open_file_cache max=102400 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 1;
##############################################
sendfile on;
tcp_nopush on;

keepalive_timeout 60;
tcp_nodelay on;

gzip on;
gzip_min_length 1k;
gzip_buffers 4 16k;
gzip_http_version 1.0;
gzip_comp_level 2;
gzip_types text/plain application/x-javascript text/css application/xml;
gzip_vary on;

log_format wwwlogs '$remote_addr - $remote_user [$time_local] $request $status $body_bytes_sent $http_referer $http_user_agent $http_x_forwarded_for';
#include default.conf;
include vhost/*.conf;
}


PHP-FPM settings

pm.max_children = 512
pm.start_servers = 128
pm.min_spare_servers = 30
pm.max_spare_servers = 128
