Performance Parameter Optimization Principles for nginx + php-fpm

Source: Internet
Author: User
Tags: unix domain socket

1. Increasing worker_processes helps only up to a point; beyond a certain number the performance gain is no longer noticeable.

2. Using worker_cpu_affinity to spread worker_processes evenly across all CPUs performs better than letting each worker be scheduled across CPUs (php execution is not considered here). Test results show performance is optimal when worker_processes is twice the number of CPU cores.

3. A unix domain socket (shared memory) performs better than a TCP network port. Without tuning the backlog, request throughput rises sharply but the error rate exceeds 50%; after raising the backlog, performance improves by about 10%.

4. Adjust the backlog of nginx, php-fpm, and the kernel. If nginx logs an error such as

       connect() to unix:/tmp/php-fpm.socket failed (11: Resource temporarily unavailable) while connecting to upstream

   tune the following:

   nginx (server block of the configuration file):

       listen 80 default backlog=1024;

   php-fpm (listen section of the configuration file):

       listen.backlog = 2048

   kernel (/etc/sysctl.conf; these must not be lower than the values above):

       net.ipv4.tcp_max_syn_backlog = 4096
       net.core.netdev_max_backlog = 4096

5. Adding php-fpm master instances on a single server increases fpm processing capacity and reduces the chance of returning errors. To start multiple instances, use multiple configuration files:

       /usr/local/php/sbin/php-fpm -y /usr/local/php/etc/php-fpm.conf &
       /usr/local/php/sbin/php-fpm -y /usr/local/php/etc/php-fpm1.conf &

   nginx fastcgi configuration:

       upstream phpbackend {
       #    server 127.0.0.1:9000 weight=100 max_fails=10 fail_timeout=30;
       #    server 127.0.0.1:9001 weight=100 max_fails=10 fail_timeout=30;
       #    server 127.0.0.1:9002 weight=100 max_fails=10 fail_timeout=30;
       #    server 127.0.0.1:9003 weight=100 max_fails=10 fail_timeout=30;
           server unix:/var/www/php-fpm.sock weight=100 max_fails=10 fail_timeout=30;
           server unix:/var/www/php-fpm1.sock weight=100 max_fails=10 fail_timeout=30;
           server unix:/var/www/php-fpm2.sock weight=100 max_fails=10 fail_timeout=30;
           server unix:/var/www/php-fpm3.sock weight=100 max_fails=10 fail_timeout=30;
       #    server unix:/var/www/php-fpm4.sock weight=100 max_fails=10 fail_timeout=30;
       #    server unix:/var/www/php-fpm5.sock weight=100 max_fails=10 fail_timeout=30;
       #    server unix:/var/www/php-fpm6.sock weight=100 max_fails=10 fail_timeout=30;
       #    server unix:/var/www/php-fpm7.sock weight=100 max_fails=10 fail_timeout=30;
       }

       location ~ \.php* {
           fastcgi_pass phpbackend;
       #    fastcgi_pass unix:/var/www/php-fpm.sock;
           fastcgi_index index.php;
           ..........
       }

6. Test environment and results: 2G memory, 2G swap, 2-core Intel(R) Xeon(R) CPU E5405 @ 2.00GHz. Tests were run with ab from a remote machine; the test program is a php string handler.

   1) With four php-fpm instances, eight nginx worker_processes (four per CPU), an nginx backlog of 1024, a php-fpm backlog of 2048, a kernel backlog of 4096, and connections over a unix domain socket, performance and error rate reach an acceptable balance. With more than four fpm instances (other parameters unchanged), performance begins to decline and the error rate is not significantly reduced. The conclusion is that the number of fpm instances, the number of worker_processes, and the number of CPUs should keep a multiple relationship. The parameters that affect performance and errors are the number of php-fpm instances, the number of nginx worker_processes, fpm's max_requests, php-fpm's backlog, and the unix domain socket. Over 100,000 requests there were no errors at 500 concurrency; at 1000 concurrency the error rate was about 0.9%.

   500 concurrency:

       Time taken for tests:   25 seconds
       Complete requests:      100000
       Failed requests:        0
       Write errors:           0
       Requests per second:    4000 [#/sec] (mean)
       Time per request:       122.313 [ms] (mean)
       Time per request:       0.245 [ms] (mean, across all concurrent requests)
       Transfer rate:          800 [Kbytes/sec] received

   1000 concurrency:

       Time taken for tests:   25 seconds
       Complete requests:      100000
       Failed requests:        524
          (Connect: 0, Length: 524, Exceptions: 0)
       Write errors:           0
       Non-2xx responses:      524
       Requests per second:    3903.25 [#/sec] (mean)
       Time per request:       256.197 [ms] (mean)
       Time per request:       0.256 [ms] (mean, across all concurrent requests)
       Transfer rate:          772.37 [Kbytes/sec] received

   2) With all other parameters unchanged, switching from the unix domain socket to a TCP network port gives the following results.

   500 concurrency:

       Concurrency Level:      500
       Time taken for tests:   26.934431 seconds
       Complete requests:      100000
       Failed requests:        0
       Write errors:           0
       Requests per second:    3712.72 [#/sec] (mean)
       Time per request:       134.672 [ms] (mean)
       Time per request:       0.269 [ms] (mean, across all concurrent requests)
       Transfer rate:          732.37 [Kbytes/sec] received

   1000 concurrency:

       Concurrency Level:      1000
       Time taken for tests:   28.385349 seconds
       Complete requests:      100000
       Failed requests:        0
       Write errors:           0
       Requests per second:    3522.94 [#/sec] (mean)
       Time per request:       283.853 [ms] (mean)
       Time per request:       0.284 [ms] (mean, across all concurrent requests)
       Transfer rate:          694.94 [Kbytes/sec] received

   Compared with 1), this is a performance drop of roughly 10%.

7. Raising fpm's max_requests to 1000: at 1000 concurrency the number of errors returned falls below 200, and the transfer rate stays around 800 Kbytes/sec.
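Point 2 above (worker_processes twice the core count, bound to CPUs) can be sketched as an nginx configuration for the 2-core test machine. The exact bitmask layout is an assumption about how the eight workers were distributed, not something the article states:

```
# nginx.conf (main context) -- sketch for a 2-core machine
worker_processes     8;
# 2 cores -> 2-bit masks; alternate the 8 workers between CPU0 and CPU1
worker_cpu_affinity  01 10 01 10 01 10 01 10;
```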
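The multi-instance setup in point 5 requires each php-fpm master to have its own pid file, log, and listen socket. A minimal sketch of what php-fpm1.conf might change relative to php-fpm.conf follows; all paths and pool values here are assumptions for illustration, not taken from the article:

```
; php-fpm1.conf -- sketch of a second master instance (ini-style config)
[global]
pid       = /usr/local/php/var/run/php-fpm1.pid
error_log = /usr/local/php/var/log/php-fpm1.log

[www]
listen          = /var/www/php-fpm1.sock
listen.backlog  = 2048         ; per point 4
pm              = static
pm.max_children = 32           ; assumed pool size
pm.max_requests = 1000         ; per point 7
```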
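The "roughly 10%" gap claimed between runs 1) and 2) can be checked directly from the requests-per-second figures above:

```shell
# Percentage drop in requests/sec when moving from unix domain socket to TCP,
# using the ab numbers reported in section 6
awk 'BEGIN {
    printf "500 concurrency:  %.1f%%\n", (4000.00 - 3712.72) / 4000.00 * 100
    printf "1000 concurrency: %.1f%%\n", (3903.25 - 3522.94) / 3903.25 * 100
}'
# prints a 7.2% drop at 500 concurrency and a 9.7% drop at 1000 concurrency
```

So "about 10%" holds at 1000 concurrency and is somewhat optimistic at 500.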
