Nginx and FPM process configuration and 502/504 errors


Generally speaking:

A 502 error occurs when there are not enough php-cgi processes, when PHP execution takes too long (for example, slow MySQL queries), or when a php-cgi process has died.

The Nginx 504 Gateway Time-out error is related to the settings in nginx.conf.

1. 502 and php-fpm.conf

1. Resource problems caused by request_terminate_timeout

If request_terminate_timeout is set to 0 or to too long a value, file_get_contents can cause resource problems.

If the remote resource requested by file_get_contents responds too slowly, file_get_contents will stay stuck there and never time out.

We know that max_execution_time in php.ini sets the maximum execution time for PHP scripts, but under php-cgi (PHP-FPM) this parameter has no effect. What actually controls the maximum execution time of PHP scripts is the request_terminate_timeout parameter in the php-fpm.conf configuration file.

The default value of request_terminate_timeout is 0 seconds, meaning a PHP script may run indefinitely. So when all the php-cgi processes are stuck in file_get_contents(), Nginx can no longer hand off new requests and will return "502 Bad Gateway" to the user.

Modifying this parameter to cap the maximum execution time of PHP scripts treats the symptom, not the cause. For example, set it to 30s: if file_get_contents() is fetching a slow page, 150 php-cgi processes can serve only 150 / 30 = 5 requests per second, and the web server can still hardly avoid "502 Bad Gateway".

The workaround is to set request_terminate_timeout to 10s or another reasonable value, or to add a timeout parameter to file_get_contents.
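In php-fpm.conf that looks like the following (a minimal sketch; the 10s value is the one suggested above):

; php-fpm.conf (sketch): cap script execution time
request_terminate_timeout = 10s

And on the PHP side, the timeout can be passed through a stream context: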

$ctx = stream_context_create(array(
    'http' => array(
        'timeout' => 10  // set a timeout, in seconds
    )
));
$html = file_get_contents($str, false, $ctx);
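If no context is passed, file_get_contents falls back to default_socket_timeout in php.ini (60 seconds by default), which can also be lowered globally:

; php.ini
default_socket_timeout = 10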

2. Improper configuration of the max_requests parameter may cause intermittent 502 errors:

pm.max_requests = 1000

This sets the number of requests each child process serves before being respawned, which is useful for working around memory leaks in third-party modules. If set to '0', requests are accepted indefinitely. It is equivalent to the PHP_FCGI_MAX_REQUESTS environment variable. Default value: 0.

This configuration means that a php-cgi process is automatically restarted once the number of requests it has handled reaches the configured value (1000 in the example above).

But why restart the process?

Generally, a project uses some third-party PHP libraries, and these libraries often have memory leaks. If the php-cgi processes are not restarted periodically, memory usage will inevitably keep growing. So PHP-FPM, as the manager of php-cgi, provides this monitoring function: it restarts any php-cgi process that has served the specified number of requests, ensuring that memory usage does not keep climbing.

It is precisely because of this mechanism that high-concurrency sites often see 502 errors; my guess is that php-fpm does not handle the queue of requests coming from Nginx well. However, I am still using PHP 5.3.2, so I do not know whether the problem persists in PHP 5.3.3.

Our current solution is to set this value as large as possible, minimizing the number of php-cgi respawns and improving overall performance. In our own production environment we found the memory leaks were not significant, so we set the value very large (204800). You must set this value according to your own situation rather than blindly increasing it.
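For reference, the setting the author describes would look like this in the FPM configuration (the 204800 value is from their environment; tune it to your own leak profile):

; php-fpm.conf (sketch): restart each child only after many requests
pm.max_requests = 204800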

In other words, the purpose of this mechanism is only to keep php-cgi from using too much memory, so why not handle it by detecting memory directly? I agree with Gao Chunhui that restarting a php-cgi process based on a configured peak memory consumption would be a better solution.
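php-fpm has no built-in memory-based restart (at least not in the versions discussed here), but a script can record its own peak usage so you can judge how aggressive pm.max_requests needs to be. A minimal PHP sketch (output goes to wherever error_log is configured):

// Sketch: log peak memory per request so leak growth can be observed.
register_shutdown_function(function () {
    error_log(sprintf('peak memory: %.1f MB', memory_get_peak_usage(true) / 1048576));
});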

2. 504 and nginx.conf

When the execution time of a PHP program exceeds Nginx's wait time, you can increase the FastCGI timeout values in the nginx.conf configuration file appropriately, for example:

http
{
......
fastcgi_connect_timeout 300;
fastcgi_send_timeout 300;
fastcgi_read_timeout 300;

......
}
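After changing these values, reload Nginx so the new timeouts take effect:

nginx -s reload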


3. 413 Request Entity Too Large

Increase client_max_body_size

The client_max_body_size directive specifies the maximum request entity size allowed for client connections, checked against the Content-Length field of the request header. If the request is larger than the specified value, the client receives a "Request Entity Too Large" (413) error. Remember that browsers do not know how to display this error nicely. If uploads still fail after raising it, also increase post_max_size and upload_max_filesize in php.ini.
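A sketch of the matching settings on both sides (the 20m/20M limits are illustrative; pick values that fit your uploads):

# nginx.conf, inside the http {} or server {} block
client_max_body_size 20m;

; php.ini
post_max_size = 20M
upload_max_filesize = 20M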
