Apache concurrency estimation
Apache is mainly a memory-bound service. From my own experience I have summed up the following rule of thumb:
The formula is as follows:

apache_max_process_with_good_performance < (total_hardware_memory / apache_memory_per_process) * 2
apache_max_process = apache_max_process_with_good_performance * 1.5
Why are there two values, apache_max_process_with_good_performance and apache_max_process? At low load the system can use spare memory for the file-system cache, which further improves the response speed of individual requests. Under high load, per-request response time becomes much slower, and once the process count exceeds apache_max_process the system starts using the hard disk for virtual memory (swap), causing a sharp drop in efficiency. In addition, for the same service, apache_max_process on a 2 GB machine is generally set to about 1.7 times the value for a 1 GB machine, because Apache itself degrades when it runs too many processes.
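The rule of thumb above can be sketched as a small shell helper (the function name and the exact rounding are my own, not from the original formula, which rounds its results by eye):

```shell
# Sketch: compute both limits from total memory and per-process
# memory, both given in MB.
# Usage: estimate_limits TOTAL_MB PER_PROCESS_MB
estimate_limits() {
  total_mb=$1
  per_proc_mb=$2
  # good-performance ceiling: (total / per-process) * 2
  good=$(( total_mb / per_proc_mb * 2 ))
  # hard ceiling: good-performance value * 1.5 (integer arithmetic)
  max=$(( good * 3 / 2 ))
  echo "apache_max_process_with_good_performance < $good"
  echo "apache_max_process = $max"
}

estimate_limits 1024 4   # 1 GB machine, 4 MB per process
```

With exact integer arithmetic this prints 512 and 768; the article rounds the same inputs down to 500 and 750.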
Example 1:
An Apache + mod_php server: an Apache process generally requires about 4 MB of memory.
Therefore, on a machine with 1 GB memory:
The calculation is as follows:

apache_max_process_with_good_performance < (1 GB / 4 MB) * 2 = 500
apache_max_process = 500 * 1.5 = 750
Therefore, plan your application so that the service runs below 500 processes as much as possible to maintain high efficiency, and set Apache's soft limit to 800.
Example 2:
An Apache + mod_resin server: an Apache process generally requires 2 MB of memory.
On a machine with 2 GB memory:
The calculation is as follows:

apache_max_process_with_good_performance < (2 GB / 2 MB) * 2 = 2000
apache_max_process = 2000 * 1.5 = 3000
The above estimates assume a small-file service (a typical request is smaller than 20 KB). File-download sites may also be constrained by other factors, such as bandwidth.
Estimate the memory required by Apache
It is very difficult to calculate the required memory precisely. To be as accurate as possible, observe the server's load and processes in an environment similar to production. After all, server configurations and installed modules differ, so in the end you can only rely on your own measurements: the core data should be in your own hands.
A simple and reliable method is to look at the httpd processes during a stress test, check the memory used by one process, then multiply by the total process count to get an estimate.
For example:
ps aux | grep httpd
Check the memory used by each httpd process. The value is in the fourth column (%MEM), expressed as a percentage.
ps aux | grep httpd | wc -l
This gives the total number of processes. Remember to subtract 1 from the result, because the grep httpd command itself also appears in the output.
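A common sketch that avoids the subtraction entirely is the bracket trick:

```shell
# Sketch: make grep exclude itself from the match.
# The regex [h]ttpd matches the string "httpd", but the grep
# process's own command line contains the literal text "[h]ttpd",
# which that regex does not match -- so no need to subtract 1.
ps aux | grep '[h]ttpd' | wc -l
```

On systems with procps, `pgrep -c httpd` gives the same count directly.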
free
This shows the server's total memory, in KB.
Now we can estimate. For example, if each process occupies 0.2% of memory (the 0.002 below), there are 27 httpd processes, and the total memory is 4148424 KB:

php -r "echo 0.002*4148424*26/1024;"

The result is about 210.66 MB of memory for Apache alone (note the 26: one of the 27 lines is the grep itself); you still have to leave enough memory for other services. Also, peak traffic may be one to two times higher than usual, and at that point Apache alone can already strain the machine. I have run into a case where the server crashed because of high disk I/O.
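The same arithmetic can be wrapped in a small shell helper instead of a PHP one-liner (apache_mem_mb is a hypothetical name of my own; the numbers are the ones from the worked example):

```shell
# Sketch: estimate Apache's memory use in MB.
# Usage: apache_mem_mb PERCENT_PER_PROCESS TOTAL_KB PROCESS_COUNT
#   PERCENT_PER_PROCESS  the %MEM column from ps (e.g. 0.2)
#   TOTAL_KB             total memory as reported by free
#   PROCESS_COUNT        httpd count, already minus the grep line
apache_mem_mb() {
  awk -v pct="$1" -v total="$2" -v n="$3" \
      'BEGIN { printf "%.2f\n", pct / 100 * total * n / 1024 }'
}

apache_mem_mb 0.2 4148424 26   # the worked example: about 210.66 MB
```

The percentage and process count can be fed in live from the ps and free commands shown above.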
If the server is short on memory, you have to cap the maximum number of processes: Apache's MaxClients directive limits the number of concurrent worker processes.
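For reference, a minimal prefork section from an Apache 2.2-era httpd.conf matching the 1 GB mod_php example above (the StartServers and spare-server values are illustrative assumptions, not from the original):

```apache
<IfModule prefork.c>
    StartServers          8
    MinSpareServers       5
    MaxSpareServers      20
    # ServerLimit must be >= MaxClients
    ServerLimit         800
    # cap concurrent worker processes at the soft limit chosen above
    MaxClients          800
    MaxRequestsPerChild 4000
</IfModule>
```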