I ran some experiments and read the nginx source code to verify my conjecture.
For the GET method, sending the header and passing the body through the output filter is enough to end a request: GET processing is synchronous, and nginx closes out the request itself in the step after the handler returns. For the POST method, however, nginx reads the request body asynchronously, so you must finalize the request explicitly.
For a synchronous handler, nginx calls ngx_http_finalize_request after the handler returns. Inside that function, if the handler returned NGX_DONE, ngx_http_finalize_connection is called directly. Despite its name, the latter does not necessarily close the connection: when keepalive is enabled, it ends the request but keeps the connection open. Whether keepalive is enabled is determined by the client: if the client's HTTP version is above 1.0, or the request carries a keep-alive header, r->keepalive is set to 1, indicating keepalive is on. If nginx also has keepalive support enabled, the connection still exists after the request ends, until the client closes it.
For the asynchronously processed POST method, the post_handler callback is the last step that actually generates and returns the HTTP content. Unless the return code is NGX_HTTP_SPECIAL_RESPONSE or above, nginx does not call ngx_http_finalize_request after post_handler returns; finalizing the request is the responsibility of post_handler itself. So if your content handler processes POST requests, remember to call ngx_http_finalize_request. Otherwise connections leak, and sooner or later "accept ... too many open files" errors appear.
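To make the POST case concrete, here is a C-style sketch of such a content handler. It is a sketch only: the names example_content_handler, example_body_handler, and example_send_response are hypothetical, and the nginx module headers and boilerplate needed to compile it are omitted.

```c
/* Sketch only: assumes nginx module headers; handler names are hypothetical. */

static void
example_body_handler(ngx_http_request_t *r)
{
    ngx_int_t  rc;

    /* ... generate and send the response here ... */
    rc = example_send_response(r);      /* hypothetical helper */

    /* nginx will NOT finalize the request after this callback returns:
     * it was invoked asynchronously, so we must finalize ourselves.
     * Forgetting this call leaks the connection and eventually produces
     * "(24: Too many open files)" errors. */
    ngx_http_finalize_request(r, rc);
}

static ngx_int_t
example_content_handler(ngx_http_request_t *r)
{
    if (r->method == NGX_HTTP_POST) {

        /* Read the body asynchronously; example_body_handler runs later. */
        ngx_int_t rc = ngx_http_read_client_request_body(r, example_body_handler);

        if (rc >= NGX_HTTP_SPECIAL_RESPONSE) {
            return rc;                  /* nginx finalizes for us here */
        }

        return NGX_DONE;                /* we will finalize later ourselves */
    }

    /* GET: synchronous; nginx calls ngx_http_finalize_request(r, rc)
     * itself after we return. */
    return example_send_response(r);
}
```

Returning NGX_DONE from the content handler tells nginx the request is still in progress; from that point on, nothing finalizes it except the explicit call inside the body callback.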
Now for some related notes on the error itself; if this part is not relevant to you, feel free to skip it. Let's look at the settings below.
Below is one such error:
24: Too many open files
The detailed error code is as follows:
23:00:49 [alert] 7387#0: *6259768 socket() failed (24: Too many open files) while connecting to upstream
It is easy to be puzzled by this error. At first I did not know where the problem was: the configuration checked out, and the application code was not what produced the error message.
The reason is that Linux/Unix imposes soft and hard limits on the number of file handles a process may open. The solutions are as follows:
Quick fix:
Use the following command to set a large enough open-file limit:
ulimit -n 30000
and at the same time add this to nginx.conf:
The code is as follows:
worker_rlimit_nofile 30000;
This solves the problem of nginx holding too many open connections, and nginx can then support high concurrency.
Note: the ulimit -n 30000 change is only valid for the current shell session and is lost after you log out.
The following changes are more permanent.
1. Modify the system file limits
You can view the current file limits with the ulimit command:
The code is as follows:
ulimit -Hn
ulimit -Sn
The number of files the nginx server can open is limited by your operating system. Edit /etc/sysctl.conf and add the following:
The code is as follows:
fs.file-max = 70000
Save and exit, then reload the system configuration:
The code is as follows:
sysctl -p
Edit /etc/security/limits.conf and add the following content:
The code is as follows:
* soft nofile 10000
* hard nofile 30000
This modification does not affect existing sessions; log in again (or restart the server) for it to take effect.
2. Modify nginx's own file limit with the worker_rlimit_nofile option
Open the nginx configuration file. On Ubuntu 12.04 the path is as follows:
The code is as follows:
vim /etc/nginx.conf
Add the following content:
The code is as follows:
# set open fd limit to 30000
worker_rlimit_nofile 30000;
Save and exit, then reload the nginx configuration:
The code is as follows:
sudo service nginx reload
Check the limits again:
ulimit -Hn
ulimit -Sn
The result is as follows:
30000
10000
That's all. I hope it's useful to you.