The number of requests receiving a 400 response code on one of our applications rose to roughly 1‰ of all requests, so we set out to analyze the cause. First we checked the logs to see whether the other requests coming from the IPs behind the 400 responses looked normal, in order to rule out an attack with forged requests: both the User-Agent distribution and the IP distribution were normal. Every 400 response had a body of 226 bytes, which means the server was returning a fixed piece of content to the user.
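As a rough sketch of those checks, assuming the same cookie_log file used in the commands below and a combined-style access log in which the client IP is field 1, the response size is field 10, and the User-Agent is the last quoted field (the exact positions depend on the LogFormat in use), the distributions can be inspected like this:
# Top client IPs among the 400 responses
grep 'HTTP/1.1" 400' cookie_log | awk '{print $1}' | sort | uniq -c | sort -rn | head
# User-Agent distribution of the 400 responses
grep 'HTTP/1.1" 400' cookie_log | awk -F'"' '{print $(NF-1)}' | sort | uniq -c | sort -rn | head
# Response sizes of the 400 responses -- each line should report the same 226 bytes
grep 'HTTP/1.1" 400' cookie_log | awk '{print $10}' | sort | uniq -c
If one IP or one User-Agent dominated the 400s, that would point to forged or scripted requests; here all of these distributions looked normal.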
A 400 error means the client sent a malformed request. There are two likely causes: one is a network or transmission problem that leaves the request incomplete, the other is a violation of the protocol's rules on data format or length. If the client is a custom-written client with incomplete protocol support, format and length violations are quite possible; but when users access the service through a normal browser, the probability of a format or length violation is very small. For the "data too long" case the most likely culprit is an over-long cookie, so we simulated a request with a very long cookie. The browser did get back a 400 response code, but its Content-Length was very large because the error page echoed the cookie content (an Apache bug; upgrading to 2.2.22 fixes it).
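A minimal way to reproduce that over-long-cookie test, assuming curl is available and using a placeholder URL, is to pad the Cookie header well past Apache's default 8190-byte limit on a single header field:
# ~20 KB cookie, far beyond the default LimitRequestFieldSize of 8190 bytes (URL is a placeholder)
curl -v -H "Cookie: test=$(head -c 20000 /dev/zero | tr '\0' 'a')" http://testhost/testpath
On the affected Apache versions the resulting 400 response carries a Content-Length far larger than 226 bytes, which is exactly why this explanation does not fit our logs.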
Since the online 400 responses all have a fixed 226-byte body rather than a large cookie-echoing one, that does not explain them. We therefore suspected that a timeout was leaving the request data incomplete.
Check the average processing time of the records with a 400 response:
cat cookie_log | grep 'HTTP/1.1" 400' | awk '{print $12}' | awk 'BEGIN {total = 0; count = 0} {total += $1; count++} END {print total/count}'
The average processing time is 6119.350 ms.
Check the average processing time of normal (200) requests among the first 100,000 records as a baseline:
head -n 100000 cookie_log | grep 'HTTP/1.1" 200' | awk '{print $12}' | awk 'BEGIN {total = 0; count = 0} {total += $1; count++} END {print total/count}'
The average processing time is 48.325 ms.
So the requests that end in a 400 take, on average, more than 120 times as long as normal requests (6119.350 / 48.325 ≈ 127).
Program verification:
// out is the output stream of a plain TCP socket connected to the test server (host and port are placeholders)
Socket socket = new Socket("testhost", 80);
OutputStream out = socket.getOutputStream();
// Send only the request line, then pause, so the server sits waiting for the rest of the headers
out.write("GET /testpath HTTP/1.1\r\n".getBytes());
out.flush();
Thread.sleep(30 * 1000); // simulate a client that stalls mid-request (timeout)
// Send the remaining headers only after the pause
out.write(("Host: testhost\r\nUser-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:10.0) Gecko/20100101 Firefox/10.0\r\nAccept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8\r\nAccept-Language: zh-cn,en-us;q=0.7,en;q=0.3\r\nAccept-Encoding: gzip, deflate\r\nConnection: keep-alive\r\nIf-Modified-Since: Thu, 27 Oct 2011 01:16:25 GMT\r\nCache-Control: max-age=0\r\n").getBytes());
out.flush();
The following response comes back:
HTTP/1.1 400 Bad Request
Date: Wed, 08 Feb 2012 08:36:52 GMT
Server: Apache
Content-Length: 226
Connection: close
Content-Type: text/html; charset=iso-8859-1
400 Bad Request
Bad Request
Your browser sent a request that this server could not understand.
This result matches the online situation very closely, so the basic conclusion is that some clients take too long to send their requests. Apache starts counting the processing time from the moment it receives the first request line (GET /xxxx HTTP/1.1\r\n), regardless of whether the request eventually turns out to be valid, and all of the time spent receiving the rest of the request is included in that processing time.
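As a quick sanity check of this explanation against the logs, again assuming the processing time is in field 12 and in milliseconds as in the commands above, one can look at how many of the 400 records are slow and how their times are distributed:
# Count 400 responses that took longer than one second to process
grep 'HTTP/1.1" 400' cookie_log | awk '$12 > 1000' | wc -l
# Rough processing-time distribution of the 400 responses, bucketed into whole seconds
grep 'HTTP/1.1" 400' cookie_log | awk '{printf "%d s\n", $12/1000}' | sort -n | uniq -c
If the slow-client explanation holds, most of the 400 records should cluster around the server's request-read timeout rather than spreading evenly across the buckets.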