Apache KeepAlive Setup and Optimization


Before discussing Apache KeepAlive, we need a basic picture of how web page data loads.
A useful page-load testing tool is Pingdom Tools: enter a URL to measure loading speed and, more importantly, observe the loading process:


Each colored block has a meaning: yellow is the HTTP wait (start) time, green is the connection time, and blue is the download time.
From this result graph we can see:
1) every HTTP request goes through three steps: first it starts, then it connects, and finally it downloads the data;
2) each page begins with a single HTTP request that connects and downloads the main HTML document; that first download is the only HTTP request at that stage;
3) once the main document has downloaded, the browser fetches the CSS and JS files it references (note: this tool does not count JS), and subsequent resources are downloaded with up to 10 concurrent HTTP requests;
4) while those 10 HTTP requests are downloading, other resources have to wait, so when optimizing we should pay attention to the number of resources on a page;
5) if a resource is large it naturally takes a long time to load, so resource size also matters.

The test above also exposes part of what happens when a resource is requested: the setup and teardown of the TCP connection, which is what this article is really about. So what is the relationship between HTTP requests and TCP connections? Simply put, TCP sits closer to the bottom of the stack, and HTTP and other application protocols run on top of it, so one TCP connection can carry a number of HTTP requests (how many can be configured in Apache). At the same time, setting up and tearing down a TCP connection costs more memory and time than an individual HTTP request does. With that in mind, let us look at Apache's KeepAlive settings.
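To make the TCP-vs-HTTP relationship concrete, here is a minimal sketch using only the Python standard library; the throwaway local server is a stand-in for Apache, and the demo exists purely to show several HTTP requests traveling over one persistent TCP connection:

```python
# Demo: three HTTP requests reusing one TCP connection (keep-alive).
# The local server below is only a stand-in for a real Apache instance.
import http.client
import http.server
import threading

class KeepAliveHandler(http.server.SimpleHTTPRequestHandler):
    protocol_version = 'HTTP/1.1'   # HTTP/1.1 keeps the connection open by default
    def log_message(self, *args):   # silence per-request logging
        pass

server = http.server.HTTPServer(('127.0.0.1', 0), KeepAliveHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection('127.0.0.1', server.server_address[1])
statuses = []
for _ in range(3):               # three HTTP requests...
    conn.request('GET', '/')
    resp = conn.getresponse()
    resp.read()                  # drain the body so the connection can be reused
    statuses.append(resp.status)
conn.close()                     # ...then the single TCP connection is torn down
server.shutdown()
print(statuses)                  # → [200, 200, 200]
```

All three requests succeed without the client reconnecting, which is exactly what KeepAlive buys on the server side.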

The KeepAlive directive, from the Apache core documentation: Keep-Alive extends the persistent-connection feature from HTTP/1.0 into HTTP/1.1, providing long-lived HTTP sessions that allow multiple requests over the same TCP connection. In some cases this reduces latency by almost 50% for HTML documents that contain many images. Since Apache 1.2, you can set KeepAlive On to enable persistent connections.
For HTTP/1.0 clients, persistent connections are used only when the client explicitly asks for them. Moreover, a persistent connection is established with an HTTP/1.0 client only when the length of the transferred content is known in advance. This means that content of variable length, such as CGI output, SSI pages, and server-generated directory listings, generally cannot use persistent connections with HTTP/1.0 clients. For HTTP/1.1 clients, persistent connections are the default unless otherwise specified. If the client requests it, chunked encoding is used to send content of unknown length over a persistent connection.

Also relevant is the KeepAliveTimeout directive in the Apache core documentation: the number of seconds Apache waits for the next request before closing a persistent connection. Once a request has been received, the timeout value specified by the Timeout directive applies instead. On a high-load server, a large KeepAliveTimeout value causes performance problems: the larger the timeout, the more server processes sit tied to idle clients. Finally, there is one more related directive.
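In httpd.conf, the two directives discussed so far look like this (the timeout value is only an illustrative starting point, not a recommendation from the Apache documentation):

```apache
# httpd.conf — enable persistent connections
KeepAlive On
# Seconds to wait for the next request on an idle persistent connection;
# keep this small on high-load servers (the value here is illustrative)
KeepAliveTimeout 5
```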

MaxKeepAliveRequests, from the core documentation: this directive limits the number of requests allowed per connection when KeepAlive is enabled. If the value is set to 0, the number of requests is unlimited. The documentation recommends setting it to a fairly large value for best server performance.

With Apache's directives covered, we can now understand the KeepAlive principle. When a page contains many images, the client fires off multiple HTTP requests almost instantly, and opening a separate TCP connection for each one significantly slows the response. A persistent connection lets the client make multiple HTTP requests over a single TCP connection, reducing the number of TCP connections that must be built and improving response speed. By examining the access log we can count consecutive HTTP requests, their time intervals, and the traffic volume, and use those numbers to choose values for MaxKeepAliveRequests and KeepAliveTimeout.

If KeepAliveTimeout is too small, persistent connections bring no benefit; if it is too large, connections linger idle, wasting TCP connections, and worse, the number of httpd processes grows, driving up system load and potentially making the server unresponsive. On the other hand, when a server handles only dynamic page requests, users rarely request multiple dynamic pages in quick succession (they usually read a page for a while before opening the next one), so enabling KeepAlive just wastes TCP connections. The deciding factor is therefore simple: enable KeepAlive if a single page view causes the browser to issue multiple HTTP requests to the server.
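As a sketch of the access-log analysis suggested above, the script below computes the gaps between consecutive requests from the same client IP. The sample log lines and field positions assume Apache's common log format, and the helper name is my own invention:

```python
# Sketch: measure inter-request gaps per client from an Apache access log,
# to help choose KeepAliveTimeout / MaxKeepAliveRequests values.
# Assumes the common log format: IP - - [timestamp] "request" status bytes
from datetime import datetime

def request_gaps(lines):
    """Seconds between consecutive requests from the same client IP."""
    last_seen = {}
    gaps = []
    for line in lines:
        ip = line.split()[0]
        # The timestamp sits between '[' and ']'
        ts_raw = line.split('[', 1)[1].split(']', 1)[0]
        ts = datetime.strptime(ts_raw, '%d/%b/%Y:%H:%M:%S %z')
        if ip in last_seen:
            gaps.append((ts - last_seen[ip]).total_seconds())
        last_seen[ip] = ts
    return gaps

sample = [
    '1.2.3.4 - - [10/Oct/2023:13:55:36 +0000] "GET / HTTP/1.1" 200 512',
    '1.2.3.4 - - [10/Oct/2023:13:55:38 +0000] "GET /a.css HTTP/1.1" 200 100',
    '1.2.3.4 - - [10/Oct/2023:13:55:39 +0000] "GET /b.gif HTTP/1.1" 200 200',
]
print(request_gaps(sample))  # → [2.0, 1.0]
```

If most gaps cluster under a few seconds, a short KeepAliveTimeout already captures the benefit; mostly large gaps suggest idle connections would be held open for nothing.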
Take a friend's case: their server runs a dynamic application and also serves all of its images. Looking at their home page, it issues requests of these content types: text/html, text/css, application/octet-stream, text/javascript, image/gif, image/jpeg. One home-page view makes 181 requests (I checked every one; they all go to the same domain). Only text/html and application/octet-stream can be generated by the application, and in this page view text/html appears once and application/octet-stream only four times. Would turning KeepAlive off help them? My answer is no; it would make the server's quality of service worse. So what should they do?

My suggestion is as follows:


1. Since only one request per page is generated dynamically and roughly 180 (four of them may not be static, but that hardly matters) are static, static and dynamic content should be separated onto two servers (one machine can host both). Turn KeepAlive off for the dynamic application and on for the static server.
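A sketch of suggestion 1 at the configuration level; the values are illustrative, and each block belongs to a different Apache instance's httpd.conf:

```apache
# Static server's httpd.conf — many small files per page view, keep-alive pays off
KeepAlive On
KeepAliveTimeout 5
MaxKeepAliveRequests 200   # illustrative value

# Dynamic application server's httpd.conf — usually one request per page view
KeepAlive Off
```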


2. Deploy a layer-4 or layer-7 switch, or a cache server, in front of the web servers. This allows the system to scale out and also makes the servers' KeepAlive settings more effective.

 
3. Consider tuning Apache itself. I have heard of a single process consuming up to XX MB of memory, which is frightening; staying under 10 MB is more normal. This is optional.
How do you verify that KeepAlive is enabled on your server's Apache? (If you bought shared hosting, the administrator generally will not enable it for you, because it consumes too much memory.) You can check it with a webmaster-tools site, usually together with a gzip-compression check.
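A quick header-based check is also possible. The sketch below runs against sample response headers rather than a live server; replace the sample with the real output of `curl -sI http://your-site/`:

```shell
# Sample response headers; substitute real `curl -sI` output from your server.
headers='HTTP/1.1 200 OK
Connection: Keep-Alive
Keep-Alive: timeout=5, max=100'

# A server with KeepAlive enabled typically advertises it in these headers.
if printf '%s\n' "$headers" | grep -qi '^connection: *keep-alive'; then
  result="keep-alive enabled"
else
  result="keep-alive not advertised"
fi
echo "$result"   # → keep-alive enabled
```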
Summary: persistent connections have both advantages and drawbacks. On one hand, they drive up memory consumption; on the other, their effect in Apache differs between dynamic and static requests. For dynamic requests they are a poor fit, since each request ties up a TCP connection; for static content, many HTTP requests can be served over the same connection, which is a clear win. Hence the guess: split the site across two servers, one holding the static HTML, CSS, JS, and especially images, the other handling dynamic page requests. In theory the result should be better, but I have not tested this and have no measurement data.
