22 Criteria for Performance Tuning of Large-Scale, High-Traffic Web Sites

Reprinted from http://icyriver.net/?p=26. The author appears to be an engineer at Yahoo China. Among the eight newly added rules, the flush-the-head trick is particularly interesting; Yahoo modified its own yapache, so it is well placed to do that sort of thing. I suspect several of the other rules also come out of this engineer's team's research. That Yahoo opens up and shares such valuable material is really admirable. Incidentally, I like their YUI library and the open developer community around it; although I have not read all the code, I have gone through everything in YUI Theater more than once and gained a lot from it.

Heh, standing on the shoulders of giants does make a difference. Although I have not yet fully digested Yahoo's practices at my own company, I can already see some improvements.

This article also reminds me of that engineer at Yahoo China; just the other day I saw her iPhone post on SMTH. If PDFs read well on it, I would consider getting one too: you can read during all the various waits of the day, and the poor manage their time in the poor's own way :)

The 14 criteria for performance tuning of large, high-traffic Web sites have become the industry standard for front-end optimization, and many articles and books at home and abroad have introduced them. The 14 guidelines are in fact one of the achievements of Yahoo's Performance team in the United States over the past few years, which has also studied and proposed many other effective tuning techniques. That team is responsible for making Yahoo's products and applications faster, better, and more efficient.

1. Make Fewer HTTP Requests

(Minimize the number of HTTP requests)

A question that comes up early on: should all JavaScript and CSS go into one file, or be split into multiple files?

From the perspective of reducing network requests, the former beats the latter. From the perspective of parallel downloads, however, both IE and Firefox will by default open only two concurrent connections to a single domain, which in many cases makes for a poor user experience: all the files must be downloaded before a presentable page appears. Flickr adopts a compromise: JavaScript and CSS are split into several sub-files while keeping the number of files as small as possible. This adds complexity to development, but the performance payoff is huge.
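
As a minimal sketch of the combining step, assuming a Node.js build script and hypothetical file names (the article does not describe Flickr's actual build process):

    // combine.js: naive bundling. Concatenate several source files into one,
    // so the page downloads one combined script/stylesheet instead of many
    // small ones. File names here are made-up examples.
    const fs = require('fs');

    function combine(inputs, output) {
      const bundle = inputs.map((f) => fs.readFileSync(f, 'utf8')).join('\n');
      fs.writeFileSync(output, bundle);
    }

    // Two bundles instead of six files: fewer HTTP requests, while leaving
    // two resources the browser can still fetch in parallel.
    combine(['nav.js', 'gallery.js', 'upload.js'], 'bundle.js');
    combine(['layout.css', 'theme.css', 'print.css'], 'bundle.css');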

2. Use a Content Delivery Network

(Use CDN)

3. Add an Expires Header

(Add an expiration time to downloaded CSS, JS, and image components)

4. Gzip Components

(Compress downloaded components)

There is no doubt that compressing site content is a common Web optimization method, but it does not always achieve the desired result. The reason is that the mod_gzip module consumes CPU resources not only on the server side but on the client side as well. In addition, the temporary files mod_gzip creates after compression are written to disk, which can cause serious disk I/O problems.

Flickr instead uses the mod_deflate module available in httpd 2.x and later, which performs all compression in memory. mod_deflate is unavailable in httpd 1.x, but performance there can be improved indirectly by putting the temporary files on a RAM disk.
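
For reference, a minimal httpd 2.x configuration along these lines might look like the sketch below; the module path and MIME types are assumptions for illustration, not Flickr's actual settings:

    # Enable in-memory compression with mod_deflate (Apache/httpd 2.x).
    LoadModule deflate_module modules/mod_deflate.so

    # Compress only text resources; images are deliberately skipped.
    AddOutputFilterByType DEFLATE text/html text/css application/x-javascript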

Of course, mod_gzip is not useless: it is well suited to pre-compressed files. When compressing, you should also pay attention to policy: there is no need to compress image files (Flickr serves a huge number of images, and compressing them gains little), so Flickr compresses only JavaScript and CSS. Newer versions of mod_gzip can handle pre-compressed files automatically via the mod_gzip_update_static option, though Cal points out that this feature may cause problems in some older browsers.

Another major lever is content minification. For JavaScript, you can reduce comments, collapse whitespace, and use compact syntax (all of Google's scripts are extremely compact and hard to read, following a similar idea). Of course, JavaScript processed this way can end up with many constructs that are hard to parse, so Flickr uses the Dojo Compressor to build a parse tree. The Dojo Compressor has low overhead and is transparent to end users. That covers JavaScript; CSS processing is comparatively simple: a simple regular expression that replaces runs of whitespace with a single space achieves up to 50% compression.
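
A toy version of that whitespace-collapsing idea (my own illustration; the article does not show Flickr's actual regular expression):

    // Strip /* ... */ comments and collapse runs of whitespace to a single
    // space. A crude sketch of the technique only; production minifiers
    // build a parse tree to stay safe.
    function minifyCss(css) {
      return css
        .replace(/\/\*[\s\S]*?\*\//g, '') // drop comments
        .replace(/\s+/g, ' ')             // collapse whitespace
        .trim();
    }

    console.log(minifyCss('body {\n  color: #333;\n  margin: 0;\n}'));
    // -> "body { color: #333; margin: 0; }"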

5. Put CSS Components at the Top of the Page

(Place CSS files as close to the top of the page as possible)

6. Put JS Components as Close to the Bottom of the Page as Possible

(Place JS files as close to the bottom of the page as possible)

7. Avoid CSS Expressions

(Use expressions in CSS files with caution)

8. Make JavaScript and CSS External

(Serve JS and CSS as external files)

9. Reduce DNS Lookups

(Reduce the number of domain name resolution requests)

10. Minify JavaScript

(Minify JavaScript code)

11. Avoid Redirects

(Avoid redirection)

12. Remove Duplicate Scripts

(Avoid duplicate JS files)

13. Configure ETags

(Configure ETag)

Flickr's developers make full use of the ETag and Last-Modified mechanisms defined in the HTTP 1.1 standard to improve caching efficiency. Worth noting is an ETag tip Cal gives for load-balanced servers: you can configure Apache to derive the ETag from a file's modification time and size. By default, Apache derives the ETag from the file's inode. This is not perfect either, of course, since it interacts with If-Modified-Since handling.

However, some sites, Yahoo among them, generate ETags based on inodes, so the ETag of the same CSS or JS file differs from one server to another. If there are n servers, the probability that the browser receives a 304 response is therefore only 1/n.
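
The Apache directive in question is FileETag. A hedged sketch of the load-balancer-friendly setting (defaults vary by Apache version, so check your own docs):

    # Derive ETags from modification time and size only, so the same file
    # yields the same ETag on every server behind the load balancer.
    # (Older Apache defaults include the inode, which differs per machine.)
    FileETag MTime Size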

14. Make Ajax Cacheable

(Cache Ajax requests)

The following rules are new and have not yet been officially announced, so please take note. If you reprint this article, be sure to indicate the source: http://icyriver.net/?p=26.

15. Flush the Header

(Send the HEAD section of the page first)

We improved the page load times by flushing the Apache output buffer after the document HEAD was generated. This had two benefits.

First, the HEAD contains SCRIPT and LINK tags for scripts and stylesheets. By flushing the HEAD, those tags are received and parsed by the browser sooner, and in turn the browser starts downloading those components earlier.

Second, the HEAD is flushed before the search results are actually generated. This is a win for any property doing significant backend computation, and especially for those making one or more backend web service calls.
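
The article describes this in terms of Yahoo's modified Apache. As a rough illustration of the same idea in a hand-rolled Node.js server (my sketch, not Yahoo's implementation; paths and the delay are made up):

    // Stream the document HEAD immediately, then do the slow backend work.
    // While the server computes, the browser is already fetching the CSS
    // and JS referenced in the HEAD.
    const http = require('http');

    http.createServer((req, res) => {
      res.writeHead(200, { 'Content-Type': 'text/html' });
      res.write('<html><head>' +
                '<link rel="stylesheet" href="/bundle.css">' +
                '<script src="/bundle.js"></script>' +
                '</head>'); // first flush: assets start downloading now

      setTimeout(() => { // stand-in for backend computation / service calls
        res.end('<body>...search results...</body></html>');
      }, 200);
    }).listen(8080);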

16. Split Static Content across Multiple Hostnames

(Spread requests for static content across multiple hostnames)

If you have many (10 or more) components downloaded from a single hostname, it might be better to split them across two hostnames.
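
Whatever the split, a given asset should always map to the same hostname, or browser caching is defeated. A sketch with made-up hostnames:

    // Map each asset path to one of two static hostnames. A simple hash
    // keeps the mapping stable across pages and visits. Hostnames are
    // hypothetical.
    const HOSTS = ['static1.example.com', 'static2.example.com'];

    function assetUrl(path) {
      let hash = 0;
      for (const ch of path) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
      return 'http://' + HOSTS[hash % HOSTS.length] + path;
    }

    console.log(assetUrl('/img/logo.png')); // always the same host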

17. Reduce the Size of Cookies

(Do not make the Cookie content too large)

Reduce the amount of data in the cookie by storing state information on the backend and by abbreviating the names and values stored in the cookie. Set expiration dates on your cookies, and make them as short as practical.
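
A small client-side illustration of both points, with a hypothetical cookie name: keep only a compact session reference in the cookie, with an explicit near-term expiry, and look up the real state on the server.

    // Store a short, abbreviated key ("sid") rather than the state itself;
    // the backend resolves the id to the full session data.
    function setSessionCookie(id) {
      const expires = new Date(Date.now() + 24 * 60 * 60 * 1000); // 1 day
      document.cookie = 'sid=' + encodeURIComponent(id) +
                        '; expires=' + expires.toUTCString() + '; path=/';
    }

    setSessionCookie('a1b2c3'); // a few bytes per request, not kilobytes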

18. Host Static Content on a Different Top-Level Domain

(Place static files under different top-level domain names)

19. Minify CSS

(Minify CSS code)

20. Use GET for XHR

(Use GET for XMLHttpRequest)

Iain Lamb did a deep study of how using POST for XMLHttpRequests is inefficient, especially in IE. His recommendation: "If the amount of data you have to send to the server is small (less than 2 K), I suggest you design your webservice/client application to use GET rather than POST."
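
A minimal XMLHttpRequest sketch following that advice (the endpoint is hypothetical); the small payload travels in the query string rather than in a POST body:

    // GET for small XHR payloads: IE sends POST in two steps (headers,
    // then body), while a small GET can go out in a single packet and is
    // also cacheable.
    function fetchSuggestions(query, callback) {
      const xhr = new XMLHttpRequest();
      xhr.open('GET', '/suggest?q=' + encodeURIComponent(query), true);
      xhr.onreadystatechange = function () {
        if (xhr.readyState === 4 && xhr.status === 200) {
          callback(xhr.responseText);
        }
      };
      xhr.send(null);
    }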

21. Avoid IFrames

(Try to avoid using IFrames)

Don't set the SRC in the markup (set it via JS instead); each IFrame costs 20-50 ms, even if it contains nothing.
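
A sketch of the no-SRC-in-markup tip (the container id and ad URL are made up):

    // Create the iframe without a src so it does not delay the parent
    // page, then point it somewhere once the page has loaded.
    window.onload = function () {
      const frame = document.createElement('iframe');
      document.getElementById('ad-slot').appendChild(frame);
      frame.src = 'http://ads.example.com/banner.html'; // hypothetical URL
    };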

22. Optimize Images

(Optimize image files)
