Flickr is a representative Web 2.0 site. Beyond the content optimizations that apply to any web site, it also has to cope flexibly with the deployment complexity created by frequent changes to its JavaScript and CSS.
Setting a file-splitting strategy
The first question is whether to put all the JavaScript and CSS into one file or to split it across multiple files. From the standpoint of reducing network requests, a single file is better and many files are worse. But from the standpoint of parallel downloads, IE and Firefox will by default fetch only two resources at a time from a single domain, so either extreme can give a poor user experience: all of the files must finish downloading before the user sees a properly styled page. Flickr takes a middle path, splitting its JavaScript and CSS into several sub-files while keeping the total number of files as small as possible. This adds complexity for developers, but the performance benefit is enormous.
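Flickr's actual build tooling is not described here, so the following is only a minimal sketch of the middle-ground approach: concatenate many source files into a handful of bundles. The bundle names and groupings are invented for illustration.

```python
import pathlib

# Hypothetical bundle layout: a few bundles rather than one giant file
# or dozens of tiny ones. Names and groupings are made up for the example.
BUNDLES = {
    "core.js": ["lib/dom.js", "lib/events.js", "lib/ajax.js"],
    "photo_page.js": ["pages/photo.js", "widgets/notes.js"],
}

def build_bundles(src_root: str, out_root: str) -> None:
    """Concatenate source files into a small number of bundles."""
    src = pathlib.Path(src_root)
    out = pathlib.Path(out_root)
    out.mkdir(parents=True, exist_ok=True)
    for bundle_name, parts in BUNDLES.items():
        pieces = [(src / part).read_text() for part in parts]
        # A semicolon between files guards against scripts that omit one.
        (out / bundle_name).write_text("\n;\n".join(pieces))

if __name__ == "__main__":
    build_bundles("js_src", "js_build")
```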
Compression optimization
Compressing site content is, without question, a common web optimization, but it does not always deliver the expected results. The mod_gzip module consumes CPU not only on the server but also on the client, and the temporary files mod_gzip creates while compressing are written to disk, which causes serious disk I/O problems. Flickr instead uses the mod_deflate module available in httpd 2.x and later, which performs the compression in memory. mod_deflate is not available for httpd 1.x, but performance there can be improved indirectly by putting the temporary files on a RAM disk.
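mod_deflate itself is configured inside Apache, but the core idea, compressing the response body entirely in memory instead of routing it through temporary files on disk, can be sketched in a few lines of Python. The handler shape below is invented for illustration.

```python
import gzip

def compress_response(body: bytes, accept_encoding: str) -> tuple[bytes, dict]:
    """Gzip a response body in memory, in the spirit of mod_deflate,
    rather than writing a temporary compressed file to disk."""
    headers = {"Vary": "Accept-Encoding"}
    if "gzip" in accept_encoding:
        body = gzip.compress(body)  # stays in memory: no temp file, no disk I/O
        headers["Content-Encoding"] = "gzip"
    headers["Content-Length"] = str(len(body))
    return body, headers
```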
That said, mod_gzip is not useless: it works well for pre-compressed files. Compression strategy also matters. Compressing image files is not worthwhile (Flickr serves a huge number of images, and they gain little from gzip), so Flickr compresses only its JavaScript and CSS. Newer versions of mod_gzip can automatically keep the compressed copies of static files up to date via the mod_gzip_update_static option, although Cal points out that this feature can misbehave in some older browsers.
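The mod_gzip_update_static behaviour lives inside Apache, but the underlying idea, pre-compressing static JavaScript and CSS once at deploy time (and leaving images alone) so the server never compresses the same file twice, can be approximated with a small script. The directory layout is hypothetical.

```python
import gzip
import pathlib

COMPRESSIBLE = {".js", ".css"}  # images gain little from gzip, so skip them

def precompress_static(root: str) -> None:
    """Write a .gz sibling next to each JS/CSS file so the web server can
    serve the pre-compressed copy instead of compressing on every request."""
    for path in pathlib.Path(root).rglob("*"):
        if path.suffix in COMPRESSIBLE:
            gz_path = path.with_name(path.name + ".gz")
            gz_path.write_bytes(gzip.compress(path.read_bytes()))
```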
The other major lever is compressing the content itself. For JavaScript this means stripping comments, collapsing whitespace, and using a compact syntax (which is why Google's scripts are so compact and so hard to read). JavaScript mangled this way is hard to process safely with naive text substitution, so Flickr uses the Dojo compressor, which works from a real parse tree; it is cheap to run and transparent to end users. That covers JavaScript; CSS is comparatively simple. With plain regular-expression substitution (for example, replacing runs of whitespace with a single space), a compression ratio of up to 50% can be achieved.
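Flickr's exact substitution rules are not given, but a regex-based CSS minifier in the spirit described above might look like this:

```python
import re

def minify_css(css: str) -> str:
    """Shrink CSS with simple regular-expression substitutions:
    strip comments, collapse whitespace, tighten around punctuation."""
    css = re.sub(r"/\*.*?\*/", "", css, flags=re.S)  # drop /* ... */ comments
    css = re.sub(r"\s+", " ", css)                   # collapse runs of whitespace
    css = re.sub(r"\s*([{}:;,])\s*", r"\1", css)     # no spaces around punctuation
    return css.strip()

print(minify_css("body {  color: #fff;  /* white text */  margin: 0 ; }"))
# -> body{color:#fff;margin:0;}
```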
Caching optimization
Flickr's developers make full use of the ETag and Last-Modified mechanisms defined by the HTTP 1.1 specification to improve caching efficiency. Notably, Cal describes an ETag trick for load-balanced setups: Apache can be configured to derive the ETag from the file's modification time and size, whereas by default it also uses the file's inode, which differs from server to server. This is not perfect, though, because it still interacts with If-Modified-Since handling.
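In Apache this is controlled by the FileETag directive; as a language-neutral sketch, building an ETag from modification time and size only (so every server behind the load balancer agrees on it) and answering conditional requests looks roughly like this. The function shapes are invented for illustration.

```python
import os
from email.utils import parsedate_to_datetime

def make_etag(path: str) -> str:
    """ETag from mtime + size only (no inode), so all load-balanced
    servers produce the same value for the same file."""
    st = os.stat(path)
    return f'"{int(st.st_mtime):x}-{st.st_size:x}"'

def is_fresh(path: str, if_none_match: str | None, if_modified_since: str | None) -> bool:
    """True if the client's cached copy is still valid (i.e. respond 304)."""
    if if_none_match is not None:
        return if_none_match == make_etag(path)
    if if_modified_since is not None:
        since = parsedate_to_datetime(if_modified_since).timestamp()
        return int(os.stat(path).st_mtime) <= since
    return False
```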
Flexible use of mod_rewrite
Flickr's site application is said to be built daily (a daily build), which would be hard to imagine without a flexible deployment mechanism. And for a site of Flickr's scale, coping with simultaneous content changes is a daunting challenge. Their weapon of choice is mod_rewrite: by configuring URL rewrite rules, it is easy to switch between different environments. It sounds simple, but it takes a certain depth of web expertise to pull off.
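Flickr's real rewrite rules are not published; the sketch below only mimics what regex-driven URL rewriting does, mapping public URLs onto whichever internal environment is currently active. All rules and paths are invented.

```python
import re

# Invented rules: map public URLs onto the active backend environment,
# the way mod_rewrite rules would inside Apache.
REWRITE_RULES = [
    (r"^/photos/(\d+)$", r"/app/current/photo.php?id=\1"),
    (r"^/static/(.*)$", r"/builds/current/static/\1"),
]

def rewrite(url: str) -> str:
    """Return the first matching rewrite, or the URL unchanged."""
    for pattern, replacement in REWRITE_RULES:
        new_url, count = re.subn(pattern, replacement, url)
        if count:
            return new_url
    return url

print(rewrite("/photos/42"))        # /app/current/photo.php?id=42
print(rewrite("/static/site.css"))  # /builds/current/static/site.css
```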
It is through these main techniques that Flickr achieves the dream-like performance we see.
BTW: Since Flickr has no servers in mainland China, access speed for mainland users is, needless to say, poor :(
--end.