Let me explain why I'm writing this article and how I got tangled up in this "little problem". First of all, enabling gzip compression for static files is very good for a site's access speed: it effectively reduces the Time-taken when spiders crawl static pages, and unlike compression of dynamic files it will not cause the 200 0 64 crawl problem with the Baidu spider. On the one hand, a faster site improves the user experience; on the other hand, the Google Webmaster blog made it clear this year that site speed is one of the ranking factors. And for a Chinese site on a foreign host doing Baidu optimization, a poor Time-taken means the Baidu spider crawls fewer inner pages. Guoping also mentioned in his blog post on how page loading speed affects SEO that the total time a spider spends crawling a site in a given period is fixed, so the faster each page is fetched, the more pages get crawled, and vice versa.
OK, on to the main text. In question two of my previous article "Spider crawl static page and trigger gzip compression experimental results", I speculated about how the server saves the gzip-compressed version of a static page. After being puzzled for a long time, I found that the real reason the two hosts returned different gzip results is the IIS version, not, as I had guessed, a compression cache folder that was set too small.
In fact, IIS7 changed static compression quite a bit compared with IIS6. In IIS6, static compression is performed on a separate thread: when an HTTP request is received, the first version of the HTML sent to the browser is uncompressed, while at the same time IIS6 starts another thread to compress the file and saves the compressed version long-term in the compression cache folder. From then on, that is, on an IIS6 server, once compression is complete, any HTTP request for that static file is answered directly with the compressed version pulled from the cache folder.
But in IIS7, compression is done on the main thread, and to save the cost of compressing everything, IIS7 does not keep a long-term compressed copy for every requested static file, only for those static files that are frequently accessed by users. This is why my first access returned an uncompressed version, a second access shortly afterwards returned a compressed version, and yet a few minutes later the uncompressed version came back again. From this we can infer that IIS7 does not actually save that compressed version to the compression cache folder, but only keeps it in server memory, or temporarily writes it to the cache folder and deletes it after a while.
And how does IIS7 define which files are accessed frequently enough to meet the compression standard? Through two properties under system.webServer/serverRuntime: frequentHitThreshold and frequentHitTimePeriod. If IIS receives requests for a static file that reach the frequentHitThreshold value within the frequentHitTimePeriod window, then IIS7 compresses the static file just like IIS6 and saves the compressed version long-term in the compression cache folder. And if a compressed version of the requested file already exists in the cache folder, IIS7 skips the frequentHitThreshold check and returns the compressed version to the browser directly.
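If you have access to the server, a quick way to see what these two properties are currently set to (I believe the shipped defaults are a frequentHitThreshold of 2 and a frequentHitTimePeriod of 10 seconds, but check your own box) is to list the serverRuntime section with appcmd:

%windir%\system32\inetsrv\appcmd.exe list config -section:system.webServer/serverRuntime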
This design is honestly a pain, but Microsoft's official reply is that it has the advantage of improving server performance... So if you want IIS7 to compress the way IIS6 does, there are two solutions, both of which, of course, come down to modifying the values of frequentHitThreshold and frequentHitTimePeriod:
The first method is to add the following to web.config, setting frequentHitThreshold to 1 and frequentHitTimePeriod to 10 minutes:

<system.webServer>
  <serverRuntime frequentHitThreshold="1"
                 frequentHitTimePeriod="00:10:00" />
</system.webServer>
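One caveat, and this is an assumption based on how IIS usually ships rather than something I have tested on every host: the system.webServer/serverRuntime section is normally locked at the server level, so the web.config approach may throw a configuration error until the section is unlocked with appcmd, something like:

%windir%\system32\inetsrv\appcmd.exe unlock config -section:system.webServer/serverRuntime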
The second method is to open a command prompt, run %windir%\system32\inetsrv\appcmd.exe with the following arguments, and press Enter:

%windir%\system32\inetsrv\appcmd.exe set config -section:system.webServer/serverRuntime -frequentHitThreshold:1
Microsoft's official suggestion is that the less radical approach is not to lower frequentHitThreshold but to raise frequentHitTimePeriod, which is gentler on server performance. One more thing: friends who have a VPS can set this by hand, while virtual host users cannot and will have to ask their provider; I, tragically, cannot change mine. Go give it a try.
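For example (treat the exact value as my own illustration, not Microsoft's recommendation), keeping frequentHitThreshold at its default and stretching the hit window to ten minutes with appcmd should look something like this:

%windir%\system32\inetsrv\appcmd.exe set config -section:system.webServer/serverRuntime -frequentHitTimePeriod:00:10:00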