Objective:
In the same network environment, given two sites that can both meet your needs, one that loads in a snap and one that struggles for ages, which would you choose? Research shows that users are most satisfied when a page opens within 2-5 seconds; if they have to wait more than 10 seconds, 99% of users will close the page. Perhaps that alone doesn't move you, so here is another set of figures: an extra 400ms of delay reduced Google searches per user by 0.59%; every additional 100ms of latency cost Amazon roughly 1% in sales; and a 400ms delay caused Yahoo's traffic to drop by 5-9%. Page load speed seriously affects the user experience, and it can decide whether a site survives at all.
Some people may say that site performance is the back-end engineer's business and has little to do with the front end. To that I can only say: too young, too simple. In fact, only 10%~20% of end-user response time is spent getting the HTML document from the web server to the browser. Where does the rest of the time go? Take a look at the golden rule of performance:
Only 10%~20% of end-user response time is spent downloading the HTML document. The remaining 80%~90% is spent downloading all the components in the page.
Next, let's look at how front-end engineers can improve page load speed.
1. Reduce HTTP requests
As noted above, 80%~90% of the time goes to the HTTP requests made for all the components in the page. Therefore, the simplest way to improve response time is to reduce the number of HTTP requests.
Image maps:
Suppose the navigation bar holds five images, each of which links to a different page; loading those five navigation images takes 5 HTTP requests. Using an image map instead, only one HTTP request is needed.
Server-side image map: every click is submitted to the same URL together with the x and y coordinates of the click, and the server maps the coordinates to the right response.
Client-side image map: clicks are mapped directly to actions, for example:
<img src="planets.gif" alt="Planets" usemap="#planetmap"/>
<map name="planetmap" id="planetmap">
    <area shape="circle" coords="180,139,14" href="venus.html" alt="Venus"/>
    <area shape="circle" coords="129,161,10" href="mercur.html" alt="Mercury"/>
    <area shape="rect" coords="0,0,110,260" href="sun.html" alt="Sun"/>
    <area shape="rect" coords="140,0,110,260" href="star.html" alt="Star"/>
</map>
Disadvantage of image maps: rectangles and circles are easy to specify when defining the coordinate areas, but other shapes are difficult to specify by hand.
CSS Sprites
CSS sprites are sometimes translated literally as "CSS elves", but the name matters less than the idea: merge multiple images into one image, then use CSS background positioning to show only the right slice on the page. For image-heavy sites in particular, using CSS sprites to cut down the number of images brings a real speed improvement.
PS: Using CSS sprites can also reduce the total download size. You might expect the merged image to be larger than the sum of the separate images because of the whitespace added between them, but in practice the merged image is usually smaller, because it eliminates the per-image overhead such as color tables and format metadata.
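As a minimal sketch, assume the icons have been merged into a hypothetical sprite.png in which each icon occupies a 32x32 cell; the CSS then shifts the background so each element shows only its own cell:

.icon {
    display: inline-block;
    width: 32px;
    height: 32px;
    background: url("sprite.png") no-repeat;  /* one HTTP request covers every icon */
}
.icon-home   { background-position: 0 0; }      /* first cell */
.icon-search { background-position: -32px 0; }  /* second cell */
.icon-user   { background-position: -64px 0; }  /* third cell */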
Font icons
Wherever an icon can be drawn as a glyph, a font icon can replace an image. Font icons cut down the number of images used, and therefore the number of HTTP requests, and they can be styled with CSS (color, size, and so on), so why not use them?
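A minimal sketch, assuming a hypothetical icon font file myicons.woff2 whose code point \e600 is a star glyph; because the icon is rendered as text, its color and size restyle freely in CSS:

@font-face {
    font-family: "MyIcons";
    src: url("myicons.woff2") format("woff2");
}
.icon-star::before {
    font-family: "MyIcons";
    content: "\e600";   /* code point of the star glyph in the hypothetical font */
    font-size: 20px;
    color: #f60;
}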
Merging scripts and style sheets
Merging multiple style sheets or script files into a single file reduces the number of HTTP requests and thus shortens the response time.
However, merging everything into one file is hard for many people to accept, especially those who write modular code, and merging all style or script files may mean a page loads more styles or scripts than it actually needs, increasing the download size for visitors who only view one (or a few) pages of the site. So weigh the pros and cons; the sketch below shows the before and after.
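For illustration only (the file names are hypothetical), the first page costs five HTTP requests for its scripts, the second costs one; bundle.js is simply the five files concatenated by a build step:

<!-- before: five requests -->
<script src="util.js"></script>
<script src="dom.js"></script>
<script src="ajax.js"></script>
<script src="menu.js"></script>
<script src="app.js"></script>

<!-- after: one request -->
<script src="bundle.js"></script>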
2. Use a CDN
If the application web server is closer to the user, the response time of a single HTTP request improves. Similarly, if the component web servers are closer to the user, the response time of the many component HTTP requests improves.
A CDN (Content Delivery Network) is a set of web servers distributed across multiple geographic locations, used to deliver content to users more efficiently. The server chosen to deliver content to a particular user is typically selected by a measure of network proximity; for example, the CDN may pick the server with the fewest network hops or the one with the quickest response time.
A CDN also provides data backup, extended storage capacity, and caching, and it helps absorb traffic peaks.
Disadvantages of a CDN:
1. Response time may be affected by traffic from other websites, because a CDN provider shares its web servers among all of its customers.
2. If the CDN's service quality drops, the quality of your service drops with it.
3. You cannot directly control the component servers.
3. Add an Expires header
A first-time visitor to a page makes many HTTP requests, but with a far-future Expires header those components become cacheable, so later visits avoid unnecessary HTTP requests and the page loads faster.
Through the Expires header, the web server tells the client that it may use the current copy of a component until the specified time. For example:
Expires: Fri, 07:41:53 GMT
Disadvantages of Expires: it requires the server's and client's clocks to be strictly synchronized, and the expiration date has to be watched and pushed forward once it passes.
HTTP/1.1 introduced the Cache-Control header to overcome these limitations of Expires; its max-age directive specifies how long (in seconds) a component may be cached.
Cache-Control: max-age=12345600
If both Cache-Control max-age and Expires are present, max-age overrides the Expires header.
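A minimal Node.js sketch (example code, not from the article) that sends both headers for a static component, giving it roughly a one-year lifetime; HTTP/1.1 clients honor max-age, older clients fall back to Expires:

// Hypothetical example: serve a component with far-future caching headers
const http = require("http");

http.createServer((req, res) => {
    const oneYear = 365 * 24 * 60 * 60;                    // seconds
    res.setHeader("Cache-Control", "max-age=" + oneYear);  // HTTP/1.1
    res.setHeader("Expires", new Date(Date.now() + oneYear * 1000).toUTCString()); // HTTP/1.0 fallback
    res.end("/* component body */");
}).listen(8080);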
4. Compress components
Starting with HTTP/1.1, web clients can declare support for compression through the Accept-Encoding header of an HTTP request:
Accept-Encoding: gzip, deflate
If the web server sees this header in the request, it compresses the response using one of the methods the client listed, and it notifies the client through the Content-Encoding response header:
Content-Encoding: gzip
Proxy Cache
When the browser sends its requests through a proxy, the situation changes. Suppose the first request the proxy receives for a given URL comes from a browser that does not support gzip. This is the proxy's first request for that URL and its cache is empty, so it forwards the request to the server; the uncompressed response is cached by the proxy and sent on to the browser. Now suppose the second request for the same URL comes from a browser that does support gzip. The proxy serves the uncompressed content from its cache, and the chance to compress is lost. Conversely, if the first browser supports gzip and the second does not, the compressed copy in the proxy cache is handed to every subsequent browser, whether it supports gzip or not.
Workaround: have the web server add a Vary header to the response, telling the proxy to cache different responses depending on one or more request headers. Because the decision to compress is based on the Accept-Encoding request header, Accept-Encoding must be listed in the server's Vary response header:
Vary: Accept-Encoding
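A minimal Node.js sketch of this negotiation (example code, not from the article): compress only when the client advertises gzip, and always send Vary: Accept-Encoding so proxies cache the compressed and uncompressed copies separately:

const http = require("http");
const zlib = require("zlib");

http.createServer((req, res) => {
    const body = "<html><body>Hello</body></html>";
    res.setHeader("Vary", "Accept-Encoding");            // tell proxies the response varies
    const accepts = req.headers["accept-encoding"] || "";
    if (accepts.includes("gzip")) {
        res.setHeader("Content-Encoding", "gzip");
        res.end(zlib.gzipSync(body));                    // compressed copy for capable clients
    } else {
        res.end(body);                                   // plain copy for older clients
    }
}).listen(8080);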
5. Put style sheets at the top
First of all, putting style sheets in the document head does not change the actual page load time much, but it shortens the time before the first screen appears and lets the page render progressively, which improves the user experience and prevents the "white screen".
We always want the page to show content as early as possible and give the user visual feedback, which matters a great deal for users on slow connections.
Placing style sheets at the bottom of the document stops the browser from rendering content progressively. To avoid having to repaint elements when their styles arrive, the browser blocks progressive rendering, producing the "white screen". This comes from how browsers behave: building the render tree while style sheets are still loading would be wasted work, so nothing is rendered until every style sheet has been downloaded and parsed.
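In practice this simply means linking the CSS inside <head> rather than at the end of <body> (main.css here is a placeholder name):

<head>
    <meta charset="utf-8"/>
    <link rel="stylesheet" href="main.css"/>  <!-- styles are available before the content renders -->
</head>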
6. Put scripts at the bottom
As with style sheets, moving scripts to the bottom does not change the actual page load time much, but it shortens the time before the first screen appears and lets the page content render progressively.
Downloading and executing JS blocks construction of the DOM tree (strictly speaking, it interrupts updates to the DOM tree), so a script tag placed inside the first-screen HTML cuts off the first-screen content that follows it.
While a script is downloading, parallel downloads are disabled: even across different host names, no other downloads are started. The browser waits because the script may modify the page, and it must also guarantee that scripts execute in the correct order, since a later script may depend on an earlier one and out-of-order execution could cause errors.
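A sketch of the recommended placement, with a hypothetical app.js loaded just before the closing body tag so the visible content is parsed and rendered first:

<body>
    <div id="content">... page content ...</div>
    <!-- scripts last: they no longer block rendering of the content above -->
    <script src="app.js"></script>
</body>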
7. Avoid CSS expressions
CSS expressions are a powerful (and dangerous) way to set CSS properties dynamically. They were supported from IE5 onward and were abandoned as of IE8.
Move the mouse around a little and the expression can easily be evaluated thousands of times; the risk is obvious.
How to avoid them:
One-time expressions: have the expression overwrite itself with an explicit value the first time it is evaluated.
Event handlers:
Use the JS event-handling mechanism to change element styles dynamically, so the number of times the function runs stays within a controllable range, as the sketch below shows.
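For example, an IE-only rule such as width: expression(document.body.clientWidth < 600 ? "600px" : "auto") is re-evaluated on every mouse move; the hypothetical sketch below (the element id "main" is made up) achieves the same effect with a handler that runs only when the window actually resizes:

// Run once at load, then only on resize, instead of thousands of times per mouse move
function fixWidth() {
    var el = document.getElementById("main");   // hypothetical element
    el.style.width = document.body.clientWidth < 600 ? "600px" : "auto";
}
window.onresize = fixWidth;
fixWidth();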
8. Use external JavaScript and CSS
Inlining scripts or styles reduces HTTP requests, which in theory should speed up page loading. In practice, however, when scripts and styles are pulled in as external files the browser can cache them, so later visits use the cache directly, while the HTML document itself becomes smaller, which improves loading speed.
Influencing factors:
1. The fewer page views each user generates, the stronger the case for inlining scripts and styles. For example, if a user visits your site only once or twice a month, inlining is the better choice. If users generate many page views, cached styles and scripts greatly reduce download time and improve page load speed.
2. If the components used across the different pages of your site are roughly the same, external files increase how much those components are reused.
Post-onload download
Sometimes we want styles and scripts inlined in the current page but still want external files available for the following pages. In that case we can download the external components dynamically after the page has loaded, so they are already in the cache when the user moves on.
On such a page, the JavaScript and CSS are loaded twice (inline and externally), so the code must tolerate being defined twice. Placing these components in an invisible iframe is a better solution.
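A minimal sketch of the post-onload download: once the current page has finished loading, fetch the script and style sheet the next page will need so they sit in the browser cache (the file names are hypothetical):

window.onload = function () {
    setTimeout(function () {
        var js = document.createElement("script");
        js.src = "next-page.js";                 // cached for the next navigation
        document.body.appendChild(js);

        var css = document.createElement("link");
        css.rel = "stylesheet";
        css.href = "next-page.css";
        document.head.appendChild(css);
    }, 1000);  // small delay so it does not compete with the current page's resources
};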
9. Reduce DNS lookups
When we type a URL (for example www.linux178.com) into the browser's address bar and press Enter, what happens before the page appears?
Domain name resolution -> TCP three-way handshake -> once the TCP connection is up, the browser sends the HTTP request -> the server responds and the browser receives the HTML -> the browser parses the HTML and requests the resources it references (JS, CSS, images, and so on) -> the browser renders the page for the user.
Domain name resolution is the first step of page loading, so how is a domain name resolved? Take Chrome as an example:
1. Chrome first searches the browser's own DNS cache (the cache lifetime is short, about one minute, and it holds at most 1000 entries) to see whether it contains a non-expired entry for www.linux178.com. If it does, resolution ends here.
Note: you can inspect Chrome's own DNS cache at chrome://net-internals/#dns.
2. If the browser's own cache holds no matching entry, Chrome searches the operating system's DNS cache; if a non-expired entry is found, resolution stops here.
Note: to view the operating system's DNS cache on Windows, run ipconfig /displaydns at the command line.
3. If the Windows DNS cache has nothing either, the system reads the hosts file (located at C:\Windows\System32\drivers\etc) to see whether it maps this domain name to an IP address; if so, resolution succeeds.
4. If no entry is found in the hosts file, the browser triggers a DNS query to the locally configured preferred DNS server (usually provided by the ISP, though a public resolver such as Google's can be used instead). The query is sent over UDP to port 53 and is recursive, meaning the ISP's DNS server is expected to hand back the final IP address. That server first checks its own cache; if it holds a non-expired entry, resolution succeeds. Otherwise the ISP's server resolves the name iteratively on our behalf: it asks a root DNS server (the addresses of the 13 root servers are built in) for the IP address of www.linux178.com; the root server does not know it, but replies with the address of the .com top-level domain servers. The ISP's server then asks a .com server, which again does not know the answer but returns the address of the DNS server for linux178.com (usually operated by the domain registrar). Finally the ISP's server asks the linux178.com DNS server, which does hold the record and returns the IP address. The result is passed back to the ISP's DNS server, then to the operating system kernel, then to the browser, which at last has the IP address of www.linux178.com. A single step, and this much work.
Note: normally the following steps are not performed.
If the four steps above all fail to resolve the name, the following steps are tried:
5. The operating system looks in the NetBIOS name cache (which lives on the client computer). What is in this cache? The names and IP addresses of the computers it has successfully communicated with recently. When can this step succeed? When the target machine was successfully contacted only a few minutes ago.
6. If step 5 fails, the WINS server is queried (the server that maps NetBIOS names to IP addresses).
7. If step 6 fails, the client falls back to a broadcast lookup.
8. If step 7 fails, the client reads the LMHOSTS file (in the same directory as the hosts file).
If step 8 still fails, the resolution is declared a failure and the client cannot communicate with the target computer. As long as any one of these eight steps succeeds, communication with the target computer can proceed.
DNS lookups have a cost: it usually takes the browser 20~120 milliseconds to look up the IP address of a given domain name, and until the lookup completes the browser cannot download anything from that host. So how do we reduce resolution time and speed up page loading?
When the client's DNS caches (browser and operating system) are empty, the number of DNS lookups equals the number of unique host names in the page, including the host names in the page URL, scripts, style sheets, images, Flash objects, and so on. Reducing the number of unique host names reduces the number of DNS lookups.
However, fewer unique host names also means less potential for parallel downloading (the HTTP/1.1 specification suggests two parallel downloads per host name, though browsers actually open more), so reducing host names and keeping downloads parallel pull in opposite directions and must be balanced. The recommendation is to spread components across at least two but no more than four host names, which limits DNS lookups while still allowing a high degree of parallelism.
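As an illustration (the host names are hypothetical), splitting components across two host names keeps DNS lookups low while still allowing more parallel downloads than a single host:

<img src="http://static1.example.com/logo.png" alt="logo"/>
<img src="http://static2.example.com/banner.png" alt="banner"/>
<link rel="stylesheet" href="http://static1.example.com/main.css"/>
<script src="http://static2.example.com/app.js"></script>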
10. Minify JavaScript
Minification
Minification removes unnecessary characters from the code to reduce file size and shorten load time. When code is minified, comments and unneeded whitespace characters (spaces, newlines, tabs) are stripped out, making the whole file smaller.
Obfuscation
Obfuscation is another way of processing source code. Like minification it removes comments and whitespace, but it also rewrites the code: function and variable names are converted to shorter strings, making the code smaller still and harder to read. This is usually done to make reverse engineering harder, and it improves performance as a side effect.
Disadvantages:
Obfuscation itself is more complex than minification and may introduce errors.
Symbols that must not change (such as keywords, reserved words, or names referenced by other code) have to be protected so the obfuscator does not rename them.
Obfuscated code is hard to read, which makes debugging problems in a production environment more difficult.
Gzip compression was covered above, and even when files are gzipped it is still worth minifying the code. In general, gzip yields larger savings than minification, but in production the two are combined to maximize the savings.
Minifying CSS
The savings from minifying CSS are generally smaller than for JavaScript, because CSS tends to contain fewer comments and less whitespace.
Besides removing whitespace and comments, CSS can be optimized further:
Merge identical classes;
Remove unused classes;
Use shorthand notation: abbreviate color values, write 0 instead of 0px, and merge declarations that can be merged; when minifying, the semicolon after the last declaration in a rule can also be dropped, as the sketch below shows.
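A small before/after sketch of these shorthand savings (the class name is made up):

/* before */
.box {
    color: #ffffff;
    margin-top: 0px;
    margin-right: 0px;
    margin-bottom: 0px;
    margin-left: 0px;
}

/* after: shorthand color, 0 without units, merged margin, last semicolon dropped */
.box{color:#fff;margin:0}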
11. Avoid redirects
What is a redirect?
A redirect reroutes the user from one URL to another.
Common types of redirect:
301: permanent redirect, used mainly when a site's domain name changes. It tells search engines that the domain has changed, so the data and inbound links accumulated by the old domain are transferred to the new one and the site's ranking is not hurt by the change.
302: temporary redirect, used mainly after a POST request to tell the browser to move on to a new URL.
304: Not Modified. Used when the browser holds a copy of a component in its cache and that copy has expired: the browser issues a conditional GET request, and if the component on the server has not changed, the server returns a 304 status code with no body, telling the browser to reuse its copy and shrinking the response.
How do redirects hurt performance?
When a page is redirected, delivery of the entire HTML document is delayed. Nothing is rendered on the page, and no components are downloaded, until the HTML document arrives.
Consider a practical example from ASP.NET WebForms development. A beginner can easily make the mistake of wiring a page link through a server control, for example a Button whose code-behind Click event calls Response.Redirect(""). All the button does is go to another URL, yet the path is very inefficient: clicking the button sends a POST request to the server, the server processes Response.Redirect("") and returns a 302 response, and the browser then issues a GET request for the URL in that response. The correct approach is simply to use an <a> tag on the HTML page for the link, which avoids the redundant POST and the redirect.
Scenarios where redirects are used
1. Tracking internal traffic
Redirects are often used to track where user traffic goes. When you run a portal home page and want to know where users head after leaving it, you can use redirects. For example, the home-page news link of a site could be http://a.com/r/news; clicking it produces a 301 response whose Location is set to http://news.a.com. By analyzing a.com's web server logs you can tell where people go after leaving the home page.
We already know how redirects hurt performance. For better efficiency, internal traffic can instead be tracked with the Referer log: every HTTP request carries a Referer identifying the page it came from (except for actions such as opening a bookmark or typing the URL directly), so recording the Referer of each request tracks the traffic without sending redirects to the user, improving response time.
2. Tracking outbound traffic
Sometimes links take users away from your site, and in that case the Referer approach is not practical.
Redirects can also solve the outbound-traffic problem. Take Baidu search as an example: Baidu tracks clicks by wrapping each result link in a 302 redirect. Searching for the keywords "front-end performance optimization", one of the result URLs is https://www.baidu.com/link?url=pDjwTfa0IAf_frbnlw1qldtq27ybujwp9jpn4q0qsjdntgtdbk3ja3jyyn2cgxr5ataywg4si6v1nypksyliswjifufqdinhpvn4qe-ulgg&wd=&eqid=9c02bd21001c69170000000556ece297. Even when the search results do not change, this string changes every time; exactly what it does I am not yet sure. (My guess: the string encodes the URL to be visited, and clicking it triggers a 302 redirect that sends the page on to the target. Corrections welcome.)
Besides redirects, another option is a beacon: an HTTP request whose URL carries the tracking information, which can then be extracted from the beacon web server's access log. The beacon is usually a 1px*1px transparent image, though a 204 response is even better because it is smaller, is never cached, and never changes the state of the browser.
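A minimal sketch of a beacon (the URL and parameters are hypothetical): the request exists only so that the outbound click shows up in the beacon server's access log:

function logOutboundClick(targetUrl) {
    // fire-and-forget request; a 204 No Content reply is the cheapest possible response
    new Image().src = "/beacon?to=" + encodeURIComponent(targetUrl) + "&t=" + Date.now();
}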
12. Delete duplicate scripts
When a team develops a project, the same script can easily end up on a page more than once, because pages and components are added by different developers who each pull it in.
Duplicate scripts cause unnecessary HTTP requests (when the script is not cached), waste time executing the same JavaScript again, and may cause errors.
How do you avoid duplicate scripts?
1. Organize the scripts well. Duplicates can appear because different scripts include the same script; some of those inclusions are necessary and some are not, so the scripts need a sound organization.
2. Implement a script manager module.
The manager first checks whether a script has already been inserted and returns immediately if it has. If the script depends on other scripts, the dependencies are inserted first, and only then is the script itself added to the page. A getVersion step inspects the script and returns its file name with the version number appended, so that when the script's version changes, the browser's previously cached copy is invalidated.
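A hedged sketch of such a script manager (the function names, file names, and version table are all hypothetical): it skips scripts that are already inserted, inserts dependencies first, and appends a version number so a new release busts the old cache:

var insertedScripts = {};

function getVersion(file) {
    // hypothetical lookup table maintained by the build process
    var versions = { "menu.js": "1.0.1", "events.js": "1.0.0" };
    return file.replace(".js", "-" + (versions[file] || "0.0.0") + ".js");
}

function insertScript(file, dependencies) {
    if (insertedScripts[file]) return;             // already on the page: no duplicate request
    (dependencies || []).forEach(function (dep) {  // insert dependencies first
        insertScript(dep);
    });
    insertedScripts[file] = true;
    var s = document.createElement("script");
    s.src = getVersion(file);                      // e.g. menu-1.0.1.js
    document.head.appendChild(s);
}

insertScript("menu.js", ["events.js"]);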
13. Configure ETags
Cache invalidation came up just above; the ETag is one of the mechanisms behind validating a cached copy.
What is an ETag?
An entity tag (ETag) is a string that uniquely identifies a specific version of a component. It is the mechanism web servers use to validate cached components, and it is usually constructed from some attributes of the component.
Conditional GET Request
If a cached component has expired, the browser must check that it is still valid before reusing it. It does so by sending a conditional GET request to the server; if the server decides the cached copy is still valid, it returns a 304 response telling the browser the component can be reused.
How does the server decide whether the cache is still valid? There are two ways:
ETag (entity tag);
Last-modified date;
Last-Modified date
The origin server returns the component's most recent modification date through the Last-Modified response header.
Here's an example:
When we visit www.google.com.hk with an empty cache, the browser has to download Google's logo, so it sends an HTTP request like this:
Request:
GET googlelogo_color_272x92dp.png HTTP/1.1
Host: www.google.com.hk

Response:
HTTP/1.1 200 OK
Last-Modified: Fri, Sep 22:33:08 GMT
When the same component is needed again and the cached copy has expired, the browser sends a conditional GET request like this:
Request:
GET googlelogo_color_272x92dp.png HTTP/1.1
If-Modified-Since: Fri, Sep 22:33:08 GMT
Host: www.google.com.hk

Response:
HTTP/1.1 304 Not Modified
Entity tags
The ETag provides another way to check whether the component in the browser's cache matches the one on the origin server. Quoting the example from the book:
A request with no cache:
Request:
GET /i/yahoo.gif HTTP/1.1
Host: us.yimg.com

Response:
HTTP/1.1 200 OK
Last-Modified: Tue, 12 Dec 2006 03:03:59 GMT
ETag: "10c24bc-4ab-457e1c1f"
Requesting the same component again:
Request:
GET /i/yahoo.gif HTTP/1.1
Host: us.yimg.com
If-Modified-Since: Tue, 12 Dec 2006 03:03:59 GMT
If-None-Match: "10c24bc-4ab-457e1c1f"

Response:
HTTP/1.1 304 Not Modified
Why was the ETag introduced?
The ETag mainly addresses problems that Last-Modified cannot solve:
1. Some files change periodically but their content does not (only the modification time changes); in that case we do not want the client to think the file has changed and fetch it again.
2. Some files change very frequently, for example several times within one second. If-Modified-Since has a granularity of one second (Unix mtime is accurate only to the second), so such changes cannot be detected.
3. Some servers cannot obtain a file's last modification time accurately.
The problem with ETags
The problem is that ETags are typically constructed from attributes that are specific to the particular server hosting the site. With a server cluster, the browser may fetch the original component from one server and later send its conditional GET request to a different one, and the ETags will not match. For example, if ETags are generated from inode-size-timestamp, the inode differs across servers even when the file's size, permissions, and timestamp are identical, because the file system's inode (which records the file's type, owner, group, access mode, and so on) is particular to each machine.
Best practices
1. If Last-Modified alone causes you no problems, simply remove the ETag; the Google search home page, for example, does not use one.
2. If you do need ETags, configure their value so that attributes which break validation across a cluster are excluded, for example generating the ETag from size and timestamp only; a content-based approach is sketched below.
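One way to keep ETags valid across a cluster is to derive them from the content alone, so that every server computes the same value; a hedged Node.js sketch:

const crypto = require("crypto");

// Same bytes produce the same ETag on every server in the cluster:
// no inode or other per-server attribute is involved.
function contentETag(body) {
    return '"' + crypto.createHash("md5").update(body).digest("hex") + '"';
}

// e.g. res.setHeader("ETag", contentETag(fileContents));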
14. Make Ajax cacheable
Wikipedia defines Ajax as follows:
Ajax, short for "Asynchronous JavaScript and XML", refers to a set of browser-side web development techniques that combine a number of technologies. The concept of Ajax was put forward by Jesse James Garrett.
In a traditional web application, the user fills out a form and submits it, which sends a request to the web server. The server receives and processes the form, then sends back a whole new page. This wastes a lot of bandwidth, because most of the HTML in the old and new pages is identical, and since every interaction requires a round trip to the server, the application's response time depends on the server's response time, making the interface feel much slower than a native application.
By contrast, an Ajax application can send and retrieve only the data it actually needs, and the client processes the server's response with JavaScript. Because the amount of data exchanged between server and browser is greatly reduced (to roughly 5% of what it was [citation needed]), the server responds faster; and because much of the processing can happen on the requesting client machine, the web server's load drops as well.
Like DHTML or LAMP, Ajax does not refer to a single technology but to a family of related technologies used together. Although the name contains XML, the data format can just as well be JSON, which reduces the data volume further (sometimes called AJAJ). Nor do the client and server strictly have to communicate asynchronously. Ajax-based "derivative/composite" technologies, such as AFLAX, have also appeared.
The goal of Ajax is to break the web's start-stop style of interaction; showing the user a white screen and then redrawing the entire page is not a good user experience.
Asynchronous vs. Instant
One obvious benefit of Ajax is that it gives the user immediate feedback, because it requests information from the backend web server asynchronously.
Whether the user still has to wait depends on whether the Ajax request is passive or active. Passive requests are made in advance for future use; active requests are made in response to the user's current action.
Which Ajax requests can be cached?
POST requests are not cached on the client; each one has to be sent to the server for processing, and each returns status code 200. (The data can still be cached on the server side to speed up processing.)
GET requests can be cached by the client (and are by default): unless a different address is requested, an Ajax GET to the same address is not reprocessed by the server, which returns a 304 instead.
Using the cache for Ajax requests
When making Ajax requests, prefer the GET method wherever possible, so the client-side cache can be used and the request completes faster.
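A small sketch (the endpoint is hypothetical): issued as a GET, the browser can reuse its cached copy or revalidate it with a conditional request and a 304, whereas the same call issued as a POST is re-sent in full every time:

var xhr = new XMLHttpRequest();
xhr.open("GET", "/api/config");   // cacheable; a POST to the same endpoint would not be
xhr.onload = function () {
    console.log(xhr.status, xhr.responseText);  // 200 from cache/network, or filled via a 304 revalidation
};
xhr.send();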