A website is generally divided into two parts: the front end and the back end. The back end can be understood as implementing the site's functionality, for example user registration or letting users comment on articles. The front end, then, is responsible for presenting that functionality, and the vast majority of the user's access experience comes from the front-end pages.
And what is the purpose of building our website in the first place? Surely it is more than just getting the target audience to visit. The front end is what actually touches the user, so besides optimizing back-end performance, the front-end pages also need performance work; only then can we give our users a better experience. It is a bit like the old question of whether looks matter when choosing a partner, to which a wise man answered: looks decide whether I want to get to know someone's mind, and their mind decides whether I will overlook their looks. Websites are the same: the front-end user experience determines whether users want to use the site's features, and the site's features determine whether users will forgive shortcomings in the front-end experience.
Moreover, good front-end optimization not only saves the business money, it also brings in more users because of the improved experience. Having said all that, how do we actually optimize the performance of our front-end pages?
In general, the Web front end refers to everything in front of the site's business logic, including browser loading, the site's views, image services, CDN services, and so on. The main optimization techniques cover browser access, the use of reverse proxies, CDNs, and the like.
Browser Access Optimization
Figure: the browser's request processing flow.
1. Reduce HTTP requests and configure HTTP caching sensibly
HTTP is a stateless application-layer protocol, which means each HTTP request needs to establish a communication link and transfer data, and on the server side each request typically occupies a separate thread. These connections and threads are expensive, so reducing the number of HTTP requests can noticeably improve access performance.
The main ways to reduce HTTP requests are merging CSS files, merging JavaScript files, and merging images. Combine the JavaScript and CSS that a page needs into single files so that the browser only has to make one request for each. Images can also be merged: combine multiple images into one, and if each image has a different hyperlink, use CSS offsets to respond to mouse clicks and construct the different URLs.
Caching is powerful, and appropriate cache settings can greatly reduce HTTP requests. Take a site's home page as an example: with no browser cache, a visit might issue 78 requests transferring more than 600 KB of data, while a second visit with the browser cache populated might issue only 10 requests transferring just over 20 KB. (Note that a direct F5 refresh behaves differently: the number of requests stays the same, but requests for cached resources get a 304 response from the server, with only headers and no body, which still saves bandwidth.)
What counts as a reasonable setting? The principle is simple: the more that is cached, and the longer it stays cached, the better. For example, an image resource that rarely changes can be given a far-future expiration date via the Expires header, while a resource that changes infrequently but may still change can use Last-Modified for request validation. As far as possible, let resources stay in the cache longer. The detailed settings and principles of HTTP caching are not covered further here.
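As a minimal sketch of what such header settings might look like, assuming a Node.js server built on the built-in http module (the file name, path, and port are placeholders):

[JavaScript]
// Minimal sketch: serve a rarely changing image with long-lived cache headers.
// The file name, path, and port are placeholders for illustration only.
var http = require('http');
var fs = require('fs');

http.createServer(function (req, res) {
    if (req.url === '/logo.png') {
        res.setHeader('Cache-Control', 'max-age=31536000');   // cache for up to a year
        res.setHeader('Expires', new Date(Date.now() + 31536000 * 1000).toUTCString());
        res.setHeader('Content-Type', 'image/png');
        fs.createReadStream('./logo.png').pipe(res);
    } else {
        res.writeHead(404);
        res.end();
    }
}).listen(8080);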
2. Using the browser cache
For a Web site, static resource files such as CSS, JavaScript, logos, and icons are updated relatively infrequently, yet they are needed by almost every HTTP request for a page. Caching these files in the browser can improve performance considerably. By setting the Cache-Control and Expires attributes in the HTTP headers, you can have the browser cache them for days or even months.
Sometimes a changed static resource file needs to reach the client browsers promptly. This can be done by changing the file name: instead of updating the contents of an existing JavaScript file in place, generate a new JS file under a new name and update the references to it in the HTML.
When a site that relies on browser caching updates its static resources, it should do so incrementally. For example, if 10 icon files need updating, do not push all 10 at once; update them one file at a time with some interval in between, so that the user's browser does not suddenly invalidate a large amount of cache and refresh it all at once, which would cause a surge in server load and network congestion.
3. Enable compression
Compressing files on the server side and decompressing them in the browser can effectively reduce the amount of data transferred. Where possible, also combine external scripts and stylesheets into fewer files. Text files compress at ratios above 80%, so enabling gzip compression for HTML, CSS, and JavaScript files gives good results. Compression does, however, put some load on both the server and the browser, so weigh the trade-off when bandwidth is plentiful but server resources are scarce.
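As a minimal sketch, again assuming a Node.js server using the built-in http and zlib modules (the response body and port are placeholders), gzip can be applied when the client advertises support for it:

[JavaScript]
// Minimal sketch: gzip-compress a text response when the client accepts it.
var http = require('http');
var zlib = require('zlib');

http.createServer(function (req, res) {
    var body = '<html><body>Hello, gzip!</body></html>';
    var acceptsGzip = /\bgzip\b/.test(req.headers['accept-encoding'] || '');
    if (acceptsGzip) {
        res.writeHead(200, { 'Content-Type': 'text/html', 'Content-Encoding': 'gzip' });
        res.end(zlib.gzipSync(body));     // compressed on the fly
    } else {
        res.writeHead(200, { 'Content-Type': 'text/html' });
        res.end(body);                    // fall back to the uncompressed body
    }
}).listen(8080);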
4. CSS Sprites
Another good way to reduce the number of requests is to merge the images referenced by CSS into a single sprite image and display each icon with background offsets.
5. Lazy Load Images
This strategy does not always reduce the total number of HTTP requests, but it can reduce them under certain conditions, or at least at the moment the page first loads. For images, load only those in the first screen when the page loads, and load the rest as the user scrolls down. If the user is only interested in the first screen's content, the remaining image requests are saved entirely.
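A minimal sketch of one way to do this in modern browsers, assuming the page marks deferred images with a data-src attribute (the attribute name is just a convention chosen here), uses IntersectionObserver:

[JavaScript]
// Minimal sketch: lazy-load images marked with data-src as they scroll into view.
document.addEventListener('DOMContentLoaded', function () {
    var observer = new IntersectionObserver(function (entries) {
        entries.forEach(function (entry) {
            if (entry.isIntersecting) {
                var img = entry.target;
                img.src = img.getAttribute('data-src');  // swap in the real image
                observer.unobserve(img);                 // stop watching once loaded
            }
        });
    });
    Array.prototype.forEach.call(document.querySelectorAll('img[data-src]'), function (img) {
        observer.observe(img);
    });
});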
6. Put CSS at the top of the page and JavaScript at the bottom
The browser renders the page only after it has downloaded all of the CSS, so it is best to put the CSS at the top of the page, letting the browser download it as early as possible. If the CSS is placed elsewhere, such as inside the body, the browser may start rendering before the CSS has been downloaded and parsed, and the page will visibly jump from its unstyled state to its styled state, which makes for a worse user experience. So consider putting CSS in the head.
JavaScript is the opposite: the browser executes it immediately after loading it, which can block the rest of the page and make it appear slowly, so JavaScript is best placed at the bottom of the page. If the page relies on that JavaScript during parsing, however, placing it at the bottom is not appropriate.
Lazy loading of JavaScript means loading a script only when it is actually needed rather than as part of the normal page load. With the popularity of JavaScript frameworks, more and more sites use them, but a framework often bundles many features, not all of which are required on every page; downloading scripts that are not needed wastes both bandwidth and the time spent executing them. There are roughly two approaches: one is to tailor a dedicated mini build for a particularly heavy page, and the other is lazy loading.
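A minimal sketch of on-demand loading, assuming a hypothetical element id and script URL, injects the script tag only when the feature is first used:

[JavaScript]
// Minimal sketch: load a heavy script only when the user actually needs it.
// The element id and script URL are placeholders for illustration only.
function loadScript(url, callback) {
    var script = document.createElement('script');
    script.src = url;
    script.onload = callback;
    document.body.appendChild(script);
}

document.getElementById('report-tab').addEventListener('click', function () {
    loadScript('/js/charts.js', function () {
        // the charting code is available only now; initialize it here
    });
});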
7. Asynchronous request callbacks (load behavior and style first, then fill in the content)
Some pages may need to use a script tag to request data asynchronously, along these lines:
[JavaScript]
/* callback function */
function myCallback(info) {
    // do something here
}

// HTML: a script tag whose src points at the data endpoint (URL omitted in the original)
// What the returned script contains:
myCallback('Hello world!');
Writing the script tag directly into the page like this also affects page performance: it adds to the burden of the initial page load and delays the triggering of the DOMContentLoaded and window.onload events. If the timing requirements allow, consider loading it when the DOMContentLoaded event fires, or use setTimeout to control flexibly when it is loaded.
8. Reduce cookie transmission
On the one hand, cookies are included in every request and response, and an oversized cookie seriously affects data transfer, so think carefully about which data really needs to be written into cookies and keep the amount of data transferred in them to a minimum. On the other hand, for requests for static resources such as CSS and scripts, sending cookies is pointless; consider serving static resources from a separate domain so that those requests carry no cookies, reducing the number of cookie transmissions.
9. JavaScript code optimization
(1). DOM
a. HTML collections (array-like objects returned by DOM queries)
In scripts, document.images, document.forms, and getElementsByTagName() all return collections of type HTMLCollection. These are mostly used like arrays, since they have a length property and their elements can be accessed by index. Access is much slower than with a real array, however, because the collection is not a static result: it represents a specific query, and every access to the collection re-executes that query to refresh the result. "Accessing the collection" here includes reading its length property and accessing its member elements.
Therefore, when you need to traverse an HTML collection, convert it to an array first and then access that, which improves performance. Even if you do not convert it to an array, access it as little as possible; for example, when traversing, save the length property and the current member into local variables and work with those.
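A minimal sketch of both approaches (the tag name is arbitrary):

[JavaScript]
// Minimal sketch: avoid re-querying a live HTMLCollection on every iteration.
var divs = document.getElementsByTagName('div');   // live HTMLCollection

// Slower: divs.length re-evaluates the live collection on each pass.
for (var i = 0; i < divs.length; i++) {
    // ... work with divs[i]
}

// Faster: cache the length, or copy the members into a plain array once.
var divArray = [];
for (var i = 0, len = divs.length; i < len; i++) {
    divArray.push(divs[i]);
}
for (var j = 0; j < divArray.length; j++) {
    // ... work with divArray[j]
}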
B. Reflow & Repaint
Beyond the point above, DOM operations also need to account for the browser's reflow and repaint, since both consume resources.
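One common way to limit reflows is to batch DOM insertions; a minimal sketch, assuming a list element with a hypothetical id, uses a DocumentFragment:

[JavaScript]
// Minimal sketch: batch insertions so the browser reflows once, not 100 times.
var list = document.getElementById('list');         // assumes <ul id="list"> exists
var fragment = document.createDocumentFragment();

for (var i = 0; i < 100; i++) {
    var item = document.createElement('li');
    item.textContent = 'Item ' + i;
    fragment.appendChild(item);                      // off-document: no reflow yet
}

list.appendChild(fragment);                          // a single insertion, a single reflow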
(2). Be cautious with the with statement
A block like with (obj) { p = 1 } actually modifies the execution environment inside the block, placing obj at the front of its scope chain. Any non-local variable accessed inside the with block is looked up on obj first and only then up the rest of the scope chain, so using with effectively lengthens the scope chain. Every scope chain lookup costs time, and an overly long scope chain degrades lookup performance.
Therefore, unless you are certain that the with block only accesses properties of obj, avoid with; instead, cache the properties you need in local variables.
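A minimal sketch of the contrast, using a hypothetical config object (note that with is also disallowed in strict mode):

[JavaScript]
// Minimal sketch: prefer caching a property over using with.
var config = { options: { retries: 3 } };    // hypothetical object for illustration

// Discouraged: with puts config.options at the front of the scope chain,
// so every identifier in the block is looked up on it first.
function slower() {
    with (config.options) {
        return retries * 2;
    }
}

// Preferred: resolve the property once into a local variable.
function faster() {
    var retries = config.options.retries;
    return retries * 2;
}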
(3). Avoid using eval and Function
Each time eval or the Function constructor is applied to source code in string form, the script engine has to turn that source into executable code. This is a resource-intensive operation, often more than 100 times slower than a simple function call.
The eval function is particularly inefficient: because the content of the string passed to eval is not known in advance, and eval interprets the code in its surrounding context, the compiler cannot optimize that context, so the browser can only interpret the code at run time. This has a significant impact on performance.
The Function constructor is slightly better than eval, because code created this way does not affect the surrounding code, but it is still slow. In addition, using eval and Function also hinders JavaScript minification tools from compressing the code effectively.
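A minimal sketch of the usual replacements (the JSON string here is a made-up example):

[JavaScript]
// Minimal sketch: common substitutes for eval and the Function constructor.
var json = '{"name": "demo", "count": 2}';

// Discouraged: eval compiles and runs arbitrary source at run time.
var dataSlow = eval('(' + json + ')');

// Preferred: JSON.parse covers the common "parse server data" case directly.
var dataFast = JSON.parse(json);

// Discouraged: the Function constructor also compiles source from a string.
var addSlow = new Function('a', 'b', 'return a + b;');

// Preferred: an ordinary function expression is compiled once, ahead of time.
var addFast = function (a, b) { return a + b; };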
(4). Reduce scope chain lookups
The previous point already touched on scope chain lookups, and the issue is especially important inside loops. If a loop needs to access a variable from an outer scope, cache it in a local variable before the traversal and write the value back to that variable afterwards. This matters most for global variables, because globals sit at the top of the scope chain and are the most expensive to look up.
Inefficient version:

[JavaScript]
// global variable
var globalVar = 1;

function myCallback(info) {
    for (var i = 100000; i--;) {
        // every access to globalVar has to walk to the top of the scope chain;
        // in this example that lookup happens 100,000 times
        globalVar += i;
    }
}
More efficient version:

[JavaScript]
// global variable
var globalVar = 1;

function myCallback(info) {
    // cache the global variable in a local variable
    var localVar = globalVar;
    for (var i = 100000; i--;) {
        // access to local variables is the fastest
        localVar += i;
    }
    // this version touches the global only twice: once to read it into
    // localVar and once to write the result back
    globalVar = localVar;
}
In addition, reducing the use of closures also helps cut down scope chain lookups.
(5). Data access
Data access in JavaScript covers literals (strings, regular expressions), variables, object properties, and array items. Access to literals and local variables is fastest, while access to object properties and array items carries more overhead. It is recommended to put data into local variables when either of the following holds:
A. Any object property is accessed more than once
B. Any array item is accessed more than once
In addition, deep lookups into objects and arrays should be minimized as much as possible.
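A minimal sketch, with a hypothetical object and a placeholder work function, of caching a repeatedly used deep property:

[JavaScript]
// Minimal sketch: cache a deeply nested, repeatedly accessed property locally.
var app = { settings: { ui: { theme: 'dark' } } };   // hypothetical object
function doSomethingWith(value) { /* placeholder for real work */ }

// Slower: three property lookups per access, repeated on every iteration.
for (var i = 0; i < 1000; i++) {
    doSomethingWith(app.settings.ui.theme);
}

// Faster: resolve the path once, then reuse the local variable.
var theme = app.settings.ui.theme;
for (var j = 0; j < 1000; j++) {
    doSomethingWith(theme);
}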
(6). String concatenation
Using the "+" sign in JavaScript is inefficient because each run will open up new memory and generate a new string variable, and then assign the stitching result to the new variable. The more efficient way to do this is to use the Join method of the array, where the string to be spliced is placed in the array and the Join method is called to get the result. However, the use of arrays also has some overhead, so you can consider this method when you need to stitch more strings.
10. CSS Selector optimization
Most people assume the browser parses CSS selectors from left to right. For example, with
#toc a { color: #444; }
such a selector would be efficient if parsed from left to right, because the leading ID selector immediately limits the scope of the search. In fact, browsers parse selectors from right to left: for the selector above, the browser must find every a element and then walk up its ancestors looking for #toc, which is far less efficient than people expect. Given how browsers actually work, there is quite a lot to watch out for when writing selectors; interested readers can look into it further.
CDN Acceleration
A CDN (Content Delivery Network) is essentially still a cache: data is cached at the location nearest to the user, so the user can fetch it at the highest possible speed, at the so-called first hop of network access.
Because CDN nodes are deployed in network operators' facilities, and those operators are the end users' network service providers, the first hop of a user's request reaches a CDN server. When the requested resource exists in the CDN, it is returned to the browser directly from there along the shortest path, which speeds up user access and reduces the load on the data center.
What a CDN caches is usually static resources: images, files, CSS, scripts, static Web pages, and so on. These files are accessed very frequently, so caching them in a CDN can greatly improve how quickly pages open.
Reverse Proxy
A traditional proxy server sits on the browser's side and forwards the browser's HTTP requests to the Internet, whereas a reverse proxy server sits on the website's side and receives HTTP requests on behalf of the Web servers.
A forum site, for example, can cache its hot entries, posts, and blogs on the reverse proxy server to speed up user access. When this dynamic content changes, an internal notification mechanism tells the reverse proxy to invalidate its cache, and the reverse proxy reloads the latest content and caches it again.
In addition, a reverse proxy can also provide load balancing; an application cluster built behind a load balancer improves the system's overall processing capacity and, with it, the site's performance under high concurrency.