Browsers support more and more features, and the web has shifted heavily to mobile devices, so keeping our front-end code lean matters more than ever, and knowing how to optimize it becomes increasingly important.
Developers often let their coding habits take precedence over the user experience. But many small changes can add up to a leap in user experience, so even a little optimization will improve the performance of your site.
The front end offers many simple strategies and coding habits that help ensure optimal performance. The theme of this series is to show you some best practices for front-end performance optimization, each of which takes only a minute to apply to your existing code.

Best Practice 1: Replace complex element injection with DocumentFragment or innerHTML
DOM operations are expensive for the browser. Although browser performance keeps improving, the DOM remains slow, and if you are not careful you may find the browser crawling. That is why it is important to batch the creation of DOM nodes and inject them in one quick operation.
Now suppose we have a list element in our page; we call Ajax to get a JSON list and then use JavaScript to update the element's content. Typically, programmers write this:
JavaScript code
var list = document.querySelector('ul');
ajaxResult.items.forEach(function (item) {
    // create the <li> element
    var li = document.createElement('li');
    li.innerHTML = item.text;
    // ... other <li> operations, such as adding classes,
    // changing attributes, attaching event listeners, etc.
    // inject the <li> element into the parent <ul>
    list.appendChild(li);
});
The code above is actually a bad pattern: it performs a DOM injection for every single list item, which is very slow. If you really want to use document.createElement and treat each item as a node, you should use DocumentFragment instead, given the performance cost.
A DocumentFragment is a "virtual store" for a set of child nodes; it has no parent tag. Think of a DocumentFragment as an invisible element outside the DOM that holds your child nodes until they are injected into the DOM. The original code can then be optimized with a DocumentFragment:
JavaScript code
var frag = document.createDocumentFragment();
ajaxResult.items.forEach(function (item) {
    // create the <li> element
    var li = document.createElement('li');
    li.innerHTML = item.text;
    // ... other <li> operations, e.g. adding classes,
    // changing attributes, attaching event listeners, appending children
    // append the <li> element to the fragment
    frag.appendChild(li);
});
// finally, inject all the list items into the DOM via the DocumentFragment
document.querySelector('ul').appendChild(frag);
Appending each child element to the DocumentFragment and then appending the DocumentFragment to the parent list is just one DOM operation, so it is much faster than injecting each item individually.
If you don't need to manipulate the list items as nodes, an even better way is to build the HTML content as a string:
JavaScript code
var htmlStr = '';
ajaxResult.items.forEach(function (item) {
    // build a string containing the HTML content
    htmlStr += '<li>' + item.text + '</li>';
});
// set the <ul> content via innerHTML
document.querySelector('ul').innerHTML = htmlStr;
This also requires only one DOM operation and less code than the DocumentFragment version. Either way, both methods are far more efficient than injecting elements into the DOM on every iteration.

Best Practice 2: Debounce high-frequency events and methods
Typically, developers add event listeners wherever there is user interaction, and these events are often triggered very frequently. Think of the window's resize event or an element's onmouseover event: they fire rapidly and many times in a row. If your callback is heavy, you may well bring the browser to its knees.
That's why we introduce debouncing.
The concept involved here is higher-order functions; interested readers can step over to my other blog post on fully understanding object-oriented JavaScript, whose section on throttling serves the same goal of debouncing high-frequency events and methods.
Debouncing limits the number of times a function can execute over a given period. The following code is an example of a debounce implementation:
JavaScript code
function debounce(func, wait, immediate) {
    var timeout;
    return function () {
        var context = this, args = arguments;
        var later = function () {
            timeout = null;
            if (!immediate) func.apply(context, args);
        };
        var callNow = immediate && !timeout;
        clearTimeout(timeout);
        timeout = setTimeout(later, wait);
        if (callNow) func.apply(context, args);
    };
}

// add a resize callback, but only allow it to execute once every 300 milliseconds
window.addEventListener('resize', debounce(function (event) {
    // handle the resize here
}, 300));
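As a quick sanity check outside the browser, the sketch below (repeating the debounce function above so it is self-contained) uses the immediate flag so the rate limiting is observable synchronously; the burst of calls is simulated rather than coming from real resize events:

```javascript
// same debounce as above, repeated so this sketch is self-contained
function debounce(func, wait, immediate) {
  var timeout;
  return function () {
    var context = this, args = arguments;
    var later = function () {
      timeout = null;
      if (!immediate) func.apply(context, args);
    };
    var callNow = immediate && !timeout;
    clearTimeout(timeout);
    timeout = setTimeout(later, wait);
    if (callNow) func.apply(context, args);
  };
}

// count how many times the wrapped callback actually runs
var calls = 0;
var onResize = debounce(function () { calls += 1; }, 300, true);

// simulate a burst of rapid events
onResize();
onResize();
onResize();
// with immediate = true, only the first call in the burst executes, so calls is 1
```

The same wrapper with immediate left false would instead run the callback once, 300 ms after the burst ends.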
The debounce method returns a function that wraps your callback and limits its execution frequency. With this debounce helper, you can write callbacks for frequent events without dragging down the user's browser.

Best Practice 3: Cache static, non-essential content in Web Storage

The Web Storage API is a significant advance over the cookie API that served developers for many years: it is saner, stores far more, and is easier to work with. One strategy is to use session storage for non-essential, relatively static content, such as the HTML of a sidebar, article content loaded via Ajax, or other snippets that we only want to request once.
We can write a small JavaScript utility that uses Web Storage to make such content faster to load:
JavaScript code
define(function () {
    var cacheObj = window.sessionStorage || {
        getItem: function (key) {
            return this[key];
        },
        setItem: function (key, value) {
            this[key] = value;
        }
    };

    return {
        get: function (key) {
            return this.isFresh(key);
        },
        set: function (key, value, minutes) {
            var expDate = new Date();
            expDate.setMinutes(expDate.getMinutes() + (minutes || 0));
            try {
                cacheObj.setItem(key, JSON.stringify({
                    value: value,
                    expires: expDate.getTime()
                }));
            } catch (e) {}
        },
        isFresh: function (key) {
            // return the value, or false if missing or expired
            var item;
            try {
                item = JSON.parse(cacheObj.getItem(key));
            } catch (e) {}
            if (!item) return false;
            // compare against the expiry date
            return new Date().getTime() > item.expires ? false : item.value;
        }
    };
});
This utility provides basic get and set methods, and its isFresh method ensures that stored data has not expired. Calling it is also very simple:
JavaScript code
require(['storage'], function (storage) {
    var content = storage.get('sidebarContent');
    if (!content) {
        // do an AJAX request to get the sidebar content
        // ... and then store the returned content for an hour
        storage.set('sidebarContent', content, 60);
    }
});
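The expiring-cache idea itself does not depend on the browser. Below is a minimal, framework-free sketch of the same get/set/expiry logic using a plain in-memory object (the branch the utility falls back to when sessionStorage is absent); the key names and stored HTML are made up for illustration:

```javascript
// in-memory fallback store, standing in for window.sessionStorage
var cacheObj = {
  getItem: function (key) { return this[key]; },
  setItem: function (key, value) { this[key] = value; }
};

var storage = {
  set: function (key, value, minutes) {
    var expDate = new Date();
    expDate.setMinutes(expDate.getMinutes() + (minutes || 0));
    cacheObj.setItem(key, JSON.stringify({ value: value, expires: expDate.getTime() }));
  },
  get: function (key) {
    var item;
    try { item = JSON.parse(cacheObj.getItem(key)); } catch (e) {}
    if (!item) return false;
    // expired entries read back as false, just like missing ones
    return new Date().getTime() > item.expires ? false : item.value;
  }
};

storage.set('sidebarContent', '<p>hello</p>', 60); // fresh for an hour
var hit = storage.get('sidebarContent');           // the stored markup
storage.set('stale', 'old', -1);                   // already expired
var miss = storage.get('stale');                   // false
```

A false return from get is the signal to fall back to the Ajax request and re-store the result.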
Now the same content will not be requested repeatedly, and your application runs more efficiently. Take a moment to look over your site's design and pick out the things that won't change but are requested often; you can apply this Web Storage tool there to improve the performance of your site.

Best Practice 4: Lazy-load dependencies with asynchronous loading

RequireJS ushered in a huge wave of asynchronous loading and the AMD format. XMLHttpRequest (the object behind Ajax) made asynchronous resource loading popular, allowing resources to load without blocking, letting onload fire sooner, and letting page content load without refreshing the page.
The asynchronous loader I use is John Hann's curl. curl is a basic asynchronous loader that can be extended with good plugins. Here are a few small pieces of curl code:
JavaScript code
// basic usage: load AMD-formatted modules
curl(['social', 'dom'], function (social, dom) {
    dom.setElementContent('.social-container', social.loadWidgets());
});

// define a module that uses Google Analytics, which is not AMD-formatted
define(['js!//google-analytics.com/ga.js'], function () {
    // return a simple custom Google Analytics controller
    return {
        trackPageView: function (href) {
            _gaq.push(['_trackPageview', href]);
        },
        trackEvent: function (eventName, href) {
            _gaq.push(['_trackEvent', 'Interactions', eventName, '', href || window.location, true]);
        }
    };
});

// load a non-AMD JS file without a callback
curl(['js!//somesite.com/widgets.js']);

// load JavaScript and CSS files as modules
curl(['js!libs/prism/prism.js', 'css!libs/prism/prism.css'], function () {
    Prism.highlightAll();
});

// load an AJAX-requested URL
curl(['text!sidebar.php', 'storage', 'dom'], function (content, storage, dom) {
    storage.set('sidebar', content, 60);
    dom.setElementContent('.sidebar', content);
});
As you probably know, asynchronous loading can greatly improve page speed, but the point I want to make here is that you should actually use it. Once you do, you can see the difference, and more importantly, your users can see the difference.
You appreciate the benefits of asynchronous loading most when you can defer dependencies based on page content. For example, you can load the Twitter, Facebook, and Google Plus widgets only when a DIV element with the CSS class social is present. This "check for need before loading" strategy saved my users several kilobytes of pointless loading.

Best Practice 5: Use Array.prototype.join instead of string concatenation

There is a very simple client-side optimization: use Array.prototype.join instead of basic string concatenation. In Best Practice 1 above, I used basic concatenation in my code:
JavaScript code
htmlStr += '<li>' + item.text + '</li>';

But in the following code, I use the optimization:

JavaScript code
var items = [];
ajaxResult.items.forEach(function (item) {
    // build the string pieces
    items.push('<li>', item.text, '</li>');
});
// set the list content via innerHTML
document.querySelector('ul').innerHTML = items.join('');
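Outside the DOM, the string-building half of this pattern can be checked on its own; the sample items array below is made up purely for illustration:

```javascript
// hypothetical Ajax result, for illustration only
var ajaxResult = { items: [{ text: 'one' }, { text: 'two' }] };

// push the pieces instead of concatenating on each iteration
var items = [];
ajaxResult.items.forEach(function (item) {
  items.push('<li>', item.text, '</li>');
});

// one join at the end produces the final markup string
var htmlStr = items.join('');
// htmlStr is '<li>one</li><li>two</li>'
```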
You may need a moment to see what the array is for, but every user benefits from this optimization.

Best Practice 6: Use CSS animations whenever possible

Web design's demand for aesthetically pleasing, configurable element animations fueled JavaScript libraries such as jQuery and MooTools. Browsers now support CSS transform and keyframe animations, yet many people still animate with JavaScript, even though CSS animations are more efficient than JavaScript-driven ones and require less code at the same time. Many CSS animations are handled by the GPU, so the animation itself is smooth, and you can force hardware acceleration with the following simple CSS:
CSS code
.myAnimation {
    animation: someAnimation 1s;
    transform: translate3d(0, 0, 0); /* force hardware acceleration */
}
transform: translate3d(0, 0, 0) hands the work to hardware acceleration without affecting other animations. Where CSS animations are not supported (IE8 and below), you can fall back to JavaScript animation logic:
HTML code
<!--[if lte IE 8]>
<script src="http://code.jquery.com/jquery-1.9.1.min.js"></script>
<script src="/js/ie-animations.js"></script>
<![endif]-->
In the example above, the ie-animations.js file must contain your custom jQuery code to reproduce the animations when CSS animations are unsupported in older IE. The ideal is to animate with CSS and fall back to JavaScript so the animation effects work everywhere.

Best Practice 7: Use event delegation

Imagine an unordered list with a bunch of items, each of which triggers an action when clicked. You would usually add an event listener to each item, but what if items are frequently added and removed? Then you must manage adding and removing listeners alongside adding and removing the elements themselves. This is where event delegation comes in.
With event delegation, you add a single event listener to the parent element instead of attaching one to each child. When an event fires, you inspect event.target to decide whether the appropriate action needs to run. Here's a simple example:
JavaScript code
// get the element and add the event listener
window.onload = function () {
    var oUl = document.getElementById('ul1');
    oUl.onclick = function (ev) {
        ev = ev || window.event;
        var target = ev.target || ev.srcElement;
        if (target.nodeName.toLowerCase() === 'li') {
            alert(target.innerHTML);
        }
    };
};
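The matching step can be pulled out into a small framework-free helper that walks from the event target up to the delegating parent. The sketch below exercises it with plain mock objects standing in for DOM nodes (the node shapes are illustrative, not a real DOM API), so it runs anywhere:

```javascript
// find the ancestor of `target` (stopping at `root`) that satisfies `matches`
function findDelegate(target, root, matches) {
  var node = target;
  while (node) {
    if (matches(node)) return node;
    if (node === root) return null; // reached the delegating parent: no match
    node = node.parentNode;
  }
  return null;
}

// mock nodes standing in for <ul><li><span>...</span></li></ul>
var ul = { nodeName: 'UL', parentNode: null };
var li = { nodeName: 'LI', parentNode: ul };
var span = { nodeName: 'SPAN', parentNode: li };

var isLi = function (node) { return node.nodeName.toLowerCase() === 'li'; };

var hit = findDelegate(span, ul, isLi);  // the <li>, even though the click landed on the <span>
var miss = findDelegate(ul, ul, isLi);   // null: nothing between target and root matched
```

Walking up the ancestors like this is what lets delegation work even when the click lands on a nested child of the element you care about.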
The example above is deliberately simple: when an event fires it does not walk up the parent nodes to find a matching element, and it does not support selector-based matching (for example by class name or ID). Every major JavaScript framework provides delegated selector matching. The point is that you avoid attaching a listener to every element and instead add one listener to the parent. This greatly improves efficiency and reduces maintenance.

Best Practice 8: Use data URIs instead of image src

Improving page efficiency is not only about sprites and minified code: the number of requests a page makes carries a very large weight in front-end performance. Reducing requests makes your site load faster, and one way to reduce them is to use a data URI in place of an image's src attribute:
HTML code
<!-- the previous way -->
<img src="/images/logo.png" alt="Logo">
<!-- using a data URI instead -->
<img src="data:image/png;base64,iVBORw0KGgo..." alt="Logo">
<!-- example: http://davidwalsh.name/demo/data-uri-php.php -->
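On the build or server side, producing such a URI is simple; a minimal Node sketch is shown below. The byte array stands in for a real file read such as fs.readFileSync('logo.png'), and the file name is hypothetical:

```javascript
// build a data URI from raw image bytes and a MIME type
function toDataUri(bytes, mimeType) {
  return 'data:' + mimeType + ';base64,' + Buffer.from(bytes).toString('base64');
}

// a stand-in payload: the four magic bytes that open every PNG file
var pngMagic = [0x89, 0x50, 0x4e, 0x47];
var uri = toDataUri(pngMagic, 'image/png');
// uri is 'data:image/png;base64,iVBORw=='
```

The resulting string drops straight into an img src attribute or a CSS background-image url().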
Of course the page size will increase (though if your server gzips content appropriately, the increase is very small), but you cut out the request overhead and, in the process, reduce the number of requests hitting the server. Most browsers now support data URIs, and background images in CSS can use them too, so this strategy can be applied widely.

Best Practice 9: Load appropriately sized background images with media queries

CSS media queries are the closest thing to logic control we have in CSS until @supports is widely supported. We often use media queries to adjust CSS properties per device (usually by screen width), such as setting different element widths or float positions at different screen widths. So why not swap background images the same way?
CSS code
/* default: load the full-size picture for desktops */
.someElement { background-image: url(sunset.jpg); }

@media only screen and (max-width: 1024px) {
    .someElement { background-image: url(sunset-small.jpg); }
}
The snippet above loads a smaller picture for mobile and similarly small devices, which matters most when a much smaller image will do (for example, when the full-size image could never be properly viewed at that size anyway).

Best Practice 10: Use indexed objects

In this section, we'll look at using keyed object lookups instead of traversing an array to speed up searches.
One of the most common Ajax and JSON use cases is receiving an array of objects and then searching it for the object with a given value. In the following example, we receive an array of users and then search for a user object by the value of its username:
JavaScript code
function getUser(desiredUsername) {
    var searchResult = ajaxResult.users.filter(function (user) {
        return user.username === desiredUsername;
    });
    return searchResult.length ? searchResult[0] : false;
}
// get a user by username
var davidwalsh = getUser('davidwalsh');
// get another user by username
var TechPro = getUser('tech-pro');
The code above works, but it is not very efficient: every lookup traverses the whole array. A better approach is to create a new object with a key for each unique value; in the example above, keyed by username, the array can be converted like this:

JavaScript code
var userStore = {};
ajaxResult.users.forEach(function (user) {
    userStore[user.username] = user;
});
Now when we want to find a user object, we can look it up directly by key:

JavaScript code
var davidwalsh = userStore.davidwalsh;
var TechPro = userStore['tech-pro'];
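Put together with a made-up result set, the whole pattern runs anywhere; the ajaxResult data below is hypothetical, for illustration only:

```javascript
// hypothetical Ajax result, for illustration
var ajaxResult = {
  users: [
    { username: 'davidwalsh', role: 'author' },
    { username: 'tech-pro', role: 'editor' }
  ]
};

// build the index once...
var userStore = {};
ajaxResult.users.forEach(function (user) {
  userStore[user.username] = user;
});

// ...then every lookup is a direct property access instead of an array scan
var davidwalsh = userStore['davidwalsh'];   // { username: 'davidwalsh', role: 'author' }
var missing = userStore['nobody'];          // undefined for unknown usernames
```

The one-time cost of building the index pays off as soon as you perform more than a handful of lookups.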
This code is better and simpler to write, and looking up by key is much faster than traversing the entire array.

Best Practice 11: Control DOM size

In this section, we'll discuss controlling the size of the DOM to improve front-end performance.
The DOM is notoriously slow, and a huge DOM is the chief culprit in slowing a site down. Imagine a DOM with thousands of nodes, and imagine searching it for a node with querySelectorAll, getElementsByTagName, or another DOM-centric search method; even with built-in methods this is a laborious process. And remember that extra DOM nodes make other utilities slower too.
I once watched DOM size creep up on an Ajax site where every page lived in the DOM: when a new page was loaded via Ajax, the old page was stashed in a hidden DOM node. DOM speed degraded catastrophically, especially as more pages were dynamically loaded. You need a better way.
In that situation, when pages load via Ajax and the previous page is kept on the client, the best approach is to store the old content as an HTML string (removing it from the DOM) and rely on event delegation to avoid per-element listeners. That way, a huge DOM build is avoided while the content stays cached on the client.
Common techniques for controlling DOM size include:
- using the :before and :after pseudo-elements
- deferring the loading and rendering of content
- using event delegation, which makes it easier to convert nodes to string storage
In one simple sentence: keep your DOM as small as possible.

Best Practice 12: Use Web Workers for heavy execution

In this section we'll introduce Web Workers, a way to move heavy operations into a separate thread.
Web Workers landed in the popular browsers some time ago, but they do not seem to be widely used. The main purpose of Web Workers is to execute heavy work outside the normal browser execution scope. A worker cannot access the DOM, so you must pass in the data the work requires.
The following is sample code for a Web Worker:
JavaScript code
/* using the Web Worker */
// start the worker
var worker = new Worker('/path/to/web/worker/resource.js');
worker.addEventListener('message', function (event) {
    // we got a message back from the Web Worker!
});
// give the Web Worker some work
worker.postMessage({ cmd: 'processImageData', data: convertImageToDataUri(myImage) });

/* resource.js is the Web Worker */
self.addEventListener('message', function (event) {
    var data = event.data;
    switch (data.cmd) {
        case 'processImageData':
            self.postMessage(processImageData(data.data));
            break;
    }
});

function processImageData(imageData) {
    // manipulate the image here,
    // for example convert it to grayscale
    return newImageData;
}
The code above is a simple example that shows how to hand heavy work to a Web Worker running in another thread. Here it converts an image from color to grayscale; because that is a relatively heavy process, submitting it to a Web Worker lightens the load on the browser's main thread. The result is passed back through the message event.
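The grayscale step itself is plain number crunching with no DOM access, which is exactly why it suits a worker. A sketch over a flat RGBA pixel array (the same shape as the data in an ImageData object) is shown below; the luminance weights are the usual Rec. 601 coefficients, an assumption on my part since the article does not specify the conversion:

```javascript
// convert a flat RGBA pixel array to grayscale in place
function processImageData(pixels) {
  for (var i = 0; i < pixels.length; i += 4) {
    // weighted luminance of the R, G, B channels (Rec. 601 coefficients)
    var gray = Math.round(0.299 * pixels[i] + 0.587 * pixels[i + 1] + 0.114 * pixels[i + 2]);
    pixels[i] = pixels[i + 1] = pixels[i + 2] = gray; // alpha (i + 3) is untouched
  }
  return pixels;
}

var onePixel = [255, 0, 0, 255];   // a single opaque red pixel
processImageData(onePixel);        // onePixel is now [76, 76, 76, 255]
```

Inside the worker, this function would run over event.data and the result would go back via self.postMessage, as in the listing above.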
You can read more about using Web Workers on MDN; perhaps some features of your site could be moved into a separate worker.

Best Practice 13: Link CSS files instead of using @import

Sometimes @import is so convenient it is hard to resist the temptation, but to cut down on maddening requests you have to reject it. The most common usage is a "main" CSS file with no content of its own, only @import rules. Worse, multiple @import rules often end up nested:
CSS code
/* the main CSS file (main.css) */
@import "reset.css";
@import "structure.css";
@import "tutorials.css";
@import "contact.css";

/* then tutorials.css continues the chain with more @imports */
@import "document.css";
@import "syntax-highlighter.css";
A single authored CSS file thus costs two extra request hops, slowing page load. Sass can resolve @import statements and inline the linked CSS into one file, removing the unwanted requests while keeping your CSS source files manageable.

Best Practice 14: Combine multiple media types in one CSS file

As we said in Best Practice 13 above, multiple CSS files can be merged together via @import rules. But what many programmers don't know is that multiple CSS media types can also be merged into a single file:
CSS code
/* all of the following live in a single CSS file */
@media screen {
    /* all the default structural design and element styles go here */
}
@media print {
    /* styles adjusted for printing */
}
@media only screen and (max-width: 1024px) {
    /* styles for tablets and phones */
}
CSS does not mandate when media types must be merged into one file or kept separate by size, but I prefer to merge all media unless one medium accounts for a much larger share than the others. One less request is easier on both the client and the server, and in most cases the additional media types are much smaller than the main screen styles.