Scripting DOM operations is expensive, and it is the most common performance bottleneck in rich web applications. There are three main types of problems:
Accessing and modifying DOM elements
Modifying the style of DOM elements results in repaint and reflow
Interacting with the user through DOM event handling
The DOM in the browser
The DOM (Document Object Model) is a language-independent API (application programming interface) for manipulating XML and HTML documents. Although the DOM itself is language-independent, in the browser its interface is implemented in JavaScript.
A bit of front-end trivia
Browsers usually implement JavaScript and the DOM as separate, independent components.
For example, in IE the JavaScript implementation is named JScript and lives in jscript.dll, while the DOM implementation lives in a separate library, mshtml.dll (Trident).
In Chrome the DOM is implemented in WebKit's WebCore, while the JS engine is V8, developed by Google itself.
In Firefox the JS engine is SpiderMonkey, and the rendering engine (DOM) is Gecko.
The DOM is inherently slow
As the trivia above shows, browsers implement the page-rendering part and the JavaScript-parsing part separately. Since the two are separate, every time they need to connect there is a price to pay.
Two examples:
Xiao Ming and Xiao Hong are students at two different schools. Neither family is well off, so they cannot afford mobile phones (an awkward setup, I know...), and the two can only communicate by writing letters. That process is certainly far more expensive (extra time, the cost of writing and delivering the letters, and so on) than talking face to face.
The official example: imagine the DOM and JavaScript (ECMAScript) as two islands connected by a toll bridge. Every time ECMAScript accesses the DOM, it has to cross this bridge and pay a "bridge toll". The more often you access the DOM, the higher the cost.
Therefore, the recommended practice is to cross the bridge as few times as possible and to stay on ECMAScript island.
Accessing and modifying the DOM
While merely accessing the DOM already costs a "toll", modifying a DOM element is even more expensive, because it forces the browser to recalculate the page's geometry. Take a look at this code:
function innerHTMLLoop(){
    for (var count = 0; count < 15000; count++){
        document.getElementById('text').innerHTML += 'dom';
    }
}
In this code, each loop iteration accesses the element twice: once to read the value of its innerHTML property and once to write it back.
Seeing this clearly, it is not hard to come up with a more efficient version:
function innerHTMLLoop2(){
    var content = '';
    for (var count = 0; count < 15000; count++){
        content += 'dom';
    }
    document.getElementById('text').innerHTML += content;
}
Here the content updates are accumulated in a local variable, and the page is written only once, after the loop ends (give as much work as possible to the JavaScript side).
Measurements show that the modified version runs faster in all browsers (the most dramatic improvement is in IE8, where the latter is 273 times faster than the former).
HTML element collections
An HTML element collection is an array-like object that contains DOM node references.
You can get a collection of HTML elements with the following methods or properties:
document.getElementsByName()
document.getElementsByTagName()
document.getElementsByClassName()
document.images: all img elements in the page
document.links: all a elements in the page
document.forms: all forms in the page
document.forms[0].elements: all fields of the first form in the page
HTML element collections are "live": they are automatically updated whenever the underlying document changes; in other words, the collection stays connected to the underlying document. Because of this, every time you want to read some information from an HTML element collection, a query against the document is performed, and that is the source of the inefficiency.
An expensive collection
// This is an infinite loop
// Believe it or not, I do
var alldivs = document.getElementsByTagName('div');
for (var i = 0; i < alldivs.length; i++){
    document.body.appendChild(document.createElement('div'));
}
At first glance, this code simply doubles the number of div elements in the page: it iterates over all the divs, creates a new div on each iteration, and appends it to the body.
But in fact this is an infinite loop: the loop's exit condition, alldivs.length, grows at the end of every iteration, because the HTML element collection reflects the live state of the underlying document.
Next, we process an HTML element collection with this piece of code:
function toArray(coll){
    for (var i = 0, a = [], len = coll.length; i < len; i++){
        a[i] = coll[i];
    }
    return a;
}

// Copy an HTML element collection into an array
var coll = document.getElementsByTagName('div');
var arr = toArray(coll);
Now compare the following two functions:
function loopCollection(){
    for (var count = 0; count < coll.length; count++){
        // processing...
    }
}

function loopCopiedArray(){
    for (var count = 0; count < arr.length; count++){
        // processing...
    }
}
In IE6 the latter is 114 times faster than the former, 119 times faster in IE7, 79 times faster in IE8, and so on.
So, for the same content and number of items, traversing an array is significantly faster than traversing an HTML element collection.
Because reading the collection's length property on every iteration causes the collection to be re-queried, which is a significant performance problem in all browsers, you can also simply cache the length outside the loop:
function loopCacheLengthCollection(){
    var coll = document.getElementsByTagName('div'),
        len = coll.length;
    for (var count = 0; count < len; count++){
        // processing...
    }
}
This function runs about as fast as loopCopiedArray() above.
Use local variables when accessing collection elements
In general, for any type of DOM access, whenever the same DOM property or method needs to be accessed more than once, it is best to cache it in a local variable. When traversing a collection, the main optimization principles are: store the collection in a local variable, cache its length outside the loop, and use a local variable for any element that is accessed more than once.
An example that accesses three properties of each element inside a loop:
function collectionGlobal(){
    var coll = document.getElementsByTagName('div'),
        len = coll.length,
        name = '';
    for (var count = 0; count < len; count++){
        name = document.getElementsByTagName('div')[count].nodeName;
        name = document.getElementsByTagName('div')[count].nodeType;
        name = document.getElementsByTagName('div')[count].tagName;
        // My God, nobody actually writes code like this.
    }
    return name;
}
Don't take the code above seriously; no sane person would write it. It is shown here only for comparison, as the slowest possible case.
Next, there is a slightly optimized version:
function collectionLocal(){
    var coll = document.getElementsByTagName('div'),
        len = coll.length,
        name = '';
    for (var count = 0; count < len; count++){
        name = coll[count].nodeName;
        name = coll[count].nodeType;
        name = coll[count].tagName;
    }
    return name;
}
This time it looks a lot more normal. And here is the final stop of this optimization tour:
function collectionNodesLocal(){
    var coll = document.getElementsByTagName('div'),
        len = coll.length,
        name = '',
        ele = null;
    for (var count = 0; count < len; count++){
        ele = coll[count];
        name = ele.nodeName;
        name = ele.nodeType;
        name = ele.tagName;
    }
    return name;
}
Crawling the DOM
Often you need to start from a DOM element, operate on the elements around it, or recursively visit all of its child nodes.
Consider the following two equivalent examples:
// 1
function testNextSibling(){
    var el = document.getElementById('mydiv'),
        ch = el.firstChild,
        name = '';
    do {
        name = ch.nodeName;
    } while (ch = ch.nextSibling);
    return name;
}

// 2
function testChildNodes(){
    var el = document.getElementById('mydiv'),
        ch = el.childNodes,
        // childNodes is an element collection, so cache its length outside the loop
        // to avoid re-querying it on every iteration
        len = ch.length,
        name = '';
    for (var count = 0; count < len; count++){
        name = ch[count].nodeName;
    }
    return name;
}
In most browsers the running times of the two approaches are almost equal, but in older versions of IE nextSibling performs better than childNodes.
Element nodes
We know that there are five common categories of DOM nodes:
The entire document is a document node
Each HTML element is an element node
Text inside an HTML element is a text node
Each HTML attribute is an attribute node
Comments are comment nodes
DOM properties such as childNodes, firstChild, and nextSibling do not distinguish element nodes from other node types. Often, however, we only need the element nodes, so some filtering is required; these type checks are, in effect, unnecessary DOM operations.
Many modern browsers provide APIs that return only element nodes. If they are available, use them directly, because they execute faster than doing the filtering yourself in JavaScript.
API provided by modern browsers (the property it replaces)
children (childNodes)
childElementCount (childNodes.length)
firstElementChild (firstChild)
lastElementChild (lastChild)
nextElementSibling (nextSibling)
previousElementSibling (previousSibling)
With these new APIs, you can get element nodes directly, and that is exactly why they are faster.
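For example, here is a hedged sketch comparing the old filtering approach with the element-only API; the 'mydiv' container id is an assumption made up for the example:

// Old approach: walk childNodes and skip non-element nodes (nodeType 1 means "element")
function countElementChildrenOld(el){
    var count = 0;
    for (var i = 0, len = el.childNodes.length; i < len; i++){
        if (el.childNodes[i].nodeType === 1){
            count++;
        }
    }
    return count;
}

// Element-only API: no filtering needed
var container = document.getElementById('mydiv'); // 'mydiv' is a hypothetical id
var oldCount = countElementChildrenOld(container);
var newCount = container.childElementCount; // same result, returned directly by the browser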
Selectors API
Sometimes, to get the list of elements they need, developers have to combine calls such as getElementById and getElementsByTagName and then traverse the returned nodes, and this laborious process is inefficient.
Newer browsers provide a native DOM method, querySelectorAll(), which takes a CSS selector as its parameter. It is much faster than traversing and filtering elements with JavaScript and DOM calls.
For example:
var elements = document.querySelectorAll('#menu a');
This code returns a NodeList: an array-like object containing the matching nodes. Unlike the methods above, querySelectorAll() does not return an HTML element collection, so the returned nodes do not reflect the live document structure; this avoids the performance (and potential logic) problems that HTML collections cause.
Without querySelectorAll(), we would have to write this:
var elements = document.getElementById('menu').getElementsByTagName('a');
Not only is this more cumbersome to write; note also that here elements is an HTML element collection, so you still need to copy it into an array to get a static list like the one above.
There is also a querySelector() method, which returns only the first matching node.
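For instance, a one-line sketch using the same hypothetical '#menu a' selector as above:

// Returns only the first matching <a> inside #menu, or null if nothing matches
var firstLink = document.querySelector('#menu a');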
Repaints and reflows (repaints & reflows)
The browser takes all of the "components" used to display the page (HTML markup, JavaScript, CSS, images), parses them, and builds two internal data structures: a DOM tree representing the page structure, and a render tree representing how the DOM nodes will be displayed.
Every node in the DOM tree that needs to be displayed has at least one corresponding node in the render tree.
Nodes in the render tree are called "frames" or "boxes" and follow the CSS box model: think of each page element as a box with padding, margins, borders, and a position.
Once the render tree is built, the browser starts drawing the page elements on screen, a process known as painting.
When a DOM change affects an element's geometry (its width or height), such as changing the thickness of a border or adding text to a paragraph so that it gains an extra line, the browser has to recalculate that element's geometry, as well as the geometry and position of other elements on the page.
The browser invalidates the affected part of the render tree and reconstructs it; this process is called reflow. When the reflow is complete, the browser redraws the affected parts of the screen; this is called repaint.
If a change does not affect an element's geometry, for example changing its background color, no reflow occurs and only a repaint happens, because the element's layout has not changed.
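A minimal sketch of the difference, assuming an element with the hypothetical id 'mydiv' exists:

var el = document.getElementById('mydiv'); // hypothetical element id

// Repaint only: changing the background color does not affect layout
el.style.backgroundColor = 'red';

// Reflow and repaint: changing the width changes the element's geometry
// and may move other elements on the page
el.style.width = '300px';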
Both repaints and reflows are expensive operations. They can make a web application's UI unresponsive, so you should minimize how often they happen.
When does a reflow occur?
A visible DOM element is added or removed
An element's position changes
An element's size changes (padding, margin, border, height, width)
Content changes (text changes or an image is resized)
The page renderer is initialized
The browser window is resized
A scroll bar appears (this triggers a reflow of the entire page)
Minimizing repaints and reflows: style changes
An example:
var el = document.getElementById('mydiv');
el.style.borderLeft = '1px';
el.style.borderRight = '2px';
el.style.padding = '5px';
In this example, three style properties of one element are changed, and each of them affects the element's geometry. In the worst case this code triggers three reflows (most modern browsers optimize for this case and trigger only one). Looked at another way, this code accesses the DOM four times and can be optimized:
var el = document.getElementById('mydiv');

// Idea: combine all the changes and apply them in one go

// Method 1: use the cssText property
el.style.cssText = 'border-left: 1px; border-right: 2px; padding: 5px;';

// Method 2: change the class name
el.className = 'anotherClass';
Batching DOM modifications
When you need to perform a series of operations on a DOM element, you can follow these steps:
Take the element out of the document flow
Apply the multiple changes to it
Bring the element back into the document
In this pattern, only the first and third steps trigger a reflow. If you skipped those two steps, every change made in the second step would trigger its own reflow.
Here are three ways to take a DOM element out of the document flow:
Hide the element
Use a document fragment to build a subtree outside the live DOM and then copy it back into the document (see the sketch after this list)
Copy the original element into an off-document node, modify the copy, and then swap it back in place of the original when you are done
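As an illustration of the document fragment approach, here is a minimal sketch; the list data and the 'mylist' element id are assumptions made up for the example:

// Build the new nodes inside a document fragment (outside the live DOM),
// then append the fragment once; only this final append triggers a reflow.
var data = ['first item', 'second item']; // hypothetical data
var fragment = document.createDocumentFragment();

for (var i = 0; i < data.length; i++){
    var li = document.createElement('li');
    li.appendChild(document.createTextNode(data[i]));
    fragment.appendChild(li);
}

// Appending a fragment inserts its children, not the fragment node itself
document.getElementById('mylist').appendChild(fragment); // 'mylist' is a hypothetical list element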
Moving animated elements out of the document flow
In general a reflow affects only a small part of the render tree, but it can also affect a large part, or even the entire render tree.
The fewer reflows the browser has to perform, the faster the application responds.
Imagine an animation at the bottom of the page that pushes down the entire remaining part of the page: that would be a costly, large-scale reflow, and the user is bound to feel the page stutter.
Therefore, use the following steps to avoid most of the reflows in the page (a rough sketch follows the list):
Use absolute positioning to take the animated element on the page out of the document flow
Run the animation
When the animation ends, restore the element's positioning
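A rough sketch of this pattern follows; the element id, the timing, and the height animation itself are all assumptions made up for illustration:

var animEl = document.getElementById('anim'); // hypothetical animated element

// 1. Absolute positioning takes the element out of the document flow,
//    so the animation only affects the element itself (and whatever it covers)
animEl.style.position = 'absolute';

// 2. Run the animation: here a crude height expansion driven by setTimeout
var height = 100;
function step(){
    height += 20;
    animEl.style.height = height + 'px';
    if (height < 300){
        setTimeout(step, 16);
    } else {
        // 3. When the animation ends, restore the element's positioning
        animEl.style.position = 'static';
    }
}
step();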
IE's :hover
Since IE7, IE allows the :hover CSS selector to be used on any element.
However, if you use :hover on a large number of elements, you will find that it is painfully slow.
Event delegation
This optimization is also a high-frequency topic in front-end job interviews.
Consider a page with a large number of elements, each of which needs one or more event handlers bound to it.
Every bound event handler has a cost: it either makes the page heavier to load or adds to its run-time execution. Event binding itself takes processing time, and the browser has to keep track of every handler, which also consumes more memory. On top of that, once this work is done, most of the handlers will never be needed (not 100% of the buttons or links will ever be clicked by the user), so a great deal of the work is simply unnecessary.
The principle of event delegation is simple: events bubble up and can be caught by an ancestor element.
With event delegation, you only need to bind one handler to an outer element to handle all events triggered on its descendants.
Inside such a handler, there are a few things to take care of (a sketch follows this list):
Access the event object and determine the event source (target)
Cancel bubbling further up the document tree, if needed
Prevent the default action, if needed
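A minimal sketch of this pattern, assuming a '#menu' container full of links; the container id and the way the clicked link is used are made up for illustration:

// One handler on the container handles clicks on all of its links
document.getElementById('menu').onclick = function(e){
    // Access the event object and determine the event source
    // (older IE exposes them as window.event and srcElement)
    e = e || window.event;
    var target = e.target || e.srcElement;

    // Only react to links; ignore clicks on anything else
    if (target.nodeName.toLowerCase() !== 'a'){
        return;
    }

    var page = target.getAttribute('href'); // do something with the clicked link here

    // Cancel bubbling and prevent the default action (with old-IE fallbacks)
    if (typeof e.stopPropagation === 'function'){
        e.stopPropagation();
        e.preventDefault();
    } else {
        e.cancelBubble = true;
        e.returnValue = false;
    }
};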
Summary
Accessing and manipulating the DOM means crossing the bridge between the two islands of ECMAScript and the DOM. To keep the "toll" to a minimum, keep the following in mind:
Minimize the number of DOM accesses
For DOM nodes that are accessed multiple times, store their references in local variables
When you need to work with an HTML element collection, copy it into an array first
Use the faster APIs, such as querySelectorAll
Pay attention to the number of reflows and repaints
Use event delegation