As we know, the DOM is an application programming interface for manipulating XML and HTML documents, and scripting DOM operations can be costly. There is an apt analogy: picture the DOM and JavaScript (ECMAScript, strictly speaking) as two islands connected by a toll bridge. Every time ECMAScript touches the DOM it crosses the bridge and pays the toll, so the more often you access the DOM, the higher the cost. The recommended approach is therefore to cross the bridge as few times as possible and stay on the ECMAScript island. Since we cannot avoid the DOM API altogether, how do we make our programs more efficient?
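Before diving in, a minimal sketch of what "staying on the island" means in practice (the id 'status' and the helper function are invented for illustration): look an element up once, keep the reference in a JavaScript variable, and reuse it instead of paying the toll on every call.

// Sketch: cache the DOM reference so the "bridge" is crossed only once
var statusEl = document.getElementById('status');   // one DOM lookup

function updateStatus(text) {
    statusEl.innerHTML = text;   // reuse the cached reference
}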
1. DOM Access and Modification
Accessing DOM elements is costly ("bridge tolls", you know), and modifying them is even more expensive, because it forces the browser to recalculate the geometry of the page (reflow and repaint).
The worst case, of course, is accessing or modifying elements inside a loop. Compare the following two pieces of code:
var times = 15000;

// Code 1
console.time(1);
for (var i = 0; i < times; i++) {
    document.getElementById('myDiv1').innerHTML += 'a';
}
console.timeEnd(1);

// Code 2
console.time(2);
var str = '';
for (var i = 0; i < times; i++) {
    str += 'a';
}
document.getElementById('myDiv2').innerHTML = str;
console.timeEnd(2);
The result: the first snippet takes roughly a thousand times longer than the second! (Chrome version 44.0.2403.130 m)
The problem with the first snippet is that every loop iteration touches the element twice: once to read the innerHTML value and once to write it back. In other words, every iteration crosses the bridge (and triggers a reflow and repaint). The numbers confirm it: the more often the DOM is accessed, the slower the code runs. So reduce DOM access as much as possible and keep the work on the ECMAScript side.
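The fix in Code 2 can be pushed one step further by also caching the element lookup itself. A rough sketch of the combined pattern, reusing the myDiv1 element and the times variable from the demo above: read once, build the string in a local variable, write once.

var div = document.getElementById('myDiv1');   // one lookup, cached
var content = div.innerHTML;                   // one read
for (var i = 0; i < times; i++) {
    content += 'a';                            // pure string work, no DOM access
}
div.innerHTML = content;                       // one write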
2. HTML Collections and DOM Traversal
Another expensive DOM operation is traversal. Typically we grab an HTML collection, for example with getElementsByTagName(), or via document.links, and so on; I'm sure these are familiar. The result is an array-like collection that is "live": it updates automatically whenever the underlying document changes. What does that mean? It's easiest to show with an example:
<body>
    <ul id="fruit">
        <li>apple</li>
        <li>orange</li>
        <li>banana</li>
    </ul>
</body>
<script type="text/javascript">
    var lis = document.getElementsByTagName('li');
    var peach = document.createElement('li');
    peach.innerHTML = 'peach';
    document.getElementById('fruit').appendChild(peach);
    console.log(lis.length); // 4 -- the live collection already includes the new <li>
</script>
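The live behavior can do worse than just slow things down. The classic illustration from "High Performance JavaScript" is a loop that appends the same kind of element it is iterating over: lis.length grows on every pass, so the loop never terminates. A sketch, reusing the fruit list above (don't actually run it):

// Infinite loop: the live collection grows as fast as i does
var lis = document.getElementsByTagName('li');
for (var i = 0; i < lis.length; i++) {
    document.getElementById('fruit').appendChild(document.createElement('li'));
}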
And this liveness is the source of inefficiency! The fix is simple and the same as for array optimization: cache the length in a variable (reading a collection's length is much slower than reading a plain array's length, because the document is queried on every access):
console.time(0);
var lis0 = document.getElementsByTagName('li');
var str0 = '';
for (var i = 0; i < lis0.length; i++) {   // length re-queried on every iteration
    str0 += lis0[i].innerHTML;
}
console.timeEnd(0);

console.time(1);
var lis1 = document.getElementsByTagName('li');
var str1 = '';
for (var i = 0, len = lis1.length; i < len; i++) {   // length cached once
    str1 += lis1[i].innerHTML;
}
console.timeEnd(1);
Let's see how much of an improvement this gives. When the collection is large (the demo uses 1000 list items), the gain is obvious.
And "High performance JavaScript" proposes another optimization strategy, it points out that "because traversing an array is faster than iterating through the collection, if you copy the collection elements to an array, then the property that accesses it is faster," and, after testing, does not find the rule well, so don't be superfluous, The test code is as follows: (There are some doubts welcome to communicate with me discussion)
console.time(1);
var lis1 = document.getElementsByTagName('li');
var str1 = '';
for (var i = 0, len = lis1.length; i < len; i++) {
    str1 += lis1[i].innerHTML;
}
console.timeEnd(1);

console.time(2);
var lis2 = document.getElementsByTagName('li');
var a = [];
for (var i = 0, len = lis2.length; i < len; i++) {
    a[i] = lis2[i];   // copy the collection into a plain array first
}
var str2 = '';
for (var i = 0, len = a.length; i < len; i++) {
    str2 += a[i].innerHTML;
}
console.timeEnd(2);
The last part of this section introduces two native DOM methods, querySelector() and querySelectorAll(). I'm sure everyone is familiar with them: querySelectorAll() returns an array-like NodeList (note that, unlike an HTML collection, it is a static snapshot rather than live), and querySelector() returns the first matching element. That said, they are not always faster than fetching and traversing an HTML collection:
console.time(1);
var lis1 = document.getElementsByTagName('li');
console.timeEnd(1);

console.time(2);
var lis2 = document.querySelectorAll('li');
console.timeEnd(2);

// 1: 0.038ms
// 2: 3.957ms
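To see the "static" part in action, here is a small sketch reusing the fruit list from the earlier example: after a new <li> is appended, the live collection grows but the NodeList from querySelectorAll() keeps its old length.

var liveLis = document.getElementsByTagName('li');   // live HTML collection
var staticLis = document.querySelectorAll('li');     // static NodeList
document.getElementById('fruit').appendChild(document.createElement('li'));
console.log(liveLis.length);   // reflects the new <li>
console.log(staticLis.length); // unchanged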
Where querySelectorAll() shines is combined, CSS-style selectors: for queries like the ones below it is both more convenient and more efficient than assembling the result by hand. For example:
var elements = document.querySelectorAll('#menu a');
var elements = document.querySelectorAll('div.warning, div.notice');
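For comparison, a rough sketch of doing the first query by hand without querySelectorAll() (assuming an element with id 'menu' exists): you fetch the container first and then its descendant links yourself.

// The '#menu a' query assembled manually
var menu = document.getElementById('menu');
var menuLinks = menu ? menu.getElementsByTagName('a') : [];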
That covers high-performance JavaScript DOM programming. I hope it is clear and helps you in your study.