This article comes from a question on Zhihu.
For the sake of readability, we often use ES6 destructuring assignment in function parameters:
```js
function f({ a, b }) {}
f({ a: 1, b: 2 });
```
In this example, does the function call actually create an object? If so, a large number of calls would generate a lot of temporary objects for the GC to release. Does that mean that, when a function takes only a few parameters, we should avoid destructured parameters and use the traditional form instead?
```js
function f(a, b) {}
f(1, 2);
```
This actually raises several questions at once:
- Will it produce an object?
- When a function has only a few parameters, should we avoid destructured parameters?
- How big is the impact on performance (CPU/memory)?
1. Analyzing the two versions from the V8 bytecode
First, for the code sample given above, the call does indeed create an object. In a real project, however, there is a good chance that the temporary object never actually needs to be allocated.
I have previously written an article on using d8 to analyze how the V8 engine optimizes JavaScript, so let's analyze the sample code the same way.
```js
function f(a, b) {
  return a + b;
}
const d = f(1, 2);
```
Since many people do not have d8 at hand, we will use Node.js instead. Run:
```bash
node --print-bytecode add.js
```
The `--print-bytecode` flag lets you see the bytecode generated by the V8 engine. In the output, look for `[generating bytecode for function: f]`:
```
[generating bytecode for function: ]
Parameter count 6
  0000003ac126862a @  0 : 6e xx        CreateClosure [0], [0], #2
  0000003ac126862e @  4 : 1e fb        Star r0
  0000003ac1268630 @  6 : 91           StackCheck
  0000003ac1268631 @  7 : 03           LdaSmi [1]
  0000003ac1268633 @  9 : 1e f9        Star r2
  0000003ac1268635 @ 11 : 03           LdaSmi [2]
  0000003ac1268637 @ 13 : 1e f8        Star r3
  0000003ac1268639 @ 15 : 51 fb f9 f8  CallUndefinedReceiver2 r0, r2, r3, [1]
  0000003ac126863e @ 20 : 04           LdaUndefined
  0000003ac126863f @ 21 : 95           Return
Constant pool (size = 1)
Handler Table (size = 16)

[generating bytecode for function: f]
Parameter count 3
Frame size 0
  0000003ac1268a6a @  0 : 91           StackCheck
  0000003ac1268a6b @  1 : 1d 02        Ldar a1
  0000003ac1268a6d @  3 : 2b           Add a0, [0]
  0000003ac1268a70 @  6 : 95           Return
Constant pool (size = 0)
Handler Table (size = 16)
```
`Star r0` stores the value currently in the accumulator into register r0, and `LdaSmi [1]` loads the small integer (Smi) 1 into the accumulator register. The body of `f` itself consists of only two instructions: `Ldar a1` and `Add a0, [0]`.
When we use destructuring assignment instead:
```
[generating bytecode for function: ]
Parameter count 6
  000000d24a568662 @  0 : 6e xx        CreateClosure [0], [0], #2
  000000d24a568666 @  4 : 1e fb        Star r0
  000000d24a568668 @  6 : 91           StackCheck
  000000d24a568669 @  7 : 6c f9        CreateObjectLiteral [1], [3], #41, r2
  000000d24a56866e @ 12 : 50 fb f9     CallUndefinedReceiver1 r0, r2, [1]
  000000d24a568672 @ 16 : 04           LdaUndefined
  000000d24a568673 @ 17 : 95           Return
Constant pool (size = 2)
Handler Table (size = 16)

[generating bytecode for function: f]
Parameter count 2
  000000d24a568aea @  0 : 91           StackCheck
  000000d24a568aeb @  1 : 1f fb        Mov a0, r0
  000000d24a568aee @  4 : 1d fb        Ldar r0
  000000d24a568af0 @  6 : 89           JumpIfUndefined [6] (000000d24a568af6 @ 12)
  000000d24a568af2 @  8 : 1d fb        Ldar r0
  000000d24a568af4 @ 10 : 88           JumpIfNotNull [16] (000000d24a568b04 @ 26)
  000000d24a568af6 @ 12 : 03 3f        LdaSmi [63]
  000000d24a568af8 @ 14 : 1e f8        Star r3
  000000d24a568afa @ 16 : 09           LdaConstant [0]
  000000d24a568afc @ 18 : 1e f7        Star r4
  000000d24a568afe @ 20 : 53 e8 xx f8  CallRuntime [NewTypeError], r3-r4
  000000d24a568b03 @ 25 : 93           Throw
  000000d24a568b04 @ 26 : 20 fb        LdaNamedProperty r0, [0], [2]
  000000d24a568b08 @ 30 : 1e fa        Star r1
  000000d24a568b0a @ 32 : 20 fb        LdaNamedProperty r0, [1], [4]
  000000d24a568b0e @ 36 : 1e f9        Star r2
  000000d24a568b10 @ 38 : 1d f9        Ldar r2
  000000d24a568b12 @ 40 : 2b fa        Add r1, [6]
  000000d24a568b15 @ 43 : 95           Return
Constant pool (size = 2)
Handler Table (size = 16)
```
As we can see, the amount of bytecode has grown considerably, and `CreateObjectLiteral` creates an object. A function that previously needed only 2 core instructions now needs nearly 20, including instructions such as `JumpIfUndefined`, `CallRuntime`, and `Throw`.
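To see where those extra instructions come from, it helps to spell out roughly what the destructured parameter does in plain JavaScript. The following is only a conceptual sketch (the error message and exact checks are illustrative); V8 does not literally rewrite the source, but its bytecode performs equivalent steps:

```js
// Rough hand-written equivalent of `function f({a, b}) { return a + b; }`.
function f(arg) {
  // JumpIfUndefined / JumpIfNotNull -> CallRuntime [NewTypeError] -> Throw
  if (arg === undefined || arg === null) {
    throw new TypeError('Cannot destructure parameter of undefined or null');
  }
  // LdaNamedProperty + Star for each destructured property
  const a = arg.a;
  const b = arg.b;
  // Ldar / Add / Return
  return a + b;
}
```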
- Further reading: Understanding V8's Bytecode (translation)
2. Using the --trace-gc flag to inspect memory
Because the memory footprint of a single call is tiny, we add a loop:
```js
function f(a, b) {
  return a + b;
}
for (let i = 0; i < 1e8; i++) {
  const d = f(1, 2);
}
console.log(%GetHeapUsage());
```
`%GetHeapUsage()` is a somewhat special function: its name starts with a percent sign (%). Functions like this are used by the V8 engine for internal debugging, and we can call them from our scripts by passing the command-line flag `--allow-natives-syntax`.
```bash
node --trace-gc --allow-natives-syntax add.js
```
This gives the following output (I have adjusted the formatting for readability):
```
[10192:0000000000427F50]   26 ms: Scavenge 3.4 (6.3) -> 3.1 (7.3) MB, 1.3 / 0.0 ms  allocation failure
[10192:0000000000427F50]   34 ms: Scavenge 3.6 (7.3) -> 3.5 (8.3) MB, 0.8 / 0.0 ms  allocation failure
4424128
```
Now let's try the destructured version of the benchmark.
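The original answer does not show the source for this version, so the sketch below is an assumed equivalent (same iteration count, with the arguments wrapped in an object literal):

```js
// Assumed destructured counterpart of add.js; %GetHeapUsage() again
// requires --allow-natives-syntax.
function f({ a, b }) {
  return a + b;
}
for (let i = 0; i < 1e8; i++) {
  const d = f({ a: 1, b: 2 });
}
console.log(%GetHeapUsage());
```

Running the same `node --trace-gc --allow-natives-syntax add.js` command against this version produces: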
```
[7812:00000000004513E0]   27 ms: Scavenge 3.4 (6.3) -> 3.1 (7.3) MB, 1.0 / 0.0 ms  allocation failure
[7812:00000000004513E0]   36 ms: Scavenge 3.6 (7.3) -> 3.5 (8.3) MB, 0.7 / 0.0 ms  allocation failure
[7812:00000000004513E0]   56 ms: Scavenge 4.6 (8.3) -> 4.1 (11.3) MB, 0.5 / 0.0 ms  allocation failure
4989872
```
You can see that there are more memory allocations and that heap usage ends up higher than before. With the `--trace_gc_verbose` flag you can get more detailed information about the GC; it shows that these allocations land in the new generation (young space), so the cost of cleaning them up is still relatively small.
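For reference, the flag can simply be appended to the same command line (V8 generally accepts either dashes or underscores in its flag names):

```bash
node --trace-gc --trace_gc_verbose --allow-natives-syntax add.js
```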
3. Escape Analysis
With escape analysis, the V8 engine can eliminate such temporary objects altogether.
Consider the earlier function again:
```js
function add({ a, b }) {
  return a + b;
}
```
Suppose we also have a function, `double`, that doubles a number:
```js
function double(x) {
  return add({ a: x, b: x });
}
```
This `double` function will eventually be compiled into:
```js
function double(x) {
  return x + x;
}
```
Inside the V8 engine, the escape analysis process is performed as follows:
First, introduce an intermediate variable:
```js
function add(o) {
  return o.a + o.b;
}
function double(x) {
  let o = { a: x, b: x };
  return add(o);
}
```
Then inline the call to `add`, which gives:
```js
function double(x) {
  let o = { a: x, b: x };
  return o.a + o.b;
}
```
Next, replace the field accesses with the values stored in them:
```js
function double(x) {
  let o = { a: x, b: x };
  return x + x;
}
```
Finally, delete the allocation, which is no longer used:
```js
function double(x) {
  return x + x;
}
```
Through V8's escape analysis, the object that would otherwise have been allocated on the heap is eliminated.
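If you want to check whether escape analysis actually kicks in on your machine, one way to poke at it is to run a hot loop under V8's internal tracing flag. Both the flag and its output format are engine internals that vary with the V8 version bundled in your Node.js, so treat this purely as an exploratory sketch (the file name escape.js is arbitrary):

```js
// escape.js -- run with: node --trace-turbo-escape escape.js
// The JavaScript below runs fine without the flag; the flag only adds
// (version-dependent) diagnostic output from the escape analysis pass.
function add({ a, b }) {
  return a + b;
}
function double(x) {
  return add({ a: x, b: x });
}

let sum = 0;
for (let i = 0; i < 1e7; i++) {
  sum += double(i); // hot loop so TurboFan optimizes and can inline `add`
}
console.log(sum);
```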
4. Conclusion
Do not perform this kind of micro-optimization at the syntax level; the engine will optimize it for you, and business code should put more weight on readability and maintainability. If you are writing library code, you can try this optimization and pass the parameters directly, but how much performance you actually gain can only be determined by the final benchmark.
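As a starting point, such a micro-benchmark might look like the sketch below. It is deliberately naive (one process, no separate warm-up phase, arbitrary iteration count), so treat the printed numbers as a rough signal only and prefer a proper benchmarking harness for real decisions:

```js
// Naive micro-benchmark sketch comparing plain vs. destructured parameters.
// Results depend heavily on engine version, inlining and escape analysis.
function addPlain(a, b) {
  return a + b;
}
function addDestructured({ a, b }) {
  return a + b;
}

function bench(label, fn) {
  const start = process.hrtime.bigint();
  let sum = 0;
  for (let i = 0; i < 1e7; i++) sum += fn(i);
  const ms = Number(process.hrtime.bigint() - start) / 1e6;
  console.log(`${label}: ${ms.toFixed(1)} ms (checksum ${sum})`);
}

bench('plain', (i) => addPlain(i, i));
bench('destructured', (i) => addDestructured({ a: i, b: i }));
```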
Engines also keep improving over time. For example, Chrome 49 started to support Proxy, but it was only a year later, in Chrome 62, that Proxy performance was improved, by 24% to 546% overall.
Original address: https://www.zhihu.com/question/282228797/answer/427739238
Original question: Does using ES6 destructuring assignment in function parameters create an object on every call? Does it add to the GC's burden?