Contents
- STL Container Benchmark
- Data Oriented Design
STL Container Benchmark
C++ benchmark - std::vector vs std::list vs std::deque
C++ benchmark - std::vector vs std::list
The conventional wisdom is: std::list is faster for frequent insertions and deletions in the middle of a sequence; std::deque is faster for frequent insertions at the front and back; and std::vector is faster for search and random access, since compared with the other two it makes much better use of the cache. The articles above compare the individual operations in detail and are a useful reference.
It is worth noting that the reason std::list is slowest on "random insert + linear search" even for small element sizes is most likely still cache misses: because each node is allocated from a separate memory block, traversal hops between scattered addresses and the miss rate rises.
Data Oriented Design
Traditional complexity analysis usually does not treat cache locality as an important factor when selecting a data structure. Yet once a cache miss occurs, the processor must wait many CPU cycles to fetch the data from memory, especially on NUMA-architecture processors. This motivates a design approach called Data Oriented Design (DOD), which puts the memory layout of data at the center of the design. Refer to:
Pitfalls of Object Oriented Programming
Note that the title refers to the pitfalls of OOP: the talk does not argue OOP versus procedural programming paradigms, but examines the impact of OOP on performance. The following are some articles related to DOD:
Data-oriented design (or why you might be shooting yourself in the foot with OOP)
Data-oriented design now and in the future
Introduction to Data Oriented Design
DOD is of little significance for performance-insensitive modules (the 80/20 principle), but it is important for modules with high performance requirements.
In short, optimization is a deep subject.