A common programming question:
Which is faster: traversing an array or traversing a linked list of the same size? According to the algorithm-analysis methods taught in university textbooks, you would conclude that the two are equally fast, since both traversals have time complexity O(n). In practice, however, they differ greatly. The analysis below shows that arrays are much faster than linked lists.
First, let's introduce a concept: the memory hierarchy. A computer contains several kinds of storage, listed below.
- CPU registers: immediate access (0-1 CPU clock cycles)
- CPU L1 cache: fast access (3 CPU clock cycles)
- CPU L2 cache: slightly slower access (10 CPU clock cycles)
- Main memory (RAM): slow access (100 CPU clock cycles)
- Hard disk (file system): very slow access (10,000,000 CPU clock cycles)
(Data comes from http://www.answers.com/topic/locality-of-reference)
Access speed differs enormously between levels: CPU registers are about 100 times faster than main memory! This is why CPU vendors invented the CPU cache, and the cache is the key to the difference between arrays and linked lists.
The CPU cache loads a contiguous block of memory at a time. Because an array occupies sequential memory addresses, all or most of its elements end up in the cache together, and reading each element takes on average only 3 clock cycles. Linked-list nodes, by contrast, are scattered across the heap, so the cache is of little help: each read goes out to main memory and averages 100 clock cycles. By this estimate, array access is 33 times faster than linked-list access! (These numbers only illustrate the concept; the exact figures vary by CPU.)
Therefore, a program should prefer contiguous data structures whenever possible, so that the CPU cache can work at full power. Algorithms designed to exploit the cache in this way are studied under the name cache-oblivious algorithms; refer to the relevant literature if you are interested. Here is another simple example.
Compare:

```
for i in 0 .. n
    for j in 0 .. m
        for k in 0 .. p
            C[i][j] = C[i][j] + A[i][k] * B[k][j];
```

and:

```
for i in 0 .. n
    for k in 0 .. p
        for j in 0 .. m
            C[i][j] = C[i][j] + A[i][k] * B[k][j];
```
Both versions compute the same result and have the same asymptotic complexity, yet the second runs much faster: with j in the innermost loop, B[k][j] and C[i][j] are accessed at consecutive memory addresses, so the reads and writes stay in cache, whereas the first version strides down a column of B on every inner iteration.
To sum up, the speeds of the different storage levels vary enormously, and programs must take this into account. For example, by the figures above main memory is roughly 100,000 times faster than the hard disk, so frequent disk reads and writes should be avoided; the CPU cache is dozens of times faster than main memory, so programs should keep their working data compact and contiguous to make the most of it.
> The copyright of the original article belongs to the author. When reprinting, please indicate the source and author (http://blog.csdn.net/WinGeek/). Thank you.