Comparison of unordered_map (hash_map) and map

Test code:
#include <cstdio>
#include <cstdlib>
#include <ctime>
#include <map>
#include <unordered_map>
#include <windows.h>   // Sleep(); this test targets Windows

const int maxval = 2000000 * 5;

void map_test() {
    printf("map_test\n");
    std::map<int, int> mp;
    clock_t startTime, endTime;

    startTime = clock();
    for (int i = 0; i < maxval; i++) {
        mp[rand() % maxval]++;
    }
    endTime = clock();
    printf("%lf\n", (double)(endTime - startTime) / CLOCKS_PER_SEC);
    printf("insert finish\n");

    startTime = clock();
    for (int i = 0; i < maxval; i++) {
        if (mp.find(rand() % maxval) == mp.end()) {
            // printf("not found\n");
        }
    }
    endTime = clock();
    printf("%lf\n", (double)(endTime - startTime) / CLOCKS_PER_SEC);
    printf("find finish\n");

    startTime = clock();
    for (auto it = mp.begin(); it != mp.end(); it++) {
        // traversal only; the loop body is intentionally empty
    }
    endTime = clock();
    printf("%lf\n", (double)(endTime - startTime) / CLOCKS_PER_SEC);
    printf("travel finish\n");
    printf("------------------------------------------------\n");
}

void hash_map_test() {
    printf("hash_map_test\n");
    std::unordered_map<int, int> mp;
    clock_t startTime, endTime;

    startTime = clock();
    for (int i = 0; i < maxval; i++) {
        mp[rand() % maxval]++;
    }
    endTime = clock();
    printf("%lf\n", (double)(endTime - startTime) / CLOCKS_PER_SEC);
    printf("insert finish\n");

    startTime = clock();
    for (int i = 0; i < maxval; i++) {
        if (mp.find(rand() % maxval) == mp.end()) {
            // printf("not found\n");
        }
    }
    endTime = clock();
    printf("%lf\n", (double)(endTime - startTime) / CLOCKS_PER_SEC);
    printf("find finish\n");

    startTime = clock();
    for (auto it = mp.begin(); it != mp.end(); it++) {
        // traversal only
    }
    endTime = clock();
    printf("%lf\n", (double)(endTime - startTime) / CLOCKS_PER_SEC);
    printf("travel finish\n");
    printf("------------------------------------------------\n");
}

int main(int argc, char *argv[]) {
    srand(0);        // same seed for both tests, so both see the same key sequence
    map_test();
    Sleep(1000);     // Windows-only pause between the two runs
    srand(0);
    hash_map_test();
    system("pause"); // Windows-only; keeps the console window open
    return 0;
}
Analysis:
map (implemented with a red-black tree) compared with unordered_map (a hash map):
- map: insert and query are O(log n) in theory
- unordered_map: insert and query are O(1) in theory
When the amount of data is small, unordered_map's initial bucket count is small, the load factor keeps hitting the rehash threshold, and the repeated rehashes make its insertion time slightly larger (similar to how a vector grows by reallocating).
The hash function itself also has a cost (constant time per call); when that constant cost exceeds the cost of a red-black-tree lookup (O(log n)), an unordered_map lookup actually takes longer than a map lookup.
When the amount of data is large, rehashes happen rarely and their amortized cost is small, so unordered_map's O(1) advantage starts to show.
The larger the data volume, the more obvious the advantage.
Memory usage:
In the measurements, the first half is map and the second half is unordered_map.
unordered_map occupies slightly more space than map, but the overhead is acceptable.
Internally, unordered_map grows its table by doubling once a threshold is reached (16, 32, 64, 128, 256, 512, 1024, ...), so wasting a certain amount of space is unavoidable. When doubling, if the table cannot grow in place, a new block is allocated elsewhere and the existing data is moved over; this data movement also consumes time, and is especially noticeable when the data volume is small.
One alternative is to hand-write a fixed-size hash table. This works well when the data volume is small (it avoids the repeated data movement and genuinely approaches O(1)). But because the size is fixed, once the data volume is large, collisions become severe, the hashing degrades sharply, and the time complexity approaches O(n).
A compromise is to hand-write your own unordered_map (hash map) and give it a larger initial size. Growth can mimic the STL's doubling, or use some other scheme entirely. This performs best, but it is quite cumbersome to implement.
Weighing the pros and cons, our group adopted unordered_map.
Appendix: why do the same tests behave so differently under Dev-C++ and VS2017?
The measured speed differs by roughly a factor of ten.
Reason:
Dev-C++
VS2017
In a Debug build, recording debugging information (breakpoints and the like) genuinely slows the program down.
In a Release build, the source is not set up for debugging; the compiler optimizes the application for size and speed, producing the best-performing binary.
Switching VS2017 to Release also makes it much faster.
Besides the Debug/Release difference above, differences between the compilers themselves also lead to efficiency differences.
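For reference, on the Dev-C++ side (which ships with MinGW g++) the Debug/Release distinction corresponds roughly to the following flag sets. These are typical defaults, not the IDE's exact command lines, and main.cpp is a stand-in name for the test file above:

```shell
# Debug-style build: no optimization, debug info kept
g++ -O0 -g main.cpp -o map_test_debug

# Release-style build: optimized, asserts compiled out
g++ -O2 -DNDEBUG main.cpp -o map_test_release
```

Comparing timings across IDEs is only meaningful when both sides use comparable optimization settings.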
Lesson learned.