When you need a larger HashMap, take full advantage of the two-argument constructor: public HashMap(int initialCapacity, float loadFactor). The capacity is the number of buckets in the hash table, and the initial capacity is simply the capacity at the time the table is created. The load factor is a measure of how full the hash table is allowed to get before its capacity is automatically increased: when the number of entries exceeds the product of the load factor and the current capacity, the table is rebuilt (rehashed) with roughly double the number of buckets.
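A minimal sketch of the two-argument constructor in use (the map contents here are just illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class PresizedMapDemo {
    public static void main(String[] args) {
        // 64 buckets up front; a resize is only triggered once the entry
        // count exceeds 64 * 0.75 = 48.
        Map<String, Integer> counts = new HashMap<>(64, 0.75f);
        for (int i = 0; i < 48; i++) {
            counts.put("key" + i, i); // these 48 entries cause no resize
        }
        System.out.println(counts.size()); // prints 48
    }
}
```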
Repeated rehashing should be avoided: resizing a HashMap is an expensive operation. The default initialCapacity is only 16 and the default loadFactor is 0.75, so the more accurately you can estimate the capacity you actually need, the better. The same applies to Hashtable and Vector.
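One common way to turn an expected entry count into an initial capacity under the default 0.75 load factor is the formula below. The helper name capacityFor is my own illustration (Guava's Maps.newHashMapWithExpectedSize uses the same idea), not part of the JDK:

```java
import java.util.HashMap;
import java.util.Map;

public class CapacityEstimate {
    // Smallest initial capacity that holds expectedSize entries without a
    // resize at the default 0.75 load factor.
    static int capacityFor(int expectedSize) {
        return (int) (expectedSize / 0.75f) + 1;
    }

    public static void main(String[] args) {
        int expected = 1000;
        // Request enough capacity for 1000 entries up front.
        Map<String, String> m = new HashMap<>(capacityFor(expected));
        System.out.println(capacityFor(expected)); // prints 1334
    }
}
```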
Conversely, if our capacity requirements for the HashMap are modest, handing it a default capacity of 10,000 obviously wastes valuable space. Choosing these two parameters is a matter of judgment; you can even adjust them dynamically by analyzing historical data, finding patterns, or predicting future trends. For example, if business A is busy from 8 to 9 in the morning, its HashMap can be given extra space in advance; after 10 o'clock, when business B's HashMap is the busier one and A is relatively idle, space can be shifted from A to B.
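A sketch of that time-of-day idea, assuming the capacity schedule is known in advance; the hours and capacity numbers here are invented for illustration, not measured values:

```java
import java.time.LocalTime;
import java.util.HashMap;
import java.util.Map;

public class TimeAwareSizing {
    // Illustrative capacity schedule derived from (hypothetical) traffic history.
    static int capacityForHour(int hour) {
        if (hour >= 8 && hour < 10) return 4096;  // business A peak
        if (hour >= 10 && hour < 12) return 1024; // business B busier, A idle
        return 256;                               // off-peak default
    }

    public static void main(String[] args) {
        int hour = LocalTime.now().getHour();
        // Size the map for the current traffic pattern before filling it.
        Map<String, String> cache = new HashMap<>(capacityForHour(hour));
        System.out.println(cache.isEmpty()); // prints true
    }
}
```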
If you read records from a database table into a HashMap, you can initialize the HashMap's capacity from the number of rows in the result set. This minimizes the number of rehashes while also keeping the HashMap's capacity close to the minimum it needs:
For example, obtain the row count with a SQL statement: SELECT COUNT(field) AS rowSize FROM table
Suppose the row count returned is rowSize = 30.
Then define the HashMap as: HashMap<String, String> h = new HashMap<String, String>(rowSize, 1f);
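Putting the pieces together as a runnable sketch (the row data here is simulated; in practice rowSize would come from the COUNT query above):

```java
import java.util.HashMap;
import java.util.Map;

public class RowSizedMap {
    public static void main(String[] args) {
        // Suppose SELECT COUNT(field) AS rowSize FROM table returned 30.
        int rowSize = 30;

        // HashMap rounds the requested capacity up to the next power of two,
        // so this map gets 32 buckets and a resize threshold of 32 * 1.0 = 32:
        // all 30 rows fit without a single rehash.
        Map<String, String> h = new HashMap<>(rowSize, 1f);
        for (int i = 0; i < rowSize; i++) {
            h.put("row" + i, "value" + i); // stand-in for the fetched rows
        }
        System.out.println(h.size()); // prints 30
    }
}
```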