Progressive rehash
In the previous section, we looked at the dictionary rehash process. Notably, once a rehash is triggered it is not carried out immediately and in one go until completion; instead, it is completed in multiple steps, progressively.
Suppose a user triggers a rehash by adding a new key-value pair to a dictionary that already holds a great many key-value pairs. If the rehash only returned control to the user after every key-value pair had been migrated, the approach would be very unfriendly: the server would be blocked until the rehash completed, which is unacceptable for the Redis server.
To solve this problem, Redis uses a progressive (incremental) rehash: the rehash work is spread across multiple steps, avoiding a burst of centralized computation.
Progressive rehash is implemented mainly by two functions, _dictRehashStep and dictRehashMilliseconds:
- _dictRehashStep passively rehashes the database dictionaries and the dictionaries of hash keys;
- dictRehashMilliseconds is run by the Redis server's periodic task program (the server cron job) to actively rehash the database dictionaries.
_dictRehashStep
Each time _dictRehashStep is executed, all nodes on the first non-empty index of the ht[0]->table hash table are migrated to ht[1]->table. Once a rehash has started (that is, d->rehashidx is not -1), _dictRehashStep runs once for every add, find, or delete operation performed on the dictionary.
Because the dictionary keeps the ratio of hash table size to node count within a small range, there are never many nodes on any single index: on average there is just one, and usually no more than five. So migrating the nodes on a single index while performing an operation has almost no effect on response time.
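The single-bucket migration step can be sketched in Python. This is a toy model that uses plain lists of buckets in place of ht[0]->table and ht[1]->table; it only illustrates the idea, it is not the real dict.c code:

```python
def rehash_step(ht0, ht1, rehashidx):
    """Migrate every node on the first non-empty index of ht0 to ht1.

    ht0/ht1 are lists of buckets (lists of keys). Returns the updated
    rehashidx, or -1 once ht0 has been fully migrated.
    """
    # Skip empty buckets to find the first non-empty index.
    while rehashidx < len(ht0) and not ht0[rehashidx]:
        rehashidx += 1
    if rehashidx >= len(ht0):
        return -1  # rehash complete
    # Move the whole chain at this index into ht1.
    for key in ht0[rehashidx]:
        ht1[hash(key) % len(ht1)].append(key)
    ht0[rehashidx] = []
    return rehashidx + 1
```

Calling this once per dictionary operation is exactly the "one small step at a time" behavior described above.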
dictRehashMilliseconds
dictRehashMilliseconds rehashes a dictionary for a specified number of milliseconds. When the Redis server runs its periodic tasks, dictRehashMilliseconds is executed to rehash the database dictionaries within the specified time, accelerating the progress of the database dictionary rehash.
Other measures
While a hash table is being rehashed, the dictionary also takes some special measures to ensure that the rehash proceeds smoothly and correctly:
- Because the dictionary uses two hash tables simultaneously during a rehash, operations such as lookup and deletion are performed not only on ht[0] but also on ht[1];
- During an add operation, the new node is added directly to ht[1] rather than ht[0], ensuring that the number of nodes in ht[0] never increases while the rehash is in progress.
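These two measures can be modeled in a few lines of Python (MiniDict is a hypothetical toy class, not the real dict.c structure):

```python
class MiniDict:
    """Toy model of a Redis dict mid-rehash: ht[0] and ht[1] are both
    live, lookups consult both tables, and adds go only to ht[1]."""

    def __init__(self, ht0, ht1):
        self.ht = [ht0, ht1]  # each is a list of buckets (lists of keys)
        self.rehashing = True

    def _bucket(self, table, key):
        return self.ht[table][hash(key) % len(self.ht[table])]

    def find(self, key):
        # Search ht[0] first, then ht[1] while a rehash is in progress.
        for t in (0, 1) if self.rehashing else (0,):
            if key in self._bucket(t, key):
                return True
        return False

    def add(self, key):
        # New keys go straight into ht[1], so ht[0] never grows
        # during the rehash.
        table = 1 if self.rehashing else 0
        self._bucket(table, key).append(key)
```

A lookup that misses ht[0] still succeeds via ht[1], and every insertion lands in ht[1], matching the guarantees described above.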
Dictionary Shrinkage
The previous section on rehash described how a rehash is used to expand (extend) a dictionary. Conversely, if a hash table has far more slots than used nodes, a rehash can also be performed on it to shrink (contract) the dictionary.
A shrinking rehash is almost identical to the expanding rehash shown above. It performs the following steps:
1. Create an ht[1]->table smaller than ht[0]->table;
2. Migrate all key-value pairs from ht[0]->table to ht[1]->table;
3. Free the original ht[0] and make ht[1] the new ht[0].
The processes of an expanding rehash and a shrinking rehash are exactly the same; whether a given rehash expands or shrinks the dictionary depends entirely on the size allocated for the new ht[1]->table:
- If the rehash is an expansion, ht[1]->table is larger than ht[0]->table;
- If the rehash is a contraction, ht[1]->table is smaller than ht[0]->table.
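The sizing rule can be illustrated with a short Python sketch, assuming the power-of-two table sizing used by dict.c's _dictNextPower (with DICT_HT_INITIAL_SIZE defaulting to 4); the new_table_size helper is hypothetical:

```python
DICT_HT_INITIAL_SIZE = 4  # minimum table size in Redis's dict.c

def next_power(size):
    """Smallest power of two >= size (and >= DICT_HT_INITIAL_SIZE),
    mirroring _dictNextPower's behavior."""
    i = DICT_HT_INITIAL_SIZE
    while i < size:
        i *= 2
    return i

def new_table_size(used, expanding):
    # Expanding: size the new ht[1] for roughly twice the used nodes.
    # Shrinking: size it just big enough to hold the used nodes.
    return next_power(used * 2 if expanding else used)
```

For 100 used nodes, an expansion allocates a 256-slot ht[1], while a contraction allocates a 128-slot one; the migration procedure itself is identical.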
The contraction rule for the dictionary is defined by the redis.c/htNeedsResize function:
/* Check whether the dictionary's fill ratio is below the minimum
 * allowed by the system. Returns 1 if so, 0 otherwise. */
int htNeedsResize(dict *dict) {
    long size, used;

    size = dictSlots(dict);  /* total slots in the hash tables */
    used = dictSize(dict);   /* number of nodes currently stored */

    /* Shrink when the table is larger than DICT_HT_INITIAL_SIZE
     * and the fill ratio is below REDIS_HT_MINFILL. */
    return (size && used && size > DICT_HT_INITIAL_SIZE &&
            (used * 100 / size < REDIS_HT_MINFILL));
}
By default, REDIS_HT_MINFILL is 10: when the dictionary's fill ratio falls below 10%, the contraction operation may be performed on the dictionary.
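A direct Python rendering of the same check makes the rule easy to experiment with (ht_needs_resize is a hypothetical helper mirroring the C function above):

```python
DICT_HT_INITIAL_SIZE = 4
REDIS_HT_MINFILL = 10  # minimum fill percentage before shrinking

def ht_needs_resize(size, used):
    """Return True when the table should shrink: it is bigger than the
    initial size and less than REDIS_HT_MINFILL percent full.
    `size` = total slots, `used` = nodes stored."""
    return bool(size and used and size > DICT_HT_INITIAL_SIZE
                and used * 100 // size < REDIS_HT_MINFILL)
```

For example, a 1024-slot table holding 50 nodes is about 4% full and qualifies for shrinking, while one holding 200 nodes (about 19% full) does not.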
One difference between dictionary contraction and dictionary expansion:
- The expansion operation is triggered automatically (whether a natural expansion or a forced one);
- The contraction operation is executed manually by the program.
Therefore, the program that uses the dictionary decides when to shrink it:
- When a dictionary implements a hash key, each time a key-value pair is deleted from it the program runs the htNeedsResize function, and if the dictionary meets the contraction criterion, it is shrunk immediately;
- When a dictionary implements the database key space, contraction is decided by the redis.c/tryResizeHashTables function.
Other dictionary operations
In addition to adding and stretching/shrinking operations, the dictionary also defines other operations, such as common search, deletion, and more
New.
Because the information about the implementation of the chain address hash table can be found in any data structure or algorithm book.
Other operations in this tutorial. However, we have discussed how to create a dictionary, add key-value pairs, contract, and expand rehash.
The core content of the dictionary module.
Dictionary Iteration
The dictionary comes with its own iterator implementation; iterating a dictionary is really iterating the hash tables it uses:
- The iterator first iterates the dictionary's first hash table and, if a rehash is in progress, then continues with the second hash table.
- When iterating a hash table, it finds the first non-empty index and iterates all nodes on that index; when that index is done, it looks for the next non-empty index, looping until the whole hash table has been iterated.
The entire iteration process can be expressed as follows using pseudo code:
def iter_dict(dict):
    # Iterate hash table 0
    iter_table(dict.ht[0].table)
    # If a rehash is in progress, also iterate hash table 1
    if dict.is_rehashing():
        iter_table(dict.ht[1].table)

def iter_table(table):
    # Traverse every index of the hash table
    for index in table:
        # Skip empty indexes
        if table[index].empty():
            continue
        # Traverse all nodes on this index
        for node in table[index]:
            # Process the node
            do_something_with(node)
There are two kinds of dictionary iterators:
- Safe iterator: the dictionary may be modified during iteration;
- Unsafe iterator: the dictionary must not be modified during iteration.
The data structure definition of the iterator is as follows:
/* Dictionary iterator */
typedef struct dictIterator {
    dict *d;              /* the dictionary being iterated */
    int table,            /* number of the hash table being iterated (0 or 1) */
        index,            /* index of the hash table array being iterated */
        safe;             /* is this a safe iterator? */
    dictEntry *entry,     /* current hash node */
              *nextEntry; /* successor of the current hash node */
} dictIterator;
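The nextEntry field exists so that a safe iterator can keep going even if the caller deletes the current node. A small Python sketch of the idea (Node and iterate_bucket are illustrative helpers, not from dict.c):

```python
class Node:
    """One node in a bucket chain."""
    def __init__(self, key, next=None):
        self.key = key
        self.next = next

def iterate_bucket(head):
    """Yield each node in a bucket chain, caching the successor before
    yielding — mirroring dictIterator's nextEntry field, which lets a
    safe iterator survive deletion of the current node."""
    entry = head
    while entry is not None:
        next_entry = entry.next  # saved first: caller may unlink `entry`
        yield entry
        entry = next_entry
```

Even if the caller severs the current node's link while processing it, the cached successor keeps the traversal intact.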
Summary
A dictionary is an abstract data structure composed of key-value pairs.
The databases and hash keys in Redis are implemented based on dictionaries.
The underlying implementation of the Redis dictionary is the hash table. Each dictionary uses two hash tables; normally only hash table 0 is used, and hash tables 0 and 1 are used simultaneously only during a rehash.
Hash tables resolve key conflicts with separate chaining (the chained-address method).
Rehash can be used to expand or contract a hash table.
Rehash of a hash table is performed multiple times and incrementally.
This article is from the "phper - a little bit every day ~" blog; please keep this source: http://janephp.blog.51cto.com/4439680/1353930