Content-based variable-length chunking (CDC)
1. Introduction
Data block detection techniques fall into three categories: fixed-size partitioning (Fixed-Sized Partition, FSP), variable-size partitioning (Variable-Sized Partition, VSP), and sliding-block techniques (Sliding Block).
Fixed-size chunking splits the data stream at a fixed length, which makes it simple to implement, but a change in one part of the data shifts every subsequent block, so none of them can be matched. For this reason, fixed-size chunking sees little practical use. Variable-size chunking compensates for this limitation and can find duplicate data more flexibly. Content-defined chunking (Content-Defined Chunking, CDC) is one form of variable-size partitioning (VSP).
2. Theoretical basis
The theoretical basis of CDC is the Rabin fingerprint; see Michael O. Rabin's "Fingerprinting by Random Polynomials". Briefly, a Rabin fingerprint treats a bit string as the coefficients of a polynomial over GF(2) and takes its residue modulo a randomly chosen irreducible polynomial; the resulting hash can be updated incrementally as a window slides over the data.
3. Implementation
The file is divided into variable-length data blocks whose lengths lie between a specified minimum and maximum. The blocks are delimited by a sliding window: a block boundary is declared whenever the hash of the window's contents matches a chosen value, so the block sizes follow the desired distribution. With Rabin fingerprints, two integers D and r (r < D) are predefined, and a fixed window of size W slides over the file. At position k, let f be the hash of the data inside the window. If f mod D = r, then position k is a block boundary. The process repeats until the entire file has been chunked.
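A minimal sketch of this procedure in Python follows, assuming a toy polynomial hash in place of a real Rabin fingerprint; the parameter values (W, D, R, and the minimum/maximum block sizes) are illustrative, not taken from the text.

```python
W = 16            # sliding-window size in bytes
D = 64            # divisor
R = 13            # remainder (the "r" above); boundary when f mod D == R
MIN_BLOCK = 64    # minimum block length
MAX_BLOCK = 1024  # maximum block length

def window_hash(window: bytes) -> int:
    """Stand-in for the Rabin fingerprint of the window's contents."""
    h = 0
    for b in window:
        h = (h * 31 + b) & 0xFFFFFFFF  # simple polynomial hash
    return h

def cdc_chunks(data: bytes):
    """Yield variable-length blocks whose boundaries satisfy f mod D == R."""
    start = 0
    k = start + MIN_BLOCK                # never cut before the minimum size
    while start < len(data):
        if k >= len(data):               # tail of the file: emit what is left
            yield data[start:]
            return
        f = window_hash(data[max(k - W, start):k])
        if f % D == R or k - start >= MAX_BLOCK:
            yield data[start:k]          # position k is a block boundary
            start = k
            k = start + MIN_BLOCK
        else:
            k += 1                       # slide the window forward one byte
```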
The implementation is not complicated, but a hash value must be computed at every position the window slides to, which adds computation. Also, if D and r are chosen poorly, the blocks may end up too small (the boundary condition matches too easily) or too large (it rarely matches).
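The per-slide cost can be reduced to O(1) with a rolling hash, which is exactly the property the Rabin fingerprint provides. Below is a sketch using a Rabin-Karp-style rolling hash over the integers; a real Rabin fingerprint performs the same trick with polynomial arithmetic over GF(2). The base B and modulus M are illustrative choices.

```python
B = 257            # hash base
M = 1_000_000_007  # hash modulus
W = 16             # window size

def roll(data: bytes):
    """Yield the hash of every W-byte window, updating in O(1) per slide."""
    if len(data) < W:
        return
    pow_w = pow(B, W - 1, M)        # B^(W-1) mod M, to remove the oldest byte
    h = 0
    for b in data[:W]:              # hash of the first window, O(W) once
        h = (h * B + b) % M
    yield h
    for i in range(W, len(data)):
        h = (h - data[i - W] * pow_w) % M  # drop the byte leaving the window
        h = (h * B + data[i]) % M          # add the byte entering the window
        yield h
```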
4. Comparison with fixed-size chunking
Suppose we have a string of data D0: (ABCDEFGHIJKLMNOP). Fixed-size chunking gives (ABCD | EFGH | IJKL | MNOP). If the data in the middle changes, so that it becomes D1: (ABCDEF22GHIJKLMNOP), the fixed-size chunks become (ABCD | EF22 | GHIJ | KLMN | OP): except for the first block, none of the blocks match.
With CDC, assume the initial chunking is also (ABCD | EFGH | IJKL | MNOP); this means that the windows ending at D, H, and L satisfy the condition f mod D = r. When the data changes, those three windows are unchanged, so they are still recognized as boundaries, and the chunking becomes (ABCD | EF22GH | IJKL | MNOP). Only the second block, which actually changed, fails to match; the matching of the other three blocks is unaffected.
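A small script makes the comparison concrete. The CDC boundaries are hard-coded to mirror the example above (cuts after D, H, and L), since deriving them would require an actual fingerprint function.

```python
d0 = "ABCDEFGHIJKLMNOP"
d1 = "ABCDEF22GHIJKLMNOP"   # two bytes inserted in the middle

def fixed_chunks(s: str, size: int = 4):
    """Split s into fixed-size blocks."""
    return [s[i:i + size] for i in range(0, len(s), size)]

old_fixed = fixed_chunks(d0)  # ['ABCD', 'EFGH', 'IJKL', 'MNOP']
new_fixed = fixed_chunks(d1)  # ['ABCD', 'EF22', 'GHIJ', 'KLMN', 'OP']
print(sum(c in old_fixed for c in new_fixed))  # 1: only the first block matches

old_cdc = ["ABCD", "EFGH", "IJKL", "MNOP"]
new_cdc = ["ABCD", "EF22GH", "IJKL", "MNOP"]   # boundaries at D, H, L unchanged
print(sum(c in old_cdc for c in new_cdc))      # 3: only the changed block is lost
```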
CDC has long been in wide use. Its earliest applications were in low-bandwidth data transmission and synchronization: rsync, for example, uses rolling-hash block matching to detect the differences between the current backup and the previous one, so that only the differing parts need to be transferred.
References
1. Xu Dan et al., "A File Copy Algorithm Based on a Hybrid Method in Low-Bandwidth Environments."
2. Michael O. Rabin, "Fingerprinting by Random Polynomials."
3. "A URL De-duplication Algorithm Based on the Rabin Fingerprint Method."
"Go" based on content variable-length chunking (CDC)