I. Background
Data synchronization is a common need, but the replication tools that ship with the system cannot synchronize incrementally: every run copies the full data set, and after any failure the whole copy has to start over. This is a real nuisance, especially with large files, where a full copy is very time-consuming.
II. Solution
1. Compute the data fingerprint of the source directory.
2. Compute the data fingerprint of the target directory.
3. Compare the two fingerprints to find the differences: the lists of files to add, delete, or update, and the total size of the data to be transferred.
4. Synchronize only the differences each time. For large files, cache their fingerprint data in the target folder so the next synchronization can reuse it.
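Step 3 above can be sketched as a three-way set comparison over two fingerprint maps (relative path → fingerprint string). This is an illustrative sketch, not the project's actual code; the class and field names are made up:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class FingerprintDiff {

    /** Hypothetical result holder for the three difference lists. */
    public static class Diff {
        public final List<String> toAdd = new ArrayList<>();
        public final List<String> toDelete = new ArrayList<>();
        public final List<String> toUpdate = new ArrayList<>();
    }

    /**
     * Classify each relative path as added (only in source), deleted
     * (only in target), or updated (in both, but fingerprints differ).
     */
    public static Diff compare(Map<String, String> source, Map<String, String> target) {
        Diff diff = new Diff();
        for (Map.Entry<String, String> e : source.entrySet()) {
            String targetFp = target.get(e.getKey());
            if (targetFp == null) {
                diff.toAdd.add(e.getKey());        // only in source
            } else if (!targetFp.equals(e.getValue())) {
                diff.toUpdate.add(e.getKey());     // fingerprints differ
            }
        }
        for (String path : target.keySet()) {
            if (!source.containsKey(path)) {
                diff.toDelete.add(path);           // only in target
            }
        }
        return diff;
    }
}
```

Files whose fingerprints match on both sides are skipped entirely, which is what makes the synchronization incremental.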
III. Data fingerprint description
A data fingerprint, as the name implies, is a unique identifier for a folder or file. It has the form:
file relative path + ":" (delimiter) + modified date + ":" + data length + ":" + internal fingerprint
The internal fingerprint is composed of the MD5 digests of multiple content blocks.
A content block is one partition of a large file. The content block is the smallest unit of comparison and synchronization, which avoids processing the entire file on every change and is the key to achieving incremental synchronization.
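A minimal sketch of how such a block-wise fingerprint could be computed, assuming a fixed block size and `-` as a separator between block digests (the block size, separator, and method names here are illustrative assumptions, not taken from the project):

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;

public class BlockFingerprint {

    // Hypothetical block size; the real tool may use a different value.
    static final int BLOCK_SIZE = 4 * 1024 * 1024;

    /** MD5 of each fixed-size block, joined with '-': the "internal fingerprint". */
    public static String internalFingerprint(Path file) throws Exception {
        StringBuilder sb = new StringBuilder();
        try (InputStream in = Files.newInputStream(file)) {
            byte[] block = new byte[BLOCK_SIZE];
            int read;
            while ((read = readBlock(in, block)) > 0) {
                MessageDigest md5 = MessageDigest.getInstance("MD5");
                md5.update(block, 0, read);
                if (sb.length() > 0) sb.append('-');
                sb.append(toHex(md5.digest()));
            }
        }
        return sb.toString();
    }

    /** Fill the buffer as far as possible; returns bytes read (0 at EOF). */
    static int readBlock(InputStream in, byte[] buf) throws IOException {
        int off = 0;
        while (off < buf.length) {
            int n = in.read(buf, off, buf.length - off);
            if (n < 0) break;
            off += n;
        }
        return off;
    }

    static String toHex(byte[] bytes) {
        StringBuilder sb = new StringBuilder();
        for (byte b : bytes) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    /** Full fingerprint line: relative path : modified date : length : internal fingerprint. */
    public static String fingerprintLine(Path root, Path file) throws Exception {
        return root.relativize(file) + ":" + Files.getLastModifiedTime(file).toMillis()
                + ":" + Files.size(file) + ":" + internalFingerprint(file);
    }
}
```

Because each block is hashed independently, only the blocks whose MD5 changed need to be re-transferred when a large file is updated.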
IV. Source code download
Source: https://github.com/xxonehjh/file-sync
Executable file: filesync.jar
Usage: java -jar filesync.jar <source directory> <target directory>