<1> Remote-end (server-side) data processing
There is no need to implement this algorithm yourself; you can call an existing library directly.
Split the data into chunks:
1. Slice the remote (server-side) data into fixed-size chunks
2. For each chunk, compute a weak checksum (Adler-32 algorithm) together with a strong checksum (MD5 algorithm)
3. Save the chunks into a map: the key is the weak checksum, and the value is a linked list of the chunks that share it
Map<weakChecksum, LinkedList<Chunk>>
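The server-side steps above can be sketched in Python. This is a minimal illustration, not the library's actual implementation; the chunk size and map layout are assumptions, with `zlib.adler32` as the weak checksum and `hashlib.md5` as the strong one:

```python
import hashlib
import zlib

CHUNK_SIZE = 1024  # assumed fixed chunk size


def build_chunk_map(data: bytes, chunk_size: int = CHUNK_SIZE):
    """Split server-side data into chunks and index them by weak checksum.

    Returns a dict mapping a weak Adler-32 checksum to a list of
    (chunk_index, md5_hex) pairs for every chunk sharing that checksum.
    """
    chunk_map = {}
    for index, offset in enumerate(range(0, len(data), chunk_size)):
        chunk = data[offset:offset + chunk_size]
        weak = zlib.adler32(chunk)               # weak checksum (Adler-32)
        strong = hashlib.md5(chunk).hexdigest()  # strong checksum (MD5)
        chunk_map.setdefault(weak, []).append((index, strong))
    return chunk_map
```

Two checksums are used because Adler-32 is cheap to compute (and can be "rolled" byte by byte) but collides easily; MD5 confirms a candidate match found via the weak checksum.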
<2> Local-end (client-side) processing
1. Read a chunk's worth of data, checking whether enough data remains to fill a full chunk
2. Look the chunk just read up in the remote map to see whether it matches. If it matches, any diff data accumulated before the match is added to the patch.
You end up with a linked list of matched chunk indexes plus diff data
If it does not match, slide the window one byte to the right; the byte that falls off the left side is diff data, i.e. a byte the server does not have and that must be uploaded.
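A sketch of this client-side scan, under the same assumptions as above (weak checksum = `zlib.adler32`, strong = MD5). For clarity the weak checksum is recomputed for each window; a real implementation would roll the Adler-32 incrementally instead:

```python
import hashlib
import zlib

CHUNK_SIZE = 4  # tiny chunk size, for illustration only


def make_delta(local: bytes, chunk_map, chunk_size: int = CHUNK_SIZE):
    """Scan local data, emitting matched chunk indexes and literal diff bytes.

    `chunk_map` is the server-side map: weak checksum -> [(index, md5_hex), ...].
    Returns a list mixing int chunk indexes with bytes literals (diff data).
    """
    out = []
    diff = bytearray()
    pos = 0
    while pos + chunk_size <= len(local):
        window = local[pos:pos + chunk_size]
        weak = zlib.adler32(window)
        match = None
        for index, strong in chunk_map.get(weak, []):
            if hashlib.md5(window).hexdigest() == strong:
                match = index          # weak hit confirmed by strong checksum
                break
        if match is not None:
            if diff:                   # flush diff bytes collected so far
                out.append(bytes(diff))
                diff.clear()
            out.append(match)          # record the matched chunk's index
            pos += chunk_size          # jump past the matched chunk
        else:
            diff.append(local[pos])    # left byte becomes diff data
            pos += 1                   # slide the window one byte right
    diff.extend(local[pos:])           # trailing partial window is diff data
    if diff:
        out.append(bytes(diff))
    return out
```

The returned list is exactly the "chunk index + diff data" patch described above: only the bytes literals need to travel over the network.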
diff data (marked `**`)
Data received by the server: `**01**02**03****0x` (where `**` is diff data and `01`, `02`, `03`, ..., `0x` are matched chunk indexes)
The server then fills in the matched chunks to form the new data:
new data = diff data (uploaded by the client) + matching data already on the server
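The server-side reconstruction is simple splicing. A minimal sketch, assuming the patch format produced above (a list mixing int chunk indexes with bytes literals) and that the server keeps its chunks in a list:

```python
def rebuild(patch, server_chunks):
    """Splice client-uploaded diff bytes with chunks the server already holds.

    `patch` mixes bytes literals (diff data) with int chunk indexes;
    `server_chunks` is the server's ordered list of chunk contents.
    """
    parts = []
    for item in patch:
        if isinstance(item, int):
            parts.append(server_chunks[item])  # data already on the server
        else:
            parts.append(item)                 # diff data from the client
    return b"".join(parts)
```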
Trade-off of this algorithm: it reduces the size of the upload, but at the expense of CPU time (checksums must be computed continuously, which keeps the CPU busy).
Web pages cannot read local files directly (for security reasons), so a browser plugin is sometimes required.