In some data files, certain two-byte sequences recur with relatively high probability. For such files, a Huffman-like encoding pass can be applied as a front end before the main compression stage.
High-probability pairs are encoded in one byte; low-probability pairs are encoded in two bytes. Because every code is a whole number of bytes, the encoded output does not interfere with a subsequent byte-oriented compression stage.
Steps:
1. Count the occurrences of each two-byte pair in the data and sort the pairs by frequency, from most to least frequent.
2. Count the number of distinct two-byte pairs in the file, and reserve that many code values in the mapping table, starting from 0xFFFF and working downward.
3. Because the number of distinct pairs is generally less than 0xFFFF, compute the number of available single-byte codes as ((0xFFFF - pair count) >> 8) - 1, and assign these single-byte codes to the pairs in order from most to least frequent.
4. Map the pairs that did not receive a single-byte code to two-byte codes.
Note: the high byte of every two-byte code never overlaps with any single-byte code, so the decoder can always tell the two apart.
5. Traverse the data file and encode it according to the mapping table.
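The steps above can be sketched in Python. The function names, the non-overlapping pair split, the big-endian layout of the two-byte codes, and the even-length-input assumption are all choices made here for illustration; the original text does not pin these details down.

```python
from collections import Counter

def build_table(data: bytes):
    """Steps 1-4: build the pair -> code mapping table."""
    # Split into non-overlapping two-byte pairs (assumes even-length input).
    pairs = [data[i:i + 2] for i in range(0, len(data) - 1, 2)]
    freq = Counter(pairs)
    n = len(freq)                              # number of distinct pairs
    # Two-byte codes are handed out from 0xFFFF downward, so their high
    # bytes all lie at or above (0xFFFF - n) >> 8.  Byte values strictly
    # below that threshold are free to act as unambiguous 1-byte codes.
    n_single = max(((0xFFFF - n) >> 8) - 1, 0)
    table = {}
    ranked = [p for p, _ in freq.most_common()]
    for i, p in enumerate(ranked[:n_single]):
        table[p] = bytes([i])                  # frequent pairs: 1-byte code
    code = 0xFFFF
    for p in ranked[n_single:]:
        table[p] = code.to_bytes(2, "big")     # the rest: 2-byte code
        code -= 1
    return table, n_single

def encode(data: bytes, table: dict) -> bytes:
    """Step 5: rewrite the data through the mapping table."""
    return b"".join(table[data[i:i + 2]]
                    for i in range(0, len(data) - 1, 2))

def decode(enc: bytes, table: dict, n_single: int) -> bytes:
    """Inverse pass: a byte below n_single is a whole 1-byte code;
    anything else is the high byte of a 2-byte code."""
    inv = {v: k for k, v in table.items()}
    out, i = bytearray(), 0
    while i < len(enc):
        width = 1 if enc[i] < n_single else 2
        out += inv[enc[i:i + width]]
        i += width
    return bytes(out)
```

A quick round trip shows the effect: for input built from a handful of distinct pairs, every pair fits in a single-byte code and the encoded stream is half the original size.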
It has been confirmed that, for data containing many duplicated two-byte pairs, running this front-end encoding and then compressing the result with LZSS yields a higher compression ratio.
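The two-stage pipeline can be exercised end to end as follows. Python's standard library has no LZSS, so `zlib` (DEFLATE, an LZ77 relative) stands in for the second stage here, and a compact restatement of the pair encoder is included so the fragment runs on its own; both substitutions are assumptions, not the original setup.

```python
import zlib
from collections import Counter

def pair_encode(data: bytes) -> bytes:
    """Front-end pass: most frequent two-byte pairs -> 1-byte codes,
    the rest -> 2-byte codes handed out from 0xFFFF downward, so the
    high bytes never collide with the 1-byte codes."""
    freq = Counter(data[i:i + 2] for i in range(0, len(data) - 1, 2))
    n_single = max(((0xFFFF - len(freq)) >> 8) - 1, 0)
    table, code = {}, 0xFFFF
    for rank, (pair, _) in enumerate(freq.most_common()):
        if rank < n_single:
            table[pair] = bytes([rank])
        else:
            table[pair] = code.to_bytes(2, "big")
            code -= 1
    return b"".join(table[data[i:i + 2]]
                    for i in range(0, len(data) - 1, 2))

data = b"the quick brown fox " * 200      # pair-heavy sample input
plain = zlib.compress(data, 9)            # second stage alone
staged = zlib.compress(pair_encode(data), 9)  # front end, then second stage
print(len(plain), len(staged))
```

Whether the staged variant wins depends heavily on the data and on the second-stage compressor; the claim in the text is specifically about LZSS, whose short match window benefits more from the pair substitution than DEFLATE does.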