Zstandard: A new lossless compression algorithm

Zstandard (abbreviated as ZSTD) is a new lossless compression algorithm designed to provide both fast compression and a high compression ratio. It neither pursues the highest possible compression ratio, like LZMA and ZPAQ, nor the highest possible compression speed, like LZ4.
Here is a set of benchmark data:
| Algorithm | Compression ratio | Compression speed (MB/s) | Decompression speed (MB/s) |
| --- | --- | --- | --- |
| Zlib 1.2.8 -6 | 3.099 | 18 | 275 |
| ZSTD | 2.872 | 201 | 498 |
| Zlib 1.2.8 -1 | 2.730 | 58 | 250 |
| LZ4 HC r127 | 2.720 | 26 | 1720 |
| QuickLZ 1.5.1b6 | 2.237 | 323 | 373 |
| LZO 2.06 | 2.106 | 351 | 510 |
| Snappy 1.1.0 | 2.091 | 238 | 964 |
| LZ4 r127 | 2.084 | 370 | 1590 |
| LZF 3.6 | 2.077 | 220 | 502 |
(Test environment: Core i5-4300U @ 1.9 GHz; benchmark program: the open-source tool fsbench 0.14.3)
As the table shows, ZSTD combines a relatively high compression ratio with high compression speed, and decompresses at roughly 500 MB/s per core.
ZSTD's compression speed is configurable. In the test above it delivers roughly 200 MB/s per core, fast enough for some real-time compression scenarios. Like LZ4, it also offers derivative variants that trade compression time for compression ratio without affecting decompression performance.
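ZSTD's API was still in flux when this was written, but the same level-versus-speed tradeoff can be illustrated with Python's standard `zlib` module (one of the algorithms in the table above): higher levels spend more time searching for matches in exchange for a better ratio, while decompression is unaffected.

```python
import time
import zlib

# Sample payload: repetitive text, so every level compresses it well.
data = b"the quick brown fox jumps over the lazy dog " * 2000

for level in (1, 6, 9):  # zlib levels, analogous to zstd's configurable levels
    start = time.perf_counter()
    compressed = zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    ratio = len(data) / len(compressed)
    print(f"level {level}: ratio {ratio:.2f}, {elapsed * 1e3:.2f} ms")

# Decompression is level-independent: one inflate routine handles all levels.
assert zlib.decompress(zlib.compress(data, 1)) == data
```

This mirrors the design the article describes for ZSTD: the level only changes how hard the compressor works, so a receiver never needs to know which level the sender chose.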
ZSTD's memory requirements are also configurable, which lets it fit low-memory environments as well as servers handling many requests in parallel. In addition, it uses a "Finite State Entropy" (FSE) encoder, a new kind of entropy coder built on Jarek Duda's ANS theory and designed to compete with Huffman and arithmetic encoders.
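FSE is a table-driven ANS variant tuned for speed, but the core ANS idea fits in a few lines. Below is a minimal sketch of a range-ANS (rANS) coder over a hard-coded two-symbol model; every name here is illustrative, not part of any real FSE API. Like an arithmetic coder it spends fractional bits per symbol, but it works purely with integer arithmetic, encoding in reverse and decoding forward:

```python
# Toy range-ANS (rANS) coder: the idea behind FSE, stripped to its core.
FREQ = {"a": 3, "b": 1}   # symbol frequencies; must sum to M
CUM = {"a": 0, "b": 3}    # start of each symbol's slot range
M = 4                     # total frequency (a power of two in real coders)

def encode(symbols, x=M):
    """Fold symbols into a single integer state, processing them in reverse."""
    for s in reversed(symbols):
        f, c = FREQ[s], CUM[s]
        x = (x // f) * M + c + (x % f)
    return x

def decode(x, n):
    """Pop n symbols back out of the state, in forward order."""
    out = []
    for _ in range(n):
        slot = x % M  # which symbol's slot range does the state fall into?
        s = next(k for k in FREQ if CUM[k] <= slot < CUM[k] + FREQ[k])
        x = FREQ[s] * (x // M) + slot - CUM[s]
        out.append(s)
    return "".join(out)

print(decode(encode("aabab"), 5))  # round-trips to "aabab"
```

A production FSE coder keeps the state in a machine-word range by streaming bits out whenever it grows too large, and drives decoding from a precomputed state table; Python's arbitrary-precision integers let this sketch skip both.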
According to Yann Collet (Cyan4973), the project's creator, although ZSTD is a fast compression/decompression algorithm, it does not play in LZ4's territory. In a benchmark (see here for the test methodology), Collet concluded:
When the transmission speed is above 50 MB/s, LZ4 is the better choice; between 0.5 MB/s and 50 MB/s, ZSTD is faster than the other algorithms.
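That breakeven behavior can be reproduced from the table above with a simple end-to-end model: total time = compress + transmit + decompress. The sketch below does this with the table's per-core figures; the 100 MB payload and the no-pipelining assumption are mine, not Collet's.

```python
# (compression ratio, compression MB/s, decompression MB/s) from the table above.
ZSTD = (2.872, 201, 498)
LZ4 = (2.084, 370, 1590)

def total_time(codec, bandwidth_mb_s, size_mb=100):
    """Seconds to compress, transmit, and decompress size_mb of data."""
    ratio, comp, decomp = codec
    wire_mb = size_mb / ratio  # what is actually sent over the link
    return size_mb / comp + wire_mb / bandwidth_mb_s + wire_mb / decomp

# On a slow link zstd's better ratio wins; on a fast one LZ4's speed wins.
print(f"5 MB/s link  : zstd {total_time(ZSTD, 5):.2f}s vs lz4 {total_time(LZ4, 5):.2f}s")
print(f"100 MB/s link: zstd {total_time(ZSTD, 100):.2f}s vs lz4 {total_time(LZ4, 100):.2f}s")
```

With these numbers the crossover lands between the two bandwidths, matching the shape of Collet's conclusion: the slower the link, the more a stronger ratio pays for its extra CPU time.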
In addition, when replying to reader comments, Collet compared ZSTD with LZHAM:
According to my understanding, their design starting points are different. LZHAM derives from LZMA ... and takes offline compression scenarios as its yardstick ... The basic principles of ZSTD are closer to zlib's, but with three major changes:

- FSE replaces the Huffman encoder;
- match sizes are unlimited;
- offsets can be repeated.
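The second and third changes concern the LZ77 matching layer. The sketch below is a toy greedy matcher, my illustration rather than zstd's actual format, that allows matches of any length and emits a cheap "repeat last offset" token when a match reuses the previous offset, which is the trick the third bullet refers to:

```python
# Toy greedy LZ77 with a "repeat offset" token (illustrative, not zstd's format).

def lz_compress(data, min_len=3):
    tokens, i, last_off = [], 0, 0
    while i < len(data):
        best_len, best_off = 0, 0
        for off in range(1, i + 1):          # try every previous position
            l = 0
            while i + l < len(data) and data[i + l - off] == data[i + l]:
                l += 1                        # match length is unbounded
            if l > best_len:
                best_len, best_off = l, off
        if best_len >= min_len:
            if best_off == last_off:
                tokens.append(("rep", best_len))   # cheap: offset is implied
            else:
                tokens.append(("match", best_off, best_len))
                last_off = best_off
            i += best_len
        else:
            tokens.append(("lit", data[i]))
            i += 1
    return tokens

def lz_decompress(tokens):
    out, last_off = [], 0
    for t in tokens:
        if t[0] == "lit":
            out.append(t[1])
            continue
        if t[0] == "match":
            last_off, length = t[1], t[2]
        else:                                 # "rep" reuses the previous offset
            length = t[1]
        for _ in range(length):               # char-wise copy allows overlaps
            out.append(out[-last_off])
    return "".join(out)

print(lz_compress("abcXabcYabc"))  # the second "abc" repeat becomes ("rep", 3)
```

Structured data often repeats the same record stride, so the same offset recurs again and again; encoding "same offset as last time" in fewer bits than a full offset is where the win comes from.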
Finally, readers should note that ZSTD's development is still in its infancy: some of the results in this article are early test results, and the implementation will keep improving over time, especially during the project's first year.