From an information-theory perspective, compression removes the redundancy in information: it keeps the uncertain (unpredictable) part and discards the definite part that can be inferred from it. In other words, the original redundant description is replaced by a description closer to the essence of the information.
There are two types of compression: lossless compression (reversible) and lossy compression (irreversible).
There are many compression-coding methods, which fall mainly into four categories: pixel coding, predictive coding, transform coding, and other methods.
1) Pixel coding: each pixel is processed independently during encoding, without considering the correlation between pixels. Common methods include pulse-code modulation (PCM), entropy coding, run-length coding, and bit-plane coding.
2) Predictive coding: remove the correlation and redundancy between adjacent pixels and encode only the new information. Common predictive codes include delta modulation (DM) and differential pulse-code modulation (DPCM); a small sketch follows after this list.
3) Transform coding: transform the image into another data domain (such as the frequency domain), so that a large amount of information can be represented with far less data. There are many transform codes, such as the discrete Fourier transform (DFT), the discrete cosine transform (DCT), and the discrete Hadamard transform (DHT).
4) Other methods: there are also many of these, such as hybrid coding, vector quantization (VQ), and the LZW algorithm.
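To make the predictive-coding idea in 2) concrete, here is a minimal DPCM-style sketch in Python (my own illustrative code, not any standard's exact scheme): each pixel is predicted by its left neighbor, and only the prediction error is stored.

```python
def dpcm_encode(row):
    """Encode a row of pixel values as prediction errors (left-neighbor predictor)."""
    errors = [row[0]]                      # first pixel has no neighbor, store it as-is
    for prev, cur in zip(row, row[1:]):
        errors.append(cur - prev)          # only the new information (the difference)
    return errors

def dpcm_decode(errors):
    """Rebuild the original row by accumulating the prediction errors."""
    row = [errors[0]]
    for e in errors[1:]:
        row.append(row[-1] + e)
    return row

row = [100, 101, 101, 103, 110, 110]
enc = dpcm_encode(row)                     # [100, 1, 0, 2, 7, 0] -- small values, easier to compress further
assert dpcm_decode(enc) == row
```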
9.1 Huffman Coding
Its basic principle is that frequently occurring values are replaced by shorter codes and rarely occurring values by longer codes, and each distinct value gets its own code.
Generating the Huffman code requires two scans of the original data: the first scan counts the exact frequency of each value, and the second scan builds the Huffman tree and derives the codes. Because a binary tree has to be built and traversed to produce the codes, compression and decompression are relatively slow, but the method is simple and effective, so it is widely used.
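To make the two-pass idea concrete, here is a minimal sketch in Python, assuming byte data and using heapq for the tree-merging step (the function name is my own, not from any standard library):

```python
import heapq
from collections import Counter

def huffman_codes(data: bytes) -> dict:
    """Pass 1: count symbol frequencies. Pass 2: repeatedly merge the two
    least-frequent subtrees (Huffman tree) and read off the bit codes."""
    freq = Counter(data)                                   # first scan: exact frequencies
    # each heap entry: (frequency, tie_breaker, {symbol: code_so_far})
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)                    # two least-frequent subtrees
        f2, i, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}       # left subtree gets prefix bit 0
        merged.update({s: "1" + c for s, c in c2.items()}) # right subtree gets prefix bit 1
        heapq.heappush(heap, (f1 + f2, i, merged))
    return heap[0][2]

print(huffman_codes(b"abracadabra"))   # 'a' (frequent) gets a short code, 'c'/'d' get longer ones
```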
Since I have studied this encoding before, I won't go into more detail about the algorithm here ~
9.2 Run-Length Coding
The principle of run-length coding is also very simple: a run of adjacent pixels in the same row that share the same color value is replaced by a count plus that color value.
Advantage: if an image consists of many large areas of a single color, the compression ratio is astonishing.
Disadvantage: if every two adjacent pixels in the image have different colors, this algorithm not only fails to compress the data but actually doubles its size. For that reason, compression schemes based purely on run-length coding are not used much nowadays; the PCX file format is one of them.
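As a small illustration (my own sketch in Python, not the exact scheme used by PCX or any other format), here is run-length coding of one row of pixel values, including the worst case where the data doubles:

```python
def rle_encode(row):
    """Replace each run of identical values with a [count, value] pair."""
    runs = []
    for v in row:
        if runs and runs[-1][1] == v:
            runs[-1][0] += 1          # extend the current run
        else:
            runs.append([1, v])       # start a new run
    return runs

def rle_decode(runs):
    return [v for count, v in runs for _ in range(count)]

flat = [7, 7, 7, 7, 7, 7, 0, 0, 0]
print(rle_encode(flat))               # [[6, 7], [3, 0]] -- 9 values shrink to 2 pairs
noisy = [1, 2, 3, 4]
print(rle_encode(noisy))              # [[1, 1], [1, 2], [1, 3], [1, 4]] -- data doubles
assert rle_decode(rle_encode(flat)) == flat
```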
9.3 General Idea of the LZW Algorithm
LZW is a more complex compression algorithm with high compression efficiency. Its basic principle: each string that occurs for the first time is encoded as a numeric code, and the decoder maps that code back to the original string. LZW is lossless; GIF files use this compression algorithm.
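A minimal sketch of an LZW encoder in Python (assuming the dictionary is initialized with all single bytes; this is illustrative only and does not reproduce GIF's exact variable-width bit packing):

```python
def lzw_encode(data: bytes) -> list:
    """Emit one numeric code for the longest prefix already in the dictionary,
    then register that prefix plus the next byte as a new dictionary entry."""
    table = {bytes([i]): i for i in range(256)}   # start with all single bytes
    current, codes = b"", []
    for b in data:
        candidate = current + bytes([b])
        if candidate in table:
            current = candidate                   # keep growing the match
        else:
            codes.append(table[current])          # output the code of the known prefix
            table[candidate] = len(table)         # a first-occurrence string gets a new code
            current = bytes([b])
    if current:
        codes.append(table[current])
    return codes

print(lzw_encode(b"TOBEORNOTTOBEORTOBEORNOT"))    # repeated substrings collapse into single codes
```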
9.4 JPEG Compression Coding Standard
Uh... this last section can't be covered here!
I'm just not interested in image compression and encoding ~ spare me ~~~~~~~~~
Praying I never have to do compression and coding ~~~~