[Linpiu] Judging from the True Audio homepage, the compression ratio is not especially high; I think that is why it is not as popular as FLAC and Monkey's Audio (APE). I am recording the information here to study the algorithm later.
True Audio (abbreviated TTA, URL: http://www.true-audio.com/) is a lossless audio compression codec developed by a group of Russian programmers. Its main selling points are that it is free, simple, and capable of real-time operation. TTA has the following advantages:
- Open source, released not under the GPL but under a more permissive open source license;
- Very efficient; Tamir Barak and Noam Koenigstein of the Technion (Israel Institute of Technology, sponsored by Intel) have optimized it for multi-processor environments;
- Optimized for real-time hardware encoding, with playback support on some hardware;
- Supports multi-channel audio and high-bit-depth samples;
- Supports ID3v1 and ID3v2 tags;
- Compatible with ReplayGain;
- Good error tolerance;
- Supported by the Matroska (MKV) container file format.
The disadvantages of TTA are:
- No hybrid/lossy mode;
- RIFF chunks are not supported;
- Piped (stdin/stdout) operation is not supported.
TTA's lossless compression is based on adaptive predictive filtering, and in this respect it differs fundamentally from most other lossless audio codecs. The biggest weakness of this class of lossless compression is its low compression/decompression speed, so TTA focuses its optimization there, striving to keep processing speed high while still providing an adequate compression ratio. TTA accepts standard multi-channel WAV files with 8/16/24-bit samples as input. The compression ratio depends on the type of audio material and ranges from roughly 30% to 70%. The development team is currently working to streamline the TTA decoder so that it is easier to support on different hardware.
TTA provides a complete set of DirectShow-based stream splitter, encoder, and decoder plug-ins, and supports the Matroska (http://www.matroska.org/) container file format. Since most players on Windows support DirectShow, TTA files can be played in most media players. Moreover, because TTA's DirectShow plug-in is not just a decoder but also a stream splitter and encoder, TTA can easily be integrated into a wide range of applications.
On the hardware side, TTA's code has been specifically optimized for hardware manufacturers, and the project offers the industry help in integrating TTA support into products. The first hardware device to support TTA playback was the Neuston Maestro DVX-1201 (http://www.neuston.com), a DVD player from Neuston Corporation that appears to have since been discontinued. The DVX-1201 is really a DVD-based multimedia player: besides TTA it also plays MP3, MP4, Ogg Vorbis, DivX, and other formats, and it can gain support for new media formats through firmware upgrades, which is rare and quite professional.
Neuston Maestro DVX-1201 DVD player
Like LA (Lossless Audio, another lossless audio compression technology; see this site's introduction "LA - Lossless Audio lossless audio encoding and lossless compression principles"), TTA publishes its technical details on its website. Moreover, because TTA is an open source project (LA is closed source), TTA's published technical details are far more thorough than LA's. Since my translation skills are limited, if you think something in the text is wrong, please consult the original website and write to the author to point it out.
Introduction to TTA lossless compression technology
As is well known, there is no effective algorithm for losslessly compressing truly random input data. The high compression ratios achieved by audio compression algorithms depend on how well the algorithm understands and exploits the characteristics of the data. Lossless data compression (as opposed to lossless audio compression) is not a new technology: widely used tools such as PKZIP, compress, gzip, WinZip, WinRAR, and 7-Zip are all general-purpose compressors for text and binary files. These tools achieve lossless file compression in the sense that the decompressed file matches the original bit for bit. However, the algorithms they use are all more or less based on the Lempel-Ziv algorithm, which shows no particular advantage on multimedia data: although these tools can achieve better than 2:1 compression on text files, they often achieve almost nothing on multimedia files.
All Lempel-Ziv-based compression tools share the same principle: find repeated sequences in the data and replace each later occurrence with a pointer back to the position where the sequence first appeared. Since the pointer is much shorter than the sequence, compression is achieved. For audio files the data consists of audio sample values, and it is rare to find identical repeated runs of samples, so this kind of compression does not work. It also ignores a key property of sampled audio: the strong correlation between adjacent sample values. All modern multimedia compression algorithms therefore include decorrelation stages to reduce the statistical redundancy of the audio data, typically two of them: inter-channel decorrelation and predictive modeling.
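The gap between text-like and sample-like data can be seen directly with any Deflate-family compressor. The sketch below uses Python's zlib as a stand-in for the Lempel-Ziv tools mentioned above; both test buffers are made-up data, with pseudo-random bytes mimicking decorrelated audio samples:

```python
import random
import zlib

# Text-like data: long repeated phrases, ideal for Lempel-Ziv back-references.
text = b"the quick brown fox jumps over the lazy dog " * 100

# Noise-like data: pseudo-random bytes standing in for audio samples,
# where identical repeated runs are rare.
random.seed(42)
noise = bytes(random.randrange(256) for _ in range(len(text)))

compressed_text = zlib.compress(text, 9)
compressed_noise = zlib.compress(noise, 9)

print(len(text), len(compressed_text))    # the repetitive buffer shrinks dramatically
print(len(noise), len(compressed_noise))  # the noise-like buffer barely shrinks, if at all
```

On input like this the repetitive buffer compresses far better than 2:1 while the noise-like buffer stays at essentially its original size, which is exactly why audio codecs must decorrelate the signal before entropy coding.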
The residual signal left after decorrelation can then be losslessly compressed with entropy coding. In practice, virtually every multi-channel lossless audio compressor uses one of Huffman coding, run-length coding, or Rice coding, and compresses the data through the following steps:
- Splitting the signal into segments (frames);
- Inter-channel decorrelation;
- Signal modeling (prediction);
- Entropy coding.
The TTA lossless audio compressor follows these same steps. In the signal modeling (prediction) stage, the input signal is modeled by a two-stage adaptive filter; the difference between the original and predicted signals, the residual, is then entropy coded using Rice coding.
The optimal segment size depends heavily on the compression algorithm chosen for the modeling stage. In general, to allow recovery of damaged files the segments should be as small as possible, but smaller segments mean more frames, so more bytes are wasted on frame structure and the compression ratio drops. Conversely, increasing the segment size makes the compressed stream harder to edit. For statistical modeling the segments also must not be too small, or the signal cannot be modeled correctly. Because the TTA compressor uses dynamic (adaptive) signal modeling, which is only effective over large frames, TTA ultimately chose a frame length of slightly more than one second.
The input multi-channel data is first decorrelated across channels. For stereo audio, for example, the two channels are converted into an "average" channel and a "difference" channel. The basic method is simple: average = (channel1 + channel2) / 2; difference = channel1 − channel2. To avoid losing data in the division by two, the formulas are instead applied as: difference = channel1 − channel2; average = channel1 − difference / 2. For multi-channel audio whose channels are well correlated, this step usually improves the compression ratio noticeably.
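The average/difference transform amounts to a few lines of integer arithmetic. This sketch (function names are mine, not from the TTA source) shows why deriving the average from the stored difference makes the transform exactly invertible even with integer division:

```python
def decorrelate(ch1, ch2):
    """Forward transform: two channels -> ("average", "difference"), integer-exact."""
    diff = [a - b for a, b in zip(ch1, ch2)]        # difference = channel1 - channel2
    avg = [a - d // 2 for a, d in zip(ch1, diff)]   # average = channel1 - difference / 2
    return avg, diff

def recorrelate(avg, diff):
    """Inverse transform: recover both channels bit for bit."""
    ch1 = [m + d // 2 for m, d in zip(avg, diff)]   # channel1 = average + difference / 2
    ch2 = [a - d for a, d in zip(ch1, diff)]        # channel2 = channel1 - difference
    return ch1, ch2

left = [100, -5, 300, 7]
right = [98, -7, 299, -7]
avg, diff = decorrelate(left, right)
assert recorrelate(avg, diff) == (left, right)      # lossless round trip
```

The naive average = (ch1 + ch2) / 2 would lose the low bit whenever the sum is odd; computing average = ch1 − difference/2 avoids that, because the decoder can repeat exactly the same integer division on the transmitted difference.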
In the modeling stage, the TTA lossless audio compressor tries to approximate the signal with a function such that the difference between the function's output and the original signal (known as the residual, difference, or error signal) is as small as possible. Unlike the other stages (inter-channel decorrelation and residual coding), the approaches to the modeling stage vary considerably. The main methods considered for the TTA compressor were:
- Signal approximation with a set of polynomial predictors;
- Linear prediction (LPC) for signal modeling;
- Adaptive filters for signal modeling.
Of these, adaptive filtering gives the best results. This approach uses an IIR (Infinite Impulse Response) filter whose operating parameters must continually adapt to the signal during processing. The basic building block is a P-tap non-recursive filter, which can be described by the following expression:
x'[n] = Σ_{k=1}^{P} v[n,k] · x[n − k·r]    (1)
Where:
x'[n] is the predicted value of the new sample x[n];
v[n,k] is the current value of the k-th filter weight coefficient;
r is the signal delay.
The filter weight coefficients are updated according to the following formula:
v[n+1, k] = v[n, k] + m · sgn(e[n]) · x[n − k·r]    (2)
Where:
m is the coefficient that determines the adaptation speed of the filter;
e[n] is the current value of the output (error) signal;
sgn(·) takes the sign of the signal value.
The residual signal e[n] = x[n] − x'[n] can be minimized by different algorithms, such as the Widrow-Hoff least-mean-squares (LMS) algorithm, which is based on stochastic approximation, or the recursive least-squares (RLS) algorithm. Although RLS converges faster, it demands considerably more CPU resources, so TTA uses the LMS algorithm. The TTA compression algorithm converges by the method of steepest descent; to simplify the computation, TTA uses the Widrow-Hoff stochastic gradient approximation. As an idealized criterion for convergence speed, TTA uses the minimum modulus of the filter error.
Regardless of the (fairly arbitrary) initial values of the filter coefficient vector V, the algorithm converges in the mean, and stability is maintained as long as the parameter m satisfies 0 < m < 1/λmax, where λmax is the largest eigenvalue of the autocorrelation matrix of the input signal. The filter's output residual signal is defined as the difference between the real signal and its predicted value, computed as the convolution of the real signal with the weight coefficients of the transversal filter (see reference 1). The impulse response of the filter (the P-dimensional weight vector) is updated at every discrete time step (see reference 2).
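Equations (1) and (2) can be illustrated with a small adaptive predictor. The sketch below is not TTA's actual filter (the order, the unit step size, and the sign-sign weight update are illustrative choices of mine); it only demonstrates the general mechanism: encoder and decoder run the identical adaptation rule, so feeding the residuals back through the same filter reconstructs the input exactly, and no filter state ever needs to be transmitted:

```python
def sgn(v):
    """Sign function: -1, 0, or +1."""
    return (v > 0) - (v < 0)

def lms_residuals(samples, order=8):
    """Encoder side: predict each sample with an adaptive FIR filter, emit the residual."""
    w = [0] * order          # filter weights v[n, k], adapted as we go
    hist = [0] * order       # hist[k] holds x[n - 1 - k]
    residuals = []
    for x in samples:
        pred = sum(wk * hk for wk, hk in zip(w, hist))
        e = x - pred                                        # residual e[n] = x[n] - x'[n]
        s = sgn(e)
        w = [wk + s * sgn(hk) for wk, hk in zip(w, hist)]   # sign-sign weight update
        hist = [x] + hist[:-1]
        residuals.append(e)
    return residuals

def lms_reconstruct(residuals, order=8):
    """Decoder side: the same filter and update rule, driven by the residuals."""
    w = [0] * order
    hist = [0] * order
    samples = []
    for e in residuals:
        pred = sum(wk * hk for wk, hk in zip(w, hist))
        x = pred + e                                        # x[n] = x'[n] + e[n]
        s = sgn(e)
        w = [wk + s * sgn(hk) for wk, hk in zip(w, hist)]
        hist = [x] + hist[:-1]
        samples.append(x)
    return samples
```

Because both sides update the weights only from values they both possess (past reconstructed samples and the residual's sign), the scheme is lossless by construction, whatever the predictor's quality.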
The actual TTA lossless compression algorithm differs slightly from the description above: the input signal passes through two filtering stages. In the first stage, a zeroth-order estimator is used as the filter, so the error signal is defined by the formula e[n] = x[n] − k · x[n−1], where k is very close to 1.
Next, in the final filtering stage, TTA uses a bank of adaptive filters similar to the one described above. Of all the techniques tested, this combination proved the most effective in terms of prediction accuracy and processing speed. Its drawback is the relatively large segment size, which makes the compressed stream difficult to edit.
After the model is built, the encoder subtracts the approximated signal from the original; the difference between the original and predicted signals, the residual, is what undergoes lossless compression coding in subsequent processing. TTA exploits the fact that the residual signal usually has a Laplacian distribution, for which there is a special case of Huffman coding, namely Rice coding, that can encode such signals efficiently and quickly without building a dictionary. Rice coding requires finding a parameter that matches the signal's distribution; if the distribution changes, the ideal parameter changes too, so a technique is needed to re-estimate the parameter as necessary. If the prediction is effective, the residual contains fewer bits than the original signal. Usually the residual is further divided into small sections, each with its own Rice parameter, and the choice of section size also affects coding efficiency. Because the TTA lossless audio compressor encodes the residual with adaptive coding and dynamic Rice parameter estimation, it does not need to split the residual into small sections.
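As a concrete illustration of Rice coding, here is a minimal bit-string sketch. TTA's real bitstream and its adaptive parameter rule differ; the zigzag mapping of signed residuals and the mean-based choice of k below are common textbook techniques, not taken from the TTA source:

```python
def zigzag(v):
    """Map a signed residual to a non-negative integer: 0,-1,1,-2,2 -> 0,1,2,3,4."""
    return 2 * v if v >= 0 else -2 * v - 1

def unzigzag(u):
    return u // 2 if u % 2 == 0 else -(u + 1) // 2

def rice_encode(u, k):
    """Encode one non-negative integer: unary quotient, '0' terminator, k-bit remainder."""
    q, r = u >> k, u & ((1 << k) - 1)
    bits = '1' * q + '0'
    if k:
        bits += format(r, '0%db' % k)
    return bits

def rice_decode(bits, pos, k):
    """Decode one value starting at bit offset pos; returns (value, next_pos)."""
    q = 0
    while bits[pos] == '1':
        q += 1
        pos += 1
    pos += 1                                  # skip the '0' terminator
    r = int(bits[pos:pos + k], 2) if k else 0
    return (q << k) | r, pos + k

def pick_k(values):
    """Choose a Rice parameter from the mean magnitude (a simple heuristic)."""
    mean = sum(values) / max(len(values), 1)
    k = 0
    while (1 << (k + 1)) < mean + 1:
        k += 1
    return k
```

Each value costs (u >> k) + 1 + k bits, so small residuals from a good predictor produce short codes; picking k near log2 of the mean magnitude keeps both the unary and binary parts short for Laplacian-distributed data, and an adaptive coder like TTA's simply keeps re-estimating k as the statistics drift.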
References:
- A. V. Djourik, P. G. Zhilin. "Siberian Solar Radio Telescope: the PCA format for the SSRT data compression." Numerical Methods and Programming, Lomonosov Moscow State University, 4 (2003), 278-282.
- M. Hans, R. Schafer. "Lossless Audio Coding." Technical Report CSIP TR-97-07. Atlanta, 1997.
- T. Robinson. "Shorten: Simple Lossless and Near-Lossless Waveform Compression." Technical Report, Cambridge University Engineering Department. Cambridge, December 1994.
- P. C. Craven, M. J. Law, J. R. Stuart. "Lossless Compression Using IIR Prediction Filters." 102nd AES Convention. Munich, March 1997.
- R. F. Rice. "Some Practical Universal Noiseless Coding Techniques." Technical Report JPL-79-22, Jet Propulsion Laboratory. Pasadena, 1979.
Original article: http://www.wavecn.com/content.php?id=148