I. Preface

Over the past 20 years, the Internet, mobile communication, and multimedia communication have achieved unprecedented development and great commercial success, and the convergence of mobile communication and multimedia technology is accelerating. Research results in network architecture, low-power integrated circuits, powerful digital signal processing chips, and efficient compression algorithms keep emerging, and video coding and transmission technologies for wireless networks and the Internet have become cutting-edge topics in information science and technology. In 2003, the ISO/IEC Moving Picture Experts Group (MPEG) and the ITU-T Video Coding Experts Group (VCEG) jointly developed the latest, third-generation video coding standard, H.264/AVC [1]. Its main goals are higher coding efficiency and better network adaptability: at the same reconstructed image quality it can save about 50% of the bit rate compared with the H.263+ and MPEG-4 ASP standards, and its layered structure defines a video coding layer (VCL) and a network abstraction layer (NAL), the latter designed for network transport, so that the standard can adapt to video transmission over different networks and further improves its network "friendliness". H.264 also introduces an IP-packet-oriented coding mechanism, which facilitates packet transmission and supports streaming of video over networks.
This feature makes it especially suitable for wireless video transmission, where packet loss rates are high and interference is severe.

II. Overview of error resilience algorithms for video communication

The current video coding and compression standards are mainly the MPEG-x and H.26x series. These compression algorithms are macroblock-based [2] and improve coding efficiency in three ways:

(1) motion estimation/motion compensation (ME/MC) removes temporal redundancy in the video;
(2) the discrete cosine transform (DCT) of the prediction residual removes spatial redundancy;
(3) variable-length coding (VLC) of the quantized coefficients removes statistical redundancy.

Practice shows that these standards achieve very high compression efficiency through the above methods. However, transmitting the compressed bit stream over the Internet, and especially over wireless channels, still poses difficult problems. One of the most prominent is that the compressed bit stream is very sensitive to channel bit errors, while wireless channels, because of multipath reflection and fading, introduce a large number of random and burst errors that disrupt normal transmission. With VLC the bit stream is particularly vulnerable: a single bit error causes the decoder to lose synchronization with the encoder, so the VLC codes cannot be decoded correctly until the next synchronization codeword is encountered. At the same time, predictive coding propagates the error through the rest of the video sequence, greatly degrading the quality of the reconstructed images. Therefore, to achieve good video transmission quality, error resilience measures must be taken according to the transmission characteristics of the actual channel.

According to where they act in the video transmission system, error resilience algorithms [3] can be divided into encoder-based, decoder-based, and feedback-channel-based algorithms:

(1) Encoder-based algorithms add redundant information to the bit stream at the source or channel encoder, trading reduced coding efficiency and increased implementation complexity for error resilience. They include layered (scalable) coding, multiple description coding, independent segment coding, resynchronization coding, and forward error correction (FEC) coding.
(2) Decoder-based algorithms exploit the correlation between a damaged macroblock and its neighboring macroblocks to restore it. This work includes error detection and error recovery: detection generally relies on syntax violations or embedded check data, while recovery can use temporal or spatial error concealment (a minimal sketch of both is given below).
(3) Feedback-channel-based algorithms let the decoder detect errors and report them to the encoder over a feedback channel, which then takes corrective action. They mainly include error tracking, conditional ARQ, intra/inter coding mode selection, and reference picture selection.

In addition, improving the error resilience of the video bit stream at the source encoder has become a hot research topic in the past two years.
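To make the decoder-based concealment methods above concrete, here is a minimal, illustrative sketch (not taken from any standard decoder): temporal concealment copies the co-located macroblock from the previous decoded frame, and spatial concealment interpolates a lost macroblock from the boundary pixels of its correctly received neighbors. The 16x16 block size, array layout, and function names are assumptions for illustration only.

import numpy as np

MB = 16  # macroblock size assumed for illustration

def conceal_temporal(cur_frame, prev_frame, mb_row, mb_col):
    """Temporal concealment: copy the co-located macroblock from the
    previously decoded frame (zero motion assumed for simplicity)."""
    y, x = mb_row * MB, mb_col * MB
    cur_frame[y:y+MB, x:x+MB] = prev_frame[y:y+MB, x:x+MB]

def conceal_spatial(cur_frame, mb_row, mb_col):
    """Spatial concealment: weight the boundary pixels of the four
    neighboring macroblocks by their distance to each lost pixel."""
    y, x = mb_row * MB, mb_col * MB
    h, w = cur_frame.shape
    top    = cur_frame[y-1,  x:x+MB] if y > 0    else None
    bottom = cur_frame[y+MB, x:x+MB] if y+MB < h else None
    left   = cur_frame[y:y+MB, x-1 ] if x > 0    else None
    right  = cur_frame[y:y+MB, x+MB] if x+MB < w else None
    for i in range(MB):
        for j in range(MB):
            vals, wts = [], []
            if top    is not None: vals.append(top[j]);    wts.append(MB - i)
            if bottom is not None: vals.append(bottom[j]); wts.append(i + 1)
            if left   is not None: vals.append(left[i]);   wts.append(MB - j)
            if right  is not None: vals.append(right[i]);  wts.append(j + 1)
            if vals:
                cur_frame[y+i, x+j] = int(np.average(vals, weights=wts))

In a real decoder the choice between the two strategies would depend on motion activity and on which neighbors arrived intact; the sketch only shows the basic idea of exploiting temporal and spatial correlation.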
As the latest video coding standard, H.264/AVC adopts a series of practical technical measures to improve network adaptability and robustness against transmission errors, thereby guaranteeing the QoS of the compressed video after transmission. Unlike previous video coding standards, the H.264/AVC standard defines at the system level a video coding layer (VCL, Video Coding Layer) and a network abstraction layer (NAL, Network Abstraction Layer). The video coding layer is independent of the network; it contains the core compression engine and the syntax definitions at the block, macroblock, and slice levels. The new features it introduces not only nearly double the compression efficiency of H.264 coding but also strengthen the robustness of the video stream through multiple error resilience tools. The main function of the network abstraction layer is to define the data encapsulation format and to adapt the bit strings produced by the VCL to a variety of networks and multiplex environments. It covers the syntax at the slice level and above, including the representation of the data needed for independent decoding (similar to the picture header and sequence header data of earlier video compression standards), coding that prevents start-code emulation, supplemental enhancement information, and the bit-string representation of the coded content. Separating the NAL from the VCL in the H.264 framework has two main purposes. First, it defines a clear interface between the VCL video compression processing and the NAL network transport mechanism, so that the design of the video coding layer can be ported to different processor platforms independently of the data encapsulation format of the NAL layer. Second, both the VCL and the NAL are designed to work in different transmission environments: in a heterogeneous network environment the VCL bit stream does not need to be reconstructed or re-encoded, it only needs to be re-framed by the NAL (a small sketch of such re-framing is given at the end of this part).
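As a companion to this description, the following sketch shows how a NAL unit might be inspected: the one-byte NAL unit header carries the forbidden_zero_bit, nal_ref_idc, and nal_unit_type fields, and the emulation-prevention byte 0x03 that the encoder inserts after each pair of zero bytes is removed to recover the raw byte sequence payload (RBSP). The helper names and example bytes are assumptions for illustration; error handling is omitted.

def parse_nal_header(nal_unit: bytes) -> dict:
    """Split the one-byte NAL unit header into its three fields."""
    header = nal_unit[0]
    return {
        "forbidden_zero_bit": (header >> 7) & 0x01,  # must be 0 in a valid unit
        "nal_ref_idc":        (header >> 5) & 0x03,  # importance for prediction
        "nal_unit_type":      header & 0x1F,         # e.g. 5 = IDR slice, 7 = SPS, 8 = PPS
    }

def ebsp_to_rbsp(ebsp: bytes) -> bytes:
    """Undo emulation prevention: every 0x00 0x00 0x03 becomes 0x00 0x00."""
    rbsp, zeros = bytearray(), 0
    for b in ebsp:
        if zeros >= 2 and b == 0x03:
            zeros = 0          # drop the emulation-prevention byte
            continue
        rbsp.append(b)
        zeros = zeros + 1 if b == 0x00 else 0
    return bytes(rbsp)

# Hypothetical example: header byte 0x67 decodes to nal_ref_idc 3 and
# nal_unit_type 7, i.e. a sequence parameter set.
print(parse_nal_header(b"\x67\x42\x00\x1e"))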
The following describes the QoS of video transmission in the VCL and the NAL respectively.
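Before that, here is the re-framing sketch referred to above. It frames the same (hypothetical) NAL units in two common ways without touching the coded video data: the Annex B byte-stream format prefixes each NAL unit with a start code, while packet-oriented transport typically carries each NAL unit as a payload, often with an explicit length field (as in MP4/AVCC-style storage); an RTP packetizer would likewise place whole NAL units, or fragments of them, into packet payloads. The helper names and example bytes are illustrative assumptions, not a complete packetizer.

import struct

START_CODE = b"\x00\x00\x00\x01"  # Annex B start-code prefix

def to_annex_b(nal_units):
    """Byte-stream framing: each NAL unit preceded by a start code."""
    return b"".join(START_CODE + nal for nal in nal_units)

def to_length_prefixed(nal_units):
    """Packet-oriented framing: each NAL unit preceded by a 4-byte big-endian length."""
    return b"".join(struct.pack(">I", len(nal)) + nal for nal in nal_units)

# The same (hypothetical) NAL units are re-framed for either environment
# without re-encoding the video data itself.
example_nals = [b"\x67\x42\x00\x1e", b"\x68\xce\x38\x80", b"\x65\x88\x84\x00"]
print(to_annex_b(example_nals).hex())
print(to_length_prefixed(example_nals).hex())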