Video Encoder Optimized with a Digital Signal Processor

Source: Electronic Engineering Album. Authors: Ajit Rao and Soyeb Nagoori, multimedia coding software.

The features of high-compression-ratio standards give engineers broad latitude to strike an optimal balance among complexity, latency, and other factors that constrain real-time performance.

Digital Video Encoding
Video compression reduces the amount of video data as much as possible while maintaining acceptable video quality. Shrinking a video for transmission and storage, however, may sacrifice some image quality. Video compression also demands high processor performance and a design that supports a wide range of features, because different types of video applications have different requirements for resolution, bandwidth, and flexibility. A digital signal processor (DSP), with its greater flexibility, not only meets these requirements fully but also exploits the rich set of options offered by advanced video compression standards, helping system developers optimize their products.

Video Codecs (Coder/Decoder)
The inherent structure and complexity of codec algorithms make optimized implementations essential. The encoder is especially important: it must not only meet application requirements but also carries the bulk of the processing load in a video application. Although encoders are grounded in information theory, implementing one involves complicated trade-offs among many factors. An encoder that is highly configurable, usable across different video applications, and equipped with an easy-to-use system interface and optimized performance greatly benefits developers.

Features of video compression

Transmitting or storing raw digital video requires an enormous amount of space. Advanced video codecs such as H.264/MPEG-4 AVC can achieve compression ratios of up to 60:1 while sustaining a constant throughput, which lets video travel over narrow transmission channels and reduces the space it occupies in storage.
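
To put that ratio in perspective, here is a rough illustrative calculation (the numbers are assumptions, not figures from the article): standard-definition 720 x 480 video at 30 frames per second in 4:2:0 format carries about 12 bits per pixel, so the raw stream is roughly 720 x 480 x 12 x 30 ≈ 124 Mbit/s. At a 60:1 compression ratio that drops to about 2 Mbit/s, a rate that fits comfortably on common broadband links.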

Like the JPEG standard for still images, the ITU and MPEG video coding algorithms combine a discrete transform (the DCT or a similar technique), quantization, variable-length coding, and other techniques to compress the macroblocks of a frame. Once the algorithm has established a baseline intra-coded frame (I-frame), it can build a long run of subsequent predicted frames (P-frames) by encoding only the differences in visual content, or residuals, between frames. These inter-frame differences are captured with so-called motion compensation: the algorithm first estimates where each macroblock of the previous reference frame has moved in the current frame, then removes the redundancy and compresses what remains.
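
As a concrete illustration of the motion-estimation step just described, the sketch below performs a minimal full-search block match over a +/-16-pixel window, using the sum of absolute differences (SAD) as the matching cost. It is a simplified teaching example, not the search strategy of any particular encoder; real encoders use hierarchical or predictive searches to keep the cost manageable.

    #include <stdint.h>
    #include <stdlib.h>
    #include <limits.h>

    #define MB_SIZE      16   /* macroblock width/height in pixels */
    #define SEARCH_RANGE 16   /* +/- pixels searched around (0, 0) */

    /* Sum of absolute differences between a macroblock of the current
     * frame and a candidate block of the reference frame. */
    static uint32_t sad_16x16(const uint8_t *cur, const uint8_t *ref, int stride)
    {
        uint32_t sad = 0;
        for (int y = 0; y < MB_SIZE; y++)
            for (int x = 0; x < MB_SIZE; x++)
                sad += abs(cur[y * stride + x] - ref[y * stride + x]);
        return sad;
    }

    /* Full-search motion estimation for one macroblock whose top-left
     * corner is at (mb_x, mb_y).  Writes the best motion vector to
     * (*mv_x, *mv_y) and returns its SAD cost. */
    uint32_t estimate_mv(const uint8_t *cur, const uint8_t *ref,
                         int width, int height, int mb_x, int mb_y,
                         int *mv_x, int *mv_y)
    {
        uint32_t best = UINT32_MAX;
        *mv_x = *mv_y = 0;

        for (int dy = -SEARCH_RANGE; dy <= SEARCH_RANGE; dy++) {
            for (int dx = -SEARCH_RANGE; dx <= SEARCH_RANGE; dx++) {
                int rx = mb_x + dx, ry = mb_y + dy;
                /* Skip candidates that fall outside the reference frame. */
                if (rx < 0 || ry < 0 ||
                    rx + MB_SIZE > width || ry + MB_SIZE > height)
                    continue;
                uint32_t cost = sad_16x16(cur + mb_y * width + mb_x,
                                          ref + ry * width + rx, width);
                if (cost < best) {
                    best  = cost;
                    *mv_x = dx;
                    *mv_y = dy;
                }
            }
        }
        return best;
    }

In a real encoder, the residual (the difference between the macroblock and its best-matching reference block) is then transformed, quantized, and entropy-coded along with the motion vector.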

Figure 1 shows the structure of a generic motion-compensated video encoder. The motion-vector (MV) data describes how far each block has moved. This data is produced in the motion-estimation stage, which is usually the most computationally intensive part of the algorithm.


Figure 1: Structure of a generic motion-compensated video encoder.

Figure 2 shows a P-frame (right) and its reference frame (left). Below the P-frame, the residual (shown in black) is what remains to be encoded after the motion vectors (shown in blue) have been computed.


Figure 2: A P-frame and its reference frame, with the residual that remains to be encoded after motion-vector computation.

Video compression standards specify only the bitstream syntax and the decoding process, which leaves the encoder considerable room for innovation. Rate control is one such area: it lets the encoder allocate quantization parameters so that noise is distributed through the video signal in an appropriate way. In addition, the advanced H.264/MPEG-4 AVC standard offers tools such as variable macroblock sizes, quarter-pixel (quarter-pel) motion-compensation resolution, multiple reference frames, bidirectional frame prediction (B-frames), and adaptive in-loop deblocking filtering.
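
To give a feel for what quarter-pel motion compensation involves, the sketch below interpolates horizontal sub-pixel positions the way H.264 defines them for luma samples: half-pel values come from a 6-tap filter with coefficients (1, -5, 20, 20, -5, 1), and quarter-pel values are the rounded average of a neighboring integer sample and half-pel sample. It is a simplified one-dimensional illustration that ignores frame borders and the vertical filtering path.

    #include <stdint.h>

    /* Clip an intermediate value to the 8-bit sample range. */
    static uint8_t clip_u8(int v)
    {
        return (uint8_t)(v < 0 ? 0 : v > 255 ? 255 : v);
    }

    /* Horizontal half-pel sample between src[0] and src[1], using the
     * H.264 6-tap luma filter (1, -5, 20, 20, -5, 1) / 32. */
    static uint8_t half_pel_h(const uint8_t *src)
    {
        int v = src[-2] - 5 * src[-1] + 20 * src[0]
              + 20 * src[1] - 5 * src[2] + src[3];
        return clip_u8((v + 16) >> 5);
    }

    /* Quarter-pel sample one quarter of the way from src[0] toward
     * src[1]: the rounded average of the integer and half-pel samples. */
    static uint8_t quarter_pel_h(const uint8_t *src)
    {
        return (uint8_t)((src[0] + half_pel_h(src) + 1) >> 1);
    }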

Diverse Application Requirements

Video application requirements vary greatly. The features of advanced compression standards give technicians broad latitude to strike an optimal balance among complexity, latency, and other factors that constrain real-time performance. Consider, for example, how different the video requirements of video telephony, video conferencing, and digital video recorders (DVRs) are.

Video Telephony and Video Conferencing

For video telephony and video conferencing applications, transmission bandwidth is usually the dominant concern. Depending on the link, the available bandwidth can range from a few tens of kilobits per second to several megabits per second. On some links the transmission rate is guaranteed, but on the Internet and many enterprise intranets it varies widely. A video conferencing encoder therefore has to accommodate different kinds of links and adapt in real time to the bandwidth actually available. Based on feedback about receiving conditions, the sending system should continuously adjust its encoded output to deliver the best possible video quality with as few interruptions as possible. When conditions are poor, the encoder can lower the average bit rate, skip frames, or change the group of pictures (GoP), that is, the mix of I-frames and P-frames. An I-frame compresses less than a P-frame, so a GoP containing fewer I-frames needs less overall bandwidth. And because the visible content of a video conference changes relatively little, the number of I-frames can be cut further than in entertainment applications.
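
The adaptation just described might look something like the following sketch, which reacts to bandwidth feedback from the receiver by lowering the target bit rate, skipping frames, and lengthening the GoP so that fewer I-frames are sent. The thresholds, field names, and control structure are hypothetical; they illustrate the kind of decisions a conferencing encoder makes, not the interface of any particular product.

    /* Hypothetical encoder-control state for a conferencing sender. */
    typedef struct {
        unsigned target_kbps;   /* current target bit rate            */
        unsigned gop_length;    /* frames between I-frames            */
        unsigned skip_every_n;  /* drop every Nth frame (0 = no skip) */
    } enc_ctrl_t;

    /* Adjust the encoder configuration from receiver feedback.
     * available_kbps would come from RTCP-style reports or similar. */
    void adapt_to_channel(enc_ctrl_t *c, unsigned available_kbps)
    {
        if (available_kbps < c->target_kbps) {
            /* Channel has degraded: lower the bit rate first ...      */
            c->target_kbps = available_kbps * 9 / 10;  /* 10% headroom */

            /* ... trade temporal resolution for spatial quality ...   */
            if (c->target_kbps < 128)
                c->skip_every_n = 2;           /* halve the frame rate */

            /* ... and stretch the GoP: I-frames cost the most bits,
             * and conferencing content changes slowly anyway.         */
            if (c->gop_length < 300)
                c->gop_length *= 2;
        } else if (available_kbps > c->target_kbps * 12 / 10) {
            /* Channel has recovered: restore quality gradually. */
            c->target_kbps += (available_kbps - c->target_kbps) / 4;
            c->skip_every_n = 0;
        }
    }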

H.264 uses an adaptive in-loop deblocking filter to smooth block edges, which improves both the current frame and the subsequent frames predicted from it, raising coding quality; the filter is particularly effective at low bit rates. Enabling the filter increases the amount of visual information delivered at a given bit rate, as does raising the motion-estimation resolution from half-pixel to quarter-pixel accuracy. In some cases, though, the filter may have to be disabled, or the motion-estimation resolution reduced, to lower the encoder's complexity.
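
For illustration, the sketch below shows the core sample update of the H.264 normal-strength deblocking filter for one line of pixels across a block edge: the boundary samples p0 and q0 are pulled toward each other by a delta that is clipped to a Qp-dependent limit tc. The boundary-strength derivation, the alpha/beta edge-activity tests that decide whether to filter at all, and the chroma path are omitted here.

    #include <stdint.h>

    static int clip3(int lo, int hi, int v)
    {
        return v < lo ? lo : v > hi ? hi : v;
    }

    /* Filter one line of samples across a block edge:
     *
     *     ... p1 p0 | q0 q1 ...
     *
     * tc is the Qp-dependent clipping limit taken from the standard's
     * tables (a larger Qp allows a larger correction). */
    void deblock_line(uint8_t *p1, uint8_t *p0, uint8_t *q0, uint8_t *q1, int tc)
    {
        int delta = clip3(-tc, tc, ((*q0 - *p0) * 4 + (*p1 - *q1) + 4) >> 3);
        *p0 = (uint8_t)clip3(0, 255, *p0 + delta);
        *q0 = (uint8_t)clip3(0, 255, *q0 - delta);
    }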

Because Internet packet delivery offers no quality guarantee, video conferencing usually benefits from coding mechanisms that make the stream more robust. As shown in Figure 3, successive P-frames can carry intra-coded slices (I-slices) in a progressive refresh pattern, so that a complete I-frame is never needed after the initial frame; this also reduces the problems of dropped I-frames and corrupted pictures.


Figure 3: Slices of successive P-frames can be intra-coded, progressively refreshing the picture.
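
A minimal sketch of how such a progressive intra refresh could be scheduled follows; the function and its parameters are illustrative, not part of any standard API. Each P-frame forces a different band of macroblock rows to be intra-coded, so the whole picture is refreshed once every refresh_period frames without ever sending a full I-frame.

    /* Decide whether a macroblock row should be intra-coded in this
     * frame under a simple rolling intra-refresh scheme.  With
     * refresh_period frames and mb_rows macroblock rows, each frame
     * intra-codes one band of rows, and the whole picture is refreshed
     * every refresh_period frames without a complete I-frame. */
    int row_is_intra(int frame_idx, int mb_row, int mb_rows, int refresh_period)
    {
        int band_height = (mb_rows + refresh_period - 1) / refresh_period;
        int band_start  = (frame_idx % refresh_period) * band_height;
        return mb_row >= band_start && mb_row < band_start + band_height;
    }

For example, with 36 macroblock rows (576-line video) and a refresh period of 12 frames, each P-frame intra-codes a band of 3 macroblock rows.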

Digital Video Recording

The digital video recorder (DVR) for home entertainment may be the most widespread application of real-time video encoders. For such systems, the central problem is striking the best balance between storage capacity and image quality. Unlike video conferencing, which cannot tolerate latency, recorded video can absorb a certain amount of real-time delay as long as the system buffers enough data. In a practical design, an output buffer that holds a few frames is enough to keep a stable, continuous data stream flowing to the disk. In some cases, however, rapidly changing visual content causes the algorithm to generate so much P-frame data that the buffer becomes congested. Once the congestion is relieved, image quality can be raised again.
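
As a rough sketch of that buffering arrangement (the sizes and names are illustrative, and the drain path and thread synchronization are omitted), the encoder can push encoded frames into a small ring buffer while a writer thread empties it to disk; the fill level then tells the rate control how close the system is to congestion.

    #include <stddef.h>

    #define BUF_FRAMES 8            /* a few frames of headroom */

    typedef struct {
        void  *data[BUF_FRAMES];    /* pointers to encoded frames */
        size_t size[BUF_FRAMES];    /* encoded size of each frame */
        int    head, count;         /* ring-buffer bookkeeping    */
    } frame_fifo_t;

    /* Push one encoded frame; returns 0 on success, -1 if the FIFO is
     * full, i.e. the disk writer is falling behind. */
    int fifo_push(frame_fifo_t *f, void *frame, size_t bytes)
    {
        if (f->count == BUF_FRAMES)
            return -1;                       /* congestion */
        f->data[f->head] = frame;
        f->size[f->head] = bytes;
        f->head = (f->head + 1) % BUF_FRAMES;
        f->count++;
        return 0;
    }

    /* Fraction of the buffer in use, from 0.0 (empty) to 1.0 (full);
     * the rate control can raise Qp as this approaches 1.0. */
    double fifo_fullness(const frame_fifo_t *f)
    {
        return (double)f->count / BUF_FRAMES;
    }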

One mechanism for making this trade-off effectively is to change the quantization parameter (Qp) in real time. Quantization is one of the final steps of the compression algorithm. Raising the Qp reduces the algorithm's bit-rate output, but image distortion grows roughly in proportion to the square of Qp, so image quality suffers. Because the change is made in real time, however, it helps reduce frame skipping and broken pictures. And when the visible content is changing very quickly, which is exactly when the buffer tends to congest, the loss of image quality is less noticeable than it would be when the content changes slowly. Once the content settles back to a lower bit rate and the buffer drains, the Qp can be reset to its normal value.
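
A minimal sketch of that idea follows, assuming a buffer-fullness measure between 0.0 (empty) and 1.0 (full), such as the one above, and the H.264 Qp range of 0 to 51; the thresholds are arbitrary. The quantization parameter is pushed up as the output buffer fills and relaxed back toward its nominal value as the buffer drains, in small steps because distortion grows roughly with the square of Qp.

    /* Pick the next Qp from the output-buffer fullness (0.0 .. 1.0).
     * qp_nominal is the quality the application asked for; 51 is the
     * maximum Qp allowed by H.264. */
    int next_qp(int qp_current, int qp_nominal, double fullness)
    {
        if (fullness > 0.75 && qp_current < 51)
            return qp_current + 2;   /* buffer filling: quantize more coarsely */
        if (fullness < 0.25 && qp_current > qp_nominal)
            return qp_current - 1;   /* buffer draining: restore quality */
        return qp_current;
    }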

Encoder flexibility

Because developers use DSPs in a wide variety of video applications, a DSP-based encoder should be designed to expose the flexibility of the compression standards themselves. For example, encoders built on Texas Instruments (TI) OMAP media processors for mobile applications, on the TMS320C64x+ DSP, or on DaVinci processors are highly flexible. To maximize compression performance, each encoder takes full advantage of its platform's DSP architecture, including the video and image co-processor (VICP) built into some of these devices.

All of the encoders use the same set of base APIs with default parameters, so the system interface stays the same no matter what kind of system they run on. Extended API parameters then let an encoder meet the needs of a specific application. The default parameters can be set to a high-quality preset, and a high-speed preset is also provided; an application uses the extended parameters to override any of the preset values.

The extended parameters allow an application to exercise the capabilities of H.264 or MPEG-4. The encoders support several options, such as different YUV input formats, motion compensation with resolution down to 1/4 pixel, a range of I-frame intervals (from every frame being an I-frame to only the initial I-frame with none thereafter), Qp-based rate control, access to motion vectors, deblocking-filter control, simultaneous encoding of two or more channels, and I-slices. By default, the encoder determines the motion-vector search range dynamically rather than restricting it to a fixed value, which is an improvement.
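
The split between base and extended parameters could be modeled roughly as shown below. This is a hypothetical sketch meant only to show the pattern, not TI's actual codec interface; the structures and field names are invented for illustration.

    /* Base parameters: the same for every encoder and every system. */
    typedef struct {
        int width, height;         /* input resolution                 */
        int frame_rate;            /* frames per second                */
        int target_kbps;           /* requested output bit rate        */
        int preset;                /* 0 = high quality, 1 = high speed */
    } venc_base_params_t;

    /* Extended parameters: per-application overrides of the preset. */
    typedef struct {
        venc_base_params_t base;
        int input_format;          /* which YUV layout is supplied        */
        int subpel_resolution;     /* 1 = full, 2 = half, 4 = quarter pel */
        int i_frame_interval;      /* 1 = every frame, 0 = first only     */
        int rate_control;          /* e.g. Qp-based control on/off        */
        int export_motion_vectors; /* give the application MV access      */
        int deblocking_enabled;    /* in-loop filter control              */
        int num_channels;          /* simultaneous encode channels        */
        int use_intra_slices;      /* progressive I-slice refresh         */
    } venc_ext_params_t;

An application that only needs the defaults fills in the base structure; one that needs finer control populates the extended structure, whose values override the preset, mirroring the base/extended API split described above.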

In addition, an encoder usually has an optimal operating point, or sweet spot: the output bit rate that best suits a given input resolution and frame rate (fps). Developers should identify this sweet spot so that their design strikes the best balance between transmission load and image quality.
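
As an illustrative calculation (the figures are assumptions, not values from the article): if an encoder's sweet spot for 720 x 480 video at 30 fps sits near 0.2 bit per pixel, the corresponding output rate is about 720 x 480 x 30 x 0.2 ≈ 2 Mbit/s. Running far below that rate sacrifices image quality, while running far above it spends bandwidth for little visible gain.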
