The HEVC video coding layer uses the same hybrid coding approach (intra-/inter-picture prediction combined with 2-D transform coding) that has been employed in all video compression standards since H.261. Figure 1 shows the hybrid video encoder block diagram of HEVC. (Please let me know if I have misunderstood anything O(∩_∩)O Thank you.)
The specific encoding process is as follows. Each picture is partitioned into block-shaped regions, and this partitioning information is conveyed to the decoder. The first picture of a video sequence (and the first picture at each clean random access point, CRA, within a sequence) is coded using only intra-picture prediction, i.e., it is predicted only from spatial information in neighboring regions of the same picture, so that it does not depend on any other picture. For most other pictures of the sequence, and for the pictures between two CRA points, temporally predictive (inter-picture) coding is used for most blocks. The encoding process for inter-picture prediction consists of choosing motion data, comprising the selected reference picture and a motion vector (MV), for predicting each block. The encoder and decoder generate identical inter-picture prediction signals by applying motion compensation (MC) using the MV and mode decision data, which are transmitted as side information.
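To make the two prediction paths concrete, here is a minimal Python sketch (not HEVC-conformant) of block prediction: a crude DC-style intra predictor and an integer-sample motion-compensation routine. The function names, the DC-only intra mode, and the assumption that blocks and motion vectors stay inside the picture are all illustrative simplifications of this example, not part of the standard.

    import numpy as np

    def intra_dc_predict(recon, y, x, size):
        """Highly simplified intra-picture prediction: predict the block as the
        mean (DC) of already-reconstructed neighboring samples above and to the
        left of the block within the same picture."""
        neighbors = []
        if y > 0:
            neighbors.append(recon[y - 1, x:x + size])   # row above the block
        if x > 0:
            neighbors.append(recon[y:y + size, x - 1])   # column left of the block
        dc = np.mean(np.concatenate(neighbors)) if neighbors else 128.0
        return np.full((size, size), dc, dtype=np.float64)

    def motion_compensate(ref_picture, y, x, size, mv):
        """Inter-picture prediction: copy the co-located block displaced by the
        motion vector mv = (dy, dx) from a previously decoded reference picture.
        Assumes integer-sample motion and that the displaced block lies inside
        the picture; real HEVC also interpolates fractional-sample positions."""
        dy, dx = mv
        return ref_picture[y + dy:y + dy + size,
                           x + dx:x + dx + size].astype(np.float64)

In this picture, an encoder would search over candidate motion vectors and reference pictures, pick the best mode for each block, and transmit the chosen motion data as side information so that the decoder can form the identical prediction.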
The residual signal of intra- or inter-picture prediction (i.e., the difference between the original block and its prediction) is transformed by a linear spatial transform. The transform coefficients are then scaled, quantized, entropy coded, and transmitted together with the prediction information.
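As a rough illustration of this step, the Python sketch below forms the residual, applies a separable 2-D DCT as a stand-in for the integer transforms actually specified by HEVC, and quantizes the coefficients with a single illustrative step size qstep; the resulting levels are what would then be entropy coded.

    import numpy as np

    def dct_matrix(n):
        """Orthonormal DCT-II basis; HEVC itself uses integer approximations."""
        k = np.arange(n).reshape(-1, 1)
        i = np.arange(n).reshape(1, -1)
        m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
        m[0, :] = np.sqrt(1.0 / n)
        return m

    def encode_residual(original_block, predicted_block, qstep):
        """Transform the prediction residual and quantize the coefficients.
        qstep is an illustrative scalar quantization step size."""
        residual = original_block.astype(np.float64) - predicted_block
        n = residual.shape[0]
        t = dct_matrix(n)
        coeffs = t @ residual @ t.T          # separable 2-D linear transform
        levels = np.round(coeffs / qstep)    # uniform scalar quantization
        return levels                        # these would be entropy coded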
The encoder duplicates the decoder processing loop (the gray-shaded part in Figure 1) so that both generate identical predictions for subsequent data. Accordingly, the quantized transform coefficients are reconstructed by inverse scaling and inverse transformation to obtain an approximation of the residual signal. This residual is added to the prediction, and the result is fed into one or two loop filters that smooth out artifacts caused by block-wise processing and quantization. The final picture representation (a duplicate of the decoder's output) is stored in the decoded picture buffer to be used for the prediction of subsequent pictures. In general, the order in which pictures are encoded and decoded differs from the order in which they arrive from the video source, so the decoding order (i.e., bitstream order) and the output order (i.e., display order) of the decoder must be distinguished.
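Continuing the previous sketch, the following shows the encoder-side copy of the decoder processing under the same simplifying assumptions (floating-point DCT in place of HEVC's integer transforms, a single scalar step size qstep); the loop-filter and picture-buffer steps are indicated only as comments with hypothetical helper names.

    import numpy as np

    def dct_matrix(n):  # same orthonormal DCT-II basis as in the previous sketch
        k = np.arange(n).reshape(-1, 1)
        i = np.arange(n).reshape(1, -1)
        m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
        m[0, :] = np.sqrt(1.0 / n)
        return m

    def reconstruct_block(levels, predicted_block, qstep):
        """Encoder-side duplicate of the decoder processing for one block:
        inverse scaling (dequantization), inverse transform, and addition of
        the prediction signal."""
        n = predicted_block.shape[0]
        t = dct_matrix(n)
        coeffs = levels * qstep                 # inverse scaling
        residual_approx = t.T @ coeffs @ t      # inverse 2-D transform
        return predicted_block + residual_approx

    # After all blocks of a picture are reconstructed, the loop filter(s) would
    # be applied (deblocking, and in HEVC also sample adaptive offset), and the
    # filtered picture stored in the decoded picture buffer, e.g.:
    #
    #   decoded_picture_buffer.append(loop_filter(reconstructed_picture))  # hypothetical helpers
    #
    # Pictures are later output from the buffer in display order, which may
    # differ from the decoding (bitstream) order.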
In general, HEVC is expected to be used with progressively scanned video (probably because video sources are mostly in that format now). No explicit coding features are included in the HEVC design to support interlaced scanning, since interlaced scanning is no longer used for displays and is used less and less for distribution. However, HEVC introduces a metadata syntax element that allows the encoder to indicate that interlace-scanned video has been coded by sending each field (i.e., the even- or odd-numbered lines of each video frame) as a separate picture. This provides an efficient way to code interlaced video without burdening the decoder with a special decoding process for it. (Please credit the source when reprinting. Thank you.)
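For illustration only, the short sketch below shows what coding each field as a separate picture means at the level of the sample arrays; the function name and the use of NumPy slicing are assumptions of this example and are not part of HEVC itself.

    import numpy as np

    def split_into_fields(frame):
        """Split an interlace-scanned frame into its two fields: the top field
        holds the even-numbered lines (0, 2, 4, ...) and the bottom field the
        odd-numbered lines (1, 3, 5, ...). Each field can then be coded as a
        separate picture, with metadata signalling the field structure."""
        top_field = frame[0::2, :]
        bottom_field = frame[1::2, :]
        return top_field, bottom_field

    # Example: a 1080-line interlaced frame yields two 540-line field pictures.
    frame = np.zeros((1080, 1920), dtype=np.uint8)
    top, bottom = split_into_fields(frame)   # each coded as an ordinary picture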