Video Encoding: video encoding refers to converting a file in one video format into a file in another video format by means of a specific compression technique.
Currently, the most important codec standards for video stream transmission are the ITU's H.261 and H.263,
Motion JPEG (M-JPEG), and the MPEG series of standards from the Moving Picture Experts Group of the International Organization for Standardization (ISO).
In addition, RealNetworks' RealVideo, Microsoft's WMV, and Apple's QuickTime are widely used on the Internet.
Video Compression Technology: video compression is a prerequisite for computers to process video.
After a video signal is digitized, its data rate is usually above 20 Mb/s, which makes it difficult for a computer to store and process.
With compression, the data rate usually drops to 1-10 Mb/s, so the video signal can be stored on the computer and processed accordingly.
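As a rough sketch of where figures like these come from (the resolution, frame rate, and bit depth below are illustrative assumptions, not values from the text):

```python
# Rough arithmetic behind the bandwidth figures above.
# Assumed example: SIF-like PAL video with 4:2:0 chroma subsampling (12 bits/pixel).
def raw_bitrate_mbps(width, height, fps, bits_per_pixel):
    """Uncompressed data rate in megabits per second."""
    return width * height * fps * bits_per_pixel / 1_000_000

# 352x288 at 25 fps, 12 bits per pixel -> about 30 Mb/s uncompressed
raw = raw_bitrate_mbps(352, 288, 25, 12)

# Compression ratio needed to fit a 1.5 Mb/s channel (the MPEG-1 target rate)
ratio = raw / 1.5
print(f"raw: {raw:.1f} Mb/s, ratio needed: {ratio:.0f}:1")
```

This also shows why compression is unavoidable: even a modest-resolution stream needs a roughly 20:1 reduction before it fits the bit rates quoted above.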
The common algorithms were developed by ISO, namely the JPEG and MPEG algorithms.
JPEG is a standard for still-image compression, applicable to continuous-tone color or grayscale images. It consists of two parts:
1. lossless coding based on DPCM (spatial linear prediction); 2. a lossy algorithm based on the DCT (discrete cosine transform) and Huffman coding.
The compression ratio of the former is very small, so the latter algorithm is the one mainly used.
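Since Huffman coding is named above, here is a minimal sketch of its core idea, that more frequent symbols get shorter codes (an illustration only; real JPEG applies predefined Huffman tables to DCT coefficient categories, not raw characters):

```python
import heapq
from collections import Counter

def huffman_codes(data):
    """Build a prefix code: frequent symbols get shorter bit strings."""
    freq = Counter(data)
    if len(freq) == 1:  # degenerate case: one distinct symbol
        return {sym: "0" for sym in freq}
    # Heap items: (frequency, tie-breaker, {symbol: code-so-far})
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        # Merge the two least frequent subtrees; prepend a bit to each side.
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        counter += 1
        heapq.heappush(heap, (f1 + f2, counter, merged))
    return heap[0][2]

codes = huffman_codes("aaaabbc")
# 'a' occurs most often, so it receives the shortest code.
assert len(codes["a"]) <= len(codes["b"]) <= len(codes["c"])
print(codes)
```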
The most common method in nonlinear editing is the M-JPEG algorithm, i.e., Motion JPEG.
It converts a 50-field/second (PAL) video signal into 25 frames/second and then compresses each frame with the JPEG algorithm.
Betacam image quality can generally be achieved at compression ratios between 3.5:1 and 5:1.
The MPEG algorithm is a compression algorithm suited to moving video. Besides coding individual images, it also exploits the correlation between images in the sequence to remove redundancy, which greatly improves the compression ratio of video.
The earlier MPEG-1 is used for VCD programs, and MPEG-2 for VOD and DVD programs. [4]
First, we need to distinguish between media files and encodings: a file is a container that gathers video, audio, and script streams;
the compression algorithm applied to the video and audio inside the file is the actual encoding. In other words, the video in an .AVI file may use any of five or six different encodings,
and the player calls the decoder matching that encoding to read the data stored in the AVI container format.
Encoding Introduction: there are a great many audio and video encoding schemes. The common ones at present are the following:
MPEG series: (developed by MPEG [the Moving Picture Experts Group] under ISO [the International Organization for Standardization]) Video encodings: MPEG-1 (used for VCD), MPEG-2 (used for DVD),
MPEG-4 (DVDRip currently uses its variants, such as DivX and Xvid), and MPEG-4 AVC (currently popular);
audio encodings are mainly MPEG Audio Layer 1/2, MPEG Audio Layer 3 (the famous MP3), MPEG-2 AAC, MPEG-4 AAC, and so on. Note: DVD audio does not use MPEG.
H.26x series: (led by the ITU [International Telecommunication Union], focused on network transmission. Note: video encodings only)
Including H.261, H.262, H.263, H.263+, H.263++, and H.264 (the fruit of cooperation with MPEG, identical to MPEG-4 AVC). [5]
Video Encoding principles:
Video image data is highly correlated, that is, it contains a large amount of redundant information. This redundancy can be divided into spatial redundancy and temporal redundancy.
Compression technology removes the redundant information from the data (removes the correlation between data). It includes intra-frame image compression, inter-frame image compression, and entropy-coding compression.
Removing temporal redundancy
Inter-frame coding removes temporal redundancy. It consists of the following three parts:
- Motion Compensation
Motion compensation predicts and compensates the current local image from a preceding local image; it is an effective way to reduce the redundancy of a frame sequence.
- Motion Representation
Images in different regions need different motion vectors to describe their motion; the motion vectors themselves are compressed by entropy coding.
- Motion Estimation
Motion estimation is the set of techniques for extracting motion information from a video sequence.
Note: Common compression standards use block-based motion estimation and motion compensation.
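A toy sketch of the block-based approach just noted, assuming a full search with the sum of absolute differences (SAD) as the matching cost; the frame data, block size, and search window are made-up illustrations, not any standard's actual algorithm:

```python
def sad(ref, cur, rx, ry, cx, cy, n):
    """Sum of absolute differences between the n x n block of `cur` at
    (cx, cy) and the candidate block of `ref` at (rx, ry)."""
    total = 0
    for dy in range(n):
        for dx in range(n):
            total += abs(cur[cy + dy][cx + dx] - ref[ry + dy][rx + dx])
    return total

def estimate_motion(ref, cur, cx, cy, n=4, search=2):
    """Full search: find the motion vector (mvx, mvy) whose candidate
    block in the reference frame best matches the current block."""
    h, w = len(ref), len(ref[0])
    best = (None, float("inf"))
    for mvy in range(-search, search + 1):
        for mvx in range(-search, search + 1):
            rx, ry = cx + mvx, cy + mvy
            if 0 <= rx <= w - n and 0 <= ry <= h - n:
                cost = sad(ref, cur, rx, ry, cx, cy, n)
                if cost < best[1]:
                    best = ((mvx, mvy), cost)
    return best

# Toy frames: a bright 4x4 square at (2, 2) in the reference frame
# moves one pixel right in the current frame.
W = H = 10
ref = [[0] * W for _ in range(H)]
cur = [[0] * W for _ in range(H)]
for y in range(2, 6):
    for x in range(2, 6):
        ref[y][x] = 255
        cur[y][x + 1] = 255
mv, cost = estimate_motion(ref, cur, 3, 2)
print(mv, cost)  # best match points one pixel back toward the old position
```

With the motion vector found, motion compensation copies the matched reference block and the encoder only needs to transmit the vector plus the (here zero) prediction residual.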
Removing spatial redundancy
This mainly uses intra-frame coding and entropy coding:
- Transform Coding
Both intra-frame images and prediction difference signals contain high spatial redundancy. Transform coding maps the spatial signal into another orthogonal vector space, reducing its correlation and data redundancy.
- Quantization Coding
Transform coding produces a batch of transform coefficients, which are then quantized so that the encoder's output reaches a target bit rate. This process reduces precision and is where the loss occurs.
- Entropy Coding
Entropy coding is lossless; it further compresses the coefficients and motion information obtained after transformation and quantization.
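The three steps above (transform, quantize, entropy-code) can be sketched end to end on a 1-D signal. The transform here is the textbook unnormalized DCT-II and the quantization step Q is an arbitrary assumption; real codecs work on 2-D blocks with standardized tables:

```python
import math

def dct(signal):
    """1-D DCT-II: concentrates a smooth signal's energy in few coefficients."""
    n = len(signal)
    return [sum(signal[i] * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                for i in range(n)) for k in range(n)]

def idct(coeffs):
    """Inverse transform (DCT-III, scaled so it inverts dct() above)."""
    n = len(coeffs)
    return [(coeffs[0] / 2
             + sum(coeffs[k] * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                   for k in range(1, n))) * 2 / n for i in range(n)]

# 1. Transform: a smooth ramp -> energy piles into the first coefficients.
signal = [10, 12, 14, 16, 18, 20, 22, 24]
coeffs = dct(signal)

# 2. Quantization: divide by an assumed step and round -- the lossy step.
Q = 8
quantized = [round(c / Q) for c in coeffs]
# Most high-frequency coefficients become zero: cheap for entropy coding.

# 3. Reconstruction shows the precision loss the text mentions.
dequantized = [q * Q for q in quantized]
reconstructed = idct(dequantized)
error = max(abs(a - b) for a, b in zip(signal, reconstructed))
print(quantized, f"max reconstruction error: {error:.2f}")
```

The long run of zeros in `quantized` is exactly what an entropy coder (run-length plus Huffman or arithmetic coding) then compresses losslessly.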
Video Encoding Technology:
Surveillance currently uses MJPEG, MPEG-1/2, MPEG-4 (SP/ASP), H.264/AVC, and other video encoding technologies. End users care most about definition, storage capacity (bandwidth), stability, and price, and the choice of compression technology greatly affects all of these factors.
MJPEG
MJPEG (Motion JPEG) compression was developed mainly from still-image compression. Its main characteristic is that it largely ignores the changes between different frames in the video stream and compresses each frame on its own.
MJPEG can produce high-definition video images and allows the frame rate and resolution to be adjusted dynamically. However, because inter-frame changes are not taken into account, a large amount of redundant information is stored repeatedly, so each video occupies considerable space; at best, current MJPEG technology reaches about 3 KB per frame, and 8-20 KB per frame is typical.
MPEG-1/2
The MPEG-1 standard mainly targets image compression at SIF resolution (352x240 for NTSC; 352x288 for PAL), with a target bit rate of about 1.5 Mb/s. Compared with MJPEG, MPEG-1 brings significant improvements in real-time compression, per-frame data volume, and processing speed. However, MPEG-1 also has many drawbacks: storage requirements are still too large, definition is not high, and network transmission is difficult.
MPEG-2 extends and improves on MPEG-1 and is backward compatible with it. It mainly targets storage media, digital TV, high-definition TV, and similar applications, at four resolution levels: low (352x288), main (720x480), high-1440 (1440x1080), and high (1920x1080). Relative to MPEG-1, MPEG-2 raises the resolution and satisfies users' demand for high definition, but because its compression performance is not much better, storage requirements remain too large and it is likewise unsuited to network transmission.
MPEG-4
Compared with MPEG-1/2, the MPEG-4 video compression algorithm offers a significant improvement in low-bit-rate compression. For video at CIF (352x288) or higher definition (768x576), MPEG-4 has a clear advantage over MPEG-1 in both definition and storage capacity and is better suited to network transmission. In addition, MPEG-4 can easily adjust the frame rate and bit rate dynamically to reduce storage requirements.
Because the MPEG-4 system design is overly complex, full and compatible implementations are difficult, and adoption in videoconferencing, videophones, and similar fields has been hard, which departs somewhat from its original intent. In addition, Chinese enterprises face high patent fees, currently stipulated as follows:
- each decoding device must pay MPEG-LA US$0.25;
- encoding/decoding devices must also pay a time-based fee (US$0.04/day = US$1.20/month = US$14.40/year).
H.264/AVC
International video compression standards mainly comprise H.261, H.262, H.263, and H.264 developed by the ITU-T, and MPEG-1, MPEG-2, and MPEG-4 developed by MPEG; among these, H.262/MPEG-2 and H.264/MPEG-4 AVC were developed jointly by the ITU-T and MPEG.
In short, H.264 is a video encoding technology; like Microsoft's WMV9, it is a "codec" program for compressing moving-image data.
This article is from: http://baike.baidu.com/view/572258.htm
Video encoding rate, frame count, image size, and file size
From: http://www.movie007.com/archiver/tid-422.html
Video encoding rate:
It can be understood simply as the key parameter governing file size: the number of kilobits of video data per second.
Its unit is kbit/s, i.e., kbps means kilobits per second; note that 8 kbit/s = 1 KB/s.
In other words, a video at n kbps occupies n/8 KB of disk space per second (not counting the audio track, of course).
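The conversion can be written out explicitly (the 700 kbps bit rate and 90-minute duration below are made-up examples):

```python
def kbps_to_kb_per_s(kbps):
    """Kilobits per second -> kilobytes per second (8 bits per byte)."""
    return kbps / 8

def video_size_mb(kbps, seconds):
    """Approximate disk space of the video track alone (audio excluded)."""
    return kbps_to_kb_per_s(kbps) * seconds / 1024

# Example: a 700 kbps video track running for 90 minutes
size = video_size_mb(700, 90 * 60)
print(f"{kbps_to_kb_per_s(700):.1f} KB/s, ~{size:.0f} MB for 90 minutes")
```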
The figures above are only meant to give you a concrete picture of the video encoding rate (hereafter: bit rate). In practice you do not have to calculate it yourself; wismencoder does it for you and shows, in the lower-right corner of the software, the disk space the current settings will require per hour and per minute. (These are theoretical values; the actual bit rate after compression may differ slightly.)
So, for the same video, the higher the encoding bit rate, the larger the file;
and, in general, the higher the bit rate, the better the image quality and the less mosaic (blocking) appears.
Frames:
We all know that a movie consists of individual pictures shown in rapid, continuous succession during playback; each picture is called a "frame".
The frame count in wismencoder is actually FPS, whose full name is "frames per second", i.e., how many pictures are shown each second. Clearly, the more pictures per second, the smoother the motion looks; the fewer, the more the picture "stutters".
The frame rate also affects image quality: at the same video bit rate, the higher the frame rate, the worse each frame looks, especially in motion scenes. This is because the frames must share a fixed number of bits each second; the more frames there are, the fewer bits each one gets for its content.
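The trade-off described here is simple division; the bit rate and frame rates below are illustrative:

```python
def kbits_per_frame(bitrate_kbps, fps):
    """At a fixed bit rate, each frame's share shrinks as fps grows."""
    return bitrate_kbps / fps

# The same 800 kbps stream at different frame rates:
for fps in (15, 25, 30):
    print(f"{fps} fps -> {kbits_per_frame(800, fps):.1f} kbit per frame")
```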
Image Size:
Image size is measured in pixels, not in inches or centimeters; this should be made clear. Image size is also called resolution.
Each pixel is a dot: 640x480 means each frame of the video is 640 dots wide and 480 dots high. The "pixels" quoted for cameras are the same concept, except that a camera's figure is the product of the width and height.
It is easy to see that the more pixels there are, the finer the detail and the clearer the picture, just as a 5x5 grid of dots can hardly reproduce the detail of a 50x50 image.
The larger the image size, the higher the encoding bit rate should be, because the picture contains more detail and the file must grow accordingly; otherwise, at the same bit rate, the larger the picture, the more visible the mosaic becomes.
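The same reasoning can be made concrete as bits available per pixel (the bit rate and resolutions are illustrative assumptions):

```python
def bits_per_pixel(bitrate_kbps, width, height, fps):
    """At a fixed bit rate, a larger frame leaves fewer bits per pixel,
    which is why mosaics (blocking) become more visible."""
    return bitrate_kbps * 1000 / (width * height * fps)

# The same 1000 kbps stream at two frame sizes:
small = bits_per_pixel(1000, 320, 240, 25)
large = bits_per_pixel(1000, 640, 480, 25)
print(f"320x240: {small:.3f} bit/px, 640x480: {large:.3f} bit/px")
```

Doubling both dimensions quadruples the pixel count, so the per-pixel bit budget drops to a quarter at the same bit rate.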