In simple terms:
YUV: luma (Y) + chroma (UV) format. Sensors typically support YUV422, i.e. the data is output in Y-U-Y-V order.
RGB: the traditional red-green-blue format, such as RGB565, whose 16-bit data format is 5-bit R + 6-bit G + 5-bit B. G gets the extra bit because the human eye is more sensitive to green.
RAW RGB: each pixel of the sensor sits behind a single color filter, and the filters are laid out in a Bayer pattern. The sensor outputs the data for each pixel directly, i.e. raw RGB.
JPEG: some sensors, especially low-resolution ones, have their own JPEG engine and can directly output compressed JPEG data.
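To make the YUV422 byte order concrete, here is a minimal sketch that unpacks one YUYV macropixel (two pixels sharing one U/V pair) into two RGB pixels. It uses full-range BT.601-style conversion coefficients; real sensors and basebands may use limited-range or slightly different coefficients.

```python
# Minimal sketch: unpack one YUYV (YUV422) macropixel into two RGB pixels.
# Assumes full-range BT.601-style coefficients; actual hardware may differ.

def clamp(v):
    """Clamp a value into the 8-bit range 0..255."""
    return max(0, min(255, int(round(v))))

def yuyv_to_rgb(y0, u, y1, v):
    """Convert one 4-byte Y-U-Y-V group (two pixels sharing U/V) to two RGB triples."""
    def to_rgb(y):
        c_u, c_v = u - 128, v - 128   # chroma is centered on 128
        r = y + 1.402 * c_v
        g = y - 0.344136 * c_u - 0.714136 * c_v
        b = y + 1.772 * c_u
        return (clamp(r), clamp(g), clamp(b))
    return to_rgb(y0), to_rgb(y1)

# Neutral gray: U = V = 128 leaves R = G = B = Y.
print(yuyv_to_rgb(128, 128, 128, 128))  # ((128, 128, 128), (128, 128, 128))
```

Note that the two pixels share one U and one V sample, which is exactly why YUV422 averages 2 bytes per pixel.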
These output formats raise a couple of questions:
1. Some mobile-phone baseband chips can only support YUV sensors of 2 MP or less, and at 3 MP can only use JPEG sensors. The YUV output format evidently places certain demands on the baseband chip; what exactly does a YUV sensor require of the baseband chip?
2. If the sensor outputs RGB directly, driving the LCD is most convenient, so why do most baseband chips require the data in YUV format for processing?
1. YUV422 takes 2 bytes per pixel, so at high pixel counts the baseband chip must process the data at a high clock rate, whereas JPEG data is much smaller. So it is not a requirement that YUV places on the baseband chip, but a requirement the baseband chip places on the sensor's output data rate.
2. RGB565 is generally used on very low-end baseband chips that write directly to the screen. YUV output carries the luminance signal without any loss, and the human eye is not particularly sensitive to the color-difference signals, while the RGB565 output format (R5G6B5) loses a lot of the raw information, so YUV image quality and stability are much better than RGB565.
3. Raw data is about 1 byte per pixel, so the data volume is much smaller; sensors above 5 MP generally output raw data to keep the output rate manageable, with a DSP hung on the back end to process the output data.
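The per-frame sizes behind points 1-3 can be checked with simple arithmetic. The sketch below assumes a hypothetical 5 MP sensor at 2592x1944 (JPEG is omitted since its size depends on content and quality settings).

```python
# Rough per-frame sizes for the formats above, for an assumed 5 MP (2592x1944)
# sensor. Illustrative only.

WIDTH, HEIGHT = 2592, 1944
pixels = WIDTH * HEIGHT

yuv422_bytes = pixels * 2   # 2 bytes/pixel: Y every pixel, U/V shared by pixel pairs
rgb565_bytes = pixels * 2   # 16 bits/pixel
raw8_bytes   = pixels * 1   # 1 byte/pixel: one 8-bit Bayer sample per photosite

for name, size in [("YUV422", yuv422_bytes), ("RGB565", rgb565_bytes), ("RAW8", raw8_bytes)]:
    print(f"{name}: {size / 1e6:.1f} MB/frame")
```

Raw halves the bus traffic relative to YUV422, which is why high-resolution sensors prefer it.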
The difference between RAW and JPEG
A raw file is essentially a source file without any image processing: it records the information captured by the camera, without the loss caused by in-camera processing (such as sharpening or increased color contrast) and compression, but these files need special software to open. The other commonly used format is JPEG: the camera applies some image processing according to the user's settings, then compresses the photo (to a degree set by the picture-quality setting in the camera) and saves it.

Why shoot raw? Raw is popular among professional photographers because it preserves the original information, leaving a great deal of room for post-production, such as adjusting white balance, exposure, and color contrast; it is especially useful for novices to rescue failed photos. And no matter what changes are made in post-production, the photo can always be restored to its original state, with no fear of losing the picture through an accidental save. Raw has other benefits too: software such as Canon's DPP can correct lens vignetting, distortion, and so on.

What are the advantages of JPEG? JPEG is a very popular photo format: almost all modern digital cameras can use it, most computers can open JPEG files, and users can freely set the compression level to trade off picture quality (the best JPEG quality is very close to raw). It is a very convenient format.

Should I shoot raw or JPEG? Before discussing this, consider the disadvantages of raw:
1. Because a raw file keeps all the details and information, it is much larger than a JPEG, so storing photos or transferring them to the computer takes longer, and more storage capacity is needed;
2. Raw files need special software to open; if that software is not installed on the computer, the files cannot be opened;
3. Ten years from now, if that specific software can no longer be installed, the photos taken before cannot be opened;
4. Software takes a long time to open a raw file: 8-9 s when fast, up to 20 s when slow;
5. Different software "translates" raw files differently, so the same raw file may look different in Photoshop and in Nikon Capture NX;
6. The proprietary software sold by vendors is not cheap (Canon DPP can be downloaded for free; Nikon Capture NX must be purchased separately).

Knowing raw's shortcomings, we can see when to choose raw or JPEG: if you need to take a large number of photos, consider JPEG, because it needs relatively little storage and saves the time of converting photos to JPEG afterwards; for commercial work or heavy post-production, use raw, because it leaves much more room to work; for travel photos, consider raw or raw+JPEG, since you may not visit the place often and raw gives a failed shot a much greater chance of being rescued.

Postscript: Photoshop is now very powerful; the exposure, white balance, and color contrast of JPEG files can also be adjusted with levels or curves. Of course, if large adjustments are needed, raw files are more suitable.
Camera Data format
The data output format of a camera generally falls into CCIR601, CCIR656, raw RGB, and other formats; the RGB format mentioned here means the CCIR601 or CCIR656 format. The raw RGB format differs from the normal RGB format.
We know that the photosensitive principle of a sensor is to sample and quantize light at each photosite, but in a sensor each photosite can sense only one of the R, G, B colors. So what is usually called 300,000 pixels or 1.3 million pixels actually refers to 300,000 or 1.3 million photosites, each sensitive to a single color.
However, to reconstruct a real image, every point needs all three RGB colors. So for the CCIR601 or 656 formats, there is an ISP module inside the sensor module that interpolates and applies effects to the data collected by the sensor. For example, if a photosite's color is R, the ISP module calculates that point's G and B values from the surrounding G and B photosites; the full RGB value is thus restored and then transmitted to the host encoded in 601 or 656 format.
A raw RGB sensor, by contrast, transmits each photosite's value directly to the host, which performs the interpolation and effects processing.
Raw RGB has only one color per pixel (one of R, G, B);
RGB has three colors per pixel, each value between 0 and 255.
During testing of a mobile-phone camera module, the data coming from the sensor is raw (raw RGB), and color interpolation turns it into RGB.
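The color interpolation (demosaicing) step can be sketched with a toy nearest-neighbor version for the BG/GR Bayer layout discussed later in this article. Real ISPs use bilinear or edge-aware interpolation; this only shows how one color sample per photosite becomes a full RGB pixel.

```python
# Toy nearest-neighbor demosaic for a BG/GR Bayer mosaic. Each 2x2 tile holds
# one B, two G, and one R sample; here every pixel in the tile simply reuses
# the tile's samples (real ISPs interpolate from neighboring tiles too).

def demosaic_bggr(raw):
    """raw: 2D list with even width/height. Row 0: B G B G..., row 1: G R G R..."""
    h, w = len(raw), len(raw[0])
    out = [[None] * w for _ in range(h)]
    for y in range(0, h, 2):
        for x in range(0, w, 2):
            b = raw[y][x]          # blue sample
            g = raw[y][x + 1]      # one of the tile's two green samples
            r = raw[y + 1][x + 1]  # red sample
            rgb = (r, g, b)
            for dy in (0, 1):
                for dx in (0, 1):
                    out[y + dy][x + dx] = rgb
    return out

mosaic = [[10, 20],
          [30, 40]]  # B=10, G=20, G=30, R=40
print(demosaic_bggr(mosaic)[0][0])  # (40, 20, 10)
```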
Sensor output data formats fall mainly into two types: YUV (the more popular) and RGB; this is the sensor's data output. Raw RGB is the data obtained from the sensor's Bayer array (each photosite records the brightness of its corresponding color).
But the output data is not yet usable image data; for module testing, software must be written to perform data acquisition (obtaining the raw data), color interpolation (to obtain RGB format for easy display), and image display.
This reveals whether the whole module is normal and free of dead pixels or dirt, i.e. it detects defective units. (During software processing, white balance, gamma correction, and color correction are also needed to obtain better image quality.)
In mobile-phone applications, the phone provides an ISP matched to the camera module's data format (mainly used for RGB format), which together with the software makes the camera function usable.
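Two of the post-processing steps just named, white balance and gamma correction, can be sketched per pixel. The gain and gamma values below are arbitrary illustrative choices, not values from any particular ISP.

```python
# Sketch of simple ISP post-processing: per-channel white-balance gains
# followed by gamma encoding. Gains and gamma here are arbitrary examples.

def correct_pixel(rgb, gains=(1.8, 1.0, 1.5), gamma=2.2):
    """Apply white-balance gains then 1/gamma encoding; 8-bit in, 8-bit out."""
    out = []
    for value, gain in zip(rgb, gains):
        v = min(1.0, (value / 255.0) * gain)  # apply gain, clip to 1.0
        v = v ** (1.0 / gamma)                # gamma-encode for display
        out.append(round(v * 255))
    return tuple(out)

print(correct_pixel((0, 0, 0)))        # (0, 0, 0)
print(correct_pixel((255, 255, 255)))  # (255, 255, 255)
```

Black and white stay fixed; mid-tones are brightened by the gamma curve and tinted by the gains.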
For the sensor, the image structure of both Bayer RGB and RGB raw is BG/GR. (The Bayer pattern, i.e. the layout of the color filter array, comes in two types: the standard Bayer pattern and the pair pattern. The standard Bayer pattern's structure is BG/GR, while the pair pattern's structure is BGBG/GRGR, that is, a unit of four rows in which the first two rows have the BG structure and the last two rows have the GR structure. The pair-pattern structure is a Micron patent, used mainly in TV output mode (NTSC/PAL).)
Since the back-end application decodes the raw image data assuming a default structure such as BG/GR, the image structure of Bayer RGB and RGB raw must be BG/GR; if the output image structure is BGBG/GRGR, it cannot be directly displayed and decoded.
The main difference between Bayer RGB and RGB raw is the processing applied before output: Bayer RGB comes from the ADC output and passes only through lens-shading correction and the gamma module before being output directly, whereas RGB raw is processed by the entire ISP pipeline and is finally converted to YUV422 data.
Digital Video CCIR 601 coding standard
First, the sampling frequency: to ensure signal synchronization, the sampling frequency must be a multiple of the TV signal's line frequency. CCIR chose fs = 13.5 MHz as the common sampling frequency for the NTSC, PAL, and SECAM television standards.
This sampling frequency is exactly 864 times the line frequency of PAL and SECAM, and 858 times that of NTSC, which guarantees synchronization between the sampling clock and the line-sync signal. For the 4:2:2 sampling format, the luminance signal is sampled at fs and the two color-difference signals are each sampled at
fs/2 = 6.75 MHz. The minimum sampling rate for a chroma component derived from fs is fs/4 = 3.375 MHz.
Second, the resolution: from the sampling frequency it can be calculated that each scan line yields 864 sample points for PAL and SECAM, and 858 for NTSC. Since each line of the TV signal includes sync and blanking intervals, the number of effective image samples is smaller; CCIR 601 specifies 720 effective samples per line for all standards. Because the standards have different numbers of effective lines per frame (576 for PAL and SECAM, 484 for NTSC), CCIR defined 720x484 as the basic standard for digital TV. When computers actually display digital video, the following parameters are usually used:

TV standard   Resolution   Frame rate
NTSC          640x480      30
PAL           768x576      25
Third, data volume: CCIR 601 stipulates that each sample is digitized to 8 bits, i.e. 256 levels. In practice, however, the luminance signal occupies 220 levels and the chroma signals 225 levels, with the remaining codes reserved for synchronization, coding, and other control purposes. Sampling at fs in the 4:2:2 format, the digital video data rate is:
13.5 (MHz) x 8 (bit) + 2 x 6.75 (MHz) x 8 (bit) = 216 Mbit/s = 27 MB/s
It can likewise be calculated that with 4:4:4 sampling the digital video data rate is about 40 MB/s. At a data rate of 27 MB/s, a 10-second clip of digital video consumes 270 megabytes of storage. At this rate, a 680 MB CD-ROM can hold only about 25 seconds of digital video, and even today's high-speed optical drives have transfer rates far below the required 27 MB/s, so the video could not be played back in real time. Uncompressed digital video is impractical for current computers and networks, whether for storage or transmission, so the key problem in multimedia applications of digital video is compression.
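The figures above follow directly from fs = 13.5 MHz and 8 bits per sample:

```python
# Recomputing the CCIR 601 data rates: luma sampled at fs, 8 bits/sample.

fs = 13.5e6  # luma sampling frequency, Hz

rate_422 = fs * 8 + 2 * (fs / 2) * 8   # 4:2:2 - Y at fs, Cb and Cr at fs/2
rate_444 = 3 * fs * 8                  # 4:4:4 - all three components at fs

print(rate_422 / 1e6, "Mbit/s =", rate_422 / 8 / 1e6, "MB/s")  # 216.0 Mbit/s = 27.0 MB/s
print(rate_444 / 8 / 1e6, "MB/s")                              # 40.5 MB/s
print(rate_422 / 8 * 10 / 1e6, "MB in 10 s")                   # 270.0 MB in 10 s
```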
The difference between CCIR601 and CCIR656 standards
Regarding the difference between the two signals:
ITU-R BT.601: 16-bit parallel data transmission, 21 wires; the Y, U, and V signals are transmitted simultaneously.
ITU-R BT.656: 9 wires, no separate sync signals required, 8-bit data transmission in a serialized style; the clock rate is twice that of 601, with Y and UV time-multiplexed on the same bus.
CCIR601 uses dedicated line-sync and field-sync signal lines to carry the line and field synchronization information;
CCIR656 does not need these two signal lines: it achieves "soft" synchronization over the 8-bit data lines alone.
656 output is serialized data, with the line/field sync embedded in the data stream;
601 is parallel data transmission, with separate line/field sync outputs;
656 is only a data-transmission interface; it can be regarded as one transmission mode of 601.
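The "soft" synchronization works through 4-byte timing reference codes of the form FF 00 00 XY embedded in the stream, where the XY byte carries the field (F), vertical-blanking (V), and SAV/EAV (H) flags. A minimal parser sketch (the sample bytes are fabricated for illustration):

```python
# Sketch: scan a BT.656-style byte stream for FF 00 00 XY timing reference
# codes and decode the F/V/H flags from the XY byte.

def find_timing_codes(stream):
    """Yield (offset, field, vblank, hblank) for each FF 00 00 XY code found."""
    for i in range(len(stream) - 3):
        if stream[i] == 0xFF and stream[i + 1] == 0x00 and stream[i + 2] == 0x00:
            xy = stream[i + 3]
            f = (xy >> 6) & 1  # field bit
            v = (xy >> 5) & 1  # 1 during vertical blanking
            h = (xy >> 4) & 1  # 0 = SAV (start of active video), 1 = EAV (end)
            yield (i, f, v, h)

# SAV (0x80), a few fake active-video bytes, then EAV (0x9D):
data = bytes([0xFF, 0x00, 0x00, 0x80]) + bytes([0x10, 0x80] * 4) + bytes([0xFF, 0x00, 0x00, 0x9D])
print(list(find_timing_codes(data)))  # [(0, 0, 0, 0), (12, 0, 0, 1)]
```

The lower four bits of XY are error-protection bits, which this sketch ignores.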
Simply stated, ITU-R BT.601 is the standard for "encoding parameters of digital television for studios", while ITU-R BT.656 is the digital interface standard in Annex A of ITU-R BT.601: the digital transmission interface standard used between major digital video devices, including chips, with a 27 MHz parallel interface and a 243 Mbit/s serial interface.
The formulation of CCIR601 was the first step toward unifying and standardizing the parameters of digital TV broadcasting systems. This recommendation specifies the basic digital coding parameter values for studio television in both the 625-line and 525-line systems.
Recommendation 601 sets out a single coding standard for TV studios, defining the coding method, sampling frequency, and sampling structure of the color TV signal.
It specifies that the color TV signal is encoded by component. Component coding means that the composite color TV signal is first separated into a luminance signal and color-difference signals before conversion to digital form; the component signals (Y, B-Y, R-Y) are encoded separately and the digital signals then combined. It also specifies the sampling frequency and the sampling structure.
For example, in 4:2:2 encoding, the specified sampling frequencies for the luminance and color-difference signals are 13.5 MHz and 6.75 MHz, and the sampling structure is orthogonal, repeating by line, field, and frame: in every row, the R-Y and B-Y samples coincide with the odd-numbered (1st, 3rd, 5th, ...) Y samples, so the sampling structure is fixed and the positions of the sample points on the TV screen stay the same. It specifies the encoding method: the luminance signal and the two color-difference signals are linearly PCM encoded, quantized to 8 bits per sample. At the same time, the digital coding does not use the full dynamic range of the A/D converter: the luminance signal is assigned only 220 quantization levels, with the black level at code 16 and the white level at code 235; each color-difference signal is assigned 224 quantization levels, with the zero level of the color-difference signal at code 128.
In summary, the encoded data rate of the component signals is very high. Taking the 4:2:2 coding standard as an example, the bit rate is 13.5x8 + 6.75x8x2 = 216 Mbit/s. If the 4:4:4 coding method is used and the composite signal is encoded directly at a 13.3 MHz sampling frequency, the rate is 13.3x8 = 106.4 Mbit/s.
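The quantization ranges above can be written as simple mapping functions from normalized analog values to 8-bit codes:

```python
# The 8-bit quantization ranges specified above: luma mapped into 16..235,
# color-difference signals centered on code 128.

def quantize_luma(y):
    """y in [0.0, 1.0] (black..white) -> 8-bit code in 16..235 (220 levels)."""
    return round(16 + y * 219)

def quantize_chroma(c):
    """c in [-0.5, 0.5] -> 8-bit code centered on 128 (224 levels)."""
    return round(128 + c * 224)

print(quantize_luma(0.0), quantize_luma(1.0))  # 16 235
print(quantize_chroma(0.0))                    # 128
```

Codes outside these ranges are left free for sync and control words, which is what lets BT.656 embed its timing codes in the data stream.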
PS: We can take CCIR601 to mean "ITU-R BT.601-5", and "ITU-R BT.656-4" is what is meant by CCIR656.
656 has 8-bit data + CLK; 601 has 8-bit data + CLK + HSYNC + VSYNC, or 16-bit data + CLK + HSYNC + VSYNC.
656 inserts HSYNC and VSYNC into the data; the data lines of 601 carry only data.