Camera Image Processing: Principles and Example Analysis (Important Image Concepts)



Liu Xuhui ([email protected]). Please credit the source when reprinting.

Blog: http://blog.csdn.net/colorant/

Home: http://rgbbones.googlepages.com/

The camera is one of the core modules of a camera phone, and tuning the camera sensor's image quality involves many parameters. A solid grasp of basic optics and of how the sensor's software and hardware process images pays off greatly in this work. Without that theoretical grounding, one can only tune by feel and experience, and will often fail to pinpoint the key problems, master the core sensor tuning techniques, or solve the issues at hand.

1.1 Color sensing and correction

1.1.1 Principle

The human eye recognizes color because it contains three different types of sensing cells, each with a different response curve across the wavelengths of visible light; the brain synthesizes their outputs into the perception of color. Broadly speaking, we can use the RGB tricolor concept to understand how color is decomposed and synthesized.

Theoretically, if the spectral response curves of the human eye or a sensor to the three primaries were fully separated in the spectrum, the responses to the three colors would not affect one another and there would be no so-called crosstalk effect.

The actual situation, however, is not so ideal: as the spectral response curves of the eye's three color-sensing systems show, the RGB responses are not completely independent.

(Figure: spectral response of a Kodak camera sensor, which clearly differs from the response curves of the human eye.)

1.1.2 Calibration of sensor's color sensing

Having seen the sensor's spectral response, we know that it is generally biased relative to the human eye's RGB response, and it therefore needs correction: not only for the crosstalk between channels, but also for the relative response strength of each channel. Common practice is to correct color with a color correction matrix (CCM).

Color correction is usually performed by the ISP, with software setting the relevant registers to obtain the correct result. Note that since RGB-to-YUV conversion is also realized by a 3*3 transformation matrix, the two matrices are sometimes combined in the ISP pipeline, so that color correction and color-space conversion are completed in a single matrix operation.
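As a concrete illustration, here is a minimal sketch of applying a CCM to one pixel. The coefficients are illustrative placeholders; real values are tuned per sensor module in the light box.

    #include <stdint.h>

    /* Illustrative color correction matrix; each row sums to 1.0 so that
       neutral gray is preserved. Real coefficients are tuned per sensor. */
    static const float ccm[3][3] = {
        {  1.30f, -0.20f, -0.10f },
        { -0.15f,  1.35f, -0.20f },
        { -0.05f, -0.30f,  1.35f },
    };

    static uint8_t clamp_u8(float v)
    {
        return (uint8_t)(v < 0.0f ? 0.0f : (v > 255.0f ? 255.0f : v));
    }

    /* Apply the CCM to one RGB pixel. */
    void ccm_apply(const uint8_t in[3], uint8_t out[3])
    {
        for (int i = 0; i < 3; i++) {
            float acc = 0.0f;
            for (int j = 0; j < 3; j++)
                acc += ccm[i][j] * (float)in[j];
            out[i] = clamp_u8(acc);
        }
    }

Because this step and the RGB-to-YUV conversion are both 3*3 linear maps, an ISP can pre-multiply the two matrices and apply the product in one pass, exactly as described above.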

1.2 Color space

1.2.1 Classification

In fact, describing color is very complex. The RGB three-primary system, for example, cannot cover every possible color, so to serve different needs of expression, conversion, and hardware/software application, a variety of color models and color spaces exist. Depending on the classification criteria, they can be grouped into different categories.

For sensors, the color spaces we most often encounter are RGB and YUV (each is in fact a family of representations and models, such as sRGB, Adobe RGB, YUV422, YUV420, ...). RGB, as described above, describes color by the additive three-primary principle, while YUV describes color in terms of luminance and chrominance (color difference).

1.2.1.1 RGB <-> YUV Conversion

Unlike conversions between some other color spaces, there is no single standard conversion formula here, because YUV is to a large extent tied to the hardware; so the RGB-YUV conversion formula usually exists in multiple, slightly different versions.

Common formulas are as follows.

For the PAL system:

Y = 0.299 R + 0.587 G + 0.114 B

U = 0.493 (B - Y) = -0.147 R - 0.289 G + 0.436 B

V = 0.877 (R - Y) = 0.615 R - 0.515 G - 0.100 B

In addition, there is the NTSC system, where the chroma axes are rotated and called I and Q:

Y = 0.299 R + 0.587 G + 0.114 B

I = 0.596 R - 0.274 G - 0.322 B

Q = 0.211 R - 0.523 G + 0.312 B

However, the YUV values obtained from these formulas can be negative, and the spans between the upper and lower bounds of their ranges are not 255, which is inconvenient for computer processing. So, depending on interpretation and requirements, software typically uses one of several variant formulas, for example offsetting U and V by 128 and rescaling the coefficients so that every component fits in 0..255.

Indeed, some sensors let you configure the output value range of YUV; this is the reason. One key point to take away from the formulas is that the U and V signals are the blue-difference and red-difference signals, and thus to some extent indirectly represent the intensities of blue and red. Understanding this is a great help in following the various color transformations discussed later.
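To make this concrete, here is a minimal sketch of one widely used full-range variant (the JPEG-style form of BT.601), with the 128 offset just mentioned; other variants differ only in coefficients and ranges.

    #include <stdint.h>

    static uint8_t clamp_u8(float v)
    {
        return (uint8_t)(v < 0.0f ? 0.0f : (v > 255.0f ? 255.0f : v));
    }

    /* Full-range RGB -> YUV. U is the blue-difference signal and V the
       red-difference signal, each shifted by 128 to stay non-negative. */
    void rgb_to_yuv(uint8_t r, uint8_t g, uint8_t b,
                    uint8_t *y, uint8_t *u, uint8_t *v)
    {
        float yf = 0.299f * r + 0.587f * g + 0.114f * b;
        *y = clamp_u8(yf);
        *u = clamp_u8(0.564f * ((float)b - yf) + 128.0f);
        *v = clamp_u8(0.713f * ((float)r - yf) + 128.0f);
    }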

1.3 White Balance

1.3.1 Color Temperature

The definition of color temperature: imagine a blackbody heated from absolute zero, with each degree of temperature rise being one kelvin (denoted K). When the temperature rises far enough, the blackbody radiates visible light, and its spectral composition, and the color sensation it gives, change correspondingly with temperature. The color temperature of a light source is therefore defined as the temperature of the blackbody whose radiation has the same hue as that source.

As the color temperature increases, the color of the light source shifts from warm toward cool, and the energy distribution of the light shifts from the red end toward the blue end of the spectrum.

It is worth noting that real light sources differ in spectral distribution; color temperature characterizes only the overall hue of the radiated energy, not the specific spectral distribution, so even sources with the same color temperature may produce different color responses. The human eye and brain adapt to color temperature both physiologically and psychologically, so perceived colors change little as it shifts. A camera sensor has no such ability, so without white balance processing, the colors in a photo will deviate noticeably from what the eye sees (even though what the eye sees is itself biased relative to the true colors under white light).

Sunlight's color temperature varies with weather and time of day; this is related to how light of different frequencies refracts and scatters:

Long-wavelength light has a smaller refractive index and penetrates better; short-wavelength light has a larger refractive index and is more easily scattered. This is why stop lights are red, fog lamps are usually yellow, the sky appears blue, and so on.

With this in mind, the pattern of how sunlight's color temperature changes, and why, can be understood and analyzed; this is left to the reader to think through.

1.3.2 Color correction for changes in color temperature

Theoretically, then, as the color temperature of the light source rises, the light contains proportionally more blue energy, and objects shot under it will show a color cast relative to their normal colors unless the parameters are corrected: the sensor's blue-channel gain must be reduced and its red-channel gain increased. At the same time, the adjustment should keep the overall brightness roughly constant, i.e., measured in YUV, the Y value should stay unchanged. In theory, the RGB-to-YUV formula and the contributions of the R, G, B components to Y determine the proportional relationship between the changes in Rgain and Bgain. The real situation is more complicated: the sensor's R/B crosstalk and nonlinearity must be considered, so the best values may deviate somewhat from the theoretical ones.
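A minimal sketch of that theoretical Rgain/Bgain trade-off, assuming a neutral gray patch (R = G = B) and the PAL luma weights; as the text says, real crosstalk and nonlinearity shift the optimum away from this value.

    /* From Y = 0.299 R + 0.587 G + 0.114 B: scaling R by r_gain and B by
       b_gain on a gray patch leaves Y unchanged exactly when
       0.299 * r_gain + 0.587 + 0.114 * b_gain = 1.0. */
    float b_gain_for_constant_y(float r_gain)
    {
        return (1.0f - 0.587f - 0.299f * r_gain) / 0.114f;
    }

For example, raising r_gain to 1.1 calls for b_gain of about 0.74 to hold Y constant.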

1.3.3 Automatic white balance principle

1.3.3.1 Principle

Automatic white balance is based on the assumption that the average color of a scene falls within a specific range; if the measured average deviates from that range, the corresponding parameters are adjusted and corrected until the mean falls back inside. The process may run in YUV space or in RGB space. For a sensor, the usual approach is to adjust the R/B gains so that the mean UV values fall within a specified range, which achieves automatic white balance.
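A minimal sketch of one such iteration in YUV space; the 126..130 dead band and the step size are illustrative values, not taken from any particular sensor.

    /* Nudge the R/B channel gains until the frame's mean U and V fall
       inside a neutral window around 128. Called once per frame. */
    typedef struct { float r_gain, b_gain; } wb_gains;

    void awb_step(float mean_u, float mean_v, wb_gains *g)
    {
        const float step = 0.01f;
        if (mean_u > 130.0f)      g->b_gain -= step;  /* image too blue */
        else if (mean_u < 126.0f) g->b_gain += step;
        if (mean_v > 130.0f)      g->r_gain -= step;  /* image too red  */
        else if (mean_v < 126.0f) g->r_gain += step;
    }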

1.3.3.2 Treatment of special cases

A problem easily encountered with automatic white balance is a scene whose own color, independent of the illuminant's color temperature, deviates strongly from the average, such as a large area dominated by one color: grass, a red flag, blue sky, and so on. In such cases the image colors become severely distorted.

Therefore, the usual practice is: in addition to the target range for the desired corrected color, a pair of thresholds is also set on the source image's color range. If the unprocessed image's color mean falls outside those thresholds, no automatic white balance is applied. This guarantees correct handling of the special cases above.

It can be seen that the choice of these two pairs of thresholds plays a key role in how well automatic white balance performs.
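A sketch of the source-side guard just described, with hypothetical threshold values; the target-range loop is only allowed to act when this check passes.

    /* Run AWB only if the unprocessed frame's mean chroma is plausibly a
       cast from the illuminant rather than the scene's own color. */
    int awb_should_run(float mean_u, float mean_v)
    {
        const float limit = 40.0f;  /* hypothetical source threshold */
        return mean_u > 128.0f - limit && mean_u < 128.0f + limit &&
               mean_v > 128.0f - limit && mean_v < 128.0f + limit;
    }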

1.3.4 Code example

It can be seen that as the color temperature increases, the parameters change in a pattern that basically matches the theoretical analysis of the previous section, though most of them deviate somewhat from the theoretical values. In this parameter set, the fluorescent-lamp setting deviated the most from theory, and practice confirmed it: photos taken under household fluorescent light came out blue. After its parameters were modified, the actual results improved significantly. (From the available data, fluorescent lamps commonly come in two color temperatures, around 4000 K and 5000 K; of those I have encountered, 5000 K is the majority.)
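The original parameter table is not reproduced here, so the sketch below only illustrates the shape such per-illuminant presets take. Every number is a made-up placeholder to be replaced by light-box calibration; note only the direction of the trend, with higher color temperature calling for lower blue gain and higher red gain.

    /* Hypothetical white balance presets; all values are placeholders. */
    typedef struct {
        const char *name;
        int cct_k;             /* nominal color temperature, kelvin */
        float r_gain, b_gain;
    } wb_preset;

    static const wb_preset wb_presets[] = {
        { "incandescent", 2800, 0.70f, 1.60f },
        { "fluorescent",  5000, 0.95f, 1.15f },  /* 4000 K tubes exist too */
        { "daylight",     6500, 1.20f, 0.90f },
        { "cloudy",       7500, 1.35f, 0.80f },
    };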

1.3.5 Tuning and validation

The specific parameters should be tuned in a light-box environment: shoot a standard color card under standard sources of various known color temperatures, measure on a PC with a color tool how the RGB components of the captured colors deviate from the standard colors, and adjust the proportional gains of the components accordingly. To get accurate results, the exposure gain settings should first be calibrated reasonably accurately.

1.4 Color-related effects processing

1.4.1 Grayscale (gray scale)

The grayscale effect converts a color picture into a black-and-white image.

1.4.1.1 Theory

Theoretically, in YUV space, discarding the UV components and keeping only Y yields a black-and-white image; this is also the principle by which color TV signals remain compatible with black-and-white sets. Colors with the same theoretical Y value (the figure on the right showed ACDSee's grayscale effect) should therefore render as the same gray in grayscale mode.

In the algorithm, the operation should theoretically set the UV values to the value corresponding to neutral gray (128 in the usual offset representation). Depending on the software algorithm and hardware structure, the actual code differs.
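A minimal sketch for an offset-binary YUV 4:4:4 buffer; the interleaved layout is assumed for simplicity.

    #include <stddef.h>
    #include <stdint.h>

    /* Grayscale: keep Y, force U and V to the neutral value 128. */
    void grayscale_yuv444(uint8_t *yuv, size_t n_pixels)
    {
        for (size_t i = 0; i < n_pixels; i++) {
            yuv[3 * i + 1] = 128;  /* U */
            yuv[3 * i + 2] = 128;  /* V */
        }
    }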

1.4.2 Sepia/sepiagreen/sepiablue

The so-called retro effects (sepia, sepia-green, sepia-blue) on some platform phones are built on top of grayscale: the UV values are given a fixed offset, turning the gray image into a gradient image of a single tone. To obtain a blue tone, theoretically the blue-difference signal should be increased and the red-difference signal decreased, i.e., increase U and decrease V.

In the sepia-blue example below, the MSB of the register byte is a sign bit, so 88 means +88 and 158 means -30 (158 = 128 + 30 with the sign bit set):

    set_hue_u_gain(0);
    set_hue_v_gain(0);
    set_hue_u_offset(88);
    set_hue_v_offset(158);
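The per-pixel arithmetic these registers plausibly select is sketched below; this is an assumption about the hardware's behavior, not a datasheet formula. With gain 0, the original chroma is discarded and only the offset remains, giving a single-tone image.

    #include <stdint.h>

    /* Assumed semantics: C' = gain * (C - 128) + 128 + offset, clamped,
       applied to U and V alike. Sepia-blue: offset_u = +88, offset_v = -30. */
    uint8_t apply_hue(uint8_t c, float gain, int offset)
    {
        float v = gain * ((int)c - 128) + 128.0f + (float)offset;
        return (uint8_t)(v < 0.0f ? 0.0f : (v > 255.0f ? 255.0f : v));
    }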

1.4.3 Negative

The so-called negative effect inverts the colors of the image, so that it looks like a photographic film negative. It is easy to understand and implement: in RGB space, take the complement of each component, i.e., subtract each RGB value from 255 to get the new value. This functionality is typically implemented in the ISP.
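A minimal sketch; the whole effect is one complement per byte.

    #include <stddef.h>
    #include <stdint.h>

    /* Negative: complement every RGB byte in the buffer. */
    void negative_rgb(uint8_t *rgb, size_t n_bytes)
    {
        for (size_t i = 0; i < n_bytes; i++)
            rgb[i] = (uint8_t)(255 - rgb[i]);
    }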

Once the principle is understood, other color transformation effects are easy to construct. The parameters to consider for color correction and processing generally include:

the automatic white balance source-range upper and lower limits, the automatic white balance target range, RGB gains, UV gains, UV offsets, and color correction. Settings related to saturation and hue are also available on some platforms.

In the sensor or ISP hardware pipeline, the usual order is: apply the RGB gains, then color correction, and finally the YUV-space processing. When tuning the effects, to reduce interaction between parameters, it is best to adjust them in roughly this same order.

1.5 Brightness sensing and exposure

1.5.1 Photosensitive latitude

Suppose the human eye can see a certain range of brightness from darkest to brightest; film (or an electronic photosensitive device such as a CCD) can reproduce a range far narrower than that. This limited range is the photosensitive latitude.

The photosensitive latitude of the human eye is much higher than film's, and film's is in turn much higher than a digital camera CCD's. Knowing this, it is not hard to understand why, in backlit conditions, the eye can see both the shadowed building and the dazzling clouds in the sky, while in a photo either the clouds come out vividly and the building becomes a dark silhouette, or the building's color details are clear and the once-beautiful clouds wash out to a white patch.

Looking at the structure of the human eye: the pupil controls the amount of incoming light, and rod and cone photosensitive cells adapt to different light intensities. So even the eye, with its high latitude, still relies on a brightness-adjustment system to cope with changing light.

So for a camera sensor, correct exposure is all the more important!

1.5.2 Auto exposure and 18% gray

For a sensor, how do we judge whether the exposure is correct? The standard practice is to compute the mean of the current image's Y values in YUV space, then adjust the various exposure parameter settings (automatically or manually) until that mean falls near a target value, at which point the exposure is considered correct.

So how is this target mean Y determined, and how are the parameters adjusted so that the sensor brings the current image's brightness into that range?

This involves the concept of 18% gray. Indoor and outdoor scenes are generally considered to have an average reflectance of about 18%, and, as mentioned earlier, their average color can be treated as a medium gray. Thus, by shooting a gray card with 18% reflectance and adjusting the exposure parameters until its image approaches medium-luminance gray (Y value 128), the camera will then automatically expose typical scenes correctly.

Of course, this automatic exposure (AE) judgment is not omnipotent: for scenes whose reflectance deviates from the usual mean, such as snow or night scenes, this method cannot produce the correct exposure. Therefore, the sensor software module often also provides an exposure level setting that forcibly changes the AE criterion, e.g., the expected luminance mean.

1.5.3 Exposure level setting

The ability to set the exposure level appears on most digital cameras and camera phones. As described above, it gives the user a degree of exposure control on top of auto exposure by forcing a change in the camera sensor's exposure criterion, so as to achieve the desired effect.

The usual implementation changes the expected mean Y value, so that AE automatically re-adjusts the exposure time (exptime) and analog gain (AG) toward the new Y target.
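A minimal sketch combining the two ideas above: mean-Y auto exposure, with the exposure level expressed as a shift of the Y target. The proportional update and the 0.25-per-step scale are illustrative; a real AE loop adds damping and limits, and splits the result into exptime and AG as discussed in section 1.5.6.

    #include <stddef.h>
    #include <stdint.h>

    /* Mean luma of a Y plane. */
    static unsigned mean_y(const uint8_t *y, size_t n)
    {
        unsigned long long sum = 0;
        for (size_t i = 0; i < n; i++) sum += y[i];
        return (unsigned)(sum / n);
    }

    /* One AE update: scale total exposure toward the Y target, where the
       user's exposure level setting shifts the target itself. */
    float ae_step(const uint8_t *y, size_t n, int ev_steps, float exposure)
    {
        float target = 128.0f * (1.0f + 0.25f * (float)ev_steps);
        unsigned m = mean_y(y, n);
        return exposure * target / (float)(m ? m : 1);
    }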

1.5.4 Gamma correction

A correct exposure mean does not imply that the image's overall luminance distribution matches what the human eye sees.

In fact, the human eye's response to brightness is not a linear proportional relationship, and the input-output characteristic curves of the various photoelectric conversion devices involved are generally nonlinear too, taking the form of a power function: Y = X^g. The transfer function of the whole imaging chain is therefore also a power function, with exponent g = g1 * g2 * ... * gn.

A sensor's response, however, is close to linear, so for the image to be output correctly on various devices in line with the eye's response to brightness, it must be corrected.

The commonly quoted gamma value is the reciprocal of the correction exponent; the correction function can be written as Y = X^(1/gamma). Typically, gamma is 2.2 for Windows display systems, and 1.8 for Apple's display and printing systems.

In fact, when the sensor performs gamma correction, it usually also converts the 10-bit RAW-format data to 8-bit data, so the formula at that point takes a form such as Y = 255 * (X / 1023)^(1/gamma).

Because the exponential operation consumes a lot of CPU time, in practice the gamma curve is often approximated by, for example, a 12-segment piecewise-linear fit. Only 13 data points then need to be stored, and gamma correction is performed by linear transformation or table lookup. The gamma correction on most of our platforms uses the latter approach; adjusting the gamma correction in practice means adjusting the values of these 13 points.
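A sketch of the 13-point table approach. The knots are spaced uniformly over the 10-bit input here for simplicity (hardware often spaces them more densely in the shadows), and the knot values are generated from a 1/2.2 power curve purely for illustration; in tuning, the 13 values themselves would be adjusted.

    #include <math.h>
    #include <stdint.h>

    static uint8_t knots[13];  /* 12 segments over 0..1023 -> 0..255 */

    void gamma_lut_init(double gamma_val)  /* e.g. 2.2 */
    {
        for (int i = 0; i <= 12; i++)
            knots[i] = (uint8_t)(255.0 * pow(i / 12.0, 1.0 / gamma_val) + 0.5);
    }

    /* Gamma-correct one 10-bit RAW value to 8 bits by linear
       interpolation between the two neighbouring knots. */
    uint8_t gamma_apply(uint16_t raw10)
    {
        int seg = raw10 * 12 / 1024;               /* 0..11 */
        int x0 = seg * 1024 / 12, x1 = (seg + 1) * 1024 / 12;
        int y0 = knots[seg],      y1 = knots[seg + 1];
        return (uint8_t)(y0 + (y1 - y0) * (raw10 - x0) / (x1 - x0));
    }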

In principle, when photographing a test chart such as a gray-scale step wedge, the more gray levels that can be distinguished, the better.

1.5.5 Contrast

Contrast adjustment is, to some extent, an adjustment of the gamma curve: increasing contrast means increasing the gamma value. In image-processing software, a similar power-function transformation can also be used to adjust contrast after the hardware gamma correction.

1.5.6 Adjustment of exposure parameters

Exposure can be adjusted by changing the exposure time or by changing the brightness (analog) gain AG.

Exposure time is limited by the frame rate: if the camera must deliver 15 frames per second, the exposure time cannot exceed 1/15 s, and other constraints may shorten it further. In weak light, adjusting the exposure time alone therefore cannot satisfy the frame-rate requirement.

The gain AG can also be raised to increase the exposure and shorten the exposure time. The drawback is that this sacrifices image quality: as AG increases, the signal-to-noise ratio (SNR) inevitably drops and image noise grows.

Therefore, with image quality as the priority, exposure is usually adjusted first via the exposure time and only then via the exposure gain. Of course, the exposure time cannot be so long that hand shake blurs the image, and shooting moving scenes demands even shorter exposure times.
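A sketch of that priority order, with the desired total exposure expressed as an equivalent exposure time at unity gain; the names and limits are illustrative.

    /* Prefer integration time up to the frame-rate limit, then make up
       the remainder with analog gain (at the cost of SNR). */
    typedef struct { float exp_time_s; float analog_gain; } exposure;

    exposure split_exposure(float total_s, float fps)
    {
        exposure e;
        float max_time = 1.0f / fps;  /* e.g. 1/15 s at 15 fps */
        if (total_s <= max_time) {
            e.exp_time_s  = total_s;
            e.analog_gain = 1.0f;
        } else {
            e.exp_time_s  = max_time;
            e.analog_gain = total_s / max_time;  /* noise rises with gain */
        }
        return e;
    }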

1.6 Noise reduction

Increasing AG inevitably increases noise. In addition, if the light is dim and the exposure time is long, noise also increases (in digital cameras, mainly because a long exposure heats the photosensitive element and raises its current noise). Defects in the photosensitive element itself are another source of noise, and even of dead pixels. So it is common for the sensor, or the back-end ISP, to integrate noise reduction settings.

1.6.1 When to enable noise reduction

Given how the noise forms, noise reduction mainly needs to kick in once AG or exptime exceeds a certain value, so thresholds must be determined for these two parameters; setting them too low or too high are both bad.

As the methods below make clear, noise reduction itself degrades image quality. Enabling it too early, when it is not yet needed, not only burdens the processor or ISP but may backfire; enabling it too late means it fails to help when it is actually needed.

1.6.2 Judgment principle and treatment method

So how do we decide whether a pixel is a noise point? Start from how people recognize noise: for the human eye, a point is judged to be noise when its brightness or color differs too much from most of its neighbors. From the noise-generation mechanism, a color anomaly is almost always accompanied by a brightness anomaly, and brightness anomalies are cheaper to process than color anomalies, so the usual sensor/ISP criterion is: when a pixel's brightness differs from the brightness of its surrounding pixels by more than a threshold, the pixel is considered a noise point.

The usual treatment is to replace the original value with the average of the surrounding pixels. This adds no information; it is essentially a selective blur.
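A minimal sketch of that detect-and-average rule on the Y plane. The threshold is illustrative; borders are skipped, and the filter works in place, so already-filtered neighbours feed later pixels, whereas real hardware uses separate line buffers.

    #include <stdint.h>
    #include <stdlib.h>

    /* Replace a pixel with the mean of its 8 neighbours only when it
       deviates from that mean by more than `thresh`. */
    void denoise_y(uint8_t *y, int w, int h, int thresh)
    {
        for (int r = 1; r < h - 1; r++)
            for (int c = 1; c < w - 1; c++) {
                int sum = 0;
                for (int dr = -1; dr <= 1; dr++)
                    for (int dc = -1; dc <= 1; dc++)
                        if (dr != 0 || dc != 0)
                            sum += y[(r + dr) * w + (c + dc)];
                int mean = sum / 8;
                if (abs((int)y[r * w + c] - mean) > thresh)
                    y[r * w + c] = (uint8_t)mean;
            }
    }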

High-end digital cameras, with powerful image processing chips, can afford more complex detection and treatment. Presumably they may, for example, combine brightness and color as the noise criterion, compensate with more computation-heavy interpolation algorithms, or, for the sensor's inherent dead pixels, mask them out and discard their data (Nikon does this, and other manufacturers presumably do too).

1.6.3 Effectiveness

For phone sensors, I personally think this noise reduction has very limited effect. Compared with a digital camera, a phone sensor's lens is much smaller and gathers far less light, so to reach the same brightness its baseline AG must be much larger (equivalent to, say, ISO 800 on an ordinary consumer camera), and the impact of current noise is correspondingly much greater. Even in the best case, the data itself fluctuates strongly, which is why phone photos inevitably show dense speckles. If all of them were averaged away to reduce the noise, the image would blur, so phone noise thresholds are set relatively high to avoid over-blurring the whole image. Poor underlying data, combined with this lax noise criterion, yields a poor overall result.

1.7 Digital zoom

There are two forms of digital zoom:

The first is interpolation: the image is upscaled to the required size by an interpolation algorithm. The results are not ideal, especially on phones, where the captured data is already noisy; after interpolation the resulting image is barely usable. In fact, even on digital cameras this interpolating digital zoom has little practical value. If the interpolation has no hardware support, it must be implemented at the application level; on one of our platforms, digital zoom works this way.

The second is really a pseudo digital zoom: when the camera is not running at its maximum resolution, e.g., a 1.3-megapixel sensor outputting 640*480, the sensor is still operated at 1280*960 and the central part of the image is collected to produce the 640*480 photo, making objects on the phone appear doubled in size. Many phones use this form of digital zoom. It needs almost no extra algorithm support and has no effect on image quality; the drawback is that it only works at small output sizes. It can also implement the so-called digital zoom in DV (video) mode, where it is a genuine selling point, since for video this zoom still has practical value.

To use this zoom mode, the driver needs to support the sensor's windowing function so as to read out the desired portion of the image data.
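A sketch of the readout-side equivalent of that windowing, for a single-plane 8-bit buffer; a real driver would instead program the sensor's window registers so that only this region is ever transferred.

    #include <stdint.h>
    #include <string.h>

    /* Copy the centred dw x dh window out of a sw x sh frame,
       e.g. 640x480 out of 1280x960 for the pseudo 2x zoom. */
    void center_crop(const uint8_t *src, int sw, int sh,
                     uint8_t *dst, int dw, int dh)
    {
        int x0 = (sw - dw) / 2, y0 = (sh - dh) / 2;
        for (int r = 0; r < dh; r++)
            memcpy(dst + (size_t)r * dw,
                   src + (size_t)(y0 + r) * sw + x0, (size_t)dw);
    }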

1.8 Strobe suppression function

1.8.1 What is strobe

Everyday light sources such as incandescent, fluorescent, and quartz lamps run directly off 220 V / 50 Hz AC mains. The polarity reverses 50 times per second, so the light flickers 50 * 2 = 100 times per second; combined with the instability of the mains voltage, this produces the so-called strobe (flicker).

(Table: light intensity fluctuation of several common light sources; not reproduced here.)

Because the human eye has a certain hysteresis and adaptability to changes in light intensity, it usually cannot see the source's brightness change, though the flicker still increases eye fatigue; hence the so-called flicker-free lamps on the market.

1.8.2 Strobe suppression

A camera sensor has no such hysteresis or adaptation, so it is more sensitive to the source's brightness changes. If they are not suppressed, noticeable brightness flicker may appear in preview and DV mode.

How to solve it? Given the periodicity of the strobe, the integrated source brightness over one period is roughly constant. So if the exposure time is controlled to be a whole multiple of the strobe period, each frame receives about the same amount of light, effectively suppressing the strobe's effect on image brightness.

Therefore, in AE mode, the sensor adjusts the exposure time to whole multiples of the flicker period, according to the strobe frequency. Since mains frequency differs by region, there are separate 50 Hz and 60 Hz settings.

When the relevant sensor registers are set, the number of clock cycles corresponding to the strobe period is calculated from the mains frequency, the sensor clock frequency, the resolution, and so on.
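A sketch of the exposure-time rounding that underlies those register settings, in terms of frequencies only; converting the period into sensor clock cycles and line times is chip-specific.

    /* Clamp exposure time down to a whole number of flicker periods:
       1/100 s under 50 Hz mains, 1/120 s under 60 Hz, since the light
       pulses at twice the line frequency. */
    float antiflicker_exposure(float desired_s, int mains_hz)
    {
        float period = 1.0f / (2.0f * (float)mains_hz);
        int n = (int)(desired_s / period);
        return n > 0 ? (float)n * period : desired_s;  /* too short to round */
    }

In a real driver, the chosen multiple of the period is then translated into the sensor's clock-cycle and line-count registers as described above.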
