Mobile phone camera introduction and parameter configuration


Mobile phone camera basics

Source: http://blog.csdn.net/qikaibinglan/article/details/5882821



As a newer mobile phone feature, the built-in digital camera works on the same principle as the low-end (100,000 to 1.3-million-pixel) digital cameras we commonly see. A traditional camera records information on film; the "film" of a digital camera is its photosensitive imaging device, the heart of digital photography. The sensor is the core of the camera and its most critical technology.
By structure, cameras can be divided into built-in and external types, but the basic principle is the same.
By the photosensitive device used, there are two types: CCD and CMOS.
A CCD (Charge-Coupled Device) is made of a highly sensitive semiconductor material that converts light into electric charge; an analog-to-digital converter chip turns the charge into a digital signal, and after the digital signal is compressed it is stored in the camera's flash memory or built-in hard disk card. The data can therefore be easily transferred to a computer and modified there as needed. A CCD consists of many photosensitive units: when light strikes the CCD surface, each unit generates a charge on the component, and the signals produced by all the units together form a complete picture. Like the film in a traditional camera, the CCD is the circuit device that senses light; you can think of it as a grid of tiny sensor particles placed behind the optical lens. When light and images pass through the lens and are projected onto the CCD surface, the CCD generates currents and converts what it senses into digital data for storage. The greater the number of CCD pixels and the larger each individual pixel, the clearer the captured image. So although the pixel count of a CCD is not the only factor determining image quality, it can be regarded as one important criterion for judging a camera's grade. Currently, most scanners, video recording devices, and digital cameras are equipped with CCDs.
After 35 years of development, the CCD's general form and mode of operation have taken shape. A CCD consists of a mosaic-like grid, a light-concentrating lens, and a matrix of electronic circuitry at the bottom. The companies currently capable of producing CCDs are Sony, Philips, Kodak, Panasonic (Matsushita), Fuji, and Sharp, most of them Japanese manufacturers.
Like the CCD, the CMOS (Complementary Metal-Oxide Semiconductor) is a semiconductor that records light changes in a digital camera. CMOS manufacturing technology is no different from that of ordinary computer chips: it is mainly a semiconductor made of silicon and germanium, on which N-type (negatively charged) and P-type (positively charged) semiconductors coexist. The currents produced by these two complementary effects can be processed and recorded by the chip and then read out as an image. The disadvantage of CMOS, however, is that it is prone to noise, mainly because early designs overheated from the frequent current changes required to process rapidly changing images.
The advantages and disadvantages of CCD and CMOS can be compared from a technical perspective:
Information is read differently. The charge information stored by a CCD sensor must be transferred out bit by bit under synchronous signal control before it can be read; the transfer and read-out require a clock control circuit working together with three different power supplies, which makes the whole circuit quite complex. A CMOS sensor generates a current (or voltage) signal directly after photoelectric conversion, so signal read-out is very simple.
The speed differs. A CCD sensor must output each unit slowly under the control of a synchronous clock, whereas a CMOS sensor can take out the electrical signal while collecting the optical signal, and can even process the image information of every unit simultaneously, which is much faster than a CCD.
Power supply and power consumption. Most CCD sensors require three power supplies and consume a great deal of power. A CMOS sensor needs only one power supply, and its power consumption is very small, only 1/8 to 1/10 that of a CCD; CMOS photoelectric sensors thus have a great advantage in energy saving.
Imaging quality. CCD sensor production technology started earlier and is relatively mature; using PN junctions and silicon dioxide isolation layers to isolate noise, its imaging quality has a definite advantage over CMOS sensors. Because CMOS sensors are highly integrated, the photoelectric sensing elements and circuits sit very close together; the optical, electrical, and magnetic interference between them is severe, and the noise greatly affects image quality.

At the same resolution, CMOS costs less than CCD, but the image quality it produces is also lower. So far, most consumer-level and high-end digital cameras on the market use a CCD as the sensor, while CMOS sensors appear in some low-end camera products; the presence of a CCD sensor was once one of the criteria by which people judged a digital camera's grade. Because the manufacturing cost and power consumption of CMOS are much lower than those of CCD, however, many mobile phone manufacturers use CMOS sensors. Currently most mobile phones on the market use CMOS cameras, and a few use CCD cameras.


Principle of continuous shooting
The continuous shooting function saves data-transfer time in order to seize photo opportunities. In continuous shooting mode, data is loaded into the digital camera's high-speed memory (cache) rather than transmitted to the memory card, so multiple photos can be taken in a short period of time. Because a digital camera must go through photoelectric conversion, A/D conversion, and recording to the medium, and both conversion and recording take time (especially recording), no digital camera's continuous shooting speed is very fast.
The unit, the frame, means the same as in film: each frame is one picture. The more frames captured per second, the faster the continuous shooting. Currently the fastest digital cameras shoot about 7 frames/second, and after roughly three seconds of shooting it takes several more seconds before shooting can resume. Continuous shooting speed is an indicator that photojournalists and sports photographers must pay attention to, but it need not be considered in general photography. In general, the resolution and quality of photos taken in burst mode are reduced; some digital cameras can shoot at a lower resolution to speed up continuous shooting, while shooting at a higher resolution makes the burst correspondingly slower.
In continuous shooting mode, you only need to hold down the button to shoot continuously and vividly record a continuous action.
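As a minimal sketch of the buffering idea described above (all names, sizes, and the write-out callback are hypothetical, not from the original text), continuous shooting can be modeled in C as a small frame cache that fills from the sensor and is drained to the slow memory card only after the burst ends:

    #include <stdint.h>
    #include <string.h>

    #define BURST_FRAMES 8                /* hypothetical cache capacity */
    #define FRAME_BYTES (640 * 480 * 2)   /* e.g. one YUV422 VGA frame   */

    static uint8_t cache[BURST_FRAMES][FRAME_BYTES]; /* fast internal RAM */
    static int frames_cached = 0;

    /* Capture one frame into the cache; returns 0 once the cache is full,
     * which is why a burst can only last a few seconds. */
    int burst_capture(const uint8_t *sensor_frame)
    {
        if (frames_cached >= BURST_FRAMES)
            return 0;                     /* burst over: cache exhausted */
        memcpy(cache[frames_cached++], sensor_frame, FRAME_BYTES);
        return 1;
    }

    /* After the burst, drain the cache to the (slow) memory card. */
    void burst_flush(void (*write_to_card)(const uint8_t *, size_t))
    {
        for (int i = 0; i < frames_cached; i++)
            write_to_card(cache[i], FRAME_BYTES);
        frames_cached = 0;
    }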
Optical zoom and digital zoom principles
Optical zoom arises from changes in the positions of the lens, the object, and the focal point. When the imaging surface moves, the angle of view and the focal length change: distant scenes become clearer, making it feel as if objects are moving closer.

Clearly, there are two ways to change the angle of view. One is to change the focal length of the lens; in photography this is optical zoom, achieved by changing the relative positions of the lens elements in a zoom lens. The other is to change the size of the imaging surface, that is, its diagonal length; in today's digital photography this is called digital zoom. Digital zoom does not actually change the focal length of the lens; it changes the angle of view by changing the diagonal of the imaging surface, producing an "equivalent" focal length change.
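A standard crop-factor relation (my own illustration, not spelled out in the original) makes this "equivalent" focal length concrete: if digital zoom keeps only a central crop of the imaging surface, with diagonal d_crop instead of the full diagonal d_full, the equivalent focal length of a lens with true focal length f is

    f_equiv = f x (d_full / d_crop)

so halving the diagonal of the used area doubles the apparent focal length even though the optics never move.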
So we can see that some digital cameras with longer lens barrels have more internal room for the lens and sensor, and therefore a larger zoom factor. The ultra-thin digital cameras on the market generally lack optical zoom, because their slim bodies fundamentally do not allow the photosensitive device to move, whereas digital cameras like the Sony F828 and Fuji S7000 offer 5x to 6x optical zoom.
Digital zoom, also called digital scaling, uses the processor inside the digital camera to increase the area each pixel occupies in the image. The approach is like enlarging an image with image-processing software, except that the procedure runs inside the camera: a portion of the pixels on the original image sensor is enlarged using "interpolation", with interpolation algorithms stretching that part of the sensor's pixels to fill the entire picture.
Unlike optical zoom, digital zoom gives a zoom-like effect by changing the effective area of the photosensitive device that is used: the smaller the area used, the more the user appears to see only an enlarged part of the scene. But because the focal length does not change, the image quality is worse than normal.
With digital zoom the captured scene is enlarged, but its definition drops accordingly, so digital zoom has little practical value; heavy digital zoom can seriously damage the image, sometimes leaving it unclear simply because the magnification is too high.
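The "interpolation" described above can be sketched as a center crop followed by upscaling. A minimal C illustration (deliberately using nearest-neighbor as a stand-in for the real interpolation algorithms; the function name and 8-bit grayscale format are my assumptions):

    #include <stdint.h>

    /* 2x digital zoom on an 8-bit grayscale image: crop the central
     * half of the frame, then stretch it back to full size by
     * nearest-neighbor interpolation. Real ISPs use bilinear or
     * bicubic filters, but the principle is identical: no new optical
     * detail is created, existing pixels are just enlarged. */
    void digital_zoom_2x(const uint8_t *src, uint8_t *dst, int w, int h)
    {
        int x0 = w / 4, y0 = h / 4;      /* top-left of the central crop */
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int sx = x0 + x / 2;     /* map output pixel back into */
                int sy = y0 + y / 2;     /* the cropped source region  */
                dst[y * w + x] = src[sy * w + sx];
            }
        }
    }

Because every output pixel is copied from one of only a quarter as many source pixels, the result is exactly the definition loss the text describes.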
4 tips for mobile phone shooting
Mobile phone pixel counts are generally not high, but with a little thought you can still take good pictures with your phone.
Rule 1: use the "rule of thirds" composition. In the actual composition, stagger the main subject slightly away from the center, and pay attention to the interplay between the subject and the other objects. In this way the subject of the photo is clear and prominent, and the composition does not look rigid.
Rule 2: use side light to highlight texture. In general, light coming from the side best brings out the texture of an object, so try to use side lighting. When shooting against the light or partly against it, consider using some object to block the stray light; if nothing is available, shield the lens with your hand to reduce the backlight effect. Also note: never point a mobile phone lens directly at strong light.
Rule 3: do not move the phone immediately after pressing the button. Mobile phone shutter delay is generally noticeable, and with a phone that uses an external camera the delay is even more obvious. If your hand shakes when you press the shutter, the photo may come out blurred. So remember not to press the shutter key and immediately move the camera.

Rule 4: do not use digital zoom. A photo taken with digital zoom has reduced definition, and the result is worse than shooting without it. For example, a digitally zoomed photo with a nominal resolution of 640x480 may have an actual resolution of only 320x240, and it will look blurred when viewed on a computer.
Mobile phone camera parameter configuration problems

1. What looks like a "garbled screen" is in many cases actually ghosting: the outline of the picture resembles the actual scene, but the colors are displayed abnormally, with large blocks of color spots and so on; non-specialists often call this a garbled screen. It is mainly caused by bad signals on the data lines, for example D[5] shorted to GND or disconnected. The higher the affected signal line, the more serious the ghosting; the low-order lines (such as D[1] and D[0]) have little effect on the picture. That is why, in the 10-bit output format, the lower two bits are simply dropped to stay compatible with 8-bit I/O ports, keeping only the upper eight bits. How should we understand the importance of a high-order signal line? An 8-bit signal can represent 256 levels, such as the brightness value Y or the intensity of the color values U/V. If D[7:0] = 10000000 represents a brightness of 128, it is displayed as mid gray; but if D[7] is disconnected or shorted, the CPU reads 00000000 and displays black instead, a huge difference. The same kind of error happens on the color signals. When this occurs, first check the signal path (usually a poorly seated connector, next most often poor soldering or bonding of the sensor), then check whether the driver is wrong.
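A minimal demonstration of why a fault on a high data line is so much worse than on a low one (the values are illustrative only):

    #include <stdio.h>

    int main(void)
    {
        unsigned y = 0x80;                   /* true luma: 128, mid gray   */

        unsigned d7_fault = y & ~(1u << 7);  /* D[7] shorted to GND -> 0   */
        unsigned d0_fault = y & ~(1u << 0);  /* D[0] shorted to GND -> 128 */

        printf("true=%u  D7 fault=%u  D0 fault=%u\n", y, d7_fault, d0_fault);
        /* Prints: true=128  D7 fault=0  D0 fault=128.
         * Losing the MSB turns mid gray into black, while losing the LSB
         * changes nothing visible -- which is also why a 10-bit sensor can
         * drop its two lowest bits into an 8-bit port without harm. */
        return 0;
    }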
2. In the RGB color system the colors are red, green, and blue; in the YUV system, confusion can arise between the brightness signal and the color signals, or between the two color signals themselves. For example, a camera in YUV422 format normally outputs valid pixels as (Y0, U0), (Y1, V0), (Y2, U1), (Y3, V1), and so on. If the camera's output order is wrong (for instance it outputs (U0, Y0), (V0, Y1), ...) while the CPU still naively assumes the standard order, then the brightness and color signals of every pixel are swapped: the brightness signal Y, which matters most for building a sharp picture, is treated as U (that is, the blue chrominance Cb), bright areas turn green and dark areas turn red, and the overall brightness of the picture also drops sharply. Other cases are similar, and so on; a picture with swapped colors typically shows up in reds and greens. When this happens, check whether the register values sent to the sensor are set correctly, or whether the driver settings on the CPU side match the data format actually being sent.
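If the sensor really is emitting UYVY while the CPU expects YUYV, the fix is normally to reprogram the sensor's output-order register; as a last resort the bytes can be swapped in software. A minimal sketch (the buffer layout is the standard YUV422 packing; the function name is my own):

    #include <stdint.h>
    #include <stddef.h>

    /* Convert a UYVY buffer (U0 Y0 V0 Y1 ...) to YUYV (Y0 U0 Y1 V0 ...)
     * in place by swapping each adjacent byte pair. len should be a
     * multiple of 4 (two pixels per 4-byte macropixel in YUV422). */
    void uyvy_to_yuyv(uint8_t *buf, size_t len)
    {
        for (size_t i = 0; i + 1 < len; i += 2) {
            uint8_t tmp = buf[i];
            buf[i]      = buf[i + 1];
            buf[i + 1]  = tmp;
        }
    }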
3. Image stripes, generally colored horizontal stripes, are either fixed on certain rows or keep flashing across different rows. Looking at the data of a single row, the cause is the same as in point 2 above: data misalignment. Take RGB raw output, whose rows normally alternate as RGRG... / GBGB.... If the first R sample of a row is lost, the data the CPU actually collects is GRGR...0 / GBGB... (assuming the CPU pads the missing sample at the end of the row with 0). Color interpolation, however, still proceeds on the assumption of the standard arrangement (color interpolation itself is a separate topic to be discussed later). So if the background is green (R component small, G component large), the CPU mistakes the large G value for the first pixel's R value: the R component of that row is greatly inflated, and when the image is displayed or saved the row shows up red. Even if the next row is not misaligned, it is affected by the interpolation and also takes on a red cast. Unlike point 2, this problem is not a whole-frame misalignment but appears only on some rows. It is generally caused by component variation: the sensor manufacturer cannot guarantee that every sensor performs identically or that every row's data timing is precise. It is also related to external interference on the signals: if the sync signal HREF is disturbed and its rising edge stretches, the first PCLK may be lost; if the PCLK signal is disturbed, or its drive capability is insufficient, some pixels may likewise be lost, so a row of data is arranged incorrectly and stripes appear on the screen. Therefore, when designing the hardware or debugging the driver, a good signal synchronization strategy and a reasonable signal tolerance margin are the foundation of long-term system stability.
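The row-misalignment effect is easy to reproduce: drop the first sample of an RGRG row and a naive demosaic reads every large G sample as R, turning the row red. A small demonstration (all pixel values illustrative):

    #include <stdio.h>

    int main(void)
    {
        /* One RGRG... row from a green background: R small, G large. */
        unsigned good_row[8] = { 10, 200, 10, 200, 10, 200, 10, 200 };
        unsigned bad_row[8];

        /* Simulate losing the first sample: everything shifts left and
         * the CPU pads the missing last sample with 0, as assumed above. */
        for (int i = 0; i < 7; i++)
            bad_row[i] = good_row[i + 1];
        bad_row[7] = 0;

        /* A naive demosaic still assumes even positions are R: it now
         * reads the large G values (200) as red, so the row turns red. */
        printf("assumed R samples: %u %u %u %u\n",
               bad_row[0], bad_row[2], bad_row[4], bad_row[6]);
        return 0;
    }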
4. Image noise. Too much noise is also often called a garbled screen, and intuitively this kind of "garbling" really does look flowery: the image resembles a pockmarked face, speckled green here and red there. I leave this problem for last because it is becoming less and less of an issue. With advances in CMOS technology and ISP integration, the noise-reduction capability inside sensors keeps improving; apart from noise under very low illumination (a few lux), which is still hard to eliminate, most other noise can basically be removed by ISP functions such as color correction, automatic gain control, automatic gamma, and black/white point correction. If you are working with the RGB raw data format, you will have to spend some time debugging the driver and make full use of the ISP functions integrated in the CPU to get rid of those red and blue specks. Image noise is mainly determined by the sensor's design and manufacturing process, about which the driver engineer can do little but sigh. If the boss is generous and the product is a high-end model, you can buy a more expensive sensor; this market is now fiercely competitive, so prices are not unreasonable, and you basically get what you pay for.

Quoting a netizen's point of view:

I work on cell phone and PC camera applications. Let me share some of my views; please point out anything I've missed!
This mainly concerns applications of CMOS image sensors. Let's start with the module as a whole!
Composition:
The entire system consists of three parts: image acquisition module, image processing module, and image transmission module.
1. Image acquisition module:
Image acquisition converts light into an electrical signal. Light first enters the sensor through the lens; the photodiodes in the sensor convert it into a voltage/current, which is amplified by an amplifier and then converted into a digital signal by the ADC.
2. Image processing module:
This stage processes the digital signal coming from the sensor and is called the ISP (Image Signal Processing) pipeline.
It includes: lens shading correction, gamma correction, color interpolation (demosaicing), contrast, saturation, AE (automatic exposure), AWB (automatic white balance), color correction, and bad pixel correction.
Image-content-based processing such as moving-object detection and tracking or object recognition and extraction places high demands on image quality. Two important factors affecting imaging quality are exposure and white balance. The human eye is very sensitive to changes in ambient brightness: in strong light the pupil contracts so the scene is not glaring; in weak light the pupil dilates to make the scene as clear as possible. In imaging this is known as exposure. When the outside light is weak, the operating current of the CMOS imaging chip is small and the image is dark, so the exposure time should be increased appropriately as backlight compensation; when the light is sufficient or strong, the exposure time should be reduced appropriately to prevent over-exposure and a washed-out image. Adjusting exposure time alone is not enough to improve imaging quality, because the color of an object changes with the color of the illuminating light, so the color of the image varies with the light; this is the white balance problem. Traditional optical cameras use filters to remove color casts; for a CMOS imaging chip, the electronic gains of the R, G, and B channels can be adjusted to solve the white balance problem.
The system's automatic exposure control and white balance processing are implemented as follows:
An original RGB image is collected and the mean brightness M(Y) of the whole image is computed first. Then histogram equalization is applied to the image, and its average brightness is computed as a threshold Y_T. Compare M(Y) with Y_T: if M(Y) < Y_T, increase the value of the sensor's INT (integration time) register to lengthen the exposure; otherwise, shorten the exposure. White balance is adjusted similarly: based on the mean Cr and Cb of the original image and of the equalized image, the RCG (red color gain) and BCG (blue color gain) registers adjust the gains of the red and blue channels. The conversion between YCrCb and RGB is:
    Y  = 0.31R + 0.59G + 0.11B
    Cr = 0.713 x (R - Y)
    Cb = 0.564 x (B - Y)
where Y is the brightness component and Cr and Cb are the color difference components.
The sensor's exposure time ranges over 0 to (2^24 - 1) pixel clock cycles, i.e. 0 to about 1.34 s at 12.5 MHz; the gain range is generally 30 to 63. Test results show that about 10 iterations achieve a good result.
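Putting the netizen's AE/AWB loop into code, a minimal sketch in C (the register map, step sizes, stubbed frame statistics, and all names are my assumptions, standing in for real sensor I2C accesses and ISP statistics):

    #include <stdio.h>
    #include <stdint.h>

    /* Hypothetical register file standing in for the sensor's register map. */
    enum { REG_INT, REG_RCG, REG_BCG, NUM_REGS };
    static uint32_t regs[NUM_REGS] = { 1000, 32, 32 };

    /* Stub: in a real driver these statistics come from the captured frame. */
    static void grab_frame_stats(uint32_t *m_y, uint32_t *m_cr, uint32_t *m_cb)
    {
        /* Fake a dark, reddish frame so the loop has something to correct. */
        *m_y = 80; *m_cr = 140; *m_cb = 120;
    }

    /* One AE/AWB iteration: compare mean luma M(Y) against the threshold
     * Y_T from histogram equalization, then nudge exposure and R/B gains. */
    static void ae_awb_step(uint32_t y_t, uint32_t cr_t, uint32_t cb_t)
    {
        uint32_t m_y, m_cr, m_cb;
        grab_frame_stats(&m_y, &m_cr, &m_cb);

        if (m_y < y_t)        regs[REG_INT] += 100; /* too dark: expose longer */
        else if (m_y > y_t)   regs[REG_INT] -= 100; /* too bright: expose less */

        if (m_cr > cr_t)      regs[REG_RCG]--;      /* red cast: lower red gain  */
        else if (m_cr < cr_t) regs[REG_RCG]++;

        if (m_cb > cb_t)      regs[REG_BCG]--;      /* blue cast: lower blue gain */
        else if (m_cb < cb_t) regs[REG_BCG]++;
    }

    int main(void)
    {
        for (int i = 0; i < 10; i++)   /* ~10 iterations, per the text above */
            ae_awb_step(128, 128, 128);
        printf("INT=%u RCG=%u BCG=%u\n", (unsigned)regs[REG_INT],
               (unsigned)regs[REG_RCG], (unsigned)regs[REG_BCG]);
        return 0;
    }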

