Author: Liu Xuhui (Raymond). When reprinting, please indicate the source.
Camera sensors used on embedded systems such as mobile phones are usually controlled over a bus like I2C, with the camera controller on the CPU side providing the required driving timing. Data formats such as YUV and RGB are generally supported. Some sensors output data that must be further processed by the CPU, some integrate an image-processing chip to perform basic image processing, and some high-pixel sensors can even perform JPEG encoding on-chip. Because of this hardware diversity, the problems I encountered may differ from yours; the analysis below is for reference only.
No response from the I2C bus at the sensor end
All input voltages and clock signals are normal, but after the command to read a register is written to the I2C bus, the sensor does not respond and no data appears on the bus.
Because measurement shows that all the supplied signals are normal, a hardware problem with the sensor is often suspected. In 99% of cases, however, the actual cause is that the I2C device ID (slave address) is set incorrectly, so the device simply does not respond to the command. In my observation, almost every new engineer runs into this problem the first time they bring up a sensor.
The reason this is so easy to get wrong is that the I2C ID printed in a camera sensor's spec usually includes the trailing read/write direction bit. In the I2C bus definition, strictly speaking, this bit is not part of the device address, so the ID passed to the Linux I2C driver API generally does not include it; the read/write direction bit is set separately for each specific read or write operation.
For example, if the spec gives 0x64 and 0x65 as the I2C IDs for write and read operations respectively, then 0x32 (0x64 shifted right by one bit) should be used as the device's I2C address in the actual API call.
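The conversion above can be sketched in a few lines. This is a minimal illustration of the address arithmetic, not driver code; the function name is mine:

```python
def spec_to_7bit(write_addr):
    """Convert an 8-bit 'write address' as printed in a sensor spec
    (read/write direction bit included as the LSB) into the 7-bit
    slave address that Linux I2C APIs expect: drop the LSB by
    shifting right one bit."""
    return write_addr >> 1

# Spec lists 0x64 (write) / 0x65 (read); the driver must use 0x32.
addr = spec_to_7bit(0x64)  # 0x32
```

Passing 0x64 directly to the API addresses a different (usually nonexistent) device, which is exactly why the sensor stays silent on the bus.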
The image has constantly changing horizontal stripes
Unlike the large-area horizontal ripple caused by fluorescent-light flicker, these are horizontal stripes only a few pixels high; their position is not fixed, they are numerous, and their intensity varies with the lighting.
Because the color of these horizontal stripes changes when certain sensor registers are adjusted, interference introduced by the board during data transmission can basically be ruled out, as can poor contact; the stripes must already be present in the data inside the sensor.
In addition, the same initialization sequence with the same sensor does not exhibit the problem on the manufacturer's demo board, so a software issue is also basically ruled out.
Finally, it was found that, to save hardware cost, the sensor's analog and digital supplies, which happen to use the same voltage, were provided by the same chip, causing mutual interference between the two rails and preventing the sensor from working normally.
Solution: separate the analog and digital power supplies.
The image has fixed jagged vertical stripes.
There are obvious vertical stripes distributed across the full screen, very fine, like window blinds.
On careful examination, the vertical stripes are actually a jagged pattern caused by every pair of adjacent pixels in the image being swapped.
Carefully analyzing the spec shows that the sensor sends image data in bytes, with two bytes representing one pixel in RGB565 mode. The camera controller of the CPU I used processes data in units of 4 bytes, i.e. one word. Because the CPU handles data in little-endian (LSB-first) order, the two pixels within each word end up reversed and are not corrected, so when the DMA writes the data into a contiguous buffer in memory, the pixel sequence becomes: pixel 2, pixel 1, pixel 4, pixel 3, ...
Use software to adjust the pixel order. To reduce the extra CPU load, this operation can be merged into other processing that already touches every pixel, such as color conversion or packed-to-planar conversion.
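The fix-up can be sketched as swapping the two 16-bit pixels inside every 32-bit word. This is an illustrative standalone version; as noted above, in practice you would fold the swap into a pass that already walks the buffer:

```python
def fix_pixel_order(buf):
    """Swap the two 16-bit RGB565 pixels inside every 32-bit word.
    `buf` is a bytearray as written by the DMA in the order
    pixel 2, pixel 1, pixel 4, pixel 3, ...; the result is the
    correct pixel order. Assumes len(buf) is a multiple of 4."""
    out = bytearray(len(buf))
    for i in range(0, len(buf), 4):
        out[i:i + 2] = buf[i + 2:i + 4]  # second slot holds the 1st pixel
        out[i + 2:i + 4] = buf[i:i + 2]  # first slot holds the 2nd pixel
    return out
```

The same effect can often be obtained for free if the camera controller or DMA engine offers a byte/halfword-swap option; checking for such a hardware bit is worth doing before burning CPU cycles.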
The image is prone to dislocation when the image size is large.
When the sensor works at its maximum resolution, the image is prone to dislocation.
Tracing the program shows that the FIFO of the CPU's camera controller overflows at this point; that is, the DMA cannot move data from the FIFO to memory in time. In this case, when the sensor runs at its maximum resolution, the output data clock is 24 MHz. Theoretically the DMA should be able to keep up, but the memory bandwidth may be occupied by other devices, such as the CPU itself, so writes to memory fall behind. The DMA therefore cannot work at full load, data in the FIFO is not read out in time, some of it is lost, and the image becomes dislocated.
In some cases, changing the DMA transfer start threshold solves this problem, but in other cases it does not.
Considering that the maximum resolution is only used when taking a still picture and not during preview, you can, at the instant of capture, switch the resolution and at the same time lower the sensor clock to a frequency that does not cause FIFO overflow, without affecting the preview frame rate.
In addition, try to avoid other memory-intensive operations while capturing at the highest resolution, and switch back to the preview resolution immediately after capture. These measures reduce the probability of FIFO overflow.
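The overflow reasoning above can be made concrete with a back-of-envelope check. The figures below (an 8-bit parallel bus, a 128-byte FIFO, a particular leftover DMA bandwidth) are assumptions for illustration, not measurements from the original board:

```python
def fifo_overflow_time(pclk_hz, bus_bytes, fifo_bytes, dma_bw_bytes_per_s):
    """Estimate how long a camera-controller FIFO survives when the
    sensor's data rate exceeds the DMA bandwidth actually available.
    Returns seconds until overflow, or None if the DMA keeps up."""
    fill_rate = pclk_hz * bus_bytes            # bytes/s entering the FIFO
    deficit = fill_rate - dma_bw_bytes_per_s   # net bytes/s accumulating
    if deficit <= 0:
        return None                            # DMA drains fast enough
    return fifo_bytes / deficit

# 8-bit bus at 24 MHz pushes 24 MB/s in; if contention leaves only
# 20 MB/s of DMA bandwidth, a 128-byte FIFO overflows in ~32 us.
t = fifo_overflow_time(24_000_000, 1, 128, 20_000_000)
```

Even a small sustained deficit overflows a shallow FIFO in microseconds, which is why lowering the sensor clock during capture (rather than tuning thresholds) is the reliable fix.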
The data read back displays as a garbled screen.
The data read back displays as garbage on the screen, but the displayed content clearly changes as the scene changes.
Specifically, common situations include:
The displayed data is complete noise, or the outline of the object can be made out but the colors are completely wrong, for example an overall green cast. This is often because the image data format does not match: for example, YUV-to-RGB conversion was not performed, or the sampling order of the YUV components does not match the order assumed by the software.
If the image keeps changing irregularly, the cause is usually that data is latched on the wrong clock edge, so the data is received incorrectly.
There was also a case where, on a garbled screen, careful observation revealed signs of dislocation and repetition in the pattern. The analysis was that the sensor's physical pixel layout had a length/width ratio opposite to that of the LCD; checking the spec carefully confirmed this.
Handle each case according to the specific situation.
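For the format-mismatch case above, a minimal per-pixel YUV-to-RGB sketch is shown below. The coefficients are the common full-range BT.601 approximation; which matrix and range actually apply depends on the sensor's output format, so treat these values as an assumption. Real pipelines use fixed-point arithmetic or hardware conversion:

```python
def yuv_to_rgb(y, u, v):
    """Convert one YCbCr pixel (full-range BT.601 approximation,
    all components 0-255) to an (r, g, b) tuple, clamped to 0-255."""
    r = y + 1.402 * (v - 128)
    g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
    b = y + 1.772 * (u - 128)
    clamp = lambda x: max(0, min(255, int(round(x))))
    return clamp(r), clamp(g), clamp(b)
```

A quick sanity check with a neutral pixel (Y = 128, U = V = 128, which should come out mid-gray) is a cheap way to verify that the component order assumed by software matches what the sensor sends.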
This article is from a CSDN blog. When reproducing it, please indicate the source: http://blog.csdn.net/colorant/archive/2008/08/19/2793915.aspx