Camera architecture and working principle in Android

1. Camera architecture in Android
Android's camera architecture consists, from top to bottom, of the application layer, the framework layer, the hardware abstraction layer (HAL), and the Linux driver layer. The following describes the framework layer, the hardware abstraction layer, and the Linux driver layer.
The application layer communicates with the Java framework layer through the Binder mechanism.
During system initialization, a CameraService daemon is started to provide a functional interface to upper-layer applications.
The framework layer and the hardware abstraction layer exchange data through callback functions.
The overlay layer consists of SurfaceFlinger and the Overlay HAL and implements video output. Only the camera framework layer or the video framework layer can call the overlay layer.
The abstraction layer resides in user space and exchanges data with kernel space through system calls such as open(), read(), and ioctl().
2. Working principle of the camera
After external light passes through the lens, it is filtered by the color filter and falls on the sensor surface. The sensor converts the light gathered by the lens into an electrical signal, and then into a digital signal through its internal ADC. If the sensor does not integrate a DSP, the data is transmitted to the baseband over the DVP interface in raw format. If a DSP is integrated, the raw data is processed through AWB, color matrix, lens shading, gamma, sharpness, AE, and de-noising, and output in YUV or RGB format. Finally, the CPU sends the data to the framebuffer for display, so that we can see the video captured by the camera.
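Since the sensor path delivers YUV while the framebuffer is typically RGB, a YCbCr-to-RGB conversion has to happen somewhere along this pipeline. The fixed-point sketch below uses the standard BT.601 ("video range") coefficients; where exactly the conversion runs (ISP hardware, DSP, or CPU) depends on the platform.

```c
/* Minimal sketch of the BT.601 YCbCr -> RGB conversion needed between a
 * sensor's YUV output and an RGB framebuffer. Coefficients are the
 * standard BT.601 values scaled by 256 for integer math. */
#include <stdint.h>

static uint8_t clamp_u8(int v)
{
    if (v < 0)   return 0;
    if (v > 255) return 255;
    return (uint8_t)v;
}

void ycbcr_to_rgb(uint8_t y, uint8_t cb, uint8_t cr,
                  uint8_t *r, uint8_t *g, uint8_t *b)
{
    int c = (int)y  - 16;    /* luma:   video range is 16..235  */
    int d = (int)cb - 128;   /* chroma: centered around 128     */
    int e = (int)cr - 128;

    /* R = 1.164*C + 1.596*E
       G = 1.164*C - 0.392*D - 0.813*E
       B = 1.164*C + 2.017*D          (each scaled by 256) */
    *r = clamp_u8((298 * c + 409 * e + 128) >> 8);
    *g = clamp_u8((298 * c - 100 * d - 208 * e + 128) >> 8);
    *b = clamp_u8((298 * c + 516 * d + 128) >> 8);
}
```

For example, video-range black (Y=16, Cb=Cr=128) maps to RGB (0, 0, 0) and video-range white (Y=235, Cb=Cr=128) maps to (255, 255, 255).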
Camera chip circuit diagram:
The hardware circuit diagram of the 5 MP rear camera chip MT9D111 is as follows:
After obtaining the schematic, note that pins 19 and 21 are connected to CAM_I2C_SDA and CAM_I2C_SCL respectively, so the camera can be configured over I2C. In addition, when debugging the camera you can use an oscilloscope to measure the waveforms indicated in this schematic to verify that the code is correct.
It is also worth noting that, before developing the driver, it is best to use a multimeter to check that each pin of the camera is correctly connected to the chip; otherwise no image will appear even if the code is correct.
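To illustrate what "configuring the camera through I2C" looks like from user space, here is a hedged sketch using the Linux i2c-dev interface. The bus path, the 7-bit slave address (0x48), and the 8-bit-register/16-bit-value layout are all assumptions for illustration; the real MT9D111 address and register map must come from its datasheet, and a production driver would normally do this in kernel space via i2c_transfer() instead.

```c
/* Sketch: writing one camera sensor register over I2C from user space
 * via the Linux i2c-dev interface. Bus path, slave address and register
 * layout are illustrative assumptions, not MT9D111 datasheet values. */
#include <fcntl.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/i2c-dev.h>

/* Returns 0 on success, -1 on any failure. */
int sensor_write_reg(const char *bus, uint8_t addr7, uint8_t reg, uint16_t val)
{
    int fd = open(bus, O_RDWR);
    if (fd < 0)
        return -1;
    if (ioctl(fd, I2C_SLAVE, addr7) < 0) {   /* select the sensor on the bus */
        close(fd);
        return -1;
    }
    /* register address followed by the 16-bit value, MSB first */
    uint8_t buf[3] = { reg, (uint8_t)(val >> 8), (uint8_t)(val & 0xff) };
    int ok = (write(fd, buf, sizeof(buf)) == (ssize_t)sizeof(buf)) ? 0 : -1;
    close(fd);
    return ok;
}
```

This is also the kind of transaction you would see on the oscilloscope when probing the SDA/SCL lines mentioned above.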
MT9D111 is a CMOS image sensor chip. It senses external visual signals, converts them into digital signals, and outputs them.
We supply the camera with a clock through XVCLK1. RESET is the reset line, and PWDN must stay low while the camera is working. HREF is the horizontal (line) reference signal, PCLK is the pixel clock, and VSYNC is the vertical (frame) synchronization signal. Once the camera is clocked and reset, it starts working and transmits digital image signals synchronously via HREF, PCLK, and VSYNC. MT9D111 outputs images in YUV format. YUV is a compressed (chroma-subsampled) image data format with many specific sub-formats; our camera uses YCbCr (8-bit, interpolated color). The format set later in the driver must match this format.
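To make the 8-bit YCbCr stream concrete, the sketch below unpacks a YCbCr 4:2:2 interleaved buffer, which is a common layout for DVP camera output. The UYVY byte order is an assumption for illustration; the actual ordering on MT9D111 is configurable and must be checked against the datasheet. Two adjacent pixels share one Cb/Cr pair, which is why 4:2:2 needs only half the chroma bandwidth of full RGB.

```c
/* Sketch of a YCbCr 4:2:2 interleaved stream (UYVY byte order assumed:
 * Cb, Y0, Cr, Y1 per pixel pair). Each pair of pixels shares one
 * Cb/Cr sample, giving 2 bytes per pixel on the 8-bit DVP bus. */
#include <stddef.h>
#include <stdint.h>

struct ycbcr_pixel { uint8_t y, cb, cr; };

/* Unpack a UYVY buffer of `npix` pixels (npix must be even). */
void unpack_uyvy(const uint8_t *src, struct ycbcr_pixel *dst, size_t npix)
{
    for (size_t i = 0; i + 1 < npix; i += 2) {
        const uint8_t *p = src + 2 * i;          /* 4 bytes per 2 pixels */
        uint8_t cb = p[0], y0 = p[1], cr = p[2], y1 = p[3];
        dst[i]     = (struct ycbcr_pixel){ y0, cb, cr };
        dst[i + 1] = (struct ycbcr_pixel){ y1, cb, cr };  /* shared chroma */
    }
}
```

Matching this layout in the driver's format negotiation is exactly the "format must be the same" requirement mentioned above.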