Design of an Image Acquisition System Based on the OV6630 Image Sensor and DSP
Source: China Power Grid | Authors: Xu Jingshu, Fu Huaming
0 Introduction
DSP is an important technology developed on the basis of programmable very-large-scale integrated circuits and computer technology. The fast data acquisition and processing capability of DSP chips, together with the various functional modules integrated on chip, makes DSPs convenient to apply in a wide range of situations. Compared with a CCD, a CMOS image sensor can integrate the timing circuits, front-end amplification, and digitization of the image signal on a single chip, so its development has received close attention from industry. With advances in process and circuit technology, CMOS image sensors have improved markedly in both noise performance and resolution. Because of their low price, practical image quality, high integration, and relatively low power consumption, CMOS image sensors are widely used in video acquisition. This paper therefore presents a real-time image acquisition system based on a DSP and a CMOS image sensor, with timing controlled by a complex programmable logic device (CPLD).
1 Hardware Design
Figure 1 shows the circuit structure of the image acquisition system. As shown in Figure 1, the system consists of the OV6630 image sensor chip, a CPLD control module, SRAM data memory, FLASH program memory, a DSP signal processor, and other components. The image acquisition chip is the OV6630, a color CMOS image sensor developed by OmniVision. Compared with traditional CCD sensors, its most obvious advantages are high integration, low power consumption, low production cost, and easy integration with other chips. The chip integrates the CMOS optical sensor core with its peripheral support circuits, and its proprietary sensor technology eliminates common photoelectric interference. The pixel array is 352 x 288, i.e. 101,376 pixels. Image data can be output in several formats (YCrCb 4:2:2, GRB 4:2:2, and raw RGB). In this system the 8-bit Y channel is used to output raw RGB data with progressive (line-by-line) scanning. The output format is:
Odd scan lines:  B G B G ......
Even scan lines: G R G R ......
Because the human eye perceives color over large areas and has a low color response bandwidth, each pixel does not need to output all three color components at the same time. Therefore, during sampling, pixels 1, 2, 3, 4, ... of an odd scan line output B, G, B, G, ... respectively, while pixels 1, 2, 3, 4, ... of an even scan line output G, R, G, R, .... In actual processing, the R, G, and B values of each pixel are assembled from the single color component output by the pixel itself and the other color components output by adjacent pixels. This sampling method reduces the sampling rate by more than 60% while leaving the image quality essentially unchanged.
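As an illustration of how full RGB values can be reconstructed from this pattern, the following minimal C sketch performs a simple nearest-neighbor interpolation over each 2 x 2 Bayer cell. It only illustrates the sampling scheme described above and is not the authors' code; the buffer layout and function name are assumptions.

    /* Minimal sketch: rebuild RGB from the OV6630 Bayer pattern described
     * above (odd lines B G B G ..., even lines G R G R ...).  Each 2x2 cell
     * contributes one B, two G, and one R sample; every pixel in the cell
     * receives the same interpolated color.  Illustrative only. */
    #include <stdint.h>

    #define W 352
    #define H 288

    /* raw: W*H bytes of Bayer data as read from the SRAM
       rgb: W*H*3 bytes, R, G, B per pixel */
    void demosaic_nearest(const uint8_t *raw, uint8_t *rgb)
    {
        for (int y = 0; y < H; y += 2) {
            for (int x = 0; x < W; x += 2) {
                uint8_t b  = raw[ y      * W + x    ];  /* odd  line, B */
                uint8_t g1 = raw[ y      * W + x + 1];  /* odd  line, G */
                uint8_t g2 = raw[(y + 1) * W + x    ];  /* even line, G */
                uint8_t r  = raw[(y + 1) * W + x + 1];  /* even line, R */
                uint8_t g  = (uint8_t)(((int)g1 + (int)g2) / 2);

                for (int dy = 0; dy < 2; dy++)          /* fill the 2x2 cell */
                    for (int dx = 0; dx < 2; dx++) {
                        uint8_t *p = &rgb[((y + dy) * W + (x + dx)) * 3];
                        p[0] = r;  p[1] = g;  p[2] = b;
                    }
            }
        }
    }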
The core processing chip of the system is TI's enhanced fixed-point DSP TMS320VC5410A, which runs at up to 160 MHz and has 64 K words of on-chip RAM that can be flexibly mapped to data or program space. Because the DSP's internal memory is limited, the design adds a 1 MB FLASH program memory (SST39VF400A) and a 256 KB external data memory (CY7C1021). The control chip is the EPM7128SLC84-15 CPLD from the MAX7000 series; it comes in an 84-pin package with 128 macrocells, every 16 macrocells forming a logic array block, and operates at 5.0 V. The CPLD occupies the central timing-control position in the system: it provides the control signals for the image sensor chip, performs chip-select and read/write control for the SRAM and FLASH, and is also responsible for LCD display control.
2 Software Design
After the system is configured, image data can be collected and processed. During image acquisition, the main task is to determine the start and end of an image data frame. The synchronization signals output by the OV6630 (VSYNC, the vertical sync signal; HREF, the horizontal sync signal; and PCLK, the output data clock) were studied carefully, and VHDL is used to control the starting point of the acquisition precisely. Figure 2 shows the timing relationship between the three synchronization signals and the data signals during image acquisition.
In Figure 2, each period of the frame sync signal VSYNC contains 288 HREF pulses, and each HREF period contains 352 PCLK pulses; each PCLK clock outputs one RGB pixel of video data.
By monitoring changes of the vertical sync signal VSYNC, the system can determine whether a new image frame has started. After a frame starts, a valid pixel value is output only when HREF is high and a falling edge of PCLK occurs. The rising edge of VSYNC indicates the arrival of a new image, and its falling edge indicates the start of data collection for that frame (the CMOS image sensor outputs the image line by line). HREF is the horizontal sync signal; its rising edge marks the beginning of a line of image data. PCLK is the output data clock; valid data can be collected only while HREF is high, and each falling edge of PCLK delivers one pixel of data, so 352 pixels are transferred during each HREF high level. Within one frame, that is, while VSYNC is low, HREF goes high 288 times. When the rising edge of the next VSYNC arrives, the acquisition of one 352 x 288 image is complete.
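The counting described above can be summarized by the following behavioral C sketch. In the actual system this logic is written in VHDL inside the CPLD; the sample_*() pin-reading helpers here are hypothetical and serve only to illustrate the edge and level conditions.

    /* Behavioral sketch of the frame-capture timing: wait for a new frame on
     * VSYNC, then latch one pixel on every falling edge of PCLK while HREF is
     * high, for 352 pixels per line and 288 lines per frame. */
    #include <stdint.h>

    #define LINE_PIXELS 352
    #define FRAME_LINES 288

    extern int     sample_vsync(void);  /* hypothetical pin-read helpers */
    extern int     sample_href(void);
    extern int     sample_pclk(void);
    extern uint8_t sample_data(void);   /* 8-bit Y data bus */

    void capture_frame(uint8_t frame[FRAME_LINES][LINE_PIXELS])
    {
        while (sample_vsync())  ;       /* make sure VSYNC is low            */
        while (!sample_vsync()) ;       /* rising edge: a new image arrives  */
        while (sample_vsync())  ;       /* falling edge: frame data starts   */

        for (int line = 0; line < FRAME_LINES; line++) {
            while (!sample_href()) ;    /* rising edge of HREF: line begins  */
            for (int px = 0; px < LINE_PIXELS; px++) {
                while (!sample_pclk()) ;            /* wait for PCLK high    */
                while (sample_pclk())  ;            /* falling edge: valid   */
                frame[line][px] = sample_data();    /* latch one pixel       */
            }
            while (sample_href()) ;     /* wait for the end of the line      */
        }
        /* the next rising edge of VSYNC marks the end of this 352x288 frame */
    }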
The CPLD control logic first checks, in order, whether the VSYNC and CHSYNC signals are valid. Glitches must be rejected at this stage. Because a glitch lasts only a short time, a flag-and-recheck method is used in the design: when the active edge of a signal is detected (the rising edge for VSYNC, the falling edge for CHSYNC), the signal is sampled again after a short delay; if it still holds the expected level, the edge is accepted as valid.
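A possible rendering of this flag-and-recheck filter is sketched below in C for clarity; the real design implements it in VHDL inside the CPLD, and the delay_cycles() and sample_vsync() helpers are hypothetical.

    /* Illustrative glitch filter: accept a VSYNC rising edge only if the
     * signal is still high after a delay longer than any expected glitch. */
    extern int  sample_vsync(void);
    extern void delay_cycles(unsigned n);

    int vsync_edge_valid(void)
    {
        while (sample_vsync())  ;   /* wait for a low level                  */
        while (!sample_vsync()) ;   /* candidate rising edge: set the flag   */
        delay_cycles(8);            /* wait longer than a glitch can last    */
        return sample_vsync();      /* still high -> the edge was genuine    */
    }

The same idea applies to CHSYNC, except that the falling edge is checked instead.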
Because the pixel data in the system are clocked out by PCLK, PCLK can be used to derive the enable signal CE of the SRAM that stores the image; the SRAM read/write signals are likewise generated by the CPLD. During a CPLD write operation, therefore, the read signal RE only needs to be held at "1". Since the rising edge of PCLK is relatively stable during data output and the SRAM latches data on the rising edge of WR, PCLK is used as the write signal WR once HREF is valid (HREF = 1).
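The resulting SRAM strobes can be expressed as the simple combinational relations below, written in C only for illustration; the real logic is part of the CPLD's VHDL design, and the active level assumed for CE is an assumption.

    /* Illustrative SRAM control during capture: RE held at '1' (reads
     * disabled) and PCLK gated by HREF used as the write strobe WR, so that
     * data are latched on its rising edge. */
    typedef struct {
        int ce;   /* SRAM chip enable, assumed active low */
        int re;   /* read enable                          */
        int wr;   /* write strobe                         */
    } sram_ctrl_t;

    sram_ctrl_t sram_ctrl_capture(int href, int pclk)
    {
        sram_ctrl_t c;
        c.ce = 0;               /* keep the SRAM selected while capturing     */
        c.re = 1;               /* hold RE at '1' during the write operation  */
        c.wr = href && pclk;    /* WR follows PCLK only while HREF is high    */
        return c;
    }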
Because the number of pixels in an image, and hence the number of data items, is known, the CPLD issues a count-complete signal once the count is finished. After receiving this stop signal, the DSP reads the data from the SRAM, compresses and processes them, and then places the result on the data bus of the LCD, so that the collected image is finally displayed on the LCD screen. Figure 3 shows the software flowchart of the image acquisition system.
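The DSP-side flow summarized in Figure 3 might look roughly like the C sketch below. The memory-mapped addresses, the status-flag bit, and the compress_frame()/lcd_write() helpers are hypothetical placeholders, not the authors' actual code.

    /* Illustrative DSP main loop: wait for the CPLD's count-complete signal,
     * read the frame from external SRAM, process it, and send it to the LCD. */
    #include <stdint.h>

    #define FRAME_PIXELS (352 * 288)

    volatile uint16_t *const CPLD_STATUS = (uint16_t *)0x8000; /* hypothetical */
    volatile uint16_t *const SRAM_BASE   = (uint16_t *)0x4000; /* hypothetical */

    extern void compress_frame(const uint8_t *raw, uint8_t *out); /* placeholder */
    extern void lcd_write(const uint8_t *img);                    /* placeholder */

    static uint8_t raw_frame[FRAME_PIXELS];
    static uint8_t processed[FRAME_PIXELS];

    void acquisition_loop(void)
    {
        for (;;) {
            while ((*CPLD_STATUS & 0x1) == 0)   /* 1. wait for the stop signal */
                ;

            for (unsigned i = 0; i < FRAME_PIXELS; i++)   /* 2. read the SRAM  */
                raw_frame[i] = (uint8_t)SRAM_BASE[i];

            compress_frame(raw_frame, processed);  /* 3. compress / process     */
            lcd_write(processed);                  /* 4. drive the LCD data bus */
        }
    }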
3 Conclusion
The system adopts a combined DSP and CPLD solution, dividing the work between image acquisition and data processing. Experimental results show that the images produced by the system are clear and meet real-time display requirements, and that the system can be widely applied to network video and industrial automatic monitoring.