Design of CPLD Vision System Based on Image Sensor

Source: MCU and Embedded Systems | Author: Liu Junmu, Beijing University of Aeronautics and Astronautics
Research on vision systems has become a hot topic in recent years, and a number of systems have been developed that can serve as references. Most of these systems, however, are PC-based, and the complexity of their algorithms and hardware structures restricts their application: after such a system collects image data, the vision processing algorithms are executed on a PC. With the advance of embedded microprocessor technology, 32-bit ARM processors now offer high computing speed and strong signal processing capability, so they can serve as the processor of a vision system and replace the PC in implementing simple vision processing algorithms. The following describes an embedded vision system based on ARM and CPLD, together with some experience in embedded vision development.

1 System Solution and Principle
In the design of embedded vision, there are currently two mainstream solutions:

Solution 1: image sensor + microprocessor (ARM or DSP) + SRAM

Solution 2: image sensor + CPLD/FPGA + microprocessor + SRAM

Solution 1 has a compact system structure and low power consumption. During image acquisition, however, the synchronization timing signals output by the image sensor must be identified through ARM interrupts, and during interrupt handling the microprocessor must perform a program jump and save the context, which reduces the image acquisition speed. This solution suits applications with low acquisition-speed requirements and strict power budgets.
Solution 2 uses the CPLD to identify the image sensor's synchronization timing signals, without interrupting the microprocessor, so the system's acquisition speed is higher; however, the addition of the CPLD increases the system's power consumption.
To combine the advantages of the two solutions, the hardware here adopts "ARM + CPLD + image sensor + SRAM". This design makes full use of the programmability of the CPLD and, through software programming, also covers the advantages of Solution 1. This is embodied in the following aspects:

① Power consumption is controllable. For applications with strict power requirements, the CPLD's programmability allows the timing-signal pins to be routed straight through to the ARM's interrupt ports, so the CPLD acts only as combinational-logic interconnect; this reduces the CPLD's power consumption and achieves the effect of Solution 1. When a high acquisition speed is required, the CPLD's strengths can be exploited fully: its combinational and sequential logic identifies the synchronization signals output by the image sensor and writes the image data into the SRAM.

② Device selection is flexible. In the hardware design, all buses are connected through the CPLD; in the software design, the different modules are encapsulated separately by function. With the CPLD as the hub, the other devices in the system can therefore be replaced conveniently without modifying the rest of the CPLD program, which favors the upgrading of system functions. As an application of this system, a visual tracking program has been developed, suitable for cases where the color contrast between the target and the background is strong: by processing the data collected by the CMOS camera in real time, the centroid coordinates of the tracked object are calculated from the object's color. The following describes the functions of each part of the system.

2 System Hardware
2.1 Hardware Composition and Connection
The system hardware consists of four parts: the CMOS image sensor OV6620, a CPLD, 512 KB of SRAM, and the 32-bit microprocessor LPC2214.

The OV6620 is a CMOS image sensor from OmniVision with high performance and low power consumption, well suited to embedded image acquisition systems; the image data is collected through the OV6620. The programmable device is an Altera EPM7128S CPLD, programmed in the Verilog HDL hardware description language under Quartus II. The SRAM, which serves as the system's data buffer, is an IS61LV5128; its random-access characteristic is convenient for the image processing program. The LPC2214, with PLL support, can run at up to 60 MHz, providing hardware support for fast image processing.
The OV6620 is integrated on a board with its own independent 17 MHz crystal oscillator. It outputs three timing signals for image synchronization: the pixel clock PCLK, the frame synchronization signal VSYNC, and the row synchronization signal HREF. It can output image data in RGB or YCrCb format over an 8-bit or 16-bit data bus.
In terms of hardware design, two problems need to be solved:
① Strict timing synchronization of image acquisition;
② Dual-CPU shared SRAM bus arbitration.

The key to the first problem is how to read the OV6620's output timing signals accurately and in real time, and then write the image data into the SRAM. The solution here is to use the CPLD to identify the timing signals and to write the image data: the CPLD recognizes signal edges in hardware, which is fast, and a Mealy state machine written in Verilog performs the SRAM writes, which is more stable.

For the SRAM shared by the two controllers, a reasonable connection method is needed. Taking advantage of the programmability of the CPLD, the data bus of the OV6620, the address and data buses of the LPC2214, and the SRAM bus are all connected to the CPLD. The interconnection of the buses is controlled by the CPLD program; as long as software guarantees mutual exclusion on the bus, that is, only one controller (the CPLD or the LPC2214) operates the SRAM bus at any given moment, bus conflicts are effectively avoided. In this way hardware arbitration is guaranteed by software, and the mechanism is implemented by writing multiplexers (data selectors) in the CPLD.
The connection relationship between the devices is shown in Figure 1.


As Figure 1 shows, the buses of the microprocessor are connected to the CPLD. In applications with strict power requirements, the synchronization timing signals of the OV6620 only need to be routed through the CPLD to the interrupt pins of the LPC2214 to convert the system into Solution 1; for the CPLD, this pin-through connection is pure combinational logic, which reduces power consumption. For details of Solution 1, see the references. For cases that demand a high acquisition speed, the source code of the CPLD part can be found at www.mesnet.com.cn (editor's note). The discussion below focuses on this case.
2.2 Working Process

After the system is powered on, the LPC2214 first configures the camera's operating state through the I2C bus, including the output image's data format, rate, white balance, and automatic gain. Once configuration is complete, the LPC2214 sends an image-acquisition signal to the CPLD. The CPLD then takes control of the SRAM bus and, by detecting the OV6620's output timing, writes the image data into the SRAM; the writes must, of course, strictly follow the SRAM's operating timing. After one frame has been collected, the CPLD sets a flag to notify the LPC2214. If the LPC2214 is idle, it notifies the CPLD to hand the bus over to it, reads the data from the SRAM, and performs the image processing, while at the same time signaling the CPLD to begin the next acquisition. Image acquisition and image processing therefore run in parallel, improving the system's efficiency. After each new frame is collected, the above process repeats.
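As a rough C sketch of this handshake on the ARM side (the port addresses, bit masks, and helper functions are assumptions for illustration; the article does not give the memory map):

#include <stdint.h>

#define CPLD_STATUS   (*(volatile uint8_t *)0x80000000)  /* assumed flag port */
#define CPLD_CONTROL  (*(volatile uint8_t *)0x80000004)  /* assumed control port */
#define FRAME_READY   0x01     /* set by the CPLD when a frame is in SRAM */
#define START_CAPTURE 0x01     /* written by the ARM to start a capture */

extern void ov6620_config(void);         /* I2C: format, rate, white balance, gain */
extern void copy_frame_from_sram(void);  /* read the frame while the ARM owns the bus */
extern void process_frame(void);         /* e.g. the color-tracking algorithm */

int main(void)
{
    ov6620_config();                   /* 1. configure the camera over I2C */
    CPLD_CONTROL = START_CAPTURE;      /* 2. ask the CPLD to grab a frame */

    for (;;) {
        while (!(CPLD_STATUS & FRAME_READY))
            ;                          /* 3. wait for the frame-complete flag */
        copy_frame_from_sram();        /* 4. take the bus and fetch the data */
        CPLD_CONTROL = START_CAPTURE;  /* 5. return the bus; next capture starts */
        process_frame();               /* 6. processing overlaps the capture */
    }
}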
2.3 Features of the Hardware Solution
The LPC2214 is responsible for image processing and the CPLD for image acquisition, which encapsulates the two functions cleanly. The CPLD encapsulates all code related to the hardware timing; its only interfaces to the outside are the status flag lines and the data-acquisition bus. This greatly eases system upgrades, since the hardware and software of the image-acquisition part need not be modified. Even if the microprocessor is replaced by a more powerful model, the system still works normally as long as the new processor follows the flag/status-line protocol described above, which enhances the system's compatibility and portability.

3 System Software
The system software consists of the ARM microprocessor code and the CPLD code. The ARM code is developed in C in the ADS 1.2 environment, while the CPLD code is written in Verilog HDL under Quartus II.
3.1 CPLD Program Design
The CPLD program is divided into two parts: combinational logic and sequential logic. The combinational logic mainly implements the bus arbitration, and this code does not depend on the CPLD's global clock. The sequential logic detects the synchronization signals and writes the image data according to the SRAM's operating timing.
In the bus arbitration part, note that, from the CPLD's point of view, the direction of data flow on the same bus differs at different times, so the bus must be declared as a bidirectional (inout) port in Verilog.
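A minimal Verilog sketch of the bus arbitration follows, assuming a single grant signal; arm_owns_bus and the other names are illustrative, not the article's original code:

module bus_arbiter (
    input        arm_owns_bus,   // 1: the LPC2214 drives the SRAM bus; 0: the CPLD does
    input  [7:0] cpld_wr_data,   // image byte the CPLD wants to write
    inout  [7:0] sram_data,      // shared bidirectional SRAM data bus
    output [7:0] cpld_rd_data    // what the CPLD sees when it is not driving
);
    // Drive the bus only when this side owns it; otherwise float it to
    // high impedance, so the bidirectional port acts as a plain input.
    assign sram_data    = arm_owns_bus ? 8'bz : cpld_wr_data;
    assign cpld_rd_data = sram_data;
endmodule

When arm_owns_bus is high, the CPLD floats the shared data bus so the LPC2214 can drive it; when low, the CPLD drives its image byte onto the bus.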


Bus operations on bidirectional ports can be summarized as follows:
① a control signal must indicate the direction of the port at any given moment;
② driving the output to high impedance puts the bidirectional port in the input state, where it can be used as a normal input port.
The sequential logic module mainly recognizes the timing signals of the image sensor. As shown in Figure 2, the CPLD must first detect the falling edge of VSYNC, then detect the rising edge of the HREF signal, and then read the image data on the rising edge of the PCLK signal.

 


Rising-edge detection is implemented with the always statement in Verilog; for example, the rising edge of the clock signal cam_pclk is detected with always @(posedge cam_pclk). From the analysis above, however, three signals must be monitored, and although always could detect each of them, Verilog does not allow always statements to be nested. To solve this, the module contains only one sequential always block, and the edges of the other signals are detected in a way equivalent to always. For example, for the cam_vsyn signal, two temporary registers vsyn_0 and vsyn_1 are defined and assigned on each rising edge of the clock, as follows:
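A minimal sketch of this two-register sampling, consistent with the detection rule described next (vsyn_1 holds the newest sample, vsyn_0 the previous one):

reg vsyn_0, vsyn_1;

always @(posedge cam_pclk) begin    // the module's single sequential block
    vsyn_1 <= cam_vsyn;             // newest sample of the frame-sync signal
    vsyn_0 <= vsyn_1;               // the sample taken one clock earlier
end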



In this way the values of vsyn_0 and vsyn_1 are updated on every clock edge. When vsyn_0 is 0 and vsyn_1 is 1, a rising edge is considered to have arrived; a falling edge can be detected in the same way. Note that this scheme requires the clock period to be much shorter than the high and low durations of the detected signal: if a signal pulse is too narrow, vsyn_0 and vsyn_1 are not updated in time and the edge is missed.
The data-writing process is implemented with a Mealy state machine, and the code is general: if another type of SRAM is used, only the high/low levels in the corresponding states need to be changed according to that device's read/write timing. The state machine also keeps the program structure clear and easy to debug.
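As a rough sketch of this pattern, simplified to a small Moore-style sequence for brevity (the article's actual machine is Mealy; the states, names, and timing here are illustrative assumptions that must be adapted to the SRAM datasheet):

// Pulse /WE low for one clock per byte written.
localparam IDLE = 2'd0, SETUP = 2'd1, WRITE = 2'd2;
reg [1:0]  state;
reg        sram_we_n;
reg [18:0] sram_addr;   // 19 address lines cover the 512K locations

always @(posedge cam_pclk) begin
    case (state)
        IDLE:  if (pixel_valid) state <= SETUP;              // new byte ready
        SETUP: begin sram_we_n <= 1'b0; state <= WRITE; end  // address stable, assert /WE
        WRITE: begin
                   sram_we_n <= 1'b1;                        // data latched on /WE rising edge
                   sram_addr <= sram_addr + 1'b1;            // advance to the next location
                   state     <= IDLE;
               end
        default: state <= IDLE;
    endcase
end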
3.2 ARM Program Design
There are currently many PC-based vision processing algorithms. In a microprocessor-based embedded vision system, however, the hardware resources and processing speed cannot compare with a PC's; especially under real-time requirements, fast and effective algorithms suited to embedded systems must be written. The algorithms below were written with this in mind.

Color tracking: the color-tracking task can be divided into two steps, color calibration and color segmentation. Color calibration uses a known color to find the closed region corresponding to it in the color space. Color segmentation then determines whether each pixel of the image falls inside that calibrated region; if it does, the pixel's color is considered the same as the calibrated color. In this way the calibrated region identifies the objects in the image whose color matches the calibration. To meet the application requirements of different situations, two color-tracking modes are provided.
(1) Frame processing mode

In this mode, the user enters the R, G, and B bounds of the color to be tracked, which define the RGB tracking space. The processor then starts from the upper-left corner of the image and examines every pixel line by line. If a pixel falls within the user-defined color range, it is marked as tracked, and the highest, lowest, leftmost, and rightmost tracked points are recorded. If a detected pixel lies outside the current bounding box of the tracking area, the box is enlarged to include it. The number of qualifying pixels is also recorded. After one scan of the image, the sums of the horizontal and vertical coordinates of the qualifying pixels are each divided by the number of qualifying pixels to obtain the center coordinates of the tracked object.
Thus, after a single scan of the image, the central coordinates of the tracked object are obtained, and the processor only needs to keep a small number of global variables, so the method suits embedded systems in both time complexity and space complexity.
In the method above, a single tracked point is enough to change the bounding box, so any noise point encountered during tracking will distort the box. The idea for suppressing noise is: a pixel is considered to qualify only if the neighboring points around it also fall within the RGB range entered by the user.
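A hedged C sketch of this frame-mode scan follows; the image dimensions, pixel layout, and all names are illustrative assumptions, and the neighborhood noise check is omitted for brevity:

#include <stdint.h>

#define WIDTH  88                      /* assumed frame size */
#define HEIGHT 72

typedef struct { uint8_t r, g, b; } pixel_t;

/* Bounding box, centroid, and count of the pixels inside the RGB range. */
typedef struct {
    int min_x, max_x, min_y, max_y;
    int cx, cy;
    int count;
} track_t;

void track_frame(pixel_t img[HEIGHT][WIDTH],
                 uint8_t rlo, uint8_t rhi, uint8_t glo, uint8_t ghi,
                 uint8_t blo, uint8_t bhi, track_t *t)
{
    int x, y;
    long sx = 0, sy = 0;

    t->count = 0;
    t->min_x = WIDTH;  t->max_x = -1;
    t->min_y = HEIGHT; t->max_y = -1;

    for (y = 0; y < HEIGHT; y++)
        for (x = 0; x < WIDTH; x++) {
            pixel_t *p = &img[y][x];
            if (p->r >= rlo && p->r <= rhi &&
                p->g >= glo && p->g <= ghi &&
                p->b >= blo && p->b <= bhi) {
                if (x < t->min_x) t->min_x = x;   /* grow the bounding box */
                if (x > t->max_x) t->max_x = x;
                if (y < t->min_y) t->min_y = y;
                if (y > t->max_y) t->max_y = y;
                sx += x;  sy += y;                /* accumulate coordinates */
                t->count++;
            }
        }

    if (t->count > 0) {               /* centroid = coordinate sums / count */
        t->cx = (int)(sx / t->count);
        t->cy = (int)(sy / t->count);
    }
}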
(2) Row processing mode
Unlike the frame processing mode, the row processing mode records, after scanning each row of data, the leftmost and rightmost coordinates of the consecutive tracked points in that row, denoted (xnl, ynl) and (xnr, ynr) respectively. After one image has been processed, a result like that shown in Figure 3 is obtained.
Based on the obtained results, more information about the tracked object can be calculated:
① Calculate the area. Compute the length L(n) of the tracked segment in each row, then accumulate the L(n) values to obtain the tracked area S (see the sketch following this list).
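Assuming the obvious definition from the records above, L(n) = xnr - xnl + 1 for row n, and the area is accumulated as S = Σ L(n) over all rows that contain tracked points.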


④ Recognize the shape of the object. From the length of the tracked segment in each row, and from whether a row contains several separate runs of consecutive tracked points, the shape of the object as seen from the camera can be inferred. In particular, when detecting a line on a plane, branches can be identified; this is impossible in the frame processing mode.
It should be noted that although the row processing mode obtains more information about the tracked target, the per-row processing increases the load on the processor, so it is no faster than frame processing.
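As a small C illustration of the row records and the area accumulation in ① (the array names are assumptions):

/* (xnl[n], xnr[n]): leftmost and rightmost tracked coordinates of row n */
int area_from_rows(const int *xnl, const int *xnr, int nrows)
{
    int s = 0;
    int n;

    for (n = 0; n < nrows; n++)
        s += xnr[n] - xnl[n] + 1;   /* L(n) = xnr - xnl + 1 */
    return s;
}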

4 Improving the System's Working Rate
As described above, the working rate of the system in frame processing mode is 25 frames/s; the algorithm used to verify the system's function is color tracking. If pure image acquisition is performed without image processing, the system can reach the OV6620's maximum rate of 60 frames/s. On the image-processing side, the efficiency of the processing code has a large influence on the system's operating rate. The following are general methods for improving program efficiency on ARM:
① Inlining improves performance by removing the overhead of function calls. If a function is not called from any other module, it is good practice to declare it static; otherwise the compiler will also generate an out-of-line copy of the inlined function.
② In the ARM calling convention, when a function is called with four or fewer parameters, they are passed in registers R0~R3; parameters beyond four are passed on the stack (which requires additional instructions and slow memory operations). In general, limit the number of parameters to four or fewer; if this is unavoidable, place the four most frequently used parameters in R0~R3.
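For example (hypothetical signatures):

/* Four arguments travel in registers R0~R3, so the call itself needs
   no stack traffic: */
int in_range(int val, int low, int high, int margin);

/* A fifth argument would be passed on the stack, adding instructions
   and memory accesses to every call: */
int in_range5(int r, int g, int b, int low, int high);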

③ Where possible, write loops that count down to zero rather than up to a bound. The counting-up form needs two instructions per iteration, ADD and CMP, while the counting-down form needs only one, SUBS, as the sketch below shows.
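A hedged illustration (the loop body is arbitrary):

void copy64(int *dst, const int *src)
{
    int i;

    /* Counting up: the loop overhead compiles to ADD (increment the
       counter) plus CMP (compare it with the bound). */
    for (i = 0; i < 64; i++)
        dst[i] = src[i];

    /* Counting down to zero: a single SUBS decrements the counter and
       sets the flags that the conditional branch tests. */
    for (i = 64; i != 0; i--)
        dst[i - 1] = src[i - 1];
}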

④ The ARM core contains no division hardware, so division is usually implemented by a run-time library function, which takes many cycles. Some divisions are handled as special cases at compile time; for example, division by a power of 2 is replaced by a shift. The modulo operator "%" normally invokes the modulo routine, and if the modulus is not a power of 2 this costs considerable time and code space. The way to avoid it is to use an if() check instead.

For example, suppose count ranges over 0~59. Instead of

count = (count + 1) % 60;

use the following statement:

if (++count >= 60) count = 0;
⑤ Avoid large local structures or arrays; use malloc/free instead.
⑥ Avoid recursion.

Conclusion
This paper has introduced an embedded vision system based on ARM and CPLD that performs color tracking. In the hardware design, the separation of image acquisition from image processing is conducive to upgrading the system's functions. The vision processing algorithms emphasize processing efficiency and timeliness, and two modes are available for different needs. Finally, some suggestions and methods for improving program efficiency were given. Compared with a PC-based vision system, this system has low power consumption and small size, and is suitable for mobile robots and other fields.
