Principle and Visual Positioning of the Placement Machine Vision System


Zhong Jiangsheng 1, Li Qinchuan 2, Xia Yunpeng 1, Liu Hongzhao 2

(1. Mechanical and Electrical Engineering College, Shenzhen Vocational and Technical College, Shenzhen 518055, China; 2. School of Mechanical and Precision Instrument Engineering, Xi'an University of Technology, Xi'an 710048, China)

 

Abstract: This paper describes the basic composition and implementation principle of the vision system of a placement machine, introduces the application of image processing technology in it, and discusses algorithms for locating chip components.

Keywords: placement machine, surface mount technology, vision on the fly, visual positioning

CLC number: TP391.41    Document code: A    Article ID: 1004-4507 (2005) 12-0026-04

With the constant pursuit of smaller and lighter electronic products, market demand for placement machines keeps growing. At present, research in China on the key technologies of automatic mounting of electronic components is still in its infancy and lags significantly behind advanced foreign systems, especially in speed and accuracy [1, 2]. As one of the key technologies of a placement machine, the vision centering system determines the machine's mounting capability and directly affects its placement precision and speed. It is therefore necessary to study the vision centering system of the placement machine.

This article describes the principle of the placement machine vision system and puts forward specific, effective algorithms for centering chip components.

1. Architecture and implementation principle of the placement machine vision system

As shown in Figure 1, the placement machine vision system generally consists of two types of CCD camera. The first is the reference camera, which is installed on the placement head and moves with it in the x-y plane; it determines the coordinates of the PCB in the system coordinate system by imaging the reference (mark) points on the PCB. The second is the centering detection camera, which obtains the deviation between the component center and the nozzle center, as well as the angle θ of the component relative to the mount orientation. Finally, the exact deviation between the component and the mount position is obtained through coordinate transformation between the cameras, completing the mounting task.

1.1 Basic components of the system

The basic composition of the vision system is shown in Figure 2. The system consists of three independent CCD imaging units, light sources, image acquisition cards, an image processing computer, and the master computer system. To improve the accuracy and speed of the vision system, the centering detection unit is designed with a low-resolution camera CCD1 for small chip components and a high-resolution camera CCD2 for large ICs; CCD3 serves as the reference camera on the placement head. When the nozzle center reaches the center of the detection camera's field of view, a trigger signal is issued to capture the image, and the corresponding light source is flashed at the same time.

1.2 Relationship between system coordinate systems

To accurately identify the actual deviation between the component to be mounted and its target location, the relationships between the scene, the CCD camera, the CCD imaging plane, and the pixel coordinates on the display screen must be analyzed. This associates points in the screen's pixel coordinate system with points in the scene coordinate system, so that the deviation between the center of the component and the center of the nozzle can be calculated by the image processing software.

For a single camera, the pinhole model is the simplest approximate model and is suitable for many computer vision applications [3]. The camera performs a linear transformation from the 3D projective space P3 to the 2D projective space P2. The geometric relationship is shown in Figure 3. To facilitate further interpretation, the following four coordinate systems are defined:

(1) Euclidean scene coordinate system (subscript W): its origin is at OW; the points X and u are expressed in the scene coordinate system.

(2) Euclidean camera coordinate system (subscript C): its origin is at the focal point C = OC, and its axis ZC coincides with the optical axis, pointing outward from the image plane. There is a unique relationship between the scene coordinate system and the camera coordinate system: scene coordinates can be converted to camera coordinates through a Euclidean transformation composed of a translation T and a rotation R, as shown in formula (1):

XC = R (XW − T)    (1)

(3) Euclidean image coordinate system (subscript I): its axes are aligned with those of the camera coordinate system, with XI and YI lying in the image plane. The origin OI has coordinates (xp0, yp0) in the pixel coordinate system.

(4) Pixel coordinate system (subscript P): the coordinate system used during image processing. Its orientation is the same as that of the Euclidean image coordinate system, but its origin and scale are different.

A scene point XC is projected onto the image plane π as the point uC = (uC, vC, −f). The coordinate relationship between them can be derived using similar triangles:

uC = −f · xC / zC,  vC = −f · yC / zC    (2)

Because the field of view is small, lens distortion is very low, so uC can be taken as directly equal to the Euclidean image coordinates: uC = uI, vC = vI, with uI = (up − xp0) · δ and vI = (vp − yp0) · δ, where δ is the size of a single pixel.

In this way, the mapping between the Euclidean scene coordinate system and the pixel coordinate system is obtained by combining (1) and (2):

up = xp0 − (f/δ) · r1·(XW − T) / r3·(XW − T),
vp = yp0 − (f/δ) · r2·(XW − T) / r3·(XW − T)    (3)

where rk denotes the k-th row of the rotation matrix R.

In this system the cameras are independent of one another, so the coordinates measured by each camera can be converted into the same scene coordinate system.
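
To make formulas (1)-(3) concrete, the following is a minimal C++ sketch of the scene-to-pixel mapping for a single calibrated camera. It is only an illustration under the pinhole assumptions above; the struct fields and the function name are hypothetical, and the calibration values (R, T, f, δ, xp0, yp0) would have to be measured for each camera.

    #include <array>

    // Minimal pinhole-camera sketch of the scene-to-pixel mapping in
    // section 1.2 (formulas (1)-(3)). All field and function names are
    // hypothetical placeholders; values come from camera calibration.
    struct Camera {
        std::array<double, 9> R;  // 3x3 rotation (row-major), scene -> camera
        std::array<double, 3> T;  // camera origin expressed in scene coordinates
        double f;                 // focal length (same length unit as delta)
        double delta;             // physical size of one pixel
        double xp0, yp0;          // principal point O_I in pixel coordinates
    };

    // Project a scene point XW to pixel coordinates (up, vp).
    void projectToPixel(const Camera& cam, const std::array<double, 3>& XW,
                        double& up, double& vp) {
        // Formula (1): XC = R (XW - T)
        double d[3] = {XW[0] - cam.T[0], XW[1] - cam.T[1], XW[2] - cam.T[2]};
        double xC = cam.R[0] * d[0] + cam.R[1] * d[1] + cam.R[2] * d[2];
        double yC = cam.R[3] * d[0] + cam.R[4] * d[1] + cam.R[5] * d[2];
        double zC = cam.R[6] * d[0] + cam.R[7] * d[1] + cam.R[8] * d[2];

        // Formula (2): perspective projection onto the image plane
        double uI = -cam.f * xC / zC;
        double vI = -cam.f * yC / zC;

        // Formula (3): Euclidean image -> pixel, using uI = (up - xp0) * delta
        up = uI / cam.delta + cam.xp0;
        vp = vI / cam.delta + cam.yp0;
    }

In practice it is the inverse of this mapping, applied to points on the known PCB plane, that lets pixel measurements from each camera be expressed in the common scene coordinate system.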

1.3 System implementation principle

The working principle of the placement machine vision system is shown in Figure 4. When a new PCB to be mounted is transported to the specified position and fixed by the board feeding mechanism, the reference camera CCD3 installed on the placement head searches for the mark points in the corresponding area using an image recognition algorithm, and their coordinates in the Euclidean scene coordinate system are calculated using formula (3). The position data of the corresponding components are then sent to the master computer. The centering detection cameras (CCD1, CCD2) inspect the components and obtain their coordinates and rotation angles in the screen (pixel) coordinate system; these are converted into the scene coordinate system using formula (3) and compared with the target position to obtain the required position and rotation of the placement head.
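
The comparison step at the end of this process can be illustrated with a short, hypothetical C++ sketch: given the target mount pose, the nozzle position, and the component's measured offset and angle (all already in scene coordinates), it computes the head translation and rotation to apply. The source describes only the principle, so the function, its parameters, and the sign conventions here are assumptions.

    #include <cmath>

    struct Correction { double x, y, theta; };

    // Hypothetical sketch: compute the placement-head correction from the
    // centering-camera measurement. (dx, dy) is the component centre's offset
    // from the nozzle centre and compTheta its angle, in scene coordinates.
    Correction mountCorrection(double targetX, double targetY, double targetTheta,
                               double nozzleX, double nozzleY,
                               double dx, double dy, double compTheta) {
        Correction c;
        c.theta = targetTheta - compTheta;   // rotation to apply to the nozzle
        // Rotating the nozzle also rotates the component's offset from the
        // nozzle centre, so compensate the offset in the rotated frame.
        double cs = std::cos(c.theta), sn = std::sin(c.theta);
        double rx = cs * dx - sn * dy;
        double ry = sn * dx + cs * dy;
        c.x = targetX - (nozzleX + rx);      // head translation in x
        c.y = targetY - (nozzleY + ry);      // head translation in y
        return c;
    }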

2. Image Processing

2.1 Image preprocessing

The purpose of image preprocessing is to improve the image data by suppressing unwanted distortions or enhancing image features that are important for subsequent processing. Because the SMT production site is not dust-free, dust on the CCD lens easily introduces large external noise into the image. In addition, noise such as optical path disturbance and system circuit distortion is inevitably introduced during image acquisition. It is therefore necessary to preprocess the images to eliminate the effects of this noise.

The main requirements for a noise smoothing method are that it reduce noise effectively, avoid blurring edge contours, and run fast. Conventional methods include Gaussian filtering, mean filtering, Lee filtering, median filtering, and edge-preserving filtering.

Median filtering is a non-linear smoothing method that causes little edge blurring. Its basic idea is to replace the current pixel with the median of the brightness values in its neighborhood; it removes impulse noise and salt-and-pepper noise while retaining edge details, and because it does not noticeably blur edges it can be applied iteratively. Naively, however, all pixels in a window (usually 3 × 3) must be sorted at every pixel position, which is costly. A more effective algorithm, proposed by T. S. Huang and others [4], exploits the fact that when the window moves one column along a row, only the m pixels of the leftmost column leave the window and m new pixels enter on the right; the remaining window pixels are unchanged and need not be re-sorted. The specific algorithm is as follows (a C++ sketch is given after the steps):

(1) Set th = m·n / 2.

(2) Move the window to the beginning of a new row. Sort its contents, build the histogram H of the window pixels, determine the median med, and record the number lmed of pixels whose brightness is less than med.

(3) For each pixel p with brightness pg in the leftmost column of the window, set H[pg] = H[pg] − 1, and if pg < med, set lmed = lmed − 1.

(4) Move the window one column to the right. For each pixel p with brightness pg in the new rightmost column, set H[pg] = H[pg] + 1, and if pg < med, set lmed = lmed + 1.

(5) If lmed > th, go to step (6); otherwise repeat lmed = lmed + H[med], med = med + 1 until lmed ≥ th, then go to step (7).

(6) Repeat med = med − 1, lmed = lmed − H[med] until lmed ≤ th.

(7) If the right column of the window is not at the right boundary of the image, go to step (3).

(8) If the bottom row of the window is not at the bottom boundary of the image, go to step (2).
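
A minimal C++ sketch of this running-median filter for an 8-bit greyscale image follows. It assumes a row-major pixel buffer and an odd m × n window, and it simply copies border pixels; these details are assumptions, not part of the original description.

    #include <vector>

    // Running-median filter after Huang et al. [4], following steps (1)-(8).
    void huangMedian(const std::vector<unsigned char>& src,
                     std::vector<unsigned char>& dst,
                     int width, int height, int m, int n) {
        dst = src;                                   // border pixels stay as-is
        const int th = m * n / 2, ry = m / 2, rx = n / 2;
        for (int y = ry; y < height - ry; ++y) {     // step (2): start a new row
            int hist[256] = {0};
            for (int wy = -ry; wy <= ry; ++wy)
                for (int wx = -rx; wx <= rx; ++wx)
                    ++hist[src[(y + wy) * width + (rx + wx)]];
            int med = 0, lmed = 0;                   // lmed = #pixels < med
            while (lmed + hist[med] <= th) lmed += hist[med++];
            dst[y * width + rx] = (unsigned char)med;
            for (int x = rx + 1; x < width - rx; ++x) {
                for (int wy = -ry; wy <= ry; ++wy) {
                    unsigned char out = src[(y + wy) * width + (x - rx - 1)];
                    --hist[out];                     // step (3): column leaves
                    if (out < med) --lmed;
                    unsigned char in = src[(y + wy) * width + (x + rx)];
                    ++hist[in];                      // step (4): column enters
                    if (in < med) ++lmed;
                }
                if (lmed > th)                       // step (6): move med down
                    do { lmed -= hist[--med]; } while (lmed > th);
                else                                 // step (5): move med up
                    while (lmed + hist[med] <= th) lmed += hist[med++];
                dst[y * width + x] = (unsigned char)med;
            }
        }
    }

Per window shift, only 2m histogram updates and a short scan around the previous median are needed, instead of a full sort of all m·n window pixels.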

2.2 Image Segmentation

Thresholding is a traditional image segmentation method. Because it is simple to implement, computationally cheap, and stable in performance, it has become the most basic and most widely used segmentation technique and is applied in many fields. In these applications segmentation is a prerequisite for further image analysis and recognition, and its accuracy directly affects the effectiveness of subsequent tasks. Threshold selection is the key step in threshold-based segmentation.

The maximum between-class variance method proposed by Otsu in 1979 [5] has been widely used for its simplicity, stability, and effectiveness. From the perspective of pattern recognition, the optimal threshold should produce the best separation between the target class and the background class, and this separability is characterized by class variances. Three equivalent criteria are therefore introduced: the within-class variance σ²W, the between-class variance σ²B, and the total variance σ²T.

In view of the computation involved, the threshold is generally obtained by optimizing the third of these criteria. In practical use, the following simplified formula is applied:

σ²(t) = wA (μA − μ)² + wB (μB − μ)²

where σ²(t) is the between-class variance for threshold t, wA is the probability of class A, μA the mean grey level of class A, wB the probability of class B, μB the mean grey level of class B, and μ the mean grey level of the whole image.

That is, the threshold t that divides the image into the two classes A and B such that the between-class variance σ²(t) is maximized is the optimal segmentation threshold.
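
A minimal C++ sketch of this threshold search, assuming an 8-bit greyscale image in a std::vector: it evaluates the simplified criterion σ²(t) for every candidate threshold t from the grey-level histogram and returns the maximizing t.

    #include <vector>

    // Otsu threshold selection [5]: maximise
    // sigma^2(t) = wA*(muA - mu)^2 + wB*(muB - mu)^2 over all t.
    int otsuThreshold(const std::vector<unsigned char>& img) {
        double hist[256] = {0.0};
        for (unsigned char g : img) hist[g] += 1.0;
        const double total = (double)img.size();

        double mu = 0.0;                        // mean grey level of the image
        for (int g = 0; g < 256; ++g) mu += g * hist[g] / total;

        int best = 0;
        double bestVar = -1.0, wA = 0.0, sumA = 0.0;
        for (int t = 0; t < 256; ++t) {         // class A: grey levels <= t
            wA   += hist[t] / total;            // class A probability
            sumA += t * hist[t] / total;
            double wB = 1.0 - wA;               // class B probability
            if (wA <= 0.0 || wB <= 0.0) continue;
            double muA = sumA / wA;             // class mean grey levels
            double muB = (mu - sumA) / wB;
            double var = wA * (muA - mu) * (muA - mu)
                       + wB * (muB - mu) * (muB - mu);
            if (var > bestVar) { bestVar = var; best = t; }
        }
        return best;                            // optimal threshold t
    }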

2.3 Image recognition and positioning

The moment representation of a region interprets a normalized grey-level image function as the probability density of a two-dimensional random variable. Properties of this random variable can be described using a statistical characteristic, the moment [6]. If a region is represented by its non-zero pixel values, moments can describe either binary or grey-level regions. The moment of order (p + q) of a digital image is calculated as:

mpq = Σi Σj i^p · j^q · f(i, j)

where i and j are the pixel coordinates of the region points and f(i, j) is the grey value of the image there. The coordinates of the center of the image region (the region's centroid after binarization) are then obtained from:

ic = m10 / m00,  jc = m01 / m00

A chip component is much longer than it is wide, so the region after binarization is elongated; the direction of the region is defined as the direction of the longest side of its minimum enclosing rectangle. Based on the central moments of the image, the region direction θ can be calculated with the following formula:

θ = (1/2) · arctan( 2 μ11 / (μ20 − μ02) )

where the central moments are defined by:

μpq = Σi Σj (i − ic)^p (j − jc)^q f(i, j)
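
The centroid and direction formulas above translate directly into two passes over the binarized region image. A minimal C++ sketch, assuming a row-major 8-bit buffer in which non-zero pixels belong to the region; atan2 is used so the case μ20 = μ02 is handled safely:

    #include <cmath>
    #include <vector>

    // Centroid (ic, jc) and direction theta of a region from its moments,
    // as in section 2.3. f holds the binarised image, row-major.
    void regionPose(const std::vector<unsigned char>& f, int width, int height,
                    double& ic, double& jc, double& theta) {
        double m00 = 0, m10 = 0, m01 = 0;       // raw moments m_pq
        for (int j = 0; j < height; ++j)
            for (int i = 0; i < width; ++i) {
                double v = f[j * width + i];
                m00 += v; m10 += i * v; m01 += j * v;
            }
        ic = m10 / m00;                         // region centre
        jc = m01 / m00;

        double mu20 = 0, mu02 = 0, mu11 = 0;    // central moments mu_pq
        for (int j = 0; j < height; ++j)
            for (int i = 0; i < width; ++i) {
                double v = f[j * width + i];
                mu20 += (i - ic) * (i - ic) * v;
                mu02 += (j - jc) * (j - jc) * v;
                mu11 += (i - ic) * (j - jc) * v;
            }
        theta = 0.5 * std::atan2(2.0 * mu11, mu20 - mu02);
    }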

2.4 Experimental results

The image processing methods proposed in this paper for chip component alignment in the placement machine vision system were tested experimentally in the VC++ 6.0 environment. Table 1 shows the results of four tests on an 0402 chip component at the same position under different illumination. The results show that the image processing achieves satisfactory accuracy: the error stays within the permitted range, and the processing time is on the order of milliseconds, which meets the real-time requirements of the placement machine.

3. Conclusion

After explaining the structure of the vision system of a placement machine, this paper puts forward a very simple method for aligning chip components. Experiments prove that the method is advanced and practical and meets the real-time and precision requirements of a medium-speed placement machine.

This article is excerpted from Electronic Industry Special Equipment.

 
