A Self-Made Low-Cost 3D Laser Scanning Range Finder (3D LIDAR), Part 1


From CSK: a self-made low-cost 3D laser scanner. Very impressive!

Before introducing the principles, here are some scanned 3D models and demonstration videos, to give you an intuitive feel for the results.

Video Link

Related Images:

Corner of the scanned room

Scanned me

Scanner

Structure

  1. A brief introduction to current LIDAR products
  2. The principle of laser triangulation ranging
  3. The principle of line-laser cross-section ranging
  4. Considerations for building a 3D laser scanner
  5. References

Introduction: Laser Scanner/Radar

The laser scanning range finder described here is essentially a 3D LIDAR. As shown in the video above, the scanner obtains the distance from the target object to the scanner across a scanning cross section at each angle. Because the visualized data looks like a cloud made up of many small dots, it is commonly called a point cloud.

After obtaining the scanned point cloud, you can reproduce the 3D information of the scanned object/scene on the computer.

Such devices are often used in the following aspects:

1) Robot positioning and navigation

At present, the most ideal sensor for robot SLAM algorithms is still the laser radar (the Kinect can serve as a substitute for now, but it cannot be used outdoors and its accuracy is relatively low). A robot uses laser scanning to obtain a 2D or 3D point cloud of its environment, which lets it run localization algorithms such as SLAM: determining its own position in the environment while simultaneously building a map of it. This is one of my main projects.

2) 3D modeling of parts and objects

3) Mapping

Status quo

At present, single-point laser range finders are common on the market and relatively cheap. However, they can only measure the distance to one specific point on the target. Of course, if such a range finder is mounted on a rotating platform and swept through a full rotation, it becomes a 2D laser radar (2D LIDAR). Compared with single-point laser range finders, LIDAR products sell at much higher prices:

Image: Hokuyo 2D LIDAR

The 2D laser radar shown above is produced by Hokuyo and priced at tens of thousands of yuan. One reason for the high price is that such devices often use high-speed galvanometer mirrors to sweep the laser across a wide angular range (180 to 270 degrees); in addition, they measure distance by calculating the phase difference between the emitted and reflected laser beams. Of course, their performance is also very strong: the scanning frequency is generally above 10 Hz, and the accuracy is within a few millimeters.

A 2D LIDAR scans with a single laser point, so it can only collect the distance information of one cross section. To measure 3D data, one of the following two methods is used:

  1. Use a line laser
  2. Mount a 2D LIDAR on a second rotating axis and scan while it rotates, collecting 3D information.

The first method changes the laser output from a point to a line. The scanner measures the reflection of this line of light off the target object and obtains the data of an entire scanning cross section at once. The advantages are fast scanning and relatively high accuracy. The disadvantage is that, because the laser is spread into a line, its brightness (intensity) decreases sharply with distance, so the measurable range is very limited. For close-range scanning (< 10 m), however, this method is very effective and cost-effective. The laser radar described in this article uses this method.

Figure: single-line red laser

The second method has the advantage that an off-the-shelf 2D LIDAR is easy to adapt, and compared with the first method the scanning distance is longer at the same laser output power. On the other hand, because an extra degree of freedom must be controlled, the error may be larger, and the scanning speed is also somewhat lower.

At present, such LIDAR products appear in many laboratory and industrial settings, but their prices are too high for personal or household use. There is an alternative: the Kinect. However, its imaging resolution and ranging accuracy are much lower than a LIDAR's, and it cannot be used outdoors.

Low-cost solutions

The reasons LIDAR devices are expensive are:

  1. Measuring the laser's phase difference or time of flight requires special hardware
  2. High-speed galvanometer mirrors are costly
  3. Calibration algorithms and calibration labor cost money

For a personal DIY project, the third factor can be eliminated; as they say, knowledge is power, and here it shows :-) For the first two factors, achieving the same precision and performance at lower cost is probably impossible. But if we relax the requirements on precision and performance slightly, the cost drops dramatically.

The first thing to note is that material cost does not scale linearly with achievable performance: once the performance requirement drops below a certain level, the cost falls sharply. For the first factor, we can use the triangulation ranging method described in this article. Instead of scanning with a galvanometer mirror, an ordinary motor mechanism can be used to rotate the laser.

The low-cost 3D laser scanner introduced in this article achieves the following cost/performance:

Cost: ~¥150

Measurement Range: up to 6 m

Measurement accuracy (the error between the measured and actual distance): at the maximum range of 6 m the error is at most 80 mm; at close range (< 1 m) the error is below 5 mm.

Scan range: 180 degrees

Scanning speed: 30 samples/sec (e.g., an incremental 180-degree scan in 1-degree steps takes 6 seconds)

In terms of accuracy, this low-cost solution is enough to surpass the Kinect. The scanning speed is slow, but sufficient for general hobbyist purposes. Moreover, the scanning speed can be increased fairly easily; this article will discuss how after analyzing the constraints.

Principles and Algorithms

We first introduce the algorithm for measuring a single point on the target; 3D scanning is then a straightforward extension.

Triangulation ranging with a single-point laser

Besides TOF ranging based on phase difference or time difference, another ranging method is triangulation. This is the key to low-cost laser ranging, because it does not require the special hardware that other methods do. Moreover, within a certain range, triangulation can reach measurement accuracy and resolution comparable to TOF ranging.

Image (source: [3]): principle of laser triangulation ranging

Many hobbyists [1] [2] have already built laser radars or range finders based on laser triangulation, and this article uses the same approach. Beyond this article, refer to [3] for more details. (The authors of that paper are the developers of the XV-11, the home robot that uses a low-cost LIDAR, so I won't elaborate here :-)

The following is a summary drawn from that paper. The equipment needed for laser triangulation is simple: a point laser and a camera. The low cost should therefore be self-evident.

The figure shows how the distance d between the object and the laser is measured. The Imager in the figure is an abstraction of the camera (pinhole camera model). The line segment labeled s is, in practice, the mounting plane shared by the camera and the laser. The camera's imaging plane is parallel to this mounting plane, and the ray emitted by the laser makes an angle beta with it.

To measure the distance d, the laser beam is first shone onto the object, and the reflected light is imaged onto the camera's photosensitive plane. For objects at different distances, the position x of the imaged spot on the sensor changes. The following parameters are involved:

beta: angle of the laser beam

s: distance between the laser center and the camera center

f: camera focal length

If these parameters are not changed (fixed) after the ranging device is installed and the numerical value is known, the distance between the object and the laser can be obtained using the following formula:

q = f * s / x .... (1)

d = q / sin(beta) .... (2)

x is the only variable that must be obtained during a measurement. It is the distance from the image of the laser spot to one edge of the camera sensor (e.g., a CMOS chip), and it can be computed from the pixel coordinates of the laser spot's center in the camera image.

Formula (1) actually computes the perpendicular distance from the target object to the camera-laser plane (for long-range measurements this value can be treated as the actual distance). That is the whole of triangulation ranging; it is very simple.

In practice, however, the formulas above need to be expanded. First, consider the variable x. Suppose an algorithm has given us the pixel coordinates (px, py) of the laser spot in the image; to obtain x, we must convert pixel coordinates into a physical distance. To simplify the calculation, one axis of the camera image can be mounted parallel to the line segment s. The advantage is that only one component of the spot's pixel coordinates (px or py) is then needed to find the projected distance x. Assume we use px.

Then, the variable X can be calculated using the following formula:

x = pixelSize * px + offset .... (3)

Formula (3) introduces two parameters, pixelSize and offset. pixelSize is the physical size of a single photosensitive pixel on the camera sensor. offset is the deviation between the projected distance computed from the pixel coordinate and the actual projected distance x. This deviation is introduced by two factors:

  1. The origin of the x variable (the point where the line parallel to the laser ray intersects the imaging plane) may not lie in the first column (or row) of the sensor array (in fact, it almost never does)

  2. The pixel through which the camera's principal optical axis passes may not be the exact midpoint of the image.

pixelSize can be determined from the camera sensor's datasheet. The offset, however, is almost impossible to eliminate or to measure directly during assembly, so it must be found through the calibration steps described later.

We now have the formula for computing the actual distance of a spot from its pixel coordinate px:

d = f * s / (pixelSize * px + offset) / sin(beta) .... (4)
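
As a concrete illustration, formulas (1) through (4) can be collected into one short function. This is a sketch only: the numeric defaults below (focal length, baseline, angle, pixel pitch) are illustrative placeholders, not the calibration values of the actual build.

```python
import math

def triangulate_distance(px, f=3.0, s=50.0, beta=math.radians(83),
                         pixel_size=0.006, offset=0.0):
    """Distance d (mm) to the laser spot from its pixel column px,
    via d = f*s / (pixel_size*px + offset) / sin(beta), i.e. formula (4).
    All parameter values are illustrative assumptions (mm / radians)."""
    x = pixel_size * px + offset      # formula (3): pixel index -> physical x
    q = f * s / x                     # formula (1): perpendicular distance
    return q / math.sin(beta)         # formula (2): actual distance d

# A nearer object images farther from the sensor edge: larger px, smaller d.
d_near = triangulate_distance(px=300)
d_far = triangulate_distance(px=30)
```

Note how the distance is inversely proportional to x, which is the root cause of the resolution loss at long range discussed next.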

The next question is how to determine these parameters. Before that, though, performance must be considered: what kind of camera is needed to meet a given accuracy requirement, and how should the above parameters be chosen?

Factors that determine single-point laser ranging Performance

By formula (3), px is a discrete value (although a continuous px can be estimated by an algorithm, as introduced later). The computed distance therefore changes in discrete jumps, and the size of those jumps reflects the ranging resolution and accuracy.

Rewriting formula (1) as x = f * s / q and differentiating with respect to q gives dx/dq = -f * s / q^2, or equivalently:

dq/dx = -q^2 / (f * s) .... (5)

Formula (5) relates a jump in the spot position x to the resulting jump in the distance q obtained from the triangulation formula. It shows that at long range, each unit the spot moves on the sensor corresponds to an increasingly large jump in the computed distance. In other words, the accuracy and resolution of triangulation ranging both deteriorate as the distance increases.

Therefore, to pin down the desired specifications, we only need to state:

the maximum distance to be measured

the resolution (the value of formula (5)) at that maximum distance

The paper [3] gives selection rules for these; here we quote only the conclusion without repeating the derivation:

Assume the laser spot position can be located to 0.1-pixel accuracy, the pixel size is 6 µm, and the resolution at 6 m must satisfy (dq/dx) <= 30 mm. The requirement is then:

f * s >= 700

In our build this requirement is easy to meet. Moreover, current CMOS cameras tend to have even smaller pixels (higher resolution on the same chip size), so in practice the lower bound on f * s can be even lower.
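
The bound quoted above can be checked numerically from formula (5). The sketch below reproduces the arithmetic under the stated assumptions (0.1-pixel localization, 6 µm pixels, at most a 30 mm jump at 6 m):

```python
# Worked check of the f*s lower bound from [3]: the smallest resolvable
# spot movement is dx = 0.1 pixel * 6 um. Formula (5) gives
# |dq| = q^2 / (f*s) * dx, so requiring |dq| <= 30 mm at q = 6 m
# yields f*s >= q^2 * dx / dq.
q_max = 6000.0      # maximum range, mm
dx = 0.1 * 0.006    # resolvable spot movement, mm
dq_max = 30.0       # allowed distance jump at q_max, mm

fs_min = q_max ** 2 * dx / dq_max   # minimum required f*s, mm^2
print(fs_min)  # 720.0, i.e. roughly the "f*s >= 700" quoted above

def distance_jump(q, fs):
    """Distance change per resolvable spot movement, from formula (5)."""
    return q ** 2 / fs * dx
```

The helper also makes the degradation with range explicit: at 1 m the jump per resolvable movement is 36 times smaller than at 6 m.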

The camera resolution and the laser angle beta determine the measurable range (the nearest and farthest distances). We will not repeat the details here; see [3]. For the camera, using px for ranging at a resolution of 480 × 640 achieves better performance and higher resolution (the drawbacks will be mentioned later). beta is generally around 83 degrees.

Principles and performance constraints of 2D Lidar

Once single-point laser ranging works, 2D laser scanning is very easy: just rotate the range finder. The performance issue to discuss here is scanning speed.

With the triangulation method, recognizing the laser spot in the camera image and computing the actual distance takes essentially no time on a desktop computer. The factor limiting the scanning speed is therefore the camera's frame rate. Common USB cameras on the market reach at most 30 fps at 640 × 480 resolution, so the scanning speed is 30 samples/sec; in other words, 30 distance measurements per second.

For a laser radar covering 180 degrees with one measurement per degree, a full scan therefore takes at least 6 seconds.

To scan faster, the frame rate must naturally be raised. Among USB cameras, the PS3 Eye offers 60 fps, but that still means 3 seconds per 180-degree scan. Higher speeds demand faster data transfer, and over USB 2.0 at 640 × 480 resolution the frame rate is hard to push further. In paper [3], a high-speed camera chip plus a DSP was used to reach a much higher frame rate.
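
The scan-time arithmetic above is simple enough to capture in a couple of lines (a sketch; the function name is ours):

```python
def scan_time_s(fps, angular_range_deg=180.0, step_deg=1.0):
    """Time for one full sweep, assuming one camera frame per angular step."""
    return (angular_range_deg / step_deg) / fps

t30 = scan_time_s(30)   # 6.0 s at 30 fps, as quoted above
t60 = scan_time_s(60)   # 3.0 s with a 60 fps camera such as the PS3 Eye
```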

Since this build does not require a high scanning speed, I still use a 30 fps camera.

Principle of 3D laser scanning

As mentioned above, a line laser is used so that each measurement covers a line across the target object instead of a single point; rotating the scanner then performs the 3D scan. The working setup and a captured camera image are shown below:

Figure: an early working photo, with the red line laser operating

Figure: an image captured by the camera, showing the red laser line

Line-laser ranging can be reduced to a set of single-point ranging problems: for each image row (y value), the algorithm finds the x coordinate px of the laser line at that height, then computes the distance to that point using the earlier algorithm.

To simplify the problem, first consider ranging to laser spots lying on a plane parallel to the camera's sensor:

Figure: abstraction of ranging to laser spots on a parallel plane

As shown in the figure, the far plane is the target plane being measured, with a purple laser spot on it. The near plane is the camera's imaging plane; the figure can be viewed as a cross section of the pyramid formed by the target plane and the camera's imaging center.

Point P1 in the figure lies at the mid-height of the camera image. By the pinhole camera model, the distance from its image P1' to the camera center is the focal length f. Therefore, for P1 the actual distance can be obtained directly from formula (4).

The question is: can points at other heights, such as P2, also be obtained through formula (4)?

Figure: Principles of 3D ranging

The answer is yes, but an additional parameter is involved. As shown in the figure, if the distance from P2' to the camera center is denoted f', the perpendicular distance from P2 to the baseline is given by:

d' = f' * baseline / x .... (6)

f' can easily be obtained from f:

f' = f / cos(arctan((P2'.y - P1'.y) / f)) .... (7)

P2'.y and P1'.y are the physical heights of the points P2' and P1' on the imaging sensor; each can be obtained by multiplying the point's pixel coordinate py by the pixel height.

After finding the perpendicular distance d', we must convert it into the actual distance d. For this we need the angle theta between the line from P2 to the rotation center and the baseline. This angle can be derived from the laser-baseline angle beta using solid geometry; for the specific formulas, refer to the calculation section of the source code.
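
Formulas (6) and (7) can be sketched as follows. The final d'-to-d angle conversion is omitted, since (as noted above) it depends on the mechanical geometry and is left to the source code; the numeric defaults are illustrative assumptions, not the build's calibration values:

```python
import math

def corrected_focal(f, p2_y, p1_y, pixel_size):
    """Formula (7): effective focal length f' for a spot imaged at row p2_y.
    p1_y is the row at the image's mid-height; rows are converted to
    physical heights on the sensor via pixel_size (mm)."""
    dy = (p2_y - p1_y) * pixel_size
    return f / math.cos(math.atan(dy / f))

def perpendicular_distance(px, p2_y, p1_y, f=3.0, baseline=50.0,
                           pixel_size=0.006):
    """Formula (6): d' = f' * baseline / x (the offset term of formula (3)
    is omitted here for brevity)."""
    f_prime = corrected_focal(f, p2_y, p1_y, pixel_size)
    x = pixel_size * px
    return f_prime * baseline / x

# At the image mid-height (p2_y == p1_y), f' reduces to f and the result
# matches plain single-point triangulation.
d_mid = perpendicular_distance(px=100, p2_y=240, p1_y=240)
```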

Once we can find the coordinates of any laser spot on a parallel plane, the general problem reduces neatly: for any laser projection point in 3D space, first construct the parallel plane containing that point, then apply the algorithm above.

For each ranging sample, the preceding algorithm produces an array dist[n], where dist[i] is the distance to the laser point imaged at pixel row i. For a camera with 640x480 resolution, n is 480.

A step-by-step 3D scan over 180 degrees in 1-degree increments thus yields a 180x480 point cloud array.

Scanning in 0.3-degree increments yields a 600x480 point cloud array.
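
The dist[n] arrays can be assembled into a point cloud for visualization along these lines. This is a sketch: the per-row vertical angle used here is an assumed placeholder, whereas in the real build it follows from the camera geometry of formulas (6) and (7):

```python
import math

def scan_to_point_cloud(scans, step_deg):
    """Convert per-angle distance arrays into a 3D point cloud.

    scans[i][j] is the distance measured at rotation angle i*step_deg
    for image row j, i.e. one dist[n] array per scanner position."""
    points = []
    n_rows = len(scans[0])
    for i, dist in enumerate(scans):
        pan = math.radians(i * step_deg)                 # scanner rotation
        for j, d in enumerate(dist):
            tilt = math.radians((j - n_rows / 2) * 0.1)  # assumed per-row angle
            # Spherical -> Cartesian conversion for display
            x = d * math.cos(tilt) * math.cos(pan)
            y = d * math.cos(tilt) * math.sin(pan)
            z = d * math.sin(tilt)
            points.append((x, y, z))
    return points

# 1-degree steps over 180 degrees with a 480-row camera -> 180*480 points
cloud = scan_to_point_cloud([[1000.0] * 480 for _ in range(180)], step_deg=1.0)
```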

Determination and solution of laser spot pixel coordinates

Here we discuss how to compute the spot coordinates from the camera image. Specifically, two problems must be solved:

  1. Identifying the laser spot and rejecting interference

  2. Determining the exact position of the spot center

First, consider problem 1. It seems simple, but in practice there are many difficulties, as these images from actual operation show:

Figure: images captured by cameras in different environments and configurations

All three images above were taken by the camera with a red laser. Image (a) is the ideal case: apart from the laser spot, the image contains almost nothing. Although there is a small bright interference point near the top, as long as the laser spot remains the brightest point, simply searching for the brightest pixel in the image works.

In image (b), a fluorescent lamp appears in the frame. Because the lamp is also very bright, its center has the same brightness as the laser spot's center (both saturate to pure white). For such images, one approach is to also check the colors of neighboring pixels: the pixels surrounding a red laser spot are red.

Image (c) contains other red objects besides the laser spot, and some highlights appear as pure white, so the color-based approach above fails too. A different method is required.

A perfect laser spot extraction algorithm almost certainly does not exist. One counterexample: if two similar laser spots appear in the image (say, from another range finder or from a laser pointer), it is difficult to decide from a single image pair which spot is the correct one.

At the same time, more accurate spot identification also requires cooperating hardware and optics; the details are beyond the scope of this article. Several feasible methods are:

1. Add an optical filter

References [3] and [4] mention using a filter that passes only light at the laser's emission wavelength, which avoids interference to a certain extent.

2. Adjust the camera exposure time

Adjusting the camera's exposure can also effectively remove interference from the image, as in (b) and (c). For a 5 mW laser, within a certain distance the light intensity per unit area is still higher than that of ordinary ambient light [3] (the human eye can see a laser pointer's dot on the ground even outdoors). So as long as the camera's exposure is turned down far enough, it is entirely possible to remove everything except the laser spot from the image.

3. Use a laser outside the visible spectrum

For example, an infrared laser can be used. This works for the same reason remote controls use infrared LEDs: there is little infrared interference in artificial environments. Combined with an infrared-pass filter, it effectively removes interference from fluorescent lamps. However, daylight and incandescent lamps also contain strong infrared, so this method cannot be used on its own.

4. Increase the laser power

Combined with exposure control, raising the laser's output power is enough to keep only the spot in the image. But this is also risky, especially when a point laser is used.

This project uses all of the methods above.

For problem (2), the simplest approach is to take the coordinates of the brightest pixel directly. But as the formulas above show, if px is an integer, the computed q changes in large jumps. This section therefore describes how to refine px to subpixel precision.

There has been much academic research on this problem; we recommend [5], which surveys several subpixel laser spot localization algorithms and analyzes their pros and cons. I will not repeat them here.

Simply put, the brightness of the imaged laser spot can be treated as a sampled two-dimensional Gaussian. The spot center can then be estimated by fitting, or by simpler linear interpolation or centroid (center of mass) methods.

This build uses a simple centroid method to obtain the laser spot center at subpixel precision.
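
The centroid method can be sketched in a few lines. The 50% threshold here is an assumed cutoff to keep background pixels from biasing the estimate; the real build's thresholding may differ:

```python
def centroid_px(row):
    """Subpixel laser spot center in one image row, estimated as the
    brightness centroid (center of mass). `row` is a list of pixel
    brightness values; pixels below half the peak brightness are
    ignored so background light does not bias the sum."""
    threshold = max(row) * 0.5
    lit = [(i, v) for i, v in enumerate(row) if v >= threshold]
    total = sum(v for _, v in lit)
    return sum(i * v for i, v in lit) / total

# A spot whose brightness straddles two pixels lands between them:
px = centroid_px([0, 0, 10, 200, 200, 10, 0, 0])   # 3.5
```

For a line laser, the same computation is simply repeated for every row of the image.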

Figure: with a filter installed, the system identifies and computes the laser spot's center position against a white fluorescent lamp (right).

One may ask whether such an estimate is accurate and effective. In general, 0.1-pixel accuracy is quite reliable, and some papers report reliable localization down to 0.01 pixel.

The solution process for line lasers is similar to that for point lasers; the difference is that the laser center must be identified in every row (or column) of the image. For more information see [6] and [7], which present refined spot center extraction algorithms for line lasers.

Camera Calibration

The basic principle of laser ranging is very simple, but implementation involves many constraints. Besides determining the parameters in the triangulation formula above, it is also important to calibrate the camera so that the images match the ideal pinhole camera model.

The first questions to answer are: why must the camera be calibrated, and what is being corrected?

The main reason is that real cameras are not the ideal pinhole camera described above. A pinhole camera works like camera-obscura imaging: light passes through a small hole and forms an image on the sensor behind it. Real cameras, however, use optical lenses to gather light; the lenses are not ideally shaped (ideal surfaces are difficult to machine), and the sensor chip is not perfectly parallel to the lens either [8]. In short, real images are distorted and offset. Using the raw camera image directly for ranging would inevitably introduce errors. The camera must therefore be calibrated, so that images free of this distortion and offset can be used for laser ranging.

Figure: original and corrected pictures of the camera

The left picture is the raw image from the camera; the distortion is clearly visible. After calibration, the estimated distortion parameters can be used to rectify the image, producing the picture on the right.

The principle, algorithms, and procedure of camera calibration are beyond the scope of this article; for details, refer to the following documents and tutorials: [8] [9] [10]. The later build section of this article will also cover the calibration process and results for this project.
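
To make the distortion concrete, here is the radial part of the lens distortion model that calibration estimates (the model described in [8]). The coefficients k1 and k2 below are example values for illustration, not from any real camera:

```python
def distort(x, y, k1, k2):
    """Apply the radial lens distortion model: a point (x, y) in
    normalized image coordinates maps to (x*s, y*s) with
    s = 1 + k1*r^2 + k2*r^4. Calibration estimates k1, k2 (and more)
    so the mapping can be inverted to rectify images."""
    r2 = x * x + y * y
    scale = 1 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

# With barrel distortion (k1 < 0), points far from the optical axis are
# pulled inward, which is why straight lines near the image edge bow:
xd, yd = distort(0.8, 0.0, k1=-0.2, k2=0.0)
```

Ranging directly from the distorted coordinates would bias px, and hence the distance, most strongly near the image edges.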

Calibrating and solving the parameters used in triangulation ranging

The triangulation formula described above involves the following parameters:

beta: angle of the laser beam

s: distance between the laser center and the camera center

f: camera focal length

pixelSize: physical size of one sensor pixel

offset: compensation for the laser spot's imaging position

Some of these parameters are difficult to measure directly, and some are difficult to control precisely during assembly, yet their numerical accuracy has a large effect on ranging accuracy. For example, pixelSize is on the order of microns, and a tiny error in it can cause a large deviation in the final distance.

We therefore solve for them in a calibration step after the range finder is assembled. The actual process is to record the px values produced by the ranging setup at a series of known, measured distances, and then determine the parameters by curve fitting.

The specific operations in this part will be described in the subsequent preparation/correction process.
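
The fitting step can be sketched as follows. Formula (4) collapses to a two-parameter form d = k / (px + b), where k = f·s / (pixelSize·sin(beta)) and b = offset / pixelSize are lumped constants, so the individual parameters never need to be measured separately. Linearizing as 1/d = px/k + b/k reduces the fit to an ordinary least-squares line; this is a sketch of the idea, not the build's actual calibration code:

```python
def fit_range_model(px_samples, d_samples):
    """Fit the lumped model d = k / (px + b) to measured (px, distance)
    pairs. Linearizing as 1/d = px/k + b/k turns this into an
    ordinary least-squares line fit, done by hand below."""
    xs = px_samples
    ys = [1.0 / d for d in d_samples]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope_num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    slope_den = sum((x - mean_x) ** 2 for x in xs)
    slope = slope_num / slope_den          # = 1/k
    intercept = mean_y - slope * mean_x    # = b/k
    k = 1.0 / slope
    b = intercept * k
    return k, b

# Recover known lumped parameters from synthetic "measurements":
samples = [(px, 25000.0 / (px + 12.0)) for px in range(20, 400, 20)]
k, b = fit_range_model([p for p, _ in samples], [d for _, d in samples])
```

In practice the (px, distance) pairs come from placing a target at a series of measured distances, exactly as described above.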

Low-cost 3D Lidar

This part will be introduced in the next article.

References

[1] Details of the laser range finder
http://www.eng.buffalo.edu/ubr/ff03laser.php

[2] Webcam based DIY laser rangefinder
http://sites.google.com/site/todddanko/home/webcam_laser_ranger

[3] K. Konolige, J. Augenbraun, N. Donaldson, C. Fiebig, and P. Shah. A low-cost laser distance sensor. In Int. Conference on Robotics and Automation (ICRA), 2008.

[4] Kenneth Maxon. A Real-time Laser Range Finding Vision System.

[5] Fisher, R. B. and D. K. Naidu. A Comparison of Algorithms for Subpixel Peak Detection. Springer-Verlag, Heidelberg, 1996.

[6] Mertz, C., J. Kozar, J. R. Miller, and C. Thorpe. Eye-safe laser line striper for outside use. Intelligent Vehicle Symposium, 2002.

[7] Mertz, C., J. Kozar, J. R. Miller, and C. Thorpe. Eye-safe laser line striper for outside use. Intelligent Vehicle Symposium, 2002.

[8] Learning OpenCV: Computer Vision with the OpenCV Library, Gary Bradski and Adrian Kaehler, First Edition, 2008, O'Reilly, ISBN 978-0-596-51613-0.

[9] Sharing experience in implementing stereoscopic vision with OpenCV
http://blog.csdn.net/scyscyao/article/details/5443341

[10] Camera Calibration Toolbox for Matlab
http://www.vision.caltech.edu/bouguetj/calib_doc/
