Visual SLAM in Practice (1): RGB-D SLAM V2


Foreword

First, a quick plug: SLAM researchers are welcome to join our QQ exchange group (254787961), whether you are an expert or a complete beginner.

After reading the previous three posts, some readers have asked: you have covered a lot of background, useful and otherwise, but can you show us something we can actually get our hands on? Yes. Starting now, we will actually run several classic visual SLAM programs together, to give everyone an intuitive impression. That is why this series is called "Visual SLAM in Practice". The programs include:

    • RGB-D SLAM V2
    • SVO
    • KinectFusion
    • ORB-SLAM

If you have any suggestions, you can email me or chat in the group. Note that I am writing these posts as I run the programs, rather than writing everything up after the fact, so the programs eventually covered may deviate a little from this plan. Enough preamble; let me first introduce the experimental equipment.

Laboratory equipment

1. Hardware

How can you do SLAM without a robot? Boss, three of them to start with!

This robot is the Viewbot, a modified version of the TurtleBot. For more information on the TurtleBot, see: http://wiki.ros.org/Robots/TurtleBot/. The Viewbot is a modification made by a company in Shanghai: it can carry some additional sensors, and the original black base plate has been replaced with a transparent one. It is used just like a TurtleBot and costs a bit over 10,000 RMB; to avoid the appearance of advertising I will not post the link. The advantage of using this robot is that ROS has a ready-made package for it, so you do not have to write your own drivers: one command starts the sensors and reads their data, and another lets you teleoperate it. Very convenient.

The main parts of the robot are its base and the Kinect. The base has its own inertial sensors and can estimate its pose; the Kinect everyone already knows, so I will not say much about it. In fact, the RGB-D SLAM V2 we run today does not need the base at all: a Kinect alone is enough. Nor do we need several robots; one is plenty (the labeled one in the photo).

2. Software

On the software side, all we need is one laptop to run the programs. I am using an ASUS laptop with Ubuntu, loaded with a Mac theme pack, full knockoff flavor:

The specific software configuration will be detailed later.

3. Environment

The environment is my laboratory; nothing more to elaborate on here.

The SLAM program

The RGB-D SLAM V2 program was written by F. Endres et al.; see [1] for the paper. Why choose this program first? Because its principles were introduced in our previous posts: it is a depth-camera-based program whose backend performs pose graph optimization using g2o. Moreover, its code is directly compatible with the ROS Hydro release, so it runs with very little configuration. Here is how to run it:

    1. Download the source code from the author's homepage: http://felixendres.github.io/rgbdslam_v2/. Click the tar.gz or zip link on the right to download it locally.
    2. After the download finishes, unzip it; you get a package with many files inside:

No hurry; first read the README: "RGBDSLAMv2 is based on the ROS project, OpenCV, PCL, Octomap, SiftGPU and more – thanks!" So what are you waiting for? Install them! Fortunately all of these are easy to install under Ubuntu; a few commands will do.

ROS Hydro installation guide: http://wiki.ros.org/cn/hydro/Installation/Ubuntu (add the package source, then install directly)

OpenCV on Linux installation guide: http://blog.sciencenet.cn/blog-571755-694742.html (compile from source)

PCL: http://www.pointclouds.org/downloads/linux.html (add the PPA, then install)

Whether to install the last few is up to you; even without them, RGB-D SLAM V2 can still run.

    3. After installing the dependencies, look at the "Installation from scratch" section of the README and follow it step by step. The author lists all the commands, so I will not repeat them here. When you are done, rgbdslam will be among your ROS packages.

    4. Plug the Kinect's USB cable into the computer and run: roslaunch rgbdslam openni+rgbdslam.launch. You will see a nice interface.

You can see that the author really put in the effort and even built a UI; lazy people like me would never do that... In this interface, the two images at the bottom are the Kinect's current color and depth frames, and above them is the online 3D point cloud (which can be rotated with the mouse). At this point the program is in standby: pressing Enter captures a single frame, while the space bar starts continuous capture.

In addition, the program's parameters can be adjusted in the openni+rgbdslam.launch file: for example, the feature type (SIFT, SURF, ORB, and SiftGPU are supported), the maximum number of features, and so on.

Run the program

Now, from the remote side, we bring up the TurtleBot:

roslaunch turtlebot_bringup minimal.launch (start the base)

roslaunch turtlebot_teleop keyboard_teleop.launch (start keyboard teleoperation)

Press the space bar in RGB-D SLAM V2 and let the robot walk around. The status bar of the UI shows the program's running state; you can see it extracting features, adding new frames, and so on. When a match succeeds, the point cloud is updated and follows the robot as it turns.

I had the robot drive a few laps around a corner of the lab, mapping a pile of boxes (junk, really) stacked in the middle. When you are satisfied, press the space bar again to stop capturing, then save what you need from the menu: the robot's trajectory, the final point cloud, and so on. The trajectory is a txt file, and the point cloud is a pcd file, which can be viewed with pcl_viewer after installing PCL.

Although the above may sound like a joke, the final point cloud really is high-definition and uncensored:

For the trajectory, a short Matlab script to plot it will do:
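If you prefer Python over Matlab, here is a minimal sketch. It assumes the saved trajectory follows the TUM RGB-D benchmark convention (one pose per line: timestamp tx ty tz qx qy qz qw), which is the format I believe this program uses; the filename is just an example.

```python
import numpy as np

def load_tum_trajectory(path):
    """Parse a TUM-format trajectory file: each non-comment line is
    'timestamp tx ty tz qx qy qz qw'. Returns an (N, 3) array of positions."""
    rows = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            vals = line.split()
            rows.append([float(v) for v in vals[1:4]])  # keep tx, ty, tz
    return np.array(rows)

# Write a tiny fake trajectory, then load it back.
with open("trajectory.txt", "w") as f:
    f.write("# timestamp tx ty tz qx qy qz qw\n")
    f.write("0.0 0.0 0.0 0.0 0 0 0 1\n")
    f.write("1.0 1.0 0.0 0.0 0 0 0 1\n")
    f.write("2.0 1.0 1.0 0.0 0 0 0 1\n")

xyz = load_tum_trajectory("trajectory.txt")
# Top-down plot (requires matplotlib):
#   import matplotlib.pyplot as plt
#   plt.plot(xyz[:, 0], xyz[:, 1]); plt.axis("equal"); plt.show()
print(xyz.shape)
```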

You can see a break in the trajectory: the robot was moving relatively fast and the algorithm lost tracking, but it later recovered through loop closure detection.

Evaluation

Finally, let's summarize the experiment.

RGB-D SLAM V2 is the algorithm described in the 2014 paper [1]. It integrates a range of techniques from the SLAM field: image features, loop closure detection, point clouds, graph optimization, and so on, making it a very comprehensive and excellent program. Its UI is also quite polished, and you can continue developing on top of its source code. The authors also provide datasets for researchers to test against.
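Of these techniques, graph optimization is the one our earlier posts focused on. To make the idea concrete, here is a hand-rolled toy example, not the actual g2o API: a 1D pose graph with odometry edges and one loop-closure edge, solved as an ordinary least-squares problem.

```python
import numpy as np

# Toy 1D pose graph: 4 poses. Odometry claims each step moves +1.0,
# but a loop-closure edge says pose 3 should coincide with pose 0.
# Unknowns: x1, x2, x3 (x0 is fixed at 0 to anchor the graph).
# Each edge is (i, j, z) with z the measured displacement x_j - x_i.
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0),  # odometry
         (0, 3, 0.0)]                            # loop closure

A = np.zeros((len(edges), 3))  # Jacobian w.r.t. [x1, x2, x3]
b = np.zeros(len(edges))
for row, (i, j, z) in enumerate(edges):
    if i > 0:
        A[row, i - 1] -= 1.0
    if j > 0:
        A[row, j - 1] += 1.0
    b[row] = z

x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x)  # the odometry drift is spread evenly along the trajectory
```

With the loop-closure edge included, the accumulated drift is distributed evenly over the poses (the solution is 0.25, 0.5, 0.75), which is exactly the effect you see when RGB-D SLAM closes a loop and the whole map snaps back into consistency.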

Disadvantages: because it extracts features (SIFT is time-consuming) and renders point clouds, the program is resource-hungry, so its real-time performance is not great; sometimes you will find it stuck and have to wait a moment. If the robot moves too fast, tracking is easily lost, which is why my robot really did crawl across the floor like a turtle... and once it turned its head quickly, the trajectory would usually break. In addition, the program collects keyframes at a high rate (a short run produces dozens of frames), so it is not well suited to long-duration SLAM. Finally, the point cloud had over 3 million points; I had to apply a grid filter just to display it.
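The grid filter mentioned above is what PCL calls a voxel grid filter (pcl::VoxelGrid). As a minimal illustration of the idea in plain numpy, not the PCL API:

```python
import numpy as np

def voxel_downsample(points, leaf=0.05):
    """Voxel-grid filter: snap each point into a cube of side `leaf`
    and keep one representative (the centroid) per occupied cube."""
    idx = np.floor(points / leaf).astype(np.int64)  # voxel index per point
    # Group points by voxel index and average each group.
    _, inverse, counts = np.unique(idx, axis=0,
                                   return_inverse=True, return_counts=True)
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, points)               # sum points per voxel
    return sums / counts[:, None]                  # centroid per voxel

rng = np.random.default_rng(0)
cloud = rng.random((100000, 3))          # 100k random points in a unit cube
small = voxel_downsample(cloud, leaf=0.1)
print(len(cloud), "->", len(small))      # at most 10*10*10 = 1000 voxels
```

The leaf size trades detail for speed: a 3-million-point cloud at a few centimeters of leaf size shrinks by one to two orders of magnitude, which is what makes it displayable.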

Reference documents

[1] F. Endres et al., "3-D Mapping with an RGB-D Camera," IEEE Transactions on Robotics, 2014.

