Hybrid Map Representations in SLAM
SLAM map types
Representations of hybrid maps
Summary
Introduction to SLAM Map Types
The representation of a map depends largely on the current task, the sensors, and the type of environment, as shown in the figure below:
A single-type map mainly takes one of the following forms:
In many cases a single map representation cannot meet the demands of complex tasks or remain robust in complex environments, so a hybrid map representation is generally used instead. The main reasons are summarized below:
1. Task requirements
In many cases the ultimate goal of SLAM is not to build a map but to navigate, and navigation requires the topological structure of the environment as well as localization and obstacle avoidance.
2. Multi-sensor fusion
When a single sensor cannot meet the requirements, multi-sensor fusion is often adopted, and it is usually difficult to fuse data from two or more sensors into a single map type.
3. Improving the robustness of SLAM and navigation
Hybrid maps can record different kinds of data, so localization and mapping accuracy can still be ensured in complex environments.

Introduction to Hybrid Map Representations
Hybrid map representations are presented below through three papers. The first is an autonomous homing algorithm based on laser-camera fusion.
Choi D G, Shim I, Bok Y, et al. Autonomous homing based on laser-camera fusion system. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2012: 2512-2518.

Problem Description
1. In corridor-like environments, the positioning error of the laser sensor is large.
2. The camera cannot play its role in environments with little feature texture.
As shown in the figure below, if the environment to be matched has a dominant direction, then after point-cloud matching the translation component parallel to that direction is ambiguous in scale.
T = t_r + t_a
t_a = λ v_a
Here t_r is the well-constrained translation component and t_a is the ambiguous one; that is, the scale λ along the dominant direction v_a is ambiguous.
In this paper, the ambiguity is resolved by combining a monocular camera to recover the scale along the dominant direction. The remaining problem is how to determine whether the environment has a dominant direction (common in corridors and similar environments). The figure below shows the exact procedure:
Operation Steps
1. Compute the direction of each pair of corresponding points in the ICP match.
2. Accumulate the directions of all points to generate the gradient histogram shown in the left image, and select the largest bin as the dominant direction of the current frame.
3. Points whose direction falls into the dominant direction are marked as primary points; the rest are secondary points.
4. Compute the ratio of secondary points to all points; if it is below 0.2, the frame is judged ambiguous.
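As a rough illustration, the steps above can be sketched as follows. This is a minimal NumPy sketch under my own assumptions; the function name, bin count, and the way the 0.2 threshold is applied are illustrative, not the paper's code:

```python
import numpy as np

def detect_dominant_direction(directions, n_bins=36, minor_ratio_thresh=0.2):
    """Histogram the directions of ICP correspondences and decide whether
    the current frame has an ambiguous dominant direction.

    directions: (N, 2) array of unit direction vectors, one per matched point.
    Returns (dominant_angle, is_ambiguous).
    """
    # Fold angles to [0, pi): a direction and its opposite are equivalent.
    angles = np.arctan2(directions[:, 1], directions[:, 0]) % np.pi
    hist, edges = np.histogram(angles, bins=n_bins, range=(0.0, np.pi))

    # Largest bin defines the dominant direction of the current frame.
    peak = np.argmax(hist)
    dominant_angle = 0.5 * (edges[peak] + edges[peak + 1])

    # Points in the peak bin are "primary"; the rest are "secondary".
    in_peak = (angles >= edges[peak]) & (angles < edges[peak + 1])
    secondary_ratio = 1.0 - in_peak.sum() / len(angles)

    # Few secondary points -> translation along the dominant direction is
    # poorly constrained, so the frame is judged ambiguous.
    is_ambiguous = secondary_ratio < minor_ratio_thresh
    return dominant_angle, is_ambiguous
```

For example, a frame whose correspondences point almost entirely one way (a long corridor wall) is flagged as ambiguous, while a frame with two balanced directions (a corner) is not.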
Problem Solving
1. If the current frame has no ambiguity, use laser-only ICP matching.
2. If the current frame is ambiguous, combine the monocular camera to compute the ambiguous scale λ.
Feature points extracted by SIFT or SURF (assumed to lie only on the wall) can be handled as shown in the figure below. Then, using the laser data, lines are fitted to the 2D points after ignoring the y-axis direction, in order to solve for λ.
An image feature point can be expressed as a ray in the laser coordinate system:
Given the sets of corresponding feature points from two image frames, q and q', the scale information is computed.
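The idea of fixing the scale from a laser-fitted wall can be sketched in 2D as follows. This is a hypothetical simplification: the feature ray and the fitted wall line are both assumed to be expressed in the laser frame, and the function and variable names are my own, not the paper's:

```python
import numpy as np

def scale_from_wall(ray_dir, wall_point, wall_normal):
    """Intersect a bearing ray (from the sensor origin) with a wall line
    fitted to the laser points; the intersection depth fixes the scale.

    ray_dir:     unit direction of the image-feature ray in the laser frame
    wall_point:  any point on the fitted wall line
    wall_normal: unit normal of the wall line
    Returns lambda such that lambda * ray_dir lies on the wall.
    """
    denom = ray_dir @ wall_normal
    if abs(denom) < 1e-9:
        # Ray parallel to the wall: the intersection (and scale) is undefined.
        raise ValueError("ray parallel to wall; scale unobservable")
    return (wall_point @ wall_normal) / denom
```

For a ray pointing straight at a wall three meters away, this recovers a scale of 3.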
A Unified Bayesian Framework and Global Relocalization Based on Hybrid Maps (Topological and Metric)
Tully S, Kantor G, Choset H. A unified Bayesian framework for global localization and SLAM in hybrid metric/topological maps. The International Journal of Robotics Research, 2012, 31(3): 271-288.

Problem Description
1. How to unify the high-level topological map and the low-level metric map in one framework.
2. How to perform global relocalization and SLAM loop closure in environments with many similar-looking places.
For loop-closure detection and global relocalization, we tend to assume that the correct place can be uniquely identified whenever relocalization is needed or a loop closure occurs. But even for humans this is very difficult: the real world contains too many repeated scenes for us to know where we are, as shown in the figure below. A robust and reliable solution to global relocalization and loop closure may have to keep every possible hypothesis and perform likelihood updates again and again.
Problem Solving
The unified Bayesian framework:
Parameter description:
SLAM and localization on the global topological map and the local submaps:
In the local metric map, an EKF is used to update the pose; when the map is unknown, EKF-SLAM is used.
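A minimal sketch of the EKF measurement update used on a local metric map, in generic textbook form rather than the paper's exact formulation; all names are illustrative:

```python
import numpy as np

def ekf_update(x, P, z, H, R):
    """One EKF measurement update.

    x, P: state mean and covariance
    z:    measurement
    H:    (linearized) measurement Jacobian
    R:    measurement noise covariance
    """
    y = z - H @ x                     # innovation
    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x + K @ y                 # corrected mean
    P_new = (np.eye(len(x)) - K @ H) @ P  # corrected covariance
    return x_new, P_new
```

In EKF-SLAM the state x additionally stacks the landmark positions of the current submap, but the update step has the same shape.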
For the topological map estimate, the mobile robot needs to know which submap it is currently in:
The estimate is generally carried out recursively, with the previous frame's result used as the prior probability:
Multi-hypothesis tree test
Using a multi-hypothesis tree, every loop-closure and relocalization possibility is preserved, and a likelihood update is performed at each step until only one possibility remains. As shown in the figure below, depending on the application the setting can be divided into fully unknown, fully known, and partially unknown cases.
A more vivid example can be seen in the following illustration, where a hypothesis update is performed at every step.
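The bookkeeping of hypotheses can be sketched as follows. This is an illustrative simplification of the multi-hypothesis idea: a flat list with likelihood reweighting and pruning, rather than the paper's actual tree structure:

```python
def update_hypotheses(hypotheses, likelihoods, prune_thresh=0.01):
    """Multi-hypothesis likelihood update: keep every relocalization /
    loop-closure hypothesis, reweight each by the new measurement
    likelihood, and drop hypotheses whose weight has become negligible.

    hypotheses:  list of (label, weight) pairs
    likelihoods: per-hypothesis measurement likelihoods for this frame
    """
    # Reweight each hypothesis by its measurement likelihood.
    updated = [(h, w * l) for (h, w), l in zip(hypotheses, likelihoods)]
    # Normalize so the weights form a distribution.
    total = sum(w for _, w in updated)
    updated = [(h, w / total) for h, w in updated]
    # Prune hypotheses that have become extremely unlikely.
    return [(h, w) for h, w in updated if w >= prune_thresh]
```

Repeating this every frame eventually leaves a single dominant hypothesis, mirroring the "keep every possibility, update the likelihood again and again" strategy described above.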
A Visual SLAM System Using a Multi-Layer Feature Structure and Unsupervised Geometric Constraints

Problem Description
Monocular visual SLAM has the following problems:
1. Scale drift, and inability to cope with rapid motion.
2. Sensitivity to non-uniform feature distribution.
These problems face many of the best monocular SLAM systems today, largely because the sensor itself is limited. Many researchers have therefore turned to fusing the monocular camera with other sensors.
In addition, most monocular SLAM systems use only low-level visual features, while most human environments contain higher-level features and geometric constraints between them.
Lu Y, Song D. Visual navigation using heterogeneous landmarks and unsupervised geometric constraints. IEEE Transactions on Robotics, 2015, 31(3): 736-749.

Problem Solving
Steps:
1. Input keyframe image information; feature points and line segments can be extracted directly as low-level features.
2. Through geometric structure, points and lines are matched to produce higher-level features: lines, vanishing points, and spatial planes.
Note: Arrows indicate parent-child node relationships
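As an example of lifting line segments to a higher-level feature, one common way to obtain a vanishing point from a group of matched segments is as the least-squares intersection of their homogeneous image lines. This is a generic sketch, not necessarily the authors' method:

```python
import numpy as np

def vanishing_point(segments):
    """Estimate the vanishing point of a group of line segments as the
    least-squares intersection of their homogeneous image lines.

    segments: list of ((x1, y1), (x2, y2)) endpoints in pixel coordinates
    Returns the vanishing point in homogeneous coordinates (3,).
    """
    # Endpoints lifted to homogeneous form define a line l = p1 x p2.
    lines = []
    for (x1, y1), (x2, y2) in segments:
        lines.append(np.cross([x1, y1, 1.0], [x2, y2, 1.0]))
    L = np.array(lines)
    # All lines should pass through the vanishing point v, i.e. L v = 0;
    # the smallest right singular vector gives the least-squares solution.
    _, _, vt = np.linalg.svd(L)
    return vt[-1]
```

Dividing the first two components by the third yields the pixel coordinates of the vanishing point (when it is finite).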
The overall pipeline is quite complex, and maintaining the relationships between features is cumbersome, as shown in the figure below. The author also provides open-source code and datasets, which can be downloaded from the author's website and from GitHub. Interested readers can refer to the original paper.
Summary
Hybrid maps are more widely applicable and more robust than single-type maps, while their framework is more complex and more cumbersome to maintain.
Questions can be discussed in the QQ group (mobile robot navigation and control group: 199938556); the notes will be revised based on feedback from the group.