Navigation and localization are important topics in robotics research.
Generally, a robot placed in an unknown environment first uses a laser sensor (or a depth sensor whose output is converted into laser data) to build a map, and then navigates and localizes itself based on that map. ROS provides several complete packages that can be used directly for this.
In ROS, the following three packages are essential for navigation:
(1) move_base: plans a path from the incoming messages and drives the mobile robot to the specified position;
(2) gmapping: builds a map from laser data (or laser data simulated from depth data);
(3) amcl: localizes the robot on an existing map.
Reference: http://www.ros.org/wiki/navigation/Tutorials/RobotSetup
http://www.ros.org/wiki/navigation/Tutorials
The following figure shows the overall layout of the navigation stack:
The white boxes are required components that ROS already provides, the gray boxes are optional ROS components, and the blue boxes are the components that the user must supply for a specific robot platform.
1. Sensor transforms
This part uses tf to transform sensor coordinates. The robot's control center is not necessarily located at the sensor, so sensor data must be converted into coordinates relative to the control center. For example, the data obtained by the laser is expressed in the base_laser coordinate frame, but we treat base_link as the robot's center, so we must transform between the two frames based on their relative positions.
We do not need to perform this transformation ourselves: we only need to tell tf the positional relationship between base_laser and base_link, and the data is converted automatically. For a concrete implementation, see http://blog.csdn.net/hcx25909/article/details/9255001.
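To make the frame transformation concrete, here is a minimal pure-Python sketch of the 2D math that tf performs once it knows the base_link → base_laser transform. The function name `transform_point` and the mounting offsets are made up for illustration; in practice you would publish the transform to tf (for example with static_transform_publisher) rather than compute this by hand.

```python
import math

def transform_point(x, y, tx, ty, theta):
    """Transform a point from the sensor frame into the robot base frame.

    (tx, ty, theta) is the pose of the sensor frame expressed in the base
    frame -- the same information you would give tf as the
    base_link -> base_laser transform.
    """
    bx = tx + x * math.cos(theta) - y * math.sin(theta)
    by = ty + x * math.sin(theta) + y * math.cos(theta)
    return bx, by

# Hypothetical setup: a laser mounted 10 cm forward of base_link, not
# rotated. An obstacle 30 cm ahead of the laser is then about 40 cm
# ahead of base_link.
bx, by = transform_point(0.3, 0.0, 0.1, 0.0, 0.0)
```

Once the transform is registered with tf, every node in the system can ask for laser data in the base_link frame without repeating this computation.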
2. Sensor sources
These are the sensor data inputs for robot navigation; generally there are only two types:
(1) laser sensor data
(2) point cloud data
For a concrete implementation, see: http://www.ros.org/wiki/navigation/Tutorials/RobotSetup/Sensors
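A laser source publishes sensor_msgs/LaserScan messages, which describe each beam by an angle computed from angle_min and angle_increment plus a measured range. The sketch below (pure Python, with a made-up function name `scan_to_points`) shows how those fields relate to Cartesian points in the sensor frame:

```python
import math

def scan_to_points(angle_min, angle_increment, ranges, range_max):
    """Convert laser ranges into (x, y) points in the sensor frame,
    skipping beams that returned nothing (readings at or beyond the
    sensor's maximum range)."""
    points = []
    for i, r in enumerate(ranges):
        if r >= range_max:
            continue  # no obstacle detected along this beam
        a = angle_min + i * angle_increment
        points.append((r * math.cos(a), r * math.sin(a)))
    return points

# Three beams at -90, 0, and +90 degrees; only the middle beam sees a
# wall, 1 m straight ahead of the sensor.
pts = scan_to_points(-math.pi / 2, math.pi / 2, [10.0, 1.0, 10.0], 5.0)
```

Point cloud sources (sensor_msgs/PointCloud2) skip this step and deliver Cartesian points directly.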
3. Odometry source
For robot navigation, you need to provide odometry data, published both as a tf transform and as a nav_msgs/Odometry message. For details, see:
http://www.ros.org/wiki/navigation/Tutorials/RobotSetup/Odom
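The pose carried by nav_msgs/Odometry is typically obtained by dead reckoning: integrating the commanded or measured velocities over time. A minimal sketch of that integration step (the function name `integrate_odometry` and the numbers are made up for illustration):

```python
import math

def integrate_odometry(x, y, theta, v, w, dt):
    """One step of dead reckoning: v is the forward velocity (m/s),
    w the angular velocity (rad/s), dt the time step (s). The result
    is the pose an odometry node would stamp into nav_msgs/Odometry
    and broadcast as the odom -> base_link tf transform."""
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += w * dt
    return x, y, theta

# Drive straight at 0.5 m/s for 2 s: the robot ends up 1 m ahead.
pose = integrate_odometry(0.0, 0.0, 0.0, 0.5, 0.0, 2.0)  # (1.0, 0.0, 0.0)
```

Because this estimate drifts over time, the navigation stack combines it with amcl's map-based localization rather than trusting it alone.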
4. Base Controller
During navigation, this part is responsible for taking the velocity commands produced by the stack, packaged as geometry_msgs/Twist messages, and turning them into actual motion on the hardware platform.
5. map_server
A map is not strictly required for navigation; without one, the robot simply navigates as if in an infinite space with no obstacles. In practice, however, we usually do need a map when navigating.
For a concrete implementation, see: http://www.ros.org/wiki/slam_gmapping/Tutorials/MappingFromLoggedData
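map_server describes a map with a YAML file whose resolution and origin fields relate world coordinates (metres) to grid cells. A minimal sketch of that conversion, assuming a hypothetical function name `world_to_cell` and made-up map parameters:

```python
import math

def world_to_cell(wx, wy, origin_x, origin_y, resolution):
    """Map a world coordinate (metres) to a grid cell index, using the
    resolution (metres/cell) and origin (pose of cell (0, 0)) fields
    of a map_server YAML file."""
    mx = int(math.floor((wx - origin_x) / resolution))
    my = int(math.floor((wy - origin_y) / resolution))
    return mx, my

# A 25 cm/cell map whose origin is at (-10, -10): the world origin
# falls in cell (40, 40).
cell = world_to_cell(0.0, 0.0, -10.0, -10.0, 0.25)  # (40, 40)
```

Points that map to indices outside the grid's width and height are simply outside the known map.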
In ROS navigation, the costmap_2d package is responsible for creating and updating 2D (or 3D) cost maps based on sensor information. A ROS costmap is a grid: each cell holds a value from 0 to 255 and falls into one of three states: occupied (obstacle), free (idle), or unknown. The relationship between states and values is as follows:
There are five categories in total (in the figure below, the red outline is the robot's footprint, and the black box next to it is the obstacle cell):
(1) lethal: the robot's center overlaps this cell; the robot is certainly in collision with the obstacle.
(2) inscribed: the cell lies within the robot's inscribed circle; the robot is certainly in collision with the obstacle.
(3) possibly circumscribed: the cell lies between the robot's inscribed and circumscribed circles; whether the robot collides depends on its orientation.
(4) freespace: space without obstacles.
(5) unknown: unknown space.
For details, see: http://www.ros.org/wiki/costmap_2d
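The five categories above correspond to specific cell values in costmap_2d (to the best of my knowledge: 255 for unknown, 254 for lethal, 253 for inscribed, 1-252 for the decreasing "possibly circumscribed" band, and 0 for free space; check the costmap_2d documentation for the authoritative constants). A small classifier sketch with an illustrative function name `classify_cost`:

```python
def classify_cost(cost):
    """Interpret a costmap_2d cell value (0-255) according to the
    five categories described above. The specific constants are
    assumptions based on costmap_2d's documented conventions."""
    if cost == 255:
        return "unknown"        # no information about this cell
    if cost == 254:
        return "lethal"         # robot center here means certain collision
    if cost == 253:
        return "inscribed"      # collision certain regardless of heading
    if cost == 0:
        return "freespace"      # no obstacle
    return "possibly circumscribed"  # 1-252: depends on orientation
```

Path planners use this gradation to prefer low-cost cells, rather than treating the map as purely binary occupied/free.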
----------------------------------------------------------------
You are welcome to repost my articles.
When reprinting, please note the source: ancient-month, http://blog.csdn.net/hcx25909
Please continue to follow my blog.