Original blog. Please credit the source when reprinting: http://www.cnblogs.com/zxouxuewei/
Prerequisites:
1. This tutorial assumes you have already installed the Kinect or Xtion depth camera driver successfully and can use it properly. For driver installation, see my blog post: http://www.cnblogs.com/zxouxuewei/p/5271939.html
2. You already have a platform that can be moved manually or automatically, and your depth camera is mounted on that mobile platform. (The one in the picture is a mobile platform I built myself; its interfaces will be described in detail later.)
One. Convert point cloud data to laser data using the Kinect depth camera
1. Download the test source code, pointcloud_to_laserscan: https://github.com/ros-perception/pointcloud_to_laserscan
2. Unzip it into your ROS workspace.
3. Compile the package with catkin_make.
4. Add the package to your path: ROS_PACKAGE_PATH=/root/catkin_rs/src/pointcloud_to_laserscan:$ROS_PACKAGE_PATH
5. Analyze the launch file and modify it to work on your own platform. (Modify it along the lines shown below.)
<?xml version="1.0"?>
<launch>
    <arg name="camera" default="camera"/>

    <!-- start sensor -->
    <!-- Since we are using a Kinect and OpenNI2 support for it is nearly gone, use OpenNI -->
    <include file="$(find openni_launch)/launch/openni.launch">
        <arg name="camera" default="$(arg camera)"/>
    </include>

    <!-- run the pointcloud_to_laserscan node (the node name can be changed to whatever you want) -->
    <node pkg="pointcloud_to_laserscan" type="pointcloud_to_laserscan_node" name="pointcloud_to_laserscan">
        <!-- the topic names can also be changed, but make sure you subscribe to the right topic in RViz -->
        <remap from="cloud_in" to="$(arg camera)/depth_registered/points_processed"/>
        <remap from="scan" to="$(arg camera)/scan"/>
        <rosparam>
            target_frame: camera_link
            transform_tolerance: 0.01
            min_height: 0.0
            max_height: 1.0

            angle_min: -1.5708  # -M_PI/2
            angle_max: 1.5708   # M_PI/2
            angle_increment: 0.0087  # M_PI/360.0
            scan_time: 0.3333
            range_min: 0.45
            range_max: 4.0
            use_inf: true

            # Concurrency level, affects number of pointclouds queued for
            # processing and number of threads used
            # 0 : Detect number of cores
            # 1 : Single threaded
            # 2->inf : Parallelism level
            concurrency_level: 1
        </rosparam>
    </node>
</launch>
Analysis: the launch file first starts the depth camera via openni.launch, then defines the pointcloud_to_laserscan node, remaps the input cloud topic, names the published scan topic, and so on.
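Before starting the converter, it is worth confirming that the driver is actually publishing point clouds. A minimal rospy sketch follows; it assumes the default openni.launch topic camera/depth/points, so substitute whatever topic your cloud_in remap actually points at:

#!/usr/bin/env python
# Sanity check: wait for one point cloud from the driver and print its size.
# Assumes the default openni.launch topic camera/depth/points; substitute the
# topic your cloud_in remap points at.
import rospy
from sensor_msgs.msg import PointCloud2

if __name__ == "__main__":
    rospy.init_node("cloud_check")
    cloud = rospy.wait_for_message("camera/depth/points", PointCloud2, timeout=30.0)
    rospy.loginfo("received cloud: %dx%d points in frame '%s'",
                  cloud.width, cloud.height, cloud.header.frame_id)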
6. Run the launch file to start the depth camera node:
roslaunch pointcloud_to_laserscan sample_node.launch
If you see output like the following, everything is working:
[ INFO] [1458635459.179868847]: Initializing nodelet with 4 worker threads.
[ INFO] [1458635462.234383165]: Number devices connected: 1
[ INFO] [1458635462.234515193]: 1. device on bus 001: is a SensorKinect (2ae) from PrimeSense (45e) with serial id 'a00366803996050a'
[ INFO] [1458635462.235427900]: Searching for device with index = 1
[ INFO] [1458635462.415296355]: Opened 'SensorKinect' on bus 1: with serial number 'a00366803996050a'
[ INFO] [1458635462.973510398]: rgb_frame_id = 'camera_rgb_optical_frame'
[ INFO] [1458635462.973599301]: depth_frame_id = 'camera_depth_optical_frame'
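With the pipeline up, you can also check the converted scan from code rather than RViz. A minimal sketch, assuming the scan topic is camera/scan as remapped in the launch file above; note that with use_inf: true, out-of-range beams arrive as inf and must be filtered out:

#!/usr/bin/env python
# Minimal check of the simulated laser data: print the closest valid range.
# Assumes the topic camera/scan from the <remap> in the launch file above.
import math
import rospy
from sensor_msgs.msg import LaserScan

def on_scan(scan):
    # use_inf: true means out-of-range beams are reported as inf; NaN also
    # fails the range_min/range_max comparison, so both are dropped here.
    valid = [r for r in scan.ranges
             if not math.isinf(r) and scan.range_min <= r <= scan.range_max]
    if valid:
        rospy.loginfo("closest obstacle: %.2f m (%d of %d beams valid)",
                      min(valid), len(valid), len(scan.ranges))

if __name__ == "__main__":
    rospy.init_node("scan_check")
    rospy.Subscriber("camera/scan", LaserScan, on_scan, queue_size=1)
    rospy.spin()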
7. View the point cloud and the simulated laser data in RViz:
rviz
8. View the node graph:
rqt_graph
Two. Create a map from the depth camera's simulated laser data
1. Download hector_slam_example. After compiling it (CMake), remember to add the package to your ROS workspace path.
2. Install the dependency packages:
rosdep install hector_slam_example
3. Before launching, if you are using a Kinect, modify the launch file as follows:
<include file="$(find openni_launch)/launch/openni.launch">  <!-- since we are using a Kinect and OpenNI2 barely supports it any more, use openni.launch -->
Then run the launch file:
roslaunch hector_slam_example hector_openni.launch
Now just move the platform around to build the map. Note that the method this package uses to generate the simulated laser scan is depthimage_to_laserscan: it runs faster, but it requires the camera to be relatively stable and mounted as horizontally as possible.
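While hector_slam is building the map, you can watch its progress from code as well. A small sketch, assuming hector_mapping publishes the occupancy grid on the default /map topic (adjust if your launch file remaps it):

#!/usr/bin/env python
# Watch the occupancy grid grow while the platform moves around.
# Assumes hector_mapping publishes on the default /map topic.
import rospy
from nav_msgs.msg import OccupancyGrid

def on_map(grid):
    known = sum(1 for cell in grid.data if cell >= 0)  # -1 means "unknown"
    rospy.loginfo("map %dx%d at %.3f m/cell, %d cells observed",
                  grid.info.width, grid.info.height,
                  grid.info.resolution, known)

if __name__ == "__main__":
    rospy.init_node("map_monitor")
    rospy.Subscriber("map", OccupancyGrid, on_map, queue_size=1)
    rospy.spin()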
4. View the node graph:
rqt_graph
(Figure: map created from the depth camera's simulated laser data)