Human body modeling based on three Kinects


Single-Kinect body reconstruction works reasonably well with the KinectFusion component of Kinect SDK 1.8, but its shortcomings are obvious. First, scanning takes a long time, so the subject must stay still throughout. Second, either the body or the Kinect has to be rotated to cover all sides, which makes the procedure cumbersome. With three Kinects, the cameras only need to be fixed in set directions (for example, pairwise at 120° to one another); the body can then be scanned without moving at all, and the capture time is much shorter.

For three-Kinect human body reconstruction, the three Kinect cameras are first calibrated to obtain their intrinsic and extrinsic parameters. This lets every camera express the same point in space in a common coordinate system, unifying the three camera frames. Calibration is done pairwise: one Kinect (call it A) is taken as the reference, and the other two (B and C) are each calibrated against A, yielding two sets of extrinsic parameters. I used MATLAB's built-in Stereo Camera Calibrator app (see the MATLAB documentation). As a test, two Kinects were placed side by side at the same height (as shown in the original figure) and calibrated; the resulting rotation matrix R is:

     0.9949   -0.0196    0.0991
     0.0235    0.9990   -0.0384
    -0.0983    0.0405    0.9943

The translation vector T is:

    -280.5068    2.5255    -25.4387

As can be seen from the calibration results, R is close to the identity matrix, indicating that the two cameras were mounted essentially parallel. The translation component -280.5068 (in mm) means the calibrated distance between the cameras along the x axis is about 28 cm; the measured distance was 26~27 cm, which indicates the calibration accuracy is reasonably reliable.

After calibration, two sets of extrinsic parameters (rotation matrices and translation vectors) are obtained, which will be used later for point cloud stitching.
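With the pairwise extrinsics in hand, a point cloud captured by camera B can be expressed in reference camera A's frame by a rigid transform. A minimal NumPy sketch, using the calibration values above (translation in mm); note that whether the transform maps B→A or A→B depends on the calibration toolbox's convention, so check that against your setup:

```python
import numpy as np

# Extrinsics from the pairwise calibration above (translation in mm).
# Convention assumed here: p_A = R @ p_B + T maps a point from camera B's
# frame into reference camera A's frame; MATLAB and OpenCV differ on the
# direction, so verify against your toolbox.
R = np.array([[ 0.9949, -0.0196,  0.0991],
              [ 0.0235,  0.9990, -0.0384],
              [-0.0983,  0.0405,  0.9943]])
T = np.array([-280.5068, 2.5255, -25.4387])

def to_reference_frame(points_b, R, T):
    """Apply the rigid transform to an (N, 3) point cloud."""
    return points_b @ R.T + T

# Sanity check: a point at camera B's origin lands at T in A's frame.
print(to_reference_frame(np.zeros((1, 3)), R, T))
```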

Point Cloud Acquisition


The depth images acquired by each Kinect are converted into point cloud data using the coordinate-mapping functions built into the SDK, and saved. The original figure shows the point cloud obtained from one viewing angle.
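The SDK's coordinate-mapping call performs this conversion internally; as an illustration of what it computes, here is a pinhole back-projection sketch in NumPy. The intrinsics fx, fy, cx, cy below are placeholder values for a depth camera, not calibrated ones; substitute your own:

```python
import numpy as np

# Illustrative depth-camera intrinsics (placeholders, not calibrated values).
FX, FY, CX, CY = 585.0, 585.0, 320.0, 240.0

def depth_to_point_cloud(depth):
    """Back-project an (H, W) depth image in mm to an (N, 3) point cloud,
    dropping invalid zero-depth pixels."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    z = depth.astype(np.float64)
    x = (u - CX) * z / FX                            # pinhole model
    y = (v - CY) * z / FY
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]

# Tiny synthetic example: a flat 4x4 depth image, everything at 1000 mm.
cloud = depth_to_point_cloud(np.full((4, 4), 1000.0))
print(cloud.shape)  # (16, 3)
```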

Point Cloud Processing

The point cloud data volume is very large and contains redundant points and noise, which increases computational cost. The point clouds therefore need to be pre-processed before stitching. The key problem is to extract the points that actually reflect the surface shape, thinning the data and removing noise, so that reconstruction becomes both more accurate and more efficient. Denoising here uses a combined bilateral-filtering algorithm.
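The post does not spell out the combined bilateral filter, so as one plausible sketch, here is a plain bilateral filter applied to the raw depth map: it smooths sensor noise while preserving depth discontinuities. The window radius and the two sigmas are illustrative choices, not the post's parameters:

```python
import numpy as np

def bilateral_filter_depth(depth, radius=2, sigma_s=2.0, sigma_r=30.0):
    """Edge-preserving smoothing of an (H, W) depth map in mm.

    Each pixel becomes a weighted mean of its neighbours; the weight combines
    spatial distance (sigma_s, in pixels) and depth difference (sigma_r, in
    mm), so smoothing stops at depth edges instead of blurring across them.
    """
    h, w = depth.shape
    out = np.zeros((h, w))
    pad = np.pad(depth.astype(np.float64), radius, mode='edge')
    # Spatial Gaussian over the window, computed once.
    ax = np.arange(-radius, radius + 1)
    dx, dy = np.meshgrid(ax, ax)
    w_spatial = np.exp(-(dx**2 + dy**2) / (2 * sigma_s**2))
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            w_range = np.exp(-(win - depth[i, j])**2 / (2 * sigma_r**2))
            wgt = w_spatial * w_range
            out[i, j] = np.sum(wgt * win) / np.sum(wgt)
    return out

# A constant (noise-free) depth map should pass through unchanged.
smooth = bilateral_filter_depth(np.full((5, 5), 1000.0))
```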

Point Cloud stitching

The essence of three-dimensional point cloud stitching is a coordinate transformation of point clouds expressed in different coordinate systems: given two or more partial point clouds, find the correct alignment between them and merge them into one complete cloud. The key step is to use the calibrated rotation matrices and translation vectors as an initial alignment, and then register the point clouds with the ICP algorithm.
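The registration step can be sketched as a naive ICP loop: find nearest-neighbour correspondences, solve for the best rigid transform with the SVD (Kabsch) method, and repeat. This pure-NumPy version with brute-force correspondences is illustrative only, not the implementation used in the post; real pipelines use a spatial index and the calibrated extrinsics as the initial guess:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """SVD (Kabsch) solution for R, t minimising ||R @ src + t - dst||."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # fix an improper (reflection) solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    """Naive ICP: brute-force nearest neighbours + Kabsch per iteration."""
    cur = src.copy()
    for _ in range(iters):
        # Correspondences: nearest dst point for each src point, O(N*M).
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    return cur

# Demo: recover a small known rigid motion of a cube's corners.
corners = np.array([[x, y, z] for x in (0.0, 10.0)
                              for y in (0.0, 10.0)
                              for z in (0.0, 10.0)])
th = 0.05
Rz = np.array([[np.cos(th), -np.sin(th), 0.0],
               [np.sin(th),  np.cos(th), 0.0],
               [0.0,         0.0,        1.0]])
moved = corners @ Rz.T + np.array([0.2, 0.1, -0.1])
aligned = icp(moved, corners)
print(np.abs(aligned - corners).max())  # close to zero: motion recovered
```

Because the displacement is small relative to the spacing between points, the nearest-neighbour correspondences are correct and ICP converges; with a poor initial alignment it can lock onto a wrong local minimum, which is exactly why the calibration extrinsics matter.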
