Kinect Development Tutorial 8: OpenNI2 Displaying Depth, Color, and Fusion Images


In Tutorial 2, "OpenNI Depth Image and Color Image Display", Xiao Jin introduced OpenNI's method of reading depth and color image data and used OpenCV to display it.

The interface has changed considerably between OpenNI and OpenNI2; for details, see the OpenNI Migration Guide. From the standpoint of obtaining depth and color sensor data, Xiao Jin finds the new calls more intuitive, but for Kinect one disadvantage is that the depth-to-color image registration provided by OpenNI2 is not supported (as reported by the device.isImageRegistrationModeSupported() method).

However, Kinect users need not be discouraged. In OpenNI 2.1 Beta, Xiao Jin noticed the newly added convertDepthToColorCoordinates() method for converting depth coordinates into color coordinates; its effect is similar to device.setImageRegistrationMode(IMAGE_REGISTRATION_DEPTH_TO_COLOR). If you are interested, give it a try.
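For reference, here is a rough sketch of that per-pixel conversion using the CoordinateConverter::convertDepthToColor() helper from the stable OpenNI2 API; the method named above comes from the 2.1 Beta and may be spelled differently there. The stream and frame variables are assumed to be the ones created in the full listing further below.

#include <iostream>
#include "OpenNI.h"

// Map the center depth pixel into color-image coordinates.
// Assumes the depth and color streams are already created and started.
void MapCenterDepthPixelToColor( openni::VideoStream& oniDepthStream,
                                 openni::VideoStream& oniColorStream,
                                 openni::VideoFrameRef& oniDepthImg )
{
    const openni::DepthPixel* pDepth =
        (const openni::DepthPixel*)oniDepthImg.getData();

    int depthX = oniDepthImg.getWidth()  / 2;   // center pixel
    int depthY = oniDepthImg.getHeight() / 2;
    openni::DepthPixel depthZ = pDepth[ depthY * oniDepthImg.getWidth() + depthX ];

    int colorX = 0, colorY = 0;
    openni::Status rc = openni::CoordinateConverter::convertDepthToColor(
        oniDepthStream, oniColorStream, depthX, depthY, depthZ, &colorX, &colorY );

    if( rc == openni::STATUS_OK )
        std::cout << "depth (" << depthX << "," << depthY << ") -> color ("
                  << colorX << "," << colorY << ")" << std::endl;
}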

For display, Xiao Jin still uses OpenCV; this time the OpenCV C++ interface is used.

/*************************
 OpenNI2 Depth, Color and Fusion Image
 Author: Xin Chen, 2013.2
 Blog: http://blog.csdn.net/chenxin_130
*************************/

#include <stdlib.h>
#include <iostream>
#include <string>
#include "OpenNI.h"
#include "opencv2/core/core.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"

using namespace std;
using namespace cv;
using namespace openni;

void CheckOpenNIError( Status result, string status )
{
    if( result != STATUS_OK )
        cerr << status << " Error: " << OpenNI::getExtendedError() << endl;
}

int main( int argc, char** argv )
{
    Status result = STATUS_OK;

    // OpenNI2 images
    VideoFrameRef oniDepthImg;
    VideoFrameRef oniColorImg;

    // OpenCV images
    cv::Mat cvDepthImg;
    cv::Mat cvBGRImg;
    cv::Mat cvFusionImg;

    cv::namedWindow("depth");
    cv::namedWindow("image");
    cv::namedWindow("fusion");
    char key = 0;

    //【1】
    // initialize OpenNI2
    result = OpenNI::initialize();
    CheckOpenNIError( result, "initialize context" );

    // open device
    Device device;
    result = device.open( openni::ANY_DEVICE );

    //【2】
    // create depth stream
    VideoStream oniDepthStream;
    result = oniDepthStream.create( device, openni::SENSOR_DEPTH );

    //【3】
    // set depth video mode
    VideoMode modeDepth;
    modeDepth.setResolution( 640, 480 );
    modeDepth.setFps( 30 );
    modeDepth.setPixelFormat( PIXEL_FORMAT_DEPTH_1_MM );
    oniDepthStream.setVideoMode( modeDepth );
    // start depth stream
    result = oniDepthStream.start();

    // create color stream
    VideoStream oniColorStream;
    result = oniColorStream.create( device, openni::SENSOR_COLOR );
    // set color video mode
    VideoMode modeColor;
    modeColor.setResolution( 640, 480 );
    modeColor.setFps( 30 );
    modeColor.setPixelFormat( PIXEL_FORMAT_RGB888 );
    oniColorStream.setVideoMode( modeColor );

    //【4】
    // set depth and color image registration mode
    if( device.isImageRegistrationModeSupported( IMAGE_REGISTRATION_DEPTH_TO_COLOR ) )
    {
        device.setImageRegistrationMode( IMAGE_REGISTRATION_DEPTH_TO_COLOR );
    }
    // start color stream
    result = oniColorStream.start();

    while( key != 27 )
    {
        // read frame
        if( oniColorStream.readFrame( &oniColorImg ) == STATUS_OK )
        {
            // convert data into OpenCV type
            cv::Mat cvRGBImg( oniColorImg.getHeight(), oniColorImg.getWidth(), CV_8UC3, (void*)oniColorImg.getData() );
            cv::cvtColor( cvRGBImg, cvBGRImg, CV_RGB2BGR );
            cv::imshow( "image", cvBGRImg );
        }

        if( oniDepthStream.readFrame( &oniDepthImg ) == STATUS_OK )
        {
            cv::Mat cvRawImg16U( oniDepthImg.getHeight(), oniDepthImg.getWidth(), CV_16UC1, (void*)oniDepthImg.getData() );
            cvRawImg16U.convertTo( cvDepthImg, CV_8U, 255.0 / ( oniDepthStream.getMaxPixelValue() ) );
            //【5】
            // convert depth image GRAY to BGR
            cv::cvtColor( cvDepthImg, cvFusionImg, CV_GRAY2BGR );
            cv::imshow( "depth", cvDepthImg );
        }

        //【6】
        cv::addWeighted( cvBGRImg, 0.5, cvFusionImg, 0.5, 0, cvFusionImg );
        cv::imshow( "fusion", cvFusionImg );
        key = cv::waitKey(20);
    }

    // destroy OpenCV windows
    cv::destroyWindow("depth");
    cv::destroyWindow("image");
    cv::destroyWindow("fusion");

    // OpenNI2 destroy
    oniDepthStream.destroy();
    oniColorStream.destroy();
    device.close();
    OpenNI::shutdown();

    return 0;
}

Xiao Jin explains the code from top to bottom:

[1] Use the OpenNI::initialize() method for initialization. For error handling, you can use the OpenNI::getExtendedError() method. Here the Device object opens any available device.

[2] In OpenNI2, you create a VideoStream object to read the device's depth and color image data.

[3] For a VideoStream object, we can set the video mode, including resolution, FPS, and pixel format. To find out which pixel formats are supported, you can use the getSensorInfo() method of VideoStream, as shown in the sketch below. Currently only PIXEL_FORMAT_DEPTH_1_MM is available for the Kinect depth stream.
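As an illustration (not part of the original listing), here is a minimal sketch that prints the video modes a stream reports through getSensorInfo(); call it with, for example, oniDepthStream after the stream has been created:

#include <iostream>
#include "OpenNI.h"

// Print every video mode (resolution, FPS, pixel format) a stream supports.
void PrintSupportedModes( const openni::VideoStream& stream )
{
    const openni::SensorInfo& info = stream.getSensorInfo();
    const openni::Array<openni::VideoMode>& modes = info.getSupportedVideoModes();

    for( int i = 0; i < modes.getSize(); ++i )
    {
        const openni::VideoMode& mode = modes[i];
        std::cout << mode.getResolutionX() << "x" << mode.getResolutionY()
                  << " @ " << mode.getFps() << " FPS, pixel format "
                  << mode.getPixelFormat() << std::endl;
    }
}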

[4] If the device supports depth and color image registration, Xiao Jin uses the interface provided by OpenNI2 to enable it. In the while loop, each VideoStream object reads its image data through readFrame().

[5] Convert OpenNI's image data into a format OpenCV can display. For the color image, the data can first be placed into an OpenCV three-channel (8-bit) RGB object and then converted to BGR for display. For the depth image, first place it in a single-channel (16-bit) object (because depth values span a large range), then scale the depth values down to the range [0, 255] so the result can be shown as a grayscale image; a sketch of that scaling follows below.
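To make the scaling explicit, here is a small sketch (an editor's illustration, not the original code) that performs the same 16-bit to 8-bit conversion as convertTo(), written out per pixel; maxDepthMM is an assumed display range, whereas the listing above uses oniDepthStream.getMaxPixelValue():

#include "opencv2/core/core.hpp"

// Scale a 16-bit depth image (values in millimeters for PIXEL_FORMAT_DEPTH_1_MM)
// into an 8-bit grayscale image: gray = saturate(depth * 255 / maxDepthMM).
// For example, a pixel at 2000 mm with maxDepthMM = 10000 becomes about 51.
cv::Mat DepthToGray8( const cv::Mat& cvRawImg16U, double maxDepthMM = 10000.0 )
{
    cv::Mat gray8( cvRawImg16U.size(), CV_8UC1 );
    for( int y = 0; y < cvRawImg16U.rows; ++y )
    {
        for( int x = 0; x < cvRawImg16U.cols; ++x )
        {
            unsigned short d = cvRawImg16U.at<unsigned short>( y, x );
            gray8.at<uchar>( y, x ) = cv::saturate_cast<uchar>( d * 255.0 / maxDepthMM );
        }
    }
    return gray8;
}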

[6] Finally, the images are fused. Because the addWeighted() method requires both input images to have the same type, Xiao Jin first converts the depth grayscale image (single channel) to a BGR image so it matches the color image, and then fuses them with this method. Xiao Jin uses weights of 0.5 and 0.5, i.e. each pixel of the fused image equals (the depth image's value at that point × 0.5) + (the color image's value at that point × 0.5).

----------------------------------

Author: Xin Chen (Xiao Jin)

Sina Weibo: @Xiao Jin Chen

This is an original article. If you repost or quote it, please credit the original author and include a link. Thank you.
