Kinect development tutorial 2: OpenNI reads and displays depth and color images


Careful readers will have noticed that the Kinect has three "eyes": one is a color camera, and the other two make up the depth camera, one projecting infrared light and the other receiving it. Through the Kinect we can therefore obtain both a color image and a depth image. If you are interested in the technical details of the Kinect, click here.

Xiao Jin's first example uses OpenNI to obtain the color and depth images. The code is not long; parts of it reference heresky's article.
OpenNI integrates the Kinect's depth and color image streams. In addition, Xiao Jin added an OpenCV display section to make the example more intuitive.

#include <stdlib.h>
#include <iostream>
#include <string>
//【1】
#include <XnCppWrapper.h>
#include "opencv/cv.h"
#include "opencv/highgui.h"

using namespace std;
using namespace cv;

void CheckOpenNIError( XnStatus result, string status )
{
    if( result != XN_STATUS_OK )
        cerr << status << " Error: " << xnGetStatusString( result ) << endl;
}

int main( int argc, char** argv )
{
    XnStatus result = XN_STATUS_OK;
    xn::DepthMetaData depthMD;
    xn::ImageMetaData imageMD;

    // OpenCV images and windows
    IplImage* imgDepth16u = cvCreateImage( cvSize(640,480), IPL_DEPTH_16U, 1 );
    IplImage* imgRGB8u    = cvCreateImage( cvSize(640,480), IPL_DEPTH_8U, 3 );
    IplImage* depthShow   = cvCreateImage( cvSize(640,480), IPL_DEPTH_8U, 1 );
    IplImage* imageShow   = cvCreateImage( cvSize(640,480), IPL_DEPTH_8U, 3 );
    cvNamedWindow( "depth", 1 );
    cvNamedWindow( "image", 1 );
    char key = 0;

    //【2】context
    xn::Context context;
    result = context.Init();
    CheckOpenNIError( result, "initialize context" );

    // create generators
    xn::DepthGenerator depthGenerator;
    result = depthGenerator.Create( context );
    CheckOpenNIError( result, "Create depth generator" );
    xn::ImageGenerator imageGenerator;
    result = imageGenerator.Create( context );
    CheckOpenNIError( result, "Create image generator" );

    //【3】map mode
    XnMapOutputMode mapMode;
    mapMode.nXRes = 640;
    mapMode.nYRes = 480;
    mapMode.nFPS  = 30;
    result = depthGenerator.SetMapOutputMode( mapMode );
    result = imageGenerator.SetMapOutputMode( mapMode );

    //【4】correct view port
    depthGenerator.GetAlternativeViewPointCap().SetViewPoint( imageGenerator );

    //【5】read data
    result = context.StartGeneratingAll();

    //【6】
    result = context.WaitNoneUpdateAll();
    while( (key != 27) && !(result = context.WaitNoneUpdateAll()) )
    {
        // get meta data
        depthGenerator.GetMetaData( depthMD );
        imageGenerator.GetMetaData( imageMD );

        //【7】OpenCV output
        memcpy( imgDepth16u->imageData, depthMD.Data(), 640*480*2 );
        cvConvertScale( imgDepth16u, depthShow, 255/4096.0, 0 );
        memcpy( imgRGB8u->imageData, imageMD.Data(), 640*480*3 );
        cvCvtColor( imgRGB8u, imageShow, CV_RGB2BGR );
        cvShowImage( "depth", depthShow );
        cvShowImage( "image", imageShow );
        key = cvWaitKey( 20 );
    }

    // destroy
    cvDestroyWindow( "depth" );
    cvDestroyWindow( "image" );
    cvReleaseImage( &imgDepth16u );
    cvReleaseImage( &imgRGB8u );
    cvReleaseImage( &depthShow );
    cvReleaseImage( &imageShow );
    context.StopGeneratingAll();
    context.Shutdown();
    return 0;
}


Below, Xiao Jin explains the code from top to bottom:

[1] <XnCppWrapper.h> is the OpenNI header file. If you use OpenNI, this is the only header you need to include.

[2] DepthGenerator and ImageGenerator are called map generators; the former produces depth images and the latter color images. Creating a generator is easy: first initialize a Context, then pass the context as a parameter to the generator's Create() function.

[3] XnMapOutputMode is used to set the generator's output parameters. Here, Xiao Jin sets the resolution to 640*480 (standard) and the frame rate to 30 fps.

[4] depthGenerator.GetAlternativeViewPointCap().SetViewPoint( imageGenerator ) may be confusing. It is used to adjust the viewpoint. Why adjust it? Because the Kinect's three "eyes" sit in different places, the depth camera and the color camera, despite producing frames of the same size, see slightly different scenes. OpenNI provides this function to align them; here, Xiao Jin sets the depth generator's viewpoint to that of the color generator.

[5] After StartGeneratingAll() is called, the generators begin to work. To stop them, use the StopGeneratingAll() function.

[6] Although the generators are working, they read data continuously with no coordination between them, and will not hand us the latest frame on their own. Before calling the GetMetaData() method, you must call one of WaitAnyUpdateAll(), WaitOneUpdateAll(), WaitNoneUpdateAll(), or WaitAndUpdateAll(); each behaves as its name suggests. Here, Xiao Jin uses WaitNoneUpdateAll(), the "brute force" option: regardless of whether a generator has read new data, update everything immediately. You can try the other three and compare the effect.

[7] After obtaining the image metadata through OpenNI, Xiao Jin converts it to OpenCV's IplImage type through a series of functions and then displays it. This part mainly references this article.

For the depth metadata, cvConvertScale is used to scale the 16-bit depth values into a grayscale image with gray values in [0,255]. For the color metadata, cvCvtColor converts the color space from RGB to BGR (OpenCV's default). Press ESC to exit the loop and end the program.

The final result is as follows:

In a grayscale image, the gray value 0 is black and the gray value 255 is white. Since nearer points have smaller depth values, the area close to the Kinect (the desktop) appears black, distant areas appear white, and areas in between show intermediate shades of gray.


----------------------------------

Author: Chen Jin

This article is an original article. If you need to repost and quote it, please specify the original author and link. Thank you.
