Create a Kinect 3D Point Cloud with OpenNI

This is a continuation of the previous article, "Combining Kinect depth and color image information through OpenNI". After reading the Kinect's depth and color information through OpenNI, you can actually try to use this information to reconstruct and display the 3D environment. However, the depth information read in the previous example is raw, and its coordinates are the two-dimensional image coordinates of the sensor; if you want to rebuild the 3D scene, this information still needs to be converted. Fortunately, OpenNI's DepthGenerator already provides two functions, ConvertProjectiveToRealWorld() and ConvertRealWorldToProjective(), which help developers do the coordinate conversion quickly!
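
As a quick illustration, a single depth pixel can be converted to real-world coordinates and back roughly as follows. This is only a minimal sketch: it assumes mDepthGenerator is an xn::DepthGenerator that has already been initialized as in the previous articles, and the sample values are made up.

// a minimal sketch; mDepthGenerator is assumed to be an initialized xn::DepthGenerator
XnPoint3D ptProjective;
ptProjective.X = 320;    // pixel column in the depth image (made-up value)
ptProjective.Y = 240;    // pixel row in the depth image (made-up value)
ptProjective.Z = 1000;   // raw depth value, in millimeters (made-up value)

// un-project the image-space point to 3D real-world coordinates
XnPoint3D ptRealWorld;
mDepthGenerator.ConvertProjectiveToRealWorld( 1, &ptProjective, &ptRealWorld );

// the inverse conversion projects a real-world point back onto the image plane
XnPoint3D ptBack;
mDepthGenerator.ConvertRealWorldToProjective( 1, &ptRealWorld, &ptBack );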

If you then add color to these 3D points and draw them with OpenGL, the result will look roughly like the video below.

Of course, a point cloud is not the best method of display; if necessary, you can also reconstruct polygons from it and draw those instead. But polygon reconstruction is another topic, so Heresy is not going to discuss it here; this article also will not touch the OpenGL parts, and only provides a simple example of how to build the point cloud.

To store the position and color information of these points, Heresy first defines a simple structure, SColorPoint3D:

struct SColorPoint3D
{
  float X;
  float Y;
  float Z;
  float R;
  float G;
  float B;

  SColorPoint3D( XnPoint3D pos, XnRGB24Pixel color )
  {
    X = pos.X;
    Y = pos.Y;
    Z = pos.Z;
    R = (float)color.nRed / 255;
    G = (float)color.nGreen / 255;
    B = (float)color.nBlue / 255;
  }
};

This structure is just six simple floats, recording the position and the color of one point. Its constructor takes OpenNI's defined types as input and converts them: XnPoint3D, which represents the position, and XnRGB24Pixel, which represents the RGB color.
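
For example, one point could be built like this; the values are purely hypothetical, and in the real program they come from the depth and image generators:

// hypothetical sample values, just to show what the constructor does
XnPoint3D pos = { 120.0f, -45.0f, 980.0f };   // real-world position, in millimeters
XnRGB24Pixel color = { 200, 150, 100 };       // nRed, nGreen, nBlue, each 0~255
SColorPoint3D point( pos, color );            // R / G / B are normalized to 0~1 inside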

For readability, Heresy wraps the coordinate-conversion part into a function, GeneratePointCloud(), with the following contents:

void GeneratePointCloud( xn::DepthGenerator& rDepthGen,
                         const XnDepthPixel* pDepth,
                         const XnRGB24Pixel* pImage,
                         vector<SColorPoint3D>& vPointCloud )
{
  // 1. number of points is the number of 2D image pixels
  xn::DepthMetaData mDepthMD;
  rDepthGen.GetMetaData( mDepthMD );
  unsigned int uPointNum = mDepthMD.FullXRes() * mDepthMD.FullYRes();

  // 2. build the data structure for convert
  XnPoint3D* pDepthPointSet = new XnPoint3D[ uPointNum ];
  unsigned int i, j, idxShift, idx;
  for( j = 0; j < mDepthMD.FullYRes(); ++j )
  {
    idxShift = j * mDepthMD.FullXRes();
    for( i = 0; i < mDepthMD.FullXRes(); ++i )
    {
      idx = idxShift + i;
      pDepthPointSet[idx].X = i;
      pDepthPointSet[idx].Y = j;
      pDepthPointSet[idx].Z = pDepth[idx];
    }
  }

  // 3. un-project points to real world
  XnPoint3D* p3DPointSet = new XnPoint3D[ uPointNum ];
  rDepthGen.ConvertProjectiveToRealWorld( uPointNum, pDepthPointSet, p3DPointSet );
  delete[] pDepthPointSet;

  // 4. build point cloud
  for( i = 0; i < uPointNum; ++i )
  {
    // skip the depth-0 points
    if( p3DPointSet[i].Z == 0 )
      continue;

    vPointCloud.push_back( SColorPoint3D( p3DPointSet[i], pImage[i] ) );
  }
  delete[] p3DPointSet;
}

This function takes the xn::DepthGenerator, together with the depth image and color image that were read, as its sources of information, and takes a vector<SColorPoint3D> to store the converted 3D point data.

The depth image is passed in the same form as before, as a const pointer to XnDepthPixel. For the color image, however, Heresy switches to the packed RGB type XnRGB24Pixel, which saves some index calculations. Because of this change, the line in the program that reads the color image, which was previously

const XnUInt8* pImageMap = mImageGenerator.GetImageMap();

has to be modified to

const XnRGB24Pixel* pImageMap = mImageGenerator.GetRGB24ImageMap();
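
The saving in index calculation can be seen by comparing how one pixel is read in the two forms; pImageMap8 below is a hypothetical name for the old XnUInt8 pointer, and idx is the pixel index:

// with the raw XnUInt8 map, every pixel occupies three bytes,
// so the channel offsets have to be computed by hand
float r1 = (float)pImageMap8[ 3 * idx     ] / 255;
float g1 = (float)pImageMap8[ 3 * idx + 1 ] / 255;
float b1 = (float)pImageMap8[ 3 * idx + 2 ] / 255;

// with the packed XnRGB24Pixel map, a single index reaches the whole pixel
float r2 = (float)pImageMap[ idx ].nRed   / 255;
float g2 = (float)pImageMap[ idx ].nGreen / 255;
float b2 = (float)pImageMap[ idx ].nBlue  / 255;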

Inside the function, the first part mainly obtains the depth generator's meta-data, xn::DepthMetaData, to do the simple size and index calculations. If you don't want to use the meta-data, you can actually use a fixed value such as 640 x 480 for the calculation, as long as it is the same resolution that was set earlier with SetMapOutputMode().
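
For reference, that resolution would have been set during initialization roughly as follows; this is a sketch based on the previous articles, using the usual 640 x 480 at 30 fps Kinect mode:

// a sketch of the initialization step from the previous articles;
// after this, FullXRes() and FullYRes() report 640 and 480
XnMapOutputMode mapMode;
mapMode.nXRes = 640;
mapMode.nYRes = 480;
mapMode.nFPS  = 30;
mDepthGenerator.SetMapOutputMode( mapMode );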

The second part, "build the data structure for convert", takes the 640 x 480 points of the depth image and converts them into an array of XnPoint3D, preparing the coordinates for conversion.

The third part, "un-project points to real world", is the actual conversion: it switches the coordinates from the image's projective coordinates to 3D real-world coordinates, mainly using the DepthGenerator's ConvertProjectiveToRealWorld(). It is very simple to use: just tell it the number of points to convert (uPointNum), pass in the points as an array (const XnPoint3D*), and give it a pre-allocated XnPoint3D array (p3DPointSet) for the results, and it does the conversion by itself.

In the fourth part, Heresy uses a loop to sweep through all the points, skipping those whose depth is 0 (because these are the parts where the Kinect could not determine the depth), converting the rest, together with their color information, into SColorPoint3D form, and pushing them into vPointCloud for storage.

(This step could actually be done in the second part as well, but handling the color there is more troublesome.)

Back in the main program, the code that reads the data was previously:

// 8. read data
eResult = mContext.WaitNoneUpdateAll();
if( eResult == XN_STATUS_OK )
{
  // 9a. get the depth map
  const XnDepthPixel* pDepthMap = mDepthGenerator.GetDepthMap();

  // 9b. get the image map
  const XnUInt8* pImageMap = mImageGenerator.GetImageMap();
}

As mentioned earlier, Heresy is not going to cover the OpenGL display parts here, so the example simply uses an endless loop to keep updating the data and doing the coordinate conversion, and only outputs the number of points that result from the conversion:

// 8. read data
vector<SColorPoint3D> vPointCloud;
while( true )
{
  eResult = mContext.WaitNoneUpdateAll();

  // 9a. get the depth map
  const XnDepthPixel* pDepthMap = mDepthGenerator.GetDepthMap();

  // 9b. get the image map
  const XnRGB24Pixel* pImageMap = mImageGenerator.GetRGB24ImageMap();

  // generate point cloud
  vPointCloud.clear();
  GeneratePointCloud( mDepthGenerator, pDepthMap, pImageMap, vPointCloud );

  cout << "Point number: " << vPointCloud.size() << endl;
}

If you are going to draw with OpenGL, you basically would not use this infinite loop; instead, each time before drawing, read the Kinect's data and run it through GeneratePointCloud() to do the conversion. And if, like Heresy, you do not reconstruct polygons but just draw the points, the result will look roughly like the video above.
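
Although the OpenGL parts are out of scope for this article, the drawing itself could be as simple as the following immediate-mode sketch; it assumes vPointCloud has been refreshed through GeneratePointCloud() before each frame:

// a minimal sketch of drawing the point cloud with OpenGL immediate mode
glBegin( GL_POINTS );
for( unsigned int i = 0; i < vPointCloud.size(); ++i )
{
  const SColorPoint3D& pt = vPointCloud[i];
  glColor3f( pt.R, pt.G, pt.B );     // color was already normalized to 0~1
  glVertex3f( pt.X, pt.Y, pt.Z );    // real-world coordinates, in millimeters
}
glEnd();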
