Kinect for Windows V2 and V1 Development Comparison: Depth Data Acquisition


V1 Depth Resolution: 640x480 (320x240 and 80x60 are also supported)

V2 Depth Resolution: 512x424


1. How to open the depth image frame

For V1:

    hr = m_pNuiSensor->NuiImageStreamOpen(
            NUI_IMAGE_TYPE_DEPTH,
            NUI_IMAGE_RESOLUTION_320x240,
            0,
            2,
            m_hNextDepthFrameEvent,
            &m_hDepthStreamHandle);
    if (FAILED(hr))
    {
        cout << "Could not open image stream video" << endl;
        return hr;
    }

The resolution is chosen here through the NUI_IMAGE_RESOLUTION_* parameter.
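The call above assumes the sensor and the frame event already exist. A minimal sketch of that setup, assuming a single attached sensor and the member names used in this article (m_pNuiSensor, m_hNextDepthFrameEvent, m_hDepthStreamHandle), might look like this:

    // Sketch only: create the first sensor, initialize its depth stream, and create the frame event.
    INuiSensor* m_pNuiSensor = NULL;
    HANDLE m_hNextDepthFrameEvent = NULL;
    HANDLE m_hDepthStreamHandle = NULL;

    HRESULT hr = NuiCreateSensorByIndex(0, &m_pNuiSensor);   // first attached Kinect
    if (SUCCEEDED(hr))
    {
        hr = m_pNuiSensor->NuiInitialize(NUI_INITIALIZE_FLAG_USES_DEPTH);
    }
    if (SUCCEEDED(hr))
    {
        // manual-reset event that is signalled whenever a new depth frame is ready
        m_hNextDepthFrameEvent = CreateEvent(NULL, TRUE, FALSE, NULL);
    }

Note that if the player index is needed (as in section 3 below), the stream is typically initialized with NUI_INITIALIZE_FLAG_USES_DEPTH_AND_PLAYER_INDEX and opened with NUI_IMAGE_TYPE_DEPTH_AND_PLAYER_INDEX instead.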

For V2:

Initialize the Kinect and get the depth reader. First use:

    IDepthFrameSource* pDepthFrameSource = NULL;

    hr = m_pKinectSensor->Open();    // Open the Kinect

    if (SUCCEEDED(hr))
    {
        hr = m_pKinectSensor->get_DepthFrameSource(&pDepthFrameSource);
    }

The get_DepthFrameSource method retrieves the depth frame source. Then use:

    if (SUCCEEDED(hr))
    {
        hr = pDepthFrameSource->OpenReader(&m_pDepthFrameReader);
    }

    SafeRelease(pDepthFrameSource);

The OpenReader method opens the depth frame reader.
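Two pieces are assumed but not shown above: m_pKinectSensor has to be obtained first, and SafeRelease is a small helper used throughout the SDK samples. A minimal sketch of both, following the sample conventions:

    // Helper from the Kinect SDK samples: release a COM interface pointer and null it.
    template<class Interface>
    inline void SafeRelease(Interface*& pInterfaceToRelease)
    {
        if (pInterfaceToRelease != NULL)
        {
            pInterfaceToRelease->Release();
            pInterfaceToRelease = NULL;
        }
    }

    // Obtain the default sensor before calling Open().
    IKinectSensor* m_pKinectSensor = NULL;
    HRESULT hr = GetDefaultKinectSensor(&m_pKinectSensor);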

2. How to update the depth frame

For V1: Using the NuiImageStreamGetNextFrame method

    NuiImageStreamGetNextFrame(m_hDepthStreamHandle, 0, &pImageFrame);    // Get the frame data
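In a typical V1 update loop this call sits between waiting on the frame event and releasing the frame. A sketch, written here with the INuiSensor member-function form (the free-function form used above works analogously):

    NUI_IMAGE_FRAME imageFrame;

    // Block until the sensor signals that a new depth frame is ready.
    WaitForSingleObject(m_hNextDepthFrameEvent, INFINITE);

    HRESULT hr = m_pNuiSensor->NuiImageStreamGetNextFrame(m_hDepthStreamHandle, 0, &imageFrame);
    if (SUCCEEDED(hr))
    {
        // ... process the frame as described in section 3 ...

        // Return the frame to the runtime so its buffer can be reused.
        m_pNuiSensor->NuiImageStreamReleaseFrame(m_hDepthStreamHandle, &imageFrame);
    }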

For V2: Using the AcquireLatestFrame method

    if (!m_pDepthFrameReader)
    {
        return;
    }

    IDepthFrame* pDepthFrame = NULL;

    HRESULT hr = m_pDepthFrameReader->AcquireLatestFrame(&pDepthFrame);

3. How the data is processed

For V1: with this acquisition method, the internal structure of the data is relatively easy to see:

    INuiFrameTexture* pTexture = pImageFrame->pFrameTexture;
    NUI_LOCKED_RECT LockedRect;
    pTexture->LockRect(0, &LockedRect, NULL, 0);

    RGBQUAD q;

    if (LockedRect.Pitch != 0)
    {
        BYTE* pBuffer = (BYTE*)(LockedRect.pBits);
        INT size = LockedRect.size;
        memcpy_s(m_pDepthBuffer, size, pBuffer, size);
        USHORT* pBufferRun = reinterpret_cast<USHORT*>(m_pDepthBuffer);

        for (int i = 0; i < image.rows; i++)
        {
            uchar* ptr = image.ptr<uchar>(i);                             // pointer to row i of the output image
            UCHAR* pRowBuffer = (UCHAR*)(LockedRect.pBits) + i * LockedRect.Pitch;
            USHORT* pBufferRun = (USHORT*)pRowBuffer;                     // each depth value is 2 bytes, unlike the color data,
                                                                          // so the byte pointer must be reinterpreted as USHORT*
            for (int j = 0; j < image.cols; j++)
            {
                // ptr[j] = 255 - (BYTE)(256 * pBufferRun[j] / 0x0fff);   // direct normalization, for a single-channel image
                // ptr[j] = pBufferRun[i * 640 + j];
                int player = pBufferRun[j] & 7;                           // low 3 bits: player index
                int data = (pBufferRun[j] & 0xfff8) >> 3;                 // remaining 13 bits: depth value
                uchar imageData = 255 - (uchar)(256 * data / 0x0fff);     // normalize the depth for display

                q.rgbBlue = q.rgbGreen = q.rgbRed = 0;

                switch (player)
                {
                case 0:
                    q.rgbRed   = imageData / 2;
                    q.rgbBlue  = imageData / 2;
                    q.rgbGreen = imageData / 2;
                    break;
                case 1:
                    q.rgbRed = imageData;
                    break;
                case 2:
                    q.rgbGreen = imageData;
                    break;
                case 3:
                    q.rgbRed   = imageData / 4;
                    q.rgbGreen = q.rgbRed * 4;   // multiply back instead of dividing again, to avoid values that are not evenly divisible;
                    q.rgbBlue  = q.rgbRed * 4;   // this helps GetTheContour() below avoid missing some cases
                    break;
                case 4:
                    q.rgbBlue  = imageData / 4;
                    q.rgbRed   = q.rgbBlue * 4;
                    q.rgbGreen = q.rgbBlue * 4;
                    break;
                case 5:
                    q.rgbGreen = imageData / 4;
                    q.rgbRed   = q.rgbGreen * 4;
                    q.rgbBlue  = q.rgbGreen * 4;
                    break;
                case 6:
                    q.rgbRed   = imageData / 2;
                    q.rgbGreen = imageData / 2;
                    q.rgbBlue  = q.rgbGreen * 2;
                    break;
                case 7:
                    q.rgbRed   = 255 - (imageData / 2);
                    q.rgbGreen = 255 - (imageData / 2);
                    q.rgbBlue  = 255 - (imageData / 2);
                    break;
                }

                ptr[3 * j]     = q.rgbBlue;
                ptr[3 * j + 1] = q.rgbGreen;
                ptr[3 * j + 2] = q.rgbRed;
            }
        }
    }

    imshow("DepthImage", image);

The resulting image can then be displayed in this final form with OpenCV.
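As a side note, the manual unpacking above (player index in the low 3 bits, depth in the remaining 13 bits) can also be done with SDK helpers, assuming an SDK version (1.6 or later) that provides them:

    USHORT raw = pBufferRun[j];
    USHORT depthInMillimeters = NuiDepthPixelToDepth(raw);        // equivalent to (raw & 0xfff8) >> 3
    USHORT playerIndex        = NuiDepthPixelToPlayerIndex(raw);  // equivalent to raw & 7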

For V2:


    RGBQUAD* m_pDepthRGBX;    // where the converted depth pixels are stored

    m_pDepthRGBX(NULL)        // initialized in the constructor's initializer list

    // create heap storage for depth pixel data in RGBX format
    m_pDepthRGBX = new RGBQUAD[cDepthWidth * cDepthHeight];

Below is the processing of the data after AcquireLatestFrame:

    INT64 nTime = 0;
    IFrameDescription* pFrameDescription = NULL;
    int nWidth = 0;
    int nHeight = 0;
    USHORT nDepthMinReliableDistance = 0;
    USHORT nDepthMaxDistance = 0;
    UINT nBufferSize = 0;
    UINT16* pBuffer = NULL;

    if (SUCCEEDED(hr))
    {
        hr = pDepthFrame->AccessUnderlyingBuffer(&nBufferSize, &pBuffer);
    }

    if (SUCCEEDED(hr))
    {
        ProcessDepth(nTime, pBuffer, nWidth, nHeight, nDepthMinReliableDistance, nDepthMaxDistance);
    }
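In the SDK's DepthBasics sample, the variables declared above are filled from the frame before ProcessDepth is called; the snippet leaves them at 0. A sketch of that missing step, using the standard IDepthFrame/IFrameDescription calls:

    if (SUCCEEDED(hr))
    {
        hr = pDepthFrame->get_RelativeTime(&nTime);                 // frame timestamp
    }
    if (SUCCEEDED(hr))
    {
        hr = pDepthFrame->get_FrameDescription(&pFrameDescription);
    }
    if (SUCCEEDED(hr))
    {
        hr = pFrameDescription->get_Width(&nWidth);                 // 512
    }
    if (SUCCEEDED(hr))
    {
        hr = pFrameDescription->get_Height(&nHeight);               // 424
    }
    if (SUCCEEDED(hr))
    {
        hr = pDepthFrame->get_DepthMinReliableDistance(&nDepthMinReliableDistance);  // in mm
    }
    if (SUCCEEDED(hr))
    {
        hr = pDepthFrame->get_DepthMaxReliableDistance(&nDepthMaxDistance);          // in mm
    }

After ProcessDepth, pFrameDescription and pDepthFrame are released with SafeRelease.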

At this point pBuffer appears to hold the depth data. The remaining questions are: what does the internal structure of this data look like, and how can it be displayed with OpenCV? That is still unclear to me...
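For what it is worth, each element of pBuffer is a UINT16 depth value in millimetres, one per pixel of the 512x424 frame, so it can be wrapped in a cv::Mat and scaled down to 8 bits for display. A minimal sketch, assuming nWidth and nHeight have been filled as above; the window name and the 8000 mm scaling factor are just illustrative choices:

    #include <opencv2/opencv.hpp>

    // Wrap the Kinect buffer without copying: 16-bit, single-channel, nHeight x nWidth.
    cv::Mat depth16(nHeight, nWidth, CV_16UC1, pBuffer);

    // Scale millimetres into a displayable 8-bit range (roughly 0-8 m mapped to 0-255).
    cv::Mat depth8;
    depth16.convertTo(depth8, CV_8UC1, 255.0 / 8000.0);

    cv::imshow("DepthImage_V2", depth8);
    cv::waitKey(1);

Since the wrapped Mat points into the frame's own buffer, it is only valid until pDepthFrame is released; clone it if it needs to live longer.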
