How to convert from the color camera space to the depth camera space in Kinect For Windows


http://nsmoly.wordpress.com/2012/08/03/how-to-convert-from-the-color-camera-space-to-the-depth-camera-space-in-kinect-for-windows/


Posted by nsmoly on August 3, 2012

Kinect has two cameras, video (RGB) and depth (IR), and therefore there are two different coordinate systems in which you can compute things: the depth camera frame of reference (which Kinect's skeleton uses and returns results in) and the color camera coordinate system. The Face Tracking API, which we shipped with the Kinect For Windows SDK 1.5 developer toolkit, computes results in the color camera frame since it relies heavily on RGB data. To use the face tracking 3D results together with the Kinect skeleton, you may want to convert from the color camera space to the depth camera space. Both systems are right-handed coordinate systems with Z pointing out (towards the user) and Y pointing up, but the two systems do not share the same origin, and their axes are not collinear due to the physical offset between the cameras. Therefore you need to convert from one system to the other.

Unfortunately, the Kinect API does not provide this functionality yet. The proper way to convert between the two camera spaces is to calibrate the cameras and use their extrinsic parameters for the conversion, but the Kinect API neither exposes those parameters nor provides any function that does the conversion. So I came up with the code below, which can be used to approximately convert from the color camera space to the depth camera space. This code only approximates the "real" conversion, so keep that in mind when using it. The code is provided "as is" with no warranties; use it at your own risk.

/*
    This function demonstrates a simplified (and approximate) way of converting from the color camera space
    to the depth camera space. It takes a 3D point in the color camera space and returns its coordinates in
    the depth camera space. The algorithm is as follows:
        1) Take a point in the depth camera space that is near the resulting converted 3D point. As a
           "good enough approximation" we take the coordinates of the original color camera space point.
        2) Project the depth camera space point to (u,v) depth image space.
        3) Convert depth image (u,v) coordinates to (u',v') color image coordinates with the Kinect API.
        4) Un-project the converted (u',v') color image point to the 3D color camera space (uses the known Z
           from the depth space).
        5) Find the translation vector between the two spaces as
           translation = colorCameraSpacePoint - depthCameraSpacePoint.
        6) Translate the original passed color camera space 3D point by the inverse of the computed
           translation vector.
    This algorithm is only a rough approximation and assumes that the transformation between camera spaces
    is roughly the same in a small neighbourhood of a given point.
*/
HRESULT ConvertFromColorCameraSpaceToDepthCameraSpace(const XMFLOAT3* pPointInColorCameraSpace, XMFLOAT3* pPointInDepthCameraSpace)
{
    // Camera settings - these should be changed according to the camera mode
    float depthImageWidth = 320.0f;
    float depthImageHeight = 240.0f;
    float depthCameraFocalLengthInPixels = NUI_CAMERA_DEPTH_NOMINAL_FOCAL_LENGTH_IN_PIXELS;
    float colorImageWidth = 640.0f;
    float colorImageHeight = 480.0f;
    float colorCameraFocalLengthInPixels = NUI_CAMERA_COLOR_NOMINAL_FOCAL_LENGTH_IN_PIXELS;

    // Take a point in the depth camera space near the expected resulting point. Here we use the passed
    // color camera space 3D point. We want to convert it from depth camera space back to color camera
    // space to find the shift vector between the spaces. Then we will apply the reverse of this vector
    // to go back from the color camera space to the depth camera space.
    XMFLOAT3 depthCameraSpace3DPoint = *pPointInColorCameraSpace;

    // Project the depth camera space 3D point to the depth image
    XMFLOAT2 depthImage2DPoint;
    depthImage2DPoint.x = depthImageWidth  * 0.5f + ( depthCameraSpace3DPoint.x / depthCameraSpace3DPoint.z ) * depthCameraFocalLengthInPixels;
    depthImage2DPoint.y = depthImageHeight * 0.5f - ( depthCameraSpace3DPoint.y / depthCameraSpace3DPoint.z ) * depthCameraFocalLengthInPixels;

    // Transform from the depth image space to the color image space
    POINT colorImage2DPoint;
    NUI_IMAGE_VIEW_AREA viewArea = { NUI_IMAGE_DIGITAL_ZOOM_1X, 0, 0 };
    HRESULT hr = NuiImageGetColorPixelCoordinatesFromDepthPixel(
        NUI_IMAGE_RESOLUTION_640x480, &viewArea,
        LONG(depthImage2DPoint.x + 0.5f), LONG(depthImage2DPoint.y + 0.5f),
        USHORT(depthCameraSpace3DPoint.z * 1000.0f) << NUI_IMAGE_PLAYER_INDEX_SHIFT,
        &colorImage2DPoint.x, &colorImage2DPoint.y );
    if(FAILED(hr))
    {
        ASSERT(false);
        return hr;
    }

    // Un-project in the color camera space
    XMFLOAT3 colorCameraSpace3DPoint;
    colorCameraSpace3DPoint.z = depthCameraSpace3DPoint.z;
    colorCameraSpace3DPoint.x = (( float(colorImage2DPoint.x) - colorImageWidth*0.5f  ) / colorCameraFocalLengthInPixels) * colorCameraSpace3DPoint.z;
    colorCameraSpace3DPoint.y = ((-float(colorImage2DPoint.y) + colorImageHeight*0.5f ) / colorCameraFocalLengthInPixels) * colorCameraSpace3DPoint.z;

    // Compute the translation from the depth camera space to the color camera space
    XMVECTOR vTranslationFromDepthToColorCameraSpace = XMLoadFloat3(&colorCameraSpace3DPoint) - XMLoadFloat3(&depthCameraSpace3DPoint);

    // Transform the original color camera space 3D point to the depth camera space by applying the
    // inverse of the computed shift vector
    XMVECTOR v3DPointInKinectSkeletonSpace = XMLoadFloat3(pPointInColorCameraSpace) - vTranslationFromDepthToColorCameraSpace;
    XMStoreFloat3(pPointInDepthCameraSpace, v3DPointInKinectSkeletonSpace);

    return S_OK;
}
