Kinect Development Tutorial 5: Getting the Human Skeleton with OpenNI


With graduation approaching, Xiao Jin has been busy with related matters lately, so the tutorial stalled for a while. The previous tutorials introduced some basic OpenNI examples and their gesture applications. But using Kinect only to recognize a few gestures feels rather limited: in most somatosensory applications, obtaining the skeleton is an indispensable step, and it is also a topic Xiao Jin has long wanted to write about.

Okay, let's get down to business!

In the OpenNI library's XnSkeletonJoint enum, 24 human joints are defined as follows:

XN_SKEL_HEAD = 1,            XN_SKEL_NECK = 2,
XN_SKEL_TORSO = 3,           XN_SKEL_WAIST = 4,
XN_SKEL_LEFT_COLLAR = 5,     XN_SKEL_LEFT_SHOULDER = 6,
XN_SKEL_LEFT_ELBOW = 7,      XN_SKEL_LEFT_WRIST = 8,
XN_SKEL_LEFT_HAND = 9,       XN_SKEL_LEFT_FINGERTIP = 10,
XN_SKEL_RIGHT_COLLAR = 11,   XN_SKEL_RIGHT_SHOULDER = 12,
XN_SKEL_RIGHT_ELBOW = 13,    XN_SKEL_RIGHT_WRIST = 14,
XN_SKEL_RIGHT_HAND = 15,     XN_SKEL_RIGHT_FINGERTIP = 16,
XN_SKEL_LEFT_HIP = 17,       XN_SKEL_LEFT_KNEE = 18,
XN_SKEL_LEFT_ANKLE = 19,     XN_SKEL_LEFT_FOOT = 20,
XN_SKEL_RIGHT_HIP = 21,      XN_SKEL_RIGHT_KNEE = 22,
XN_SKEL_RIGHT_ANKLE = 23,    XN_SKEL_RIGHT_FOOT = 24

In Xiao Jin's tests, only 14 of these joints currently return usable position data.

First, the complete code:

#include <stdlib.h>
#include <iostream>
#include <vector>
#include <XnCppWrapper.h>
#include <XnModuleCppInterface.h>
#include "cv.h"
#include "highgui.h"

using namespace std;
using namespace cv;

//#pragma comment (lib,"cv210")
//#pragma comment (lib,"cxcore210")
//#pragma comment (lib,"highgui210")
//#pragma comment (lib,"OpenNI")

//【1】
xn::UserGenerator userGenerator;
xn::DepthGenerator depthGenerator;
xn::ImageGenerator imageGenerator;

// Joint IDs run from XN_SKEL_HEAD = 1 to XN_SKEL_RIGHT_FOOT = 24; see the XnSkeletonJoint table above.
// A line will be drawn between each start point and the corresponding end point.
int startSkelPoints[14] = {1,2,6,6,12,17,6,7,12,13,17,18,21,22};
int endSkelPoints[14]   = {2,3,12,21,17,21,7,9,13,15,18,20,22,24};

// callback function of user generator: new user
void XN_CALLBACK_TYPE NewUser( xn::UserGenerator& generator, XnUserID user, void* pCookie )
{
    cout << "New user identified: " << user << endl;
    //userGenerator.GetSkeletonCap().LoadCalibrationDataFromFile( user, "UserCalibration.txt" );
    generator.GetPoseDetectionCap().StartPoseDetection( "Psi", user );
}

// callback function of user generator: lost user
void XN_CALLBACK_TYPE LostUser( xn::UserGenerator& generator, XnUserID user, void* pCookie )
{
    cout << "User " << user << " lost" << endl;
}

// callback function of skeleton: calibration start
void XN_CALLBACK_TYPE CalibrationStart( xn::SkeletonCapability& skeleton, XnUserID user, void* pCookie )
{
    cout << "Calibration start for user " << user << endl;
}

// callback function of skeleton: calibration end
void XN_CALLBACK_TYPE CalibrationEnd( xn::SkeletonCapability& skeleton, XnUserID user, XnCalibrationStatus calibrationError, void* pCookie )
{
    cout << "Calibration complete for user " << user << ", ";
    if( calibrationError == XN_CALIBRATION_STATUS_OK )
    {
        cout << "Success" << endl;
        skeleton.StartTracking( user );
        //userGenerator.GetSkeletonCap().SaveCalibrationDataToFile( user, "UserCalibration.txt" );
    }
    else
    {
        cout << "Failure" << endl;
        // For the current version of OpenNI, only the Psi pose is available.
        ((xn::UserGenerator*)pCookie)->GetPoseDetectionCap().StartPoseDetection( "Psi", user );
    }
}

// callback function of pose detection: pose detected
void XN_CALLBACK_TYPE PoseDetected( xn::PoseDetectionCapability& poseDetection, const XnChar* strPose, XnUserID user, void* pCookie )
{
    cout << "Pose " << strPose << " detected for user " << user << endl;
    ((xn::UserGenerator*)pCookie)->GetSkeletonCap().RequestCalibration( user, FALSE );
    poseDetection.StopPoseDetection( user );
}

void clearImg( IplImage* inputimg )
{
    CvFont font;
    cvInitFont( &font, CV_FONT_VECTOR0, 1, 1, 0, 3, 5 );
    memset( inputimg->imageData, 255, 640*480*3 );
}

int main( int argc, char** argv )
{
    char key = 0;
    int imgPosX = 0;
    int imgPosY = 0;

    // initialize context
    xn::Context context;
    context.Init();
    xn::ImageMetaData imageMD;

    IplImage* cameraImg = cvCreateImage( cvSize(640,480), IPL_DEPTH_8U, 3 );
    cvNamedWindow( "Camera", 1 );

    // map output mode
    XnMapOutputMode mapMode;
    mapMode.nXRes = 640;
    mapMode.nYRes = 480;
    mapMode.nFPS  = 30;

    // create generators
    depthGenerator.Create( context );
    depthGenerator.SetMapOutputMode( mapMode );
    imageGenerator.Create( context );
    userGenerator.Create( context );

    //【2】
    // register callback functions of the user generator
    XnCallbackHandle userCBHandle;
    userGenerator.RegisterUserCallbacks( NewUser, LostUser, NULL, userCBHandle );

    //【3】
    // register callback functions of the skeleton capability
    xn::SkeletonCapability skeletonCap = userGenerator.GetSkeletonCap();
    skeletonCap.SetSkeletonProfile( XN_SKEL_PROFILE_ALL );
    XnCallbackHandle calibCBHandle;
    skeletonCap.RegisterToCalibrationStart( CalibrationStart, &userGenerator, calibCBHandle );
    skeletonCap.RegisterToCalibrationComplete( CalibrationEnd, &userGenerator, calibCBHandle );

    //【4】
    // register callback functions of the pose detection capability
    XnCallbackHandle poseCBHandle;
    userGenerator.GetPoseDetectionCap().RegisterToPoseDetected( PoseDetected, &userGenerator, poseCBHandle );

    // start generating data
    context.StartGeneratingAll();

    while( key != 27 )
    {
        context.WaitAndUpdateAll();
        imageGenerator.GetMetaData( imageMD );
        memcpy( cameraImg->imageData, imageMD.Data(), 640*480*3 );
        cvCvtColor( cameraImg, cameraImg, CV_RGB2BGR );

        // get users
        XnUInt16 userCounts = userGenerator.GetNumberOfUsers();
        if( userCounts > 0 )
        {
            XnUserID* userID = new XnUserID[userCounts];
            userGenerator.GetUsers( userID, userCounts );
            for( int i = 0; i < userCounts; ++i )
            {
                //【5】
                // if this user's skeleton is being tracked
                if( skeletonCap.IsTracking( userID[i] ) )
                {
                    XnPoint3D skelPointsIn[24], skelPointsOut[24];
                    XnSkeletonJointTransformation mJointTran;
                    for( int iter = 0; iter < 24; iter++ )
                    {
                        // XnSkeletonJoint values run from 1 to 24
                        skeletonCap.GetSkeletonJoint( userID[i], XnSkeletonJoint(iter+1), mJointTran );
                        skelPointsIn[iter] = mJointTran.position.position;
                    }
                    depthGenerator.ConvertRealWorldToProjective( 24, skelPointsIn, skelPointsOut );

                    //【6】
                    for( int d = 0; d < 14; d++ )
                    {
                        CvPoint startpoint = cvPoint( skelPointsOut[startSkelPoints[d]-1].X, skelPointsOut[startSkelPoints[d]-1].Y );
                        CvPoint endpoint   = cvPoint( skelPointsOut[endSkelPoints[d]-1].X,   skelPointsOut[endSkelPoints[d]-1].Y );
                        cvCircle( cameraImg, startpoint, 3, CV_RGB(0,0,255), 12 );
                        cvCircle( cameraImg, endpoint,   3, CV_RGB(0,0,255), 12 );
                        cvLine( cameraImg, startpoint, endpoint, CV_RGB(0,0,255), 4 );
                    }
                }
            }
            delete [] userID;
        }

        cvShowImage( "Camera", cameraImg );
        key = cvWaitKey( 20 );
    }

    // stop and shut down
    cvDestroyWindow( "Camera" );
    cvReleaseImage( &cameraImg );
    context.StopGeneratingAll();
    context.Shutdown();
    return 0;
}


[1] For acquiring the human skeleton, Xiao Jin declares a UserGenerator. A UserGenerator can detect the appearance or departure of users (hereafter simply "users") and obtain the number of users in the image as well as their location information; the callback-registration pattern is similar to the GestureGenerator introduced in the previous tutorial. Once callback functions are registered, the corresponding callback is invoked as soon as the relevant event (such as a user appearing) is detected.
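As a small aside, the UserGenerator can also be polled directly. Here is a minimal sketch of querying the user count and each user's position; the GetCoM() call (center of mass) does not appear in the listing above and is an assumption based on the standard OpenNI 1.x UserGenerator API:

// Assumes userGenerator has been created on a running context, as in main() above.
XnUInt16 nUsers = userGenerator.GetNumberOfUsers();   // how many users are currently visible
if( nUsers > 0 )
{
    XnUserID* aUsers = new XnUserID[nUsers];
    userGenerator.GetUsers( aUsers, nUsers );         // fill in the user IDs
    for( int i = 0; i < nUsers; ++i )
    {
        XnPoint3D com;
        userGenerator.GetCoM( aUsers[i], com );       // assumed OpenNI 1.x call: center of mass in real-world mm
        cout << "User " << aUsers[i] << " at ("
             << com.X << ", " << com.Y << ", " << com.Z << ")" << endl;
    }
    delete [] aUsers;
}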

[2] Xiao Jin registers two callback functions with the UserGenerator: NewUser and LostUser, which fire when a user appears and when a user disappears, respectively.

[3] A new capability makes its appearance here: SkeletonCapability. To avoid confusion, Xiao Jin prefers to think of a capability as an ability of a generator; SkeletonCapability, for example, can be understood as the UserGenerator's ability to obtain a person's skeleton information.

Before the skeleton can be obtained, the user must first be calibrated. SkeletonCapability therefore registers two callback functions, CalibrationStart and CalibrationEnd, which are called when user calibration starts and ends. (In earlier versions of OpenNI the interface names may differ.)
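The listing above also carries two commented-out lines hinting at a shortcut: once one calibration has succeeded, its data can be saved to a file and reloaded for later users, skipping the Psi pose entirely. A minimal sketch of that idea, using the "UserCalibration.txt" placeholder from those comments (note that reusing one user's calibration for another body is an approximation):

// In CalibrationEnd, after a successful calibration:
skeleton.SaveCalibrationDataToFile( user, "UserCalibration.txt" );

// In NewUser, instead of starting pose detection:
userGenerator.GetSkeletonCap().LoadCalibrationDataFromFile( user, "UserCalibration.txt" );
userGenerator.GetSkeletonCap().StartTracking( user );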

[4] Similar to [3], userGenerator.GetPoseDetectionCap() obtains a PoseDetectionCapability, which can detect specific postures of a user. Currently only the Psi pose is supported. Xiao Jin registers the callback function PoseDetected for it; this function is called when the user's Psi pose is detected.
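Note the role of the cookie parameter here: the address of the user generator is passed along at registration so that the plain static callback can reach it again. This is exactly what the listing above does:

XnCallbackHandle poseCBHandle;
userGenerator.GetPoseDetectionCap().RegisterToPoseDetected( PoseDetected, &userGenerator, poseCBHandle );

// Inside PoseDetected, the cookie is cast back to the generator:
((xn::UserGenerator*)pCookie)->GetSkeletonCap().RequestCalibration( user, FALSE );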

Chaining together the callbacks from [2], [3], and [4]: (1) when a user appears, NewUser() is triggered and starts pose detection; (2) when the pose is detected, PoseDetected() is triggered and requests calibration; (3) CalibrationStart() is triggered when calibration begins; (4) CalibrationEnd() is triggered when calibration finishes, and if calibration succeeded, SkeletonCapability's StartTracking() is called to begin tracking that user.
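Given the cout statements in the listing, a successful run of this chain prints something like the following on the console (the user ID will vary):

New user identified: 1
Pose Psi detected for user 1
Calibration start for user 1
Calibration complete for user 1, Success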

[5] Using the GetSkeletonJoint() method, you can obtain the XnSkeletonJointTransformation of the corresponding joint. This struct contains position and orientation; position in turn holds a position and an fConfidence, which are the joint's location and its confidence, and orientation likewise holds the joint's orientation and its confidence. Here Xiao Jin loops over all 24 joints, but only 14 of them actually return location information.

The position obtained this way is a 3D coordinate in the real-world scene; it must be projected into screen coordinates before drawing, and that conversion is done by ConvertRealWorldToProjective().
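Here is a minimal sketch of reading a single joint together with its confidence and projecting it to screen coordinates; the 0.5 confidence threshold is an arbitrary choice for illustration, not something the listing above enforces:

// Assumes skeletonCap is tracking 'user', and depthGenerator/cameraImg exist as in main().
XnSkeletonJointTransformation headTran;
skeletonCap.GetSkeletonJoint( user, XN_SKEL_HEAD, headTran );
if( headTran.position.fConfidence > 0.5 )                      // skip low-confidence joints
{
    XnPoint3D realWorld = headTran.position.position;          // real-world coordinates (mm)
    XnPoint3D projective;
    depthGenerator.ConvertRealWorldToProjective( 1, &realWorld, &projective );
    cvCircle( cameraImg, cvPoint( projective.X, projective.Y ), 3, CV_RGB(255,0,0), 12 );
}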

[6] To make the output more intuitive, the joints can be connected with straight lines to form a human skeleton. Xiao Jin defines the startSkelPoints and endSkelPoints arrays; their values correspond one to one, each pair naming the start and end joints of one line segment, and each pair is connected with a straight line, for example head to neck and neck to torso, as spelled out below.
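Reading the two arrays against the XnSkeletonJoint table, the 14 segments are (derived from startSkelPoints/endSkelPoints above, with the XN_SKEL_ prefix dropped for brevity):

HEAD -> NECK                      NECK -> TORSO
LEFT_SHOULDER -> RIGHT_SHOULDER   LEFT_SHOULDER -> RIGHT_HIP
RIGHT_SHOULDER -> LEFT_HIP        LEFT_HIP -> RIGHT_HIP
LEFT_SHOULDER -> LEFT_ELBOW       LEFT_ELBOW -> LEFT_HAND
RIGHT_SHOULDER -> RIGHT_ELBOW     RIGHT_ELBOW -> RIGHT_HAND
LEFT_HIP -> LEFT_KNEE             LEFT_KNEE -> LEFT_FOOT
RIGHT_HIP -> RIGHT_KNEE           RIGHT_KNEE -> RIGHT_FOOT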

 

After the program starts, face the camera (at least the head and upper body must be visible); the console will print "New user identified". Then strike the Psi pose. Once "Pose Psi detected" appears, the program begins calibration, so hold the Psi pose for a few seconds. After calibration succeeds, the skeleton is drawn correctly. Have a good time.

----------------------------------

Author: Chen Jin

This is an original article. If you repost or quote it, please credit the original author and link. Thank you.

 
