Talking about NiTE 2 combined with OpenCV, part 2: extracting human skeleton coordinates


The basic use of NiTE 2 involves the following steps (a minimal sketch putting them all together follows the list):

1. Initialize the NiTE environment: nite::NiTE::initialize();

2. Create a User Tracker: nite::UserTracker mUserTracker; mUserTracker.create();

3. Create and read the User Frame information: nite::UserTrackerFrameRef mUserFrame; mUserTracker.readFrame(&mUserFrame);

4. Get the User information from the User Frame: const nite::Array<nite::UserData>& aUsers = mUserFrame.getUsers(); then start skeleton tracking and recognition based on the User information.

5. Release the Frame information: mUserFrame.release();

6. Shut down the tracker: mUserTracker.destroy();

7. Shut down the NiTE environment: nite::NiTE::shutdown();
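
Putting those seven steps together, here is a minimal sketch. The calls are exactly the ones listed above; the nite::Status checks are my addition for illustration:

// minimal-nite2-skeleton.cpp: a bare-bones sketch of the seven steps above
// (the nite::Status checks are added for illustration)
#include <iostream>
#include <NiTE.h>

int main()
{
    // 1. initialize the NiTE environment
    if (nite::NiTE::initialize() != nite::STATUS_OK)
        return -1;

    // 2. create a User Tracker
    nite::UserTracker mUserTracker;
    if (mUserTracker.create() != nite::STATUS_OK)
        return -1;

    // 3. read one User Frame
    nite::UserTrackerFrameRef mUserFrame;
    if (mUserTracker.readFrame(&mUserFrame) == nite::STATUS_OK)
    {
        // 4. get the User information from the frame
        const nite::Array<nite::UserData>& aUsers = mUserFrame.getUsers();
        std::cout << "users in frame: " << aUsers.getSize() << std::endl;
    }

    // 5. release the Frame information
    mUserFrame.release();

    // 6. shut down the tracker
    mUserTracker.destroy();

    // 7. shut down the NiTE environment
    nite::NiTE::shutdown();
    return 0;
}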

The following is a simple program that tracks the right-hand skeleton joint and prints its coordinates:

// YeNITE2SimpleUsingOpenCV_Skeleton.cpp : defines the entry point of the console application.
#include "stdafx.h"
#include <iostream>

// load the NiTE.h header file
#include <NiTE.h>

using namespace std;

int main(int argc, char** argv)
{
    // initialize NiTE
    nite::NiTE::initialize();

    // create the User Tracker
    nite::UserTracker mUserTracker;
    mUserTracker.create();

    nite::UserTrackerFrameRef mUserFrame;
    for (int i = 0; i < 1000; ++i)
    {
        // read the User Frame information
        mUserTracker.readFrame(&mUserFrame);

        // get the User information from the User Frame,
        // then loop over the users in the frame
        const nite::Array<nite::UserData>& aUsers = mUserFrame.getUsers();
        for (int u = 0; u < aUsers.getSize(); ++u)
        {
            const nite::UserData& rUser = aUsers[u];

            // when a new user appears in front of the Kinect, start skeleton tracking
            if (rUser.isNew())
            {
                cout << "New User [" << rUser.getId() << "] found." << endl;
                mUserTracker.startSkeletonTracking(rUser.getId());
            }

            // get the skeleton
            const nite::Skeleton& rSkeleton = rUser.getSkeleton();
            if (rSkeleton.getState() == nite::SKELETON_TRACKED)
            {
                // get the right-hand joint coordinates
                const nite::SkeletonJoint& righthand = rSkeleton.getJoint(nite::JOINT_RIGHT_HAND);
                const nite::Point3f& position = righthand.getPosition();
                cout << "right hand coordinates: " << position.x << "/"
                     << position.y << "/" << position.z << endl;
            }
        }
    }

    // release the Frame information
    mUserFrame.release();

    // shut down the tracker
    mUserTracker.destroy();

    // shut down the NiTE environment
    nite::NiTE::shutdown();

    return 0;
}

When run, the program prints "New User [...] found." when a user appears, followed by a stream of right-hand coordinates (the original output screenshot is not reproduced here).

However, looking at the code above, you may notice that the skeleton coordinates are obtained without the user performing any calibration pose (such as the "surrender"/psi pose or crossing the arms over the chest). Does NiTE 2 no longer require pose detection for skeleton tracking? If so, pose detection seems to have become something of little practical value (make of that what you will...).
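
For completeness, NiTE 2 does still expose explicit pose detection, so you can gate skeleton tracking on a pose if you want the old calibration-style behavior. A hedged sketch, assuming the nite::POSE_PSI pose type and the startPoseDetection()/getPose()/stopPoseDetection() calls declared in the NiTE 2 headers (verify against your SDK version):

// pose-gated tracking sketch: only start skeleton tracking after the user
// holds the "psi" (surrender) pose; assumes nite::POSE_PSI and the
// startPoseDetection()/getPose() calls from the NiTE 2 headers
#include <iostream>
#include <NiTE.h>

using namespace std;

int main()
{
    nite::NiTE::initialize();

    nite::UserTracker mUserTracker;
    mUserTracker.create();

    nite::UserTrackerFrameRef mUserFrame;
    for (int i = 0; i < 1000; ++i)
    {
        mUserTracker.readFrame(&mUserFrame);
        const nite::Array<nite::UserData>& aUsers = mUserFrame.getUsers();
        for (int u = 0; u < aUsers.getSize(); ++u)
        {
            const nite::UserData& rUser = aUsers[u];

            if (rUser.isNew())
            {
                // watch the new user for the psi pose instead of
                // starting skeleton tracking immediately
                mUserTracker.startPoseDetection(rUser.getId(), nite::POSE_PSI);
            }

            const nite::PoseData& rPose = rUser.getPose(nite::POSE_PSI);
            if (rPose.isHeld())
            {
                // the pose is held: only now start skeleton tracking
                cout << "User [" << rUser.getId() << "] held the psi pose." << endl;
                mUserTracker.startSkeletonTracking(rUser.getId());
                mUserTracker.stopPoseDetection(rUser.getId(), nite::POSE_PSI);
            }
        }
    }

    mUserFrame.release();
    mUserTracker.destroy();
    nite::NiTE::shutdown();
    return 0;
}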

Next, with the help of a commonly used library such as OpenCV, we can see how the skeleton coordinates are located and displayed on the depth image. Straight to the code:

// YeNite2SimpleUsingOpenCV.cpp : defines the entry point of the console application.
#include "stdafx.h"
#include <iostream>

// load the NiTE.h header file
#include <NiTE.h>

// load the OpenCV header files
#include "opencv2/opencv.hpp"
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>

using namespace std;
using namespace cv;

int main(int argc, char** argv)
{
    // initialize NiTE
    nite::NiTE::initialize();

    // create a User Tracker
    nite::UserTracker* mUserTracker = new nite::UserTracker;
    mUserTracker->create();

    // create the OpenCV image window
    namedWindow("Skeleton Image", CV_WINDOW_AUTOSIZE);

    // read the data stream in a loop, keeping it in the frame reference
    nite::UserTrackerFrameRef mUserFrame;
    while (true)
    {
        // read the Frame information
        nite::Status rc = mUserTracker->readFrame(&mUserFrame);
        if (rc != nite::STATUS_OK)
        {
            cout << "GetNextData failed" << endl;
            return 0;
        }

        // wrap the depth data in an OpenCV Mat
        const cv::Mat mHandDepth(mUserFrame.getDepthFrame().getHeight(),
                                 mUserFrame.getDepthFrame().getWidth(),
                                 CV_16UC1,
                                 (void*)mUserFrame.getDepthFrame().getData());

        // to make the depth image visible, convert CV_16UC1 ==> CV_8U
        cv::Mat mScaledHandDepth, thresholdDepth;
        mHandDepth.convertTo(mScaledHandDepth, CV_8U, 255.0 / 10000);

        // binarize for a clearer display
        cv::threshold(mScaledHandDepth, thresholdDepth, 50, 255, 0);

        // get the User information from the User Frame,
        // then loop over the users in the frame
        const nite::Array<nite::UserData>& aUsers = mUserFrame.getUsers();
        for (int i = 0; i < aUsers.getSize(); ++i)
        {
            const nite::UserData& rUser = aUsers[i];

            // when a new user appears in front of the Kinect, start skeleton tracking
            if (rUser.isNew())
            {
                cout << "New User [" << rUser.getId() << "] found." << endl;
                mUserTracker->startSkeletonTracking(rUser.getId());
            }

            // get the skeleton
            const nite::Skeleton& rSkeleton = rUser.getSkeleton();
            if (rSkeleton.getState() == nite::SKELETON_TRACKED)
            {
                // only take the coordinates of the first eight joints
                for (int j = 0; j < 8; ++j)
                {
                    // get the joint coordinates
                    const nite::SkeletonJoint& skeletonJoint = rSkeleton.getJoint((nite::JointType)j);
                    const nite::Point3f& position = skeletonJoint.getPosition();

                    // map the joint's world coordinates to depth coordinates
                    float depth_x, depth_y;
                    mUserTracker->convertJointCoordinatesToDepth(position.x, position.y, position.z,
                                                                 &depth_x, &depth_y);
                    cv::Point point((int)depth_x, (int)depth_y);

                    // set the corresponding pixel in the depth image to 255,
                    // i.e. draw each joint as a white dot in the depth image
                    thresholdDepth.at<uchar>(point) = 255;
                }

                // display the image
                cv::imshow("Skeleton Image", thresholdDepth);
            }
        }

        // quit hotkey
        if (cv::waitKey(1) == 'q')
            break;
    }

    // release the Frame
    mUserFrame.release();

    // shut down the tracker
    mUserTracker->destroy();

    // shut down the NiTE environment
    nite::NiTE::shutdown();

    return 0;
}

(Figure omitted: in the displayed depth image, the white dots are the skeleton points.)
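
One easy refinement (my addition, not part of the original program): each joint carries a position confidence, and low-confidence joints tend to jitter. Below is a sketch of a drop-in replacement for the inner joint loop above, assuming SkeletonJoint::getPositionConfidence() from the NiTE 2 headers; the 0.5f threshold and the 3-pixel circle radius are arbitrary illustration values:

#include <NiTE.h>
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

// Draw tracked joints into a CV_8U depth image, skipping low-confidence ones.
// A sketch meant as a drop-in replacement for the inner joint loop above.
void drawJoints(nite::UserTracker& tracker, const nite::Skeleton& skeleton,
                cv::Mat& image, int jointCount = 8)
{
    for (int j = 0; j < jointCount; ++j)
    {
        const nite::SkeletonJoint& joint = skeleton.getJoint((nite::JointType)j);

        // skip joints whose estimated position is unreliable
        if (joint.getPositionConfidence() < 0.5f)
            continue;

        // map world coordinates to depth-image coordinates
        const nite::Point3f& p = joint.getPosition();
        float x, y;
        tracker.convertJointCoordinatesToDepth(p.x, p.y, p.z, &x, &y);

        // a filled 3-pixel circle is easier to see than a single white pixel
        cv::circle(image, cv::Point((int)x, (int)y), 3, cv::Scalar(255), -1);
    }
}

In the program above it would be called as drawJoints(*mUserTracker, rSkeleton, thresholdDepth); in place of the inner for loop.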

You can refer to the documentation on the official website to learn the specifics of each skeleton joint (position, orientation, and confidence). Unfortunately, the 15 skeleton joints currently provided do not include finer joints such as the wrist; only the whole-body joints are available.
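
For reference, here is a sketch that iterates and prints all 15 joints. The names follow the nite::JointType enum order as I understand it from the NiTE 2 headers, so double-check them against your SDK:

// print all 15 NiTE 2 joints with position and confidence
// (joint names assumed to follow the nite::JointType enum order)
#include <iostream>
#include <NiTE.h>

static const char* kJointNames[15] = {
    "HEAD", "NECK",
    "LEFT_SHOULDER", "RIGHT_SHOULDER",
    "LEFT_ELBOW", "RIGHT_ELBOW",
    "LEFT_HAND", "RIGHT_HAND",
    "TORSO",
    "LEFT_HIP", "RIGHT_HIP",
    "LEFT_KNEE", "RIGHT_KNEE",
    "LEFT_FOOT", "RIGHT_FOOT"
};

void printAllJoints(const nite::Skeleton& skeleton)
{
    for (int j = 0; j < 15; ++j)
    {
        const nite::SkeletonJoint& joint = skeleton.getJoint((nite::JointType)j);
        const nite::Point3f& p = joint.getPosition();
        std::cout << kJointNames[j] << ": "
                  << p.x << "/" << p.y << "/" << p.z
                  << " (confidence " << joint.getPositionConfidence() << ")"
                  << std::endl;
    }
}

Calling printAllJoints(rSkeleton) inside the SKELETON_TRACKED branch of either program above would dump the full joint set each frame.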

Combined with the program comments and the previous blog posts, the last program should be fairly easy to understand. Honestly, the code is written in a rough-and-ready way: no encapsulation, optimization, or refactoring, completely process-oriented, and there are surely details that can be improved. I will optimize it further later.

This write-up is rough; criticism is welcome ~~~

 
