How to handle the Kinect skeleton tracking data


http://www.ituring.com.cn/article/196144

Author/Wu Guobin

PhD, PMP, academic cooperation manager at Microsoft Research Asia. He is responsible for the Kinect for Windows Academic Cooperation program and the Kinect theme of the Microsoft Elite Challenge for universities and research institutions in China. He was a lecturer at the Microsoft TechEd 2011 Kinect forum, chairman of the Kinect sub-forum of the Microsoft Asia Education Summit, and head of the Kinect subject at the frontier technology forum of the China Computer Federation.

Skeleton tracking is the core technology of the Kinect. It can accurately locate 20 key points on the human body and track the positions of these 20 points in real time. Using this technology, you can develop interesting applications based on touchless, body-driven interaction.

Structure of skeleton tracking data

Currently, the skeleton API in the Kinect for Windows SDK can provide position information for up to two people in front of the sensor, including detailed posture and the three-dimensional coordinates of each skeleton point. The data is delivered as skeleton frames, each of which holds up to 20 skeleton points, as shown in Figure 1.

Figure 1. Schematic diagram of the 20 skeleton points

In the SDK, each skeleton point is represented by the Joint type, and the 20 points of each frame form a collection of Joint values. This type contains three properties, described below.

JointType: the type of the skeleton point. This is an enumeration listing the specific names of the 20 points; for example, JointType.HandLeft indicates that the point is the left-hand joint.

Position: a value of type SkeletonPoint representing the location of the skeleton point. SkeletonPoint is a struct containing three data members, X, Y, and Z, which store the point's three-dimensional coordinates.

TrackingState: a value of the enumeration type JointTrackingState representing the tracking state of the point. Tracked means the point is being tracked correctly, NotTracked means the point has not been captured, and Inferred means its position is being inferred rather than measured directly.

Seated mode

If your application only needs to capture upper-body posture, you can use the seated mode provided by the Kinect for Windows SDK. In seated mode, the system captures only the 10 upper-body skeleton points and ignores the other 10, which solves the problem of the Kinect failing to recognize a user seated in a chair. Even if the lower-body data is unstable or absent, it does not affect the upper-body skeleton data. In addition, skeleton tracking still works when the user is only 0.4 meters from the Kinect device, which greatly improves the usability of skeleton tracking.

Seated mode is defined in the enumeration type SkeletonTrackingMode, which contains two values: Default and Seated. The former is the default skeleton tracking mode, which captures all 20 skeleton points; the latter is seated mode, which captures only the 10 upper-body points.

The developer sets the skeleton tracking mode through the TrackingMode property of the SkeletonStream object, as shown in the following code:

kinectSensor.SkeletonStream.TrackingMode = SkeletonTrackingMode.Seated;
How skeleton tracking data is obtained

The application obtains the next frame of skeleton data in much the same way it obtains color and depth image data: it calls OpenSkeletonFrame() and receives the data through a buffer. If the latest skeleton data is ready, the system copies it into the buffer; if the application makes a request before new skeleton data is ready, it can either wait until the next frame is ready or return immediately and issue the request again later. The NUI Skeleton API delivers each frame of skeleton data only once.

The NUI Skeleton API provides two application models, the polling model and the event model, which are briefly described below.

The polling model is the simplest way to read skeleton data: the application calls the OpenNextFrame() function of the SkeletonStream class. The declaration of OpenNextFrame() is as follows.

public SkeletonFrame OpenNextFrame(
     int millisecondsWait
)

The parameter specifies how long to wait for the next frame of skeleton data. OpenNextFrame() returns as soon as new data is ready, or when the wait time is exceeded.
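A minimal polling sketch is shown below. It assumes an already-initialized KinectSensor variable named sensor whose SkeletonStream has been enabled (SDK 1.x API); the 100 ms wait value is an arbitrary choice for illustration.

```csharp
// Buffer sized for the maximum number of skeletons per frame.
Skeleton[] skeletonData =
    new Skeleton[sensor.SkeletonStream.FrameSkeletonArrayLength];

// Wait up to 100 ms for the next skeleton frame.
using (SkeletonFrame frame = sensor.SkeletonStream.OpenNextFrame(100))
{
    if (frame != null)   // null means the wait timed out
    {
        frame.CopySkeletonDataTo(skeletonData);
        foreach (Skeleton s in skeletonData)
        {
            // Only fully tracked skeletons carry reliable joint data.
            if (s.TrackingState == SkeletonTrackingState.Tracked)
            {
                Joint head = s.Joints[JointType.Head];
                if (head.TrackingState == JointTrackingState.Tracked)
                    Console.WriteLine("Head at ({0}, {1}, {2})",
                        head.Position.X, head.Position.Y, head.Position.Z);
            }
        }
    }
}
```

Note how the three Joint properties described earlier (JointType, Position, TrackingState) are all consulted before the coordinates are used.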

The event model acquires skeleton data in an event-driven manner, which is more flexible and timely. The application attaches a handler to the SkeletonFrameReady event, which is defined in the KinectSensor class. The handler is invoked as soon as the next frame of skeleton data is ready, and inside it the application retrieves the data by calling OpenSkeletonFrame() on the event arguments.

Example: calling the API to obtain skeleton data and draw it in real time

This example program obtains skeleton data, uses the coordinates of the skeleton points to position 20 Ellipse controls, connects the corresponding points with line segments, and finally maps the drawn skeleton onto the color image. The reader can build this example on top of Example 1, following the steps below.

1. In the Window_Loaded() function, start the skeleton data stream and attach the kinectSensor_SkeletonFrameReady event handler to the SkeletonFrameReady event:

kinectSensor.SkeletonStream.Enable();
kinectSensor.SkeletonFrameReady += new
     EventHandler<SkeletonFrameReadyEventArgs>(kinectSensor_SkeletonFrameReady);

2. Prepare the WPF interface. Add 20 Ellipse controls to the interface with the following code, one for each of the 20 key points tracked by the Kinect for Windows SDK, marking the points with different colors.

<Canvas Name="skeletonCanvas" Visibility="Visible">
    <Ellipse Canvas.Left="0" Canvas.Top="0" Height="10" Name="headPoint" Width="10" Fill="Red"/>
    <Ellipse Canvas.Left="0" Canvas.Top="0" Height="10" Name="shoulderCenterPoint" Width="10" Fill="Blue"/>
    <Ellipse Canvas.Left="0" Canvas.Top="0" Height="10" Name="shoulderRightPoint" Width="10" Fill="Orange"/>
    <!-- ... the remaining Ellipse definitions are omitted ... -->
    <Image Canvas.Left="303" Canvas.Top="161" Name="image1" Stretch="Fill"/>
</Canvas>

At this point, the design window is shown in Figure 2.

Figure 2. WPF design interface

3. Write the kinectSensor_SkeletonFrameReady() event handler. When the Kinect is properly connected, this handler is triggered whenever a user stands in front of the sensor and the Kinect correctly recognizes the human body. The code is as follows:

private void kinectSensor_SkeletonFrameReady(object sender,
    SkeletonFrameReadyEventArgs e)
{
    using (SkeletonFrame skeletonFrame = e.OpenSkeletonFrame())
    {
        if (skeletonFrame != null)
        {
            skeletonData = new
                Skeleton[kinectSensor.SkeletonStream.FrameSkeletonArrayLength];
            skeletonFrame.CopySkeletonDataTo(this.skeletonData);
            Skeleton skeleton = (from s in skeletonData
                                 where s.TrackingState == SkeletonTrackingState.Tracked
                                 select s).FirstOrDefault();
            // ...
        }
    }
}
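To position each ellipse over the corresponding point on the color image, the joint's 3D skeleton-space coordinates must be mapped to 2D color-image pixels. The helper below is a hypothetical sketch of that step: the DrawJoint method and the kinectSensor field are assumptions, and it uses the CoordinateMapper API introduced in SDK 1.6 (earlier SDK versions exposed a similar mapping method directly on KinectSensor).

```csharp
// Hypothetical helper: positions one Ellipse over a joint on the
// color image. Assumes an initialized KinectSensor field named
// kinectSensor and SDK 1.6+.
private void DrawJoint(Joint joint, Ellipse ellipse)
{
    if (joint.TrackingState == JointTrackingState.NotTracked)
    {
        ellipse.Visibility = Visibility.Collapsed;
        return;
    }

    // Map the joint's 3D skeleton-space position to 2D color-image pixels.
    ColorImagePoint point = kinectSensor.CoordinateMapper
        .MapSkeletonPointToColorPoint(joint.Position,
            ColorImageFormat.RgbResolution640x480Fps30);

    // Center the ellipse on the mapped pixel.
    ellipse.Visibility = Visibility.Visible;
    Canvas.SetLeft(ellipse, point.X - ellipse.Width / 2);
    Canvas.SetTop(ellipse, point.Y - ellipse.Height / 2);
}
```

Calling a helper like this once per joint in the SkeletonFrameReady handler keeps the 20 ellipses aligned with the user as the skeleton data updates.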
