Kinect Learning Notes, Part 4: Body
C#
Basic overview:
Kinect 2.0's skeleton recognition is based mainly on the depth image, much like BodyIndex recognition, so it suffers from problems similar to BodyIndex (refer to the BodyIndex learning notes). I feel this is the essence of the Kinect: the real key to the device is its body-recognition algorithm. If only someone could extract it and open-source it.
In the Kinect, a skeleton is represented by 25 joint points, as can be seen from the figure below. When you walk into the Kinect's field of view, the Kinect can find the positions of your 25 joints (of course you have to stand where it can see you; you can't hide), and each position is represented by (x, y, z) coordinates.
The Kinect v2.0 can identify all 25 joint points, but I could only find a diagram for the first-generation device.
The newly added points are the hand tips (index fingers), the thumbs, and the neck.
The 25 joint points (JointType):
- Head, Neck, SpineShoulder (shoulder center), SpineMid (spine), SpineBase (hip center)
- ShoulderRight / ShoulderLeft (shoulders)
- ElbowRight / ElbowLeft (elbows)
- WristRight / WristLeft (wrists)
- HandRight / HandLeft (hands)
- HandTipRight / HandTipLeft (index-finger tips)
- ThumbRight / ThumbLeft (thumbs)
- HipRight / HipLeft (hips)
- KneeRight / KneeLeft (knees)
- AnkleRight / AnkleLeft (ankles)
- FootRight / FootLeft (feet)
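In code, these 25 joints arrive as the `JointType` keys of `Body.Joints`. A minimal sketch (assuming `body` is a tracked `Microsoft.Kinect.Body` obtained from a `BodyFrame`; the frame-acquisition boilerplate is omitted):

```csharp
// Sketch: reading the 25 joint positions of a tracked body (Kinect v2 SDK).
// Assumes 'body' is a Microsoft.Kinect.Body with body.IsTracked == true.
foreach (JointType jointType in body.Joints.Keys)
{
    Joint joint = body.Joints[jointType];
    CameraSpacePoint p = joint.Position;   // meters, in camera (skeleton) space
    Console.WriteLine("{0}: ({1:F2}, {2:F2}, {3:F2})  state={4}",
        jointType, p.X, p.Y, p.Z, joint.TrackingState);
}
```

Each joint also carries a `TrackingState` (Tracked, Inferred, or NotTracked), which the drawing code below uses to pick a brush.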
The position of each joint of the player is represented by (x, y, z) coordinates, in meters. The x, y, and z axes are the physical axes of the depth sensor. The coordinate system is right-handed, with the Kinect sensor at the origin and the positive z axis pointing in the direction the sensor faces. The positive y axis extends upward, and the positive x axis extends to the left (from the sensor's point of view). This coordinate system is called skeleton space (camera space).
The placement of the Kinect affects the generated image. For example, the Kinect may sit on a surface that is not horizontal, or may be tilted vertically to optimize its field of view. In such cases the y axis is often not perpendicular to the ground, i.e. not parallel to the direction of gravity, so a person who is standing upright may still appear unexpectedly tilted in the resulting image.
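If you need to correct for a tilted sensor, the v2 SDK exposes the floor plane it estimates on each `BodyFrame`. A hedged sketch (assuming `bodyFrame` is a valid `Microsoft.Kinect.BodyFrame`; the frame-acquisition boilerplate is omitted):

```csharp
// Sketch: reading the floor plane estimated by the sensor (Kinect v2 SDK).
// FloorClipPlane is a Vector4 (X, Y, Z, W): (X, Y, Z) is the floor's unit
// normal in camera space and W is the sensor's height above the floor in
// meters. It can be used to compensate for a tilted mounting.
Vector4 floorPlane = bodyFrame.FloorClipPlane;
double tiltRadians = Math.Atan2(floorPlane.Z, floorPlane.Y);
Console.WriteLine("Sensor tilt: {0:F1} degrees, height: {1:F2} m",
    tiltRadians * 180.0 / Math.PI, floorPlane.W);
```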
However, the skeleton coordinate system is not the same as the depth or color image coordinate systems, nor the same as the coordinate system of the UI. In practice you will generally need a coordinate-mapping transformation /* it is all the choice of Steins;Gate */, and the Kinect SDK provides corresponding methods for it.
For an explanation of coordinate transformation, see Chapter 4 (p. 93) of the first-generation book "Kinect Application Development in Action: Talking to Machines in the Most Natural Way", and "Kinect Human-Computer Interaction Development Practice". Books about the second generation are very scarce; if you know of any, please tell me. (It is worth noting that when drawing we discard the z value of a joint's three-dimensional coordinates and use only the x and y values. The Kinect finally gives us depth data (the z value) for every joint, and not using it seems wasteful. Not really: we do use the joint's z value, just not directly on the UI. Depth data is required in the coordinate-space transformation. Try setting the z value of a joint's position to 0 in the GetJointPoint method and then calling MapSkeletonPointToDepth; you will find that the x and y values of the returned object are both 0. You can also try scaling the image by the z value: the size of the image varies inversely with the z value (depth). That is, the smaller the depth value, the larger the image; in other words, the closer the person is to the Kinect, the larger the skeleton appears.)
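With the second-generation SDK the mapping is done through CoordinateMapper. A minimal sketch (assuming `coordinateMapper` came from `kinectSensor.CoordinateMapper` and `joint` is an entry of `body.Joints`; the clamp value mirrors the InferredZPositionClamp constant in the code below):

```csharp
// Sketch: mapping a joint from camera space to depth space (Kinect v2 SDK).
CameraSpacePoint position = joint.Position;
if (position.Z < 0)
{
    // MapCameraPointToDepthSpace cannot handle negative Z values,
    // so clamp them to a small positive depth.
    position.Z = 0.1f; // same idea as InferredZPositionClamp below
}
DepthSpacePoint depthPoint = coordinateMapper.MapCameraPointToDepthSpace(position);
// depthPoint.X / depthPoint.Y are pixel coordinates in the 512x424 depth image,
// which can then be scaled to the UI.
```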
Body code analysis (C#):
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Data;
using System.Windows.Documents;
using System.Windows.Input;
using System.Windows.Media;
using System.Windows.Media.Imaging;
using System.Windows.Navigation;
using System.Windows.Shapes;
using Microsoft.Kinect;
namespace MyBodyViewer
{
    /// <summary>
    /// Interaction logic for MainWindow.xaml
    /// </summary>
    public partial class MainWindow : Window
    {
        // Radius of the circles drawn for the hands
        private const double HandSize = 30;

        // Thickness of drawn joint lines
        private const double JointThickness = 3;

        // Thickness of the clip edge rectangles
        private const double ClipBoundsThickness = 10;

        // Constant for clamping Z values of camera space points from being negative
        private const float InferredZPositionClamp = 0.1f;

        // Brush used for drawing hands that are currently tracked as closed
        private readonly Brush handClosedBrush = new SolidColorBrush(Color.FromArgb(128, 255, 0, 0));

        // Brush used for drawing hands that are currently tracked as open
        private readonly Brush handOpenBrush = new SolidColorBrush(Color.FromArgb(128, 0, 255, 0));

        // Brush used for drawing hands that are currently tracked as in the lasso (pointer) position
        private readonly Brush handLassoBrush = new SolidColorBrush(Color.FromArgb(128, 0, 0, 255));

        // Brush used for drawing joints that are currently tracked
        private readonly Brush trackedJointBrush = new SolidColorBrush(Color.FromArgb(255, 68, 192, 68));

        // Brush used for drawing joints that are currently inferred
        private readonly Brush inferredJointBrush = Brushes.Yellow;

        // Pen used for drawing bones that are currently inferred
        private readonly Pen inferredBonePen = new Pen(Brushes.Gray, 1);

        // Drawing group for body rendering output
        // (a DrawingGroup represents a collection of drawings that can be
        // operated upon as a single drawing)
        private DrawingGroup drawingGroup;

        // Drawing image source that we will display
        private DrawingImage imageSource;

        // Coordinate mapper (maps one type of point to another)
        private CoordinateMapper coordinateMapper = null;

        // Active Kinect sensor
        private KinectSensor kinectSensor = null;

        private