The previous article showed the main objects involved in the skeleton tracking system, with examples of skeletal data drawn on the UI, and then discussed the object model behind skeletal tracking in detail. But understanding the basics is one thing; being able to build a complete, usable application is another. This article shows how to apply these objects to create a complete Kinect application by walking through a simple Kinect game, which should deepen your understanding of the objects involved in Kinect skeleton tracking.
1. Kinect Connect-the-Dots Game
Most of us solved this kind of puzzle as children: a sheet of paper with a series of numbered dots that you connect, in order from smallest to largest, by drawing lines between them. The logic of the game is simple, but what we are going to build here connects those dots without a pen or a mouse.
This little game is obviously not as complicated as a first-person shooter, but it is a good place to start. We use the skeleton tracking engine to collect the player's joint data, perform calculations on it, and render the results on the UI. The game demonstrates the concept of the Natural User Interface (NUI) and the most common form of interaction in Kinect development: hand tracking. It uses only WPF's drawing features, with no fancy images or animation effects; those can be added gradually later.
Before writing any code, we need to clearly define the goals of our game. Connect-the-dots is a puzzle game in which players connect numbers in order from smallest to largest. The program can customize the numbered dots and their locations (collectively, a level). Each level consists of a series of numbered dots and their positions. We will create a DotPuzzle class to manage the collection of these point objects. At first this class may seem unnecessary (a plain collection would do), but having a class makes it easier to add functionality later. These points are used in two places in the program: first to draw the level on the interface, and second to determine whether the user has touched one of the points.
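A minimal sketch of what DotPuzzle might look like is shown below; the class name comes from the text, while the use of System.Windows.Point to store dot positions is an assumption:

using System.Collections.Generic;
using System.Windows;

public class DotPuzzle
{
    public DotPuzzle()
    {
        this.Dots = new List<Point>();
    }

    // The dots of a level, stored in the order the player must connect them.
    public List<Point> Dots { get; set; }
}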
When the user touches a dot, the program begins to draw: the line starts at the dot that was touched, and the next dot the user touches becomes the end point of that line. That dot then serves as the start of the next line, and so on, until the last dot is connected back to the first. The level is then complete and the game is over.
Once the rules of the game are defined, we can start coding; as development of this little game progresses, some other new features may be added. First, create a WPF project and reference Microsoft.Kinect.dll. Just like the previous projects, add code to discover and initialize the Kinect sensor, then register a handler for the KinectSensor object's SkeletonFrameReady event.
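A minimal initialization sketch follows. It assumes the field names kinectDevice and frameSkeletons used by the event handler later in this article, and it enables the depth stream as well so the coordinate conversion in section 1.1 has a format and frame size to work with:

using System.Linq;
using Microsoft.Kinect;

// Inside MainWindow's code-behind:
private KinectSensor kinectDevice;
private Skeleton[] frameSkeletons;

private void InitializeKinect()
{
    // Take the first sensor that is connected and ready.
    this.kinectDevice = KinectSensor.KinectSensors
        .FirstOrDefault(sensor => sensor.Status == KinectStatus.Connected);

    if (this.kinectDevice != null)
    {
        this.kinectDevice.SkeletonStream.Enable();
        this.kinectDevice.DepthStream.Enable();

        // Reusable buffer for skeleton data, sized by the SDK.
        this.frameSkeletons = new Skeleton[this.kinectDevice.SkeletonStream.FrameSkeletonArrayLength];

        this.kinectDevice.SkeletonFrameReady += KinectDevice_SkeletonFrameReady;
        this.kinectDevice.Start();
    }
}

Call InitializeKinect() from the MainWindow constructor after InitializeComponent(); a production application would also handle the StatusChanged event to cope with the sensor being plugged in or unplugged.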
1.1 The Game's User Interface
The game interface code follows; a few things are worth explaining. The Polyline object represents the lines connecting the dots: as the user moves a hand from dot to dot, the program adds points to the Polyline. The PuzzleBoardElement Canvas object serves as the container for all the dots on the UI. The order of the Canvas elements under the Grid is deliberate: we use another Canvas, GameBoardElement, to hold the hand cursor (represented by an Image), which guarantees that the hand layer is always drawn above the dot layer. Another benefit of putting each class of object in its own layer is that starting a new game is easy: just clear all the child nodes under PuzzleBoardElement, and the CrayonElement element and other UI objects are unaffected.
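For example, resetting the board for a new game could be as simple as the following sketch (StartNewGame is a hypothetical helper name, and clearing the polyline's points here is an assumption):

private void StartNewGame()
{
    PuzzleBoardElement.Children.Clear();   // remove the dots of the old level
    CrayonElement.Points.Clear();          // erase the lines drawn so far
    // ...then draw the dots of the new level into PuzzleBoardElement.
}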
The Viewbox and Grid objects are important to this UI. As discussed in the previous article, joint data is expressed in skeleton space. This means we have to translate the skeleton vectors into the UI coordinate system before we can draw anything. We hard-code the dimensions of the UI and do not let them float as the window size changes. The Grid node defines the UI space as 1920x1200. This is usually the full-screen size of a monitor, and it is consistent with the proportions of the depth image data. This makes the coordinate transformation clearer and gives a smoother hand-cursor movement experience.
<window x:class= "Kinectdrawdotsgame.mainwindow" xmlns= "Http://schemas.microsoft.com/winfx/2006/xaml/presentat Ion "xmlns:x=" Http://schemas.microsoft.com/winfx/2006/xaml "title=" MainWindow "height=" "width=" "Ba" ckground= "White" > <Viewbox> <grid x:name= "LayoutRoot" width= "1920" height= "1200" > & Lt Polyline x:name= "crayonelement" stroke= "Black" strokethickness= "3"/> <canvas x:name= "puzzleboardelement "/> <canvas x:name=" gameboardelement "> <image x:name=" handcursorelement "source=" Images/hand.png "width=" "height=" rendertransformorigin= "0.5,0.5" > ; Image. rendertransform> <TransformGroup> <scaletransform x:name= " Handcursorscale "scalex=" 1 "/> </TransformGroup> </image.rendertra
Nsform> </Image> </Canvas> </Grid> </Viewbox> </Window>
The hard-coded UI also simplifies development, making the transformation from skeleton coordinates to UI coordinates easier and faster: only a few lines of code are needed. Furthermore, if the size were not hard-coded, every change to the size of the main window would mean extra work. By embedding the Grid in a Viewbox node, we let WPF handle the scaling for us. The last UI element is the Image object, which represents the position of the hand. In this little game we use a simple icon to represent the hand; you could choose a different image, or use an Ellipse object instead. The image in this game shows a right hand. The player can choose to use either the left or the right hand; if the player uses the left hand, we apply a ScaleTransform so that the image looks like a left hand.
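Those few lines of conversion code might look like the following sketch, using the SDK's CoordinateMapper. It assumes the kinectDevice field from the initialization code above (with the depth stream enabled) and the LayoutRoot grid from the XAML; the helper name GetJointPoint is an assumption:

private Point GetJointPoint(SkeletonPoint position)
{
    // Map the skeleton-space point into depth image coordinates first...
    DepthImagePoint point = this.kinectDevice.CoordinateMapper.MapSkeletonPointToDepthPoint(
        position, this.kinectDevice.DepthStream.Format);

    // ...then scale from the depth image (e.g. 640x480) up to the
    // hard-coded 1920x1200 UI space of LayoutRoot.
    return new Point(
        point.X * (this.LayoutRoot.ActualWidth / this.kinectDevice.DepthStream.FrameWidth),
        point.Y * (this.LayoutRoot.ActualHeight / this.kinectDevice.DepthStream.FrameHeight));
}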
1.2 Hand Tracking
The player interacts with the game using a hand, so accurately determining the positions of the player's hands is critical for an application built on Kinect. The position and movement of the hands are the foundation of gesture recognition, and tracking hand movement is the most important use of the data obtained from Kinect. In this application we ignore all other joints.
As kids, we would hold a pencil or crayon and control it with our hand to connect the dots. Our little game removes the pen: the interaction is completely natural, just the hand itself. This creates a strong sense of immersion and makes the game more engaging. Of course, this kind of natural interaction is essential to any Kinect-based application. Luckily, we only need a little code to achieve it.
There may be more than one person in front of the sensor, so we adopt a rule: whichever player is closest to the Kinect controls the drawing. Of course, at any time during the game the player can use either the left or the right hand, whichever is more comfortable. The SkeletonFrameReady code is as follows:
private void KinectDevice_SkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
{
    using (SkeletonFrame frame = e.OpenSkeletonFrame())
    {
        if (frame != null)
        {
            frame.CopySkeletonDataTo(this.frameSkeletons);
            Skeleton skeleton = GetPrimarySkeleton(this.frameSkeletons);

            if (skeleton == null)
            {
                HandCursorElement.Visibility = Visibility.Collapsed;
            }
            else
            {
                Joint primaryHand = GetPrimaryHand(skeleton);
                TrackHand(primaryHand);
                TrackPuzzle(primaryHand.Position);
            }
        }
    }
}

private static Skeleton GetPrimarySkeleton(Skeleton[] skeletons)
{
    Skeleton skeleton = null;

    if (skeletons != null)
    {
        // Find the nearest tracked player (smallest Z is closest to the sensor).
        for (int i = 0; i < skeletons.Length; i++)
        {
            if (skeletons[i].TrackingState == SkeletonTrackingState.Tracked)
            {
                if (skeleton == null)
                {
                    skeleton = skeletons[i];
                }
                else if (skeleton.Position.Z > skeletons[i].Position.Z)
                {
                    skeleton = skeletons[i];
                }
            }
        }
    }

    return skeleton;
}
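The handler above also calls GetPrimaryHand, TrackHand, and TrackPuzzle, which are not listed in this section. The sketches below show one plausible shape for them, consistent with the rules described earlier: prefer whichever tracked hand is closer to the sensor, flip the cursor image for the left hand, and treat a dot as touched when the hand cursor is near it. The puzzle and puzzleDotIndex fields and the 25-pixel hit radius are assumptions, not the article's definitive implementation:

private DotPuzzle puzzle;      // the current level (see the DotPuzzle sketch above)
private int puzzleDotIndex;    // index of the next dot the player must touch

// Prefer whichever tracked hand is closer to the sensor (smaller Z).
private static Joint GetPrimaryHand(Skeleton skeleton)
{
    Joint primaryHand = skeleton.Joints[JointType.HandLeft];
    Joint rightHand = skeleton.Joints[JointType.HandRight];

    if (rightHand.TrackingState != JointTrackingState.NotTracked)
    {
        if (primaryHand.TrackingState == JointTrackingState.NotTracked ||
            primaryHand.Position.Z > rightHand.Position.Z)
        {
            primaryHand = rightHand;
        }
    }

    return primaryHand;
}

// Position the hand cursor at the hand's UI location, flipping the
// right-hand image when the player uses the left hand.
private void TrackHand(Joint hand)
{
    if (hand.TrackingState == JointTrackingState.NotTracked)
    {
        HandCursorElement.Visibility = Visibility.Collapsed;
    }
    else
    {
        HandCursorElement.Visibility = Visibility.Visible;

        Point point = GetJointPoint(hand.Position);
        Canvas.SetLeft(HandCursorElement, point.X - HandCursorElement.ActualWidth / 2);
        Canvas.SetTop(HandCursorElement, point.Y - HandCursorElement.ActualHeight / 2);

        HandCursorScale.ScaleX = (hand.JointType == JointType.HandLeft) ? -1 : 1;
    }
}

// Extend the polyline when the hand reaches the next dot of the level.
private void TrackPuzzle(SkeletonPoint position)
{
    if (this.puzzle != null && this.puzzleDotIndex < this.puzzle.Dots.Count)
    {
        Point dot = this.puzzle.Dots[this.puzzleDotIndex];
        Point hand = GetJointPoint(position);

        // Treat the dot as touched when the cursor is within 25 pixels of it.
        if (Math.Abs(dot.X - hand.X) < 25 && Math.Abs(dot.Y - hand.Y) < 25)
        {
            CrayonElement.Points.Add(dot);
            this.puzzleDotIndex++;
        }
    }
}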