Introduction to Kinect for Windows SDK Development (Part 8): Advanced Skeleton Tracking


In the previous seven articles we introduced the basics of the various sensors in the Kinect SDK and demonstrated through examples how these basic objects and methods are used. This is the most fundamental knowledge for Kinect development, and with it you can develop a simple Kinect-based program. There is still some distance, however, between these basics and shipping a good Kinect-based application. The following articles will introduce WPF and other third-party tools and class libraries, used in conjunction with the Kinect SDK, to build Kinect-driven programs with a better user experience. We will use the knowledge covered so far to tackle some more complex topics.

At its core, the Kinect sensor only emits infrared light and detects its reflection, from which it calculates a depth value for each pixel in the field of view. From the depth data it first extracts the bodies and shapes of objects in the scene, along with a player index for each pixel. The shape information is then matched against the various parts of the human body, and finally the position of each joint is calculated. This is the skeleton tracking we introduced earlier.
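As a minimal sketch of how that per-pixel data looks to a developer (assuming Kinect for Windows SDK 1.x, with both the depth and skeleton streams enabled so the player index is populated), each 16-bit depth pixel packs a player index in its low bits and the distance in its high bits:

```csharp
using Microsoft.Kinect;

// Registered with: sensor.DepthFrameReady += DepthFrameReady;
void DepthFrameReady(object sender, DepthImageFrameReadyEventArgs e)
{
    using (DepthImageFrame frame = e.OpenDepthImageFrame())
    {
        if (frame == null) return; // frames can be dropped

        short[] pixels = new short[frame.PixelDataLength];
        frame.CopyPixelDataTo(pixels);

        foreach (short pixel in pixels)
        {
            // Low 3 bits: player index (0 = no player, 1-6 = tracked player).
            int player = pixel & DepthImageFrame.PlayerIndexBitmask;

            // Remaining bits: distance from the sensor in millimeters.
            int depthMm = pixel >> DepthImageFrame.PlayerIndexBitmaskWidth;
        }
    }
}
```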

Infrared imaging and depth data are the core of the Kinect system, second in importance only to skeleton tracking. In effect, this data is simply an input. As the Kinect and other depth cameras become widespread, developers will no longer need to focus on the raw depth image data; it becomes unimportant, or merely base data from which other data is derived. At this stage the Kinect SDK does not provide an interface for developers to access the raw infrared image stream, although third-party SDKs can. Most developers may never use the raw depth data, relying only on the skeleton data that the Kinect has already computed. And once posture and gesture recognition are integrated into the Kinect SDK and become part of it, developers may not even touch the skeleton data.

It is to be hoped that this integration happens sooner rather than later, as it would mark the maturing of the Kinect as a technology. Skeleton tracking is still the subject of this article and the next, but we approach the skeleton data differently: we treat the Kinect as a basic input device, just like a mouse, keyboard, or touch screen. Microsoft's slogan for the Kinect for Xbox was "You are the controller"; technically speaking, "you are the input device." With skeleton data, an application can do what a mouse or touch screen can do, the difference being that depth image data lets users and applications interact in ways that were never possible before. Let's look at the mechanics of how the Kinect controls and interacts with a user interface.

1. User interaction

An application running on a computer needs input. Traditionally that input comes from devices such as the mouse or keyboard: the user interacts directly with the hardware, and the hardware responds to the user's actions and converts them into data transmitted to the computer. The computer receives the input and displays the result in visual form. Most graphical user interfaces show a cursor, which usually represents the position of the mouse, because the mouse was the first such pointing device. Today, calling it a mouse cursor is no longer quite accurate, since touchpads and pen devices can control the cursor just as a mouse does. The cursor responds whenever the user moves the mouse or slides a finger across the touchpad. When the user moves the cursor over a button, the button usually changes its appearance to indicate that the cursor is positioned over it. When the user presses the button it shows another look, and when the mouse button is released, yet another. Clearly, even a simple click involves the button passing through several distinct states.

Developers may take these interactions for granted, because user interface platforms such as WPF make it easy for programs to interact with users. When developing a web application, the browser responds to the user's interaction, and the developer only needs to set styles according to, say, the hover state of the user's mouse. The Kinect is different: as an input device it is not integrated into WPF, so as developers we must do the part of the work that the operating system and WPF cannot do for us.

At the lowest level, a mouse, touchpad, or pen provides x,y coordinates, and the operating system transforms them from the device's coordinate space to the screen's, much like the coordinate-space transformations discussed in the previous article. It is the operating system's responsibility to respond to data from these standard input devices and deliver it to the graphical user interface or application; the GUI in turn displays the cursor position and reacts to the user's input. In some cases this process is not so simple, and we need to understand the GUI platform. WPF, for example, does not provide native support for the Kinect the way it does for the mouse or keyboard. That job falls to the developer: we must get the data from the Kinect ourselves and use it to interact with buttons, drop-down lists, and other controls. Depending on the complexity of the application or its user interface, this work may require us to learn a good deal about WPF.

1.1 Introduction to the input system in WPF applications

When developing a WPF application, the developer does not need to pay special attention to the user input mechanism: WPF handles it, so that we can focus on responding to the input. After all, as developers we should be more concerned with analyzing the information the user enters than with reinventing the wheel to collect it. If the application needs a button, just drag one from the toolbox onto the interface and write the handling logic in the button's Click event. In many cases the developer will also want to give the button a different appearance for each state of the user's mouse. WPF implements the underlying events for this, such as when the mouse hovers over the button or clicks it.

WPF has a well-developed input system for obtaining user input from input devices and responding to the changes in controls that this input brings about. These APIs live in the System.Windows.Input namespace (PresentationCore.dll) and retrieve input-device data directly from the operating system; they include classes such as Keyboard, Mouse, Stylus, Touch, and Cursor. The InputManager class is responsible for managing the information obtained from all input devices and passing it on to the presentation framework.

Another set of WPF components is the four classes in the System.Windows namespace (PresentationCore.dll and PresentationFramework.dll): UIElement, ContentElement, FrameworkElement, and FrameworkContentElement. FrameworkElement inherits from UIElement, and FrameworkContentElement inherits from ContentElement. These are the base classes for all visual elements in WPF, such as Button, TextBlock, and ListBox. More information about the WPF input system can be found in the MSDN documentation.

The InputManager listens to all input devices and, through a series of methods and events, notifies UIElement and ContentElement objects that an input device is doing something relevant to the visual element. For example, WPF raises the MouseEnterEvent when the mouse cursor enters the effective area of a visual control, and the UIElement and ContentElement classes have a corresponding OnMouseEnter method. This lets any object inheriting from UIElement or ContentElement receive events triggered by input devices; WPF calls these methods before raising any other input events. The UIElement and ContentElement classes expose similar events, including MouseEnter, MouseLeave, MouseLeftButtonDown, MouseLeftButtonUp, TouchEnter, TouchLeave, TouchUp, and TouchDown.
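As a minimal sketch of these events in use (a plain WPF Button in code-behind; the handler bodies and the AttachHoverFeedback name are illustrative):

```csharp
using System.Windows.Controls;
using System.Windows.Media;

static void AttachHoverFeedback(Button button)
{
    // The input system raises these events on the element under the cursor.
    button.MouseEnter += (s, e) => button.Background = Brushes.LightBlue;
    button.MouseLeave += (s, e) => button.ClearValue(Control.BackgroundProperty);

    // Button handles the raw MouseLeftButtonDown event internally, so observe
    // the press via the tunneling Preview event (or simply the Click event).
    button.PreviewMouseLeftButtonDown += (s, e) => button.Background = Brushes.SteelBlue;
}
```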

Sometimes developers need direct access to the mouse or other input devices. The InputManager object has a property named PrimaryMouseDevice that returns a MouseDevice object. With the MouseDevice you can get the mouse position at any time by calling GetScreenPosition. In addition, MouseDevice has a GetPosition method that takes a UI element and returns the mouse position in that element's coordinate space. This information is essential when determining mouse-hover behavior. Every time the Kinect SDK produces a new SkeletonFrame of data, we need to perform a similar coordinate-space transformation, converting joint positions into the UI space so that visual elements can use the data directly. When the mouse is the input device, GetScreenPosition and GetPosition on MouseDevice provide the current pointer location; with the Kinect, we must compute it ourselves.
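A minimal sketch of that transformation, assuming Kinect SDK 1.6 or later (which exposes the CoordinateMapper); the GetJointPoint name is ours, not the SDK's:

```csharp
using System.Windows;
using Microsoft.Kinect;

// Express a skeleton joint in a UI element's coordinate space, much like
// MouseDevice.GetPosition does for the mouse.
static Point GetJointPoint(KinectSensor sensor, Joint joint, FrameworkElement element)
{
    // Project the 3D skeleton point onto the 640x480 depth image plane.
    DepthImagePoint depthPoint = sensor.CoordinateMapper.MapSkeletonPointToDepthPoint(
        joint.Position, DepthImageFormat.Resolution640x480Fps30);

    // Scale from depth-image space into the element's layout space.
    return new Point(
        depthPoint.X / 640.0 * element.ActualWidth,
        depthPoint.Y / 480.0 * element.ActualHeight);
}
```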

In some respects the Kinect is similar to the mouse, but in others it is very different. A skeleton joint entering or leaving a visual element on the UI resembles the mouse cursor moving in and out of it; in other words, the hover behavior of a joint is the same as that of the mouse cursor. But interactions such as clicking, or pressing and releasing a mouse button, have no natural counterpart for a joint. In the next article you will see hands used to simulate a click operation; compared with reproducing mouse-over and mouse-out, the Kinect's support for anything like a mouse click is weak.
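A common workaround, sketched here, is a dwell click: hovering over a control for a fixed time counts as a click. This assumes the hand position already arrives in UI coordinates (for example from the GetJointPoint sketch above); HoverClickTracker and its members are illustrative names, not SDK types.

```csharp
using System;
using System.Windows;

class HoverClickTracker
{
    private static readonly TimeSpan DwellTime = TimeSpan.FromSeconds(2);
    private DateTime hoverStart;
    private bool wasOver;

    // Call once per skeleton frame; returns true when a "click" fires.
    public bool Update(Point hand, Rect targetBounds)
    {
        bool isOver = targetBounds.Contains(hand);
        if (isOver && !wasOver)
        {
            hoverStart = DateTime.UtcNow;      // hover just began
        }
        wasOver = isOver;

        if (isOver && DateTime.UtcNow - hoverStart >= DwellTime)
        {
            hoverStart = DateTime.MaxValue;    // fire at most once per hover
            return true;
        }
        return false;
    }
}
```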

The Kinect and the touchpad do not have much in common either. Touch input is available through the Touch class or the TouchDevice class. Single-point touch input is similar to mouse input, while multi-touch is more like the Kinect: the mouse has only one interaction point (the cursor), but a touch device can have multiple touch points, just as the Kinect can have multiple players, with 20 joints captured per player as input points. The Kinect actually provides richer information, because we know which player's body part each input point belongs to. With a touch device, the application does not know how many users are touching the screen: if a program receives 10 input points, it cannot tell whether they are the 10 fingers of one person or one finger each from 10 people. And although touch devices support multi-touch, they are still two-dimensional inputs like the mouse or pen tablet. Touch input does, however, carry the contact area as well as the x,y coordinates; after all, a finger is not as precise as an on-screen mouse cursor, and the contact area is usually larger than one pixel.
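A minimal sketch of what that attribution looks like in code (assuming Kinect SDK 1.x and a sensor with the skeleton stream enabled): every input point comes tagged with the player it belongs to and the joint it represents.

```csharp
using Microsoft.Kinect;

// Registered with: sensor.SkeletonFrameReady += SkeletonFrameReady;
void SkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
{
    using (SkeletonFrame frame = e.OpenSkeletonFrame())
    {
        if (frame == null) return;

        Skeleton[] skeletons = new Skeleton[frame.SkeletonArrayLength];
        frame.CopySkeletonDataTo(skeletons);

        foreach (Skeleton skeleton in skeletons)
        {
            if (skeleton.TrackingState != SkeletonTrackingState.Tracked)
                continue;                       // slot is empty or position-only

            foreach (Joint joint in skeleton.Joints)   // 20 joints per player
            {
                // joint.JointType names the body part (HandRight, Head, ...),
                // an attribution no touch device can offer.
            }
        }
    }
}
```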

Of course, these devices do have common ground. Kinect input clearly meets the requirements of any input device WPF supports: besides behaving like other input devices, it offers unique ways of interacting with the user and the graphical user interface. At their core, the mouse, the touchpad, and the pen tablet merely deliver a pixel position. The input system determines where that pixel falls in the context of the visual elements, and the relevant element responds to the location information and then to the action.

The expectation is that the Kinect will be fully integrated into WPF in the future. In WPF 4.0, touch devices still exist as a separate module. Touch was first introduced with Microsoft Surface. The Surface SDK includes a series of WPF controls, such as SurfaceButton, SurfaceCheckBox, and SurfaceListBox; if you want a button to respond to touch events, it is best to use the SurfaceButton control.

One can imagine that if the Kinect were fully integrated into WPF, there might be a class called SkeletonDevice, resembling the SkeletonFrame object in the Kinect SDK. Each Skeleton object would have a method named GetJointPoint, analogous to MouseDevice's GetPosition and TouchDevice's GetTouchPoint. In addition, the core visual elements (UIElement, ContentElement, FrameworkElement, FrameworkContentElement) would have corresponding events and methods for notifying about and handling joint interaction, for example JointEnter, JointLeave, and JointHover events. Further, just as touch input has ManipulationStarted and ManipulationEnded events, Kinect input might come with GestureStarted and GestureEnded events.
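Restating that purely hypothetical API as code (none of these members exist in WPF or the Kinect SDK; only the JointType enum is real):

```csharp
using System;
using System.Windows;
using Microsoft.Kinect; // only for the real JointType enum

// Hypothetical: what a WPF-integrated skeleton input device might look like.
public abstract class SkeletonDevice
{
    // Analogous to MouseDevice.GetPosition and TouchDevice.GetTouchPoint.
    public abstract Point GetJointPoint(JointType joint, IInputElement relativeTo);

    // Counterparts to MouseEnter/MouseLeave and hover.
    public event EventHandler JointEnter = delegate { };
    public event EventHandler JointLeave = delegate { };
    public event EventHandler JointHover = delegate { };

    // Counterparts to ManipulationStarted/ManipulationEnded.
    public event EventHandler GestureStarted = delegate { };
    public event EventHandler GestureEnded = delegate { };
}
```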

At present the Kinect SDK and WPF are completely separate, so the Kinect is not integrated with the input system at the lower levels. As developers, we therefore need to track the positions of the skeleton joints ourselves and determine whether a joint interacts with an element on the UI. When a joint falls within the effective area of a visual element, we must manually change the element's appearance to respond to the interaction.
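A minimal sketch of that manual hit-testing (assuming the hand position has already been converted into window coordinates, for example with the GetJointPoint sketch above):

```csharp
using System.Windows;
using System.Windows.Controls;
using System.Windows.Media;

// Ask WPF which visual lies under the hand and apply hover feedback by hand,
// since no MouseEnter event will ever fire for a skeleton joint.
static void TrackHandOverUi(Window window, Point handInWindowCoords)
{
    HitTestResult hit = VisualTreeHelper.HitTest(window, handInWindowCoords);
    if (hit == null) return;

    // Real code would walk up the visual tree, since the hit is often an
    // inner visual such as a Border or TextBlock rather than the Button.
    Button button = hit.VisualHit as Button;
    if (button != null)
    {
        button.Background = Brushes.LightBlue;
    }
}
```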
