Improvements and new features of Kinect for Windows SDK 1.5


Yesterday, Microsoft released version 1.5 of the Kinect for Windows SDK. The new release makes major changes and additions relative to version 1.0. This article is based on the Microsoft "Kinect for Windows SDK and Toolkit v1.5 release notes" and the post "Kinect for Windows: SDK and Runtime version 1.5 Released", combined with my own installation and hands-on experience.

 

1. Download and install SDK 1.5

 

The new SDK version is fully compatible with SDK version 1.0. If you already have SDK 1.0 installed, you can install SDK 1.5 directly on top of it. If your previous development version was the Beta, you need to uninstall it before installing SDK 1.5. In Kinect for Windows SDK 1.0, the SDK and the sample files were packaged and installed together. In SDK 1.5, so that the SDK can be upgraded on its own, Microsoft has split them into two packages: the Kinect for Windows SDK and the Kinect for Windows Developer Toolkit. You therefore need to download and install the Kinect for Windows SDK 1.5 and the Kinect for Windows Developer Toolkit separately from: http://www.microsoft.com/en-us/kinectforwindows/develop/developer-downloads.aspx

The installation process is very simple. Note the following:

  • Before installing, unplug the Kinect from your computer's USB port.
  • Before installing, disable anti-virus software such as 360; if the installer does not have the permissions it needs, an error occurs.

For more information about installation, see my previous article.

 

2. New features added in SDK 1.5

 

The new SDK version adds many features that make it easier for developers to build Kinect applications. These new features include:

 

Kinect Studio tool added

Kinect Studio lets developers record and replay Kinect data, which greatly shortens and simplifies the development cycle of a Kinect application. Debugging a Kinect application is awkward because the sensor data arrives in real time; with Kinect Studio, developers can record the data a running application receives and play it back repeatedly to test and refine the application.

In the figure, the upper-left corner shows the main Kinect Studio window. To use Kinect Studio, you must attach it to a Kinect application that is running or being debugged; the lower-left corner shows the face tracking sample being attached in this example. Once the attachment is complete, the three panes on the right display the Kinect data the current application is receiving: the middle pane shows the color image data, the upper-right pane shows the depth image data, and the rightmost pane shows the Kinect sensor's field of view. From the Kinect Studio main window you can record and play back the color and depth image data.

 

 

 Human Interface Guidelines

The SDK 1.5 Developer Toolkit adds a Human Interface Guidelines (HIG) document of more than 70 pages, which gives developers guidance on designing the human-computer interaction interfaces of Kinect applications.

 

 

Face Tracking SDK

The Face Tracking SDK provides real-time 3D meshed facial features and can track the positions of the eyebrows and the shape of the mouth. With the previous SDK, implementing face tracking required a third-party library such as Emgu CV, which was covered in the advanced SDK development articles. In my experiments on my own machine, face tracking in SDK 1.5 felt smooth; the Emgu-based face detection I built on SDK 1.0 was rather laggy and did not provide details such as a mesh.
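A minimal sketch, assuming a started KinectSensor stored in a variable named sensor, a color pixel buffer, a depth pixel buffer, and a tracked Skeleton are already captured elsewhere (all hypothetical names here), of how the managed face tracking API might be called:

    using Microsoft.Kinect;
    using Microsoft.Kinect.Toolkit.FaceTracking;

    // Hedged sketch: track one face per frame with the SDK 1.5 Face Tracking SDK.
    FaceTracker faceTracker = new FaceTracker(sensor);

    FaceTrackFrame faceFrame = faceTracker.Track(
        ColorImageFormat.RgbResolution640x480Fps30, colorPixels,   // byte[] from the color stream
        DepthImageFormat.Resolution320x240Fps30, depthPixels,      // short[] from the depth stream
        skeleton);                                                  // tracked Skeleton of interest

    if (faceFrame.TrackSuccessful)
    {
        var shapePoints = faceFrame.Get3DShape();                       // 3D mesh vertices (the dense grid)
        var animationUnits = faceFrame.GetAnimationUnitCoefficients();  // mouth/eyebrow movement coefficients
    }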

 

 

As the figure shows, the mesh around the mouth, nose, and eyes is very dense, so the approximate shape of the mouth and the positions of the eyes can be determined. In the Developer Toolkit you can see that the face-tracking features added in SDK 1.5 are packaged in several DLLs; the managed face tracking API is encapsulated in Microsoft.Kinect.Toolkit.FaceTracking.dll.

 

 

More sample code and improvements to existing samples

Many new examples have been added to the sample library, and some are improved versions of the 1.0 samples. The examples are provided in three languages: C#, C++, and Visual Basic.

 

 

In the figure above, a sample marked "New" in the upper-right corner of its thumbnail is newly added in SDK 1.5, and "Updated" indicates an SDK 1.0 sample that has been rewritten.

 

SDK documentation improvements

The SDK documentation covers the new functions and features in version 1.5, and is now integrated into MSDN, where it is kept up to date.

 

3. Improvements to skeletal tracking in SDK 1.5

 

Seated skeletal tracking added

In seated mode, the skeleton joints of the head, shoulders, and arms are tracked, while the legs and hip joints are ignored. Seated mode is not limited to users who are actually sitting: when the user is standing, the head, shoulder, and arm joints can still be tracked. This lets us build applications optimized for seated scenarios, such as working at a desk in an office or interacting with 3D data, as well as standing scenarios where the lower half of the body is outside the Kinect sensor's field of view, for example interacting with an outdoor advertising kiosk or a doctor browsing MRI images in an operating room.
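A minimal sketch, assuming a connected KinectSensor stored in a hypothetical variable named sensor, of switching skeletal tracking into the new seated mode:

    using Microsoft.Kinect;

    // Seated mode tracks the upper-body joints (head, shoulders, arms) and ignores legs and hips.
    sensor.SkeletonStream.TrackingMode = SkeletonTrackingMode.Seated;
    sensor.SkeletonStream.Enable();
    sensor.Start();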

Skeletal tracking supported in near mode

Skeletal tracking is now supported in near mode, in both the default and seated tracking modes. This lets developers build close-range skeletal tracking applications, for example ones where the user sits directly in front of the sensor or where close-up display and interaction are required.
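A minimal sketch, again using the hypothetical sensor variable, of combining near range with skeletal tracking (near-range depth requires Kinect for Windows hardware):

    // Near mode shortens the usable depth range; SDK 1.5 also allows skeletons to be tracked there.
    sensor.DepthStream.Range = DepthRange.Near;
    sensor.SkeletonStream.EnableTrackingInNearRange = true;
    sensor.SkeletonStream.TrackingMode = SkeletonTrackingMode.Seated;  // optional: combine with seated mode
    sensor.SkeletonStream.Enable();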

 

4. SDK 1.5 performance and data quality improvements

 

SDK 1.5 improves on the performance and data quality of version 1.0, in particular the performance of RGB color images and of overlaying depth images onto color images (this makes the "green screen" technique easy to implement; the SDK 1.0 implementation of green-screen cutout was introduced in the last part of article 5 of this series). SDK 1.5 improves performance and data quality as follows:

  • KinectSensor.MapDepthFrameToColorFrame, which maps every point in a depth image frame to the position of the corresponding point in the color image frame, performs significantly better: about 5 times faster than in the previous version. This function is required, and called frequently, whenever tracked joint data has to be projected onto the user interface.
  • Depth image frames and color image frames can now be kept in sync. While the Kinect for Windows runtime is running, it continuously monitors the depth and color frame data and corrects any drift between the two to keep them synchronized.
  • RGB image quality is improved in the RGB 640x480 @ 30 fps and YUV 640x480 @ 15 fps video modes: images are sharper, and accuracy is improved in both bright and low-light conditions.

In the earlier article on the green-screen cutout, to hide everything except the player you had to find the depth pixels that carry no player index, call MapDepthToColorImagePoint point by point to map those pixels onto the color image, and manually synchronize the depth data with the color image data. Because SDK 1.5 makes important improvements on both of these points, the technique now runs much more smoothly than before. Running the GreenScreen-WPF example from the Kinect Developer Toolkit, my notebook handles it smoothly.
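A minimal sketch of the green-screen idea using the improved mapping call, assuming a depth frame named depthFrame, the sensor variable, and matching stream formats are set up elsewhere (all hypothetical names): keep only the color pixels whose depth pixel carries a player index.

    short[] depthPixels = new short[depthFrame.PixelDataLength];
    ColorImagePoint[] colorPoints = new ColorImagePoint[depthFrame.PixelDataLength];
    depthFrame.CopyPixelDataTo(depthPixels);

    // One call maps the whole depth frame into color-image coordinates (about 5x faster in SDK 1.5).
    sensor.MapDepthFrameToColorFrame(
        DepthImageFormat.Resolution320x240Fps30, depthPixels,
        ColorImageFormat.RgbResolution640x480Fps30, colorPoints);

    for (int i = 0; i < depthPixels.Length; i++)
    {
        int player = depthPixels[i] & DepthImageFrame.PlayerIndexBitmask;
        if (player > 0)
        {
            ColorImagePoint p = colorPoints[i];
            // Copy the color pixel at (p.X, p.Y) into the cut-out image here.
        }
    }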

 

 

5. New features in SDK 1.5 for controlling characters in motion scenarios

 

The new features make it easier to develop Kinect-based applications that control 3D characters, such as Kinect sports games. They include:

  • Joint orientation information is now added to the tracked skeleton data by the Kinect for Windows runtime.
  • The joint orientation is provided in two forms: a hierarchical rotation based on the bone relationships defined by the skeletal tracking joint structure, and an absolute orientation expressed in the Kinect camera coordinate system (see the sketch after this list).
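A minimal sketch, assuming a tracked Skeleton named skeleton obtained from a skeleton frame handler elsewhere, of reading the new joint orientation data in both forms:

    foreach (BoneOrientation bone in skeleton.BoneOrientations)
    {
        // Rotation of this bone relative to its parent bone (hierarchical form).
        Matrix4 hierarchical = bone.HierarchicalRotation.Matrix;

        // Rotation of this bone expressed in the Kinect camera coordinate system (absolute form).
        Vector4 absoluteQuaternion = bone.AbsoluteRotation.Quaternion;

        // bone.EndJoint identifies the joint this orientation belongs to.
        JointType joint = bone.EndJoint;
    }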

 

6. Four new languages supported by speech recognition

 

Speech recognition now supports four additional languages: French, Spanish, Italian, and Japanese. In addition, new language packs support regional variants of these languages: English/Great Britain, English/Ireland, English/Australia, English/New Zealand, English/Canada, French/France, French/Canada, Italian/Italy, Japanese/Japan, Spanish/Spain, and Spanish/Mexico.
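A minimal sketch, following the pattern used by the Kinect speech samples, of selecting a Kinect-enabled recognizer for one of the new cultures (the culture name "fr-FR" is only an example):

    using System;
    using Microsoft.Speech.Recognition;

    static RecognizerInfo GetKinectRecognizer(string cultureName)
    {
        foreach (RecognizerInfo recognizer in SpeechRecognitionEngine.InstalledRecognizers())
        {
            string value;
            recognizer.AdditionalInfo.TryGetValue("Kinect", out value);
            if ("True".Equals(value, StringComparison.OrdinalIgnoreCase) &&
                cultureName.Equals(recognizer.Culture.Name, StringComparison.OrdinalIgnoreCase))
            {
                return recognizer;
            }
        }
        return null;
    }

    // Usage: RecognizerInfo french = GetKinectRecognizer("fr-FR");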

Unfortunately, Chinese recognition is still not supported.

 

7. Conclusion

 

This article has briefly introduced the new features and improvements in the Kinect for Windows SDK 1.5. These new features make it easier for us to develop better Kinect applications. I hope this helps you understand the new SDK version!
