Early Development Experience with the Kinect for Windows SDK (2): Operating the Cameras

Tags: Visual Studio 2010
Article Directory
    • Kinect Development Environment
    • Obtaining RGB Camera Data
    • Getting Depth Information
    • Closing Remarks

 

Author: Ma Ning

Less than 24 hours after the Kinect SDK came out, many geeks had already published their own examples online. It goes to show that good things are quickly recognized by everyone, while no amount of publicity can rescue a bad one.

This article now officially enters the world of Kinect programming, introducing how to obtain image information from the cameras. Let's first look at the overall structure of the Kinect, so that the terms used later don't leave you dizzy.

The Kinect has a total of three camera-like lenses. The middle one is an RGB camera, used to capture 640x480 color images at up to 30 frames per second. On the two sides are the components of the 3D depth sensor, used to detect the relative position of the player. The principle is the same as human stereoscopic vision, except that these sensors work with infrared light (so players worried that someone like Obama might peek at them through the Kinect can rest easy). There are microphones on both sides of the Kinect, and below it is a motorized base used to adjust the Kinect's tilt angle.

Kinect Development Environment

Today we mainly work with the RGB camera and the depth sensor. First, we need to set up the Kinect development environment:

Step 1: Create a WPF Project

Open Visual Studio 2010 and create a WPF project named KinectWPFDemo.

Of course, besides WPF, .NET-based programs can also use the WinForms and XNA frameworks. No one has reported success on the Silverlight platform yet.

Step 2: Add a Reference to the Kinect Assembly

In Solution Explorer, right-click KinectWPFDemo and choose Add Reference... from the context menu. In the dialog box that appears, switch to the .NET tab and select the Microsoft.Research.Kinect assembly.

Step 3: Add the Coding4Fun Kinect Toolkit

This step is optional, but we recommend it for convenience in the programming that follows. The Coding4Fun Kinect Toolkit can be downloaded from:

http://c4fkinect.codeplex.com/

After decompression there are five files in total: assemblies for the WinForms and WPF platforms, plus a Microsoft.Expression.Drawing.dll. Use Add Reference to add Coding4Fun.Kinect.Wpf.dll to the project.

Obtaining RGB Camera Data

Step 4: Add Controls

Double-click MainWindow.xaml and add two Image controls in the designer: one to display the RGB image and the other to display the depth image.
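For reference, the resulting XAML might look like the minimal sketch below. The control names image1 and image2 match the code-behind used later in this article; the layout itself (a horizontal StackPanel) is just an assumption:

    <Window x:Class="KinectWPFDemo.MainWindow"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            Title="KinectWPFDemo" Width="700" Height="300">
        <!-- image1 shows the RGB stream, image2 the depth stream -->
        <StackPanel Orientation="Horizontal">
            <Image Name="image1" Width="320" Height="240" Margin="5" />
            <Image Name="image2" Width="320" Height="240" Margin="5" />
        </StackPanel>
    </Window>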

Step 5: Reference the Namespaces

Open the MainWindow.xaml.cs file and add references to the Kinect namespaces at the top of the file:

    using Microsoft.Research.Kinect.Nui;
    using Microsoft.Research.Kinect.Audio;
    using Coding4Fun.Kinect.Wpf;

Return to the MainWindow.xaml designer, switch to the Events tab of the Properties window, find the Loaded and Closed events, and double-click each to generate handlers for the two events.

In the MainWindow class in MainWindow.xaml.cs, declare the Runtime variable:

    Runtime nui;

Then, add the Runtime initialization code:

 
    private void Window_Loaded(object sender, RoutedEventArgs e)
    {
        nui = new Runtime();
        nui.Initialize(RuntimeOptions.UseColor | RuntimeOptions.UseDepth |
                       RuntimeOptions.UseDepthAndPlayerIndex | RuntimeOptions.UseSkeletalTracking);
    }

In the Closed event, the following code uninitializes the Runtime:

 
    private void Window_Closed(object sender, EventArgs e)
    {
        nui.Uninitialize();
    }

The Runtime object is one of the most important classes in the Kinect SDK; all operations on the Kinect are encapsulated by it. The Runtime constructor does not accept any parameters, but there is an explicit initialization method, Initialize, which accepts a RuntimeOptions parameter specifying which Kinect features will be used. RuntimeOptions.UseColor means the RGB camera will be used, while RuntimeOptions.UseDepth means depth data will be used.
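If you only need the two cameras covered in this article, a smaller option set is enough. The sketch below is our illustration rather than the article's original code; catching InvalidOperationException follows the pattern the SDK's own samples use when no sensor is attached:

    private void Window_Loaded(object sender, RoutedEventArgs e)
    {
        nui = new Runtime();
        try
        {
            // Request only the features this article actually uses
            nui.Initialize(RuntimeOptions.UseColor | RuntimeOptions.UseDepth);
        }
        catch (InvalidOperationException)
        {
            MessageBox.Show("Runtime initialization failed. Make sure the Kinect is plugged in.");
            Close();
        }
    }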

After initialization, we use the RGB camera to obtain real-time image data. First, declare an event handler to receive the video data:

    nui.VideoFrameReady += new EventHandler<ImageFrameReadyEventArgs>(nui_VideoFrameReady);

Then comes the event handler itself:

    void nui_VideoFrameReady(object sender, ImageFrameReadyEventArgs e)
    {
        PlanarImage imageData = e.ImageFrame.Image;
        image1.Source = BitmapSource.Create(imageData.Width, imageData.Height, 96, 96,
            PixelFormats.Bgr32, null, imageData.Bits,
            imageData.Width * imageData.BytesPerPixel);
        // image1.Source = e.ImageFrame.ToBitmapSource();
    }

Tip: the sample code in the Getting Started guide is incorrect. You must change data.Width to imageData.Width in the last parameter (the stride) for it to run properly.

The VideoFrameReady event passes an ImageFrameReadyEventArgs parameter to the event handler. Its ImageFrame member contains various information about the image: the Type property tells whether the frame comes from the RGB or the depth camera, the Resolution property gives the resolution, and Image holds the actual pixel data in a byte[] array.
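As an illustration (not part of the original sample), the handler can check this metadata before touching the pixel data; the property names below come from the beta SDK's ImageFrame and PlanarImage types:

    void nui_VideoFrameReady(object sender, ImageFrameReadyEventArgs e)
    {
        // Ignore anything other than the 640x480 color frames we opened
        if (e.ImageFrame.Type != ImageType.Color ||
            e.ImageFrame.Resolution != ImageResolution.Resolution640x480)
            return;

        PlanarImage imageData = e.ImageFrame.Image;
        // Bits holds the raw pixels; Width * BytesPerPixel is the stride of one row
        image1.Source = BitmapSource.Create(imageData.Width, imageData.Height, 96, 96,
            PixelFormats.Bgr32, null, imageData.Bits,
            imageData.Width * imageData.BytesPerPixel);
    }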

The next task is to create a bitmap from the data contained in the PlanarImage and assign it to the Image control, so that it is displayed in the WPF window.

Finally, after initialization we need to open the video stream so that video data starts arriving:

 
    nui.VideoStream.Open(ImageStreamType.Video, 2, ImageResolution.Resolution640x480, ImageType.Color);

The first parameter, ImageStreamType, specifies which stream of the device to open. The second, poolSize, specifies the number of buffers, at least 2: while one buffer is being drawn, the other is being filled with data. The third parameter specifies the camera resolution, and the fourth the type of image to obtain.

(Figure: the running program displaying the RGB camera image.)

The sample code above does not use the Coding4Fun helper class. With the helper class, the code becomes:

 
    void nui_VideoFrameReady(object sender, ImageFrameReadyEventArgs e)
    {
        image1.Source = e.ImageFrame.ToBitmapSource();
        e.ImageFrame.ToBitmapSource().Save("Capture.jpg", ImageFormat.Jpeg);
    }

The helper class uses C# extension methods to add conversion methods to ImageFrame. It can also save images to files, but considering the cost of writing to the file system, we recommend not saving every frame.
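For example, a minimal sketch of that advice (our illustration, not the article's code) saves only every 30th frame, roughly once per second at 30 fps, and also calls ToBitmapSource only once per frame:

    int frameCount = 0;

    void nui_VideoFrameReady(object sender, ImageFrameReadyEventArgs e)
    {
        BitmapSource source = e.ImageFrame.ToBitmapSource();
        image1.Source = source;

        // Write to disk only once every 30 frames instead of on every frame
        if (++frameCount % 30 == 0)
            source.Save("Capture.jpg", ImageFormat.Jpeg);
    }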

Getting Depth Information

Next we obtain the depth information. The process is similar to that for the RGB camera. First, make sure RuntimeOptions.UseDepth was included when the Runtime object was initialized; otherwise the depth stream cannot be opened.

Then add the event handler that receives depth data and open the depth stream; the resolution this time is 320x240:

    nui.DepthFrameReady += new EventHandler<ImageFrameReadyEventArgs>(nui_DepthFrameReady);
    nui.DepthStream.Open(ImageStreamType.Depth, 2, ImageResolution.Resolution320x240, ImageType.Depth);

The following is the event handler, which displays the depth image in the second Image control:

    void nui_DepthFrameReady(object sender, ImageFrameReadyEventArgs e)
    {
        image2.Source = e.ImageFrame.ToBitmapSource();
    }
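ToBitmapSource hides the raw depth format. If you want the actual distance values, the helper below is our own sketch, assuming the stream was opened with ImageType.Depth as above, where each pixel is a little-endian 16-bit distance in millimeters (with ImageType.DepthAndPlayerIndex the lower 3 bits would hold the player index instead):

    int GetDistance(PlanarImage depth, int x, int y)
    {
        // Two bytes per pixel, low byte first
        int index = (y * depth.Width + x) * depth.BytesPerPixel;
        return depth.Bits[index] | (depth.Bits[index + 1] << 8);
    }

For example, GetDistance(e.ImageFrame.Image, 160, 120) returns the distance at the center of the 320x240 frame.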

Here again we use the Coding4Fun helper class. (Figure: the running program displaying the RGB and depth images side by side.)

Closing Remarks

In this article, we completed the configuration of the Kinect development environment, added the Coding4Fun Kinect Toolkit, and obtained image data from both the RGB camera and the depth sensor.

Next, we will move on to the motion-capture part of the Kinect.
