Kinect for Windows SDK v2.0 Development Notes (iv): Body Index and Green-Screen Technique


(reprint please indicate the source)

SDK used: Kinect for Windows SDK v2.0 Public Preview

My CSDN blog post had already appeared as published, but editing a single word sent it back into "pending review" status and delayed it.

Never mind — hardly anyone reads it anyway, so the impact is basically zero.

This installment covers the Kinect body index (BodyIndex).


Kinect encodes the player index at each depth coordinate as a single byte. It currently supports up to 6 people, which is already quite good; this "6" is a macro:

#define BODY_COUNT 6
v2 supports both 6 body indexes and 6 tracked skeletons; v1 supported 6 body indexes but only 2 tracked skeletons — evidently thanks to USB 3.0 bandwidth.

In the returned byte stream, 0x00 denotes player 0, 0x01 denotes player 1, and so on up to 0x05 for the sixth player; 0xFF means no player at that pixel. Note that player indexes are assigned arbitrarily — the first player is not guaranteed to get 0x00, although in my testing 0x00 does seem to come up most often.
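As a quick illustration of the encoding (this is not SDK code — `BYTE` and the helper name are my own stand-ins), the byte values decode like this:

```cpp
typedef unsigned char BYTE;  // stand-in for the Windows BYTE typedef

// Decode a body-index byte: 0x00-0x05 are player numbers,
// 0xFF means "no player at this pixel" (reported here as -1).
int PlayerNumber(BYTE bodyIndex) {
    return (bodyIndex == 0xFF) ? -1 : static_cast<int>(bodyIndex);
}
```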


The visualization algorithm is simple: 6 player indexes fit in 3 bits, which map neatly onto the three RGB channels, so the following code suffices:


        for (UINT i = 0; i < nBufferSize; ++i) {
            pBGRXBuffer[i].rgbBlue     = (pBuffer[i] & 0x01) ? 0x00 : 0xFF;
            pBGRXBuffer[i].rgbGreen    = (pBuffer[i] & 0x02) ? 0x00 : 0xFF;
            pBGRXBuffer[i].rgbRed      = (pBuffer[i] & 0x04) ? 0x00 : 0xFF;
            pBGRXBuffer[i].rgbReserved = 0xFF;
        }

This renders index 0x00 as white and 0xFF (no player) as black, which is easy to inspect. Note that Kinect's body segmentation is computed from the depth image, so the body index frame is exactly the same size as the depth frame. The code is nearly identical to the depth section earlier — essentially replace "DepthFrame" with "BodyIndexFrame" in the method names.
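The coloring loop can be exercised outside the SDK. In this sketch, `RGBQUAD` and the function wrapper are declared locally purely so the snippet is self-contained (an assumption for illustration; the real struct comes from the Windows headers):

```cpp
typedef unsigned char BYTE;
typedef unsigned int  UINT;

// Local stand-in for the GDI RGBQUAD struct (blue, green, red, reserved).
struct RGBQUAD {
    BYTE rgbBlue, rgbGreen, rgbRed, rgbReserved;
};

// One bit per RGB channel: index 0x00 renders white, 0xFF (no player) black.
void ColorizeBodyIndex(const BYTE* pBuffer, RGBQUAD* pBGRXBuffer, UINT nBufferSize) {
    for (UINT i = 0; i < nBufferSize; ++i) {
        pBGRXBuffer[i].rgbBlue     = (pBuffer[i] & 0x01) ? 0x00 : 0xFF;
        pBGRXBuffer[i].rgbGreen    = (pBuffer[i] & 0x02) ? 0x00 : 0xFF;
        pBGRXBuffer[i].rgbRed      = (pBuffer[i] & 0x04) ? 0x00 : 0xFF;
        pBGRXBuffer[i].rgbReserved = 0xFF;
    }
}
```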

The effect is as follows:


You can see that at close range the depth image resolves detail finer than a finger, so perhaps Microsoft will offer official close-range ten-finger tracking in a future release. Of course, you can already implement fingertip tracking yourself with OpenCV on the depth image.


Looking at this image, if you combine it with the color image you can "cut out" the person from the color frame. This is a bit like the "chroma key" technique in film and television: shoot against a green or blue background, then remove that background color in post-production. Hence the name green-screen technique.


The algorithm is roughly as follows: for each point, if its body index is 0xFF, set that pixel to empty (0x00000000); otherwise take the pixel from the corresponding position in the color frame.

So how do you find "the corresponding position in the color frame"? The Kinect SDK provides a coordinate mapper — and it is an object, surprisingly, not a mere function.

Moreover, any mapping involving depth requires you to supply the source data (that is, the depth frame) as input.

It turns out not to be a simple f(x).

The coordinate mapper offers many mapping methods — depth coordinates to color coordinates, color coordinates to depth coordinates, and so on; see the official documentation for details.

Also note that the coordinate mapping itself may change, for example if the color frame size switched from 1080p to 720p.

That cannot happen yet, though, so ICoordinateMappingChangedEventArgs carries almost nothing and can safely be ignored for now.
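Putting the pieces together, here is a minimal sketch of the per-pixel compositing. All names, sizes, and the local `ColorSpacePoint` struct are assumptions so the snippet stands alone; in a real app the mapping table would come from ICoordinateMapper::MapDepthFrameToColorSpace and the types from the SDK headers:

```cpp
#include <cstdint>

typedef unsigned char BYTE;
typedef unsigned int  UINT;

// Local stand-in for the SDK's ColorSpacePoint (position in color-frame pixels).
struct ColorSpacePoint { float X, Y; };

// Per-pixel green-screen composite at depth resolution: transparent where
// there is no player, otherwise the mapped color-frame pixel.
void GreenScreen(const BYTE* pBodyIndex,          // body index, one byte per depth pixel
                 const ColorSpacePoint* pMapping, // depth->color lookup, one entry per depth pixel
                 const uint32_t* pColor,          // BGRA color frame
                 int colorWidth, int colorHeight,
                 uint32_t* pOut, UINT nDepthSize)
{
    for (UINT i = 0; i < nDepthSize; ++i) {
        if (pBodyIndex[i] == 0xFF) {              // no player here: fully transparent
            pOut[i] = 0x00000000;
            continue;
        }
        int cx = static_cast<int>(pMapping[i].X + 0.5f);
        int cy = static_cast<int>(pMapping[i].Y + 0.5f);
        // The mapper marks unmappable pixels with out-of-range coordinates.
        pOut[i] = (cx >= 0 && cx < colorWidth && cy >= 0 && cy < colorHeight)
                      ? pColor[cy * colorWidth + cx]
                      : 0x00000000;
    }
}
```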


Now we need the color stream open at the same time as the depth stream and the body index stream, and synchronizing them by hand is no simple matter —

after all, we never know which frame arrives first. The SDK therefore provides a "multi-source frame" object that conveniently bundles several streams together.

Open it with IKinectSensor::OpenMultiSourceFrameReader(DWORD, IMultiSourceFrameReader**), for example:

        hr = m_pKinect->OpenMultiSourceFrameReader(
            FrameSourceTypes::FrameSourceTypes_Color |
            FrameSourceTypes::FrameSourceTypes_BodyIndex |
            FrameSourceTypes::FrameSourceTypes_Depth,
            &m_pMultiSourceFrameReader
            );
This opens the color stream + depth stream + body index stream. Usage is similar to the single-source readers, except that you fetch a reference to each individual frame from the multi-source frame,

via IMultiSourceFrame::get_ColorFrameReference, IMultiSourceFrame::get_DepthFrameReference, and so on. See the sample for a detailed example.


The effect is as follows:


Ah, taking screenshots of just myself is awkward, so I switched to a junior schoolmate as the model.


One more note on the provided sample: the solution contains two projects, so to debug a given project you need to right-click it and choose "Set as StartUp Project".



Example Download Address: Click here

