Kinect v2 Programming (C++): Background Removal, Skeleton Tracking, and Gesture Recognition

"Translation" Kinect v2 programming (c + +) body articleHttp://www.cnblogs.com/TracePlus/p/4138615.html
Http://www.cnblogs.com/TracePlus/p/4136368.html
Figure 3: BodyIndex data
                               Kinect SDK v1    Kinect SDK v2 Preview
  Name                         Player           BodyIndex
  Number of people detected    6                6
  Values for the body area     1~6              0~5
  Value for the non-body area  0                255 (0xFF)

Table 1: Comparison of the body-area data (Player / BodyIndex) between the Kinect SDK v1 and the Kinect SDK v2 Preview.

The sample program visualizes the BodyIndex data by coloring pixels inside the body area with colors from a color table (cv::Vec3b(b, g, r)) and pixels outside the body area with black (cv::Vec3b(0, 0, 0)).
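As a rough illustration of that coloring step (a minimal sketch, not the article's listing; the function name and the color-table entries are assumptions, and the BodyIndex buffer is assumed to hold one BYTE per depth pixel), the following fills an OpenCV image from a BodyIndex buffer:

#include <opencv2/opencv.hpp>
#include <array>

// Sketch: colorize a BodyIndex buffer (one BYTE per depth pixel).
// Values 0~5 identify a tracked person, 255 (0xFF) means "no body".
cv::Mat colorizeBodyIndex(const unsigned char* buffer, int width, int height)
{
    // Hypothetical color table, one entry per possible body index (B, G, R).
    static const std::array<cv::Vec3b, 6> colors = {
        cv::Vec3b(255, 0, 0), cv::Vec3b(0, 255, 0), cv::Vec3b(0, 0, 255),
        cv::Vec3b(255, 255, 0), cv::Vec3b(255, 0, 255), cv::Vec3b(0, 255, 255)
    };

    cv::Mat image(height, width, CV_8UC3);
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            const unsigned char index = buffer[y * width + x];
            image.at<cv::Vec3b>(y, x) = (index <= 5)
                ? colors[index]          // body area: color-table color
                : cv::Vec3b(0, 0, 0);    // non-body area: black
        }
    }
    return image;
}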


The introduction to the main features of the Kinect SDK v2 preview is now mostly complete. In the previous section, the Kinect SDK v2 preview was used to obtain the BodyIndex (body area) data from Kinect v2. This section describes how to obtain the Body (human posture) data from Kinect.

Body

So far, we have obtained Depth (distance-to-sensor) data and BodyIndex (body area) data from Kinect. Based on these data, human posture can also be estimated. Kinect estimates posture by feeding the body-area information into a recognizer trained on a large amount of pose data (note: because people differ in height and build, a large training database is needed to recognize posture accurately). For details, please refer to the paper published by Microsoft Research.
The paper was presented at IEEE CVPR 2011 (a top conference in the field of computer vision and pattern recognition) and won the Best Paper award: Microsoft Research, "Real-Time Human Pose Recognition in Parts from Single Depth Images". The underlying technology may be complex, but through the Kinect SDK developers can acquire and use human posture data easily. From the Body data you can get the 3-dimensional positions of the head, hands, feet, and so on, and on that basis implement gesture recognition.
This posture data, called "Skeleton" in the Kinect SDK v1, is renamed "Body" in the Kinect SDK v2 preview. This section introduces the method of getting the Body. The sample program uses the Kinect SDK v2 preview to obtain Body data and display the joints as colored dots overlaid on the color image. In addition, the Hand State (open/closed state of the hands) obtained from the Body data is also shown (a small sketch of reading the Hand State follows below). Section 2 walks through excerpts of the data-acquisition steps; the complete sample program is published on GitHub: https://github.com/UnaNancyOwen/Kinect2Sample

Figure 1: Data acquisition flow of the Kinect SDK v2 preview (reposted)
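Before going into the acquisition steps, here is a small illustration of how the Hand State mentioned above can be read per body through IBody::get_HandLeftState() / get_HandRightState() (a sketch only; the helper function name and the labels are invented for this example, not taken from the article):

#include <Kinect.h>
#include <string>

// Sketch: read the state of the left hand from a tracked IBody
// and turn it into a printable label.
std::string leftHandStateLabel(IBody* pBody)
{
    HandState state = HandState_Unknown;
    if (pBody == nullptr || FAILED(pBody->get_HandLeftState(&state))) {
        return "Error";
    }
    switch (state) {
        case HandState_Open:       return "Open";    // open hand (paper)
        case HandState_Closed:     return "Closed";  // fist (rock)
        case HandState_Lasso:      return "Lasso";   // two fingers (scissors)
        case HandState_NotTracked: return "NotTracked";
        default:                   return "Unknown";
    }
}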

"sensor" Get "sensor"

Sensor
IKinectSensor* pSensor;   // ......1
HRESULT hResult = S_OK;
hResult = GetDefaultKinectSensor( &pSensor );  // ......2
if (FAILED(hResult)) {
  std::cerr << "Error : GetDefaultKinectSensor" << std::endl;
  return -1;
}
hResult = pSensor->Open();  // ......3
if (FAILED(hResult)) {
  std::cerr << "Error : IKinectSensor::Open()" << std::endl;
  return -1;
}
Listing 1.1: (1) corresponds to the "Sensor" interface of Figure 1 for the Kinect SDK v2 preview. (2) Gets the default Sensor. (3) Opens the Sensor.

Get the "Source" from the "Sensor".
Source
IBodyFrameSource* pBodySource;  // ......1
hResult = pSensor->get_BodyFrameSource( &pBodySource );  // ......2
if (FAILED(hResult)) {
  std::cerr << "Error : IKinectSensor::get_BodyFrameSource()" << std::endl;
  return -1;
}
Listing 1.2: (1) corresponds to the "Source" interface of the Body frame in Figure 1. (2) Gets the Source from the Sensor. Only the code for obtaining the Body source is explained here; for display purposes, however, the sample program also obtains the Color source.

Open the "Reader" from the "Source".
Reader
IBodyFrameReader* pBodyReader;  // ......1
hResult = pBodySource->OpenReader( &pBodyReader );  // ......2
if (FAILED(hResult)) {
  std::cerr << "Error : IBodyFrameSource::OpenReader()" << std::endl;
  return -1;
}
Listing 1.3: (1) corresponds to the "Reader" interface of the Body frame in Figure 1. (2) Opens the Reader from the Source.

"Frame" ~ "Data": the latest "Frame" is obtained from the "Reader" (Listing 1.5). Before that, the ICoordinateMapper interface is obtained (Listing 1.4) in order to align the coordinate systems of the sensors: because the color camera and the depth sensor are in different physical positions, the Body data and the color image need to be mapped to matching positions.
Coordinate Mapper
ICoordinateMapper* pCoordinateMapper;  // ......1
hResult = pSensor->get_CoordinateMapper( &pCoordinateMapper );  // ......2
if (FAILED(hResult)) {
  std::cerr << "Error : IKinectSensor::get_CoordinateMapper()" << std::endl;
  return -1;
}
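Listing 1.4: (1) declares the ICoordinateMapper interface. (2) Gets the Coordinate Mapper from the Sensor.

To show where these pieces lead before Section 2, here is a rough sketch of the per-frame step (this is not the article's Listing 1.5; the variable names, the OpenCV image colorMat, and the drawing details are assumptions): the latest Body frame is acquired from the Reader, the joints of each tracked body are retrieved, and each joint position is mapped into color-image coordinates with the ICoordinateMapper so it can be drawn as a dot.

// Sketch of the per-frame loop (assumes <Kinect.h> and OpenCV are included and
// that pBodyReader, pCoordinateMapper, and a cv::Mat colorMat already exist).
IBodyFrame* pBodyFrame = nullptr;
hResult = pBodyReader->AcquireLatestFrame( &pBodyFrame );
if (SUCCEEDED(hResult)) {
  IBody* pBodies[BODY_COUNT] = { nullptr };
  hResult = pBodyFrame->GetAndRefreshBodyData( BODY_COUNT, pBodies );
  if (SUCCEEDED(hResult)) {
    for (int i = 0; i < BODY_COUNT; ++i) {
      BOOLEAN isTracked = FALSE;
      if (pBodies[i] == nullptr ||
          FAILED(pBodies[i]->get_IsTracked(&isTracked)) || !isTracked) {
        continue;
      }
      Joint joints[JointType_Count];
      if (SUCCEEDED(pBodies[i]->GetJoints(JointType_Count, joints))) {
        for (int j = 0; j < JointType_Count; ++j) {
          if (joints[j].TrackingState == TrackingState_NotTracked) {
            continue;
          }
          // Map the 3D camera-space joint position into color-image coordinates.
          ColorSpacePoint colorPoint = { 0 };
          pCoordinateMapper->MapCameraPointToColorSpace(joints[j].Position, &colorPoint);
          const int x = static_cast<int>(colorPoint.X);
          const int y = static_cast<int>(colorPoint.Y);
          if (0 <= x && x < colorMat.cols && 0 <= y && y < colorMat.rows) {
            // Draw the joint as a small dot on the color image.
            cv::circle(colorMat, cv::Point(x, y), 5, cv::Scalar(0, 0, 255), -1);
          }
        }
      }
    }
    // Release the body interfaces filled in by GetAndRefreshBodyData().
    for (int i = 0; i < BODY_COUNT; ++i) {
      if (pBodies[i] != nullptr) {
        pBodies[i]->Release();
      }
    }
  }
  pBodyFrame->Release();
}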