"Translation" Kinect v2 programming (c + +) Depth
This article describes how to obtain depth data with the Kinect SDK v2 preview. The previous section described how to obtain color data from the Kinect for Windows v2 Developer Preview (hereafter, Kinect v2 preview) using the Kinect for Windows SDK v2 Preview (hereafter, Kinect SDK v2 preview). This section describes how to obtain depth data from the Kinect.

Depth Sensor
The Kinect is equipped with a depth sensor that obtains depth data (distance information from the sensor). Kinect v1 carried a "Light Coding" type depth sensor: it projects an infrared pattern and derives depth information from the deformation of that pattern. In the Kinect v2 preview, the depth sensor was changed to a "Time of Flight (ToF)" type, which obtains depth information from the time it takes the projected infrared pulses to bounce back.

"Light Coding" is the depth-sensing technology of the Israeli company PrimeSense. For details, see the published US patent (US 2010/0118123 A1), "Depth Mapping Using Projected Patterns". "Time of Flight (ToF)" is generally believed to be based on the depth-sensing technology of companies acquired by Microsoft (3DV Systems and Canesta).

Compared with Kinect v1, the resolution of the depth data in the Kinect v2 preview is raised to 512x424, and the resolution in the depth direction is also improved. The obtainable depth range is 0.8~4.0[m] for Kinect v1, while the Kinect v2 preview can obtain 0.5~4.5[m]. These are the ranges for Default mode: Kinect v1 additionally provides Near mode (0.4~3.0[m]) for obtaining close-range depth data and Extended Depth (up to about 10.0[m]) for obtaining long-range depth data, although the accuracy of depth data outside the Default mode range degrades.
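Since the depth values delivered by the sensor are distances in millimeters, the ranges above translate directly into code. A minimal sketch, assuming the Default mode range of the Kinect v2 preview; the constant and function names are illustrative and not part of the SDK:

```cpp
#include <cstdint>

// Depth values are distances from the sensor in millimeters.
// Reliable range of the Kinect v2 preview in Default mode: 0.5~4.5 m.
constexpr uint16_t kMinReliableDepthMm = 500;   // 0.5 m
constexpr uint16_t kMaxReliableDepthMm = 4500;  // 4.5 m

// True if a raw depth value lies inside the reliable range.
bool isReliableDepth(uint16_t depthMm) {
    return depthMm >= kMinReliableDepthMm && depthMm <= kMaxReliableDepthMm;
}

// Convert a raw depth value (millimeters) to meters.
float depthToMeters(uint16_t depthMm) {
    return depthMm / 1000.0f;
}
```

Values outside this range (for example 0, which the sensor reports for pixels it could not measure) should be treated as invalid.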
This section introduces how to obtain depth data from the depth sensor.

Figure 1  Differences between the "Light Coding" and "Time of Flight (ToF)" types (illustration of how the depth sensors work)

Sample Program
Below, a sample program that obtains depth data with the Kinect SDK v2 preview and visualizes it is excerpted phase by phase, following the data-acquisition flow introduced in the previous section. The full contents of this sample program are available on GitHub:
https://github.com/UnaNancyOwen/Kinect2Sample

Figure 2  Data-acquisition flow of the Kinect SDK v2 preview (reposted)

"Sensor"
Obtain the "Sensor".
Sensor
IKinectSensor* pSensor; ...... 1

HRESULT hResult = S_OK;
hResult = GetDefaultKinectSensor( &pSensor ); ...... 2
if( FAILED( hResult ) ){
    std::cerr << "Error : GetDefaultKinectSensor" << std::endl;
    return -1;
}

hResult = pSensor->Open(); ...... 3
if( FAILED( hResult ) ){
    std::cerr << "Error : IKinectSensor::Open()" << std::endl;
    return -1;
}
Listing 1.1  Part corresponding to "Sensor" in Figure 2 (reposted)
1. The Sensor interface of the Kinect v2 preview.
2. Obtain the default Sensor.
3. Open the Sensor.
"source"Obtained "source" from "sensor".
Source
IDepthFrameSource* pDepthSource; ...... 1

hResult = pSensor->get_DepthFrameSource( &pDepthSource ); ...... 2
if( FAILED( hResult ) ){
    std::cerr << "Error : IKinectSensor::get_DepthFrameSource()" << std::endl;
    return -1;
}
Listing 1.2  Part corresponding to "Source" in Figure 2
1. The Source interface of the depth frame.
2. Obtain the Source from the Sensor.

In the Kinect SDK v1, depth data was mainly obtained through a "Stream" that delivered Depth and Player (human-region) data at the same time, so the depth data and the player data had to be separated afterwards. (Note: the Kinect SDK v1 provides two ways to handle Depth and Player Index: the old method returns a single set of USHORTs and the two values are separated by bit-shifting; the new method returns a structure holding the two as separate USHORTs.) In the Kinect SDK v2 preview, Depth and BodyIndex (the equivalent of Player in the Kinect SDK v1) are provided as separate "Sources"; BodyIndex is described in the next section. The Kinect SDK v1 also had a "Stream" for obtaining Depth alone, but because Player data was often needed together with Skeleton (body-posture) data, the combined Depth-and-Player "Stream" was commonly used.
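The "old method" mentioned in the note can be sketched as follows. In the Kinect SDK v1 the player index occupied the low 3 bits of each packed USHORT; the helper names here are made up for illustration and are not SDK functions:

```cpp
#include <cstdint>

// Kinect SDK v1 "old method": one USHORT per pixel, with the depth value
// (millimeters) in the upper 13 bits and the 3-bit player index in the
// lower bits. Illustrative helpers, not SDK functions.
uint16_t v1DepthMm(uint16_t packed)     { return packed >> 3; }   // upper 13 bits
uint16_t v1PlayerIndex(uint16_t packed) { return packed & 0x7; }  // lower 3 bits

// Repacking, shown only to make the layout explicit.
uint16_t v1Pack(uint16_t depthMm, uint16_t playerIndex) {
    return static_cast<uint16_t>((depthMm << 3) | (playerIndex & 0x7));
}
```

In the Kinect SDK v2 preview this unpacking step disappears: Depth and BodyIndex arrive from separate Sources, each already in its own format.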
"reader""source" from Open "reader".
Reader
IDepthFrameReader* pDepthReader; ...... 1

hResult = pDepthSource->OpenReader( &pDepthReader ); ...... 2
if( FAILED( hResult ) ){
    std::cerr << "Error : IDepthFrameSource::OpenReader()" << std::endl;
    return -1;
}
Listing 1.3  Part corresponding to "Reader" in Figure 2
1. The Reader interface of the depth frame.
2. Open the Reader from the Source.
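The frame-handling listing below releases the frame interface with SafeRelease(), which is not shown in the excerpt. A minimal sketch of that helper as it is commonly defined in Kinect SDK sample code:

```cpp
// Release a COM interface pointer if it is set, and null it out so it
// cannot be released twice. Commonly defined in Kinect SDK samples.
template<class Interface>
inline void SafeRelease(Interface*& pInterfaceToRelease) {
    if (pInterfaceToRelease != nullptr) {
        pInterfaceToRelease->Release();
        pInterfaceToRelease = nullptr;
    }
}
```

Because the pointer is passed by reference and reset to nullptr, calling SafeRelease() twice on the same pointer is harmless.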
"frame"~"data"Get the latest "frame" from "reader".
"Frame" ~ "Data"
Obtain the latest "Frame" from the "Reader".

int width = 512; ...... 1
int height = 424; ...... 1
unsigned int bufferSize = width * height * sizeof( unsigned short ); ...... 2
cv::Mat bufferMat( height, width, CV_16UC1 ); ...... 3
cv::Mat depthMat( height, width, CV_8UC1 ); ...... 3
cv::namedWindow( "Depth" );

while( 1 ){
    // Frame
    IDepthFrame* pDepthFrame = nullptr; ...... 4
    hResult = pDepthReader->AcquireLatestFrame( &pDepthFrame ); ...... 5
    if( SUCCEEDED( hResult ) ){
        hResult = pDepthFrame->AccessUnderlyingBuffer( &bufferSize, reinterpret_cast<UINT16**>( &bufferMat.data ) ); ...... 6
        if( SUCCEEDED( hResult ) ){
            bufferMat.convertTo( depthMat, CV_8U, -255.0f / 4500.0f, 255.0f ); ...... 7
        }
    }
    SafeRelease( pDepthFrame );

    // Show Window
    cv::imshow( "Depth", depthMat );
    if( cv::waitKey( 30 ) == VK_ESCAPE ){
        break;
    }
}
Listing 1.4  Part corresponding to "Frame" and "Data" in Figure 2
1. The size of the depth data (512x424). To simplify the explanation, the image size is hard-coded here; the sample program on GitHub retrieves the frame information from the Source.
2. The size of the depth data buffer.
3. cv::Mat instances prepared for handling the depth data with OpenCV. "bufferMat" holds the raw 16-bit depth data; "depthMat" holds the depth data processed into an 8-bit range so it can be displayed as an image. "CV_16UC1" is a data format that represents one pixel as an unsigned 16-bit integer (16U) in a single channel (C1). (Note: the original article wrote CV_16SC1, the signed format, but the unsigned CV_16UC1 is the correct one here.) "CV_8UC1" is the corresponding unsigned 8-bit integer (8U) format.
4. The Frame interface of the depth data.
5. Obtain the latest Frame from the Reader.
6. Obtain the depth data from the Frame. This retrieves a pointer to the array in which the depth data is stored; here it is obtained into a cv::Mat so that the depth data is easy to visualize and process.
7. Convert the depth data from 16 bit to 8 bit so that it can be displayed as an image.

Once the "Frame" is obtained, the depth data can be taken out and visualized as an image. As shown in Figure 3, the extracted depth data consists of 16 bits (0~4500) per pixel. Since such an image cannot be displayed as-is (note: OpenCV can only display 8-bit image data), it has to be converted into the 8-bit range (0~255). The sample program uses the cv::Mat conversion method (cv::Mat::convertTo()) so that points near the sensor appear white (255) and points far away appear dark (0).
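The 16-bit to 8-bit conversion in step 7 is a linear mapping: convertTo() computes dst = alpha * src + beta with saturation, so with alpha = -255/4500 and beta = 255, a depth of 0 mm maps to 255 (white) and 4500 mm maps to 0 (black). A per-pixel sketch of the same arithmetic; depthToGray() is an illustrative name, not an OpenCV function:

```cpp
#include <algorithm>
#include <cstdint>

// Same linear mapping as convertTo( depthMat, CV_8U, -255.0f/4500.0f, 255.0f ):
// near pixels (small depth) come out bright, far pixels come out dark.
uint8_t depthToGray(uint16_t depthMm) {
    float v = -255.0f / 4500.0f * depthMm + 255.0f;
    v = std::min(std::max(v, 0.0f), 255.0f);  // saturate to [0, 255]
    return static_cast<uint8_t>(v + 0.5f);    // round to nearest
}
```

OpenCV applies this formula (with its own saturating rounding) to every pixel of bufferMat in a single convertTo() call, which is why no explicit loop appears in the listing.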
Figure 3 Arrangement of depth data
Run Results
Running this sample program produces a depth image from the Kinect v2 preview, as in Figure 4.
Figure 4 Running results
Summary
This section described a sample program that obtains depth data with the Kinect SDK v2 preview. The next section describes a sample program that obtains BodyIndex data.