(If reproduced, please credit the source.)
SDK used: Kinect for Windows SDK v2.0 (build 1409)
This is a supplemental note: it displays the depth frame in 3D. Updates to the SDK itself will be covered in later notes.
I had assumed, wrongly, that drawing a so-called point cloud every frame would be a heavy burden on the GPU.
In fact each frame is only about 500 × 400 points × 3 floats × 4 bytes ≈ 2 MB of data, so my earlier estimate was off.
The 3D API is still Direct3D 11; it is much easier this time, and a good chance to review it.
Key points:
0. Initialization: only three objects are needed:
IKinectSensor (via GetDefaultKinectSensor)
IDepthFrameReader (via IKinectSensor::get_DepthFrameSource and IDepthFrameSource::OpenReader)
ICoordinateMapper (via IKinectSensor::get_CoordinateMapper)
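The three initialization steps above can be sketched as follows (a minimal sketch with error handling collapsed into HRESULT checks; requires Kinect.h from the SDK and a connected sensor, so it is illustrative rather than standalone):

```cpp
#include <Kinect.h>  // Kinect for Windows SDK v2.0

IKinectSensor*     g_sensor = nullptr;
IDepthFrameReader* g_reader = nullptr;
ICoordinateMapper* g_mapper = nullptr;

HRESULT InitKinect()
{
    // IKinectSensor via GetDefaultKinectSensor
    HRESULT hr = GetDefaultKinectSensor(&g_sensor);
    if (SUCCEEDED(hr)) hr = g_sensor->Open();

    // IDepthFrameReader via the depth frame source
    IDepthFrameSource* source = nullptr;
    if (SUCCEEDED(hr)) hr = g_sensor->get_DepthFrameSource(&source);
    if (SUCCEEDED(hr)) hr = source->OpenReader(&g_reader);
    if (source) source->Release();  // the reader keeps the source alive

    // ICoordinateMapper
    if (SUCCEEDED(hr)) hr = g_sensor->get_CoordinateMapper(&g_mapper);
    return hr;
}
```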
1. Polling. Because the render loop already waits on vertical sync, we simply poll each frame with IDepthFrameReader::AcquireLatestFrame instead of blocking on an event.
2. Data acquisition. Read the raw depth data from the IDepthFrame in the usual way.
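Steps 1 and 2 together amount to a short polling helper. A sketch, assuming the reader from step 0 (AccessUnderlyingBuffer exposes the frame's 16-bit depth values, in millimeters, which must be copied before the frame is released):

```cpp
#include <Kinect.h>
#include <vector>

// Poll for the newest depth frame and copy it into 'dest'; true on success.
bool PollDepth(IDepthFrameReader* reader, std::vector<UINT16>& dest)
{
    IDepthFrame* frame = nullptr;
    // AcquireLatestFrame fails (e.g. E_PENDING) when no new frame is ready
    if (FAILED(reader->AcquireLatestFrame(&frame)))
        return false;

    UINT    capacity = 0;
    UINT16* buffer   = nullptr;
    HRESULT hr = frame->AccessUnderlyingBuffer(&capacity, &buffer);
    if (SUCCEEDED(hr))
        dest.assign(buffer, buffer + capacity);  // copy before releasing

    frame->Release();
    return SUCCEEDED(hr);
}
```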
3. Coordinate mapping. Use ICoordinateMapper::MapDepthFrameToCameraSpace to map the depth data into camera space.
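The mapping call takes the whole depth frame at once. A sketch (CameraSpacePoint is the SDK's (x, y, z)-in-meters struct; the 512 × 424 frame size is the Kinect v2 depth resolution):

```cpp
#include <Kinect.h>
#include <vector>

// Map an entire depth frame (UINT16 millimeter values) into camera space.
std::vector<CameraSpacePoint> MapToCameraSpace(ICoordinateMapper* mapper,
                                               const UINT16* depth, UINT count)
{
    std::vector<CameraSpacePoint> points(count);  // one point per depth pixel
    mapper->MapDepthFrameToCameraSpace(count, depth,
                                       count, points.data());
    return points;
}
```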
4. Rendering. Only points are rendered, so a simple VS + PS pipeline is enough: the vertex shader transforms the coordinates and the pixel shader computes the color.
The visualization differs from the 2D algorithm: this time we map depth values in the 0.4~4.5 m range to wavelengths of roughly 400~700 nm,
i.e. the colors of the rainbow.
So the PS code looks like this (note the blue channel is written to color.z; the original post's listing mistakenly reused color.y):

```hlsl
// PS input <-> VS output
struct PSInput
{
    float4 position     : SV_POSITION;
    float4 raw_position : POSITION;
};

// Approximate RGB color of spectral wavelength l (in nm)
float4 spectral_color(float l)
{
    float t;
    float4 color = float4(0.0, 0.0, 0.0, 1.0);
    // R
    if ((l >= 400.0) && (l < 410.0))      { t = (l - 400.0) / (410.0 - 400.0); color.x = (0.33 * t) - (0.20 * t * t); }
    else if ((l >= 410.0) && (l < 475.0)) { t = (l - 410.0) / (475.0 - 410.0); color.x = 0.14 - (0.13 * t * t); }
    else if ((l >= 545.0) && (l < 595.0)) { t = (l - 545.0) / (595.0 - 545.0); color.x = (1.98 * t) - (t * t); }
    else if ((l >= 595.0) && (l < 650.0)) { t = (l - 595.0) / (650.0 - 595.0); color.x = 0.98 + (0.06 * t) - (0.40 * t * t); }
    else if ((l >= 650.0) && (l < 700.0)) { t = (l - 650.0) / (700.0 - 650.0); color.x = 0.65 - (0.84 * t) + (0.20 * t * t); }
    // G
    if ((l >= 415.0) && (l < 475.0))      { t = (l - 415.0) / (475.0 - 415.0); color.y = (0.80 * t * t); }
    else if ((l >= 475.0) && (l < 590.0)) { t = (l - 475.0) / (590.0 - 475.0); color.y = 0.8 + (0.76 * t) - (0.80 * t * t); }
    else if ((l >= 585.0) && (l < 639.0)) { t = (l - 585.0) / (639.0 - 585.0); color.y = 0.84 - (0.84 * t); }
    // B
    if ((l >= 400.0) && (l < 475.0))      { t = (l - 400.0) / (475.0 - 400.0); color.z = (2.20 * t) - (1.50 * t * t); }
    else if ((l >= 475.0) && (l < 560.0)) { t = (l - 475.0) / (560.0 - 475.0); color.z = 0.7 - t + (0.30 * t * t); }
    return color;
}

// Shader entry point
float4 main(PSInput input) : SV_TARGET
{
    float4 judgment = float4(0.4, 4.5, 400.0, 700.0);
    if ((input.raw_position.z >= judgment.x) && (input.raw_position.z <= judgment.y))
    {
        // maps depth 0.4~4.5 m to wavelength 700~400 nm
        // 73.1707 = (judgment.w - judgment.z) / (judgment.y - judgment.x)
        return spectral_color(judgment.w - (input.raw_position.z - judgment.x) * 73.1707f) * 2.0f;
    }
    return float4(0.0, 0.0, 0.0, 1.0);
}
```
The final multiply-by-2 just boosts the brightness.
The vertex shader is trivial, so it is not shown here.
Each time depth data is polled, it is mapped into camera space and uploaded to the GPU;
rendering those vertices every frame is then straightforward.
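The per-frame upload can be done with a dynamic vertex buffer and Map/Unmap. A sketch, assuming a buffer created with D3D11_USAGE_DYNAMIC and D3D11_CPU_ACCESS_WRITE and sized for a full frame:

```cpp
#include <d3d11.h>
#include <Kinect.h>
#include <cstring>

// Copy this frame's camera-space points into a dynamic vertex buffer.
void UploadPoints(ID3D11DeviceContext* context, ID3D11Buffer* vb,
                  const CameraSpacePoint* points, UINT count)
{
    D3D11_MAPPED_SUBRESOURCE mapped = {};
    // WRITE_DISCARD: we overwrite the whole buffer every frame
    if (SUCCEEDED(context->Map(vb, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
    {
        std::memcpy(mapped.pData, points, count * sizeof(CameraSpacePoint));
        context->Unmap(vb, 0);
    }
    // Then bind with IASetVertexBuffers and Draw(count, 0)
    // using D3D11_PRIMITIVE_TOPOLOGY_POINTLIST.
}
```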
This is the finished product:
Code: Click here
Oh, and when uploading I noticed that no files need to be copied this time, so the custom build step there is superfluous.
Kinect for Windows SDK v2.0 Development Note (17) Depth frame 3D