When reprinting, please cite the source: http://blog.csdn.net/lxk7280
First, get to know the KinectOrbit camera library. This earlier post includes a download link and a brief introduction: http://blog.csdn.net/lxk7280/article/details/38184355. Once the downloaded files are placed in the libraries subdirectory of your Processing sketchbook, the library is ready to use.
Mouse and keyboard controls in the KinectOrbit library:
1. Right-drag: pan the camera.
2. Left-drag: rotate around the object.
3. Scroll wheel: zoom.
4. The P key saves the current viewpoint, and the O key exits. The next time the program runs, it starts from the last saved viewpoint; the viewpoint parameters are stored in a file named "Orbitset_0.csv" in the data directory. If that file is deleted, the program starts from the default viewpoint.
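The library reads and writes Orbitset_0.csv itself, but as a rough illustration of the idea (hypothetical helper names and made-up values, not the KinectOrbit API), viewpoint parameters can be round-tripped through a CSV line like this:

```java
import java.util.Arrays;

// Hypothetical sketch: camera parameters serialized as one comma-separated line,
// the way a viewpoint file such as Orbitset_0.csv could plausibly store them.
public class OrbitSetCsv {
    // Serialize camera parameters (e.g. rotation angles and zoom) to one CSV line.
    static String toCsv(double[] params) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < params.length; i++) {
            if (i > 0) sb.append(',');
            sb.append(params[i]);
        }
        return sb.toString();
    }

    // Parse one CSV line back into camera parameters.
    static double[] fromCsv(String line) {
        String[] parts = line.split(",");
        double[] out = new double[parts.length];
        for (int i = 0; i < parts.length; i++) {
            out[i] = Double.parseDouble(parts[i]);
        }
        return out;
    }

    public static void main(String[] args) {
        double[] view = {0.5, -1.25, 300.0};  // rotX, rotY, zoom (made-up values)
        String line = toCsv(view);
        System.out.println(line);                                // 0.5,-1.25,300.0
        System.out.println(Arrays.equals(view, fromCsv(line)));  // true
    }
}
```

Deleting the file simply means there is nothing to parse on startup, so the camera falls back to its defaults.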
Below are three screenshots:
1. Close-range view
This is because the Kinect has a limited depth range and viewing angle:

| Capability | Range |
| --- | --- |
| Color and depth image | 1.2 – 3.6 meters |
| Skeleton tracking | 1.2 – 3.6 meters |
| Viewing angle | 57° horizontal, 43° vertical |
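A quick bit of arithmetic (a plain-Java sketch I am adding for illustration, not from the original post) shows how wide and tall the visible area is at a given distance, using the usual formula extent = 2·d·tan(fov/2):

```java
public class KinectFov {
    // Visible extent (meters) at distance d for a given full field-of-view angle.
    static double extentAt(double distanceM, double fovDegrees) {
        return 2.0 * distanceM * Math.tan(Math.toRadians(fovDegrees / 2.0));
    }

    public static void main(String[] args) {
        // At 2 m, the 57°-horizontal / 43°-vertical view covers roughly:
        System.out.printf("width:  %.2f m%n", extentAt(2.0, 57.0));  // ~2.17 m
        System.out.printf("height: %.2f m%n", extentAt(2.0, 43.0));  // ~1.58 m
    }
}
```

So even at 2 meters, anything outside a window of roughly 2.2 × 1.6 meters is simply not captured.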
Part of my body lies outside the range the Kinect's depth camera can capture, so it does not show up in the picture.
2. Medium-range view
At this distance you can clearly see the large projection behind my desk, as well as my arm.
3. Long-range view
Code. Step 1: import the three libraries we will use:
import processing.opengl.*;
import SimpleOpenNI.*;
import kinectOrbit.*;
Step 2: declare the KinectOrbit and SimpleOpenNI objects:
KinectOrbit myOrbit;
SimpleOpenNI kinect;
Step 3: initialize the objects and enable the depth camera:
void setup() {
  size(800, 600, OPENGL);
  myOrbit = new KinectOrbit(this, 0);
  kinect = new SimpleOpenNI(this);
  kinect.enableDepth();
}
Step 4: in the 3D draw loop, render the point cloud and the camera frustum (the frustum is the 3D region visible on screen; the Kinect frustum is the region of space the Kinect can see):
void draw() {
  kinect.update();
  background(0);
  myOrbit.pushOrbit(this);
  drawPointCloud();
  kinect.drawCamFrustum();
  myOrbit.popOrbit(this);
}
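To make the frustum idea concrete, here is a small standalone Java sketch (my own illustration, not part of the Processing code) that checks whether a 3D point falls inside an idealized Kinect frustum built from the range and viewing angles listed earlier:

```java
public class FrustumCheck {
    // Returns true if a point (meters, camera coordinates, z pointing forward)
    // lies inside an idealized Kinect frustum: 1.2–3.6 m deep, 57° x 43° FOV.
    static boolean inFrustum(double x, double y, double z) {
        if (z < 1.2 || z > 3.6) return false;       // outside the depth range
        double halfW = z * Math.tan(Math.toRadians(57.0 / 2.0));
        double halfH = z * Math.tan(Math.toRadians(43.0 / 2.0));
        return Math.abs(x) <= halfW && Math.abs(y) <= halfH;
    }

    public static void main(String[] args) {
        System.out.println(inFrustum(0, 0, 2.0));   // true: centered, in range
        System.out.println(inFrustum(0, 0, 0.5));   // false: too close
        System.out.println(inFrustum(3.0, 0, 2.0)); // false: outside the 57° cone
    }
}
```

Points that fail this kind of test are exactly the ones missing from the screenshots above.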
Step 5: implement the point-cloud drawing function:
void drawPointCloud() {
  int[] depthMap = kinect.depthMap();
  int steps = 3;  // draw every third pixel to keep the frame rate up
  int index;
  PVector realWorldPoint;
  stroke(255);
  for (int y = 0; y < kinect.depthHeight(); y += steps) {
    for (int x = 0; x < kinect.depthWidth(); x += steps) {
      // Color each point with the corresponding depth-image pixel
      stroke(kinect.depthImage().get(x, y));
      index = x + y * kinect.depthWidth();
      if (depthMap[index] > 0) {  // skip pixels with no depth reading
        realWorldPoint = kinect.depthMapRealWorld()[index];
        point(realWorldPoint.x, realWorldPoint.y, realWorldPoint.z);
      }
    }
  }
}
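One detail worth pausing on is the row-major indexing `index = x + y * width`, which maps a 2D pixel coordinate into the flat depth array. A tiny standalone Java demo (separate from the sketch) shows how it behaves at the edges of the 640 × 480 depth image:

```java
public class DepthIndex {
    // Row-major mapping: pixel (x, y) in a width-wide image
    // maps to flat array index x + y * width.
    static int index(int x, int y, int width) {
        return x + y * width;
    }

    public static void main(String[] args) {
        int width = 640, height = 480;  // Kinect depth resolution
        // Last pixel of the first row, then first pixel of the second row:
        System.out.println(index(639, 0, width));  // 639
        System.out.println(index(0, 1, width));    // 640
        // The whole map occupies exactly width * height entries:
        System.out.println(index(639, 479, width) == width * height - 1);  // true
    }
}
```

This is why the same `index` can be used for both `depthMap[]` and `depthMapRealWorld()[]`: both arrays store the pixels in the same row-major order.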
Copyright notice: this is an original blog post and may not be reproduced without the author's consent.
Kinect and Arduino Learning Notes (2): depth-image coordinates and the real-world depth map