If you reprint this article, please indicate the source: http://blog.csdn.net/lxk7280
First, install the KinectOrbit camera library. The download URL and a brief introduction of this library are given here: http://blog.csdn.net/lxk7280/article/details/38184355. Place the downloaded library in the libraries subfolder of your Processing sketchbook.
Mouse and keyboard controls in the KinectOrbit library:
1. Right-click and drag: pan the camera.
2. Left-click and drag: rotate around the object.
3. Scroll wheel: zoom.
4. Press P to save the current viewpoint and O to exit. The next time the program runs, it starts from the stored viewpoint. The viewpoint parameters are saved in a file named orbitset_0.csv in the data folder; if that file is deleted, the program starts with the default viewpoint.
To begin, here are screenshots from three viewpoints:
1. Close-up view
Because the Kinect's depth range and field of view are limited:

Color and depth: 1.2 ~ 3.6 meters
Skeleton tracking: 1.2 ~ 3.6 meters
Field of view: 57 degrees horizontal, 43 degrees vertical
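As a quick sanity check against the range above, a depth reading can be tested before it is used for skeleton tracking. This is a minimal sketch in plain Java (SimpleOpenNI's depthMap() reports distances in millimetres; the constant and method names here are illustrative, not part of the library):

```java
public class DepthRange {
    // Skeleton tracking works between 1.2 m and 3.6 m (see the table above).
    static final int MIN_MM = 1200;
    static final int MAX_MM = 3600;

    // True if a depth reading (in millimetres) lies in the tracking range.
    static boolean inTrackingRange(int depthMm) {
        return depthMm >= MIN_MM && depthMm <= MAX_MM;
    }

    public static void main(String[] args) {
        System.out.println(inTrackingRange(2000)); // 2.0 m: inside the range
        System.out.println(inTrackingRange(800));  // 0.8 m: too close
        System.out.println(inTrackingRange(4000)); // 4.0 m: too far
    }
}
```

Points outside this band are exactly the ones that disappear from the screenshots below.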
Part of my body is outside the range that the Kinect depth camera can capture, so it does not appear in the image.
2. Medium-distance view
Here you can clearly see the table behind me and the large shadow that my arm projects onto it.
3. Distant view
Code:
Step 1: import the three libraries you need
import processing.opengl.*;
import SimpleOpenNI.*;
import kinectOrbit.*;
Step 2: declare the myOrbit and kinect objects
KinectOrbit myOrbit;
SimpleOpenNI kinect;
Step 3: initialize the objects and enable the depth camera
void setup() {
  size(800, 600, OPENGL);
  myOrbit = new KinectOrbit(this, 0);
  kinect = new SimpleOpenNI(this);
  kinect.enableDepth();
}
Step 4: in the 3D rendering loop, draw the point cloud and the camera frustum (the 3D region visible on screen; the Kinect frustum marks the volume of space the Kinect can see)
void draw() {
  kinect.update();
  background(0);
  myOrbit.pushOrbit(this);  // apply the orbit camera transform
  drawPointCloud();
  kinect.drawCamFrustum();  // draw the Kinect's viewing frustum
  myOrbit.popOrbit(this);   // restore the previous transform
}
Step 5: implement the point-cloud drawing function
void drawPointCloud() {
  int[] depthMap = kinect.depthMap();  // depth values in millimetres
  int steps = 3;                       // only draw every third pixel
  int index;
  PVector realWorldPoint;
  stroke(255);
  for (int y = 0; y < kinect.depthHeight(); y += steps) {
    for (int x = 0; x < kinect.depthWidth(); x += steps) {
      // color each point with the grayscale depth image
      stroke(kinect.depthImage().get(x, y));
      index = x + y * kinect.depthWidth();  // row-major index into the flat array
      if (depthMap[index] > 0) {            // 0 means no depth reading at this pixel
        realWorldPoint = kinect.depthMapRealWorld()[index];
        point(realWorldPoint.x, realWorldPoint.y, realWorldPoint.z);
      }
    }
  }
}
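The heart of the loop is the row-major mapping index = x + y * depthWidth, which turns an (x, y) pixel into its slot in the flat depth array; depthMapRealWorld() uses the same indexing, so the color and the 3D position stay aligned. A small standalone sketch of that mapping in plain Java (the 640x480 resolution is an assumption, the Kinect's usual depth-map size; in a real sketch query kinect.depthWidth() and kinect.depthHeight()):

```java
public class DepthIndex {
    // Assumed depth-map resolution; the real values come from the library.
    static final int WIDTH = 640;
    static final int HEIGHT = 480;

    // Row-major mapping from pixel coordinates to the flat array index,
    // exactly as drawPointCloud() computes it.
    static int index(int x, int y) {
        return x + y * WIDTH;
    }

    // How many pixels the two nested loops visit for a given step size.
    static int visited(int steps) {
        int count = 0;
        for (int y = 0; y < HEIGHT; y += steps)
            for (int x = 0; x < WIDTH; x += steps)
                count++;
        return count;
    }

    public static void main(String[] args) {
        System.out.println(index(0, 0));    // first pixel -> 0
        System.out.println(index(639, 0));  // end of the first row -> 639
        System.out.println(index(0, 1));    // start of the second row -> 640
        // With steps = 3 the loops draw roughly 1/9 of the 307,200 pixels,
        // which is what keeps the sketch responsive.
        System.out.println(visited(3));
    }
}
```

Increasing steps trades point density for frame rate; steps = 1 draws every pixel of the depth map.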