Program source code: http://idav.ucdavis.edu/~okreylos/ResDev/Kinect/Download.html
To run this program, you need to install the Vrui Toolkit, available at http://idav.ucdavis.edu/~okreylos/ResDev/Vrui/Download.html
Kinect-2.2 requires Vrui-2.4-001 or later.
Install the Vrui Toolkit
1. Install the necessary build tools first.
sudo aptitude update
sudo aptitude install build-essential
sudo aptitude install zlib1g-dev mesa-common-dev libgl1-mesa-dev libglu1-mesa-dev
2. Install some optional libraries.
sudo aptitude install libusb-1.0-0-dev libpng12-dev libjpeg62-dev libtiff4-dev
sudo aptitude install libdc1394-22-dev libspeex-dev libogg-dev libtheora-dev libbluetooth-dev libopenal-dev
These libraries enable USB communication with the Kinect, loading and saving images in multiple formats, recording and playing back stereo sound, capturing video, and streaming video or audio over the Internet.
3. Download and install Vrui.
Extract the downloaded installation package into ~/src, then run make and make install to complete the installation.
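A minimal sketch of this step, assuming the downloaded package is named Vrui-2.4-001.tar.gz (adjust the file name to the version you actually downloaded):
cd ~/src
tar xfz Vrui-2.4-001.tar.gz
cd Vrui-2.4-001
make
make install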
4. Test the installation.
Go to the ExamplePrograms folder and run make.
Run ShowEarthModel from the bin folder.
If the installation succeeded, a model of the Earth is displayed.
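For example, assuming the Vrui source was extracted into ~/src/Vrui-2.4-001:
cd ~/src/Vrui-2.4-001/ExamplePrograms
make
./bin/ShowEarthModel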
Install Kinect-2.2
Extract the installation package in ~/src, then run:
make
make install
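A sketch of this step, assuming the package is named Kinect-2.2.tar.gz:
cd ~/src
tar xfz Kinect-2.2.tar.gz
cd Kinect-2.2
make
make install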
Using Kinect-2.2
For a single Kinect
./bin/KinectUtil can be used to list the connected Kinect devices.
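For example, the following should print the connected Kinect devices with their serial numbers (run KinectUtil without arguments to see its full usage):
./bin/KinectUtil list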
./bin/RawKinectViewer <camera index>
The color and depth streams can be calibrated against each other, generating a binary IntrinsicParameters-<serial number>.dat file.
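For example, to open the first connected camera (index 0):
./bin/RawKinectViewer 0
The calibration itself is performed interactively inside the viewer; the .dat file is written once the calibration procedure is completed.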
./bin/KinectViewer -c <camera index> displays the reconstructed result.
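For example, to view the reconstruction from the first camera:
./bin/KinectViewer -c 0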
For multiple Kinect cameras
To merge the 3D reconstruction results of multiple Kinect cameras, you must first use RawKinectViewer to calibrate each camera as described above. Then all the Kinect devices are calibrated against a chosen world coordinate system:
1. Place the calibration target in the field of view of all the devices to be calibrated.
2. For each Kinect device:
a. Run RawKinectViewer and fit a grid to the target image in the depth stream.
b. Save the grid without projecting it to obtain a series of 3D tie points.
c. Copy the points into a file KinectPoints-<serial number>.csv (see the example format after this list).
3. Create a new TargetPoints.csv file containing the 3D positions of the calibration target's interior corners (same format, shown after this list).
4. For each Kinect device:
a. Run AlignPoints:
./bin/AlignPoints -OG KinectPoints-<serial number>.csv TargetPoints.csv
b. Observe the mismatched points displayed by AlignPoints; if the error is too large, repeat step 2.
c. Save the best-fit transformation printed by AlignPoints at the end into the file ProjectorTransform-<serial number>.txt.
5. Run KinectViewer with all Kinect devices. All 3D video streams are displayed together, and they match up more or less closely depending on the quality of the preceding calibration.
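As a concrete sketch of the point files from steps 2c and 3: the exact layout AlignPoints expects is an assumption on my part, but one comma-separated x, y, z point per line is a reasonable format to adapt. For a target whose interior corners are spaced 3.5 cm apart (a hypothetical spacing; use your target's real measurements), TargetPoints.csv might begin:
0.0, 0.0, 0.0
3.5, 0.0, 0.0
7.0, 0.0, 0.0
0.0, 3.5, 0.0
Each KinectPoints-<serial number>.csv file uses the same layout, with the corner positions as measured by that camera, listed in the same order as in TargetPoints.csv so that AlignPoints can pair corresponding points.
For step 5, assuming two cameras with indices 0 and 1, each camera is added with its own -c option:
./bin/KinectViewer -c 0 -c 1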
In addition, there are video demos on YouTube, but there is a gap between my results and those videos. My guess is that one of the libraries is not fully compatible with NVIDIA graphics cards.