I. Introduction

What did we build? In the run-up to the TGC2016 exhibition in Chengdu, we developed a motion-sensing game based on the "Naruto" mobile game, recreating its "Nine-Tails Attack" chapter: the player takes on the role of the Fourth Hokage and duels the Nine-Tails. The game attracted a large number of players. On the surface it looks like any other motion-sensing experience, but in fact it runs in the Chrome browser. In other words, by mastering the corresponding front-end technology, we can develop Kinect-based web motion-sensing games.
II. Implementation Principle

What is the idea behind the implementation? Using HTML5 to develop Kinect-based motion-sensing games is actually very simple in principle: capture player and environment data (such as the human skeleton) from the Kinect, then make that data accessible to the browser in some way.
1. Collect Data
The Kinect has three lenses: the middle one is similar to an ordinary camera and captures the color image, while the lenses on either side use infrared to acquire depth data. Through the SDK provided by Microsoft, we read the following types of data:
- Color data: color image;
- Depth data: depth-of-field information;
- Skeleton data: the human skeleton, computed from the two kinds of data above.
2. Make Kinect data accessible to the browser
The frameworks I have tried and researched all follow the same basic pattern: a socket lets the browser communicate with a server process that relays the Kinect data:
- kinect-html5: the server side is built with C#; color, depth, and skeleton data are all provided;
- Zigfu: supports HTML5, Unity3D, and Flash development; the API is fairly complete, but it appears to be a paid product;
- DepthJS: provides data access in the form of a browser plugin;
- Node-kinect2: the server side is built with Node.js; it provides fairly complete data and comes with many examples.
In the end I chose Node-kinect2: although it has no documentation, it ships with many examples, it is based on Node.js, which front-end engineers already know well, and its author responds to feedback quickly.
The resulting architecture has three layers:

- Kinect: captures player data, such as depth images and color images;
- Node-kinect2: reads the data from the Kinect and performs secondary processing;
- Browser: listens on the port exposed by the Node app, receives the player data, and implements the game logic.
III. Preparation

First, buy a Kinect.
1. System Requirements:
These are hard requirements; I wasted a great deal of time in a non-conforming environment.
- USB3.0
- Video card with DX11 support
- Windows 8 or above
- A browser with Web Sockets support
- And, of course, an indispensable Kinect v2 sensor
2. Environment setup:
- Connect the Kinect v2
- Install KinectSDK-v2.0
- Install Node.js
- Install node-kinect2:

```
npm install kinect2
```
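To verify that the environment works end to end, a minimal smoke test can be run. This is a sketch of my own, not from the original article; it only uses calls documented later in this post (open, on('bodyFrame'), openBodyReader):

```javascript
// smoke-test.js: confirm the sensor and the kinect2 module are working
var Kinect2 = require('kinect2');
var kinect = new Kinect2();

if (kinect.open()) {
    console.log('Kinect opened, waiting for body frames...');
    kinect.on('bodyFrame', function(bodyFrame) {
        // Count how many of the six possible bodies are currently tracked
        var tracked = bodyFrame.bodies.filter(function(body) {
            return body.tracked;
        }).length;
        console.log('tracked bodies: ' + tracked);
    });
    kinect.openBodyReader();
} else {
    console.log('Failed to open the Kinect; check USB 3.0 and the SDK installation.');
}
```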
IV. A Worked Example

Nothing beats an example! Here we demonstrate how to obtain the human skeleton, and how to recognize the spine midpoint and hand gestures:
1. Server-side
Create a web server and send the skeleton data to the browser side; the code is as follows:
```javascript
var Kinect2 = require('../../lib/kinect2'),
    express = require('express'),
    app = express(),
    server = require('http').createServer(app),
    io = require('socket.io').listen(server);

var kinect = new Kinect2();

// Open the Kinect
if (kinect.open()) {
    // Listen on port 8000
    server.listen(8000);
    // Point requests at the root directory
    app.get('/', function(req, res) {
        res.sendFile(__dirname + '/public/index.html');
    });
    // Send skeleton data to the browser side
    kinect.on('bodyFrame', function(bodyFrame) {
        io.sockets.emit('bodyFrame', bodyFrame);
    });
    // Start reading skeleton data
    kinect.openBodyReader();
}
```
2. Browser-side
The browser side receives the skeleton data and draws it with canvas; the key code is as follows:
```javascript
var socket = io.connect('/');
var ctx = canvas.getContext('2d');

socket.on('bodyFrame', function(bodyFrame) {
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    var index = 0;
    // Traverse all skeleton data
    bodyFrame.bodies.forEach(function(body) {
        if (body.tracked) {
            for (var jointType in body.joints) {
                var joint = body.joints[jointType];
                ctx.fillStyle = colors[index]; // colors: a palette defined elsewhere
                // If the joint is the spine midpoint, highlight it
                if (jointType == 1) {
                    ctx.fillStyle = colors[2];
                }
                // 512 is assumed here (the factor was lost in the original; the depth frame is 512×424)
                ctx.fillRect(joint.depthX * 512, joint.depthY * 424, 10, 10);
            }
            // Recognize hand gestures
            updateHandState(body.leftHandState, body.joints[7]);
            updateHandState(body.rightHandState, body.joints[11]);
            index++;
        }
    });
});
```
In just a few lines of code we have captured the player's skeleton, and anyone with a basic grounding in JavaScript should find it easy to follow. What is less clear is: what data can we get? How do we get it? What are the joints called? Node-kinect2 has no documentation to tell us any of this.
V. Development Documentation

Node-kinect2 provides no documentation, so I summarize my own test findings below:
1. Data types the server side can provide
```javascript
kinect.on('bodyFrame', function(bodyFrame) { /* ... */ });
```

Besides bodyFrame, what other data types can we listen for?
| Data type | Description |
| --- | --- |
| bodyFrame | Skeleton data |
| infraredFrame | Infrared data |
| longExposureInfraredFrame | Similar to infraredFrame; seemingly a more accurate, optimized version |
| rawDepthFrame | Unprocessed depth data |
| depthFrame | Depth data |
| colorFrame | Color image |
| multiSourceFrame | All of the above combined |
| audio | Audio data (not tested) |
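Each of these frame types pairs with a reader method following the open**Reader pattern listed in part 5 below. As a sketch (mine, not the article's; the exact payload shape of a depth frame is undocumented, so the forwarding below is an assumption), depth data could be relayed the same way the skeleton data is in section IV:

```javascript
// Forward depth frames to the browser, mirroring the bodyFrame relay
kinect.on('depthFrame', function(depthFrame) {
    io.sockets.emit('depthFrame', depthFrame); // payload shape is undocumented
});
kinect.openDepthReader(); // an instance of the open**Reader pattern (part 5)
```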
2. Joint types
```javascript
body.joints[11] // which joints does the joints array include?
```
| jointType | Name | Joint |
| --- | --- | --- |
| 0 | spineBase | Base of the spine |
| 1 | spineMid | Middle of the spine |
| 2 | neck | Neck |
| 3 | head | Head |
| 4 | shoulderLeft | Left shoulder |
| 5 | elbowLeft | Left elbow |
| 6 | wristLeft | Left wrist |
| 7 | handLeft | Left hand |
| 8 | shoulderRight | Right shoulder |
| 9 | elbowRight | Right elbow |
| 10 | wristRight | Right wrist |
| 11 | handRight | Right hand |
| 12 | hipLeft | Left hip |
| 13 | kneeLeft | Left knee |
| 14 | ankleLeft | Left ankle |
| 15 | footLeft | Left foot |
| 16 | hipRight | Right hip |
| 17 | kneeRight | Right knee |
| 18 | ankleRight | Right ankle |
| 19 | footRight | Right foot |
| 20 | spineShoulder | Spine at the base of the neck |
| 21 | handTipLeft | Tip of the left hand (excluding the little and index fingers) |
| 22 | thumbLeft | Left thumb |
| 23 | handTipRight | Tip of the right hand |
| 24 | thumbRight | Right thumb |
3. Hand gestures (detection and recognition are not very accurate; use with caution in scenarios that require high accuracy)
| Value | Name | Meaning |
| --- | --- | --- |
| 0 | unknown | Not recognized |
| 1 | notTracked | Not detected |
| 2 | open | Open palm |
| 3 | closed | Fist |
| 4 | lasso | "Scissors" gesture, with the index and middle fingers held together |
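An updateHandState helper like the one called in section IV could then dispatch on these values. A sketch (the game-action handlers startGame, takePhoto, and triggerUltimate are placeholders of mine, echoing the game flow described in section VI):

```javascript
// React to a hand state (numeric values from the table above)
function updateHandState(handState, joint) {
    switch (handState) {
        case 2: // open palm
            startGame();
            break;
        case 3: // fist
            takePhoto();
            break;
        case 4: // lasso: index and middle fingers together
            triggerUltimate();
            break;
        // 0 (unknown) and 1 (notTracked) are ignored
    }
}
```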
4. Skeleton data structure
```
body [Object] {
    bodyIndex [Number]: index; up to 6 people can be tracked
    joints [Array]: joints, containing coordinate and color information
    leftHandState [Number]: left-hand gesture
    rightHandState [Number]: right-hand gesture
    tracked [Boolean]: whether the body has been captured
    trackingId
}
```
5. Kinect Object
| Method | Description |
| --- | --- |
| on | Listen for data |
| open | Open the Kinect |
| close | Close the Kinect |
| openBodyReader | Start reading skeleton data |
| open**Reader | Analogous methods for reading the other data types |
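For instance, open and close can be paired so that the sensor is released when the server shuts down. A minimal sketch of mine, using only the methods in the table plus a standard Node.js process event:

```javascript
// Release the sensor cleanly when the Node process is interrupted (Ctrl+C)
process.on('SIGINT', function() {
    kinect.close();
    process.exit();
});
```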
VI. Practical Lessons from the Naruto Motion-Sensing Game

Next, I summarize some of the problems we encountered while developing the "Naruto" motion-sensing game for TGC2016.
1. Before the explanation, let's review the game flow:
1.1. A hand gesture triggers the start of the game;
1.2. The player, as the Fourth Hokage, runs left and right to dodge the Nine-Tails' attacks;
1.3. A "hand seal" gesture triggers the Fourth's ultimate move;
1.4. The player scans a QR code to take away their own live photos.
2. Server-side
The game needs the player's skeleton data (movement, gestures) and the color image data (a gesture triggers taking a photo), so we have to send both kinds of data to the client. Note that the color image data is very large and needs to be compressed.
```javascript
var zlib = require('zlib'); // needed for compression (not shown in the original excerpt)

var emitColorFrame = false;

io.sockets.on('connection', function(socket) {
    socket.on('startColorFrame', function(data) {
        emitColorFrame = true;
    });
});

kinect.on('multiSourceFrame', function(frame) {
    // Send the player's skeleton data
    io.sockets.emit('bodyFrame', frame.body);

    // Take the player's photo
    if (emitColorFrame) {
        var compression = 1;
        var origWidth = 1920;
        var origHeight = 1080;
        var origLength = 4 * origWidth * origHeight;
        var compressedWidth = origWidth / compression;
        var compressedHeight = origHeight / compression;
        var resizedLength = 4 * compressedWidth * compressedHeight;
        var resizedBuffer = new Buffer(resizedLength);
        // ...
        // The photo data is too large; compress it to improve transfer performance
        zlib.deflate(resizedBuffer, function(err, result) {
            if (!err) {
                var buffer = result.toString('base64');
                io.sockets.emit('colorFrame', buffer);
            }
        });
        emitColorFrame = false;
    }
});

kinect.openMultiSourceReader({
    frameTypes: Kinect2.FrameType.body | Kinect2.FrameType.color
});
```
3. Client side

The client-side business logic is relatively complex; we pick out the key steps to explain.
3.1. When the user takes a photo, the amount of data to process is fairly large; to keep the page from freezing, we need to use a Web Worker:
```javascript
(function() {
    importScripts('pako.inflate.min.js');

    var imageData;

    function init() {
        addEventListener('message', function(event) {
            switch (event.data.message) {
                case 'setImageData':
                    imageData = event.data.imageData;
                    break;
                case 'processImageData':
                    processImageData(event.data.imageBuffer);
                    break;
            }
        });
    }

    function processImageData(compressedData) {
        // Decompress the base64-encoded, deflated pixel data
        var imageBuffer = pako.inflate(atob(compressedData));
        var newPixelData = new Uint8Array(imageBuffer);
        var imageDataSize = imageData.data.length;
        for (var i = 0; i < imageDataSize; i++) {
            imageData.data[i] = newPixelData[i];
        }
        for (var x = 0; x < 1920; x++) {
            for (var y = 0; y < 1080; y++) {
                var idx = (x + y * 1920) * 4;
                var r = imageData.data[idx + 0];
                var g = imageData.data[idx + 1];
                var b = imageData.data[idx + 2];
                // (per-pixel processing elided in the original excerpt)
            }
        }
        self.postMessage({ message: 'imageReady', imageData: imageData });
    }

    init();
})();
```
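The main-thread side of this worker is not shown in the article. A sketch of how it could be wired up (the file names, the canvas element, and its id are assumptions of mine; the message shapes follow the worker above and the server code in part 2):

```javascript
// Main thread: receive compressed color frames and decode them off the UI thread
var canvas = document.getElementById('photoCanvas'); // assumed element
var ctx = canvas.getContext('2d');

var worker = new Worker('photo-worker.js'); // the worker defined above
worker.postMessage({ message: 'setImageData', imageData: ctx.createImageData(1920, 1080) });

worker.addEventListener('message', function(event) {
    if (event.data.message === 'imageReady') {
        // Draw the decoded 1920×1080 photo
        ctx.putImageData(event.data.imageData, 0, 0);
    }
});

// Ask the server for one color frame, then hand the compressed buffer to the worker
socket.emit('startColorFrame');
socket.on('colorFrame', function(buffer) {
    worker.postMessage({ message: 'processImageData', imageBuffer: buffer });
});
```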
3.2. When the picture goes through a projector, a large rendering area can produce a white screen; you need to turn off the browser's hardware acceleration.
3.3. The venue lighting is dim and other players cause interference, so the player's tracked motion trajectory may jitter; we need to filter out the interfering data (a sample is discarded when a large displacement appears suddenly):
```javascript
var tracks = this.tracks;
var len = tracks.length;
// Data filtering: discard a sample that jumps too far from the previous one
if (tracks[len - 1] !== window.undefined) {
    if (Math.abs(n - tracks[len - 1]) > 0.2) {
        return;
    }
}
this.tracks.push(n);
```
3.4. When the player merely sways slightly, we treat them as being in the standing state:
```javascript
// Keep the latest 5 samples
if (this.tracks.length > 5) {
    this.tracks.shift();
} else {
    return;
}
// Total displacement
var dis = 0;
for (var i = 1; i < this.tracks.length; i++) {
    dis += this.tracks[i] - this.tracks[i - 1];
}
if (Math.abs(dis) < 0.01) {
    this.stand();
} else {
    if (this.tracks[4] > this.tracks[3]) {
        this.turnRight();
    } else {
        this.turnLeft();
    }
    this.run();
}
```
VII. Outlook

1. Using HTML5 to develop Kinect motion-sensing games lowers the technical barrier: front-end engineers can easily build their own motion-sensing games;
2. A large number of existing frameworks can be brought in, such as jQuery, CreateJS, and Three.js (three different rendering approaches);
3. The room for imagination is unlimited: picture motion-sensing games combined with WebAR, with WebAudio, with mobile devices; there is so much worth digging into. Exciting just to think about, isn't it?