Using Kinect2 as an input device for Oculus gaming applications

Note: This article was written in August 2015. The VR game demo it mentions has since been completed, so I am taking the chance to share some of the experience from the early research, in the hope that it helps you.

Background

When we first got started with the Oculus, we downloaded a lot of demos from the Internet to try, but the control experience was mostly poor, especially in FPS titles. This made us realize that for VR games the biggest challenge is not the change in how things are displayed, but the change in how we interact. In an immersive environment, the most natural interaction is the ideal interaction, and the basic requirement is being able to use your hands to interact with the virtual environment in VR. That immediately rules out the mouse and the gamepad, so we looked at the input devices on the market and evaluated them one by one:
- Wiimote: can only detect motion and orientation; it cannot accurately track the position of the hands
- Leap Motion: mounted on the Oculus it can simulate both hands, but the recognition range is too small and the skeletal tracking is unstable, which seriously hurts the experience
- Razer Hydra: provides the spatial position and rotation of both hands, and the buttons on the two controllers can trigger state switches, so it looks like a good solution. The drawback is the relatively large positional error, possibly caused by magnetic interference
- RealSense: similar to Leap Motion but with lower accuracy, so the recognized bone positions jitter badly and cannot be used to drive a two-handed skeleton

After all those experiments it seemed that, apart from expensive motion-capture rigs, there was no really satisfactory VR input device on the market. We finally turned our eyes to the Kinect2 that came with the Xbox One dev kit sitting next to us.
Our team developed an Xbox One motion-sensing game last year and accumulated some experience with Kinect2 body tracking, so we connected the Kinect2 to a PC to see whether somatosensory input could be combined with the Oculus VR display.

Requirements analysis

As mentioned above, to get as close as possible to natural interaction, we need to hit these key points:

    • Show your hands in the virtual world, ideally together with the arms and torso.
    • The position (including rotation) of the hands in virtual space must match the relative position of the corresponding body parts in real space.
    • Both hands must be able to affect objects in the virtual world.
    • Simple gestures such as grab, push, pull, press, and touch must be recognizable.

So what data and functions does Kinect2 provide?

    • Color / depth / body index / IR data at 30 frames per second
    • Body joint positions and orientations (not very stable; they jitter)
    • Three-state recognition for each hand, which happens to correspond to rock/paper/scissors (with a fairly high false-recognition rate)
    • Other functions, such as voice, are not usable for now.

Tracking each finger individually, the way Leap Motion does, is out of the question, so as a second-best option we avoid small elements in the interaction design and use the whole palm as the unit of interaction.
The joint positions of the two hands give us their location in space, and for the three hand states we can draw on our UI interaction experience from the Xbox One motion game and use gestures such as grab and drag.
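
As a rough illustration of how those three hand states come out of the Kinect SDK, here is a minimal C++ polling sketch (error handling trimmed; OnHandState is a hypothetical hook into our interaction layer, not part of the SDK):

```cpp
// Minimal sketch: poll the Kinect2 body stream and read the three-state hand
// recognition (open / closed / lasso, i.e. "rock, paper, scissors").
#include <Kinect.h>

// Hypothetical hook into the interaction layer (grab / drag handling).
void OnHandState(HandState left, HandState right);

void PollHandStates(IBodyFrameReader* reader)
{
    IBodyFrame* frame = nullptr;
    if (FAILED(reader->AcquireLatestFrame(&frame)) || !frame)
        return;                                   // no new frame this tick

    IBody* bodies[BODY_COUNT] = { nullptr };
    if (SUCCEEDED(frame->GetAndRefreshBodyData(BODY_COUNT, bodies)))
    {
        for (IBody* body : bodies)
        {
            BOOLEAN tracked = FALSE;
            if (!body || FAILED(body->get_IsTracked(&tracked)) || !tracked)
                continue;

            HandState left = HandState_Unknown, right = HandState_Unknown;
            body->get_HandLeftState(&left);
            body->get_HandRightState(&right);

            // HandState_Open / HandState_Closed / HandState_Lasso map to
            // paper / rock / scissors; feed them into the grab & drag logic.
            OnHandState(left, right);
        }
    }

    for (IBody* body : bodies) { if (body) body->Release(); }
    frame->Release();
}
```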

Implementation details: drawing the hands and limbs

Because the Kinect API already provides transform information for the human skeleton, the natural idea is to bind it to a skinned model in the game.

In the end we did get this working in UE4, but the experience was very unsatisfying. Why?

    • The skeletal transforms coming from the Kinect jitter constantly; without any processing the avatar looks like it is convulsing
    • If you filter the bone transforms for stability, you add response latency, so the virtual limbs always lag half a beat behind the real ones (see the smoothing sketch after this list)
    • People of different body shapes map poorly onto the same model; imagine how a tall person feels when they see their own arms become shorter in the Oculus. This disturbs spatial judgments that are based on intuition and experience.
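
To make the second point concrete, here is a small generic smoothing sketch, not the exact filter we used, just an illustration of the stability-versus-latency tradeoff (Alpha is a hypothetical tuning parameter between 0 and 1):

```cpp
// Illustration of the stability/latency tradeoff when filtering joint positions.
// A small Alpha suppresses jitter but makes the virtual limb lag behind the
// real one; a large Alpha follows quickly but lets the jitter through.
struct FVector3 { float X, Y, Z; };

FVector3 SmoothJoint(const FVector3& Previous, const FVector3& Raw, float Alpha)
{
    // Simple exponential smoothing: out = previous + Alpha * (raw - previous)
    return {
        Previous.X + Alpha * (Raw.X - Previous.X),
        Previous.Y + Alpha * (Raw.Y - Previous.Y),
        Previous.Z + Alpha * (Raw.Z - Previous.Z)
    };
}
```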

Is there any other way to draw the hands and limbs? While debugging with Kinect Studio we noticed that the depth rendering in the 3D view is quite interesting:

This is actually the discrete points of the depth data (the depth buffer) mapped into 3D space, which I call a "point cloud".

So the idea came up on a whim: use the point cloud to represent your own body in virtual space, so that the movements of the hands and fingers map across accurately. Is this feasible?

    • Latency: the depth buffer is raw data captured by the hardware, with no intermediate processing and therefore no processing delay, so the response speed is ideal and can be kept around 70 ms (of which 60 ms is fixed by the Kinect2 hardware)
    • Data volume: the depth buffer resolution is 512x424, which means more than 210,000 vertices to map. That is a bit much, but still acceptable; if necessary we can take every other point, depending on how good the final result needs to look
    • Point cloud rendering in UE4: every frame we compute the corresponding vertex buffer from the depth buffer and build a dynamic mesh to draw, with the primitive type set to point list

Then, using the body index data to discard the points belonging to the surrounding environment and to other people, we can map just ourselves into the UE4 3D scene. A simple material is added, and the vertex normals are computed with the slope-based method commonly used for terrain.
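
A rough sketch of that mapping and filtering step, assuming the depth and body index frames have already been copied out of their Kinect readers (BuildPointCloud and the buffer parameters are our own names, not part of the SDK):

```cpp
// Sketch: turn one depth frame into a point cloud, keeping only our own body.
#include <Kinect.h>
#include <vector>

std::vector<CameraSpacePoint> BuildPointCloud(
    ICoordinateMapper* mapper,
    const UINT16* depthBuffer,          // 512 x 424 depth values (mm)
    const BYTE* bodyIndexBuffer,        // 512 x 424, 0..5 = body, 255 = none
    BYTE trackedBodyIndex)              // the player we want to keep
{
    const UINT pointCount = 512 * 424;

    // Map every depth pixel to a CameraSpacePoint (meters, Kinect origin).
    std::vector<CameraSpacePoint> cameraPoints(pointCount);
    mapper->MapDepthFrameToCameraSpace(
        pointCount, depthBuffer, pointCount, cameraPoints.data());

    // Reject the environment and other people via the body index frame.
    std::vector<CameraSpacePoint> cloud;
    cloud.reserve(pointCount);
    for (UINT i = 0; i < pointCount; ++i)
    {
        if (bodyIndexBuffer[i] == trackedBodyIndex)
            cloud.push_back(cameraPoints[i]);
    }
    return cloud;   // these vertices feed the dynamic mesh / point list
}
```

The resulting CameraSpacePoints are then converted into UE4 coordinates (see the next section) and written into the dynamic mesh's vertex buffer.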

Point Cloud coordinate system alignment

Now that we have a point cloud of the body, how do we "install" it under the head in the virtual world?
Kinect, Oculus and UE4 effectively use three different coordinate systems, so some coordinate mapping and transformation is needed to place the point cloud where the body should be from the Oculus point of view.

    • UE4 already integrates Oculus support by default, so these two coordinate systems do not give us any trouble: by default the Oculus position is the UE4 camera location plus the position-tracking offset
    • The Oculus head position is reported relative to the camera sensor of the Oculus DK2, which is the origin of the Oculus virtual coordinate system; UE4 then applies a transform so that the initial Oculus position maps to the camera's location
    • When the Kinect depth buffer is mapped to vertices, they are all CameraSpacePoints, which means the Kinect device itself is the origin. Note that Kinect coordinates (meters) must be converted to UE4 coordinates (centimeters), roughly UE4Vector = FVector(-v.Z * 100, v.X * 100, v.Y * 100)

Once we find a reference point that is fixed in both the Oculus and the Kinect coordinate systems, we can align them so that the two coordinate systems coincide. The method is simply to mount the Kinect right next to the Oculus camera sensor:

The world position of the sensor can be computed from the camera component's position and the Oculus camera origin; the point cloud is then aligned to this position, and a saved configuration offset can be applied for fine correction.
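
Putting the conversion and the alignment together, a minimal sketch might look like this (KinectPointToWorld, SensorWorldLocation and ConfigOffset are our own names; FVector comes from UE4, CameraSpacePoint from the Kinect SDK):

```cpp
// Sketch: convert a Kinect CameraSpacePoint (meters, Kinect origin) into a
// UE4 world position, anchored at the Kinect/Oculus camera sensor location.
#include <Kinect.h>
// FVector is assumed to come from the UE4 Core headers.

FVector KinectPointToWorld(const CameraSpacePoint& P,
                           const FVector& SensorWorldLocation, // camera component + Oculus camera origin
                           const FVector& ConfigOffset)        // saved correction offset
{
    // Axis swap + meters-to-centimeters, matching the formula in the text.
    const FVector Local(-P.Z * 100.0f, P.X * 100.0f, P.Y * 100.0f);
    return SensorWorldLocation + Local + ConfigOffset;
}
```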

Interaction design

The whole interaction was inspired by the holographic projection interfaces in the Iron Man movies; our goal was to turn those scenes from a sci-fi film into reality.

With holographic projection as the guiding art direction, and combining the features we use most in everyday life, we implemented five interactive controls:

    • Picture viewer: only a page-turn action

    • Video player: supports play/pause; enlarged, it feels like watching a film in a theater, which is currently a common presentation for VR video applications

    • Web browser: we integrated CEF, which is equivalent to embedding a Chrome that supports HTML5 games. In the video below we chose an HTML5 word-guessing game, and clicks on the page are supported

    • Flying game: this one is driven by body motion; although it is a 2D plane game, the debris from explosions falls onto the floor, which looks quite good

    • Model viewer: mainly used to demonstrate inspecting a three-dimensional object in 3D space, which is the highlight of VR interaction: you can examine every detail of an object from any angle and at any scale

Each control also gets a unified tooltip pop-up animation; this kind of in-3D-space information display is a common AR scenario as well.

To better demonstrate the functionality of each control, we divide the entire holographic interaction scene into two "layers", near and far:

    • Far layer: only one control can be active here at a time; you can grab it to drag and zoom, and use each control's specific functions, such as clicking on web pages or gesture control in the mini-game.
    • Near layer (close-up): holds all of the function controls, like icons on a taskbar; you can use a gesture to "throw" one out to the far layer, which is the equivalent of maximizing/activating a window.

When you operate a control in the far layer, a lightning effect is added to your hands, simulating an iPad-like remote-control experience, a bit like casting magic.

PS: so as not to interfere with the first-person VR display, the head of the point cloud has been "chopped off".

Interaction in the near layer is a "touch" operation driven by the hands: we take the positions of the two hand joints from the Kinect and attach two collision volumes to them to detect overlaps with the controls.
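
A minimal UE4 sketch of that setup, assuming a recent UE4 version (the class and function names are ours; the hand joint positions are assumed to already be converted to world space with the mapping from the previous section):

```cpp
// Sketch: two sphere collisions follow the Kinect hand joints and report
// overlaps with the close-up controls.
#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "Components/SphereComponent.h"
#include "VRHandsActor.generated.h"

UCLASS()
class AVRHandsActor : public AActor
{
    GENERATED_BODY()
public:
    AVRHandsActor()
    {
        Root = CreateDefaultSubobject<USceneComponent>(TEXT("Root"));
        SetRootComponent(Root);
        LeftHand  = CreateDefaultSubobject<USphereComponent>(TEXT("LeftHand"));
        RightHand = CreateDefaultSubobject<USphereComponent>(TEXT("RightHand"));
        LeftHand->SetupAttachment(Root);
        RightHand->SetupAttachment(Root);
        LeftHand->InitSphereRadius(8.0f);   // roughly palm-sized, in cm
        RightHand->InitSphereRadius(8.0f);
    }

    virtual void BeginPlay() override
    {
        Super::BeginPlay();
        LeftHand->OnComponentBeginOverlap.AddDynamic(this, &AVRHandsActor::OnHandTouch);
        RightHand->OnComponentBeginOverlap.AddDynamic(this, &AVRHandsActor::OnHandTouch);
    }

    // Called each frame with the hand joint positions already converted to
    // UE4 world space (see KinectPointToWorld above).
    void UpdateFromKinect(const FVector& LeftWorld, const FVector& RightWorld)
    {
        LeftHand->SetWorldLocation(LeftWorld);
        RightHand->SetWorldLocation(RightWorld);
    }

    UFUNCTION()
    void OnHandTouch(UPrimitiveComponent* OverlappedComp, AActor* OtherActor,
                     UPrimitiveComponent* OtherComp, int32 OtherBodyIndex,
                     bool bFromSweep, const FHitResult& SweepResult)
    {
        // OtherActor is one of the near-layer controls; switch it into its
        // "touched" state here.
    }

    UPROPERTY() USceneComponent* Root;
    UPROPERTY() USphereComponent* LeftHand;
    UPROPERTY() USphereComponent* RightHand;
};
```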

Results

Click to play the video (omitted here)

Optimization

Computing the vertex coordinates for the vertex-buffer-based point cloud is very CPU-intensive. To save that time, the vertex computation can be moved to the GPU: the mesh is built from a static vertex buffer plus a dynamic vertex texture. As a bonus, the point cloud is then no longer limited to point rendering and can be given a particle-like look.
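
A sketch of the dynamic-vertex-texture half of that idea in UE4, assuming a recent engine version (the helper names are ours; the static mesh side, one vertex per depth pixel with its UV encoding the pixel, and the material that reads this texture in World Position Offset are assumed to exist already):

```cpp
// Sketch: the mesh is built once, and every frame only a 16-bit texture
// holding the raw Kinect depth is refreshed; the material reconstructs each
// vertex position on the GPU by sampling this texture per vertex.
#include "CoreMinimal.h"
#include "Engine/Texture2D.h"

static const int32 DepthW = 512;
static const int32 DepthH = 424;

UTexture2D* CreateDepthTexture()
{
    // 16-bit single-channel texture matching the Kinect depth resolution.
    UTexture2D* Tex = UTexture2D::CreateTransient(DepthW, DepthH, PF_G16);
    Tex->SRGB = false;
    Tex->Filter = TF_Nearest;      // one texel per vertex, no filtering wanted
    Tex->UpdateResource();
    return Tex;
}

void UpdateDepthTexture(UTexture2D* Tex, const uint16* DepthBuffer)
{
    // Simplest possible update for clarity: copy into mip 0 and recreate the
    // resource. A production version would update only the dirty region on
    // the render thread instead of recreating the resource every frame.
    void* Data = Tex->PlatformData->Mips[0].BulkData.Lock(LOCK_READ_WRITE);
    FMemory::Memcpy(Data, DepthBuffer, DepthW * DepthH * sizeof(uint16));
    Tex->PlatformData->Mips[0].BulkData.Unlock();
    Tex->UpdateResource();
}
```

With this split, the CPU only copies 512x424x2 bytes per frame, while the per-point coordinate math runs entirely in the vertex shader.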

Summary

During this VR pre-research we also noticed that the control devices announced for the three major VR headsets (Oculus, SteamVR, PS VR) have converged: a controller in each hand, each providing position and rotation, combined with traditional buttons and sticks. This may not be the most natural way of interacting, but it is currently a good balance between cost and capability, and for subsequent VR game development the controls can be designed uniformly around these devices.

With the experience gained from this VR interaction demo, we have carried this interaction model into a VR game demo that is currently in development; it is the best control experience we can achieve in VR before Oculus Touch ships. Personally, I think that changing only the display does not change games all that much; it is the two-handed controllers that make VR gameplay more creative and that will fundamentally drive new game types and new experiences.
