Dragon Valley Mobile Game WebVR Technology Sharing


This article is aimed mainly at web front-end engineers and assumes some JavaScript and Three.js experience.
It shares the approach we took to developing WebVR with three.js and the problems we ran into along the way.
Interested readers are welcome to discuss in the thread.

Directory:
I. Project Experience
1.1. Project Introduction
1.2. Feature Introduction
1.3. Game Experience
II. Technical Solutions
2.1. Why use WebVR
2.2. Common WebVR solutions
2.2.1. Mozilla's A-Frame solution
2.2.2. The three.js + webvr-polyfill solution
III. Technical Implementation
3.1. Prerequisite Knowledge
3.2. Implementation Steps
3.3. How It Works
IV. Technical Difficulties
4.1. The program and the user jointly control the camera
4.2. Multi-texture mask maps
4.3. Lens movement
4.4. Adaptive-length 3D text hints
4.5. Unity3D terrain export
4.6. 3ds Max animation export issues
V. Complete Source Code and Components


I. Project Experience
1.1. Project Introduction
1.1.1. Name:
"Re-experiencing Altria": the Dragon Valley mobile game's first warm-up VR game for ChinaJoy 2016


1.1.2. Development background:
Building on the 3D assets of the Dragon Valley mobile game, the demand for a panoramic experience, and the offline premiere at ChinaJoy, we discussed a VR-based offline experience project with the brand team. Because web technology offers good compatibility and development efficiency, we used WebVR to implement the whole experience.

1.1.3. Advantages of using WebVR:
1.1.3.1. Ordinary web front-end engineers can take part in VR application development, lowering the barrier to entry;
1.1.3.2. It works across device types, operating systems, and app carriers;
1.1.3.3. Development is fast, maintenance is easy, adjustments can be made at any time, and the result is easy to share;
1.1.3.4. It runs in the browser with no installation required.

1.2. Feature Introduction
Based on the in-game 3D scenes, characters, and prop models, we developed a VR mini-game with the WebGL framework Three.js. It gave players an offline VR interactive experience at the ChinaJoy Dragon Valley booth and was also used for online marketing. Users without VR glasses can choose normal mode for the same interactive experience.

1.3. Game Experience
If you happen to have VR glasses, choose VR mode; if not, select normal mode.
Note that because this application targets an offline scenario, and our partner Samsung provided the latest S7 phones and Gear VR devices, the project is optimized only for the S7 experience. On other phones it may lag or render the 3D models incorrectly.

You can scan the QR code below or open http://dn.qq.com/act/vr/ to try it:

II. Technical Solutions
2.1. Why is it time to try WebVR?
2.1.1. The timing is gradually ripening, as a few events suggest:
In early 2015, Mozilla added WebVR support to Firefox Nightly;
At the end of 2015, the MozVR team released the open-source framework A-Frame, which lets you create VR web pages with HTML tags alone;
At the end of 2015, Egret3D was released, and its team announced that future versions would support WebVR;
In early 2016, Google and Mozilla jointly drafted the WebVR standard;
In June 2016, Google announced plans to bring the entire Chrome browser into the VR world.
2.1.2. WebVR development costs are lower.
VR hardware developed rapidly in 2015, yet VR content is still thin today. The reason is that VR development costs are too high; WebVR, by relying on WebGL and frameworks like three.js, greatly lowers the barrier for developers entering the VR field.
2.1.3. The advantages of the web itself
As mentioned above: no installation, easy distribution, and fast iteration.

2.2. Common WebVR solutions at this stage:
2.2.1. A-Frame
Introduction: Mozilla's open-source framework, which lets beginners without a WebGL or three.js background build WebVR scenes using custom HTML elements.
Advantages: built on three.js, it lets you create a VR web page quickly with a handful of tags.
Disadvantages: the available components are limited, making more complex projects difficult.
Examples:
2.2.1.1. Create a simple scene.

<!DOCTYPE html>

SOURCE Explanation:
With just a few tags you can build a scene containing lights, a camera, and an object that follows the camera; A-Frame handles the rest. The specific tags and attributes are not explained here, see the A-Frame docs.
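A representative scene of the kind described might look like this (a reconstruction using standard A-Frame primitives, not the original source; the CDN version is illustrative):

```html
<!DOCTYPE html>
<html>
  <head>
    <!-- A-Frame loaded from the official CDN; version is illustrative -->
    <script src="https://aframe.io/releases/0.2.0/aframe.min.js"></script>
  </head>
  <body>
    <a-scene>
      <!-- a simple object in front of the viewer -->
      <a-box position="0 0 -4" color="#4CC3D9"></a-box>
      <!-- sky backdrop -->
      <a-sky color="#ECECEC"></a-sky>
      <!-- camera with a gaze cursor that follows it -->
      <a-camera position="0 1.8 0">
        <a-cursor color="#FF0000"></a-cursor>
      </a-camera>
    </a-scene>
  </body>
</html>
```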

2.2.1.2. Load a model exported from modelling software (such as 3ds Max).

<!DOCTYPE html>

SOURCE Explanation:
This example shows how to add an A-Frame component. Because A-Frame has too few components at this stage, loading custom models requires extending it with your own components, and writing a component requires a three.js foundation.
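A component extension of the kind described might be sketched like this; the component name, schema, and usage are hypothetical, and THREE.JSONLoader is the loader three.js shipped at the time:

```javascript
// Hypothetical A-Frame component that loads a model in three.js JSON format.
AFRAME.registerComponent('json-model', {
  schema: { type: 'string' },  // single-property schema: the model URL
  init: function () {
    var el = this.el;
    new THREE.JSONLoader().load(this.data, function (geometry, materials) {
      // Attach the loaded mesh to this entity's underlying Object3D.
      // (THREE.MultiMaterial was called MeshFaceMaterial in older three.js versions.)
      el.setObject3D('mesh', new THREE.Mesh(geometry, new THREE.MultiMaterial(materials)));
    });
  }
});
```

It would then be used in markup as `<a-entity json-model="models/role.js"></a-entity>` (path hypothetical).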
So A-Frame's starting point is excellent: learn a few simple tags and attributes and you can build a 3D/WebVR scene. In reality, though, it is not yet mature, and with A-Frame's lead designer moving to Google, I abandoned this option early on.

2.2.2. Based on three.js and the WebVR components (in fact, A-Frame is a wrapper around both).
Advantages: complex projects are feasible, and it can be combined with raw WebGL;
Disadvantages: you need to master three.js and understand WebGL, so the learning cost is higher.

This project uses this second option, which the next section describes in detail.

III. Technical Implementation
3.1. Prerequisite Knowledge:
Three.js (proficiency), WebGL (familiarity), JavaScript.
Readers without a Three.js background can start with the three.js example tutorials.

3.2. Implementation Steps:
Simply put, a WebVR application requires the following three steps:
3.2.1. Build the scene

As shown below:
First, load the resources, which include terrain, characters, animations, and auxiliary elements;
then create the elements we need, such as lights, the camera, and the sky;
then complete the main business logic.
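The steps above can be sketched as follows. This is only an outline: THREE is assumed to be a global loaded via a script tag, and all asset paths, colors, and camera parameters are illustrative, not the project's actual values:

```javascript
// 1) Create the basic elements: scene, camera, renderer, lights, sky.
var scene = new THREE.Scene();
var camera = new THREE.PerspectiveCamera(60, window.innerWidth / window.innerHeight, 1, 10000);
var renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

scene.add(new THREE.AmbientLight(0x666666));
var sun = new THREE.DirectionalLight(0xffffff, 1);
sun.position.set(100, 300, 100);
scene.add(sun);

// Sky: a large sphere viewed from the inside.
scene.add(new THREE.Mesh(
  new THREE.SphereGeometry(5000, 32, 16),
  new THREE.MeshBasicMaterial({ color: 0x88bbff, side: THREE.BackSide })
));

// 2) Load resources: terrain, characters, animations (path is a placeholder).
new THREE.JSONLoader().load('models/terrain.js', function (geometry, materials) {
  scene.add(new THREE.Mesh(geometry, new THREE.MultiMaterial(materials)));
});

// 3) Main business logic would follow here (game state, interaction, render loop).
```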

3.2.2. Interaction
That is, the user's action input. These actions include:
position movement, rotation, line-of-sight focus, voice, and potentially the movement of every body joint.
Of course, the hardware currently available to us is limited: gyroscope, compass, and gamepad. Other common auxiliary devices include Leap Motion, Kinect, and so on.
Extra devices mean a higher cost of use, so in this project the action input is limited to:
the user's current orientation, handled by VRControls.js together with webvr-polyfill.js;
and line-of-sight focus, which completes button clicks, attacks, and other actions by detecting collisions with an object that follows the camera.
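Line-of-sight focus detection can be sketched with a raycaster fired along the camera's view direction; the function below is an illustrative sketch (names are mine, not the project's), assuming THREE is available as a global:

```javascript
// Gaze picking: cast a ray from the camera along its view direction and
// return the first interactive object it hits (or null if none).
var raycaster = new THREE.Raycaster();
function pickGazeTarget(camera, interactiveObjects) {
  raycaster.set(
    camera.getWorldPosition(new THREE.Vector3()),   // ray origin: camera position
    camera.getWorldDirection(new THREE.Vector3())   // ray direction: where the user looks
  );
  var hits = raycaster.intersectObjects(interactiveObjects, true);
  return hits.length > 0 ? hits[0].object : null;
}
```

Calling this once per frame and checking how long the same object stays focused is a common way to turn gaze into a "click".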

3.2.3. Split screen

As shown, to give the user a stronger sense of immersion, the screen is usually split into two halves with a certain disparity based on the user's interpupillary distance. There is no need to worry about this part: it is handled by VREffect.js.

3.3. How It Works
With the WebVR components mentioned in the previous section, we can simply call the interfaces they provide, but some readers will be curious about how they work.

This starts with the WebVR API proposal that Mozilla and Google launched in early 2016, the WebVR specification, which defines interfaces designed for VR hardware so that developers can build immersive, comfortable VR experiences. However, since the standard is still at the draft stage, development requires webvr-polyfill, a component that makes the WebVR API interfaces usable without a specific browser.
So we only need to include webvr-polyfill.js plus the two classes VRControls and VREffect in the project, and call them:

vrEffect = new THREE.VREffect(renderer);
vrControls = new THREE.VRControls(camera);

webvr-polyfill implements the WebVR API 1.0 functionality on top of ordinary browsers;
VRControls updates the camera's orientation so that the user is placed in the scene in the first person;
VREffect is responsible for the split screen.
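Putting the pieces together, the per-frame loop looks roughly like this (a sketch assuming `renderer`, `scene`, `camera`, `vrControls`, and `vrEffect` have been created as above):

```javascript
// Per-frame update: poll the device orientation, then render once per eye.
function animate() {
  requestAnimationFrame(animate);
  vrControls.update();              // writes gyroscope/compass orientation into the camera
  vrEffect.render(scene, camera);   // renders the scene twice, with a per-eye offset
}
animate();
```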

IV. Technical Difficulties

4.1. The program and the user jointly control the camera
When the program moves the lens automatically while still allowing the user to look around, an auxiliary container is needed so that lens rotation and movement can be controlled together.

// Camera
camera = new THREE.PerspectiveCamera(60, size.w / size.h, 1, 10000); // 60 is an assumed fov; the original value was lost
camera.position.set(0, 0, 0);
camera.lookAt(new THREE.Vector3(0, 0, 0));
// Auxiliary container ("dolly") for lens movement
dolly = new THREE.Group();
dolly.position.set(10, 0, 0); // original arguments partly lost
dolly.rotation.y = Math.PI / 10;
dolly.add(camera);
scene.add(dolly);

4.2. Multi-texture mask maps


As shown, the terrain is blended from three textures using a mask, which we implement with a custom shader controlled by the mask's R, G, and B channels.
Core code (fragment shader):

fragmentShader: [
  'uniform sampler2D texture1;',
  'uniform sampler2D texture2;',
  'uniform sampler2D texture3;',
  'uniform sampler2D mask;',
  'varying vec2 vUv;',
  'void main() {',
  '  vec4 colorTexture1 = texture2D(texture1, vUv * 40.0);',
  '  vec4 colorTexture2 = texture2D(texture2, vUv * 60.0);',
  '  vec4 colorTexture3 = texture2D(texture3, vUv * 20.0);',
  '  vec4 colorMask = texture2D(mask, vUv);',
  '  vec3 outgoingLight = vec3(colorTexture1.rgb * colorMask.r + colorTexture2.rgb * colorMask.g + colorTexture3.rgb * colorMask.b) * 0.6;',
  '  gl_FragColor = vec4(outgoingLight, 1.0);',
  '}'
].join("\n")

Complete code (adding three.js lights and fog):

// Composite material textures
var map1 = texLoader.load('cross-domain/skins/foor_stone02.png');
var map2 = texLoader.load('cross-domain/skins/green_wet09.png');
var map3 = texLoader.load('cross-domain/skins/stone_dry02.png');

// Custom composite-mask shader
THREE.FogShader = {
  uniforms: lib.extend([
    THREE.UniformsLib["fog"],
    THREE.UniformsLib["lights"],
    THREE.UniformsLib["shadowmap"],
    {
      'texture1': { type: "t", value: map1 },
      'texture2': { type: "t", value: map2 },
      'texture3': { type: "t", value: map3 },
      'mask': { type: "t", value: texLoader.load('cross-domain/skins/mask.png') }
    }
  ]),
  vertexShader: [
    "varying vec2 vUv;",
    "varying vec3 vNormal;",
    "varying vec3 vViewPosition;",
    THREE.ShaderChunk["skinning_pars_vertex"],
    THREE.ShaderChunk["shadowmap_pars_vertex"],
    THREE.ShaderChunk["logdepthbuf_pars_vertex"],
    "void main() {",
    THREE.ShaderChunk["skinbase_vertex"],
    THREE.ShaderChunk["skinnormal_vertex"],
    "  vec4 mvPosition = modelViewMatrix * vec4(position, 1.0);",
    "  vUv = uv;",
    "  vNormal = normalize(normalMatrix * normal);",
    "  vViewPosition = -mvPosition.xyz;",
    "  gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);",
    THREE.ShaderChunk["logdepthbuf_vertex"],
    "}"
  ].join('\n'),
  fragmentShader: [
    'uniform sampler2D texture1;',
    'uniform sampler2D texture2;',
    'uniform sampler2D texture3;',
    'uniform sampler2D mask;',
    'varying vec2 vUv;',
    'varying vec3 vNormal;',
    'varying vec3 vViewPosition;',
    THREE.ShaderChunk["common"],
    THREE.ShaderChunk["shadowmap_pars_fragment"],
    THREE.ShaderChunk["fog_pars_fragment"],
    THREE.ShaderChunk["logdepthbuf_pars_fragment"],
    'void main() {',
    THREE.ShaderChunk["logdepthbuf_fragment"],
    THREE.ShaderChunk["alphatest_fragment"],
    '  vec4 colorTexture1 = texture2D(texture1, vUv * 40.0);',
    '  vec4 colorTexture2 = texture2D(texture2, vUv * 60.0);',
    '  vec4 colorTexture3 = texture2D(texture3, vUv * 20.0);',
    '  vec4 colorMask = texture2D(mask, vUv);',
    '  vec3 normal = normalize(vNormal);',
    '  vec3 lightDir = normalize(vViewPosition);',
    '  float dotProduct = max(dot(normal, lightDir), 0.0) + 0.2;',
    '  vec3 outgoingLight = vec3(colorTexture1.rgb * colorMask.r + colorTexture2.rgb * colorMask.g + colorTexture3.rgb * colorMask.b) * 0.6;',
    THREE.ShaderChunk["shadowmap_fragment"],
    THREE.ShaderChunk["linear_to_gamma_fragment"],
    THREE.ShaderChunk["fog_fragment"],
    '  gl_FragColor = vec4(outgoingLight, 1.0);',
    '}'
  ].join("\n")
};

THREE.FogShader.uniforms.texture1.value.wrapS = THREE.FogShader.uniforms.texture1.value.wrapT = THREE.RepeatWrapping;
THREE.FogShader.uniforms.texture2.value.wrapS = THREE.FogShader.uniforms.texture2.value.wrapT = THREE.RepeatWrapping;
THREE.FogShader.uniforms.texture3.value.wrapS = THREE.FogShader.uniforms.texture3.value.wrapT = THREE.RepeatWrapping;

var material = new THREE.ShaderMaterial({
  uniforms: THREE.FogShader.uniforms,
  vertexShader: THREE.FogShader.vertexShader,
  fragmentShader: THREE.FogShader.fragmentShader,
  fog: true
});

4.3. Lens movement (depends on the Tween class)
Function:

cameraTracker: function (paths) {
  var tweens = [];
  for (var i = 0; i < paths.length; i++) {
    (function (i) {
      var tween = new TWEEN.Tween({ pos: 0 }).to({ pos: 1 }, paths[i].duration || 1000); // fallback duration; the original constant was lost
      tween.easing(paths[i].easing || TWEEN.Easing.Linear.None);
      tween.onStart(function () {
        var oriPos = dolly.position;
        var oriRotation = dolly.rotation;
        this.oriPos = { x: oriPos.x, y: oriPos.y, z: oriPos.z };
        this.oriRotation = { x: oriRotation.x, y: oriRotation.y, z: oriRotation.z };
      });
      tween.onUpdate(paths[i].onUpdate || function () {
        if (paths[i].pos) {
          dolly.position.x = this.oriPos.x + this.pos * (paths[i].pos.x - this.oriPos.x);
          dolly.position.y = this.oriPos.y + this.pos * (paths[i].pos.y - this.oriPos.y);
          dolly.position.z = this.oriPos.z + this.pos * (paths[i].pos.z - this.oriPos.z);
        }
        if (paths[i].rotation) {
          dolly.rotation.x = this.oriRotation.x + this.pos * (paths[i].rotation.x - this.oriRotation.x);
          dolly.rotation.y = this.oriRotation.y + this.pos * (paths[i].rotation.y - this.oriRotation.y);
          dolly.rotation.z = this.oriRotation.z + this.pos * (paths[i].rotation.z - this.oriRotation.z);
        }
      });
      tween.onComplete(function () {
        paths[i].fn && paths[i].fn();
        var fn = tweens.shift();
        fn && fn.start();
      });
      tweens.push(tween);
    })(i);
  }
  tweens.shift().start();
}

Call:

lib.cameraTracker([{
  'pos': { x: -45, y: 5, z: -38 },
  'rotation': { x: 0, y: -1.8, z: 0 },
  'easing': TWEEN.Easing.Cubic.Out,
  'duration': 4000
}]);

4.4. Adaptive-length 3D text hints
Based on the length of the text, generate a canvas and use it as the map of a Sprite object.

hint = function (text, type, posY, fadeTime) {
  var chinese = text.replace(/[^\u4e00-\u9fa5]/g, '');
  var dbc = chinese.length;
  var sbc = text.length - dbc;
  var length = dbc * 2 + sbc;
  var fontSize = 40;
  var textWidth = fontSize * length / 2;
  posY = posY || 0.3;
  type = type || 1;
  fadeTime = fadeTime === window.undefined ? 500 : fadeTime;
  if (text === 'sucess' || text === 'fail') { text = ''; }
  var canvas = document.createElement("canvas");
  var width = 1024, height = 512;
  canvas.width = width;
  canvas.height = height;
  var context = canvas.getContext('2d');
  var imageObj = document.querySelector('#img-hint-' + type);
  context.drawImage(imageObj, width / 2 - imageObj.width / 2, height / 2 - imageObj.height / 2);
  context.font = 'bold ' + fontSize + 'px SimHei';
  context.fillStyle = "rgba(255,255,255,1)";
  context.fillText(text, width / 2 - textWidth / 2, height / 2 + 15);
  var texture = new THREE.Texture(canvas);
  texture.needsUpdate = true;
  var mesh;
  var material = new THREE.SpriteMaterial({ map: texture, transparent: true, opacity: 0 });
  mesh = new THREE.Sprite(material);
  mesh.scale.set(width / 400, height / 400, 1);
  mesh.position.set(0, posY, -3);
  camera.add(mesh);
  var tweenIn = new TWEEN.Tween({ pos: 0 }).to({ pos: 1 }, fadeTime);
  tweenIn.onUpdate(function () { material.opacity = this.pos; });
  if (fadeTime === 0) { material.opacity = 1; } else { tweenIn.start(); }
  var tweenOut = new TWEEN.Tween({ pos: 1 }).to({ pos: 0 }, fadeTime);
  tweenOut.onUpdate(function () { material.opacity = this.pos; });
  tweenOut.onComplete(function () { camera.remove(mesh); });
  tweenOut.fadeOut = tweenOut.start;
  tweenOut.remove = function () { camera.remove(mesh); };
  return tweenOut;
};
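The adaptive-width calculation at the top of the function can be isolated into a small helper: a full-width (CJK) character counts as two half-width characters, and the pixel width is estimated from the font size. A standalone sketch (the function name is mine, not the project's):

```javascript
// Estimate the rendered width of mixed Chinese/Latin text at a given font size:
// a CJK character is roughly one em wide, a half-width character roughly half an em.
function estimateTextWidth(text, fontSize) {
  var cjkCount = text.replace(/[^\u4e00-\u9fa5]/g, '').length; // full-width characters
  var halfCount = text.length - cjkCount;                      // half-width characters
  var units = cjkCount * 2 + halfCount;                        // width in half-em units
  return fontSize * units / 2;
}
```

For a 40px font, two CJK characters and four Latin characters both come out to 80px, so the hint background can be sized before the canvas is drawn.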

4.5. Unity terrain export
4.5.1. First export the Unity terrain as OBJ.

4.5.2. Then import it into 3ds Max and use ThreeJSExporter.ms to export to the JS format.

4.6. 3ds Max animation export issues
4.6.1. Animation export errors
Typically the object is an Editable Poly and needs to be converted into a mesh object.

Steps:
4.6.1.1. Select the object, right-click, and convert it to an Editable Mesh;
4.6.1.2. Select the Skin modifier and re-skin;
4.6.1.3. Click Bones > Add in the Skin modifier to add the original bones.
4.6.2. Scrambled animation exports
It is easy to assume this is a weight problem, but in my experience across several project animations, it mostly comes from how the bones were added. In 3ds Max and Unity, omitting a root node often does not affect animation playback, but exporting to three.js requires adding one. If the problem persists, look carefully at which bone is causing it: extra bones and missing bones can both scramble the animation.

V. Complete Source Code and Components
Click to download:
main.js - full source code
Tween.min.js - animation class
OrbitControls.js - view controller to rotate, move, and zoom the scene, for easy debugging
audio.min.js - mobile audio component that works around audio autoplay restrictions
The remaining VR-related components are described above.

