How to mark on a WebGL panorama

WebGL can be used to render 3D panoramas, such as a panorama of the Forbidden City. But sometimes we are not just presenting a panorama; we also need to add interaction. The Forbidden City, for example, can be divided into many areas, such as the Outer Court, the West Road, and the East Road. We need to place some markers on the 3D view to indicate these smaller areas, and when a marker is clicked, the view switches to the panorama of the corresponding area. A small demo (source linked at the end of this article) implements exactly this feature.

How do we implement such a feature? This article walks through the code behind the interaction described above. First, let me ask a few questions.

1). How do we get the 3D coordinates of a location in a 3D panorama?

2). How do we convert the 3D coordinates of a location into 2D coordinates on the screen?

3). How do we keep the 2D screen coordinates in sync with the 3D coordinates when the panorama is rotated?

4). How do we confirm that a marker point is inside the camera's viewable area?

Once these questions are answered, marking the panorama becomes straightforward. Next we will implement the feature by working through each question in turn.

How do we get the 3D coordinates of a location in a 3D panorama?

Markers for locations in a scenic spot are usually placed by hand, because these locations are irregular and cannot be computed automatically. So we need a way to pick a location on the 3D view manually. The interaction is done with the mouse, but the mouse gives us 2D coordinates, which must be converted into the corresponding 3D coordinates. Three.js provides the Raycaster object, which makes it easy to get the 3D coordinates corresponding to a 2D point. First, declare a few objects:

var raycasterCubeMesh;
var raycaster = new THREE.Raycaster();
var mouseVector = new THREE.Vector2();
var activePoint = null; // the currently picked 3D point (set in onMouseMove below)
var tags = [];

Next, register a mousemove event on the document to obtain the 3D coordinates under the mouse in real time. The handler looks like this:

function onMouseMove(event) {
    mouseVector.x = 2 * (event.clientX / window.innerWidth) - 1;
    mouseVector.y = -2 * (event.clientY / window.innerHeight) + 1;

    raycaster.setFromCamera(mouseVector.clone(), camera);
    var intersects = raycaster.intersectObjects([cubeMesh]);

    if (raycasterCubeMesh) {
        scene.remove(raycasterCubeMesh);
    }
    activePoint = null;

    if (intersects.length > 0) {
        var mat = new THREE.MeshBasicMaterial({ color: 0xff0000, transparent: true, opacity: 0.5 });
        var sphereGeometry = new THREE.SphereGeometry(100);
        raycasterCubeMesh = new THREE.Mesh(sphereGeometry, mat);
        raycasterCubeMesh.position.copy(intersects[0].point);
        scene.add(raycasterCubeMesh);
        activePoint = intersects[0].point;
    }
}

Most of this code was already covered in "How to implement object interaction"; only the parts relevant to the current feature are described here. intersects contains the collection of 3D objects picked at the current mouse position. If its length is greater than 0, a 3D object has been picked. Since we pass only the cubeMesh object (that is, the panorama) to intersectObjects, intersects contains exactly one entry inside this branch. intersects[0].point is the point on the surface of cubeMesh hit by the ray cast from the mouse, which is exactly the 3D marker point we need, so we store it in activePoint. raycasterCubeMesh is a small sphere centered on the intersection point, so the sphere follows the mouse as it moves.

Moving the mouse gives us 3D coordinates, but how do we confirm that a particular coordinate is the one we want? We also register a mousedown event on the document and confirm with a right click. The handler is as follows:

function onMouseDown(event) {
    if (event.buttons === 2 && activePoint) {
        var tagMesh = new THREE.Mesh(
            new THREE.SphereGeometry(1),
            new THREE.MeshBasicMaterial({ color: 0xffff00 })
        );
        tagMesh.position.copy(activePoint);
        tagObject.add(tagMesh);

        var tagElement = document.createElement("div");
        tagElement.innerHTML = "<span>Mark " + (tags.length + 1) + "</span>";
        tagElement.style.background = "#00ff00";
        tagElement.style.position = "absolute";
        tagElement.addEventListener("click", function (evt) {
            alert(tagElement.innerText);
        });

        tagMesh.updateTag = function () {
            if (isOffScreen(tagMesh, camera)) {
                tagElement.style.display = "none";
            } else {
                tagElement.style.display = "block";
                var position = toScreenPosition(tagMesh, camera);
                tagElement.style.left = position.x + "px";
                tagElement.style.top = position.y + "px";
            }
        };
        tagMesh.updateTag();

        document.getElementById("webgl-output").appendChild(tagElement);
        tags.push(tagMesh);
    }
}

The first line contains an if test: the code below runs only when the right mouse button is pressed and activePoint is not empty. First we create a small sphere tagMesh, set its position to activePoint, and add it to tagObject. tagObject is an Object3D used to hold all tagMesh instances so they can be managed together.
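
tagObject itself is not created in the snippets above; a minimal sketch of the assumed setup could look like this:

// Assumed setup (not shown in the original snippets): one Object3D that
// holds every tag sphere so they can be added, hidden, or removed together.
var tagObject = new THREE.Object3D();
scene.add(tagObject);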

The code then creates a tagElement element, sets its style and content, and appends it to the WebGL container. tagMesh is given a custom updateTag function, which calls two particularly important functions: toScreenPosition and isOffScreen. We will skip over updateTag for now and answer the remaining questions by introducing these two functions.
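
For completeness, the two handlers above still need to be attached to the document; a minimal sketch of that wiring (assumed, not shown in the original) might be:

// Assumed wiring for the handlers above.
document.addEventListener("mousemove", onMouseMove, false);
document.addEventListener("mousedown", onMouseDown, false);
// Suppress the browser context menu so the right click can be used to confirm a marker.
document.addEventListener("contextmenu", function (evt) {
    evt.preventDefault();
});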

How do we convert the 3D coordinates of a location into 2D coordinates on the screen?

Readers familiar with GIS will know what projection means: the process of mapping our 3D coordinates onto 2D coordinates is called projection. toScreenPosition performs exactly this conversion using the camera's projection. The function code is as follows:

function toScreenPosition(obj, camera) {
    var vector = new THREE.Vector3();
    var widthHalf = 0.5 * renderer.context.canvas.width;
    var heightHalf = 0.5 * renderer.context.canvas.height;

    obj.updateMatrixWorld();
    vector.setFromMatrixPosition(obj.matrixWorld);
    vector.project(camera);

    vector.x = (vector.x * widthHalf) + widthHalf;
    vector.y = -(vector.y * heightHalf) + heightHalf;

    return { x: vector.x, y: vector.y };
}

widthHalf and heightHalf are half the width and height of the canvas. The code first updates the world matrix of obj, then sets vector to obj's world position and calls vector.project(camera) to project it into 2D. At this point, however, the 2D coordinates are normalized device coordinates with the origin in the middle of the screen; they still need to be converted to screen coordinates, whose origin is in the upper-left corner. The function finally returns the 2D screen coordinates we need.
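
As a concrete (hypothetical) example of this conversion: on a 1000 × 800 canvas, a point that projects to normalized device coordinates (0.5, -0.5) maps to screen coordinates (0.5 × 500 + 500, -(-0.5 × 400) + 400) = (750, 600). In code:

// Hypothetical usage: tagMesh comes from the onMouseDown handler above.
var screenPos = toScreenPosition(tagMesh, camera);
console.log(screenPos.x + "px", screenPos.y + "px"); // e.g. "750px 600px" on a 1000 x 800 canvas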

How do we keep the 2D screen coordinates in sync with the 3D coordinates when the panorama is rotated?

We skipped over the updateTag function of tagMesh earlier; let's look at it now:

tagMesh.updateTag = function () {
    if (isOffScreen(tagMesh, camera)) {
        tagElement.style.display = "none";
    } else {
        tagElement.style.display = "block";
        var position = toScreenPosition(tagMesh, camera);
        tagElement.style.left = position.x + "px";
        tagElement.style.top = position.y + "px";
    }
};

Look first at the else branch: it sets the element's display to block to make it visible, then calls toScreenPosition(tagMesh, camera) to get the screen coordinates of the tagMesh 3D object, which are assigned directly to the left and top styles of tagElement. This is only the first step: if the panorama is rotated, tagElement and tagMesh would no longer line up, so updateTag is also called on every render to refresh the 2D coordinates.

function render() {
    controls.update();
    tags.forEach(function (tagMesh) {
        tagMesh.updateTag();
    });
    renderer.render(scene, camera);
    requestAnimationFrame(render);
}

The code above iterates over the collection of markers and updates each one on every frame. These two steps together keep the 3D coordinates and the 2D screen coordinates linked.

If you implement the feature only as described so far, you will notice a problem: once a marker has been added, it stays visible no matter how the camera is rotated, even when the marked area is behind us. This is because the projected 2D coordinates carry no depth information, so two points that are symmetric about the camera project onto the same position on the 2D plane. How do we solve this? Look at the last question.
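
One intuitive way to check for this, as a sketch rather than the method used in this article, is to test whether the marker lies in front of or behind the camera by comparing directions:

// Simplified sketch (not the article's method): a point is behind the camera
// when the vector from the camera to the point faces away from the viewing direction.
function isBehindCamera(obj, camera) {
    var cameraDirection = new THREE.Vector3();
    camera.getWorldDirection(cameraDirection);

    var toObject = new THREE.Vector3().subVectors(obj.position, camera.position).normalize();
    return cameraDirection.dot(toObject) < 0;
}

The article instead uses the more general frustum test described below, which also handles points that are in front of the camera but outside the field of view.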

How do we confirm that a marker point is inside the camera's viewable area?

We know that the camera has a viewable area. If a 3D coordinate is inside that area, its projection should be displayed on the screen; if it is not, the point should not be projected onto the screen at all. Three.js provides the Frustum object to solve exactly this kind of problem. We determine whether a 3D object is off screen by calling the isOffScreen function. The code is as follows:

function isOffScreen(obj, camera) {
    var frustum = new THREE.Frustum(); // the frustum describes the camera's viewable area
    var cameraViewProjectionMatrix = new THREE.Matrix4();

    cameraViewProjectionMatrix.multiplyMatrices(camera.projectionMatrix, camera.matrixWorldInverse);
    frustum.setFromMatrix(cameraViewProjectionMatrix); // build the frustum from the camera's view-projection matrix

    return !frustum.intersectsObject(obj);
}

First we create the Frustum object and a 4 × 4 matrix object. The next line multiplies the camera's projection matrix by the inverse of its world matrix to obtain the camera's view-projection matrix, which is then used to set up the frustum.

The Frustum.intersectobject function is then called to determine if obj is within the viewable area of the frustum. As for the internal implementation logic, you can view the source code of Three.js.

The above is the core code for implementing panorama markers. As for how the panorama itself is created, you can download the source code from my GitHub. Address:

Https://github.com/heavis/threejs-demo
