Three.js 3D Map Development: A Practice Summary

Source: Internet
Author: User

Over the past month of continuous crunch and overtime I finished a 3D project. It was also my transition from traditional web development to WebGL graphics development. I hit plenty of pitfalls along the way, so I wrote up this summary to share.

1. Normal vector problem

Normals are vectors perpendicular to the surface we want to illuminate. Because normals represent the orientation of a surface, they play a decisive role in modeling the interaction between light sources and objects. Each vertex has an associated normal vector. When a vertex is shared by multiple triangles, its normal is the sum of that vertex's face normals across those triangles: N = N1 + N2. So if you feed the points of a 3D object directly into a BufferGeometry without any processing, the summed normals, after interpolation in the fragment shader, produce a murky, washed-out shading effect. My fix is to make each vertex's normal unique: for every shared vertex, duplicate the vertex and recompute the indices. In other words, a vertex shared by multiple faces becomes multiple copies, each with its own normal vector, so that every face gets a single uniform color.

2. Light sources and face colors

The development process was designed around a fixed color palette, but once a light source is present, the final color of each face is the designed color blended with the light, so it naturally differs from the designed color. The blending follows the Lambert illumination model. The product requirement was that the top faces keep the designed color while the side faces pick up the lighting effect, so that as the map is manipulated the side colors change with the viewing angle. My approach was to draw the top and sides as two separate meshes: the top face uses MeshLambertMaterial's emissive property to set a self-luminous color identical to the designed color, so lighting has no effect on it, while the sides use both emissive and color so the lighting effect applies.
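The vertex-duplication fix from section 1 can be sketched as follows. This is my own illustrative helper, not the author's exact code: it de-indexes a mesh so every triangle owns its three vertices, then assigns each face one flat normal computed from the cross product of its edges.

```javascript
// Sketch (illustrative helper): de-index a triangle mesh so each face's three
// vertices are private copies, each carrying the face normal.
function toFlatShaded(positions, indices) {
  var flatPositions = [];
  var flatNormals = [];
  for (var i = 0; i < indices.length; i += 3) {
    var tri = [indices[i], indices[i + 1], indices[i + 2]].map(function (idx) {
      return [positions[idx * 3], positions[idx * 3 + 1], positions[idx * 3 + 2]];
    });
    // Face normal = normalize(cross(B - A, C - A)), shared by the 3 copies.
    var n = normalize(cross(sub(tri[1], tri[0]), sub(tri[2], tri[0])));
    for (var j = 0; j < 3; j++) {
      flatPositions.push(tri[j][0], tri[j][1], tri[j][2]);
      flatNormals.push(n[0], n[1], n[2]);
    }
  }
  return { positions: flatPositions, normals: flatNormals };
}
function sub(a, b) { return [a[0] - b[0], a[1] - b[1], a[2] - b[2]]; }
function cross(a, b) {
  return [a[1] * b[2] - a[2] * b[1],
          a[2] * b[0] - a[0] * b[2],
          a[0] * b[1] - a[1] * b[0]];
}
function normalize(v) {
  var l = Math.sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]) || 1;
  return [v[0] / l, v[1] / l, v[2] / l];
}
```

The returned arrays can be fed to BufferGeometry position/normal attributes without an index; recent three.js versions offer `geometry.toNonIndexed()` for the de-indexing step.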

  
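The Lambert blend referred to in section 2 can be sketched in plain JavaScript. The formula below (final = emissive + diffuse × lightColor × max(N·L, 0)) is an assumption based on the standard Lambert model, not the author's code; it shows why setting emissive to the designed color and diffuse to black keeps a face exactly at the designed color regardless of lighting.

```javascript
// Sketch of the Lambert blend (illustrative; colors are [r, g, b] in 0..1):
// final = emissive + diffuse * lightColor * max(dot(N, L), 0)
function lambert(emissive, diffuse, lightColor, normal, lightDir) {
  var d = Math.max(
    normal[0] * lightDir[0] + normal[1] * lightDir[1] + normal[2] * lightDir[2],
    0
  );
  return emissive.map(function (e, i) {
    // Clamp so the blended channel never exceeds full intensity.
    return Math.min(e + diffuse[i] * lightColor[i] * d, 1);
  });
}
```

With diffuse set to black the light term vanishes, which is exactly the top-face trick; a side face with a nonzero diffuse color shifts with the angle between its normal and the light.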

3. POI labels

In three.js, a POI that always faces the camera can be created with the Sprite class; you can draw text and images on a canvas and apply the canvas to the sprite as a texture map. The catch is that the canvas image gets distorted: if the sprite's scale is not set correctly, the image ends up stretched or squashed.

  

The solution is to ensure that the sprite's scaled size in the 3D world, after the whole chain of transformations that projects it onto the screen, matches the pixel size of the canvas. This requires computing the ratio between one screen pixel and one unit of length in the 3D world, then scaling the sprite to the corresponding 3D length.
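The pixel-to-world ratio described above can be sketched as follows, assuming a standard perspective camera (the function and parameter names are mine):

```javascript
// Sketch (assumed perspective camera): world units per screen pixel at a given
// depth, used to scale a sprite so its texture stays 1:1 with its canvas.
function worldUnitsPerPixel(fovDeg, distance, screenHeightPx) {
  var fovRad = fovDeg * Math.PI / 180;
  // Visible world height at `distance`, divided by the screen height in pixels.
  return 2 * distance * Math.tan(fovRad / 2) / screenHeightPx;
}
// Usage sketch:
// var k = worldUnitsPerPixel(camera.fov,
//                            camera.position.distanceTo(sprite.position),
//                            renderer.domElement.clientHeight);
// sprite.scale.set(canvas.width * k, canvas.height * k, 1);
```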

  

4. Click picking

A 3D object drawn to the screen in WebGL passes through a series of coordinate-space stages. So to implement click picking in a 3D application, first convert the screen coordinates into NDC (normalized device coordinates), giving the x and y of the NDC point. Because a 2D screen point has no depth, the z of the converted 3D point can take any value; 0.5 is typical (z lies between -1 and 1).
function fromScreenToNDC(x, y, container) {
  return {
    x: x / container.offsetWidth * 2 - 1,
    y: -y / container.offsetHeight * 2 + 1,
    z: 1
  };
}

function fromNDCToScreen(x, y, container) {
  return {
    x: (x + 1) / 2 * container.offsetWidth,
    y: (1 - y) / 2 * container.offsetHeight
  };
}

The NDC coordinates are then converted back to world coordinates. Since NDC = P * MV * vec4, it follows that vec4 = MV^-1 * P^-1 * NDC. This process is already implemented in three.js in the Vector3 class:
unproject: function () {
    var matrix = new Matrix4();
    return function unproject(camera) {
        matrix.multiplyMatrices(camera.matrixWorld, matrix.getInverse(camera.projectionMatrix));
        return this.applyMatrix4(matrix);
    };
}(),

The resulting 3D point, combined with the camera position, defines a ray that is tested for collisions against the objects in the scene. First the ray is tested against each object's bounding sphere: objects whose bounding sphere it misses are excluded, and the rest are saved for the next stage. The objects whose bounding spheres intersect the ray are sorted by distance from the camera, and the ray is then intersected with the triangles that make up each object to find the object actually hit. Of course, this whole process is encapsulated by Raycaster in three.js, which is simple to use:

mouse.x = ndcPos.x;
mouse.y = ndcPos.y;
this.raycaster.setFromCamera(mouse, camera);
var intersects = this.raycaster.intersectObjects(this._getIntersectMeshes(floor, zoom), true);

5. Performance optimization

As the number of objects in the scene grows, drawing becomes more and more time-consuming, to the point of being essentially unusable on mobile.

There is an important principle in graphics of drawing everything in as few calls as possible: the fewer times the drawing API is invoked, the higher the performance. Examples of such calls are fillRect and fillText in canvas, and drawElements and drawArrays in WebGL. So the solution here is to merge the sides and top faces of all objects sharing the same style into a single BufferGeometry. This greatly reduces the number of drawing API calls and dramatically improves rendering performance.
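The merging step can be sketched as plain buffer concatenation (names are illustrative; in three.js the same result can be had via BufferGeometryUtils.mergeGeometries). The key detail is offsetting each part's indices by the vertices already written:

```javascript
// Sketch of batching: concatenate several small position/index buffers into
// one, offsetting indices, so the whole style group renders in one draw call.
function mergeBuffers(parts) {
  var positions = [];
  var indices = [];
  var ranges = [];        // each object's index range, kept for later picking
  var vertexOffset = 0;
  parts.forEach(function (p) {
    ranges.push({ start: indices.length, count: p.indices.length });
    for (var i = 0; i < p.indices.length; i++) {
      indices.push(p.indices[i] + vertexOffset);
    }
    for (var j = 0; j < p.positions.length; j++) {
      positions.push(p.positions[j]);
    }
    vertexOffset += p.positions.length / 3;
  });
  return { positions: positions, indices: indices, ranges: ranges };
}
```

Keeping the per-object index ranges around is what makes the cut-and-restore trick in the next section possible.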

  

This solves the rendering performance problem but creates another one. Now that all faces of the same style live in one BufferGeometry (call it the style mesh), we can no longer determine which individual object (call it the object mesh) was selected when a face is clicked, so that object cannot be highlighted and scaled on its own. My way of handling this is to keep all the individual object meshes in memory and use that data for the intersection test when a face is clicked. To highlight and scale the selected object, first cut the corresponding part out of the style mesh, then add the selected object mesh to the scene and scale and highlight it. The cutting method is to record each object's index range within the style mesh and zero out that part of the index when it needs to be cut; when it needs to be restored, that part of the index is written back to its original values.
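The zero-and-restore trick can be sketched as follows (helper names are mine; the index ranges are assumed to have been recorded when the style mesh was built):

```javascript
// Sketch of the hide/restore trick: zero an object's slice of the merged
// index buffer to cut it out of the style mesh, then restore it later.
function hideRange(indexArray, range, backup) {
  for (var i = 0; i < range.count; i++) {
    backup[i] = indexArray[range.start + i];
    indexArray[range.start + i] = 0;   // degenerate triangles draw nothing
  }
}

function restoreRange(indexArray, range, backup) {
  for (var i = 0; i < range.count; i++) {
    indexArray[range.start + i] = backup[i];
  }
}
// In three.js, set geometry.index.needsUpdate = true after mutating the buffer.
```

Zeroed triangles collapse to a single vertex and rasterize nothing, so the object visually disappears from the style mesh without rebuilding the buffer.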

6. Click to move to the center of the screen

This part also had its share of pitfalls. The first idea was:

The center point of the clicked polygon is in world coordinates. First use center.project(camera) to get its normalized device coordinates, convert the NDC to screen coordinates, then take the difference between the center point's screen coordinates and the screen center to get an offset, and update the camera position following the pan method in OrbitControls. This approach ultimately fails because the camera may undergo all kinds of transformations, so an offset in screen coordinates does not map linearly to a position in 3D world coordinates.

The final idea starts from what we actually want: to move the clicked center point to the middle of the screen, whose NDC coordinates are always (0, 0); along the line of sight through the focus, from the near plane onward, x and y are both 0. In other words, the clicked center should become our look-at target. We could simply point the line of sight at the face's center and use the lookAt method to derive the camera matrix, but handled that naively the camera's attitude visibly changes, so it does not feel like a pan. What we have to do instead is keep the camera's current attitude while re-targeting it at the center. Normal panning turns a screen offset into a camera change, with the screen offset known and the target derived; here we must invert that process and derive the screen offset from the target. First compute the camera's offset vector from the current target to the face center, then project that offset onto the camera's x axis and up axis; from those projected lengths we can recover the amount of screen translation to apply.
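The projection step at the end can be sketched with plain vector math (my own distillation; the camera's right and up vectors are assumed to be unit length, as they are when taken from the camera's world matrix):

```javascript
// Sketch: decompose the offset from the current target to the clicked center
// into components along the camera's local X (right) and up axes; these
// become the pan amounts that reproduce the move as a pure translation.
function panComponents(offset, right, up) {
  function dot(a, b) { return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]; }
  return { panX: dot(offset, right), panY: dot(offset, up) };
}
```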

7. 2D/3D switching

The essence of the 2D/3D switch is that when the camera's line of sight is perpendicular to the ground plane of the scene, a parallel (orthographic) projection is used, so the user sees only the top faces, which reads as a 2D view. So we have to compute the orthographic frustum from the perspective frustum.

Because users do many things in both the 2D and 3D scenes, such as panning, zooming, and rotating, the key is to keep the parallel projection consistent with the perspective camera's position and look-at target, and in particular to keep the orthographic zoom consistent with the perspective distance.
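One way to keep the two cameras consistent, sketched under the assumption of a standard perspective camera looking at a target at a known distance, is to size the orthographic frustum to exactly the rectangle the perspective camera sees at that distance:

```javascript
// Sketch: size the orthographic frustum so the view at the target distance
// matches the perspective camera, keeping 2D/3D zoom consistent on switch.
function orthoFromPerspective(fovDeg, aspect, distance) {
  // Half of the visible world height at `distance` for a vertical-FOV camera.
  var halfH = distance * Math.tan(fovDeg * Math.PI / 360);
  var halfW = halfH * aspect;
  return { left: -halfW, right: halfW, top: halfH, bottom: -halfH };
}
```

The returned bounds can be fed to an OrthographicCamera; reversing the formula (distance from frustum height) gives the matching perspective distance when switching back to 3D.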

In a parallel projection, a larger zoom means a smaller cross-section of the six-sided frustum box, and therefore greater magnification.

8. 3D geographic levels

A map level (zoom) is really the correspondence between pixels and meters in the Mercator coordinate system. There is a common standard and formula for this:
r = 6378137
resolution = 2 * PI * r / (2^zoom * 256)
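The formula translates directly into code:

```javascript
// Web Mercator resolution: meters per pixel at a given zoom, for 256 px tiles.
var EARTH_RADIUS = 6378137; // meters, Web Mercator sphere radius
function resolution(zoom) {
  return 2 * Math.PI * EARTH_RADIUS / (Math.pow(2, zoom) * 256);
}
```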

The corresponding relationship between the pixels and the meters in each level is as follows:

Resolution      Zoom  BlockSize (2048 px)  BlockSize (256 px)  Scale (dpi=160)
156543.0339     0     320600133.5          40075016.69         986097851.5
78271.51696     1     160300066.7          20037508.34         493048925.8
39135.75848     2     80150033.37          10018754.17         246524462.9
19567.87924     3     40075016.69          5009377.086         123262231.4
9783.939621     4     20037508.34          2504688.543         61631115.72
4891.96981      5     10018754.17          1252344.271         30815557.86
2445.984905     6     5009377.086          626172.1357         15407778.93
1222.992453     7     2504688.543          313086.0679         7703889.465
611.4962263     8     1252344.271          156543.0339         3851944.732
305.7481131     9     626172.1357          78271.51696         1925972.366
152.8740566     10    313086.0679          39135.75848         962986.1831
76.43702829     11    156543.0339          19567.87924         481493.0916
38.21851414     12    78271.51696          9783.939621         240746.5458
19.10925707     13    39135.75848          4891.96981          120373.2729
9.554628534     14    19567.87924          2445.984905         60186.63645
4.777314267     15    9783.939621          1222.992453         30093.31822
2.388657133     16    4891.96981           611.4962263         15046.65911
1.194328567     17    2445.984905          305.7481131         7523.329556
0.5971642834    18    1222.992453          152.8740566         3761.664778
0.2985821417    19    611.4962263          76.43702829         1880.832389
0.1492910708    20    305.7481131          38.21851414         940.4161945
0.0746455354    21    152.8740566          19.10925707         470.2080972
0.0373227677    22    76.43702829          9.554628534         235.1040486

The calculation strategy in 3D is: first make sure the 3D world's coordinates map clearly to Mercator units. If the unit is already meters, the world height covered by the camera's projection on screen can be compared directly with the number of pixels on screen, and the resulting resolution matched against the table above to select the appropriate level data and scale bar. Note that in a 3D map the scale bar does not hold this ratio between every point on screen and the real world; only the pixels around the camera's center point on screen satisfy the relationship, because the perspective projection makes near objects appear large and far objects small.

9. POI collision

Since labels always face the camera, collision detection reduces to converting each label's anchor point into screen coordinates and, with the label's width and height, testing the resulting rectangles for intersection. The specific collision algorithms are easy to find online, so they are not expanded here.
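The screen-space rectangle test itself is the classic axis-aligned overlap check; a minimal sketch (the rectangle shape `{x, y, w, h}` is my assumption, with x/y as the top-left corner in pixels):

```javascript
// Minimal axis-aligned rectangle overlap test for screen-space POI boxes.
// Two rectangles intersect iff they overlap on both the x and y axes.
function rectsIntersect(a, b) {
  return a.x < b.x + b.w && b.x < a.x + a.w &&
         a.y < b.y + b.h && b.y < a.y + a.h;
}
```

A simple decluttering pass sorts labels by priority and keeps each one only if it intersects none of the labels already kept.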
