3D Data Center Room Visualization Based on HTML5 WebGL and VR Technology


Objective

In 3D data center room visualization applications, with the continuous popularization and development of networked video surveillance systems, network cameras are used more and more widely in monitoring systems; the arrival of the high-definition era in particular has further accelerated their development and application.

While the number of surveillance cameras keeps growing, monitoring systems face serious problems: massive amounts of video are scattered and isolated, viewing angles are incomplete, and camera locations are unclear. How to manage the cameras and control the video streams more intuitively and clearly has therefore become an important topic for increasing the value of video applications, and this project sets out to address that problem from the perspective of the current situation.

The question is how to improve, manage, and effectively use the massive information collected by front-end devices for public security services, and especially, under the trend of technology convergence, how to combine advanced techniques such as video fusion, real-time integration, and 3D dynamics to achieve dynamic visual monitoring of three-dimensional scenes and more effective identification and analysis. Tapping the public value of the effective information hidden in this massive data has become the trend and direction for the visual development of video surveillance platforms. At present, leaders in the monitoring industry such as Hikvision and Dahua can plan the layout of cameras in public places in this way: based on the parameters of a Hikvision, Dahua, or other brand camera, the system adjusts the camera model's visual range, monitoring direction, and so on, making it easy for people to intuitively understand each camera's monitoring area, monitoring angle, and more.

Here is the project address: HTML5-based WebGL custom 3D camera Monitor model

Effect Preview

Overall scene-camera

Local scene-camera

Code generation
Camera Models and scenes

The camera model used in the project is generated through 3ds Max modeling, which can export OBJ and MTL files; in HT, a camera model can then be generated in the 3D scene by parsing the OBJ and MTL files.
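As a rough illustration, loading such an exported OBJ/MTL pair might look like the sketch below. The file paths, the shape3d name, and the node placement are made up for illustration, and the call assumes ht.Default.loadObj as described in the HT for Web OBJ manual (the exact callback signature may vary by version):

// Hypothetical paths; loadObj parses the OBJ/MTL pair and registers the model.
ht.Default.loadObj('models/camera.obj', 'models/camera.mtl', {
    center: true,                  // center the parsed geometry
    shape3d: 'camera-model',       // register the model under this shape3d name
    finishFunc: function(modelMap) {
        if (modelMap) {
            var node = new ht.Node();
            node.s('shape3d', 'camera-model'); // display the parsed camera model
            node.p3(0, 50, 0);                 // place it somewhere in the scene
            dataModel.add(node);               // dataModel is the scene's DataModel
        }
    }
});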

The scene in the project is built with HT's 3D editor. Some of the models in the scene are modeled with HT itself, some are modeled in 3ds Max and then imported into HT, and the white markings on the ground are rendered in the HT 3D editor as ground maps (textures).

Cone modeling

A 3D model is composed of the most basic triangular faces. For example, a rectangle can be made up of 2 triangles, and a cube consists of 6 faces, or 12 triangles; more complex models can likewise be assembled from many small triangles. A 3D model definition is therefore a description of all the triangles that construct the model. Each triangle is composed of three vertices, and each vertex is determined by its x, y, z coordinates. HT uses the right-hand rule to determine the front face of the triangle constructed from its three vertices.

HT registers a custom 3D model through the ht.Default.setShape3dModel(name, model) function; the cone in front of the camera in this project is generated this way. The cone can be considered to consist of 5 vertices and 6 triangles, as shown below:

ht.Default.setShape3dModel(name, model)

1. name is the model name; if the name is the same as a predefined one, the predefined model is replaced.
2. model is a JSON-type object, where vs is an array of vertex coordinates, is is an index array, and uv is an array of texture (map) coordinates. If you want to define a face individually, you can use bottom_vs, bottom_is, bottom_uv, top_vs, top_is, top_uv, and so on; that face can then be controlled separately through the shape3d.top.* and shape3d.bottom.* styles. A minimal example follows this list.
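As a minimal sketch of the vs/is/uv format just described (the model name 'myRect' and the coordinate values are made up for illustration), a single rectangle can be registered as two triangles over four vertices:

ht.Default.setShape3dModel('myRect', {
    vs: [
        -0.5, 0, -0.5,   // vertex 0
         0.5, 0, -0.5,   // vertex 1
         0.5, 0,  0.5,   // vertex 2
        -0.5, 0,  0.5    // vertex 3
    ],
    is: [0, 1, 2, 0, 2, 3],         // two triangles: (0, 1, 2) and (0, 2, 3)
    uv: [0, 0, 1, 0, 1, 1, 0, 1]    // one pair of texture coordinates per vertex
});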

Here is the code for the model I defined:

// camera is the current camera node
// fovy is the tan value of half of the camera's field-of-view angle
var setRangeModel = function(camera, fovy) {
    var fovyVal = 0.5 * fovy;
    var pointArr = [
        0, 0, 0,                      // apex of the cone (the camera position)
        -fovyVal,  fovyVal, 0.5,      // the four corners of the far (from) face
         fovyVal,  fovyVal, 0.5,
         fovyVal, -fovyVal, 0.5,
        -fovyVal, -fovyVal, 0.5
    ];
    ht.Default.setShape3dModel(camera.getTag(), [{
        vs: pointArr,
        is: [2, 1, 0, 4, 1, 0, 4, 3, 0, 3, 2, 0],   // the four side triangles
        from_vs: pointArr.slice(3),                 // the four far-face vertices
        from_is: [3, 1, 0, 3, 2, 1],                // the far face as two triangles
        from_uv: [0, 0, 1, 0, 1, 1, 0, 1]           // texture coordinates of the far face
    }]);
};

I use the tag value of the current camera as the name of the model. In HT, the tag is used to uniquely identify an entity, and the user can customize its value. pointArr records the five vertex coordinates of the current five-faced body, and from_vs, from_is, and from_uv in the code build the base of the five-faced body; this base face is used to display the image that the current camera renders.
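For illustration, the tag works roughly like this (the tag value 'camera-1' is hypothetical):

camera.setTag('camera-1');                            // assign a unique tag to this camera node
var sameCamera = dataModel.getDataByTag('camera-1');  // the node can be looked up again by its tag
// in this project, the same tag string also serves as the custom shape3d model name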

The wf.geometry property is set in the cone's style in the code below, which adds a wireframe to the model and enhances its stereoscopic effect; wf.color and wf.width adjust the color, thickness, and so on of the wireframe.

The setup code for the related model style properties is as follows:

rangeNode.s({
    'shape3d': cameraName,                     // the camera model name
    'shape3d.color': 'rgba(, 148, 252, 0.3)',  // cone model color (translucent)
    'shape3d.reverse.flip': true,              // the back side of the cone shows the front's content
    'shape3d.light': false,                    // whether the cone model is affected by light
    'shape3d.transparent': true,               // the cone model is transparent
    '3d.movable': false,                       // whether the cone model can be moved
    'wf.geometry': true                        // whether the cone model's wireframe is displayed
});
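The wireframe color and width mentioned above can be tuned the same way (the values here are illustrative):

rangeNode.s('wf.color', 'rgb(67, 175, 241)'); // wireframe line color
rangeNode.s('wf.width', 2);                   // wireframe line width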

Camera Image Generation Principle

Perspective projection

Perspective projection is a method of drawing or rendering on a two-dimensional paper or canvas plane so as to achieve a visual effect close to that of a real three-dimensional object; it is also known as a perspective view. In a perspective view, distant objects appear smaller, near objects appear larger, and parallel lines appear to converge, which is closer to how the human eye perceives the world.

As shown, perspective projection ultimately displays the content inside the view frustum on the screen, so Graph3dView provides the eye, center, up, far, near, fovy, and aspect parameters to control the exact extent of the frustum. For details on perspective projection, refer to the HT for Web 3D manual.
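For orientation, setting these frustum parameters on a Graph3dView looks roughly like the sketch below. setEye, setCenter, and setFovy appear later in this project; setNear and setFar are assumed here to follow the same naming (see the HT 3D manual), and all values are illustrative:

g3d.setEye([0, 300, 600]);   // the viewpoint (camera) position
g3d.setCenter([0, 0, 0]);    // the point the eye looks at
g3d.setFovy(Math.PI / 4);    // vertical field-of-view angle of the frustum
g3d.setNear(10);             // near clipping plane distance (assumed setter name)
g3d.setFar(10000);           // far clipping plane distance (assumed setter name)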

Based on this description, in this project the current eye and center positions of the 3D scene are cached after the camera is initialized; then the scene's eye and center are moved to the camera's position and viewing direction, and at that moment a snapshot of the current 3D scene is taken, which is exactly the monitoring image of the current camera; finally the scene's center and eye are set back to the cached values. With this method a snapshot can be taken from anywhere in the 3D scene, achieving real-time generation of the camera's monitoring image.

The relevant pseudo-code is as follows:

function getFrontImg(camera, rangeNode) {
    var oldEye = g3d.getEye();        // cache the current eye position
    var oldCenter = g3d.getCenter();  // cache the current center position
    var oldFovy = g3d.getFovy();      // cache the current field-of-view angle
    // move the eye and center to the camera's position and viewing direction
    // ...
    g3d.validateImp();                // force the scene to re-render
    var img = g3d.toDataURL();        // the snapshot is the camera's monitoring image
    // restore the cached eye, center and fovy
    g3d.setEye(oldEye);
    g3d.setCenter(oldCenter);
    g3d.setFovy(oldFovy);
    return img;
}

After testing, this method causes the page to lag: it captures the entire current 3D scene, and because the scene is relatively large, obtaining the image information with toDataURL is very slow. So I took an off-screen approach to get the image instead, as follows:
1. Create a new 3D scene and set its width and height to 200px; the content of this scene is the same as the main screen's scene. In HT, new ht.graph3d.Graph3dView(dataModel) creates a scene, where dataModel holds all the elements of the current scene, so the main-screen and off-screen 3D scenes share the same dataModel, guaranteeing that the scenes stay consistent. A minimal setup sketch follows this list.
2. Set the position of the newly created scene to somewhere the screen cannot see, and add it to the DOM.
3. Switch the image capture from the main screen to the off-screen scene. The off-screen image is much smaller than the one captured from the main screen, and the off-screen capture does not need to save the original eye and center positions, because the main screen's eye and center are never changed; this also removes the switching overhead and greatly improves the speed at which the camera image is obtained.
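The setup of such an off-screen scene might look like the sketch below; the sizes and the off-screen placement are illustrative assumptions, dataModel is the DataModel shared with the main view, and outScreenG3d matches the variable name used in the capture code further down:

var outScreenG3d = new ht.graph3d.Graph3dView(dataModel); // shares the main scene's DataModel
var outView = outScreenG3d.getView();
outView.style.position = 'absolute';
outView.style.left = '-10000px';   // park the view outside the visible viewport
outView.style.top = '0';
outView.style.width = '200px';     // a small view keeps snapshot generation fast
outView.style.height = '200px';
document.body.appendChild(outView);
outScreenG3d.validate();           // let the off-screen view lay out and render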

The following is the code that implements this approach:

function getFrontImg(camera, rangeNode) {
    // hide the camera's five-faced body while capturing the current image
    rangeNode.s('shape3d.from.visible', false);
    rangeNode.s('shape3d.visible', false);
    rangeNode.s('wf.geometry', false);
    var cameraP3 = camera.p3();
    var cameraR3 = camera.r3();
    var cameraS3 = camera.s3();
    var updateScreen = function() {
        demoUtil.canvas2dRender(camera, outScreenG3d.getCanvas());
        rangeNode.s({
            'shape3d.from.image': camera.a('canvas')
        });
        rangeNode.s('shape3d.from.visible', true);
        rangeNode.s('shape3d.visible', true);
        rangeNode.s('wf.geometry', true);
    };
    // start position of the cone in front of the camera
    var realP3 = [cameraP3[0], cameraP3[1] + cameraS3[1] / 2, cameraP3[2] + cameraS3[2] / 2];
    // rotate the current eye position around the camera's start position to get the correct eye position
    var realEye = demoUtil.getCenter(cameraP3, realP3, cameraR3);
    outScreenG3d.setEye(realEye);
    outScreenG3d.setCenter(demoUtil.getCenter(realEye, [realEye[0], realEye[1], realEye[2] + 5], cameraR3));
    outScreenG3d.setFovy(camera.a('fovy'));
    outScreenG3d.validate();
    updateScreen();
}

The getCenter method used in the above code obtains the position of a point in the 3D scene after it has been rotated by a given angle around another point, using the methods under HT's ht.Math package. The code is as follows:

// pointA is the point around which pointB rotates (the rotation center)
// pointB is the point being rotated
// r3 is the rotation angle array [xAngle, yAngle, zAngle] for rotation around the x, y, and z axes
var getCenter = function(pointA, pointB, r3) {
    var mtrx = new ht.Math.Matrix4();
    var euler = new ht.Math.Euler();
    var v1 = new ht.Math.Vector3();
    var v2 = new ht.Math.Vector3();
    mtrx.makeRotationFromEuler(euler.set(r3[0], r3[1], r3[2]));  // rotation matrix for r3
    v1.fromArray(pointB).sub(v2.fromArray(pointA));              // v1 = OB - OA = AB
    v2.copy(v1).applyMatrix4(mtrx);                              // v2 = AB rotated by r3
    v2.sub(v1);                                                  // v2 = (rotated B) - B
    return [pointB[0] + v2.x, pointB[1] + v2.y, pointB[2] + v2.z];
};

This relies on a bit of vector knowledge, as follows:

OA + OB = OC

The method is divided into the following steps:

1. var mtrx = new ht.Math.Matrix4() creates a transformation matrix, and mtrx.makeRotationFromEuler(euler.set(r3[0], r3[1], r3[2])) gets the rotation matrix that rotates by r3[0], r3[1], r3[2], that is, around the x-axis, y-axis, and z-axis.
2. new ht.Math.Vector3() creates the two vectors v1 and v2.
3. v1.fromArray(pointB) creates the vector from the origin to pointB.
4. v2.fromArray(pointA) creates the vector from the origin to pointA.
5. v1.fromArray(pointB).sub(v2.fromArray(pointA)) is therefore the vector OB - OA, i.e. the vector AB; at this point v1 becomes the vector AB.
6. v2.copy(v1) copies v1 into v2, and v2.copy(v1).applyMatrix4(mtrx) applies the rotation matrix to v2, so v2 becomes the vector obtained by rotating v1 (the vector AB) around pointA.
7. v2.sub(v1) then gives the vector whose start point is pointB and whose end point is the position of pointB after the rotation; this vector is now stored in v2.
8. By the vector formula, the rotated point is [pointB[0] + v2.x, pointB[1] + v2.y, pointB[2] + v2.z].
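As a quick sanity check of the steps above (a hypothetical call, not taken from the project), rotating the point [0, 0, 1] around the origin by 90 degrees about the y-axis should land near [1, 0, 0]:

var p = getCenter([0, 0, 0], [0, 0, 1], [0, Math.PI / 2, 0]);
// p is approximately [1, 0, 0], up to floating-point rounding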

The 3D scene used in this project is actually from Hightopo's recent booth at the Big Data Expo in Guizhou, an HT industrial Internet VR example. Public expectations for VR/AR are very high, but the road still has to be walked step by step; even Magic Leap, which raised $2.3 billion before shipping its first product, has mostly disappointed. That topic deserves its own post later. Here are photos of the scene taken at the time:

Pasting the 2D image onto the 3D model

The previous step gives us a screenshot from the current camera position, so how do we paste that image onto the bottom face of the five-faced body we built earlier? The bottom rectangle is built from from_vs and from_is, so in HT you can set the shape3d.from.image property in the style of the five-faced body to the current image, and the from_uv array defines how the map (texture) is positioned, as follows:

The following is the code that defines the map position, from_uv:

from_uv: [0, 0, 1, 0, 1, 1, 0, 1]

from_uv is an array of UV coordinates that defines how the map is applied; each pair of values corresponds, in order, to one vertex of the from face defined by from_vs, so with this definition the image is pasted across the from face of the 3D model.
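For illustration, pasting a picture onto the from face at runtime goes through the node's style (the image path here is hypothetical; in this project the value is the camera's off-screen canvas instead):

rangeNode.s('shape3d.from.image', 'assets/snapshot.png'); // an image URL, canvas, or registered image name
// in the project: rangeNode.s('shape3d.from.image', camera.a('canvas'));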

Control Panel

In HT, the panel is created with new ht.widget.Panel():

Each camera in the panel has a module that presents its current monitoring image. This is in fact also a canvas, the same canvas used for the cone's monitoring image in the scene shown earlier. Each camera has its own canvas that stores its real-time monitoring picture, which allows the canvas to be pasted anywhere; it is added to the panel with the following code:

formPane.addRow([{
    element: camera.a('canvas')
}], [0.1]); // relative column width

The code stores the canvas node under the attr property of the camera entity, so camera.a('canvas') retrieves the current camera's picture.
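Storing that per-camera canvas might look like the sketch below; the canvas size is an assumption, and a() is used here as the attr getter/setter:

var canvas = document.createElement('canvas');
canvas.width = 200;                      // illustrative size
canvas.height = 200;
camera.a('canvas', canvas);              // store the canvas under the camera's attr object
// later, anywhere in the code:
var monitorCanvas = camera.a('canvas');  // the same canvas, ready to paste or display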

Each control node in the panel is added via formPane.addRow; for details, refer to the HT for Web form manual. The form pane formPane is then added to the panel created with ht.widget.Panel; refer to the HT for Web Dashboard manual.

Some of the control code is as follows:

formPane.addRow(['rotateY', {
    slider: {
        min: -Math.PI,
        max: Math.PI,
        value: r3[1],
        onValueChanged: function() {
            var cameraR3 = camera.r3();
            camera.r3([cameraR3[0], this.getValue(), cameraR3[2]]);
            rangeNode.r3([cameraR3[0], this.getValue(), cameraR3[2]]);
            getFrontImg(camera, rangeNode);
        }
    }
}], [0.1, 0.15]);

The control panel adds its control elements via addRow; the code above adds the control for rotating the camera around the y-axis. onValueChanged is called whenever the slider's value changes; at that point camera.r3() gets the camera's current rotation parameters. Because the rotation is around the y-axis, the x-axis and z-axis angles stay constant and only the y-axis angle changes, so camera.r3([cameraR3[0], this.getValue(), cameraR3[2]]) adjusts the camera's rotation angle and rangeNode.r3([cameraR3[0], this.getValue(), cameraR3[2]]) sets the rotation angle of the cone in front of the camera. The previously encapsulated getFrontImg function is then called to obtain the real-time image at the new rotation angle.

In the project, the panel title can be given a background with transparency through the panel configuration parameter titleBackground (an rgba value); other similar header parameters such as titleColor and titleHeight can be configured in the same way. The separator lines between the inner panels can be adjusted through separator parameters such as separatorColor and separatorWidth (color, width, and so on). Finally, panel.setPositionRelativeTo('rightTop') places the panel in the upper-right corner, and document.body.appendChild(panel.getView()) adds the outermost div of the panel to the page; panel.getView() returns the outermost DOM node of the panel.

The specific initialization panel code is as follows:

function initPanel() {
    var panel = new ht.widget.Panel();
    var config = {
        title: "Camera Control Panel",
        titleBackground: 'rgba(230, 230, 230, 0.4)',
        titleColor: 'rgb(0, 0, 0)',
        titleHeight: 30,
        separatorColor: 'rgb(67, 175, 241)',
        separatorWidth: 1,
        exclusive: true,
        items: []
    };
    cameraArr.forEach(function(data, num) {
        var camera = data['camera'];
        var rangeNode = data['rangeNode'];
        var formPane = new ht.widget.FormPane();
        initFormPane(formPane, camera, rangeNode);
        config.items.push({
            title: "Camera" + (num + 1),
            titleBackground: 'rgba(230, 230, 230, 0.4)',
            titleColor: 'rgb(0, 0, 0)',
            titleHeight: 30,
            separatorColor: 'rgb(67, 175, 241)',
            separatorWidth: 1,
            content: formPane,
            flowLayout: true,
            contentHeight: 400,
            width: 250,
            expanded: num === 0
        });
    });
    panel.setConfig(config);
    panel.setPositionRelativeTo('rightTop');
    document.body.appendChild(panel.getView());
    window.addEventListener("resize", function() {
        panel.invalidate();
    });
}

In the control panel you can adjust the orientation of the camera, the range the camera monitors, the length of the cone in front of the camera, and so on, and the camera image is generated in real time. The following shows the result when running:

The following shows the 3D scene of this project combined with VR technology based on HT for Web:
