[WebGL Primer] 21, light emitted from a parallel light source


Note: This article is translated from http://wgld.org/; the original author is doxas. Where I add my own notes, I will mark them with [Lufy:]. Also, my WebGL knowledge is not deep, so some technical terms may be translated incorrectly; corrections are welcome.



(Figure: the rendered result of this article's demo.)
Illuminating the world

Last time, a doughnut-shaped torus model was drawn; no special new knowledge was involved, but the 3D model was rendered successfully. This time, let's look at light. There are many kinds and uses of light in 3D rendering, and studying light thoroughly is far from easy. In the real world, we see objects because the light they reflect reaches our eyes; in other words, without light our eyes can see nothing. In the world of 3D programming, however, a model can be rendered even without light; no lighting has been used so far, yet polygons have been drawn just fine. Still, adding light to the simulated world can greatly enhance the visual impact of a 3D scene. The light introduced this time is light emitted from a parallel light source (directional light), one of the simpler kinds of light to implement. Before getting to the parallel light source, let's briefly talk about light in general.


Simulation of light

Handling light from a parallel light source means dealing with how the light is blocked, that is, with shading. Take a look at the result of this demo and compare it with the demo from the previous article; the difference is obvious.

When dealing with light, the parts that the light strikes should be bright, and the parts that the light does not strike should be dark. If there were no light, all colors would have the same brightness; when simulating light, the side the light does not reach should be shaded. In WebGL, the intensity of each color channel lies between 0 and 1, and brightness is determined by the values assigned to the RGBA elements.

When processing light, the original RGBA values are multiplied by a coefficient. This coefficient also lies in the range 0 to 1: the side facing the light keeps a color close to the original, while the side facing away from the light gets a darker color.

For example, if each RGBA element is 0.5 and the light coefficient is 0.5, multiplying them gives RGBA elements of 0.5 x 0.5 = 0.25, which is darker than the original color. Following this principle, the intensity of the light and the intensity of the color are computed separately and then multiplied together, which is ultimately how light and shadow are handled.
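To make the multiplication concrete, here is a minimal JavaScript sketch (not from the original article; the function name applyLight is made up for illustration). Like the shader later in this article, it scales only the RGB elements and leaves alpha untouched.

> Sketch: darkening a color with a light coefficient
function applyLight(rgba, coef){
    // scale each color element by a coefficient in the range 0 ~ 1;
    // alpha is left as-is so the shading does not change transparency
    return [rgba[0] * coef, rgba[1] * coef, rgba[2] * coef, rgba[3]];
}

applyLight([0.5, 0.5, 0.5, 1.0], 0.5); // -> [0.25, 0.25, 0.25, 1.0]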
What is a parallel light source

A parallel light source emits light from infinitely far away, so that the rays remain parallel throughout the whole three-dimensional space. This concept may sound hard to grasp; the main point is that the direction of the light never changes: no matter where a model sits in the three-dimensional space, it is illuminated from the same direction. For example:

(Figure: the yellow arrows indicate the direction of the light.)

The calculation of light emitted by a parallel light source is not a big burden and is comparatively simple to implement, so it is often used in 3D programming. Also, to compute how light from a parallel light source strikes a surface, the direction of the light must be known; it can be defined as a vector and then passed to the shader.

However, the direction of the light alone is not enough to produce a lighting effect; the normal information of each vertex is also needed. So what is a normal? Let's look at the specifics.

Normal vector and light vector

People who don't know much about 3D programming or mathematics have probably never heard the word "normal". Simply put, a normal is a vector that has a direction. In two-dimensional space, it represents the direction perpendicular to a line; in three-dimensional space, a normal vector represents the direction a polygon faces.
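As an illustration (not from the original article; the helper faceNormal is hypothetical), the normal of a triangle can be computed from its three vertices with a cross product:

> Sketch: computing a face normal with a cross product
function faceNormal(p0, p1, p2){
    // the two edge vectors that span the triangle
    var ax = p1[0] - p0[0], ay = p1[1] - p0[1], az = p1[2] - p0[2];
    var bx = p2[0] - p0[0], by = p2[1] - p0[1], bz = p2[2] - p0[2];
    // their cross product is perpendicular to the face
    var nx = ay * bz - az * by;
    var ny = az * bx - ax * bz;
    var nz = ax * by - ay * bx;
    // normalize to unit length
    var len = Math.sqrt(nx * nx + ny * ny + nz * nz);
    return [nx / len, ny / len, nz / len];
}

// a triangle lying flat in the xz-plane points straight up the y-axis
faceNormal([0, 0, 0], [0, 0, 1], [1, 0, 0]); // -> [0, 1, 0]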

But why does the lighting effect need normals in addition to the direction of the light? In the real world, the light emitted by the sun or a lamp is reflected off objects, so reflection has to be taken into account. Light emitted from a light source changes direction when it reaches the surface of an object and is reflected.


(Figure: the pink lines represent the paths of the light rays; their direction changes wherever they hit a face.) In this way, the orientation of the faces that make up a model determines the path of the light.

Light in 3D is only an approximation of real light; there is no need to fully simulate how light travels and bounces in the real world, because a complete simulation would require far too much computation. This time, the light from the parallel light source is computed from the vertex normal and the direction of the light (the light vector), approximating the diffusion and reflection of light to a certain degree. A face that the light strikes head-on reflects the light completely; in other words, the light's influence on it is large. Conversely, a face the light does not reach diffuses no light at all. For example:

If the angle between the light vector and the normal vector is greater than 90 degrees, the light has no influence at all. This can be computed with the inner product (dot product) of the two vectors. (The inner product itself is not explained here; those who want the details can look up the relevant material themselves.) The inner product can be computed easily with a function built into the shader, so there is no need to worry about it.
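To see why the inner product captures the angle, here is a small JavaScript sketch (not from the article): for unit vectors, the dot product equals the cosine of the angle between them, so it falls to 0 at 90 degrees and goes negative beyond that.

> Sketch: the inner product of the normal and the light vector
function dot(a, b){
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

var normal = [0, 1, 0];   // a face pointing straight up
dot(normal, [0, 1, 0]);   //  1.0 : light hits head-on, full influence
dot(normal, [1, 0, 0]);   //  0.0 : light grazes at 90 degrees
dot(normal, [0, -1, 0]);  // -1.0 : light from behind, no influence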

Just have the right data ready; the rest of the calculation can be left to WebGL.

So, this time the vertex shader has to be changed, and of course the JavaScript side needs changes as well. Let's go through them slowly.


Shader for directional lights

Let's look at the shader first. This time the changes are only to the vertex shader: the lighting is calculated in the vertex shader, and the result is passed on to the fragment shader.

> code for vertex shaders
attribute vec3 position;
attribute vec3 normal;
attribute vec4 color;
uniform   mat4 mvpMatrix;
uniform   mat4 invMatrix;
uniform   vec3 lightDirection;
varying   vec4 vColor;

void main(void){
    vec3  invLight = normalize(invMatrix * vec4(lightDirection, 0.0)).xyz;
    float diffuse  = clamp(dot(normal, invLight), 0.1, 1.0);
    vColor         = color * vec4(vec3(diffuse), 1.0);
    gl_Position    = mvpMatrix * vec4(position, 1.0);
}
Today's shader is quite different from before and may look complex at first glance, so let's go through the changes one by one. First, the variables. A new attribute variable, normal, stores the normal information of the vertex. Two uniform variables have also been added: invMatrix, which receives the inverse of the model coordinate transformation matrix, and lightDirection, which receives the direction of the light emitted from the parallel light source as a vector.
What is an inverse matrix?

The invMatrix added to the vertex shader this time holds the inverse of the model's coordinate transformation matrix; most people are probably not familiar with inverse matrices. Light from a parallel light source (directional light) normally needs only a light vector, and every model in the three-dimensional space is illuminated from the same direction. But think about it: the model coordinate transformation can scale, rotate, and move the model. If the lighting were computed only from the normals and the light vector, the result would be affected by the model's position and orientation.

The light's position and direction, which should be correct, fail to produce the right result because of the model coordinate transformation. Therefore, the influence of the model coordinate transformation is canceled by applying the exact inverse of that transformation. When the model rotates 45 degrees around the x-axis, rotating 45 degrees in the opposite direction cancels the rotation; even as the model rotates, the position of the light source and the direction of the light stay fixed. Likewise, scaling the model is a matrix multiplication, and it can be canceled by multiplying by the inverse matrix.

For this, we need to prepare the inverse of the model coordinate transformation matrix for the lighting. minmatrix.js provides a function that generates the inverse matrix, and the demo uses it for the lighting calculation.
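The cancellation can be checked directly in script. Below is a minimal sketch, assuming minmatrix.js is loaded and exposes the matIV object the way the earlier articles in this series use it.

> Sketch: an inverse matrix cancels the model transformation
var m = new matIV();
var mMatrix   = m.identity(m.create());
var invMatrix = m.identity(m.create());
var check     = m.identity(m.create());

// rotate the model 45 degrees around the x-axis
m.rotate(mMatrix, Math.PI / 4, [1, 0, 0], mMatrix);

// the inverse matrix is the exact opposite transformation
m.inverse(mMatrix, invMatrix);

// multiplying the two cancels the rotation: check becomes the identity matrix
m.multiply(mMatrix, invMatrix, check);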

Next, the light coefficient also has to be calculated during lighting; that part of the code is as follows.

> Calculation of illumination coefficient
vec3  invLight = normalize(invMatrix * vec4(lightDirection, 0.0)).xyz;
float diffuse  = clamp(dot(normal, invLight), 0.1, 1.0);
vColor         = color * vec4(vec3(diffuse), 1.0);
First, a variable invLight of type vec3 is declared and a calculation is performed. The normalize at the start is a built-in function whose job is to normalize a vector.

Using this function, the result of multiplying the inverse of the model coordinate transformation matrix by the light vector is normalized. Even if the model has been rotated or otherwise transformed, the inverse matrix cancels the transformation. The .xyz appended after the calculation extracts the result as a proper three-component vector.

Next, the value of the float variable diffuse is calculated.

This is in fact the inner product of the normal and the light vector. Here clamp and dot are GLSL built-in functions: clamp limits a value to a given range, with the second parameter as the minimum and the third parameter as the maximum.

The reason for limiting the range is that the inner product of two vectors can be negative; the clamp guards against that situation.

The other built-in function, dot, computes the inner product; its parameters are the normal and the light vector after the inverse-matrix processing. This finally yields the light coefficient, which is multiplied by the vertex color, and the result is passed through the varying variable to the fragment shader. The fragment shader receives this value and decides the final color.
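For illustration only (not from the article; the function name diffuseColor is made up), the same calculation can be reproduced on the CPU in JavaScript, leaving out the inverse-matrix step:

> Sketch: the diffuse calculation reproduced in JavaScript
function diffuseColor(normal, light, color){
    // normalize the light vector (what normalize() does in GLSL)
    var len = Math.sqrt(light[0] * light[0] + light[1] * light[1] + light[2] * light[2]);
    var l = [light[0] / len, light[1] / len, light[2] / len];
    // inner product of the normal and the light vector (dot() in GLSL)
    var d = normal[0] * l[0] + normal[1] * l[1] + normal[2] * l[2];
    // limit to 0.1 ~ 1.0 so faces turned away keep a little color (clamp() in GLSL)
    d = Math.min(Math.max(d, 0.1), 1.0);
    // multiply the RGB elements by the coefficient; alpha stays unchanged
    return [color[0] * d, color[1] * d, color[2] * d, color[3]];
}

// a face pointing straight up, lit from directly above, keeps its full color
diffuseColor([0, 1, 0], [0, 1, 0], [0.5, 0.5, 0.5, 1.0]); // -> [0.5, 0.5, 0.5, 1.0]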


Append normal information to the VBO

There are quite a few changes this time; let's look at the JavaScript. The function from the previous article that generates the vertex data of the torus has been slightly modified so that it also returns normal information. Until now it returned only the three kinds of data position, color, and index; normal information must be returned as well. As described above, a normal is a vector that represents a direction and, like the position information, is expressed by the three elements x, y, and z. In addition, normals must be normalized so that their length is 1.

> Torus generation with normal information added
// function that generates a torus
function torus(row, column, irad, orad){
    var pos = new Array(), nor = new Array(),
        col = new Array(), idx = new Array();
    for(var i = 0; i <= row; i++){
        var r = Math.PI * 2 / row * i;
        var rr = Math.cos(r);
        var ry = Math.sin(r);
        for(var ii = 0; ii <= column; ii++){
            var tr = Math.PI * 2 / column * ii;
            var tx = (rr * irad + orad) * Math.cos(tr);
            var ty = ry * irad;
            var tz = (rr * irad + orad) * Math.sin(tr);
            // the normal: the direction from the tube's center ring to the vertex
            var rx = rr * Math.cos(tr);
            var rz = rr * Math.sin(tr);
            pos.push(tx, ty, tz);
            nor.push(rx, ry, rz);
            var tc = hsva(360 / column * ii, 1, 1, 1);
            col.push(tc[0], tc[1], tc[2], tc[3]);
        }
    }
    for(i = 0; i < row; i++){
        for(ii = 0; ii < column; ii++){
            r = (column + 1) * i + ii;
            idx.push(r, r + column + 1, r + 1);
            idx.push(r + column + 1, r + column + 2, r + 1);
        }
    }
    return [pos, nor, col, idx];
}
The function now returns the corresponding normal information as well. Note that the order of the elements in the array returned by the torus-generating function is [position information], [normal information], [vertex color], [index]. What the function does internally may not be obvious at a glance, but it performs the same processing as before; in addition, the normal information is normalized, and the part that outputs the torus's vertex coordinates and the part that outputs the normal information are handled separately. Next, look at the part that calls the torus-generating function.

> About vertex data processing
// get the attribute locations and store them in an array
var attLocation = new Array();
attLocation[0] = gl.getAttribLocation(prg, 'position');
attLocation[1] = gl.getAttribLocation(prg, 'normal');
attLocation[2] = gl.getAttribLocation(prg, 'color');

// store the element count of each attribute in an array
var attStride = new Array();
attStride[0] = 3;
attStride[1] = 3;
attStride[2] = 4;

// generate the vertex data of the torus
var torusData = torus(32, 32, 1.0, 2.0);
var position = torusData[0];
var normal   = torusData[1];
var color    = torusData[2];
var index    = torusData[3];

// generate the VBOs
var pos_vbo = create_vbo(position);
var nor_vbo = create_vbo(normal);
var col_vbo = create_vbo(color);
Unlike the previous demo, an array normal has been added in order to handle the normals, and a VBO is generated from this array.

To receive the normal information in the vertex shader, an attribute variable is declared, so don't forget to get its attribute location.
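For reference, binding each VBO to its attribute looks roughly like the following; a minimal sketch, assuming a helper in the style of the set_attribute function from the earlier articles of this series.

> Sketch: binding the VBOs to the attributes
function set_attribute(vbo, attL, attS){
    for(var i in vbo){
        // bind the buffer, enable the attribute, and describe its layout
        gl.bindBuffer(gl.ARRAY_BUFFER, vbo[i]);
        gl.enableVertexAttribArray(attL[i]);
        gl.vertexAttribPointer(attL[i], attS[i], gl.FLOAT, false, 0, 0);
    }
}

set_attribute([pos_vbo, nor_vbo, col_vbo], attLocation, attStride);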

In addition, new uniform variables have been added, so processing to get their uniform locations is needed as well.

> Uniform-related processing
// get the uniform locations and store them in an array
var uniLocation = new Array();
uniLocation[0] = gl.getUniformLocation(prg, 'mvpMatrix');
uniLocation[1] = gl.getUniformLocation(prg, 'invMatrix');
uniLocation[2] = gl.getUniformLocation(prg, 'lightDirection');
This may not be easy to grasp at first, but shaders and scripts are inseparable; the two must always be kept paired up in the code.
Adding light processing

So finally, let's look at how the light-related parameters are passed to the shader. First, the code.

> Defining light- and matrix-related data
// generation and initialization of each matrix
var mMatrix   = m.identity(m.create());
var vMatrix   = m.identity(m.create());
var pMatrix   = m.identity(m.create());
var tmpMatrix = m.identity(m.create());
var mvpMatrix = m.identity(m.create());
var invMatrix = m.identity(m.create());

// view x projection coordinate transformation matrix
m.lookAt([0.0, 0.0, 20.0], [0, 0, 0], [0, 1, 0], vMatrix);
m.perspective(45, c.width / c.height, 0.1, 100, pMatrix);
m.multiply(pMatrix, vMatrix, tmpMatrix);

// direction of the parallel light source
var lightDirection = [-0.5, 0.5, 0.5];
A vector with three elements, lightDirection, is defined; this time it describes light traveling toward the origin from the back left. Also, the matrix initialization section adds a new invMatrix. The data for invMatrix is set as follows.

> Definition and generation of the inverse matrix
// increment the counter
count++;

// calculate the angle from the counter
var rad = (count % 360) * Math.PI / 180;

// generate the model coordinate transformation matrix
m.identity(mMatrix);
m.rotate(mMatrix, rad, [0, 1, 1], mMatrix);
m.multiply(tmpMatrix, mMatrix, mvpMatrix);

// generate the inverse matrix from the model coordinate transformation matrix
m.inverse(mMatrix, invMatrix);

// register the uniform variables
gl.uniformMatrix4fv(uniLocation[0], false, mvpMatrix);
gl.uniformMatrix4fv(uniLocation[1], false, invMatrix);
gl.uniform3fv(uniLocation[2], lightDirection);
The inverse of the model coordinate transformation matrix is computed with minmatrix.js's built-in inverse function and passed to the appropriate uniform location, and the direction of the light, lightDirection, is set at the same time. The direction of the light never changes in this demo, so it does not really need to be set every frame; it is handled here together with the matrices just to keep things easy to follow. Note that because the light vector consists of three elements, uniform3fv is used, unlike for the matrices, and its parameters differ.


Summary

This got long. Sure enough, even a simple treatment of lighting takes a long explanation. The point is that in 3D rendering there is no way to fully simulate real light; it can only be approximated.

Fully simulating the physics of nature would require an enormous amount of computation, so techniques stand in for it: parallel light sources, normals, inverse matrices and the like are used to make the picture look as real as possible within limits. Understanding this article requires some mathematical knowledge: vectors, normals, matrices. These rarely come up in everyday life, but if you think them through carefully, you should be able to understand. The demo link is given at the end of the article; quite a lot changed this time, so all the code is posted at once.

Lufy: The code is too long, so I won't post it here; just open the demo directly in your browser and take a look.
Next time: a deeper look at lighting.


Demo: a torus drawn with a parallel light source: http://wgld.org/s/sample_009/


Please credit reprints: from Lufy_legend's blog, http://blog.csdn.net/lufy_legend

