Note: This article is translated from http://wgld.org/; the original author is doxas. Where I have added notes of my own, I mark them with [Lufy:]. My study of WebGL is not very deep, so if any technical terms are mistranslated, corrections are welcome.
The result of running this demo:
Illuminate the world

Last time, a doughnut-like torus model was drawn; although no special new knowledge was involved, a 3D model was successfully rendered. So this time, let's look at light. There are many kinds of light in 3D rendering and many ways to use them, and studying light thoroughly is not easy. In the real world, we can see objects because light reflected by them enters our eyes; in other words, without light our eyes can see nothing. In the world of 3D programming, however, a model can be rendered even without light: no lighting has been used so far, and the polygons were drawn all the same. Still, adding light to the simulated world greatly enhances the visual effect of 3D. The light introduced this time is the light emitted from a parallel light source (directional light), which is one of the simpler kinds of light to implement. Before introducing the parallel light source in detail, let's talk briefly about light itself.
Simulation of light

Light emitted by a parallel light source has to deal with how the light is blocked, that is, with the effect of shading. Compare the result of this demo with the demo from the previous article and the difference should be clear.
When dealing with light, the parts that the light strikes should be bright, and the parts it does not strike should be dark; if there were no light, all colors would have the same brightness. So when simulating light, shading has to be added to the side that receives no light. In WebGL, the intensity of each color component lies between 0 and 1, determined by the values set for the RGBA elements. When processing light, the original RGBA values are multiplied by a coefficient, which also lies in the range 0 to 1: the lit side keeps a color close to the original, while the unlit side uses a darker color. For example, if an RGBA element is 0.5 and the light coefficient is 0.5, multiplying them gives 0.5 × 0.5 = 0.25, which is darker than the original color. Following this principle, the intensity of the light and the intensity of the color are computed separately and then multiplied, and light and shadow can be produced in the end.
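As a quick illustration of that multiplication, here is a plain JavaScript sketch (not part of the demo code; the function name is purely illustrative):

// darken a color by multiplying each RGB element by a light coefficient;
// alpha is left untouched, just as the shader later in this article does
function shade(rgba, coefficient){
    return [rgba[0] * coefficient,
            rgba[1] * coefficient,
            rgba[2] * coefficient,
            rgba[3]];
}

// shade([0.5, 0.5, 0.5, 1.0], 0.5) yields [0.25, 0.25, 0.25, 1.0]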
What is a parallel light source

A parallel light source emits light from infinitely far away, so the emitted rays stay parallel everywhere in three-dimensional space. The concept may sound hard to grasp; the main point is that the direction of the light never changes, and every model in the three-dimensional space is illuminated from the same direction.
The yellow arrows in the figure indicate the direction of the light. A parallel light source is computationally cheap and relatively simple to implement, so it is often used in 3D programming. To handle the light emitted by a parallel light source, the direction of the light must be known; it can be defined as a vector and passed to the shader. In fact, however, the direction of the light alone is not enough to achieve a lighting effect: the normal information of the vertices is also needed. So what is a normal? The details follow.
Normal vector and light vector

People who don't know much about 3D programming and math rarely hear the word "normal". Simply put, a normal is a vector with a direction: in two-dimensional space it represents the direction perpendicular to a line, and in three-dimensional space the normal vector represents the direction a polygon faces. But why are normals needed in addition to the direction of the light when implementing lighting? In the real world, light emitted from the sun or a lamp is reflected by objects: when light from a source touches the surface of an object, reflection occurs and the direction of the light changes.
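A face normal can be computed from a triangle's vertices as the normalized cross product of two of its edges. A minimal JavaScript sketch (the function name is illustrative; it is not part of the demo):

// compute the unit normal of a triangle given three [x, y, z] vertices
function faceNormal(p0, p1, p2){
    // two edge vectors of the triangle
    var ax = p1[0] - p0[0], ay = p1[1] - p0[1], az = p1[2] - p0[2];
    var bx = p2[0] - p0[0], by = p2[1] - p0[1], bz = p2[2] - p0[2];
    // the cross product of the two edges is perpendicular to the face
    var nx = ay * bz - az * by;
    var ny = az * bx - ax * bz;
    var nz = ax * by - ay * bx;
    // normalize to unit length
    var len = Math.sqrt(nx * nx + ny * ny + nz * nz);
    return [nx / len, ny / len, nz / len];
}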
The pink lines in the figure represent the paths of the light rays; when a ray collides with a face, its direction changes. In this way, the orientation of the surfaces that make up a model determines the paths of the rays. Light in 3D graphics only simulates real light to a certain extent; there is no need to fully reproduce the paths and behavior of light in the real world, because a full simulation would require far too much computation. The light emitted by the parallel light source this time uses the normal vector of each vertex and the direction of the light (the light vector) to approximate the diffusion and reflection of the light. If the light strikes a face perpendicularly, the face reflects the light completely, that is, the effect is strongest. Conversely, a face the light does not reach diffuses no light at all, as shown in the figure.
If the angle between the light vector and the normal vector is more than 90 degrees, the light has no effect. This can be calculated using the inner product of the two vectors. (The inner product is not explained in detail here; those who want to know more can look up the relevant material on their own.) The inner product is easily computed with one of the shader's built-in functions, so there is nothing to fear: as long as the right data is prepared, the rest of the calculation can be left to WebGL. So this time the vertex shader has to be modified, and of course the JavaScript side as well. Let's take it step by step.
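In plain JavaScript, the diffuse coefficient just described could be sketched like this (assuming both vectors are already normalized; this mirrors what the shader below does and is not part of the demo code):

// inner (dot) product of two three-element vectors
function dot(a, b){
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

// clamp the result so that back-facing surfaces keep a little brightness
function diffuseCoefficient(normal, lightDirection){
    var d = dot(normal, lightDirection);
    return Math.min(Math.max(d, 0.1), 1.0); // same as GLSL clamp(d, 0.1, 1.0)
}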
Shader for the directional light

So, let's look at the shader first. This time only the vertex shader is modified: the lighting is computed in the vertex shader, and the result is passed to the fragment shader.

> Code for the vertex shader
attribute vec3 position;
attribute vec3 normal;
attribute vec4 color;
uniform   mat4 mvpMatrix;
uniform   mat4 invMatrix;
uniform   vec3 lightDirection;
varying   vec4 vColor;

void main(void){
    vec3  invLight = normalize(invMatrix * vec4(lightDirection, 0.0)).xyz;
    float diffuse  = clamp(dot(normal, invLight), 0.1, 1.0);
    vColor         = color * vec4(vec3(diffuse), 1.0);
    gl_Position    = mvpMatrix * vec4(position, 1.0);
}
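The fragment shader is not listed in this article. Judging from the varying vColor above and the earlier articles in this series, a minimal counterpart simply outputs the interpolated color; a sketch, not necessarily the author's exact code:

// minimal fragment shader paired with the vertex shader above:
// it just outputs the color interpolated from the vertices
var fragmentShaderSource =
    'precision mediump float;' +
    'varying vec4 vColor;' +
    'void main(void){' +
    '    gl_FragColor = vColor;' +
    '}';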
This is quite different from the previous demos and looks complex at first glance, so let's go over the specific changes. First, the variables. A new attribute variable, normal, is added to the shader to store the normal information of each vertex. Two uniform variables are added as well: one, invMatrix, receives the inverse of the model coordinate transformation matrix; the other, lightDirection, receives the direction of the light, that is, the vector of the light emitted by the parallel light source.
What is an inverse matrix?

The invMatrix added to the vertex shader this time is the variable that holds the inverse of the model coordinate transformation matrix, and most people probably don't know what an inverse matrix is. The light emitted by a parallel light source (directional light) is normally described by a light vector, and every model in the three-dimensional space is illuminated from the same direction. However, remember that through the model coordinate transformation a model can be scaled, rotated, and moved. If the lighting were computed only from the normal and the light vector, it would be affected by the orientation and position of the model: the direction and position of the light, which should be fixed, would be distorted by the model coordinate transformation, and the correct result could not be obtained. Therefore, the influence of the model coordinate transformation is cancelled by applying its complete inverse transformation. When the model rotates 45 degrees around the x-axis, the inverse matrix rotates 45 degrees in the opposite direction; the rotation is cancelled, so the position of the light source and the direction of the light stay fixed even while the model rotates. Similarly, scaling the model is a matrix multiplication that can be cancelled by multiplying by the inverse matrix. For all this, an inverse of the model coordinate transformation matrix must be prepared for the lighting; minMatrix.js, used on this site, provides a function for generating the inverse matrix, and it is used for the lighting calculation.
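As a concrete sketch of the idea (using the matIV object from minMatrix.js, the same utility used later in this article):

var m = new matIV();                     // minMatrix.js matrix utility
var mMatrix   = m.identity(m.create());  // model coordinate transformation
var invMatrix = m.identity(m.create());

// rotate the model 45 degrees around the x-axis
m.rotate(mMatrix, Math.PI / 4, [1, 0, 0], mMatrix);

// the inverse matrix encodes the opposite rotation; multiplying the light
// vector by it in the shader cancels the model's rotation for the lighting
m.inverse(mMatrix, invMatrix);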
Next, the illumination coefficient has to be computed where the light strikes; that is the code below.

> Calculation of the illumination coefficient
vec3  invLight = normalize(invMatrix * vec4(lightDirection, 0.0)).xyz;
float diffuse  = clamp(dot(normal, invLight), 0.1, 1.0);
vColor         = color * vec4(vec3(diffuse), 1.0);
First, a variable invLight of type vec3 is declared and some calculation is performed. normalize is a built-in function that normalizes a vector; using it, the result of multiplying the inverse of the model coordinate transformation matrix by the light vector is normalized. If the model has been rotated or otherwise transformed, the inverse transformation cancels it. The calculation is followed by .xyz, which takes the result back out as a proper three-element vector. Next, the value of the float variable diffuse is obtained. This is exactly the inner product of the normal and the light vector; clamp and dot here are GLSL built-in functions. clamp limits a value to a certain range: the second parameter is the minimum, and the third parameter is the maximum. The limit is needed because the inner product of two vectors can take negative values, and this prevents that. The other built-in function, dot, computes the inner product; one of its arguments is the normal, the other is the light vector after the inverse-matrix processing. Finally, the computed light coefficient is multiplied by the vertex color, and the result is passed to the varying variable. In the fragment shader, the final color is determined from the received value.
Append normal information to the VBO

There are quite a few places to change this time, and the JavaScript needs a look as well. The function from the previous article that generates the vertex data for the torus is slightly modified so that it returns the normal information together with the rest. So far it returned only position, color, and index; now the normal information has to be returned too. As described above, a normal is a vector representing a direction, expressed by the three elements x, y, and z just like the position information. In addition, the normal must be normalized, that is, it must be a vector of length 1.

> Torus generation with normal information added
// function that generates the torus
function torus(row, column, irad, orad){
    var pos = new Array(), nor = new Array(),
        col = new Array(), idx = new Array();
    for(var i = 0; i <= row; i++){
        var r = Math.PI * 2 / row * i;
        var rr = Math.cos(r);
        var ry = Math.sin(r);
        for(var ii = 0; ii <= column; ii++){
            var tr = Math.PI * 2 / column * ii;
            var tx = (rr * irad + orad) * Math.cos(tr);
            var ty = ry * irad;
            var tz = (rr * irad + orad) * Math.sin(tr);
            var rx = rr * Math.cos(tr);
            var rz = rr * Math.sin(tr);
            pos.push(tx, ty, tz);
            nor.push(rx, ry, rz);
            var tc = hsva(360 / column * ii, 1, 1, 1);
            col.push(tc[0], tc[1], tc[2], tc[3]);
        }
    }
    for(i = 0; i < row; i++){
        for(ii = 0; ii < column; ii++){
            r = (column + 1) * i + ii;
            idx.push(r, r + column + 1, r + 1);
            idx.push(r + column + 1, r + column + 2, r + 1);
        }
    }
    return [pos, nor, col, idx];
}
The torus-generating function now also returns the corresponding normal information. Note that the order of the elements in the returned array is [position information], [normal information], [vertex color], [index]. It may be hard to see what the function is doing inside, but it performs the same processing as before while additionally outputting normalized normal information; the vertex coordinates of the torus and the normals are output as separate parts. The normals come out normalized by construction: for each vertex, rx² + ry² + rz² = cos²(r)(cos²(tr) + sin²(tr)) + sin²(r) = 1, so every normal already has length 1. Next, let's look at the part that calls the torus-generating function.

> Vertex data processing
// get attributeLocation and store it in an array
var attLocation = new Array();
attLocation[0] = gl.getAttribLocation(prg, 'position');
attLocation[1] = gl.getAttribLocation(prg, 'normal');
attLocation[2] = gl.getAttribLocation(prg, 'color');

// store the number of elements of each attribute in an array
var attStride = new Array();
attStride[0] = 3;
attStride[1] = 3;
attStride[2] = 4;

// generate the vertex data of the torus
var torusData = torus(32, 32, 1.0, 2.0);
var position = torusData[0];
var normal   = torusData[1];
var color    = torusData[2];
var index    = torusData[3];

// generate the VBOs
var pos_vbo = create_vbo(position);
var nor_vbo = create_vbo(normal);
var col_vbo = create_vbo(color);
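create_vbo is the helper function introduced earlier in this series; for reference, it is typically written as follows (a sketch matching the earlier articles, not code listed in this post):

// create a vertex buffer object from a plain array of numbers
function create_vbo(data){
    var vbo = gl.createBuffer();          // generate the buffer object
    gl.bindBuffer(gl.ARRAY_BUFFER, vbo);  // bind it
    gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(data), gl.STATIC_DRAW);
    gl.bindBuffer(gl.ARRAY_BUFFER, null); // unbind
    return vbo;
}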
Unlike the previous demo, a normal array is added in order to handle the normals, and a VBO is generated from that array. To receive the normal information in the vertex shader, an attribute variable is declared, so don't forget to get its attributeLocation. In addition, uniform variables have been added, so the processing that gets their uniformLocation also has to be extended.

> uniform-related processing
// get uniformLocation and store it in an array
var uniLocation = new Array();
uniLocation[0] = gl.getUniformLocation(prg, 'mvpMatrix');
uniLocation[1] = gl.getUniformLocation(prg, 'invMatrix');
uniLocation[2] = gl.getUniformLocation(prg, 'lightDirection');
It may not be easy to grasp at first, but the shader and the script are inseparable: the two sides must always appear in matching pairs in the code.
Add the processing for the light

So finally, let's look at how the light-related parameters are passed to the shader. First, the code.

> Defining the light- and matrix-related data
// generation and initialization of each matrix
var mMatrix   = m.identity(m.create());
var vMatrix   = m.identity(m.create());
var pMatrix   = m.identity(m.create());
var tmpMatrix = m.identity(m.create());
var mvpMatrix = m.identity(m.create());
var invMatrix = m.identity(m.create());

// view x projection coordinate transformation matrix
m.lookAt([0.0, 0.0, 20.0], [0, 0, 0], [0, 1, 0], vMatrix);
m.perspective(45, c.width / c.height, 0.1, 100, pMatrix);
m.multiply(pMatrix, vMatrix, tmpMatrix);

// direction of the parallel light source
var lightDirection = [-0.5, 0.5, 0.5];
A vector lightDirection with three elements is defined; the light defined this time shines toward the origin from the upper left. In addition, the matrix initialization part gains a new invMatrix, and the data for invMatrix is generated as follows.

> Definition and generation of the inverse matrix
// increment the counter
count++;

// calculate the rotation angle from the counter
var rad = (count % 360) * Math.PI / 180;

// generate the model coordinate transformation matrix
m.identity(mMatrix);
m.rotate(mMatrix, rad, [0, 1, 1], mMatrix);
m.multiply(tmpMatrix, mMatrix, mvpMatrix);

// generate the inverse matrix from the model coordinate transformation matrix
m.inverse(mMatrix, invMatrix);

// set the uniform variables
gl.uniformMatrix4fv(uniLocation[0], false, mvpMatrix);
gl.uniformMatrix4fv(uniLocation[1], false, invMatrix);
gl.uniform3fv(uniLocation[2], lightDirection);
The inverse of the model coordinate transformation matrix is computed with minMatrix.js's built-in inverse function, the appropriate uniformLocation is specified, and the direction of the light, lightDirection, is set. The direction of the light is constant this time, so strictly speaking it doesn't need to be set on every loop, but it is handled together with the rest for clarity. Note that since the light vector is a three-element vector, gl.uniform3fv is used, unlike for the matrices, and the number of parameters differs as well.
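The rest of the render loop, which actually draws the torus from the index buffer, is unchanged from the previous article; roughly (a sketch, assuming the IBO has been created and bound as before):

// draw the torus using the bound index buffer, then flush
gl.drawElements(gl.TRIANGLES, index.length, gl.UNSIGNED_SHORT, 0);
gl.flush();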
Summary

This ran long; sure enough, even put simply, the handling of light needs a lengthy description. The point is that there is no way to completely simulate real-world light in 3D rendering; it is only an approximation. Fully simulating the physics of nature would require an enormous amount of computation, so instead the techniques introduced here, a parallel light source, normals, and the inverse matrix, are used to make the picture look as real as possible within limits. Understanding this article requires a certain amount of mathematical knowledge: vectors, normals, matrices. These don't come up in everyday life, but with a little thought they should be understandable. A link to the demo is given at the end of the article; since the changes this time are numerous, the original post lists all of the code at once. [Lufy: The code is too long, so I won't paste it; just open the demo directly in a browser and view it.]
Next time, the topic of light will be introduced in further depth.
The demo: a torus drawn with a parallel light source.
http://wgld.org/s/sample_009/
Reprint notice: from Lufy_legend's blog, http://blog.csdn.net/lufy_legend
[WebGL Primer] 21. Light emitted from a parallel light source