[Getting started with WebGL] 21. Lighting with a parallel light source (directional light)
Note: This article is translated from http://wgld.org/; the original author is shanbenya (doxas). Where I add my own remarks, I mark them with [lufy:]. My research into WebGL is not yet deep and some technical terms are involved, so if anything is mistranslated, please correct me.
Running result of this demo
Last time we built a torus model shaped like a donut. Although no special new knowledge was involved, we did draw a full 3D model. So this time, let's look at light. There are many types of light in 3D rendering and many ways to use them, and studying light thoroughly is no easy task. In the real world, we see objects because the light they reflect enters our eyes; that is, without light, our eyes see nothing. In the world of 3D programming, however, a model can be rendered even without any light, and so far we have drawn polygons without using illumination at all. But once light is added to the simulated world, the 3D visual effect improves dramatically. The light introduced in this article comes from a parallel light source (directional light), which is a relatively simple kind of light to implement. Before covering parallel light sources in detail, let's talk briefly about light in general.
When simulating a parallel light source, shading must be handled as well; that is, the parts of the model facing away from the light should be darkened. Take a look at the running result of this demo and compare it with the demo from the previous article.
When processing light, the parts of the model that the light strikes should be bright, and the parts it does not reach should be dark. Without light, every surface has the same brightness; with light simulated, the unlit side receives a shadow. In WebGL, each color component ranges from 0 to 1, determined by the RGBA values. When processing light, the original RGBA values are multiplied by a coefficient that also ranges from 0 to 1: the lit side shows something close to the original color, while the unlit side ends up darker. For example, if each RGBA component is 0.5 and the light coefficient is 0.5, each component becomes 0.5 x 0.5 = 0.25, darker than the original color. Following this principle, we compute the light intensity and the color intensity separately and multiply them, which gives us light and shadow.
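The principle is simple enough to express in a few lines. Below is a minimal plain-JavaScript sketch (for illustration only, not part of the demo) of scaling a color by a light coefficient:

// scale the RGB components of a color by a light coefficient in [0, 1],
// leaving the alpha component unchanged
function shade(rgba, coefficient){
    return [rgba[0] * coefficient,
            rgba[1] * coefficient,
            rgba[2] * coefficient,
            rgba[3]];
}

shade([0.5, 0.5, 0.5, 1.0], 0.5); // -> [0.25, 0.25, 0.25, 1.0]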
So what is a parallel light source? It is a light source placed infinitely far away, whose rays remain parallel throughout the whole 3D space. The concept may sound hard to grasp, but it mainly means that the direction of the light is uniform: every model in 3D space is lit from the same direction, as in the figure below.
The yellow arrows indicate the direction of the light. A parallel light source is computationally cheap and easy to implement, so it is often used in 3D programming. To compute how light from such a source strikes a surface, you only need to know the light's direction; defining it as a single vector and passing that vector to the shader is enough. However, the direction of the light alone cannot produce a lighting effect; we also need the normal information of each vertex. So what is a normal? The following explains in detail.
Normal vectors and light vectors. Readers who do not know much about 3D programming or mathematics have probably never heard the word "normal". In short, a normal is a vector with a direction: in two-dimensional space it indicates the direction perpendicular to a line, and in 3D space a normal vector represents the direction a surface faces. But why do we need the normal, in addition to the direction of the light, to achieve an illumination effect? In the real world, light emitted from the sun or a lamp is reflected when it strikes the surface of an object, and the direction of the light changes.
The pink lines in the figure indicate the path of the light; when it collides with a surface, its direction changes. In other words, the orientation of the surface determines where the light bounces. Light in 3D graphics only simulates real light to a certain degree; there is no need to fully reproduce the path and motion of light in the real world, because a complete simulation would require far too much computation. For light from a parallel light source, we use the vertex normal and the direction of the light (the light vector) to approximate how much light each surface diffuses and reflects. If the light hits a surface perpendicularly, the surface reflects it fully, that is, the surface is strongly affected by the light; conversely, if the light merely grazes the surface, almost no light is diffused, as in the figure below.
If the angle between the light vector and the normal vector exceeds 90 degrees, the light has no effect at all. This calculation can be done with the inner product (dot product) of the two vectors. The inner product itself is not explained in detail here; if you want to know more, please look up the relevant material yourself. The inner product can be computed easily with a built-in shader function, so there is nothing to worry about: you only need to prepare the correct data and leave the remaining calculation to WebGL. That said, the vertex shader must be modified, and of course the JavaScript side needs changes too. Let's take a look.
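Here is a minimal plain-JavaScript sketch (for illustration only, not part of the demo) of the rule just described: the diffuse factor is the inner product of the unit normal and the unit light vector, and angles beyond 90 degrees yield a negative product, which is clamped away:

// inner (dot) product of two three-component vectors
function dot3(a, b){
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

var n = [0.0, 1.0, 0.0];                // normal: the surface faces straight up
Math.max(dot3(n, [0.0,  1.0, 0.0]), 0); // light from directly above -> 1.0 (full)
Math.max(dot3(n, [1.0,  0.0, 0.0]), 0); // light at exactly 90 degrees -> 0.0
Math.max(dot3(n, [0.0, -1.0, 0.0]), 0); // light from behind: -1.0, clamped to 0.0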
Modifying the vertex shader. Now let's look at the shader code. This time only the vertex shader changes: the lighting is calculated in the vertex shader and the result is passed to the fragment shader.
> Vertex shader code
attribute vec3 position;
attribute vec3 normal;
attribute vec4 color;
uniform   mat4 mvpMatrix;
uniform   mat4 invMatrix;
uniform   vec3 lightDirection;
varying   vec4 vColor;

void main(void){
    vec3  invLight = normalize(invMatrix * vec4(lightDirection, 0.0)).xyz;
    float diffuse  = clamp(dot(normal, invLight), 0.1, 1.0);
    vColor         = color * vec4(vec3(diffuse), 1.0);
    gl_Position    = mvpMatrix * vec4(position, 1.0);
}
This shader differs considerably from the previous demo's and may look complicated at first glance, so let's walk through the changes. First, the variables: a new attribute variable, normal, is added to store the vertex normal information. Two uniform variables are also added: invMatrix receives the inverse of the model coordinate transformation matrix, and lightDirection receives the light vector of the parallel light source.
What is an inverse matrix? The invMatrix variable added to the vertex shader holds the inverse of the model coordinate transformation matrix. Most people will not know what an inverse matrix is, so let's explain. Light from a parallel light source (directional light) needs only a light vector: every model in 3D space is lit from the same direction. But consider this: a model coordinate transformation can scale, shrink, rotate, and move a model. If the lighting were computed from just the normal and the light vector, the apparent direction and position of the light would be affected by the model's own orientation and position, and the originally correct light position and direction would be distorted by the model's coordinate transformation, giving wrong results. Therefore, we apply the complete inverse of the model's coordinate transformation to cancel its effect. If the model rotates 45 degrees around the X axis, rotating by 45 degrees in the opposite direction cancels the rotation, so even while the model rotates, the position of the light source and the direction of its light stay fixed. Likewise, scaling a model is a matrix multiplication, which can be canceled by multiplying with the inverse matrix. So for lighting we must prepare the inverse of the model coordinate transformation matrix; minMatrix.js provides a function that generates an inverse matrix, and this article uses it for the lighting calculation.
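As a quick sketch of the idea, using the minMatrix.js API that appears later in this article (the angle value here is hypothetical, purely for illustration):

// suppose the model is rotated 45 degrees around the X axis...
var mMatrix   = m.identity(m.create());
var invMatrix = m.identity(m.create());
m.rotate(mMatrix, Math.PI / 4, [1, 0, 0], mMatrix);

// ...then its inverse represents the opposite rotation; transforming the
// light vector by invMatrix cancels the model's rotation, so the light
// stays fixed in space while the model turns
m.inverse(mMatrix, invMatrix);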
Next, let's look at the calculation of the light coefficient in the shader. The code is as follows.
> Light coefficient calculation
vec3  invLight = normalize(invMatrix * vec4(lightDirection, 0.0)).xyz;
float diffuse  = clamp(dot(normal, invLight), 0.1, 1.0);
vColor         = color * vec4(vec3(diffuse), 1.0);
First, a variable of type vec3 named invLight is declared, and some calculation is performed on it. normalize is a GLSL built-in function that normalizes a vector. Here it normalizes the product of the inverse model coordinate transformation matrix and the light vector; if the model has undergone a transformation such as rotation, this multiplication cancels it. The .xyz appended to the computation extracts the result as a proper three-component vector. Next, the float variable diffuse is computed; this is the inner product of the normal and the light vector. Both clamp and dot are GLSL built-in functions. clamp limits a value to a given range: the second parameter is the minimum and the third is the maximum. The range must be limited because the inner product of two vectors can be negative, and clamping guards against that; note that the lower bound here is 0.1 rather than 0.0, so even faces turned completely away from the light keep a faint minimum brightness instead of going pure black. dot calculates the inner product; one argument is the normal, the other is the light vector after processing by the inverse matrix. Finally, the computed light coefficient is multiplied by the vertex color, and the result is passed to the varying variable. The fragment shader determines the final color from the value it receives.
Adding the normal information to the VBO. This change also involves the JavaScript side, so let's look at it now. The function from the previous article that generates the torus's vertex data is slightly modified so that it returns the normal information as well. Last time only positions, colors, and indices were returned; now the normals must also be returned. As described above, a normal is a vector representing a direction. Like position information, it is expressed with the three elements x, y, z; after normalization it is a unit vector, with each component lying between -1 and 1.
> Torus generation, with normal information added
// function to generate a torus
function torus(row, column, irad, orad){
    var pos = new Array(), nor = new Array(),
        col = new Array(), idx = new Array();
    for(var i = 0; i <= row; i++){
        var r = Math.PI * 2 / row * i;
        var rr = Math.cos(r);
        var ry = Math.sin(r);
        for(var ii = 0; ii <= column; ii++){
            var tr = Math.PI * 2 / column * ii;
            var tx = (rr * irad + orad) * Math.cos(tr);
            var ty = ry * irad;
            var tz = (rr * irad + orad) * Math.sin(tr);
            var rx = rr * Math.cos(tr);
            var rz = rr * Math.sin(tr);
            pos.push(tx, ty, tz);
            nor.push(rx, ry, rz);
            var tc = hsva(360 / column * ii, 1, 1, 1);
            col.push(tc[0], tc[1], tc[2], tc[3]);
        }
    }
    for(i = 0; i < row; i++){
        for(ii = 0; ii < column; ii++){
            r = (column + 1) * i + ii;
            idx.push(r, r + column + 1, r + 1);
            idx.push(r + column + 1, r + column + 2, r + 1);
        }
    }
    return [pos, nor, col, idx];
}
The function that generates the torus now returns the corresponding normal information as well. Note that the elements of the array it returns are in the order [position information], [normal information], [vertex color], [index]. What the function does internally may not be obvious at a glance, but the processing is the same as before, plus the handling of the normals: the part that outputs the torus's vertex coordinates and the part that outputs the normal information are handled separately. Each vertex's normal (rx, ry, rz) points outward from the center of the tube's circular cross-section to the vertex, and as computed it is already a unit vector. Next, let's look at the code that calls this torus-generating function.
> Processing of vertex data
// get attributeLocation and store in an array
var attLocation = new Array();
attLocation[0] = gl.getAttribLocation(prg, 'position');
attLocation[1] = gl.getAttribLocation(prg, 'normal');
attLocation[2] = gl.getAttribLocation(prg, 'color');

// store the element count of each attribute in an array
var attStride = new Array();
attStride[0] = 3;
attStride[1] = 3;
attStride[2] = 4;

// generate the torus's vertex data
var torusData = torus(32, 32, 1.0, 2.0);
var position = torusData[0];
var normal   = torusData[1];
var color    = torusData[2];
var index    = torusData[3];

// generate the VBOs
var pos_vbo = create_vbo(position);
var nor_vbo = create_vbo(normal);
var col_vbo = create_vbo(color);
Unlike the previous demo, an array normal has been added to handle the normals, and a VBO is generated from it. An attribute variable was declared to receive the normal information in the vertex shader, so don't forget to get its attributeLocation. In addition, a uniform variable was added, so a corresponding call to get its uniformLocation must be appended as well.
> Uniform-related processing
// get uniformLocation and store in an array
var uniLocation = new Array();
uniLocation[0] = gl.getUniformLocation(prg, 'mvpMatrix');
uniLocation[1] = gl.getUniformLocation(prg, 'invMatrix');
uniLocation[2] = gl.getUniformLocation(prg, 'lightDirection');
This may be hard to grasp at first, but the shader and the script are inseparable: every attribute and uniform variable declared in the shader must have a counterpart in the script, so the two always appear in pairs in the code.
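For completeness, here is a minimal sketch of how the new normal VBO is bound to its attribute alongside the others (the actual demo may wrap this pattern in a helper function; the raw WebGL calls are shown here):

// bind the normal VBO and point the 'normal' attribute at it
gl.bindBuffer(gl.ARRAY_BUFFER, nor_vbo);
gl.enableVertexAttribArray(attLocation[1]);
gl.vertexAttribPointer(attLocation[1], attStride[1], gl.FLOAT, false, 0, 0);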
Adding the light processing. Finally, let's look at how the light-related parameters are passed to the shader. First, the code.
> Definition of the light and matrix-related data
// generate and initialize each matrix
var mMatrix   = m.identity(m.create());
var vMatrix   = m.identity(m.create());
var pMatrix   = m.identity(m.create());
var tmpMatrix = m.identity(m.create());
var mvpMatrix = m.identity(m.create());
var invMatrix = m.identity(m.create());

// view x projection coordinate transformation matrix
m.lookAt([0.0, 0.0, 20.0], [0, 0, 0], [0, 1, 0], vMatrix);
m.perspective(45, c.width / c.height, 0.1, 100, pMatrix);
m.multiply(pMatrix, vMatrix, tmpMatrix);

// direction of the parallel light source
var lightDirection = [-0.5, 0.5, 0.5];
A vector lightDirection with three elements is defined; the light defined this time travels toward the origin from the back left. In addition, a new matrix, invMatrix, is added in the matrix initialization section. It is filled in as follows.
> Definition and generation of the inverse matrix
// increment the counter
count++;

// calculate the rotation angle from the counter
var rad = (count % 360) * Math.PI / 180;

// generate the model coordinate transformation matrix
m.identity(mMatrix);
m.rotate(mMatrix, rad, [0, 1, 1], mMatrix);
m.multiply(tmpMatrix, mMatrix, mvpMatrix);

// generate the inverse matrix from the model coordinate transformation matrix
m.inverse(mMatrix, invMatrix);

// pass the uniform variables
gl.uniformMatrix4fv(uniLocation[0], false, mvpMatrix);
gl.uniformMatrix4fv(uniLocation[1], false, invMatrix);
gl.uniform3fv(uniLocation[2], lightDirection);
The inverse function built into minMatrix.js computes the inverse of the model coordinate transformation matrix, which is then passed, along with lightDirection, to the correct uniformLocations. The direction of the light never changes in this demo, so strictly speaking it does not need to be set on every iteration of the loop, but it is handled together with the rest here for clarity. Note that because the light vector is a vector of three elements rather than a matrix, it is passed with uniform3fv, whose parameters differ from those of uniformMatrix4fv.
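To show how these pieces fit together, here is a hedged sketch of one frame of the render loop, following the pattern of the previous articles in this series (the clear values are illustrative; the actual demo code is linked at the end):

// clear the canvas
gl.clearColor(0.0, 0.0, 0.0, 1.0);
gl.clearDepth(1.0);
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);

// ...counter, matrix, and uniform processing shown above...

// draw the torus using the index buffer, then flush
gl.drawElements(gl.TRIANGLES, index.length, gl.UNSIGNED_SHORT, 0);
gl.flush();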
Summary. This has run long; even put simply, light processing takes a lot of explaining. The point is that there is no way to fully simulate real-world light in 3D rendering, but the approach is roughly as described: fully simulating the physics of nature would demand an enormous amount of computation, so instead we introduce techniques such as the parallel light source, normals, and the inverse matrix, and try, to a reasonable degree, to make the image look real. Understanding this article requires some mathematical knowledge; vectors, normals, and matrices do not come up in everyday life, but if you think them through, they should be understandable. The demo link is given at the end of the article. This change touches a lot of code, so the original article pastes all of it at once. [lufy: the code is too long, so I will not post it here; you can open the demo and inspect it in your browser.]
Next time, we will look further into lighting.
Demo: http://wgld.org/s/sample_009/
Please indicate the source when reprinting.