"Giser&&Painter" Chapter 02: Model-View Transformations in WebGL


In the previous section we covered how to draw a simple piece of geometry on a canvas: create a canvas, get the WebGL rendering context, create a pair of simple shaders, bind some vertex data to a GL buffer, tell GL how to read the vertex data from that buffer, and finally issue a draw call so that the results are flushed from the buffer to the frame buffer. The overall flow is clear, but if we compare it with the full OpenGL pipeline we find that some important processing steps are missing: although we wrote our own shaders, there is nothing in them comparable to the model-view transformation or the perspective projection transformation of the vertex-processing pipeline; we simply put a static shape on the screen. So in this section we rebuild a shader that can stand in for the vertex-processing pipeline, and use it to apply transformations to our geometry.

1. Variables in the shader

As mentioned in the previous section, there are four kinds of variable qualifiers in a shader: attribute, uniform, varying and const (a minimal shader sketch illustrating all four follows this list).

- uniform: global variables, which can appear in both the vertex shader and the fragment shader. Properties shared by all vertices are usually declared with this qualifier.

- attribute: vertex-shader-only variables that receive input data from the outside; they are mainly used to carry per-vertex information.

- varying: as mentioned in the previous article, a varying acts as a messenger, passing values from the vertex shader to the fragment shader. It is worth noting that the data type of a varying is limited to float, vec2, vec3, vec4, mat2, mat3 and mat4. The reason is that data leaving the vertex shader normally passes through the rasterizer on its way to the fragment shader, and during rasterization the value for each pixel is interpolated from the vertex values; if a vertex value were of some text/character type, that interpolation could not be performed.

- const: constants, which must be initialized with constant values and cannot be modified afterwards.
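To make the four qualifiers concrete, here is a minimal sketch of a vertex/fragment shader pair, written as JavaScript string constants; the identifiers (a_position, a_color, u_matrix, v_color) are illustrative and not taken from the original demo:

const VERTEX_SHADER_SOURCE = `
  // const: a compile-time constant, fixed once it is written
  const float PI = 3.141592653589793;

  // attribute: per-vertex input supplied by the application, vertex shader only
  attribute vec2 a_position;
  attribute vec4 a_color;

  // uniform: a global, read-only value shared by all vertices
  uniform mat3 u_matrix;

  // varying: handed to the rasterizer and interpolated per fragment
  varying vec4 v_color;

  void main() {
    gl_Position = vec4((u_matrix * vec3(a_position, 1)).xy, 0, 1);
    v_color = a_color;
  }
`;

const FRAGMENT_SHADER_SOURCE = `
  precision mediump float;

  // the interpolated value arrives here under the same name
  varying vec4 v_color;

  void main() {
    gl_FragColor = v_color;
  }
`;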

2. Affine transformations and matrices

As we all know, the screen is made up of individual pixels, and the geometry we draw on the screen is likewise made up of pixels. So we can turn the problem of transforming geometry into a problem of transforming pixels: if we operate on every pixel that belongs to the shape, we have transformed the shape.

So how do we apply geometric transformations to pixels? In general, we treat the screen as a coordinate system; the geometry on the screen lives inside that coordinate system, and each pixel can be regarded as a point in screen coordinates. In this way the pixel problem becomes a problem about the positions of points in a coordinate system. Going from shape to pixel to point, we keep decomposing an abstract, complex problem until it becomes something convenient to solve, which is a basic strategy for tackling problems.

The mapping of the real world to the mathematical world

Now that we have moved the problem into a coordinate system, we can solve it with the mathematics we already know. Take a two-dimensional coordinate system as an example: to translate a point A(x, y) to another point A' is very simple, A' has coordinates (x + a, y + b); likewise, translating every point of a shape translates the whole shape. But what if we want to rotate a shape? Or scale it up and down? These transformations are not as intuitive as translation, so we need some extra machinery to handle them.

Homogeneous coordinates: here we need to bring in the homogeneous coordinates mentioned in the OpenGL basics and add one dimension to the coordinates, which makes it much more convenient to represent higher-dimensional features in a lower-dimensional space. A good explanation of homogeneous coordinates can be found at http://blog.csdn.net/janestar/article/details/44244849; that post uses homogeneous coordinates to explain why vanishing points appear in planar space, which is quite interesting. So in the planar Cartesian coordinate system we can use the homogeneous coordinates (x, y, 1) to represent a point, and suddenly it looks as if multiplying points by a matrix could give us rotation about the origin and scaling of the shape!

The idea is to represent each point with homogeneous coordinates (x, y, 1), multiply those coordinates by a 3×3 transformation matrix, and obtain the transformed homogeneous coordinates (x', y', 1); x' and y' are then the transformed coordinates. This is the basic idea of an affine transformation of coordinates.
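As a small sketch of that idea in code, using the row-vector convention that the matrices below also follow (the function name transformPoint is mine, not from the original article):

// p = [x, y, 1] in homogeneous coordinates, m = a 3x3 matrix stored as a flat
// row-major array of 9 numbers; returns p · m as a new homogeneous point
function transformPoint(p, m) {
  var x = p[0], y = p[1];
  return [
    x * m[0] + y * m[3] + m[6],
    x * m[1] + y * m[4] + m[7],
    x * m[2] + y * m[5] + m[8],
  ];
}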

           

The translation matrix PY, rotation matrix XZ and scaling matrix SF (written for row vectors, i.e. the point multiplies the matrix from the left):

     | 1   0   0 |          | cosα   -sinα   0 |          | sx   0   0 |
PY = | 0   1   0 |     XZ = | sinα    cosα   0 |     SF = |  0  sy   0 |
     | tx  ty  1 |          |   0       0    1 |          |  0   0   1 |

Multiplying the point (x, y, 1) by PY, XZ or SF gives, respectively:

(x + tx, y + ty, 1)     (x·cosα + y·sinα, y·cosα - x·sinα, 1) ①     (x·sx, y·sy, 1)

① The rotation formula can be proved as follows (essentially by using the angle-sum identities for sine and cosine on the unit circle):

Suppose that in the plane Cartesian coordinate system with origin O there is a circle of radius r centered at the origin. Point A = (x, y) lies on the circle and the segment OA makes an angle α with the x-axis. Rotate OA counterclockwise about the origin O by β degrees, so that A moves to the point B = (x', y'):

∵ |OA| = x / cosα = y / sinα;  |OB| = x' / cos(α + β) = y' / sin(α + β)

  r = |OA| = |OB|

∴ x' = r·cos(α + β),  y' = r·sin(α + β)

Expanding with the angle-sum identities:

x' = r·(cosα·cosβ - sinα·sinβ) = x·cosβ - y·sinβ

y' = r·(sinα·cosβ + cosα·sinβ) = y·cosβ + x·sinβ

Note that this derivation is for a counterclockwise rotation; the rotation result given above corresponds to β < 0 (a clockwise rotation), which is why its signs differ from the formulas here.
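As a sketch of those three matrices in code, following the row-vector convention above and the flat 9-element array layout used by the multiply function later in this article (the helper names translation, rotation and scaling are mine; the m3 helpers in the webglfundamentals tutorial are organized along similar lines, so treat this as an approximation rather than that library's exact API):

// 3x3 matrices stored as flat row-major arrays, to be used with row vectors,
// i.e. transformed point = (x, y, 1) · M

function translation(tx, ty) {
  // the offsets sit in the bottom row for the row-vector convention
  return [
    1, 0, 0,
    0, 1, 0,
    tx, ty, 1,
  ];
}

function rotation(angleInRadians) {
  var c = Math.cos(angleInRadians);
  var s = Math.sin(angleInRadians);
  // matches (x·cosα + y·sinα, y·cosα - x·sinα, 1) above
  return [
    c, -s, 0,
    s, c, 0,
    0, 0, 1,
  ];
}

function scaling(sx, sy) {
  return [
    sx, 0, 0,
    0, sy, 0,
    0, 0, 1,
  ];
}

For example, transformPoint([1, 0, 1], rotation(Math.PI / 2)) gives approximately [0, -1, 1], i.e. the clockwise behaviour described in the note above.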

3. Successive transformations

From the brief introduction above you should be able to see how to apply an affine transformation to geometry in the plane Cartesian coordinate system. In practice, however, we usually want to apply several kinds of transformation to one shape.

Very simply, we can assume there is a composite transformation matrix Z = f(PY, XZ, SF). Then, obviously, (x, y, 1)·Z gives a point that 1) is first scaled by sx and sy, 2) is then rotated by α radians, and 3) finally has its x and y translated by tx and ty units respectively. Notice an interesting phenomenon here: we fed the matrices into the row-vector multiplication in the order translation PY, rotation XZ, scaling SF, yet in the verbal description just now I explained the steps in the reverse order. Why is that? Because the matrix is stored as a flat two-dimensional array, and when WebGL passes it to the shader, the shader consumes that data column by column.

In general, multiplying two 3×3 matrices can be written as follows:

function multiply(a, b) {
  var a00 = a[0 * 3 + 0];
  var a01 = a[0 * 3 + 1];
  var a02 = a[0 * 3 + 2];
  var a10 = a[1 * 3 + 0];
  var a11 = a[1 * 3 + 1];
  var a12 = a[1 * 3 + 2];
  var a20 = a[2 * 3 + 0];
  var a21 = a[2 * 3 + 1];
  var a22 = a[2 * 3 + 2];
  var b00 = b[0 * 3 + 0];
  var b01 = b[0 * 3 + 1];
  var b02 = b[0 * 3 + 2];
  var b10 = b[1 * 3 + 0];
  var b11 = b[1 * 3 + 1];
  var b12 = b[1 * 3 + 2];
  var b20 = b[2 * 3 + 0];
  var b21 = b[2 * 3 + 1];
  var b22 = b[2 * 3 + 2];
  return [
    b00 * a00 + b01 * a10 + b02 * a20,
    b00 * a01 + b01 * a11 + b02 * a21,
    b00 * a02 + b01 * a12 + b02 * a22,
    b10 * a00 + b11 * a10 + b12 * a20,
    b10 * a01 + b11 * a11 + b12 * a21,
    b10 * a02 + b11 * a12 + b12 * a22,
    b20 * a00 + b21 * a10 + b22 * a20,
    b20 * a01 + b21 * a11 + b22 * a21,
    b20 * a02 + b21 * a12 + b22 * a22,
  ];
}

Looking at the code above you can see that although the input parameters are a and b, the product that is actually computed is b·a. The common WebGL matrix utilities behave the same way: even though, judging by the API calls, the input order is translation, rotation, scaling, the actual implementation ends up applying scaling, rotation, translation. To be honest I did not really understand at first why it is done this way; it feels rather counter-intuitive. Some material suggests looking at it from the angle of transforming the space itself: imagine that what is being transformed is the space rather than the geometry inside it, so translation, rotation and scaling are operations on the space, and as the space changes the position coordinates of the geometry change passively along with it (because the position of the geometry is entirely determined by the coordinate values supplied by the space's coordinate system, the geometry itself holds no independent location information). For details see: https://webglfundamentals.org/webgl/lessons/zh_cn/webgl-2d-matrices.html

<!-- 2D vertex shader -->
<!-- attribute: supplied by the application, visible only to the vertex shader -->
<!-- uniform: global variables passed to the shader from outside, read-only, cannot be modified -->
<!-- varying: passes values from the vertex shader to the fragment shader -->
<script id="2d-vertex-shader" type="x-shader/x-vertex">
  attribute vec2 a_position;
  uniform vec2 u_resolution;
  // matrix
  uniform mat3 u_matrix;

  void main() {
    // 2d-vertex-shader:
    gl_Position = vec4((u_matrix * vec3(a_position, 1)).xy, 0, 1);
  }
</script>

function draw() {
  .....
  // Operation order: scale, rotate, translate:
  // translationMatrix: translation matrix
  // rotationMatrix: rotation matrix
  // scaleMatrix: scaling matrix
  var matrix = m3.multiply(projectionMatrix, translationMatrix);
  matrix = m3.multiply(matrix, rotationMatrix);
  matrix = m3.multiply(matrix, scaleMatrix);

  // u_matrix is a mat3, so uniformMatrix3fv is used here
  gl.uniformMatrix3fv(matrixLocation, false, matrix);

  // Draw
  gl.enable(gl.DEPTH_TEST);
  gl.enable(gl.CULL_FACE);

  // Issue the draw call
  var primitiveType = gl.TRIANGLES;
  var offset = 0;
  var count = 16 * 6;
  gl.drawArrays(primitiveType, offset, count);
}

PS: Different transformation orders, for example translate-rotate-scale versus scale-rotate-translate, produce two completely different results. If you translate first and then scale, the translation distance gets stretched or shortened by the scaling and the result can completely deviate from what was intended. In a typical transformation pipeline it is therefore usually recommended to scale first and translate last.
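A tiny numeric check of that point, reusing the multiply function above together with the hypothetical transformPoint, translation and scaling helpers sketched earlier (the variable names are illustrative):

// scale the point (1, 0) by 2 and move it 10 units along x, in both orders;
// remember that multiply(a, b) computes b · a, so the matrix passed first
// is the one applied to the point last
var scaleThenTranslate = multiply(translation(10, 0), scaling(2, 2));
var translateThenScale = multiply(scaling(2, 2), translation(10, 0));

console.log(transformPoint([1, 0, 1], scaleThenTranslate)); // [12, 0, 1] - scaled to x = 2, then moved by 10
console.log(transformPoint([1, 0, 1], translateThenScale)); // [22, 0, 1] - the 10-unit translation was doubled by the scale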

This article has described the connection between affine transformations of geometry in the plane Cartesian coordinate system and matrix computation in WebGL; the next step is to extend this to a three-dimensional coordinate system and add the camera, projection and textures.

A very complete introductory tutorial is recommended: https://webglfundamentals.org/webgl/lessons/zh_cn/webgl-fundamentals.html, which is the material I have referred to the most so far, especially for the way it structures its ideas.
