[OpenGL] From vertex coordinates to rasterization (the render pipeline)

I. Inputting object information

We generally use triangle meshes (three-dimensional grids) to represent three-dimensional objects. For an imported model (an .obj file), a mesh is described by the following data:

  • Vertex coordinates
  • Texture coordinates
  • Normal directions
  • Indices

The latter two are not strictly required, but we assume they exist to facilitate our discussion of the rendering pipeline.

The normal direction is mainly used to distinguish the front (positive) side of a face from the back; it participates in many operations, such as back-face culling and lighting calculations. Unless we specify otherwise, OpenGL takes the side from which the vertices appear in counter-clockwise order (the right-hand rule applied to the drawing order) as the front.

The indices specify how vertices (and their texture coordinates) are assembled into triangles.
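
For concreteness, here is a minimal hand-written .obj excerpt (my own toy example, not from any particular model), showing how this data is encoded as plain text:

    # One triangle: positions (v), texture coordinates (vt), a normal (vn),
    # and a face (f) whose corners reference them as v/vt/vn (1-based).
    v  0.0 0.0 0.0
    v  1.0 0.0 0.0
    v  0.0 1.0 0.0
    vt 0.0 0.0
    vt 1.0 0.0
    vt 0.0 1.0
    vn 0.0 0.0 1.0
    f  1/1/1 2/2/1 3/3/1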

II. From the world coordinate system to the view coordinate system


Suppose we have imported a series of .obj models. Each mesh has its own local coordinate system, and we first translate, rotate, and scale the vertices from the local coordinate system into the world coordinate system.

Next, we specify the view coordinate system through gluLookAt: we give the position of the eye, the position being looked at, and the upward direction of the head. The two user-specified directions (the viewing direction and the up vector) are not necessarily perpendicular. OpenGL computes the perpendicular to the plane they span, obtaining a third axis (oriented by the right-hand rule); it then takes the perpendicular determined by the viewing direction and this third axis and uses it to replace the up vector (choosing the one less than 90 degrees from the original), so that an orthogonal coordinate frame is obtained.

We then transform the object from the world coordinate system into the view coordinate system. (Concretely, the transformation is a change of basis; the new coordinates can be obtained by solving a system of linear equations, or equivalently by multiplying by a rotation and a translation matrix.)
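
As a sketch of what gluLookAt does internally (a minimal reconstruction; the function and type names below are my own, not OpenGL's):

    #include <math.h>

    typedef struct { float x, y, z; } Vec3;

    static Vec3 sub(Vec3 a, Vec3 b) { Vec3 r = { a.x-b.x, a.y-b.y, a.z-b.z }; return r; }
    static Vec3 cross(Vec3 a, Vec3 b) {
        Vec3 r = { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
        return r;
    }
    static Vec3 normalize(Vec3 v) {
        float len = sqrtf(v.x*v.x + v.y*v.y + v.z*v.z);
        Vec3 r = { v.x/len, v.y/len, v.z/len };
        return r;
    }

    /* Build the orthonormal view basis from eye, center, and up:
     * forward = center - eye; right = forward x up (right-hand rule);
     * the corrected up = right x forward stays within 90 degrees of
     * the original up vector. */
    void look_at_basis(Vec3 eye, Vec3 center, Vec3 up,
                       Vec3 *right, Vec3 *true_up, Vec3 *forward)
    {
        *forward = normalize(sub(center, eye));
        *right   = normalize(cross(*forward, up));
        *true_up = cross(*right, *forward);   /* already unit length */
    }

The three resulting axes, together with the eye position, give the rotation and translation parts of the view matrix (with the forward axis negated, since OpenGL looks down the negative z axis).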

III. Computing vertex color components from lighting, texture, and material information


First, we compute the color components given the illumination and material information. The exact process is as follows:

1. The input information is the face normals. For each vertex, we take the average of the normals of all faces incident to that vertex, obtaining the vertex normal direction; a minimal sketch follows.
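
A minimal sketch of this averaging (the mesh layout, an indexed triangle list, is my assumption; it reuses the Vec3 helpers from the sketch in section II):

    /* Reuses Vec3, sub, cross, and normalize from the sketch in section II. */
    void compute_vertex_normals(const Vec3 *pos, int n_verts,
                                const int (*tri)[3], int n_tris,
                                Vec3 *out)
    {
        for (int v = 0; v < n_verts; ++v) {
            Vec3 zero = { 0.0f, 0.0f, 0.0f };
            out[v] = zero;
        }
        for (int t = 0; t < n_tris; ++t) {
            /* Face normal from two edges (assumes counter-clockwise winding). */
            Vec3 e1 = sub(pos[tri[t][1]], pos[tri[t][0]]);
            Vec3 e2 = sub(pos[tri[t][2]], pos[tri[t][0]]);
            Vec3 fn = normalize(cross(e1, e2));
            for (int k = 0; k < 3; ++k) {        /* accumulate on each corner */
                out[tri[t][k]].x += fn.x;
                out[tri[t][k]].y += fn.y;
                out[tri[t][k]].z += fn.z;
            }
        }
        /* Normalizing the accumulated sum averages the incident face normals. */
        for (int v = 0; v < n_verts; ++v)
            out[v] = normalize(out[v]);
    }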

2. Phong Shading:

Compute the normal direction of the polygon's interior points by bilinear interpolation, then calculate the RGB value of each point with the Phong illumination model.

or Gouraud shading:

The color components of each vertex are calculated using the Phong illumination model, and then the color components of the polygon interior points are computed using bilinear interpolation.

Phong illumination model:

I = Ka·Ia + Σ (over all light sources) Il·( Kd·(N·L) + Ks·(R·V)^n )

(Ka is the ambient reflection coefficient, Kd the diffuse reflection coefficient, and Ks the specular reflection coefficient, with Kd + Ks = 1; the sum runs over all light sources. N is the unit surface normal, L the unit direction to the light, R the reflection of L about N, V the unit direction to the viewer, Ia the ambient light intensity, Il the intensity of each light source, and n the shininess exponent.)

The above parameters are specified by the user during programming, or loaded when the model is imported.
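
A per-channel sketch of this model for a single light source (the names and scalar-per-channel layout are my own assumptions; Vec3 is the type from section II):

    #include <math.h>

    static float dot3(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    /* One channel of I = Ka*Ia + Il*(Kd*(N.L) + Ks*(R.V)^n) for one light.
     * N, L, V must be unit vectors: N the surface normal, L toward the
     * light, V toward the viewer; R is L reflected about N. */
    float phong_channel(float Ka, float Kd, float Ks, float n,
                        float Ia, float Il, Vec3 N, Vec3 L, Vec3 V)
    {
        float ndotl = dot3(N, L);
        if (ndotl <= 0.0f)
            return Ka * Ia;                  /* light is behind the surface */
        Vec3 R = { 2.0f*ndotl*N.x - L.x,     /* R = 2(N.L)N - L */
                   2.0f*ndotl*N.y - L.y,
                   2.0f*ndotl*N.z - L.z };
        float rdotv = dot3(R, V);
        if (rdotv < 0.0f) rdotv = 0.0f;
        return Ka*Ia + Il*(Kd*ndotl + Ks*powf(rdotv, n));
    }

In practice this is evaluated once per RGB channel, and the contributions of all light sources are summed.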

(1) Calculate the component at each point on an edge by interpolating between the edge's endpoints A and B: D = a·A + (1-a)·B

(2) Calculate the component at an interior point by interpolating, along the scanline, between the two edge points D and E: F = b·D + (1-b)·E
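
These two steps transcribe directly into code (a toy sketch; the per-component floats stand in for whatever attribute is being interpolated):

    /* Linear interpolation: a*A + (1-a)*B. */
    static float lerp(float A, float B, float a) { return a*A + (1.0f - a)*B; }

    /* Bilinear interpolation of one color component at an interior point:
     * first along the left and right edges, then along the scanline. */
    float interior_component(float A, float B, float a,  /* left-edge endpoints, weight  */
                             float P, float Q, float c,  /* right-edge endpoints, weight */
                             float b)                    /* weight along the scanline    */
    {
        float D = lerp(A, B, a);   /* point on the left edge  */
        float E = lerp(P, Q, c);   /* point on the right edge */
        float F = lerp(D, E, b);   /* interior point          */
        return F;
    }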

3. Texture mapping

Mapping texels onto the object according to the user-specified filtering mode must handle the mismatch between the texture's size and the size of the object it is mapped onto (i.e., the texture is resampled when magnified or minified). The final result is a color component for each point.

If textures and lighting are mixed, the color components obtained in these two ways are combined, and OpenGL offers several different combination modes.
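
One plausible fixed-function setup (the filter and combine choices here are arbitrary examples, not recommendations):

    #include <GL/gl.h>

    /* Pick minification/magnification filters and combine the texture
     * color with the lit vertex color by modulation (GL_MODULATE
     * multiplies the two, the classic way to mix texture and lighting). */
    void setup_texture_sampling(void)
    {
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
    }
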
IV. Projection


We can specify either an orthographic projection or a perspective projection.

In OpenGL, we use glOrtho to specify an orthographic projection and gluPerspective to specify a perspective projection. The former's view volume is a rectangular box, while the latter's is a frustum (a truncated pyramid). The orthographic projection is the simpler operation; the perspective projection is relatively more complex, and the projected coordinates can be derived using similar triangles.
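
Typical calls look like this (the numeric values are arbitrary examples):

    #include <GL/gl.h>
    #include <GL/glu.h>

    /* Select one of the two projections. */
    void setup_projection(int use_perspective, double aspect)
    {
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        if (use_perspective)
            /* 60-degree vertical field of view, near/far planes at
             * 0.1/100: the view volume is a frustum. */
            gluPerspective(60.0, aspect, 0.1, 100.0);
        else
            /* left, right, bottom, top, near, far: a box-shaped view volume. */
            glOrtho(-1.0, 1.0, -1.0, 1.0, 0.1, 100.0);
    }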

V. Transform to window coordinates


We use glViewport to set up this step: it maps the normalized device coordinates produced by projection onto the rectangle of the window where the scene is finally displayed, and anything outside the view volume is clipped away.
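
A typical call, together with the mapping it implies (the formula is the standard viewport transform, with (x, y, w, h) the viewport rectangle):

    #include <GL/gl.h>

    /* Map normalized device coordinates (xd, yd in [-1, 1]) to window
     * coordinates:
     *     xw = (xd + 1) * w/2 + x
     *     yw = (yd + 1) * h/2 + y
     * Here the viewport (x, y, w, h) covers a whole 800x600 window. */
    void setup_viewport(void)
    {
        glViewport(0, 0, 800, 600);
    }
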
VI. Rasterization and hidden-surface removal


Notice that at this point we have the transformed vertex information, and we begin rasterizing with scanlines: we process the polygon row by row from top to bottom. The scanning can be accelerated with an incremental idea, which is in effect the DDA algorithm: each scanline's edge intersections are obtained from the previous scanline's by adding a constant increment. A dedicated data structure, the active edge table, maintains the topology of the edges crossing the current scanline and speeds the process up.

For filling, we decide whether to draw a pixel based on the parity of the number of edge intersections encountered so far along the scanline, with points at vertices (corners) handled specially so they are not counted twice. During the filling process we also perform hidden-surface removal by maintaining a Z-buffer.
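
A compressed sketch of the innermost loop, filling one span with an incrementally interpolated depth and the Z-buffer test (the buffer layout and names are my assumptions; a full implementation would also maintain the active edge table per scanline):

    /* Fill one scanline span [x_left, x_right] on row y. Depth is
     * interpolated incrementally (the DDA idea), and each pixel passes
     * through the Z-buffer test. zbuf must be initialized to a far
     * value (e.g., FLT_MAX) at the start of the frame. */
    void fill_span(int y, int x_left, int x_right,
                   float z_left, float z_right,
                   float *zbuf, unsigned *color_buf, int width,
                   unsigned color)
    {
        float dz = (x_right > x_left)
                 ? (z_right - z_left) / (float)(x_right - x_left)
                 : 0.0f;
        float z = z_left;
        for (int x = x_left; x <= x_right; ++x, z += dz) {
            int i = y * width + x;
            if (z < zbuf[i]) {        /* nearer than the stored depth: visible */
                zbuf[i] = z;
                color_buf[i] = color;
            }
        }
    }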
