OpenGL SuperBible notes----Rendering pipeline

Everything in OpenGL lives in 3D space, but a screen or window is a 2D array of pixels, so a large part of OpenGL's work consists of transforming 3D coordinates into 2D pixels that fit your screen. This conversion is handled by OpenGL's graphics rendering pipeline. The pipeline can be divided into two main parts: the first transforms your 3D coordinates into 2D coordinates, and the second turns those 2D coordinates into actual colored pixels.

The rendering pipeline takes a set of 3D coordinates as input and turns them into colored 2D pixels on your screen. It can be divided into several stages, each of which takes the output of the previous stage as its input. All of these stages are highly specialized and can easily run in parallel. Because of this parallelism, graphics cards today have thousands of small processing cores, and for each stage they run small programs on the GPU so your data can be processed quickly as it moves through the pipeline. These small programs are called shaders. Some of these shaders are configurable by the developer: we can replace the default shaders with our own. This gives us much finer control over specific parts of the pipeline, and because shaders run on the GPU, they also save valuable CPU time. Shaders are written in the OpenGL Shading Language (GLSL).
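
To make the idea concrete, here is a minimal sketch (not code from the original note) of what such a shader looks like. It is written in GLSL and, as is common in C/C++ host programs, kept in a string so it can later be handed to OpenGL for compilation; the attribute location 0 is an assumption about how the vertex data will be supplied later.

    // Minimal GLSL vertex shader, stored as a C++ raw string so the host program
    // can pass it to glShaderSource() later. It forwards the 3D position unchanged.
    const char* vertexShaderSource = R"(
        #version 330 core
        layout (location = 0) in vec3 aPos;   // per-vertex attribute: a 3D position

        void main()
        {
            // gl_Position is a 4D clip-space coordinate, so w is set to 1.0
            gl_Position = vec4(aPos.x, aPos.y, aPos.z, 1.0);
        }
    )";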

The figure accompanying the original note (not reproduced here) shows the main work done at each stage of the rendering pipeline; the parts drawn in blue are the shaders that we can write ourselves.

As input to the rendering pipeline we pass an array of three 3D coordinates that represent a triangle; this array is called the vertex data, and vertex data is a collection of vertices. Each vertex is described by vertex attributes, which can contain any data we want to use. Let's walk through the main work done at each stage of the rendering pipeline:
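
For example (a minimal sketch following common modern-OpenGL usage, not code taken from the note), the three 3D coordinates of such a triangle can be kept in a plain float array and copied into a vertex buffer object so the GPU can read them; an existing OpenGL context and function loader are assumed.

    // Three vertices of one triangle in normalized device coordinates (x, y, z).
    float vertices[] = {
        -0.5f, -0.5f, 0.0f,   // bottom-left corner
         0.5f, -0.5f, 0.0f,   // bottom-right corner
         0.0f,  0.5f, 0.0f    // top corner
    };

    // Create a vertex buffer object and copy the vertex data into GPU memory.
    unsigned int VBO;
    glGenBuffers(1, &VBO);
    glBindBuffer(GL_ARRAY_BUFFER, VBO);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);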

    • The first stage of the rendering pipeline is the vertex shader (vertex shader), which takes a single vertex as input. Its main purpose is to transform a 3D coordinate into another kind of 3D coordinate (projected coordinates), and it also lets us do some basic processing on the vertex attributes (a minimal vertex shader was sketched above).
    • The primitive assembly (primitive assembly) stage takes all the vertices output by the vertex shader as input and assembles them into the requested primitive shape; in our case, a triangle.
    • The output of primitive assembly is passed to the geometry shader (geometry shader). The geometry shader takes the collection of vertices that form a primitive as input and can generate other shapes by emitting new vertices to build new (or different) primitives.
    • The tessellation shaders (tessellation shaders) can subdivide a given primitive into many smaller primitives. This lets us create smoother surfaces, for example by generating more triangles when an object is close to the player.
    • The output of the tessellation shaders enters the rasterization (rasterization) stage, which maps each primitive to the corresponding pixels on the screen and produces the fragments used by the fragment shader (fragment shader). A fragment in OpenGL is all the data OpenGL needs to render a single pixel. Clipping (clipping) is performed before the fragment shader runs; it discards all fragments that fall outside your view, which improves performance.
    • The main purpose of the fragment shader is to compute the final color of a pixel, and this is where most of OpenGL's advanced effects happen. The fragment shader usually contains data about the 3D scene (such as lights, shadows, and the color of the light) that it uses to calculate the final pixel color (a minimal fragment shader is sketched after this list).
    • After all the corresponding color values have been determined, the result is passed to one more stage, which we call the alpha test and blending (blending) stage. This stage checks the fragment's depth (and stencil) value and uses it to decide whether the fragment lies in front of or behind other objects and should be kept or discarded. It also checks the alpha value (the alpha value describes an object's transparency) and blends (blend) the objects accordingly. So even though a fragment's color is computed in the fragment shader, the final pixel color may still be quite different when multiple triangles are rendered.
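
As mentioned in the fragment shader item above, here is a matching minimal fragment shader (again a sketch, not code from the original note). It ignores lights and shadows and simply writes one fixed orange color for every fragment it receives:

    // Minimal GLSL fragment shader, stored as a C++ raw string for later compilation.
    const char* fragmentShaderSource = R"(
        #version 330 core
        out vec4 FragColor;   // final color of this fragment (RGBA)

        void main()
        {
            FragColor = vec4(1.0, 0.5, 0.2, 1.0);   // opaque orange
        }
    )";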

Although the rendering pipeline has many stages, each driven by its own shader, in most cases we only need to write the vertex and fragment shaders; the geometry shader and the tessellation shaders are optional, and the defaults are usually sufficient. In modern OpenGL, however, we must supply at least a vertex shader and a fragment shader of our own, because the GPU provides no default vertex/fragment shader.
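
A typical way to meet that requirement looks like the following sketch, which assumes the two GLSL source strings from the earlier examples and an already created OpenGL 3.3 context (error checking via glGetShaderiv/glGetProgramiv is omitted for brevity):

    // Compile the vertex shader from its GLSL source string.
    unsigned int vertexShader = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vertexShader, 1, &vertexShaderSource, NULL);
    glCompileShader(vertexShader);

    // Compile the fragment shader the same way.
    unsigned int fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fragmentShader, 1, &fragmentShaderSource, NULL);
    glCompileShader(fragmentShader);

    // Link the two shaders into a single shader program and activate it.
    unsigned int shaderProgram = glCreateProgram();
    glAttachShader(shaderProgram, vertexShader);
    glAttachShader(shaderProgram, fragmentShader);
    glLinkProgram(shaderProgram);
    glUseProgram(shaderProgram);

    // The individual shader objects can be deleted once they are linked.
    glDeleteShader(vertexShader);
    glDeleteShader(fragmentShader);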
