OpenGL Rendering Pipeline Overview

Translated from OpenGL wiki: http://www.opengl.org/wiki/rendering_pipeline_overview

Pipeline

The OpenGL rendering pipeline works as follows:

1. Vertex specification: prepare vertex array data and render it.

2. Vertex processing:

1. Each vertex is processed by a vertex shader. Each input vertex in the stream is processed in turn into an output vertex;

2. Optional primitive tessellation stages;

3. Optional geometry shader processing, which outputs a sequence of primitives;

3. Vertex post-processing: the output of the previous stage is adjusted or shipped to different locations.

1. Transform feedback happens here;

2. Primitive clipping, the perspective divide, and the viewport transform to window space;

4. Primitive assembly

5. Scan conversion and interpolation of primitive parameters, which generate a number of fragments

6. The fragment shader processes each fragment and generates a set of outputs for each fragment.

7. Per-sample processing:

1. Scissor test

2. Stencil test

3. Depth test

4. Blending

5. Logic operations

6. Write masks

Vertex Specification

During vertex specification, the application builds an ordered list of vertices and sends it to the pipeline. These vertices define the boundaries of primitives.

Primitives are basic drawing shapes, such as triangles, lines, and points. Exactly how the vertex list is interpreted as primitives is specified when the primitives are submitted to the next stage for processing.

This part of the pipeline deals with a number of objects, such as vertex array objects and vertex buffer objects. The vertex array object defines how the data for each vertex is laid out, and the vertex buffer objects store the actual vertex data.

A vertex's data is a set of attributes; each attribute is a small block of data that the next stage will compute with. Although a set of attributes does specify a vertex, nothing requires that the attribute set contain a position or a normal. Attribute data is entirely arbitrary; the only meaning it has is whatever the vertex processing stage gives it.
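As a rough illustration of vertex specification, the sketch below sets up a vertex array object and a vertex buffer object holding one 2D position attribute per vertex. It assumes an OpenGL 3.3+ context and an already-initialized function loader; the header name and attribute layout are illustrative choices, not taken from the original article.

```c
#include <glad/glad.h>  /* assumed loader header; any GL function loader works */

/* Three vertices, each carrying a single attribute: a 2D position. */
static const float positions[] = {
    -0.5f, -0.5f,
     0.5f, -0.5f,
     0.0f,  0.5f,
};

static GLuint vao, vbo;

void setup_vertex_specification(void)
{
    /* The vertex array object records the vertex format. */
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);

    /* The vertex buffer object stores the actual vertex data. */
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof positions, positions, GL_STATIC_DRAW);

    /* Attribute 0: two floats per vertex, tightly packed, no offset. */
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(float), (void *)0);
}
```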

Vertex Rendering

Once the vertex data is specified, it is rendered as primitives via a drawing command.
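For example, a minimal drawing command might look like the sketch below, which continues from the vertex specification sketch above (the VAO name and vertex count are assumptions made for illustration):

```c
/* Issue a rendering command: with the VAO bound and a shader program in use,
 * one draw call sends the vertex list into the pipeline as primitives. */
void draw_triangle(void)
{
    glBindVertexArray(vao);
    glDrawArrays(GL_TRIANGLES, 0, 3);  /* interpret 3 vertices as one triangle */
}
```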

Vertex Processing

Each vertex fetched from the source data must be processed. This is the job of the vertex shader. It receives the attribute inputs from the previous stage and converts each incoming vertex into a single outgoing vertex, based on an arbitrary, user-defined program.

Unlike the input vertex data, less is required of the output vertex data: the vertex shader only needs to fill in a position value to emit a valid vertex.

One limitation of vertex processing is that each input vertex must map to one specific output vertex. Because vertex shader invocations cannot share state, the input attributes correspond to the output vertex data one-to-one. That is, if you feed the same attributes to the same vertex shader in the same primitive, you get the same output vertex data. This gives implementations the freedom to optimize vertex processing: if they can detect that a vertex has already been processed, they can reuse the result stored in a post-transform cache, which avoids processing the same vertex repeatedly.
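A minimal vertex shader, compiled from a C host program, might look like the sketch below. It assumes the GL 3.3 core profile and writes only the built-in position output; the source string and function names are illustrative.

```c
/* Minimal vertex shader: one invocation per input vertex, one output vertex
 * per invocation, and no state shared between invocations. */
static const char *vs_source =
    "#version 330 core\n"
    "layout(location = 0) in vec2 a_position;\n"
    "void main() {\n"
    "    gl_Position = vec4(a_position, 0.0, 1.0);\n"
    "}\n";

GLuint compile_vertex_shader(void)
{
    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vs, 1, &vs_source, NULL);
    glCompileShader(vs);

    GLint ok = GL_FALSE;
    glGetShaderiv(vs, GL_COMPILE_STATUS, &ok);
    /* In real code, query the info log with glGetShaderInfoLog when ok is false. */
    return ok ? vs : 0;
}
```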

Primitive Assembly

Primitive assembly is the process of collecting the run of vertex data output by the previous stages and composing it into a sequence of viable primitives. The type of primitive the user rendered with determines how this process works.

The output of this process is an ordered sequence of simple primitives (points, lines, or triangles). For example, if the input is a triangle strip containing 12 vertices, the output is 10 triangles.
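The arithmetic behind that example is simply that a strip with N vertices assembles into N - 2 triangles. A hedged sketch, assuming a VAO that holds 12 strip vertices:

```c
void draw_strip_example(void)
{
    glBindVertexArray(vao);                  /* assumed: a VAO holding 12 vertices */
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 12);  /* 12 vertices -> 12 - 2 = 10 triangles */
}
```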

Tessellation

Primitives can be tessellated using two shader stages with a fixed-function tessellator sitting between them.
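To reach the tessellator, primitives have to be drawn as patches. The hedged sketch below shows the host-side setup, assuming OpenGL 4.0+ and a program that already contains tessellation control and evaluation shaders; the patch size is an arbitrary illustrative choice.

```c
/* Draw patch primitives so the fixed-function tessellator can subdivide them. */
void draw_patches(GLuint patch_vao, GLsizei vertex_count)
{
    glPatchParameteri(GL_PATCH_VERTICES, 3);  /* 3 control points per patch (assumed) */
    glBindVertexArray(patch_vao);
    glDrawArrays(GL_PATCHES, 0, vertex_count);
}
```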

Geometry Shader

In addition to the usual primitive assembly step, you can also use a geometry shader. This is a user-defined program that processes each input primitive and returns zero or more output primitives.

The input primitives for the geometry shader are the output primitives of primitive assembly. So if you submit a triangle strip as a single primitive, the geometry shader sees it as a series of individual triangles.

However, there are also some input primitive types that are defined specifically for the geometry shader. These adjacency primitives give the geometry shader a wider view of the primitives: they provide access to the neighboring vertices.

The output of the geometry shader is zero or more simple primitives, much like the output of primitive assembly. The geometry shader can remove primitives or split a single input into multiple output primitives. It can also manipulate vertex values, either doing some of the work that would otherwise be done in the vertex shader, or interpolating values when splitting primitives. The geometry shader can even convert primitives to different types: input point primitives can become triangles, and lines can become points.
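As a rough illustration, a pass-through geometry shader (GL 3.2+) might look like the sketch below; it simply re-emits each incoming triangle, though it could just as well drop it or emit several. The source string is illustrative.

```c
static const char *gs_source =
    "#version 330 core\n"
    "layout(triangles) in;\n"
    "layout(triangle_strip, max_vertices = 3) out;\n"
    "void main() {\n"
    "    for (int i = 0; i < 3; ++i) {\n"
    "        gl_Position = gl_in[i].gl_Position;\n"
    "        EmitVertex();\n"
    "    }\n"
    "    EndPrimitive();\n"
    "}\n";
/* Compile with glCreateShader(GL_GEOMETRY_SHADER) and attach it to the program
 * between the vertex shader and the fragment shader. */
```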

Transform Feedback

The output of the geometry shader or of primitive assembly can be written to a set of buffer objects set aside for this purpose. This is called transform feedback mode: it lets the user transform data via the vertex and geometry shaders and then save the results for later use.

The data written to the transform feedback buffers is the data from each primitive emitted by this step.

The rendering pipeline can effectively be terminated at this point by discarding the rasterization results. In that case, transform feedback is the only output of the rendering process.
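A hedged sketch of that setup follows; it assumes the program writes a varying called out_position (a made-up name) and that tf_buffer is a buffer object large enough to hold the captured data.

```c
/* Capture the vertex shader's outputs into a buffer and skip rasterization. */
void capture_positions(GLuint program, GLuint tf_buffer, GLsizei vertex_count)
{
    /* The captured varyings must be declared before the program is linked. */
    const char *varyings[] = { "out_position" };   /* assumed varying name */
    glTransformFeedbackVaryings(program, 1, varyings, GL_INTERLEAVED_ATTRIBS);
    glLinkProgram(program);

    glUseProgram(program);
    glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, tf_buffer);

    glEnable(GL_RASTERIZER_DISCARD);      /* drop rasterization: feedback only */
    glBeginTransformFeedback(GL_POINTS);
    glDrawArrays(GL_POINTS, 0, vertex_count);
    glEndTransformFeedback();
    glDisable(GL_RASTERIZER_DISCARD);
}
```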

Clipping and Culling

The primitives are then clipped, and some may be culled.

Clipping means that a primitive lying on the boundary of the viewing volume is split into several primitives, so that every resulting primitive lies inside the volume. In addition, the vertex shader can define user clip distances in clip space; these apply further clipping to the primitives that pass through them.

Triangle face culling also happens at this stage. Implementations may immediately cull any primitive that is not within the viewing volume, or that lies entirely on the clipped side of a user clip plane.

The vertex positions are then transformed from clip space to window space via the perspective divide and the viewport transform.
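A hedged sketch of the related fixed-function state, assuming back-face culling with counter-clockwise front faces and one user clip distance written by the vertex shader as gl_ClipDistance[0]:

```c
void setup_cull_and_clip(void)
{
    glEnable(GL_CULL_FACE);
    glCullFace(GL_BACK);          /* discard back-facing triangles */
    glFrontFace(GL_CCW);          /* counter-clockwise winding counts as front-facing */

    glEnable(GL_CLIP_DISTANCE0);  /* honor gl_ClipDistance[0] from the vertex shader */
}
```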

Rasterization

Primitives that reach this stage are rasterized in the order in which they were given. The output of rasterization is a sequence of fragments.

A fragment is a set of state used to compute the final data for a pixel (or a sample, if multisampling is enabled) in the output framebuffer. The state of a fragment includes its position in screen space, the sample coverage if multisampling is enabled, and a list of arbitrary data output by the previous vertex or geometry shader.

This last set of data is computed by interpolating between the values at the vertices of the fragment's primitive; the style of interpolation is defined by the shader that output those values.
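The interpolation style is chosen with GLSL interpolation qualifiers on the outputs of the last vertex-processing stage (with matching qualifiers on the fragment shader inputs). The sketch below is illustrative; the attribute and variable names are assumptions.

```c
static const char *vs_with_qualifiers =
    "#version 330 core\n"
    "layout(location = 0) in vec2 a_position;\n"
    "layout(location = 1) in vec3 a_color;\n"
    "smooth out vec3 v_color;  /* default: perspective-correct interpolation */\n"
    "flat   out int  v_id;     /* no interpolation: one value per primitive */\n"
    "void main() {\n"
    "    v_color = a_color;\n"
    "    v_id = gl_VertexID;\n"
    "    gl_Position = vec4(a_position, 0.0, 1.0);\n"
    "}\n";
/* A noperspective qualifier (screen-space linear interpolation) is also available. */
```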

Fragment Processing

The data for each fragment produced by the rasterization stage is processed by the fragment shader. Its output is a color value, a depth value, and a stencil value to be written to the framebuffer. The fragment shader cannot set the fragment's stencil data, but it does have control over the color and depth values.
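A minimal fragment shader along these lines might look like the following sketch (GL 3.3 core; the variable names are assumptions):

```c
static const char *fs_source =
    "#version 330 core\n"
    "smooth in vec3 v_color;     /* interpolated input from the previous stage */\n"
    "out vec4 frag_color;        /* color output written to the framebuffer */\n"
    "void main() {\n"
    "    frag_color = vec4(v_color, 1.0);\n"
    "    /* gl_FragDepth could be written here; the stencil value cannot. */\n"
    "}\n";
/* Compile with glCreateShader(GL_FRAGMENT_SHADER), then attach and link. */
```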

Per-Sample Processing

The fragment data output by fragment processing then goes through a series of steps.

The first step is a sequence of culling tests. If the stencil test or the depth test is enabled and the fragment fails it, the fragment is discarded and never written to the framebuffer.
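A hedged sketch of enabling these tests, together with the scissor test from the pipeline list above (the scissor rectangle, stencil reference value, and depth function are arbitrary illustrative choices):

```c
void setup_fragment_tests(void)
{
    glEnable(GL_SCISSOR_TEST);
    glScissor(0, 0, 640, 360);          /* keep only this window region (assumed size) */

    glEnable(GL_STENCIL_TEST);
    glStencilFunc(GL_EQUAL, 1, 0xFF);   /* pass only where the stencil value is 1 */
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);

    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LESS);               /* pass only where the fragment is closer */
}
```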

Note: if the fragment shader does not write the depth value, that is, the regularly computed depth value is used, implementations can apply an optimization called the early depth test. It performs the depth (and stencil) tests before fragment processing, so if a fragment is culled there, the fragment shader never has to run for it.

After that, blending is performed. The color value of each fragment is combined with the color value already present at the corresponding position in the framebuffer, according to the current blending operation.
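A sketch of standard alpha blending as one possible blending operation (the choice of equation and factors is an assumption for illustration):

```c
void setup_blending(void)
{
    glEnable(GL_BLEND);
    glBlendEquation(GL_FUNC_ADD);                       /* add the weighted colors */
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);  /* weight by source alpha */
}
```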

Finally, the fragment data is written to the framebuffer. Masking operations allow the user to prevent certain values from being written. Writes to the color, depth, and stencil buffers can each be masked on or off, and masking can also be applied to individual color channels.
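For instance, a hedged sketch of setting write masks (the particular masks chosen are illustrative):

```c
void setup_write_masks(void)
{
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_FALSE);  /* write RGB, but not alpha */
    glDepthMask(GL_FALSE);                             /* leave the depth buffer untouched */
    glStencilMask(0x00);                               /* don't write any stencil bits */
}
```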
