Geometry shader
A geometry shader is an optional shader stage that runs before primitive assembly and the fragment shader. Its input is the complete set of vertices of one primitive, usually coming from the vertex shader; if tessellation is enabled, the input comes from the tessellation evaluation shader instead. Its output is again one or more complete primitives. A simple way to think of the geometry shader is as a stage in which we get one more chance to modify primitive information.
This modification shows up in two ways: the number of primitives and the type of primitive. We can feed in one triangle and output two or more triangles, or output none at all; a change of primitive type means that a triangle on the input side can, for example, come out as points or lines.
Doing nothing
First, let's look at the simplest possible example: a geometry shader that does nothing, i.e. emits no primitives. Such a shader has no practical value, of course, but it shows the minimal structure.
#version 330 core

layout (points) in;
layout (points, max_vertices = 1) out;

void main()
{
}
This example is trivial: main() is the entry point and we leave it empty, meaning nothing is done. What matters are the two layout declarations.
The line layout (points) in declares the input primitive type of the geometry shader; the geometry shader runs once per primitive (the vertex shader runs per vertex, the fragment shader per fragment). This input primitive type must be compatible with the primitive type specified in the draw call: points corresponds to GL_POINTS, lines to GL_LINES, triangles to GL_TRIANGLES.
The second layout line declares that the output primitive type is points and that at most 1 vertex will be emitted. Note that a geometry shader can only output points, line strips or triangle strips; it cannot output separate lines or triangles, nor line loops or triangle fans. This is because a strip can be treated as a superset of the corresponding single primitive: a separate triangle or line segment is simply a strip containing one primitive. If we emit one triangle and end the strip immediately, the result is equivalent to drawing a separate triangle.
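For completeness, here is a minimal host-side sketch (not from the original article): compiling a geometry shader, attaching it to a program alongside the vertex and fragment shaders, and issuing a draw call whose primitive type matches layout (points) in. The names gs_source, vs, fs, vao and point_count are illustrative placeholders.

// Minimal sketch; gs_source, vs, fs, vao, point_count are placeholders.
GLuint gs = glCreateShader(GL_GEOMETRY_SHADER);
glShaderSource(gs, 1, &gs_source, NULL);   // gs_source holds the GLSL text above
glCompileShader(gs);                        // check GL_COMPILE_STATUS in real code

GLuint program = glCreateProgram();
glAttachShader(program, vs);                // vertex shader, compiled elsewhere
glAttachShader(program, gs);                // the geometry shader
glAttachShader(program, fs);                // fragment shader, compiled elsewhere
glLinkProgram(program);                     // check GL_LINK_STATUS in real code

// The draw primitive must match "layout (points) in" in the geometry shader.
glUseProgram(program);
glBindVertexArray(vao);
glDrawArrays(GL_POINTS, 0, point_count);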
Passing data straight through
Next, an example that passes the input data on to the next stage unchanged.
#version 330 core

layout (points) in;
layout (points, max_vertices = 2) out;

out vec4 fcolor;   // declared but not written in this example

void main()
{
    int n;
    for (n = 0; n < gl_in.length(); n++)
    {
        gl_Position = gl_in[n].gl_Position;
        EmitVertex();
    }
    EndPrimitive();
}
gl_PrimitiveIDIn
gl_PrimitiveIDIn is a built-in GLSL variable of the geometry shader stage; it can be understood as a unique identifier of the current input primitive. It is an integer variable starting at 0, and it corresponds to the fragment shader input gl_PrimitiveID. If the fragment shader reads gl_PrimitiveID while a geometry shader is active, the geometry shader must write a value to its gl_PrimitiveID output. Below is an example geometry shader that renders only the odd-numbered primitives.
#version 330 core

layout (points) in;
layout (points, max_vertices = 2) out;

void main()
{
    if ((gl_PrimitiveIDIn & 1) == 1)
    {
        int n;
        for (n = 0; n < gl_in.length(); n++)
        {
            gl_Position = gl_in[n].gl_Position;
            // If the fragment shader reads gl_PrimitiveID, also write it here:
            // gl_PrimitiveID = gl_PrimitiveIDIn;
            EmitVertex();
        }
        EndPrimitive();
    }
}
gl_Layer
gl_Layer is a geometry shader variable that enables layered rendering. Layered rendering usually means rendering into a framebuffer object whose attachment is a 2D array texture or a cube map, so a layer can be understood as one slice of the 2D array texture or one face of the cube map. With layered rendering, the geometry shader can render something different into each layer. Let's look at a layered rendering example:
#version 410 core

layout (triangles) in;
layout (triangle_strip, max_vertices = 128) out;   // must be at least 3 * output_slices

in VS_GS_VERTEX
{
    vec4 color;
} vertex_in[];

out GS_FS_VERTEX
{
    vec4 color;
} vertex_out;

uniform mat4 proj;
uniform int output_slices;

void main()
{
    for (int j = 0; j < output_slices; ++j)
    {
        gl_Layer = j;                               // select the layer to render into
        for (int i = 0; i < gl_in.length(); ++i)
        {
            gl_Position = proj * gl_in[i].gl_Position;
            vertex_out.color = vertex_in[i].color;
            EmitVertex();
        }
        EndPrimitive();
    }
}
In this geometry shader, the application sets the value of output_slices to control the number of layers; this number should match the number of slices of the 2D array texture, or the number of faces of the cube map. The key step is writing gl_Layer, which selects the layer that the emitted primitive is rendered into. Here we render the same geometry into every layer, but readers can vary the output per layer as they wish. In effect, this shader multiplies the amount of geometry by output_slices.
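As a complement to the shader above, here is a hedged sketch (not from the original article) of how the application side might create a 2D array texture and attach it as a layered framebuffer attachment. WIDTH, HEIGHT and LAYER_COUNT are illustrative constants.

// Layered color attachment backed by a 2D array texture.
GLuint tex, fbo;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D_ARRAY, tex);
glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGBA8,
             WIDTH, HEIGHT, LAYER_COUNT,            // depth = number of layers
             0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
// glFramebufferTexture (without a layer index) makes the attachment layered,
// so the geometry shader can choose the target layer via gl_Layer.
glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, tex, 0);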
The most typical application of this technique is building the shadow map of a point light. A point light emits light in all directions, so creating its shadows with shadow mapping requires a shadow map for 6 directions. Rendering the scene 6 separate times would cost a lot of performance, so layered rendering can be used to draw all 6 shadow maps in a single pass. The general procedure is:
1. Build a cube map;
2. Build the framebuffer and attach the cube map as the framebuffer's depth attachment (a sketch of steps 1 and 2 follows the list);
3. Build 6 view matrices from the position of the point light and pass them to the geometry shader;
4. Render once with layered rendering, emitting each primitive 6 times in the geometry shader, each layer using its corresponding view matrix.
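The following is a hedged sketch of steps 1 and 2 only (creating the depth cube map and attaching it to a framebuffer); SHADOW_SIZE is an illustrative constant and the code is not taken from the original article.

// Step 1: depth cube map (6 faces, depth format).
GLuint depth_cubemap;
glGenTextures(1, &depth_cubemap);
glBindTexture(GL_TEXTURE_CUBE_MAP, depth_cubemap);
for (GLuint face = 0; face < 6; ++face)
{
    glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, 0, GL_DEPTH_COMPONENT,
                 SHADOW_SIZE, SHADOW_SIZE, 0,
                 GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
}
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

// Step 2: framebuffer with the whole cube map as a layered depth attachment.
GLuint depth_fbo;
glGenFramebuffers(1, &depth_fbo);
glBindFramebuffer(GL_FRAMEBUFFER, depth_fbo);
glFramebufferTexture(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, depth_cubemap, 0);
glDrawBuffer(GL_NONE);   // depth-only pass, no color output
glReadBuffer(GL_NONE);
glBindFramebuffer(GL_FRAMEBUFFER, 0);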
The layer values of the 6 cube map faces are:
GL_TEXTURE_CUBE_MAP_POSITIVE_X 0
GL_TEXTURE_CUBE_MAP_NEGATIVE_X 1
GL_TEXTURE_CUBE_MAP_POSITIVE_Y 2
GL_TEXTURE_CUBE_MAP_NEGATIVE_Y 3
GL_TEXTURE_CUBE_MAP_POSITIVE_Z 4
GL_TEXTURE_CUBE_MAP_NEGATIVE_Z 5
The detailed process can be found at: https://learnopengl-cn.github.io/05%20Advanced%20Lighting/03%20Shadows/02%20Point%20Shadows/
gl_ViewportIndex
gl_ViewportIndex is another built-in variable of the geometry shader, and it can be used to implement multi-viewport rendering. The usual function for setting the viewport in OpenGL is glViewport, which sets the viewport rectangle for the current rendering. So one way to render to multiple viewports is: call glViewport to set a viewport, render the scene, call glViewport again to set the next viewport, render the scene again, and so on. Before geometry shaders existed, that was usually the only option; with a geometry shader we can use another method, based on gl_ViewportIndex.
Before introducing gl_ViewportIndex, here are 3 functions for setting per-viewport parameters:
void glViewportIndexedf(GLuint index, GLfloat x, GLfloat y, GLfloat w, GLfloat h);
void glViewportIndexedfv(GLuint index, const GLfloat *v);
void glDepthRangeIndexed(GLuint index, GLdouble n, GLdouble f);
Here (x, y) is the lower-left corner of the viewport (the viewport is a rectangle), and w, h are its width and height. index is the viewport index: since we are rendering to multiple viewports, index indicates which viewport is being set. glViewportIndexedfv is the vector version, where the four viewport parameters are stored in an array. glDepthRangeIndexed sets the depth range of the indexed viewport, with n and f giving the near and far depth values of that viewport.
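As a quick illustration (the values are assumptions, not from the original article), configuring viewport 0 and its depth range might look like this:

// Configure indexed viewport 0: a 512x384 rectangle at the lower-left corner.
glViewportIndexedf(0, 0.0f, 0.0f, 512.0f, 384.0f);

// Equivalent vector form.
const GLfloat vp0[4] = { 0.0f, 0.0f, 512.0f, 384.0f };
glViewportIndexedfv(0, vp0);

// Depth range for viewport 0 (0.0 and 1.0 are the defaults).
glDepthRangeIndexed(0, 0.0, 1.0);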
There are two ways to do multi-viewport rendering with a geometry shader. One is to amplify geometry in the geometry shader, i.e. expand one input primitive into several output primitives, one per viewport. The other is to instance the geometry shader: set the requested number of invocations to a specified count (4 in the example below), and have each invocation render the primitive into its corresponding viewport.
In the following example we implement the second approach; the first can be done by analogy.
Geometry Shader:
#version 410 core

layout (triangles, invocations = 4) in;
layout (triangle_strip, max_vertices = 3) out;

uniform mat4 model_matrix[4];
uniform mat4 proj;

out vec4 gs_color;

const vec4 colors[4] = vec4[4](vec4(1.0, 0.7, 0.3, 1.0),
                               vec4(1.0, 0.2, 0.3, 1.0),
                               vec4(0.1, 0.6, 1.0, 1.0),
                               vec4(0.3, 0.7, 0.5, 1.0));

void main()
{
    for (int i = 0; i < gl_in.length(); ++i)
    {
        gl_ViewportIndex = gl_InvocationID;
        gs_color = colors[gl_InvocationID];
        // gs_normal = (model_matrix[gl_InvocationID] * vec4(vs_normal[i], 0.0)).xyz;
        gl_Position = proj * (model_matrix[gl_InvocationID] * gl_in[i].gl_Position);
        EmitVertex();
    }
    EndPrimitive();
}
In this geometry shader the key points are the invocations setting and gl_ViewportIndex. invocations is the number of times the geometry shader is invoked per input primitive; simply put, each primitive is processed that many times, here 4 times. The related built-in variable is gl_InvocationID, which identifies the current invocation (i.e. which of the 4 runs this is). We use gl_InvocationID to choose the viewport via gl_ViewportIndex, and model_matrix holds one rendering matrix per viewport. So the flow of this geometry shader is: set the viewport index, then transform the vertices with that viewport's matrix and emit the primitive. The result is multi-viewport rendering: the scene is drawn into each previously configured viewport with its own matrix.
The viewport parameters are set as follows:
void reshape(int width, int height)
{
    const float wot = float(width) * 0.5f;
    const float hot = float(height) * 0.5f;

    glViewportIndexedf(0, 0.0f, 0.0f, wot, hot);
    glViewportIndexedf(1, wot,  0.0f, wot, hot);
    glViewportIndexedf(2, 0.0f, hot,  wot, hot);
    glViewportIndexedf(3, wot,  hot,  wot, hot);
}
Vertex shader:
#version 410 core

layout (location = 0) in vec3 iPos;

// uniform mat4 model;
// uniform mat4 view;
// uniform mat4 proj;

void main()
{
    gl_Position = vec4(iPos, 1.0);
}
Fragment shader:
#version 410 core

in vec4 gs_color;

out vec4 color;

void main()
{
    color = gs_color;
}
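To tie the example together, here is a hedged sketch (not from the original article) of the host side: uploading the 4 model matrices and the projection matrix, then issuing a single draw call. program, vao, vertex_count, model_matrices and proj_matrix are illustrative placeholders.

// Placeholders: program, vao, vertex_count, model_matrices[4][16], proj_matrix[16].
glUseProgram(program);

// model_matrix[4] in the geometry shader: 4 matrices, one per viewport.
GLint model_loc = glGetUniformLocation(program, "model_matrix");
glUniformMatrix4fv(model_loc, 4, GL_FALSE, &model_matrices[0][0]);   // 4 column-major mat4s

GLint proj_loc = glGetUniformLocation(program, "proj");
glUniformMatrix4fv(proj_loc, 1, GL_FALSE, &proj_matrix[0]);

// One draw call; the instanced geometry shader fans it out to the 4 viewports.
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, 0, vertex_count);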
Linux OpenGL Practice, Chapter 13: Geometry Shader