OpenGL Deferred Shading

Source: Internet
Author: User

Original address: http://www.verydemo.com/demo_c284_i6147.html

First, an Introduction to Deferred Shading

Deferred shading is a technique that defers the lighting/shading calculation to a second pass, so that the expensive shading work is never performed more than once per pixel.

The basic ideas are as follows:

1. In the first pass we render the scene, but instead of applying a reflection model to compute fragment colors as usual, we simply store the geometric information (position coordinates, normal vectors, texture coordinates, reflectivity, and so on) in an intermediate buffer. Such a buffer is called a G-buffer (G for geometry).

2. In the second pass we read the information back from the G-buffer and apply the reflection model to compute the final color of each pixel.

Deferred shading lets us avoid applying the reflection model to fragments that end up invisible. For example, consider a pixel located in a region where two polygons overlap. The fragment shader would normally be executed once for each polygon covering that pixel, yet only one of the two results becomes the pixel's final color (assuming blending is disabled), so one of the calculations is wasted. With deferred shading, the reflection-model calculation is deferred until all geometry has been processed and the visible surface at each pixel position is known. Thus, for each pixel on the screen, the reflection model is evaluated exactly once.

Deferred shading is easy to understand and easy to use, and it makes very complex lighting/reflection models practical.

Second, a Deferred Shading Example

The following example uses deferred shading to render a scene containing a teapot and a torus. The result looks like this:

Figure: the rendered scene

In this example we store position coordinates, normals, and the diffuse reflectivity in the G-buffer. In the second pass we use the data in the G-buffer to evaluate a diffuse lighting model.

The G-buffer consists of three textures, storing the position coordinates, the normals, and the diffuse reflectivity respectively. They correspond to three sampler uniforms: PositionTex, NormalTex, and ColorTex.

All three textures are attached to a single framebuffer object (FBO).
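Before the second pass, the three sampler uniforms must be pointed at the texture units the G-buffer textures were bound to. A minimal sketch, assuming the sampler names above and an illustrative `programHandle` for the linked shader program:

```cpp
// Bind each G-buffer sampler uniform to the texture unit its texture lives on
// (units 0-2, matching the glActiveTexture calls in the FBO setup code below).
glUseProgram(programHandle);
glUniform1i(glGetUniformLocation(programHandle, "PositionTex"), 0);
glUniform1i(glGetUniformLocation(programHandle, "NormalTex"), 1);
glUniform1i(glGetUniformLocation(programHandle, "ColorTex"), 2);
```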

Here is the code that creates the FBO containing the G-buffer:

GLuint deferredFBO;
GLuint depthBuf, posTex, normTex, colorTex;

// Create and bind the FBO
glGenFramebuffers(1, &deferredFBO);
glBindFramebuffer(GL_FRAMEBUFFER, deferredFBO);

// The depth buffer
glGenRenderbuffers(1, &depthBuf);
glBindRenderbuffer(GL_RENDERBUFFER, depthBuf);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, width, height);

// The position buffer
glActiveTexture(GL_TEXTURE0);   // Use texture unit 0
glGenTextures(1, &posTex);
glBindTexture(GL_TEXTURE_2D, posTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB32F, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

// The normal buffer
glActiveTexture(GL_TEXTURE1);
glGenTextures(1, &normTex);
glBindTexture(GL_TEXTURE_2D, normTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB32F, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

// The color buffer
glActiveTexture(GL_TEXTURE2);
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

// Attach the images to the framebuffer
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthBuf);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, posTex, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, normTex, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT2, GL_TEXTURE_2D, colorTex, 0);

GLenum drawBuffers[] = {GL_NONE, GL_COLOR_ATTACHMENT0,
                        GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2};
glDrawBuffers(4, drawBuffers);

glBindFramebuffer(GL_FRAMEBUFFER, 0);


Note that the three textures are attached to the FBO's color attachment points 0, 1, and 2 with glFramebufferTexture2D(). glDrawBuffers() is then called to connect them to the output variables of the fragment shader.

glDrawBuffers specifies the mapping between fragment shader outputs and FBO attachments: the output variable at location i is written to the buffer named by the i-th entry of the array. Since the first entry is GL_NONE, the location-0 output (FragColor) is discarded during the first pass, while the outputs at locations 1, 2, and 3 in the fragment shader (complete code listed below) are PositionData, NormalData, and ColorData.

The vertex shader is simple: it transforms the position coordinates and normals into eye space and passes them on to the fragment shader. The texture coordinates are passed through unchanged.
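The original article does not list the vertex shader; a minimal sketch consistent with the fragment shader's inputs (Position, Normal, TexCoord) might look as follows, assuming the usual ModelViewMatrix / NormalMatrix / MVP uniforms (those names are ours, not from the original):

```glsl
#version 400

layout (location = 0) in vec3 VertexPosition;
layout (location = 1) in vec3 VertexNormal;
layout (location = 2) in vec2 VertexTexCoord;

out vec3 Position;  // eye-space position
out vec3 Normal;    // eye-space normal
out vec2 TexCoord;  // passed through unchanged

uniform mat4 ModelViewMatrix;
uniform mat3 NormalMatrix;
uniform mat4 MVP;

void main()
{
    Normal = normalize(NormalMatrix * VertexNormal);
    Position = vec3(ModelViewMatrix * vec4(VertexPosition, 1.0));
    TexCoord = VertexTexCoord;
    gl_Position = MVP * vec4(VertexPosition, 1.0);
}
```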

The fragment shader is as follows:

#version 400

struct LightInfo {
    vec4 Position;  // Light position in eye coords.
    vec3 Intensity; // A,D,S intensity
};
uniform LightInfo Light;

struct MaterialInfo {
    vec3 Kd; // Diffuse reflectivity
};
uniform MaterialInfo Material;

subroutine void RenderPassType();
subroutine uniform RenderPassType RenderPass;

uniform sampler2D PositionTex, NormalTex, ColorTex;

in vec3 Position;
in vec3 Normal;
in vec2 TexCoord;

layout (location = 0) out vec4 FragColor;
layout (location = 1) out vec3 PositionData;
layout (location = 2) out vec3 NormalData;
layout (location = 3) out vec3 ColorData;

vec3 diffuseModel(vec3 pos, vec3 norm, vec3 diff)
{
    vec3 s = normalize(vec3(Light.Position) - pos);
    float sDotN = max(dot(s, norm), 0.0);
    vec3 diffuse = Light.Intensity * diff * sDotN;
    return diffuse;
}

subroutine (RenderPassType)
void pass1()
{
    // Store position, normal, and diffuse color in textures
    PositionData = Position;
    NormalData = Normal;
    ColorData = Material.Kd;
}

subroutine (RenderPassType)
void pass2()
{
    // Retrieve position and normal information from textures
    vec3 pos = vec3(texture(PositionTex, TexCoord));
    vec3 norm = vec3(texture(NormalTex, TexCoord));
    vec3 diffColor = vec3(texture(ColorTex, TexCoord));
    FragColor = vec4(diffuseModel(pos, norm, diffColor), 1.0);
}

void main()
{
    // This will call either pass1 or pass2
    RenderPass();
}

The fragment shader declares uniform variables holding the light-source and material information needed for the lighting calculation.
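The diffuse model evaluated in the second pass is plain Lambertian shading. For reference, here is a CPU-side sketch of the same computation; the vector type and helper functions are our own, not part of the original code:

```cpp
#include <array>
#include <cmath>

using vec3 = std::array<float, 3>;

static vec3 normalize(vec3 v) {
    float len = std::sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
    return {v[0]/len, v[1]/len, v[2]/len};
}

static float dot(vec3 a, vec3 b) {
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
}

// Same math as the shader's diffuseModel:
// diffuse = Light.Intensity * Kd * max(dot(s, n), 0)
vec3 diffuseModel(vec3 pos, vec3 norm, vec3 kd,
                  vec3 lightPos, vec3 intensity) {
    vec3 s = normalize({lightPos[0] - pos[0],
                        lightPos[1] - pos[1],
                        lightPos[2] - pos[2]});
    float sDotN = std::fmax(dot(s, norm), 0.0f);
    return {intensity[0] * kd[0] * sDotN,
            intensity[1] * kd[1] * sDotN,
            intensity[2] * kd[2] * sDotN};
}
```

A surface facing the light directly receives the full intensity; a surface facing away receives nothing, because the dot product is clamped to zero.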

The fragment shader uses GLSL subroutines: the two functions pass1 and pass2 implement the first and second passes respectively, and the OpenGL application selects which one runs by setting the subroutine uniform.

Inside the OpenGL application, the steps for the first pass are:

1. Bind the FBO.

2. Clear the color and depth buffers, select the pass1 subroutine, and enable depth testing.

3. Render the scene.
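Those steps might look as follows in the application. The subroutine selection uses the standard glGetSubroutineIndex / glUniformSubroutinesuiv calls; programHandle, deferredFBO, and renderScene() are illustrative names:

```cpp
// Pass 1: render geometry into the G-buffer.
glBindFramebuffer(GL_FRAMEBUFFER, deferredFBO);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_DEPTH_TEST);

// Select the pass1 subroutine in the fragment shader.
GLuint pass1Index =
    glGetSubroutineIndex(programHandle, GL_FRAGMENT_SHADER, "pass1");
glUniformSubroutinesuiv(GL_FRAGMENT_SHADER, 1, &pass1Index);

renderScene();  // draw the teapot and the torus
```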

The steps for the second pass are:

1. Unbind the FBO (bind framebuffer 0) so that the scene is rendered to the default framebuffer rather than into the FBO, and can therefore be displayed on the screen.

2. Clear the color buffer and disable depth testing.

3. Select the pass2 subroutine and render a full-screen quad with texture coordinates ranging from 0 to 1 in each direction. The lighting model is evaluated and the final fragment color is written.
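A matching sketch of the second pass, again with illustrative names (renderFullScreenQuad() stands in for whatever draws the quad):

```cpp
// Pass 2: shade a full-screen quad from the G-buffer.
glBindFramebuffer(GL_FRAMEBUFFER, 0);  // render to the default framebuffer
glClear(GL_COLOR_BUFFER_BIT);
glDisable(GL_DEPTH_TEST);

// Select the pass2 subroutine in the fragment shader.
GLuint pass2Index =
    glGetSubroutineIndex(programHandle, GL_FRAGMENT_SHADER, "pass2");
glUniformSubroutinesuiv(GL_FRAGMENT_SHADER, 1, &pass2Index);

renderFullScreenQuad();  // texture coordinates run from 0 to 1 in each direction
```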

Third, When to Use Deferred Shading

In the graphics community, the advantages and disadvantages of deferred shading are debated. The technique is not suitable for every situation; whether it pays off depends on the needs of your application, so weigh its costs and benefits before adopting it.

One important drawback of deferred shading is that hardware multisample antialiasing cannot be used. Since shading takes place in the second pass, we would need multiple samples per pixel there, but the G-buffer stores only one sample per pixel.

Another drawback is that blending cannot be used.

Resources:

The 9th chapter of GPU Gems 2

The 19th chapter of GPU Gems 3

http://blog.csdn.net/zhuyingqingfen/article/details/19406163

