"Step-by-step OpenGL 23"-Shadow Map 1


Tutorial 23: Shadow Map (Part 1)

Original: http://ogldev.atspace.co.uk/www/tutorial23/tutorial23.html

CSDN full Edition column: http://blog.csdn.net/column/details/13062.html

Background

Shadows and light are closely connected: you need light in order to cast a shadow. There are many techniques for generating shadows, and in the next two chapters we will study a basic and simple one called shadow mapping.

When it comes to rasterization and shading, you might ask: is this pixel in shadow? Put differently, does the path from the light source to this pixel pass through another object? If it does, the pixel is in shadow (assuming the other object is opaque); otherwise it is not. To some extent, this question resembles one we asked in an earlier tutorial: when two objects overlap, how do we determine which one is closer to the viewer? If we place the camera at the position of the light source, the two problems become one and the same. Pixels that lose the depth test are the ones in shadow, and only pixels that win the depth test are lit: they have a direct, unobstructed line to the light source. This is the principle behind the shadow map.
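The depth-test intuition above can be sketched on the CPU (a hypothetical simulation, not part of the tutorial's code): several fragments compete for the same pixel, and only the one closest to the viewer, or in our case the light, survives in the depth buffer.

```cpp
#include <algorithm>
#include <vector>

// Simulate one depth-buffer cell: of all fragments rasterized to the same
// pixel, only the smallest depth (closest to the viewer/light) is kept.
float depthTestWinner(const std::vector<float>& fragmentDepths)
{
    float stored = 1.0f; // depth buffer cleared to the far plane
    for (float d : fragmentDepths) {
        stored = std::min(stored, d); // GL_LESS-style depth test
    }
    return stored;
}
```

The fragments that lose this comparison are exactly the points that end up in shadow when the "viewer" is the light source.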

So the depth test can help us detect whether a pixel is in shadow, but there is a problem: the camera and the light source are usually not in the same place, and the depth test normally answers whether an object is visible from the camera's point of view. How do we use it for shadow testing when the light source is somewhere else? The solution is to render the scene twice. First we render from the point of view of the light source. The results of this pass are not written to the color buffer; instead, the depth values closest to the light source are rendered into a depth buffer created by the application (rather than the one provided automatically by GLUT). Second, we render from the camera's point of view, with the depth buffer we created bound to the fragment shader for reading. For each pixel we fetch the corresponding depth value from this buffer and also calculate the distance from the pixel to the light source. If the two values are (almost) equal, this pixel is the one closest to the light source, which is why its depth value was written into the depth buffer; the pixel is therefore lit, and we calculate its color as usual. If the two values differ, some other pixel blocks this one when viewed from the light source, and we apply a shadow factor in the color calculation to mimic the shadow effect. Look at the following picture:

The scene above consists of two objects: a surface and a cube. The light source is in the upper-left corner, pointing at the cube. In the first render pass we fill a depth buffer from the point of view of the light source. Consider the three points A, B and C in the picture. When B is rendered, its depth value goes into the depth buffer because nothing lies between B and the light source; it is by default the closest point to the light on that line. When A and C are rendered, however, they "compete" for the same cell in the depth buffer. Both points lie on the same line from the light source, so after the perspective projection the rasterizer finds that both map to the same pixel on the screen. The depth test takes over, point C "wins", and the depth value of C is written into the depth buffer.

In the second render pass we render the surface and the cube from the camera's point of view. Besides the usual per-pixel work in the shader, we also calculate the distance from the light source to the pixel and compare it with the value in the depth buffer. When we rasterize point B, the two values should be almost equal (some small gap may remain due to interpolation and floating-point precision), so we conclude that B is not in shadow and calculate its color as usual. When we rasterize point A, we find that the stored depth value is noticeably smaller than the distance from A to the light source. We therefore conclude that A is in shadow and apply a shadow factor to it, making it darker than it would otherwise be.
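The comparison just described can be sketched as a small CPU-side function (the names and the epsilon value are illustrative, not from the tutorial): the pixel's depth as seen from the light is compared with the depth stored in the shadow map, with a small tolerance absorbing interpolation and floating-point error.

```cpp
// Second-pass shadow test (CPU sketch): compare the depth stored in the
// shadow map with this pixel's depth as seen from the light source.
// The epsilon absorbs interpolation and floating-point precision gaps.
float shadowFactor(float shadowMapDepth, float pixelDepthFromLight,
                   float epsilon = 0.001f)
{
    if (pixelDepthFromLight - shadowMapDepth > epsilon) {
        return 0.5f; // something closer to the light occludes this pixel: darken it
    }
    return 1.0f;     // this pixel is the closest one to the light: fully lit
}
```

For point B the two depths are almost equal and the factor is 1.0; for point A the stored depth (written by C) is much smaller, so A gets the darkening factor.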

In short, this is the shadow mapping algorithm (the depth buffer we fill in the first render pass is called the "shadow map"). We will study it in two stages. In the first stage (this tutorial) we learn how to render depth information into the shadow map. Rendering into a texture created by the application is known as "render to texture". We will then display the shadow map on screen using a simple texture mapping technique, which is a useful debugging step: to get a correct shadow effect, it is essential that the shadow map itself is drawn correctly. In the next tutorial we will see how to use the shadow map to calculate whether a vertex is in shadow.

The model we use in this tutorial is a simple quadrilateral mesh that serves to display the shadow map. The quad is made up of two triangles, with texture coordinates set to cover the entire texture. When the quad is rendered, the texture coordinates are interpolated by the rasterizer, so we sample the whole texture and display it on the screen.
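Such a quad might look like the following (a hypothetical vertex layout for illustration, not the tutorial's actual mesh data): two triangles whose texture coordinates span the whole [0,1] range, so the rasterizer's interpolation covers the entire shadow map.

```cpp
// A screen-aligned quad as two triangles. Texture coordinates run from
// (0,0) to (1,1), so interpolation samples the whole shadow map texture.
struct Vertex { float x, y, z; float u, v; };

static const Vertex quad[6] = {
    // first triangle
    {-1.0f, -1.0f, 0.0f, 0.0f, 0.0f},
    { 1.0f, -1.0f, 0.0f, 1.0f, 0.0f},
    { 1.0f,  1.0f, 0.0f, 1.0f, 1.0f},
    // second triangle
    {-1.0f, -1.0f, 0.0f, 0.0f, 0.0f},
    { 1.0f,  1.0f, 0.0f, 1.0f, 1.0f},
    {-1.0f,  1.0f, 0.0f, 0.0f, 1.0f},
};
```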

Source Code Explanation

(shadow_map_fbo.h:50)

class ShadowMapFBO
{
public:
    ShadowMapFBO();

    ~ShadowMapFBO();

    bool Init(unsigned int WindowWidth, unsigned int WindowHeight);

    void BindForWriting();

    void BindForReading(GLenum TextureUnit);

private:
    GLuint m_fbo;
    GLuint m_shadowMap;
};

The destination of the 3D pipeline's output in OpenGL is called a framebuffer object (FBO). An FBO can have a color buffer (displayed on the screen), a depth buffer and several other useful buffers attached to it. When glutInitDisplayMode() is called, it creates the default framebuffer using the specified parameters; this framebuffer is managed by the windowing system and cannot be deleted by OpenGL. In addition to the default framebuffer, an application can create FBOs of its own. These objects are under the application's control and can be manipulated for various techniques. The ShadowMapFBO class provides an easy-to-use interface to the FBO used for the shadow mapping technique. It holds two OpenGL handles. The 'm_fbo' handle represents the actual FBO, which encapsulates the complete state of the framebuffer; once the object is created and configured, we can switch framebuffers simply by binding a different object. Note that only the default framebuffer can be displayed on the screen. Framebuffers created by the application can only be used for "offscreen rendering": an intermediate render pass (such as our shadow map pass) whose result is later used by the "real" render pass that goes to the screen.

By itself, the framebuffer is only a placeholder. To make it usable we need to attach textures to one or more of its attachment points; the textures provide the actual storage of the framebuffer. OpenGL defines the following attachment points:

    • GL_COLOR_ATTACHMENTi: a texture attached here receives color output from the fragment shader. The 'i' suffix means that multiple textures can be attached as color attachments simultaneously; a mechanism in the fragment shader makes it possible to render into several color buffers at once.
    • GL_DEPTH_ATTACHMENT: a texture attached here receives the results of the depth test.
    • GL_STENCIL_ATTACHMENT: a texture attached here serves as the stencil buffer. The stencil buffer can limit the area of rasterization and is used in various techniques.
    • GL_DEPTH_STENCIL_ATTACHMENT: a combination of the depth and stencil buffers, since the two are often used together.

For the shadow mapping technique we only need a depth buffer. The member 'm_shadowMap' is the handle of the texture attached to the GL_DEPTH_ATTACHMENT attachment point. ShadowMapFBO also provides the two methods used in the render functions: we call BindForWriting() before rendering into the shadow map (the first pass), and BindForReading() at the start of the second pass, when the shadow map is sampled.

(shadow_map_fbo.cpp:43)

glGenFramebuffers(1, &m_fbo);

Here we create the FBO. As with textures and buffer objects, we specify the address of an array of GLuints and its size, and the array is filled with handles.

(shadow_map_fbo.cpp:46)

glGenTextures(1, &m_shadowMap);
glBindTexture(GL_TEXTURE_2D, m_shadowMap);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, WindowWidth, WindowHeight, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

Next we create the texture that will serve as the shadow map. This is a standard 2D texture with a specific configuration, chosen to achieve the following:

    1. The internal format of the texture is GL_DEPTH_COMPONENT. Unlike before, where we usually set the internal format to a color type such as GL_RGB, here each texel holds a single floating-point value representing a normalized depth.
    2. The last parameter of glTexImage2D is NULL, meaning we supply no data to initialize the buffer. The buffer is meant to hold the depth values of each frame, and these change from frame to frame; whenever a new frame starts we clear the buffer with glClear(). That is all the initialization we need.
    3. We tell OpenGL to clamp texture coordinates that fall outside [0,1] to the edge. Coordinates can go out of range when the projection window as seen from the camera extends beyond the projection window as seen from the light source. To avoid artifacts such as the shadow repeating elsewhere due to wraparound, we clamp the texture coordinates.
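The effect of GL_CLAMP_TO_EDGE on a texture coordinate can be sketched on the CPU (an illustrative analogue, not OpenGL's internal code): the coordinate is pinned into [0,1], so sampling outside the shadow map keeps returning the edge texel instead of wrapping around.

```cpp
#include <algorithm>

// CPU analogue of GL_CLAMP_TO_EDGE: an out-of-range texture coordinate is
// clamped into [0,1], so the edge texel is repeated instead of wrapping.
float clampToEdge(float texCoord)
{
    return std::max(0.0f, std::min(1.0f, texCoord));
}
```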
(shadow_map_fbo.cpp:54)

glBindFramebuffer(GL_FRAMEBUFFER, m_fbo);

We have generated the FBO and created and configured the texture for the shadow map; now we need to attach the texture to the FBO. First we bind the FBO, after which all FBO operations affect it. The function takes the FBO handle and the desired target: GL_FRAMEBUFFER, GL_DRAW_FRAMEBUFFER or GL_READ_FRAMEBUFFER. GL_READ_FRAMEBUFFER is used when we want to read from the FBO with glReadPixels() (not used in this tutorial); GL_DRAW_FRAMEBUFFER is used when we want to render the scene into the FBO; GL_FRAMEBUFFER updates both the read and write state and is recommended for initializing the FBO. When we actually start rendering we will use GL_DRAW_FRAMEBUFFER.

(shadow_map_fbo.cpp:55)
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, m_shadowMap, 0);

Here we attach the shadow map texture to the depth attachment point of the FBO. The last parameter of this function is the mipmap level to use. Mipmapping is a texture mapping feature that stores the same texture at multiple resolutions: level 0 is the full resolution, and the resolution shrinks as the level increases. Combining mipmapped textures with trilinear filtering can produce better-looking results. Here we have only a single mipmap level, so we use 0. We pass the shadow map handle as the fourth parameter; passing 0 instead would detach the current texture (in our case, the depth texture) from the attachment point.
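The level-to-resolution relationship mentioned above can be written down directly (a small helper for illustration, not part of the tutorial's code): each mipmap level halves the resolution of the previous one, bottoming out at 1x1.

```cpp
#include <algorithm>

// Resolution of one side of a square texture at a given mipmap level:
// level 0 is the full resolution, and each level halves it (minimum 1).
unsigned int mipLevelSize(unsigned int baseSize, unsigned int level)
{
    return std::max(1u, baseSize >> level);
}
```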

(shadow_map_fbo.cpp:58)

glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);

Since we are not going to render into a color buffer (we only output depth), we explicitly disable writes to the color buffer with the call above. By default, the draw buffer is GL_COLOR_ATTACHMENT0, but our FBO does not even have a color texture attached, so it is best to tell OpenGL exactly what we intend. The valid parameters for this function are GL_NONE and GL_COLOR_ATTACHMENT0 through GL_COLOR_ATTACHMENTm, where 'm' is GL_MAX_COLOR_ATTACHMENTS - 1. These are valid only for FBOs. For the default framebuffer the valid parameters are GL_NONE, GL_FRONT_LEFT, GL_FRONT_RIGHT, GL_BACK_LEFT and GL_BACK_RIGHT, which let you render directly into the front or back buffer (each of which has a left and a right buffer). We also set the read buffer to GL_NONE (remember that we do not intend to call any of the glReadPixels family of functions). This is mainly to avoid problems on GPUs that support only OpenGL 3.x and not 4.x.

(shadow_map_fbo.cpp:61)

GLenum Status = glCheckFramebufferStatus(GL_FRAMEBUFFER);

if (Status != GL_FRAMEBUFFER_COMPLETE) {
    printf("FB error, status: 0x%x\n", Status);
    return false;
}

Once the FBO is configured, it is important to verify that its status is "complete" as defined by OpenGL, ensuring that no error occurred and that the framebuffer is ready for use. The code above performs this check.
(shadow_map_fbo.cpp:72)

void ShadowMapFBO::BindForWriting()
{
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, m_fbo);
}

During rendering we need to switch the render target between the shadow map and the default framebuffer, and in the second pass we also need to bind the shadow map as input. This function and the next one encapsulate that work for easy use. The function above binds the FBO for writing, and we will call it before the first render pass.

(shadow_map_fbo.cpp:78)

void ShadowMapFBO::BindForReading(GLenum TextureUnit)
{
    glActiveTexture(TextureUnit);
    glBindTexture(GL_TEXTURE_2D, m_shadowMap);
}

This function is called before the second render pass to bind the shadow map for reading. Note that we bind the texture object, not the FBO itself. The function takes a texture unit and binds the shadow map to it. The texture unit index must be kept in sync with the shader, because the shader has a sampler2D uniform variable that accesses the texture. Note that glActiveTexture takes an enum value for the texture unit (GL_TEXTURE0, GL_TEXTURE1 and so on), while the uniform in the shader expects the bare index (0, 1 and so on), a mismatch that causes many bugs.
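The index/enum mismatch can be made explicit with two tiny conversion helpers (hypothetical names; 0x84C0 is the value of GL_TEXTURE0, defined locally here so the sketch is self-contained):

```cpp
// The bug described above: glActiveTexture() wants the enum GL_TEXTURE0 + i,
// while the sampler2D uniform (set with glUniform1i) wants the bare index i.
const unsigned int MY_GL_TEXTURE0 = 0x84C0; // value of GL_TEXTURE0

unsigned int textureUnitEnum(unsigned int samplerIndex)
{
    return MY_GL_TEXTURE0 + samplerIndex; // pass this to glActiveTexture()
}

unsigned int samplerUniformValue(unsigned int textureUnit)
{
    return textureUnit - MY_GL_TEXTURE0; // pass this to glUniform1i()
}
```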

(shadow_map.vs)

#version 330

layout (location = 0) in vec3 Position;
layout (location = 1) in vec2 TexCoord;
layout (location = 2) in vec3 Normal;

uniform mat4 gWVP;

out vec2 TexCoordOut;

void main()
{
    gl_Position = gWVP * vec4(Position, 1.0);
    TexCoordOut = TexCoord;
}

We will use the same shader program for both render passes. The vertex shader is used in both, while the fragment shader is effectively used only in the second one. Since we disable writes to the color buffer during the first pass, the fragment shader is simply useless there. The vertex shader above is very simple: it transforms the position into clip space via the WVP matrix and passes the texture coordinates through to the fragment shader. In the first pass the texture coordinates are superfluous (there is no fragment stage at work), but that does no harm. From the shader's point of view there is no difference between the depth pass and the real render pass; the real difference is in the application, which supplies a WVP matrix built from the light source's point of view in the first pass and one built from the camera's point of view in the second. In the first pass the Z buffer is filled with the Z values closest to the light source; in the second, with the Z values closest to the camera. The texture coordinates are needed in the fragment shader during the second pass because we sample from the shadow map, which is then an input to the shader.

(shadow_map.fs)

#version 330

in vec2 TexCoordOut;
uniform sampler2D gShadowMap;

out vec4 FragColor;

void main()
{
    float Depth = texture(gShadowMap, TexCoordOut).x;
    Depth = 1.0 - (1.0 - Depth) * 25.0;
    FragColor = vec4(Depth);
}

This is the fragment shader used to display the shadow map in the second render pass. The 2D texture coordinates are used to sample the shadow map. The shadow map texture was created with GL_DEPTH_COMPONENT as its internal format, meaning each texel holds a single floating-point value rather than a color; that is why we use '.x' when sampling. When displaying the contents of a depth buffer, a common problem is that the result is not distinct enough. So, after fetching the depth value from the shadow map, we magnify the distance from the current point to the far plane (where Z is 1) and subtract the magnified value from 1 to make the effect visible. We use this value for every color channel of the fragment, which gives a grayscale image (white at the far clipping plane and black at the near clipping plane).

Now let's see how the code pieces above are combined in the application.

(tutorial23.cpp:106)

virtual void RenderSceneCB()
{
    m_pGameCamera->OnRender();

    m_scale += 0.05f;

    ShadowMapPass();
    RenderPass();

    glutSwapBuffers();
}

The main render function has become much simpler, since most of the work has moved into other functions. First we take care of the "global" things, such as updating the camera position and the class member used to rotate the object. Then we call ShadowMapPass() to render the depth information into the shadow map texture, followed by RenderPass() to display the texture. Finally, glutSwapBuffers() displays the result on the screen.
(tutorial23.cpp:117)

virtual void ShadowMapPass()
{
    m_shadowMapFBO.BindForWriting();

    glClear(GL_DEPTH_BUFFER_BIT);

    Pipeline p;
    p.Scale(0.1f, 0.1f, 0.1f);
    p.Rotate(0.0f, m_scale, 0.0f);
    p.WorldPos(0.0f, 0.0f, 5.0f);
    p.SetCamera(m_spotLight.Position, m_spotLight.Direction, Vector3f(0.0f, 1.0f, 0.0f));
    p.SetPerspectiveProj(20.0f, WINDOW_WIDTH, WINDOW_HEIGHT, 1.0f, 50.0f);
    m_pShadowMapTech->SetWVP(p.GetWVPTrans());

    m_pMesh->Render();

    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}

Before rendering into the shadow map, we bind our FBO. From this point on, all depth values are rendered into the shadow map while color writes are discarded. We clear only the depth buffer before rendering starts, then set up a Pipeline object to render the mesh (a tank in this example). The one notable point here is that the camera settings are based on the position and direction of the spotlight. We render the mesh and then switch back to the default framebuffer by binding FBO zero.

(tutorial23.cpp:135)

virtual void RenderPass()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    m_pShadowMapTech->SetTextureUnit(0);
    m_shadowMapFBO.BindForReading(GL_TEXTURE0);

    Pipeline p;
    p.Scale(5.0f, 5.0f, 5.0f);
    p.WorldPos(0.0f, 0.0f, 10.0f);
    p.SetCamera(m_pGameCamera->GetPos(), m_pGameCamera->GetTarget(), m_pGameCamera->GetUp());
    p.SetPerspectiveProj(30.0f, WINDOW_WIDTH, WINDOW_HEIGHT, 1.0f, 50.0f);
    m_pShadowMapTech->SetWVP(p.GetWVPTrans());

    m_pQuad->Render();
}

The second render pass starts by clearing the color and depth buffers, which belong to the default framebuffer. We tell the shader to use texture unit 0 and bind the shadow map for reading on that unit. From here everything proceeds as usual: we scale the quad up, place it directly in front of the camera, and render it. During rasterization the shadow map is sampled and displayed on the quad.

Note: in this tutorial's code, we no longer automatically load a white texture when a mesh file does not specify one, because the shadow map may now be bound in its place. If the mesh has no texture, we bind nothing and leave it to the calling code to bind its own texture.

