"D3d11 Game Programming" study Note 21: Cube Mapping and one of its applications: the implementation of the Sky box


(Note: "D3d11 game Programming" study Note series by CSDN author Bonchoix wrote, reproduced please indicate the source: Http://blog.csdn.net/BonChoix, thank you ~)

This section covers a more advanced texture-mapping topic: cube mapping.

1. Introduction

As the name suggests, a cube map is a texture map built from six square images, one for each face of a cube. Because the cube is axis-aligned, each face can be uniquely identified by one of the six axis directions of the coordinate system: positive X, negative X, positive Y, negative Y, positive Z, and negative Z. The six textures corresponding to these six faces are collectively called a cube map. Below is a cube map unfolded from the cube:

2. Texture mapping method

Ordinarily, texture mapping is driven by the texture coordinates (u, v) stored on each vertex. For a 2D texture, (u, v) alone pins down the texel. For a cube map, however, (u, v) is not enough, because there are six textures to choose from. We therefore need a mapping method that determines, in one step, both which texture to use and which texel within it.

In cube mapping, the lookup is performed with a three-dimensional vector. Picture the vector starting at the center of the cube and pointing outward; the texel at the point where it intersects the cube's surface is the result of the lookup. A ray from the center intersects the cube surface at exactly one point, so this vector uniquely determines the mapping.

The following diagram shows the idea on a 2D cross-section: v is the lookup vector, the square represents the cube map, and the texel at their intersection is the result we want.

Here we derive the mapping process from a mathematical perspective:

Step 1: given a 3D vector [x, y, z], find the component with the largest absolute value. For the example [-3.2, 5.1, -8.4], that is the z component, and it selects which of the six images of the cube map to use. Since -8.4 is negative, it corresponds to the cube's negative Z face, so the remaining steps work with the texture for the negative Z face;

Step 2: divide the other two components by that largest component, which gives the 2D vector (3.2/8.4, -5.1/8.4). It is easy to see that both values lie in [-1, 1].

Step 3: remap the 2D vector obtained in the previous step to [0, 1]. Converting a number from [-1, 1] to [0, 1] is simple: (x + 1)/2. For the example above, (3.2/8.4 + 1)/2 ≈ 0.69 and (-5.1/8.4 + 1)/2 ≈ 0.20, so we get the 2D vector (0.69, 0.20). This is the texture coordinate used to fetch the texel from the negative Z face's texture.

To summarize, the lookup through a 3D vector in cube mapping takes three steps (a small code sketch follows the list):

1. Select one of the six textures based on the component with the largest absolute value;

2. Divide the other two components by that largest component to form a 2D vector;

3. Remap the values of this 2D vector to [0, 1] to obtain ordinary 2D texture coordinates, then sample the selected texture in the usual way.
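To make these three steps concrete, here is a minimal C++ sketch that follows the derivation above exactly. Treat it purely as an illustration of the idea: real GPU cube-map conventions additionally fix particular u/v orientations per face, so the coordinates hardware computes can differ in sign.

    #include <cmath>
    #include <cstdio>

    // Illustration of the three-step lookup described above (not the exact
    // per-face sign conventions used by real GPU hardware).
    void CubeMapLookup(float x, float y, float z, int& face, float& u, float& v)
    {
        float ax = std::fabs(x), ay = std::fabs(y), az = std::fabs(z);

        // Step 1: pick the dominant component m; it selects one of the six faces.
        float m, a, b;
        if (ax >= ay && ax >= az)      { m = x; a = y; b = z; face = (x > 0) ? 0 : 1; } // +X / -X
        else if (ay >= ax && ay >= az) { m = y; a = x; b = z; face = (y > 0) ? 2 : 3; } // +Y / -Y
        else                           { m = z; a = x; b = y; face = (z > 0) ? 4 : 5; } // +Z / -Z

        // Step 2: divide the other two components by the dominant one -> values in [-1, 1].
        a /= m;
        b /= m;

        // Step 3: remap [-1, 1] to [0, 1] to get ordinary texture coordinates.
        u = (a + 1.0f) * 0.5f;
        v = (b + 1.0f) * 0.5f;
    }

    int main()
    {
        int face; float u, v;
        CubeMapLookup(-3.2f, 5.1f, -8.4f, face, u, v);            // the example from the text
        std::printf("face %d, uv = (%.2f, %.2f)\n", face, u, v);  // face 5 (-Z), uv = (0.69, 0.20)
        return 0;
    }

In practice none of this is written by hand: the Sample call shown in the next section performs the whole lookup in hardware.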

3. Using a cube map in D3D11

In HLSL, sampling a cube map works exactly like sampling a regular 2D texture; the only difference is that the texture is declared with the dedicated type TextureCube. As shown below:

    TextureCube g_CubeMap;
    SamplerState samTexture
    {
        Filter = MIN_MAG_MIP_LINEAR;
        AddressU = Wrap;
        AddressV = Wrap;
    };

Given a 3D vector, fetching a value from the cube map looks like this:

    g_CubeMap.Sample(samTexture, dir);

As you can see, this is just like sampling a regular 2D texture; the only difference is that the second parameter is a 3D vector instead of a 2D texture coordinate.

In the C++ program, the type used to store a cube map is still ID3D11Texture2D.

In all of the previous programs we used this interface to store a single 2D texture. In fact it is powerful enough to hold a texture array, i.e. several textures, not just one. Moreover, each texture in the array carries its own mip chain, and all of this is still represented by a single ID3D11Texture2D. The layout of the individual textures and their mip chains when one object holds several textures is illustrated below:

From left to right are the different textures in the array, and the mip chain of each texture runs in the vertical direction.

More details about ID3D11Texture2D will be covered in a dedicated article later; for now we leave those details aside and focus on how a cube map is represented.
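For illustration (the .DDS loading path below does all of this for us), here is a hedged sketch of how a cube map could be described manually in C++: a six-slice texture array with the TEXTURECUBE misc flag, viewed through a shader resource view so HLSL can sample it as a TextureCube. The size, format and the bind/misc flags here are assumptions made for the example:

    // Sketch: describe a 1024x1024 cube map as a texture array with 6 slices.
    D3D11_TEXTURE2D_DESC texDesc = {};
    texDesc.Width            = 1024;
    texDesc.Height           = 1024;
    texDesc.MipLevels        = 0;                                  // 0 = create the full mip chain
    texDesc.ArraySize        = 6;                                  // one slice per cube face
    texDesc.Format           = DXGI_FORMAT_R8G8B8A8_UNORM;
    texDesc.SampleDesc.Count = 1;
    texDesc.Usage            = D3D11_USAGE_DEFAULT;
    texDesc.BindFlags        = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_RENDER_TARGET;
    texDesc.MiscFlags        = D3D11_RESOURCE_MISC_TEXTURECUBE | D3D11_RESOURCE_MISC_GENERATE_MIPS;

    ID3D11Texture2D* cubeTex = 0;
    m_d3dDevice->CreateTexture2D(&texDesc, 0, &cubeTex);

    // View the whole array as a cube map so it matches the TextureCube declared in the shader.
    D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc = {};
    srvDesc.Format                      = texDesc.Format;
    srvDesc.ViewDimension               = D3D11_SRV_DIMENSION_TEXTURECUBE;
    srvDesc.TextureCube.MostDetailedMip = 0;
    srvDesc.TextureCube.MipLevels       = -1;                      // use all mip levels
    ID3D11ShaderResourceView* cubeSRV = 0;
    m_d3dDevice->CreateShaderResourceView(cubeTex, &srvDesc, &cubeSRV);

When the faces are filled individually, the helper D3D11CalcSubresource(mipSlice, arraySlice, mipLevels) maps a (face, mip) pair to the flat subresource index shown in the layout above.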

In addition, D3D's .DDS image format can store all six faces of a cube map in a single file. Loading such a file and obtaining a shader resource view works exactly the same way as for the 2D textures we used before:

    D3DX11CreateShaderResourceViewFromFile(m_d3dDevice, L"Textures/snowcube1024.dds", 0, 0, &m_SkyBoxSRV, 0);

Then, through this view (m_SkyBoxSRV), the cube map can be assigned directly to the TextureCube variable in HLSL.
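Assuming the effects framework is used, as in the earlier notes in this series, handing the view to the shader then looks roughly like this (the effect pointer name m_Effect is an assumption):

    // Sketch: bind the loaded cube-map view to the TextureCube variable in the effect.
    ID3DX11EffectShaderResourceVariable* cubeMapVar =
        m_Effect->GetVariableByName("g_CubeMap")->AsShaderResource();
    cubeMapVar->SetResource(m_SkyBoxSRV);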

4. Using cube mapping to implement a skybox

To give the player an immersive feel, game scenes commonly use a skybox to simulate things that are effectively infinitely far away, such as the sky and distant mountains. A skybox implementation has the following key points:

1. The sky is, in theory, infinitely far away, so every object in the scene lies in front of the skybox and is never occluded by it;

2. As the player moves through the scene, objects in the scene move relative to the player, but the skybox stays fixed relative to the player;

For point 1, we can make the skybox's transformed depth equal to the maximum value of the visible range. Every object in the scene is then rendered correctly in front of it and can occlude the distant sky, which matches reality;

For point 2, there are two ways to keep the skybox static relative to the player. The first: every frame, translate the skybox by the player's current world position, so the player always sits at the center of the skybox and is therefore static relative to it. The second: when rendering the skybox, skip the world transform and set the translation part of the view matrix to [0, 0, 0], so the skybox stays centered at the origin; since the camera is always at the origin of view space, its position relative to the skybox never changes.
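A minimal sketch of the first method, assuming DirectXMath and placeholder names: every frame, build the skybox's world matrix as a translation to the camera position before forming g_WorldViewProj.

    #include <DirectXMath.h>
    using namespace DirectX;

    // Sketch of method one: keep the skybox centered on the camera each frame.
    // camPos, view and proj are assumed to come from the application's camera.
    XMMATRIX XM_CALLCONV BuildSkyWorldViewProj(FXMVECTOR camPos, FXMMATRIX view, CXMMATRIX proj)
    {
        // Translate the sky sphere to the camera position so the camera never leaves its center.
        XMMATRIX world = XMMatrixTranslation(XMVectorGetX(camPos),
                                             XMVectorGetY(camPos),
                                             XMVectorGetZ(camPos));
        return world * view * proj;    // this product is what g_WorldViewProj receives
    }

The second method needs no world matrix at all: copy the view matrix, zero out its translation entries, and use that copy only while drawing the skybox.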

Now let's walk through the concrete steps of the skybox implementation:

1. Geometric representation of the skybox

In real life the sky looks like a hemisphere above us, so here we use a sphere as the skybox geometry;

2. Skybox texture map

To represent the infinitely distant sky, mountains, and so on, we need to wrap the sphere in a texture that covers every viewing direction.

A cube map contains six images, one for each cube face, and together they enclose the whole space inside the cube, so a cube map can be used to simulate the surrounding environment. Used this way, the technique is called environment mapping. The idea is easy to picture: set a camera's field of view to 90 degrees and its aspect ratio to 1, place it at some point, and take one picture in each of the six directions: up, down, left, right, front, and back. Together these pictures capture everything visible from that point, and the cube map built from the six images is the environment map describing the surroundings at the camera's position.
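Building an environment map dynamically is not needed for the static skybox below (it simply loads a finished .DDS cube map), but for illustration, the six capture cameras described above could be set up roughly like this with DirectXMath; the near and far plane values are arbitrary assumptions:

    #include <DirectXMath.h>
    using namespace DirectX;

    // Sketch: one 90-degree, aspect-ratio-1 camera per cube face, all located at 'eye'.
    void XM_CALLCONV BuildCubeFaceCameras(FXMVECTOR eye, XMMATRIX viewProj[6])
    {
        // Look directions and up vectors for the +X, -X, +Y, -Y, +Z, -Z faces.
        static const XMVECTORF32 looks[6] = {
            { { 1.0f, 0.0f, 0.0f, 0.0f } }, { { -1.0f, 0.0f, 0.0f, 0.0f } },
            { { 0.0f, 1.0f, 0.0f, 0.0f } }, { { 0.0f, -1.0f, 0.0f, 0.0f } },
            { { 0.0f, 0.0f, 1.0f, 0.0f } }, { { 0.0f, 0.0f, -1.0f, 0.0f } }
        };
        static const XMVECTORF32 ups[6] = {
            { { 0.0f, 1.0f, 0.0f, 0.0f } }, { { 0.0f, 1.0f, 0.0f, 0.0f } },
            { { 0.0f, 0.0f, -1.0f, 0.0f } }, { { 0.0f, 0.0f, 1.0f, 0.0f } },
            { { 0.0f, 1.0f, 0.0f, 0.0f } }, { { 0.0f, 1.0f, 0.0f, 0.0f } }
        };

        // 90-degree field of view, aspect ratio 1 (assumed near/far planes).
        XMMATRIX proj = XMMatrixPerspectiveFovLH(XM_PIDIV2, 1.0f, 0.1f, 1000.0f);

        for (int i = 0; i < 6; ++i)
        {
            XMMATRIX view = XMMatrixLookToLH(eye, looks[i], ups[i]);
            viewProj[i]   = view * proj;
        }
    }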

3. Skybox texture mapping

With the skybox geometry and the environment map in hand, the question is how to map the environment map onto the sphere. As explained above, cube mapping performs its lookup with a 3D vector; since environment mapping is just a special case of cube mapping, the same lookup applies here.

The following diagram illustrates how the skybox texture lookup works:

As the figure shows, any point on the sphere can be mapped into the environment map using the 3D vector that starts at the sphere's center and passes through that point, which also matches how we observe our surroundings in reality.

4. Program implementation

The core of the skybox implementation is the shader part. On the C++ side we do exactly what earlier programs did: create the sphere with its vertex and index buffers, create the sky texture view, and so on. So only the key points of the shader side are shown here:

First, the vertex shader input structure. Since the skybox is rendered without any lighting, only texture mapping, the vertex structure contains nothing but the vertex position. There is no texture coordinate either, because the lookup uses a 3D vector that we compute ourselves. As follows:

    struct VertexIn
    {
        float3 PosL : POSITION;
    };

For the output vertex, the projection-space position is always required. In addition, to compute the 3D lookup vector in the pixel shader, we also keep the model-space (local) position:

    struct VertexOut
    {
        float4 PosH : SV_POSITION;
        float3 PosL : POSITION;
    };

Now for the key point. As mentioned earlier, the skybox must always appear infinitely far away, i.e. behind every object in the scene, at the far end of the visible range. Because scene sizes vary, hard-coding a specific radius for the sphere is not a good idea: we would have to pick a suitable radius every time the skybox is used. Instead we use a neater trick in the vertex shader, with the following code:

    VertexOut VS(VertexIn vin)
    {
        VertexOut vout;
        vout.PosL = vin.PosL;
        vout.PosH = mul(float4(vin.PosL, 1.0f), g_WorldViewProj).xyww;
        return vout;
    }

The key is the line that transforms the vertex position, specifically the .xyww swizzle. Normally we want the full world-view-projection transform of the vertex, i.e. all four components x, y, z, w. Here, however, we discard the transformed z component and use the w component as the z of the projected position. It looks strange at first, but it is genuinely useful. Recall that a homogeneous coordinate [x, y, z, w] corresponds to the 3D point [x/w, y/w, z/w]. After replacing z with w, the actual point becomes [x/w, y/w, 1]: the x and y coordinates are unchanged and the depth is always 1. Also recall that in projection space the visible depth range is mapped to [0, 1], where 1 means the farthest visible distance. Therefore every point of the skybox geometry, no matter how far it is from the camera in world space, ends up at the maximum visible depth after the projection transform, which is exactly what we require.

In the pixel shader, all we do is the texture lookup to get the color of each sky pixel:

    float4 PS(VertexOut pin) : SV_Target
    {
        return g_CubeMap.Sample(samTexture, pin.PosL);
    }

Here the vertex's local-space position on the sphere is used directly as the 3D lookup vector, since the sphere is centered at the origin and the vector therefore starts at the origin.

Finally, there are two points to note:

1. Since the depth buffer is cleared to the default value 1.0 and every skybox vertex also ends up with depth 1, the depth-test function must be set to LESS_EQUAL in the render state; otherwise the skybox would never pass the depth test.

2. Since the camera is inside the sphere, the triangles of the inner surface we see are wound counterclockwise, so the rasterizer state must also treat counterclockwise as front-facing.

The corresponding settings are as follows:

    RasterizerState CounterClockFrontRS
    {
        FrontCounterClockwise = true;
    };

    DepthStencilState LessEqualDSS
    {
        DepthFunc = LESS_EQUAL;
    };

and enable these states in the technique:

    technique11 SkyBoxTech
    {
        pass P0
        {
            SetVertexShader(CompileShader(vs_5_0, VS()));
            SetPixelShader(CompileShader(ps_5_0, PS()));
            SetDepthStencilState(LessEqualDSS, 0);
            SetRasterizerState(CounterClockFrontRS);
        }
    }
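The same two states can equally be created on the C++ side instead of inside the effect file. A hedged sketch of that alternative, using the document's m_d3dDevice pointer and an assumed m_d3dContext name for the immediate context:

    // Sketch: creating the equivalent render states in C++ rather than in the effect file.
    D3D11_RASTERIZER_DESC rsDesc = {};
    rsDesc.FillMode              = D3D11_FILL_SOLID;
    rsDesc.CullMode              = D3D11_CULL_BACK;
    rsDesc.FrontCounterClockwise = TRUE;                       // counterclockwise triangles are front-facing
    rsDesc.DepthClipEnable       = TRUE;
    ID3D11RasterizerState* counterClockFrontRS = 0;
    m_d3dDevice->CreateRasterizerState(&rsDesc, &counterClockFrontRS);

    D3D11_DEPTH_STENCIL_DESC dsDesc = {};
    dsDesc.DepthEnable    = TRUE;
    dsDesc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;
    dsDesc.DepthFunc      = D3D11_COMPARISON_LESS_EQUAL;       // lets depth == 1.0 pass the test
    ID3D11DepthStencilState* lessEqualDSS = 0;
    m_d3dDevice->CreateDepthStencilState(&dsDesc, &lessEqualDSS);

    // Before drawing the skybox:
    // m_d3dContext->RSSetState(counterClockFrontRS);
    // m_d3dContext->OMSetDepthStencilState(lessEqualDSS, 0);

Another common variant simply disables culling (CullMode = D3D11_CULL_NONE) instead of flipping the front-face winding; either way, the inside of the sphere gets drawn.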

5. The example program in this section

That covers the details of the skybox implementation. Below is a very simple sample program that demonstrates the skybox effect. To keep the focus on the skybox itself, the scene contains nothing but the sky. The result looks like this:

Click here for source code

(GO) "D3d11 Game Programming" study Note 21: Cube Mapping and one of its applications: the implementation of the Sky box
