3D Programming: Chapter 9, Normal Mapping and Displacement Mapping


This chapter covers two techniques for adding detail to a scene without adding polygonal geometry. The first is normal mapping, which fakes surface detail by perturbing the lighting calculation; the extra geometry is simulated rather than real. The second is displacement mapping, which uses texture data to actually move vertices (real displacement, in contrast to the "fake" detail of normal mapping) to create rough surfaces.

Normal Mapping

Previous chapters discussed specular maps, environment maps, and transparency maps, each of which supplies additional data to the shader. The data in a specular map limits specular highlights, an environment map supplies colors for reflective surfaces, and a transparency map controls alpha blending in the output-merger stage. This extra information is provided per pixel rather than per vertex, and is therefore of higher precision. Similarly, a normal map provides a normal vector for each pixel, which can be applied in a variety of techniques.

One application of normal maps is to fake the detail of a bumpy surface, such as a stone wall. You could model such a wall with enough real geometry; the more vertices you use, the more detail you get. That lets the lighting in the scene behave convincingly, with stones facing away from the light appearing as darker areas. However, more geometry means higher computational cost. Conversely, for an object with very little geometry (even a flat plane), a normal map can simulate much the same lighting as a heavily tessellated model. This application is called normal mapping.
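To see why a per-pixel normal can fake geometry, consider Lambertian diffuse lighting, where intensity is the clamped dot product of the surface normal and the direction to the light. The sketch below (plain Python with hypothetical vectors, not the book's shader code) shows how a normal perturbed by a normal map darkens a pixel exactly as if the surface were physically tilted:

```python
import math

def normalize(v):
    # Scale a 3-component vector to unit length.
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def diffuse(normal, to_light):
    # Lambert's cosine law: intensity = max(0, N . L)
    n, l = normalize(normal), normalize(to_light)
    return max(0.0, sum(a * b for a, b in zip(n, l)))

light_dir = (0.0, 0.0, 1.0)       # light shining straight at the surface
flat_normal = (0.0, 0.0, 1.0)     # the plane's single geometric normal
bumped_normal = (0.4, 0.0, 0.9)   # a perturbed normal, as sampled from a normal map

print(diffuse(flat_normal, light_dir))    # full intensity: 1.0
print(diffuse(bumped_normal, light_dir))  # dimmer, as if the surface tilted away
```

Because the perturbed normal varies per pixel, a single flat quad can shade like a rough wall.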

Normal Maps

Figure 9.1 shows a color map (left) and a normal map (right) for a stone wall. Relative to the color map, the normal map looks a bit strange. Although a normal map can be displayed as an image, what it actually stores are 3D direction vectors. When the normal map is sampled, the values read from the RGB channels represent the x, y, and z components of a direction vector. These normal vectors can then be used in lighting calculations, such as diffuse lighting.

Figure 9.1 A color map (left) and normal map (right) for a stone wall. (Textures by Nick Zuccarello, Florida Interactive Entertainment Academy.)

Each channel of an RGB texture stores an unsigned 8-bit value (unsigned char) with the range [0, 255]. But for a normalized direction vector, the x, y, and z components lie in the range [-1, 1]. Therefore, normal vectors must be converted to the range [0, 255] before they are stored in the texture, and converted back to the range [-1, 1] when the texture is sampled. You can convert a floating-point component from the range [-1, 1] to the range [0, 255] with the following formula:

f(x) = (0.5x + 0.5) * 255

and use the inverse to transform back:

f(x) = 2(x / 255) - 1

In practice, image-processing software (such as Adobe Photoshop) is typically used to encode a normal map into an RGB texture. The conversion back, however, must be done manually in the shader after sampling the texture. The floating-point division (by 255) has already been performed during sampling, so the sampled values lie in the range [0, 1]. Therefore, you only need the following function to convert the range [0, 1] to the range [-1, 1]:

f(x) = 2x - 1

Alternatively, you can use a 16-bit or 32-bit floating-point texture format for normal maps, which yields better precision at the expense of some performance.
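The two conversions above can be sketched directly. This is plain Python (texture I/O and the sampler are omitted); `encode_component` is what an authoring tool applies when baking the map, and `decode_sampled` is what the shader applies to the sampler's [0, 1] output:

```python
def encode_component(x):
    """Map a normal component from [-1, 1] to an 8-bit value in [0, 255]."""
    return round((0.5 * x + 0.5) * 255)

def decode_sampled(x):
    """Map a sampled component from [0, 1] back to [-1, 1].
    The sampler has already divided the stored byte by 255."""
    return 2.0 * x - 1.0

# A unit "up" vector (0, 0, 1) stored in RGB:
print([encode_component(c) for c in (0.0, 0.0, 1.0)])  # [128, 128, 255]
```

The [128, 128, 255] encoding of "straight up" is why tangent-space normal maps have their characteristic light-blue tint.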

Tangent Space

Just as per-vertex normals are used to compute diffuse lighting, so are the per-pixel normals from a normal map. But the normals must be in the same coordinate space as the light vector. Per-vertex normals are supplied in object space, whereas the normals in a normal map are in tangent space.

Tangent space (or texture space) is a coordinate system relative to the texture, defined by three orthogonal vectors: the surface normal, the tangent, and the binormal. Figure 9.2 depicts these three vectors.

Figure 9.2 An illustration of tangent space. (Texture by Nick Zuccarello, Florida Interactive Entertainment Academy.)

Here, the normal vector, N, is the vertex's surface normal. The tangent vector, T, is perpendicular to the surface normal and points in the direction of the texture's u-axis. The binormal vector, B, points in the direction of the texture's v-axis.

You can use these three vectors to build a TBN (tangent, binormal, normal) transformation matrix, as follows:

          | Tx  Ty  Tz |
    TBN = | Bx  By  Bz |
          | Nx  Ny  Nz |

You can use this matrix to transform vectors from tangent space to object space. However, since the light vector is usually expressed in world space, the normal from the normal map must be transformed from tangent space to world space. Alternatively, you can build the TBN matrix directly from vectors already in world space.
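The TBN transform above can be sketched in plain Python (row-vector convention, with an assumed T/B/N basis for illustration; a real shader would do this with `mul` in HLSL):

```python
def transform(v, rows):
    """Multiply row vector v by a 3x3 matrix given as three row tuples."""
    return tuple(sum(v[i] * rows[i][j] for i in range(3)) for j in range(3))

# Assumed world-space basis for a surface facing +z, with the texture's
# u-axis along +x and v-axis along +y (a floor-like quad):
T = (1.0, 0.0, 0.0)   # tangent: texture u direction
B = (0.0, 1.0, 0.0)   # binormal: texture v direction
N = (0.0, 0.0, 1.0)   # surface normal

tbn = (T, B, N)       # rows of the TBN matrix

# A tangent-space normal (decoded from a normal map sample) tilted toward +u:
sampled = (0.6, 0.0, 0.8)
world_normal = transform(sampled, tbn)
print(world_normal)   # (0.6, 0.0, 0.8)
```

With this identity-like basis the vector is unchanged, but for a wall or any oriented surface, T, B, and N rotate the sampled normal into the surface's actual orientation, after which it can be dotted with the world-space light vector as usual.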
