[Translation] Deferred Shading (1)

Source: Internet
Author: User
Tags: pixel shading

Deferred shading is a screen-space lighting technique for 3D scenes. It breaks through earlier rendering pipelines' limits on the number of dynamic light sources, without a dramatic loss of efficiency or performance, allowing a single 3D scene to support the effects of hundreds of dynamic lights.

The core idea is to first render the scene's geometric and lighting-related information (position, normal, material parameters) into render targets, transforming it from world space into screen space; these buffers then serve as the input to the lighting calculation. For each light source, a pass computes that light's contribution and accumulates the result into the output frame buffer. Once all lights have been processed, the computation is complete, and the image in the frame buffer is the final rendered result.

Overview: advantages of deferred shading
1. You can draw the geometry without worrying about any lighting issues.
2. Multiple render targets (MRT) let you write several per-pixel attributes, such as position and normal direction, each to its own destination buffer.
3. The buffered data above is then used to compute per-pixel shading, as if drawing in 2D image space.

What are the characteristics of traditional single-pass lighting (where all lights affecting an object are applied in one shader pass)?
1. It works well in scenes with few light sources, such as outdoor scenes lit only by the sun.
2. When there are many light sources, the lights become very hard to manage.
3. Because all lighting is done in a single shader, the instruction count can exceed the GPU's shader limits.

What are the characteristics of traditional multi-pass lighting (where each light's effect on each object is computed in a separate pass)?
1. The complexity is too high: processing all the lights requires (number of objects × number of lights) passes.
2. Batching, whether by object or by light, is quite troublesome.
3. In principle, lights should be partitioned and managed by their range of influence, but with a multi-pass approach, dynamic lights are quite difficult to handle.

What are the characteristics of deferred shading?
for each object:
    render to multiple targets
for each light:
    apply light as a 2D postprocess
1. The complexity is moderate: only (number of objects + number of lights) passes are required.
2. Batching is easy.
3. Many small-range lights cost about the same as one large-range light (because each pixel is affected by roughly the same number of lights).
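The pass-count difference described above can be sketched with a toy calculation (function names and numbers are illustrative, not from the original):

```python
# Toy comparison of per-frame pass counts for the two approaches.

def forward_multipass_passes(num_objects: int, num_lights: int) -> int:
    # Traditional multi-pass lighting: every object is re-rendered
    # once per light that affects it -> O(objects * lights).
    return num_objects * num_lights

def deferred_passes(num_objects: int, num_lights: int) -> int:
    # Deferred shading: one geometry pass per object, then one
    # full-screen 2D pass per light -> O(objects + lights).
    return num_objects + num_lights

print(forward_multipass_passes(100, 50))  # 5000 passes
print(deferred_passes(100, 50))           # 150 passes
```

With 100 objects and 50 lights, the multiplicative cost is already more than thirty times the additive one, which is why deferred shading scales to hundreds of dynamic lights.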

Which render target buffers do we need?
1. We need the following geometric rendering results:
- position
- normal data
- material parameters (diffuse color, emissive color, specular color, and specular attenuation coefficient)
2. Deferred shading is not well suited to lighting models that need many extra input parameters (such as spherical-harmonic lighting).

A "fat" frame buffer
Formats of the various buffers:
- position A32B32G32R32F
- normal A16B16G16R16F
- diffuse color A8R8G8B8
- material parameters A8R8G8B8
Each pixel occupies 256 bits (32 bytes), so a 1024x768 buffer takes 24 MB without anti-aliasing. At the time, hardware did not support mixing different formats across multiple render targets.

Optimizing the frame buffer size
- Use the A2R10G10B10 format to store the normal.
- Material attributes can be stored with a palette mechanism, using an index to look up the actual values.
- There is no need to store the pixel's position as a full vector3: since we know the camera's position and the pixel's screen-space coordinates, storing only the distance from the camera to the pixel is enough to recover the pixel's 3D position.
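That depth-based reconstruction can be sketched as follows, assuming a simple pinhole camera looking down -Z in view space; all names here are illustrative, not from the original:

```python
import math

def view_ray(px, py, width, height, fov_y, aspect):
    """Unnormalized view-space direction from the camera through
    pixel (px, py); the camera looks down -Z."""
    half_tan = math.tan(fov_y / 2.0)
    x = (2.0 * (px + 0.5) / width - 1.0) * aspect * half_tan
    y = (1.0 - 2.0 * (py + 0.5) / height) * half_tan
    return (x, y, -1.0)

def reconstruct_position(cam_pos, ray, distance):
    """Recover the pixel's 3D position: camera position plus the
    stored camera-to-pixel distance along the normalized ray."""
    length = math.sqrt(sum(c * c for c in ray))
    return tuple(p + distance * c / length for p, c in zip(cam_pos, ray))
```

For example, a pixel at the center of the screen maps to the ray (0, 0, -1), so a stored distance of 5 recovers a point 5 units straight ahead of the camera; only one float per pixel is stored instead of three.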

My chosen frame buffer format

• 128 bits per pixel = 12 MB @ 1024x768:
- Z depth R32F
- normal and scattering A2R10G10B10
- diffuse color + emissive color A8R8G8B8
- other material parameters A8R8G8B8
My material parameters include: specular intensity, specular exponent, occlusion factor, and shadow amount. I also used 2 bits of the normal buffer's alpha channel to control subsurface scattering.
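The buffer-size figures quoted above can be checked with a few lines of arithmetic (a sketch; the layouts are the ones listed in the text):

```python
# Verify the G-buffer sizes quoted in the text, at 1024x768 with no AA.
WIDTH, HEIGHT = 1024, 768

def gbuffer_mb(bits_per_pixel):
    """Total size in MB for one full-screen G-buffer layout."""
    return WIDTH * HEIGHT * bits_per_pixel / 8 / (1024 * 1024)

# "Fat" layout: 128-bit position + 64-bit normal
# + 32-bit diffuse + 32-bit material = 256 bits per pixel.
fat_mb = gbuffer_mb(128 + 64 + 32 + 32)

# Chosen layout: four 32-bit targets (depth, normal + scattering,
# diffuse + emissive, material) = 128 bits per pixel.
slim_mb = gbuffer_mb(32 * 4)

print(fat_mb, slim_mb)  # 24.0 12.0
```

Halving the per-pixel footprint is what makes the 128-bit layout fit in 12 MB instead of 24 MB, which mattered on the video memory budgets of the hardware the author describes.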
