In-depth description of HDR Technology

Source: Internet
Author: freeknightduzhi

Posts and images about Bloom and HDR are already plentiful on the Internet, but it is rare to find one that explains the implementation process clearly from a programmer's point of view, so I am making a supplementary record of it here in plain language.

First, HDR stands for High Dynamic Range. Note the two key words: 1: High (high precision). 2: Dynamic (the lighting is computed in real time).

Now, most current computer graphics represent color as A8R8G8B8; that is, each color channel can only represent 256 brightness levels between fully dark and fully bright, expressed as a floating-point value in [0, 1]. It cannot represent a truly deep black or a dazzlingly bright white. This is far below what the human eye can distinguish: the eye can tell apart roughly 1,000 brightness levels, while brightness in nature spans on the order of 10 to the 12th power...... miserable, poor human eyes. Any brightness above 1.0f is simply clamped to 1.0f, and the result is a game picture with no HDR.
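A minimal sketch (my illustration, not from the original post) of why this clamping loses highlight detail; the function name is made up for the example:

```python
def to_ldr_channel(brightness: float) -> int:
    """Quantize a linear brightness value into one 8-bit color channel."""
    clamped = min(max(brightness, 0.0), 1.0)  # everything above 1.0f is lost
    return round(clamped * 255)

# Both a bright window (1.0) and the sun (50.0) end up as the same white pixel.
print(to_ldr_channel(1.0), to_ldr_channel(50.0))  # 255 255
```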

The purpose of HDR technology is to express the extremely bright and extremely dark values that an ordinary color channel cannot. Its features are:

1: Bright areas look brighter. 2: Dark areas look darker. 3: Details remain visible in both the bright and the dark parts.

The technical principle is to render with a higher-precision texture format.

The traditional classification is int16, fp16, and fp32, according to the precision of a single color channel. int16 represents one color channel with a 16-bit integer. It is rather awkward: it works under SM2.0, but efficiency is low and the results are poor, so by now it has been abandoned -. - fp16 represents one channel with a 16-bit (half-precision) float. GPUs are good at accelerating floating-point work, so it is both efficient and effective. fp32 represents one channel with a 32-bit (single-precision) float; it naturally delivers higher precision and better results, but pays some cost in efficiency. The fp formats all require SM3.0 or above. Later, DX10 and DX11 made even higher-precision formats possible, but for online games that must run on the machines of China's second- and third-tier cities, those are out of consideration for now......

Basic idea: with A8R8G8B8 we certainly cannot fully express all the detail the naked eye can distinguish, so we would like a more precise color format to represent brightness. Using D3DFMT_A16R16G16B16F would be the easy answer, but unfortunately, due to GPU restrictions, we cannot set the back buffer itself to such a high-precision format. So that direct route is out...... sad...... Instead, we create a high-precision texture in that format and render into it; once the rendered result is drawn into the back buffer, we get the benefit of this high-precision result on screen :)

Therefore, we create a high-precision texture as the render target and use it in place of the original back buffer. This texture is called the HDR render texture.

In fact, once the HDR render texture is set as the render target, we just render the scene as usual. After rendering, we draw the texture into the original back buffer. Er, how do we draw a texture into the back buffer?...... Simply render a full-screen quad with this texture into the back buffer -.-

Looking at the process above, we notice that we have not yet done anything to exploit the extra pixel precision. That step is called exposure.

For a description of exposure, the interested reader can see http://freespace.virgin.net/hugo.elias/graphics/x_posure.htm (I am too lazy to translate that blog here).

In short, the end result of much mathematical work is: float4 exposed = 1.0 - pow(2.71, -(vignette * unexposed * exposure)); (2.71 approximates e, so this is 1 - e^-x.)
This is the only formula we need to pay attention to when writing the shader (of course other formulas exist, and different formulas give different effects). unexposed is the original, unprocessed color from the texture; vignette is described below; exposure is the exposure level, generally greater than 1 (if you want to save trouble, 2.0f is a reasonable default).
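The exposure formula above can be sketched numerically. This is an illustration in Python rather than shader code (in the shader it would be HLSL); the parameter defaults mirror the post's suggestions:

```python
import math

def expose(unexposed: float, exposure: float = 2.0, vignette: float = 1.0) -> float:
    """1 - e^(-vignette * unexposed * exposure): maps [0, inf) into [0, 1)."""
    return 1.0 - math.exp(-(vignette * unexposed * exposure))
```

Note that arbitrarily bright HDR values still land inside [0, 1), which is exactly why the formula is useful: no hard clamp is needed.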

Vignette is what photographers call the dark corner effect: because a camera lens is curved, the four corners of a photo receive less light and appear dimmer. That darkening is the vignette.

The following two lines of code compute the vignette factor.

float2 vtc = float2(iTc0 - 0.5);
float vignette = pow(1 - (dot(vtc, vtc) * 1.0), 2.0);

Here, iTc0 is the texture coordinate of the current pixel, with values limited to [0, 1].
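The same vignette math, sketched in Python for illustration (the shader version above is the real thing):

```python
def vignette(u: float, v: float) -> float:
    """Vignette factor for a pixel at texture coordinate (u, v) in [0, 1]."""
    dx, dy = u - 0.5, v - 0.5      # offset from the screen center
    d2 = dx * dx + dy * dy         # squared distance, i.e. dot(vtc, vtc)
    return (1.0 - d2) ** 2         # 1.0 at the center, darker toward the corners

print(vignette(0.5, 0.5), vignette(0.0, 0.0))  # 1.0 0.25
```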

Er, a digression. As mentioned before, brightness in nature spans around 10 to the 12th power of levels, but the human eye can only distinguish about 10 to the 3rd. Does that mean the eye simply cannot tell many colors apart? Of course not -. - (and we naturally do not need to render or discuss colors the eye cannot distinguish......). So how does the eye cover such a wide range? Through an automatic brightness-adaptation mechanism. Suppose we walk from the bright outdoors into a dark room: at first we see nothing, but the eyes gradually adjust, adapt to the darkness, and begin to pick out colors again. This is brightness adaptation.

Games sometimes need to reproduce this, for example when moving from indoors to outdoors, or during a sudden change in the weather (a lightning flash). What we need to do at that point is modify the exposure value dynamically. So how is the exposure calculated?

The simplest method is to use the mip-map chain: recursively downsample the scene until it becomes a one-pixel texture. The brightness of that single pixel is very close to the average brightness of the whole current scene, and using it to drive the exposure is far more convenient and natural than hard-coding a value. Better still, you get dynamic light adaptation in every scene for free.
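The mip-map trick above amounts to repeated 2x2 averaging. A minimal CPU-side sketch (illustration only; the GPU does this via the mip chain):

```python
def average_luminance(image):
    """image is a square 2^n x 2^n grid of luminance floats.

    Repeatedly average 2x2 blocks until one value remains; that value
    is the mean luminance of the whole image.
    """
    while len(image) > 1:
        half = len(image) // 2
        image = [[(image[2*y][2*x] + image[2*y][2*x+1] +
                   image[2*y+1][2*x] + image[2*y+1][2*x+1]) / 4.0
                  for x in range(half)]
                 for y in range(half)]
    return image[0][0]

print(average_luminance([[0.0, 1.0], [1.0, 2.0]]))  # 1.0
```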

Finally, bloom. Anyone who has gotten as far as studying HDR will not find this word unfamiliar. It is very simple: take a copy of the current scene texture, blur the copy to obtain a processed texture (the blur usually runs horizontally and then vertically, and of course you can repeat the blur passes several times), and then combine the processed texture with the original scene texture to get the final result.

Of course, there are two practical details. 1: To make the bloom more efficient, we usually shrink the copied scene to 1/4 of its original size before blurring. 2: We usually want the blur to make only the bright places brighter, not the whole scene (there is no need to blur colors that an ordinary color format can already express), so before blurring the downscaled copy we subtract a fixed color value and process only the pixels brighter than that threshold. This also improves efficiency. And of course the same trick works in reverse if you want to deepen the dark areas.
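The bright-pass and the separable blur can be sketched like this (my illustration in Python on a 1-D strip of pixels; a real implementation would use a Gaussian kernel in the shader):

```python
def bright_pass(pixels, threshold=1.0):
    """Keep only the over-bright part of each pixel; the rest becomes 0."""
    return [max(p - threshold, 0.0) for p in pixels]

def box_blur_1d(pixels):
    """One horizontal pass of a simple 3-tap blur, with edge clamping.

    For the 'horizontal then vertical' scheme, run this once over rows
    and once over columns.
    """
    out = []
    for i in range(len(pixels)):
        left = pixels[max(i - 1, 0)]
        right = pixels[min(i + 1, len(pixels) - 1)]
        out.append((left + pixels[i] + right) / 3.0)
    return out

print(bright_pass([0.5, 1.5]))         # [0.0, 0.5]
print(box_blur_1d([0.0, 3.0, 0.0]))    # [1.0, 1.0, 1.0]
```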

In the end we have a high-precision texture, but the display itself is still limited, so we need tone mapping to ensure that the final rendered RGB values do not exceed the range the monitor can show. The purpose of tone mapping is to map a high-range value down to a low-range value, and this requires a suitable mapping algorithm.
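The post does not name a specific mapping; one common choice (my assumption here, not necessarily what the linked reference uses) is the Reinhard operator, which compresses [0, infinity) into [0, 1):

```python
def reinhard(luminance: float) -> float:
    """Reinhard tone mapping: x / (1 + x), a common HDR-to-LDR operator."""
    return luminance / (1.0 + luminance)

print(reinhard(1.0))  # 0.5
```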

A reference implementation can be found here: http://www.graphixer.com.cn/ShowWorks.asp?Type=1&id=48.

Then, render the texture to the back buffer.

Then, HDR is all over here -. -

If you need special effects, you can adjust the exposure value or perform a strange blur operation.

Still not clear? Then let's go through it step by step.

Detailed Process:

1: Create a high-precision texture with D3DUSAGE_RENDERTARGET and set it as the render target in place of the back buffer: device->SetRenderTarget(0, m_hdrTex); (strictly speaking, SetRenderTarget takes the texture's surface, obtained via GetSurfaceLevel).

2: Build the mip-map chain for this texture and take the 1/4-size level, i.e., the level-3 texture m_hdrTexLevel3.

3: Apply the per-pixel exposure processing to m_hdrTexLevel3.

4: Blur m_hdrTexLevel3.

5: Scale m_hdrTexLevel3 back up to the original size of m_hdrTex.

6: Combine the original scene texture with the blurred m_hdrTex.

7: Perform tone-mapping.

8: Get the final texture and render it to the back buffer.

The whole process above can be written in a .fx file and run on the GPU ~

//-------------------------------

Appendix: obtaining brightness is very simple......

float lum = dot(color.rgb, float3(0.333, 0.333, 0.333));

Some people like to weight green more heavily and look down on blue. It doesn't necessarily make much difference...... my test results looked balanced either way -. - but here it is anyway.

float lum = 0.27 * r + 0.67 * g + 0.06 * b; // said to be weighted by the human eye's sensitivity to red, green, and blue. Well, so the self-styled experts claim.
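For comparison, both luminance formulas from this appendix, sketched in Python:

```python
def lum_average(r, g, b):
    """Plain average of the three channels."""
    return 0.333 * r + 0.333 * g + 0.333 * b

def lum_weighted(r, g, b):
    """Weighted by the eye's sensitivity: green dominates, blue barely counts."""
    return 0.27 * r + 0.67 * g + 0.06 * b

print(lum_weighted(0.0, 1.0, 0.0))  # 0.67: pure green reads much brighter than average
```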

//-----------------------------

Sleepy, so off to sleep.... Today's English words will have to be skipped....
