Per-pixel PRT illumination and HDR Rendering

It's been a long time since I updated this blog... I recently finished a demo, although I actually wrote the code about a year ago.

System Requirements:
R300+ (ATI) or NV40+ (NVIDIA) graphics card
d3dx9_29.dll required
Ppprt.part1.rar
Ppprt.part2.rar

The principle is fairly simple: the SH lighting results for Half-Life 2's three basis directions are rendered with MRT into SH texture space (also called lightmap space), and the final image is then composed in a later pass using HL2's radiosity normal mapping method.
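As a rough sketch of that radiosity-normal-mapping combine step (in Python rather than shader code; the basis directions are HL2's published tangent-space basis, and the squared-and-normalized weighting shown here is HL2's published form, which the demo's actual shader may vary):

```python
import math

# Half-Life 2 "radiosity normal mapping" basis: three orthonormal
# directions in tangent space (z points along the surface normal).
INV_SQRT6 = 1.0 / math.sqrt(6.0)
INV_SQRT2 = 1.0 / math.sqrt(2.0)
INV_SQRT3 = 1.0 / math.sqrt(3.0)
HL2_BASIS = [
    (-INV_SQRT6,  INV_SQRT2, INV_SQRT3),
    (-INV_SQRT6, -INV_SQRT2, INV_SQRT3),
    (math.sqrt(2.0 / 3.0), 0.0, INV_SQRT3),
]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def shade(lightmaps, normal_ts):
    """Combine three directional lightmap samples (one RGB tuple per
    basis direction) using per-pixel weights derived from the
    tangent-space normal: clamped dot products, squared, normalized."""
    w = [max(dot(normal_ts, b), 0.0) ** 2 for b in HL2_BASIS]
    total = sum(w) or 1.0
    w = [x / total for x in w]
    return tuple(sum(w[i] * lightmaps[i][c] for i in range(3))
                 for c in range(3))
```

With a flat normal (0, 0, 1) all three basis weights are equal, so the three lightmaps simply average; a perturbed bump-map normal shifts weight toward the nearest basis direction.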
Details:
Precomputation generates a 5th-order SH texture, and the PRT lighting is evaluated per pixel, so it adapts well to low-poly scenes; this is where it differs from the usual per-pixel PRT. To support bump mapping under a Phong lighting model, the normal must be sampled in real time. P. P. Sloan implements bump mapping with various basis functions: the terms of the rendering equation are SH-encoded and then convolved with the normal direction, which requires an SH matrix at run time. To avoid the SH matrix, the real-time work can instead be separated onto the three known HL2 orthogonal basis directions and precomputed. Given the complexity of the formula, the specular term uses only a first-order function of (r·s). At render time, the three basis contributions are first rendered to obtain a lightmap of the currently visible regions (for details, see the ATI Ruby demo). The rendered results are RGBS-encoded to reduce storage and bandwidth. Finally, the cosine weights of the three basis vectors are computed from the normal and the eye reflection vector, and the final lighting formula is evaluated.
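The RGBS encoding mentioned above stores an HDR color as a low-precision RGB triple plus a per-pixel scale factor (typically in the alpha channel). A minimal sketch, assuming a maximum scale constant of 16 (the demo's actual constant is not stated):

```python
RGBS_MAX = 16.0  # assumed maximum scale; the demo's real constant is unknown

def rgbs_encode(rgb):
    """Pack an HDR color as (r, g, b, s), all in [0, 1]; the color is
    reconstructed as rgb * s * RGBS_MAX. Values above RGBS_MAX clip."""
    m = max(rgb)
    if m <= 0.0:
        return (0.0, 0.0, 0.0, 0.0)
    s = min(m / RGBS_MAX, 1.0)
    scale = s * RGBS_MAX
    return tuple(c / scale for c in rgb) + (s,)

def rgbs_decode(rgbs):
    r, g, b, s = rgbs
    return (r * s * RGBS_MAX, g * s * RGBS_MAX, b * s * RGBS_MAX)
```

Because the RGB channels are divided by a smooth scale, bilinear filtering of the encoded texture is usually acceptable, which is exactly why this encoding works for the lightmap and sky textures here.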
For HDR, intermediate results are rendered to int16 targets on ATI cards, because those cards do not support filtering of floating-point textures. The sky texture is RGBS-encoded, and it filters very well.
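The int16 intermediate format amounts to fixed-point HDR: the shader maps a chosen HDR range onto the unsigned 16-bit channel range. A tiny sketch of that quantization, with an assumed range of [0, 16) (the demo's actual range is not stated):

```python
HDR_RANGE = 16.0  # assumed: intermediate intensities are scaled into [0, 16)

def pack_i16(x):
    """Quantize an HDR intensity into an unsigned 16-bit integer,
    standing in for one channel of an int16 render target."""
    return max(0, min(65535, int(round(x / HDR_RANGE * 65535))))

def unpack_i16(i):
    """Recover the approximate HDR intensity from the 16-bit value."""
    return i / 65535 * HDR_RANGE
```

Unlike fp16, this format filters and blends on R300-class hardware, at the cost of fixed precision across the whole range.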

2006-7-2:
Updated and added some options:
/Files/Nicky/ppprt_update.rar
Decompress the package and overwrite the original bin folder.

I wanted to add FSAA but could not find a good method. Hardware fp16 + MSAA is not yet widespread, so other approaches are needed. Using a shader for SSAA is too costly: FPS drops sharply and the sampling rate cannot be very high. Encoding the framebuffer with RGBE, RGBS, RGBdiv, and similar schemes makes MSAA feasible, but they are all hacks. Moreover, RGBS and RGBdiv cannot cover a wide dynamic range, so banding may appear. Edge-detection FSAA can only soften edges, since it does not actually increase the sampling rate.
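Of the encodings listed, RGBE differs from RGBS in covering a much wider dynamic range by storing a shared exponent instead of a linear scale. A sketch of the classic Ward RGBE scheme (8-bit mantissas plus a shared exponent biased by 128), which illustrates the idea but not necessarily the demo's exact shader math:

```python
import math

def rgbe_encode(rgb):
    """Ward's RGBE shared-exponent encoding: three 8-bit mantissas
    plus one shared 8-bit exponent, biased by 128."""
    m = max(rgb)
    if m < 1e-32:
        return (0, 0, 0, 0)
    mant, exp = math.frexp(m)            # m == mant * 2**exp, mant in [0.5, 1)
    scale = mant * 256.0 / m
    return tuple(int(c * scale) for c in rgb) + (exp + 128,)

def rgbe_decode(rgbe):
    r, g, b, e = rgbe
    if e == 0:
        return (0.0, 0.0, 0.0)
    f = math.ldexp(1.0, e - 128 - 8)     # 2**(e - 128) / 256
    return (r * f, g * f, b * f)
```

The shared exponent is also why RGBE is a poor fit for hardware filtering and blending: interpolating the exponent channel linearly produces wrong results, which is part of what makes these framebuffer encodings hacks.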

