How to Use Depth Textures (from directxdev)


Guys,

As this issue comes up fairly often, I'll try to summarize depth texture support for both ATI and NVIDIA chipsets. If I get any of NVIDIA's depth implementation wrong then please let me know.

Both ATI and NVIDIA hardware support depth textures, although in different ways. The creation of the depth textures themselves is very similar:

* Exposed formats
- ATI exposes two FOURCC formats to create 16- or 24-bit depth textures:
#define FOURCC_DF16 ((D3DFORMAT)(MAKEFOURCC('D','F','1','6')))
#define FOURCC_DF24 ((D3DFORMAT)(MAKEFOURCC('D','F','2','4')))
DF16 is supported on R300 chipsets and up (9500+) while DF24 is supported on RV530 chipsets and up (X1600 and X1900).
- NVIDIA uses the predefined D3DFMT_D16 and D3DFMT_D24S8 formats.
GeForce3 chipsets and up support those.

In most cases a 16-bit format should be enough to accommodate most needs. There should be enough precision as long as your projection matrix is chosen carefully (using a front clip plane value as large as possible) and your Z range is distributed sensibly. It is strongly recommended to prefer 16-bit shadow maps whenever possible, as they perform better and are more widely supported.
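As a hedged sketch of that advice (the D3DX call is real, but the FOV and clip plane values below are placeholders rather than recommendations from this post): build the shadow-casting projection with the near plane pushed out as far as the casters allow and the far plane pulled in as tight as possible.

    D3DXMATRIX shadowProj;
    D3DXMatrixPerspectiveFovLH(&shadowProj,
                               D3DXToRadian(45.0f), // light FOV (placeholder)
                               1.0f,                // square shadow map
                               10.0f,               // near plane: as large as the scene allows
                               500.0f);             // far plane: as tight as possible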

* To check availability of those formats the CheckDeviceFormat() API should be used.
- Thus for a 16-bit depth surface you would call, for ATI:
hres = d3d->CheckDeviceFormat(Adapter, DeviceType, AdapterFormat, D3DUSAGE_DEPTHSTENCIL, D3DRTYPE_TEXTURE, FOURCC_DF16);
- And for NVIDIA:
hres = d3d->CheckDeviceFormat(Adapter, DeviceType, AdapterFormat, D3DUSAGE_DEPTHSTENCIL, D3DRTYPE_TEXTURE, D3DFMT_D16);
Note that it is safer to check for NVIDIA device IDs as well as doing the above check, since NVIDIA's depth texture functionality relies on "overloading" the meaning of an existing format (one key difference is that sampling from an NVIDIA depth texture will actually *not* return depth values).
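As a sketch of how the two format checks and the vendor ID check might be combined at startup (the helper name is invented; 0x1002 and 0x10DE are the usual ATI and NVIDIA PCI vendor IDs):

    // Pick a depth-texture format at startup, or fall back to another shadow technique.
    const DWORD VENDOR_ATI    = 0x1002;
    const DWORD VENDOR_NVIDIA = 0x10DE;

    D3DFORMAT ChooseDepthTextureFormat(IDirect3D9* d3d, UINT adapter,
                                       D3DDEVTYPE devType, D3DFORMAT adapterFormat)
    {
        D3DADAPTER_IDENTIFIER9 id;
        d3d->GetAdapterIdentifier(adapter, 0, &id);

        if (id.VendorId == VENDOR_ATI &&
            SUCCEEDED(d3d->CheckDeviceFormat(adapter, devType, adapterFormat,
                                             D3DUSAGE_DEPTHSTENCIL, D3DRTYPE_TEXTURE, FOURCC_DF16)))
            return FOURCC_DF16;

        if (id.VendorId == VENDOR_NVIDIA &&
            SUCCEEDED(d3d->CheckDeviceFormat(adapter, devType, adapterFormat,
                                             D3DUSAGE_DEPTHSTENCIL, D3DRTYPE_TEXTURE, D3DFMT_D16)))
            return D3DFMT_D16;

        return D3DFMT_UNKNOWN;  // no depth-texture support: use a different shadow path
    }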

* Texture surface creation
Again the only difference between the ATI and NVIDIA implementations is which format to pass:
- For ATI:
hres = d3dDevice->CreateTexture(ShadowMapWidth, ShadowMapHeight, 1, D3DUSAGE_DEPTHSTENCIL, FOURCC_DF16, D3DPOOL_DEFAULT, &pShadowMap, NULL);
- For NVIDIA:
hres = d3dDevice->CreateTexture(ShadowMapWidth, ShadowMapHeight, 1, D3DUSAGE_DEPTHSTENCIL, D3DFMT_D16, D3DPOOL_DEFAULT, &pShadowMap, NULL);

* The intermediate setup (surface binding, viewport, etc.) should be the same between the two.
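A minimal sketch of that setup, with placeholder variable names (pDummyColorTarget is assumed to be a color surface of the same dimensions as the shadow map, as the runtime requires):

    IDirect3DSurface9* pShadowSurf = NULL;
    pShadowMap->GetSurfaceLevel(0, &pShadowSurf);       // surface of the depth texture

    d3dDevice->SetRenderTarget(0, pDummyColorTarget);   // matching-size color target
    d3dDevice->SetDepthStencilSurface(pShadowSurf);
    d3dDevice->Clear(0, NULL, D3DCLEAR_ZBUFFER, 0, 1.0f, 0);

    // ... render the shadow casters here ...

    pShadowSurf->Release();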

* Once rendering has taken place the depth texture can be used as a normal texture using the SetTexture() API.
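For example (the sampler stage below is an arbitrary choice; on NVIDIA hardware, requesting linear filtering on the depth sampler is typically what drives the hardware PCF described below, though that detail is worth verifying against NVIDIA's own samples):

    d3dDevice->SetTexture(1, pShadowMap);  // bound to sampler s1 in the pixel shader
    d3dDevice->SetSamplerState(1, D3DSAMP_MINFILTER, D3DTEXF_LINEAR);
    d3dDevice->SetSamplerState(1, D3DSAMP_MAGFILTER, D3DTEXF_LINEAR);
    d3dDevice->SetSamplerState(1, D3DSAMP_ADDRESSU,  D3DTADDRESS_CLAMP);
    d3dDevice->SetSamplerState(1, D3DSAMP_ADDRESSV,  D3DTADDRESS_CLAMP);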

* The main difference between ATI's and NVIDIA's depth texture implementations is in the shader to use.
- Sampling from ATI depth textures will return depth values. It is up to the shader to fetch depth samples and to perform comparisons with an incoming Z value. This allows more flexibility when choosing the filter kernel to use and the weights to apply to each sample. The X1600 and X1900 support an additional feature called Fetch4 that returns four adjacent depth samples in the RGBA channels of the destination register with a single texture instruction. This enables high-performance shadow maps and/or larger kernels to be used.
- Sampling from NVIDIA depth textures will return percentage-closer-filtered results, as the comparison with an incoming Z value is automatically performed when sampling from depth textures.
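As a side note not covered above: ATI's developer documentation describes Fetch4 as being toggled per sampler by writing a FOURCC code to the MIPMAPLODBIAS sampler state; treat the exact convention in this sketch as an assumption to verify against their samples.

    // 'GET4' enables Fetch4 on the sampler, 'GET1' restores normal single-sample fetches.
    d3dDevice->SetSamplerState(1, D3DSAMP_MIPMAPLODBIAS, MAKEFOURCC('G','E','T','4'));
    // ... draw receivers with a Fetch4-aware shader ...
    d3dDevice->SetSamplerState(1, D3DSAMP_MIPMAPLODBIAS, MAKEFOURCC('G','E','T','1'));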

It should be fairly straightforward to automate the creation process to cater for ATI's or NVIDIA's version of depth textures, as this part of the process is very similar in code. The bulk of the work consists in adding #ifdefs to your HLSL shader code in order to support the ATI and NVIDIA styles of calculating the shadow contribution for each pixel. Both vendors have code and shader examples for their respective implementations (along with documentation) on their developer websites.
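One hedged way to drive those #ifdefs from the format picked at startup (the macro name ATI_DEPTH_FETCH, the file name and the entry point are invented for illustration) is to pass a preprocessor define to the D3DX shader compiler:

    // Compile one of the two shadow paths depending on the depth-texture format in use.
    D3DXMACRO defines[] =
    {
        { "ATI_DEPTH_FETCH", (depthFormat == FOURCC_DF16 || depthFormat == FOURCC_DF24) ? "1" : "0" },
        { NULL, NULL }
    };

    ID3DXBuffer* pCode = NULL;
    ID3DXBuffer* pErrors = NULL;
    D3DXCompileShaderFromFile("shadow_receiver.hlsl", defines, NULL,
                              "main_ps", "ps_2_0", 0, &pCode, &pErrors, NULL);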

Two items of note to ensure high performance (based on real-life examples :)):

- Remember to disable color writes entirely when rendering shadow casters into your depth texture. In most cases you're only interested in the contents of the depth texture (the runtime requires a valid binding to a color buffer of the same dimensions as the depth buffer/texture).
"Forgetting" to disable color writes will cause unnecessary color buffer bandwidth to be consumed (it happens).

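For the first item, that is a single render state toggle in D3D9; a sketch:

    d3dDevice->SetRenderState(D3DRS_COLORWRITEENABLE, 0);   // shadow pass: depth only
    // ... render shadow casters ...
    d3dDevice->SetRenderState(D3DRS_COLORWRITEENABLE,
                              D3DCOLORWRITEENABLE_RED | D3DCOLORWRITEENABLE_GREEN |
                              D3DCOLORWRITEENABLE_BLUE | D3DCOLORWRITEENABLE_ALPHA);
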
- About rendering transparent (alpha-tested) shadow casters into your depth textures: make sure to only enable alpha testing (or a texkill shader if the destination surface cannot be used with D3DUSAGE_QUERY_POSTPIXELSHADER_BLENDING) for primitives that are supposed to be transparent. Leaving alpha testing on (or using a texkill shader) for all shadow-caster objects will defeat early-Z advantages, as the pixel shader may get executed before the depth compare takes place.
It can be common to want to use the same flexible shader for all your shadow rendering, but it pays to make that extra step.
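A minimal sketch of scoping the alpha test to just the transparent casters (the reference value is a placeholder):

    // Opaque casters: depth-only rendering, early Z stays effective.
    d3dDevice->SetRenderState(D3DRS_ALPHATESTENABLE, FALSE);
    // ... draw opaque shadow casters ...

    // Alpha-tested casters only: enable the test just for these draw calls.
    d3dDevice->SetRenderState(D3DRS_ALPHATESTENABLE, TRUE);
    d3dDevice->SetRenderState(D3DRS_ALPHAREF, 0x80);                 // placeholder threshold
    d3dDevice->SetRenderState(D3DRS_ALPHAFUNC, D3DCMP_GREATEREQUAL);
    // ... draw alpha-tested shadow casters ...
    d3dDevice->SetRenderState(D3DRS_ALPHATESTENABLE, FALSE);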

Nick
European developer relations, ATI Technologies MrT@ati.com
