I have been working on shadows recently. A "sideline" of this is replacing everything that was originally implemented with the fixed-function pipeline with shaders. Why? I believe anyone who has written a shadow mapping demo knows the reason.
Since shaders are now doing the lighting, I might as well go with per-pixel lighting, taking spot cone falloff and near/far attenuation into account. That means a mesh may be affected by multiple light sources, so a dynamic shader-combination mechanism is needed to generate shaders for different numbers and types of lights. On SM3.0 it is fine to unify all the lighting calculations: put the light parameters in a 1D texture and walk over all the lights with a static loop.
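As a rough illustration of the SM3.0 path above, here is a minimal, self-contained C++ sketch of packing per-light parameters into a linear float4 buffer that would back such a 1D texture (the actual texture creation and upload are omitted). The struct layout, the two-texels-per-light packing, and the field names are assumptions made for this sketch, not details from this post.

    #include <cstdio>
    #include <vector>

    // Hypothetical per-light record; the field choice is an assumption for illustration.
    struct Light {
        float pos[3];       // world-space position
        float range;        // far attenuation distance
        float color[3];     // diffuse color
        float spotCos;      // cosine of the spot cone angle (1.0 => point light)
    };

    // Pack all lights into a linear float4 buffer that would back a 1D texture.
    // The shader side would read texels i*2 and i*2+1 for light i inside a static loop.
    std::vector<float> PackLights(const std::vector<Light>& lights)
    {
        std::vector<float> texels;
        texels.reserve(lights.size() * 8);   // 2 RGBA32F texels per light
        for (const Light& l : lights) {
            texels.insert(texels.end(), { l.pos[0], l.pos[1], l.pos[2], l.range });
            texels.insert(texels.end(), { l.color[0], l.color[1], l.color[2], l.spotCos });
        }
        return texels;
    }

    int main()
    {
        std::vector<Light> lights = {
            { {0, 5, 0}, 20.0f, {1, 1, 1}, 1.0f },      // point light
            { {3, 2, 1}, 10.0f, {1, 0.5f, 0}, 0.9f },   // spot light
        };
        std::vector<float> buffer = PackLights(lights);
        std::printf("%zu floats = %zu texels for %zu lights\n",
                    buffer.size(), buffer.size() / 4, lights.size());
        return 0;
    }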
Well then, what about SM2.0? And is the SM3.0 story really that simple?
Apparently not. Shadow calculation has not been taken into account yet. Shadows require sampling a depth texture, and texture samplers cannot be indexed in SM3.0, so we still have to fall back to the shader-combination path. And don't tell me to use multi-pass: the system's bottleneck right now is the draw-call API and the geometry pipeline...
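To make the shader-combination path a bit more concrete, here is a minimal, self-contained C++ sketch of generating the per-light part of a shader as text, giving each shadow-casting light its own dedicated sampler precisely because samplers cannot be picked by a runtime index. Every identifier in the generated snippet (ComputeSpotLight, SampleShadow, g_ShadowMap0 and so on) is a hypothetical name chosen for illustration, not the actual mechanism from this post.

    #include <cstdio>
    #include <string>
    #include <vector>

    // Hypothetical light description used only to drive code generation.
    struct LightDesc {
        bool isSpot;        // spot light vs. point light
        bool castsShadow;   // needs its own shadow-map sampler
    };

    // Emit one unrolled lighting block per light. Because samplers cannot be
    // selected with a runtime index, each shadow-casting light gets a uniquely
    // named sampler (g_ShadowMap0, g_ShadowMap1, ...), so the generated source
    // differs for every combination of light count and type.
    std::string GenerateLightingSource(const std::vector<LightDesc>& lights)
    {
        std::string src = "float3 total = 0;\n";
        for (size_t i = 0; i < lights.size(); ++i) {
            char buf[256];
            const char* fn = lights[i].isSpot ? "ComputeSpotLight" : "ComputePointLight";
            if (lights[i].castsShadow) {
                std::snprintf(buf, sizeof(buf),
                    "total += %s(%zu) * SampleShadow(g_ShadowMap%zu, input.lightUV%zu);\n",
                    fn, i, i, i);
            } else {
                std::snprintf(buf, sizeof(buf), "total += %s(%zu);\n", fn, i);
            }
            src += buf;
        }
        return src;
    }

    int main()
    {
        std::vector<LightDesc> lights = {
            { /*isSpot*/ true,  /*castsShadow*/ true  },
            { /*isSpot*/ false, /*castsShadow*/ false },
        };
        std::puts(GenerateLightingSource(lights).c_str());
        return 0;
    }

The point of the sketch is only the structure: since the generated source changes with the number of lights, their types, and their shadow usage, in practice the compiled shaders would be cached and keyed by that combination.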
Deferred shading suits my situation well in many respects, and I sometimes get the urge to try it. Unfortunately it is too aggressive, and I don't want to be scolded by users for being so aggressive.
The main problem with deferred shading is fill rate, and DX10 does not automatically mean high fill rate. New hardware features can arrive in a single stroke, but fill rate still follows what I would call the "post-Moore's-law" pattern: transfer rates grow exponentially while latency only improves linearly. (To be continued tomorrow)
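As a rough back-of-envelope illustration of that fill-rate/bandwidth concern, here is a tiny calculation; every figure in it (resolution, number of render targets, texel format, frame rate) is an assumed example, not a number from this post.

    #include <cstdio>

    // Rough estimate of G-buffer write traffic for deferred shading.
    // All numbers below are assumptions chosen purely for illustration.
    int main()
    {
        const double width  = 1280.0;
        const double height = 720.0;
        const double renderTargets   = 4.0;   // e.g. albedo, normal, depth, misc
        const double bytesPerTexel   = 4.0;   // RGBA8 per target
        const double framesPerSecond = 60.0;

        const double bytesPerFrame = width * height * renderTargets * bytesPerTexel;
        const double gbPerSecond   = bytesPerFrame * framesPerSecond
                                     / (1024.0 * 1024.0 * 1024.0);

        // This counts only the G-buffer fill pass; the lighting passes read the
        // same data back again, so the real bandwidth cost is even higher.
        std::printf("G-buffer writes: %.1f MB per frame, ~%.2f GB/s at %.0f fps\n",
                    bytesPerFrame / (1024.0 * 1024.0), gbPerSecond, framesPerSecond);
        return 0;
    }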