Recently, while writing a shader, I ran into a strange problem: the code below computes the correct result on my GeForce 6200 card but produces wrong output on an ATI Radeon 9600. After checking a lot of material I still had to puzzle it out myself, so I am recording it here as a small piece of experience. If other friends run into a similar problem, hopefully they won't have to track it down line by line like I did :)
In my shader, there are two lines of code used to compute the per-pixel normal:
float3 vnormal = normalize(2 * (vA + vB) - 2);
// ... (irrelevant code omitted) ...
float ndotl = max(dot(vnormal, eyedir), 0);
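For reference, what the combined line is doing (as far as I can tell, and as the fix further down confirms) is the standard remap of each [0,1]-packed input to the [-1,1] range, folded into a single expression:

(2 * vA - 1) + (2 * vB - 1) = 2 * vA + 2 * vB - 2 = 2 * (vA + vB) - 2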
Here vA and vB are both float3: vA comes from a bump-texture sample and vB from the vertex shader input. On my 6200 card the result is computed correctly, i.e. ndotl has the expected non-zero value, but on my colleague's ATI 9600 card ndotl is always 0. After ruling things out line by line, it turned out that this line:
float3 vnormal = normalize(2 * (vA + vB) - 2);
is where the error occurs: vnormal is not computed as expected. I adjusted the code to:
float3 vnormal1 = 2 * vA - 1;
float3 vnormal2 = 2 * vB - 1;
float3 vnormal = vnormal1 + vnormal2;
vnormal = normalize(vnormal);
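For completeness, here is a minimal sketch of how the working version fits into a full pixel shader. The sampler, register, and struct names are placeholders of my own, not the original code; only the unpack-and-combine part reflects what my shader actually does:

// Placeholder declarations; names are illustrative only
sampler2D bumpMap : register(s0);
float3 eyedir; // set by the application

struct PS_INPUT
{
    float2 tex    : TEXCOORD0;
    float3 normal : TEXCOORD1; // vB, packed into [0,1] by the vertex shader
};

float4 main(PS_INPUT input) : COLOR0
{
    // vA comes from the bump texture, vB from the VS input; both are in [0,1]
    float3 vA = tex2D(bumpMap, input.tex).xyz;
    float3 vB = input.normal;

    // Unpack each input to [-1,1] separately, then combine: the form that works on the 9600
    float3 vnormal1 = 2 * vA - 1;
    float3 vnormal2 = 2 * vB - 1;
    float3 vnormal = vnormal1 + vnormal2;
    vnormal = normalize(vnormal);

    float ndotl = max(dot(vnormal, eyedir), 0);
    return float4(ndotl.xxx, 1);
}

Since the two forms are mathematically identical, the rewrite changes nothing but the instruction sequence the card ends up executing.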
After the problem was solved, my first thought was that the DirectX version on my colleague's machine was too old, so the shader code compiled by fxc came out wrong. But after we swapped .fxo files, my colleague still had the problem while mine still ran fine, and the same shader also runs normally on another colleague's ATI 9800 card. So far my conclusion is that either there is a problem in the hardware design of the ATI 9600 (the driver has already been updated to the latest version), or the shader code compiled by DirectX's fxc is simply not very portable, because the same code written with NVIDIA Cg works without problems on the 9600.
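For reference, the .fxo files we exchanged are the output of offline compilation with fxc; a command along these lines (file names and entry point are assumptions of mine) produces one, with ps_2_0 being the highest pixel shader profile the Radeon 9600 supports:

fxc /T ps_2_0 /E main /Fo bump.fxo bump.hlsl

(If the shader lives in a D3D9 .fx effect file, the target would be fx_2_0 instead.)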