Unity Shader can do many things (such as setting render states), but its most important task is to specify the code needed by the various shaders. This shader code can be written in the SubShader semantic block (the approach used by surface shaders) or in the Pass semantic block (the approach used by vertex/fragment shaders and fixed function shaders).
In Unity, we can write a Unity Shader in the three forms described below. Whichever form we use, the real shader code needs to be contained in a ShaderLab structure like the following:
" MyShader " { properties{ // required Various properties } subshader{ // The real shader code will appear here . // surface shader (surface Shader) or // fixed-point/Chip shader (vertex/fragment Shader) or // Stationary function shader (fixed functions Shader) } subshader{ // and previous Subshader } }
1. Unity's favorite: Surface Shader
A surface shader (Surface Shader) is a type of shader code created by Unity itself. It requires very little code, and Unity does a lot of work behind the scenes, but its rendering cost is correspondingly higher. Under the hood it is the same as the vertex/fragment shaders described below; that is, when we give Unity a surface shader, it still converts it into the corresponding vertex/fragment shader behind the scenes. In other words, the surface shader is a higher-level abstraction over Unity's vertex/fragment shaders. Its value is that Unity handles a lot of lighting details for us, so we don't have to worry about those "annoying things".
The sample code for a very simple surface shader is as follows:
Shader"custom/simple Surface Shader"{subshader{tags{"Rendertype"="Opaque"} cgprogram #prama surface surf Lambertstructinput{float4 Color:color; }; voidSurf (Input in, InOut surfaceoutput o) {O.albedo=1; } ENDCG} FallBack"Diffuse"}
As you can see from the code above, the surface shader is defined between CGPROGRAM and ENDCG inside the SubShader semantic block (not inside a Pass semantic block). The reason is that a surface shader does not require the developer to care about how many Passes are used or how each Pass is rendered; Unity does all of that for us behind the scenes. All we have to do is tell it: "Hey! Use these textures to fill in the colors, use this normal map to fill in the normals, use the Lambert lighting model, and don't bother me with anything else!"
The code between CGPROGRAM and ENDCG is written in CG/HLSL; that is, we nest the CG/HLSL language inside the ShaderLab language. Note that this CG/HLSL has been encapsulated by Unity: its syntax is almost the same as standard CG/HLSL, but there are subtle differences. For example, some native functions and usages are not supported by Unity.
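As an illustrative sketch of the "fill the colors with these textures, fill the normals with this normal map" idea, the surface shader below samples a base texture and a normal map and lets Unity generate the underlying vertex/fragment shaders behind the scenes. The shader name and the _MainTex/_BumpMap property names are placeholder choices, not code from the original text:

    Shader "Custom/Textured Surface Sketch" {
        Properties {
            _MainTex ("Base Texture", 2D) = "white" {}
            _BumpMap ("Normal Map", 2D) = "bump" {}
        }
        SubShader {
            Tags { "RenderType" = "Opaque" }
            CGPROGRAM
            // Unity expands this directive into the matching vertex/fragment shaders
            #pragma surface surf Lambert
            sampler2D _MainTex;
            sampler2D _BumpMap;
            struct Input {
                float2 uv_MainTex;
                float2 uv_BumpMap;
            };
            void surf (Input IN, inout SurfaceOutput o) {
                // fill the color from the base texture
                o.Albedo = tex2D(_MainTex, IN.uv_MainTex).rgb;
                // fill the normal from the normal map
                o.Normal = UnpackNormal(tex2D(_BumpMap, IN.uv_BumpMap));
            }
            ENDCG
        }
        FallBack "Diffuse"
    }

Note how the sampler2D variables declared inside the CG/HLSL block must match the property names declared in the ShaderLab Properties block; this is exactly the "CG/HLSL nested inside ShaderLab" pattern described above.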
2. The smartest kid: Vertex/Fragment Shader
In Unity, we can use the CG/HLSL language to write vertex/fragment shaders (Vertex/Fragment Shader). They are more complex to write, but also more flexible.
The sample code for a very simple vertex/fragment shader is as follows:
Shader " custom/simple vertexfragment Shader " {subshader{pass{Cgprogram #prama vertex vert #prama fragment Frag float4 vert (float4 v:position): sv_position{ return Mul (unity_ MATRIX_MVP, V); } float4 Frag (): sv_target{ return fixed4 (1.0 , 0.0 , 0.0 , 1.0 ); } ENDCG}}}
Similar to the surface shader, the code of a vertex/fragment shader also needs to be defined between CGPROGRAM and ENDCG. The difference is that a vertex/fragment shader is written inside the Pass semantic block, not the SubShader, because we need to define the shader code used by each Pass ourselves. We may have to write more code, but the benefit is high flexibility; more importantly, we can control the implementation details of the rendering. Here too, the code between CGPROGRAM and ENDCG is written in CG/HLSL.
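As a slightly fuller sketch of that flexibility (an illustrative example, not from the original text), the vertex/fragment shader below defines its own input and output structs to pass the mesh's vertex color from the vertex shader to the fragment shader; the struct names and shader name are arbitrary:

    Shader "Custom/VertexColor Sketch" {
        SubShader {
            Pass {
                CGPROGRAM
                #pragma vertex vert
                #pragma fragment frag
                // Input from the mesh: object-space position and vertex color
                struct a2v {
                    float4 vertex : POSITION;
                    fixed4 color : COLOR;
                };
                // Data passed from the vertex shader to the fragment shader
                struct v2f {
                    float4 pos : SV_POSITION;
                    fixed4 color : COLOR0;
                };
                v2f vert (a2v v) {
                    v2f o;
                    o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                    o.color = v.color;
                    return o;
                }
                fixed4 frag (v2f i) : SV_Target {
                    // We control exactly what each fragment outputs
                    return i.color;
                }
                ENDCG
            }
        }
    }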
3. The abandoned corner: Fixed Function Shader
Both of the Unity Shader forms above use the programmable pipeline. But some older devices (whose GPUs only support DirectX 7.0, OpenGL 1.5, or OpenGL ES 1.1), such as the iPhone 3, do not support programmable pipeline shaders, so on those devices we need to use fixed function shaders (Fixed Function Shader) to do the rendering. These shaders can usually only achieve some very simple effects.
A very simple fixed function shader sample code is as follows:
" Tutorial/basic " { properties{ _color ("Main Color", color) = (10.5 0.5 1 ) } subshader{ pass{ Material { diffuse [_color] } Lighting on } }}
As you can see, the code of a fixed function shader is also defined inside the Pass semantic block; this code is equivalent to some of the render state settings in a Pass, which we mentioned earlier.
For fixed function shaders, we have to write everything in ShaderLab syntax (that is, using ShaderLab's render setup commands) instead of CG/HLSL.
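To show a couple more of those ShaderLab setup commands, here is an illustrative sketch (not from the original text) that extends the basic example with a texture stage combined with the lit vertex color; the shader name and property names are placeholder choices:

    Shader "Tutorial/Textured Basic Sketch" {
        Properties {
            _Color ("Main Color", Color) = (1, 0.5, 0.5, 1)
            _MainTex ("Base Texture", 2D) = "white" {}
        }
        SubShader {
            Pass {
                // All render state is set declaratively with ShaderLab commands;
                // there is no CGPROGRAM block at all
                Material {
                    Diffuse [_Color]
                }
                Lighting On
                SetTexture [_MainTex] {
                    // modulate the texture with the lit vertex color
                    Combine texture * primary
                }
            }
        }
    }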
Because most GPUs now support the programmable rendering pipeline, the fixed pipeline has gradually been abandoned. In fact, in Unity 5.2, all fixed function shaders are compiled by Unity behind the scenes into corresponding vertex/fragment shaders, so fixed function shaders in the true sense no longer exist.
Some suggestions for choosing which Unity Shader form to use:
- Unless you have very specific needs that require fixed function shaders, for example your game has to run on very old devices (which are very rare), use programmable pipeline shaders, i.e. surface shaders or vertex/fragment shaders.
- If you need to handle many kinds of light sources, you may prefer surface shaders, but be careful about their performance on mobile platforms.
- If the number of lights you use is very small, for example a single directional light, a vertex/fragment shader is a better choice.
- Most importantly, if you have many custom rendering effects, use a vertex/fragment shader.
Unity Shader Getting Started Essentials - 3.5 The Forms of a Unity Shader