From Feng Le's "Unity Shader Essentials" (Unity Shader入门精要)
Surface Shaders first appeared in Unity 3, released in 2010.
Let's look at an example of a surface shader. First, do the following preparatory work:
1) Create a new scene and remove the skybox.
2) Create a new material and a new shader, and assign the shader to the material.
3) Create a capsule in the scene and assign the material from the previous step to it.
Then we write the shader code:
Shader "Unity Shaders Book/Chapter 17/Bumped Diffuse" {
    Properties {
        _Color ("Main Color", Color) = (1,1,1,1)
        _MainTex ("Base (RGB)", 2D) = "white" {}
        _BumpMap ("Normalmap", 2D) = "bump" {}
    }
    SubShader {
        Tags { "RenderType" = "Opaque" }
        LOD 300

        CGPROGRAM
        #pragma surface surf Lambert
        #pragma target 3.0

        sampler2D _MainTex;
        sampler2D _BumpMap;
        fixed4 _Color;

        struct Input {
            float2 uv_MainTex;
            float2 uv_BumpMap;
        };

        void surf (Input IN, inout SurfaceOutput o) {
            fixed4 tex = tex2D(_MainTex, IN.uv_MainTex);
            o.Albedo = tex.rgb * _Color.rgb;
            o.Alpha = tex.a * _Color.a;
            o.Normal = UnpackNormal(tex2D(_BumpMap, IN.uv_BumpMap));
        }
        ENDCG
    }
    FallBack "Legacy Shaders/Diffuse"
}
We then add a point light and a spotlight to the scene; the resulting effect is shown in the figure.
As the example shows, a surface shader requires far less code than the vertex/fragment shader techniques we learned earlier. We can easily achieve common lighting effects without handling any lighting variables ourselves; Unity computes the lighting result from each light source for us.
Unlike vertex/fragment shaders, the Cg code of a surface shader is written directly inside the SubShader rather than inside a Pass; Unity generates the multiple passes for us behind the scenes. Of course, we can still use Tags at the top of the SubShader to set the tags used by the surface shader.
The most important parts of a surface shader are its two structs and its compiler directives. The two structs are the bridges that carry information between the different functions of a surface shader, and the compiler directives are our main means of communicating with Unity.
Compiler directives
The most important job of the compiler directive is to specify the surface function and lighting function used by the surface shader, and to set some optional parameters. The first line in the Cg block of a surface shader is usually its compiler directive. Its general format is as follows:
    #pragma surface surfaceFunction lightModel [optionalparams]
Here, #pragma surface indicates that this directive defines a surface shader. It must specify the surface function and the lighting model to use, and optional parameters can be added to control various aspects of the surface shader's behavior.
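For example, the directive in the shader at the start of this chapter fills in this format with surf as the surface function and Lambert as the lighting model; an optional parameter such as alpha (discussed below) would simply be appended:

```
// surfaceFunction = surf, lightModel = Lambert
#pragma surface surf Lambert

// the same directive with an optional parameter appended
#pragma surface surf Lambert alpha
```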
Unlike the vertex/fragment abstraction we encountered before, a surface shader defines an object's surface properties, such as albedo, smoothness, and transparency. The surfaceFunction in the compiler directive is the function that defines these surface properties. It is usually a function named surf (though the name can be anything), and its signature is fixed to one of the following:
    void surf (Input IN, inout SurfaceOutput o)
    void surf (Input IN, inout SurfaceOutputStandard o)
    void surf (Input IN, inout SurfaceOutputStandardSpecular o)
The latter two structs were added in Unity 5 with the introduction of physically based rendering. SurfaceOutput, SurfaceOutputStandard, and SurfaceOutputStandardSpecular are all built-in Unity structs, each meant to be used with a different lighting model.
In the surface function, we use the input struct Input IN to set the various surface properties and store them in the output struct SurfaceOutput, SurfaceOutputStandard, or SurfaceOutputStandardSpecular, which is then passed to the lighting function to compute the lighting result.
In addition to the surface function, we also need to specify another very important function: the lighting function. The lighting function uses the surface properties set in the surface function, together with some lighting model, to simulate how light interacts with the object's surface. Unity has built-in physically based lighting models, Standard and StandardSpecular, as well as simple non-physically-based ones, Lambert and BlinnPhong.
Of course, we can also define custom lighting functions. For example, the following signatures can be used to define lighting functions for the forward rendering path:
    // For lighting models that do not depend on the view direction, e.g. diffuse
    half4 Lighting<Name> (SurfaceOutput s, half3 lightDir, half atten);

    // For lighting models that depend on the view direction, e.g. specular reflection
    half4 Lighting<Name> (SurfaceOutput s, half3 lightDir, half3 viewDir, half atten);
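As a sketch, a custom view-independent lighting function following the first signature might implement half Lambert diffuse shading. The name HalfLambert below is our own choice; _LightColor0 is Unity's built-in variable holding the current light's color:

```
fixed4 LightingHalfLambert (SurfaceOutput s, half3 lightDir, half atten) {
    // Scale and bias the diffuse term into [0.5, 1] (half Lambert)
    fixed diff = dot(s.Normal, lightDir) * 0.5 + 0.5;

    fixed4 c;
    c.rgb = s.Albedo * _LightColor0.rgb * diff * atten;
    c.a = s.Alpha;
    return c;
}
```

This lighting function would then be selected with #pragma surface surf HalfLambert; the Lighting prefix of the function name is dropped in the directive.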
At the end of the compiler directive we can also set some optional parameters. These cover a number of very useful features, such as turning transparency blending/testing on or off, specifying custom vertex and color modification functions, controlling the generated code, and so on.
1) Custom modification functions. In addition to the surface function and the lighting model, a surface shader supports two other custom functions: the vertex modification function and the final color modification function. The vertex modification function lets us customize vertex properties, for example passing the vertex color to the surface function, or modifying vertex positions to implement vertex animation. The final color modification function lets us change the color one last time before it is written to the screen, for example to implement a custom fog effect.
2) Shadows. We can control shadow-related code with certain parameters. For example, the addshadow parameter generates a shadow caster pass for the surface shader. Normally, Unity can find a pass whose LightMode is ShadowCaster through the Fallback, so objects are rendered correctly into the depth and shadow textures. But for objects with vertex animation or alpha testing, we need to handle shadow casting specially to give them correct shadows. The fullforwardshadows parameter enables shadows for all light types in the forward rendering path; by default, Unity only supports shadows from the most important directional light. Add this parameter if point lights or spotlights should cast shadows in forward rendering. Conversely, if we don't want objects using this shader to participate in any shadow computation, we can disable shadows with the noshadow parameter.
3) Transparency blending and alpha testing. We can control transparency blending and alpha testing with the alpha and alphatest directives. For example, the alphatest:VariableName directive uses the variable named VariableName to clip fragments that do not pass the test. In this case we may also need the addshadow parameter mentioned above to generate a correct shadow caster pass.
4) Lighting. Some parameters control how light affects the object. The noambient parameter tells Unity not to apply any ambient lighting or light probes. The novertexlights parameter tells Unity not to apply any per-vertex lighting. noforwardadd removes all ForwardAdd passes from forward rendering: the shader then supports only one per-pixel directional light, and all other lights are computed per-vertex or via spherical harmonics (SH). This parameter is typically used in mobile versions of a surface shader. There are also parameters that control lightmap baking and fog simulation, such as nolightmap and nofog.
5) Controlling code generation. Some directives control the code that a surface shader generates automatically. By default, Unity generates passes for both the forward and deferred rendering paths for every surface shader, which makes the generated shader file larger. If we are sure a surface shader will only be used with certain rendering paths, we can use exclude_path:deferred, exclude_path:forward, and exclude_path:prepass to tell Unity not to generate code for the excluded paths.
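As a sketch of how several of these optional parameters combine in a single directive (the function names vert and mycolor, and the _Cutoff property assumed by alphatest, are our own choices):

```
#pragma surface surf Lambert vertex:vert finalcolor:mycolor alphatest:_Cutoff addshadow noforwardadd

// Vertex modification function: e.g. slightly extrude each vertex along its normal
void vert (inout appdata_full v) {
    v.vertex.xyz += v.normal * 0.01;
}

// Final color modification function: e.g. apply a slight red tint
void mycolor (Input IN, SurfaceOutput o, inout fixed4 color) {
    color *= fixed4(1.0, 0.9, 0.9, 1.0);
}
```

Here addshadow is included because both the vertex animation and the alpha test would otherwise produce incorrect shadows, as described above.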
The two structs
The Input struct contains many of the data sources for surface properties, and serves as the input struct of the surface function. Input supports many built-in variable names; by declaring them, we tell Unity what data we want to use. The following table shows the variables built into the Input struct.
It is important to note that we do not need to compute these variables ourselves. We simply declare them in the Input struct using the names above, and Unity prepares the data for us behind the scenes; we just use it directly in the surface function. The one exception is when we have defined a custom vertex modification function and need to pass custom data to the surface function. For example, to implement a custom fog effect, we may need to compute a fog blending factor in the vertex modification function based on the vertex's position in view space; we can then define a variable such as half fog in the Input struct and store the computed result in that output variable.
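A sketch of such a fog effect, close to the example in Unity's Surface Shader documentation (the _FogColor property and the function names myvert and myfinal are our own assumptions; for brevity this version derives the blend factor from the clip-space position rather than the view-space depth):

```
#pragma surface surf Lambert finalcolor:myfinal vertex:myvert

fixed4 _FogColor;

struct Input {
    float2 uv_MainTex;
    half fog;    // custom variable filled by the vertex modification function
};

void myvert (inout appdata_full v, out Input data) {
    UNITY_INITIALIZE_OUTPUT(Input, data);
    // Compute a fog blend factor from the vertex position
    float4 hpos = UnityObjectToClipPos(v.vertex);
    hpos.xy /= hpos.w;
    data.fog = min(1.0, dot(hpos.xy, hpos.xy) * 0.5);
}

void myfinal (Input IN, SurfaceOutput o, inout fixed4 color) {
    // Blend the computed color toward the fog color using the custom factor
    color.rgb = lerp(color.rgb, _FogColor.rgb, IN.fog);
}
```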
With the Input struct providing the required data, we can compute the various surface properties. The other struct is the one that stores these surface properties: SurfaceOutput, SurfaceOutputStandard, or SurfaceOutputStandardSpecular. It serves as the output of the surface function, and is then used as the input of the lighting function for the various lighting calculations. Unlike the flexible Input struct, the variables in this struct are declared in advance and cannot be added or removed. The declaration of SurfaceOutput can be found in the Lighting.cginc file:
    struct SurfaceOutput {
        fixed3 Albedo;
        fixed3 Normal;
        fixed3 Emission;
        half Specular;
        fixed Gloss;
        fixed Alpha;
    };
And the declarations of SurfaceOutputStandard and SurfaceOutputStandardSpecular can be found in UnityPBSLighting.cginc:
    struct SurfaceOutputStandard
    {
        fixed3 Albedo;      // base (diffuse or specular) color
        fixed3 Normal;      // tangent-space normal, if written
        half3 Emission;
        half Metallic;      // 0 = non-metal, 1 = metal
        half Smoothness;    // 0 = rough, 1 = smooth
        half Occlusion;     // occlusion (default 1)
        fixed Alpha;        // alpha for transparencies
    };

    struct SurfaceOutputStandardSpecular
    {
        fixed3 Albedo;      // diffuse color
        fixed3 Specular;    // specular color
        fixed3 Normal;      // tangent-space normal, if written
        half3 Emission;
        half Smoothness;    // 0 = rough, 1 = smooth
        half Occlusion;     // occlusion (default 1)
        fixed Alpha;        // alpha for transparencies
    };
In a surface shader we only need to choose one of these three structs, depending on the lighting model we use. Unity's built-in lighting models come in two kinds: the simple, non-physically-based models Lambert and BlinnPhong, and the physically based models Standard and StandardSpecular added in Unity 5, which conform better to physical laws but are much more expensive to compute. If we use a non-physically-based lighting model, we usually use SurfaceOutput; otherwise we use SurfaceOutputStandard or SurfaceOutputStandardSpecular. The SurfaceOutputStandard struct is used in the default metallic workflow and corresponds to the Standard lighting function, while SurfaceOutputStandardSpecular is used in the specular workflow and corresponds to the StandardSpecular lighting function.
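For example, a minimal surface function for the metallic workflow might look like the following sketch (the property names _MainTex, _Color, _Metallic, and _Smoothness are our own assumptions):

```
#pragma surface surf Standard fullforwardshadows

sampler2D _MainTex;
fixed4 _Color;
half _Metallic;
half _Smoothness;

struct Input {
    float2 uv_MainTex;
};

void surf (Input IN, inout SurfaceOutputStandard o) {
    fixed4 c = tex2D(_MainTex, IN.uv_MainTex) * _Color;
    o.Albedo = c.rgb;
    o.Metallic = _Metallic;     // 0 = non-metal, 1 = metal
    o.Smoothness = _Smoothness; // 0 = rough, 1 = smooth
    o.Alpha = c.a;
}
```

Switching to the specular workflow would mean replacing Standard with StandardSpecular and SurfaceOutputStandard with SurfaceOutputStandardSpecular, and setting the Specular color instead of Metallic.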
In the SurfaceOutput struct, the surface properties are:
1) fixed3 Albedo: the surface's reflectance of the light source. It is usually computed as the product of a texture sample and a color property.
2) fixed3 Normal: the surface normal direction.
3) fixed3 Emission: self-illumination. Unity typically performs a simple color addition just before the fragment shader's final output, using a statement similar to:

    c.rgb += o.Emission;

4) half Specular: the exponent of the specular highlight, which affects the specular reflection calculation. For example, the built-in BlinnPhong lighting function computes the specular intensity with the following statement:

    float spec = pow(nh, s.Specular * 128.0) * s.Gloss;

5) fixed Gloss: the strength of the specular highlight. It is typically used in lighting models that include specular reflection.
6) fixed Alpha: the transparency channel.
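For context, the built-in BlinnPhong lighting function that contains that pow statement looks roughly like this (paraphrased from Unity's Lighting.cginc; the exact code varies between Unity versions):

```
inline fixed4 LightingBlinnPhong (SurfaceOutput s, half3 lightDir, half3 viewDir, half atten) {
    half3 h = normalize(lightDir + viewDir);
    fixed diff = max(0, dot(s.Normal, lightDir));
    float nh = max(0, dot(s.Normal, h));
    float spec = pow(nh, s.Specular * 128.0) * s.Gloss;

    fixed4 c;
    c.rgb = (s.Albedo * _LightColor0.rgb * diff
             + _LightColor0.rgb * _SpecColor.rgb * spec) * atten;
    c.a = s.Alpha;
    return c;
}
```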
What does Unity do behind the scenes?
As mentioned before, Unity generates a vertex/fragment shader with many passes from a surface shader behind the scenes. Some of these passes serve different rendering paths. By default, Unity generates passes whose LightMode is ForwardBase and ForwardAdd for the forward rendering path; before Unity 5 it generates passes whose LightMode is PrePassBase and PrePassFinal for the deferred rendering path, and from Unity 5 on it generates a pass whose LightMode is Deferred for the deferred rendering path. Other passes are used to produce additional information. For example, to extract surface information for lightmaps and dynamic global illumination, Unity generates a pass whose LightMode is Meta. Some surface shaders modify vertex positions, so we can use the addshadow compiler directive to generate a corresponding shadow caster pass whose LightMode is ShadowCaster. All of these passes are generated according to fixed rules, based on the compiler directives and custom functions in our surface shader. Unity provides a feature that lets us inspect the code automatically generated from a surface shader: the inspector panel of every compiled surface shader has a "Show generated code" button, as shown in the figure. One click shows all the vertex/fragment shaders Unity generated for this surface shader.
By looking at this code, we can see how Unity actually generates each pass from the surface shader. Take the ForwardBase pass generated by Unity as an example; its rendering pipeline is shown in the figure.
Unity's process for generating this pass is roughly as follows:
1) Directly copy the code between CGPROGRAM and ENDCG in the surface shader. This includes the definitions of the Input struct, the surface function, the lighting function (if any), and other variables and functions. These are later called as ordinary structs and functions.
2) Unity parses the above code and generates the vertex shader's output, the v2f_surf struct, which is used to pass data between the vertex shader and the fragment shader. Unity analyzes which variables we actually use, such as texture coordinates, the view direction, the reflection direction, and so on, and generates the corresponding members in v2f_surf only when needed. Even if we define variables in Input, if Unity finds while parsing the code that we never use them, those variables are not generated in v2f_surf; in other words, Unity performs some optimization. v2f_surf also contains other variables that are needed, such as shadow texture coordinates, lightmap coordinates, per-vertex lighting, and so on.
3) Next, generate the vertex shader. If we defined a vertex modification function, Unity calls it first to modify the vertex data or to fill the custom variables in the Input struct. Unity then parses the data modified by the vertex modification function and, if needed, stores the results in the corresponding v2f_surf variables via the Input struct. The other generated members of v2f_surf are then computed, mainly the vertex position, texture coordinates, normal direction, per-vertex lighting, lightmap sampling coordinates, and so on. Of course, we can control through compiler directives whether certain variables need to be computed. Finally, v2f_surf is passed on to the fragment shader.
4) Generate the fragment shader. Use the corresponding variables in v2f_surf to fill the Input struct, for example texture coordinates, the view direction, and so on. Then call our custom surface function to fill the SurfaceOutput struct. Then call the lighting function to get the initial color value. If the built-in Lambert or BlinnPhong lighting function is used, Unity also computes dynamic global illumination and adds it to the lighting model's calculation. Then perform other color additions, for example adding the per-vertex lighting contribution when no lightmap is used. Finally, if we defined a final color modification function, Unity calls it to modify the color one last time.
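The four steps above can be sketched as the following skeleton of a generated ForwardBase pass (a simplified paraphrase; the actual generated names and contents vary by Unity version and by shader):

```
struct v2f_surf {
    float4 pos : SV_POSITION;
    float2 pack0 : TEXCOORD0;       // packed texture coordinates
    fixed3 worldNormal : TEXCOORD1;
    float3 worldPos : TEXCOORD2;
    // plus shadow coordinates, lightmap UVs, per-vertex lighting, etc. as needed
};

v2f_surf vert_surf (appdata_full v) {
    // step 3: call the custom vertex modification function (if any),
    // then fill in the clip-space position, packed UVs, world normal,
    // per-vertex lighting, lightmap sampling coordinates, ...
}

fixed4 frag_surf (v2f_surf IN) : SV_Target {
    // step 4: fill the Input struct from v2f_surf,
    // call surf() to fill the SurfaceOutput struct,
    // call the lighting function, add emission and other color terms,
    // and finally call the final color modification function (if any)
}
```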
Unity Shader Essentials learning notes, Chapter 17: Exploring Unity's Surface Shaders