Even a Cat Can Learn It: Unity3D Shader Getting Started Guide (Part 2)
About this series
This is the second installment in the Unity3D Shader primer series. The series is aimed at Unity3D users who are new to shader development; since I am a shader beginner myself, there may be errors or omissions, and if you have shader development experience I welcome and implore you to point out any flaws in the text so that I can correct them promptly. The previous installment introduced some basic shader knowledge, including the structure and syntax of ShaderLab, along with a step-by-step walkthrough of a simple shader. With those basics in hand, reading a simple shader should not be too much of a problem. Before continuing this tutorial, read through Unity's Surface Shader Examples to verify that you have mastered the previous section: if you can read most of the sample shaders without much trouble, and can correctly identify a shader's structure, declarations, and usage, then you are ready to continue with this section.
Normal Mapping
Normal mapping is a common application of bump mapping. Put simply, it adds visual detail and realism to a textured model without increasing its polygon count, by rendering the dark and bright parts of the surface with different shading. The principle is straightforward: alongside the ordinary texture, you supply a second map, matched to the original, that encodes the surface relief at each point. Combining this extra bump information with the original texture yields new surface detail that enriches the three-dimensional rendering. In this section we will first implement a normal-mapping shader, then discuss Unity shader lighting models and implement a custom one. Finally, we will modify the shader to simulate snow accumulating on a stone, and make some changes to the model's vertices so the snow effect looks more realistic. By the end of this section we will have a shader powerful enough for some real-world development work, and, more importantly, we will know how it was created.
For a sense of what normal mapping does, compare the two renderings below. The model has 500 polygons; the left image uses only simple diffuse shading, while the right one adds a normal map. Comparing the two, it is not hard to see that with the normal map the stone's dark and bright regions are much better defined. Overall, the sense of relief is far stronger than the diffuse result alone, and the stone looks more real and more textured.
The assets used in this section can be downloaded here, including the stone model above, a texture, and the corresponding normal map. Import the downloaded package into your project and create a new material using the simple diffuse shader (as we did in the previous section); with a suitable directional light, you will get the effect of the left image. In addition, this section and later ones will refer to some of Unity's built-in shader content, such as standard utility functions and constant definitions; the built-in shaders for your Unity version can be downloaded from the right-hand side of the Unity download page.
Next we implement normal mapping, but first a little more background on normal maps. Most normal maps look similar to the figure below: an image dominated by blue-violet. A normal map is in fact an RGB texture in which the red, green, and blue channels store the components of each point's normal (converted from a height map): Nx, Ny, Nz. Since the normals at most points lean toward the z direction, the image appears mostly blue. When the shader runs, it can combine the normal at each point with the lighting to compute that point's brightness, then apply the result to the original texture to convey the surface's bumps and hollows under a given lighting environment. For more information on normal maps, refer to the relevant Wikipedia entries.
Back to the point: our main concern here is the shader, not the image-processing theory. A small modification to the shader we wrote in the previous section yields a new shader that performs normal-mapped rendering. The newly added parts are numbered in the code and explained afterwards.
```
Shader "Custom/Normal Mapping" {
    Properties {
        _MainTex ("Base (RGB)", 2D) = "white" {}
        _Bump ("Bump", 2D) = "bump" {}                     // 1
    }
    SubShader {
        Tags { "RenderType" = "Opaque" }
        LOD 200

        CGPROGRAM
        #pragma surface surf Lambert

        sampler2D _MainTex;
        sampler2D _Bump;                                   // 2

        struct Input {
            float2 uv_MainTex;
            float2 uv_Bump;                                // 3
        };

        void surf (Input IN, inout SurfaceOutput o) {
            half4 c = tex2D (_MainTex, IN.uv_MainTex);
            o.Normal = UnpackNormal(tex2D(_Bump, IN.uv_Bump));  // 4
            o.Albedo = c.rgb;
            o.Alpha = c.a;
        }
        ENDCG
    }
    FallBack "Diffuse"
}
```
1. Declare and add a new texture property, with display name `Bump`, to hold the normal map.
2. To use this texture in the CG program, you must add a sampler declaration for it (I hope you remember this from last time~).
3. Add the UV coordinates of the bump texture to the Input struct.
4. Extract the normal from the normal map and assign it to the Normal property of the output for the corresponding point. `UnpackNormal` is a method defined in the UnityCG.cginc file, which contains a series of commonly used CG variables and methods. `UnpackNormal` accepts a fixed4 input and converts it into the corresponding normal value (a fixed3). After unpacking this value and assigning it to the output's Normal, it participates in the lighting calculation that completes the rest of the rendering.
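For intuition, the unpacking step for a plain RGB-encoded normal map is just a range remap from texture values in [0, 1] to normal components in [-1, 1]. (On some platforms Unity stores normals in a compressed DXT5nm layout, where `UnpackNormal` does extra work; the Python sketch below assumes the simple RGB encoding and is an illustration, not Unity code.)

```python
def unpack_normal(rgb):
    """Remap an RGB texel in [0, 1] to a normal vector in [-1, 1].

    Assumes the simple RGB normal-map encoding; Unity's UnpackNormal
    also handles platform-specific compressed formats.
    """
    return tuple(2.0 * c - 1.0 for c in rgb)

# The typical "flat" normal-map color (0.5, 0.5, 1.0) decodes to a normal
# pointing straight out along +z, which is why normal maps look mostly blue.
print(unpack_normal((0.5, 0.5, 1.0)))  # (0.0, 0.0, 1.0)
```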
Now save and compile the shader, create a new material that uses it, drag the stone texture and normal map into the Base and Bump slots, and apply the material to the stone model. You should see the effect shown on the right side of the figure above.
Lighting models
In the shaders we have seen so far (the basic diffuse shader of the previous section and the normal mapping shader here), we used only the Lambert lighting model (#pragma surface surf Lambert). This is a classic diffuse reflection model: the reflected intensity at a point is proportional to the cosine of the angle between the incident light direction and the surface normal at that point. Detailed derivations of Lambert and diffuse reflection can be found on Wikipedia (Lambertian reflectance, diffuse reflection) or elsewhere. In one sentence: the reflected light intensity at a point depends on the point's normal vector, the incident light vector, and the light's intensity and angle, and the result is proportional to the dot product of the two vectors. Now that we know the principle behind the lighting calculation, let's see how to implement a lighting model of our own.
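The Lambert relationship is easy to verify outside of ShaderLab with plain Python: for unit vectors, the dot product of the normal and the light direction equals the cosine of the angle between them (names below are illustrative, not Unity API):

```python
import math

def lambert(normal, light_dir):
    """Lambert diffuse intensity: clamped dot product of unit vectors."""
    d = sum(n * l for n, l in zip(normal, light_dir))
    return max(0.0, d)

normal = (0.0, 1.0, 0.0)                          # surface facing straight up
print(lambert(normal, (0.0, 1.0, 0.0)))           # light from directly above: 1.0

# Light arriving at 60 degrees from the normal gives cos(60 deg) = 0.5.
light = (math.sin(math.radians(60)), math.cos(math.radians(60)), 0.0)
assert abs(lambert(normal, light) - 0.5) < 1e-9

# Light from below the surface is clamped to zero rather than going negative.
assert lambert(normal, (0.0, -1.0, 0.0)) == 0.0
```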
Make the following changes to the shader we just wrote:
- First, change the original `#pragma` line to `#pragma surface surf CustomDiffuse`
- Then add the following code inside the SubShader block:
```
inline float4 LightingCustomDiffuse (SurfaceOutput s, fixed3 lightDir, fixed atten) {
    float difLight = max(0, dot(s.Normal, lightDir));
    float4 col;
    col.rgb = s.Albedo * _LightColor0.rgb * (difLight * atten * 2);
    col.a = s.Alpha;
    return col;
}
```
- Finally, save and return to Unity. The shader will compile, and if everything works you will see no difference at all between the new shader and the previous one in the material's appearance. But in fact our shader is no longer using Unity's built-in diffuse lighting model; it is now using a lighting model we implemented ourselves.
Meow, what does this code actually do?! I'm sure you have exactly that question... no problem, it wouldn't be a beginners' guide otherwise, so let's go line by line. First, as mentioned in the previous article, the `#pragma` statement declares the type of shader that follows, the name of the method to call, and the lighting model to use. We previously specified Lambert as the lighting model; now we have changed it to CustomDiffuse.
The code added next is the implementation of the lighting calculation. In shaders there is a strict naming convention for these methods: to create a lighting model, the first thing to do is declare a lighting function whose name follows the pattern `Lighting<Your Chosen Name>`. For our lighting model CustomDiffuse, the function name is naturally `LightingCustomDiffuse`. The lighting model runs after the surf method has computed the surface color: it takes the incoming lighting conditions, applies them to that color, and finally outputs a new color value to the rendering pipeline to complete the drawing.
Perhaps you have already guessed that the Lambert model we used before has a lighting function named LightingLambert? Bingo. Unity's built-in shaders include a Lighting.cginc file that contains the LightingLambert implementation. You may also have noticed that our LightingCustomDiffuse implementation is exactly the same as Unity's LightingLambert, which is why there is no visual difference when using the new shader: the implementations really are identical.
First, the inputs. `SurfaceOutput s` is the output of the surface function surf, i.e. the point whose lighting we are about to process; `fixed3 lightDir` is the light direction; and `fixed atten` is the light attenuation coefficient. In the lighting code, we first take the dot product of the normal in the input s (in normal mapping, this value has already been replaced with the one from the normal map) and the light direction (dot is a built-in CG math function; I hope you remember it, and you can refer to it here). The result of the dot product lies between -1 and 1; the larger the value, the smaller the angle between the normal and the light, and the brighter the point. We then use max to clamp this factor to between 0 and 1, to avoid negative values that would make the final computed color negative and produce a black blob, which is generally not what we want. Next we multiply the color output by surf with the light's color (`_LightColor0.rgb`, which Unity derives from the light sources in the scene and which is declared in Lighting.cginc), and then by the intensity factor and the input attenuation coefficient, finally producing the color under this light. (As for why `difLight * atten * 2` contains a multiplication by 2: this is a historical artifact, mainly a compensation for light intensity; see the discussion here.)
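Stripped of the GPU types, the whole pipeline of `LightingCustomDiffuse` can be mimicked in a few lines of Python (a sketch only, for following the math; `atten` and the `* 2` compensation are taken straight from the shader above):

```python
def lighting_custom_diffuse(albedo, light_color, normal, light_dir, atten):
    """Python sketch of the LightingCustomDiffuse math (not Unity code)."""
    # Clamped dot product of the surface normal and the light direction.
    dif = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    # Surface color * light color * (intensity * attenuation * 2).
    return tuple(a * c * (dif * atten * 2.0)
                 for a, c in zip(albedo, light_color))

# A white surface lit head-on by a half-intensity white light, no attenuation:
rgb = lighting_custom_diffuse((1, 1, 1), (0.5, 0.5, 0.5), (0, 0, 1), (0, 0, 1), 1.0)
# The historical * 2 compensation brings the result back to full brightness.
assert rgb == (1.0, 1.0, 1.0)
```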
Once the basic implementation is understood, we can play with some variations. The simplest example is swapping the Lambert model for another lighting model, such as half Lambert. Half Lambert is a technique created by Valve for brightening objects in low-light conditions; it was first used in Half-Life to keep objects' shapes from being lost in the dark. Simply halve the light intensity factor and then add 0.5. The code is as follows:
```
inline float4 LightingCustomDiffuse (SurfaceOutput s, fixed3 lightDir, fixed atten) {
    float difLight = dot(s.Normal, lightDir);
    float hLambert = difLight * 0.5 + 0.5;
    float4 col;
    col.rgb = s.Albedo * _LightColor0.rgb * (hLambert * atten * 2);
    col.a = s.Alpha;
    return col;
}
```
With this change, a point whose raw light intensity was 0 now gets the value 0.5, while a point that was at 1 stays at 1. In other words, the darker parts of the model's texture are brightened, while the bright parts stay essentially as they were, preventing overexposure. Comparing before and after half Lambert, notice that the shadows below the rightmost stone are more visible, and all of this is purely a change in shading, with no changes to textures or models.
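The half Lambert remap `d * 0.5 + 0.5` simply squeezes the raw dot product from [-1, 1] into [0, 1], which is exactly the brightening described above. A tiny Python check:

```python
def half_lambert(d):
    """Remap a raw dot product in [-1, 1] to [0, 1]."""
    return d * 0.5 + 0.5

assert half_lambert(0.0) == 0.5   # the formerly dark terminator is lifted to half brightness
assert half_lambert(1.0) == 1.0   # fully lit points are unchanged, so no overexposure
assert half_lambert(-1.0) == 0.0  # only points facing fully away stay black
```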
Additional surface-texture effects
OK, the discussion of lighting and custom lighting models ends here for now, because expanding it further would turn into a huge topic in graphics and classical optics. Let's return to shaders and build something exciting together. Suppose your game has a snowy scene, and you want the stones in it to be covered with snow: what should you do? Ask your lovely 3D artist to produce a set of snow-covered textures and swap them in? Of course not. Not that you couldn't, but you shouldn't: new textures not only increase the size of the project's asset bundle, but also make modification and maintenance harder. Think about it: what if many stones need the same snow-covered effect, or the snow gradually accumulates as in-game time passes? Do you want the artist to produce five different textures for every stone, one per snow depth? Believe me, they would go crazy.
So let's use a shader to do this work instead! First consider what we need for a snow effect: the snow level (how much snow there is), the snow color, and the snow direction. The basic idea is similar to implementing the custom lighting model: compute the dot product of a point's world-space normal and the snow direction; if it exceeds the threshold set by the snow level, the surface faces the falling snow closely enough to be covered, so we display the snow color; otherwise we use the original texture color. Without further ado, here is the code. Starting from the shader above, change the Properties block to:
```
Properties {
    _MainTex ("Base (RGB)", 2D) = "white" {}
    _Bump ("Bump", 2D) = "bump" {}
    _Snow ("Snow Level", Range(0,1)) = 0
    _SnowColor ("Snow Color", Color) = (1.0,1.0,1.0,1.0)
    _SnowDirection ("Snow Direction", Vector) = (0,1,0)
}
```
There's not much to say here; the only thing worth mentioning is that the default value of _SnowDirection is (0,1,0), meaning we want the snow to fall vertically. Correspondingly, declare these variables in the CG program:
```
sampler2D _MainTex;
sampler2D _Bump;
float _Snow;
float4 _SnowColor;
float4 _SnowDirection;
```
Next, change the contents of Input:
```
struct Input {
    float2 uv_MainTex;
    float2 uv_Bump;
    float3 worldNormal;
    INTERNAL_DATA
};
```
Compared with the Input struct of the previous shader, this adds `float3 worldNormal; INTERNAL_DATA`. If the SurfaceOutput's Normal is set, worldNormal gives us the current point's normal in world space. A detailed explanation can be found in Unity's Surface Shader documentation. Next, change the surf function to actually install the snow effect:
```
void surf (Input IN, inout SurfaceOutput o) {
    half4 c = tex2D (_MainTex, IN.uv_MainTex);
    o.Normal = UnpackNormal(tex2D(_Bump, IN.uv_Bump));
    if (dot(WorldNormalVector(IN, o.Normal), _SnowDirection.xyz) > lerp(1, -1, _Snow)) {
        o.Albedo = _SnowColor.rgb;
    } else {
        o.Albedo = c.rgb;
    }
    o.Alpha = c.a;
}
```
Compared with the previous version, this adds an if...else judgment. Look first at the left side of the condition: we take the dot product of the snow direction and the input point's world-space normal, where `WorldNormalVector` computes the world-space normal direction from the input point and its normal value. The lerp function on the right is not hard to understand for anyone with a notion of interpolation: when _Snow is at its minimum of 0, the function returns 1, and when _Snow is at its maximum of 1, it returns -1. So we can control the snow threshold through the value of _Snow: when _Snow is 0, the left side of the inequality can never exceed the right side, so there is no snow at all; conversely, when _Snow is at its maximum of 1, the left side is always greater than -1, so the entire model is covered in snow. Values in between produce varying amounts of snow.
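Cg's `lerp(a, b, w)` is simply `a + w * (b - a)`, so the threshold logic above can be checked numerically with a plain-Python sketch (illustrative names, not Unity API):

```python
def lerp(a, b, w):
    """Cg-style linear interpolation: a + w * (b - a)."""
    return a + w * (b - a)

def is_snowy(normal_dot_snow_dir, snow_level):
    """Mirror of the shader test: dot(worldNormal, snowDir) > lerp(1, -1, snow)."""
    return normal_dot_snow_dir > lerp(1.0, -1.0, snow_level)

assert not is_snowy(1.0, 0.0)   # snow level 0: threshold is 1, nothing passes
assert is_snowy(-0.99, 1.0)     # snow level 1: threshold is -1, everything passes
assert is_snowy(0.5, 0.5)       # mid level: upward-facing points collect snow...
assert not is_snowy(-0.5, 0.5)  # ...while downward-facing ones do not
```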
Apply this shader and adjust the snow level and color appropriately, and you will get an effect like the right-hand image shown below.
Modifying the model's vertices
So far we have only been operating on the original texture: whether using the normal map to make the model look bumpy, or adding snow, all the computation and color output has been "fake", with no real changes to the model itself. For a snow effect, though, snow actually sits on top of the stone rather than simply replacing the surface color. Directly replacing the color is the simplest approach, but we can also alter the model a little, enlarging it slightly in the direction the snow comes from, so that a layer of snow appears to be attached to the stone.
We continue revising the previous shader. First we need to tell the surface shader that we want to modify the model's vertices, so change the `#pragma` line to:

```
#pragma surface surf CustomDiffuse vertex:vert
```
This tells the shader that we want to modify the model's vertices, and that we will write a function named `vert` to do so. Next we add a property named `_SnowDepth` representing the thickness of the snow, and of course we also need to declare it in the CG section:
```
// in Properties { … }
_SnowDepth ("Snow Depth", Range(0,0.3)) = 0.1

// in the CG declarations
float _SnowDepth;
```
The next step is to implement the vert method. Similar to the earlier snow test in surf, it uses the size of a dot product to decide whether a vertex needs to be displaced, and determines the direction of the displacement. Add the following vert method to the CG section:
```
void vert (inout appdata_full v) {
    float4 sn = mul(transpose(_Object2World), _SnowDirection);
    if (dot(v.normal, sn.xyz) >= lerp(1, -1, (_Snow * 2) / 3)) {
        v.vertex.xyz += (sn.xyz + v.normal) * _SnowDepth * _Snow;
    }
}
```
Similar in principle to surf, the system passes in the current vertex, and we can compute and fill in new values as needed. The first line above uses the `transpose` method to obtain the transpose of a matrix. `_Object2World` is a built-in Unity ShaderLab value representing the matrix that transforms the current model into world coordinates; multiplying its transpose by the snow direction maps the world-space snow direction into the object's own space. We then take the dot product of this object-space snow direction and the current vertex's normal, and compare the result against a threshold obtained by lerp from two-thirds of the snow level. In this way, vertices that face the snow closely enough, when the snow is heavy enough, are displaced along the snow direction plus the normal, raising the model surface at that point.
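The matrix step is the only subtle part: for a pure rotation, the transpose of the object-to-world matrix equals its inverse, so multiplying by the transpose brings the world-space snow direction into object space. A small numeric sketch with a 90-degree rotation about z (hand-rolled math for illustration, no Unity types):

```python
import math

def mat_vec(m, v):
    """Multiply a 3x3 matrix (tuple of rows) by a 3-vector."""
    return tuple(sum(m[r][c] * v[c] for c in range(3)) for r in range(3))

def transpose(m):
    return tuple(tuple(m[c][r] for c in range(3)) for r in range(3))

# Object-to-world: rotate 90 degrees about z, so the object's +x axis
# ends up pointing along world +y.
c, s = math.cos(math.pi / 2), math.sin(math.pi / 2)
obj_to_world = ((c, -s, 0.0),
                (s,  c, 0.0),
                (0.0, 0.0, 1.0))

snow_world = (0.0, 1.0, 0.0)   # snow falls along world +y
snow_obj = mat_vec(transpose(obj_to_world), snow_world)
# In object space the snow direction is the object's own +x axis,
# which is exactly the axis that faces the falling snow in the world.
assert all(abs(a - b) < 1e-9 for a, b in zip(snow_obj, (1.0, 0.0, 0.0)))
```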
Compare the effect before and after the vertex modification: the right-hand image, with the model adjustment added, looks more realistic.