Motivation
I suggest you read this tutorial if you want to:
- Learn how to write a multi-pass toon shader.
- Learn more about the different reference coordinate systems (spaces) and their effects in a shader.
- Study a practical fragment shader in depth.
- Learn to multiply matrices and use Unity's built-in matrices.
This tutorial is more practical than the fifth tutorial.
Preparatory work
To build a toon shader with a stroke, we need to do two things:
- Add a stroke around the model.
- Port the toon shader from the fourth article (written as a surface shader) to a vertex & fragment shader.
Strokes
There are many ways to draw a stroke; in the fourth article we used rim lighting (edge illumination) to add a stroke effect to our character. Now we will use a different approach, an extra pass, to improve on that stroke effect.
Unlike the previous implementation, in this tutorial we scale up the parts of the model you cannot see (the back faces) and render them black; the enlarged black silhouette peeking out from behind the model produces the stroke. This method leaves the front of the model rendered intact.
So we first try to:
- Write a separate pass that draws only the back of the model.
- Extend the vertices of the back faces outward so they appear larger.
The following pass simply draws the back of the model (Cull Front removes the front-facing polygons):
Pass {
    Cull Front
    Lighting Off
}
Now let's handle the simplest part: drawing every pixel that reaches this pass as black!
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"

// the remaining features are implemented here

float4 frag(v2f i) : COLOR
{
    return float4(0, 0, 0, 1);
}
ENDCG
The fragment function returns float4(0, 0, 0, 1): all black.
Now add the input structure for our shader. We use a struct containing the vertex position and normal, because we will extend each vertex of the model along its normal direction (these vertices are the points on the back faces). So our input structure must contain the vertex position (vertex) and the vertex normal (normal).
struct a2v
{
    float4 vertex : POSITION;
    float3 normal : NORMAL;
};

struct v2f
{
    float4 pos : POSITION;
};

Next we define an _Outline property in the Properties block, with a range of 0.0~1.0, and declare the matching variable float _Outline in the CG code.
Finally, we extend the vertex function vert to push each vertex outward along its normal:
float _Outline;

v2f vert(a2v v)
{
    v2f o;
    o.pos = mul(UNITY_MATRIX_MVP, v.vertex + float4(v.normal, 0) * _Outline);
    return o;
}
What we did was push v.vertex along its normal, scaled by _Outline, and then use Unity's built-in matrix UNITY_MATRIX_MVP to transform the result into projection space.
Matrices are used in shaders to transform many things. Here a 4x4 matrix is multiplied by a 4x1 column vector, producing another 4x1 vector. Unity predefines many matrices, and we can use them to transform between the various coordinate spaces.
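The multiplication just described can be sketched in plain Python (a hypothetical mat4_mul_vec4 helper for illustration, not Unity code):

```python
def mat4_mul_vec4(m, v):
    # m is a 4x4 matrix as a list of rows, v is a 4-component column vector;
    # each output component is the dot product of one row of m with v
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

# a translation matrix moves the point (1, 2, 3, 1) by (10, 0, 0)
translate = [
    [1, 0, 0, 10],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
]
print(mat4_mul_vec4(translate, [1, 2, 3, 1]))  # [11, 2, 3, 1]
```

This is exactly what mul(UNITY_MATRIX_MVP, v.vertex) does in the shader, with the matrix chosen by Unity for the current model and camera.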
Your code should now look like this (note that this builds on the code from the fifth tutorial):
Pass
{
    // cull the front faces of the model and render only the back
    Cull Front
    Lighting Off

    CGPROGRAM
    #pragma vertex vert
    #pragma fragment frag
    #include "UnityCG.cginc"

    struct a2v
    {
        float4 vertex : POSITION;
        float3 normal : NORMAL;
    };

    struct v2f
    {
        float4 pos : POSITION;
    };

    float _Outline;

    v2f vert(a2v v)
    {
        v2f o;
        o.pos = mul(UNITY_MATRIX_MVP, v.vertex + float4(v.normal, 0) * _Outline);
        return o;
    }

    float4 frag(v2f i) : COLOR
    {
        return float4(0, 0, 0, 1);
    }
    ENDCG
}
It looks like we get something of an effect, but looking closely at the mouth we can see a big problem. This happens because the pass that draws the edge also writes to the depth buffer, so in some cases the front of the model cannot be drawn properly.
Take the mouth as an example: the upper lip faces front, while the lower lip faces the other way (its polygon winding is counterclockwise). So Cull Front removes the upper lip and keeps the lower lip. The lower lip's normal points almost straight up, so the vert function pushes this black patch up above the lower lip. Because the pass writes to the depth buffer, the black patch's depth is stored, and since the patch sits right in front of the lip, the lip later fails the depth test and is not drawn, leaving only the black patch.
Naturally, we might think it would be enough to stop this black patch from writing to the depth buffer. The picture below shows the result of disabling depth writes in the pass.
Use the following code:
Pass {
    Cull Front
    Lighting Off
    ZWrite Off
The extra black patches are gone once depth writes are disabled. But a new problem arises: because the black patches never write depth, the model's own surfaces always cover them. In the picture below, the model in front hides the black edge produced by the model behind it. That is not what we want.
Now we can see the essence of the problem: the black patch is extruded along the normal, so its z-value changes. What we want instead is to handle the z-value deliberately: give the back-facing black patch a slightly smaller z-value, so it sits a little farther from the viewpoint, rather than floating off the surface like newly created geometry. Then, for the edge effect, the main contribution comes from the x and y components, not from z.
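To see why fixing the z component helps, here is a small Python sketch (hypothetical helper name; assumes view-space normals as discussed below):

```python
import math

def flatten_and_normalize(n, z_fixed=-0.4):
    # replace the view-space z component of the normal with a fixed value,
    # then re-normalize the result
    x, y, _ = n
    length = math.sqrt(x * x + y * y + z_fixed * z_fixed)
    return (x / length, y / length, z_fixed / length)

# two back-face normals that differ only in z now produce the same offset:
a = flatten_and_normalize((0.0, 0.8, 0.6))
b = flatten_and_normalize((0.0, 0.8, -0.6))
print(a == b)    # True: the original z no longer drives the extrusion
print(a[2] < 0)  # True: every patch gets the same small push away from the camera
```

The extrusion direction is now dominated by x and y, and every outline vertex receives the same bounded, uniform backward nudge in z.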
Now go back to our vertex function and do some matrix transformations.
Flatten the black patches produced on the back in the z direction
The first challenge is that our vertex and normal are in model space, but we want them in view space (the space whose origin is the camera, before the projection transform), because in view space the z-axis runs along the viewing direction, so the model's z-value directly indicates its distance from the camera.
Here we will use a few of Unity's built-in matrices.
First, instead of transforming the vertices into projection space, we transform them into view space; this is as simple as using a different matrix.
Then we need to transform the corresponding normal into view space. A trick is used here: you cannot simply use UNITY_MATRIX_MV to transform the normal from model space to view space; you have to use the inverse transpose of UNITY_MATRIX_MV, which Unity provides as UNITY_MATRIX_IT_MV (IT stands for inverse transpose). A normal multiplied directly by UNITY_MATRIX_MV would, in general, no longer be perpendicular to its surface. The underlying reason is that a vertex is a point, while a normal is a direction vector.
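The trick can be checked numerically. The Python sketch below (illustrative only, not Unity code) uses a non-uniform scale; for a diagonal matrix the inverse transpose is simply the reciprocal diagonal:

```python
def mat3_vec3(m, v):
    # multiply a 3x3 matrix (list of rows) by a 3-component vector
    return [sum(m[r][c] * v[c] for c in range(3)) for r in range(3)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# non-uniform scale: stretch x by 2
M = [[2, 0, 0], [0, 1, 0], [0, 0, 1]]
# inverse transpose of a diagonal matrix: just invert the diagonal
M_it = [[0.5, 0, 0], [0, 1, 0], [0, 0, 1]]

tangent = [1, -1, 0]  # a direction lying in the surface
normal = [1, 1, 0]    # perpendicular to the tangent

t2 = mat3_vec3(M, tangent)
wrong = mat3_vec3(M, normal)    # transformed like a point
right = mat3_vec3(M_it, normal) # transformed with the inverse transpose

print(dot(t2, wrong))  # 3: no longer perpendicular to the surface
print(dot(t2, right))  # 0: still perpendicular
```

This is why the shader uses UNITY_MATRIX_IT_MV for the normal while using UNITY_MATRIX_MV for the vertex.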
So all we have to do is:
- Transform the vertex into view space: pos = mul(UNITY_MATRIX_MV, v.vertex);
- Transform the normal into view space: normal = mul((float3x3)UNITY_MATRIX_IT_MV, v.normal);
- Fix the z component of the normal to a small constant: normal.z = -0.4;
- Re-normalize the normal (the previous step changed its direction, destroying its unit length).
- Scale the normal by _Outline and offset the vertex position along it by that length.
- Transform the vertex into projection space.
All the code looks like this:
v2f vert(a2v v)
{
    v2f o;
    float4 pos = mul(UNITY_MATRIX_MV, v.vertex);
    float3 normal = mul((float3x3)UNITY_MATRIX_IT_MV, v.normal);
    normal.z = -0.4;
    pos = pos + float4(normalize(normal), 0) * _Outline;
    o.pos = mul(UNITY_MATRIX_P, pos);
    return o;
}
Note that Unity's matrices are 4x4, but our normal is a float3, so we have to cast the matrix to 3x3 with (float3x3)UNITY_MATRIX_IT_MV; otherwise we will get a pile of errors in Unity's console.
With ZWrite On, the effect now looks like this:
This effect is enough for us.
Toon shading
All that is left is to port the toon shader we previously wrote as a surface shader to a vertex & fragment shader.
First we define a _Ramp property as in part four of the tutorial, and declare the corresponding sampler2D _Ramp.
We use the ramp texture (a gradient texture), and then add a _ColorMerge property (a float) that we use to reduce the number of distinct colors in the model.
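The color reduction that _ColorMerge performs is just per-channel quantization; a minimal Python sketch (hypothetical color_merge helper, mirroring the floor(c.rgb * _ColorMerge) / _ColorMerge line in the fragment shader):

```python
import math

def color_merge(c, merge):
    # quantize each channel: floor(c * merge) / merge snaps every channel
    # down to one of `merge` evenly spaced levels
    return tuple(math.floor(x * merge) / merge for x in c)

print(color_merge((0.30, 0.55, 0.99), 4))  # (0.25, 0.5, 0.75)
```

Larger _ColorMerge values keep more color bands; smaller values give flatter, more cartoon-like shading.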
Let's change the fragment function from part five of the tutorial, like this:
float4 frag(v2f i) : COLOR
{
    // get the pixel color from the main texture using the UV coordinates
    float4 c = tex2D(_MainTex, i.uv);
    // reduce the number of distinct colors
    c.rgb = floor(c.rgb * _ColorMerge) / _ColorMerge;
    // get this pixel's normal from the bump texture
    float3 n = UnpackNormal(tex2D(_Bump, i.uv2));
    // start from the ambient light color
    float3 lightColor = UNITY_LIGHTMODEL_AMBIENT.xyz;
    // squared distance to the light source
    float lengthSq = dot(i.lightDirection, i.lightDirection);
    // light attenuation based on that distance
    float atten = 1.0 / (1.0 + lengthSq);
    // angle of incidence of the light
    float diff = saturate(dot(n, normalize(i.lightDirection)));
    // remap the diffuse term through the gradient (ramp) texture
    diff = tex2D(_Ramp, float2(diff, 0.5)).r;
    // combine attenuation and incidence angle into the final light intensity
    lightColor += _LightColor0.rgb * (diff * atten);
    // multiply the light by the surface color to get the final color
    c.rgb = lightColor * c.rgb * 2;
    return c;
}
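The lighting math in this fragment function can be replayed in plain Python (a hypothetical toon_light helper; a plain list lookup stands in for the tex2D ramp-texture sample):

```python
import math

def toon_light(normal, light_vec, ramp, light_color, ambient):
    # light_vec is the unnormalized vector toward the light, so its squared
    # length gives the distance attenuation 1 / (1 + d^2)
    length_sq = sum(c * c for c in light_vec)
    atten = 1.0 / (1.0 + length_sq)
    d = math.sqrt(length_sq)
    l = [c / d for c in light_vec]
    # saturate(dot(n, normalize(lightDirection)))
    diff = max(0.0, min(1.0, sum(n * c for n, c in zip(normal, l))))
    # list lookup stands in for tex2D(_Ramp, float2(diff, 0.5))
    diff = ramp[min(int(diff * len(ramp)), len(ramp) - 1)]
    return [a + lc * diff * atten for a, lc in zip(ambient, light_color)]

ramp = [0.2, 0.6, 1.0]  # a 3-texel "ramp texture"
out = toon_light((0, 0, 1), (0, 0, 1), ramp, (1, 1, 1), (0.25, 0.25, 0.25))
print(out)  # [0.75, 0.75, 0.75]
```

A surface facing the light at unit distance gets attenuation 0.5 and the top ramp band, so ambient 0.25 plus 0.5 of the light color gives 0.75 per channel; the discrete ramp is what produces the banded toon look.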
All we do is sample the _MainTex texture, reduce the number of colors, and finally use the gradient texture to look up the light intensity.
This gives us our final effect:
The complete source code is here.
The lighting in the other ForwardAdd passes is left for you to write yourselves!
Unity3D Shader Beginner's Tutorial (6/6): A Better Toon Shader