In the previous example, we took the information passed in by the mesh component and mathematically mapped it into a suitable color range with which to color the object.
This article looks at erasing fragments (discarding fragments) and at front face culling and back face culling, and how they can be used to achieve a see-through effect.
When the mesh component's information is passed in, our code can decide which parts get rendered and which do not. The process is like cutting away the unwanted parts so that we can no longer see them. Their mesh information is still there, but the GPU no longer has to process those parts, so the performance cost ends up lower than it was before culling.
It is as if the mesh were a transparent film: if we never paint the film we cannot see it, and the fragment shader is like a brush that selectively lays down color. The final effect is that part of the film is visible, while in the missing places the film still exists, we simply never gave it any color. We neither see those parts nor need to spend precious ink on them (the GPU's parallel processing capacity).
So let's take the latitude-shaded green fake-color sphere from the earlier example and erase the part whose latitude value is greater than 0.5; the code then changes to:
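Roughly, the single-pass version looks like this (a minimal sketch consistent with the two-pass code shown later in this article; the shader path name is only a placeholder):

Shader "Custom/DiscardExample" {
    SubShader {
        Pass {
            Cull Off // turn off face culling for now; more on this line below
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            struct vertexOutput {
                float4 pos : SV_POSITION;
                // the mesh's texture coordinates, forwarded by the vertex shader
                float4 posInObjectCoords : TEXCOORD0;
            };

            vertexOutput vert(appdata_full input) {
                vertexOutput output;
                output.pos = mul(UNITY_MATRIX_MVP, input.vertex);
                output.posInObjectCoords = input.texcoord;
                return output;
            }

            float4 frag(vertexOutput input) : COLOR {
                // erase every fragment whose y value is greater than 0.5
                if (input.posInObjectCoords.y > 0.5) {
                    discard;
                }
                // the rest keeps the green fake coloring from the previous example
                return float4(0.0, input.posInObjectCoords.y, 0.0, 1.0);
            }
            ENDCG
        }
    }
}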
Assign this shader to a material, put the material on a sphere, and you'll see that the green fake-color ball from last time now has only its southern hemisphere left:
From the front it still looks solid.
Tilt it a little and look from above, and you can see that the inside of the sphere is hollow; this is why I used a film and a brush to describe the rendering process.
Let's change the sphere to a cube and see what it looks like:
You can see that this is a strange cube: each of its six faces is only half drawn, and it is the lower half each time.
Why do the cube and the sphere produce such different results?
Because the cube's coordinates behave like a Cartesian grid, while the sphere's are more like polar coordinates... or did you hand all of that back to your teacher already?
Likewise, we can get the northern hemisphere of the sphere by changing the > 0.5 to < 0.5.
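In other words, only the comparison inside the fragment shader flips:

    // keep the northern hemisphere instead: erase everything below the 0.5 line
    if (input.posInObjectCoords.y < 0.5) {
        discard;
    }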
This is the simplest kind of cutaway (erasing parts of a surface).
A better cutaway converts the fragment's position from the object coordinate system into the world coordinate system, then uses the relevant matrix to work out which fragments lie inside another sphere (the default sphere has a radius of 0.5) and discards the fragments inside it. If the two balls overlap, the overlapping parts are simply never drawn; since we cannot see them anyway, skipping them saves performance, even while the two balls orbit around each other.
This works because, even as a sphere rotates, once the object's coordinates are transformed into world coordinates by Unity's built-in matrix, the world coordinates of the overlapping region stay fixed. So you will never see the clipped hole drift out of place as one ball slowly rotates after the overlapping surface has been cut away.
(The earlier method, by contrast, clips in the object coordinate system.)
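Here is a minimal sketch of that idea, assuming the other sphere's world-space center is supplied through a property I'll call _SphereCenter (an invented name, set from a script or the material), and that the vertex shader now forwards the object-space position instead of the texture coordinate:

    uniform float4 _SphereCenter; // hypothetical property: world-space center of the other sphere

    struct vertexOutput {
        float4 pos : SV_POSITION;
        float4 posInObjectCoords : TEXCOORD0; // this time: the object-space vertex position
    };

    vertexOutput vert(appdata_full input) {
        vertexOutput output;
        output.pos = mul(UNITY_MATRIX_MVP, input.vertex);
        output.posInObjectCoords = input.vertex; // forward the object-space position
        return output;
    }

    float4 frag(vertexOutput input) : COLOR {
        // object space -> world space with Unity's built-in model matrix
        // (_Object2World is called unity_ObjectToWorld in newer Unity versions)
        float4 posWorld = mul(_Object2World, input.posInObjectCoords);
        // erase every fragment that lies inside the other sphere (default radius 0.5)
        if (distance(posWorld.xyz, _SphereCenter.xyz) < 0.5) {
            discard;
        }
        // color the surviving fragments (flat green here just for illustration)
        return float4(0.0, 1.0, 0.0, 1.0);
    }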
Front and back face culling
In the code above we saw the line Cull Off. It sits before the CGPROGRAM tag, so it is not CG code at all; it belongs to Unity's ShaderLab, which is why it does not need a semicolon at the end.
Cull Off turns triangle culling off. (Why do triangles suddenly come into it? Fill in the picture: the 3D shapes on our computer are stitched together from triangles, which is also why 3D graphics produce jagged edges. That is the triangles' contribution.)
Cull Front culls the front (outer) faces.
Cull Back culls the back (inner) faces, and it is the default mode for every shader, that is, for any shader you did not write yourself. This is very likely why, when you spin our hemisphere around, you see only its front surface rather than the inner surface of the bowl; if you don't believe it, drag a model in and see for yourself.
As for why back culling is the default: in most cases we render the entire surface of a closed 3D body, and since the whole surface is rendered you can never see the parts facing away from you. Culling the back faces by default therefore saves a lot of raw performance!
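Put side by side, the three ShaderLab statements look like this (only one of them would actually appear in a given Pass, just before the CGPROGRAM tag):

    Cull Off    // no culling: draw both the front and the back faces
    Cull Front  // cull the front (outer) faces, leaving only the inside visible
    Cull Back   // cull the back (inner) faces; Unity's default when nothing is written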
But now that we have erased part of the surface, we can see the inner back surface through the erased part, so we should change the culling mode accordingly. It's like a house with a roof: we cannot see the floor from above, so the floor would normally fall into the "to be culled" category. But suppose we erase the roof (blow the roof off) and still cannot see the floor: that would be a little scary, so it is time to switch the culling mode.
To make these two modes more intuitive, let's change the code above into a dual-pass (Pass) version that culls the outside in one pass and the inside in the other, with a different final color in each pass (red and green).
To be clear: a Unity shader will run only one SubShader, but it will run every Pass inside it.
The changed code:
Pass {
    Cull Front // cull the front (outer) faces, so this pass can be understood as coloring the inner surface of the ball
    CGPROGRAM
    #pragma vertex vert
    #pragma fragment frag
    #include "UnityCG.cginc"

    struct vertexOutput {
        float4 pos : SV_POSITION;
        // the mesh's texture coordinates, output by the vertex shader; they live in the object's coordinate system
        float4 posInObjectCoords : TEXCOORD0;
    };

    vertexOutput vert(appdata_full input) {
        vertexOutput output;
        output.pos = mul(UNITY_MATRIX_MVP, input.vertex);
        // pass the texcoord straight through to the fragment shader
        output.posInObjectCoords = input.texcoord;
        return output;
    }

    float4 frag(vertexOutput input) : COLOR {
        // erase the fragment when the y value of the coordinate is greater than 0.5
        if (input.posInObjectCoords.y > 0.5) {
            discard;
        }
        // the rest is still shaded into the latitude green ball, scaled by the y value
        return float4(0.0, input.posInObjectCoords.y, 0.0, 1.0);
    }
    ENDCG
}
Pass {
    Cull Back // cull the back (inner) faces, so this pass can be understood as coloring the outer surface of the ball
    CGPROGRAM
    #pragma vertex vert
    #pragma fragment frag
    #include "UnityCG.cginc"

    struct vertexOutput {
        float4 pos : SV_POSITION;
        // the mesh's texture coordinates, output by the vertex shader; they live in the object's coordinate system
        float4 posInObjectCoords : TEXCOORD0;
    };

    vertexOutput vert(appdata_full input) {
        vertexOutput output;
        output.pos = mul(UNITY_MATRIX_MVP, input.vertex);
        // pass the texcoord straight through to the fragment shader
        output.posInObjectCoords = input.texcoord;
        return output;
    }

    float4 frag(vertexOutput input) : COLOR {
        // erase the fragment when the y value of the coordinate is greater than 0.5
        if (input.posInObjectCoords.y > 0.5) {
            discard;
        }
        // the rest is still shaded into the latitude red ball, scaled by the y value
        return float4(input.posInObjectCoords.y, 0.0, 0.0, 1.0);
    }
    ENDCG
}
We have finished a shader with two passes; now let's see what the sphere looks like:
Looking straight down from the top, we cannot tell whether this sphere is concave or convex, because our view is exactly perpendicular to it; it looks just like the green latitude ball from our last example.
Next, let's look at it from below:
We still cannot tell whether the red-and-black part is concave or convex; after all it is a hemisphere, and looking at a hemisphere straight on reveals nothing.
Finally, let's look at it from the front:
Now we can see that the green-and-black part is the concave inner surface, while the red-and-black part is the convex outer surface.
At this point, we can control at will which parts of our surfaces are visible and which are not.
And there are even more magical corners of CG waiting to be discovered.
If this blog has helped you, or if you have any questions, feel free to join the Chongqing u3d QQ group (68994667); I will answer you there, and you are also welcome to use the group to exchange techniques with us.