Three ways to write Unity3d shaders


Even if you don't write Unity3d shaders yourself, you have probably heard that there are three ways to write them. This article covers what distinguishes the three and, roughly, how each is used. First, the list:

fixed function shader
vertex and fragment shader
surface shader

Why does Unity3d provide three ways to write a shader? Because the three differ in difficulty and are aimed at different users. Frankly, I think this is Unity3d overthinking it: a shader should not only achieve an effect but achieve it efficiently, if it is to be genuinely usable in a project. A simple implementation of an effect is only demo-level work, suitable for promoting Unity3d's own product but of little value to actual users. With that in mind, I will only introduce how each of the three is used here, and explain each one in detail in later articles.

A description of the three:

Fixed function shader: the simplest shader type, using only Unity3d's built-in fixed syntax and commands. It works on any hardware and is the least difficult.

Vertex and fragment shader: the most flexible and powerful shader type. It uses the CG/HLSL language and consists of a vertex program and a fragment program. Every effect has to be written by hand, so it is the most difficult to use.

Surface shader: also written in CG/HLSL, but with the lighting model abstracted out. You can use one of Unity3d's built-in lighting models or write your own. This shader type is still compiled down to a vertex program and a fragment program, but default implementations are provided, so you only write the parts you care about. Because of this flexibility it can express fairly rich effects while being less difficult than a raw vertex and fragment shader.

From the descriptions above: if you have solid CG experience, writing vertex and fragment shaders directly is a good choice. The fixed function shader is simple but can achieve only very limited effects. The surface shader is Unity3d's compromise: richer effects with comparatively little difficulty. It has one problem, though: it does not let you write your own multiple passes inside a SubShader, so effects that require multi-pass rendering are harder to achieve with it.

Now to the main content, the syntax and an example for each:

1. Fixed Function Shaders

1. Standard Example:
1Shader"Vertexlit" {2 Properties {3_color ("Main Color", Color) = (1,1,1,0.5)4_speccolor ("Spec Color", Color) = (1,1,1,1)5_emission ("emmisive Color", Color) = (0,0,0,0)6_shininess ("shininess", Range (0.01,1)) =0.77_maintex ("Base (RGB)", 2D) =" White" { }8     }9 Ten Subshader { One Pass { A Material { - diffuse [_color] - Ambient [_color] the shininess [_shininess] - specular [_speccolor] - emission [_emission] -             } + Lighting on - Separatespecular on + SetTexture [_maintex] { A Constantcolor [_color] atCombine Texture * Primary DOUBLE, Texture *constant -             } -         } -     } -}
2. Specific Instructions:

For the specific variables and the Material block, refer to "Unity3d Shader Introduction (ii) - Unity3d Shader Basic Structure Description". The main difference here is the SetTexture block, which controls the texturing. Its syntax is:

SetTexture [_TextureName] {
    constantColor color
    Combine colorPart, alphaPart
}

constantColor defines a constant color for use inside the Combine command.

Combine commands:

Combine src1 * src2: multiplies src1 and src2; the result is darker than either.
Combine src1 + src2: adds src1 and src2; the result is brighter than either.
Combine src1 - src2: subtracts src2 from src1.
Combine src1 +- src2: adds src1 and src2, then subtracts 0.5 (a signed add).
Combine src1 lerp (src2) src3: interpolates between src3 and src1 using the alpha of src2. Note the interpolation is reversed: when the alpha is 1 it shows src1, and when the alpha is 0 it shows src3.
Combine src1 * src2 + src3: multiplies src1 by the alpha of src2, then adds src3.
Combine src1 * src2 +- src3: multiplies src1 by the alpha of src2, then does a signed add with src3.
Combine src1 * src2 - src3: multiplies src1 by the alpha of src2, then subtracts src3.

Matrix command:

Matrix [MatrixPropertyName]: transforms the texture coordinates used in this command with the given matrix.

The Combine part is optional; it sets how the color and alpha parts are blended. If the alpha part is not written, the same settings as the color part are used by default. The common form is Combine texture * primary, which multiplies the texture by the vertex color to give the final color. It can also be written Combine texture * primary DOUBLE; the added DOUBLE doubles the brightness. primary is the original vertex color, that is, the color before the texture is applied. If you overlay a second (or later) texture, you no longer multiply by primary but by previous, i.e. Combine texture * previous. previous is the result of the preceding stage, that is, your earlier textures already multiplied by the original color. A sketch of a two-texture overlay follows below.

As the fixed function shader shows, this shader type is very easy to use and is supported by essentially every platform and device, but there is not much it can do. You can perform basic texture and color overlays, or use TexGen mapping for some texture effects, but you cannot control the vertices or do any more complex surface shading.
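To make previous concrete, here is a minimal sketch of a two-texture overlay. It is an illustration added for this article, not from the original; the shader name and the _DetailTex property are hypothetical:

Shader "Examples/TwoTextureOverlay" {
    Properties {
        _MainTex ("Base (RGB)", 2D) = "white" { }
        _DetailTex ("Detail (RGB)", 2D) = "gray" { }
    }
    SubShader {
        Pass {
            // First stage: just sample the base texture
            SetTexture [_MainTex] {
                Combine texture
            }
            // Second stage: multiply the detail texture by the previous
            // stage's result, with DOUBLE to restore some brightness
            SetTexture [_DetailTex] {
                Combine texture * previous DOUBLE
            }
        }
    }
}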
1Shader"custom/exam1" {2 Properties {3_maintex ("Texture", 2D) =" White" { }4 }5 Subshader6 {7 Pass8 {9 CgprogramTen #pragmaVertex vert One #pragmaFragment Frag A#include"Unitycg.cginc" - sampler2d _maintex; - float4 _maintex_st; the structv2f { - float4 pos:sv_position; - float2 uv:texcoord0; - } ; + v2f Vert (appdata_base v) - { + v2f o; AO.pos =Mul (Unity_matrix_mvp,v.vertex); atO.UV =Transform_tex (V.texcoord,_maintex); -     returno; - } - float4 Frag (v2f i): COLOR - { -FLOAT4 Texcol =tex2d (_MAINTEX,I.UV); inFLOAT4 OUTP =Texcol; -     returnOUTP; to } + ENDCG - } the } *}
2. Detailed Explanation:

1. CG snippets. A CG snippet begins with CGPROGRAM and ends with ENDCG; the content in between is written in CG/HLSL. At the start of the snippet you can add compile directives:

#pragma vertex name: this snippet contains a vertex program in the function named name.
#pragma fragment name: this snippet contains a fragment program in the function named name.
#pragma fragmentoption option: adds an option to the compiled OpenGL fragment program. See the ARB fragment program specification for the list of allowed options. This directive has no effect on vertex programs or on programs not compiled for OpenGL.
#pragma target name: the shader compilation target.

#pragma target default is the default compilation target: in a Direct3D 9 environment, vertex shader 1.1 and pixel shader 2.0; ARB vertex programs have a 128-instruction limit, ARB fragment programs have a 96-instruction limit (32 texture + 64 arithmetic), 16 temporary registers, and 4 texture indirections.

#pragma target 3.0 compiles to Shader Model 3.0: in a Direct3D 9 environment, vertex shader 3.0 and pixel shader 3.0; ARB vertex programs have no instruction limit, ARB fragment programs have a 1024-instruction limit (512 texture + 512 arithmetic), 32 temporary registers, and 4 texture indirections. The limits can be raised with the #pragma profileoption directive; for example, #pragma profileoption MaxTexIndirections=256 raises the texture-indirection limit to 256. Note that some Shader Model 3.0 features, such as derivative instructions, are not supported by ARB vertex and fragment programs; you can use the #pragma glsl directive to compile to GLSL instead, which has fewer restrictions.

#pragma only_renderers space separated names: compile the shader only for the given renderers. By default shaders are compiled for all renderers. The renderer names are: d3d9 (Direct3D 9), opengl (OpenGL), gles (OpenGL ES 2.0), xbox360 (Xbox 360), ps3 (PlayStation 3), flash (Flash).
#pragma exclude_renderers space separated names: compile the shader for everything except the given renderers; see the previous point for the names.
#pragma glsl: when compiling for the desktop OpenGL platform, translate the CG/HLSL into GLSL (instead of the ARB vertex/fragment programs, which are the default).

2. Declaring properties. Given the following shader input properties:

_MyColor ("Some Color", Color) = (1,1,1,1)
_MyVector ("Some Vector", Vector) = (0,0,0,0)
_MyFloat ("My Float", Float) = 0.5
_MyTexture ("Texture", 2D) = "white" {}
_MyCubemap ("Cubemap", CUBE) = "" {}

they must be declared again inside the CG program before they can be used there:

float4 _MyColor;
float4 _MyVector;
float _MyFloat;
sampler2D _MyTexture;
samplerCUBE _MyCubemap;

The type correspondence: Color and Vector properties map to float4; Range and Float properties map to float; an ordinary 2D texture property maps to sampler2D; CUBE and RECT textures map to samplerCUBE and samplerRECT. A sketch combining these directives and declarations follows below.
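The following minimal sketch, added for this article, shows points 1 and 2 together: compile directives at the top of a CG snippet, and properties redeclared for use in the CG code. The shader name and property names are hypothetical:

Shader "Examples/PragmaAndProperties" {
    Properties {
        _MyColor ("Some Color", Color) = (1,1,1,1)
        _MyFloat ("My Float", Float) = 0.5
        _MyTexture ("Texture", 2D) = "white" {}
    }
    SubShader {
        Pass {
            CGPROGRAM
            // Compile directives: entry points and target shader model
            #pragma vertex vert
            #pragma fragment frag
            #pragma target 3.0
            #include "UnityCG.cginc"

            // Redeclare each property so the CG code can read it
            float4 _MyColor;      // Color -> float4
            float _MyFloat;       // Float -> float
            sampler2D _MyTexture; // 2D    -> sampler2D

            struct v2f {
                float4 pos : SV_POSITION;
                float2 uv : TEXCOORD0;
            };

            v2f vert (appdata_base v) {
                v2f o;
                o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                o.uv = v.texcoord.xy;
                return o;
            }

            float4 frag (v2f i) : COLOR {
                // Tint the texture with the color property, scaled by the float
                return tex2D(_MyTexture, i.uv) * _MyColor * _MyFloat;
            }
            ENDCG
        }
    }
}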
3. Vertex programs. A vertex program needs the vertex data; you can #include "UnityCG.cginc", which provides two structures containing the common vertex information:

appdata_base: vertex position, normal, and one texture coordinate.
appdata_tan: vertex position, tangent, normal, and one texture coordinate.

They contain the following fields:

float4 vertex is the vertex position
float3 normal is the vertex normal
float4 texcoord is the first UV coordinate
float4 texcoord1 is the second UV coordinate
float4 tangent is the tangent vector (used for normal mapping)
float4 color is the per-vertex color

You can also skip the #include "UnityCG.cginc" and define the structure directly:

struct appdata {
    float4 vertex : POSITION;
    float4 texcoord : TEXCOORD0;
};

The common semantics are: POSITION (vertex position), COLOR (vertex color), NORMAL (normal), TANGENT (tangent), TEXCOORD0 (first UV), TEXCOORD1 (second UV).

After declaring the vertex program's name with #pragma vertex vert, you can write the vertex program vert:

struct v2f {
    float4 pos : SV_POSITION;
    fixed4 color : COLOR;
};

v2f vert (appdata_base v) {
    v2f o;
    o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
    o.color.xyz = v.normal * 0.5 + 0.5;
    return o;
}

Here v2f is a custom structure that defines what the vertex program returns and passes on to the fragment program; the position is given the SV_POSITION semantic, and you can add other custom members as needed.

In the vertex program the vertex is multiplied by a matrix to obtain its final position. The relevant built-in matrices are:

UNITY_MATRIX_MVP: the current model * view * projection matrix (the model matrix transforms local space to world space)
UNITY_MATRIX_MV: the current model * view matrix
UNITY_MATRIX_V: the current view matrix
UNITY_MATRIX_P: the current projection matrix
UNITY_MATRIX_VP: the current view * projection matrix
UNITY_MATRIX_T_MV: the transpose of the model * view matrix
UNITY_MATRIX_IT_MV: the inverse transpose of the model * view matrix
UNITY_MATRIX_TEXTURE0 to UNITY_MATRIX_TEXTURE3: the texture transformation matrices

When each frame is rendered, every object to be drawn automatically feeds its vertex data into the shader's designated vertex program; after vert finishes, the values assigned to the v2f structure are passed on, and the GPU automatically receives and processes this vertex information. This is characteristic of GPU programming.

4. Fragment programs. After #pragma fragment frag, write:

half4 frag (v2f i) : COLOR {
    half4 texcol = tex2D(_MainTex, i.uv);
    return texcol * _Color;
}

The fragment program does not need a custom structure for its return value, because it returns a color directly. Inside it, however, you can process the position, UV, and normal information interpolated from the vertex program, and so vary the color of each pixel in all sorts of ways. A complete sketch assembling these pieces follows below.
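Assembled into one compilable file, here is a minimal sketch, added for this article, that passes the remapped normal through the v2f structure and visualizes it as a color (the shader name is hypothetical):

Shader "Examples/NormalAsColor" {
    SubShader {
        Pass {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            struct v2f {
                float4 pos : SV_POSITION;
                fixed4 color : COLOR;
            };

            v2f vert (appdata_base v) {
                v2f o;
                o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                // Remap the normal from [-1,1] to [0,1] so it is displayable as a color
                o.color = fixed4(v.normal * 0.5 + 0.5, 1);
                return o;
            }

            fixed4 frag (v2f i) : COLOR {
                // Output the interpolated per-vertex color directly
                return i.color;
            }
            ENDCG
        }
    }
}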
Shader"Example/diffuse Texture"{Properties {_maintex ("Texture", 2D) =" White"{}} subshader {Tags {"Rendertype"="Opaque"} cgprogram#pragmaSurface Surf LambertstructInput {float2 Uv_maintex;      };      Sampler2d _maintex; voidSurf (Input in, InOut surfaceoutput o) {O.albedo=tex2d (_maintex, In.uv_maintex). RGB; } ENDCG} Fallback"Diffuse"  }
2. Detailed Description

The program declaration is written: #pragma surface surfaceFunction lightModel [optionalparams]

surfaceFunction is the name of the surface program, which should be written as void surf (Input IN, inout SurfaceOutput o), where Input is a structure you define yourself. The Input structure should contain all the required texture coordinates and any additional variables the surface function needs.

lightModel is the lighting model: Lambert, BlinnPhong, or a custom one (a custom model is sketched after this list).

Optional parameters:

alpha: alpha blending mode. Use it to write semitransparent shaders.
alphatest:VariableName: alpha test mode. Use it to write cutout-effect shaders; the cutoff value is a float variable named VariableName.
finalcolor:ColorFunction: a custom final color function. See the Surface Shader Examples.
exclude_path:prepass or exclude_path:forward: do not generate passes for the given rendering path.
addshadow: add a shadow caster & collector pass. Commonly used together with a custom vertex modifier, so that shadows are also cast correctly for procedural vertex animation.
dualforward: use dual lightmaps in the forward rendering path.
fullforwardshadows: support all shadow types in the forward rendering path.
decal:add: an additive decal shader (for example, terrain AddPass).
decal:blend: a blended semitransparent decal shader.
softvegetation: render this surface shader only when Soft Vegetation is on.
noambient: do not apply any ambient lighting or spherical harmonics lights.
novertexlights: do not apply spherical harmonics lights or per-vertex lights in forward rendering.
nolightmap: disable lightmaps in this shader (good for keeping shaders small).
nodirlightmap: disable directional lightmaps in this shader (good for keeping shaders small).
noforwardadd: disable the forward rendering additive pass. The shader then supports one full directional light, with all other lights computed per vertex/SH (also good for keeping shaders small).
approxview: compute a normalized view direction per vertex instead of per pixel. This is faster, but the view direction is not entirely accurate when the camera gets close to the surface.
halfasview: pass the half-direction vector into the lighting function instead of the view direction. The half-direction is computed and normalized per vertex. This is faster, but not entirely accurate.

Note: because a surface shader uses a lighting model, no Pass block is written; the CG snippet goes directly inside the SubShader.
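As a concrete illustration of a custom lightModel, here is a minimal sketch, added for this article, of a half-Lambert style model; a model named SimpleLambert is implemented by a lighting function named LightingSimpleLambert:

Shader "Examples/CustomLambert" {
    Properties {
        _MainTex ("Texture", 2D) = "white" {}
    }
    SubShader {
        Tags { "RenderType" = "Opaque" }
        CGPROGRAM
        // "SimpleLambert" refers to the LightingSimpleLambert function below
        #pragma surface surf SimpleLambert

        sampler2D _MainTex;

        struct Input {
            float2 uv_MainTex;
        };

        // Custom lighting model: a wrapped (half-Lambert style) diffuse falloff
        half4 LightingSimpleLambert (SurfaceOutput s, half3 lightDir, half atten) {
            half diff = dot(s.Normal, lightDir) * 0.5 + 0.5; // never fully black
            half4 c;
            c.rgb = s.Albedo * _LightColor0.rgb * (diff * atten * 2);
            c.a = s.Alpha;
            return c;
        }

        void surf (Input IN, inout SurfaceOutput o) {
            o.Albedo = tex2D(_MainTex, IN.uv_MainTex).rgb;
        }
        ENDCG
    }
    Fallback "Diffuse"
}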
The Input structure can contain:

uv_TextureName (for example, float2 uv_MainTex): the UV of the named texture.
float3 viewDir: the view direction value. Include it to compute parallax effects, rim lighting, and so on.
float4 with the COLOR semantic: the interpolated per-vertex color.
float4 screenPos: the position in screen space. Include it for screen-space reflection effects; for example, the WetStreet shader used in Dark Unity.
float3 worldPos: the position in world space.
float3 worldRefl: the reflection vector in world space. Included if the surface shader does not write to the normal (o.Normal) parameter. See the Reflect-Diffuse shader example.
float3 worldNormal: the normal vector in world space. Included if the surface shader does not write to the normal (o.Normal) parameter.
float3 worldRefl; INTERNAL_DATA: the reflection vector in world space when the surface shader does write to the normal (o.Normal) parameter. To obtain the reflection vector from a per-pixel normal map, use WorldReflectionVector (IN, o.Normal). See the Reflect-Bumped shader example.
float3 worldNormal; INTERNAL_DATA: the normal vector in world space when the surface shader does write to the normal (o.Normal) parameter. To obtain the normal vector from a per-pixel normal map, use WorldNormalVector (IN, o.Normal).

The output structure is:

struct SurfaceOutput {
    half3 Albedo;
    half3 Normal;
    half3 Emission;
    half Specular;
    half Gloss;
    half Alpha;
};

Finally, two points that I think matter:

1. Shaders have fixed limits on instructions, registers, and textures. Some complex effects need more registers and instructions than are available; implementing skeletal animation in a shader, for example, can easily exceed the limits. Whatever platform you write shaders for (Unity3d, Stage3D, and so on), you need to know what the limits are, or you will be left crying at compile time.

2. Shader limits are also tied to what the target platform's hardware supports. Some publishing platforms (such as mobile phones) do not support the higher shader models you specify; depending on the actual hardware they may fall back to a lower level, or simply fail to display. So consider your target platform before writing an effect, and stop asking questions like "the effect looks normal in the Unity3d editor, so why can't I see it after publishing to my phone".

Original address: http://blog.csdn.net/a6627651/article/details/50545643
