"Unity Shaders" Mobile Shader adjustment--What is efficient Shader


http://blog.csdn.net/candycat1992/article/details/38358773

This series is based on Unity Shaders and Effects Cookbook (thanks to the author of the book), with a bit of my own understanding and extension added.

All of the book's illustrations, as well as the code and resources it requires, can be found online (you can of course also download them from the book's website).

========================================== Split Line ==========================================

Written up front

In the shaders we studied before, we never considered whether they would run well on every platform. Unity is a powerful cross-platform game engine, but that also means we have to take more platform factors into account when writing code. If a shader receives no corresponding optimization, it may well fail to run acceptably on performance-limited platforms such as mobile. We need to understand some key factors in order to optimize our shaders, improving game performance while keeping the visual result as close as possible to the original.

This matters especially if your target platforms include Android: be careful, or the huge wave of cheap knock-off devices out there will wash you up on the beach... So if you have never thought about these things for your shaders, read on and take care...

In this chapter we will learn the following: what an efficient shader is, how to profile a shader's performance, and how to optimize our shaders for mobile platforms.

So, what is an efficient shader? This is a somewhat complicated question, and many factors are involved: for example, how many variables you use and how much memory they occupy, how many textures the shader samples, and so on. It is also possible that your shader works fine, but could actually achieve the same effect with half as many variables. We will explore some of these techniques in this section and show how they can be combined to make our shaders faster and more efficient, while achieving the same high-quality visuals on various platforms.

Preparatory work

We will start from one of the most common shaders: a bumped diffuse shader, i.e. a diffuse shader with a normal map applied.

    1. Create a new scene with a sphere in it, and add a directional light.
    2. Create a new shader and a new material; you can name both OptimizedShader001.
    3. Assign the shader to the material, and assign the material to the sphere.
    4. Finally, replace the shader with the following code.

  1. Shader "custom/optimizedshader001" {
  2. Properties {
  3. _maintex ("Base (RGB)", 2D) = "White" {}
  4. _normalmap ("Normal Map", 2D) = "Bump" {}
  5. }
  6. Subshader {
  7. Tags {"Rendertype" = "Opaque"}
  8. LOD 200
  9. Cgprogram
  10. #pragma surface surf Simplelambert
  11. Sampler2d _maintex;
  12. Sampler2d _normalmap;
  13. struct Input {
  14. FLOAT2 Uv_maintex;
  15. FLOAT2 Uv_normalmap;
  16. };
  17. Inline Float4 Lightingsimplelambert (surfaceoutput s, float3 Lightdir, float atten)
  18. {
  19. float diff = max (0, Dot (s.normal, lightdir));
  20. FLOAT4 C;
  21. C.rgb = S.albedo * _lightcolor0.rgb * (diff * Atten * 2);
  22. C.A = S.alpha;
  23. return C;
  24. }
  25. void Surf (Input in, InOut surfaceoutput o)
  26. {
  27. FLOAT4 C = tex2d (_maintex, In.uv_maintex);
  28. O.albedo = C.rgb;
  29. O.alpha = C.A;
  30. O.normal = Unpacknormal (tex2d (_normalmap, In.uv_normalmap));
  31. }
  32. Endcg
  33. }
  34. FallBack "Diffuse"
  35. }


The lighting function performs simple diffuse (Lambert) shading, and the surf function replaces the model's normals with those sampled from the normal map.

The effect you end up with should look roughly like this:

Implementation

Next, let's optimize this shader step by step.

First, we need to refine the variable types so that they consume as little memory as possible:

  1. Modify the Input structure. Previously our UV coordinates were stored in float2 variables; we now change them to half2:

         struct Input {
             half2 uv_MainTex;
             half2 uv_NormalMap;
         };
  2. Next is the lighting function. Similarly, the float-family variables are changed to the corresponding fixed-type variables:

         inline fixed4 LightingSimpleLambert (SurfaceOutput s, fixed3 lightDir, fixed atten)
         {
             fixed diff = max (0, dot (s.Normal, lightDir));

             fixed4 c;
             c.rgb = s.Albedo * _LightColor0.rgb * (diff * atten * 2);
             c.a = s.Alpha;
             return c;
         }
  3. Finally, modify the variable types in the surf function, again using fixed-type variables:

         void surf (Input IN, inout SurfaceOutput o)
         {
             fixed4 c = tex2D (_MainTex, IN.uv_MainTex);
             o.Albedo = c.rgb;
             o.Alpha = c.a;
             o.Normal = UnpackNormal (tex2D (_NormalMap, IN.uv_NormalMap));
         }
Having modified the variable types, we can now take advantage of Unity's built-in lighting directives to control how the shader handles light sources. In this way we can greatly reduce the number of lights the shader processes. Modify the #pragma declaration as follows:

    CGPROGRAM
    #pragma surface surf SimpleLambert noforwardadd


Now we can continue optimizing the shader by sharing UV coordinates. To do this, we use _MainTex's UV coordinates in place of _NormalMap's in the UnpackNormal() call, and remove uv_NormalMap from the Input structure:

    o.Normal = UnpackNormal (tex2D (_NormalMap, IN.uv_MainTex));

    struct Input {
        half2 uv_MainTex;
    };


Finally, we tell Unity that this shader only needs to work with a specific rendering path:

    CGPROGRAM
    #pragma surface surf SimpleLambert exclude_path:prepass noforwardadd
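
Putting all of the edits together, the fully optimized shader should now look roughly like this (this is simply the original shader with the changes from the steps above applied):

    Shader "Custom/OptimizedShader001" {
        Properties {
            _MainTex ("Base (RGB)", 2D) = "white" {}
            _NormalMap ("Normal Map", 2D) = "bump" {}
        }
        SubShader {
            Tags { "RenderType" = "Opaque" }
            LOD 200

            CGPROGRAM
            // Forward path only, and only one per-pixel directional light.
            #pragma surface surf SimpleLambert exclude_path:prepass noforwardadd

            sampler2D _MainTex;
            sampler2D _NormalMap;

            struct Input {
                half2 uv_MainTex;   // shared by both textures
            };

            inline fixed4 LightingSimpleLambert (SurfaceOutput s, fixed3 lightDir, fixed atten)
            {
                fixed diff = max (0, dot (s.Normal, lightDir));

                fixed4 c;
                c.rgb = s.Albedo * _LightColor0.rgb * (diff * atten * 2);
                c.a = s.Alpha;
                return c;
            }

            void surf (Input IN, inout SurfaceOutput o)
            {
                fixed4 c = tex2D (_MainTex, IN.uv_MainTex);
                o.Albedo = c.rgb;
                o.Alpha = c.a;
                o.Normal = UnpackNormal (tex2D (_NormalMap, IN.uv_MainTex));
            }
            ENDCG
        }
        FallBack "Diffuse"
    }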


The final optimized result is shown below (before optimization on the left, after optimization on the right):

As you can see, there is almost no visible difference to the eye, yet we have reduced the time the shader takes to be drawn to the screen. We will use Unity's Profiler in the next section to measure how large this reduction is. The key point here is that we used less data to achieve the same rendering result. Always keep this idea in mind when creating your own shaders!

Explanation

We used four optimization techniques above: optimizing variable types, sharing UV coordinates, reducing the number of lights processed, and making the shader work only with a specific rendering path. Let's take a deeper look at how these techniques work, and finally learn a few other tricks.

Optimizing Variable Types

First, let's look at how much data each variable stores when we declare it. Since we usually have several choices when declaring a variable (float, half, fixed), we need to understand the characteristics of these types:

    • float: a high-precision floating-point value, usually 32 bits, and the slowest of the three. It also comes in the vector forms float2, float3, and float4.

    • half: a medium-precision floating-point value, usually 16 bits, with a range of roughly -60000 to +60000. It is suitable for storing UV coordinates, color values, and so on, and is much faster than float. It also comes in the vector forms half2, half3, and half4.

    • fixed: a low-precision floating-point value, typically 11 bits, with a range of -2.0 to +2.0 and a precision of 1/256. It is the smallest of the three and can be used for lighting calculations, colors, and so on. It also comes in the vector forms fixed2, fixed3, and fixed4.

For choosing among these types, the official documentation gives the following suggestions (a short declaration sketch follows this list):
    • Use the lowest precision you can whenever possible.
    • For colors and unit-length vectors, use fixed.
    • For other values, use half if its range and precision are sufficient; otherwise use float.
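
To make this concrete, here is a minimal sketch of how those precision choices might look inside the surf function of the shader above; the worldPos / distance usage is purely hypothetical and is only there to show a case where float is still appropriate:

    struct Input {
        half2  uv_MainTex;   // UV coordinates: half precision is sufficient
        float3 worldPos;     // world-space positions can be large, so keep float
    };

    void surf (Input IN, inout SurfaceOutput o)
    {
        fixed4 c = tex2D (_MainTex, IN.uv_MainTex);                   // colors fit comfortably in fixed
        fixed3 n = UnpackNormal (tex2D (_NormalMap, IN.uv_MainTex));  // unit-length vector: fixed
        float  d = distance (_WorldSpaceCameraPos, IN.worldPos);      // distances need float range

        o.Albedo = c.rgb * saturate (1.0 - d * 0.01);  // purely illustrative fade with distance
        o.Normal = n;
        o.Alpha  = c.a;
    }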
Reduce the number of lights processed

As you can see from the above, this optimization is achieved by adding the noforwardadd directive to the #pragma statement. It essentially tells Unity that an object using this shader accepts only a single directional light as a per-pixel light; all other lights are processed as per-vertex lights using Unity's built-in spherical harmonic functions. The effect becomes obvious when we place another light in the scene, because our shader uses the normal map in its per-pixel calculations.

That's all well and good, but what if we need more than one directional light and want to control which one is used as the main light for the per-pixel calculation? This is configured in Unity's Inspector panel. If you look closely, each light has a Render Mode drop-down menu with three options: Auto, Important, and Not Important. By choosing Important you tell Unity that this light should be treated as a per-pixel light rather than a per-vertex light. If it is set to Auto, Unity makes the decision for you.

Still a bit confused? To illustrate what this means, let's do an experiment. Place an additional point light in the scene and remove the main texture from the shader. First, turn on the directional light and turn off the point light (left); then turn off the directional light and turn on the point light (right). You can see that the point light does not affect our normal map (it merely illuminates the model, i.e. it is handled per vertex), while only the directional light affects it.

The optimization here comes from the fact that all other lights are treated as per-vertex lights, and only the single main directional light is computed as a per-pixel light when calculating pixel colors.

Share UV coordinates

This optimization is simple: we use only the main texture's UV coordinates instead of a separate set for the normal map, which removes the code that would otherwise fetch the normal map's UV coordinates. It is also a good way to simplify our code.

Work only with a specific rendering path

Finally, the exclude_path:prepass declaration tells Unity that this shader no longer handles custom lighting from the deferred (prepass) rendering path. This means we can only use the shader effectively with forward rendering, which is configured in the main camera's settings.

Help links: Forward rendering, deferred rendering.

Written at the end

There are many other optimization strategies. We have already learned, for example, how to pack multiple grayscale images into a single RGBA texture, and how to use a texture to simulate lighting effects. Because there are so many techniques, asking "how do I optimize a shader?" is a very vague question. However, understanding these techniques lets us choose the appropriate one for each shader and platform, and obtain efficient shaders with a stable frame rate.
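
As a reminder of the channel-packing idea mentioned above, here is a minimal sketch; _PackedTex and the meaning given to each channel are hypothetical, and the snippet assumes a surface shader like the one earlier in this chapter:

    // Hypothetical packed texture: R = specular mask, G = emission mask, B = ambient-occlusion mask.
    sampler2D _PackedTex;

    void surf (Input IN, inout SurfaceOutput o)
    {
        fixed4 masks = tex2D (_PackedTex, IN.uv_MainTex);  // one fetch instead of three separate grayscale textures

        o.Albedo   = tex2D (_MainTex, IN.uv_MainTex).rgb * masks.b;  // darken by the occlusion mask
        o.Specular = masks.r;
        o.Emission = o.Albedo * masks.g;
    }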

"Unity Shaders" Mobile Shader adjustment--What is efficient Shader
