Unity 5 internal rendering optimization 3: Fixed Function Removal
Translated from Aras's blog. There are three articles in total describing how Unity 5 optimized its own renderer; from them you can learn debugging and optimization experience, and understand the optimization methods used in the Unity 5 internal renderer.
Article 1: Unity 5 internal rendering optimization 1: Introduction
Article 2: Unity 5 internal rendering optimization 2: Cleanups
In the previous article, I wrote about cleanups and optimizations. Since then, I got sidetracked into some Unity 5.1 work, and into removing Fixed Function Shaders, among other things.
"Fixed function" refers to the time before GPUs had "programmable shaders"; instead, they could be configured in more or less flexible ways by enabling or disabling certain features. Mostly they could only do fairly simple things, such as computing lighting per vertex, or combining two textures per pixel.
Unity has been around for a long time, so naturally it supports fixed function shaders. Their syntax is very simple, and for simple shaders they are quicker to write than vertex/pixel shaders.
For example, a shader pass that alpha-blends the product of a texture and a color looks like this:
Pass
{
    Blend SrcAlpha OneMinusSrcAlpha
    SetTexture [_MainTex] { constantColor [_Color] combine texture * constant }
}
The equivalent written as a vertex + pixel shader would be:
Pass
{
    Blend SrcAlpha OneMinusSrcAlpha
    CGPROGRAM
    #pragma vertex vert
    #pragma fragment frag
    #include "UnityCG.cginc"
    struct v2f
    {
        float2 uv : TEXCOORD0;
        float4 pos : SV_POSITION;
    };
    float4 _MainTex_ST;
    v2f vert (float4 pos : POSITION, float2 uv : TEXCOORD0)
    {
        v2f o;
        o.pos = mul(UNITY_MATRIX_MVP, pos);
        o.uv = TRANSFORM_TEX(uv, _MainTex);
        return o;
    }
    sampler2D _MainTex;
    fixed4 _Color;
    fixed4 frag (v2f i) : SV_Target
    {
        return tex2D(_MainTex, i.uv) * _Color;
    }
    ENDCG
}
However, we already removed support for actually fixed function GPUs and platforms (OpenGL ES 1.1 on mobile and Direct3D 7 GPUs on Windows) back in Unity 4.3, in late 2013. So there is no technical requirement to write fixed function shaders anymore. Unless: 1. your current project already contains many of these shaders, or 2. you just want to do less typing.
In fact, fixed function shaders also have many disadvantages:
1. They don't work on the consoles (PS4, Xbox One, Vita), because generating shaders at runtime is very hard on those platforms.
2. They don't work with MaterialPropertyBlocks, and as a consequence can't be used with Unity's sprite rendering or material animation.
3. They are only suitable for simple things. Typically, after you write a simple fixed function shader, you discover that you need to add more features, and there is no way to do that within the fixed function pipeline.
How do fixed function shaders work in Unity, then? For most platforms, we do not support a "fixed function rendering pipeline" at all; instead, these shaders are converted to "actual shaders" internally, and those are used for rendering. The only places where true fixed function still exists are legacy desktop OpenGL (GL 1.x-2.x) and Direct3D 9.
For more platforms we implemented something similar to the D3D9 path: on OpenGL ES 2.0 we replaced the assembling of D3D9 shader bytecode with stitching together GLSL snippets. Then even more platforms arrived (D3D11, Flash, Metal), and each got its own implementation of the "fixed function" code. The code is not very complex, the problem is well understood, and we have enough graphics tests to verify it.
At no point in this process did anyone stop to ask: "why do we keep generating these at runtime? Why not do it offline instead, converting the fixed function shaders during import?" (If someone had asked, the answer would have been "that makes sense, but someone needs to find the time to do it"...)
A long time ago, offline conversion of fixed function shaders was not very practical, because of the sheer number of possible variants that had to be supported. The trickiest part was the texture coordinate handling (routing UVs into texture stages, an optional texture transformation matrix, optional texture projection, and optional texture coordinate generation). But hey, we removed a lot of that in Unity 5. Is the problem simpler now? Yes.
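To give a feel for those texture coordinate variants, here is a minimal Cg sketch of my own (an illustration, not the actual importer output; _TexMatrix is a hypothetical property name) showing two of the optional features for a single texture stage:

CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"
struct v2f
{
    float4 uv : TEXCOORD0; // float4 so that projected texturing can work
    float4 pos : SV_POSITION;
};
sampler2D _MainTex;
float4x4 _TexMatrix; // hypothetical optional texture transform matrix
v2f vert (float4 pos : POSITION, float2 uv : TEXCOORD0)
{
    v2f o;
    o.pos = mul(UNITY_MATRIX_MVP, pos);
    // with no texture matrix this would just be o.uv = float4(uv, 0, 1)
    o.uv = mul(_TexMatrix, float4(uv, 0, 1));
    return o;
}
fixed4 frag (v2f i) : SV_Target
{
    // projected texturing needs a perspective divide;
    // without it, a plain tex2D(_MainTex, i.uv.xy) would do
    return tex2Dproj(_MainTex, i.uv);
}
ENDCG

Since each optional feature multiplies the number of variants, per texture stage, generating all of them offline used to be impractical.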
Convert fixed function shaders to normal shaders at import time
(translated from the proposal page on the wiki:)
In Unity, shaders can be written in a "fixed function" style; for example, you can write just "Lighting On" to get per-vertex lighting. This is useful for simple shaders (less typing), and a large number of shaders are written this way.
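For illustration, a complete shader in that style could look like this (a minimal sketch in legacy ShaderLab syntax; the shader and property names are my own, not from the original post):

Shader "Example/FixedFunctionDiffuse"
{
    Properties
    {
        _Color ("Main Color", Color) = (1,1,1,1)
    }
    SubShader
    {
        Pass
        {
            // no vertex/fragment program anywhere: the whole pass is fixed function state
            Material { Diffuse [_Color] Ambient [_Color] }
            Lighting On
        }
    }
}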
Currently (5.0/5.1) these fixed function shaders are handled at runtime:
They are loaded and parsed into internal ShaderLab structures.
Whenever a new "fixed function state" is needed, a new "actual shader" is generated and used for rendering.
There is a separate implementation per platform: D3D9, D3D11, OpenGL, Metal, GLES, and PSM.
This can't be implemented at all on consoles/DX12.
It would be much better to generate the "actual shaders" at shader import time instead, and then remove all the runtime code.
Benefit: removes a lot of code!
Benefit: removes the checks in the render loop that exist only to support fixed function.
Benefit: works on consoles, DX12, and Vulkan.
Benefit: more consistent behavior across platforms (currently there are slight differences; for example, specular highlights differ on mobile, and fog works somewhat differently).
Downside: generating a fixed function shader from a script via "new Material(string)" will stop working.
"new Material(string)" is marked obsolete in 5.1 anyway.
This is a breaking change for web player backwards compatibility, which means 5.2 should get a channel separate from the 5.0/5.1 web player.
So that was the plan: remove all the runtime code for "fixed function shaders" and replace it with converting them to "normal shaders" at import time in the Unity editor. I created an overview of the approach on our wiki and started coding. I thought the end result would be "I write 1000 lines of code and remove 4000", but I was wrong!
Once I had the basics of import-time conversion working (which indeed came to about 1000 lines of code), I began removing the entire fixed function section. That was a happy day :)
About twelve thousand lines of code gone. Amazing!
I didn't remember there being that much fixed function code. You write it for one platform, and it basically works. Then a new platform shows up, new code gets written for it, and it basically works. After N platforms, the total amount of code is huge; but because it was never added all at once, no one noticed the problem.
Takeaway: once in a while, look at an entire subsystem. You may be shocked at how much it has grown over the years, and some of what it does may no longer make sense.
Side note: per-vertex lighting in the fixed function pipeline is a good example of how easily simple features combine. You can have many lights (up to 8), and each can be a directional, point, or spot light. Specular highlights are just a flag that can be switched on or off, and the same goes for fog.
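In legacy ShaderLab syntax, that composition really is one line per feature. A hedged sketch of such a pass (the property names are assumptions, not from the original post):

Pass
{
    Material
    {
        Diffuse [_Color]
        Specular [_SpecColor]
        Shininess [_Shininess]
    }
    Lighting On             // per-vertex lighting, up to 8 lights
    SeparateSpecular On     // specular highlights toggled by one flag
    Fog { Mode Global }     // same for fog
}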
This feels like "simple composition of features", and it is something important that we lost when we moved everything into shaders. Shaders as we know them (vertex/fragment/... stages) can't be composed! Adding an optional feature means either doubling the number of shaders, or branching in the shader, or generating shaders at runtime; each approach has advantages and disadvantages.
For example, how do you write a shader that supports up to eight light sources? There are many ways to do it:
Split the per-vertex lighting into variants: "any spot lights?", "any point lights?", "only directional lights". I guess spot lights are rarely used with fixed function per-vertex lighting; they look particularly bad. So in most cases you don't pay the cost of "computing spot lights".
Pass the number of lights into the shader as an integer, and have the shader loop over them. Complication: in OpenGL ES 2.0/WebGL, loops may only have constant iteration counts. In practice we found that many OpenGL ES 2.0 implementations don't enforce this limit, but WebGL certainly does. I don't have a good answer here; on ES2/WebGL I just always loop over all eight possible light sources (with the unused lights set to black). A real solution would be: write a general loop like this:
uniform int lightCount;
// ...
for (int i = 0; i < lightCount; ++i)
{
    // compute light #i
}
and when compiling for ES 2.0/WebGL, emit the shader like this instead (translator's note: the loop runs over all eight possible lights, as mentioned above):
uniform int lightCount;
// ...
for (int i = 0; i < 8; ++i)
{
    if (i == lightCount)
        break;
    // compute light #i
}
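To make the "compute light #i" placeholder concrete, here is a rough GLSL sketch of per-vertex diffuse accumulation over a light array (the uniform names are illustrative, not Unity's actual ones):

uniform int lightCount;
uniform vec4 lightPosition[8]; // xyz = position or direction; w = 0 for directional lights
uniform vec3 lightColor[8];

vec3 computeVertexLighting(vec3 worldPos, vec3 worldNormal)
{
    vec3 total = vec3(0.0);
    for (int i = 0; i < 8; ++i) // constant loop bound keeps WebGL happy
    {
        if (i == lightCount)
            break;
        // directional lights store a direction (w = 0), point lights a position (w = 1)
        vec3 toLight = lightPosition[i].xyz - worldPos * lightPosition[i].w;
        float ndotl = max(dot(worldNormal, normalize(toLight)), 0.0);
        total += lightColor[i] * ndotl;
    }
    return total;
}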
I hate dealing with seemingly arbitrary restrictions like this. (I hear WebGL 2 does not have this restriction, which is really great.)
What do we have now? The current situation is this: with a lot of code removed, I get the following benefits:
1. "Fixed function style" shaders run on all platforms (consoles! DX12!).
2. They behave more consistently across platforms (for example, previously there were slight differences in specular highlights and attenuation between PC and mobile).
3. "Fixed function style" shaders work with MaterialPropertyBlocks, which means they can render sprites and so on.
4. Fixed function shaders no longer have weird half-pixel rasterization offsets on Windows Phone.
5. Converting fixed function shaders to actual shaders makes them easier to understand. I added a button in the shader inspector that shows all the generated code; you can copy it and extend it.
6. With less code, the executables get smaller. For example, the Windows 64-bit player shrank by about 300 kilobytes.
7. Rendering is a little faster (even when no fixed function shaders are used)!
The last point was not the main goal, but it is a nice side benefit. No single change had a significant impact, but a considerable number of branches and data were removed from the platform graphics abstraction (things that only existed to support fixed function). Testing on one project, the rendering thread got about 5% faster (e.g. 10.2ms -> 9.6ms), which is a very nice result.
Are there any disadvantages? Yes, there are several:
1. You can no longer create a fixed function shader at runtime. Previously, you could write var mat = new Material("<fixed function shader string>") and it would work (except on consoles). For this reason, Material(string) was already marked obsolete in Unity 5.1 and issued a warning; now it actually stops working.
2. A breaking change for web player backwards compatibility: content built with Unity 5.2 can't run in the 5.0/5.1 web player.
3. A few corner cases may not work. For example, a fixed function shader that uses a global texture that is not a 2D texture. Nothing about that texture is declared in the shader source itself, so when I generate the actual shader during import, I can't tell whether it is a 2D texture or a cubemap. For global textures, I just assume they are all 2D (see the sketch after this list).
And that's about it!
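To illustrate that global-texture corner case (a sketch with a made-up texture name, _GlobalEnv): a fixed function pass can sample a texture that never appears in a Properties block and is instead set from script with Shader.SetGlobalTexture, so its dimensionality is invisible to the importer:

Pass
{
    // _GlobalEnv is not declared in any Properties block; it is set globally
    // from script. The importer can't tell whether it is a 2D texture or a
    // cubemap, so the generated shader assumes 2D.
    SetTexture [_GlobalEnv] { combine texture }
}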
Removing runtime support for fixed function also revealed more potential benefits. Internally, we were passing something like a "texture type" (2D, cubemap, etc.) along with every texture change, but it seems only the fixed function pipeline actually used it. Similarly, we keep a vertex-declaration-like structure around for every draw call, but now I don't think those are needed anymore either.
The end of Article 3...
------ From: wolf96 http://blog.csdn.net/wolf96