Forward Rendering
The traditional rendering method: you hand the graphics card a mesh, it breaks the mesh up into vertices, passes them through a series of transformations, and rasterizes them into fragments (pixels); all shading work is done on those fragments before the final image appears on screen.
The process is fairly linear: every object goes through every stage of the pipeline, one after another, until the full image is produced.
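To make that per-object, per-fragment, per-light structure concrete, here is a minimal sketch of the work a forward pipeline effectively performs each frame. The Mesh and Light types and the numbers in main are illustrative assumptions, not taken from any particular engine.

```cpp
#include <cstdio>
#include <vector>

struct Light { float position[3], color[3]; };
struct Mesh  { long fragmentCount; /* vertices, material, ... */ };

// Every object is rasterized, and every resulting fragment is shaded once
// per light -- even if depth testing later hides that fragment.
long countForwardShadingWork(const std::vector<Mesh>& meshes,
                             const std::vector<Light>& lights)
{
    long shadeCalls = 0;
    for (const Mesh& mesh : meshes)                         // one draw per object
        for (long f = 0; f < mesh.fragmentCount; ++f)       // per rasterized fragment
            shadeCalls += static_cast<long>(lights.size()); // per light
    return shadeCalls;                                      // O(fragments * lights)
}

int main() {
    std::vector<Mesh>  meshes(100, Mesh{20000});            // illustrative: 100 objects
    std::vector<Light> lights(10);                          // 10 dynamic lights
    std::printf("forward shading invocations: %ld\n",
                countForwardShadingWork(meshes, lights));   // 20,000,000
}
```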
Deferred Rendering
Deferred rendering, as the name suggests, postpones the shading work to the end: geometry is processed first, and only once all the required buffers have been built are they read by a shading pass and combined to produce the final result. This way, the compute and memory bandwidth needed to shade the scene is spent only on the visible portions, reducing the cost caused by depth complexity (overdraw).
G-buffer is short for geometry buffer, i.e. an "object buffer". Unlike ordinary rendering, which outputs only color into a texture, a G-buffer is a set of textures holding per-pixel color, normals, and world-space positions. Because the amount of data the G-buffer needs exceeds what a single conventional texture can hold, game engines typically use the multiple render targets (MRT) technique to generate it, writing color, normals, and world-space positions into three floating-point textures in a single draw pass.
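As an illustration, here is a minimal sketch of creating such a G-buffer with three render targets in OpenGL. The texture formats and attachment layout are one common choice and an assumption on my part, not a prescription from the article.

```cpp
#include <GL/glew.h>

GLuint gBufferFBO, gAlbedo, gNormal, gPosition, gDepth;

void createGBuffer(int width, int height)
{
    glGenFramebuffers(1, &gBufferFBO);
    glBindFramebuffer(GL_FRAMEBUFFER, gBufferFBO);

    // Helper: allocate one texture and attach it as one render target.
    auto makeTarget = [&](GLuint& tex, GLint internalFormat, GLenum attachment) {
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, internalFormat, width, height, 0,
                     GL_RGBA, GL_FLOAT, nullptr);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glFramebufferTexture2D(GL_FRAMEBUFFER, attachment, GL_TEXTURE_2D, tex, 0);
    };

    makeTarget(gAlbedo,   GL_RGBA8,   GL_COLOR_ATTACHMENT0); // color
    makeTarget(gNormal,   GL_RGBA16F, GL_COLOR_ATTACHMENT1); // normals
    makeTarget(gPosition, GL_RGBA32F, GL_COLOR_ATTACHMENT2); // world-space position

    // Depth buffer so the geometry pass still does normal depth testing.
    glGenRenderbuffers(1, &gDepth);
    glBindRenderbuffer(GL_RENDERBUFFER, gDepth);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                              GL_RENDERBUFFER, gDepth);

    // The fragment shader writes to all three attachments in one draw (MRT).
    const GLenum drawBuffers[3] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1,
                                    GL_COLOR_ATTACHMENT2 };
    glDrawBuffers(3, drawBuffers);
}
```

A real engine usually packs these channels more aggressively (for example, reconstructing position from depth instead of storing it) to save bandwidth.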
A common practice is to render color, depth, and normals into separate buffers; in the final lighting step, the pixel color is computed from those three buffers together with the light source information.
Color, depth, and normal buffers
Final Lighting (shading) result generated using the three buffers
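To show how the lighting pass combines those three buffers, here is a minimal CPU-side sketch of shading a single screen pixel. A real engine would run the equivalent logic in a full-screen fragment shader; the Vec3/Light types and the simple Lambert model are assumptions for illustration.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec3  { float x, y, z; };
struct Light { Vec3 position; Vec3 color; };

static Vec3  sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  normalize(Vec3 v)   { float l = std::sqrt(dot(v, v)); return { v.x / l, v.y / l, v.z / l }; }

// One lighting evaluation per screen pixel per light, independent of how
// many objects were drawn into the G-buffer.
Vec3 shadePixel(Vec3 albedo, Vec3 normal, Vec3 worldPos,
                const std::vector<Light>& lights)
{
    Vec3 result{ 0, 0, 0 };
    for (const Light& light : lights) {
        Vec3  toLight = normalize(sub(light.position, worldPos));
        float diffuse = std::max(0.0f, dot(normal, toLight)); // simple Lambert term
        result.x += albedo.x * light.color.x * diffuse;
        result.y += albedo.y * light.color.y * diffuse;
        result.z += albedo.z * light.color.z * diffuse;
    }
    return result;
}

int main() {
    std::vector<Light> lights = { { { 2, 3, 1 }, { 1, 1, 1 } } };
    Vec3 c = shadePixel({ 0.8f, 0.2f, 0.2f },   // albedo read from the color buffer
                        { 0, 1, 0 },            // normal read from the normal buffer
                        { 0, 0, 0 },            // position from the position/depth buffer
                        lights);
    std::printf("%.3f %.3f %.3f\n", c.x, c.y, c.z);
}
```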
Comparison
In a standard forward rendering pipeline, for each light source the illumination has to be calculated for every vertex in the scene (vertex lighting). Suppose the scene contains 100 objects with about 1,000 vertices each, roughly 100,000 polygons in total. The graphics card handles that amount of vertex processing with ease, but once those polygons are rasterized and sent to the fragment shader, the per-fragment lighting calculations start to consume a lot of performance.
Developers therefore try to move as much lighting work as possible from the fragment shader into the vertex shader, which can save a great deal of performance. Even so, every fragment is shaded with expensive lighting whether or not it ends up hidden behind other fragments. At a screen resolution of 1024 × 768 there are nearly 800,000 pixels, so a single frame may invoke the fragment shader millions of times; fragments that are later rejected by depth testing get shaded anyway, which wastes a lot of performance.
Worse still, every lamp added to the scene forces the fragment shader to compute lighting all over again for that light source; now imagine a street full of lamps...
The complexity of forward rendering can be expressed as O(num_geometry_fragments * num_lights): the cost grows with both the number of geometry fragments and the number of light sources.
Some engines optimize this with various tricks, such as culling lights that are too far away, merging light sources, or using lightmaps (which only work for static lighting), but a better solution is needed for scenes with many dynamic lights.
Deferred rendering is such a solution. It drastically reduces the number of fragments that need to be lit: lighting is computed once per screen pixel instead of once for every generated fragment. The time complexity of its lighting pass can be expressed as O(screen_resolution * num_lights), which is independent of the number of objects in the scene and depends only on the screen resolution and the number of light sources.
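As a rough illustrative calculation (the overdraw figure is an assumption of mine, not from the article): at 1024 × 768 there are about 786,000 screen pixels, while overdraw might produce, say, 2,000,000 rasterized fragments. With 10 lights, forward rendering then performs roughly 20 million lighting evaluations per frame, while deferred rendering performs roughly 7.9 million, and that number does not grow as more geometry is added to the scene.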
How to choose
The short answer is: if you use a lot of dynamic light sources, choose deferred rendering. It does, however, have some drawbacks:
1. Requires a relatively new graphics card that supports multiple render targets;
2. Requires a lot of video memory bandwidth for writing and reading the buffers;
3. Cannot handle transparent objects (unless forward rendering and deferred rendering are combined; see the sketch after this list);
4. Traditional hardware anti-aliasing such as MSAA is not available, although screen-space techniques such as FXAA still apply;
5. Only one lighting/material model can be used; one workaround is deferred lighting;
6. The cost of shadows still scales with the number of light sources.
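Regarding point 3, a common arrangement is to run the deferred path for opaque geometry and then draw transparent objects with a conventional forward pass on top. Here is a minimal sketch of such a frame, where every helper function is a hypothetical placeholder for engine-specific code:

```cpp
// All helpers are hypothetical placeholders, not a real engine API.
void bindGBuffer()                   { /* bind the MRT framebuffer               */ }
void drawOpaqueObjects()             { /* geometry pass: fill the G-buffer       */ }
void bindBackBuffer()                { /* switch to the default framebuffer      */ }
void drawFullScreenLightingPass()    { /* read G-buffer, accumulate all lights   */ }
void drawTransparentObjectsForward() { /* blend transparent objects, forward-lit */ }

void renderFrame()
{
    bindGBuffer();
    drawOpaqueObjects();              // 1. geometry pass (opaque objects only)

    bindBackBuffer();
    drawFullScreenLightingPass();     // 2. deferred lighting, once per screen pixel

    drawTransparentObjectsForward();  // 3. forward pass for transparency,
                                      //    reusing the geometry-pass depth buffer
}
```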
If the game does not use many light sources and needs to be compatible with older hardware, forward rendering is the better choice; combined with static lightmaps, the results can look very good.
Reference
Forward Rendering vs. Deferred Rendering - http://gamedevelopment.tutsplus.com/articles/forward-rendering-vs-deferred-rendering--gamedev-12342
Deferred Shading - http://en.wikipedia.org/wiki/Deferred_shading#Deferred_lighting
Deferred Shading Rendering Path - http://docs.unity3d.com/Manual/RenderTech-DeferredShading.html
Forward Rendering Path Details - http://docs.unity3d.com/Manual/RenderTech-ForwardRendering.html