Objective
The Unity Vision VR/AR Summit came to China (http://www.bagevent.com/event/197605), and recently I have also been focusing on VR development with Unity.
Around June, the news broke that Valve had released the full source code of the renderer used in The Lab. I have always been curious what a renderer for the closed-source Unity3D engine looks like, so today I took some time to read the code.
Related links:
- Official post: http://steamcommunity.com/games/250820/announcements/detail/604985915045842668
- GitHub download: https://github.com/ValveSoftware/the_lab_renderer
- Unity Asset Store download: https://www.assetstore.unity3d.com/en/#!/content/63141

Features & implementation
After downloading The Lab Renderer from GitHub, a quick browse shows there is not much content: mainly the C# code of a few components and some shaders. The next step is to look at how its main features are implemented.

Single-pass Forward Rendering
The Lab Renderer uses forward rendering primarily for MSAA (multisample anti-aliasing) and efficiency. However, Unity's default forward rendering is multi-pass: one pass per dynamic light per object to render its illumination. The Lab Renderer instead provides a solution that renders multiple lights in a single pass.
To implement single-pass forward rendering, first make a few settings in the Player Settings, as shown in the figure above. The "single pass" itself is achieved mainly in the shaders. The general idea is to define a series of arrays of lighting parameters in the shader include file "vr_lighting.cginc":
#define MAX_LIGHTS ...
float4 g_vLightColor[ MAX_LIGHTS ];
float4 g_vLightPosition_flInvRadius[ MAX_LIGHTS ];
float4 g_vLightDirection[ MAX_LIGHTS ];
Then a single for loop computes the contribution of all lights at once:
LightingTerms_t ComputeLighting( float3 vPositionWs ... )
{
    [loop] for ( int i = 0; i < g_nNumLights; i++ )
    {
    }
}
The next step is to manage the lighting information on the C# side. First, a "ValveRealtimeLight.cs" script needs to be added to every Unity light object; the class ValveRealtimeLight maintains a static variable "List<ValveRealtimeLight> s_allLights" used to keep track of all the lights' data.
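The article does not show that bookkeeping code, but as a minimal sketch, the usual Unity pattern is to register in OnEnable and unregister in OnDisable (the m_cachedLight field here is my own illustration, not necessarily Valve's actual member):

using System;
using System.Collections.Generic;
using UnityEngine;

[ExecuteInEditMode]
[RequireComponent( typeof( Light ) )]
public class ValveRealtimeLight : MonoBehaviour
{
    // Static registry of all active lights; read by ValveCamera each frame.
    public static List<ValveRealtimeLight> s_allLights = new List<ValveRealtimeLight>();

    [NonSerialized] public Light m_cachedLight; // the Unity light on this object

    void OnEnable()
    {
        m_cachedLight = GetComponent<Light>();
        s_allLights.Add( this );    // book this light into the global list
    }

    void OnDisable()
    {
        s_allLights.Remove( this ); // drop it when disabled or destroyed
    }
}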
Then, the "ValveCamera.cs" script needs to be added to the main camera object. In the member function ValveCamera.UpdateLightConstants(), all lighting-related parameters are computed and set as shader constants.
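A hedged sketch of what such an update could look like, reusing the m_cachedLight field from the sketch above and the array names from the shader. Shader.SetGlobalVectorArray and SetGlobalInt are standard Unity APIs, while the packing details (color times intensity, w = inverse range) are my guesses at plausible conventions, not Valve's verified code:

// Sketch: gather per-light data from s_allLights and upload it as the
// shader constant arrays declared in vr_lighting.cginc. Illustrative only.
void UpdateLightConstants()
{
    const int MAX_LIGHTS = 18; // illustrative value; must match the shader's MAX_LIGHTS
    var colors = new Vector4[MAX_LIGHTS];
    var positions = new Vector4[MAX_LIGHTS];
    var directions = new Vector4[MAX_LIGHTS];

    int n = 0;
    foreach ( var vl in ValveRealtimeLight.s_allLights )
    {
        if ( n >= MAX_LIGHTS ) break;
        Light l = vl.m_cachedLight;
        colors[n] = l.color * l.intensity;
        Vector3 p = l.transform.position;
        positions[n] = new Vector4( p.x, p.y, p.z, 1.0f / l.range ); // w = inverse radius
        directions[n] = l.transform.forward;
        n++;
    }

    Shader.SetGlobalInt( "g_nNumLights", n );
    Shader.SetGlobalVectorArray( "g_vLightColor", colors );
    Shader.SetGlobalVectorArray( "g_vLightPosition_flInvRadius", positions );
    Shader.SetGlobalVectorArray( "g_vLightDirection", directions );
}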
The above is the implementation of The Lab Renderer's single-pass forward rendering feature.

Shadows
The Lab Renderer also takes over shadow rendering. You need to select "Disable Shadows" in Unity's Quality -> Shadows settings to turn off Unity's default shadows.
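For reference, the same switch can be flipped from script via Unity's standard QualitySettings API (a convenience note on my part, not something The Lab Renderer requires you to do in code):

// Equivalent of choosing "Disable Shadows" in Quality -> Shadows.
QualitySettings.shadows = ShadowQuality.Disable;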
As shown in the previous illustration, The Lab Renderer uses the shadow mapping algorithm to generate real-time shadows. The rough process of this algorithm is as follows:
- Render a depth buffer from the point of view of the light. The geometric meaning of this depth buffer can be roughly understood as the nearest distance from each pixel to the light; this depth buffer is called the shadow buffer or shadow map.
- When rendering to the back buffer, each point that needs shading is "projected" into the shadow map space above, and its depth is compared against the stored value to determine whether this point is the one nearest to the light, i.e. whether it is occluded by some other object and therefore in shadow.
- Rendering the shadow buffer is very intuitive for a spot light (see the sketch after this list). For a directional light, The Lab Renderer employs an approximation: it replaces the directional light with a "very far away" point light, and a point light in turn is rendered as 6 fake spot lights covering the six cube-face directions.
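To make the first step concrete, here is a minimal sketch, assuming a spot light, of configuring a Unity camera that sees the scene exactly as the light does (all names and values here are my illustration, not Valve's code):

// Sketch: a spot light's cone maps directly onto a perspective camera
// frustum, so its shadow map is just a depth render from this camera.
Camera ConfigureShadowCamera( Light spotLight, RenderTexture shadowMap )
{
    var go = new GameObject( "Shadow Camera" );
    var cam = go.AddComponent<Camera>();

    cam.transform.position = spotLight.transform.position;
    cam.transform.rotation = spotLight.transform.rotation;
    cam.fieldOfView = spotLight.spotAngle; // cone angle = camera FOV
    cam.aspect = 1.0f;
    cam.nearClipPlane = 0.1f;
    cam.farClipPlane = spotLight.range;

    cam.targetTexture = shadowMap; // depth is written into the shadow map
    cam.enabled = false;           // rendered manually, not every frame
    return cam;
}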
The flow control of the above algorithm is implemented in the ValveCamera.cs script. It first needs a camera placed at the light's point of view, a RenderTexture for the shadow map, and a shader for shadow map rendering (Resources/vr_cast_shadows.shader):
[ExecuteInEditMode]
[RequireComponent( typeof( Camera ) )]
public class ValveCamera : MonoBehaviour
{
    ...
    [NonSerialized] private Camera m_shadowCamera = null;
    [NonSerialized] public RenderTexture m_shadowDepthTexture = null;
    [NonSerialized] public Shader m_shaderCastShadows = null;
    ...
}
ValveCamera.ValveShadowBufferRender() is invoked in the ValveCamera.OnPreCull() script callback to render the shadow buffer. As shown in the shadow map image above, The Lab Renderer renders the shadows of all lights into one big shadow buffer, and stores the region of this buffer belonging to each light in the shader parameter "g_vShadowMinMaxUv". In this way, the single-pass forward rendering described earlier can compute the lighting of all lights in one pass.
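A hedged sketch of that callback, assuming Camera.RenderWithShader as the mechanism (the standard Unity replacement-shader API) and simplifying the all-lights atlas down to a single render:

// Sketch: render the shadow buffer with the depth-only shader before the
// main camera culls the scene. The real code tiles every light into one
// big atlas and records each tile's UV range in g_vShadowMinMaxUv.
void OnPreCull()
{
    ValveShadowBufferRender();
}

void ValveShadowBufferRender()
{
    if ( m_shadowDepthTexture == null )
    {
        m_shadowDepthTexture = new RenderTexture( 4096, 4096, 24,
            RenderTextureFormat.Shadowmap );
    }
    m_shadowCamera.targetTexture = m_shadowDepthTexture;
    m_shadowCamera.RenderWithShader( m_shaderCastShadows, "" );
}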
As for the content of vr_cast_shadows.shader, it is very simple: its core is a vertex shader used to compute the projected position coordinates; UVs and the like can all be omitted.
In the lighting shader (vr_lighting.cginc), the shadow is computed by the ComputeShadow_PCF_3x3_Gaussian() function. PCF stands for Percentage Closer Filtering, which is used to produce smooth shadow edges. In this function, a Gaussian filter weights the depth comparisons over the 3x3 neighborhood around the target point.
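The real function is HLSL inside vr_lighting.cginc; purely to illustrate the PCF arithmetic, here is a CPU-side C# rendition of a 3x3 Gaussian-weighted filter (the weights and names are mine, not Valve's):

// PCF: take 9 binary depth comparisons around the projected sample and
// blend them with Gaussian weights, yielding a soft 0..1 shadow factor.
static readonly float[,] kGauss3x3 =
{
    { 1f/16, 2f/16, 1f/16 },
    { 2f/16, 4f/16, 2f/16 },
    { 1f/16, 2f/16, 1f/16 },
};

static float ShadowPCF3x3( float[,] shadowMap, int x, int y, float fragDepth )
{
    float lit = 0f;
    for ( int dy = -1; dy <= 1; dy++ )
        for ( int dx = -1; dx <= 1; dx++ )
        {
            // 1 if this tap says the fragment is nearest to the light.
            float tap = fragDepth <= shadowMap[y + dy, x + dx] ? 1f : 0f;
            lit += kGauss3x3[dy + 1, dx + 1] * tap;
        }
    return lit; // fraction of the light that reaches the point
}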
Adaptive Quality

For VR, the frame rate is very important, so the Valve engineers added this feature: dynamically adjusting the rendering quality to achieve stable performance. There is a talk about it from GDC 2016: https://www.youtube.com/watch?v=Eilb688puu4
This part mainly involves deciding when to adjust the quality and what to adjust (some things are not easy to change on the fly); the specific logic is in the function ValveCamera.UpdateAdaptiveQuality().
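As a rough sketch of the idea only (the real UpdateAdaptiveQuality() is considerably more elaborate and, per the talk, also scales render resolution): watch the GPU frame time against the 90 Hz deadline and step a quality level down quickly or up slowly:

// Sketch: drop quality fast when close to missing the VR deadline,
// raise it slowly when there is headroom. The thresholds and the choice
// of MSAA as the adjusted knob are illustrative.
const float kTargetFrameMs = 1000f / 90f; // 90 Hz VR refresh
int m_qualityLevel = 3;                   // current level, 0..4

void UpdateAdaptiveQualitySketch( float lastGpuFrameMs )
{
    if ( lastGpuFrameMs > kTargetFrameMs * 0.9f )
        m_qualityLevel--; // about to miss a frame: back off immediately
    else if ( lastGpuFrameMs < kTargetFrameMs * 0.7f )
        m_qualityLevel++; // plenty of headroom: creep back up
    m_qualityLevel = Mathf.Clamp( m_qualityLevel, 0, 4 );

    // Map the level to a concrete knob, e.g. the MSAA sample count.
    int[] msaaPerLevel = { 0, 2, 2, 4, 8 };
    QualitySettings.antiAliasing = msaaPerLevel[m_qualityLevel];
}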