Summary of issues
1. Light Support for Editor
The editor now has a light tool for creating and modifying lights.
Problem 1: user interaction with light objects is tricky.
A point light can draw its corresponding volume (wireframe sphere/cone) for the user to select, but with many lights the wireframe spheres become messy. So a HUD icon is used instead: clicking the HUD icon selects the light, and the volume is displayed only while the light is selected. Many of the "invisible" logic objects inside the editor have the same requirement, although they are not handled yet. In addition, the light-volume geometry used by deferred shading can be reused directly to show the bounding-volume helper; a directional light needs an extra arrow as its helper.
Problem 2: HUD picking. Object picking so far has used a 3D method: take the screen-space x, y and an arbitrary z (different z values project back to the same x, y), unproject to world space, then take the world-space camera position and build a ray (every point on this ray projects to the same x, y on screen), and finally pick against bounding boxes with a 3D-space intersection algorithm, as sketched below.
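For reference, a minimal sketch of that ray construction. It is written in HLSL-style vector math to match the rest of this post, although in practice it runs on the CPU; invViewProj, eyePos and viewportSize are illustrative names, not Blade's actual interfaces.

float3 BuildPickRayDir(float2 screenXY, float2 viewportSize, float4x4 invViewProj, float3 eyePos)
{
    // screen pixel -> NDC; any z will do, since every z projects back to the same x, y
    float2 ndc = float2(2.0f * screenXY.x / viewportSize.x - 1.0f,
                        1.0f - 2.0f * screenXY.y / viewportSize.y);
    // unproject an arbitrary depth to world space (mul order depends on the engine's matrix convention)
    float4 world = mul(invViewProj, float4(ndc, 0.5f, 1.0f));
    world /= world.w;
    // the ray starts at the camera position and passes through the unprojected point
    return normalize(world.xyz - eyePos);
}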
The HUD is special, though: the previous 3D method would still work, but it is cumbersome. The bounding box would have to be derived back from the screen rect, and a world-space bounding volume does not tightly enclose the screen rect, so a view-space bounding volume would be needed and intersected with a view-space ray.
After weighing the options, picking for the HUD is done directly in screen space; this adds one or two more interfaces but is easy to implement, as sketched below. Note that the screen-space z also needs to be kept so that the topmost of overlapping objects can be sorted out.
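A rough sketch of the screen-space test, again in HLSL-style math even though it also runs on the CPU; viewProj, iconHalfSize and cursorXY are illustrative names. A real implementation would also reject points behind the camera (clipPos.w <= 0).

float2 HudPickTest(float3 lightWorldPos, float4x4 viewProj, float2 viewportSize,
                   float2 cursorXY, float2 iconHalfSize)
{
    // project the light's anchor point to the screen
    float4 clipPos = mul(viewProj, float4(lightWorldPos, 1.0f));
    float2 ndc = clipPos.xy / clipPos.w;
    float2 screenXY = float2((ndc.x * 0.5f + 0.5f) * viewportSize.x,
                             (0.5f - ndc.y * 0.5f) * viewportSize.y);
    // 2D rectangle test against the icon's screen rect
    float hit = all(abs(cursorXY - screenXY) <= iconHalfSize) ? 1.0f : 0.0f;
    // keep the screen-space z so overlapping icons can be sorted (topmost = smallest z)
    return float2(hit, clipPos.z / clipPos.w);
}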
Problem 3: HUD icons. The icons use the editor's own resources (64x64 PNG) directly; they are not compressed offline but are compressed at run time to BC3 (DXT5). The icon also needs a background, and baking it into every art resource would be too redundant; at the same time I do not want to hard-code the number of textures (must it be 2?), so a new semantic was added to query the texture count, which lets the shader sample the textures in a loop:
void BladeFSMain(in float4 pos : POSITION,
                 in float2 uv : TEXCOORD0,
                 uniform float4 textureCount : SAMPLER_COUNT,
                 uniform sampler2D hudDiffuse[MAX_DYNAMIC_TEXTURE_COUNT],
                 out float4 outColor : COLOR0)
{
    outColor = float4(0, 0, 0, 0);
    for (int i = 0; i < textureCount.x; ++i)
    {
        float4 color = tex2D(hudDiffuse[i], uv.xy);
        outColor = lerp(outColor, color, color.a);
    }
}
3. View Distance vs. View Depth (view distance vs. view space Z)
Is view space Z the same as the camera distance? Much of the time they can be treated as roughly equivalent, but they are not the same.
First of all, both are linear. The difference is that points at the same distance form a sphere, while points with the same view-space Z form a plane perpendicular to the view direction (see the small helper after the table below).
Because view space Z is a z value, it can be used for z-buffering (depth write/test) and can be written to the depth-stencil; view distance cannot be used for depth testing.
Normalized Linear Depth | Can be used as Z buffer (depth testing) | Geometry of points with the same value
View Distance | No | Sphere
View Space Z | Yes | Plane
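To make the distinction concrete, here is a small helper, assuming a right-handed view space where the camera looks down -Z and viewPos is a point's position in view space:

float2 ViewZAndViewDistance(float3 viewPos)
{
    float viewZ = -viewPos.z;           // equal on planes perpendicular to the view direction
    float viewDist = length(viewPos);   // equal on spheres centered at the eye
    return float2(viewZ, viewDist);
}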
The earlier plan for saving the G-buffer was to write depth with INTZ and then sample it as the G-buffer depth, which is why Blade uses view space Z.
However, inside deferred shading:
pos = eye_pos + dir * "depth"; the "depth" here is actually the view distance, that is,
pos = eye_pos + dir * viewDistance.
Reading the depth-stencil directly gives view Z, not view distance. The problem was not noticed before because only directional lights had been implemented, and they do not use position.
A simple analysis gives |viewDistance| = |viewZ| / cos(θ) = |viewZ| / dot(viewDir, dir), where viewDir is the camera's look direction and dir is the normalized per-pixel ray direction.
Construct position in view space:
// right-handed: the look-at direction in view space is (0, 0, -1), so dot(lookAtDir, rayDir) = -rayDir.z
viewZ = tex2D(DepthINTZ, uv).r;                             // normalized linear depth
viewPos = viewDir * (viewZ * farClipDist) / -viewDir.z;     // viewDir: per-pixel ray direction in view space
Construct position in world space:
viewZ = tex2D(DepthINTZ, uv).r;
worldPos = worldEyePos + worldDir * (viewZ * farClipDist) / dot(worldLookAtDir, worldDir);
This makes it possible to use the depth-stencil as the G-buffer depth: it serves depth testing and can also be used to reconstruct pixel positions. Normal and depth therefore no longer have to be packed into the same buffer,
and even with a 24-bit normal one channel is freed up. Currently this channel is used to store the specular power.
The current Blade G-buffer layout is as follows:
Component | Format | Attachment | Usage
Color | A8R8G8B8 | MRT color 0 | Diffuse: RGB, specular level: A
Normal | A8R8G8B8 | MRT color 1 | World normal: RGB, specular exponent: A
Depth | INTZ | Depth-stencil | Z-buffer depth, converted to view space Z (was: normalized view space Z)
Blade originally output the normalized view space Z from the pixel shader; later, because that was not efficient, it was changed to output the regular Z from the vertex shader, and deferred shading now computes view space Z from the Z-buffer value:
viewZ = ConvertDepthToViewSpace(tex2D(DepthINTZ, uv).r);
worldPos = worldEyePos + worldDir * (viewZ * farClipDist) / dot(worldLookAtDir, worldDir);
About the calculation of ConvertDepthToViewSpace:
A link was recorded earlier; here is just a brief note of the idea. From the projection matrix:
projectedZ = f(viewZ) (a linear function of viewZ over [0, zFar])
projectedW = -viewZ (right-handed)
After the perspective divide (z/w):
ndcZ = -f(viewZ) / viewZ, with range [-1, 1] (OpenGL) or [0, 1] (D3D)
i.e. ndcZ = g(viewZ) = a / viewZ + b (non-linear)
where b = projectionMatrix33 = projectionMatrix[2][2] and a = projectionMatrix34 = projectionMatrix[3][2]
If the viewport's depth range is ignored, the value stored in the final depth buffer can be assumed to be NDC-space z.
Plugging in the projection-matrix parameters and inverting: viewZ = g^-1(zBuffer) = a / (zBuffer + b).
As long as a and b are computed on the CPU and passed in, the shader can recover view space Z. Alternatively, the inverted projection matrix can be used to obtain the view-space z directly.
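As an illustration only, a sketch of what ConvertDepthToViewSpace could look like under these assumptions; the real function presumably reads a, b and the far clip distance from shader constants rather than parameters, and depthFactor (x = a, y = b) is just a hypothetical name. The result is divided by farClipDist so it matches the normalized viewZ consumed by the snippet above.

float ConvertDepthToViewSpace(float zBuffer, float2 depthFactor, float farClipDist)
{
    // invert ndcZ = a / viewZ + b  ->  viewZ = a / (zBuffer + b); sign conventions are folded into a and b
    float viewZ = depthFactor.x / (zBuffer + depthFactor.y);
    return viewZ / farClipDist;   // normalized view space Z
}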
At the moment this is only at a trial stage and no problems have shown up, but issues such as insufficient precision are possible (linear depth had precision problems before), so it will keep being improved.
If the precision turns out to be insufficient, it may be necessary to render the view distance directly into the MRT instead of sampling the depth-stencil.
Note that not all SM 3.0 hardware supports INTZ, only G80 and later. To simplify the pipeline, forward shading is used directly on video cards that do not support INTZ.
Blade currently supports shader models 2_0, 2_x and 3_0. Under shader model 3.0, INTZ support should be detected and the profile dropped to 2_x when it is missing; this detail was forgotten and will be fixed later.
Other issues
Light volume: calculating the screen UV in the vertex shader
Only the directional light's quad is special enough for this to work; sphere and cone volumes cannot do it. It was tried and turned out wrong. Details here: http://gamedev.stackexchange.com/questions/63870/computing-pixels-screen-position-in-a-vertex-shader-right-or-wrong
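For sphere and cone volumes, a sketch of the per-pixel alternative (assuming D3D-style clip space; the names are illustrative): pass the clip-space position down from the vertex shader and do the perspective divide in the pixel shader, because interpolating already-divided coordinates is only valid for the full-screen quad.

void LightVolumeVSMain(in float4 pos : POSITION,
                       uniform float4x4 worldViewProj,
                       out float4 outPos : POSITION,
                       out float4 clipPos : TEXCOORD0)
{
    outPos = mul(worldViewProj, pos);   // mul order depends on the engine's matrix convention
    clipPos = outPos;                   // no divide here: w varies per pixel on sphere/cone volumes
}

float2 ClipToScreenUV(float4 clipPos)
{
    float2 ndc = clipPos.xy / clipPos.w;                    // perspective divide per pixel
    return float2(0.5f, -0.5f) * ndc + float2(0.5f, 0.5f);  // D3D: flip y and map [-1, 1] to [0, 1]
}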
UV Scale
Because the default back buffer is the size of the desktop, the depth-stencil is created at that size as well. When the actual window is smaller, the UV scale has to be computed from the viewport's pixel size (or window size) and the size of the depth-stencil so that only part of the surface is sampled, and a half-pixel offset must also be applied. The scale and offset of this UV are computed on the CPU and passed into the pixel shader.
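A minimal sketch of applying those values in the pixel shader; uvScaleOffset is a hypothetical constant with xy = viewport size / depth-stencil size and zw = the half-pixel offset (0.5 / depth-stencil size on D3D9):

float SampleSceneDepth(sampler2D depthINTZ, float2 screenUV, float4 uvScaleOffset)
{
    float2 uv = screenUV * uvScaleOffset.xy + uvScaleOffset.zw;   // sample only the sub-rectangle actually rendered
    return tex2D(depthINTZ, uv).r;
}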
Next up is refining the spot light and adding a stencil mask.
Engine design tracking (9, 14.3.1) deferred shading: depth-stencil as G-buffer depth