Depth of Field


http://www.ownself.org/oswpblog?p=50

The algorithm comes from the real-time depth of field simulation described by Guennadi Riguer, Natalya Tatarchuk, and John Isidoro of ATI in their paper "Real-Time Depth of Field Simulation". This post only outlines the principle and the procedure; for the details, please see the original paper.
We know that video games aim for completely realistic images, yet most earlier games seem to be missing something: the picture looks perfectly sharp at every distance, a situation that never occurs in reality because of depth of field. A perfectly sharp image may sound ideal, but it feels artificial. In the real world, every imaging device, whether the human eye, a still camera, or a video camera, forms its image through a lens (or the eye's crystalline lens), so the projected image is always sharp near the focal plane and increasingly blurred farther from it (you can dig up your secondary-school optics here). Game rendering, however, involves no lens at all: the camera is an ideal pinhole, so every pixel on screen comes out perfectly sharp, which is not what we see in reality. Adding a depth of field effect therefore makes the picture more believable, and it also brings to games the cinematic technique of guiding the viewer's attention by shifting focus.
Let us briefly recall the relationship between parameters during lens imaging.

First, recall the thin-lens formula: 1/P + 1/i = 1/f, where P is the distance from an in-focus object to the lens, i is the distance from the lens to its image, and f is the focal length. For an object at some other distance D whose image forms at a distance x, we likewise have 1/f = 1/x + 1/D. From these two relations and the geometry of the lens aperture we can deduce the diameter of the blur circle (this is the formula the pixel shader below evaluates):

c = |a · f · (P − D)| / (P · (D − f))

where a is the diameter of the lens aperture.
Because of this relationship, only an object exactly at distance P from the lens has its light converge to a point after passing through the lens. For an object at distance D, the light instead spreads over a circle of diameter c on the image plane, making that part of the picture look blurred; this circle is called the CoC (circle of confusion).
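As a quick sanity check on the formula (the numbers here are purely illustrative and not from the original article): with an aperture diameter a = 10 mm, focal length f = 50 mm, focus distance P = 2000 mm, and an object at D = 4000 mm, we get c = |10 · 50 · (2000 − 4000)| / (2000 · (4000 − 50)) ≈ 0.13 mm, a blur circle easily visible on a 35 mm frame, whereas an object at exactly D = 2000 mm gives c = 0 and stays perfectly sharp.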
There are many ways to implement depth of field in video games. The following describes how to simulate the CoC on the GPU (through the Direct3D API) to achieve the effect (the DoF sample that ships with DirectX, by contrast, simply simulates the blur from distance).
This is really a post-processing technique, implemented by rendering the scene in two passes. In the first pass we compute a blur factor that simulates the CoC; in the second pass we use that blur factor to blend the colors around each pixel, so that what should be blurred is blurred and what should be sharp stays sharp.
Pass one: rendering the scene
Besides rendering the scene itself, the most important job of pass one is to compute the blur factor. Because information other than color has to be written out, we once again need Direct3D's MRT (multiple render targets), the same feature used in the earlier post on motion blur. The output layout is shown below:

The two targets use the formats D3DFMT_A8R8G8B8 and D3DFMT_G16R16 respectively. In the vertex shader, one extra output is needed besides the usual coordinate transformations: the depth of each vertex in view (camera) space. This value takes part in the calculation in the pixel shader that follows.
struct VS_INPUT
{
    float4 vPos      : POSITION;
    float3 vNorm     : NORMAL;
    float2 vTexCoord : TEXCOORD0;
};

struct VS_OUTPUT
{
    float4 vPos      : POSITION;
    float4 vColor    : COLOR0;
    float  fDepth    : TEXCOORD0;
    float2 vTexCoord : TEXCOORD1;
};
VS_OUTPUT scene_shader_vs(VS_INPUT v)
{
    VS_OUTPUT o = (VS_OUTPUT)0;
    float4 vPosWV;
    float3 vNorm;
    float3 vLightDir;
    // Standard coordinate transformation
    o.vPos = mul(v.vPos, matWorldViewProj);
    // Compute the position in view (camera) space
    vPosWV = mul(v.vPos, matWorldView);
    // Output the view-space depth
    o.fDepth = vPosWV.z;
    // Compute the diffuse color
    vLightDir = normalize(lightPos - v.vPos);
    vNorm = normalize(v.vNorm);
    o.vColor = dot(vNorm, vLightDir) * mtrlDiffuse + mtrlAmbient;
    // Pass through the texture UV coordinates
    o.vTexCoord = v.vTexCoord;
    return o;
}
In the pixel shader, the blur factor is computed from this information using the formula we derived above. The blur factor is normalized to the range [0, 1]: 0 means completely sharp, 1 means maximum blur.
struct PS_INPUT
{
    float4 vColor    : COLOR0;
    float  fDepth    : TEXCOORD0;
    float2 vTexCoord : TEXCOORD1;
};

struct PS_OUTPUT
{
    float4 vColor : COLOR0;
    float4 vDof   : COLOR1;
};
PS_OUTPUT scene_shader_ps(PS_INPUT v)
{
    PS_OUTPUT o = (PS_OUTPUT)0;
    // Output the lit, textured color
    o.vColor = v.vColor * tex2D(texSampler, v.vTexCoord);
    // Compute the CoC diameter from the formula derived above
    float pixCoc = abs(dLens * focalLen * (zFocus - v.fDepth) / (zFocus * (v.fDepth - focalLen)));
    float blur = saturate(pixCoc * scale / maxCoc);
    // Normalize both the depth and the blur factor to the [0, 1] range
    o.vDof = float4(v.fDepth / sceneRange, blur, 0, 0);
    return o;
}
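For completeness, the shaders above read a number of global shader constants (matWorldViewProj, matWorldView, lightPos, mtrlDiffuse, mtrlAmbient, dLens, focalLen, zFocus, scale, maxCoc, sceneRange) that this excerpt never declares. A plausible set of declarations is sketched below; the names come from the code, but the values and units are illustrative assumptions, not taken from the original article.

// Assumed global constants for pass one (illustrative values only)
float4x4 matWorldViewProj;      // world * view * projection matrix
float4x4 matWorldView;          // world * view matrix
float4   lightPos;              // light position used for the diffuse term
float4   mtrlDiffuse;           // material diffuse color
float4   mtrlAmbient;           // material ambient color
float    dLens      = 0.01f;    // lens aperture diameter (world units)
float    focalLen   = 0.05f;    // lens focal length (world units)
float    zFocus     = 10.0f;    // distance from the camera to the focal plane
float    scale      = 1000.0f;  // converts the world-space CoC into the units of maxCoc
float    maxCoc     = 10.0f;    // largest CoC the blur factor is normalized against
float    sceneRange = 100.0f;   // far distance used to map fDepth into [0, 1]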
Pass two: Post-processing
The vertex shader in the second pass does essentially nothing: vertices are passed straight through, apart from an offset applied to the texture UV coordinates (an operation that could of course also be done outside the shader).
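The article does not list that vertex shader, so here is a minimal pass-through sketch of what it might look like; the function name dof_filter_vs, the vPixelSize constant, and the D3D9-style half-texel offset are my own assumptions rather than code from the original.

// Assumed: reciprocal of the render-target size, set by the application
float2 vPixelSize;

struct VS_OUTPUT_QUAD
{
    float4 vPos      : POSITION;
    float2 vTexCoord : TEXCOORD0;
};

VS_OUTPUT_QUAD dof_filter_vs(float4 vPos : POSITION, float2 vTexCoord : TEXCOORD0)
{
    VS_OUTPUT_QUAD o;
    // The full-screen quad is already in clip space, so pass the position through
    o.vPos = vPos;
    // Half-texel offset so render-target texels line up with screen pixels (D3D9)
    o.vTexCoord = vTexCoord + 0.5f * vPixelSize;
    return o;
}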
The actual blurring happens in the pixel shader of the second pass. From the blur factor produced in the first pass we reconstruct the size of the CoC, then take a fixed number of samples from the pixels covered by that CoC and blend them to form the final color of the pixel, as shown below:

The relative coordinates of the sample points are usually precomputed and stored in an array; an example is sketched below. You could of course compute them dynamically in a more principled way, but current video games hardly need that much accuracy.
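For illustration only, such an array might look like the following. The tap count NUM_DOF_TAPS = 12 and the unit-disk offsets below are example values of my own choosing, not the exact taps from the original paper.

// Assumed: number of filter taps and example offsets on the unit disk
#define NUM_DOF_TAPS 12

static const float2 filterTaps[NUM_DOF_TAPS] =
{
    float2(-0.326f, -0.406f), float2(-0.840f, -0.074f),
    float2(-0.696f,  0.457f), float2(-0.203f,  0.621f),
    float2( 0.962f, -0.195f), float2( 0.473f, -0.480f),
    float2( 0.519f,  0.767f), float2( 0.185f, -0.893f),
    float2( 0.507f,  0.064f), float2( 0.896f,  0.412f),
    float2(-0.322f, -0.933f), float2(-0.792f, -0.598f)
};

In practice the application would usually upload these offsets, already scaled by the texel size, as shader constants rather than hard-coding them.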
There is one more point that must not be overlooked. If we compute exactly as described above, a serious artifact appears in the final image: where an object on the focal plane and an object far from it overlap on screen, the pixels near the boundary between the two are blurred together, and the CoC sampling inevitably mixes in the color of the in-focus object (color leaking). Therefore, during the final blend we must check each sample's depth to decide whether, and how much, it should contribute.
struct PS_INPUT
{
    float2 vTexCoord : TEXCOORD;
};
float4 dof_filter_ps(PS_INPUT v) : COLOR
{
    // Start with the color of the center sample
    float4 colorSum = tex2D(sceneColorSampler, v.vTexCoord);
    float totalContribution = 1.0f;
    // Depth and blur factor at the center sample
    float2 centerDepthBlur = tex2D(depthBlurSampler, v.vTexCoord);
    // Reconstruct the CoC size from the blur factor
    float sizeCoc = centerDepthBlur.y * maxCoc;
    // Take the filter taps
    for (int i = 0; i < NUM_DOF_TAPS; i++)
    {
        // Tap coordinate; filterTaps is the array of precomputed tap offsets
        float2 tapCoord = v.vTexCoord + filterTaps[i] * sizeCoc;
        // Sample the tap's color, depth, and blur factor
        float4 tapColor = tex2D(sceneColorSampler, tapCoord);
        float2 tapDepthBlur = tex2D(depthBlurSampler, tapCoord);
        // Compare depths: a tap behind the center pixel contributes fully,
        // otherwise it is weighted by its own blur factor to prevent color leaking
        float tapContribution = (tapDepthBlur.x > centerDepthBlur.x) ? 1.0f : tapDepthBlur.y;
        // Accumulate the weighted color
        colorSum += tapColor * tapContribution;
        totalContribution += tapContribution;
    }
    // Normalize by the total contribution
    float4 finalColor = colorSum / totalContribution;
    return finalColor;
}
finalColor is the final blended color.
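To tie the two passes together in a D3D9 effect file, a technique declaration along the following lines could be used. This is only a sketch: the technique and pass names, the shader model 2.0 targets, and the dof_filter_vs vertex shader are assumptions of mine; the application is still responsible for binding the two render targets for pass one and for feeding their contents back in as sceneColorSampler and depthBlurSampler before pass two.

// Assumed effect technique wiring the two passes together (sketch)
technique DepthOfField
{
    // Pass one: render the scene, writing color to RT0 and depth/blur to RT1
    pass SceneAndBlurFactor
    {
        VertexShader = compile vs_2_0 scene_shader_vs();
        PixelShader  = compile ps_2_0 scene_shader_ps();
    }
    // Pass two: full-screen filter that blends the taps inside each pixel's CoC
    pass DofFilter
    {
        VertexShader = compile vs_2_0 dof_filter_vs();
        PixelShader  = compile ps_2_0 dof_filter_ps();
    }
}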

