The article opens with two sets of effect screenshots, and two more groups of effect comparisons appear at the end.
The test scene assets come from the "Light Ink God" resource; the shader effects themselves are the subject of this article.
HDR
The human visual system is limited and distinguishes only about 16.7 million colors; colors beyond this range cannot be displayed.
In BMP or JPEG, each pixel is stored in 16, 24, or 32 bits.
Each pixel is made up of red, green, and blue channels; when stored as 24 bits, each channel value lies in the range [0, 255].
That can only express a 256:1 brightness difference, and Unity's shader colors range from 0 to 1.
In natural sunlight, however, the contrast ratio can reach 50,000:1.
HDR (High Dynamic Range) allows an image to represent this much wider range of contrast; the normal range is called LDR (Low Dynamic Range).
When taking a photo, you can control the brightness by controlling the exposure time.
An HDR effect is, in essence, controlled exposure.
Tone Mapping
Traditional display devices cannot fully display HDR, so we use tone mapping:
a technique that maps an image from HDR down to what an LDR display can show.
The explanation at http://www.ownself.org/blog/2011/tone-mapping.html is good:
Tone mapping is originally a photographic term. The brightness range a printed photograph can reproduce is not enough to represent the brightness range of the real world; if you simply compress the real world's entire luminance range linearly into the range the photo can express, you lose a lot of detail at both the bright and dark ends, which is clearly not the desired result. Tone mapping exists to overcome this: since the brightness range a photo can show is limited, we choose a suitable brightness window, based on the overall brightness of the scene, through the aperture and the length of the exposure, so that detail is not lost and the photo is not distorted. The human eye works on the same principle, which is why, when we move suddenly from a bright environment to a dark one, we see nothing at first and then slowly adapt to the surrounding brightness; the difference is that the eye adjusts its brightness window through the pupil.
A tone-mapping formula (the photographic operator):

    L(x, y) = MiddleGrey / AvgLogLuminance * Lw(x, y)

MiddleGrey is the middle grey level of the full screen (or of a region of it), and controls the overall brightness of the picture.
AvgLogLuminance is the log-average luminance of the full screen (or of a region):

    AvgLogLuminance = exp( (1/N) * Σ log(δ + Lw(x, y)) )

Lw is a pixel's luminance, N is the number of luminance samples taken, and δ is a small constant that avoids log(0).
A final step,

    Ld(x, y) = L(x, y) / (1 + L(x, y))

is what limits the value of L to [0, 1].
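The operator described above can be sanity-checked offline. A minimal Python sketch, assuming the standard photographic operator (function names and the sample luminances are mine; a real implementation runs per pixel on the GPU):

```python
import math

def avg_log_luminance(lums, delta=1e-4):
    # exp of the mean of log(delta + Lw): the log-average luminance
    return math.exp(sum(math.log(delta + lw) for lw in lums) / len(lums))

def tone_map(lw, avg_lum, middle_grey=0.18):
    # Scale by MiddleGrey / AvgLogLuminance, then compress into [0, 1)
    l = middle_grey / avg_lum * lw
    return l / (1.0 + l)

lums = [0.05, 0.2, 1.0, 8.0, 50.0]      # HDR luminances spanning ~1000:1
avg = avg_log_luminance(lums)
mapped = [tone_map(lw, avg) for lw in lums]
# every mapped value lands in [0, 1) and brightness order is preserved
```

Note how a 1000:1 input range fits into [0, 1) without clipping either end, which is exactly the detail-preserving behavior described above.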
Some tone-mapping operators also use exposure or gamma as parameters to control the final image.
Tone mapping is non-linear: it preserves a range of dark tones while gradually compressing the bright end.
This technique produces appealing visuals with strong contrast and detail.
The article HDR Rendering in OpenGL gives a brief formula that performs well.
The key code is as follows:
float4 frag (v2f i) : COLOR
{
    float4 c = tex2D(_MainTex, i.uv_MainTex);
    float Y = dot(float4(0.3, 0.59, 0.11, 1), c);      // pixel luminance
    float Yd = _Exp * (_Exp / _BM + 1) / (_Exp + 1);   // exposure-based scale
    return c * Yd;
}
_Exp and _BM are externally controllable variables (exposure and maximum brightness).
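Since the scale factor Yd depends only on those two parameters, it can be checked numerically. A Python sketch (the pixel values are illustrative):

```python
def tone_scale(exposure, bright_max):
    # Mirrors the shader: Yd = _Exp * (_Exp / _BM + 1) / (_Exp + 1)
    return exposure * (exposure / bright_max + 1.0) / (exposure + 1.0)

# With the script defaults _Exp = _BM = 0.4 the factor is 0.4 * 2 / 1.4 = 4/7
yd = tone_scale(0.4, 0.4)
pixel = (0.9, 0.5, 0.2)
scaled = tuple(c * yd for c in pixel)   # the shader's `c * Yd`
```

Raising the exposure raises the factor, brightening the whole image, which matches the controlled-exposure idea above.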
The HDR process is as follows
If you cannot tell the HDR result apart from a plain highlight/glow effect, look at the skybox: the glow effect does not brighten the skybox, while HDR makes colors more vivid and details clearer.
Bloom (Glow)
Glow is caused by scattering in the lens of the human eye.
The principle behind bloom: take the bright part of the image, blur it with a convolution filter, and superimpose it on the original image to produce the glow.
A Gaussian blur filter is a low-pass filter:
it mixes the current pixel with its surrounding pixels at certain weights, producing the blur.
The weight distribution is as follows: the farther a pixel is from the current pixel, the lower its weight.
Gaussian normal distribution curve
The two-dimensional formula:

    G(i, j) = e^( -(i^2 + j^2) / (2*sigma^2) ) / (2*pi*sigma^2)

The weights can be computed directly from this formula:
double sigma  = (double)radius / 3.0;
double sigma2 = 2.0 * sigma * sigma;
double sigmap = sigma2 * PI;

for (long n = 0, i = -radius; i <= radius; ++i)
{
    long i2 = i * i;
    for (long j = -radius; j <= radius; ++j, ++n)
        kernel[n] = exp(-(double)(i2 + j * j) / sigma2) / sigmap;
}
kernel holds the weights.
radius is the distance (in pixels) from the current pixel to the edge of the kernel.
With this formula we can compute a 3×3, 5×5, or 7×7 filter; for performance reasons we use the 5×5 filter.
3×3 filter
5×5 filter
Ready-made weight tables exist, so remember to use them: computing the weights at runtime also costs some performance.
We use these weights directly.
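The same kernel computation in Python, as a direct port of the C snippet above (the final normalization step is my addition, since over a finite window the raw weights do not sum exactly to 1):

```python
import math

def gaussian_kernel(radius):
    # Mirrors the C snippet: sigma = radius / 3,
    # G(i, j) = e^(-(i^2 + j^2) / (2 sigma^2)) / (2 pi sigma^2)
    sigma = radius / 3.0
    sigma2 = 2.0 * sigma * sigma
    sigmap = sigma2 * math.pi
    k = [[math.exp(-(i * i + j * j) / sigma2) / sigmap
          for j in range(-radius, radius + 1)]
         for i in range(-radius, radius + 1)]
    total = sum(map(sum, k))
    # Normalize so the weights sum to 1 and the blur preserves brightness
    return [[w / total for w in row] for row in k]

k5 = gaussian_kernel(2)   # radius 2 gives the 5x5 filter
```

The center weight is the largest and the weights fall off with distance, matching the bell curve described above.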
The key code is as follows
float3 mc00 = tex2D(_MainTex, i.uv_MainTex - fixed2( 2,  2) / _Inten).rgb;
float3 mc10 = tex2D(_MainTex, i.uv_MainTex - fixed2( 1,  2) / _Inten).rgb;
float3 mc20 = tex2D(_MainTex, i.uv_MainTex - fixed2( 0,  2) / _Inten).rgb;
float3 mc30 = tex2D(_MainTex, i.uv_MainTex - fixed2(-1,  2) / _Inten).rgb;
float3 mc40 = tex2D(_MainTex, i.uv_MainTex - fixed2(-2,  2) / _Inten).rgb;
float3 mc01 = tex2D(_MainTex, i.uv_MainTex - fixed2( 2,  1) / _Inten).rgb;
float3 mc11 = tex2D(_MainTex, i.uv_MainTex - fixed2( 1,  1) / _Inten).rgb;
float3 mc21 = tex2D(_MainTex, i.uv_MainTex - fixed2( 0,  1) / _Inten).rgb;
float3 mc31 = tex2D(_MainTex, i.uv_MainTex - fixed2(-1,  1) / _Inten).rgb;
float3 mc41 = tex2D(_MainTex, i.uv_MainTex - fixed2(-2,  1) / _Inten).rgb;
float3 mc02 = tex2D(_MainTex, i.uv_MainTex - fixed2( 2,  0) / _Inten).rgb;
float3 mc12 = tex2D(_MainTex, i.uv_MainTex - fixed2( 1,  0) / _Inten).rgb;
float3 mc22mc = tex2D(_MainTex, i.uv_MainTex).rgb;   // center pixel
float3 mc32 = tex2D(_MainTex, i.uv_MainTex - fixed2(-1,  0) / _Inten).rgb;
float3 mc42 = tex2D(_MainTex, i.uv_MainTex - fixed2(-2,  0) / _Inten).rgb;
float3 mc03 = tex2D(_MainTex, i.uv_MainTex - fixed2( 2, -1) / _Inten).rgb;
float3 mc13 = tex2D(_MainTex, i.uv_MainTex - fixed2( 1, -1) / _Inten).rgb;
float3 mc23 = tex2D(_MainTex, i.uv_MainTex - fixed2( 0, -1) / _Inten).rgb;
float3 mc33 = tex2D(_MainTex, i.uv_MainTex - fixed2(-1, -1) / _Inten).rgb;
float3 mc43 = tex2D(_MainTex, i.uv_MainTex - fixed2(-2, -1) / _Inten).rgb;
float3 mc04 = tex2D(_MainTex, i.uv_MainTex - fixed2( 2, -2) / _Inten).rgb;
float3 mc14 = tex2D(_MainTex, i.uv_MainTex - fixed2( 1, -2) / _Inten).rgb;
float3 mc24 = tex2D(_MainTex, i.uv_MainTex - fixed2( 0, -2) / _Inten).rgb;
float3 mc34 = tex2D(_MainTex, i.uv_MainTex - fixed2(-1, -2) / _Inten).rgb;
float3 mc44 = tex2D(_MainTex, i.uv_MainTex - fixed2(-2, -2) / _Inten).rgb;

float3 c = 0;
c +=      (mc00 + mc40 + mc04 + mc44);                              // weight 1
c += 4  * (mc10 + mc30 + mc14 + mc34 + mc01 + mc41 + mc03 + mc43);  // weight 4
c += 7  * (mc20 + mc24 + mc02 + mc42);                              // weight 7
c += 16 * (mc11 + mc13 + mc31 + mc33);                              // weight 16
c += 26 * (mc21 + mc23 + mc12 + mc32);                              // weight 26
c += 41 * mc22mc;                                                   // weight 41 (center)
c /= 273;                                                           // weights sum to 273
_Inten controls the degree of blur (the sampling offsets are divided by it).
If this unrolled code feels lengthy, it can also be replaced by a for loop.
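The 25 unrolled samples above amount to one weighted sum, and the suggested for-loop version can be sketched in Python (the integer weights and the 273 divisor come from the shader; the tiny test image and the clamp-at-border handling are my assumptions):

```python
# 5x5 integer Gaussian weights (they sum to 273), as in the shader above
W = [[1,  4,  7,  4, 1],
     [4, 16, 26, 16, 4],
     [7, 26, 41, 26, 7],
     [4, 16, 26, 16, 4],
     [1,  4,  7,  4, 1]]

def blur_pixel(img, x, y):
    # Weighted average of the 5x5 neighborhood, clamping at the borders
    h, w = len(img), len(img[0])
    acc = 0.0
    for dy in range(-2, 3):
        for dx in range(-2, 3):
            px = min(max(x + dx, 0), w - 1)
            py = min(max(y + dy, 0), h - 1)
            acc += W[dy + 2][dx + 2] * img[py][px]
    return acc / 273.0

# A single bright point spreads into its neighbors after the blur
img = [[1.0 if (x, y) == (2, 2) else 0.0 for x in range(5)] for y in range(5)]
```

A single bright pixel keeps 41/273 of its value at the center and leaks 26/273 to each direct neighbor, which is exactly the light "bleeding" bloom relies on.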
Then we take the bright part and blend it with the original image.
This part calls Unity's built-in Luminance() function to compute the brightness and multiplies it with the blurred image, so the dark parts are naturally eliminated.
But multiplying directly produces unnatural shadows at the edges of dark areas, because the dark colors are "flooded" too. To avoid this we keep the luminance factor from reaching 0 by adding 0.1, which barely affects the brightness.
float lum = Luminance(c);
c = mc22mc + c * (lum + 0.1) * _Lum;
return float4(c, 1);
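The bright-pass blend can be checked numerically. A Python sketch using the same 0.3/0.59/0.11 luminance weights as the HDR pass above (Unity's Luminance() coefficients may differ slightly; the sample colors are illustrative):

```python
def luminance(rgb):
    # Approximate luminance, same weights as the HDR pass above
    r, g, b = rgb
    return 0.3 * r + 0.59 * g + 0.11 * b

def bloom_combine(original, blurred, lum_scale=1.0):
    # original + blurred * (luminance + 0.1) * _Lum, as in the shader
    lum = luminance(blurred)
    return tuple(o + c * (lum + 0.1) * lum_scale
                 for o, c in zip(original, blurred))

dark  = bloom_combine((0.1, 0.1, 0.1), (0.05, 0.05, 0.05))
brite = bloom_combine((0.9, 0.8, 0.6), (0.9, 0.8, 0.6))
```

A bright blurred pixel contributes far more glow than a dark one, while the +0.1 bias keeps the dark regions from being zeroed out entirely.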
Finally, combined with HDR, this gives the final result of the example above.
The last step is to hook the process up to the camera: we create a C# script responsible for passing in the values.
The code is as follows:
using UnityEngine;
using System.Collections;

[ExecuteInEditMode]
public class HDRGlow : MonoBehaviour
{
    #region Variables
    public Shader curShader;
    private Material curMaterial;
    public float exp = 0.4f;
    public float BM = 0.4f;
    public int inten = 512;
    public float lum = 1f;
    #endregion

    #region Properties
    Material material
    {
        get
        {
            if (curMaterial == null)
            {
                curMaterial = new Material(curShader);
                curMaterial.hideFlags = HideFlags.HideAndDontSave;
            }
            return curMaterial;
        }
    }
    #endregion

    void Start()
    {
        if (!SystemInfo.supportsImageEffects)
        {
            enabled = false;
            return;