Aura #01 (Unity atmospheric effects plugin)


(Work in progress www)

Honestly, I didn't know what to call this, hence the title www

Aura is an open-source plugin for Unity that adds nicer atmospheric effects (volumetric lighting, volumetric fog, and so on):

Links:
Asset Store: https://assetstore.unity.com/packages/vfx/shaders/aura-volumetric-lighting-111664
GitHub: https://github.com/raphael-ernaelsten/aura

The effect looks roughly like this:

(all images are from the official page (x)

So how is it actually implemented? www

(I was sure I wouldn't be able to make sense of it over this holiday, but in the end I did manage to understand a fair bit www (the implementation turns out to be quite straightforward and simple www), so at least I'm a little better off than before (escape

The Aura GitHub page (linked above) includes a diagram showing the implementation flow:

The first part corresponds to Aura's various light sources (directional lights, point lights, etc.) and its various volumes (fog regions). (In Aura a light source does not need to hit a fog volume and scatter in order to produce volumetric light; a single light source on its own already gives the volumetric lighting effect www)
At this stage they are just data structures in memory (the shapes of the volumes, the parameters of the lights, and so on), waiting for later steps to send them to the compute shader, which calculates the final lighting result.

In a nutshell, the whole process computes a scattered-light color for each point in front of the camera, accumulates those colors along the view direction, and applies the result to the rendered image.

The main lighting calculation happens in Aura.Frustum.ComputeData() (Aura/Classes/Frustum.cs:147). ComputeData() is called from UpdateFrustum() (Aura.cs:351) in Aura's main class (Aura.cs), and UpdateFrustum() is in turn called from OnRenderImage(src, dest) (Aura.cs:174) in the same class. OnRenderImage() finishes drawing the image; its two parameters (both RenderTextures, i.e. the frame about to be shown on screen), src and dest, are the input and output of the function. Inside OnRenderImage(), a post-process pass (a pixel shader, Aura/Shaders/Shaders/PostProcessShader.shader, described later) takes the final lighting result computed by the compute shader (a 3D texture, corresponding to "Integrated Volumetric Lighting" at the bottom of the diagram; it is the product of UpdateFrustum() and is described later) and applies it to the image Unity rendered in the usual way, producing the final result.
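To make that control flow concrete, here is a minimal C# sketch of the structure described above (my own simplification; the class, field names and the _VolumetricLightingTexture property are made up and this is not Aura's actual code):

using UnityEngine;

[RequireComponent(typeof(Camera))]
public class VolumetricCompositeSketch : MonoBehaviour
{
    public ComputeShader dataComputeShader;          // per-voxel lighting pass (the ComputeData step)
    public ComputeShader accumulationComputeShader;  // front-to-back accumulation pass (described later)
    public Material postProcessMaterial;             // pixel shader that composites the fog
    public RenderTexture volumetricLightingTexture;  // the resulting 3D texture

    void OnRenderImage(RenderTexture src, RenderTexture dest)
    {
        if (postProcessMaterial == null) { Graphics.Blit(src, dest); return; }

        UpdateFrustum();  // fills volumetricLightingTexture via the compute shaders

        // Hand the 3D texture to the post-process shader and composite it onto the frame.
        postProcessMaterial.SetTexture("_VolumetricLightingTexture", volumetricLightingTexture);
        Graphics.Blit(src, dest, postProcessMaterial);
    }

    void UpdateFrustum()
    {
        // 1) upload the light/volume data structures,
        // 2) dispatch dataComputeShader (one thread per voxel),
        // 3) dispatch accumulationComputeShader,
        // all of which ends up in volumetricLightingTexture.
    }
}

The point is simply that all the volumetric work happens before the Blit, and the Blit itself is an ordinary full-screen post-process.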

↑ Input image (the normal rendering result)

↑ The compute shader's result (I couldn't find a good way to visualize the 3D texture, so this is one "slice" of it; you can see the light beam going from the upper left toward the right)

↑ The final image, obtained by applying the atmospheric effect to the rendering result with PostProcessShader.

(1) Calculation of illumination (atmospheric effect):

First, the view frustum (the cone of sight in front of the camera) in the current view space is divided into many small cells according to the fineness and rendering range chosen in the settings (the Resolution and Range settings on the camera's Aura component).

↑ This cone is the frustum.

After the frustum has been cut into small cells, each cell corresponds to one voxel of the final 3D texture (a voxel being the 3D extension of the pixel concept). At this stage the cells do not depend on each other, so the work is highly parallelizable (each thread computes one voxel) and is handed over to the compute shader. The per-voxel computation lives in ComputeDataComputeShader.compute.
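As a rough illustration of that frustum-to-voxel mapping (my own sketch with made-up names and a made-up grid size, not code from Aura; Aura additionally biases the depth distribution, which is ignored here), a thread index in the W×H×D grid can be mapped back to a world-space position like this:

using UnityEngine;

public static class FroxelMappingSketch
{
    // id:       the voxel (thread) index in the grid
    // gridSize: e.g. (160, 90, 128), the "resolution" setting
    // range:    how far the fog extends from the camera, the "range" setting
    public static Vector3 VoxelCenterToWorld(Vector3Int id, Vector3Int gridSize, Camera cam, float range)
    {
        // center of the voxel in normalized [0,1]^3 frustum coordinates
        Vector3 normalized = new Vector3(
            (id.x + 0.5f) / gridSize.x,
            (id.y + 0.5f) / gridSize.y,
            (id.z + 0.5f) / gridSize.z);

        // depth of this slice, here simply linear between the near plane and the range
        float depth = Mathf.Lerp(cam.nearClipPlane, range, normalized.z);

        // reconstruct the world-space position on that depth slice
        return cam.ViewportToWorldPoint(new Vector3(normalized.x, normalized.y, depth));
    }
}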

Before this, the current depth buffer goes through some processing (ComputeMaximumDepthComputeShader.compute) to extract depth information for the frustum grid (for each (x, y) column of the grid, how far the camera can see):

(This computed depth map essentially condenses the information of the original depth image down to a depth image at the resolution of the frustum's 3D texture.)

Based on this depth map, voxels that can have no effect (those hidden behind scene geometry) are culled.
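For intuition, here is a rough CPU-side sketch of what this maximum-depth pass produces (purely illustrative, assuming a linear depth buffer where larger means farther; the real work is done in the compute shader): for every froxel column (x, y) we keep the farthest depth among the screen pixels it covers, and a voxel whose slice lies beyond that depth is skipped later.

using UnityEngine;

public static class MaxDepthSketch
{
    // depthBuffer: full-resolution linear depth, larger = farther (an assumption for this sketch)
    // gridW/gridH: the froxel grid resolution in x and y
    public static float[,] MaxDepthPerColumn(float[,] depthBuffer, int gridW, int gridH)
    {
        int srcW = depthBuffer.GetLength(0);
        int srcH = depthBuffer.GetLength(1);
        float[,] result = new float[gridW, gridH];

        for (int x = 0; x < srcW; x++)
        for (int y = 0; y < srcH; y++)
        {
            int gx = x * gridW / srcW;   // which froxel column this pixel falls into
            int gy = y * gridH / srcH;
            result[gx, gy] = Mathf.Max(result[gx, gy], depthBuffer[x, y]);
        }
        return result;
    }
}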

Back to the lighting calculation:

Skipping past the first sixty-five-thousand-or-so lines of macro definitions (... depending on the user's settings (which light types are used or ignored, etc.) there can be up to 32768 combinations, hence that many macro variants), the calculation that follows is very straightforward:

(Some of what follows may not be entirely accurate; I only have a general understanding www (there is just too much here Orz))

1. First obtain the world-space coordinates corresponding to the current voxel. Some jitter (noise) is added to this position so the result looks softer (noise is used to make up for the limited resolution; this trick shows up in many more places later).

2. Calculate the contribution of each Volume to that point (a color plus a "density", which is roughly an alpha value).

3. Calculate each light source's contribution to the point's color. The light's shadow map is used first to decide whether the point lies in shadow: if it does, nothing is added (+0); if it is lit, the light's color (with the corresponding attenuation) is added to the point's color. Things like light cookies can also be evaluated here.

4. Apply some small fix-ups to the point's final color, such as clamping it to be non-negative.

5. Blend the previous frame's result with the current frame's result to make the output softer. This step is critical: if the previous frame's result is not reused, very obvious artifacts show up in the final image, but after blending the quality improves a lot. Aura's default is 90% previous frame and 10% current frame, which is an ideal parameter at 60 FPS (why does momentum always end up at 0.9 (x)). Because the camera moves, the previous frame's frustum is slightly offset from the current frame's, so during blending a series of matrix transforms reprojects the current voxel's position into the previous frame's frustum before sampling the previous frame's result. (A rough sketch of these five steps follows below.)
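Put together, the five steps look roughly like the following C# transliteration (my own illustration of the logic with made-up names and signatures; the real thing is an HLSL compute shader, and step 1's jittered sampling is assumed to have already happened when the input arrays were filled):

using UnityEngine;

public static class VoxelLightingSketch
{
    // Returns rgb = scattered color of this voxel, w = density.
    public static Vector4 ComputeVoxel(
        Vector4[] volumeContributions,   // step 2: rgb = color, w = density, per overlapping volume
        Vector3[] lightColors,           // step 3: per light, color already scaled by distance attenuation
        float[] shadowFactors,           // step 3: 1 = fully lit, 0 = fully in shadow (from the shadow map)
        Vector4 previousFrameValue,      // step 5: value reprojected from last frame's 3D texture
        float previousFrameWeight = 0.9f)
    {
        Vector4 result = Vector4.zero;

        // step 2: accumulate each volume's color and density
        foreach (Vector4 v in volumeContributions)
            result += v;

        // step 3: add each light's contribution, masked by its shadow term
        for (int i = 0; i < lightColors.Length; i++)
        {
            Vector3 c = lightColors[i] * shadowFactors[i];
            result += new Vector4(c.x, c.y, c.z, 0f);
        }

        // step 4: small clean-up, e.g. keep the value non-negative
        result = Vector4.Max(result, Vector4.zero);

        // step 5: blend with the (reprojected) previous frame to suppress flicker
        return Vector4.Lerp(result, previousFrameValue, previousFrameWeight);
    }
}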

At this point we have a color for each point. Intuitively, this color is the light scattered by the atmosphere inside that cell, so each cell is effectively a small light source (of that color). But so far we only have the light-source color of each cell in front of the camera; we have not yet computed how all these little lights affect what the camera actually sees.

(I won't paste that code here; it is too long and fragmented (

So the second step is to accumulate the colors of all these little light sources.

The corresponding file is ComputeAccumulationComputeShader.compute ("accumulation" as in adding things up):

In addition, we need to account for how the scattered light is attenuated along its propagation path. An exp (exponential) function is used as the attenuation function (because if the intensity is multiplied by some constant factor a < 1 after every unit of distance, then after a distance x it has been scaled by a^x, which is exactly an exponential). The process is also simple:

1. Each thread (thread index (x, y, z)) calculates the final color of the light that starts at the current cell (x, y, z) and propagates to the camera at (x, y, 0). This also takes into account the light of every cell along the way, not just the starting cell.

2. A loop computes the light being transported from the far end toward the camera. The loop in Aura's code is a bit roundabout: it iterates from the camera (z = 0) out to the current cell's z (i.e. from the camera toward the cell), but the attenuation is applied in the reverse order, so it actually models light attenuating as it travels from the current cell to the camera.

half4 Accumulate(half4 colorAndDensityFront, half4 colorAndDensityBack)
{
    half transmittance = exp(-colorAndDensityBack.w * layerDepth);
    half4 accumulatedLightAndTransmittance = half4(
        colorAndDensityFront.xyz + colorAndDensityBack.xyz * (1.0f - transmittance) * colorAndDensityFront.w,
        colorAndDensityFront.w * transmittance);
    // Note: front + back * (1.0f - transmittance) here -- this is the reversed-order accumulation mentioned above.
    return accumulatedLightAndTransmittance;
}

[numthreads(NUM_THREAD_X, NUM_THREAD_Y, NUM_THREAD_Z)]
void RayMarchThroughVolume(uint3 id : SV_DispatchThreadID)
{
    // Get the current point's (normalized) coordinates
    half3 normalizedLocalPos = GetNormalizedLocalPositionWithDepthBias(id);

    #if ENABLE_OCCLUSION_CULLING
    // Occlusion culling
    [branch]
    if (IsNotOccluded(normalizedLocalPos.z, id.xy))
    // TODO : maybe could be optimized by using a mask value in the data texture
    #endif
    {
        // Set the initial value
        half4 currentSliceValue = half4(0, 0, 0, 1);
        half4 nextValue = 0;

        // Loop to accumulate the attenuation
        [loop]
        for (uint z = 0; z < id.z; ++z)
        {
            nextValue = SampleLightingTexture(uint3(id.xy, z));
            currentSliceValue = Accumulate(currentSliceValue, nextValue);
        }

        half4 valueAtCurrentZ = SampleLightingTexture(id);
        currentSliceValue = Accumulate(currentSliceValue, valueAtCurrentZ);

        // Write the result into the 3D texture
        WriteInOutputTexture(id, currentSliceValue);
    }
}

The final step takes the accumulated, attenuated 3D texture from the previous step and uses PostProcessShader.shader to produce the final image color. The method is also very simple: get the depth at the current pixel, convert it to the corresponding view-space (frustum) coordinates, sample the 3D texture there, add a little noise, and composite the sampled color onto the original image to obtain the final result:

float4 Aura_GetFogValue(float3 screenSpacePosition)
{
    // Aura_VolumetricLightingTexture : the accumulated/attenuated 3D texture (the final result of the compute shader)
    return tex3Dlod(Aura_VolumetricLightingTexture,
                    float4(screenSpacePosition.xy, Aura_RescaleDepth(screenSpacePosition.z), 0));
}

void Aura_ApplyFog(inout float3 colorToApply, float3 screenSpacePosition)
{
    // Add some noise
    screenSpacePosition.xy += GetBlueNoise(screenSpacePosition.xy, 3).xy;

    float4 fogValue = Aura_GetFogValue(screenSpacePosition);

    // Add a little more noise
    float4 noise = GetBlueNoise(screenSpacePosition.xy, 4);

    // Composite the color
    colorToApply = colorToApply * (fogValue.w + noise.w) + (fogValue.xyz + noise.xyz);
}

fixed4 frag(v2f psIn) : SV_Target
{
    // Convert the depth value
    float depth = tex2D(_CameraDepthTexture, psIn.uv);
    depth = LinearEyeDepth(depth);

    // _MainTex is the normally rendered final image
    float4 backColor = tex2D(_MainTex, psIn.uv);
    Aura_ApplyFog(backColor.xyz, float3(psIn.uv, depth));  // see the function definition above

    return backColor;
}

(I'll add a few GIFs over the next few days to make this a bit more intuitive www.)

Then I threw it into a scene of my own www. The effect is not bad, right? I haven't fine-tuned anything though www, so it's just for fun (x

Because the whole computation is done within a limited range of view space, it works for most scenes, which is one of its more satisfying points www.

That's all www

