Signed Distance Field Shadow in Unity


0x00 Preface

I recently watched a great talk from this year's GDC, in which Sebastian Aaltonen shared some interesting effects achieved with ray tracing.

Among them, he introduced an improvement to Signed Distance Field Shadows, mainly the elimination of some artifacts in SDF shadows.

I first saw Signed Distance Field Shadows on the blog of the great Inigo Quilez. Compared with more traditional shadow implementations such as shadow maps, the visual result is much better: an object's shadow transitions gradually from sharp to blurred as the distance increases, which looks more natural and realistic.

In contrast, the shadow implementations in Unity are much simpler and more rigid.

Here we will implement raymarching in Unity, use SDFs to draw some simple objects, and finally implement the shadow effect.

0x01 Implementing SDFs in Unity

First, the raymarching algorithm processes every pixel on the screen, so in Unity it is natural to implement raymarching as a post-processing (screen-space) effect.
Therefore, the main raymarching logic lives in the fragment shader, while the vertex shader's main job is to read the ray information stored in the vertex attributes and pass it on, interpolated, to the fragment shader for use by every fragment. Here the entire screen is a quad with 4 vertices, and those 4 vertices can store 4 rays. The directions of the 4 rays can be taken directly from the camera frustum's 4 corner rays, which are then interpolated to produce a ray for each fragment.

Here we can directly invoke the Camera.CalculateFrustumCorners method provided by Unity; the relevant documentation is here (https://docs.unity3d.com/ScriptReference/Camera.CalculateFrustumCorners.html).
Here is the signature of this method:

public void CalculateFrustumCorners(Rect viewport, float z, Camera.MonoOrStereoscopicEye eye, Vector3[] outCorners);

The 4 corners we need are returned through the outCorners array passed into this method as a parameter. However, it is important to note that the 4 frustum corner rays returned by this method are in local (camera) space, so we need to transform them into world space for use in the fragment shader.
So now we have 4 vectors, but how can they be delivered to the shader efficiently? Passing each vector separately is inefficient, so we pack the 4 vectors into a matrix and pass the data to the shader as a single matrix.

    Transform camtr = cam.transform;
    Vector3[] frustumCorners = new Vector3[4];
    cam.CalculateFrustumCorners(new Rect(0, 0, 1, 1), cam.farClipPlane, cam.stereoActiveEye, frustumCorners);
    var bottomLeft = camtr.TransformVector(frustumCorners[0]);
    var topLeft = camtr.TransformVector(frustumCorners[1]);
    var topRight = camtr.TransformVector(frustumCorners[2]);
    var bottomRight = camtr.TransformVector(frustumCorners[3]);
    Matrix4x4 frustumCornersArray = Matrix4x4.identity;
    frustumCornersArray.SetRow(0, bottomLeft);
    frustumCornersArray.SetRow(1, bottomRight);
    frustumCornersArray.SetRow(2, topLeft);
    frustumCornersArray.SetRow(3, topRight);
    return frustumCornersArray;

The ray data is now ready, and sending data to a shader in Unity is easy: just call SetMatrix. But here comes a new question: how does the shader know which ray the current vertex corresponds to? If a vertex is matched with the wrong ray, the interpolated result will be incorrect. So in the vertex shader we need an index to fetch the correct ray direction from the incoming matrix.
So how do we determine the index?
As you may have guessed, the UV data of a full-screen quad is very regular, so we can use the UVs in the vertex shader to select the correct ray:

    int index = (int)(v.uv.x + 2 * v.uv.y);
    o.ray = _Corners[index].xyz;

OK. As long as the fragment shader uses the interpolated ray data, it obtains the ray direction for the current fragment. With this, we have delivered the rays into the shader.
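For completeness, the surrounding vertex shader might look like the following minimal sketch. The struct layouts and the `_Corners` property name are assumptions for illustration; only the `index` and `o.ray` lines come from the snippet above:

```hlsl
float4x4 _Corners;   // 4 world-space frustum corner rays, one per row

struct appdata
{
    float4 vertex : POSITION;
    float2 uv     : TEXCOORD0;
};

struct v2f
{
    float4 pos : SV_POSITION;
    float2 uv  : TEXCOORD0;
    float3 ray : TEXCOORD1;   // interpolated across the full-screen quad
};

v2f vert(appdata v)
{
    v2f o;
    o.pos = UnityObjectToClipPos(v.vertex);
    o.uv = v.uv;
    // the quad's UVs are (0,0), (1,0), (0,1), (1,1),
    // so this maps each vertex to matrix rows 0..3
    int index = (int)(v.uv.x + 2 * v.uv.y);
    o.ray = _Corners[index].xyz;
    return o;
}
```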

Next we define an SDF that describes what we are going to render. We can find SDF definitions for many common objects on Inigo Quilez's blog, linked here: (https://iquilezles.org/www/articles/distfunctions/distfunctions.htm).
Let's use an SDF to render a hexagonal prism in Unity:

float sdHexPrism( float3 p, float2 h )
{
    float3 q = abs(p);
    return max(q.z-h.y,max((q.x*0.866025+q.y*0.5),q.y)-h.x);
}

Each object shape requires its own SDF, but it would be very inconvenient to modify the raymarching algorithm every time we want to render a different shape. So we usually define a higher-level abstraction over the SDFs, conventionally called map: its input is a point, and its output is the shortest distance from that point to the surface of the objects the SDFs define.
With the high-level map abstraction, we can easily modify the SDFs inside map as needed, for example merging and combining basic primitives. From this point of view, map actually defines the scene we want to render, so the scene's distance information is fully known; this is exactly what we will use when rendering the shadow.
Now let's look at a simple example. Here is the map used in our hexagonal prism example:

        float map(float3 rp)
        {
            float ret = sdHexPrism(rp, float2(4, 5));
            return ret;
        }

Then we implement the raymarching logic in the fragment shader. With the SDF in place, each marching step can be set according to the SDF's result; I think you've all seen a diagram like this:

As you can see, each marching step advances by the shortest distance from the current sample point to the surface defined by the SDF, until the sample point coincides with the surface, i.e. the ray intersects it.
So we just run a for loop in the fragment shader. Each iteration calls map to get surfaceDistance, the shortest distance from the current sample point to the SDF surface. If surfaceDistance is not (close to) 0, the next marching step is surfaceDistance; if it is (close to) 0, the ray has hit the surface and we only need to determine the color of this fragment.
In addition, we need the camera position rayOrigin as the ray's starting point, which we can pass to the GPU by calling SetVector in the script. We also need the ray direction rayDirection for the fragment, which we already have: it is the interpolated ray from the vertex attributes.
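On the script side, sending both the corner matrix and the camera position each frame might look like this sketch (the property names `_Corners` and `_CamPos`, the `material` and `cam` fields, and the `GetFrustumCornersMatrix` helper wrapping the earlier corner code are illustrative assumptions):

```csharp
private void OnRenderImage(RenderTexture src, RenderTexture dest)
{
    // hypothetical helper wrapping the frustum-corner code shown earlier
    material.SetMatrix("_Corners", GetFrustumCornersMatrix());
    // the camera position becomes rayOrigin in the fragment shader
    material.SetVector("_CamPos", cam.transform.position);
    Graphics.Blit(src, dest, material);
}
```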

So this is a very simple logic:

        fixed4 raymarching(float3 rayOrigin, float3 rayDirection)
        {
            fixed4 ret = fixed4(0, 0, 0, 0);
            int maxStep = 64;
            float rayDistance = 0;
            for(int i = 0; i < maxStep; i++)
            {
                float3 p = rayOrigin + rayDirection * rayDistance;
                float surfaceDistance = map(p);
                if(surfaceDistance < 0.001)
                {
                    ret = fixed4(1, 0, 0, 1);
                    break;
                }
                rayDistance += surfaceDistance;
            }
            return ret;
        }

OK: when the ray intersects the surface, we output red.
Let's take a look at the actual results:

As you can see, the Hierarchy of the scene is empty, yet a solid hexagonal prism appears on the screen.

0x02 Gradients, Normals and Illumination

Of course, this result is not very appealing, so we obviously have to add some lighting to improve its expressiveness. For that, computing the surface normal is a must.
Milo's article "Rendering Light with C (4): Reflection" also covers this: the direction in which the distance field changes fastest is the normal direction. According to vector calculus, the direction of maximum change of a scalar field is its gradient, so this problem becomes computing the gradient of the SDF at the surface position, i.e. its rate of change in each direction, i.e. taking derivatives.
But obviously there is no need to compute the derivatives analytically; an approximation is enough. We often use the central-difference equation below to approximate the SDF gradient, which gives the surface normal at that point:
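Spelled out, with $f$ the SDF (our map) and $\epsilon$ a small offset, the central-difference approximation is:

```latex
n \approx \operatorname{normalize}\!\begin{pmatrix}
f(p + (\epsilon,0,0)) - f(p - (\epsilon,0,0)) \\
f(p + (0,\epsilon,0)) - f(p - (0,\epsilon,0)) \\
f(p + (0,0,\epsilon)) - f(p - (0,0,\epsilon))
\end{pmatrix}
```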

The code is very simple:

        // compute the normal
        float3 calcNorm(float3 p)
        {
            float eps = 0.001;
            float3 norm = float3(
                map(p + float3(eps, 0, 0)) - map(p - float3(eps, 0, 0)),
                map(p + float3(0, eps, 0)) - map(p - float3(0, eps, 0)),
                map(p + float3(0, 0, eps)) - map(p - float3(0, 0, eps))
            );
            return normalize(norm);
        }

We can output the normal information as a color, and we get the result shown here.

And implementing simple diffuse lighting is then trivial:

          ret = dot(-_LightDir, calcNorm(p));
          ret.a = 1;

So we get a hexagonal prism with a simple lighting effect.

0x03 Shadow

The hexagonal prism now has simple diffuse lighting; next we add an SDF-based shadow on top of it. One advantage of SDFs is that all the distance information in the scene is known, so not only is a basic shadow easy to achieve, but it is also natural to implement distance-based shadow attenuation, resulting in a more realistic shadow.
But before that, I'll make the scene a little more complicated. Here I simply added 3 object SDF definitions (a sphere, a plane, and a cube) and modified the map function to reorganize the whole scene.

        float sdSphere(float3 rp, float3 c, float r)
        {
            return distance(rp, c) - r;
        }

        float sdCube( float3 p, float3 b, float r )
        {
            return length(max(abs(p)-b,0.0))-r;
        }

        float sdPlane( float3 p )
        {
            return p.y + 1;
        }

        float map(float3 rp)
        {
            float ret;
            float sp = sdSphere(rp, float3(1.0,0.0,0.0), 1.0);
            float sp2 = sdSphere(rp, float3(1.0,2.0,0.0), 1.0);
            float cb = sdCube(rp+float3(2.1,-1.0,0.0), float3(2.0,2.0,2.0), 0.0);
            float py = sdPlane(rp);
            ret = (sp < py) ? sp : py;
            ret = (ret < sp2) ? ret : sp2;
            ret = (ret < cb) ? ret : cb;
            return ret;
        }

With this, the whole scene becomes the following: 2 spheres, a cube, and a plane.

Now let's implement the shadow. The formation of a shadow is itself very simple: along the direction of the light, if the light is blocked by a surface, a shadow falls on the surfaces behind it.
In code, a simple SDF-based shadow is equally simple: take a sample point that has reached an object's surface as the origin, and march another ray from it toward the light source. If this ray also hits an object's surface, the sample point is in shadow. In other words, it is just raymarching again.
Let's start with the simplest shadow implementation, where the shadow is uniformly black.

        float calcShadow(float3 rayOrigin, float3 rayDirection)
        {
            float maxDistance = 64;
            float rayDistance = 0.01;
            while(rayDistance < maxDistance)
            {
                float3 p = rayOrigin + rayDirection * rayDistance;
                float surfaceDistance = map(p);
                if(surfaceDistance < 0.001)
                {
                    return 0.0;
                }
                rayDistance += surfaceDistance;
            }
            return 1.0;
        }

Of course, note that the first iteration must not start exactly at the surface sample point (hence the small 0.01 offset); otherwise map would return roughly 0 immediately and the function would exit at once.
OK, so a hard shadow is created: no extra passes, no extra textures. Creating a shadow with SDFs is as simple as that.

As we all know, a shadow usually consists of the so-called umbra and penumbra: the umbra is the region of a surface that receives no direct light from the source, appearing fully black, while the penumbra is the half-lit, half-dark transition region. As we can see, the shadow we implemented only contains the umbra, with no penumbra.
So, adding some not-quite-black penumbra around this pure-black umbra will make the final shadow more realistic. Next we consider the color of those surface points just outside the black umbra.
At this point we take the distance factor into account:

      ret = min(ret, 10 * surfaceDistance / rayDistance);
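Integrated into the marching loop, a soft-shadow variant of calcShadow might look like this sketch (the constant 10 is the same hardness factor as in the line above; the function name is an assumption):

```hlsl
float calcSoftShadow(float3 rayOrigin, float3 rayDirection)
{
    float ret = 1.0;
    float maxDistance = 64;
    float rayDistance = 0.01;   // offset so we don't self-intersect at the start
    while (rayDistance < maxDistance)
    {
        float surfaceDistance = map(rayOrigin + rayDirection * rayDistance);
        if (surfaceDistance < 0.001)
            return 0.0;   // occluded: fully in the umbra
        // how closely this ray grazed a surface, relative to the distance travelled
        ret = min(ret, 10 * surfaceDistance / rayDistance);
        rayDistance += surfaceDistance;
    }
    return ret;   // 1 = fully lit, values in between = penumbra
}
```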


As you can see, the shadow is no longer cut off sharply as in the original implementation; there is now a ring of blurred penumbra as a transition.
However, the sharp-eyed among you will surely have spotted a problem: the penumbra of the cube shows a banded artifact.

This is mainly caused by the sampling done during the shadow raymarching.
On this year's GDC, Sebastian Aaltonen shared a new solution to the problem:

Based on the previous sample's distance and the current sample's distance, it calculates (or rather estimates) the point E on the ray closest to the SDF surface, and uses E to compute the penumbra.
In the talk, Sebastian also gave his revised penumbra formula:
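The formula can be read off from the code below: with $h$ the current sample's distance, $ph$ the previous one, $t$ the distance travelled along the ray, and $k$ a hardness factor (10 in the code), the closest-point estimate and the penumbra term are:

```latex
y = \frac{h^2}{2\,ph}, \qquad
d = \sqrt{h^2 - y^2}, \qquad
res = \min\!\left(res,\ \frac{k\,d}{\max(0,\; t - y)}\right)
```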

In fact, Inigo has since improved his own SDF shadows based on Sebastian's talk. Below we fix the penumbra artifact in Unity following Inigo's and Sebastian's implementations.

        // Adapted from: iquilezles
        float calcSoftshadow( float3 ro, float3 rd, float mint, float tmax)
        {
            float res = 1.0;
            float t = mint;
            float ph = 1e10;

            for( int i=0; i<32; i++ )
            {
                float h = map( ro + rd*t );
                float y = h*h/(2.0*ph);
                float d = sqrt(h*h-y*y);
                res = min( res, 10.0*d/max(0.0,t-y) );
                ph = h;
                t += h;
                if( res<0.0001 || t>tmax )
                    break;
            }
            return clamp( res, 0.0, 1.0 );
        }

Here ph is the distance (the radius of the unoccluded sphere) at the previous sample, and h is that of the current sample.
Modified Shadow Effect:

0x04 PostScript

In this way, we have implemented SDF rendering and SDF-based shadow rendering in Unity, and eliminated the annoying banding artifact.

The items in this article can be obtained here:
https://github.com/chenjd/Unity-Signed-Distance-Field-Shadow

