Foreword
It has been about a year since I last wrote an article. Some time ago, while going through the SIGGRAPH 2015 courses (the rendering-related ones can all be found on Selfshadow's blog, which is very comprehensive), I came across one on volumetric cloud rendering. The course was given by developers of the dynamic weather system for the game "Horizon: Zero Dawn", and it focuses on the simulation and rendering of the clouds; it is well worth studying.
In that course, the cloud modeling is mainly based on raymarching; the inspiration seems related to Shadertoy, but with more programmatic control, artistic direction, and so on. As the picture above shows, the effect is very good.
The SIGGRAPH talk is mainly about 3D dynamic volumetric cloud rendering, which suits large-scale PC and console games. Later, on IQ's blog, I found an article my idol wrote back in 2005 on 2D dynamic cloud simulation; the algorithm it uses is relatively simple and computationally very cheap. That article was written about ten years ago, when computing resources were very limited, so it proposed many tricks to improve performance and reduce memory footprint. Computers are much more powerful now, but these tricks have not left the stage: they still pay off on mobile platforms. This article introduces the methods from that IQ article.
Dynamic clouds
Think about how we usually build a sky background today. We first prepare a hemispherical sky dome to act as the backdrop, and then typically several images containing sky elements, such as a blue sky and a few white clouds. Each image is used as a background layer and assigned to a material that animates the textures, scrolling each layer to simulate the slow drift of clouds. This method is simple and effective, so it is widely used.
Sometimes, however, we want the sky background not to be fixed in advance, or we want to implement a weather system that produces natural changes as the weather shifts. In that case, ready-made images can no longer simulate the clouds and sky. This article discusses how to simulate clouds dynamically and procedurally. Although the effect achieved here is fairly modest, I believe that with cooperation between programmers and artists, readers with this need can take inspiration from it and achieve very beautiful results.
The computational cost of the method in this article is very low. Before reading on, you need some understanding of noise; if you lack it, see the earlier article on noise, "Graphics: Talking About Noise". By the end of this article we will have a simple sky simulation, including sky color, stars, and drifting clouds, while letting the user adjust the clouds' color, thickness, sharpness, and so on. The video below shows rainy and sunny weather; the sky portion is simulated with the method described in this article.
Algorithm implementation
In fact, our focus is only the simulation of the clouds; the sky color and so on can be handled by another shader or texture. For example, in the video above I used a separate pass to render the sky, stars, and other effects. Here we only explain the cloud simulation.
The clouds are simulated with fractal noise, whose value corresponds to the thickness of the cloud. So how do we simulate irregular, changing clouds? The natural answer is a continuously changing two-dimensional noise texture. In the earlier noise article, we discussed using a three-dimensional noise texture to obtain a smoothly changing 2D noise texture, with the third sampling coordinate corresponding to time. However, a 3D texture costs a lot of memory, and our goal is real-time rendering with as little computation as possible, so that method is out.
Now comes the key point, which relies on a small trick. Recall that fractal noise is a weighted sum of several noise layers sampled at different frequencies (called octaves). To make the final fractal noise change over time, we can translate these octave layers at different speeds, so that the resulting fractal noise changes continuously. The number of layers does not need to be large: the implementation in this article, like the one in IQ's article, uses only 4 octaves.
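In symbols, if $n_i$ is the $i$-th octave and $v_i$ its scroll velocity, the animated fractal noise at texture coordinate $x$ and time $t$ is the weighted sum below (the weights match the concrete formula given later; the notation is mine, not IQ's):

$$fbm(x, t) = \sum_{i=0}^{3} 0.5^{\,i+1} \, n_i(x + v_i t)$$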
Now, the first step is to create these octaves.
Creating the noise textures
The first step is to create the individual octaves that make up the fractal noise. This differs slightly from the earlier noise article: because we need to translate these textures continuously, they must connect seamlessly, i.e., the noise textures must be tileable (seamless). To get a seamless 2D noise texture, the usual method is to evaluate a 4D noise function along two orthogonal circles in 4D space, mapping each 2D texture coordinate to a point on those circles. The reasoning and algorithm can be found at the following links:
- http://ronvalstar.nl/creating-tileable-noise-maps
- http://gamedev.stackexchange.com/questions/23625/how-do-you-generate-tileable-perlin-noise
I directly used the code on the Unity wiki, which uses 4D simplex noise to produce seamless 2D noise textures. The reason for using simplex rather than Perlin noise is that simplex noise is much cheaper in higher dimensions; see the earlier noise article for details.
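To make the idea concrete, here is a minimal HLSL-style sketch of the circle mapping. It assumes a 4D simplex noise function snoise(float4) is available (ported, say, from the Unity wiki code; it is not a built-in shader function), and in practice this evaluation happens offline in the texture generator rather than per frame:

    #define TWO_PI 6.28318530718

    // Assumed: a 4D simplex noise function, e.g., ported from the Unity wiki.
    float snoise(float4 p);

    float seamlessNoise(float2 uv, float radius)
    {
        // Map u and v onto two orthogonal circles in 4D space; moving across
        // the texture traces closed loops, so the result tiles in both axes.
        float a = uv.x * TWO_PI;
        float b = uv.y * TWO_PI;
        float4 p = radius * float4(cos(a), sin(a), cos(b), sin(b));
        return snoise(p) * 0.5 + 0.5; // remap from [-1, 1] to [0, 1]
    }

A larger radius sweeps a longer path through the noise and thus yields a higher-frequency texture, which is one way the different octaves can be generated from the same function.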
In this way, we get 4 noise textures, together with the fractal noise formed by their weighted sum:
At run time, we just need to translate these noise layers at different speeds; doing so does not break the illusion of clouds. IQ says that the higher the frequency of a noise layer, the faster it should move, though in my experience the reverse also looks fine. Anyway, let's just do as the idol says. IQ also notes that the direction of movement is not critical. The following code shows the speeds and directions I used:
    sampler2D _Octave0;
    sampler2D _Octave1;
    sampler2D _Octave2;
    sampler2D _Octave3;
    float4 _Octave0_ST;
    float4 _Octave1_ST;
    float4 _Octave2_ST;
    float4 _Octave3_ST;

    v2f vert (appdata v) {
        v2f o;
        o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
        // Scroll each octave in its own direction; higher octaves move faster
        o.uv0.xy = TRANSFORM_TEX(v.texcoord, _Octave0) + _Time.x * 1.0 * _Speed * half2(1.0, 0.0);
        o.uv0.zw = TRANSFORM_TEX(v.texcoord, _Octave1) + _Time.x * 1.5 * _Speed * half2(0.0, 1.0);
        o.uv1.xy = TRANSFORM_TEX(v.texcoord, _Octave2) + _Time.x * 2.0 * _Speed * half2(0.0, -1.0);
        o.uv1.zw = TRANSFORM_TEX(v.texcoord, _Octave3) + _Time.x * 2.5 * _Speed * half2(-1.0, 0.0);
        return o;
    }
_Octave0 is the lowest-frequency noise texture and _Octave3 the highest-frequency one. We compute the sampling coordinates of the four textures in the vertex shader, pack them into two half4 interpolators, and pass them to the fragment shader.
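For reference, here is a minimal sketch of the matching ShaderLab properties. The names follow the code snippets in this article; the default values are illustrative guesses, not the ones from the actual project:

    Properties {
        _Octave0 ("Octave 0 (lowest frequency)", 2D) = "white" {}
        _Octave1 ("Octave 1", 2D) = "white" {}
        _Octave2 ("Octave 2", 2D) = "white" {}
        _Octave3 ("Octave 3 (highest frequency)", 2D) = "white" {}
        _CloudColor ("Cloud Color", Color) = (1, 1, 1, 1)
        _Speed ("Scroll Speed", Float) = 1.0
        _Emptiness ("Emptiness (lower threshold)", Range(0, 1)) = 0.3
        _Sharpness ("Sharpness (upper threshold)", Range(0, 1)) = 0.8
    }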
In the fragment shader, we then only need to compute the fractal noise value fbm with a formula like the following:
$$fbm = 0.5 \cdot n_0 + 0.5^2 \cdot n_1 + 0.5^3 \cdot n_2 + 0.5^4 \cdot n_3$$
The resulting fbm can be used as the thickness of the cloud at that location. We also want to control how sparse and sharp the clouds are, for example the scattered, sharp clouds of the sunny scene in the video versus the dense, overcast feel of the rainy one. This can be done with two thresholds: every fbm value below the lower threshold is clamped up to it, every value above the upper threshold is clamped down to it, values in between are kept, and finally the clamped range is remapped to 0~1. For example, with _Emptiness = 0.3 and _Sharpness = 0.8, an fbm of 0.55 is remapped to 0.5, anything below 0.3 becomes empty sky, and anything above 0.8 becomes fully opaque cloud. As illustrated below (source: IQ's blog):
This process is very simple to implement; the corresponding fragment shader code is as follows:
    float4 n0 = tex2D(_Octave0, i.uv0.xy);
    float4 n1 = tex2D(_Octave1, i.uv0.zw);
    float4 n2 = tex2D(_Octave2, i.uv1.xy);
    float4 n3 = tex2D(_Octave3, i.uv1.zw);
    float4 fbm = 0.5 * n0 + 0.25 * n1 + 0.125 * n2 + 0.0625 * n3;
    fbm = (clamp(fbm, _Emptiness, _Sharpness) - _Emptiness) / (_Sharpness - _Emptiness);
With the remapped fbm, we could in theory use this value directly to blend the cloud color with the sky background color. We can separate the cloud pass from the background pass, store the fbm value in the alpha channel of the output color and the cloud color in the RGB channels, and let the blending instructions mix it with the background color.
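With Blend SrcAlpha OneMinusSrcAlpha enabled in the cloud pass, the hardware performs the standard alpha blend, so the sky shows through where the cloud is thin:

$$c_{final} = fbm \cdot c_{cloud} + (1 - fbm) \cdot c_{sky}$$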
The previous procedure can be roughly represented by the following pseudo-code:
    Pass {
        // Render the sky background
        ...
    }
    Pass {
        // Render one cloud layer
        // Enable blending and set blend factors to mix with the previous pass
        Blend SrcAlpha OneMinusSrcAlpha
        ...
        fixed4 frag (v2f i) : SV_Target {
            fixed4 col;
            ...
            col.rgb = _CloudColor.rgb;
            col.a = fbm;
        }
    }
    // Multiple cloud passes can be rendered
    Pass {
        // Same as the previous pass, but with different scroll speeds and directions for the noise UVs
    }
IQ points out, however, that blending directly like this makes the whole thing look flat. Borrowing the idea of raymarching, he factors the direction of the sun (or moon) into the cloud color, which makes the result look much more three-dimensional.
Adding raymarching
The raymarching idea used here is very simple. Assume the cloud layer is observed from the ground looking up at the sky, i.e., each cloud pixel we render corresponds to a point at the bottom of the simulated cloud layer, shown as the blue dot in the figure. As light travels from the sun through the cloud to the blue dot, the longer its path inside the cloud, the more it is attenuated. Our goal is therefore to estimate the distance the light travels through the cloud.
This can be approximated with raymarching. We know the cloud thickness at the blue dot, given by fbm. Now take a small step along the light direction, to the point corresponding to the first orange line in the figure, and compare the cloud thickness at that point against the height of the orange line: if the thickness is greater than the height, the sample point is inside the cloud; otherwise it is outside. We take several sample points, 4 in this article's implementation, and the fraction of sample points inside the cloud determines the shading value of the pixel.
The problem now is obtaining the cloud thickness fbm at these sample points. We could march a few steps along the light direction in the fragment shader, project each step onto the sky dome to get its sampling coordinates, and sample there; but with four noise textures and four raymarching steps, that costs 16 sampling operations. A more efficient approach is to fold the samples of each noise texture into a single fetch. How? When generating each noise texture, we store the noise translated by each raymarching step in the different RGBA channels: the untranslated noise value goes in the R channel, the noise translated along the light direction by one raymarching step goes in the G channel, by two steps in the B channel, and by three steps in the A channel. This way, a single tex2D operation gives us the values at all four sample points.
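As a sketch of what the generator bakes into one octave: the actual implementation is an editor script, seamlessNoise is the function sketched earlier, and lightStepUV (the per-step UV offset along the projected light direction) is an illustrative assumption:

    float4 bakeTexel(float2 uv, float2 lightStepUV, float radius)
    {
        float4 texel;
        texel.r = seamlessNoise(uv, radius);                     // 0 steps along the light
        texel.g = seamlessNoise(uv + 1.0 * lightStepUV, radius); // 1 step
        texel.b = seamlessNoise(uv + 2.0 * lightStepUV, radius); // 2 steps
        texel.a = seamlessNoise(uv + 3.0 * lightStepUV, radius); // 3 steps
        return texel;
    }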
The next step is comparing the cloud thickness against the heights of the sample points. We use a variable ray to store these height values; because the step distances are fixed, ray is a constant. We subtract ray from fbm: a negative component means that sample point is outside the cloud, a positive one means inside. A max operation clamps the results to the part greater than or equal to 0, and averaging the four components gives the final result.
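In formula form, with the fixed sample heights $ray = (0.0, 0.2, 0.4, 0.6)$ used in the code below, the occlusion factor is:

$$amount = \frac{1}{4} \sum_{i=0}^{3} \max(fbm_i - ray_i,\, 0)$$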
Finally, our fragment shader code is as follows:
    fixed4 frag (v2f i) : SV_Target {
        fixed4 col = 0;
        float4 n0 = tex2D(_Octave0, i.uv0.xy);
        float4 n1 = tex2D(_Octave1, i.uv0.zw);
        float4 n2 = tex2D(_Octave2, i.uv1.xy);
        float4 n3 = tex2D(_Octave3, i.uv1.zw);
        float4 fbm = 0.5 * n0 + 0.25 * n1 + 0.125 * n2 + 0.0625 * n3;
        fbm = (clamp(fbm, _Emptiness, _Sharpness) - _Emptiness) / (_Sharpness - _Emptiness);
        // Fixed heights of the 4 raymarching sample points
        fixed4 ray = fixed4(0.0, 0.2, 0.4, 0.6);
        // Average the positive thickness differences over the 4 samples
        fixed amount = dot(max(fbm - ray, 0), fixed4(0.25, 0.25, 0.25, 0.25));
        col.rgb = amount * _CloudColor.rgb + 2.0 * (1.0 - amount) * 0.4;
        col.a = amount * 1.5;
        return col;
    }
The amount above is the result of our calculation. Note that the cloud color computation adds a grayish term; this is purely an aesthetic choice made by eye, and you can use other formulas instead.
Code
Once you understand the method above, the code is not difficult. In my implementation, I wrote an editor script to generate the noise textures, using the code from the Unity wiki to generate seamless noise. Below is the build dialog, where you can choose the output texture size and the light direction:
When you click the Generate button, four noise textures are generated within the specified folder:
Because all of their RGBA channels contain values, they appear as translucent textures with a certain tint.
We assign these four noise textures to a material whose shader contains two passes: one pass creates the sky background (including some gradient colors and stars), and the next pass renders the cloud layer we have been discussing.
The sky layer is purely for demonstration purposes and can be replaced with any custom background.
The complete code can be downloaded here.
Afterword
The method described in this article is very simple and the effect limited, but if you are developing a mobile game with similar needs, you may find it worth borrowing.
Shadertoy hosts many more complex 3D volumetric cloud examples that use raymarching for both modeling and rendering, such as the "Clouds" shader. The main idea of the SIGGRAPH approach mentioned at the beginning is also raymarching. IQ's blog has many articles that are both valuable and algorithmically approachable; I recommend reading more of them.
Reference Links:
- IQ's original article: http://www.iquilezles.org/www/articles/dynclouds/dynclouds.htm
- SIGGRAPH 2015 course on 3D dynamic volumetric cloud rendering: http://advances.realtimerendering.com/s2015/index.html
"Unity Shader" 2D Dynamic Cloud