Recently I saw an article on understanding cloud rendering (https://zhuanlan.zhihu.com/p/34836881?utm_medium=social&utm_source=qq).
The article briefly describes a method of generating the cloud volume from noise, along with a lighting model.
I was very interested after reading it, and since my undergraduate thesis project was on volume rendering, I decided to knock together a copy of it in Unity.
The original source (the talk the article references) is a SIGGRAPH 2015 presentation from the Horizon: Zero Dawn team (http://advances.realtimerendering.com/s2015/The%20Real-time%20volumetric%20cloudscapes%20of%20horizon%20-%20zero%20dawn%20-%20artr.pdf), which explains things more clearly.
What left the deepest impression on me was a video (https://www.youtube.com/watch?v=FhMni-atg6M).
To clarify, the cloud rendering built by the Horizon team is in service of the game's weather system.
The weather system simulates the state of the clouds (distribution, density, and so on), and the clouds are then rendered from that information.
The video above shows the weather system simulating a rainy scene (its output is visible in the bottom-left corner, with a rain cloud in the middle). You can watch the rain cloud approach the viewpoint, and then the rain starts; the sense of presence is very strong.
This article focuses on implementation, especially the details; everything already covered by the sources above will not be repeated.
First, the results of my implementation:
Three screenshots with different height signals and coverage textures; the third one tries to imitate the video's effect (emmmm, not very successfully).
The original talk is divided into four parts: modeling, lighting, rendering, and optimization. This article follows the same order and explains the various odd problems along the way.
Modeling
Modeling means generating the clouds' shape from noise baked into 3D textures.
The original uses two 3D textures. The first packs Perlin-Worley noise plus Worley noise at three different frequencies, and the cloud model obtains the density at a point by raymarching through this texture.
Here is the first odd point: why pack four noise maps, when the raymarch ultimately only needs a single float per sample?
Thinking it over, the only possibility seems to be that the four channels correspond to different kinds of clouds. As described below, this system can specify which kind of cloud is rendered at each location.
(In the end I used only the Perlin-Worley channel for rendering.)
Similarly, the second 3D texture packs Worley noise at three different frequencies, which I can only interpret as corresponding to different kinds of clouds.
Note that the generated noise must be tileable, or the clouds in the sky will break up into visible repeating blocks.
On how to generate tileable Perlin and Worley noise, there happens to be an article covering exactly this: https://lightbulbbox.wordpress.com/2015/11/11/clouds-by-perlin-and-worley/
It comes with illustrations and is fairly easy to follow.
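For intuition, the core trick for tiling Worley noise is just wrapping the feature-point cell lookup with a modulo, so cells on opposite borders reuse the same random points. A minimal sketch (Hash3 is a placeholder hash function of my own, not from the article):

// Minimal sketch of tileable Worley noise: wrap the feature-point cells
// with fmod so cells on opposite borders share the same random points.
// Hash3 is a placeholder hash, not taken from the article.
float3 Hash3(float3 p)
{
    p = frac(p * float3(443.897, 441.423, 437.195));
    p += dot(p, p.yzx + 19.19);
    return frac((p.xxy + p.yzz) * p.zyx);
}

float TileableWorley(float3 uvw, float period)
{
    float3 cell = floor(uvw * period);
    float3 local = frac(uvw * period);
    float minDist = 1.0;
    for (int x = -1; x <= 1; x++)
    for (int y = -1; y <= 1; y++)
    for (int z = -1; z <= 1; z++)
    {
        float3 offset = float3(x, y, z);
        float3 wrapped = fmod(cell + offset + period, period); // wrap across borders
        float3 feature = offset + Hash3(wrapped);              // feature point in that cell
        minDist = min(minDist, length(feature - local));
    }
    return 1.0 - saturate(minDist);  // invert so blob centers are bright
}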
At this point the basic shape of our clouds is complete. If you actually sample the texture, though, you get one nearly solid mass (if the Worley noise is left unadjusted), because most of the texture is non-zero (especially after superimposing octaves). We can remap the sample result directly, clamping values below a threshold to 0, using the formula:
tempResult = saturate((tempResult - _Cutoff) / (1-_Cutoff));
Values below _Cutoff are clipped to 0, and values above it are remapped into (0, 1].
The slides mention that the weather system provides a 2D cloud coverage map; multiplying the sample result by this map lets you customize the distribution of the clouds. (For example, if you draw a word into the texture, the clouds will spell it out. Feels pretty tacky.)
At this point we already have something cloud-like. Next, limit the clouds' height using a texture called the height signal, sampled by the altitude of the current sample point.
(You need to define the clouds' base altitude and upper limit here; tune them to the needs of the scene.)
After sampling the first 3D texture, the second 3D texture is used as a detail texture whose value is subtracted from the initial sample. The talk specifically mentions that this is done only at the edges of the clouds.
My implementation computes an "edge value" indicating how close the point is to the cloud's edge, using the following formula:
float edge = saturate(_DetailMask - lowresSample / _CloudDentisy); // _CloudDentisy is the overall cloud density; lowresSample was already multiplied by it when sampled, so divide it back out before the comparison
Here _DetailMask means that samples below _DetailMask are treated as edge samples.
The edge value is then multiplied with the detail texture sample and subtracted:
return saturate(lowresSample - edge * _Detail * sampleResult * _CloudDentisy);
This line returns the density value after the detail has been subtracted from lowresSample.
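Putting the modeling steps together, the density sampling in my implementation looks roughly like this (a sketch only: the texture and parameter names are from my project, not the slides, and the samplers are simplified; _CloudDentisy keeps my original typo):

sampler3D _BaseNoise, _DetailNoise;
sampler2D _CoverageMap, _HeightSignal;
float _NoiseScale, _DetailScale, _CoverageScale;
float _Cutoff, _CloudBottom, _CloudTop;
float _CloudDentisy, _DetailMask, _Detail;

// Rough sketch of the whole density sample: base noise, cutoff remap,
// coverage map, height signal, then detail erosion at the edges.
float SampleDensity(float3 pos)
{
    float3 uvw = pos * _NoiseScale;
    float base = tex3Dlod(_BaseNoise, float4(uvw, 0)).r;               // Perlin-Worley channel
    base = saturate((base - _Cutoff) / (1 - _Cutoff));                 // clip and remap
    base *= tex2Dlod(_CoverageMap, float4(pos.xz * _CoverageScale, 0, 0)).r; // weather coverage
    float h = saturate((pos.y - _CloudBottom) / (_CloudTop - _CloudBottom));
    base *= tex2Dlod(_HeightSignal, float4(0.5, h, 0, 0)).r;           // limit by altitude
    float lowresSample = base * _CloudDentisy;

    // erode with the detail texture, but only near the cloud edges
    float edge = saturate(_DetailMask - lowresSample / _CloudDentisy);
    float detail = tex3Dlod(_DetailNoise, float4(uvw * _DetailScale, 0)).r;
    return saturate(lowresSample - edge * _Detail * detail * _CloudDentisy);
}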
The slides say three 2D curl noise textures are used to distort this detail texture and convey a flowing motion. I don't quite understand how that works yet; I'll update this once I've figured out curl noise.
The next part of the talk is the demo with the weather system; I obviously can't fake an entire weather system, so I skipped it.
As for displaying different kinds of clouds, the slides mention it only in passing.
My personal guess is that each cloud type has its own height signal, and some rule picks which height signal to sample based on the weather system's input.
The data from the weather system includes cloud coverage, precipitation, and cloud type, corresponding to the R, G, and B channels of a 2D map (as mentioned in the slides).
The slides mention three types of height signal: stratus, cumulus, and cumulonimbus.
(To clarify, the focus here is low-altitude clouds, namely stratus and cumulus (plus stratocumulus, a mix of the two, which I didn't handle), as well as cumulonimbus.)
Cumulonimbus appears only during heavy rainstorms (it's the huge cloud in the video above).
Combined with what the slides mention, when precipitation exceeds 70%, every cloud type turns into cumulonimbus.
So we can assume the cloud type provided by the weather system controls a blend of the height signal between stratus and cumulus.
When precipitation exceeds 70%, sample the cumulonimbus height signal instead (blended a bit with the normal sample, of course, or the transition looks too jarring).
This enables the rendering of different kinds of clouds.
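As a sketch of that guess (the blend rule, the 70% threshold handling, and the texture names are my own assumptions, not confirmed by the slides):

sampler2D _StratusSignal, _CumulusSignal, _CumulonimbusSignal;

// My guess at the height-signal selection, not confirmed by the slides.
// cloudType and precipitation come from the weather map's B and G channels.
float SampleHeightSignalByType(float h, float cloudType, float precipitation)
{
    float stratus = tex2Dlod(_StratusSignal, float4(0.5, h, 0, 0)).r;
    float cumulus = tex2Dlod(_CumulusSignal, float4(0.5, h, 0, 0)).r;
    float signal = lerp(stratus, cumulus, cloudType);   // blend by cloud type
    if (precipitation > 0.7)
    {
        float cb = tex2Dlod(_CumulonimbusSignal, float4(0.5, h, 0, 0)).r;
        // blend a little with the normal sample so the transition isn't too abrupt
        signal = lerp(signal, cb, saturate((precipitation - 0.7) / 0.3));
    }
    return signal;
}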
Lighting
The core of the lighting section is a single formula.
The formula describes the energy received at a point in the cloud (that energy is then passed on to the viewpoint after applying the HG term) (I-don't-get-it-either.jpg).
D is the depth; this term comes from Beer's law, which describes how transmitted energy falls off with depth.
R is a term from the team's own observation, called the powder effect: when such objects are viewed from the direction of the light source, their edges darken. (I couldn't reproduce it, though, and ended up removing it, emmm.)
HG is the Henyey-Greenstein phase function, which describes how light scatters in anisotropic media; it is what produces the glow around the cloud edges when looking toward the sun. The specific formula is as follows (copied from http://www.oceanopticsbook.info/view/scattering/the_henyeygreenstein_phase_function):
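p_HG(ψ) = (1 / 4π) · (1 − g²) / (1 + g² − 2g · cos ψ)^(3/2)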
cos ψ is the cosine of the scattering angle, which here should be the angle between the view direction and the direction of the sun.
The g value controls the degree of anisotropy: with g in (0, 1) light tends to scatter forward, with g in (−1, 0) backward, and g = 0 is isotropic.
Setting it to around 0.2 gives a fairly obvious glow at the edges of the clouds.
P is the energy absorption ratio of rain clouds.
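Putting those terms together, my reading of the formula comes out roughly like this (a sketch only; the factor of 2 and the parameter names are my own interpretation, not the slides' exact form):

// Hedged sketch of the lighting: Beer's law with rain absorption (D and P),
// the powder term (R), and the HG phase function.
float HenyeyGreenstein(float cosAngle, float g)
{
    float g2 = g * g;
    return (1.0 - g2) / (4.0 * 3.14159265 * pow(1.0 + g2 - 2.0 * g * cosAngle, 1.5));
}

float LightEnergy(float depth, float cosAngle, float rainAbsorption)
{
    float beer = exp(-depth * rainAbsorption);   // Beer's law: D and P
    float powder = 1.0 - exp(-2.0 * depth);      // powder term R (I removed it in the end)
    float hg = HenyeyGreenstein(cosAngle, 0.2);  // g = 0.2 for the edge glow
    return 2.0 * beer * powder * hg;
}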
The only mysterious quantity in all this is the depth D, which we have to compute ourselves, and there is no obvious way to do it.
The concrete approach is covered in the rendering section below.
Rendering
The slides don't go into the details of the rendering implementation, so let me briefly describe how volume rendering can be done in Unity.
First, the model. Whatever you render, you must first have a mesh to draw (a proxy mesh).
There are two approaches. The first is to use a mesh that encloses the volume you want to render. This is a bit odd, because intuitively the volume would be rendered based on the mesh's position, but in our case the clouds' position is fixed and we don't want moving or scaling the mesh to change the clouds.
The biggest problem with this approach is that once the camera enters the mesh, the geometry gets clipped and nothing is drawn. This can be solved by rendering back faces instead, but it always feels like extra baggage.
The second approach is to render a full-screen quad in front of the camera. If depth doesn't matter, you should be able to implement it as a post-processing effect (a post-processing effect is really just rendering a quad anyway).
For convenience I'm currently using a huge box floating in the sky, emmmm. (That's why the distant clouds are cut off in the third demo above: the box actually extends beyond the view frustum.)
The first point mentioned in the rendering section is raymarch acceleration. In short, the step size is driven by the low-resolution sample (the sample from the first 3D texture).
While the samples come back 0, there is no cloud here, and we can keep raymarching with a relatively large step.
Once a sample is nonzero, we switch to a smaller step (remembering to first take one step back).
If, during fine sampling, we hit 0 several times in a row, we switch back to the larger step. (There's-even-such-a-trick.jpg)
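A minimal sketch of this adaptive stepping (the step sizes and the zero-streak threshold are placeholders of mine; SampleDensity is the sketch from the modeling section):

// Hedged sketch of the adaptive raymarch: big steps through empty space,
// small steps inside clouds, back to big steps after several zero samples.
float RaymarchDensity(float3 pos, float3 dir)
{
    const float bigStep = 4.0;
    const float smallStep = 1.0;
    float accumulated = 0.0;
    int zeroStreak = 0;
    bool fine = false;

    for (int i = 0; i < 128; i++)
    {
        float density = SampleDensity(pos);
        if (!fine)
        {
            if (density > 0.0)
            {
                pos -= dir * bigStep;   // hit a cloud: step back once, then refine
                fine = true;
            }
            else
            {
                pos += dir * bigStep;   // empty space: keep the big step
            }
        }
        else
        {
            zeroStreak = (density <= 0.0) ? zeroStreak + 1 : 0;
            if (zeroStreak >= 6)
            {
                fine = false;           // several zeros in a row: big steps again
                zeroStreak = 0;
            }
            accumulated += density * smallStep;
            pos += dir * smallStep;
        }
    }
    return accumulated;
}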
The second point is the depth calculation mentioned in the lighting section.
It uses a seemingly haphazard but effective approach: take 6 samples within a cone pointing toward the sun, and use the accumulated result as the depth value.
Also, once a point's accumulated alpha exceeds 0.3, the sampling switches to a cheaper method as an optimization.
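A sketch of that cone sampling (the kernel offsets and spread are my own approximations, not the talk's kernel; the cheaper-sampling switch past alpha 0.3 is noted but not shown):

float _LightStep, _ConeSpread;

// Sketch of the 6-tap cone sample toward the sun; the accumulated density
// serves as the depth D in the lighting formula.
static const float3 coneKernel[6] =
{
    float3( 0.38,  0.12, -0.20), float3(-0.25,  0.41,  0.09),
    float3( 0.10, -0.33,  0.44), float3(-0.47, -0.08, -0.16),
    float3( 0.22,  0.30,  0.38), float3(-0.11, -0.45,  0.27)
};

float SampleLightDepth(float3 pos, float3 sunDir)
{
    float depth = 0.0;
    for (int i = 0; i < 6; i++)
    {
        // march toward the sun, widening the cone with distance
        float t = _LightStep * (i + 1);
        float3 samplePos = pos + sunDir * t + coneKernel[i] * t * _ConeSpread;
        depth += SampleDensity(samplePos);
    }
    return depth * _LightStep;
}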
The next thing mentioned is the rendering of the high-altitude clouds, which are in fact just an ordinary texture.
With all of this done, the slides report that the final rendering takes 20 ms per frame.
After finishing my version, it was actually worse, needing 50 ms.
But after slightly tweaking the parameters, it dropped to around 5 ms.
The main change was switching to a single-channel 3D texture, after which the frame rate immediately shot up. Presumably bandwidth is the dominant bottleneck.
There is also a value controlling the clouds' scale, i.e. how much actual cloud a given texture resolution covers. When it is low, the frame rate also drops. Presumably, when the clouds are large enough, neighboring samples land close together in the texture and fetches become more coherent (pure speculation; I've seen a similar case before).
Optimization
The optimization section is very simple: each frame renders only into a quarter-resolution buffer and updates 1/16 of the final image's pixels; pixels with no information from the previous frame directly use the low-resolution result. (I haven't implemented it yet; it feels like proper skilled work.)
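Not having implemented it, I can only sketch the pixel-selection half (the 4x4 update cycle only; the reprojection of the remaining pixels is omitted, and _FrameIndex is a hypothetical counter supplied from script):

uint _FrameIndex;   // frame counter supplied from script (hypothetical)

// Sketch of the 1/16-pixel update pattern: each frame refreshes one pixel
// of every 4x4 block, cycling through all 16 slots over 16 frames.
bool ShouldUpdatePixel(uint2 pixelCoord)
{
    uint2 cell = pixelCoord % 4;          // position inside the 4x4 block
    uint slot = cell.y * 4 + cell.x;      // 0..15
    return slot == (_FrameIndex % 16);
}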
Once the project is a bit more polished, I'll publish the whole thing as a reference, emmmm.
The End
Clouds rendered in Unity using a volume rendering approach.