http://forum.china.unity3d.com/thread-32271-1-1.html
We have already shared the Unite 2018 talks by Jiang Yibing ("Clockwork Musician"), Hit-Point ("Travel Frog"), and Ubisoft ("Eagle Flight"). Many developers have written in asking us to share the "Honkai Impact 3rd" case study by He Jia, technical director at miHoYo, because it is a session packed with practical content. We are very grateful to miHoYo and He Jia for their long-standing support of the Unite conference. Due to its length, the talk will be published in two parts.
The following is the lecture content:
Hello everyone, and welcome to this session at Unite 2018. Let me briefly introduce myself: my name is He Jia, and I am currently the technical director at miHoYo. My team and I focus on real-time PBR and NPR rendering, as well as procedural animation and interactive physics for animated CG and games. Part of our current work is achieving high-quality cartoon rendering in Unity.
The theme of this presentation is "Achieving high-quality cartoon rendering in Unity". We cover a range of platforms, from mobile devices to high-performance PCs, with optimizations targeted at the characteristics of each.
Let me briefly outline the main topics of this presentation. First, some of the rendering features used on the mobile side of Honkai Impact 3rd. Then I will talk about techniques used in animation-style CG rendering, such as illustration-style character rendering, special materials, effects rendering, and post-processing adapted to cartoon rendering. The last part covers miscellaneous topics and an outlook on future work.
First, let's take a look at some of the rendering features used in Honkai Impact 3rd's scenes.
As you can see, the scenes use many special effects to enhance their expressiveness, for example: bloom post-processing, dynamic particles, planar reflections, screen-space distortion effects, and more. We will analyze these effects in turn.
These dynamic effects are shown below.
Let's take a look at how to achieve high-quality reflective effects.
To achieve high-quality reflections on mobile, planar reflection offers a good balance between visual quality and performance. The usual approach is to place a second camera at the position mirrored across the ground plane and render the scene into a reflection texture.
To convey the metallic texture of the ground, we first apply a hexagonal-sampling blur to the reflection result, then perturb it with the detail normal map of the metal surface; a specular reflection map and a Fresnel term further enhance the reflective look. On secondary reflective surfaces that are far from the ground plane or not horizontal, planar reflection does not apply, so we fall back to environment-map reflections.
To minimize the cost of rendering the reflected scene, we limit the reflection resolution to at most 1/3 of the screen; since the reflection map is blurred anyway, the reduced resolution is hardly noticeable. We also render the reflection pass with simplified materials and skip unimportant small objects.
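The mirrored-camera setup can be sketched in a few lines of plain Python (an illustration of the math only, not the game's actual code):

```python
def reflect_point(p, plane_point, plane_normal):
    # Mirror a world-space point across a plane (normal assumed unit length):
    # p' = p - 2 * dot(p - plane_point, n) * n
    d = sum((pi - qi) * ni for pi, qi, ni in zip(p, plane_point, plane_normal))
    return tuple(pi - 2.0 * d * ni for pi, ni in zip(p, plane_normal))

# Reflection camera for a ground plane at y = 0: mirror the main camera's
# position (the view direction is mirrored in the same way).
main_cam_pos = (3.0, 2.0, -5.0)
reflect_cam_pos = reflect_point(main_cam_pos, (0.0, 0.0, 0.0), (0.0, 1.0, 0.0))
```

The reflection camera then renders the scene into a reduced-resolution render texture that the ground material samples.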
Now let's look at another effect: full-screen distortion.
We use screen distortion extensively in Honkai Impact 3rd's scenes, for example the trailing effect of swords, the space-time rupture effect, waterfalls, and other scene effects.
When rendering the distortion effect, we use 3 channels to store the distortion data: 2 store the UV offset, and the third stores a distortion-strength mask, which is used for depth clipping and distance-based strength control.
Rendering the distortion to a frame-buffer texture in a separate pass is expensive on mobile platforms, so we integrated the distortion effect into the final post-processing pass, which is much faster. Without layering, however, foreground objects can get mixed into the distortion behind them; given the performance constraints of mobile platforms, this compromise is worthwhile relative to the overall effect.
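The per-pixel lookup can be sketched as follows (plain Python; the `max_offset` scale is an invented tuning knob, not a value from the talk):

```python
def distorted_uv(uv, offset_rg, strength_mask, max_offset=0.05):
    # The two offset channels are remapped from [0, 1] to [-1, 1]; the third
    # channel (the strength mask) scales the offset, and is also where depth
    # clipping and distance-based control write their attenuation.
    ox = (offset_rg[0] * 2.0 - 1.0) * strength_mask * max_offset
    oy = (offset_rg[1] * 2.0 - 1.0) * strength_mask * max_offset
    return (uv[0] + ox, uv[1] + oy)
```

The post-processing pass samples the scene color at the displaced UV instead of the original one.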
Let's take a look at the implementation of bloom. With HDR enabled, the entire scene is rendered to an FP16 render target, then downsampled to 1/4 of the original size for use in the subsequent post-processing steps.
First, we specify a luminance threshold to extract the highlighted areas of the image. The implementation is simple: subtract the threshold from the source pixel, and what remains is the extracted high-brightness area; overlaying this layer makes the result look more contrasty and colorful. Next, we generate 4 render targets, each half the size of the previous one, and apply Gaussian blurs of increasing radius to them. Finally, we combine these blurred results to get the final bloom effect.
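The two steps above, threshold extraction and the half-size mip chain, can be sketched like this (plain Python illustration):

```python
def extract_bright(color, threshold):
    # Subtract the luminance threshold channel-wise and clamp at zero:
    # what survives is the high-brightness area fed into the blur chain.
    return tuple(max(c - threshold, 0.0) for c in color)

def bloom_mip_sizes(width, height, levels=4):
    # Four recursively half-sized render targets; each level gets a
    # Gaussian blur with a larger effective radius.
    sizes = []
    for _ in range(levels):
        width, height = max(width // 2, 1), max(height // 2, 1)
        sizes.append((width, height))
    return sizes
```

The final bloom is the sum of the blurred levels composited back over the scene.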
From the final result we can see that bloom not only conveys the visual impact of the highlighted areas, but also contributes significantly to the color of the whole image.
Once reflection rendering, distortion, and bloom are done, these intermediate results are composited together at the end. We use filmic tone mapping with exposure and contrast controls to convert the FP16 HDR image to the final LDR frame buffer. Because all these compositing operations are done in a single pass, the performance requirements can be met even on mobile devices.
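As a sketch of the tone-mapping step, here is Hable's Uncharted 2 curve, one common "filmic" choice; the talk only states that filmic tone mapping with exposure and contrast controls is used, so the exact curve is an assumption:

```python
def hable(x):
    # Hable's filmic curve; the constants are the commonly published ones.
    A, B, C, D, E, F = 0.15, 0.50, 0.10, 0.20, 0.02, 0.30
    return (x * (A * x + C * B) + D * E) / (x * (A * x + B) + D * F) - E / F

def tonemap(hdr, exposure=1.0, white_point=11.2):
    # Map an FP16 HDR value into [0, 1]; exposure is the artist-facing control.
    return hable(hdr * exposure) / hable(white_point)
```

Black maps to black, the white point maps to 1, and the curve compresses highlights smoothly in between.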
Next, let's look at the implementation of the weather and the sea of clouds in the game.
We wanted to create a cloud rendering system that feels deep to the player, with rich variation and dynamic lighting changes. The system should also be easy to adjust and use, so that artists can create different types of cloud effects. This was an interesting challenge for us, so let's walk through these features next.
First, let's look at the resources needed to render the clouds. We want stylized cloud lighting that changes dynamically over a 24-hour cycle; storing pre-baked maps for every lighting state would require far too many textures and be inconvenient to adjust, so we use multi-layer shading instead.
We use 4 channels to represent the cloud's light and shadow: a base illumination layer, shadow layer 1, shadow layer 2, and a rim light layer. By assigning different colors to each layer, we get the cloud's color scheme at different times of day. We prepared a total of 8 cloud templates of different shapes, used to build a variety of cloudscapes.
To build the sea of clouds, we use many particle emitters that emit clouds toward the screen. By combining different cloud templates and emission patterns, we implement various cloud types and stormy weather, which are saved in weather configurations. In addition, we use keyframes to define the colors of the sky background and the clouds; as time passes, the cloud colors change according to these keyframes.
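The keyframed color change over the day can be sketched as simple linear interpolation that wraps at 24 hours (the keyframe data below is invented for illustration):

```python
def lerp3(a, b, t):
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def cloud_color(keyframes, hour):
    # keyframes: (hour, rgb) pairs sorted by hour; blend linearly between
    # neighbouring keyframes, wrapping around the 24-hour day.
    hour %= 24.0
    for (h0, c0), (h1, c1) in zip(keyframes, keyframes[1:]):
        if h0 <= hour <= h1:
            return lerp3(c0, c1, (hour - h0) / (h1 - h0))
    # Wrap from the last keyframe back to the first one of the next day.
    (h0, c0), (h1, c1) = keyframes[-1], keyframes[0]
    span = 24.0 - h0 + h1
    t = (hour - h0 if hour >= h0 else hour + 24.0 - h0) / span
    return lerp3(c0, c1, t)

day = [(0.0, (0.1, 0.1, 0.3)), (12.0, (1.0, 1.0, 1.0)), (18.0, (1.0, 0.5, 0.2))]
```

In the game this role is played by authored animation curves rather than a hand-written interpolator.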
The main performance cost is overdraw. Emitting clouds in a fixed pattern achieves good cloud density with minimal overdraw, but can look repetitive; adding randomness to the emission positions solves that, but achieving sufficiently dense clouds then requires more particles than the fixed pattern. We fine-tuned the particle emission configuration so that a good trade-off can be found between the two.
This is the sea of clouds over a 24-hour day-night cycle.
This is another day-night cycle of the cloudscape.
This is a scene with storm lightning.
Now let's take a look at the weather system used in the game scene.
We change the weather and atmosphere of a scene mainly through the global fog, the skybox colors, and the directional light settings. The fog has many adjustable parameters: it is split by depth into two distance ranges, and the near and far ranges can each be given different colors and intensity values to create a variety of atmospheres. The skybox also controls the color gradient of the sky, the light and shadow colors of the clouds, and so on. Combining these adjustment options, we can create sunny, rainy, foggy, cloudy, and nighttime weather.
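The two-range fog idea can be sketched as follows; the exact parameterisation (where each range starts and ends, and how the blend factors are shaped) is a guess at the scheme described, not the game's formula:

```python
def lerp(a, b, t):
    return a + (b - a) * t

def two_range_fog(scene_rgb, depth, near_end, far_end,
                  near_rgb, far_rgb, near_intensity, far_intensity):
    # Near fog fades in over [0, near_end], far fog over [near_end, far_end];
    # each range has its own colour and maximum intensity.
    t_near = min(depth / near_end, 1.0) * near_intensity
    t_far = min(max((depth - near_end) / (far_end - near_end), 0.0), 1.0) * far_intensity
    mixed = [lerp(s, c, t_near) for s, c in zip(scene_rgb, near_rgb)]
    return tuple(lerp(m, c, t_far) for m, c in zip(mixed, far_rgb))
```

Setting different colour pairs per weather preset gives the sunny/rainy/foggy variants described above.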
In addition, the lighting on characters is affected by the environment: the main light color is determined by the directional light, while local shadow changes, such as a character entering a shadowed area, are defined by lighting volumes placed manually in the level editor.
Let's take a look at the use of depth of field in the game.
Depth of field is generally not common in mobile games, since typical depth-of-field implementations are still expensive on mobile platforms. We use the depth-of-field effect to highlight characters in the character selection interface and in mission briefings.
Since these scenes do not require a full depth-of-field solution, we use a special approach to improve mobile performance: instead of using the depth buffer for circle-of-confusion (CoC) blending, we draw the background layer directly with a separate camera. After the blur is applied, the final image is obtained by compositing the background and the foreground characters together.
For better visual results, we use a hexagonal sampling pattern to get nicer bokeh shapes. In addition to a bokeh intensity parameter that makes the shapes look crisper, we use the luminance value as the weighting factor, and 2 is usually a suitable value.
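A sketch of this luminance weighting in plain Python; reading the "2" as an exponent on luminance is an interpretation, not something the talk spells out:

```python
def luminance(rgb):
    # Rec. 709 luma weights.
    return 0.2126 * rgb[0] + 0.7152 * rgb[1] + 0.0722 * rgb[2]

def bokeh_gather(samples, power=2.0):
    # Weight each blur sample (from the hexagonal pattern) by luminance**power
    # so bright points dominate the average, which keeps bokeh shapes crisp.
    weights = [max(luminance(c), 1e-4) ** power for c in samples]
    total = sum(weights)
    return tuple(sum(w * c[i] for w, c in zip(weights, samples)) / total
                 for i in range(3))
```

With uniform samples this reduces to a plain average; a single bright sample pulls the result strongly toward itself.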
To keep performance stable, the resolution of the blurred background depends on the blur amount: larger blur sizes use lower resolutions, where the quality loss is less noticeable. We use Unity's built-in animation curves to describe the mapping between them.
This shows the result of dynamically adjusting the blur size and bokeh strength.
In the game scenes we also implemented a cool-looking effect: when the last enemy is dealt a fatal blow, bullet time kicks in and all fast-moving objects slow down; on rainy days you can then clearly see the shape of the raindrops.
To achieve this effect, we use 4 keyframes representing the shape of a raindrop at different velocities, then stretch them vertically according to the time scale and speed. At the normal time scale, raindrops look like straight lines; as time slows down, they gradually shorten into a droplet shape. Here we also use animation curves to control the stretch, the keyframe selection, and the relationship between time and speed, which makes adjustment very flexible and convenient.
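The mapping from time scale to keyframe and stretch can be sketched like this; in the game it goes through hand-authored animation curves, so the linear mapping below is only an illustration:

```python
def raindrop_shape(time_scale, num_keyframes=4, max_stretch=8.0):
    # time_scale: 1.0 = normal speed, towards 0.0 = bullet-time freeze.
    # Returns (keyframe index, vertical stretch factor): fast time picks the
    # streak-like keyframes and stretches them; slow time picks droplet shapes.
    t = min(max(time_scale, 0.0), 1.0)
    frame = min(int(t * num_keyframes), num_keyframes - 1)
    stretch = 1.0 + (max_stretch - 1.0) * t
    return frame, stretch
```

Swapping the linear ramp for a curve evaluation is what makes the transition adjustable by artists.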
So far we have covered rendering features optimized for mobile; now let's talk about rendering methods for animation-style real-time CG and next-generation games.
Over the past two years, we have produced two short videos that reflect our new rendering style. (Bilibili video: https://www.bilibili.com/video/av14260225)
We published it on Bilibili, where it reached the No. 1 spot in the site-wide monthly ranking within 3 days, and it has received more than 3 million views so far. (Bilibili video: https://www.bilibili.com/video/av7244731)
Let's talk about some of the real-time CG rendering techniques used in these videos.
First, let's look at character rendering. Our goal is fully dynamic lighting and shading: all materials respond correctly to the various lighting phenomena, including the main light and ambient light from the environment. This means we cannot bake any lighting into the textures.
The main features used in character rendering are: a multi-channel ramp shading method; special materials such as eyes, hair, and other anisotropic materials; PCSS-based soft character shadows; and high-quality outlines.
First, let's look at the multi-channel ramp shading method.
We want the character's shadow and color transitions to show a subtle, illustration-like style, so we use 2D ramp textures to represent these subtle changes, where the RGB channels each describe the diffuse shadow range of a different shadow layer. Each layer can have a different color, allowing fine control of color shifts across the light-to-dark transition.
In a cartoon-style image, if the shading is a pure brightness change, the shadows look dirty and lack expressiveness; increasing the saturation and shifting the hue in the dark regions makes the overall color look more vivid. By adjusting the vertical texture coordinate of the ramp, we can dynamically blend between soft and hard shading styles. From another point of view, this method also indirectly approximates the subsurface scattering of skin.
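The lookup can be sketched as a tiny 2D grid standing in for the ramp texture; the axis assignments below (horizontal = shading term, vertical = soft/hard control) follow the description above, and the nearest-neighbour lookup replaces the GPU's bilinear filtering:

```python
def half_lambert(n_dot_l):
    # Remap the [-1, 1] diffuse term into [0, 1] before the ramp lookup.
    return n_dot_l * 0.5 + 0.5

def sample_ramp(ramp, n_dot_l, softness):
    # ramp: a 2D grid [row][col] of RGB tuples standing in for the ramp texture.
    h, w = len(ramp), len(ramp[0])
    u = min(max(half_lambert(n_dot_l), 0.0), 1.0)
    v = min(max(softness, 0.0), 1.0)
    return ramp[round(v * (h - 1))][round(u * (w - 1))]
```

Per-layer colors would come from sampling one such ramp per channel and compositing the layers.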
The following four images show the effect of overlaying the multi-channel colors layer by layer. As the color layers accumulate, the skin gains richer levels of detail.
The upper and lower pairs of comparison images show the rendering results corresponding to ramp textures at different positions; different ramps give different color styles. A hard ramp is close to cel shading, while a soft ramp resembles the soft shadow gradations of an illustration.
Since we use 2D ramp textures, the transitions between them can be adjusted dynamically, and we can use a ramp mask texture to select the ramp per pixel to achieve a hand-painted illustration style. This ramp mask texture can be painted directly onto the model by the artists; we built a 3D painting tool inside Unity that makes this more intuitive.
Another important factor in illustration-style rendering is the use of brush-stroke textures. Different stroke patterns yield different shading styles. Each brush texture has 4 channels storing brush patterns in different directions, and blending these brushes allows richer stroke variation. In the two comparison images on the right, the version using the stroke texture has a much more hand-drawn feel.
This shows the result of applying the stroke-style skin texture under different lighting angles.
Next, let's look at how to achieve high-quality rim lighting.
It is likewise based on the Fresnel term, with parameters to control it, such as edge width and smoothness. Besides these global parameters, we also use brush textures to add local variation. The rim light can come from the directional light source or from the environment map: with the directional light we can shape the rim light as needed, while the environment map derives the rim light from the ambient lighting so it looks more realistic. Both are useful, and they can be used together.
To avoid rim light appearing in unwanted areas, we use the AO texture and the shadow map to suppress it in occluded regions. As the comparison image shows, the shape of the rim light on the left stands out much more cleanly.
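A minimal sketch of this masked Fresnel rim, with width/smoothness shaping via a smoothstep; the exact parameterisation is a guess at the idea described, not the production shader:

```python
def smoothstep(e0, e1, x):
    t = min(max((x - e0) / (e1 - e0), 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)

def rim_light(n_dot_v, width=0.3, smoothness=0.1, ao=1.0, shadow=1.0):
    # Fresnel-style rim shaped by width/smoothness, then masked by the AO
    # texture and shadow map so the rim vanishes in occluded areas.
    fresnel = 1.0 - max(n_dot_v, 0.0)
    rim = smoothstep(1.0 - width - smoothness, 1.0 - width, fresnel)
    return rim * ao * shadow
```

The directional-light and environment-map variants would differ only in the color multiplied onto this scalar term.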
Cartoon-style faces generally do not have many shadow gradations; if we apply the previous ramp method to the face, the result looks as unnatural as the image on the right. To improve this, we use one channel of the vertex colors as a mask to control the strength of the face's color layers, suppressing the diffuse shading to achieve the desired cartoon look.
Next, let's take a look at the implementation of the high quality character soft shadows.
If we directly use Unity's built-in CSM shadows, the shadow quality does not meet our requirements when the camera is close to the character, so we render a dedicated shadow map for the character to guarantee constant shadow quality. For this purpose, we implement a view-frustum-based shadow map: we intersect the character's bounding box with the view frustum and use the intersection as the rendering area, which maximizes the utilization of the shadow map.
In addition, variance shadow maps (VSM) and PCSS are used to reduce shadow artifacts and achieve a natural soft-shadow effect. To shadow transparent materials correctly, an extra channel is needed to store a shadow intensity based on the material's transparency. In the example image, a translucent skirt casts a natural shadow.
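The VSM part of this pipeline boils down to the Chebyshev bound over the stored depth moments; this sketch covers only that step (PCSS's blocker search and filter sizing are omitted):

```python
def vsm_visibility(moments, receiver_depth, min_variance=1e-4):
    # moments: (E[d], E[d^2]) read from the pre-blurred variance shadow map.
    # Chebyshev's inequality gives an upper bound on the lit fraction, which
    # turns the blur of the moments directly into a soft penumbra.
    mean, mean_sq = moments
    if receiver_depth <= mean:
        return 1.0  # receiver in front of the average occluder: fully lit
    variance = max(mean_sq - mean * mean, min_variance)
    d = receiver_depth - mean
    return variance / (variance + d * d)
```

The translucency channel described above would then scale this visibility by the caster's shadow intensity.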
For the eyes, we use a physically based refraction calculation. Ordinary cartoon models usually handle the eye by leaving it flat and recessing the pupil, so that it does not bulge when seen from the side and looks more natural; but for close-ups of the eye, this approach is not convincing. With a real refraction algorithm, the eyeball itself is modeled as a sphere, and the corresponding texture lookup point is computed from the view angle and the index of refraction.
The comparison below shows the actual effect of refraction; we can see that without it, the eye looks rather strange from the side.
In addition, we added a light caustics effect, which further enhances the texture of the eye. For non-photorealistic rendering, physical correctness is not a requirement; for cartoon rendering we want the caustic to appear on the side opposite the incident light, and to become more pronounced the more parallel (grazing) the incident angle is.
The implementation computes the lighting intensity from the angle between the incident light and the front of the eyeball; here we use an inverse diffuse term to simulate it, then apply the Fresnel formula to vary the brightness, and finally obtain the final effect through an eye caustic texture.
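A sketch of that "inverse diffuse times Fresnel" idea; the exact formula is a guess at the description above, and the result would finally be multiplied by the eye caustic texture:

```python
def eye_caustic(n_dot_l, n_dot_v, fresnel_power=5.0):
    # "Inverse diffuse": the caustic appears on the side opposite the incident
    # light, so the term grows as n_dot_l goes negative; a Fresnel factor then
    # strengthens the effect at grazing incidence.
    inverse_diffuse = max(-n_dot_l, 0.0)
    fresnel = (1.0 - max(n_dot_v, 0.0)) ** fresnel_power
    return inverse_diffuse * fresnel
```

The scalar result is zero on the lit side and brightest opposite the light at grazing angles, matching the stylised behaviour described above.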
We can see that the eye looks dull and lacks texture without caustics.
This shows the refraction of the eyes and the anisotropic specular highlights of the hair.
Due to space limitations, the first part of the talk ends here. Tomorrow we will continue with the rest of the presentation, covering hair rendering, lighting, post-processing, and more.
Unite 2018 | Honkai Impact 3rd: Achieving high-quality cartoon rendering in Unity (Part 1)