Unite 2018 | Honkai Impact 3: Achieving High-Quality Cartoon Rendering in Unity (Part 2)


Http://forum.china.unity3d.com/thread-32273-1-1.html

Today we continue to share miHoYo technical director He Jia's talk from the Unite Beijing 2018 conference, "Achieving High-Quality Cartoon Rendering in Unity." This is the second part; to read the first part, please click here.



The following is the lecture content:

Next, let's introduce hair rendering. Hair is an important and distinctive part of a cartoon-rendered character. We want highlight and shadow gradients that change dynamically with the light source, and the implementation should also offer intuitive, WYSIWYG color adjustment.

As with the skin, we use the multi-ramp method for the hair's diffuse shading. For specular, we stack two highlight layers; by combining high-frequency and low-frequency highlights we get a satisfying result. In addition, we use a gloss map and an AO map to further enhance the texture of the hair.
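To make the idea concrete, here is a minimal C# sketch of ramp-based diffuse combined with a two-lobe specular. The half-Lambert remap, the lobe exponents, and all names here are illustrative assumptions, not the talk's actual shader code.

```csharp
using UnityEngine;

public static class ToonHairShadingSketch
{
    // Diffuse: remap N·L through a hand-painted ramp texture instead of
    // using the raw Lambert term (one ramp of the multi-ramp setup shown).
    public static Color RampDiffuse(Texture2D ramp, Vector3 n, Vector3 l)
    {
        float ndl = Mathf.Clamp01(Vector3.Dot(n, l) * 0.5f + 0.5f); // half-Lambert (assumed)
        return ramp.GetPixelBilinear(ndl, 0.5f);
    }

    // Specular: sum of a broad low-frequency lobe and a tight high-frequency
    // lobe, matching the "two highlight layers" described above.
    public static float TwoLobeSpecular(Vector3 n, Vector3 l, Vector3 v,
                                        float lowPower = 8f, float highPower = 128f,
                                        float lowWeight = 0.5f, float highWeight = 1f)
    {
        Vector3 h = (l + v).normalized;                // half vector
        float ndh = Mathf.Clamp01(Vector3.Dot(n, h));
        return lowWeight * Mathf.Pow(ndh, lowPower)    // soft, wide sheen
             + highWeight * Mathf.Pow(ndh, highPower); // sharp core
    }
}
```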



The hair's specular uses anisotropic highlights. Ordinary highlights are computed from the normal, while anisotropic highlights use the tangent as the basis of the calculation, so the highlight appears as a band perpendicular to the direction in which the hair flows.
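A standard tangent-based formulation for this is the Kajiya-Kay model; the sketch below shows its specular term, offered as a plausible stand-in for the calculation described, not as the talk's exact code.

```csharp
using UnityEngine;

public static class AnisoSpecularSketch
{
    // Kajiya-Kay style highlight: the lobe is built from the tangent T running
    // along the strand, so the highlight forms a band perpendicular to the
    // hair direction instead of a round, normal-based spot.
    public static float KajiyaKay(Vector3 t, Vector3 l, Vector3 v, float power = 64f)
    {
        Vector3 h = (l + v).normalized;
        float tdh = Vector3.Dot(t.normalized, h);
        float sinTH = Mathf.Sqrt(Mathf.Max(0f, 1f - tdh * tdh)); // sin of angle between T and H
        return Mathf.Pow(sinTH, power);
    }
}
```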

When making hair models, if the topology is complex and the UVs cannot all be unwrapped vertically, we can also use a flowmap to comb the shape of the highlights.



We also use a jitter map to enhance the texture of the cartoon-rendered hair. By perturbing the tangent direction, we get highlights that simulate fine strand detail. In addition, adjusting the UV scale of the jitter map adjusts the density of the highlight detail.
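As an illustration of the perturbation step, here is a hypothetical sketch: a value sampled from the jitter map shifts the tangent along the normal before the anisotropic highlight is evaluated. The packing convention and parameter names are assumptions.

```csharp
using UnityEngine;

public static class HairJitterSketch
{
    // Perturb the strand tangent with a per-pixel jitter value so the
    // anisotropic highlight breaks up into fine, strand-like streaks.
    public static Vector3 JitterTangent(Vector3 t, Vector3 n, Texture2D jitterMap,
                                        Vector2 uv, float uvScale, float amount)
    {
        // Scaling the UV changes the streak frequency, i.e. how dense the
        // simulated strand detail appears.
        float j = jitterMap.GetPixelBilinear(uv.x * uvScale, uv.y * uvScale).r * 2f - 1f;
        return (t + n * j * amount).normalized; // shift tangent along the normal
    }
}
```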



The four-way breakdown shows the contribution of each highlight component to the rendering result; the bottom-right corner is the final image. We can see that combining the low-frequency and high-frequency highlight components makes the hair look much more expressive.



Next, let's look at another implementation of cel-shading hair highlights.

Our goal here is also a dynamic result: the highlight should move along the hair according to the light source and camera position, and its shape should change dynamically as it moves.

Cel-style hair highlights have a distinctive shape that is hard to describe with traditional specular calculations. Here, too, we use the tangent direction instead of the normal, and we need a more explicit way to define the highlight's shape.



First, we UV-map each strand of hair in the vertical direction so that the highlight can move along each strand. Each strand is then parameterized from 0 to 1, left to right, to mark the start and end positions of the dynamically generated highlight shape. We use a few curve-defined templates to describe the basic shape of the hair highlight, and a jitter noise texture to modulate its thickness.

The material exposes many parameters that control the generated shape: position, offset, width, jitter ratio, and so on. By adjusting these parameters we can obtain a variety of shapes as needed.
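Putting these pieces together, the sketch below shows one plausible way such a highlight mask could be evaluated: a template curve defines the basic shape, noise modulates the thickness, and the exposed fields mirror the parameters named above. This structure is a guess at the approach, not the actual implementation.

```csharp
using UnityEngine;

public class CelHairHighlightSketch : MonoBehaviour
{
    // Template curve for the highlight's basic shape: x is the normalized
    // distance from the highlight's center, y is the resulting intensity.
    public AnimationCurve shapeTemplate = AnimationCurve.EaseInOut(0f, 1f, 1f, 0f);
    public Texture2D jitterNoise;                 // modulates thickness per strand
    [Range(0f, 1f)] public float position = 0.5f; // in practice driven by light/camera
    public float width = 0.2f;
    [Range(0f, 1f)] public float jitterRatio = 0.3f;

    // u: 0..1 coordinate along the strand, v: coordinate across strands.
    public float HighlightMask(float u, float v)
    {
        float x = Mathf.Clamp01(Mathf.Abs(u - position) / Mathf.Max(width, 1e-4f));
        float mask = shapeTemplate.Evaluate(x);
        float noise = jitterNoise ? jitterNoise.GetPixelBilinear(v, 0f).r : 1f;
        return mask * Mathf.Lerp(1f, noise, jitterRatio); // 1 = fully inside the highlight
    }
}
```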



Let's look at another example of an anisotropic material: silk. This time we use the binormal direction to compute the specular reflection, and we composite three highlight layers to achieve the final result. We set a different color for each layer so that the final material shows richer color gradations.



This shows how the anisotropic highlight's reflection changes at different viewing angles.



Our character materials also include other special cases, such as translucent materials like crystals and gauze. Direct alpha blending does not produce the desired look; we need both refraction and blur effects, and both are implemented with Unity's CommandBuffer.

For the refraction effect, the CommandBuffer grabs the already-rendered back buffer as the background for refraction sampling. We set a different refraction coefficient for each of the RGB channels and sample three times to simulate dispersion before rendering the refraction.
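The dispersion trick itself is simple to sketch: sample the grabbed background three times with slightly different refraction offsets, one per color channel. The spread parameter below is an assumption for illustration.

```csharp
using UnityEngine;

public static class DispersionRefractionSketch
{
    // Fake dispersion: each color channel refracts by a slightly different
    // amount, so edges of the refracted image split into rainbow fringes.
    public static Color Sample(Texture2D background, Vector2 uv,
                               Vector2 refractOffset, float spread = 0.03f)
    {
        Vector2 uvR = uv + refractOffset * (1f - spread);
        Vector2 uvG = uv + refractOffset;
        Vector2 uvB = uv + refractOffset * (1f + spread);
        return new Color(
            background.GetPixelBilinear(uvR.x, uvR.y).r,
            background.GetPixelBilinear(uvG.x, uvG.y).g,
            background.GetPixelBilinear(uvB.x, uvB.y).b, 1f);
    }
}
```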

For the blur effect, the CommandBuffer downsamples and blurs the back buffer, generating four render textures at successively halved resolutions and increasing blur. The blur level is then chosen from the camera distance, the FOV, and the material's own blur parameter, and the corresponding render texture is sampled to complete the effect.
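Below is a minimal sketch of how such a grab-and-downsample chain can be built with Unity's CommandBuffer API. The camera event, blur material, and texture names are assumptions; the talk did not show its setup code.

```csharp
using UnityEngine;
using UnityEngine.Rendering;

[RequireComponent(typeof(Camera))]
public class GrabBlurChainSketch : MonoBehaviour
{
    public Material blurMaterial;   // assumed: a simple downsample/blur shader
    const int Levels = 4;

    void OnEnable()
    {
        var cb = new CommandBuffer { name = "GrabBlurChain" };

        // Grab what has been rendered so far as the refraction background.
        int src = Shader.PropertyToID("_GrabbedScreen");
        cb.GetTemporaryRT(src, -1, -1, 0, FilterMode.Bilinear);
        cb.Blit(BuiltinRenderTextureType.CurrentActive, src);

        // Build four successively half-resolution, increasingly blurred copies.
        int prev = src;
        for (int i = 0; i < Levels; i++)
        {
            int size = -(2 << i); // -2, -4, -8, -16 = screen size / 2, 4, 8, 16
            int id = Shader.PropertyToID("_ToonBlurTex" + i);
            cb.GetTemporaryRT(id, size, size, 0, FilterMode.Bilinear);
            cb.Blit(prev, id, blurMaterial);
            prev = id;
        }

        // The translucent material later samples whichever _ToonBlurTex level
        // matches the camera distance, FOV and its own blur parameter.
        GetComponent<Camera>().AddCommandBuffer(CameraEvent.AfterForwardOpaque, cb);
    }
}
```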

We also optimized both effects: rather than applying a full-screen blur to the back buffer, we use the object itself as a proxy mesh and only process the region it covers.



Next, let's talk about how we achieve high-quality outlines.

For characters and dynamic objects we use the backface outline method, with vertex color controlling the line width. The outline requires continuous vertex normals so that there are no breaks at sharp edges, so we store smoothed normals in a second set of vertex colors.
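The smoothed-normal bake is a straightforward preprocessing step; a hypothetical sketch is shown below. It averages the normals of all vertices sharing a position and packs the result into the vertex colors, leaving the alpha channel free for the width control described next.

```csharp
using System.Collections.Generic;
using UnityEngine;

public static class SmoothedNormalBaker
{
    public static void Bake(Mesh mesh)
    {
        Vector3[] verts = mesh.vertices;
        Vector3[] normals = mesh.normals;
        var accum = new Dictionary<Vector3, Vector3>();

        // Sum the normals of all vertices that share an exact position, so
        // hard-edge splits collapse back into one continuous direction.
        for (int i = 0; i < verts.Length; i++)
            accum[verts[i]] = accum.TryGetValue(verts[i], out var n)
                ? n + normals[i] : normals[i];

        var colors = new Color[verts.Length];
        for (int i = 0; i < verts.Length; i++)
        {
            Vector3 s = accum[verts[i]].normalized;
            // Pack the [-1,1] direction into the [0,1] color range; alpha
            // stays at 1 here and could carry the per-vertex line width.
            colors[i] = new Color(s.x * 0.5f + 0.5f, s.y * 0.5f + 0.5f, s.z * 0.5f + 0.5f, 1f);
        }
        mesh.colors = colors;
    }
}
```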

The vertex color also lets us shape individual lines; for example, at a line's tip we paint a gradient down to 0 so the width tapers smoothly to zero.

In addition, the line width should be corrected according to the distance between the camera and the object, and each material should have its own outline color. All of these features are necessary for high-quality outlines.



Although the backface outline method can reproduce lines in great detail, it has an inherent flaw: it cannot produce lines at sharp creases that are not silhouette edges. Such crease lines are very common on hard-surface models.

To solve this problem, we add a preprocessing pass that extracts these edges, saves them into an additional mesh resource, and draws them with a geometry shader. The crease lines use the same adjustment parameters as the backface method so that the two look identical. With crease-line drawing added, we can see that the picture on the right captures much more line detail.
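A sketch of what such an edge-extraction pass might look like: every edge shared by two triangles whose face normals disagree beyond a threshold angle is collected, ready to be saved into the extra mesh. The threshold and data layout are assumptions.

```csharp
using System.Collections.Generic;
using UnityEngine;

public static class CreaseEdgeExtractor
{
    public static List<(Vector3 a, Vector3 b)> Extract(Mesh mesh, float creaseAngle = 60f)
    {
        Vector3[] v = mesh.vertices;
        int[] tris = mesh.triangles;
        var pending = new Dictionary<(Vector3, Vector3), Vector3>();
        var creases = new List<(Vector3 a, Vector3 b)>();
        float cosLimit = Mathf.Cos(creaseAngle * Mathf.Deg2Rad);

        // Canonical ordering so both triangles map an edge to the same key.
        bool Less(Vector3 p, Vector3 q) =>
            p.x != q.x ? p.x < q.x : p.y != q.y ? p.y < q.y : p.z < q.z;

        void AddEdge(Vector3 p, Vector3 q, Vector3 faceNormal)
        {
            var key = Less(p, q) ? (p, q) : (q, p);
            if (pending.TryGetValue(key, out Vector3 other))
            {
                // Second triangle on this edge: compare the face normals.
                if (Vector3.Dot(faceNormal, other) < cosLimit)
                    creases.Add(key);
                pending.Remove(key);
            }
            else pending[key] = faceNormal;
        }

        for (int i = 0; i < tris.Length; i += 3)
        {
            Vector3 p0 = v[tris[i]], p1 = v[tris[i + 1]], p2 = v[tris[i + 2]];
            Vector3 n = Vector3.Cross(p1 - p0, p2 - p0).normalized; // face normal
            AddEdge(p0, p1, n); AddEdge(p1, p2, n); AddEdge(p2, p0, n);
        }
        return creases;
    }
}
```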



Another common method is to generate contour lines in image space. By detecting discontinuities in the normals and depth of the scene image, we can obtain fairly detailed outlines. Regardless of scene complexity, the cost of this method is constant. We also added hue, lightness, and saturation adjustments for the line color so that the outlines look more natural.

The disadvantage of this method is that the line width is difficult to control: if we want distance-dependent width, we can only adjust within a few pixels. So the image-based approach is mainly suited to scene contours; for objects close to the camera, the backface method is the better choice.
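The per-pixel test at the heart of the method is simple; here is a hypothetical sketch, with the thresholds as placeholder values:

```csharp
using UnityEngine;

public static class ImageSpaceOutlineSketch
{
    // Compare a pixel's depth and normal against a neighbor's; a large
    // discontinuity in either marks the pixel as part of a contour line.
    public static float EdgeStrength(float depth, float neighborDepth,
                                     Vector3 normal, Vector3 neighborNormal,
                                     float depthThreshold = 0.02f,
                                     float normalThreshold = 0.8f)
    {
        float depthEdge = Mathf.Abs(depth - neighborDepth) > depthThreshold ? 1f : 0f;
        float normalEdge = Vector3.Dot(normal, neighborNormal) < normalThreshold ? 1f : 0f;
        return Mathf.Max(depthEdge, normalEdge);
    }
}
```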



The final approach is the stroke-based method, which is used more often in offline rendering and is usually divided into the following steps.

    • Contour extraction: extract contour edges from the mesh, mainly divided into sharp edges and smooth edges.
    • Contour linking: connect adjacent contour edges into contour lines that are as long as possible, according to the mesh's adjacency relationships.
    • Contour segmentation: on the basis of step 2, split the contour lines wherever curvature or visibility changes abruptly.
    • Stroke mapping: make a texture of the desired strokes and map it onto the contour lines of step 3 using the corresponding texture coordinates.



This method can achieve more stylized, more visible brush strokes; Pencil+ and Blender's Freestyle renderer take essentially similar approaches. Given its performance overhead, it can be used for CG-quality rendering, but it is not suitable for direct use in games.



Let's take a look at some other special effects, which also play an important role in the presentation of the scene.



This scene shows off the volumetric light. As we can see, when the volumetric light with its fog effect is used together with bloom, the scene gains a strong sense of depth and atmosphere.



Here's a look at the implementation details of the volumetric light.

We use Unity's built-in curves to define the shape of the light volume, which makes it easy to adjust the shape at runtime; changes in the intensity parameter are also defined by curves.

To further simulate smoke, we also use a 3D noise texture to create the effect of dynamically flowing smoke. The noise smoke has its own adjustable parameters, such as particle size, scale ratio, noise intensity, and flow speed.

In addition, a cookie map can be used to customize the projected shape of the volumetric light. The cookie map also introduces high-frequency components, which require a higher sample count to avoid aliasing. A jitter algorithm can reduce the aliasing caused by undersampling; we implemented two kinds of jitter, a Bayer pattern and blue noise, and the results show that blue noise combined with temporal AA achieves a better volumetric light effect at a lower sample count.
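The role of the jitter is easiest to see in a sketch of the raymarch loop: each ray's sample pattern is offset by a per-pixel noise value, trading banding for noise that temporal AA then averages away. The density callback stands in for the curve, 3D noise, and cookie evaluation described above; the structure is illustrative, not the talk's code.

```csharp
using UnityEngine;

public static class VolumetricLightSketch
{
    // Jittered raymarch through the light volume. jitter01 is the per-pixel
    // noise value (Bayer pattern or blue noise), in [0,1).
    public static float March(Vector3 rayStart, Vector3 rayDir, float rayLength,
                              int steps, float jitter01,
                              System.Func<Vector3, float> density)
    {
        float stepSize = rayLength / steps;
        Vector3 p = rayStart + rayDir * (stepSize * jitter01); // offset the whole pattern
        float accum = 0f;
        for (int i = 0; i < steps; i++)
        {
            accum += density(p) * stepSize; // density combines the shape curve,
                                            // 3D noise and cookie map
            p += rayDir * stepSize;
        }
        return accum;
    }
}
```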



Let's take a look at an example of using real-time GI.

In this simple demo scene, we use Enlighten to bake the real-time GI lightmaps, then use a dynamic emissive material together with volumetric light as the light source. We use the AVPro plugin to decode a video file onto the emissive texture and set the intensity to 1 or above, which gives us a dynamic, bright area light. Remember to update the GI cache so the lighting environment can update dynamically at runtime; used together with dynamic volumetric light, the overall lighting effect looks impressive.
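A sketch of the emissive side of this setup is shown below. The Standard-shader property names are real; the video texture is assumed to be supplied by the decoder (AVPro in the talk), whose own API is not shown here. The mesh must also participate in Enlighten's real-time GI for the emission to light the scene.

```csharp
using UnityEngine;

public class VideoAreaLightSketch : MonoBehaviour
{
    public Texture videoTexture;  // assumed: provided by the video decoder
    public float intensity = 2f;  // >= 1 so the surface acts as a bright area light
    Renderer rend;

    void Start()
    {
        rend = GetComponent<Renderer>();
        rend.material.EnableKeyword("_EMISSION");
        rend.material.SetTexture("_EmissionMap", videoTexture);
        rend.material.SetColor("_EmissionColor", Color.white * intensity);
    }

    void Update()
    {
        // Notify the real-time GI system that this renderer's emission
        // changed, so the baked Enlighten data updates at runtime.
        DynamicGI.SetEmissive(rend, Color.white * intensity);
    }
}
```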



For dynamic AO on the characters, we use a modified HBAO that applies saturation and hue adjustments to the colors in the AO regions, so that the image does not look dirty after AO is applied. In the comparison, the right image shows stronger depth than the left after AO.



We also re-implemented an image-based glare effect for cartoon rendering to simulate the ghosting and starburst scattering of a lens. It takes highlight regions, extracted in a similar way to bloom, as input, convolves them in different directions, and applies color modulation to obtain the final result.



Let's take a look at some of the CG videos and close-ups.



This is another set of scenes. We can see that, after applying the rendering techniques mentioned earlier, the entire scene comes close to offline-rendered quality.



The following figure lists the main rendering features applied to the scene above. These effects include: stylized PBR materials, cartoon-style AO, screen-space outlines, screen-space reflections, and tessellation.

Applying these effects together plays an important role in high-quality animation-style scene rendering. Our goal is to add stylized adjustments on top of PBR shading to make it more expressive.



Most of the materials in the scene are physically based. We made some stylized adaptations to the PBR texture set, such as cartoon-style color adjustment and the emphasis or omission of material detail. Combined with image-space outlines to emphasize object edges, the overall scene reads much closer to an animation style.



These images show how the materials' light and shade vary under different lighting angles.



This shows the change in light and shadow.



Beyond scene rendering, let's look at other content involved in animation rendering, starting with facial expression animation.

We use blend shapes to create facial expressions. The eyes, mouth, and eyebrows are each authored independently, and our custom facial expression plugin automatically maps expression animations and lip-sync to voice. In interactive applications, we can also drive facial expressions from predefined expression sets.



When animation is imported as Humanoid in Unity, large joint rotations can deform the joints unsatisfactorily, depending on the animation quality.

For this reason, we export a corrective blend shape for each joint from the modeling software into Unity to counteract the joint deformation, and an automatic control script blends the shapes according to the joint's rotation angle. To ensure better results, we made two blend shapes for each joint, one for 90 degrees and another for 140 degrees.
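A control script of the kind described might look like the sketch below: it reads the joint's bend angle and cross-fades the two corrective shapes. The rest-pose assumption (identity local rotation) and the blend ranges are illustrative.

```csharp
using UnityEngine;

public class JointCorrectiveSketch : MonoBehaviour
{
    public SkinnedMeshRenderer skin;
    public Transform joint;       // e.g. an elbow or knee
    public int shape90Index;      // corrective blend shape authored at 90 degrees
    public int shape140Index;     // corrective blend shape authored at 140 degrees

    void LateUpdate()
    {
        // Bend angle relative to the rest pose (assumed identity here).
        float angle = Quaternion.Angle(Quaternion.identity, joint.localRotation);

        // Ramp the 90-degree shape in over 0..90, then fade the 140-degree
        // shape in between 90 and 140. Weights are 0..100 in Unity.
        float w90 = Mathf.Clamp01(angle / 90f) * 100f;
        float w140 = Mathf.Clamp01((angle - 90f) / 50f) * 100f;

        skin.SetBlendShapeWeight(shape90Index, w90);
        skin.SetBlendShapeWeight(shape140Index, w140);
    }
}
```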

Another method is to use additional bones for joint correction. That method is easier to author, but blend shapes give better structural detail.



To handle more complex scene dynamics, such as fluids and destruction, we can use the Alembic format, or use EXR textures as a carrier to import vertex animation from Houdini or other DCC tools. Houdini provides good support for exporting vertex animation to EXR textures, and for real-time applications, vertex animation textures run on the GPU, so they are more efficient and load faster than the Alembic format.
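Conceptually, playback of a vertex animation texture is just a per-vertex lookup, sketched below on the CPU for readability; in practice it runs in the vertex shader, which is exactly why it outperforms Alembic at runtime. The texel layout (one frame per row, one vertex per column) is an assumed convention.

```csharp
using UnityEngine;

public static class VatPlaybackSketch
{
    // Sample one vertex's position for the current time from a float (EXR)
    // position texture exported by Houdini or another DCC tool.
    public static Vector3 SamplePosition(Texture2D posTex, int vertexIndex,
                                         float time, float fps,
                                         Vector3 scale, Vector3 offset)
    {
        int frame = Mathf.FloorToInt(time * fps) % posTex.height;
        float u = (vertexIndex + 0.5f) / posTex.width;   // column = vertex
        float v = (frame + 0.5f) / posTex.height;        // row = frame
        Color c = posTex.GetPixelBilinear(u, v);
        // EXR stores real float data, so only the exporter's scale/offset
        // needs to be undone.
        return Vector3.Scale(new Vector3(c.r, c.g, c.b), scale) + offset;
    }
}
```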



Finally, let's talk about where real-time cartoon rendering can continue to improve in the future.

The first is fully customizable stylized rendering for all material types; we have initially applied brush strokes to skin and costume rendering to achieve a stroke effect.

Next we want to extend this to entire scenes, for example a Makoto Shinkai-style scene, to present a unique and unified animation-style rendering. Another direction is to further improve model fidelity: we want to present CG-level model detail in real time. One can try geometry shaders or pre-baked displacement maps for dynamic adaptive tessellation, which greatly reduces the cost of importing resources and improves runtime efficiency compared with directly importing a raw high-poly model.

The final direction is to optimize the whole pipeline so that it can be adjusted and edited in real time more easily, further improving production efficiency for use in games.



OK, that is the main content of today's cartoon rendering share. Thank you!

