Unite 2018 | Honkai Impact 3: Achieving High-Quality Cartoon Rendering in Unity

Source: Internet
Author: User

This article was published on the official Unity China forum. Since it is all solid, practical content, I have copied it here for my own ease of reference and in case the original is deleted. Original addresses: http://forum.china.unity3d.com/thread-32271-1-1.html and http://forum.china.unity3d.com/thread-32273-1-1.html



The following is the lecture content:

Hello everyone, and welcome to this session at Unite 2018. Let me briefly introduce myself: my name is He Jia, and I am currently the technical director at miHoYo. My team and I focus on PBR and NPR real-time rendering, as well as procedural animation and interactive physics for animated CG and games. Part of our current work is achieving high-quality cartoon rendering in Unity.

The theme of this presentation is "Achieving High-Quality Cartoon Rendering in Unity". The techniques target platforms of different tiers, from mobile to high-performance PCs, optimized for the characteristics of each platform.

Let's start with a brief overview of the main parts of this presentation. First, some of the rendering features applied on the mobile side in Honkai Impact 3. Then I'll talk about techniques used in animation-style CG rendering, such as illustration-style character rendering, special material rendering, effects rendering, and post-processing adapted to cartoon rendering. The last part covers some miscellaneous topics and prospects for future work.

First, let's take a look at some of the rendering features used in Honkai Impact 3 scenes.

As we can see, the scene uses many special effects to enhance its expressiveness, for example: bloom post-processing, dynamic particles, planar reflections, screen distortion effects, and more. We will analyze these effects in turn.

These dynamic effects are shown below.

Let's take a look at how to achieve high-quality reflection effects.

To achieve high-quality reflections on mobile, planar reflection is a good compromise between quality and performance. The usual approach is to place a second camera at a position mirrored about the ground plane and render the scene into a reflection texture, with the ground acting as the plane of symmetry.

To bring out the metallic texture of the ground, we first apply a hexagonal-kernel blur to the reflection result, then perturb it with the metal surface's detail normal map; a specular reflection map and a Fresnel effect further enhance the reflective look. For secondary reflective surfaces that are far from the ground plane or not horizontal, planar reflection does not apply, so we use environment-map reflection as a fallback.

To minimize the overhead of rendering the reflection scene, we limit the reflection resolution to 1/3 or less; since the reflection map is blurred anyway, the reduced resolution is hardly noticeable. We also use simplified versions of the materials while rendering the reflection and skip unimportant small objects.
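
A minimal HLSL sketch (not the shipped shader) of compositing such a reflection: the pre-blurred planar reflection texture is sampled with screen-space UVs perturbed by the detail normal map, and a Fresnel term strengthens grazing reflections. Property names are illustrative, and UnityCG.cginc is assumed to be included for UnpackNormal.

    sampler2D _ReflectionTex;   // blurred planar reflection, rendered by the mirrored camera
    sampler2D _DetailNormalMap; // metal-surface detail normals (tangent space)
    float _DistortionScale;     // how strongly detail normals disturb the reflection
    float _FresnelPower;

    // screenPos comes from ComputeScreenPos in the vertex shader.
    float3 SampleFloorReflection(float4 screenPos, float2 detailUV,
                                 float3 worldNormal, float3 viewDir)
    {
        float2 uv = screenPos.xy / screenPos.w;
        float3 detail = UnpackNormal(tex2D(_DetailNormalMap, detailUV));
        uv += detail.xy * _DistortionScale;          // perturb the reflection lookup

        float3 refl = tex2D(_ReflectionTex, uv).rgb;

        // Fresnel: reflections get stronger at grazing angles.
        float fresnel = pow(1.0 - saturate(dot(worldNormal, viewDir)), _FresnelPower);
        return refl * fresnel;
    }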

Now let's look at another effect: full-screen distortion.

We use screen distortion extensively in Honkai Impact 3 scenes, for example for sword trails, the space-time rupture effect, waterfalls and other scene effects.

For the distortion effect we use 3 channels to store the distortion data: 2 for the UV offset and 1 for a distortion-strength mask, which is used for depth-based clipping and distance-based strength control.

Rendering the distorted result to a frame-buffer texture in a separate pass is expensive on mobile platforms, so we integrated the distortion into the final post-processing pass, which is much faster. Because there is no layering, this approach does cause foreground objects to be blended into the distortion of material behind them; but given the performance limits of mobile platforms, this compromise is worthwhile relative to the overall effect.
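
A minimal sketch of applying the distortion buffer inside the final post-processing pass, assuming the 3-channel layout described above (RG = UV offset, B = strength mask); names are illustrative.

    sampler2D _MainTex;    // scene color
    sampler2D _DistortTex; // distortion buffer rendered by the effect meshes

    float4 ApplyScreenDistortion(float2 uv) : SV_Target
    {
        float4 d = tex2D(_DistortTex, uv);
        float2 offset = d.rg * 2.0 - 1.0; // decode the signed UV offset
        float strength = d.b;             // mask: 0 = no distortion
        return tex2D(_MainTex, uv + offset * strength);
    }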

Let's take a look at the implementation of bloom. With HDR enabled, the whole scene is rendered to an FP16 render target, then downsampled to 1/4 of the original size for the subsequent post-processing.

First, we specify a luminance threshold to extract the bright areas of the image. The implementation is simple: subtract the threshold from the source pixels, and what remains is the extracted high-luminance area; overlaying this layer makes the result look more contrasty and colorful. Next, we generate 4 render targets, each half the size of the previous one, and apply Gaussian blurs with increasing radius to them. Finally, we combine these blurred results to get the final bloom effect.

From the final result we can see that bloom not only expresses the glow of bright areas, but also plays a significant role in the coloring of the whole image.
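
A minimal sketch of the two ends of the bloom chain, the threshold prefilter and the final combine; the separable Gaussian blur passes in between are standard and omitted, and names are illustrative.

    sampler2D _MainTex;   // 1/4-size HDR source
    sampler2D _Bloom0;    // successively half-size, blurred render targets
    sampler2D _Bloom1;
    sampler2D _Bloom2;
    sampler2D _Bloom3;
    float _Threshold;
    float _Intensity;

    float4 Prefilter(float2 uv) : SV_Target
    {
        float3 c = tex2D(_MainTex, uv).rgb;
        return float4(max(c - _Threshold, 0.0), 1.0); // keep what exceeds the threshold
    }

    float4 Combine(float2 uv) : SV_Target
    {
        float3 bloom = tex2D(_Bloom0, uv).rgb + tex2D(_Bloom1, uv).rgb
                     + tex2D(_Bloom2, uv).rgb + tex2D(_Bloom3, uv).rgb;
        return float4(bloom * _Intensity, 1.0);
    }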

When reflection rendering, distortion and bloom are done, these intermediate results are composited together at the end. We use filmic tone mapping with exposure and contrast controls to convert the FP16 HDR image to the final LDR frame buffer. Because these compositing operations are all done in one pass, performance requirements can be met even on mobile devices.
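
A minimal sketch of the final composite. The talk does not say which filmic operator is used; the Hejl-Dawson approximation below is one common choice (it outputs gamma-space LDR directly). Only the exposure control is shown; the contrast control is omitted.

    sampler2D _MainTex;  // FP16 HDR scene
    sampler2D _BloomTex; // combined bloom result
    float _Exposure;

    float3 FilmicToneMap(float3 x)
    {
        x = max(x - 0.004, 0.0);
        return (x * (6.2 * x + 0.5)) / (x * (6.2 * x + 1.7) + 0.06);
    }

    float4 FinalComposite(float2 uv) : SV_Target
    {
        float3 hdr = tex2D(_MainTex, uv).rgb + tex2D(_BloomTex, uv).rgb;
        return float4(FilmicToneMap(hdr * _Exposure), 1.0);
    }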

Next, let's introduce the weather system and the sea of clouds.

We wanted to create a cloud rendering system that gives the player a sense of depth, with rich variety and dynamic lighting changes. The system should also be easy to adjust and use, so that artists can create different types of cloud effects. This was an interesting challenge for us; let's go through these features next.

First let's look at the resources needed to render the clouds. We want stylized cloud lighting that changes dynamically over a 24-hour cycle; storing the lighting directly in textures would require far too many maps and be inconvenient to adjust, so we use multi-layer shading instead.

We use 4 texture channels to represent the cloud's light and shadow: a base illumination layer, shadow layer 1, shadow layer 2, and a rim-light layer. By assigning different colors to each layer, we obtain the cloud's color scheme at different times of day. We prepared a total of 8 cloud templates of different shapes for building a variety of cloudscapes.
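
A minimal sketch of reconstructing a cloud's color from the 4-channel template. The exact layer blending is an assumption consistent with the description; in the game the per-layer colors are keyframed over the time of day.

    sampler2D _CloudTemplate; // R: base lit, G: shadow 1, B: shadow 2, A: rim light
    float4 _BaseColor;
    float4 _Shadow1Color;
    float4 _Shadow2Color;
    float4 _RimColor;

    float3 ShadeCloud(float2 uv)
    {
        float4 t = tex2D(_CloudTemplate, uv);
        float3 c = _BaseColor.rgb * t.r;        // base illumination layer
        c = lerp(c, _Shadow1Color.rgb, t.g);    // blend in shadow layer 1
        c = lerp(c, _Shadow2Color.rgb, t.b);    // deepen with shadow layer 2
        c += _RimColor.rgb * t.a;               // additive rim light
        return c;
    }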

To build the sea of clouds, we use many particle emitters that emit cloud sprites toward the screen; by combining different cloud templates and emission patterns we implement various cloud types and stormy weather, all saved in weather configurations. In addition, we use keyframes to define the colors of the sky background and the clouds: as time passes, the cloud colors change according to the keyframes.

The main performance cost is overdraw. Emitting clouds in a fixed pattern achieves good cloud density with minimal overdraw, but it can look repetitive; adding random position offsets solves that, but then achieving a sufficiently dense sea of clouds requires more particles than the fixed pattern. We fine-tuned the particle emission parameters to find a good trade-off between the two.

This is the sea-of-clouds landscape over a 24-hour day-night cycle.

This is another day-night cycle of the sea-of-clouds landscape.

This is a scene with storm lightning.

Now let's take a look at the weather system used in the game scene.

We change the scene's weather and atmosphere mainly through the global fog effect, the skybox colors and the directional light settings. The fog has many adjustable parameters: it is divided by depth into near and far ranges, each with its own color and intensity, which lets us create a variety of atmospheres. The skybox likewise controls the sky's color gradient, the clouds' light and shadow colors, and so on. Combining these adjustments, we can create sunny, rainy, foggy, overcast and nighttime weather.
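
A minimal sketch of the two-range depth fog; the exact blending used in the game is not specified, so this simply fades each range in with depth, each with its own color and intensity.

    float4 _NearFogColor;   // alpha = intensity
    float4 _FarFogColor;
    float2 _NearRange;      // (start, end) eye depth of the near range
    float2 _FarRange;

    float3 ApplyTwoRangeFog(float3 sceneColor, float eyeDepth)
    {
        float nearF = saturate((eyeDepth - _NearRange.x) / (_NearRange.y - _NearRange.x));
        float farF  = saturate((eyeDepth - _FarRange.x)  / (_FarRange.y  - _FarRange.x));
        float3 c = lerp(sceneColor, _NearFogColor.rgb, nearF * _NearFogColor.a);
        return lerp(c, _FarFogColor.rgb, farF * _FarFogColor.a);
    }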

In addition, characters are affected by the environment: the main light color is determined by the directional light, while local shadow changes, such as a character entering a shadowed area, are defined by lighting volumes placed manually in the level editor.

Let's take a look at the use of depth of field in the game.

Depth of field is generally uncommon in mobile games, because the usual implementations are still expensive on mobile platforms. We use the depth-of-field effect to highlight characters in the character-selection interface and mission briefings.

Since these scenes do not require a full general-purpose depth of field, we use a special approach to improve mobile performance: instead of using the depth buffer for circle-of-confusion blending, we use a separate camera to draw the background layer directly. After the blur is applied, the final image is obtained by compositing the background and the foreground characters together.

For better visual results, we use a hexagonal sampling pattern to get nicer bokeh shapes. To make the bokeh look crisper, we use the luminance value as a weighting factor with an adjustable intensity exponent; 2 is usually a suitable value.

To keep performance stable, the resolution of the blurred background depends on the blur amount: larger blur sizes use lower resolutions, where the loss is less noticeable. We use Unity's built-in animation curves to describe the mapping between the two.
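
A minimal sketch of the luminance-weighted bokeh gather. The real effect uses a hexagonal sampling pattern; here the offsets come from a uniform array that would be filled with hexagon samples, and names are illustrative.

    sampler2D _BackgroundTex;
    float4 _Kernel[37];    // xy = precomputed hexagonal sample offsets (assumed)
    float _BlurRadius;
    float _BokehBoost;     // luminance exponent; 2 is usually suitable per the talk

    float4 BokehBlur(float2 uv) : SV_Target
    {
        float3 sum = 0;
        float wsum = 0;
        for (int i = 0; i < 37; i++)
        {
            float3 c = tex2D(_BackgroundTex, uv + _Kernel[i].xy * _BlurRadius).rgb;
            // Weight bright samples more so highlights keep crisp bokeh shapes.
            float w = pow(1.0 + dot(c, float3(0.299, 0.587, 0.114)), _BokehBoost);
            sum += c * w;
            wsum += w;
        }
        return float4(sum / wsum, 1.0);
    }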

This shows the results of dynamically adjusting the blur size and the bokeh strength.

We also implemented a cool-looking effect in the game scenes: when the last enemy is dealt a fatal blow, bullet time kicks in and all fast-moving objects slow down; on a rainy day you can then clearly see the shape of the raindrops.

To achieve this effect, we use 4 keyframes representing the shape of a raindrop at different velocities, and stretch them vertically according to the time scale and speed. At the normal time scale a raindrop looks like a straight streak, gradually shortening into a droplet shape as time slows down. Here we again use animation curves to control the relationship among the stretch, the keyframe selection and the time scale, which makes adjustment very flexible and convenient.

So far we have covered rendering features optimized for mobile; now let's talk about rendering methods for animation-style real-time CG and next-generation games.

In the past two years we have produced two short videos that reflect our new rendering style. (Bilibili video: https://www.bilibili.com/video/av14260225)

We published it on Bilibili, where within 3 days it reached first place in the site-wide monthly ranking; it has accumulated more than 3 million views so far. (Bilibili video: https://www.bilibili.com/video/av7244731)

Let's talk about some of the real-time CG rendering techniques used in these videos.

First, character rendering. Our goal is fully dynamic lighting and shading, where all materials respond correctly to the various lighting phenomena, including the main light and ambient light from the environment. This requires that we not paint any lighting into the textures.


The main features used for character rendering are: the multi-channel ramp shading method; special materials such as eyes, hair and other anisotropic materials; PCSS-based soft character shadows; and high-quality outlines.

First, let's look at the multi-channel ramp shading method.

We want the character's shadow and color transitions to show a subtle illustration style, so we use 2D ramp textures to represent these subtle changes, where the RGB channels describe the diffuse shadow ranges of different shadow layers. Each layer can have a different color, which allows fine control of color within the light-to-dark transition.

For a cartoon-style image, if the shading is just a pure light-dark change, the shadows look dirty and lack expressiveness; if instead the saturation and hue shift in the dark regions, the overall color looks more vivid. And by adjusting the vertical texture coordinate of the ramp, we can dynamically blend between soft and hard shading styles. Seen from another angle, this method also indirectly conveys the subsurface-scattering look of skin.
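
A minimal sketch of the multi-channel ramp shading. The exact layer blending is not given in the talk; here each ramp channel gates one layer's tint, and the V coordinate selects between the soft and hard ramp rows.

    sampler2D _RampTex;     // RGB = shadow ranges of three layers, V = soft/hard style
    float4 _Layer1Color;    // shadow layer tints
    float4 _Layer2Color;
    float _RampStyle;       // vertical ramp coordinate: soft <-> hard transition

    float3 MultiRampDiffuse(float3 baseColor, float3 normal, float3 lightDir)
    {
        float ndl = dot(normal, lightDir) * 0.5 + 0.5;       // half-Lambert in [0,1]
        float3 ramp = tex2D(_RampTex, float2(ndl, _RampStyle)).rgb;

        float3 c = baseColor * ramp.r;                        // base lit layer
        c = lerp(baseColor * _Layer1Color.rgb, c, ramp.g);    // first shadow layer
        c = lerp(baseColor * _Layer2Color.rgb, c, ramp.b);    // deepest shadow layer
        return c;
    }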

The following four images show the effect of overlaying the multi-channel colors layer by layer. As the color layers stack up, the skin's level of detail becomes richer.

The upper and lower comparison images show the rendering corresponding to ramp textures of different profiles; different ramps yield different color styles. A hard ramp is close to cel shading, while a soft ramp resembles the soft shadow gradations of an illustration.

Since we use 2D ramp textures, the transition between them can be adjusted dynamically, and we can use a ramp-mask texture to select the ramp per pixel to achieve a hand-painted illustration style. This ramp mask can be painted by the artists directly on the model: we have a 3D paint tool inside Unity that makes this intuitive.

Another important factor in illustration-style rendering is textured brush strokes. Different stroke texture patterns produce different shading styles. Each brush texture has 4 channels storing brush patterns of different directions, and blending these brushes gives richer stroke variation. In the two comparison images on the right, the version using the stroke texture has a much more hand-drawn feel.
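
A minimal sketch of blending a 4-channel brush texture into the shading. How the four directional patterns are weighted is not specified in the talk; here they are mixed with per-material weights and used to perturb the ramp coordinate so the strokes appear in the light-to-shadow transition.

    sampler2D _BrushTex;   // 4 channels = brush patterns of 4 directions
    float4 _BrushWeights;  // mix of the four directional patterns
    float _BrushStrength;

    float ApplyBrushToRampCoord(float ndl01, float2 uv)
    {
        float4 b = tex2D(_BrushTex, uv);
        float stroke = dot(b, _BrushWeights);   // combined brush pattern
        // Offset the ramp lookup so the shading terminator picks up strokes.
        return saturate(ndl01 + (stroke - 0.5) * _BrushStrength);
    }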

This shows a skin texture with the stroke style applied under different lighting angles.

Next, let's look at how to achieve high-quality rim lighting.

It is likewise based on Fresnel, with parameters to control it such as rim width and smoothness. Besides these global parameters, we also use a brush texture to add some local variation. The rim light can come from the directional light or from the environment map: with the directional light we can shape the rim as needed, while the environment map derives the rim from the ambient lighting so it looks more grounded. Both are useful and can be used together.

To avoid rim light appearing in unwanted areas, we use the AO texture and the shadow map to suppress it in occluded regions. As we can see, the shape of the rim light on the left of the comparison stands out better.
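
A minimal sketch of the masked Fresnel rim light; the width/smoothness remapping and the way the occlusion terms are combined are assumptions consistent with the text.

    sampler2D _AOTex;
    float _RimWidth;      // how far the rim extends inward
    float _RimSmooth;     // softness of the rim's inner edge
    float4 _RimColor;

    float3 RimLight(float3 normal, float3 viewDir, float3 lightDir,
                    float2 uv, float shadowAtten)
    {
        float fresnel = 1.0 - saturate(dot(normal, viewDir));
        // Remap the Fresnel term into a controllable band.
        float rim = smoothstep(1.0 - _RimWidth, 1.0 - _RimWidth + _RimSmooth, fresnel);
        rim *= saturate(dot(normal, lightDir));    // directional-light variant: favor the lit side
        rim *= tex2D(_AOTex, uv).r * shadowAtten;  // suppress rim in occluded/shadowed areas
        return rim * _RimColor.rgb;
    }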

Cartoon-style faces generally should not have many layers of shadow variation; if we apply the earlier ramp method to the face, the result looks as unnatural as the image on the right. To improve this, we use one channel of the vertex color as a mask to control the strength of the face's color layers, suppressing the diffuse shading to achieve the desired cartoon look.

Next, let's take a look at the implementation of high-quality soft character shadows.

If we directly use Unity's built-in CSM shadows, the shadow quality is insufficient when the camera is close to a character, so we render a dedicated shadow map for the character to guarantee constant shadow quality. To this end we fit the shadow map to the view frustum: we intersect the character's bounding box with the frustum and use the intersection as the render area, which maximizes shadow-map utilization.

In addition, variance shadow maps and PCSS are used to reduce shadow artifacts and achieve a natural soft-shadow effect. Furthermore, to get correct shadowing from transparent materials, an extra channel is needed to store shadow intensity based on the material's transparency. In the example image we can see that a translucent skirt casts a natural shadow.
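
A minimal sketch of the VSM lookup using Chebyshev's inequality; the character shadow map is assumed to store (depth, depth squared), and pre-blurring those two moments is what yields the soft, stable edges. The PCSS penumbra-size search is omitted for brevity.

    sampler2D _CharShadowMap; // RG = (mean depth, mean depth^2), pre-blurred
    float _MinVariance;       // clamp to avoid division by near-zero variance

    float VSMShadow(float2 shadowUV, float receiverDepth)
    {
        float2 m = tex2D(_CharShadowMap, shadowUV).rg;
        if (receiverDepth <= m.x)
            return 1.0;                            // fully lit
        float variance = max(m.y - m.x * m.x, _MinVariance);
        float d = receiverDepth - m.x;
        return variance / (variance + d * d);      // Chebyshev upper bound
    }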

For the eyes we use a physically based refraction calculation. Ordinary cartoon models usually treat the eye simply, modeling the pupil as a recess so that it does not bulge out when seen from the side and thus looks natural; but for eye close-ups this approach is not convincing. With a real refraction algorithm, the eyeball itself is modeled as a sphere, and the refraction is computed from the viewing angle to find the corresponding point on the texture.

The comparison below shows the actual refraction effect; we can see that without refraction, the eye looks rather strange from the side.
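
A minimal sketch of the refracted iris lookup: the view ray is refracted at the cornea, then intersected with an iris plane behind it to offset the texture coordinates. The real implementation works on a spherical eyeball; the flat-iris approximation keeps the sketch short, and parameter names are illustrative.

    sampler2D _IrisTex;
    float _IOR;        // ~1.376 for the cornea
    float _IrisDepth;  // distance from the cornea surface to the iris plane

    float3 RefractedIris(float3 normal, float3 viewDir, float2 baseUV,
                         float3 tangent, float3 binormal)
    {
        // viewDir points from the surface toward the camera.
        float3 refr = refract(-viewDir, normal, 1.0 / _IOR);
        // March the refracted ray to the iris plane, convert to a UV offset.
        float t = _IrisDepth / max(abs(dot(refr, normal)), 1e-4);
        float3 offsetWS = refr * t;
        float2 uvOffset = float2(dot(offsetWS, tangent), dot(offsetWS, binormal));
        return tex2D(_IrisTex, baseUV + uvOffset).rgb;
    }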

In addition, we added a caustics effect that further enhances the texture of the eye. For non-photorealistic rendering, physical correctness is not the deciding factor; given the special needs of cartoon rendering, we want the caustic to appear on the side opposite the incident light, and to become more pronounced the more grazing the incident angle is.

The implementation computes an intensity from the angle between the incident light and the front of the eyeball, which we simulate with an inverse diffuse term, then uses the Fresnel formula to vary the brightness, and finally applies an eye caustic texture to obtain the final effect.
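
A minimal sketch of this stylized caustic: the inverse diffuse term places the highlight opposite the light, and a Fresnel term strengthens it at grazing angles, as described above; the caustic texture itself is artist-authored.

    sampler2D _CausticTex;
    float4 _CausticColor;
    float _CausticFresnelPower;

    float3 EyeCaustic(float3 normal, float3 lightDir, float3 viewDir, float2 uv)
    {
        float invDiffuse = saturate(-dot(normal, lightDir));   // bright opposite the light
        float fresnel = pow(1.0 - saturate(dot(normal, viewDir)),
                            _CausticFresnelPower);             // grazing-angle boost
        float caustic = tex2D(_CausticTex, uv).r;
        return _CausticColor.rgb * caustic * invDiffuse * fresnel;
    }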

We can see that without the caustic, the eye looks dull and lacks texture.

This shows the refraction of the eye together with the anisotropic specular of the hair.

Next, let's introduce hair rendering. Hair is one of the more important and distinctive parts of a cartoon-rendered character. We want highlights and shadow gradients that change dynamically with the light source, and the implementation should also offer intuitive, WYSIWYG color adjustment.

As with the skin, we use the multi-ramp method for the hair's diffuse shading. For the specular we overlay two highlight layers; by combining high- and low-frequency highlights we get satisfying results. In addition, we use a gloss map and an AO texture to further enhance the texture of the hair.

The hair's specular uses anisotropic highlights. Whereas ordinary highlights are computed from the normal, anisotropic highlights use the tangent as the basis for the calculation, so the highlight appears as a band perpendicular to the hair direction.

When we build hair models with complex topology, it is hard to unwrap all the UVs vertically; in that case we can also use a flow map to comb the shape of the highlights.

We also use a jitter map to enhance the texture of the cartoon-rendered hair: perturbing the tangent direction produces the fine per-strand highlight detail. Adjusting the jitter map's UV scale also adjusts the perceived weight of the hair highlight.
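
A minimal sketch of a tangent-based anisotropic highlight in the spirit of Kajiya-Kay (the talk does not name the model), with the tangent shifted along the normal by the jitter map to break the highlight into strands. Two such terms with different exponents and shift scales can be summed to form the low- and high-frequency layers mentioned earlier.

    sampler2D _JitterMap;  // per-strand shift values
    float _JitterScale;    // UV scale of the jitter map
    float _ShiftAmount;
    float _SpecExponent;

    float3 ShiftedTangent(float3 T, float3 N, float2 uv)
    {
        float shift = tex2D(_JitterMap, uv * _JitterScale).r - 0.5;
        return normalize(T + N * shift * _ShiftAmount);
    }

    float HairSpecular(float3 T, float3 N, float3 L, float3 V, float2 uv)
    {
        float3 t = ShiftedTangent(T, N, uv);
        float3 H = normalize(L + V);
        float TdotH = dot(t, H);
        float sinTH = sqrt(saturate(1.0 - TdotH * TdotH)); // peaks perpendicular to the hair
        return pow(sinTH, _SpecExponent);
    }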

The four-way breakdown shows the contribution of each highlight component to the final render; the bottom-right corner is the final image. We can see that with the low- and high-frequency components combined, the hair looks much more expressive.

Next, let's look at another implementation: cel-shading-style hair highlights.

Our goal is again to make it dynamic: the highlight should move along the hair according to the light source and camera position, and its shape should also change dynamically as it moves.

Cel-shaded hair highlights have a distinctive form that is hard to describe with traditional highlight formulas. We likewise use the tangent direction instead of the normal for the highlight calculation, and we need a more specialized way to generate the highlight's shape.

First, we UV-unwrap each strand of hair in the vertical direction so that the highlight can move along each strand. Each strand is then parameterized from 0 to 1 to identify the start and end positions of the dynamically generated highlight shape. We use a few curve-defined templates to describe the basic shape of the hair highlight, and a jitter noise texture to modulate the highlight's thickness.

The material exposes many parameters to control the resulting shape: position, offset, width, jitter ratio and so on. By adjusting these parameters we can obtain a variety of shapes as needed.
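
A minimal sketch of the parameterized cel highlight, under these assumptions: the strand UVs put the 0-to-1 parameterization along the strand in V, the curve-defined templates are baked into a texture with one template per row, and the jitter texture thins the highlight per strand.

    sampler2D _ShapeTex;    // highlight shape templates, one per row
    sampler2D _JitterTex;
    float _HighlightPos;    // highlight center along the strand (driven by light/camera)
    float _HighlightWidth;
    float _JitterRatio;
    float _TemplateRow;     // selects which shape template to use

    float CelHairHighlight(float2 strandUV)
    {
        // Signed distance along the strand from the highlight center, in widths.
        float d = (strandUV.y - _HighlightPos) / max(_HighlightWidth, 1e-4);
        float shape = tex2D(_ShapeTex, float2(saturate(d * 0.5 + 0.5), _TemplateRow)).r;
        float jitter = tex2D(_JitterTex, float2(strandUV.x, 0.5)).r;
        float cutoff = lerp(0.5, jitter, _JitterRatio);  // per-strand thickness modulation
        return step(cutoff, shape);                      // hard cel edge
    }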

Let's look at another example of an anisotropic material: silk. This time we use the binormal direction to compute the specular reflection, and composite three highlight layers to achieve the final rendering effect; we set a different color for each layer so that the final material looks richer in color layering.

This shows how the anisotropic highlight shifts at different viewing angles.

Our character materials also include other special materials, such as translucent crystals and scarves. Direct alpha blending cannot convey the desired texture; we need to achieve both refraction and blur, and both effects rely on Unity's command buffers.

For the refraction effect, a command buffer grabs the already-rendered back buffer as the background for refraction sampling; we set different refraction coefficients for the RGB channels and sample three times to simulate dispersion when rendering the refraction.

For the blur effect, the command buffer downsamples and blurs the back buffer, generating 4 render textures with successively halved resolution and increasing blur. The blur amount is then determined from the camera distance, the FOV and the material's own blur parameter, and the corresponding render texture is selected to complete the blur effect.

We also optimized both effects: rather than applying a full-screen blur to the back buffer, we use the object itself as a proxy mesh and process only the region it covers.
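
A minimal sketch of the dispersion refraction: the background grabbed by the command buffer is sampled three times with per-channel offset scales so the channels diverge slightly. Texture and parameter names are illustrative.

    sampler2D _GrabTex;   // back buffer captured via CommandBuffer
    float3 _IORScale;     // per-channel refraction offset scale (R, G, B)

    float3 DispersionRefraction(float4 screenPos, float3 normalVS)
    {
        float2 uv = screenPos.xy / screenPos.w;
        float2 dir = normalVS.xy;   // view-space normal drives the offset direction
        float r = tex2D(_GrabTex, uv + dir * _IORScale.r).r;
        float g = tex2D(_GrabTex, uv + dir * _IORScale.g).g;
        float b = tex2D(_GrabTex, uv + dir * _IORScale.b).b;
        return float3(r, g, b);     // slightly diverging channels read as dispersion
    }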

Next, let's talk about high-quality outlining methods.

For characters and dynamic objects we use the back-face outline method. The outline requires continuous vertex normals so that it has no breaks at sharp edges, so we store smoothed normals in another set of vertex colors.

We also use vertex color to control the width of the outline; for example, where a strand should taper, we paint a gradient toward 0 so the line width transitions smoothly to zero.

In addition, the outline width should be corrected according to the distance between the camera and the object, and each material should be able to have its own outline color. All of these features are required for high-quality outlines.
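
A minimal sketch of the back-face outline vertex shader, assuming the smoothed normal is encoded in the vertex color RGB and the width mask in its alpha (the talk stores them in vertex colors but does not give the exact packing); UnityCG.cginc is assumed to be included.

    float _OutlineWidth;
    float4 _OutlineColor;

    struct appdata
    {
        float4 vertex : POSITION;
        float4 color  : COLOR;   // rgb = smoothed object-space normal (encoded), a = width mask
    };

    float4 OutlineVertex(appdata v) : SV_POSITION
    {
        float3 smoothNormal = v.color.rgb * 2.0 - 1.0;   // decode [0,1] -> [-1,1]
        float3 worldPos = mul(unity_ObjectToWorld, v.vertex).xyz;
        float3 worldNormal = UnityObjectToWorldNormal(smoothNormal);

        // Distance correction keeps the line width roughly constant on screen.
        float dist = distance(_WorldSpaceCameraPos, worldPos);
        float width = _OutlineWidth * v.color.a * dist * 0.001;

        return UnityWorldToClipPos(worldPos + worldNormal * width);
    }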

Although the back-face outline method can reproduce outlines in great detail, it has an inherent flaw: it cannot produce lines at sharp creases that are not silhouette edges. Such crease lines are very common on hard-surface models.

To solve this problem, we add a preprocessing step that extracts these edges, saves them into an additional mesh resource, and draws them with a geometry shader. For these crease lines we use the same adjustment parameters as the back-face method so that they look exactly the same. With crease drawing added, we can see that the image on the right captures more of the line detail.

Another common outlining method is to generate contour lines in image space. By detecting discontinuities in the normals and depth of the scene image, we can obtain quite detailed outlines. Regardless of scene complexity, the cost of this method is constant. We also added hue, lightness and saturation adjustments for the outline color to make the lines look more natural.

The disadvantage of this method is that the line width is hard to control: if we want distance-dependent width, we can only adjust it within a few pixels. So the image-based approach is mainly suited to scene contours; for objects close to the camera, the back-face method works better.
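
A minimal sketch of the image-space contour detection, using a 4-tap cross over Unity's packed depth-normals buffer; the thresholds are illustrative.

    sampler2D _CameraDepthNormalsTexture;
    float4 _CameraDepthNormalsTexture_TexelSize;
    float _DepthThreshold;
    float _NormalThreshold;

    float EdgeMask(float2 uv)
    {
        float2 o = _CameraDepthNormalsTexture_TexelSize.xy;
        float3 n[4]; float d[4];
        DecodeDepthNormal(tex2D(_CameraDepthNormalsTexture, uv + float2( o.x, 0)), d[0], n[0]);
        DecodeDepthNormal(tex2D(_CameraDepthNormalsTexture, uv + float2(-o.x, 0)), d[1], n[1]);
        DecodeDepthNormal(tex2D(_CameraDepthNormalsTexture, uv + float2(0,  o.y)), d[2], n[2]);
        DecodeDepthNormal(tex2D(_CameraDepthNormalsTexture, uv + float2(0, -o.y)), d[3], n[3]);

        float depthEdge  = abs(d[0] - d[1]) + abs(d[2] - d[3]);         // depth discontinuity
        float normalEdge = distance(n[0], n[1]) + distance(n[2], n[3]); // normal discontinuity
        return max(step(_DepthThreshold, depthEdge),
                   step(_NormalThreshold, normalEdge)); // 1 where a contour is detected
    }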

The final approach is the stroke-based method, which is used more in offline rendering and is usually divided into the following steps:

    • Contour extraction: extract contour edges from the mesh, mainly of two kinds, sharp edges and smooth edges.
    • Contour linking: connect adjacent contour edges into contour lines that are as long as possible, according to the mesh's adjacency.
    • Contour segmentation: building on step 2, split the contour lines wherever the curvature or visibility along them changes abruptly.
    • Stroke mapping: author a texture of the desired strokes and map it onto the contour lines from step 3 according to the corresponding texture coordinates.


This method achieves more stylized outlines with more pronounced strokes; Pencil+ and Blender's Freestyle renderer basically take a similar approach. Given its performance overhead, it can be used for CG-quality rendering, but it is not suitable for direct use in games.

Let's take a look at some other special-effects implementations, which also play an important role in scene presentation.

This scene shows the volumetric light. As we can see, with the volumetric light, fog and bloom used together, the scene shows a strong sense of layering and atmosphere.

Here's a look at the implementation details of the volumetric light.

We use Unity's built-in animation curves to shape the light, which makes it easy to adjust the shape at run time; the intensity variation is also defined by curves.

To further simulate smoke, we use a 3D noise texture to create the effect of dynamically flowing smoke. The noise smoke has its own adjustable parameters, for example: particle size, size ratio, noise intensity, flow speed and so on.

In addition, a cookie map can customize the shape of the volumetric light's projection. However, a cookie map also introduces high-frequency content, which requires more samples to avoid aliasing. Jittering can reduce the aliasing caused by undersampling; we implemented two kinds of jitter, a Bayer pattern and blue noise, and our results show that blue noise combined with temporal AA achieves a better volumetric light effect at lower sample counts.
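
A minimal sketch of the jittered ray march. The shadow-map and cookie sampling are reduced to a stub, and blue noise offsets each pixel's starting point so that temporal AA can resolve the undersampling, as described above.

    sampler2D _BlueNoise;
    sampler3D _NoiseTex3D;  // 3D smoke noise
    float _Steps;
    float _Density;
    float _NoiseStrength;
    float3 _FlowSpeed;

    float3 LightAttenuationAt(float3 worldPos)
    {
        // Stub: in the real effect this samples the light's shadow map and cookie map.
        return 1.0;
    }

    float3 VolumetricLight(float3 rayStart, float3 rayDir, float rayLength, float2 screenUV)
    {
        float stepLen = rayLength / _Steps;
        // Blue-noise jitter of the first sample hides banding from low step counts.
        float jitter = tex2D(_BlueNoise, screenUV * 8.0).r;
        float3 pos = rayStart + rayDir * stepLen * jitter;

        float3 accum = 0;
        for (int i = 0; i < (int)_Steps; i++)
        {
            // _Time is Unity's built-in time vector (UnityCG.cginc assumed).
            float noise = tex3D(_NoiseTex3D, pos * 0.1 + _Time.y * _FlowSpeed).r;
            float density = _Density * lerp(1.0, noise, _NoiseStrength);
            accum += LightAttenuationAt(pos) * density * stepLen;
            pos += rayDir * stepLen;
        }
        return accum;
    }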

Let's take a look at an example of using real-time GI.

In this simple demo scene, we use Enlighten to bake the real-time GI data, and then use a dynamic emissive material together with the volumetric light as the light source. We use the AVPro plugin to decode a video file, set it as the emissive texture, and set the intensity to 1 or higher; this gives us a dynamic, bright area light. Remember to update the GI cache so that the lighting environment updates dynamically at runtime. Used together with the dynamic volumetric light, the overall lighting looks impressive.

For dynamic AO on characters, we use a modified HBAO that lets us adjust the saturation and hue of the colors in occluded areas, so that the image does not look dirty after AO is applied. The comparison shows that the right image has stronger layering than the left after AO.

We also re-implemented an image-based glare effect adapted to cartoon rendering, to simulate the ghosting and starburst effects of a lens. It takes a bright-area extraction, obtained much like bloom's, as input, convolves it along different directions, and applies color modulation to obtain the final result.
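
A minimal sketch of one directional streak pass of such a glare: the bright-area input is sampled repeatedly along one direction with decaying weights; several passes in different directions are then summed and color-modulated for the star and ghost look.

    sampler2D _MainTex;   // bright-area extraction
    float2 _StreakDir;    // UV-space step of this pass's direction
    float _Attenuation;   // per-tap decay, e.g. 0.9

    float4 StreakPass(float2 uv) : SV_Target
    {
        float3 sum = 0;
        float w = 1.0;
        float wsum = 0.0;
        for (int i = 0; i < 8; i++)
        {
            sum += tex2D(_MainTex, uv + _StreakDir * i).rgb * w;
            wsum += w;
            w *= _Attenuation;  // taps further along the streak contribute less
        }
        return float4(sum / wsum, 1.0);
    }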

Let's take a look at some of the CG videos and close-ups.

This is another set of scenes. We can see that after applying the rendering techniques mentioned earlier, the entire scene comes close to offline-rendered quality.

The figure below lists the main rendering features applied to the scene above: stylized PBR materials, cartoon-style AO, screen-space outlines, screen-space reflection, and tessellation.

Applied together, these effects play an important role in high-quality animation-style scene rendering. Our goal is to add stylized adjustments on top of PBR shading to make it more expressive.

Most of the materials in the scene are physically based. We made some stylistic adaptations to the PBR texture set, such as cartoon-oriented color adjustments and emphasizing or omitting the material detail of objects. Combined with image-space outlines to emphasize object edges, the overall look of the scene comes closer to an animation style.

This shows the light and shade variation of these materials under different lighting angles.

This shows the changing light and shadow.

Besides scene rendering, let's look at some other content involved in animation rendering, starting with facial expressions.

We use blend shapes to make facial expressions. The expressions of the eyes, mouth and eyebrows are authored independently per region, and our custom facial-expression plugin then automatically maps expression animation and lip-sync to the voice track. In interactive applications, we can also drive facial expressions from predefined expression sets.

When importing animation as a Humanoid rig in Unity, if a joint rotates through a large angle, the shape of the joint may deform unsatisfactorily, depending on the animation quality.

For this reason, we author a corrective blend shape for each joint in the modeling software and bring it into Unity to prevent joint deformation. An automatic control script blends the shapes according to the joint's rotation angle. To ensure better results, we made two blend shapes per joint, one for 90 degrees and another for 140 degrees, to counteract the joint deformation.

Another method is to use additional bones for joint correction. That method is easier to author, but blend shapes are better for structural details.

To achieve more complex scene dynamics such as fluids and destruction, we can use the Alembic format, or use EXR textures as a carrier to import vertex-animation resources from Houdini or other DCC tools. Houdini provides good support for exporting vertex animation to EXR textures; and for real-time applications, vertex animation textures run efficiently because they execute on the GPU, and they load faster than the Alembic format.
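
A minimal sketch of vertex-animation-texture playback in a vertex shader, under an assumed layout (one frame per row, one vertex per column, positions stored directly); Houdini's exporter defines its own layout and naming.

    sampler2D _PosTex;   // EXR vertex animation texture
    float _NumFrames;
    float _NumVerts;
    float _FPS;

    // Call from the vertex shader, passing the SV_VertexID input.
    float3 SampleVAT(uint vertexID)
    {
        float frame = floor(fmod(_Time.y * _FPS, _NumFrames)); // current frame index
        float2 uv = float2((vertexID + 0.5) / _NumVerts,
                           (frame + 0.5) / _NumFrames);
        return tex2Dlod(_PosTex, float4(uv, 0, 0)).xyz;        // position for this vertex/frame
    }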

Finally, let's talk about where real-time cartoon rendering can continue to be improved and refined in the future.

The first is achieving fully customizable stylized rendering for all material types; we have initially applied brushes in the skin and costume rendering to obtain a stroke effect.

Next we want to extend this to entire scenes, such as scenes in the style of Makoto Shinkai, to present a distinctive and unified animation-style rendering. Another point is further improving model fidelity: we want to present CG-level model detail in real time. One can try geometry shaders or pre-baked displacement maps for dynamic adaptive tessellation, which greatly reduces the cost of resource import and improves runtime efficiency compared with directly importing raw high-poly models.

The final step is to optimize the whole pipeline so that it is easier to adjust and edit in real time, further improving production efficiency for use in games.

OK, that is the main content of today's sharing on cartoon rendering. Thank you!
