A practical method for real-time hair rendering and coloring


Thorsten Scheuermann

ATI Research, Inc.

Translation: Pan

(Notes in the form "(pancy: ...)" are the translator's notes.)

Introduction:

We present a real-time hair rendering algorithm that uses polygonal models, applied in "Ruby: The Double Cross", a real-time animation shown in this year's SIGGRAPH animation section. The shading is based on the Kajiya-Kay hair rendering model, but on top of it we add a real-time specular term that is closer to real hair highlights (pancy: the original algorithm probably only computed diffuse reflection for hair, or only crudely simulated specular reflection). That part of the work was mainly realized by Marschner et al. In addition, we define a concise rendering scheme (pancy: in a graphics engine a rendering effect is usually called a "technique", translated here as rendering scheme) that approximates the effect of back-to-front sorting in order to render multiple layers of translucent hair. (pancy: approximating back-to-front sorting is ATI's key GPU technique for multi-layer translucent rendering, and is described in detail later in this paper.) In previous algorithms, sorting the multiple layers of translucent material had to be done on the CPU every frame; our algorithm discards that approach and relies on the GPU to order the translucent hair layers. (pancy: here is why the hair used to need sorting: translucency is produced by alpha blending, whose drawback is that it requires back-to-front order. Translucency is relative: a surface is translucent only relative to what was drawn before it, and whatever is drawn after it covers it completely. Since there is only one hair model, unless it is split apart at modeling time, the CPU has to decide the draw order at run time to achieve a correct translucent effect.)

Hair Model:

The input model consists of many layers of two-dimensional polygon strips that approximate the hair; the layering makes it easy to mimic the details of hair while keeping the geometry a simple set of patches. This polygon model is cheap to render because its vertex count is low, which reduces the load on the vertex shader, and the simplified geometry of the polygon layers makes back-to-front rendering easy.

The model uses two main textures: one maps the fine strand structure, and the other is a set of opaque textures used to patch up mismatches between hair layers. (pancy: hair needs a translucent effect mainly because the strands become gradually transparent toward the tips, an effect alpha test cannot replace. Patching mismatches between the hair layers, however, does not involve the tips, so a completely opaque material can be used there.)

Hair Rendering:

1: Diffuse part:

We use an offset and a scale on n·l to simulate the diffuse lighting: diffuse = max(0, 0.75 (n·l) + 0.25). (pancy: we usually compute diffuse light with n·l directly as the diffuse factor; the formula above compresses the diffuse range of the hair into a smaller interval, so the result shows less contrast than the usual computation and looks brighter. You can apply this formula selectively depending on your model's hair quality.) This diffuse treatment brightens the lit portion of the hair and gives the whole head of hair a soft appearance.
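The scaled-and-biased diffuse term can be sketched as follows (a minimal Python illustration; the function name and the plain-tuple vectors are mine, not from the paper):

```python
def biased_diffuse(n, l):
    """Hair diffuse term from the paper: a scaled and biased N.L.

    Compared with the usual max(0, dot(n, l)), the 0.75/0.25 scale
    and bias keeps shadowed hair from going fully black, giving the
    soft look described above. `n` and `l` are assumed to be
    normalized 3-vectors given as tuples.
    """
    ndotl = sum(a * b for a, b in zip(n, l))
    return max(0.0, 0.75 * ndotl + 0.25)

# Light from straight ahead: full intensity, same as plain N.L.
print(biased_diffuse((0, 0, 1), (0, 0, 1)))   # 1.0
# Light at a grazing angle (N.L = 0): still 0.25 instead of 0.
print(biased_diffuse((0, 0, 1), (1, 0, 0)))   # 0.25
```

Note that the term only clamps to zero once n·l drops below −1/3, which is what flattens the contrast across the strands.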

2: Specular Highlights:

The basic method we use for specular lighting follows the Kajiya-Kay shading model, but with an important extension. Marschner et al. pointed out that hair has two kinds of specular highlights distinguishable by the naked eye. The first is light reflected directly off the hair surface; this highlight is shifted toward the hair tips. The second is light that enters the hair strands and reaches the viewer's eye after multiple internal reflections (pancy: this process resembles ambient-light simulation, but strictly speaking the light is reflected many times; a rigorous simulation would require ray tracing, and a real-time algorithm has neither an exact strand model nor ray tracing, so an approximation based on the character of this light is needed). This second highlight takes on the hair's color, is shifted toward the hair roots, and has a sparkly, noisy appearance.

To simulate the visual effect Marschner describes, we evaluate two separate specular terms each time we render. The two terms have different specular colors and different specular exponents, and we shift them in opposite directions along the length of the hair (pancy: presumably one toward the tips and one toward the roots, simulating the two kinds of specular reflection mentioned in the previous paragraph). For the second specular term we additionally modulate the highlight with a noise texture, which achieves a sparkling appearance similar to real hair at minimal cost. (pancy: GPU shaders lack many built-in functions, such as random number generation, so random effects are commonly fed to the shader as CPU-generated textures.) To obtain the two specular directions shifted oppositely along the hair, we reconstruct a shifted surface tangent from each layer's tangent vector and the normal vector of the hair patch: T' = normalize(T + s·N), where s is the shift parameter; depending on whether s is positive or negative, we get a direction shifted toward the root or toward the tip of the hair. We do not fix s per hair patch; instead it is read from a texture similar to the normal maps we are used to, except that here s looks up the tangent detail of the hair, so we call it a "tangent map".
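A sketch of the shifted tangent and a Kajiya-Kay-style strand specular term (Python for illustration; the half-vector formulation and all function names are my assumptions, not taken verbatim from the paper):

```python
import math

def normalize(v):
    m = math.sqrt(sum(c * c for c in v))
    return tuple(c / m for c in v)

def shift_tangent(t, n, s):
    # T' = normalize(T + s*N): positive or negative s (read from the
    # "tangent map" in the paper) shifts the resulting highlight
    # toward the root or the tip along the strand.
    return normalize(tuple(tc + s * nc for tc, nc in zip(t, n)))

def strand_specular(t, h, exponent):
    # Kajiya-Kay-style term with half vector h: the highlight peaks
    # when h is perpendicular to the strand tangent, sin(T,H)^exponent.
    tdoth = sum(a * b for a, b in zip(t, h))
    sin_th = math.sqrt(max(0.0, 1.0 - tdoth * tdoth))
    return sin_th ** exponent

# Two lobes with opposite shifts and different exponents; in the paper
# a noise texture would additionally modulate the second lobe.
t, n = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)
h = normalize((0.5, 1.0, 0.0))
spec = (strand_specular(shift_tangent(t, n, 0.1), h, 40.0)
        + 0.5 * strand_specular(shift_tangent(t, n, -0.1), h, 20.0))
```

The shift values 0.1/−0.1 and the exponents are placeholder numbers; in practice they would come from the tangent map and material constants.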

(pancy: as said before, the idea of the second specular term is good but hard to implement exactly; it can only be approximated. He mentions the tangent map here, so let me explain the term. If each hair layer used only one tangent, i.e. the tangent of that polygon patch's plane, then the computed highlight would be per patch rather than per strand. So they borrowed the idea of normal mapping, which records normal details in a texture: record the strand-level details of the hair in an image, except only the tangent details rather than all the details, then restore them through the UV coordinates at render time. This recovers part of the per-strand information, at least from one viewing angle. Of course the method has flaws, e.g. at head-on angles. By the way, tessellation can now run on most consumer hardware, so this algorithm could probably be improved a lot, e.g. with some kind of tangent displacement map and so on... of course that's just me rambling; ATI or Nvidia surely have better ways to render hair by now.)

3: Ambient occlusion:

To represent the hair's self-shadowing cheaply, we compute an ambient occlusion term for each vertex in the program's preprocessing phase; in the final pixel processing we scale the pixel's diffuse + specular color by this AO term and output the result. (pancy: there is not much to add here ^O^. As for ambient occlusion itself: it mainly simulates ambient light, i.e. indirect reflection, similar to the second specular term above. An exact simulation is expensive, so the approximation of "computing occlusion" was proposed: an ambient occlusion value is computed from how much the surrounding geometry blocks a vertex. Note that here it is precomputed per vertex rather than done in screen space. As the paper describes, for hair rendering the algorithm mainly simulates the shadow the hair casts on itself.)
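The final combine step could look like the following (the exact way base color, lighting, and AO are multiplied is not spelled out in this summary, so treat the formula as an assumption):

```python
def shade_hair_pixel(base_color, diffuse, specular, ao):
    # Assumed combine: the precomputed per-vertex ambient-occlusion
    # term scales the lit color; specular is added on top of the
    # diffusely lit base color before the AO scale.
    return tuple(ao * (c * diffuse + specular) for c in base_color)

# Half-occluded pixel, fully lit diffusely, no specular:
shade_hair_pixel((1.0, 0.5, 0.25), diffuse=1.0, specular=0.0, ao=0.5)
```

In a real shader `diffuse` and `specular` would be the terms from the previous two sections, interpolated AO coming from the vertices.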

Approximate depth sort rendering:

To blend the translucent hair correctly, we must recover the back-to-front depth order of the hair layers. At modeling time we run a preprocessing step that sorts the hair layers by their distance from the head and records the resulting order in a static index buffer. At render time we then draw the model in 4 passes:

Pass 1:

1, enable alpha test, letting only opaque pixels pass

2, disable backface culling

3, enable the depth buffer, with the test set to less

4, disable writes to the color buffer

The pixel shader in this pass only returns alpha information.

Pass 2:

1, disable backface culling

2, enable the depth buffer, with the test set to equal

Pass 3:

1, enable backface culling, culling front faces

2, disable Z writes

Pass 4:

1, enable backface culling, culling back faces

2, enable the Z buffer, with the test set to less

(pancy: ATI's famous 4-pass translucent rendering scheme. The general idea is that the first two passes find the opaque parts and draw them first, avoiding the awkward situation of transparent parts covering opaque parts. Then the translucent part is drawn back faces first, front faces second. This draws perfectly translucent hair while cleverly avoiding the per-layer sorting that the overlapping hair layers previously required.)
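For reference, the four passes can be restated compactly as render-state descriptions (Python data for illustration only; the key names are invented rather than a real graphics API, and states the text leaves unspecified, such as depth writes in pass 2, are my assumptions):

```python
# Each dict summarizes one pass of the scheme described above.
HAIR_PASSES = [
    {"name": "prime Z with opaque pixels",       # Pass 1
     "alpha_test": "opaque only", "cull": None,
     "z_test": "less", "z_write": True, "color_write": False},
    {"name": "shade the opaque pixels",          # Pass 2
     "alpha_test": None, "cull": None,
     "z_test": "equal", "z_write": False, "color_write": True},
    {"name": "blend transparent back faces",     # Pass 3
     "alpha_test": None, "cull": "front",
     "z_test": "less", "z_write": False, "color_write": True},
    {"name": "blend transparent front faces",    # Pass 4
     "alpha_test": None, "cull": "back",
     "z_test": "less", "z_write": True, "color_write": True},
]
```

Only the first pass suppresses color output, and the two transparent passes cull complementary face sets, which is what gives the approximate back-to-front order within the model.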

Early depth culling (early-z culling):

The reason we enable Z-buffer writes in the first pass is that activating alpha test prevents early Z culling (early-z culling). By recording the Z buffer in advance and then running the pixel shader only on pixels that survive against this prepared Z buffer, we get a highly efficient scheme: the remaining three passes gain performance from the pre-filled Z buffer. (pancy: early-z culling means performing the depth test before the pixel shader. Normally the Z test, stencil test and so on happen after the pixel shader; moving them earlier lets some algorithms discard many pixels that would never be visible, saving a lot of work, e.g. in multi-layer translucent hair rendering.)

The biggest advantage of our hair rendering scheme is that we do not need the CPU to sort the hair every frame at run time. The trade-off is that the algorithm only supports "gentle" animation of the hair model, meaning the hair patches must not move too far. (pancy: presumably the hair cannot fully follow a physical model, e.g. strands being hit and then moving together.) If this assumption is not met, for now one can only fall back to CPU-based per-frame sorting of the hair.
