GPU-based Virtual Character Expression Rendering

Author: Xu Yuanchun, Liu Yong. Source: Wanfang Data

Keywords: GPU, virtual expression, shader language

This paper proposes a GPU-based virtual character expression rendering method that uses GPU computing and a shader language to process interpolation data, allowing expression animations of virtual characters to be drawn quickly. Experiments show that the method is simple, effective, and well suited to rendering facial expressions.

1. Introduction

Deformation (morphing) technology is an important part of computer animation. It draws on computer graphics, image processing, computational mathematics, and other disciplines to realize deformation animation on a computer. It is a research field that has emerged in recent years and has important academic and practical value. Here, we use existing deformation techniques to simulate the facial expressions of a virtual character. Simulating facial expressions conveniently and quickly is a hot topic in current research. Building on a study of interpolation algorithms, this paper combines GPU acceleration to implement a blending system for character expressions.

2. Key Technologies and Theories

Deformation, also known as shape blending, is a continuous, smooth, and natural transition from a source object to a target object. The main research content is designing the intermediate gradual process so that the deformation is smooth and natural; that is, using a mathematical method to automatically generate a series of objects of the same kind that transition smoothly between the two given objects. For example, a certain number of key frames are provided in an animation design, and the intermediate frames are interpolated using a gradual-change method to achieve a continuous animation effect; alternatively, new objects that combine the characteristics of the two objects are generated by the gradual change from the initial object to the target object.
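
As a minimal sketch of such key-frame interpolation (the type Vertex and the function lerpFrames are illustrative names, not from the original), linear blending of two key frames can be written as:

    #include <cstddef>
    #include <vector>

    struct Vertex { float x, y, z; };  // illustrative vertex type

    // Linearly interpolate every vertex between two key frames.
    // t = 0 returns the source frame, t = 1 the target frame.
    std::vector<Vertex> lerpFrames(const std::vector<Vertex>& src,
                                   const std::vector<Vertex>& dst,
                                   float t) {
        std::vector<Vertex> out(src.size());
        for (std::size_t i = 0; i < src.size(); ++i) {
            out[i].x = src[i].x + t * (dst[i].x - src[i].x);
            out[i].y = src[i].y + t * (dst[i].y - src[i].y);
            out[i].z = src[i].z + t * (dst[i].z - src[i].z);
        }
        return out;
    }

Sampling t over [0, 1] yields the intermediate frames of the gradual change.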

Currently, there are multiple ways to generate animated facial expressions for virtual characters. Arai K et al. define an independent parameterized face space, overlay facial expressions, and use bilinear interpolation to realize real-time changes of a virtual character's facial expression. M. Cohen et al. use a direct parametric model to control the face: two types of parameters are defined, where expression parameters control the expression and shape parameters control the face shape, and the parameter values are determined by direct observation of the face. This direct parameterization model describes virtual facial features based on experience and does not consider hierarchical connections between different regions. The free-form deformation (FFD) method of Thomas W. Sederberg et al. embeds the object to be deformed in an elastic control box consisting of a series of control points, a hypothetical three-dimensional mesh. When the control box is pressed, bent, or twisted, the embedded object is deformed accordingly. This is also a parameterization method; when it is used in face animation, deformation control is transferred from a description of the face to the face surface itself. In addition, Platt et al. proposed a facial model based on facial muscles: through spring-like muscles connected to a simulated skeleton underneath, various expressions are generated using the elastic contraction of the muscles. Studying the above facial expression control methods shows that linear interpolation plays a key role in the final drawing stage of facial expressions. Therefore, our goal is to achieve fast, real-time rendering of facial expressions.

In recent years, graphics processor (GPU) performance has greatly improved and programmable features have been developed. People have started to move some stages of the traditional graphics rendering pipeline, as well as some graphics algorithms, from the CPU to the GPU, which greatly improves rendering performance. A shader program running on an NVIDIA GeForce FX 5900 Ultra can run at roughly 20 Gflops, which is equivalent to a 10 GHz Pentium 4. In addition, the memory bandwidth of the graphics system is 25.3 GB/s, compared with 5.96 GB/s for the Pentium 4. This shows the advantage of GPU computation.

Traditional graphics processors accept 3D information as input and process it with fixed functions. The input includes global information about a scene, such as the viewpoint, projection, and illumination, and information about the three-dimensional objects in the scene, such as vertex positions, patch composition, and surface material settings. The processor processes the data in a fixed pipeline, outputs it to the frame buffer, and finally sends it to the display. A programmable graphics processor provides more flexible control over the processing of the input information, which is controlled through programming. It offers programmability at both the vertex level and the pixel level and supports IEEE 32-bit floating-point operations. It supports multiple drawing operations at the same time, which avoids repeated data exchange between the CPU and the GPU; it supports render-to-texture, improving rendering efficiency; and it supports dependent texture reads for convenient data access.

With the development of graphics processors, high-level shading languages have emerged. High-level shading languages are GPU-based C-like programming languages that fully utilize the rendering capabilities of current graphics processors and the specific features of shader programs. These languages provide strong support for vector data types, include basic mathematical functions and texture-processing functions, and provide efficient mechanisms for vertex transformation, illumination processing, and various rendering effects. They accelerate the development of shader programs. Common high-level shading languages include HLSL (developed by Microsoft), GLSL (developed for OpenGL), and Cg (developed by NVIDIA). Here, we use HLSL as our shader programming language.

3. Basic Principles and Key Steps of Virtual Face Animation

3.1 Basic Principles

First, determine a basic mesh model. This mesh model determines the initial coordinates of the model vertices before transformation and is defined as the source mesh model. Then, mesh models for different expressions are defined as target meshes. The mesh models of the different expressions are treated as different key frames, and changes in facial expression are implemented by interpolating between key frames. A new facial expression can be generated not only from two key-frame positions, but also by interpolating four key positions (bilinear interpolation) or eight key positions (trilinear interpolation). This method is quick and intuitive: only a few key-frame mesh models need to be defined to produce a basic face animation.
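
As an illustration of the four-key-frame case (again with assumed names, not from the original), a bilinear blend of four key-frame positions for one vertex can be written as:

    struct Vertex { float x, y, z; };  // illustrative vertex type

    // Linear interpolation of a single vertex position.
    static Vertex lerp(const Vertex& p, const Vertex& q, float t) {
        return { p.x + t * (q.x - p.x),
                 p.y + t * (q.y - p.y),
                 p.z + t * (q.z - p.z) };
    }

    // Bilinear blend of four key-frame positions: u blends within the
    // pairs (a, b) and (c, d); v blends the two intermediate results.
    Vertex bilinearBlend(const Vertex& a, const Vertex& b,
                         const Vertex& c, const Vertex& d,
                         float u, float v) {
        return lerp(lerp(a, b, u), lerp(c, d, u), v);
    }

The trilinear case with eight key frames simply adds a third weight that blends two such bilinear results.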

3.2 Key Steps

3.2.1 Define the Mesh Model

3.2.3 Compute the Model Differences

The differences between each target expression mesh and the source mesh are obtained by iterating over all vertices of the mesh models.
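
A minimal sketch of this difference computation (illustrative names; it assumes the source and target meshes share the same vertex ordering):

    #include <cstddef>
    #include <vector>

    struct Vertex { float x, y, z; };  // illustrative vertex type

    // Per-vertex difference between a target expression mesh and the
    // source mesh. Adding weight * delta[i] back onto source[i] in a
    // vertex shader reproduces the expression at that blend weight.
    std::vector<Vertex> computeDeltas(const std::vector<Vertex>& source,
                                      const std::vector<Vertex>& target) {
        std::vector<Vertex> delta(source.size());
        for (std::size_t i = 0; i < source.size(); ++i) {
            delta[i] = { target[i].x - source[i].x,
                         target[i].y - source[i].y,
                         target[i].z - source[i].z };
        }
        return delta;
    }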

The processing time of this algorithm is much shorter than that of the Hough algorithm, and it can directly obtain the endpoint and length information of a straight line. Although the algorithm handles interference, it is still greatly affected by segmentation: if contour information is lost during segmentation, the algorithm cannot compensate for it automatically. This affects the performance of the algorithm to a certain extent, and the detection accuracy is not high.

 

4. Fast Hough Algorithm

The various improved methods described above all increase the performance of the Hough transform and the speed of line extraction to a certain extent. However, when the real-time requirements are high and the resolution of the processed image is large, the running speed of the algorithm still needs to be improved. To solve this problem, the following fast Hough transform algorithm is proposed.

The idea of this fast Hough algorithm is to use divide and conquer to speed up the calculation of the accumulator A(ρ, θ). The image is divided in half along the x direction: first the transform values of the sub-images are obtained, then the values of the sub-images are combined, and finally the value for the entire image is obtained. The computation can therefore start from individual columns and proceed by iterative calculation of the accumulator values; since the width doubles at each step, the number of iterations is the base-2 logarithm of the image width. Note that before the iterative computation, the image must be zero-padded so that its size is an integer power of 2.

To this end, the symbol Ap(h, i, j) is introduced to denote the accumulated gray value of all pixels along a line segment computed at iteration step p. The start point of the segment is (i, j), and h is the offset of the end point from the start point in the y direction. A positive h indicates that the angle between the segment and the x axis is acute; otherwise the angle is obtuse.

Through analysis, we can obtain the following approximate iteration formula, which combines the left half of a segment with its right half, whose start point is shifted by half the total offset:

Ap(h, i, j) = Ap-1(⌊h/2⌋, i, j) + Ap-1(⌊h/2⌋, i + 2^(p-1), j + ⌈h/2⌉),

with the base case A0(0, i, j) equal to the gray value of the pixel at (i, j).

The following is a description of this fast Hough algorithm:
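
A minimal C++ sketch of this divide-and-conquer accumulation (illustrative names; it assumes a square grayscale image already zero-padded to a power-of-two size, covers only the acute-angle case h ≥ 0, and wraps rows modulo the image height):

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    using Image = std::vector<std::vector<uint32_t>>;  // [row][column]

    // Fast Hough transform following the iteration formula above.
    // The result R[h][j] is the accumulated gray value along the
    // segment that starts at row j in column 0 and whose end point
    // is offset h rows across the full image width.
    Image fastHough(const Image& img) {
        const std::size_t n = img.size();
        // a[blk][h][j]: accumulator for the block of width w starting
        // at column blk * w; base case w = 1 is a single pixel, h = 0.
        std::vector<Image> a(n, Image(1, std::vector<uint32_t>(n)));
        for (std::size_t col = 0; col < n; ++col)
            for (std::size_t j = 0; j < n; ++j)
                a[col][0][j] = img[j][col];

        for (std::size_t w = 1; w < n; w *= 2) {  // block width doubles
            std::vector<Image> b(a.size() / 2,
                                 Image(2 * w, std::vector<uint32_t>(n)));
            for (std::size_t blk = 0; blk < b.size(); ++blk)
                for (std::size_t h = 0; h < 2 * w; ++h)
                    for (std::size_t j = 0; j < n; ++j) {
                        const std::size_t half = h / 2;      // floor(h/2)
                        const std::size_t rest = h - half;   // ceil(h/2)
                        // Left half keeps offset floor(h/2); the right
                        // half uses the same offset but starts ceil(h/2)
                        // rows further along (rows wrap modulo n).
                        b[blk][h][j] = a[2 * blk][half][j]
                                     + a[2 * blk + 1][half][(j + rest) % n];
                    }
            a.swap(b);
        }
        return a[0];  // single block spanning the whole image width
    }

Each pass halves the number of column blocks while doubling the number of representable offsets, so the accumulation takes on the order of n² log n additions rather than the n³ of a direct per-angle accumulation.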

This algorithm requires more storage space than the standard Hough transform, but it greatly improves the time performance and speeds up image processing. This gain in timeliness is significant for image-processing applications.

5. Conclusion

Theory and practice have always been inseparable and complementary. The main reason for the rapid development of this technique lies in its wide range of practical applications; the shortcomings exposed in practice further promote its development, and the cycle repeats, much like the evolution of life. It is used in biomedicine, automation, robot vision, space technology, military defense, and office automation. The technique has therefore attracted wide attention and has good application prospects.

 
