Zhejiang University Software College: 3D Animation and Interactive Technology Exam Concept Notes


Lecture 1

1. Augmented Reality (AR) technology:

--Fuses three-dimensional animation, stereoscopic vision, and image processing;

--Key steps: modeling, rendering, position calibration, and image fusion;

2. OpenGL is a programming interface for creating real-time 3D images.

3. The term three-dimensional indicates that an object being described or displayed has three dimensions: width, height, depth;

-Computer 3D graphics are essentially planar;

--They are two-dimensional images displayed on a computer screen that provide the illusion of depth (a third dimension);

2D + perspective = 3D

Perspective gives the illusion of depth.

4. True 3D comes from the two eyes observing the same object and forming two images with parallax on the retinas; the brain fuses them into a genuine 3D visual sensation.

5. Definition: rendering is the operation of converting mathematical and graphical data into a 3D spatial image.

6. Transformations: translation, rotation, and scaling.

7. Projection: converts 3D coordinates into 2D coordinates.

8. Rasterization: Use pixels to fill the shape.

9. Texture map: A texture is a picture that is attached to a triangle or polygon.

10. Viewport mapping:

--Maps drawing coordinates to window coordinates;

--Maps logical Cartesian coordinates to physical screen-pixel coordinates;

--The viewport is the client area within the window used to display the clipped drawing; it is not necessarily the entire window, and one unit of logical coordinates does not necessarily correspond to exactly one screen pixel.
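A minimal sketch of this mapping, assuming a GLUT window: the viewport is set in the window's resize callback (the callback name reshape and the choice of the full client area are just examples).

    #include <GL/glut.h>

    /* GLUT calls this when the window is resized; width/height are in pixels. */
    void reshape(int width, int height)
    {
        /* Map the logical drawing area onto the whole client area of the window;
           the viewport could just as well cover only part of it. */
        glViewport(0, 0, width, height);
    }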

11. Projection: From 3D to 2D, two projection modes: orthographic projection and perspective projection.

12. Orthographic projection: also known as parallel projection;

Features: no perspective distortion, but visually unrealistic;

Mainly used in architectural design, CAD, and 2D drawing;

13. Perspective projection:

Near objects appear large and far objects appear small, which is visually realistic.
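The two projection modes correspond to two projection-matrix setups. Below is a minimal sketch assuming the fixed-function matrix stack; the numeric bounds, field of view, and helper-function names are placeholders.

    #include <GL/glu.h>   /* gluPerspective; also pulls in GL/gl.h */

    void set_orthographic(void)
    {
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        /* Parallel projection: left, right, bottom, top, near, far */
        glOrtho(-10.0, 10.0, -10.0, 10.0, 1.0, 100.0);
        glMatrixMode(GL_MODELVIEW);
    }

    void set_perspective(double aspect)
    {
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        /* Perspective projection: vertical field of view, aspect ratio, near, far */
        gluPerspective(45.0, aspect, 1.0, 100.0);
        glMatrixMode(GL_MODELVIEW);
    }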

14. What is OpenGL?

Open Graphics Library

Definition: A software interface for graphics hardware;

Originally created by SGI, it is used to draw two- and three-dimensional graphics on graphics devices with different hardware architectures. OpenGL is not a programming language; its strength is extremely fast parallel floating-point vector operations, not flow control.

15. OpenGL extension mechanism:

ARB: standard extensions (approved by the Architecture Review Board);

EXT: extensions supported by multiple vendors;

16. Back-face culling; the painter's algorithm:

--Sort the primitives to be drawn and draw the farthest first, then progressively nearer ones.

Lecture 2

1. Matter is made of atoms; 3D graphics are made of primitives.

2. Points: Each vertex is a separate point on the screen.

3. Line: Each pair of vertices defines a segment.

4. Line strip: a line drawn from the first vertex through each successive vertex.

5. Line loop: like a line strip, but the last vertex is connected back to the first.

6. What is a vector?

A three-dimensional vector is represented by a triple (x, y, z).

--A vertex is a vector: it represents a position in space;

--A 3D coordinate triple can represent a vector: a vector has a length (magnitude) and a direction, so vector = direction + magnitude.

7. Definition of a vector:

--A vector is a directed line segment (an arrow) from the origin of the coordinate system to the point (x, y, z).

--A point in space is therefore both a vertex and a vector.

8. Unit vectors:

--A vector of length 1 is called a unit vector;

--Converting an arbitrary vector into a unit vector is called normalization: divide the vector by its length.

8. Dot product:

--The dot product of two three-dimensional vectors is a scalar;

--It represents the length of the projection of one vector onto another;

--For unit vectors, the dot product of two vectors is the cosine of the angle between them;

9. Cross product:

The cross product of two vectors is a third vector perpendicular to both;

Purpose: to compute the normal direction of a plane;

Property: not commutative (the order matters);
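A minimal sketch of these three operations (normalization, dot product, cross product) in plain C; the Vec3 type and function names are illustrative, not part of OpenGL.

    #include <math.h>

    typedef struct { float x, y, z; } Vec3;

    /* Divide a vector by its length to obtain a unit vector. */
    Vec3 normalize(Vec3 v)
    {
        float len = sqrtf(v.x * v.x + v.y * v.y + v.z * v.z);
        Vec3 r = { v.x / len, v.y / len, v.z / len };
        return r;
    }

    /* Dot product: a scalar; for unit vectors it equals cos(angle). */
    float dot(Vec3 a, Vec3 b)
    {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }

    /* Cross product: a vector perpendicular to both a and b (order matters). */
    Vec3 cross(Vec3 a, Vec3 b)
    {
        Vec3 r = { a.y * b.z - a.z * b.y,
                   a.z * b.x - a.x * b.z,
                   a.x * b.y - a.y * b.x };
        return r;
    }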

10. Matrix: a data structure composed of rows and columns, generally stored in a program as a two-dimensional array.

11. The role of the Matrix:

--Affine transformations in three-dimensional space are performed using matrix operations: rotation, translation, and scaling;

--Matrices in OpenGL: view transformation, model transformation, projection transformation.

12. Several basic concepts

View transform: Sets the position of the observer or camera;

Model Transformation: Moving objects in a scene;

Model-view: the model and view transforms are unified (combined into one matrix);

Projection transformation: sets the size and shape of the viewing volume;

Viewport transformation: scales the result to the window

13. View Transform:

--The view transform sets the observer's position and line of sight;

--It can be understood as placing a camera in the scene: where the camera is, and which direction it points;

--The view transform should be applied before any other transformation, to stay consistent with the eye coordinate system;

--By default the viewer is at (0,0,0) looking down the negative z-axis;

14. Model Transformation:

--Model transformations are used to manipulate the model and the specific objects within it;

--Move an object to the desired position, then rotate and scale it;
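A minimal sketch of the view transform followed by a model transform, assuming the fixed-function pipeline and GLU; the eye position, angles, and scale factors are placeholder values.

    #include <GL/glu.h>

    void draw_scene(void)
    {
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();

        /* View transform first: camera at (0, 0, 10) looking at the origin,
           with +y as the up direction. */
        gluLookAt(0.0, 0.0, 10.0,    /* eye position    */
                  0.0, 0.0, 0.0,     /* point looked at */
                  0.0, 1.0, 0.0);    /* up vector       */

        /* Model transform: move the object, then rotate and scale it. */
        glTranslatef(2.0f, 0.0f, 0.0f);
        glRotatef(45.0f, 0.0f, 1.0f, 0.0f);   /* 45 degrees about the y-axis */
        glScalef(1.0f, 2.0f, 1.0f);

        /* ... draw the object here ... */
    }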

15. Projection Transformation: Applied after model transformation and view transformation;

The projection transformation actually defines the viewing volume and creates the clipping planes;

Projection transformations are divided into orthographic projection and perspective projection.

16. Viewport transformations

--Maps the color buffer to window pixels;

--Affects the proportions of the on-screen display;

--The final image can be scaled;

17. Model View Matrix

--The Model view matrix is a 4*4 matrix;

--The original vertex coordinate is a four-dimensional vector, which is multiplied by the model view matrix to get the new coordinate after transformation;

--Note: mathematically, the vector is placed on the right and is left-multiplied by the transformation matrix;

--In OpenGL the vector is a row vector and the matrix is stored column-major, which is equivalent to transposing the whole expression;
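For illustration, a translation matrix laid out in OpenGL's column-major order might be loaded like this (a sketch; the helper name load_translation is hypothetical).

    #include <GL/gl.h>

    void load_translation(GLfloat tx, GLfloat ty, GLfloat tz)
    {
        /* Column-major 4x4 matrix: each group of four floats is one COLUMN,
           so the translation ends up in elements 12..14. */
        GLfloat m[16] = {
            1.0f, 0.0f, 0.0f, 0.0f,   /* column 0 */
            0.0f, 1.0f, 0.0f, 0.0f,   /* column 1 */
            0.0f, 0.0f, 1.0f, 0.0f,   /* column 2 */
            tx,   ty,   tz,   1.0f    /* column 3: translation */
        };
        glMatrixMode(GL_MODELVIEW);
        glLoadMatrixf(m);
    }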

18. Translation: moves an object along one or more of the axes.

19. Rotation and scaling: rotation turns an object about an axis; scaling enlarges or shrinks it along the three axes by set factors.

Lecture 3

1. Color is simply a property of light's wavelength; the various colors of the real world are made up of many different kinds of light, distinguished by their wavelengths.

2. The wavelength of light is measured by the distance between the wave's adjacent peaks.

3. A white object reflects light of all wavelengths evenly, while a black object absorbs all wavelengths evenly.

4. OpenGL specifies a color by setting the strength of the red R, Green G, and blue B components respectively.

5. Modeling all available colors as a cube gives the RGB color space;

--Origin point (0,0,0), black;

--diagonal vertices (255,255,255), white;

--Along each axis from the origin lie the saturation gradients of red, green, and blue respectively;

6. The illumination model is a mathematical formula for calculating the luminance and color composition of any point on the surface of a geometric object.

The illumination model is a mathematical method to describe the lighting situation in the real world.

Local illumination model: light intensity depends only on the illuminated object and the light source;

Global illumination model: the light intensity at a point depends on every other point in the scene;

7. Local illumination model: assumes the light source is a point source, the object is opaque, and its surface is smooth, so transmitted and scattered light are approximately zero.

--Only the effect of reflected light is considered in the local illumination model;

--Reflected light includes ambient light, diffuse light, and specular light;

8. Lighting Overview:

Illumination is usually performed prior to texture mapping;

Lighting Effects:

--The texture map can still be seen;

--Lighting can greatly increase the realism of the scene;

--With lighting enabled, the color information on the object's surface is no longer visible;

--With lighting enabled, the material information on the object's surface is used instead;

--The surface normal of the object determines the direction of light reflection;

9. The illumination model in OpenGL:

--Ambient light;

--Diffuse light;

--Specular light;

(1) Ambient light:

--Ambient light does not come from any particular direction; it originates from a light source, but the light has bounced all around the scene;

--Ambient light illuminates the object's surface evenly from all directions;

--Its color is independent of rotation and viewing angle;

The global ambient light has only a color, no direction or position, and there is only one of it.

OpenGL supports at least 8 independent light sources that do have a position and an illumination direction;

(2) Diffuse reflection light:

--In OpenGL, diffuse light is directional: it comes from a specific direction;

--It is reflected uniformly from the surface, with intensity depending on the angle of the incident light, and is scattered in all directions;

--The lighting effect looks the same from any viewpoint;

(3) Specular light:

--Specular light has strong directionality;

--It forms a bright highlight where it strikes the surface;

--The reflected rays travel in nearly the same direction;

--Specular reflection makes an object look shiny;

--The specular effect changes with viewing angle;
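A minimal sketch of configuring one light with these three components in the fixed-function pipeline; all color, position, and shininess values are placeholders.

    #include <GL/gl.h>

    void setup_lighting(void)
    {
        GLfloat ambient[]      = { 0.2f, 0.2f, 0.2f, 1.0f };
        GLfloat diffuse[]      = { 0.8f, 0.8f, 0.8f, 1.0f };
        GLfloat specular[]     = { 1.0f, 1.0f, 1.0f, 1.0f };
        GLfloat position[]     = { 0.0f, 10.0f, 10.0f, 1.0f };  /* w = 1: positional light */
        GLfloat mat_specular[] = { 1.0f, 1.0f, 1.0f, 1.0f };

        glEnable(GL_LIGHTING);
        glEnable(GL_LIGHT0);
        glLightfv(GL_LIGHT0, GL_AMBIENT,  ambient);
        glLightfv(GL_LIGHT0, GL_DIFFUSE,  diffuse);
        glLightfv(GL_LIGHT0, GL_SPECULAR, specular);
        glLightfv(GL_LIGHT0, GL_POSITION, position);

        /* With lighting enabled, surface appearance comes from the material. */
        glMaterialfv(GL_FRONT, GL_SPECULAR, mat_specular);
        glMaterialf(GL_FRONT, GL_SHININESS, 64.0f);
    }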

10. Global illumination models: ray tracing, radiosity.

Lecture 4

1. Fog: makes distant objects look hazy; objects far from the viewpoint become almost invisible. Fog is an effective depth cue.
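A minimal sketch of enabling linear fog in the fixed-function pipeline; the fog color and the start/end distances are placeholder values.

    #include <GL/gl.h>

    void setup_fog(void)
    {
        GLfloat fog_color[] = { 0.7f, 0.7f, 0.7f, 1.0f };

        glEnable(GL_FOG);
        glFogi(GL_FOG_MODE, GL_LINEAR);    /* GL_EXP and GL_EXP2 are the other modes */
        glFogfv(GL_FOG_COLOR, fog_color);
        glFogf(GL_FOG_START, 10.0f);       /* fog begins at this eye distance ...    */
        glFogf(GL_FOG_END,   50.0f);       /* ... and is fully opaque at this one    */
    }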

2. The accumulation buffer

Principle:

(1) After OpenGL renders into the color buffer, the result is not displayed in the window directly but is copied into the accumulation buffer;

(2) After the accumulation buffer has been blended repeatedly, the buffers are swapped and the result is displayed.

Role:

(1) Rendering the scene several times from slightly different viewpoints and accumulating the results achieves full-scene anti-aliasing, with quality better than multisampling;

(2) The motion-blur effect can be produced the same way;
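A minimal sketch of the accumulation-buffer idea, assuming the window was created with an accumulation buffer; draw_scene and jitter_camera stand in for the application's own rendering and viewpoint-jitter code.

    #include <GL/gl.h>

    /* Placeholders: draw_scene() clears the color/depth buffers and renders one
       frame; jitter_camera() offsets the viewpoint slightly for pass i. */
    extern void draw_scene(void);
    extern void jitter_camera(int pass);

    void render_with_accumulation(int passes)
    {
        int i;
        glClear(GL_ACCUM_BUFFER_BIT);
        for (i = 0; i < passes; ++i) {
            jitter_camera(i);
            draw_scene();
            glAccum(GL_ACCUM, 1.0f / passes);   /* add a weighted copy of the color buffer */
        }
        glAccum(GL_RETURN, 1.0f);               /* copy the accumulated sum back */
        /* then swap buffers to display the result */
    }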

3. Dithering: produces the appearance of rich colors from a small number of colors.

4. Bitmaps and pixel maps

--A bitmap uses 1 bit (2 colors) to represent each point;

--A pixel map uses 8 bits (256 colors) to represent each point;

5. Two ways to map an image to the screen:

(1) Direct image drawing: image pixels correspond strictly to screen pixels;

(2) Texture map: Image pixels are mapped to screen pixels after a certain transformation;

6. Texture Map:

Basic concepts:

(1) Texture mapping is the application of image data to three-dimensional primitives;

(2) Texture mapping gives three-dimensional graphics rich surface detail;

(3) A texel (texture element) is an individual image element within the texture;
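A minimal sketch of loading image data as a texture, assuming an RGB image already in memory; the function name and filter choices are illustrative.

    #include <GL/gl.h>

    GLuint create_texture(const unsigned char *pixels, int width, int height)
    {
        GLuint tex;
        glGenTextures(1, &tex);                  /* a texture object is an unsigned integer */
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        /* Upload the texels (the individual image elements of the texture). */
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
                     GL_RGB, GL_UNSIGNED_BYTE, pixels);
        return tex;
    }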

7. Bump-mapping techniques are divided into:

(1) Displacement mapping;

(2) normal mapping;

8. Displacement mapping: A displacement map is a technique that uses a height map to shift the actual geometric point position on a textured surface along the surface normals according to the values stored in the texture.

Lecture 5

1. Rendering: The computer creates an image from the model .

--A model is made up of geometric primitives, which are specified by vertices; OpenGL treats points, lines, polygons, images, and bitmaps as primitives.

--The final rendered image is made up of screen pixels;

2. The process of rendering:

(1) Modeling: build a model out of geometric primitives to obtain a mathematical description of the object;

(2) Transformation: arrange the objects in three-dimensional space and choose an advantageous viewpoint for observing the scene;

(3) Shading: compute the color of every object;

(4) Rasterization: convert the mathematical description of the objects and the associated color information into screen pixels;

3. What is a rendering pipeline?

When we hand the drawing commands to OpenGL, it still has a great deal of work to do to project the 3D scene onto the screen. This series of steps is called OpenGL's rendering pipeline.

The general rendering steps are as follows:

(1) Display lists;

(2) Evaluators;

(3) Vertex operations;

(4) Primitive assembly;

(5) pixel operation;

(6) Texture assembly;

(7) Rasterization;

(8) Fragment operation;

4. The basic steps of OpenGL to build a three-dimensional model:

(1) viewpoint transformation ;

(2) model transformation ;

(3) projection transformation ;

(4) viewport transformation ;

In this way, an object in a three-dimensional space can be represented by a corresponding two-dimensional plane object, and it can be displayed correctly on a two-dimensional computer screen.

5. The rendering pipeline in OpenGL consists of two phases:

(1) First come the vertex-based operations; the primitives are then rasterized, producing fragments;

(2) Texturing, fog, and other fragment-based operations are applied before the fragments are written to the framebuffer;

6. Vertex processing is divided into 4 stages: vertex transformation, lighting, texture coordinate generation and transformation, and clipping.

7. The result of the fragment-based operations is a color value.

8. The OpenGL Shading Language (GLSL) is a high-level language for programming GPUs, with compact code, good readability, and high efficiency.

GLSL's syntax is very close to the C language.

GLSL uses two types of objects: shader Objects and program objects .

Lecture 6

1. What is an animation?

(1) From the production perspective: animation is not filmed from live-action performance; it is a moving image of artistic value produced by a variety of technical means;

(2) From the technical perspective: animation is the playback of a series of static images that exploits the persistence-of-vision effect to create the appearance of continuous motion.

Persistence of vision: after the observed object disappears, its image remains in the brain for a short time, about 1/10 s.

2. Three-dimensional animation types: Deformation animation, skeletal animation.

3. Motion engine technology: analogous to a physics engine, a motion engine is introduced into the game engine; it mainly handles motion control and motion feedback.

Lecture 8 (Review of Exam Points)

1. What is OpenGL?

(1) OpenGL = Open Graphics Library

(2) Definition: A software interface of graphics hardware;

(3) originally created by SGI, used to draw two and three-dimensional graphics on graphics devices of different hardware architectures;

(4) OpenGL is not a programming language (though it includes GLSL); its strength is extremely fast parallel floating-point vector operations, not flow control.

2. OpenGL extension mechanism:

(1) ARB: standard extensions (approved by the Architecture Review Board);

(2) EXT: extensions supported by multiple vendors;

3. OpenGL and Platform

(1) OpenGL ES for embedded platforms;

(2) WebGL for browsers;

(3) WGL for MS Windows;

(4) CGL for Mac OS;

(5) GLX for the X Window System;

4. OpenGL and related tools

(1) Cross-platform toolbox: GLUT = OpenGL Utility Toolkit

--GLUT was developed at SGI;

--It works on Windows, Linux, and Mac;

--The current open-source implementation is FreeGLUT;

(2) Wrapper library for handling extensions: GLEW

5. OpenGL State Machine

(1) OpenGL uses a set of state variables to maintain the state of the graphics rendering pipeline;

(2) OpenGL uses a state model (state machine) to track all state variables;

(3) Once a value is set, that state persists until it is changed;

6. Basic Geometry elements

(1) GL_POINTS: each vertex is a separate point on the screen;

(2) GL_LINES: each pair of vertices defines a line segment;

(3) GL_LINE_STRIP: a line drawn from the first vertex sequentially through the subsequent vertices;

(4) GL_LINE_LOOP: as above, but the last vertex is connected back to the first;

(5) GL_TRIANGLES: every three vertices define a new triangle;

(6) GL_TRIANGLE_STRIP: a strip of triangles that share vertices along the strip;

(7) GL_TRIANGLE_FAN: a fan-shaped group of triangles around a shared central vertex, each sharing an adjacent vertex with the next;
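A minimal sketch of drawing one primitive in immediate mode; GL_TRIANGLES is used here, but any of the primitive types above could be substituted, and the coordinates are arbitrary.

    #include <GL/gl.h>

    void draw_triangle(void)
    {
        glBegin(GL_TRIANGLES);            /* any of the primitive types above fits here */
        glVertex3f(-1.0f, -1.0f, 0.0f);
        glVertex3f( 1.0f, -1.0f, 0.0f);
        glVertex3f( 0.0f,  1.0f, 0.0f);
        glEnd();
    }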

6. Triangle winding:

Counterclockwise is the front-facing (positive) direction by default; this can be changed with glFrontFace(GL_CW).

7. Back-face culling

(1) Painter's algorithm

--Sort the primitives to be drawn and draw the farthest first, then progressively nearer ones;

--Inefficient and resource-intensive;

(2) Hidden-face (back-face) culling

--glEnable(GL_CULL_FACE)

--glDisable(GL_CULL_FACE)

8. Depth testing

Self-occluding objects drawn without depth testing enabled will display incorrectly.

9. Polygon offset: when primitives have nearly identical coordinates, flickering (z-fighting from equal depth values) occurs even with depth testing enabled; polygon offset is used to resolve it.
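A minimal sketch of using polygon offset to draw a decal over a surface without z-fighting; draw_base_surface and draw_decal are placeholders for the application's geometry, and the offset values are typical but scene-dependent.

    #include <GL/gl.h>

    extern void draw_base_surface(void);   /* placeholder: the underlying geometry */
    extern void draw_decal(void);          /* placeholder: the coplanar geometry   */

    void draw_decal_over_surface(void)
    {
        glEnable(GL_DEPTH_TEST);
        draw_base_surface();

        glEnable(GL_POLYGON_OFFSET_FILL);
        glPolygonOffset(-1.0f, -1.0f);     /* pull the decal slightly toward the viewer */
        draw_decal();
        glDisable(GL_POLYGON_OFFSET_FILL);
    }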

10. What is a vector?

A three-dimensional vector is represented by a triple (x, y, z).

(1) A vertex is a vector that represents a position in space;

(2) A 3D coordinate triple can represent a vector: a vector has a length (magnitude) and a direction, so vector = direction + magnitude.

11. Definition of vectors

(1) A vector is a directed line segment (an arrow) from the origin of the coordinate system to the point (x, y, z);

(2) A point in space, which is both a vertex and a vector;

12. The role of vectors in graphics

(1) Indicate position: vertex;

(2) Indicating direction: the line of sight, a plane normal;

13. Unit vector

(1) A vector of length 1 is called a unit vector;

(2) Converting an arbitrary vector into a unit vector is called normalization: divide the vector by its length.

14. Dot product

(1) The dot product of two three-dimensional vectors is a scalar;

(2) It represents the length of the projection of one vector onto another;

(3) For unit vectors, the dot product of two vectors is the cosine of the angle between them;

15. Cross product

(1) The cross product of two vectors is a third vector perpendicular to both;

(2) Use: computing the normal direction of a plane;

(3) Property: not commutative (the order matters);

16. Matrix

(1) A matrix is a data structure composed of rows and columns;

(2) In a program it is generally stored as a two-dimensional array;

(3) OpenGL stores matrices in column-major order;

17. The role of matrices: spatial transformations

(1) Affine transformations in three-dimensional space are performed using matrix operations: rotation, translation, and scaling;

(2) The matrix in OpenGL: View transformation, model transformation, projection transformation;

19. Several basic concepts

(1) View transform: Sets the position of the observer or camera;

(2) Model transformation: Moving objects in a scene;

(3) Model-view: the model and view transforms are unified (combined into one matrix);

(4) Projection transformation: sets the size and shape of the viewing volume;

(5) Viewport transformation: scales the result to the window;

20. View Transformations

(1) The function of the view transform is to set the observer's position and the line of sight;

(2) can be understood as placing the camera in the scene:

-Where the camera is located;

--The direction the camera points;

(3) Applying the view transform before any other transformation guarantees consistency with the eye coordinate system;

(4) By default the viewer is at (0,0,0) looking down the negative z-axis;

(5) In orthographic projection the viewpoint is effectively infinitely far away along the z-axis, so any object inside the viewing volume is visible;

21. Model Transformations

(1) Model transformation is used to manipulate the model and its specific objects;

(2) Move an object to the desired position, then rotate and scale it;

22. Model and view consistency: Model transformations and view transformations finally form a unified model view matrix.

23. Model View Matrix

(1) The Model View matrix is a 4*4 matrix;

(2) The original vertex coordinate is a four-dimensional vector, which is multiplied by the model view matrix to get the new coordinate after the transformation;

(3) Note: mathematically, the vector is placed on the right and is left-multiplied by the transformation matrix;

(4) In OpenGL the vector is a row vector and the matrix is stored column-major, which is equivalent to transposing the whole expression;

24. What is a projection?

From three-dimensional clip space to two-dimensional screen space.

25. What are the projection types and their characteristics?

Orthographic projection

Perspective projection

26. Orthographic projection

Features: no perspective distortion, but visually unrealistic;

Mainly used in architectural design, CAD, or 2D drawing.

The viewing volume is a rectangular box.

27. Perspective projection

Features: near objects appear large and far objects appear small, which is visually realistic.

The viewing volume is a frustum (a truncated pyramid);

28. OpenGL specifies a color by setting the intensity of its red, green, and blue components separately.

29. Modeling all available colors as a cube gives the RGB color space.

--Origin point (0,0,0), black;

--diagonal vertices (255,255,255), white;

--Along each axis from the origin lie the saturation gradients of red, green, and blue respectively.

30. Illumination Model

(1) Local illumination model: light intensity depends only on the illuminated object and the light source;

(2) Global illumination model: the light intensity at a point depends on every other point in the scene;

31. Local illumination model: assumes the light source is a point source, the object is opaque, and its surface is smooth, so transmitted and scattered light are approximately zero.

--Only the effect of reflected light is considered in the local illumination model;

--Reflected light includes ambient light, diffuse light, and specular light;

32. Ambient light

(1) Ambient light does not come from any particular direction; it originates from a light source, but the light has bounced all around the scene.

(2) Ambient light illuminates the object's surface evenly from all directions.

(3) Its color is independent of rotation and viewing angle.

(4) The light falling on the object arrives from all directions and is reflected evenly in all directions.

33. Diffuse light

(1) In OpenGL, diffuse light is directional: it comes from a specific direction;

(2) It is reflected evenly from the surface, with intensity depending on the angle of the incident light.

(3) Under a point light source, different parts of the object's surface have different brightness, which depends on the orientation of the surface and its distance from the light source.

(4) Diffuse reflection characteristic: light arrives from one direction and is reflected evenly in all directions.

34. Specular light

(1) Specular light has strong directionality;

(2) It forms a bright highlight on the illuminated surface;

35. Specular Reflection

(1) The reflected rays travel in nearly the same direction;

(2) Specular reflection makes an object look shiny;

(3) The specular effect changes with viewing angle;

36. Surface Normals: The direction of the normals determines the orientation of the polygon faces.

37. Limitations of the local illumination model:

(1) Only the contribution of light coming directly from the light source to the object's surface brightness is considered;

(2) The reflection and transmission of light between objects are not considered.

38. Global illumination model: the Whitted model;

It can simulate the contributions of specular reflection and transmission between surfaces in a real-world scene.

39. The Whitted model

It assumes that the brightness of a point P on an object's surface, observed from a viewing direction V, comes from three contributions:

(1) Reflected light caused by direct illumination from the light source;

(2) Specular light reflected from the environment;

(3) Regularly transmitted light from the environment;

40. Why does aliasing (jaggedness) occur?

Because of the conflict between continuous geometric space and discrete screen pixels.

41. Multisampling

Limitation of blending-based anti-aliasing: in complex scenes, all primitives must be sorted from back to front.

Related terms:

Full-screen anti-aliasing (FSAA);

Supersampling anti-aliasing (SSAA);

Multisample anti-aliasing (MSAA);
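A minimal sketch of requesting a multisampled framebuffer through GLUT and enabling multisample rasterization; note that GL_MULTISAMPLE was introduced in OpenGL 1.3, so on platforms with old headers it may have to come from an extension loader such as GLEW.

    #include <GL/glut.h>

    int main(int argc, char **argv)
    {
        glutInit(&argc, argv);
        /* Ask for a multisampled framebuffer in addition to color and depth. */
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH | GLUT_MULTISAMPLE);
        glutCreateWindow("MSAA example");

        glEnable(GL_MULTISAMPLE);   /* OpenGL 1.3; may need GLEW/glext.h with old headers */

        /* register display/reshape callbacks and call glutMainLoop() here */
        return 0;
    }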

42. The accumulation buffer

Principle:

(1) After OpenGL renders into the color buffer, the result is not displayed in the window directly but is copied into the accumulation buffer;

(2) After the accumulation buffer has been blended repeatedly, the buffers are swapped and the result is displayed;

Role:

(1) Rendering the scene several times from slightly different viewpoints and accumulating the results achieves full-scene anti-aliasing, with quality better than multisampling;

(2) The motion-blur effect can be produced the same way;

43. Bitmaps and pixel maps

(1) A bitmap uses 1 bit (2 colors) to represent each point;

(2) A pixel map uses 8 bits (256 colors) to represent each point;

43. Basic concept of texture mapping

(1) Texture mapping is the application of image data to three-dimensional primitives;

(2) Texture mapping gives three-dimensional graphics rich surface detail;

(3) A texel (texture element) is an individual image element within the texture;

44. Texture filters

(1) Nearest filter: blocky, mosaic-like artifacts appear;

(2) Linear filter: smoother and closer to reality;

45. Why use mipmapping?

(1) To solve flickering (shimmering) artifacts;

(2) To reduce wasteful texture loading;

A mipmapped texture consists of a series of texture images, each half the size of the previous one.

Mipmapping is an LOD (level-of-detail) technique.
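A minimal sketch of building the mipmap chain with GLU, assuming a texture object is already bound to GL_TEXTURE_2D and the image data is RGB; the trilinear filter choice is just one common option.

    #include <GL/glu.h>

    void load_mipmapped_texture(const unsigned char *pixels, int width, int height)
    {
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
                        GL_LINEAR_MIPMAP_LINEAR);             /* trilinear filtering */
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        /* Builds the whole chain of images, each half the size of the previous. */
        gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGB, width, height,
                          GL_RGB, GL_UNSIGNED_BYTE, pixels);
    }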

46. Texture Objects

(1) Texture objects allow us to load multiple images at once and switch between these texture objects;

(2) A texture object is identified by an unsigned integer (its texture name);

47. Anisotropic filtering

(1) A technique that samples pixels from the surrounding directions and combines them into the target pixel;

(2) Compared with bilinear and trilinear filtering, it gives higher precision at steep viewing angles and a more realistic image, but the computational cost is greater and it demands more of the graphics card.

48. OpenGL rendering pipeline: what is a rendering pipeline?

(1) When we hand the drawing commands to OpenGL, it performs many steps to project the 3D scene onto the screen; this series of steps is called the OpenGL rendering pipeline.

(2) The general rendering pipeline has the following steps:

--Display lists;

--Evaluators;

--Vertex operations;

--Primitive assembly;

--Pixel operations;

--Texture assembly;

--Rasterization;

--Fragment operations;

49. Summary of the fixed pipeline in OpenGL: the rendering pipeline is divided into two phases:

(1) First come the vertex-based operations; the primitives are then rasterized, producing fragments;

(2) Texturing, fog, and other fragment-based operations are applied before the fragments are written to the framebuffer;

50. Fixed vertex processing:

--the vertex-based stage starts with a set of vertex attributes;

-These properties include object space position, normals, primary and secondary colors, and texture coordinates;

--The result of vertex processing is the clip-space position, front and back primary and secondary colors, a fog coordinate, texture coordinates, and the point size;

--Vertex processing is divided into 4 stages: vertex transformation, lighting, texture coordinate generation and transformation, and clipping;

51. Fixed fragment operation

--Takes a fragment and its associated data as input, including texture coordinates, primary and secondary colors, and the fog coordinate;

--The result of the fragment-based operations is a color value;

--The fragment-based fixed-function pipeline is divided into 4 stages:

(1) Texture application and environment;

(2) Color sum;

(3) Fog application;

(4) Anti-aliasing application;

52. Programmable Rendering Pipeline

--using shaders to replace some of the stages in a fixed pipeline;

--shaders can also be called programs;

--A shader is essentially a custom program defined by the application that takes over the responsibilities of a fixed-function pipeline stage;

OpenGL Shading Language

--The OpenGL Shading Language (GLSL) is a high-level language for programming GPUs, with compact code, good readability, and high efficiency;

--GLSL's syntax is very close to the C language;

53. Shader Objects

--GLSL uses two types of objects: shader Objects and program objects ;

--The shader object loads the shader text and compiles it;

--A shader object is the smallest functional unit, but it cannot run on its own; it must be attached to a program object to execute.
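A minimal sketch of that workflow (compile shader objects, attach them to a program object, link); error checking is omitted, the GLSL source strings are supplied by the caller, and the entry points are assumed to come from a loader such as GLEW.

    #include <GL/glew.h>

    GLuint build_program(const char *vertex_src, const char *fragment_src)
    {
        GLuint vs, fs, prog;

        vs = glCreateShader(GL_VERTEX_SHADER);
        glShaderSource(vs, 1, &vertex_src, NULL);   /* load the shader text */
        glCompileShader(vs);                        /* compile it           */

        fs = glCreateShader(GL_FRAGMENT_SHADER);
        glShaderSource(fs, 1, &fragment_src, NULL);
        glCompileShader(fs);

        /* A shader object cannot run on its own: attach it to a program object. */
        prog = glCreateProgram();
        glAttachShader(prog, vs);
        glAttachShader(prog, fs);
        glLinkProgram(prog);
        return prog;   /* later: glUseProgram(prog); */
    }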

54. Shader Uniform values

--Attributes are supplied per vertex (position, surface normal, texture coordinates), while uniform values pass data that should stay constant for the whole primitive batch;

--For a vertex shader, a typical uniform value is a transformation matrix;

--Inside the shader, uniform values are read-only;
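A minimal sketch of setting such a uniform; the uniform name u_modelview is hypothetical, and the matrix is assumed to hold 16 floats in column-major order.

    #include <GL/glew.h>

    void set_modelview_uniform(GLuint program, const GLfloat *matrix)
    {
        GLint loc = glGetUniformLocation(program, "u_modelview");
        glUseProgram(program);
        glUniformMatrix4fv(loc, 1, GL_FALSE, matrix);   /* GL_FALSE: do not transpose */
    }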

55. Height map and bump texture

--The pixel values of a height map are used to determine the lighting characteristics of the object's surface;

--bump texture is divided into: displacement mapping and normal mapping;

Displacement mapping: A displacement map is a technique that uses a height map to shift the actual geometric point position of a textured surface along the surface normals according to the values stored in the texture.

56. Vertex arrays: use vertex arrays to speed up data loading and to store and share data across multiple draws.

57. Indexed vertex arrays: instead of iterating through the vertex array from the beginning, the vertices to draw are specified by a separate index array.

Advantages: adjacent triangles (for example, in two triangle strips) share vertices, which saves memory and reduces transformation overhead;
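A minimal sketch of an indexed draw in which two triangles share two vertices; the coordinates are arbitrary and client-side vertex arrays (OpenGL 1.1 style) are assumed.

    #include <GL/gl.h>

    void draw_quad_with_indices(void)
    {
        /* Four vertices stored once ... */
        static const GLfloat vertices[] = {
            0.0f, 0.0f, 0.0f,    /* 0 */
            1.0f, 0.0f, 0.0f,    /* 1 */
            1.0f, 1.0f, 0.0f,    /* 2 */
            0.0f, 1.0f, 0.0f     /* 3 */
        };
        /* ... and two triangles that reuse vertices 0 and 2 via indices. */
        static const GLuint indices[] = { 0, 1, 2,   0, 2, 3 };

        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_FLOAT, 0, vertices);
        glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, indices);
        glDisableClientState(GL_VERTEX_ARRAY);
    }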

58. Classification of animations:

According to the production method classification:

(1) Traditional animation: hand-painted animation;

(2) Stop-motion animation: clay animation, puppet animation;

(3) Computer animation: two-dimensional animation, three-dimensional animation;

59. Types of three-dimensional animation

(1) Deformation animation;

(2) skeletal animation;

60. Advantages of Morphing animations:

(1) Allows fine control over the shape change;

(2) Especially suitable for facial-expression animation;

Disadvantages of morph animations:

(1) Large amount of animation data;

(2) high complexity of production;

(3) Hard to reuse across multiple characters;

61. Skeletal animation Technology:

--model creation;

--skeleton creation;

--Bone binding (skinning);

--Animation setup: keyframes, motion data

62. Representing orientation:

(1) Using a rotation matrix;

(2) Using Euler angles;

(3) Using quaternions;

63. Advantages of motion capture:

--Faithfully records every detail of the motion;

Disadvantages:

--Hard to modify: editing the motion data easily introduces distortion;

--Hard to control: there is no good way to direct the captured motion;

--Hard to reuse: it is difficult to apply captured data to different characters and skeletons;

