I have recently been reading the English edition of NVIDIA's official "The Cg Tutorial" to learn basic graphics concepts and shader programming. These are my reading notes.
Animation
One. Movement in Time
To render animation, the application must keep track of time and pass it to the shader as a uniform (global) variable. Doing as much animation as possible in the vertex shader, on the GPU, is efficient: it frees the CPU for other work such as collision detection, artificial intelligence, and gameplay logic.
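With the Cg runtime, for example, the application might update such a time parameter once per frame. A minimal C sketch, where the parameter name "time" and the variables vertexProgram and elapsedSeconds are assumptions:

    /* Look up the shader's time parameter once, after loading the program. */
    CGparameter timeParam = cgGetNamedParameter(vertexProgram, "time");

    /* Each frame, hand the application's clock to the shader. */
    cgSetParameter1f(timeParam, elapsedSeconds);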
Two. A Pulsating Object
1. Brief Introduction
This example makes an object periodically swell and shrink. The goal is to take time as an input parameter and modify the vertex positions of the object's geometry based on it. More precisely, the surface vertices are displaced along the surface normals. By varying the amplitude of this displacement over time, you create a bulging or pulsing effect.
2. Displacement Calculation
The key step in implementing this animation is computing each vertex's displacement along its normal. Any function you like can serve as the displacement, for example:
displacement = time
You probably don't want the object to grow without bound over time, so a periodic function such as sine is a better choice:
displacement = 0.5 * (sin(time) + 1)
- In shaders, the sin and cos functions cost about the same as addition and subtraction.
You can also control the amplitude and frequency of the swelling by adding parameters to the displacement function, as in the sketch after this list:
displacement = scaleFactor * 0.5 * (sin(position.y * frequency * time) + 1)
- scaleFactor controls the magnitude of the swelling.
- frequency controls how rapidly the object pulses.
- position.y gives different vertices different displacements; any other function of the vertex attributes could be used instead.
- scaleFactor * 0.5 can be precomputed by the application and passed in as a uniform, reducing the work done by the vertex shader.
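Putting these pieces together, a Cg vertex program for the pulsating object might look like the following sketch. The entry-point and parameter names are illustrative, and scaleFactor is assumed to already include the factor of 0.5, as suggested above:

    void pulsate(float4 position : POSITION,
                 float3 normal   : NORMAL,

                 out float4 oPosition : POSITION,

                 uniform float4x4 modelViewProj,
                 uniform float    time,
                 uniform float    frequency,
                 uniform float    scaleFactor)   // application pre-multiplies by 0.5
    {
        // Displace the vertex along its normal, per the formula above.
        float displacement = scaleFactor * (sin(position.y * frequency * time) + 1);
        float4 displaced = float4(position.xyz + displacement * normal, 1);

        oPosition = mul(modelViewProj, displaced);
    }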
Three. Particle Systems
1. Brief Introduction
Sometimes, rather than moving the vertices of a mesh, we want each vertex to behave as an independent object, a particle. A collection of particles that follow the same rules of motion is called a particle system, and the system evolves over time.
A particle's motion can be described by an equation, such as the simple vector kinematics equation from physics:
pFinal = pInitial + v * t + 0.5 * a * t²
- pFinal is the final position of the particle.
- pInitial is the initial position of the particle.
- v is the initial velocity of the particle.
- a is the acceleration of the particle.
- t is the elapsed time.
2. Computing the Particle Positions
The application tracks a global time, globalTime, and passes it to the vertex shader as a uniform parameter. When each particle is created, its creation time is passed to the vertex shader as a per-vertex attribute, tInitial. To know how long a particle has been active, subtract tInitial from globalTime:
float t = globalTime - tInitial;
Substituting t into the kinematics equation from the previous section:
float4 pFinal = pInitial + vInitial * t + 0.5 * acceleration * t * t;
The object-space position then has to be transformed into clip space:
oPosition = mul(modelViewProj, pFinal);
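Assembled into a complete vertex program, this might look like the Cg sketch below. Names and semantic assignments are illustrative; the per-particle attributes are passed through texture-coordinate semantics, a common convention for extra vertex data:

    void particle(float4 pInitial : POSITION,
                  float  tInitial : TEXCOORD0,   // this particle's creation time
                  float4 vInitial : TEXCOORD1,   // initial velocity (w = 0)

                  out float4 oPosition : POSITION,

                  uniform float4x4 modelViewProj,
                  uniform float    globalTime,
                  uniform float4   acceleration) // w = 0, so pFinal keeps w = 1
    {
        float t = globalTime - tInitial;         // how long this particle has lived

        // The vector kinematics equation from the previous section.
        float4 pFinal = pInitial + vInitial * t + 0.5 * acceleration * t * t;

        oPosition = mul(modelViewProj, pFinal);
    }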
3. Computing the Particle Size
Vertex shaders provide an output semantic named PSIZE that represents the size of a vertex. When you render a point to the screen, an output parameter bound to this semantic determines the point's width in pixels.
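In Cg this is simply one more output parameter in the vertex program. An illustrative fragment extending the particle shader sketched above (the growth rule is made up):

    // Extra output in the parameter list:
    out float oPointSize : PSIZE    // the point's width, in pixels

    // In the body: for example, particles could grow as they age.
    oPointSize = 1 + 4 * t;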
4. Dressing Up Your Particle System
We can improve the appearance of particles by using point sprites. With point sprites, the hardware renders each point as a four-vertex rectangle rather than a single vertex, and each corner of the sprite is automatically assigned texture coordinates, which lets you vary the appearance of particles through texture mapping. Point sprites can thus create complex geometric effects without drawing any extra triangles.
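In OpenGL, for example, point sprites are exposed through the ARB_point_sprite extension; a minimal sketch of turning them on, assuming the extension is available:

    /* Render each point as a screen-aligned textured quad. */
    glEnable(GL_POINT_SPRITE_ARB);

    /* Have the hardware generate texture coordinates across each sprite. */
    glTexEnvi(GL_POINT_SPRITE_ARB, GL_COORD_REPLACE_ARB, GL_TRUE);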
Four. Key-Frame Interpolation
1. Brief Introduction
The term "key frame" comes from cartoon animation. When drawing an animated sequence, the artist first sketches a rough sequence of frames, not every frame of the animation, only the important, "key" frames. The artist then fills in the missing frames. With the key frames as reference, these in-between frames are easier to draw.
Computer animation uses a similar technique: a 3D modeler creates a key frame for each pose of an animated character. Every key frame must use the same number of vertices, and the vertices must correspond across frames; that is, each vertex in one key frame must represent the same point of the model in every other key frame. Given such a model, the game takes two key frames and blends each pair of corresponding vertices.
Key-frame interpolation assumes the number and ordering of vertices are identical in every key frame, so the vertex shader can always pair and blend vertices correctly.
2. Interpolation Approaches
There are many interpolation methods; two common ones are linear interpolation and quadratic interpolation.
1) Linear Interpolation
With linear interpolation, the position transitions at a fixed rate. The formula is:
blendedPosition = positionA * (1 - f) + positionB * f
2) Quadratic Interpolation
If you want the rate of transition to vary over time, you can use quadratic interpolation instead:
intermediatePosition = positionA * (1 - f * f) + positionB * f * f
3) Other Interpolation Methods
Besides the two formulas above, you can also interpolate using step functions, spline functions, or exponential functions.
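In a vertex shader, the two key frames arrive as two position attributes and f as a uniform. A minimal Cg sketch of linear key-frame blending, where the names and semantic assignments are assumptions:

    void keyFrameLerp(float3 positionA : POSITION,    // vertex in key frame A
                      float3 positionB : TEXCOORD1,   // the same vertex in key frame B

                      out float4 oPosition : POSITION,

                      uniform float    blendFactor,   // f, in [0, 1]
                      uniform float4x4 modelViewProj)
    {
        // lerp(a, b, f) computes a * (1 - f) + b * f.
        float3 blended = lerp(positionA, positionB, blendFactor);
        oPosition = mul(modelViewProj, float4(blended, 1));
    }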
3. Key-Frame Interpolation with Lighting
When a key-frame model must be lit, key-frame interpolation involves blending not only the positions but also the vertex normals. The blended normals may no longer be unit length, so they must be renormalized.
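Under the same assumptions as the sketch above, with normalA and normalB as the per-vertex normals of the two key frames, the blend-and-renormalize step is a single line of Cg:

    // Blended normals are generally shorter than unit length; renormalize.
    float3 blendedNormal = normalize(lerp(normalA, normalB, blendFactor));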
Five. Vertex Skinning
1. Brief Introduction
Another way to implement animation is vertex skinning, also known as "matrix blending". Unlike key-frame animation, vertex skinning stores a single default pose for the model, plus a collection of matrices that rotate and translate regions of the default pose's polygon mesh; these matrices are often referred to as bones.
Each vertex of the default pose's mesh is controlled by one or more of these matrices, and each matrix is assigned a weighting factor indicating how strongly it affects the vertex; the weights of all the matrices controlling a given vertex are assumed to sum to 1.
To render a model with vertex skinning, each vertex is transformed by every matrix in its set of controlling bones, and the results are blended by weight to produce the skinned vertex position, as sketched below. When all the matrices are the identity, the model shows its default pose, usually standing upright, facing forward, with arms and legs spread apart.
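A minimal Cg sketch of that weighted blend, assuming at most four influencing matrices per vertex (see the note in the next subsection) with their indices and weights stored as vertex attributes; all names, semantics, and the palette size of 24 are illustrative:

    void skin(float3 position : POSITION,
              float3 normal   : NORMAL,
              float4 weight   : TEXCOORD1,       // blending weights, assumed to sum to 1
              float4 index    : TEXCOORD2,       // which matrices control this vertex

              out float4 oPosition : POSITION,
              out float3 oNormal   : TEXCOORD0,

              uniform float4x4 modelViewProj,
              uniform float3x4 boneMatrix[24])   // the skeleton, as 3x4 matrices
    {
        float3 skinnedPosition = float3(0, 0, 0);
        float3 skinnedNormal   = float3(0, 0, 0);

        for (int i = 0; i < 4; i++) {
            int bone = (int)index[i];

            // Transform by this bone's matrix and accumulate by weight.
            skinnedPosition += weight[i] * mul(boneMatrix[bone], float4(position, 1));

            // Normals use only the rotation (upper 3x3) part of the matrix.
            skinnedNormal += weight[i] * mul((float3x3)boneMatrix[bone], normal);
        }

        oNormal   = normalize(skinnedNormal);    // renormalize after blending
        oPosition = mul(modelViewProj, float4(skinnedPosition, 1));
    }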
2. Constructing Poses from Matrices
By manipulating the matrices, you can create different poses. A 3D modeler assigns the matrices and their weights relative to the model's default pose, so creating a pose becomes a matter of manipulating matrices rather than every individual vertex. This makes changing the model's pose, and animating it, very simple.
Typically, no more than four matrices affect any one vertex.
For a character model, the most important matrices represent the movement and rotation of the bones of the body, which is why the vertex-skinning matrices are collectively called the skeleton; the vertices represent points on the skin.
3. Lighting
As with key-frame interpolation, when lighting is enabled vertex skinning must also transform the normal vectors. Note that each normal should be transformed using only the rotation part of each matrix rather than the full matrix, and the normals must be renormalized after blending.
4. Storage Requirements Compared with Key Frames
In key-frame animation, every pose requires its own set of vertex positions and normals, and storing an animation this way is unwise when a character moves a great deal.
With vertex skinning, you store only one set of vertex positions and normals, the default pose, along with the matrices for each pose. Because a model's matrix count is usually far smaller than its vertex count, storing an animation with vertex skinning takes much less space than with key frames.
Each vertex does, however, require some extra storage for the indices of the matrices that control it, along with the corresponding weighting factors.
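As a rough illustration of that extra per-vertex data, a hypothetical C layout, assuming at most four influences per vertex:

    /* Hypothetical per-vertex record for a skinned mesh. */
    typedef struct {
        float         position[3];   /* default-pose position           */
        float         normal[3];     /* default-pose normal             */
        unsigned char boneIndex[4];  /* indices into the matrix palette */
        float         weight[4];     /* blending weights, summing to 1  */
    } SkinnedVertex;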
Vertex skinning is well suited to storing and replaying motion-capture sequences: each captured frame can be stored as a set of bone matrices and applied to any model that shares the same default pose. Bone matrices can also be produced by an inverse kinematics solver, whose goal is to find an incremental sequence of bone matrices that carries one pose naturally into another.
"The CG Tutorial" Reading notes--animation Animation