My DirectX Programming (4): Coordinate Transformation Preliminaries


Lately I have become more interested in transformations. I am not really into game programming; I only use DirectX for projects close at hand, but I still have to understand it in order to use it. Besides, coordinate transformation is genuinely useful, and my brain has been idle for so long that it is getting rusty and needs the exercise.

Today I started on animation, but the main topic is coordinate transformation. For DirectX and OpenGL alike, the coordinate transformation is the most important part, so it deserves careful study. First, look at the D3D matrix representation:

    M._11  M._12  M._13  M._14       ux  uy  uz  0
    M._21  M._22  M._23  M._24   =   vx  vy  vz  0
    M._31  M._32  M._33  M._34       wx  wy  wz  0
    M._41  M._42  M._43  M._44       tx  ty  tz  1

The first three rows hold the three axis vectors u, v, w, and the fourth row holds the translation t. Now let's look at the basic matrices D3D constructs:
1. Translation matrix

    1   0   0   0
    0   1   0   0
    0   0   1   0
    tx  ty  tz  1

A translation matrix can be constructed with D3DXMatrixTranslation(D3DXMATRIX *pOut, FLOAT x, FLOAT y, FLOAT z).
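To make the row-vector convention concrete, here is a minimal D3DX-free sketch (Mat4 and Vec3 are hypothetical stand-ins for D3DXMATRIX and D3DXVECTOR3) showing how a translation matrix built this way moves a point:

```cpp
#include <cassert>
#include <cmath>

// Hypothetical stand-ins for the D3DX types, using D3D's row-vector
// convention: p' = p * M, with the translation in the fourth row.
struct Mat4 { float m[4][4]; };
struct Vec3 { float x, y, z; };

// Equivalent of D3DXMatrixTranslation: identity with (tx, ty, tz) in row 4.
Mat4 MakeTranslation(float tx, float ty, float tz) {
    Mat4 r = {{{1,0,0,0},{0,1,0,0},{0,0,1,0},{tx,ty,tz,1}}};
    return r;
}

// Transform a point as a row vector with implicit w = 1
// (what D3DXVec3TransformCoord does for an affine matrix).
Vec3 TransformCoord(const Vec3& p, const Mat4& M) {
    Vec3 r;
    r.x = p.x*M.m[0][0] + p.y*M.m[1][0] + p.z*M.m[2][0] + M.m[3][0];
    r.y = p.x*M.m[0][1] + p.y*M.m[1][1] + p.z*M.m[2][1] + M.m[3][1];
    r.z = p.x*M.m[0][2] + p.y*M.m[1][2] + p.z*M.m[2][2] + M.m[3][2];
    return r;
}
```

For example, TransformCoord({1, 2, 3}, MakeTranslation(10, 0, 0)) yields (11, 2, 3): the point times the matrix, with the offset picked up from the fourth row.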
2. Rotation matrix

Take rotation around the Y axis as an example; the matrix has the form:

    cos r   0   -sin r   0
    0       1   0        0
    sin r   0   cos r    0
    0       0   0        1

This matrix is produced by D3DXMatrixRotationY(D3DXMATRIX *pOut, FLOAT Angle); D3DXMatrixRotationX() and D3DXMatrixRotationZ() likewise give rotations around the X and Z axes.
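A quick way to sanity-check the layout above is to build the matrix by hand (a sketch with a hypothetical Mat4 in place of D3DXMATRIX):

```cpp
#include <cassert>
#include <cmath>

struct Mat4 { float m[4][4]; };

// Rotation about the Y axis, laid out exactly as in the text:
// row 1 = (cos r, 0, -sin r, 0), row 3 = (sin r, 0, cos r, 0).
Mat4 MakeRotationY(float r) {
    float c = std::cos(r), s = std::sin(r);
    Mat4 out = {{{ c, 0, -s, 0 },
                 { 0, 1,  0, 0 },
                 { s, 0,  c, 0 },
                 { 0, 0,  0, 1 }}};
    return out;
}
```

With r = 90 degrees, a row vector (0, 0, 1) times this matrix lands on (1, 0, 0): the +z direction rotates onto +x.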
3. Scaling matrix

    sx  0   0   0
    0   sy  0   0
    0   0   sz  0
    0   0   0   1

This is obtained with D3DXMATRIX *D3DXMatrixScaling(D3DXMATRIX *pOut, FLOAT sx, FLOAT sy, FLOAT sz). Once we have these matrices, we can accumulate transformations through matrix multiplication. The multiplication itself is exactly as described in linear algebra, so I won't go into detail; the call is D3DXMATRIX *D3DXMatrixMultiply(D3DXMATRIX *pOut, CONST D3DXMATRIX *pM1, CONST D3DXMATRIX *pM2), and the return value is the same as pOut.
In C++, thanks to operator overloading, you can also write this as matOut = mat1 * mat2, and of course chain it: matOut = mat1 * mat2 * mat3.
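What the multiplication actually does can be sketched without D3DX (Mat4 is again a hypothetical stand-in; with row vectors, p * (A * B) applies A first, then B):

```cpp
#include <cassert>

struct Mat4 { float m[4][4]; };

// Translation matrix, as in the text: offset in the fourth row.
Mat4 MakeTranslation(float tx, float ty, float tz) {
    Mat4 r = {{{1,0,0,0},{0,1,0,0},{0,0,1,0},{tx,ty,tz,1}}};
    return r;
}

// Plain 4x4 matrix product, the same operation D3DXMatrixMultiply performs.
Mat4 Multiply(const Mat4& a, const Mat4& b) {
    Mat4 out = {};  // zero-initialized accumulator
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                out.m[i][j] += a.m[i][k] * b.m[k][j];
    return out;
}
```

Multiplying two translations accumulates them: Multiply(MakeTranslation(1,0,0), MakeTranslation(2,5,0)) has (3, 5, 0) in its fourth row.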
After building the transformation matrix, you must apply it with m_pd3dDevice->SetTransform(D3DTS_WORLD, &m_matWorld) for it to take effect on the vertices. Replacing D3DTS_WORLD with D3DTS_VIEW or D3DTS_PROJECTION sets the view or projection matrix instead. The following is a practical example:
HRESULT CMyD3DApplication::FrameMove()
{
    // Rotate the object about the y- and z-axes
    D3DXMATRIX matRotY;
    D3DXMATRIX matRotZ;
    D3DXMatrixRotationY(&matRotY, m_fTime * 0.5f);
    D3DXMatrixRotationZ(&matRotZ, m_fTime * 1.5f);

    D3DXMATRIX matTrans;
    D3DXMatrixTranslation(&matTrans, 0.0f, 0.0f, 0.0f);

    // Concatenate the transforms and set the world matrix
    D3DXMATRIX matWorld = matRotY * matRotZ * matTrans;
    m_pd3dDevice->SetTransform(D3DTS_WORLD, &matWorld);
    return S_OK;
}
We use the FrameMove() function to drive the animation frame by frame: we only need to call the corresponding functions to obtain the matrices, multiply them together, and set the result.
Let's look at the following code:
// Each viewport fills a quarter of the window
m_rViewport.Width  = mainViewport.Width  / 2;
m_rViewport.Height = mainViewport.Height / 2;
m_rViewport.Y = m_srtViewport.Y = 0;
m_rViewport.X = m_trViewport.X = 0;
// Set the full Z range for each viewport
m_rViewport.MinZ = 0.0f;
m_rViewport.MaxZ = 1.0f;
m_pd3dDevice->SetViewport(&m_rViewport);
m_matWorld = rotationMatrix1;
m_pd3dDevice->SetTransform(D3DTS_WORLD, &m_matWorld);
m_pd3dDevice->DrawIndexedPrimitive(D3DPT_TRIANGLELIST,
                                   0,   // base vertex index
                                   0,   // minimum vertex index
                                   4,   // number of vertices
                                   0,   // start index
                                   2);  // number of primitives
We only need to set the finished matrix as the current world matrix and then draw the primitives in sequence. From this example we can also see that, much like OpenGL, D3D behaves as a state machine that remembers the current settings. The code above also shows how viewports are used: by giving several viewports different positions and sizes, we can display different content in different regions of the screen; the book uses four viewports to demonstrate four different transformations.

In my last post I briefly recorded D3D coordinate transformation, applying matrices to the vertices in a scene, and explored the use of viewports. This time I will complete the rest of the D3D coordinate transformations: first the representation of an object's orientation, then two methods for arbitrary rotation, then the view and projection transformations, and finally the use of the depth buffer.

Last time we covered translating, scaling, and rotating an object along the three coordinate axes, building the transformation matrices directly with the D3DX APIs. The more complicated case is rotation around the object's own axes. The basic transformations above can achieve this, but it is somewhat cumbersome, so next we will discuss a more general representation; it requires more involved matrix manipulation. First, we describe an object's placement with a structure:

struct OBJECT
{
    D3DXMATRIX matLocal;
};
The three row vectors of the matrix indicate the object's orientation: look, up, and right, with the same meaning as a camera's look, up, and right vectors in OpenGL. Besides the attitude given by these three vectors, locating an object also requires a position, so we use the fourth row to record it. Setting the matrix to the identity means the object starts at the origin, aligned with the coordinate axes. Rotation around the look axis is roll, around the up axis is yaw, and around the right axis is pitch.

------------------------------------

The APIs used are as follows. To transform a vector by a given matrix: D3DXVec3TransformCoord(D3DXVECTOR3 *pOut, CONST D3DXVECTOR3 *pV, CONST D3DXMATRIX *pM). The rotation matrix itself comes from D3DXMatrixRotationAxis(D3DXMATRIX *pOut, CONST D3DXVECTOR3 *pAxis, FLOAT Angle), which builds the transformation matrix for a rotation by a given angle around an arbitrary axis. With these two APIs, we can pass the rotation axis and each of the three attitude vectors in to obtain the new attitude vectors.

Note that, because of limited floating-point precision, rounding errors accumulate over many rotations, so the three attitude vectors gradually stop being mutually perpendicular. To fix this, we re-normalize the basis before each rotation. "Normalization" here does not mean simply normalizing each of the three vectors individually, but performing the following sequence:
D3DXVec3Normalize(&vLook, &vLook);
D3DXVec3Cross(&vRight, &vUp, &vLook);
D3DXVec3Normalize(&vRight, &vRight);
D3DXVec3Cross(&vUp, &vLook, &vRight);
D3DXVec3Normalize(&vUp, &vUp);
As you can see, combining normalization with cross products both keeps the vectors unit length and restores their perpendicularity. The matLocal matrix is then filled in as follows: the first row is right, the second row is up, the third row is look, and the fourth row is the position:
m_pObjects[0].matLocal._11 = vRight.x;
m_pObjects[0].matLocal._12 = vRight.y;
m_pObjects[0].matLocal._13 = vRight.z;
m_pObjects[0].matLocal._21 = vUp.x;
m_pObjects[0].matLocal._22 = vUp.y;
m_pObjects[0].matLocal._23 = vUp.z;
m_pObjects[0].matLocal._31 = vLook.x;
m_pObjects[0].matLocal._32 = vLook.y;
m_pObjects[0].matLocal._33 = vLook.z;
m_pObjects[0].matLocal._41 = vPos.x;
m_pObjects[0].matLocal._42 = vPos.y;
m_pObjects[0].matLocal._43 = vPos.z;
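The re-orthonormalization sequence can be sketched without D3DX as follows (Vec3 and the helpers are hypothetical stand-ins for D3DXVECTOR3, D3DXVec3Cross, and D3DXVec3Normalize; note that, exactly as in the D3DX sequence, the incoming right vector is discarded and rebuilt from up and look):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };
struct Basis { Vec3 right, up, look; };

float Dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec3 Cross(const Vec3& a, const Vec3& b) {
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}
Vec3 Normalize(const Vec3& v) {
    float len = std::sqrt(Dot(v, v));
    return { v.x/len, v.y/len, v.z/len };
}

// Same call sequence as the D3DX snippet: normalize look, rebuild right
// as up x look, rebuild up as look x right. This restores both unit
// length and mutual perpendicularity after rounding errors accumulate.
Basis Reorthonormalize(Vec3 vRight, Vec3 vUp, Vec3 vLook) {
    vLook  = Normalize(vLook);
    vRight = Normalize(Cross(vUp, vLook));
    vUp    = Normalize(Cross(vLook, vRight));
    return { vRight, vUp, vLook };
}
```

Feeding in a slightly drifted basis returns three vectors that are again unit length and pairwise perpendicular.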
Next, let's summarize the transformation process:
  1. Determine the rotation angle and rotation axis;
  2. Fetch the current vRight, vLook, vUp, and vPos vectors;
  3. Re-normalize the three attitude vectors;
  4. Use D3DXMatrixRotationAxis(D3DXMATRIX *pOut, CONST D3DXVECTOR3 *pAxis, FLOAT Angle) to generate the rotation matrix;
  5. Use D3DXVec3TransformCoord(D3DXVECTOR3 *pOut, CONST D3DXVECTOR3 *pV, CONST D3DXMATRIX *pM) to transform the current vRight, vLook, and vUp vectors into the new ones;
  6. Move the position to obtain the new vPos;
  7. Write the new vRight, vLook, vUp, and vPos vectors back into matLocal.
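The core of steps 4 and 5 can be sketched in one go with the axis-angle (Rodrigues) formula, which is the mathematics behind D3DXMatrixRotationAxis followed by D3DXVec3TransformCoord (Vec3 and RotateAxis are hypothetical helpers; the axis must be unit length, and D3DX's sign convention may differ depending on handedness):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Rotate v around a unit axis by an angle, using Rodrigues' formula:
// v' = v cos(a) + (k x v) sin(a) + k (k . v) (1 - cos(a)).
Vec3 RotateAxis(const Vec3& v, const Vec3& k, float angle) {
    float c = std::cos(angle), s = std::sin(angle);
    // k x v (cross product)
    Vec3 kxv = { k.y*v.z - k.z*v.y, k.z*v.x - k.x*v.z, k.x*v.y - k.y*v.x };
    // k . v (dot product)
    float kdv = k.x*v.x + k.y*v.y + k.z*v.z;
    return { v.x*c + kxv.x*s + k.x*kdv*(1-c),
             v.y*c + kxv.y*s + k.y*kdv*(1-c),
             v.z*c + kxv.z*s + k.z*kdv*(1-c) };
}
```

Applying RotateAxis to each of vRight, vUp, and vLook with the same axis and angle performs exactly the update that steps 4 and 5 describe; for example, rotating (0, 0, 1) by 90 degrees around the y axis gives (1, 0, 0), matching the Y-rotation matrix shown earlier.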

----------------------------

As you can see, this representation takes seven steps, which is a little involved. Next, let's look at a simpler method of computation: quaternions.

Let's first compare the implementation differences and then explain the meaning of the API.

  1. Determine the yaw, pitch, and roll rotation angles;
  2. Use D3DXQuaternionRotationYawPitchRoll(D3DXQUATERNION *pOut, FLOAT Yaw, FLOAT Pitch, FLOAT Roll) to build the rotation quaternion, and convert it to a matrix with D3DXMatrixRotationQuaternion();
  3. Multiply the resulting matrix with matLocal to obtain the new matLocal;
  4. Update the position.
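As an illustration of step 2, here is a sketch of building the rotation quaternion from yaw, pitch, and roll via the standard half-angle product formula (a hypothetical Quat struct stands in for D3DXQUATERNION; D3DX's exact sign conventions may differ):

```cpp
#include <cassert>
#include <cmath>

struct Quat { float x, y, z, w; };

float Norm(const Quat& q) {
    return std::sqrt(q.w*q.w + q.x*q.x + q.y*q.y + q.z*q.z);
}

// Rotation quaternion for yaw (about y), pitch (about x), roll (about z),
// composed from the three half-angle axis quaternions.
Quat FromYawPitchRoll(float yaw, float pitch, float roll) {
    float cy = std::cos(yaw   * 0.5f), sy = std::sin(yaw   * 0.5f);
    float cp = std::cos(pitch * 0.5f), sp = std::sin(pitch * 0.5f);
    float cr = std::cos(roll  * 0.5f), sr = std::sin(roll  * 0.5f);
    Quat q;
    q.w = cy*cp*cr + sy*sp*sr;
    q.x = cy*sp*cr + sy*cp*sr;
    q.y = sy*cp*cr - cy*sp*sr;
    q.z = cy*cp*sr - sy*sp*cr;
    return q;
}
```

Since the result is a product of unit quaternions, it always has unit norm; converting it to a matrix (D3DXMatrixRotationQuaternion in D3DX) then yields the matrix multiplied into matLocal in step 3.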

The theory behind quaternions is a bit involved, and since I was pressed for time I haven't worked through it; for now I simply picture it as a vector plus a rotation angle, and I will study the actual derivation when I get the chance. The technique is widely used, though, which is why D3D supports it with dedicated functions.

The key API here is D3DXQuaternionRotationYawPitchRoll(D3DXQUATERNION *pOut, FLOAT Yaw, FLOAT Pitch, FLOAT Roll), which returns a quaternion for the given rotation angles around the three axes; D3DXMatrixRotationQuaternion() then turns it into a transformation matrix.

-----------------------------

Next, let's look at the view transformation. The only difference between the view matrix and an object's placement matrix is how it is stored: the axis vectors go into the columns rather than the rows, since the view matrix is the inverse of the camera's own placement. The camera's transformations are otherwise no different from an object's, and in the end we get one matrix. D3D even provides a function that builds it directly from the eye position, the look-at point, and the up direction: D3DXMatrixLookAtLH(D3DXMATRIX *pOut, CONST D3DXVECTOR3 *pEye, CONST D3DXVECTOR3 *pAt, CONST D3DXVECTOR3 *pUp), which saves a lot of trouble. Finally, set it with m_pd3dDevice->SetTransform(D3DTS_VIEW, &mat).
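The matrix D3DXMatrixLookAtLH produces can be sketched as follows (hypothetical Mat4/Vec3 stand in for the D3DX types); note how the camera's basis vectors land in the columns of the upper 3x3, and the fourth row holds minus the eye position projected onto each axis:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };
struct Mat4 { float m[4][4]; };

Vec3 Sub(Vec3 a, Vec3 b) { return { a.x-b.x, a.y-b.y, a.z-b.z }; }
float Dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec3 Cross(Vec3 a, Vec3 b) {
    return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
}
Vec3 Normalize(Vec3 v) {
    float l = std::sqrt(Dot(v, v));
    return { v.x/l, v.y/l, v.z/l };
}

// Left-handed look-at matrix, following the documented D3DXMatrixLookAtLH
// construction: camera axes in the columns, translation in the fourth row.
Mat4 LookAtLH(Vec3 eye, Vec3 at, Vec3 up) {
    Vec3 zaxis = Normalize(Sub(at, eye));       // look direction
    Vec3 xaxis = Normalize(Cross(up, zaxis));   // camera right
    Vec3 yaxis = Cross(zaxis, xaxis);           // camera up
    Mat4 v = {{
        { xaxis.x, yaxis.x, zaxis.x, 0 },
        { xaxis.y, yaxis.y, zaxis.y, 0 },
        { xaxis.z, yaxis.z, zaxis.z, 0 },
        { -Dot(xaxis, eye), -Dot(yaxis, eye), -Dot(zaxis, eye), 1 }
    }};
    return v;
}
```

For an eye at (0, 0, -5) looking at the origin with up (0, 1, 0), the result is an identity rotation with (0, 0, 5) in the fourth row: the whole world is shifted so the camera sits at the origin.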

Note that D3DXMatrixLookAtLH() is only convenient for simple fixed or target-tracking cameras. A complex camera, such as a flight simulator's, is best computed by hand, in the same way as the object transformations introduced earlier: either the seven-step method or the simpler quaternion method. Finally, write the resulting vectors into the view matrix as column vectors and set it with SetTransform(). This is, in effect, how you maintain your own camera.

------------------------------

Now for the projection transformation. Projection brings to mind the view frustum and its four parameters: the field of view (FOV), the aspect ratio, and the near and far clipping distances. In D3D you can obtain the projection matrix from these four parameters with D3DXMatrixPerspectiveFovLH(D3DXMATRIX *pOut, FLOAT fovy, FLOAT Aspect, FLOAT zn, FLOAT zf), then set it with m_pd3dDevice->SetTransform(D3DTS_PROJECTION, &matProj).
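The matrix those four parameters define can be sketched like this (hypothetical Mat4 in place of D3DXMATRIX; the layout follows the documented D3DXMatrixPerspectiveFovLH result, mapping depth into [0, 1]):

```cpp
#include <cassert>
#include <cmath>

struct Mat4 { float m[4][4]; };

// Left-handed perspective projection from vertical FOV, aspect ratio,
// and near/far distances: yScale = cot(fovy/2), xScale = yScale/aspect.
Mat4 PerspectiveFovLH(float fovy, float aspect, float zn, float zf) {
    float ys = 1.0f / std::tan(fovy * 0.5f);
    float xs = ys / aspect;
    float q  = zf / (zf - zn);   // depth scale: z' = z*q - zn*q, w' = z
    Mat4 p = {{
        { xs, 0,  0,     0 },
        { 0,  ys, 0,     0 },
        { 0,  0,  q,     1 },
        { 0,  0, -zn*q,  0 }
    }};
    return p;
}
```

With fovy = 90 degrees and aspect 1, both scales are 1; after the perspective divide, a point on the near plane projects to depth 0 and one on the far plane to depth 1.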

-------------------------------

Viewports were covered last time, so let's look at the depth buffer. In the sample framework, enable the depth buffer by setting m_d3dEnumeration.AppUsesDepthBuffer = TRUE, then clear it before drawing each frame with m_pd3dDevice->Clear():

HRESULT Clear(DWORD Count,             // number of rectangles
              CONST D3DRECT *pRects,   // pointer to the rectangles
              DWORD Flags,             // which buffers to clear
              D3DCOLOR Color,          // clear color
              float Z,                 // value to clear the z-buffer to
              DWORD Stencil);          // value to clear the stencil buffer to

The book also discusses how depth-buffer precision affects rendering quality. To get stable, artifact-free results, a W buffer can be used: m_pd3dDevice->SetRenderState(D3DRS_ZENABLE, D3DZB_USEW). This requires hardware support, however, so the ordinary Z buffer is the safer choice.

-----------------------------

In summary, D3D has three kinds of matrices. Besides the world matrix from last time (which, as I understand it, corresponds to the model half of OpenGL's modelview matrix), there are the view matrix and the projection matrix. They are all set through pDeviceObject->SetTransform(), just with different parameters, and every other function or representation ultimately exists to produce these three matrices. Keeping that in mind cuts through the clouds to the sun: just keep these three matrices in your heart.
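As a footnote to the depth-buffer discussion above, here is a small sketch of the value a perspective z-buffer actually stores, derived from the standard left-handed projection (d = (zf/(zf-zn)) * (1 - zn/z)); it shows why precision is so unevenly distributed:

```cpp
#include <cassert>
#include <cmath>

// Normalized depth stored in the z-buffer for a view-space depth z,
// given near plane zn and far plane zf. Strongly non-linear in z.
float StoredDepth(float z, float zn, float zf) {
    return (zf / (zf - zn)) * (1.0f - zn / z);
}
```

With zn = 1 and zf = 1000, a point at z = 10 already stores roughly 0.9: about 90% of the buffer's resolution is spent on the nearest 1% of the visible range, which is why a W buffer (linear in depth) can render distant geometry more stably.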
