1. The problem arises
This problem arises from the book "3D Game Programming Master Skills": some things there are not explained clearly, and some are simply wrong.
The first issue is the aspect ratio: a PC screen is not square, and screen width : screen height is the aspect ratio. The view plane of the camera system we built last time, however, is square, so when objects are projected onto the view plane we end up with a square picture while the screen is rectangular. There are only two ways to deal with this:
1) Squash the picture to fit the screen, so that every object in it gets distorted.
2) Cut off the two extra strips of the picture and keep only a screen-sized portion, so that objects are not deformed but part of the picture is not visible.
Which one is right?
Take the human eye as an example: its "frame" is not square either, yet what we see is not squashed. So the right approach is to cut away the extra top and bottom, not to flatten the objects.
2. Two different ways
We can think of two ways to do this:
1) As in Hello3dworld: after the picture has been transformed into the screen coordinate system, simply discard the parts that fall outside the screen (2D clipping).
2) Instead of making the FOV in the YZ plane equal to the FOV in the XZ plane, make the aspect ratio of the view plane the same as the width-to-height ratio of the screen. The equations of the top and bottom clipping planes then change, and the extra content at the top and bottom is cut away directly during 3D clipping. After the 3D clipping is done, the projection onto the view plane is no longer square but has the same proportions as the screen; in essence the cropping happens vertically. The advantage is that more objects can be culled during 3D clipping and no 2D image clipping is needed afterwards, so this is the method we should use.
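As a concrete illustration (the numbers are assumed only for this example): on a 1024 x 768 screen, ar = 1024 / 768 ≈ 1.333, so the view plane stays 2.0 wide but becomes 2.0 / ar = 1.5 tall, and after projection y is clipped to (-1/ar, 1/ar) = (-0.75, 0.75) instead of (-1, 1).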
If we modify the equations of the top and bottom clipping planes, the new process from perspective projection to screen transformation looks like this:
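A sketch of the steps, using the same 1024 x 768 example and FOV = 90° (so d = 1):

x' = x * d / z                                   -> lies in (-1, 1)
y' = y * d / z                                   -> lies in (-1/ar, 1/ar), i.e. (-0.75, 0.75)
x_screen = x' * screenwidth/2 + screenwidth/2    -> lies in (0, 1024)
y_screen = y' * screenwidth/2 + screenheight/2   -> lies in (0, 768)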
This gives the final screen coordinates.
3. Modify the code that generates the top and bottom clipping planes
Recall how the top and bottom clipping planes are generated; the previous article covered this in detail, so it is not repeated here. What changes now is the coordinates of the two corner points used on each plane. For the top clipping plane, the points that used to be (-1, 1) and (1, 1) become (-1, 1/ar) and (1, 1/ar).
Crossing the two vectors built from the new coordinates gives the new normal vector of the top clipping plane: <0, d, -1/ar>
Bottom clipping plane normal vector: <0, -d, -1/ar>
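As a quick check: the top edge of the view plane sits at height 1/ar and depth d, so take the two top corners as vectors from the origin and cross them:

v1 = (-1, 1/ar, d),  v2 = (1, 1/ar, d)
v1 x v2 = (1/ar * d - d * 1/ar,  d * 1 - (-1) * d,  (-1) * 1/ar - 1/ar * 1) = (0, 2d, -2/ar)

Dropping the common factor 2 gives the <0, d, -1/ar> above; the bottom plane is the mirror image.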
Here is the new camera creation function:
void _cppyin_3dlib::CameraCreate(camera_ptr cam, int type, point4d_ptr pos, vector4d_ptr dir, point4d_ptr target, vector4d_ptr v, int needtarget, double nearz, double farz, double fov, double screenwidth, double screenheight) // Create camera
{
    // Camera type
    cam->type = type;

    // Set position and orientation
    VectorCopy(&(cam->worldpos), pos);
    VectorCopy(&(cam->direction), dir);

    // Set the target point for the UVN camera
    if (target != NULL)
    {
        VectorCopy(&(cam->uvntarget), target);
    }
    else
    {
        VectorCreate(&(cam->uvntarget), 0, 0, 0);
    }
    if (v != NULL)
    {
        VectorCopy(&(cam->v), v);
    }
    cam->uvntargetneedcompute = needtarget;

    // Clipping planes and screen parameters
    cam->nearz = nearz;
    cam->farz = farz;
    cam->screenwidth = screenwidth;
    cam->screenheight = screenheight;
    cam->screencenterx = screenwidth / 2 - 1;
    cam->screencentery = screenheight / 2 - 1;
    cam->aspectratio = (double)screenwidth / (double)screenheight;
    cam->fov = fov;
    cam->viewplanewidth = 2.0;
    cam->viewplaneheight = 2.0 / cam->aspectratio;

    // Compute d from the FOV and the view plane size (an FOV of 90 degrees gives d = 1)
    if (cam->fov == 90)
    {
        cam->viewdistance = 1;
    }
    else
    {
        cam->viewdistance = (0.5) * (cam->viewplanewidth) / tan(AngelToRadian(fov / 2));
    }

    // All clipping planes pass through the origin
    POINT3D po;
    VectorCreate(&po, 0, 0, 0);

    // Take two corners of the view plane as two vectors lying on the clipping plane,
    // then cross them; the normal vectors vn below use those results directly
    VECTOR3D vn;

    // Right clipping plane
    VectorCreate(&vn, cam->viewdistance, 0, -1);
    PlaneCreate(&cam->clipplaneright, &po, &vn, 1);

    // Left clipping plane
    VectorCreate(&vn, -cam->viewdistance, 0, -1);
    PlaneCreate(&cam->clipplaneleft, &po, &vn, 1);

    // Top clipping plane
    VectorCreate(&vn, 0, cam->viewdistance, -1 / cam->aspectratio);
    PlaneCreate(&cam->clipplanetop, &po, &vn, 1);

    // Bottom clipping plane
    VectorCreate(&vn, 0, -cam->viewdistance, -1 / cam->aspectratio);
    PlaneCreate(&cam->clipplanebottom, &po, &vn, 1);
}
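A minimal usage sketch follows, only to show the parameter order. The camera struct name, the CAM_MODEL_UVN constant and the literal values are assumptions for illustration, not code taken from the demo:

// Sketch only: the struct names, CAM_MODEL_UVN and all values below are assumed
camera cam;
POINT4D pos    = { 0, 0, -10, 1 };  // camera position in world space
VECTOR4D dir   = { 0, 0, 0, 1 };    // camera orientation
POINT4D target = { 0, 0, 0, 1 };    // UVN look-at target
CameraCreate(&cam, CAM_MODEL_UVN, &pos, &dir, &target, NULL, 1 /* needtarget */,
             10.0 /* nearz */, 1000.0 /* farz */, 90 /* fov */, 1024, 768);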
4. Derivation of perspective projection matrix
The principle of perspective projection should be clear by now; the previous article introduced it:
x' = x * d / z
y' = y * d / z
The general way to derive a transformation matrix has also been covered before. But this one is special, because a transformation matrix alone cannot divide a coordinate value by z; that can only be done with 4D homogeneous coordinates.
We're going to do this:
1) Scale x and y by d.
2) Set the homogeneous coordinate w to z. Because w != 1, when the coordinates are later homogenized (divided by w), x becomes x * d / w = x * d / z, and y likewise.
So the matrix should look like this:
[d 0 0 0]
[0 d 0 0]
[0 0 1 1]
[0 0 0 0]
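As a check, this matrix is written for row vectors multiplied from the left (the same convention that puts the translation in the bottom row of the screen matrix later on):

[x, y, z, 1] * M = [x*d, y*d, z, z]

so w comes out as z, and dividing x, y, z by w yields (x*d/z, y*d/z, 1), which is exactly the perspective projection above.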
If you notice that this result is completely different from what the book gives, do not take the book's version too seriously: the projection matrix derivation in that book is simply wrong.
After performing the matrix transformation, it is essential to divide the x and y of every vertex by w.
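How camera->matrixprojection gets filled in is not shown here; below is a sketch of what that could look like, assuming a hypothetical row-major 4x4 matrix type with a member m[4][4] (the real library's matrix type and helpers may differ):

// Hypothetical sketch: MATRIX4X4 and the direct m[][] fill are assumptions, not the library's API
typedef struct { double m[4][4]; } MATRIX4X4;

void BuildProjectionMatrix(MATRIX4X4 *mp, double d)
{
    // Row-vector convention: [x y z 1] * M = [x*d, y*d, z, z]
    double proj[4][4] = {
        { d, 0, 0, 0 },
        { 0, d, 0, 0 },
        { 0, 0, 1, 1 },
        { 0, 0, 0, 0 },
    };
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
            mp->m[r][c] = proj[r][c];
}

// e.g. BuildProjectionMatrix(&someMatrix, cam->viewdistance), then copy into
// camera->matrixprojection, provided the engine's matrix type is laid out the same way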
Here is the object projection transform function; the transform can be done either manually or with the matrix:
void _cppyin_3dlib::ObjectProjectTransform(object_ptr obj, camera_ptr camera, int transmethod) // Perspective transform to 2D; the resulting x is in (-1, 1) and y is in (-1/ar, 1/ar)
{
    if (transmethod == TRANSFORM_METHOD_MANUAL) // Manual transform
    {
        for (int i = 0; i < obj->vertexcount; ++i)
        {
            obj->vertexlisttrans[i].x = obj->vertexlisttrans[i].x * camera->viewdistance / obj->vertexlisttrans[i].z;
            obj->vertexlisttrans[i].y = -obj->vertexlisttrans[i].y * camera->viewdistance / obj->vertexlisttrans[i].z;
        }
    }
    else if (transmethod == TRANSFORM_METHOD_MATRIX) // Matrix transform
    {
        ObjectTransform(obj, &camera->matrixprojection, RENDER_TRANSFORM_TRANS, 0);
        // After the transformation the coordinates are 4D homogeneous, but w is not 1,
        // so x, y, z must be divided by w
        for (int i = 0; i < obj->vertexcount; ++i)
        {
            obj->vertexlisttrans[i].x /= obj->vertexlisttrans[i].w;
            obj->vertexlisttrans[i].y /= obj->vertexlisttrans[i].w;
            obj->vertexlisttrans[i].z /= obj->vertexlisttrans[i].w;
            obj->vertexlisttrans[i].w = 1;
        }
    }
}
5. Derivation of the screen transformation matrix
The picture above illustrates well what the screen transformation needs to do:
1) Scale x and y by SCREEN_WIDTH/2.
2) Shift x by SCREEN_WIDTH/2.
3) Shift y by SCREEN_HEIGHT/2.
We can write this matrix directly; it is nothing more than a combination of scaling and translation:
cam->screenwidth/2, 0, 0, 0,
0, cam->screenwidth/2, 0, 0,
0, 0, 1, 0,
cam->screenwidth/2, cam->screenheight/2, 0, 1
This matrix is very intuitive: the scaling part scales and the translation part translates. Note that y is scaled by screenwidth/2 rather than screenheight/2, because after projection y lies in (-1/ar, 1/ar), and (1/ar) * (screenwidth/2) = screenheight/2, exactly half the screen height.
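The same kind of check with row vectors, writing W and H for screenwidth and screenheight:

[x', y', z', 1] * M = [x' * W/2 + W/2,  y' * W/2 + H/2,  z',  1]

so x' in (-1, 1) maps to (0, W), and y' in (-1/ar, 1/ar) maps to (0, H).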
The code used is as follows:
void _cppyin_3dlib::ObjectScreenTransform(object_ptr obj, camera_ptr camera, int transmethod) // Viewport transform; the resulting x is in (0, SCREEN_WIDTH) and y is in (0, SCREEN_HEIGHT)
{
    if (transmethod == TRANSFORM_METHOD_MANUAL) // Manual transform
    {
        for (int i = 0; i < obj->vertexcount; ++i)
        {
            obj->vertexlisttrans[i].x *= camera->screenwidth / 2;
            obj->vertexlisttrans[i].x += (camera->screenwidth / 2);
            obj->vertexlisttrans[i].y *= camera->screenwidth / 2;
            obj->vertexlisttrans[i].y += (camera->screenheight / 2);
        }
    }
    else if (transmethod == TRANSFORM_METHOD_MATRIX) // Matrix transform
    {
        ObjectTransform(obj, &camera->matrixscreen, RENDER_TRANSFORM_TRANS, 0);
        // The result is still in 4D homogeneous coordinates; divide x, y, z by w in case w is not 1
        for (int i = 0; i < obj->vertexcount; ++i)
        {
            obj->vertexlisttrans[i].x /= obj->vertexlisttrans[i].w;
            obj->vertexlisttrans[i].y /= obj->vertexlisttrans[i].w;
            obj->vertexlisttrans[i].z /= obj->vertexlisttrans[i].w;
            obj->vertexlisttrans[i].w = 1;
        }
    }
}
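For reference, the two functions would typically be called back to back in the render loop, roughly as in the sketch below (the obj and cam variables and the surrounding loop are assumptions, not code from the demo):

// Sketch only: obj and cam stand for whatever object and camera the demo has set up
ObjectProjectTransform(&obj, &cam, TRANSFORM_METHOD_MATRIX); // x -> (-1, 1), y -> (-1/ar, 1/ar)
ObjectScreenTransform(&obj, &cam, TRANSFORM_METHOD_MATRIX);  // -> pixel coordinates
// ...then rasterize the object's polygons using vertexlisttrans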
6. Summary
The screen's width-to-height ratio affects two things. First, the camera's horizontal and vertical fields of view are no longer the same. Second, the translation into the screen coordinate system needs a different offset for x and for y.
7. Code Download
Thanks to the flexible camera system and the convenient perspective and viewport transformation matrices, the demo has been changed slightly this time: you can use the arrow keys to adjust the UVN camera's world coordinates. Up/down adjusts the z coordinate and left/right adjusts the x coordinate. Be careful not to move the camera too close to the object, or errors will occur, because the clipping code has not been written yet.
Screenshots:
Full project source code download: >> Click to go to the download page <<