OpenGL Tutorial Translation, Lesson 13: Camera Space


Original address: http://ogldev.atspace.co.uk/ (please download the source code from the original page)

Background

In the last few lessons we saw two types of vertex transformations. The first type changes the position (translation), orientation (rotation), or size (scaling) of an object. These transformations allow us to place an object anywhere in the 3D world. The second type is the perspective projection transformation, which projects a vertex position in the 3D world onto a 2D plane. Once the coordinates are in 2D, it is very easy to map them to screen-space coordinates. These coordinates are then used to rasterize the primitives that make up the object (points, lines, or triangles).

We have not dealt with the camera in any of the previous lessons; we implicitly assumed that it sits at the origin of the 3D space. In practice we want to be able to control the camera freely: place it anywhere in the 3D world and project the vertices onto a 2D plane in front of it. This establishes the correct relationship between the camera and the objects on the screen.

In the picture below, we see the camera with its back toward us, placed somewhere in the world. There is a virtual 2D plane in front of it, and the ball is projected onto that plane. The camera is tilted, so the plane is tilted as well. Because of the camera's field of view, only a rectangular part of this 2D plane is visible; everything outside the rectangle is clipped away. Our goal is to render this rectangle to the screen.


In theory, a transformation matrix could be generated that projects an object in 3D space onto a 2D plane in front of a camera located anywhere in the world coordinate system. However, the math involved is far more complex than anything we have met so far. It is much simpler when the camera sits at the origin of the world coordinate system, looking down the Z axis. For example, suppose an object is placed at (0,0,5) and the camera at (0,0,1), looking down the Z axis (i.e. directly at the object). If we move both the camera and the object one unit toward the origin, their relative distance and orientation (in terms of the camera's direction) stay the same, but now the camera is at the origin. Moving all the objects in the scene in the same way lets us render the scene correctly using the methods we learned earlier.

The example above is simple because the camera already points along the Z axis and is generally aligned with the axes of the coordinate system. But what happens if the camera looks in some other direction? Look at the picture below. Simply put, it shows a 2D coordinate system viewed from above the camera.


The camera originally pointed along the Z axis, but was then rotated 45 degrees clockwise. As you can see, the camera defines its own coordinate system, which may be identical to the world's (top picture) or different from it (bottom picture). So there are actually two coordinate systems in play at the same time: the world coordinate system in which objects are specified, and a camera coordinate system aligned with the camera's axes (target, up, and right). These two systems are what we call world space and camera/view space.

In the picture above, the green ball is located at (0,y,z) in world space. In camera space it is located in the upper-left part of the coordinate system (in other words, its X coordinate is negative and its Z coordinate is positive). We need to find the position of the green ball in camera space. Once we have it, we can simply forget about world space and work in camera space only. In camera space the camera sits at the origin, pointing down the Z axis. The object is then specified relative to the camera and can be rendered using the methods we have already learned.

Rotating the camera 45 degrees clockwise is equivalent to rotating the green ball 45 degrees counterclockwise: the apparent movement of objects is always opposite to the movement of the camera. So in general we need two new transformations to add to our existing transformation pipeline: move the objects so that the camera ends up at the origin while the relative position of objects and camera stays unchanged, and rotate the objects opposite to the camera's rotation.

Moving the camera is very simple. If the camera is located at (x,y,z), the translation transformation is (-x,-y,-z). The reason is obvious: the camera was placed in the world using a translation by the vector (x,y,z), so to bring it back to the origin we translate by the opposite of that vector. The transformation matrix is as follows:
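With the vertex position multiplied on the right as a column vector (the convention used by the matrix code later in this lesson), this is the standard homogeneous translation matrix:

$$\begin{pmatrix} 1 & 0 & 0 & -x \\ 0 & 1 & 0 & -y \\ 0 & 0 & 1 & -z \\ 0 & 0 & 0 & 1 \end{pmatrix}$$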



The next step is to orient the camera as specified in the world coordinate system. We want to find the position of the vertices in the new coordinate system defined by the camera. So the real question is: how do we move from one coordinate system to another?

Look at the picture above. The world coordinate system is defined by the three linearly independent vectors (1,0,0), (0,1,0), and (0,0,1). Linearly independent means that we cannot find scalars x, y, z, not all zero, such that x*(1,0,0) + y*(0,1,0) + z*(0,0,1) = (0,0,0). In geometric terms, any two of these three vectors define a plane that is perpendicular to the third vector. It is easy to see that the camera coordinate system is defined by the vectors (1,0,-1), (0,1,0), and (1,0,1). After normalizing these vectors we get (0.7071,0,-0.7071), (0,1,0), and (0.7071,0,0.7071).

The picture below shows how the position of a vector is specified in the two different coordinate systems.


We know how to express the unit vectors that represent the camera-space axes in world space, and we know the position of the vector in world space (x,y,z). We are looking for (x',y',z'). We now use a property of the dot product known as the scalar projection. The scalar projection of an arbitrary vector A onto a unit vector B is the result of their dot product: it gives the magnitude of the component of A in the direction of B. In other words, it is the projection of A onto B. In the example above, taking the dot product of the vector (x,y,z) with the unit vector that represents the camera's X axis gives us x'. In the same way we get y' and z'. (x',y',z') is the position of the vector in camera space.
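With U, V, and N denoting the camera's unit axis vectors expressed in world space (these names are introduced below), the three projections can be written compactly as:

$$x' = (x,y,z)\cdot U, \qquad y' = (x,y,z)\cdot V, \qquad z' = (x,y,z)\cdot N$$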

Let's look at how to turn this idea into a complete method for specifying the camera orientation. This method is called the "UVN camera" and is only one of many ways to specify a camera's orientation. In this method the camera is defined by the following vectors:

1. N – the vector from the camera to its target. In some 3D literature it is also known as the 'look at' vector. It corresponds to the Z axis.

2. V – when standing upright, this is the vector pointing straight up above your head. If you are writing a flight simulator and the plane is upside down, this vector may well point toward the ground. It corresponds to the Y axis.

3. U – the vector pointing from the camera to its right. It corresponds to the X axis.

In order to transform a position from world space into the camera space defined by the UVN vectors, we compute a dot product between the position and each of the UVN vectors. This is expressed by the following matrix:
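Matching the InitCameraTransform code shown below, the matrix carries U, V, and N in its rows:

$$\begin{pmatrix} U_x & U_y & U_z & 0 \\ V_x & V_y & V_z & 0 \\ N_x & N_y & N_z & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$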


In the code for this lesson you will notice that the uniform variable 'gWorld' in the shader has been renamed 'gWVP'. This change reflects the fact that this series of transformations is known in many textbooks as WVP: World-View-Projection.

Code Walkthru

In this lesson I decided to make a small design change: the low-level matrix manipulation code moves from the Pipeline class into the Matrix4f class. The Pipeline class now calls the various Matrix4f methods to initialize matrices and concatenates several of them to produce the final transformation.

(pipeline.h:85)

struct {
    Vector3f Pos;
    Vector3f Target;
    Vector3f Up;
} m_camera;

The Pipeline class has some new members that store the camera parameters. Note that the 'U' axis is missing from them: it will be computed as the cross product of the target and up vectors. In addition, there is a new function SetCamera for passing these values in.
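SetCamera itself is not listed in this lesson; a minimal sketch of what it presumably looks like (simply storing the three vectors in m_camera) is:

void Pipeline::SetCamera(const Vector3f& Pos, const Vector3f& Target, const Vector3f& Up)
{
    // Hypothetical implementation: just record the camera parameters;
    // GetTrans() below does the actual matrix work.
    m_camera.Pos = Pos;
    m_camera.Target = Target;
    m_camera.Up = Up;
}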


(math3d.h:21)

Vector3f Vector3f::Cross(const Vector3f& v) const
{
    const float _x = y * v.z - z * v.y;
    const float _y = z * v.x - x * v.z;
    const float _z = x * v.y - y * v.x;

    return Vector3f(_x, _y, _z);
}

There is a new method in the Vector3f class to calculate the cross product of two Vector3f objects. The cross product of two vectors produces a vector perpendicular to the plane defined by the two multiplied vectors. This becomes more intuitive when you remember that vectors have a direction and a magnitude but no position: all vectors with the same direction and magnitude are considered equal, regardless of where they "start". So you may as well place the starting points of both vectors at the origin. Then you can form a triangle with one vertex at the origin and the other two vertices at the tips of the vectors. This triangle defines a plane, and the cross product yields a vector perpendicular to that plane. You can read more about the cross product on Wikipedia.
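As a quick sanity check (hypothetical usage, not part of the lesson's code), crossing the world X axis with the world Y axis yields the Z axis:

Vector3f xAxis(1.0f, 0.0f, 0.0f);
Vector3f yAxis(0.0f, 1.0f, 0.0f);
Vector3f zAxis = xAxis.Cross(yAxis);   // (0, 0, 1) - perpendicular to both inputs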

(math3d.h:30)

Vector3f& Vector3f::Normalize()
{
    const float Length = sqrtf(x * x + y * y + z * z);

    x /= Length;
    y /= Length;
    z /= Length;

    return *this;
}

To generate the UVN matrix we need the vectors to be unit vectors. This operation is called vector normalization, and it is performed by dividing each component of the vector by the vector's length.
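In symbols, for a vector v = (x, y, z):

$$\hat{v} = \frac{v}{\|v\|}, \qquad \|v\| = \sqrt{x^2 + y^2 + z^2}$$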

(math3d.cpp:84)

void Matrix4f::InitCameraTransform(const Vector3f& Target, const Vector3f& Up)
{
    Vector3f N = Target;
    N.Normalize();
    Vector3f U = Up;
    U.Normalize();
    U = U.Cross(Target);
    Vector3f V = N.Cross(U);

    m[0][0] = U.x;  m[0][1] = U.y;  m[0][2] = U.z;  m[0][3] = 0.0f;
    m[1][0] = V.x;  m[1][1] = V.y;  m[1][2] = V.z;  m[1][3] = 0.0f;
    m[2][0] = N.x;  m[2][1] = N.y;  m[2][2] = N.z;  m[2][3] = 0.0f;
    m[3][0] = 0.0f; m[3][1] = 0.0f; m[3][2] = 0.0f; m[3][3] = 1.0f;
}

This function generates the camera transformation matrix that is used later in the Pipeline class. The U, V, and N vectors are computed and placed into the rows of the matrix. Since the vertex position is multiplied (as a column vector) on the right side of the matrix, this means a dot product is taken between each of the U, V, N vectors and the position. This produces three scalar values, which become the X, Y, and Z coordinates of the position in camera space.

The parameters supplied to this function are the target and up vectors. The "right" vector is obtained as their cross product. Since we cannot be sure the parameters are unit vectors, we normalize them first. After generating the U vector, we recompute the up vector as the cross product between the target and right vectors. The reason for recalculating the up vector will become clearer later, when we start moving the camera around: it is simpler to update only the target vector and leave the up vector untouched. However, this means the angle between the target and up vectors would no longer be 90 degrees, which would make the coordinate system invalid. By computing the right vector from target and up, and then recomputing up from target and right, we obtain a coordinate system in which every pair of axes is at 90 degrees.
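In vector notation the function computes the following (mirroring the code exactly; note that U is built from the unnormalized Target and is not itself re-normalized after the cross product):

$$N = \frac{\text{Target}}{\|\text{Target}\|}, \qquad U = \frac{\text{Up}}{\|\text{Up}\|} \times \text{Target}, \qquad V = N \times U$$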

(pipeline.cpp:22)

const Matrix4f* Pipeline::GetTrans()
{
    Matrix4f ScaleTrans, RotateTrans, TranslationTrans,
             CameraTranslationTrans, CameraRotateTrans, PersProjTrans;

    ScaleTrans.InitScaleTransform(m_scale.x, m_scale.y, m_scale.z);
    RotateTrans.InitRotateTransform(m_rotateInfo.x, m_rotateInfo.y, m_rotateInfo.z);
    TranslationTrans.InitTranslationTransform(m_worldPos.x, m_worldPos.y, m_worldPos.z);
    CameraTranslationTrans.InitTranslationTransform(-m_camera.Pos.x, -m_camera.Pos.y, -m_camera.Pos.z);
    CameraRotateTrans.InitCameraTransform(m_camera.Target, m_camera.Up);
    PersProjTrans.InitPersProjTransform(m_persProj.FOV, m_persProj.Width, m_persProj.Height,
                                        m_persProj.zNear, m_persProj.zFar);

    m_transformation = PersProjTrans * CameraRotateTrans * CameraTranslationTrans *
                       TranslationTrans * RotateTrans * ScaleTrans;

    return &m_transformation;
}

Let's update the function that generates an object's complete transformation matrix. It is becoming quite complex now that two new camera matrices have been added. After completing the world transformation (combining the object's scaling, rotation, and translation), we begin the camera transformation by "moving" the camera to the origin. This is a translation by the negated camera position vector: if the camera is at (1,2,3), we translate by (-1,-2,-3) to bring the camera back to the origin. Then we generate the camera rotation matrix from the camera's target and up vectors, which completes the camera part. Finally, the projection matrix is applied to produce the final coordinates.
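Reading the multiplication in GetTrans() from right to left (the order in which the transformations are applied to a vertex), the final WVP matrix is:

$$M_{WVP} = M_{proj} \cdot M_{camera\,rotation} \cdot M_{camera\,translation} \cdot M_{translation} \cdot M_{rotation} \cdot M_{scale}$$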

(tutorial13.cpp:76)

Vector3f CameraPos(1.0f, 1.0f, -3.0f);
Vector3f CameraTarget(0.45f, 0.0f, 1.0f);
Vector3f CameraUp(0.0f, 1.0f, 0.0f);

p.SetCamera(CameraPos, CameraTarget, CameraUp);

We use the new method in the main render loop. To place the camera we move back from the origin along the negative Z axis, then one unit to the right and one unit up. The camera looks roughly along the positive Z axis, angled slightly to the right, and its up vector is the positive Y axis. We pass all of this to the Pipeline object, and the Pipeline class handles the rest.



