Create Different Camera Modes in the 3D World: Create a Camera with a Position, Target, and View Frustum

2.1 Create a Camera: Position, Target, and View Frustum

Before you can draw the 3D world to the screen, you need the view and projection matrices of the camera.

Solution

You store the camera's position and direction in a matrix called the view matrix. To create a view matrix, XNA needs to know the position, target, and Up vectors of the camera.

You store the view frustum, the part of the 3D world that is actually visible, in a second matrix, called the projection matrix.

How It Works

The view matrix defines the position and viewing direction of the camera. You create this matrix by calling Matrix.CreateLookAt:

 
viewMatrix = Matrix.CreateLookAt(camPosition, camTarget, camUpVector);

This method takes three arguments: the position, target, and Up vectors of the camera. The position vector is easy to understand: it indicates where in 3D space the camera is placed. Next, you specify a second point, the target the camera is looking at. These two almost define a camera, so what is the Up vector used for?

Consider this example: your head (in fact, your eyes) is a camera, and you try to define a camera with the same position and viewing direction as your head. The first vector is easy to find: the position vector is the position of your head in the 3D scene. The target vector is not much harder: if you are looking at the X in Figure 2-1, the position of that X is the target vector of the camera. However, there are still several ways for a head at that same position to look at X!

Figure 2-1 The observation target of the camera

With only the position and target vectors defined, you can still rotate your head around the axis between your eyes and the target, for example tilting it toward one shoulder. If you do this, the position of your head and the point you are looking at stay the same, but because everything is rotated, the image you observe is completely different. That is why you also need to define the Up vector of the camera.
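To make the role of the Up vector concrete, here is a minimal sketch (not part of the original recipe) that builds two view matrices sharing the same position and target but using different up vectors; the second camera is rolled, so it shows the same scene tilted on its side:

// Both cameras are at the same position and look at the same target.
Vector3 camPosition = new Vector3(10, 0, 0);
Vector3 camTarget = new Vector3(0, 0, 0);

// Camera 1: "up" is the positive y-axis.
Matrix uprightView = Matrix.CreateLookAt(camPosition, camTarget, new Vector3(0, 1, 0));

// Camera 2: "up" is the positive z-axis, so the image comes out rolled by 90 degrees.
Matrix rolledView = Matrix.CreateLookAt(camPosition, camTarget, new Vector3(0, 0, 1));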

Knowing the camera's position, its target, and its up direction, the camera is uniquely determined, and the view matrix is built from these three vectors. You create it with the Matrix.CreateLookAt method:

 
Matrix viewMatrix;
Vector3 camPosition = new Vector3(10, 0, 0);
Vector3 camTarget = new Vector3(0, 0, 0);
Vector3 camUpVector = new Vector3(0, 1, 0);
viewMatrix = Matrix.CreateLookAt(camPosition, camTarget, camUpVector);

Note: The position and target vectors of the camera refer to real points in 3D space, while the Up vector only indicates the upward direction of the camera. For example, take a camera positioned at (10, 0, 0) and looking at (0, 0, 0). If the camera's up direction is simply up, you specify (0, 1, 0) as the Up vector. This is a direction, not a point in 3D space; the 3D point straight above the camera in this example would be (10, 1, 0).

Note: XNA provides shortcuts for the most common vectors: Vector3.Up stands for (0, 1, 0), Vector3.Forward for (0, 0, -1), and Vector3.Right for (1, 0, 0). To help you get familiar with 3D vectors, the first tutorials in this chapter use the full syntax.
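As a small illustration (not from the recipe's own listing), the view matrix built earlier can be written more compactly with these shortcuts:

// Same camera as before, written with XNA's predefined vectors.
Matrix viewMatrix = Matrix.CreateLookAt(new Vector3(10, 0, 0), Vector3.Zero, Vector3.Up);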

XNA also needs the projection matrix. You can think of this matrix as something that maps every point in 3D space onto the 2D window, but I prefer that you think of it as the matrix holding the information of the camera's lens.

Take a look at Figure 2-2. The left image shows the part of a 3D scene that falls inside the camera's field of view; you can see it as a pyramid. The right image shows a 2D cross-section of that pyramid.

Figure 2-2 The view frustum of the camera

The pyramid on the left side of the image is called the view frustum. Only objects inside the frustum are drawn to the screen.

XNA can create such a frustum for you and store it in the projection matrix. You call Matrix.CreatePerspectiveFieldOfView to create this matrix:

 
projectionMatrix = Matrix.CreatePerspectiveFieldOfView(viewAngle, aspectRatio, nearPlane, farPlane);

The first argument of the Matrix.CreatePerspectiveFieldOfView method is the viewing angle. It corresponds to half of the opening angle at the top of the pyramid, shown on the right of Figure 2-2. If you want to know your own viewing angle, hold your hands next to your eyes and you will find it is roughly 90 degrees. Since π radians equal 180 degrees, 90 degrees equal π/2. Because you need to specify half of the viewing angle, this argument becomes π/4.
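If you prefer to think in degrees, the same value can be written with MathHelper.ToRadians; this short sketch only restates the arithmetic above and is not part of the original listing:

// 180 degrees equals MathHelper.Pi radians, so 45 degrees equals MathHelper.PiOver4.
float viewAngle = MathHelper.PiOver4;            // the value used later in this recipe
float sameAngle = MathHelper.ToRadians(45.0f);   // identical value, expressed in degrees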

Note: You usually want a human perspective, but in some scenarios you may want to specify a different viewing angle. This typically happens when you render the scene into a texture, for example from the viewpoint of a light. In the case of a light, a larger viewing angle corresponds to a wider illuminated area. For an example, see tutorial 3-13.
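As a hedged illustration of such a case (the variable names and values here are hypothetical, not taken from tutorial 3-13), a projection matrix for a light could simply use a wider viewing angle:

// Hypothetical example: a wider opening angle when rendering the scene from a light.
float lightViewAngle = MathHelper.PiOver2;   // 90 degrees: a wider cone than the camera's
Matrix lightProjection = Matrix.CreatePerspectiveFieldOfView(lightViewAngle, 1.0f, 0.5f, 100.0f);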

The next argument you need to specify has nothing to do with the "source" (the frustum), but with the "destination": the screen. It is the aspect ratio of the 2D screen, which corresponds to the aspect ratio of the back buffer. You can obtain it with the following code:

 
float aspectRatio = graphics.GraphicsDevice.Viewport.AspectRatio;

This ratio is 1 for a square window that is as wide as it is high. It is larger than 1 for a full-screen 800x600 window, and larger still on a widescreen laptop or an HDTV. If you mistakenly use 1 instead of 800/600 as the aspect ratio for an 800x600 window, the image will be stretched horizontally.
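For comparison, the same ratio can be computed by hand from the viewport size; this is only a sketch that assumes the graphics field used in the code later in this recipe:

// For an 800x600 back buffer this gives 800f / 600f = 1.333..., not 1.
float manualAspectRatio = (float)graphics.GraphicsDevice.Viewport.Width /
                          (float)graphics.GraphicsDevice.Viewport.Height;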

The last two arguments relate to the frustum. Imagine an object extremely close to the camera: it would block the entire view, and the window would be filled with a single color. To avoid this, XNA lets you define a plane near the top of the pyramid; anything between the top of the pyramid and this plane is not drawn. This plane is called the near clipping plane, and you specify the distance between the camera and the near clipping plane as the third argument of the CreatePerspectiveFieldOfView method.

Note: Clipping means flagging objects that do not need to be drawn, which improves the frame rate of your program.

Likewise, you can deal with objects that are very far away from the camera. They look tiny but still cost processing time on the graphics card, so objects beyond a second plane are clipped as well. This second plane is called the far clipping plane and forms the far boundary of the frustum. You specify the distance between the camera and this plane as the last argument of the CreatePerspectiveFieldOfView method.

Beware: Even when drawing a simple 3D scene, do not set the far clipping plane to an excessively large value. For example, setting the far-plane distance to a crazy 100,000 will cause visual artifacts. A graphics card with a 16-bit depth buffer (see the "Z-buffer (or depth buffer)" section of this tutorial) has 2^16 = 65,536 depth values. If two objects map to the same pixel and the distance between them is less than 100,000/65,536 ≈ 1.53 units, the card cannot determine which object is closer to the camera.
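As a rough back-of-the-envelope check (this deliberately ignores the non-linear spacing of depth values discussed next), the 1.53-unit figure comes from dividing the frustum depth by the number of values a 16-bit buffer can hold:

float farPlaneDistance = 100000f;                       // the "crazy" far-plane value from the warning
int depthValues = 1 << 16;                              // 65,536 values in a 16-bit depth buffer
float worstCaseStep = farPlaneDistance / depthValues;   // roughly 1.53 world units per depth step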

In practice the results are even worse, because the depth values are not spaced linearly but roughly quadratically, so the farthest three-quarters of the scene appear to be at almost the same distance from the camera. The distance between the near and far clipping planes is best kept below a few hundred units, and even smaller if the graphics card's depth buffer has fewer than 16 bits.

A typical symptom of this problem is that the objects you draw show jagged, saw-toothed edges.

Usage

You will want to update the view matrix in the update phase of your program, because the camera's position and direction depend on user input. The projection matrix needs to be updated only when the aspect ratio of the window changes, for example when the window is switched to full-screen mode.
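One possible way to react to an aspect-ratio change is to rebuild the projection matrix from the window's resize event; this is only a sketch that assumes the fields of the Game1 class shown below and a resizable window:

// Sketch, placed in Game1's constructor: recreate the projection whenever the window is resized.
Window.ClientSizeChanged += (sender, e) =>
{
    float aspectRatio = graphics.GraphicsDevice.Viewport.AspectRatio;
    projectionMatrix = Matrix.CreatePerspectiveFieldOfView(
        MathHelper.PiOver4, aspectRatio, 0.5f, 100.0f);
};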

After the view and projection matrices have been calculated, you pass them to the effect used to draw the objects, as shown in the Draw method below. This lets the shaders on the graphics card transform every vertex to the correct pixel of the window.

Code

The following example shows how to create the view and projection matrices. Say you have an object at (0, 0, 0) and you want to place the camera 10 units along the positive x-axis, with the positive y-axis as the Up direction. You also want to render the scene to an 800x600 window and clip away all triangles closer to the camera than 0.5f or farther away than 100.0f. The following code does this:

using System;
using System.Collections.Generic;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Audio;
using Microsoft.Xna.Framework.Content;
using Microsoft.Xna.Framework.Graphics;
using Microsoft.Xna.Framework.Input;
using Microsoft.Xna.Framework.Storage;

namespace BookCode
{
    public class Game1 : Microsoft.Xna.Framework.Game
    {
        GraphicsDeviceManager graphics;
        ContentManager content;
        BasicEffect basicEffect;
        GraphicsDevice device;
        CoordCross cCross;
        Matrix viewMatrix;
        Matrix projectionMatrix;

        public Game1()
        {
            graphics = new GraphicsDeviceManager(this);
            content = new ContentManager(Services);
        }

The projection matrix has to be updated only when the aspect ratio of the window changes, so you need to define it just once; put it in the initialization code of your program.

protected override void Initialize()
{
    base.Initialize();

    float viewAngle = MathHelper.PiOver4;
    float aspectRatio = graphics.GraphicsDevice.Viewport.AspectRatio;
    float nearPlane = 0.5f;
    float farPlane = 100.0f;
    projectionMatrix = Matrix.CreatePerspectiveFieldOfView(viewAngle, aspectRatio, nearPlane, farPlane);
}

protected override void LoadContent()
{
    device = graphics.GraphicsDevice;
    basicEffect = new BasicEffect(device, null);
    cCross = new CoordCross(device);
}

protected override void UnloadContent()
{
}

The view matrix changes whenever user input moves the camera, so recalculate it in the update phase of your program.

protected override void Update(GameTime gameTime)
{
    if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed)
        this.Exit();

    Vector3 camPosition = new Vector3(10, 10, -10);
    Vector3 camTarget = new Vector3(0, 0, 0);
    Vector3 camUpVector = new Vector3(0, 1, 0);
    viewMatrix = Matrix.CreateLookAt(camPosition, camTarget, camUpVector);

    base.Update(gameTime);
}

Then you pass the view and projection matrices to the effect that renders the scene:

 
protected override void Draw(GameTime gameTime)
{
    graphics.GraphicsDevice.Clear(Color.CornflowerBlue);

    basicEffect.World = Matrix.Identity;
    basicEffect.View = viewMatrix;
    basicEffect.Projection = projectionMatrix;

    basicEffect.Begin();
    foreach (EffectPass pass in basicEffect.CurrentTechnique.Passes)
    {
        pass.Begin();
        cCross.DrawUsingPresetEffect();
        pass.End();
    }
    basicEffect.End();

    base.Draw(gameTime);
}
Additional reading

You need only these two matrices for XNA to draw your 3D scene to the 2D screen. Going from 3D to 2D is quite a challenge, and although XNA handles it for you, you need a solid understanding of what happens behind the scenes in order to create and debug larger 3D programs.

Z-buffer (or depth buffer)

A first challenge is deciding which object ends up in each final pixel of the image. When going from 3D space to the 2D screen, multiple objects can map to the same pixel, as shown in Figure 2-3. A pixel on the 2D screen corresponds to a ray in 3D space (see tutorial 4-14 for more information). The dashed line in Figure 2-3 shows the ray for one pixel; it crosses two objects. In this case, the pixel should get its color from object A, because A is closer to the camera than B.

Figure 2-3 Multiple objects occupying the same pixel

However, if object B happens to be drawn first, the corresponding pixel in the frame buffer is first given the color of B. When object A is drawn afterward, the graphics card has to decide whether that pixel should be overwritten with the color of A.

The solution is a second image stored on the graphics card, the same size as the window. Whenever a pixel in the frame buffer receives a color, the distance between the object and the camera is stored at the same position in this second image. This distance is a value between 0 and 1, where 0 corresponds to the near clipping plane and 1 corresponds to the far clipping plane. That is why this second image is called the depth buffer, or Z-buffer.

How does this solve the problem? When object B is drawn, the graphics card checks the Z-buffer for each of its pixels. Because B is drawn first, the Z-buffer does not yet contain anything closer. As a result, the corresponding pixels in the frame buffer get the color of B, and the same pixels in the Z-buffer receive the distance between object B and the camera.

Next, object A is drawn. For each pixel of A, the graphics card again checks the Z-buffer first. The Z-buffer already contains the values of object B, but those stored distances are larger than the distance between object A and the camera, so the graphics card overwrites those pixels with the color of object A!
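The decision the graphics card makes for every pixel can be summarized in a short conceptual sketch; this is plain C# for illustration only (the card performs this test in hardware), and the arrays and method name are made up for the example:

// Conceptual per-pixel depth test, not actual XNA code.
// depthBuffer holds values between 0 (near clipping plane) and 1 (far clipping plane).
void TryWritePixel(float[,] depthBuffer, Color[,] frameBuffer, int x, int y, float newDepth, Color newColor)
{
    if (newDepth < depthBuffer[x, y])       // the new object is closer to the camera
    {
        frameBuffer[x, y] = newColor;       // overwrite the color in the frame buffer
        depthBuffer[x, y] = newDepth;       // store the new, smaller distance
    }
}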
