Discussing the relationship between gluPerspective and gluLookAt

Source: Internet
Author: User

After reading some OpenGL programs, I found that a few things still confused me, especially what gluLookAt is used for and what gluPerspective does.

I read this article online: "I finally understood the relationship between gluPerspective and gluLookAt" (repost).

http://cowboy.1988.blog.163.com/blog/static/751057982010101574732212/

In my opinion, it does not clearly explain what these functions are for; it only shows the effects of different parameters. I believe what it says is correct, but I prefer to understand why things are done, not just how.

Then I found another article online: Getting Started with OpenGL, Part 5.

http://apps.hi.baidu.com/share/detail/22508949

That article is a little more detailed than the first, but it offers no systematic summary, and I felt there were still gaps in it.

As a result, I had to go back to my old textbook (Computer Graphics with OpenGL) and reread the relevant chapters, after which I had a rough picture.

I am confident that some other beginners are also puzzled by this problem. (If this post helps you solve yours, that is the greatest reward for writing it.)

In two-dimensional graphics development, taking simple image display as an example, our main job is to set the color of each pixel in a display buffer and then present it. There are many acceleration techniques for speed, but the principle is basically the same: very intuitive and simple, just like coloring a canvas.

Being used to two-dimensional development, when we come to the three-dimensional world we suddenly feel lost. Where do we draw the colors? How do we rotate and translate? And so on.

If you search online at this point, many pages will tell you that you only need to call a function (OpenGL provides ready-made ones).

Instead of jumping into specific functions, let us first look at how a 3D image is actually formed.

In two-dimensional display, we color directly on the canvas; three-dimensional display is more like taking a photo with a camera. Although the photo printed on the film is two-dimensional, the process is no longer as simple as 2D coloring: there is a conversion process in between. If we understand that conversion process and then look at the corresponding functions, we will know why they exist.

I will not elaborate on the distinction between the various coordinate systems; many books cover it and it is easy to understand.

Generally, each object is built in its own modeling coordinate system, which is then converted to the world coordinate system.

In the OpenGL coordinate system, the positive Z axis points out of the screen (a right-handed coordinate system), the positive X axis points to the right, and the positive Y axis points up.

Suppose we place a cup at the point (0, 0, -10). How do we display this cup on the screen? In 2D development, we would simply draw the image of the cup directly. Now, in 3D, we use a camera to take a picture. The scene may contain not just the cup but other things as well, and how the cup's three-dimensional shape shows up in the image is also a question.

Since we are taking a photo, we first need to choose the camera's position; this is usually called the viewpoint (where we look from). Then we need to set the camera's orientation, which is easy to understand.

So suppose the camera has been placed at the position (0, 0, 0) and points in the direction (0, 0, -1). Now we must consider a question that beginners may not be clear about: how exactly does the cup end up on the screen?

[Figure: a simple sketch of the viewpoint, the observation plane, and the cup]

The figure above is a simple sketch. To draw the cup on the screen, we need to perform the following transformations.

First, we establish an observation (viewing) coordinate system with its origin at the viewpoint, and we also need an observation plane (the "screen" in the figure above). Note that this plane is slightly different from the real screen: there is a further mapping between the two, but that is a two-dimensional transformation we will not dwell on. We then project the shape of the cup onto the observation plane. Note that it is a projection; the result of the projection is what we actually draw.

Since this is a projection, it depends on coordinates. Therefore, we must first convert the cup (and everything else in the scene) into the observation coordinate system. This conversion involves operations such as translation and rotation.

Once the cup's coordinates and model are expressed in the observation coordinate system, the real projection can begin.

Mathematically, projection is fairly simple and intuitive. There are two common types: orthographic projection and perspective projection. I like to picture each as a light source. Orthographic projection is like a light source that emits rays parallel to the normal of the observation plane, so no matter how far an object is from the plane, its projected size stays the same.

Perspective projection is more like a point light source: all the rays pass through a single point, so for objects of the same size, a near object projects large and a far object projects smaller.
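To make the contrast concrete, here is a minimal sketch in plain C (not an OpenGL call; the function name and the plane distance d are illustrative assumptions) that projects the same point both ways onto an observation plane at distance d in front of the eye:

    #include <stdio.h>

    /* Sketch contrasting the two projection types onto an observation
     * plane at z = -d, with the eye at the origin looking down -Z. */
    void project(double x, double y, double z, double d) {
        /* Orthographic: parallel rays, so x and y are kept regardless of depth. */
        printf("orthographic: (%.3f, %.3f)\n", x, y);
        /* Perspective: rays converge at the eye; similar triangles give
         * x' = x * d / -z, so farther points project smaller. */
        printf("perspective:  (%.3f, %.3f)\n", x * d / -z, y * d / -z);
    }

    int main(void) {
        project(1.0, 1.0, -10.0, 1.0);  /* a point on the cup at z = -10  */
        project(1.0, 1.0, -20.0, 1.0);  /* same offset, twice as far away */
        return 0;
    }

The second call prints half the values of the first under perspective, while the orthographic results are identical.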

After projection, the result appears on the observation plane; this is our final image. Mathematically the observation plane is unbounded, but in practice we must limit it to a finite region. That region determines which area we can display; projections falling outside it are ignored. This is the clipping window mentioned in many books.

For either projection, we can compute the projected position (xview, yview) on the observation plane of each vertex (x, y, z). The problem is that different points may be projected to the same location; how do we tell them apart? Our predecessors were really clever here: along with the projected position, they introduced the concept of a projection transformation, which keeps each point's z value while converting (x, y) to (xview, yview).

Within the clipping volume (the near and far extents along the Z axis are chosen by hand), many points now share the same (xview, yview) and differ only in their z coordinate. Based on which is nearer, we can decide which point determines what is seen on the observation plane. In other words, after the projection transformation (converting x and y to the final (xview, yview) on the observation plane) we obtain a unified volume, which is effectively an orthographic projection: every vertex's x and y coordinates already coincide with its position on the observation plane.

After this unification, the coordinates can be normalized (so that each axis lies in the range (-1, 1)), and then clipping, visible-surface determination, lighting, and the other shading steps follow. This forms the final image.
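As a concrete version of this unification step, here is a sketch of the mapping performed by the matrix that gluPerspective builds (following its documented form); points inside the viewing volume land in the cube from -1 to 1 on all three axes, with z preserved in a transformed but order-preserving form for the later depth comparison. The function name is made up for illustration:

    #include <math.h>

    /* Map a point in eye coordinates to normalized device coordinates,
     * following the standard gluPerspective matrix. */
    void eye_to_ndc(double x, double y, double z,
                    double fovy_rad, double aspect,
                    double znear, double zfar,
                    double *xn, double *yn, double *zn) {
        double f = 1.0 / tan(fovy_rad / 2.0);
        double w = -z;                   /* the perspective divide factor */
        *xn = (f / aspect) * x / w;      /* in [-1, 1] inside the frustum */
        *yn = f * y / w;
        *zn = ((zfar + znear) * z + 2.0 * zfar * znear)
              / (znear - zfar) / w;      /* depth kept for comparisons    */
    }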

In this orthographic form, for each point (xview, yview) on the observation plane we can find the corresponding object by performing a depth test based on the z-buffer; this is how a nearer object hides a farther one. In the end, the projection transformation converts either kind of projection into a final orthographic one, and the depth test on the differing z values decides which objects are drawn and which are hidden. Unifying everything this way makes life much easier. :)
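A minimal sketch of that depth-test setup, assuming a GLUT window created with a depth buffer (GLUT is used here only for familiarity; any context with a depth buffer works the same way):

    #include <GL/glut.h>

    void init(void) {
        glEnable(GL_DEPTH_TEST);    /* compare z values per fragment */
    }

    void display(void) {
        /* clear color and depth each frame so stale depths do not linger */
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        /* ... draw the scene; nearer fragments now hide farther ones ... */
        glutSwapBuffers();          /* assumes GLUT_DOUBLE display mode */
    }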

After imaging, we put the content on the screen for display (a bit like printing a photo from the negative). The size of the clipping window on the observation plane may or may not match the size on the screen, but to keep the final result consistent we usually give them the same aspect ratio (width/height, W/H), so the conversion is a proportional enlargement or reduction and nothing is distorted.

The above is the 3D display process. How do we connect it to OpenGL? By calling the corresponding OpenGL functions.

Many of these functions are explained on the Internet, but few explanations connect them to the process we have just walked through.

First, we need to convert the objects to be displayed into the observation coordinate system.

If an object has its own modeling coordinate system, it must first be converted to the world coordinate system, and then to the observation coordinate system.

Since we are converting between coordinate systems, we must first establish the observation coordinate system itself.

Therefore, we first choose the coordinate system: its origin, its Z axis, and its Y axis direction. From the Z and Y directions, the direction of the X axis follows.

For this step, OpenGL provides:

gluLookAt(GLdouble eyeX, GLdouble eyeY, GLdouble eyeZ, GLdouble centerX, GLdouble centerY, GLdouble centerZ, GLdouble upX, GLdouble upY, GLdouble upZ);

It takes (eyeX, eyeY, eyeZ) as the origin of the observation coordinate system (expressed in world coordinates); (centerX, centerY, centerZ) specifies the point looked at, which fixes the viewing direction (the opposite of the Z axis); and (upX, upY, upZ) specifies the approximate positive Y direction. The up vector need not be exactly orthogonal to the viewing direction: through vector operations (dot and cross products), an exactly orthogonal positive Y axis is computed, and from it the positive X axis as well.
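That remark about dot and cross products can be made concrete. Below is a sketch of the axis construction that gluLookAt performs according to its documentation; the struct and helper names are my own:

    #include <math.h>

    typedef struct { double x, y, z; } Vec3;

    static Vec3 sub(Vec3 a, Vec3 b) { return (Vec3){ a.x - b.x, a.y - b.y, a.z - b.z }; }
    static Vec3 cross(Vec3 a, Vec3 b) {
        return (Vec3){ a.y * b.z - a.z * b.y,
                       a.z * b.x - a.x * b.z,
                       a.x * b.y - a.y * b.x };
    }
    static Vec3 norm(Vec3 a) {
        double l = sqrt(a.x * a.x + a.y * a.y + a.z * a.z);
        return (Vec3){ a.x / l, a.y / l, a.z / l };
    }

    /* Build the observation coordinate system's axes from the gluLookAt
     * parameters: f points from the eye toward the center and becomes -Z;
     * the cross products produce an X axis and an exactly orthogonal Y
     * axis, even when the supplied up vector is only approximate. */
    void camera_axes(Vec3 eye, Vec3 center, Vec3 up,
                     Vec3 *xaxis, Vec3 *yaxis, Vec3 *zaxis) {
        Vec3 f = norm(sub(center, eye));       /* viewing direction    */
        Vec3 s = norm(cross(f, up));           /* X axis: f x up       */
        Vec3 u = cross(s, f);                  /* corrected Y axis     */
        *xaxis = s;
        *yaxis = u;
        *zaxis = (Vec3){ -f.x, -f.y, -f.z };   /* camera looks down -Z */
    }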

From these parameters, a matrix is built for the conversion from the world coordinate system to the observation coordinate system. OpenGL stores this matrix on the matrix stack, and everything subsequently specified in world coordinates is automatically converted into observation coordinates. Transforming the coordinate system is the same thing as converting the object descriptions into the observation coordinate system, and the subsequent projection is then computed there.

This is what gluLookAt does: it encapsulates the conversion from the world coordinate system to the observation coordinate system. It operates on the matrix selected by glMatrixMode(GL_MODELVIEW).

That explains the principle and purpose of gluLookAt, and now it should be clear where it comes from and why. Simply put, it establishes the coordinate-system transformation so that all objects become described in the observation coordinate system. Taken only from its name, "look at a certain place" is far less intuitive.

OK, so we understand what gluLookAt does. After calling it, the coordinate-system transformation matrix sits on the matrix stack, and whenever we then describe an object's position, it is converted into our observation coordinate system through that stack.

In this respect I admire the OpenGL designers' idea: storing the transformation matrix on a stack means it does not have to be passed as an extra parameter when drawing each object, which makes the API simpler and more intuitive. For beginners, though, this is sometimes disorienting: why does calling a rotation function appear to have nothing to do with any particular object?
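Here is a sketch of that convenience in use, with our earlier scene; drawCup and drawTable are hypothetical drawing routines:

    #include <GL/glut.h>

    void drawCup(void);     /* hypothetical drawing routines */
    void drawTable(void);

    void display(void) {
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        gluLookAt(0.0, 0.0, 0.0,     /* eye: the viewpoint from our example */
                  0.0, 0.0, -1.0,    /* center: look toward (0, 0, -1)      */
                  0.0, 1.0, 0.0);    /* up: approximate +Y direction        */

        glPushMatrix();                  /* save the pure viewing transform */
        glTranslated(0.0, 0.0, -10.0);   /* place the cup at (0, 0, -10)    */
        drawCup();
        glPopMatrix();                   /* restore the viewing transform   */

        glPushMatrix();
        glTranslated(2.0, 0.0, -12.0);   /* a second object elsewhere       */
        drawTable();
        glPopMatrix();
    }

Neither drawing routine needs to know about the camera; the stack carries the viewing transform for both.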

Okay. Next, let's look at projection. Now that everything is in the observation coordinate system, we project the objects onto our observation plane and then, through the clipping window, keep just the part we need.

Here we take perspective projection as the example. How is this effect achieved?

With the familiar viewing-frustum picture found all over the Internet in mind, we can explain what gluPerspective is doing, and why.

As said before, perspective projection, like a set of rays converging at a point, projects an object onto our observation plane.

Now we ask: where is the observation plane, and where is the focal point through which the projection rays pass?

Once we know these, it is easy to calculate where a point at any given position projects onto the observation plane.

OpenGL stipulates that the focal point of the perspective projection is the origin of the observation coordinate system (that is, the viewpoint specified in gluLookAt). But where is the observation plane, and where is the clipping window?

This is where gluPerspective comes in.

gluPerspective(GLdouble fovy, GLdouble aspect, GLdouble zNear, GLdouble zFar);

zNear and zFar are distances from the viewpoint measured along the negative Z axis; therefore both values must always be set positive.

Much material on the Internet says only that zNear and zFar define the depth clipping range: points inside it can be projected, and points outside are simply discarded. Beginners can accept that, but it still leaves our earlier question open: where is the observation plane? In fact, OpenGL stipulates that the observation plane is the near plane, i.e. the plane at the distance given by zNear. The observation plane is parallel to the (x, y) plane of the observation coordinate system, so specifying its position along the Z axis fully determines it.
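Putting this into code, a typical projection setup might look like the sketch below; the window size variables w and h and all the numeric values are illustrative assumptions:

    glMatrixMode(GL_PROJECTION);       /* projection, not modelview            */
    glLoadIdentity();
    gluPerspective(60.0,               /* fovy: vertical field of view, degrees */
                   (double)w / h,      /* aspect: width / height                */
                   1.0,                /* zNear: the observation plane is here  */
                   100.0);             /* zFar: far clipping plane              */
    glMatrixMode(GL_MODELVIEW);        /* back to viewing/modeling transforms   */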

OK, the observation plane is determined. What about the clipping window on that plane?

As said before, the width/height ratio (W/H) of the clipping window on the observation plane should match that of the screen (more precisely, of the viewport), so that only proportional scaling is involved. That is what the second parameter of gluPerspective, aspect, does: it fixes the width-to-height ratio of the clipping window. What remains is to set its height or width.

The first parameter of gluPerspective is fovy, an angle. From fovy and zNear we can calculate the height of the clipping window, and that, together with aspect, fully determines the window.

fovy is the angle between the upper and lower planes of the viewing cone (the frustum). The height of the clipping window is H = 2 * tan(fovy / 2) * zNear.
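As a quick check of this formula, the small program below evaluates H for a few angles, taking zNear = 1 as an illustrative value:

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        const double PI = 3.14159265358979323846;
        double fovy_deg[] = { 1.0, 60.0, 90.0, 178.0 };
        double znear = 1.0;
        for (int i = 0; i < 4; i++) {
            double rad = fovy_deg[i] * PI / 180.0;  /* degrees to radians */
            printf("fovy = %6.1f deg -> H = %.4f\n",
                   fovy_deg[i], 2.0 * tan(rad / 2.0) * znear);
        }
        return 0;
    }

It prints roughly 0.0175, 1.1547, 2.0000, and 114.5799, which matches the discussion that follows: tiny angles give a tiny window (zoomed in), and angles near 180 degrees give an enormous one.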

With this we can understand why some web pages describe fovy as "how wide you open your eyes"; that analogy always felt a little far-fetched to me.

So we know that if fovy is 0, the height and width of the clipping window are both 0, and nothing is displayed.

If it is 180 degrees, the height of the clipping window becomes infinite (tan(90°) is undefined); this has no practical value, and the program may even crash.

If it is 178 degrees, the height of the clipping window is still enormous, and the larger the clipping window, the more it displays. An object's projection on the observation plane itself is unchanged; however, since this "negative" is now much larger than our screen, everything must be scaled down proportionally for display, so objects look smaller on screen. Haha.

The page above gives another example: with fovy set to 1 degree, a ball very far away appears large and clear, similar to zooming in with a camera lens.

The principle is the same. The ball's projection on the observation plane is identical whether fovy is 1 degree or 90 degrees; what changes is the size of the clipping window. At 1 degree the clipping window is very small, so mapping it onto the viewport requires scaling everything up proportionally, hence the magnified feel. The range of the scene displayed shrinks accordingly.

Now we can clearly understand what gluLookAt and gluPerspective do. Is there anything left to understand? Thank you for reading.

One last OpenGL function deserves mention: glViewport, from which point on things work just as in the two-dimensional case.
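A sketch of how this usually looks in a GLUT program, tying glViewport and the aspect parameter together (the numeric values are illustrative):

    #include <GL/glut.h>

    /* Called by GLUT when the window is resized: map the normalized
     * image to the whole window, and keep gluPerspective's aspect in
     * step with the window so nothing is distorted. */
    void reshape(int w, int h) {
        glViewport(0, 0, w, h);
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        gluPerspective(60.0, (double)w / (h ? h : 1), 1.0, 100.0);
        glMatrixMode(GL_MODELVIEW);
    }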
