OpenGL final exam assignments

Source: Internet
Author: User

1. What basic primitives can be rendered in OpenGL? (Including curved surfaces.)

A: Basic OpenGL primitives:

GL_POINTS, GL_LINES, GL_POLYGON, GL_LINE_STRIP,

GL_LINE_LOOP, GL_TRIANGLES, GL_TRIANGLE_STRIP, GL_TRIANGLE_FAN, GL_QUADS, GL_QUAD_STRIP

Basic GLU primitives: NURBS curves and surfaces, quadric surfaces, etc.
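As a minimal sketch of how these primitives are specified (classic immediate-mode style; this needs a live GL context to actually draw anything):

```c
#include <GL/glut.h>

/* Draw one triangle with GL_TRIANGLES; the other primitive types
   (GL_LINE_STRIP, GL_QUADS, ...) are specified the same way, only
   the argument to glBegin() changes. */
void drawPrimitives(void)
{
    glBegin(GL_TRIANGLES);
    glVertex3f(0.0f, 0.0f, 0.0f);
    glVertex3f(1.0f, 0.0f, 0.0f);
    glVertex3f(0.0f, 1.0f, 0.0f);
    glEnd();
}
```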

2. What are the common rendering engines? Give examples that illustrate the main features of these real-time rendering engines.

A: Common rendering engines include Direct3D (Microsoft), OpenGL, and Delta3D.

OpenGL provides a large number of practical basic operations, such as geometric modeling, transformations, rendering, lighting and materials, anti-aliasing, blending, fog, bitmaps and images, texture mapping, interaction, and animation. OpenGL has the following advantages. Platform independence: OpenGL is a software interface to graphics hardware; in effect it is a 3D graphics and model library that can run on many platforms and operating systems. Hardware acceleration: the OpenGL API is a low-level, hardware-oriented software interface, so many of its algorithms can be implemented in hardware; currently almost all 3D graphics cards accelerate OpenGL. Network transparency: OpenGL works in a client/server model, and the client and server can be different computers and peripherals as long as both follow the same protocol, which makes OpenGL easy to use in a networked environment.

DirectX lets games and multimedia programs on Windows achieve higher execution efficiency, enhances 3D graphics and sound effects, and gives designers a common hardware driver standard; this eliminates the need for game developers to write a different driver for each brand of hardware and reduces the complexity of hardware installation and setup.

Delta3D is a full-featured game and simulation engine developed at the US Naval Postgraduate School. It can be used for games, simulations, or other graphics applications.

3. What matrix stacks can be used?

A: The modelview matrix stack, the projection matrix stack, the texture matrix stack, and the color matrix stack.

4. Describe how textures are used.

Steps: 1. Create a texture object and specify a texture image for it.

2. Determine how the texture is applied to each pixel.

3. Enable texture mapping.

4. Draw the scene, supplying both texture coordinates and geometric coordinates.
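The four steps above map onto GL calls roughly as follows (a sketch; `texWidth`, `texHeight`, and `texels` are placeholders for an application-supplied image, and a GL context is assumed):

```c
#include <GL/glut.h>

GLuint texName;

void setupTexture(int texWidth, int texHeight, const GLubyte *texels)
{
    /* 1. Create a texture object and specify a texture image for it. */
    glGenTextures(1, &texName);
    glBindTexture(GL_TEXTURE_2D, texName);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texWidth, texHeight,
                 0, GL_RGBA, GL_UNSIGNED_BYTE, texels);

    /* 2. Determine how the texture is applied to each pixel. */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

    /* 3. Enable texture mapping. */
    glEnable(GL_TEXTURE_2D);
}

void drawTexturedQuad(void)
{
    /* 4. Draw the scene with texture and geometric coordinates. */
    glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex3f(-1, -1, 0);
    glTexCoord2f(1, 0); glVertex3f( 1, -1, 0);
    glTexCoord2f(1, 1); glVertex3f( 1,  1, 0);
    glTexCoord2f(0, 1); glVertex3f(-1,  1, 0);
    glEnd();
}
```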

Projective texturing: by applying a series of transformations, coordinates in object space are mapped into a 2D space (texture space), determining which part of the texture each vertex maps to; that position is then assigned to the vertex as its texture coordinate. The transformation sequence is: the model transformation takes object coordinates into a common world coordinate system; the projector's view matrix transformation takes them into the projector's view space; the projector's perspective matrix is then applied; finally, a scale-and-offset transformation is performed to produce the projective texture coordinates.

Multitexturing: multitexturing maps multiple textures onto one polygon. During texture mapping, the texture in each texture unit is applied to the polygon in turn through a texture combination function. With multitexturing there are multiple texture units and multiple sets of texture coordinates. During OpenGL rendering, each texture unit operates separately and passes its result to the next texture unit; that is, each unit combines the incoming fragment color with the image in that unit in some way, according to its texture state, and passes the resulting fragment color on to the next unit. For the combination, OpenGL uses the specified texture environment functions, such as GL_REPLACE, GL_ADD, and GL_MODULATE.
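The setup described above can be sketched as follows (OpenGL 1.3+ calls; `tex0` and `tex1` are assumed to be already-created texture objects, and a GL context is assumed):

```c
#include <GL/glut.h>

/* Apply two textures to one polygon: unit 0 modulates the incoming
   fragment color, unit 1 adds its texture on top of unit 0's result. */
void drawMultitextured(GLuint tex0, GLuint tex1)
{
    glActiveTexture(GL_TEXTURE0);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, tex0);
    glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

    glActiveTexture(GL_TEXTURE1);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, tex1);
    glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_ADD);

    /* Each vertex carries one texture coordinate per texture unit. */
    glBegin(GL_QUADS);
    glMultiTexCoord2f(GL_TEXTURE0, 0, 0);
    glMultiTexCoord2f(GL_TEXTURE1, 0, 0);
    glVertex3f(-1, -1, 0);
    glMultiTexCoord2f(GL_TEXTURE0, 1, 0);
    glMultiTexCoord2f(GL_TEXTURE1, 1, 0);
    glVertex3f(1, -1, 0);
    glMultiTexCoord2f(GL_TEXTURE0, 1, 1);
    glMultiTexCoord2f(GL_TEXTURE1, 1, 1);
    glVertex3f(1, 1, 0);
    glMultiTexCoord2f(GL_TEXTURE0, 0, 1);
    glMultiTexCoord2f(GL_TEXTURE1, 0, 1);
    glVertex3f(-1, 1, 0);
    glEnd();
}
```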

5. What is the role of mipmaps? Why can mipmaps be used for texture anti-aliasing?

(How can textures be anti-aliased? Describe the principle.)

A mipmap is a series of pre-filtered texture images of decreasing resolution.

When OpenGL uses mipmaps, it automatically selects the appropriate texture resolution based on the size of the object being mapped. With this method, the level of detail in the texture image can be matched to the image that is drawn on the screen. When a mipmap is generated, each smaller image is usually a filtered version of the largest texture image, produced by suitable averaging. Typically, each texel of a smaller texture image is the average of four texels of the next higher-resolution texture image.
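Mipmapped texturing can be requested as follows (a sketch; `gluBuild2DMipmaps` generates the filtered levels automatically, and a GL context is assumed):

```c
#include <GL/glut.h>

void setupMipmappedTexture(int w, int h, const GLubyte *texels)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    /* Build and upload all mipmap levels from the base image. */
    gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGBA, w, h,
                      GL_RGBA, GL_UNSIGNED_BYTE, texels);
    /* Select a minification filter that actually uses the mipmap chain:
       trilinear filtering interpolates both within and between levels. */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
                    GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
}
```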

6. Write the vertex illumination equation used in OpenGL. The equation must include the light source parameters, material parameters, spotlight parameters, attenuation parameters, and so on, and must account for multiple light sources.
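For reference, the fixed-function OpenGL vertex lighting equation (per color channel, with $n$ light sources) has the form:

```latex
\mathbf{c} \;=\; \mathbf{e}_{\text{mat}}
  \;+\; \mathbf{a}_{\text{mat}}\,\mathbf{a}_{\text{scene}}
  \;+\; \sum_{i=0}^{n-1}
    \underbrace{\frac{1}{k_c + k_l d_i + k_q d_i^{2}}}_{\text{attenuation}}
    \,\underbrace{(\mathrm{spot}_i)}_{\text{spotlight}}
    \Big[\,
      \mathbf{a}_{\text{mat}}\,\mathbf{a}_{L_i}
      \;+\; \max(\mathbf{N}\cdot\mathbf{L}_i,\,0)\,\mathbf{d}_{\text{mat}}\,\mathbf{d}_{L_i}
      \;+\; \max(\mathbf{N}\cdot\mathbf{s}_i,\,0)^{\,sh}\,\mathbf{s}_{\text{mat}}\,\mathbf{s}_{L_i}
    \Big]
```

Here $\mathbf{e}$, $\mathbf{a}$, $\mathbf{d}$, $\mathbf{s}$ are the emission, ambient, diffuse, and specular terms of the material (mat), global light model (scene), and light $i$ ($L_i$); $k_c, k_l, k_q$ are the constant, linear, and quadratic attenuation coefficients for distance $d_i$; $\mathrm{spot}_i$ is the spotlight factor (a power of the angle between the spot direction and $\mathbf{L}_i$ inside the cutoff cone, 0 outside, and 1 for a non-spotlight); $\mathbf{N}$ is the vertex normal, $\mathbf{L}_i$ the unit vector to the light, $\mathbf{s}_i$ the half vector, and $sh$ the material shininess.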

7. How do you understand the model-view transformation? In a coordinate system W, a triangle is given by the three vertices (5.0, 0.0, 0.0), (0.0, 5.0, 0.0), and (0.0, 0.0, 5.0), and the camera is set up as follows:

  • The camera position is (0.0, 0.0, 100.0), the viewing target point is (0.0, 0.0, 0.0), and the camera's up direction is (0.0, 1.0, 0.0)
  • The projection is a perspective projection with a vertical field of view of 60 degrees
  • The visible range along the camera's viewing direction is from 0.1 to 300.

Now place the rendered image in a region of the window whose lower-left corner is at (100, 100), with a width of 200 and a height of 200 pixels.

(1) Write out the camera setup call in OpenGL.

(2) Write out the viewport setup call.

(3) Write out the projection transformation setup call.

(4) Derive the 4x4 transformation matrix from coordinate system W to the camera (viewpoint) coordinate system.

1. gluLookAt(0.0, 0.0, 100.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0);

2. glViewport(100, 100, 200, 200);

3. gluPerspective(60.0f, 1.0f, 0.1f, 300.0f);

4. The transformation matrix is:

    [ 1  0  0    0
      0  1  0    0
      0  0  1 -100
      0  0  0    1 ]

8. How do you program in the GLSL language? Describe the inputs and outputs of the vertex shader and fragment shader.

GLSL (OpenGL Shading Language) is a language for programming vertex and fragment shaders. These program segments run on the GPU (graphics processing unit) on the graphics card and replace fixed parts of the rendering pipeline, such as the viewing and projection transformations. GLSL shader code is divided into two parts: the vertex shader and the fragment shader.

The vertex shader controls the vertex coordinate transformation process, and the fragment shader controls the per-pixel color computation.

In the vertex shader, the input coordinates are in the local (model) coordinate system; the output coordinates are in clip space, after the modelview and projection matrices have been applied.

The inputs of the fragment shader are usually texture coordinates and lighting information, and its output is the final computed color value.

The main input and output parameters include the vertex position, normal, color, and material.
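A minimal GLSL pair illustrating these inputs and outputs (a sketch in legacy GLSL 1.20 style, using the fixed-function built-ins the text describes):

```glsl
// Vertex shader: input is the vertex in model (object) space;
// output is the clip-space position plus interpolated varyings.
varying vec2 texCoord;
varying vec4 color;

void main()
{
    texCoord = gl_MultiTexCoord0.st;
    color    = gl_Color;
    // The modelview and projection matrices take the vertex to clip space.
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
```

```glsl
// Fragment shader: inputs are the interpolated varyings (texture
// coordinates, color); output is the final fragment color.
varying vec2 texCoord;
varying vec4 color;
uniform sampler2D tex;

void main()
{
    gl_FragColor = color * texture2D(tex, texCoord);
}
```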

9. Use a block diagram to describe the rendering pipeline of OpenGL and briefly describe each coordinate system in the vertex transformation.

See the textbook.

10. Analyze and calculate the program.

Read the following program and calculate the lighting color at the three vertices labeled 1, 2, and 3. The calculation process must be shown; results alone receive no marks.

#include <GL/glut.h>

void init(void)
{
    GLfloat mat_ambient[] = { 0.2, 0.2, 0.2, 1.0 };
    GLfloat mat_diffuse[] = { 0.8, 0.8, 0.8, 1.0 };
    GLfloat mat_emission[] = { 0.0, 0.0, 0.0, 1.0 };
    GLfloat mat_specular[] = { 0.3, 0.3, 0.3, 1.0 };
    GLfloat mat_shininess[] = { 2.0 };
    GLfloat light_position[] = { 1.0, 1.0, 1.0, 0.0 };
    GLfloat light_ambient[] = { 0.2, 0.2, 0.2, 1.0 };
    GLfloat light_diffuse[] = { 1.0, 1.0, 1.0, 1.0 };
    GLfloat light_specular[] = { 1.0, 1.0, 1.0, 1.0 };
    GLfloat lmodel_ambient[] = { 0.2, 0.2, 0.2, 1.0 };
    glClearColor(0.0, 0.0, 0.0, 0.0);
    glShadeModel(GL_SMOOTH);
    glMaterialfv(GL_FRONT, GL_AMBIENT, mat_ambient);
    glMaterialfv(GL_FRONT, GL_DIFFUSE, mat_diffuse);
    glMaterialfv(GL_FRONT, GL_SPECULAR, mat_specular);
    glMaterialfv(GL_FRONT, GL_EMISSION, mat_emission);
    glMaterialfv(GL_FRONT, GL_SHININESS, mat_shininess);
    glLightfv(GL_LIGHT0, GL_POSITION, light_position);
    glLightfv(GL_LIGHT0, GL_AMBIENT, light_ambient);
    glLightfv(GL_LIGHT0, GL_DIFFUSE, light_diffuse);
    glLightfv(GL_LIGHT0, GL_SPECULAR, light_specular);
    glLightModelfv(GL_LIGHT_MODEL_AMBIENT, lmodel_ambient);
    glEnable(GL_LIGHTING);
    glEnable(GL_LIGHT0);
    glEnable(GL_DEPTH_TEST);
}

void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glBegin(GL_TRIANGLES);
    glNormal3f(0.0f, 0.0f, 1.0f);
    glVertex3f(0.0, 0.0, 0.0);   // 1
    glVertex3f(1.0, 0.0, 0.0);   // 2
    glVertex3f(1.0, 1.0, 0.0);   // 3
    glEnd();
    glFlush();
}

void reshape(int w, int h)
{
    glViewport(0, 0, (GLsizei) w, (GLsizei) h);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    if (w <= h)
        glOrtho(-1.5, 1.5, -1.5 * (GLfloat) h / (GLfloat) w,
                1.5 * (GLfloat) h / (GLfloat) w, -10.0, 10.0);
    else
        glOrtho(-1.5 * (GLfloat) w / (GLfloat) h,
                1.5 * (GLfloat) w / (GLfloat) h, -1.5, 1.5, -10.0, 10.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB | GLUT_DEPTH);
    glutInitWindowSize(500, 500);
    glutInitWindowPosition(100, 100);
    glutCreateWindow(argv[0]);
    init();
    glutDisplayFunc(display);
    glutReshapeFunc(reshape);
    glutMainLoop();
    return 0;
}

Because light_position[] = { 1.0, 1.0, 1.0, 0.0 } has w = 0, the light is directional, so for every vertex:

L = normalize((1, 1, 1));

N = (0, 0, 1); s = normalize(L + (0, 0, 1)); kc = 1, kl = kq = 0; the spotlight factor is 1 (no spotlight); shininess = 2.

11. Using the OBJ format, build a model and texture each of its faces with pic1.jpg, pic2.jpg, pic3.jpg, pic4.jpg, and pic5.jpg, respectively.

12. What types of framebuffers are there, and how is the stencil buffer used? Read the stencil.c file.

What types of frame buffers are there? What tests and operations are performed on fragments?

Color buffers: the front-left, front-right, back-left, and back-right buffers, plus any number of auxiliary color buffers.

The depth buffer, the stencil buffer, and the accumulation buffer.

 

Understanding the use of the stencil buffer.

The stencil test is often used to mask off irregular regions of the screen so that no drawing occurs there. Use the glStencilFunc() and glStencilOp() functions to select the desired comparison function and reference value and to specify how the stencil buffer is modified.

The stencil test result determines whether a pixel's color value is written to the render target and whether its depth value is written to the depth buffer.

For example, clear the stencil buffer to 1, then draw some objects into the scene with a reference stencil value of 0, so that the stencil buffer holds 0 wherever those objects were drawn. If you then set the reference value to 1 and the stencil function to GL_LEQUAL, only pixels in the regions whose stencil value is not 0 will be drawn. This is a basic way of using the stencil buffer to restrict drawing to a region.

To use the stencil buffer, the framebuffer's pixel format must reserve some bits for stencil storage.
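The masking example above can be sketched as follows (a fragment of a display routine; it assumes the window was created with a stencil buffer, e.g. via GLUT_STENCIL):

```c
#include <GL/glut.h>

void drawWithStencilMask(void)
{
    glClearStencil(1);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
    glEnable(GL_STENCIL_TEST);

    /* Pass 1: draw the mask shape with reference value 0, writing 0
       into the stencil buffer wherever it covers. */
    glStencilFunc(GL_ALWAYS, 0, 0xFF);
    glStencilOp(GL_REPLACE, GL_REPLACE, GL_REPLACE);
    /* ... draw the masking geometry here ... */

    /* Pass 2: draw the scene only where the stencil value is still 1,
       i.e. outside the masked region (GL_LEQUAL passes when ref <= stencil). */
    glStencilFunc(GL_LEQUAL, 1, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    /* ... draw the scene here ... */

    glDisable(GL_STENCIL_TEST);
}
```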

Fragment tests:

The tests are performed in the following order; if a fragment is discarded by an earlier test, no later tests or operations are performed on it.

1. Scissor test: use the glScissor() function to define a rectangle in the window and restrict drawing to it.

2. Alpha test: accept or reject a fragment based on its alpha value. Enable it with glEnable(GL_ALPHA_TEST) and set the comparison with glAlphaFunc(). It is used to implement transparency effects and texture-image decals.

3. Stencil test: compare the value stored in the stencil buffer with a reference value, and modify the stencil buffer based on the result, via glStencilFunc() and glStencilOp().

4. Depth test: used to eliminate hidden surfaces. Use glDepthFunc() to set the comparison function.

13. Describe an anti-aliasing method.

Multisampling is commonly used to anti-alias a scene. The multisample method uses additional color, depth, and stencil information to anti-alias each primitive. Each fragment's final value is computed from the number and values of the sub-pixel samples it covers, that is, from the samples stored in a multisample buffer.

Another method is to jitter the viewpoint: draw the scene multiple times, offsetting the viewpoint slightly each time, and accumulate all the resulting images when rendering is complete. Because each image is at a slightly different position, this reduces jagged edges.
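With GLUT, multisample anti-aliasing is requested at window creation (a sketch; the display and geometry callbacks are omitted, and GL_MULTISAMPLE requires OpenGL 1.3+ headers):

```c
#include <GL/glut.h>

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    /* Ask for a multisampled framebuffer in addition to color/depth. */
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH | GLUT_MULTISAMPLE);
    glutInitWindowSize(500, 500);
    glutCreateWindow("multisample");
    /* Enable multisample rasterization (usually on by default when
       a multisample buffer exists). */
    glEnable(GL_MULTISAMPLE);
    /* ... register callbacks and enter glutMainLoop() ... */
    return 0;
}
```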

14. Analyze the following program and calculate.

In the program below, what texture coordinates are computed for the point (-1.5, 0.5, 0.0) on the quadrilateral defined by the four vertices? Based on the nearest-neighbor filtering method, what is the color at that point? Write out the detailed calculation process; results alone receive no marks.

#define checkImageWidth 64
#define checkImageHeight 64

static GLubyte checkImage[checkImageHeight][checkImageWidth][4];
static GLuint texName;

void makeCheckImage(void)
{
    int i, j, c;
    for (i = 0; i < checkImageHeight; i++) {
        for (j = 0; j < checkImageWidth; j++) {
            c = ((((i & 0x8) == 0) ^ ((j & 0x8) == 0))) * 255;
            checkImage[i][j][0] = (GLubyte) c;
            checkImage[i][j][1] = (GLubyte) c;
            checkImage[i][j][2] = (GLubyte) c;
            checkImage[i][j][3] = (GLubyte) 255;
        }
    }
}

void init(void)
{
    glClearColor(0.0, 0.0, 0.0, 0.0);
    glShadeModel(GL_FLAT);
    glEnable(GL_DEPTH_TEST);
    makeCheckImage();
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glGenTextures(1, &texName);
    glBindTexture(GL_TEXTURE_2D, texName);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, checkImageWidth, checkImageHeight,
                 0, GL_RGBA, GL_UNSIGNED_BYTE, checkImage);
}

void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glEnable(GL_TEXTURE_2D);
    glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL);
    glBindTexture(GL_TEXTURE_2D, texName);
    glBegin(GL_QUADS);
    glTexCoord2f(0.0, 0.0); glVertex3f(-2.0, -1.0, 0.0);   /* 1 */
    glTexCoord2f(0.0, 1.0); glVertex3f(-2.0, 1.0, 0.0);    /* 2 */
    glTexCoord2f(1.0, 1.0); glVertex3f(0.0, 1.0, 0.0);     /* 3 */
    glTexCoord2f(1.0, 0.0); glVertex3f(0.0, -1.0, 0.0);    /* 4 */
    glEnd();
    glFlush();
}

void reshape(int w, int h)
{
    glViewport(0, 0, (GLsizei) w, (GLsizei) h);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(60.0, (GLfloat) w / (GLfloat) h, 1.0, 30.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(0.0, 0.0, -3.6);
}

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB | GLUT_DEPTH);
    glutInitWindowSize(250, 250);
    glutInitWindowPosition(100, 100);
    glutCreateWindow(argv[0]);
    init();
    glutDisplayFunc(display);
    glutReshapeFunc(reshape);
    glutMainLoop();
    return 0;
}

 

 
