3D Math: Graphics Rendering Math

Source: Internet
Author: User
First: the fixed-function graphics pipeline rendering process

// Set up how the scene is viewed
setupTheCamera();

// Clear the depth buffer
clearZBuffer();

// Set ambient light and fog (if needed)
setGlobalLightAndFog();

// 1. World/scene-level operations: get the list of potentially visible objects
potentiallyVisibleObjectList = highLevelVisibilityDetermination(scene);

// 2. Object (triangle-mesh) level operations: render them
for (all objects in potentiallyVisibleObjectList)
{
    // Use the bounding volume to perform low-level VSD detection
    if (!object.isBoundingVolumeVisible()) continue;

    // Fetch or incrementally generate the geometry
    triMesh = object.getGeometry();

    // 3. Triangle-level operations: clip and render polygons
    for (each triangle in the geometry)
    {
        // Transform the vertices to clip space and perform vertex-level lighting
        clipSpaceTriangle = transformAndLighting(triangle);

        // Backface culling [on today's hardware this is often done in screen
        // space, based on the triangle's counterclockwise winding order, rather
        // than in clip space with a dot-product test against 90 degrees]
        if (clipSpaceTriangle.isBackFacing()) continue;

        // Clip the triangle to the view frustum. The per-vertex test is
        // -w <= x <= w, -w <= y <= w, -w <= z <= w, where w = z is dynamic.
        // If clipping leaves fewer than 3 vertices (clipping generates new
        // vertices on the boundary), nothing needs to be rendered; otherwise
        // the clipped vertices, or the full polygon, are rendered. Almost all
        // hardware clips by splitting triangles and generating vertices, which
        // guarantees every triangle is inside the frustum before the
        // conversion to screen coordinates.
        clippedTriangle = clipToViewVolume(clipSpaceTriangle);
        if (clippedTriangle.isEmpty()) continue;

        // Project the triangle into screen space, then rasterize
        clippedTriangle.projectToScreenSpace();

        // 4. Pixel-level operations
        for (each pixel in the triangle)
        {
            // Interpolate color, z-buffer value, and texture-mapping
            // coordinates, then perform z-buffering (the depth test, which
            // includes the stencil test) and alpha testing
            if (!zBufferTest()) continue;
            if (!alphaTest()) continue;

            // Pixel shading: each pixel's color is interpolated from the
            // shaded vertices [typically Gouraud linear interpolation], and
            // if lighting is enabled the lit color is multiplied by the
            // texture (UV) color
            color = shadePixel();

            // Write the frame buffer and the z-buffer
            writePixel(color, interpolatedZ);
        }
    }
}

When lighting is enabled, the color produced by the lights is blended with the texture color; the effect is that the texture color is modulated (multiplied component by component) by the light color:
D3DLIGHT9 light;
::ZeroMemory(&light, sizeof(light));
light.Type      = D3DLIGHT_SPOT /*D3DLIGHT_DIRECTIONAL*/;
light.Ambient   = D3DXCOLOR(0.8f, 0.0f, 0.0f, 1.0f);
light.Diffuse   = D3DXCOLOR(1.0f, 0.0f, 0.0f, 1.0f);
light.Specular  = D3DXCOLOR(0.2f, 0.2f, 0.2f, 1.0f);
light.Direction = D3DXVECTOR3(1.0f, 1.0f, 0.0f);
light.Position  = D3DXVECTOR3(10.0f, 10.0f, 0.0f);
light.Range     = 1000.0f;
light.Falloff   = 1.0f;
light.Theta     = 1.0f;   // inner spotlight cone angle, in radians
light.Phi       = 100.0f; // outer spotlight cone angle, in radians

Device->SetLight(0, &light);
Device->LightEnable(0, TRUE);

Device->SetRenderState(D3DRS_NORMALIZENORMALS, TRUE);
Device->SetRenderState(D3DRS_SPECULARENABLE, TRUE);

HRESULT hr = D3DXCreateTextureFromFile(Device, "Crate.dds", &tex);
if (FAILED(hr))
{
    printf("D3DXCreateTextureFromFile failed.\n");
    return false;
}

Device->SetSamplerState(0, D3DSAMP_MAGFILTER, D3DTEXF_LINEAR);
Device->SetSamplerState(0, D3DSAMP_MINFILTER, D3DTEXF_LINEAR);
Device->SetSamplerState(0, D3DSAMP_MIPFILTER, D3DTEXF_LINEAR);
Device->SetMaterial(&d3d::WHITE_MTRL);
Device->SetTexture(0, tex);
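
In fixed-function D3D9 this modulation is controlled by the texture-stage color operation. A minimal sketch, assuming the same valid Device and texture stage 0 as above (D3DTOP_MODULATE, the component-wise multiply, is already the default color operation for stage 0):

// Multiply the interpolated (lit) vertex color with the texture color,
// component by component.
Device->SetTextureStageState(0, D3DTSS_COLOROP,   D3DTOP_MODULATE);
Device->SetTextureStageState(0, D3DTSS_COLORARG1, D3DTA_TEXTURE);
Device->SetTextureStageState(0, D3DTSS_COLORARG2, D3DTA_DIFFUSE);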

Second: view space, clip space (perspective projection), and screen space among the coordinate spaces

1. Coordinate spaces, view space, and window scaling

The coordinate spaces and their transform matrices, Mmodel, Mworld, Mview, Mclip, Mscreen, are relatively clear, but exactly where each origin lies and how the axes are oriented must be analyzed for each specific engine.
The pixel aspect ratio of a window comes from the window resolution, while the physical aspect ratio of the window is the physical size of a pixel (vertical and horizontal) multiplied by the resolution. In effect, the size of a single pixel is the pixel density: the smaller the pixels, the finer the image; the larger, the coarser.
In view space the frustum defines the field of view and the zoom. A vertical view angle of 90 degrees is treated as the standard: if the field-of-view angle passed to the projection matrix is less than 90 degrees the object is magnified, and if it is greater than 90 degrees the object is shrunk. 3D graphics can only change the shape of the frustum, or pull the camera back, to achieve the near-large/far-small effect, not move the projection plane z = d. Because the horizontal zoom is zoomX and the vertical zoom is zoomY, and their ratio must equal the resolution's aspect ratio so that objects are neither stretched nor flattened, the projection matrix is set by specifying a single view-cone angle (plus the aspect ratio) rather than two angles (see the sketch below).
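
A minimal self-contained sketch (plain C++; the function name is my own) of how those zoom factors fall out of the field-of-view angle, matching the xScale/yScale definitions under the matrices below:

#include <cmath>
#include <cstdio>

// zoomY = cot(fovY / 2); zoomX = zoomY / aspect, so that zoomX / zoomY
// matches the window's aspect ratio and objects are not stretched.
void computeZoom(float fovY, float aspect, float& zoomX, float& zoomY)
{
    zoomY = 1.0f / std::tan(fovY * 0.5f);   // cot(fovY / 2)
    zoomX = zoomY / aspect;
}

int main()
{
    const float PI = 3.14159265f;
    float zx, zy;

    computeZoom(PI / 2.0f, 4.0f / 3.0f, zx, zy);         // 90 degrees
    std::printf("fov  90: zoomX=%.3f zoomY=%.3f\n", zx, zy);  // zoomY = 1

    computeZoom(PI / 3.0f, 4.0f / 3.0f, zx, zy);         // 60 degrees
    std::printf("fov  60: zoomX=%.3f zoomY=%.3f\n", zx, zy);  // zoomY > 1: magnified

    computeZoom(2.0f * PI / 3.0f, 4.0f / 3.0f, zx, zy);  // 120 degrees
    std::printf("fov 120: zoomX=%.3f zoomY=%.3f\n", zx, zy);  // zoomY < 1: shrunk
    return 0;
}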
2. Clip space, perspective projection, and screen space

The w in clip space is dynamic: w = z. 1) Triangles are clipped to the view frustum by the hardware, and the clipping is based on edge splitting. Each triangle vertex is first tested with -w <= x <= w, -w <= y <= w, -w <= z <= w, where w = z (in DX the depth bound is 0 <= z <= w, matching the [0,1] z range discussed below); if the test passes, the vertex is inside the frustum, otherwise it is outside (see the sketch below). 2) Backface culling: on today's hardware this is mostly done in screen space based on the triangle's counterclockwise winding order, instead of a clip-space dot-product test against 90 degrees.
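
A minimal sketch of the per-vertex frustum test from point 1) (plain C++; the struct and function names are my own, and the z bound shown is the OpenGL-style one, with the DX variant noted in the comment):

// A homogeneous clip-space vertex (x, y, z, w), where w = z was produced
// by the projection matrix.
struct ClipVertex { float x, y, z, w; };

// The vertex is inside the view frustum if every component lies in [-w, w].
// (DX clips z with 0 <= z <= w instead, matching its [0,1] depth range.)
bool isInsideFrustum(const ClipVertex& v)
{
    return -v.w <= v.x && v.x <= v.w
        && -v.w <= v.y && v.y <= v.w
        && -v.w <= v.z && v.z <= v.w;
}

If all three vertices of a triangle pass, no clipping is needed; if all three fail against the same plane, the triangle can be rejected outright; in the mixed case the hardware splits the edges and generates new vertices on the frustum boundary.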

First comes clip space, reached from camera (view) space: only x and y are scaled and z is remapped, w is computed from z, and no perspective projection has happened yet. It is the division by w that performs the transformation from clip space to perspective-projection space. After that transformation, x belongs to [-1,1] and y belongs to [-1,1]; z belongs to [0,1] in DX, while in OpenGL z belongs to [-1,1].
The clip (projection) matrix in OpenGL, which projects z into [-1,1]:

xScale   0        0              0
0        yScale   0              0
0        0        (f+n)/(f-n)    1
0        0        -2n*f/(f-n)    0

Where:
yScale = cot(fovY/2) = zoomY
xScale = yScale / aspect ratio = zoomX
w = z

In DX, z is projected into [0,1], and the projection matrix is:

xScale   0        0              0
0        yScale   0              0
0        0        f/(f-n)        1
0        0        -n*f/(f-n)     0

Where:
yScale = cot(fovY/2) = zoomY
xScale = yScale / aspect ratio = zoomX
w = z
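
This DX matrix matches what D3DXMatrixPerspectiveFovLH builds (left-handed, row-vector convention). A minimal sketch, assuming the same valid Device as above; the fovY, aspect, n, and f values are illustrative:

// Build the projection matrix above with fovY = 90 degrees, a 4:3 aspect
// ratio, and illustrative near/far planes, then hand it to the pipeline.
D3DXMATRIX proj;
D3DXMatrixPerspectiveFovLH(&proj,
                           D3DX_PI * 0.5f,  // fovY (radians)
                           4.0f / 3.0f,     // aspect ratio
                           1.0f,            // n: near plane
                           1000.0f);        // f: far plane
Device->SetTransform(D3DTS_PROJECTION, &proj);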
3. Proving, by eliminating variables (and taking limits), the conclusion that w equals z and changes dynamically with it. In OpenGL, multiplying the row vector (x, y, z, 1) by the clip matrix above gives, as the fourth component, w = x*0 + y*0 + z*1 + 1*0 = z.

The same proof works in DX, whose matrix has the identical fourth column (0, 0, 1, 0).
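
The elimination can also be checked numerically. A minimal self-contained sketch (plain C++, row-vector convention; function and variable names are my own) that pushes the near and far planes through the DX matrix above and confirms w = z, with z/w landing on 0 and 1:

#include <cstdio>

// Apply the z and w columns of the DX-style projection matrix above to the
// row vector (x, y, z, 1).
void projectDX(float z, float n, float f, float& zClip, float& wClip)
{
    zClip = z * (f / (f - n)) - n * f / (f - n);  // third column
    wClip = z;                                    // fourth column: w = z
}

int main()
{
    const float n = 1.0f, f = 100.0f;
    float zc, wc;

    projectDX(n, n, f, zc, wc);  // near plane
    std::printf("z=n: z/w=%.3f (expect 0), w=%.3f\n", zc / wc, wc);

    projectDX(f, n, f, zc, wc);  // far plane
    std::printf("z=f: z/w=%.3f (expect 1), w=%.3f\n", zc / wc, wc);
    return 0;
}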

Finally, clip space is converted to screen space. The clip-space coordinates are divided by w = z; this w value is simply what the projection matrix placed in the last component of the 4D vertex vector. After the division the new vector satisfies: xProject belongs to [-1,1], yProject belongs to [-1,1], and zProject belongs to [0,1] in DX and to [-1,1] in OpenGL; together these bound the perspective-projection box.
At this point the z value is written to the depth buffer, and (xProject, yProject) is mapped into the 2D screen coordinate space:
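
A minimal sketch of that final mapping (plain C++; the function name is my own, and it assumes a DX-style viewport whose origin is the top-left corner, which is why y is flipped):

// Map NDC coordinates xProject, yProject (both in [-1,1]) to screen pixels.
// Screen y grows downward, so the y axis must be flipped.
void ndcToScreen(float xProject, float yProject,
                 int screenWidth, int screenHeight,
                 float& xScreen, float& yScreen)
{
    xScreen = (xProject + 1.0f) * 0.5f * screenWidth;
    yScreen = (1.0f - yProject) * 0.5f * screenHeight;
}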

Summary: these operations are computed by the graphics API and the hardware; what we can do is understand clip space, perspective projection, and the conversion to screen coordinates. Setting the view-cone angle determines the scene zoom through zoomX and zoomY; you specify one angle and the API computes the other factor from the aspect ratio automatically. You also set the depth range n, f. Moving the camera nearer or farther likewise produces the near-large/far-small effect, because of perspective projection. When computing coordinate positions by hand, think carefully about why the divide by w happens and when the object's y position must be flipped.
