Realistic Computer Graphics (II): Hidden Surface Removal and Realistic Image Generation

Author: Tian Jingcheng Release Date: 2001/02/07
 
Abstract:
The article "Realistic Computer Graphics (I): Natural Scene Simulation" listed the four basic tasks involved in producing realistic images on computer graphics devices and introduced the techniques for the first task, the simulation of natural scenes in 3D modeling. This article focuses on the third and fourth tasks: computing which surfaces of the scene are visible (hidden surface removal) and determining the colors of the visible surfaces (illumination models, textures, and color models).

Body:  



1. Hidden Surface Removal

In computer graphics, 3D objects can be displayed in three ways: wireframe drawings, hidden-line/hidden-surface drawings, and realistic (shaded) images. Realistic image generation must itself be built on hidden surface removal. Hidden surface removal is the process of determining, for a given set of 3D objects and a specified projection, which lines, surfaces, or volumes are visible and which are hidden. Depending on the space in which visibility is determined, the algorithms fall into two classes:
· Object-space algorithms work in the normalized projection space, comparing each of the K polygons of the object surfaces with the other K-1 polygons to compute exactly the occlusion relations between the edges and faces of the objects. The computation of such algorithms is proportional to K².
· Image-space algorithms work in the screen coordinate system, determining for each pixel on the screen which surface is visible at that pixel. If the screen resolution is m × n and the scene contains K polygons, the computation is proportional to m·n·K.
Most algorithms involve the notions of sorting and coherence. Sorting is used to establish the occlusion order among the objects, usually along the X, Y, and Z directions; the efficiency of a hidden surface algorithm depends largely on the efficiency of this sorting. Coherence is the local similarity of the scene or of its projected image, and exploiting coherence is an important means of speeding up the sorting in hidden surface algorithms.
Common object-space algorithms include the polygon area sorting algorithm and the list-priority algorithms.
The Z-buffer (depth buffer) algorithm is the simplest image-space hidden surface algorithm. The depth buffer array eliminates the need for a complicated sorting step, so at a fixed resolution the computation grows only in proportion to the number of polygons. The algorithm is also easy to implement in hardware and to parallelize. Building on it, the scanline Z-buffer algorithm exploits the coherence between polygons and pixels to improve efficiency further, and the scanline algorithm also provides a good basis for simple illumination models.
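
The core of the algorithm fits in a few lines. The following is a minimal sketch in Python, assuming the polygons have already been rasterized into (x, y, z, color) fragments (the rasterizer itself is not shown, and smaller z is taken to be nearer the viewer):

    def z_buffer(fragments, width, height, background=(0, 0, 0)):
        # 'fragments' is an iterable of (x, y, z, color) tuples produced by
        # rasterizing each polygon; no sorting of polygons is required
        INF = float("inf")
        depth = [[INF] * width for _ in range(height)]         # depth cache
        frame = [[background] * width for _ in range(height)]  # color buffer
        for x, y, z, color in fragments:
            if 0 <= x < width and 0 <= y < height and z < depth[y][x]:
                depth[y][x] = z     # keep the nearest surface at this pixel
                frame[y][x] = color
        return frame

Because each fragment is tested independently against one depth cell, the loop parallelizes naturally, which is the hardware-friendliness noted above.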

2. Simple Illumination Models and Shading

An illumination model is a mathematical model that, based on the relevant laws of optics, computes the intensity and color of the light that each point of the scene projects into the observer's eye. The simple local illumination model assumes point light sources and opaque objects, ignores refraction, and treats the reflected light as the sum of three components: ambient light, diffuse reflection, and specular reflection.
Ambient light is the light that reaches an object from all directions of its surroundings and is reflected uniformly in all directions. Its contribution is

    I_e = K_a * I_a

where I_a is the ambient light intensity (a constant) and K_a is the ambient reflection coefficient of the object surface.
Diffuse reflection is the light that the surface scatters uniformly into the surrounding space. By Lambert's cosine law,

    I_d = K_d * Σ_i I_p,i * cos θ_i

where K_d is the diffuse reflection coefficient of the surface, I_p,i is the intensity of the light arriving from point source i, and θ_i is the angle of incidence, that is, the angle between the surface normal vector N and the incident light vector L_i of the point source (cos θ_i = N · L_i).
To simulate highlights, B. T. Phong proposed the Phong specular reflection model:

    I_s = K_s * Σ_i I_p,i * cos^n α_i

where K_s is the specular reflection coefficient of the surface, n is the specular exponent that controls how tightly the reflected light converges, and α_i is the angle between the mirror-reflection direction R_i produced by light source i and the viewing direction V (cos α_i = R_i · V).
Summing up, the intensity reflected from any point on the surface toward the observer is I = I_e + I_d + I_s:

    I = K_a * I_a + Σ_i I_p,i * (K_d * cos θ_i + K_s * cos^n α_i)

or, written with dot products of unit vectors:

    I = K_a * I_a + Σ_i I_p,i * (K_d * (N · L_i) + K_s * (R_i · V)^n)
In practice, the red, green, and blue components of the light intensity are each processed separately with these formulas.
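
As a concrete illustration, a single color channel of the formula above might be evaluated as follows in Python; the sketch assumes all vectors are unit length, and the function names are illustrative rather than taken from any particular library:

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def reflect(L, N):
        # mirror-reflection direction R_i = 2*(N . L)*N - L of light vector L
        d = dot(N, L)
        return tuple(2 * d * n - l for n, l in zip(N, L))

    def phong_intensity(N, V, lights, Ia, Ka, Kd, Ks, n):
        # I = Ka*Ia + sum_i Ip_i * (Kd*(N . Li) + Ks*(Ri . V)^n), one channel;
        # 'lights' is a list of (Li, Ip_i), Li the unit vector toward source i
        I = Ka * Ia
        for L, Ip in lights:
            diff = max(dot(N, L), 0.0)                 # cos(theta_i), clamped
            spec = max(dot(reflect(L, N), V), 0.0) ** n if diff > 0 else 0.0
            I += Ip * (Kd * diff + Ks * spec)
        return I

    # example: one overhead light on an upward-facing surface seen from above
    print(phong_intensity(N=(0, 0, 1), V=(0, 0, 1),
                          lights=[((0, 0, 1), 1.0)],
                          Ia=0.2, Ka=0.1, Kd=0.6, Ks=0.3, n=20))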
In computer graphics a curved surface is usually represented by polygons. Two incremental methods are used to obtain a smooth transition of brightness and color across polygon boundaries: bilinear intensity interpolation (Gouraud shading) and bilinear normal-vector interpolation (Phong shading).
The Gouraud method first computes the average intensity at each vertex of the object from the diffuse term of the simple illumination model, and then obtains the intensity of each point inside a polygon by bilinear interpolation. The method requires little computation, but it cannot completely eliminate the Mach band effect, and its handling of highlights is not ideal.
Phong's method is likewise incremental linear interpolation, but what is interpolated is the averaged normal vector at each vertex; for the pixels inside a polygon, the intensity is computed from the interpolated normal. This overcomes some shortcomings of intensity interpolation and handles specular reflection well, but the computation is larger than Gouraud's.
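
A sketch of the incremental step the two methods share, here for Gouraud shading along one scanline span (the edge intensities are assumed to have been obtained by the same interpolation along the polygon edges):

    def gouraud_span(x_left, i_left, x_right, i_right):
        # interpolate the edge intensities incrementally across one span
        if x_right == x_left:
            yield x_left, i_left
            return
        di = (i_right - i_left) / (x_right - x_left)   # constant per-pixel step
        i = i_left
        for x in range(x_left, x_right + 1):
            yield x, i
            i += di

    # Phong shading has the same incremental structure, but interpolates the
    # normal-vector components instead and evaluates the illumination model
    # (e.g. phong_intensity above) at every pixel with the interpolated normal.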
Many shadow-generation algorithms are also based on local illumination models and shading. A shadow is a region of the scene that is not directly illuminated by a light source. In a computer-generated realistic image, shadows convey the relative positions of objects, enhance the stereoscopic and layered impression of the image, and enrich its realism. Shadows divide into two kinds, umbra and penumbra; an umbra together with its surrounding penumbra forms a soft shadow. A single point light source produces only an umbra; multiple light sources and linear or area light sources also produce penumbras.
For objects represented by polygons, one way to compute the umbra is the shadow volume (shadow polygon) method: the shadow volume of an object is defined as the intersection of the view volume with the region of scene space that the object's silhouette polygons block from the light source. The method can be implemented with an existing scanline hidden-surface algorithm. Atherton et al. proposed the surface-detail polygon method, which builds on the hidden surface algorithm based on polygon area classification and generates shadows by performing hidden surface removal twice, once from the light source and once from the viewpoint.
These two shadow-generation methods apply only to scenes represented by polygons and cannot produce shadows on smooth curved surfaces. For that, Williams proposed the Z-buffer (shadow map) method: first run the Z-buffer algorithm from the direction of the light source to record the depths visible to the light, then run the Z-buffer algorithm from the viewpoint to render the scene. The method easily handles arbitrarily complex scenes containing smooth surfaces, but it needs a large amount of storage and tends to produce aliasing near shadow boundaries.
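
A sketch of the second pass of Williams' method, assuming the first pass has already filled a depth buffer rendered from the light source; 'light_transform' is a hypothetical helper that maps a world-space point into light-space pixel coordinates plus its depth from the light:

    def in_shadow(p_world, light_transform, shadow_depth, bias=1e-3):
        # 'shadow_depth' comes from a first Z-buffer pass rendered from the
        # light source (not shown here)
        x, y, z = light_transform(p_world)
        if not (0 <= y < len(shadow_depth) and 0 <= x < len(shadow_depth[0])):
            return False              # outside the light's view: treat as lit
        # the point is in shadow if something nearer the light covers its cell;
        # the small bias suppresses self-shadowing, one symptom of the aliasing
        # this method suffers near shadow boundaries
        return z > shadow_depth[y][x] + bias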

3. Global Illumination Models and Ray Tracing

The light that reaches an object comes not only directly from the light sources but also by reflection or refraction from other objects. A local illumination model handles only direct illumination; to simulate accurately the reflection and refraction between the objects of an environment, a global illumination model is required.
Relative to the local illumination model, the global contribution can be written as

    I_global = K_r * I_r + K_t * I_t

where I_global is the contribution of indirect light to the intensity at a surface point; I_r is the intensity arriving from other objects along the mirror-reflection direction of the line of sight and K_r is the reflection coefficient; I_t is the intensity arriving from other objects along the refraction direction of the line of sight and K_t is the refraction coefficient. Adding I_global to the result of the local illumination model gives the intensity at the point.
The ray tracing algorithm is the classic embodiment of the global illumination model. It was first proposed by Goldstein, Nagel, and Appel; Appel used ray tracing to compute shadows, and Whitted and Kay extended the algorithm to handle specular reflection and refraction. The basic idea of the algorithm is as follows:
For each pixel on the screen, trace the ray that passes from the viewpoint through the pixel and find its nearest intersection with the objects of the environment. At the intersection the ray splits in two, one branch followed along the mirror-reflection direction and, for transparent bodies, one along the refraction direction, forming a recursive tracing process. Each time a ray undergoes a reflection or refraction, its intensity is attenuated by the reflection or refraction coefficient of the object's material; when the ray's contribution to the brightness of the original pixel falls below a given threshold, tracing stops. Shadow handling in ray tracing is equally simple: from the intersection of a ray with an object, send a test ray toward each light source to determine whether other objects block it (for transparent occluders the attenuation of the light must be processed further), which makes it possible to simulate soft and transparent shadows.
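
The recursion can be made concrete in a miniature, self-contained sphere tracer; it implements the diffuse local term, the shadow test ray, and the reflected branch, while the refraction branch is omitted for brevity (the scene data and constants are purely illustrative):

    import math

    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def add(a, b): return tuple(x + y for x, y in zip(a, b))
    def mul(v, s): return tuple(x * s for x in v)
    def norm(v): return mul(v, 1.0 / math.sqrt(dot(v, v)))

    # scene: spheres as (center, radius, diffuse kd, mirror coefficient kr)
    SPHERES = [((0.0, 0.0, 5.0), 1.0, 0.6, 0.3),
               ((1.5, 0.0, 6.5), 1.0, 0.4, 0.5)]
    LIGHT_DIR = norm((1.0, 1.0, -1.0))  # unit vector toward one distant light
    BACKGROUND = 0.05

    def hit_sphere(orig, d, center, r):
        # nearest positive root of |orig + t*d - center|^2 = r^2, or None
        oc = sub(orig, center)
        b = 2.0 * dot(oc, d)
        disc = b * b - 4.0 * (dot(oc, oc) - r * r)
        if disc < 0.0:
            return None
        t = (-b - math.sqrt(disc)) / 2.0
        return t if t > 1e-4 else None

    def trace(orig, d, weight=1.0, depth=0):
        if depth > 5 or weight < 0.01:      # contribution-threshold cut-off
            return 0.0
        best = None
        for s in SPHERES:                   # nearest intersection wins
            t = hit_sphere(orig, d, s[0], s[1])
            if t is not None and (best is None or t < best[0]):
                best = (t, s)
        if best is None:
            return BACKGROUND
        t, (center, r, kd, kr) = best
        p = add(orig, mul(d, t))
        n = norm(sub(p, center))
        # shadow test ray from the intersection toward the light source
        lit = not any(hit_sphere(p, LIGHT_DIR, s[0], s[1]) for s in SPHERES)
        color = kd * max(dot(n, LIGHT_DIR), 0.0) if lit else 0.0
        # recurse along the mirror-reflection direction, attenuated by kr
        refl = norm(sub(d, mul(n, 2.0 * dot(d, n))))
        return color + kr * trace(p, refl, weight * kr, depth + 1)

    print(trace((0.0, 0.0, 0.0), (0.0, 0.0, 1.0)))   # one primary ray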
Ray tracing solves in one natural framework the problems of visibility, shadows, and specular reflection and refraction among all the objects of the environment; it produces very realistic images, and the algorithm is relatively simple to implement. As a recursive algorithm, however, its computational cost is enormous, and reducing this cost is the key to its efficiency. Common acceleration methods include bounding extents, hierarchies, and spatial partitioning.
Ray tracing is a typical sampling process: the brightness of every screen pixel is computed independently, which causes aliasing, and the cost of the algorithm makes the traditional anti-aliasing approach of simply raising the sampling frequency hard to apply.
Pixel subdivision is an adaptive anti-aliasing technique suited to ray tracing. The method is as follows: first trace rays to compute the brightness at the four corners of each pixel and compare them; if they differ too much, subdivide the pixel into four subregions and trace the five new corner points; repeat the comparison and subdivision until the brightness differences at the corners of every subregion fall below a given threshold, then obtain the displayed brightness of the pixel by weighted averaging.
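
A sketch of this adaptive scheme, assuming a 'trace_at(u, v)' callable (for instance a wrapper around the trace function above) that returns the brightness of a single corner ray; for simplicity it re-traces shared corners, which a real implementation would cache:

    def pixel_brightness(trace_at, x, y, size=1.0, threshold=0.05, depth=3):
        # compare the four corner values; subdivide into four sub-squares
        # while they disagree and the recursion budget allows
        c = [trace_at(x, y), trace_at(x + size, y),
             trace_at(x, y + size), trace_at(x + size, y + size)]
        if depth == 0 or max(c) - min(c) <= threshold:
            return sum(c) / 4.0
        h = size / 2.0
        quads = [(x, y), (x + h, y), (x, y + h), (x + h, y + h)]
        return sum(pixel_brightness(trace_at, u, v, h, threshold, depth - 1)
                   for u, v in quads) / 4.0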
Unlike pixel subdivision, the distributed ray tracing proposed by Cook, Porter, and Carpenter is a stochastic sampling method: at each intersection, several rays are traced simultaneously within a solid angle enclosing the mirror-reflection and refraction directions, according to some distribution function, and the results are then weight-averaged. Cook et al. also showed how distributed stochastic sampling can simulate effects such as penumbras, depth of field, and motion blur.
Another problem of ray tracing is that the rays are cast from the viewpoint and shadow test rays must be handled separately, so it cannot handle light that reaches a surface indirectly by reflection or refraction; for example, the effect of a mirror or lens redirecting a light source is difficult to simulate. One remedy is to trace from both the light source and the viewpoint, but most of the large number of rays cast from the light never reach the screen, so bidirectional ray tracing costs far more and is hard to use in practice. Heckbert and Hanrahan proposed using tracing from the light source only as a supplement to ordinary ray tracing; Arvo's method preprocesses the light cast from the sources into the environment; Shao Min, Peng Qunsheng, and others proposed a bidirectional ray tracing algorithm based on a linear octree spatial structure to optimize the handling of the rays emitted by the light sources.

4. Diffuse Interreflection and Radiosity

The illumination models above treat the diffuse interreflection between objects as a constant ambient term, and even bidirectional ray tracing handles only specular reflection and refraction between objects, not their diffuse exchange. The radiosity method, first proposed by Goral et al. in 1984 and by Nishita in 1985, is based on techniques from radiative heat transfer: it replaces the ambient term with the emission and interreflection of light energy, and can thereby handle the diffuse reflection of light between objects precisely.
The radiosity method treats the scene and its light sources as a closed system in which the total light energy is constant, and assumes that all the surfaces making up the scene are ideal diffuse reflectors. Radiosity is the light energy leaving a unit of surface area per unit time, denoted B. Ideally the radiosity can be taken as uniform over each small patch, that is, the diffuse reflection is evenly distributed. By conservation of energy, the radiosity of each patch satisfies

    B_i = E_i + ρ_i * Σ_j B_j * F_ij

where B_i is the radiosity of patch i; E_i is the light energy that patch i radiates uniformly into space per unit area and time when it is itself a light source; ρ_i is the diffuse reflectance of patch i; and B_j * F_ij is the light energy arriving at patch i from patch j, F_ij being the form factor between the two patches (the reciprocity relation A_i * F_ij = A_j * F_ji lets the balance over the differential areas dA_i be written per unit area of patch i). Every patch in the environment satisfies such a relation, so for a scene of N patches we obtain the simultaneous linear equations

    B_i - ρ_i * Σ_j F_ij * B_j = E_i,   i = 1, ..., N

E_i is nonzero only when patch i is itself the surface of an emitter; it represents the source of light energy in the system. The form factor F_ij depends only on the geometry of the scene.
From these linear equations it is straightforward to solve for the radiosity B_i of every patch, and hence for the patch brightness I_i. Using these values as input, the brightness at each vertex of every patch can be obtained by bilinear interpolation; finally, for a specific viewpoint, the brightness of each screen pixel is interpolated to generate the final image.
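
The article does not prescribe a particular solver; since the system's matrix is diagonally dominant (Σ_j F_ij ≤ 1 and ρ_i < 1), simple iteration converges quickly, and the following Gauss-Seidel sketch is one standard choice:

    def solve_radiosity(E, rho, F, iterations=50):
        # Gauss-Seidel iteration for B_i = E_i + rho_i * sum_j F_ij * B_j;
        # E (emission) and rho (diffuse reflectance) are length-N lists,
        # F is the N x N form-factor matrix; E_i > 0 only for emitter patches
        n = len(E)
        B = E[:]                            # start from the emitted energy
        for _ in range(iterations):
            for i in range(n):
                B[i] = E[i] + rho[i] * sum(F[i][j] * B[j] for j in range(n))
        return B

    # toy example: two patches facing each other, patch 0 emitting
    print(solve_radiosity(E=[1.0, 0.0], rho=[0.5, 0.8],
                          F=[[0.0, 0.2], [0.2, 0.0]]))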
The main computational burden of the radiosity method is the form factors. The hemicube method proposed by Cohen and Greenberg is an efficient way to approximate the form factors of a closed environment. First, a half cube is erected over patch i, with the center of the patch as origin and its normal vector as the Z axis, and its five faces are divided into uniform grids whose delta form factors can be precomputed. Then all the other patches of the scene are projected onto the hemicube; when several patches project onto the same grid cell, their depths along the projection direction are compared and the cell keeps only the nearest patch, a process equivalent to the Z-buffer algorithm. Finally, accumulating the delta form factors of all the grid cells covered by patch j yields the form factor F_ij from patch i to patch j.
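
A sketch of the final accumulation step, assuming the projection pass has already produced an item buffer of nearest patch ids per grid cell together with the precomputed delta form factors (both data layouts here are illustrative):

    def form_factors_from_hemicube(item_buffer, delta_ff, num_patches):
        # item_buffer[f][y][x]: id of the nearest patch seen through cell
        # (x, y) of hemicube face f (filled by the Z-buffer-like pass above,
        # not shown); delta_ff[f][y][x]: that cell's delta form factor.
        # Accumulating per patch id yields the whole row F_i* for patch i.
        F = [0.0] * num_patches
        for ids_face, dff_face in zip(item_buffer, delta_ff):
            for ids_row, dff_row in zip(ids_face, dff_face):
                for pid, dff in zip(ids_row, dff_row):
                    if pid is not None:        # some patch covers this cell
                        F[pid] += dff
        return F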
The advantage of the radiosity method is that the algorithm is independent of the viewpoint, so the solution and the rendering can proceed separately, and it can simulate the color-bleeding effect; however, it cannot handle specular reflection or refraction.
In the radiosity method, the energy a patch sends in a particular direction depends only on its total radiosity, not on the direction from which the energy was received. Immel, Cohen, and Greenberg generalized the method: instead of computing a single radiosity per patch, the hemisphere above each patch is divided into regions of finite solid angle, the incoming and outgoing light energy is computed separately for each region, and a bidirectional reflectance function gives the distribution of the radiated energy over directions; the intensity at each vertex is then interpolated from the radiosity in the direction nearest the viewpoint, and finally the image is generated. This extension can handle complex scenes containing mirrors and transparent objects, but its time and space requirements are enormous.
Another approach is to combine radiosity with ray tracing. Simply adding the results of the two methods is not sufficient; the interaction between diffuse and specular surfaces must be treated together. Wallace, Cohen, and Greenberg proposed a two-pass method: the first pass runs the view-independent radiosity computation, in which the mirrors must be taken into account, something that can be simulated with the image method (mirror-world approach); the second pass runs the view-dependent ray tracing algorithm to handle the global specular reflection and refraction and to generate the image. The efficiency of the algorithm hinges on the first pass: the image method need only handle ideal mirror reflection and correct the form factors accordingly, but the form factor computation grows rapidly with the number of mirrors. Sillion and Puech extended the two-pass method further by replacing the image method with recursive ray tracing to compute the form factors in the first pass, so that any number of mirrors and transparent bodies can be handled.

5. Texture Mapping

Texture mapping is the process of adding surface detail to an object by pasting or projecting a digitized texture image onto its surface. Texture images can be obtained by sampling or defined by mathematical functions. Many surface details are difficult to represent by polygonal approximation or other geometric modeling methods, so texture mapping makes computer-generated objects look considerably more realistic and natural.
Texture mapping was first proposed by Catmull and came into wide use after improvements by Blinn and Newell; it has become an important technique of computer graphics. Mapping a texture onto an object surface can be viewed as projecting a screen pixel into the corresponding region of texture space and taking the average color of that region as the best approximation of the true pixel color. Concretely, the texture image lives in its own texture space, and the mapping proceeds in two steps: the four corner points of a screen pixel are first mapped onto the surface of the 3D object, and then further into texture space, where they bound a quadrilateral region approximating the pixel's footprint on the curved surface. Accumulating the texture over this quadrilateral region gives the texture-mapped value of the screen pixel. The mapping can also be run the other way, from texture space to the 3D object and then to screen pixels, but this direction requires more storage, is more prone to aliasing, and cannot be used with the scanline algorithm.
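
A sketch of the accumulation step, assuming the pixel's four corners have already been mapped into texture-space (u, v) coordinates in [0, 1]; the quadrilateral is approximated by its bounding rectangle (the square/rectangle simplification discussed in the anti-aliasing paragraph below) and box-averaged:

    def pixel_texture_value(texture, quad_uv, samples=4):
        # 'texture' is a 2-D array of texel values; 'quad_uv' is the list of
        # the pixel's four corner (u, v) points in texture space
        h, w = len(texture), len(texture[0])
        us = [u for u, v in quad_uv]
        vs = [v for u, v in quad_uv]
        u0, u1, v0, v1 = min(us), max(us), min(vs), max(vs)
        total = 0.0
        for i in range(samples):          # box-filter over the footprint
            for j in range(samples):
                u = u0 + (u1 - u0) * (i + 0.5) / samples
                v = v0 + (v1 - v0) * (j + 0.5) / samples
                total += texture[min(int(v * h), h - 1)][min(int(u * w), w - 1)]
        return total / (samples * samples)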
Surface textures divide into two kinds: color textures and geometric textures. Color textures are the patterns and colors distributed over a surface; geometric textures are the microscopic bumps and hollows of the surface. The mapping method above handles only color textures, and the resulting surfaces still look smooth. The bump mapping method that Blinn built on texture mapping is a technique for simulating rough object surfaces: it improves the apparent microstructure of a surface without modeling its roughness geometrically, producing effects such as text carved into a marble surface or a concrete wall. Moreover, by varying the bump map over time, more advanced realistic effects such as sweat trickling down a face can also be simulated. A bump map is a two-dimensional array whose elements are small offset vectors that raise or lower points of the surface slightly from their actual positions; after these tiny offsets are mapped onto a point of the surface, the normal vector there is corrected, and the illumination is computed with the corrected normal.
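
A sketch of the normal correction, following Blinn's perturbation formula N' = N + Bu*(N × Pv) - Bv*(N × Pu); the bump-map derivatives Bu and Bv are taken by finite differences, the surface tangent (partial-derivative) vectors Pu and Pv are assumed known, and normalization of the result is omitted:

    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])

    def perturbed_normal(N, Pu, Pv, bump, u, v):
        # 'bump' is the 2-D height array, (u, v) interior integer texel
        # coordinates; central differences approximate the map's derivatives
        Bu = (bump[v][u + 1] - bump[v][u - 1]) / 2.0
        Bv = (bump[v + 1][u] - bump[v - 1][u]) / 2.0
        nxpv, nxpu = cross(N, Pv), cross(N, Pu)
        return tuple(n + Bu * c1 - Bv * c2
                     for n, c1, c2 in zip(N, nxpv, nxpu))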
Both the texture image and the screen pixels are discrete sampling systems, so texture mapping easily produces aliasing: texture details are lost and surface boundaries are distorted. Convolution filtering is a common anti-aliasing method in texture mapping. A screen pixel is a small rectangle that maps into texture space as an arbitrary quadrilateral; the convolution filtering method takes the convolution of the texture function over the area covered by this quadrilateral as the brightness of the pixel, with box, triangle, Gaussian, or spline functions serving as the filter. In practical applications, to simplify the computation, the quadrilateral footprint of a screen pixel is commonly approximated by a square, rectangle, or ellipse. Convolution filtering is computationally expensive, and it does not apply to bump mapping, because the bump-map texture function is not linearly related to the pixel brightness; in that case prefiltering can be used. Prefiltering computes in advance the average texture values over regions of texture space at a series of resolutions; at mapping time one merely selects the table of the appropriate resolution according to the area covered by the screen pixel and performs a suitable linear interpolation.
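
A sketch of such prefiltering: precompute box-averaged tables at halved resolutions (assuming power-of-two dimensions) and select the table whose cell size matches the pixel's texture-space footprint; a full implementation would also interpolate between adjacent levels, as the text notes:

    import math

    def build_pyramid(texture, levels):
        # each level halves the resolution by averaging 2x2 blocks below it
        pyramid = [texture]
        for _ in range(levels):
            t = pyramid[-1]
            h, w = len(t) // 2, len(t[0]) // 2
            pyramid.append([[(t[2*y][2*x] + t[2*y][2*x + 1] +
                              t[2*y + 1][2*x] + t[2*y + 1][2*x + 1]) / 4.0
                             for x in range(w)] for y in range(h)])
        return pyramid

    def select_level(pyramid, footprint_texels):
        # pick the table whose cell size best matches the pixel's footprint
        level = int(round(math.log2(max(footprint_texels, 1.0))))
        return pyramid[max(0, min(level, len(pyramid) - 1))]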
In many cases the two-dimensional mapping above gives good results, but it can also produce distortions, for example a flat, two-dimensional look persisting on a three-dimensional surface, as well as texture seam problems. Peachey and Perlin proposed solid texturing, which defines the texture as a function of position in three-dimensional space, so that effects such as objects carved out of wood or marble display more accurately.
Surfaces of other materials can likewise be simulated by appropriate methods; for example, Gardner's transparency mapping can simulate clouds of simple shape. In addition, many methods based on physical models, stochastic processes, and fractal geometry are used to generate natural textures.

