PBRT Reading: Chapter 11, Texture, Sections 11.1-11.4

Source: http://www.opengpu.org/forum.php?mod=viewthread&tid=5817

Chapter 11: Texture

To introduce textures into the material model, we now describe a set of interfaces and classes. Recall that the materials described in Chapter 10 are based on parameters describing their characteristics (diffuse reflectance, glossiness, etc.). Because material properties in the real world vary across a surface, we also need a way to describe these spatial patterns. In PBRT, textures are abstracted so that patterns are generated in a way that is separate from the implementation of the materials, which makes it easy to combine them to create a great variety of appearances.

In PBRT, a texture is a very general concept: it is a function that maps points in one domain (for example, a surface's (u,v) parameter space, or (x,y,z) object space) to values in another domain (such as spectral values or real numbers). A large number of texture implementations are made available to users as plug-ins. For example, PBRT has a texture that represents a zero-dimensional function returning a constant, which describes a surface whose parameter value is the same everywhere. An image map texture is a two-dimensional function of (s,t) that computes the value at a given point from a two-dimensional array of pixel values (see Section 11.4). PBRT even has textures that compute their values from other textures.

In the final image, textures can be a source of high-frequency variation. For example, in the figure below, (a) is a badly aliased image sampled once per pixel; (b) is a blow-up of the top of the sphere, showing the high-frequency detail between adjacent image sample positions; and (c) shows the result of applying the antialiasing techniques from this chapter.


Although the nonuniform sampling techniques from Chapter 7 can reduce the visual impact of this aliasing, a better solution is to remove the high-frequency content from the texture function according to the sampling rate. For many texture functions, it is not difficult to compute a good antialiased approximation, and doing so is far more efficient than increasing the sampling rate.
The problem of texture aliasing and its general solutions are discussed in the first section of this chapter. We then describe the basic texture interface and illustrate its use with a few simple texture functions. The remainder of the chapter introduces a variety of more complex texture implementations that use different antialiasing techniques.

11.1 Sampling and anti-aliasing

The sampling task presented in Chapter 7 was frustrating because we knew from the outset that the aliasing problem was not solvable: no matter how high the image sampling rate, the infinitely high-frequency content of geometric edges and hard shadows inevitably produces aliasing in the final image. Fortunately, for textures things are not so hopeless: either there is a convenient analytic form of the texture function available, which makes it possible to remove high-frequency content before sampling, or the function can be evaluated carefully so that high frequencies are never introduced in the first place. If this problem is addressed seriously in a texture implementation, as is done throughout this chapter, images can be rendered without texture aliasing using just one sample per pixel.
To remove aliasing from a texture function, the following two issues must be addressed:

1. The sampling rate in texture space must be computed. We can derive the screen-space sampling rate from the image resolution and the pixel sampling rate, but here we must determine the resulting sampling rate on a surface in the scene, and from that the sampling rate of the texture function.
2. Given the texture sampling rate, sampling theory must be applied to guide the evaluation of the texture value, so that it has no frequency content beyond what the sampling rate can represent (for example, by removing frequencies above the Nyquist limit).
The remainder of this section addresses these two issues in turn.

11.1.1 Finding the texture sampling rate

Consider an arbitrary texture function T(p) defined over positions on a surface in the scene. If we ignore the complications introduced by visibility (that is, other objects may occlude the surface at nearby image samples, or the surface may cover only a limited region of the image plane), this texture function can be expressed as a function T(f(x,y)) of points (x,y) on the image plane, where f(x,y) maps image points to points on the surface. Thus T(f(x,y)) gives the value of the texture function at the image position (x,y).

As a simple example, consider a 2D texture function T(s,t) applied to a quadrilateral perpendicular to the z axis with corners (0,0,0), (1,0,0), (1,1,0), and (0,1,0). If an orthographic camera looks down the z axis such that the quadrilateral exactly fills the image plane, and if points p on the quadrilateral are mapped to 2D (s,t) texture coordinates by

s = p_x, t = p_y

then the relationship between (s,t) and screen pixels (x,y) is easy to see:

s = x/x_r, t = y/y_r

where the overall image resolution is (x_r, y_r). Thus, if the sample spacing on the image plane is one pixel, the sample spacing in (s,t) texture parameter space is (1/x_r, 1/y_r), and any frequencies in the texture function beyond what this sampling rate can represent must be removed.
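To make this arithmetic concrete, here is a minimal C++ sketch (not PBRT code; the resolution values and all names are made up for illustration) that computes the (s,t) sample spacing for this orthographic quad example:

    #include <cstdio>

    // Hypothetical illustration, not part of PBRT: texture-space sample
    // spacing for the orthographic quad example at 640x480 resolution.
    int main() {
        const float xRes = 640.f, yRes = 480.f;        // (x_r, y_r)
        const float ds = 1.f / xRes, dt = 1.f / yRes;  // spacing per one-pixel step
        // By the Nyquist criterion, the texture should contain no
        // frequencies above xRes/2 cycles in s and yRes/2 cycles in t
        // over the [0,1] parameter range.
        printf("(s,t) sample spacing: (%g, %g)\n", ds, dt);
        return 0;
    }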

The relationship between pixel coordinates and texture coordinates, and thus the relationship between their sampling rates, is the key piece of information needed to determine the maximum allowable frequency content of the texture function. For a slightly more complicated example, given a triangle with (s,t) texture coordinates at its vertices and viewed under perspective projection, it is possible to derive analytically the differences in s and t between sample points on the image plane. This is the basis of basic texture map antialiasing in graphics hardware.

For more complex scene geometry, camera projection models, and texture coordinate mappings, it is much more difficult to determine the relationship between image positions and texture parameter values. Fortunately, for texture antialiasing we do not need to evaluate f(x,y) for arbitrary (x,y); we only need to find the relationship between changes in image sample position and the corresponding changes in texture sample position at a given point on the image. This relationship is given by the partial derivatives ∂f/∂x and ∂f/∂y of this function. For example, these can be used to form a first-order approximation of the value of f:

f(x′, y′) ≈ f(x, y) + (x′ − x) ∂f/∂x + (y′ − y) ∂f/∂y

If the partial derivatives vary slowly over the distances x′ − x and y′ − y, this is a reasonable approximation. More importantly, these partial derivatives give the change in texture sample position for a one-pixel shift in the x and y directions, which is exactly the texture sampling rate. For example, for the quadrilateral above, ∂s/∂x = 1/x_r, ∂s/∂y = 0, ∂t/∂x = 0, and ∂t/∂y = 1/y_r.
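As an illustration of how these derivatives are used, the following sketch (hypothetical code, not from PBRT) extrapolates texture coordinates from one pixel to a nearby one using the first-order formula above:

    // Hypothetical illustration, not part of PBRT: first-order
    // extrapolation of (s,t) from pixel (x,y) to (x+dx, y+dy).
    struct TexCoordDerivs {
        float s, t;                    // texture coordinates at (x, y)
        float dsdx, dsdy, dtdx, dtdy;  // screen-space partial derivatives
    };

    // f(x', y') ≈ f(x, y) + (x' - x) ∂f/∂x + (y' - y) ∂f/∂y
    void ExtrapolateST(const TexCoordDerivs &d, float dx, float dy,
                       float *sp, float *tp) {
        *sp = d.s + dx * d.dsdx + dy * d.dsdy;
        *tp = d.t + dx * d.dtdx + dy * d.dtdy;
    }

For the quadrilateral above, dsdx = 1/x_r, dtdy = 1/y_r, and the other two derivatives are zero, so a one-pixel step in x changes s by exactly 1/x_r.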

The key to computing these partial derivatives lies in the RayDifferential structure described in Section 2.5.1. In the Scene::Render() function, this structure is initialized for each camera ray; it contains not only the ray being traced but also two auxiliary rays, one offset horizontally by one pixel from the camera ray's image position and the other offset vertically by one pixel. All of the geometric ray intersection routines use only the main camera ray and ignore the two auxiliary rays.

Now we use the two auxiliary rays to estimate the partial derivatives ∂p/∂x and ∂p/∂y of the mapping p(x,y) from image position to world-space position, as well as the partial derivatives ∂u/∂x, ∂u/∂y, ∂v/∂x, and ∂v/∂y of the mappings u(x,y) and v(x,y) from (x,y) to (u,v) parametric coordinates. Later in this chapter, we'll see how these values are used to compute the screen-space partial derivatives, and thus the sampling rates, of arbitrary quantities based on p or (u,v). The values of these derivatives at the intersection point are stored in the DifferentialGeometry structure. Note that they are declared as mutable, because they are set inside a function that takes a const DifferentialGeometry object as a parameter.

<DifferentialGeometry Public Data> +=
    mutable Vector dpdx, dpdy;
    mutable float dudx, dvdx, dudy, dvdy;

<Initialize DifferentialGeometry from Parameters> =
    dudx = dvdx = dudy = dvdy = 0;

The DifferentialGeometry::ComputeDifferentials() function computes these values. It is called by Intersection::GetBSDF() before Material::GetBSDF() is invoked, so the values are available when the material is evaluated. Because not all rays traced by the system carry ray differential information, we must check the hasDifferentials field of the RayDifferential before computing them. If it is false, the partial derivatives are set to zero.

<DifferentialGeometry Method Definitions> +=
void DifferentialGeometry::ComputeDifferentials(
        const RayDifferential &ray) const {
    if (ray.hasDifferentials) {
        <Estimate screen-space change in p and (u,v)>
    }
    else {
        dudx = dvdx = 0.;
        dudy = dvdy = 0.;
        dpdx = dpdy = Vector(0,0,0);
    }
}

The key to computing these estimates is the assumption that the surface is locally flat with respect to the sampling rate at the point being shaded. This is a reasonable approximation in practice, and it is hard to do much better: because a ray tracer is a point-sampling technique, we have no additional information about the scene between the rays we trace. Although this approximation breaks down on highly curved surfaces and at silhouette edges, it rarely produces noticeable errors in practice. For this approximation, we need the tangent plane of the surface at the intersection point of the main ray, whose implicit equation is

ax + by + cz + d = 0

where a = n_x, b = n_y, c = n_z, and d = −(n · p). We can then compute the intersection points px and py of the auxiliary rays rx and ry with this plane, as shown in the figure:

Using forward differences and these auxiliary intersection points, we can then approximate the partial derivatives at the point:

∂p/∂x ≈ px − p, ∂p/∂y ≈ py − p

Because the differential rays are offset by exactly one pixel in each direction, there is no need to divide these differences by a Δ, since Δ = 1.

<Estimate screen-space change in p and (u,v)> =
    <Compute auxiliary intersection points with plane>
    dpdx = px - p;
    dpdy = py - p;
    <Compute (u,v) offsets at auxiliary points>

Applying the standard ray-plane intersection algorithm to a ray with origin o and direction dir, the t value of the intersection is:

t = −((a, b, c) · o + d) / ((a, b, c) · dir)

We first compute the plane coefficient d. There is no need to compute a, b, and c explicitly: they are simply the components of the surface normal nn.

<Compute auxiliary intersection points with plane> =
    float d = -Dot(nn, Vector(p.x, p.y, p.z));
    Vector rxv(ray.rx.o.x, ray.rx.o.y, ray.rx.o.z);
    float tx = -(Dot(nn, rxv) + d) / Dot(nn, ray.rx.d);
    Point px = ray.rx.o + tx * ray.rx.d;
    Vector ryv(ray.ry.o.x, ray.ry.o.y, ray.ry.o.z);
    float ty = -(Dot(nn, ryv) + d) / Dot(nn, ray.ry.d);
    Point py = ray.ry.o + ty * ray.ry.d;

We can now use px and py to compute their corresponding (u,v) coordinates by taking advantage of two facts: first, the surface partial derivatives ∂p/∂u and ∂p/∂v form a (not necessarily orthogonal) coordinate system on the tangent plane; and second, the coordinates of the two auxiliary intersection points with respect to this coordinate system are equal to their coordinates in (u,v) parameter space, as shown in the figure:


Given a point p′, we can compute its position with respect to this coordinate system by

p′ = p + Δu ∂p/∂u + Δv ∂p/∂v

or, equivalently, as the linear system

( ∂p_x/∂u  ∂p_x/∂v )            ( p′_x − p_x )
( ∂p_y/∂u  ∂p_y/∂v ) ( Δu )  =  ( p′_y − p_y )
( ∂p_z/∂u  ∂p_z/∂v ) ( Δv )     ( p′_z − p_z )

The screen-space partial derivatives ∂u/∂x, ∂v/∂x, ∂u/∂y, and ∂v/∂y can be obtained by solving this linear system for the two auxiliary intersection points. The system has three equations and two unknowns, so it is overconstrained. We must be careful, because one of the equations may be degenerate: for example, if ∂p/∂u and ∂p/∂v lie in the xy plane, their z components are zero and the third equation is degenerate. Since only two equations are needed to solve the system, we want to pick the two that will not cause it to degenerate. A simple way to do this is to look at the cross product of ∂p/∂u and ∂p/∂v, see which of its coordinates has the largest magnitude, and use the other two; since their cross product is already available in nn, this is straightforward. Even with this care, the system may still have no solution (usually because the two partial derivatives do not form a coordinate system); in that case, all we can do is return arbitrary values.

<Compute (u,v) offsets at auxiliary points> =
    <Initialize A, Bx, and By matrices for offset computation>
    if (SolveLinearSystem2x2(A, Bx, x)) {
        dudx = x[0]; dvdx = x[1];
    }
    else {
        dudx = 1.; dvdx = 0.;
    }
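The excerpt above elides the <Initialize A, Bx, and By matrices for offset computation> fragment and the analogous solve for the y derivatives. As a rough reconstruction from the description above (a sketch consistent with the text, not a verbatim quote of the PBRT source; it assumes the surrounding method's nn, dpdu, dpdv, p, px, and py are in scope, plus <cmath> for fabsf):

    // Pick the two dimensions in which nn has the smaller magnitudes,
    // dropping the potentially degenerate equation.
    int axes[2];
    if (fabsf(nn.x) > fabsf(nn.y) && fabsf(nn.x) > fabsf(nn.z)) {
        axes[0] = 1; axes[1] = 2;   // drop the x equation
    }
    else if (fabsf(nn.y) > fabsf(nn.z)) {
        axes[0] = 0; axes[1] = 2;   // drop the y equation
    }
    else {
        axes[0] = 0; axes[1] = 1;   // drop the z equation
    }
    // 2x2 matrices built from the two chosen equations of the
    // overconstrained 3x2 system.
    float A[2][2] = { { dpdu[axes[0]], dpdv[axes[0]] },
                      { dpdu[axes[1]], dpdv[axes[1]] } };
    float Bx[2] = { px[axes[0]] - p[axes[0]], px[axes[1]] - p[axes[1]] };
    float By[2] = { py[axes[0]] - p[axes[0]], py[axes[1]] - p[axes[1]] };
    float x[2];

The y derivatives would then come from a second, analogous call, SolveLinearSystem2x2(A, By, x), storing the results in dudy and dvdy.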
