Volume Rendering Overview 3: Ray Casting Principles and Key Points of Attention


From: http://blog.csdn.net/liu_lin_xm/article/details/4850609

Excerpt from "GPU Programming and Cg Language Primer, 1st edition": the principle of the ray casting algorithm.

Ray casting is a direct volume rendering algorithm based on an image sequence. A ray is cast in a fixed direction (usually along the line of sight) from each pixel of the image, and the ray passes through the entire image sequence. During this process the image sequence is sampled to obtain color information, and at the same time the color values are accumulated according to a light absorption model, until the ray has passed through the whole sequence. The final accumulated color value is the color of the rendered image at that pixel.

Why does the definition above say "image sequence" rather than simply "volume texture"? The reason is that volume data can be organized in several ways. In CPU-based high-level language programming, an image sequence is sometimes used instead of a volume texture; in a GPU-based shading program, a volume texture must be used. The image sequence mentioned here can also be understood as slice data.

Note: in the ray casting algorithm, rays are cast from the viewpoint toward the pixels on the outermost surface of the image sequence (the volume). Many tutorials carelessly write "starting from screen pixels", which oversimplifies things and easily misleads readers about how the technique is actually implemented; it can fairly be called a rumor. A ray that originates at a screen pixel belongs to a ray tracing algorithm, not a ray casting algorithm.

The ray casting method in volume rendering is similar to the ray tracing algorithm in photorealistic rendering in that both accumulate color along a ray path, but the concrete operations differ. First, a ray in the ray casting method travels through the data field in a straight line, whereas a ray tracing algorithm must compute the reflection and refraction of light. Second, the ray casting algorithm samples along the ray path and, based on the color and transparency of each sample, accumulates colors using the compositing operators of volume rendering; a ray tracing algorithm does not deliberately accumulate colors and only considers the intersections of rays with geometry. Finally, in ray tracing the rays go from the viewpoint through the pixels on the screen, and intersection tests between rays and scene geometry are required; in ray casting the rays go from the viewpoint to the object (section 15.2.1 below elaborates on this), and no ray-object intersection computation is needed.

The description of the ray casting algorithm above may be too brief and may leave you with some doubts. That is normal: if you have doubts, you can think about how to resolve them. What I fear most is a reader who finishes with no doubts at all, having only glimpsed a vague outline rather than gained a real understanding.

15.1.1 Absorption Model

Almost every direct volume rendering algorithm treats the volume data as describing "how each voxel, at a given density, absorbs and emits light as light passes through the volume". This idea comes from physical optics and is ultimately categorized and described through optical models. To distinguish them from the illumination (lighting) models discussed earlier, the term is translated uniformly here as "optical model".

Reference [15] describes the important optical models used in most direct volume rendering algorithms:

1. Absorption only: the volume is considered to consist of cold, black voxels that only absorb light; they neither emit light nor reflect or transmit it;

2. Emission only: the voxels in the volume only emit light and do not absorb it;

3. Absorption plus emission: this optical model is the most widely used. The voxels emit light and can absorb it, but do not reflect or transmit light;

4. Scattering and shading/shadowing: a voxel can scatter (reflect and refract) light from an external light source, and shadows can arise from occlusion between voxels;

5. Multiple scattering: light can be scattered by several voxels before reaching the eye.

We usually use the absorption plus emission model. To enhance realism, shadow computation (including self-shadowing) can also be added.
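As an aside (a standard formalization, not quoted from the book), the absorption plus emission model is commonly written as a discrete volume rendering sum: each sample along the ray contributes its emitted color, attenuated by the opacity of the samples in front of it:

C = sum over i of ( C_i * A_i * product over j < i of (1 - A_j) )

where C_i and A_i are the color and opacity of the i-th sample counted from the viewpoint. The compositing formulas in section 15.2.2 compute exactly this sum incrementally.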

15.2 Details of the Ray Casting Algorithm

15.2.1 How a Ray Traverses the Volume Texture

This section describes how a ray passes through the volume texture. This is a very important detail; many people give up on learning volume rendering because they cannot understand how the volume texture and the cast rays interact.

The previous chapters may have seemed to suggest that a volume texture alone is enough to perform volume rendering. This impression confused me for a long time when I first studied volume rendering. Later I found a piece of foreign software that could render a volume texture onto a cube or a cylinder, and I suddenly understood: the volume texture is not the data of a spatial model. A spatial model (usually a regular cube or cylinder) combined with the volume texture is what makes volume rendering possible.

For example, to see a texture-mapped effect on the computer, we need at least a two-dimensional texture and a polygon onto which to map it; the polygon is the carrier of the texture.

Similarly, volume rendering also requires a 3D model as the carrier of the volume texture. The volume texture corresponds to the model through (three-dimensional) texture coordinates; rays are then cast from the viewpoint toward points on the model, and a ray traversing the model space is equivalent to a ray traversing the volume texture.

A cube or a cylinder is commonly used as the spatial model for volume rendering. This chapter uses a cube as the carrier of the volume texture.

Note: the volume texture corresponds to the 3D model through texture coordinates. The texture coordinate conventions used by OpenGL and Direct3D differ, so pay attention to this when writing a program.

[Figure 44: distribution of the volume texture coordinates over the eight vertices of the cube]
Figure 44 shows the distribution of the volume texture coordinates on the cube; testing shows that this distribution follows the OpenGL convention. In the host program, specify the volume texture coordinates of the cube's 8 vertices (note that each is a three-component vector) and pass them to the GPU; the volume texture coordinates of the points on the cube's six faces are then interpolated automatically on the GPU.
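A minimal host-side sketch of this setup is given below, assuming a unit cube whose volume texture coordinates simply equal its vertex positions in [0,1]^3 (one common OpenGL-style convention); the names are illustrative, not the book's code.

struct Vertex {
    float position[3]; // cube vertex in model space
    float texcoord[3]; // volume texture coordinate (a three-component vector)
};

// The 8 vertices of a unit cube; texcoord == position, so interpolation
// across the faces yields valid volume texture coordinates everywhere.
static const Vertex kCubeVertices[8] = {
    {{0,0,0},{0,0,0}}, {{1,0,0},{1,0,0}}, {{1,1,0},{1,1,0}}, {{0,1,0},{0,1,0}},
    {{0,0,1},{0,0,1}}, {{1,0,1},{1,0,1}}, {{1,1,1},{1,1,1}}, {{0,1,1},{0,1,1}},
};

// 12 triangles (two per face), wound counter-clockwise when seen from outside,
// which matters for the back-face depth rendering discussed later in this chapter.
static const unsigned short kCubeIndices[36] = {
    0,2,1, 0,3,2,  4,5,6, 4,6,7,  0,1,5, 0,5,4,
    3,7,6, 3,6,2,  1,2,6, 1,6,5,  0,4,7, 0,7,3,
};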

A ray is uniquely determined by the viewpoint and a point on the cube's surface. The ray traversing the cube is equivalent to it traversing the volume data; during the traversal the volume data is sampled at equal intervals, and the samples are accumulated step by step according to the light transmission formula. This accumulation is based on the transparency compositing formula described in Chapter 11, which was only explained briefly there; in this chapter, transparency, transparency compositing, and the ordering relationship are set out in full.

15.2.2 Transparency and Compositing

In essence, transparency represents the ability of light to penetrate an object. Light penetrating an object changes its wavelength composition, and if it crosses several objects these changes accumulate. Rendering a transparent object is therefore essentially a matter of blending the color of the transparent object with the color of the objects behind it; this is called alpha blending. Graphics hardware implements alpha blending using the over operator. The alpha blending formula is as follows:
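(The formula image is not reproduced in this re-post; based on the variable definitions below, it is the standard over operator:)

C_o = A_s * C_s + (1 - A_s) * C_d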

where A_s is the transparency (alpha) of the transparent object, C_s is the source color of the transparent object, C_d is the color of the destination object behind it, and C_o is the color obtained by observing the destination object through the transparent object.

If there are several transparent objects, the objects usually have to be sorted, unless all of them have the same transparency. Drawing multiple transparent objects on graphics hardware relies on the Z buffer. In the ray casting algorithm, a ray traversing the volume texture also implies an ordering of transparent samples, so there is a compositing-order problem. The sampling and compositing along the ray can proceed either front to back or back to front, and the two orderings are handled with different formulas.

If sampling and compositing proceed from front to back, the compositing formulas are:
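(The formula image is not reproduced here; the standard front-to-back compositing equations implied by the variable definitions below are:)

ΔC_i = ΔC_(i-1) + (1 - ΔA_(i-1)) * A_i * C_i
ΔA_i = ΔA_(i-1) + (1 - ΔA_(i-1)) * A_i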

where C_i and A_i are the color value and opacity obtained by sampling the volume texture (that is, the data stored in the voxel), and ΔC_i and ΔA_i are the accumulated color value and accumulated opacity.

Note that many volume textures do not actually contain opacity, so sometimes you define an initial opacity and then accumulate it.

If sampling and compositing proceed from back to front, the formula is:
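(Again the formula image is missing from this re-post; the standard back-to-front compositing equation is:)

ΔC_i = A_i * C_i + (1 - A_i) * ΔC_(i+1)

where ΔC_(i+1) is the color already accumulated from the samples behind sample i.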

15.2.3 Sampling Along the Ray

[Figure 45: a ray entering the cube at point F and leaving it at point L]

As shown in Figure 45, suppose the ray enters the cube at point F and leaves it at point L, and the distance the ray travels inside the cube is m. When the ray has entered at point F and traversed a distance n (n < m), the sampling formula is as follows:
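(The formula image is not reproduced here; a reconstruction consistent with the variable description below is:)

t = t_start + d * n,  with n = i * delta for the i-th sample (i = 0, 1, 2, ...)

that is, n grows by the sampling interval delta at each step.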

Here t_start is the volume texture coordinate of the entry point on the cube surface, d is the projection (ray) direction, delta is the sampling interval by which n grows at each step, and t is the resulting sample texture coordinate. With the sample texture coordinate, the volume data can be looked up in the volume texture. Sampling along a ray does not end until n > m or the accumulated opacity exceeds 1.

To summarize: first, we need a cube whose vertex volume-texture coordinates have been specified. A ray traversing the cube is then equivalent to a ray traversing the volume texture. Throughout the traversal, the sample texture coordinates are computed and the volume texture is sampled. The sampling process ends when the ray exits the cube or the accumulated opacity reaches 1.
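A minimal CPU-side sketch of this per-ray loop is given below, assuming a hypothetical sampleVolume() lookup into the volume texture; it illustrates the idea rather than reproducing the book's Cg fragment program.

struct Vec3 { float x, y, z; };
struct RGBA { float r, g, b, a; };

// Hypothetical lookup into the volume texture (filtering assumed elsewhere).
RGBA sampleVolume(const Vec3& texcoord);

RGBA castRay(Vec3 tStart,   // volume texture coordinate of the entry point F
             Vec3 dir,      // normalized ray direction in texture space
             float m,       // distance from the entry point F to the exit point L
             float delta)   // sampling interval
{
    RGBA acc = {0, 0, 0, 0};                      // accumulated color and opacity
    for (float n = 0.0f; n <= m && acc.a < 1.0f; n += delta) {
        Vec3 t = { tStart.x + dir.x * n,          // t = t_start + d * n
                   tStart.y + dir.y * n,
                   tStart.z + dir.z * n };
        RGBA s = sampleVolume(t);
        // Front-to-back compositing from section 15.2.2:
        //   dC_i = dC_(i-1) + (1 - dA_(i-1)) * A_i * C_i
        //   dA_i = dA_(i-1) + (1 - dA_(i-1)) * A_i
        float w = (1.0f - acc.a) * s.a;
        acc.r += w * s.r;
        acc.g += w * s.g;
        acc.b += w * s.b;
        acc.a += w;
    }
    return acc;  // terminates when n > m or the accumulated opacity reaches 1
}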

This process should not be hard to follow. Remember that the texture coordinates are the bridge between the 3D model and the volume texture data: by computing how the ray crosses the 3D model, we can compute how the volume texture coordinate changes along the ray direction, and that is exactly how the sample texture coordinates are calculated.

In high school physics I was poor at mechanics at first and never knew where to start with word problems. Then a reference book said, "Acceleration is the bridge linking force and motion; when you meet a problem, analyze the acceleration first." After that, physics no longer felt hard to learn. So here I borrow that sentence to summarize the role of texture coordinates.

Now there is another question: how do we know that the ray has exited the cube? This is equivalent to computing the distance m that the ray travels inside the cube, which is elaborated in the next section.

Appendix: the volume texture coordinate conventions differ between OpenGL and DirectX, so the vertex volume-texture coordinates must be set according to the profile currently in use. This also shows that the Cg language is built on top of OpenGL and DirectX.

 

15.2.4 How to Determine That the Ray Has Exited the Volume Texture

As described in the previous section, the ray exiting the volume texture is equivalent to the ray exiting the cube, so determining whether the ray has exited the volume texture can be converted into determining whether it has exited the cube.

First compute the distance m that the ray travels inside the cube from its entry point to its exit point; then, each time a sample texture coordinate is computed, also compute the distance n that the ray has so far traversed inside the cube. If n >= m, the ray has exited the cube. Given the ray direction and the sampling interval, the traversed distance n is easy to obtain.

On the CPU it is easy to obtain the distance m using analytic geometry: directly compute the two intersection points of the ray with the geometry and then take the Euclidean distance between them. On the GPU, however, computing the intersection of a ray with geometry is an old and difficult problem, especially when the geometry is irregular; even for regular geometry, the intersection computation is time-consuming. Therefore the approach of first computing the intersections and then the distance is not used.
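For comparison, here is a CPU-side sketch of that analytic approach, a ray/box intersection using the classic slab method (illustrative names, not the book's code); m is then the distance between the two hit points.

struct Vec3 { float x, y, z; };

// Returns true and the entry/exit distances tNear and tFar along
// origin + t * dir for an axis-aligned box [boxMin, boxMax].
// Assumes dir has no zero component, for brevity.
bool intersectBox(Vec3 origin, Vec3 dir, Vec3 boxMin, Vec3 boxMax,
                  float& tNear, float& tFar)
{
    tNear = -1e30f; tFar = 1e30f;
    const float o[3]    = {origin.x, origin.y, origin.z};
    const float d[3]    = {dir.x, dir.y, dir.z};
    const float bmin[3] = {boxMin.x, boxMin.y, boxMin.z};
    const float bmax[3] = {boxMax.x, boxMax.y, boxMax.z};
    for (int i = 0; i < 3; ++i) {
        float t0 = (bmin[i] - o[i]) / d[i];
        float t1 = (bmax[i] - o[i]) / d[i];
        if (t0 > t1) { float tmp = t0; t0 = t1; t1 = tmp; }
        if (t0 > tNear) tNear = t0;
        if (t1 < tFar)  tFar  = t1;
    }
    return tNear <= tFar && tFar >= 0.0f;
    // If dir is normalized, m = tFar - max(tNear, 0).
}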

What in the GPU can reflect the front-to-back order between points? The depth value (asking and answering my own question).

In the GPU, the positional relationship between points can be reflected indirectly in two ways: one is the texture coordinate, the other is the depth value. Normally, depth testing during rendering keeps only the fragments with smaller depth values. But the depth test can also be configured the other way around, discarding fragments with smaller depth values and keeping the fragment with the largest depth value (there are ready-made function calls in OpenGL and Direct3D for setting the depth comparison). With the latter setting, rendering the scene shows the set of surfaces farthest from the viewpoint.

Therefore, the method for computing the distance m is as follows:

1. Discard fragments with larger depth values (the normal rendering state) and render the scene depth map frontDepth (see Chapter 14). Each pixel value in frontDepth then represents the distance from the viewpoint to the nearest point along that direction;

2. Discard fragments with smaller depth values and render the scene depth map backDepth. Each pixel value in backDepth represents the distance from the viewpoint to the farthest point along that direction;

3. Subtract the two depth maps; the resulting value is the ray's traversal distance m (see the sketch after this list).
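A rough host-side sketch of the two depth passes follows, using standard OpenGL state calls; renderCube() and the render-to-texture plumbing (as in Chapter 14's shadow map) are assumed and not shown.

#include <GL/gl.h>

void renderCube();  // hypothetical: draws the cube that carries the volume texture

void renderDepthPasses()
{
    // Pass 1: frontDepth -- keep the nearest fragments (normal depth test).
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LESS);
    glClearDepth(1.0);                                // clear to the farthest value
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    renderCube();                                     // depth written into frontDepth

    // Pass 2: backDepth -- keep the farthest fragments. Back faces must be
    // drawn, so face culling is disabled (or the winding test reversed).
    glDisable(GL_CULL_FACE);
    glDepthFunc(GL_GREATER);
    glClearDepth(0.0);                                // clear to the nearest value
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    renderCube();                                     // depth written into backDepth

    // Per pixel, the traversal distance is then m = backDepth - frontDepth,
    // computed in a shader or on the CPU after reading both textures back.
}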

If you carefully implemented the shadow map algorithm described in Chapter 14, this process should not seem complicated. The likely stumbling block is that many people have never rendered back faces before, so here are a few of the finer points of back-face rendering to save you some detours.

Normally, back faces (faces not oriented toward the viewpoint) are not rendered. Readers with a good grounding in graphics will know that the three vertices of a triangle are usually specified in counter-clockwise order; the benefit is that the dot product of a back face's normal and the view vector is negative, which is the basis of back-face culling (and is also commonly used when implementing illumination models). So changing the depth comparison alone is not enough: to render the back-face depth map you must also disable face culling (or reverse the winding-order test). Figure 46 shows the depth maps of the front and back faces of the cube.

[Figure 46: depth maps of the front and back faces of the cube]

Appendix: in many tutorials, frontDepth and backDepth are subtracted and the result is saved in another texture, called a direction texture. Each pixel consists of R, G, B, and A channels: the first three channels store the direction, and the last channel A stores the distance value. I find this a little over-complicated. Moreover, a direction vector may have negative components, while color channels can only store non-negative values, so the direction vector must be remapped into the [0, 1] range, which may lose precision. For these reasons, I compute the ray direction in the fragment shader instead, from the viewpoint and the vertex position.
