In addition to pre-classification and post-classification, a transfer function can be applied by pre-integration. Compared with the two previous methods, the pre-integrated transfer function significantly improves both performance and rendering quality. This method is described below.
I. Principles
Neither pre-classification nor post-classification meets the requirements for correctly reconstructing the 3D data field. For the discretized form of the volume-rendering integral, the image is exactly correct only as the sampling interval d tends to 0. According to the sampling theorem, a signal can be reconstructed correctly only if it is sampled at no less than its Nyquist frequency. Because the transfer function is in general non-linear, the sampling frequency required to evaluate the volume-rendering integral correctly increases dramatically: when sampling along the ray direction, the Nyquist frequencies of the fields c(s(x)) and τ(s(x)) are approximately the product of the maximum of the Nyquist frequencies of the transfer functions c(s) and τ(s) with the Nyquist frequency of the scalar field s(x). If the sampling frequency used for the volume-rendering integral falls below this Nyquist frequency, image quality suffers; if the sampling frequency is raised to match it, the computational workload grows and rendering slows down.
Classification maps the scalar value s = s(x) of the original volume data to an RGBA value (a color and an extinction coefficient) through a transfer function. The idea of pre-integrated classification is to split the numerical integration into two parts: first, sample the continuous scalar field s(x) along the viewing ray; the Nyquist frequency required for this sampling is independent of the transfer function. Second, apply the transfer function, which maps the scalar value to the color c(s) and the extinction coefficient τ(s).
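As background, the difference between pre-classification and post-classification mentioned above can be made concrete in a small C sketch; tf(), the RGBA struct, and both function names are hypothetical illustrations, not code from the article:

typedef struct { float r, g, b, a; } RGBA;

/* tf() is a hypothetical helper that evaluates the transfer function
   for a normalized scalar value. */
RGBA tf(float s);

static RGBA lerp_rgba(RGBA x, RGBA y, float w)
{
    RGBA out = { x.r + w * (y.r - x.r), x.g + w * (y.g - x.g),
                 x.b + w * (y.b - x.b), x.a + w * (y.a - x.a) };
    return out;
}

/* Pre-classification: classify the samples first, then interpolate
   the resulting colors. */
RGBA pre_classify(float s0, float s1, float w)
{
    return lerp_rgba(tf(s0), tf(s1), w);
}

/* Post-classification: interpolate the scalar first, then classify
   the interpolated value. */
RGBA post_classify(float s0, float s1, float w)
{
    return tf(s0 + w * (s1 - s0));
}

Pre-integrated classification avoids committing to either order at a single sample point: it integrates the transfer function over the entire range of scalar values that a segment covers.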
In the first step, the scalar value is sampled along a viewing ray; again, the Nyquist frequency of this sampling is not affected by the transfer function. For pre-integrated classification, each pair of consecutive samples defines a linear, one-dimensional scalar field on the segment between them. The integral over each such linear segment can be evaluated efficiently by precomputing a lookup table indexed by three parameters: the scalar value at the front sample point, the scalar value at the back sample point, and the length of the segment, as shown in Figure 1:
Figure 1: Color and opacity of the i-th segment along a viewing ray
The opacity of segment i is calculated as (Formula 1):

$$\alpha_i \approx 1 - \exp\left(-d \int_0^1 \tau\big((1-\omega)\,s_f + \omega\,s_b\big)\, d\omega\right)$$

It is a function of the front sample value $s_f$, the back sample value $s_b$, and the segment length $d$.
The color of segment i is calculated as (Formula 2):

$$C_i \approx d \int_0^1 c\big((1-\omega)\,s_f + \omega\,s_b\big)\, \exp\left(-d \int_0^{\omega} \tau\big((1-\omega')\,s_f + \omega'\,s_b\big)\, d\omega'\right) d\omega$$
In this way, pre-integrated classification evaluates the following sum to obtain the color integral along one viewing ray (Formula 3):

$$I \approx \sum_{i=0}^{n-1} C_i \prod_{j=0}^{i-1} (1 - \alpha_j)$$
The derivation of this formula is as follows:
The volume-rendering integral along one viewing ray is:

$$I = \int_0^D c\big(s(x(\lambda))\big)\, \exp\left(-\int_0^{\lambda} \tau\big(s(x(\lambda'))\big)\, d\lambda'\right) d\lambda$$
Splitting the ray into n segments of length $d = D/n$ and analyzing segment i, the attenuation factor in the formula above can be approximated by a product of per-segment transparencies:

$$\exp\left(-\int_0^{i d} \tau\big(s(x(\lambda'))\big)\, d\lambda'\right) = \prod_{j=0}^{i-1} \exp\left(-\int_{j d}^{(j+1) d} \tau\big(s(x(\lambda'))\big)\, d\lambda'\right) = \prod_{j=0}^{i-1} (1 - \alpha_j)$$
$\alpha_i$ is the accumulated opacity of segment i, approximately equal to:

$$\alpha_i = 1 - \exp\left(-\int_{i d}^{(i+1) d} \tau\big(s(x(\lambda))\big)\, d\lambda\right) \approx 1 - \exp\left(-d \int_0^1 \tau\big((1-\omega)\,s_f + \omega\,s_b\big)\, d\omega\right)$$
Sometimes this is simplified further, using $1 - e^{-x} \approx x$:

$$\alpha_i \approx d \int_0^1 \tau\big((1-\omega)\,s_f + \omega\,s_b\big)\, d\omega$$
$1 - \alpha_i$ is the accumulated transparency of segment i. The color contribution of the segment can be simplified in the same way:

$$C_i \approx d \int_0^1 c\big((1-\omega)\,s_f + \omega\,s_b\big)\, \exp\left(-d \int_0^{\omega} \tau\big((1-\omega')\,s_f + \omega'\,s_b\big)\, d\omega'\right) d\omega$$
The integral along the whole ray can then be approximated by the sum:

$$I \approx \sum_{i=0}^{n-1} C_i \prod_{j=0}^{i-1} (1 - \alpha_j)$$
In this way, the back-to-front compositing algorithm is:

$$C'_i = C_i + (1 - \alpha_i)\, C'_{i+1}$$

Here $C'_i$ is the color accumulated from segment i through the back of the volume; the recursion starts with $C'_n = 0$, and the final result is $I \approx C'_0$.
The segment color can often be written as an associated (opacity-weighted) color:

$$C_i \approx \alpha_i\, c_i$$
Then $I$ can be rewritten:

$$I \approx \sum_{i=0}^{n-1} \alpha_i\, c_i \prod_{j=0}^{i-1} (1 - \alpha_j)$$
and the back-to-front compositing algorithm becomes:

$$C'_i = \alpha_i\, c_i + (1 - \alpha_i)\, C'_{i+1}$$
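To make the compositing loop concrete, here is a minimal C sketch of back-to-front compositing along one ray, assuming the per-segment colors and opacities have already been fetched from the pre-integration lookup table; the array names and types are hypothetical, not from the original article:

#include <stddef.h>

typedef struct { float r, g, b; } Color;

/* Back-to-front compositing along one ray:
   acc = C_i + (1 - alpha_i) * acc, for i = n-1 down to 0.
   seg_color[i] and seg_alpha[i] are the pre-integrated color and
   opacity of segment i (hypothetical input arrays). */
Color composite_back_to_front(const Color *seg_color,
                              const float *seg_alpha, size_t n)
{
    Color acc = { 0.0f, 0.0f, 0.0f };   /* C'_n = 0 */
    for (size_t i = n; i-- > 0; ) {
        float t = 1.0f - seg_alpha[i];  /* transparency of segment i */
        acc.r = seg_color[i].r + t * acc.r;
        acc.g = seg_color[i].g + t * acc.g;
        acc.b = seg_color[i].b + t * acc.b;
    }
    return acc;  /* C'_0, the color seen along the ray */
}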
The pre-integrated classification method computes the color and opacity of the line segment between two adjacent sampling points on a ray, determined by the parameters $s_f$, $s_b$, and $d$; the other classification methods compute the color and opacity only at each sampling point, determined by the scalar value s and the sampling interval d. When the pre-integrated method computes the intensity I of a pixel, each short line segment is integrated first, yielding the color and opacity of a thin slab of thickness d, and these slab values then enter the compositing of the volume-rendering integral. Pre-classification and post-classification, by contrast, feed only the colors and opacities at the sampling points into the compositing. The former is therefore called slab-by-slab integration and the latter slice-by-slice integration, as Figure 2 shows:
Figure 2: Integration modes (slab-by-slab vs. slice-by-slice)
One disadvantage of pre-integrated classification is that whenever the transfer function changes, the colors and opacities of a large number of segment combinations must be recomputed. For games and entertainment applications, where the transfer function is fixed, this is not a concern; but it makes the method poorly suited to applications that modify the transfer function interactively, such as scientific visualization.
II. Algorithm Optimization
Several measures can be taken to optimize the algorithm.
1. Set the segment length to a constant, reducing the dimensionality of the lookup table from 3 to 2. This is exact for equidistant sampling, which holds for orthographic projection with view-aligned 3D-texture slices, and it is still a good approximation for perspective projection. For axis-aligned 2D-texture slices, however, the sampling distance depends on the viewing direction, so a constant segment length is not a good approximation.
2. A local change to the transfer function does not require recomputing the entire lookup table. For example, if the transfer function is modified only at the scalar value s, then only the table entries C(sf, sb, d) and α(sf, sb, d) with sf <= s <= sb or sb <= s <= sf need to be recomputed (see the sketch after this list). In the worst case, half of the table must be recomputed.
3. Introducing integral functions can greatly improve efficiency, because they accelerate the evaluation of the preceding formulas.
For example, for a function f(x), define the integral function

$$G(s) = \int_0^s f(x)\, dx$$

Then any definite integral of f can be computed as a difference of two values of G:

$$\int_a^b f(x)\, dx = G(b) - G(a)$$
For the pre-integration method, the integral functions only need to be evaluated once, over all scalar values; the integrals for every combination of the scalar values $s_f$ and $s_b$ can then be obtained from them.
In this way, the opacity calculation can be transformed into:

$$\alpha_i(s_f, s_b, d) \approx 1 - \exp\left(-\frac{d}{s_b - s_f}\,\big(T(s_b) - T(s_f)\big)\right)$$

The integral function used is:

$$T(s) = \int_0^s \tau(s')\, ds'$$
To use an integral function for the color value as well, the attenuation of light within a slab must be neglected:

$$C_i(s_f, s_b, d) \approx \frac{d}{s_b - s_f}\,\big(K(s_b) - K(s_f)\big)$$

The integral function is:

$$K(s) = \int_0^s \tau(s')\, c(s')\, ds'$$
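The following minimal C sketch illustrates the partial table update of optimization 2; update_lookup_table() and the recompute_entry callback are hypothetical names, not part of the original code:

/* After the transfer function changes only at the scalar value s,
   refresh just the entries whose segment [min(sf,sb), max(sf,sb)]
   contains s; recompute_entry is a hypothetical callback that fills
   one (sf, sb) entry of the table. */
void update_lookup_table(int s, void (*recompute_entry)(int sf, int sb))
{
    for (int sb = 0; sb < 256; sb++) {
        for (int sf = 0; sf < 256; sf++) {
            int lo = sf < sb ? sf : sb;
            int hi = sf < sb ? sb : sf;
            if (lo <= s && s <= hi)
                recompute_entry(sf, sb);
        }
    }
}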
III. Program Implementation
The C/C++ code that generates the pre-integration lookup table:
// compute a 256 * 256 pre-integration map table
#include <math.h>
#include <GL/gl.h>

// clamp x into [lo, hi] in place (macro added here; the original code
// relied on an externally defined CLAMP)
#define CLAMP(lo, x, hi) ((x) = (x) < (lo) ? (lo) : ((x) > (hi) ? (hi) : (x)))

GLuint algo_preIntegrate(unsigned char *transferFuncTable)
{
    GLuint colorTexture;
    double r = 0.0, g = 0.0, b = 0.0, a = 0.0;
    int rcol, gcol, bcol, acol;
    double rInt[256], gInt[256], bInt[256], aInt[256];
    GLubyte lookupImg[256 * 256 * 4];
    int smin, smax;
    double factor, tauc;
    int lookupindex = 0;

    // compute integral functions T(s) and K(s) by trapezoidal summation
    rInt[0] = gInt[0] = bInt[0] = aInt[0] = 0;
    for (int i1 = 1; i1 < 256; i1++) {
        tauc = (transferFuncTable[(i1 - 1) * 4 + 3] + transferFuncTable[i1 * 4 + 3]) / 2.0;
        r = r + (transferFuncTable[(i1 - 1) * 4 + 0] + transferFuncTable[i1 * 4 + 0]) / 2.0 * tauc / 255.0;
        g = g + (transferFuncTable[(i1 - 1) * 4 + 1] + transferFuncTable[i1 * 4 + 1]) / 2.0 * tauc / 255.0;
        b = b + (transferFuncTable[(i1 - 1) * 4 + 2] + transferFuncTable[i1 * 4 + 2]) / 2.0 * tauc / 255.0;
        a = a + tauc;
        rInt[i1] = r;
        gInt[i1] = g;
        bInt[i1] = b;
        aInt[i1] = a;
    }

    // compute the look-up table from the integral functions
    for (int sb = 0; sb < 256; sb++) {
        for (int sf = 0; sf < 256; sf++) {
            if (sb < sf) { smin = sb; smax = sf; }
            else         { smin = sf; smax = sb; }

            if (smin != smax) {
                // generic case: differences of the integral functions
                factor = 1.0 / (double)(smax - smin);
                rcol = (int)((rInt[smax] - rInt[smin]) * factor);
                gcol = (int)((gInt[smax] - gInt[smin]) * factor);
                bcol = (int)((bInt[smax] - bInt[smin]) * factor);
                acol = (int)(256 * (1.0 - exp(-(aInt[smax] - aInt[smin]) * factor / 255.0)));
            } else if (sb == 0 && sf == 0) {
                // first diagonal entry: black and fully transparent
                // (the original code read an uninitialized 'factor' here;
                // since aInt[0] is 0, the correct entry is simply 0)
                rcol = gcol = bcol = acol = 0;
            } else {
                // remaining diagonal entries: average the two
                // previously computed neighbors
                rcol = (int)((lookupImg[(sb - 1) * 256 * 4 + sf * 4 + 0] + lookupImg[sb * 256 * 4 + (sf - 1) * 4 + 0]) * 0.5);
                gcol = (int)((lookupImg[(sb - 1) * 256 * 4 + sf * 4 + 1] + lookupImg[sb * 256 * 4 + (sf - 1) * 4 + 1]) * 0.5);
                bcol = (int)((lookupImg[(sb - 1) * 256 * 4 + sf * 4 + 2] + lookupImg[sb * 256 * 4 + (sf - 1) * 4 + 2]) * 0.5);
                acol = (int)((lookupImg[(sb - 1) * 256 * 4 + sf * 4 + 3] + lookupImg[sb * 256 * 4 + (sf - 1) * 4 + 3]) * 0.5);
            }
            CLAMP(0, rcol, 255);
            CLAMP(0, gcol, 255);
            CLAMP(0, bcol, 255);
            CLAMP(0, acol, 255);
            lookupImg[lookupindex++] = (GLubyte)rcol;
            lookupImg[lookupindex++] = (GLubyte)gcol;
            lookupImg[lookupindex++] = (GLubyte)bcol;
            lookupImg[lookupindex++] = (GLubyte)acol;
        }
    }

    // create the 2D lookup texture
    glGenTextures(1, &colorTexture);
    glBindTexture(GL_TEXTURE_2D, colorTexture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 256, 256, 0, GL_RGBA,
                 GL_UNSIGNED_BYTE, lookupImg);
    return colorTexture;
}
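A minimal usage sketch, assuming a made-up grayscale-ramp transfer function (the helper and the ramp are illustrations, not from the original article):

/* Example (hypothetical): build a simple grayscale-ramp transfer
   function and create the pre-integration texture from it. */
GLuint makeExampleTable(void)
{
    static unsigned char tfTable[256 * 4];
    for (int s = 0; s < 256; s++) {
        tfTable[s * 4 + 0] = (unsigned char)s;  /* R */
        tfTable[s * 4 + 1] = (unsigned char)s;  /* G */
        tfTable[s * 4 + 2] = (unsigned char)s;  /* B */
        tfTable[s * 4 + 3] = (unsigned char)s;  /* opacity ramp */
    }
    /* requires a current OpenGL context */
    return algo_preIntegrate(tfTable);
}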
Cg programs:
Vertex program:
// Pre-integration vertex program in Cg
struct appin {
    float4 Position : POSITION;
    float4 TCoords0 : TEXCOORD0;
};

struct v2f {
    float4 HPosition : POSITION;
    float4 TCoords0  : TEXCOORD0;
    float4 TCoords1  : TEXCOORD1;
};

v2f main(appin IN,
         uniform float4x4 ModelViewProj,
         uniform float4x4 ModelView,
         uniform float4x4 ModelViewI,
         uniform float4x4 TexMatrix,
         uniform float SliceDistance)
{
    v2f OUT;

    // compute texture coordinate for sF
    OUT.TCoords0 = mul(TexMatrix, IN.TCoords0);

    // transform view position and view direction to object space
    float4 vPosition = float4(0, 0, 0, 1);
    vPosition = mul(ModelViewI, vPosition);
    float4 vDir = float4(0.f, 0.f, -1.f, 1.f);
    vDir = normalize(mul(ModelViewI, vDir));

    // compute position of sB
    float4 eyeToVert = normalize(IN.Position - vPosition);
    float4 sB = IN.Position - eyeToVert * (SliceDistance / dot(vDir, eyeToVert));

    // compute texture coordinate for sB
    OUT.TCoords1 = mul(TexMatrix, sB);

    // transform vertex position into homogeneous clip space
    OUT.HPosition = mul(ModelViewProj, IN.Position);
    return OUT;
}
Fragment program:
// Pre-integration fragment program in Cg
struct v2f {
    float4 TexCoord0 : TEXCOORD0;
    float4 TexCoord1 : TEXCOORD1;
};

float4 main(v2f IN,
            uniform sampler3D Volume,
            uniform sampler2D PreIntegrationTable) : COLOR
{
    float4 lookup;
    // sample front scalar
    lookup.x = tex3D(Volume, IN.TexCoord0.xyz).x;
    // sample back scalar
    lookup.y = tex3D(Volume, IN.TexCoord1.xyz).x;
    // look up and return the pre-integrated value
    return tex2D(PreIntegrationTable, lookup.xy);
}
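For completeness, here is a sketch of the C-side setup that loads these two Cg programs and binds their parameters through the Cg runtime; the file names, the function name, and the column-major texMatrix argument are assumptions, not from the original article:

#include <Cg/cg.h>
#include <Cg/cgGL.h>

/* Hypothetical setup: load the two Cg programs above and bind their
   parameters. "preint_vp.cg" and "preint_fp.cg" are assumed file names. */
void setupPreIntegration(GLuint volumeTex, GLuint preIntTex,
                         float sliceDistance, const float texMatrix[16])
{
    CGcontext ctx = cgCreateContext();
    CGprofile vp = cgGLGetLatestProfile(CG_GL_VERTEX);
    CGprofile fp = cgGLGetLatestProfile(CG_GL_FRAGMENT);

    CGprogram vprog = cgCreateProgramFromFile(ctx, CG_SOURCE, "preint_vp.cg",
                                              vp, "main", NULL);
    CGprogram fprog = cgCreateProgramFromFile(ctx, CG_SOURCE, "preint_fp.cg",
                                              fp, "main", NULL);
    cgGLLoadProgram(vprog);
    cgGLLoadProgram(fprog);

    /* vertex-program parameters; the state-matrix parameters sample the
       current GL matrices, so re-set them whenever the view changes */
    cgGLSetStateMatrixParameter(cgGetNamedParameter(vprog, "ModelViewProj"),
                                CG_GL_MODELVIEW_PROJECTION_MATRIX, CG_GL_MATRIX_IDENTITY);
    cgGLSetStateMatrixParameter(cgGetNamedParameter(vprog, "ModelView"),
                                CG_GL_MODELVIEW_MATRIX, CG_GL_MATRIX_IDENTITY);
    cgGLSetStateMatrixParameter(cgGetNamedParameter(vprog, "ModelViewI"),
                                CG_GL_MODELVIEW_MATRIX, CG_GL_MATRIX_INVERSE);
    cgGLSetMatrixParameterfc(cgGetNamedParameter(vprog, "TexMatrix"), texMatrix);
    cgGLSetParameter1f(cgGetNamedParameter(vprog, "SliceDistance"), sliceDistance);

    /* fragment-program textures */
    cgGLSetTextureParameter(cgGetNamedParameter(fprog, "Volume"), volumeTex);
    cgGLSetTextureParameter(cgGetNamedParameter(fprog, "PreIntegrationTable"), preIntTex);
    cgGLEnableTextureParameter(cgGetNamedParameter(fprog, "Volume"));
    cgGLEnableTextureParameter(cgGetNamedParameter(fprog, "PreIntegrationTable"));

    /* bind and enable before drawing the slice polygons */
    cgGLBindProgram(vprog);
    cgGLEnableProfile(vp);
    cgGLBindProgram(fprog);
    cgGLEnableProfile(fp);
}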
Finally, to compare the effects of the pre-integrated algorithm and the post-classification algorithm, the same 256*256*225 raw data set is reconstructed in three dimensions and rendered by both methods with the same transfer function and the same sampling step. The result of the former is clearly much better.
Pre-integrated algorithm:
Post-classification algorithm: