Preliminary understanding of VTK-Volume Rendering

VTK-Volume Rendering

VTK provides three volume rendering techniques: the ray casting (ray projection) method, two-dimensional texture mapping, and VolumePro hardware-assisted volume rendering.

The ray casting method is a classic image-space rendering algorithm that produces high-quality images. Its basic idea is to emit a ray from each pixel of the image plane along the line of sight; the ray passes through the volume dataset and is sampled at a fixed step. The color value and opacity of each sampling point are calculated by interpolation, and the accumulated color and opacity are then computed from front to back or from back to front until the light is completely absorbed or has passed through the object. This method reflects changes at material boundaries well. Using the Phong model, the introduction of specular, diffuse, and ambient reflection can produce a good illumination effect; in medicine, the properties, shape features, and hierarchical relationships of various tissues and organs can be displayed, enriching the image information.

Two-dimensional texture mapping differs from the ray casting method: it is based on object-space scanning, that is, it processes the data points of object space and computes and composites the contribution of each data point to the screen pixels to form the final image. Its rendering speed is 5 to 10 times faster than the ray casting method, but its imaging quality is far less accurate than ray casting with trilinear interpolation, and it also produces artifacts when the viewing angle changes.

The VolumePro hardware-assisted volume rendering method has imaging quality that is not as good as the ray casting method but better than two-dimensional texture mapping. The VolumePro hardware supports the fastest volume rendering speed, generally at least 20 frames per second, but it currently supports only parallel projection and is expensive.

Each technique has its own advantages and disadvantages, and developers should choose based on actual needs. Currently, the ray casting method in VTK is the most widely used: not only does it have the best imaging quality, but with the development of computer technology and the continuous improvement of the algorithm, its rendering performance keeps improving as well. VolumePro, with its limited imaging quality, dependence on dedicated graphics hardware, and high price, is rarely used.

The following describes the ray casting algorithm:


The four basic steps of volume ray casting: (1) Ray casting, (2) Sampling, (3) Shading, (4) Compositing.

In its basic form, the volume ray casting algorithm comprises four steps:
Ray casting. For each pixel of the final image, a ray of sight is shot ("cast") through the volume. At this stage it is useful to consider the volume as enclosed within a bounding primitive, a simple geometric object (usually a cuboid) that is used to intersect the ray of sight with the volume.
Sampling. Along the part of the ray of sight that lies within the volume, equidistant sampling points (samples) are selected. Since in general the volume is not aligned with the ray of sight, the sampling points will usually lie in between voxels. Because of that, it is necessary to trilinearly interpolate the values of the samples from the surrounding voxels.
Shading. For each sampling point, the gradient is computed. The gradients represent the orientation of local surfaces within the volume. The samples are then shaded, i.e. colored and lighted, according to their surface orientation and the light sources in the scene.
Compositing. After all sampling points have been shaded, they are composited along the ray of sight, resulting in the final color value for the pixel that is currently being processed. The composition is derived directly from the rendering equation and is similar to blending acetate sheets on an overhead projector. It works back-to-front, i.e. the computation starts with the sample farthest from the viewer and ends with the one nearest to the viewer. This workflow direction ensures that occluded parts of the volume do not affect the resulting pixel.
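As a concrete illustration of the compositing step just described, the following is a minimal C++ sketch of back-to-front compositing along one ray. The Sample structure, the black background, and the function name are assumptions made for this illustration only; they are not VTK code.

#include <vector>

// A shaded sample along one ray: color (r, g, b) and opacity already produced
// by the sampling and shading steps (assumed structure, for illustration only).
struct Sample {
    double r, g, b;   // shaded color of the sample
    double alpha;     // opacity of the sample, in [0, 1]
};

// Back-to-front compositing: start with the sample farthest from the viewer
// and blend each nearer sample over the accumulated color.
void CompositeBackToFront(const std::vector<Sample>& samples,
                          double& outR, double& outG, double& outB)
{
    outR = outG = outB = 0.0;  // background assumed to be black
    for (auto it = samples.rbegin(); it != samples.rend(); ++it) {
        // C_out = alpha * C_sample + (1 - alpha) * C_accumulated
        outR = it->alpha * it->r + (1.0 - it->alpha) * outR;
        outG = it->alpha * it->g + (1.0 - it->alpha) * outG;
        outB = it->alpha * it->b + (1.0 - it->alpha) * outB;
    }
}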
The ray casting method is a direct rendering algorithm based on the image sequence. A ray is emitted in a fixed direction (usually the line of sight) from each pixel of the image and passes through the entire image sequence. During this process, the image sequence is sampled to obtain color information, and the color values are accumulated according to a light-absorption model until the ray has traversed the entire sequence. The final accumulated value is the color of the rendered image at that pixel.

The implementation process of ray projection in VTK is as follows:
1. Reconstruction process

Similar to surface rendering, vtkDICOMImageReader is used to read the CT slice images in DICOM format from a directory, and the images are then classified by the different gray values of bone and skin: vtkPiecewiseFunction is used to assign different opacity values to bone and skin, and vtkColorTransferFunction is used to assign them different colors. vtkFixedPointVolumeRayCastMapper runs the ray casting algorithm on the three-dimensional data to obtain the color value of each pixel of the two-dimensional projection and maps it onto the three-dimensional geometric entity represented by vtkVolume. You can use vtkVolumeProperty to set the properties of the vtkVolume object. The volume is added to a vtkRenderer, displayed through a vtkRenderWindow, and user interaction is handled through a vtkRenderWindowInteractor.

2. Main implementation process of volume rendering based on the ray casting algorithm (a consolidated code sketch follows the numbered steps below)

(1) Create a vtkDICOMImageReader object, set the directory of the DICOM CT slice files through its SetDirectoryName method, and call the Update method to read the data.

(2) Assign different opacity values to bone and skin. Based on the data source used in this article, the gray threshold of bone is set to 1300 and that of skin to 750. Create the opacity transfer function object vtkPiecewiseFunction and use its AddPoint method to define the control points of the opacity function and their corresponding opacities. The opacity of bone is set to 1.0 (that is, completely opaque), and the opacity of skin is set to (100.0 - raySkinTransparency)/100.0, where raySkinTransparency is set to 50.

(3) Create the color transfer function object vtkColorTransferFunction and assign different colors to bone and skin through its AddRGBPoint method. We set bone to white (1.0, 1.0, 1.0) and skin to red (1.0, 0.0, 0.0).

(4) Create a vtkFixedPointVolumeRayCastMapper object to perform the volume rendering operation on the three-dimensional data. Its input is the output of the vtkDICOMImageReader object, obtained through the reader's GetOutput method.

(5) Create a vtkVolume object and use its SetMapper method to connect it to the output of the vtkFixedPointVolumeRayCastMapper in the pipeline.

(6) Create a renderer vtkRenderer and a render window vtkRenderWindow. Use the AddVolume method of vtkRenderer to add the vtkVolume object, use the AddRenderer method of vtkRenderWindow to add the created vtkRenderer object to the render window, and call the Render method to draw the image.

(7) Create a vtkRenderWindowInteractor object and use its SetRenderWindow method to attach it to the vtkRenderWindow as the interactive render window, so that you can interact with the rendering result.
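To make the procedure above concrete, the following is a minimal sketch of the whole pipeline in C++. It assumes an older VTK 5-style API (New()/SetInput()/GetOutput()); the DICOM directory path, variable names, and the vtkVolumeProperty settings are illustrative assumptions rather than the original author's code, while the thresholds (750, 1300) and the transparency value (50) are taken from the steps above.

#include "vtkDICOMImageReader.h"
#include "vtkPiecewiseFunction.h"
#include "vtkColorTransferFunction.h"
#include "vtkFixedPointVolumeRayCastMapper.h"
#include "vtkVolume.h"
#include "vtkVolumeProperty.h"
#include "vtkRenderer.h"
#include "vtkRenderWindow.h"
#include "vtkRenderWindowInteractor.h"

int main()
{
    // (1) Read the DICOM CT slices from a directory (path is a placeholder).
    vtkDICOMImageReader *reader = vtkDICOMImageReader::New();
    reader->SetDirectoryName("path/to/dicom/series");
    reader->Update();

    // (2) Opacity transfer function: skin threshold 750, bone threshold 1300.
    double raySkinTransparency = 50.0;
    vtkPiecewiseFunction *opacity = vtkPiecewiseFunction::New();
    opacity->AddPoint(750,  (100.0 - raySkinTransparency) / 100.0);  // skin
    opacity->AddPoint(1300, 1.0);                                    // bone, fully opaque

    // (3) Color transfer function: skin red, bone white.
    vtkColorTransferFunction *color = vtkColorTransferFunction::New();
    color->AddRGBPoint(750,  1.0, 0.0, 0.0);   // skin
    color->AddRGBPoint(1300, 1.0, 1.0, 1.0);   // bone

    // (4) Ray cast mapper, fed by the reader's output.
    vtkFixedPointVolumeRayCastMapper *mapper = vtkFixedPointVolumeRayCastMapper::New();
    mapper->SetInput(reader->GetOutput());

    // Volume property bundles the transfer functions (assumed settings).
    vtkVolumeProperty *property = vtkVolumeProperty::New();
    property->SetColor(color);
    property->SetScalarOpacity(opacity);
    property->ShadeOn();

    // (5) The volume connected to the mapper.
    vtkVolume *volume = vtkVolume::New();
    volume->SetMapper(mapper);
    volume->SetProperty(property);

    // (6) Renderer and render window.
    vtkRenderer *renderer = vtkRenderer::New();
    renderer->AddVolume(volume);
    vtkRenderWindow *window = vtkRenderWindow::New();
    window->AddRenderer(renderer);
    window->Render();

    // (7) Interactor for mouse and keyboard interaction.
    vtkRenderWindowInteractor *interactor = vtkRenderWindowInteractor::New();
    interactor->SetRenderWindow(window);
    interactor->Start();

    return 0;
}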

PS: It should be noted that the pixel type of the DCM files produced by CT or MRI is generally short or char (I'm not sure); however, in VTK the input scalar type of vtkVolumeRayCastMapper must be unsigned char or unsigned short. Therefore, you must use vtkImageShiftScale to perform the conversion.
// The following is a snippet of my program; I hope it is useful to you.
// (I read the DCM files through ITK's GDCMImageIO, and the sequence to be
// displayed must go through VTK, so an ITK-to-VTK connector is used.)
vtkImageShiftScale *scale = vtkImageShiftScale::New();
scale->SetInput(connector->GetOutput());      // connector: ITK-to-VTK image bridge
scale->SetOutputScalarTypeToUnsignedChar();   // convert to unsigned char for the mapper
scale->Update();

vtkVolumeRayCastMapper *volumeMapper = vtkVolumeRayCastMapper::New();
volumeMapper->AutoAdjustSampleDistancesOff();
volumeMapper->SetInput(scale->GetOutput());

The following is an understanding of Volume Rendering:

M. Levoy wrote that "volume rendering describes a wide range of techniques for generating images from three-dimensional scalar data" in the article "Display of Surfaces from Volume Data" (reference [14]).

The core of volume rendering is to display the interior details of the volume, not just its surface details. My definition is: a technique that, based on three-dimensional volume data, displays all of the volume's details on a two-dimensional image at the same time is called volume rendering. Using volume rendering, you can display the overall distribution of the various materials in a single image, and bring out isosurfaces through opacity control.

For example, a CT image shows the muscle and bone information of the human body rather than its surface information (which is what a photograph shows). Therefore, a very intuitive analogy for the difference between volume rendering and surface rendering is this: a photo taken by an ordinary camera and a CT image taken by a CT scanner are both two-dimensional images, but the things they display are different.

The goal of volume rendering is to display the details of an entire space in one image. For example, imagine a house in front of you that contains furniture and household appliances. Standing outside the house, you can only see its external shape; you cannot observe the layout of the house or the objects inside it. Now suppose the house and the objects in it are semi-transparent, so that you can see all the details at the same time. This is the effect achieved by volume rendering.

Although illumination models are usually used for surface rendering, that does not mean they cannot be used in volume rendering. In fact, volume rendering is based on the principle of light absorption by objects, and in terms of implementation the final computation model is based on transparency compositing. In addition, classical illumination models such as the Phong model and the Cook-Torrance model can be used as a supplement to volume rendering to improve the rendering effect and enhance realism.
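As a rough illustration of how a classical illumination model can supplement volume rendering, here is a minimal C++ sketch of Phong shading applied to a single sample, using the local gradient as the surface normal. The vector type, function name, and coefficients are assumptions made for this illustration; VTK performs this kind of shading internally when shading is enabled on vtkVolumeProperty.

#include <cmath>
#include <algorithm>

// Minimal 3D vector helpers (assumed, for illustration only).
struct Vec3 { double x, y, z; };
static double Dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 Normalize(const Vec3& v) {
    double len = std::sqrt(Dot(v, v));
    return (len > 0.0) ? Vec3{v.x/len, v.y/len, v.z/len} : Vec3{0.0, 0.0, 0.0};
}

// Phong illumination at one sample point: the gradient of the scalar field
// serves as the surface normal, and ambient + diffuse + specular terms are summed.
double PhongShade(const Vec3& gradient, const Vec3& lightDir, const Vec3& viewDir,
                  double ka, double kd, double ks, double shininess)
{
    Vec3 n = Normalize(gradient);
    Vec3 l = Normalize(lightDir);
    Vec3 v = Normalize(viewDir);

    double diffuse = std::max(0.0, Dot(n, l));

    // Reflect the light direction about the normal: r = 2 (n . l) n - l
    Vec3 r{2.0 * Dot(n, l) * n.x - l.x,
           2.0 * Dot(n, l) * n.y - l.y,
           2.0 * Dot(n, l) * n.z - l.z};
    double specular = std::pow(std::max(0.0, Dot(Normalize(r), v)), shininess);

    return ka + kd * diffuse + ks * specular;   // scalar intensity multiplier for the sample color
}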
