After successfully compiling VolumeShop yesterday, I read Stefan Bruckner's paper "VolumeShop: An Interactive System for Direct Volume Illustration".

His work is mainly based on rendering a selection volume together with the raw data volume. The user first provides a selection volume, and the transfer function is then evaluated using the intersection of the selection and the data volume. The lighting calculation uses a 2D lookup, with N·L and N·H as the X and Y axes respectively. Other features, such as volume painting, also operate through the selection volume.
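To make the 2D lighting idea concrete, here is a minimal sketch of such a lookup: a pre-computed 2D table is indexed by N·L on one axis and N·H on the other. The function and parameter names are my own, not VolumeShop's actual code, and the nearest-texel sampling is a simplification.

```python
import numpy as np

def lighting_2d_lookup(n, l, h, lighting_tex):
    """Sample a 2D lighting transfer function indexed by (N.L, N.H).

    n, l, h: normal, light, and half vectors (light/half assumed unit length).
    lighting_tex: 2D array holding the pre-computed lighting response.
    This is an illustrative sketch, not VolumeShop's implementation.
    """
    n = n / np.linalg.norm(n)
    u = np.clip(np.dot(n, l), -1.0, 1.0)  # diffuse-like axis (N.L)
    v = np.clip(np.dot(n, h), -1.0, 1.0)  # specular-like axis (N.H)
    rows, cols = lighting_tex.shape[:2]
    # remap [-1, 1] to texel indices, nearest-neighbor for simplicity
    i = int((u * 0.5 + 0.5) * (rows - 1))
    j = int((v * 0.5 + 0.5) * (cols - 1))
    return lighting_tex[i, j]
```

With both dot products at 1 (normal facing the light and half vector), the lookup lands in the brightest corner of the table; varying the table contents changes the shading style without touching the geometry.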
I feel that current work on illustrative visualization mainly involves three aspects. First, adding a saliency field or selection data directly alongside the volume, which amounts to another 3D texture like the volume itself. Second, extracting a surface from the volume and working on that surface, such as lines, stippling, or mesh deformation. Third, continuing to adjust alpha in the transfer function. Of course, a lot of work combines 1, 2, and 3. There is not much work left on the transfer function alone, because too many papers covered it in the past few years. Stefan's style transfer function is inspired by Sloan's lit sphere, implemented by indexing the sphere map with NX and NY. In previous posts I introduced aspects 1 and 2.
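The lit-sphere indexing mentioned above can be sketched as follows: a pre-shaded sphere image is sampled using only the x and y components of the eye-space normal, so the "style" lives entirely in the image. Again, the function name, argument names, and nearest-pixel sampling are my own assumptions for illustration.

```python
import numpy as np

def lit_sphere_lookup(n_eye, sphere_map):
    """Shade a point by sampling a lit-sphere image with the eye-space normal.

    Only NX and NY are needed: for a front-facing normal they select a
    pixel on the pre-shaded sphere image. Illustrative sketch only.
    """
    n = n_eye / np.linalg.norm(n_eye)
    h, w = sphere_map.shape[:2]
    # remap nx, ny from [-1, 1] to pixel coordinates
    j = int((n[0] * 0.5 + 0.5) * (w - 1))
    i = int((-n[1] * 0.5 + 0.5) * (h - 1))  # flip y: image rows grow downward
    return sphere_map[i, j]
```

A normal pointing straight at the viewer maps to the center of the sphere image, while grazing normals map toward its silhouette, which is what lets a painted sphere transfer its shading style onto arbitrary geometry.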
So the existing work splits into this many directions. What should I do next?