
Interactive Rendering of Large Volume DataSets (IEEE Viz 2002)




Presentation Transcript


  1. Interactive Rendering of Large Volume DataSets (IEEE Viz 2002) Stefan Guthe, WSI/GRIS, University of Tübingen

  2. Summary • Data is stored in a multiresolution octree • Wavelet compression – wavelet coefficients are further compressed with an entropy coder • At render time, the data is decompressed on the fly and rendered using 3D texture hardware • Rendering follows a view-dependent priority • Good frame rates – but the final image size is low (256 × 256)

  3. Wavelet compression • Compression involves two steps: a wavelet representation of the data, followed by Huffman, run-length, or arithmetic encoding to further reduce the space taken by the wavelet coefficients • The compression ratio of Huffman encoding (used in the implementation) is about 3.4:1 for lossless compression
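The two-step scheme can be illustrated with a minimal sketch: a one-level 1D Haar transform followed by run-length encoding of the detail coefficients. The paper operates on 3D blocks and uses Huffman coding; the functions below are simplified assumptions for illustration only.

```python
def haar_1d(data):
    """One level of the 1D Haar transform: pairwise averages + details."""
    averages = [(a + b) / 2 for a, b in zip(data[0::2], data[1::2])]
    details = [(a - b) / 2 for a, b in zip(data[0::2], data[1::2])]
    return averages, details

def run_length_encode(coeffs):
    """Collapse runs of equal values into (value, count) pairs."""
    encoded = []
    for c in coeffs:
        if encoded and encoded[-1][0] == c:
            encoded[-1] = (c, encoded[-1][1] + 1)
        else:
            encoded.append((c, 1))
    return encoded

signal = [10, 10, 10, 10, 12, 12, 12, 12]
avg, det = haar_1d(signal)
# Smooth regions yield zero detail coefficients, which is what makes
# the second encoding stage effective.
print(det)                      # [0.0, 0.0, 0.0, 0.0]
print(run_length_encode(det))   # [(0.0, 4)]
```

Smooth volume regions produce long runs of near-zero coefficients, which is why the entropy-coding stage pays off.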

  4. Projective classification and rendering • Projective classification eliminates rendering of voxels outside the view frustum • A view-dependent priority is assigned to nodes based on their voxel depths • The number of voxels that can be displayed is preset (depending on texture memory) • A priority queue is used to insert the octree nodes one by one, with closer nodes receiving higher priority

  5. Rendering with priority • The node with the highest priority is fetched from the queue • Its high-frequency wavelet coefficients are decompressed and its children are inserted into the queue • The process halts when the number of voxels exceeds the preset limit
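The refinement loop of slides 4–5 can be sketched as follows. The node layout, the distance-based priority function, and the voxel budget are illustrative assumptions, not the paper's actual code.

```python
import heapq

VOXEL_BUDGET = 64  # preset limit, e.g. from texture memory

def priority(node):
    # Closer nodes (smaller distance to the viewer) get higher priority.
    return 1.0 / (1.0 + node["distance"])

def refine(root):
    """Pop nodes in priority order until the voxel budget is exhausted."""
    heap = [(-priority(root), 0, root)]  # max-heap via negated priority
    counter = 1                          # tie-breaker for equal priorities
    selected, used = [], 0
    while heap:
        _, _, node = heapq.heappop(heap)
        if used + node["voxels"] > VOXEL_BUDGET:
            break  # halt once the voxel count would exceed the limit
        used += node["voxels"]
        selected.append(node["name"])
        for child in node.get("children", []):
            # Decompressing the child's wavelet coefficients would go here.
            heapq.heappush(heap, (-priority(child), counter, child))
            counter += 1
    return selected, used

root = {"name": "root", "distance": 4.0, "voxels": 16, "children": [
    {"name": "near", "distance": 1.0, "voxels": 16},
    {"name": "far", "distance": 8.0, "voxels": 40},
]}
print(refine(root))  # (['root', 'near'], 32) – the far node misses the budget
```

Note how the nearby child is refined first and the distant one is dropped once the budget would overflow, which is the view-dependent behaviour the slides describe.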

  6. Trilinear interpolation and block size effects • The volume is decomposed hierarchically into blocks of k³ voxels (usually k = 16), which are rendered as 3D textures in hardware • Block size must be a power of 2, because of OpenGL texture restrictions • The target image is 256 × 256 pixels • For all 256 × 256 possible pairs of entry and exit values, volume integrals are pre-computed • Trilinear interpolation done by the texture hardware may need samples from multiple blocks of the octree – therefore neighboring blocks might have to be coalesced • A larger block size (k = 32) reduces this overhead
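A minimal sketch of trilinear interpolation, the per-sample operation the 3D texture hardware performs inside a block. This pure-Python version is for illustration; it also makes clear why a sample near a block border needs voxels from the neighboring block, forcing the coalescing mentioned above.

```python
def trilinear(volume, x, y, z):
    """Interpolate a volume (nested lists indexed [z][y][x]) at a point."""
    x0, y0, z0 = int(x), int(y), int(z)
    fx, fy, fz = x - x0, y - y0, z - z0

    def v(i, j, k):
        return volume[z0 + k][y0 + j][x0 + i]

    # Interpolate along x, then y, then z.
    c00 = v(0, 0, 0) * (1 - fx) + v(1, 0, 0) * fx
    c10 = v(0, 1, 0) * (1 - fx) + v(1, 1, 0) * fx
    c01 = v(0, 0, 1) * (1 - fx) + v(1, 0, 1) * fx
    c11 = v(0, 1, 1) * (1 - fx) + v(1, 1, 1) * fx
    c0 = c00 * (1 - fy) + c10 * fy
    c1 = c01 * (1 - fy) + c11 * fy
    return c0 * (1 - fz) + c1 * fz

# 2x2x2 volume: value 0 on the z=0 plane, 8 on the z=1 plane.
vol = [[[0, 0], [0, 0]], [[8, 8], [8, 8]]]
print(trilinear(vol, 0.5, 0.5, 0.5))  # 4.0 at the cell center
```

Any sample with a fractional coordinate reads all eight surrounding voxels, so samples in the last voxel layer of a block would index into the neighboring block.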

  7. Caching • Caching of decompressed data is required for interactive frame rates • Unaddressed issue: interpolating between multiple resolutions

  8. Results • Visible Human dataset • 2048 × 1216 × 1877 voxels × 12 bit (~6.4 GB) • Final rendered image size is 256 × 256 • Frame rate vs. compression ratio vs. PSNR
