Fast Depth-of-Field Rendering with Surface Splatting
Jaroslav Křivánek, CTU Prague / IRISA – INRIA Rennes
Jiří Žára, CTU Prague
Kadi Bouatouch, IRISA – INRIA Rennes
Computer Graphics Group
Goal
• Depth-of-field rendering with point-based objects
• Why point-based? Efficient for complex objects
• Why depth-of-field? Natural-looking images
Overview
• Introduction
  – Point-based rendering
  – Depth-of-field
  – Depth-of-field techniques
• Our contribution: point-based depth-of-field rendering
  – Basic approach
  – Extended method: depth-of-field with level of detail
• Results
• Discussion
• Conclusions
Point-Based Rendering
• Object represented by points without connectivity
• A point (surfel) stores position, normal, radius, and material
• Rendering = screen-space surface reconstruction
• Efficient for very complex objects
Depth-of-Field
• More natural-looking images
• Important depth cue for the perception of scene configuration
• Draws attention to the focused objects
Thin Lens Camera Model
(diagram: lens, image plane, focal plane)
• Circle of Confusion (CoC): C = f(F, F/n, D, P)
  – F … focal distance
  – F/n … lens diameter
  – P … focal plane distance
  – D … point depth
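The slide only names the parameters of C = f(F, F/n, D, P); one concrete choice is the standard textbook thin-lens relation. The function below is a sketch of that relation under this assumption, and its name and units are ours, not the paper's:

```python
def coc_diameter(F, aperture, P, D):
    """Circle-of-confusion diameter under the standard thin-lens model.

    F        -- focal distance of the lens
    aperture -- lens diameter, i.e. F/n
    P        -- focal plane distance
    D        -- depth of the imaged point
    All distances in the same units; the result is in those units too.
    """
    # The disc diameter grows with the depth offset |D - P| from the
    # focal plane and with the aperture size; a point exactly on the
    # focal plane maps to a single point (C = 0).
    return aperture * abs(D - P) / D * F / (P - F)

# A point on the focal plane is perfectly sharp:
coc_diameter(0.05, 0.025, 2.0, 2.0)  # -> 0.0
```

Note how the formula captures the usual photographic intuition: opening the aperture (larger F/n) or moving a point away from the focal plane both enlarge the CoC.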
Depth-of-Field Techniques in CG
• Supersampling
  – Distributed ray tracing [Cook et al. 1984]: sample the light paths through the lens
  – Multisampling [Haeberli & Akeley 1990]: render several images from different viewpoints on the lens and average them using the accumulation buffer
Depth-of-Field Techniques in CG
• Post-filtering [Potmesil & Chakravarty 1981]
  – Pipeline: image synthesizer → image + depth → focus processor (filtering) → image with DOF
  – Out-of-focus pixels are spread over their CoC
  – Problems: intensity leakage, hypo-intensities; slow for larger kernels
Point-Based Rendering: Splatting
• Draw each point as a fuzzy splat (an ellipse)
• Image = Σi SPLATi
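As a minimal illustration of Image = Σi SPLATi, the sketch below accumulates Gaussian splats into an image buffer and normalizes by the summed weights, in the spirit of EWA-style splatting. It ignores depth and visibility, and all names and parameters are hypothetical:

```python
import numpy as np

def splat_image(points, width, height):
    """Accumulate fuzzy (Gaussian) splats into a grayscale image.

    points -- iterable of (x, y, radius, color) screen-space splats.
    Depth and visibility are ignored; the accumulated colors are
    normalized by the summed splat weights, as in EWA splatting.
    """
    img = np.zeros((height, width))
    wsum = np.zeros((height, width))
    ys, xs = np.mgrid[0:height, 0:width]
    for (x, y, r, c) in points:
        # Isotropic Gaussian reconstruction kernel centred on the splat
        w = np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * r ** 2))
        img += w * c
        wsum += w
    return img / np.maximum(wsum, 1e-8)
```

For example, two splats of different colors placed side by side each dominate the pixels nearest to them, with a smooth transition where their kernels overlap.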
Our Basic Approach
• Post-filtering, with the summation and the focus filtering swapped
• Instead of focus-filtering the composed image (Image = Σi SPLATi, then filtering), focus-filter each splat separately and sum the results: Image with DOF = Σi FocusFilter(SPLATi)
Our Basic Approach
• Splat = reconstruction kernel r
• DOF filter: Gaussian G_QDOF
• Blurred reconstruction kernel: r_DOF = r ⊗ G_QDOF
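Since the splat kernels are themselves Gaussians, the convolution r ⊗ G_QDOF has a closed form: convolving two Gaussians adds their variance matrices, so blurring a splat for depth-of-field just widens its screen-space ellipse. A tiny sketch of this, assuming 2×2 covariance matrices (the function name is ours):

```python
import numpy as np

def blur_splat_kernel(Q_splat, Q_dof):
    """Convolve a Gaussian splat kernel with a Gaussian DOF filter.

    For Gaussians, convolution simply adds the variance matrices:
    G_Q1 (x) G_Q2 = G_(Q1 + Q2).  So no image-space filtering pass is
    needed -- the splat's 2x2 covariance is enlarged before rasterizing.
    """
    return np.asarray(Q_splat) + np.asarray(Q_dof)

# A circular splat convolved with a circular CoC Gaussian stays
# circular, with the variances summed:
blur_splat_kernel([[1.0, 0.0], [0.0, 1.0]], [[4.0, 0.0], [0.0, 4.0]])
```

This closed form is what makes per-splat blurring cheap: each splat is still drawn once, only larger.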
Properties of the Basic Approach
PROS
+ Avoids leakage: reconstruction takes the splat depth into account
+ No hypo-intensities: visibility is resolved after blurring
+ Handles transparency, in the same way as EWA splatting (A-buffer)
CONS
− Very slow, especially for large apertures
  – Many large overlapping splats yield a high fragment count
  – E.g. Lion model: 2.3 M fragments without blur, 90.2 M with blur (40× more)
Our Extended Method
• Use Level of Detail (LOD) to attack the complexity: blurred areas need less detail
• Select a lower LOD for the blurred parts
• The number of fragments then grows much more slowly
  – E.g. Lion model: 2.3 M fragments without blur, 5.3 M with blur (2.3× more)
(figure: blurred image vs. selected LOD)
Observation
• Selecting a lower LOD for rendering is equivalent to 1) selecting the finest LOD and 2) low-pass filtering in screen space
• So LOD can serve as a means for blurring, not only to reduce complexity
(figure: fine LOD vs. lower LOD)
Effect of LOD Selection
• How do we quantify the effect of LOD selection in terms of blur in the resulting image?
• We use a bounding sphere hierarchy: QSplat [Rusinkiewicz & Levoy 2000]
Bounding Sphere Hierarchy
• Building the hierarchy levels: each coarser level is produced by low-pass filtering (a Gaussian G_QL centred on the sample) followed by subsampling
• The finest level is L = 0; L = 1 is the next lower level, and so on
LOD Filter in Screen Space
• G_QL is defined in local coordinates in object space
• G_QL is related to screen space by the local affine approximation J of the object-to-screen transform
• Selecting level L = filtering in screen space by G_(J QL Jᵀ)
DOF with LOD: Algorithm
• Given the required screen-space filter G_QDOF:
  1) Select the LOD level L such that support(G_(J QL Jᵀ)) < support(r ⊗ G_QDOF)
  2) Apply an additional screen-space filter G_QDIFF to reach the full blur:
     r_DOF = r ⊗ G_QDOF = [r ⊗ G_(J QL Jᵀ)] ⊗ G_QDIFF
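Using the same Gaussian property (convolution adds variance matrices), the two steps above can be sketched as a level search plus a covariance subtraction. The positive-semidefiniteness test for "support fits inside" and all names here are our reading of the slide, not the paper's code:

```python
import numpy as np

def select_lod_and_residual(J, Q_levels, Q_dof):
    """Pick the coarsest LOD level whose screen-space footprint still
    fits inside the required DOF filter, plus the residual filter.

    J        -- 2x2 local affine approximation of the object-to-screen map
    Q_levels -- object-space filter matrices Q_L, finest (L = 0) first
    Q_dof    -- required 2x2 screen-space DOF filter matrix

    Since convolving Gaussians adds their variance matrices, the
    residual filter is Q_diff = Q_dof - J Q_L J^T, and "the LOD
    footprint fits inside the DOF filter" means Q_diff stays a valid
    (positive semidefinite) Gaussian covariance.
    """
    # The finest level is assumed to always fit under the DOF filter.
    best_L = 0
    best_Qdiff = Q_dof - J @ Q_levels[0] @ J.T
    for L in range(1, len(Q_levels)):
        Q_diff = Q_dof - J @ Q_levels[L] @ J.T
        if np.all(np.linalg.eigvalsh(Q_diff) >= -1e-12):
            best_L, best_Qdiff = L, Q_diff  # coarser level still fits
        else:
            break  # levels are ordered coarse-ward; none further can fit
    return best_L, best_Qdiff
```

With an identity J and isotropic levels, a DOF filter of variance 5 would, for example, select the level of variance 4 and leave a residual of variance 1 to be applied in screen space.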
Results No Depth-of-Field – everything in focus
Results Transparent mask in focus, male figure out of focus
Results Male figure in focus, transparent mask out of focus
Results Reference solution (multisampling) Our algorithm • Our blur looks too smooth because of the Gaussian filter
Results Reference solution (multisampling) Our algorithm • Artifacts due to incorrect surface reconstruction
Discussion
• Simplifying assumptions & limitations
  – Gaussian distribution of light within the CoC (mostly acceptable)
  – We blur the texture before lighting; correctly, blurring should happen after lighting
  – Possibly incorrect image reconstruction from blurred splats
Conclusion
• A novel algorithm for depth-of-field rendering: LOD as a means for depth-blurring
+ Handles transparency
+ Avoids intensity leakage
+ Running time independent of the amount of depth blur
− Only for point-based rendering
− A number of artifacts can appear
• An ideal tool for interactive DOF previewing: trial-and-error setting of camera parameters
Acknowledgement: Grant 2159/2002 MSMT Czech Republic