
Depth Estimate and Focus Recovery



Presentation Transcript


  1. Depth Estimate and Focus Recovery Presenter: Wen-Chih Hong Adviser: Jian-Jiun Ding Digital Image and Signal Processing Laboratory (DISP Lab), Graduate Institute of Communication Engineering, National Taiwan University, Taipei, Taiwan, ROC

  2. Outline • Introduction • Binocular vision systems • Stereo • Monocular vision systems • DFF • DFD • Other methods • Conclusions • References

  3. Introduction • Depth is important information for robotics and 3D reconstruction. • Image depth recovery is a long-standing subject with applications such as robot vision and image restoration. • Most depth recovery methods are based simply on camera focus and defocus. • Focus recovery helps users see more detail in the originally defocused images.

  4. Introduction • Categories of depth estimation: • Binocular: stereo • Monocular (focus-based): depth from focus (DFF), depth from defocus (DFD)

  5. Introduction • Categories of depth estimation • Active: • Sending a controlled energy beam • Detection of reflected energy • Passive: • Image-based

  6. Introduction • Imaging geometry [Figure: a biconvex thin lens with focal points F, half-aperture D/2, object distance u, in-focus image distance v, sensor at distance s, and a blur circle of diameter 2R (R > 0) on the sensor]
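
To make the geometry concrete, here is a minimal sketch that applies the thin-lens law 1/f = 1/u + 1/v to the quantities labelled in the figure and returns the blur-circle radius R on the sensor. The function name and the choice of units are illustrative assumptions, not part of the original slides.

    def blur_radius(f, D, u, s):
        """Blur-circle radius on the sensor for a thin biconvex lens.
        f: focal length, D: aperture diameter, u: object distance,
        s: lens-to-sensor distance (all in the same length unit)."""
        v = 1.0 / (1.0 / f - 1.0 / u)      # in-focus image distance from 1/f = 1/u + 1/v
        return (D / 2.0) * abs(s - v) / v  # similar triangles: the light cone converging
                                           # at v is cut by the sensor plane at s

For example, blur_radius(f=0.05, D=0.02, u=2.0, s=0.052) gives the radius in metres for those (hypothetical) settings; the radius is zero exactly when the sensor sits at the in-focus distance v.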

  7. Binocular vision systems • The flow chart of binocular depth estimation: HVS modeling • Edge detection • Correspondence • Vergence control • Gaze control • Depth map

  8. Binocular vision systems [Figure: two views separated by a baseline B (B/2 on each side) verging on a gazing point, the corresponding point, at depth u] • Vergence movement: a slow eye movement in which the two eyes rotate in opposite directions to fixate the same point. • But the correspondence problem remains.

  9. Binocular vision systems • A more complete model [Figure 3.3: a more complete triangulation geometry for binocular vision, with the corresponding point imaged at (xL, yL) in the left view and (xR, yR) in the right view, baseline B, and depth u]. We have to account for how far each viewing ray departs from its optical axis, i.e., the direction toward the corresponding point. A sketch of the simpler rectified case follows.
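
The sketch below covers only the simplest triangulation case: a rectified, parallel-axis stereo pair, where depth follows directly from the disparity xL - xR. The calibrated baseline B and the focal length f expressed in pixels are assumed inputs; this is a deliberate simplification of the vergence geometry above.

    def depth_from_disparity(B, f, xL, xR):
        """Depth of a corresponded point for a rectified stereo pair.
        B: baseline, f: focal length in pixel units,
        xL, xR: image x-coordinates of the corresponding point."""
        disparity = xL - xR          # positive for points in front of the cameras
        return B * f / disparity     # standard triangulation: u = B * f / disparity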

  10. Binocular vision systems • Correspondence problem • But higher accuracy

  11. Monocular vision systems • Depth from focus • Depth from defocus

  12. Depth from Focus • Take pictures at different observer (lens) distances or object distances • We need an estimator to measure the degree of focus • Use the Laplacian operator • Since such an operator responds only to a single pixel, a windowed sum of the (modified) Laplacian is needed (see the sketch below):
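
A minimal sketch of one common choice of focus measure, the Sum-Modified-Laplacian (SML): the modified Laplacian is computed at every pixel and summed over a small window. The grayscale NumPy input and the step/window/threshold parameters are illustrative assumptions.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def modified_laplacian(img, step=1):
        """|2I - I(x-step) - I(x+step)| + |2I - I(y-step) - I(y+step)| per pixel."""
        I = img.astype(np.float64)
        mlx = np.abs(2 * I - np.roll(I, step, axis=1) - np.roll(I, -step, axis=1))
        mly = np.abs(2 * I - np.roll(I, step, axis=0) - np.roll(I, -step, axis=0))
        return mlx + mly

    def sml(img, step=1, window=9, threshold=0.0):
        """Sum-Modified-Laplacian focus measure: window-sum of the modified
        Laplacian, keeping only responses above a threshold."""
        ml = modified_laplacian(img, step)
        ml[ml < threshold] = 0.0
        return uniform_filter(ml, size=window) * window * window  # box sum over the window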

  13. Depth from Focus • Gaussian interpolation [Figure 4.4: Gaussian interpolation of a measured focus-measure (SML) curve versus displacement; the samples Nk-1, Nk, Nk+1 at displacements dk-1, dk, dk+1, with Nk ≥ Nk-1 and Nk ≥ Nk+1, are interpolated to the ideal peak value Np at displacement dp]

  14. Depth from Focus • Range from focus • Take pictures along the optical axis • Find the image with the highest spatial-frequency content, e.g. with the SML measure above • Needs more than 10 images (monocular); a per-pixel sketch over a focal stack follows
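
Building on the sml() sketch above, a per-pixel version of this procedure might look as follows: evaluate the focus measure on every frame of the focal stack and report, for each pixel, the depth of the frame where the measure peaks. The stack layout and the depths argument are assumptions for illustration.

    import numpy as np

    def depth_from_focus_stack(stack, depths, window=9):
        """stack: array of shape (n_frames, H, W), one frame per lens/object position;
        depths: length-n_frames sequence of the corresponding depths.
        Returns an (H, W) map holding the depth of the sharpest frame per pixel."""
        scores = np.stack([sml(frame, window=window) for frame in stack])
        best = np.argmax(scores, axis=0)            # index of the sharpest frame per pixel
        return np.asarray(depths, dtype=np.float64)[best]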

  15. Depth from Focus • We use Gaussian interpolation to form a set of approximations around the peak of the focus-measure curve • The depth solution dp then follows from the fitted Gaussian (a worked sketch follows):
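
Under the usual assumptions (three equally spaced samples N_{k-1}, N_k, N_{k+1} of the focus measure around its maximum, modelled as samples of a Gaussian), the interpolated peak is d_p = d_k + (Δd/2) * (ln N_{k-1} - ln N_{k+1}) / (ln N_{k-1} + ln N_{k+1} - 2 ln N_k). A minimal sketch of that closed form:

    import numpy as np

    def gaussian_peak(d_prev, d_k, d_next, N_prev, N_k, N_next):
        """Three-point Gaussian interpolation of the focus-measure peak.
        d_* are equally spaced displacements, N_* the focus measures there,
        with N_k >= N_prev and N_k >= N_next (as in Figure 4.4)."""
        delta = d_k - d_prev                                  # sampling step
        num = np.log(N_prev) - np.log(N_next)
        den = np.log(N_prev) + np.log(N_next) - 2.0 * np.log(N_k)
        return d_k + 0.5 * delta * num / den                  # interpolated peak d_p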

  16. Depth from Defocus • By geometric optics alone, the intensity inside the blur circle should be constant. • Considering aberration, diffraction, and so on, a smooth blurring function is commonly assumed • σ: the diffusion parameter • The diffusion parameter is related to the blur radius, derived from similar triangles in geometric optics • For computational ease, we assume the foreground is equally diffused, the background is equally diffused, and so on • However, this equal-focal assumption will be a problem
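
A common concrete choice for the blurring function is a 2-D Gaussian point-spread function h(x, y) = exp(-(x^2 + y^2) / (2 σ^2)) / (2π σ^2), with the diffusion parameter tied to the blur radius by σ = ρ R for a camera-dependent constant ρ (often quoted near 1/√2, but it has to be calibrated). A minimal sketch under those assumptions:

    import numpy as np

    def gaussian_psf(sigma, size=None):
        """Discrete 2-D Gaussian PSF with diffusion parameter sigma (in pixels)."""
        if size is None:
            size = max(3, int(6 * sigma) | 1)   # odd support covering roughly +/- 3 sigma
        r = np.arange(size) - size // 2
        X, Y = np.meshgrid(r, r)
        h = np.exp(-(X**2 + Y**2) / (2.0 * sigma**2))
        return h / h.sum()                      # normalize so image brightness is preserved

    def diffusion_parameter(blur_radius, rho=1.0 / np.sqrt(2.0)):
        """sigma = rho * R; rho is a calibration constant, not a universal value."""
        return rho * blur_radius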

  17. Depth from Defocus • Blurring model: the blur radius

  18. Depth from Defocus • Blurring model

  19. Depth from Defocus • Blurring model

  20. Depth from Defocus • Blurring model

  21. Depth from Defocus • Blurring model • For a fixed camera setting, the blur radius is independent of the location of the point source on the object plane at a given depth

  22. Depth from Defocus • Blurring model • Using the thin-lens law and the blur-circle geometry, we obtain the blur radius • So the diffusion parameter is proportional to that radius:
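
Written out with the slide-6 symbols (aperture diameter D, focal length f, object distance u, sensor distance s) and the calibration constant ρ, the standard form of this relation is the following; the exact notation in the original slides may differ:

    R = \frac{D\,s}{2}\left(\frac{1}{f} - \frac{1}{u} - \frac{1}{s}\right),
    \qquad \sigma = \rho\,R

Here R is signed: it changes sign as the object crosses the in-focus plane, and its magnitude is the blur-circle radius of slide 6.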

  23. Depth from Defocus • Depth recovery • Eliminating the depth D between the two camera settings m = 1, 2 gives a linear relation between the two diffusion parameters, σ1 = α σ2 + β, where α and β depend only on the (calibrated) camera settings

  24. Depth from Defocus • Depth recovery • From the two blurred observations, take the Fourier transform: each observed spectrum is the scene spectrum multiplied by the transfer function of its blur • The F.T. of a Gaussian is again a Gaussian

  25. Depth from Defocus • Depth recovery • Take the log of the ratio of the two spectra, which yields σ1² − σ2² • Using the relationship σ1 = α σ2 + β between them, we get an equation in σ2 alone

  26. Depth from Defocus • Depth recovery • Letting α = 1 (equal magnification), we obtain the value of σ2 • From σ2, find the depth D (a minimal end-to-end sketch follows)
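
A minimal end-to-end sketch of the global version of this recovery, under the assumptions used above: g1 and g2 are registered, equally magnified (α = 1) views of the same scene, the PSFs are Gaussian, and β, ρ and the second camera setting (aperture D2, sensor distance s2, focal length f2) are known from calibration. The function names are illustrative, not the thesis' notation.

    import numpy as np

    def sigma_sq_difference(g1, g2, eps=1e-6):
        """Estimate sigma1^2 - sigma2^2 from the log-ratio of Fourier magnitudes:
        |G_m| = |F| * exp(-sigma_m^2 * (wx^2 + wy^2) / 2) for a Gaussian PSF."""
        G1, G2 = np.fft.fft2(g1), np.fft.fft2(g2)
        wy = 2 * np.pi * np.fft.fftfreq(g1.shape[0])
        wx = 2 * np.pi * np.fft.fftfreq(g1.shape[1])
        WX, WY = np.meshgrid(wx, wy)
        w2 = WX**2 + WY**2
        ok = (w2 > eps) & (np.abs(G1) > eps) & (np.abs(G2) > eps)  # usable frequencies
        return np.mean(2.0 * np.log(np.abs(G2[ok]) / np.abs(G1[ok])) / w2[ok])

    def depth_from_defocus_pair(g1, g2, beta, rho, D2, s2, f2):
        """With alpha = 1, sigma1 = sigma2 + beta, so
        sigma1^2 - sigma2^2 = 2*beta*sigma2 + beta^2.  Solve for sigma2, then
        invert sigma2 = rho*(D2*s2/2)*(1/f2 - 1/u - 1/s2) for the depth u.
        beta must be nonzero, i.e. the two camera settings must differ."""
        diff = sigma_sq_difference(g1, g2)
        sigma2 = (diff - beta**2) / (2.0 * beta)
        inv_u = 1.0 / f2 - 1.0 / s2 - 2.0 * sigma2 / (rho * D2 * s2)
        return 1.0 / inv_u

In practice the log-ratio is trusted only at frequencies where both spectra have enough energy, which is what the mask above approximates.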

  27. Depth from Defocus • The main sources of range errors in DFD • Inaccurate modeling of the optical system. • Windowing for local feature analysis. • Low spectral content in the scene being imaged. • Improper calibration of camera parameters. • Presence of sensor noise.

  28. Depth from Defocus • Block shift-variant blur model • Consider the interaction between sub-images (blocks) • Define a neighborhood function around each block • The observed image is then each block blurred together with its neighborhood, compared with each block blurred in isolation (see the sketch below)
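
A minimal sketch of the forward (synthesis) side of such a model, under illustrative assumptions: the image is split into fixed-size blocks, each block has its own diffusion parameter taken from a per-block sigma_map, and each block is blurred together with a padded neighborhood so that pixels near block borders receive contributions from the adjacent region.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def block_shift_variant_blur(img, sigma_map, block=32, pad=8):
        """img: 2-D array; sigma_map[i, j]: diffusion parameter of block (i, j),
        with sigma_map of shape (ceil(H/block), ceil(W/block))."""
        H, W = img.shape
        out = np.zeros((H, W), dtype=np.float64)
        for by in range(0, H, block):
            for bx in range(0, W, block):
                y0, y1 = max(by - pad, 0), min(by + block + pad, H)   # padded neighborhood
                x0, x1 = max(bx - pad, 0), min(bx + block + pad, W)
                sigma = sigma_map[by // block, bx // block]
                blurred = gaussian_filter(img[y0:y1, x0:x1].astype(np.float64), sigma)
                bh, bw = min(block, H - by), min(block, W - bx)       # block extent
                out[by:by + bh, bx:bx + bw] = blurred[by - y0:by - y0 + bh,
                                                      bx - x0:bx - x0 + bw]
        return out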

  29. Depth from Defocus • Space-variant filtering models for recovering depth • Using the complex spectrogram and the pseudo-Wigner distribution (P.W.D.) • Complex spectrogram: a windowed (short-space) Fourier transform of the image (a sketch follows)
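
A minimal sketch of the complex-spectrogram idea: window the image around a point of interest and take the Fourier transform of the windowed patch, giving a local spectrum at that point. The Gaussian window and the assumption that the point lies far enough from the image border are illustrative choices.

    import numpy as np

    def complex_spectrogram(img, center, half=16, window_sigma=8.0):
        """Local (windowed) 2-D Fourier transform of img around center=(row, col)."""
        r, c = center
        patch = img[r - half:r + half + 1, c - half:c + half + 1].astype(np.float64)
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        w = np.exp(-(x**2 + y**2) / (2.0 * window_sigma**2))  # Gaussian analysis window
        return np.fft.fft2(patch * w)                          # complex local spectrum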

  30. Depth from Defocus • Space-variant filtering models for recovering depth • Complex spectrogram (C.S.) of each observation g1, g2 • Form the ratio of the two local spectra, g1/g2, where the unknown scene content cancels and only the two blur parameters remain

  31. Depth from Defocus • Space-variant filtering models for recovering depth • Objective function • Drawback: the interaction between neighboring pixels is not considered, so there will be discontinuities at block borders • Remedy: a regularized solution

  32. Depth from Defocus • No correspondence problem • Lower accuracy • Space-variant (S.V.) modeling is more accurate than block shift-variant (B.S.V.) • Blocking trade-off (block size): too large gives less accuracy, too small is sensitive to noise

  33. Other methods • Structure from motion • Shape from shading • ML estimation of depth and optimal camera settings • Recursive computation of depth from multiple images

  34. Other methods • Structure from motion • Uses the relative motion between object and camera to recover surface information • Correspondence problem (as in the binocular case) • Must also recover the camera motion

  35. Other methods • Shape from shading • Needs to know the reflectance • Recovers the surface orientation (slant and tilt)

  36. Focus recovery • Pipeline: defocused image pair → SML measurement → maximum-value search (focal point) → depth measurement of a point → use the estimated depth to retrieve the imaging distance → small-aperture construction → linear canonical transform based on the constructed optical system → fully focused image (a much simpler fusion sketch follows)
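
The linear-canonical-transform recovery in this flow chart is beyond a short snippet, so here is a deliberately simpler substitute that illustrates only the fusion idea: compute an SML-style sharpness score and, at every pixel, keep whichever of the two defocused inputs is locally sharper. This is not the LCT-based method on the slide.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def focus_score(img, window=9):
        """Local Sum-Modified-Laplacian, used here as a per-pixel sharpness score."""
        I = img.astype(np.float64)
        ml = (np.abs(2 * I - np.roll(I, 1, axis=1) - np.roll(I, -1, axis=1))
              + np.abs(2 * I - np.roll(I, 1, axis=0) - np.roll(I, -1, axis=0)))
        return uniform_filter(ml, size=window)

    def fuse_by_focus(img_a, img_b, window=9):
        """Per-pixel fusion of two differently focused images of the same scene."""
        return np.where(focus_score(img_a, window) >= focus_score(img_b, window),
                        img_a, img_b)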

  37. Conclusions • Binocular stereo method: high accuracy, absolute depth information, complex computation, correspondence problem • Structure from motion: nonlinear problem, correspondence problem • Shape from shading: very difficult method • Active methods

  38. Conclusions • Range from focus: slow, needs more than 10 images • Depth from defocus: easy method, lower accuracy

  39. References and future work • Y. C. Lin, Depth Estimation and Focus Recovery, Master thesis, National Taiwan University, Taipei, Taiwan, R.O.C., 2008. • S. Chaudhuri and A. N. Rajagopalan, Depth From Defocus: A Real Aperture Imaging Approach, Springer-Verlag, New York, 1999. • M. Subbarao, "Parallel depth recovery by changing camera parameters," Second International Conference on Computer Vision, pp. 149-155, Dec. 1988. • M. Subbarao and T. C. Wei, "Depth from defocus and rapid autofocusing: a practical approach," IEEE Conference on Computer Vision and Pattern Recognition, pp. 773-776, Jun. 1992. • A. N. Rajagopalan and S. Chaudhuri, "A variational approach to recovering depth from defocused images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, pp. 1158-1164, Oct. 1997.

  40. The end
