Single Image Blind Deconvolution Presented By: Tomer Peled & Eitan Shterenbaum
Agenda • Problem Statement • Introduction to Non-Blind Deconvolution • Solutions & Approaches A. Image Deblurring PSF Estimation using Sharp Edge Prediction / Neel Joshi et al. B. MAPx,k Solution Analysis: Understanding and Evaluating Blind Deconvolution Algorithms / Anat Levin et al. C. Variational Method MAPk: Removing Camera Shake from a Single Photograph / Rob Fergus et al. • Summary
Problem statement • Blur = degradation of the sharpness and contrast of the image, causing loss of high frequencies. • Technically, blur is a convolution with a certain kernel during the imaging process.
Blur – generative model: Blurred image = Sharp image ∗ Point Spread Function (PSF). In the frequency domain: fft(Blurred image) = fft(Sharp image) × Optical Transfer Function, where OTF = fft(PSF).
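As a rough illustration of this generative model (a minimal sketch, not the presenters' code; the image, kernel shape, and noise level are made-up placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
sharp = rng.random((128, 128))           # stand-in for a sharp image x

psf = np.zeros((9, 9))
psf[4, :] = 1.0                          # simple horizontal motion-blur kernel
psf /= psf.sum()                         # a PSF integrates to 1

# Frequency-domain view: fft(blurred) = fft(sharp) * OTF, with OTF = fft(PSF)
otf = np.fft.fft2(psf, s=sharp.shape)
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * otf))   # circular convolution
blurred += 0.01 * rng.standard_normal(sharp.shape)          # sensor noise
```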
Evolution of algorithms: Wiener 1949 and Lucy-Richardson 1972 – non-blind deconvolution with simple kernels; Fergus 2006, Joshi 2008 and Shan 2008 – blind deconvolution of camera motion blur.
Introduction to Non-Blind Deconvolution. Blur model: blurred image = blur kernel ∗ sharp image + noise (y = k ∗ x + n). Deconvolution evolution: simple no-noise case, noise effect over the simple solution, Wiener deconvolution, RL deconvolution.
Simple no-Noise Case: blurred image and recovered image.
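In the no-noise case the recovery can be sketched as a naive inverse filter (assuming the same circular-convolution convention as above): divide by the OTF in the frequency domain.

```python
import numpy as np

def inverse_filter(blurred, psf, eps=1e-8):
    """Naive deconvolution: Fourier division by the OTF.

    Works only when there is no noise and the OTF has no (near-)zeros;
    eps merely avoids division by exactly zero.
    """
    otf = np.fft.fft2(psf, s=blurred.shape)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) / (otf + eps)))
```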
Noisy case: original (x), blurred + noise (y), recovered x.
Noisy case, 1D example: original signal and its FT; convolved signals with and without noise and their FTs; reconstructed FT of the signal – the high-frequency noise is amplified relative to the original signal.
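This 1D behaviour is easy to reproduce with a short sketch (toy box signal and box kernel are assumptions, not the slide's data): wherever the kernel's FT is close to zero, the division blows up the noise.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 256
x = np.zeros(n); x[100:160] = 1.0              # original signal: a box
k = np.zeros(n); k[:9] = 1.0 / 9.0             # 9-tap box blur (circular)

K = np.fft.fft(k)
y = np.real(np.fft.ifft(np.fft.fft(x) * K))    # convolved signal, no noise
y_noisy = y + 0.01 * rng.standard_normal(n)    # add a little sensor noise

x_rec = np.real(np.fft.ifft(np.fft.fft(y_noisy) / (K + 1e-8)))
# x_rec is corrupted by high-frequency noise amplified where |K| is tiny.
```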
Wiener Deconvolution: blurred noisy image, recovered image.
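A minimal Wiener-filter sketch (assuming a constant noise-to-signal power ratio `nsr`, which in practice has to be estimated):

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=1e-2):
    """Wiener deconvolution: X_hat = conj(K) / (|K|^2 + NSR) * Y in the frequency domain."""
    K = np.fft.fft2(psf, s=blurred.shape)
    Y = np.fft.fft2(blurred)
    X_hat = np.conj(K) / (np.abs(K) ** 2 + nsr) * Y
    return np.real(np.fft.ifft2(X_hat))
```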
Non-Blind Iterative Method: Richardson-Lucy Algorithm. Assumptions: blurred image yi ~ P(yi), sharp image xj ~ P(xj), where i is a point in y and j is a point in x. Target: recover P(x) given P(y) and P(y|x). From Bayes' theorem, the object distribution can be expressed iteratively: x_j^(t+1) = x_j^(t) · Σ_i [ y_i / (k ∗ x^(t))_i ] · k_(i−j). Richardson, W.H., "Bayesian-Based Iterative Method of Image Restoration", J. Opt. Soc. Am., 62, 55 (1972). Lucy, L.B., "An Iterative Technique for the Rectification of Observed Distributions", Astron. J., 79, 745 (1974).
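A compact sketch of the standard Richardson-Lucy multiplicative update (not the code behind the slides; it assumes a known PSF and non-negative images):

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(y, psf, n_iter=30, eps=1e-12):
    """Iterate x <- x * ( k_flipped * (y / (k * x)) ), the classic RL update."""
    x = np.full_like(y, y.mean())        # flat, positive initial estimate
    psf_flipped = psf[::-1, ::-1]        # adjoint (correlation) kernel
    for _ in range(n_iter):
        estimate = fftconvolve(x, psf, mode="same")
        ratio = y / (estimate + eps)
        x = x * fftconvolve(ratio, psf_flipped, mode="same")
    return x
```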
Richardson-Lucy Application: simulated multiple-star measurement – PSF identification and reconstruction of the 4th element.
Solution Approaches A. Image Deblurring PSF Estimation using Sharp Edge Prediction – Neel Joshi, Richard Szeliski, David J. Kriegman B. MAPx,k Solution Analysis: Understanding and Evaluating Blind Deconvolution Algorithms – Anat Levin, Yair Weiss, Fredo Durand, William T. Freeman C. Variational Method MAPk: Removing Camera Shake from a Single Photograph – Rob Fergus, Barun Singh, Aaron Hertzmann, Sam T. Roweis, William T. Freeman
PSF Estimation by Sharp Edge Prediction. Given edge steps, deblurring can be reduced to kernel optimization, as suggested in PSF Estimation using Sharp Edge Prediction / Neel Joshi et al. Pipeline: Select Edge Steps (Masking) → Estimate Blurring Kernel → Recover Latent Image.
PSF Estimation by Sharp Edge Prediction – Masking: original image, edge prediction, masking (valid region between min and max).
Masking, Cont. – Which Signal is Best? Original, blurred, edge, impulse.
PSF Estimation by Sharp Edge Prediction – PSF Estimation. Blur model: y = x ∗ k + n, n ~ N(0, σ²). Bayesian framework: P(k|y) = P(y|k)P(k)/P(y). MAP model: argmax_k P(k|y) = argmin_k L(y|k) + L(k).
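With a predicted sharp image in hand, the MAP kernel step reduces to (regularized) linear least squares in k. The sketch below is a simplified stand-in for the paper's solver: the ridge weight `lam`, the kernel size, and the absence of the paper's masking and full constraint handling are all assumptions.

```python
import numpy as np

def estimate_kernel(x_pred, y, ksize=15, lam=1e-2):
    """Solve argmin_k ||x_pred * k - y||^2 + lam * ||k||^2 via the normal equations."""
    kh = ksize // 2
    H, W = y.shape
    rows, targets = [], []
    for i in range(kh, H - kh):
        for j in range(kh, W - kh):
            # Flip the patch so that row @ k.ravel() equals the convolution
            # (x_pred * k) evaluated at pixel (i, j).
            patch = x_pred[i - kh:i + kh + 1, j - kh:j + kh + 1][::-1, ::-1]
            rows.append(patch.ravel())
            targets.append(y[i, j])
    A = np.asarray(rows)
    b = np.asarray(targets)
    AtA = A.T @ A + lam * np.eye(ksize * ksize)
    k = np.linalg.solve(AtA, A.T @ b)
    k = np.clip(k, 0.0, None)            # crude non-negativity
    return (k / max(k.sum(), 1e-12)).reshape(ksize, ksize)
```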
PSF Estimation by Sharp Edge Prediction – Recovery: recovery through Richardson-Lucy iterations given the PSF kernel (blurred vs. recovered).
PSF Estimation by Sharp Edge Prediction – Summary & Improvements • Handle RGB images – process the color channels in parallel • Local kernel variations – subdivide the image into sub-image units Limitations: • Highly depends on the quality of the edge detection • Requires strong edges in multiple orientations for proper kernel estimation • Assumes the noise level is known.
MAPx,k – Blind Deconvolution Definition: y = k ∗ x + n, where the blurred image y is the input (known) and both the sharp image x and the blur kernel k are unknown and need to be estimated, along with the noise n. Courtesy of Anat Levin CVPR 09 Slides
MAPx,k Cont. – Natural Image Priors. Derivative distributions in natural images are sparse. Parametric log-probability models of the derivative histogram: Gaussian: −x², Laplacian: −|x|, hyper-Laplacian: −|x|^0.5, −|x|^0.25. Courtesy of Anat Levin CVPR 09 Slides
Naïve MAPx,k estimation: given the blurred image y, find a kernel k and latent image x minimizing λ‖k ∗ x − y‖² + Σ_i |∇x_i|^α, i.e. a convolution (data) constraint plus a sparse prior. The sparse prior should favor sharper x explanations. Courtesy of Anat Levin CVPR 09 Slides
The MAPx,k paradox: P(blurred latent image, delta kernel) > P(sharp latent image, true kernel). Let x be an arbitrarily large image sampled from a sparse prior, and y = k ∗ x. Then the delta explanation (x̂ = y, k̂ = δ) is favored over the true one. Courtesy of Anat Levin CVPR 09 Slides
The MAPx,k failure: sharp vs. blurred image – which explanation wins? Courtesy of Anat Levin CVPR 09 Slides
The MAPx,k failure: red windows mark regions where p(sharp x) > p(blurred x), shown for 45x45, 25x25 and 15x15 windows, using simple derivative filters [-1,1],[-1;1] and FoE filters (Roth & Black).
The MAPx,k failure – intuition, for k = [0.5, 0.5]: P(step edge) > P(blurred step edge) – the sum of derivative costs is cheaper for the sharp step; but P(impulse) < P(blurred impulse) – the sum of derivative costs is cheaper for the blurred impulse. Courtesy of Anat Levin CVPR 09 Slides
MAPx,k Cont. – Blur Reduces Derivative Contrast: P(sharp real image) < P(blurred real image) – the blurred image is cheaper. Noise and texture behave as impulses, so the total derivative contrast is reduced by blur. Courtesy of Anat Levin CVPR 09 Slides
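This intuition is easy to check numerically. The sketch below (toy 1D signals, sparse exponent α = 0.5 assumed) compares the sparse derivative cost of sharp vs. blurred step and impulse signals:

```python
import numpy as np

def sparse_cost(x, alpha=0.5):
    """Negative log-prior up to constants: sum |dx|^alpha (smaller = more probable)."""
    return np.sum(np.abs(np.diff(x)) ** alpha)

k = np.array([0.5, 0.5])
step = np.concatenate([np.zeros(20), np.ones(20)])
impulse = np.zeros(40); impulse[20] = 1.0

for name, sig in [("step", step), ("impulse", impulse)]:
    blurred = np.convolve(sig, k, mode="same")
    print(f"{name:8s} sharp cost {sparse_cost(sig):.2f}  blurred cost {sparse_cost(blurred):.2f}")
# Expected: the sharp step is cheaper than the blurred step (1.00 vs ~1.41),
# but the sharp impulse is MORE expensive than the blurred one (2.00 vs ~1.41).
```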
MAPx,k Reweighting Solution – High-Quality Motion Deblurring from a Single Image / Shan et al.: alternating optimization between x and k of the MAPx,k minimization term (see the sketch below).
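A schematic of such an alternating scheme, reusing the `estimate_kernel` and `richardson_lucy` sketches above. This is a simplified stand-in with plain priors, not Shan et al.'s reweighted optimizer:

```python
import numpy as np

def blind_deblur_alternating(y, ksize=15, n_outer=5):
    """Alternate a kernel (k) step and a latent-image (x) step."""
    x = y.copy()                                  # initialize latent image with the input
    k = np.zeros((ksize, ksize))
    k[ksize // 2, ksize // 2] = 1.0               # initialize kernel with a delta
    for _ in range(n_outer):
        k = estimate_kernel(x, y, ksize=ksize)    # k-step: regularized least squares
        x = richardson_lucy(y, k, n_iter=10)      # x-step: non-blind deconvolution
    return x, k
```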
Solution Approaches A. Image Deblurring PSF Estimation using Sharp Edge Prediction – Neel Joshi, Richard Szeliski, David J. Kriegman B. MAPx,k Solution Analysis: Understanding and Evaluating Blind Deconvolution Algorithms – Anat Levin, Yair Weiss, Fredo Durand, William T. Freeman C. Variational Method MAPk: Removing Camera Shake from a Single Photograph – Rob Fergus, Barun Singh, Aaron Hertzmann, Sam T. Roweis, William T. Freeman
MAPk estimation: given the blurred image y, find a kernel k maximizing p(k|y) ∝ p(k) ∫ p(y|x,k) p(x) dx – the convolution constraint and the sparse prior enter through the marginalization over x, combined with a kernel prior. As before, the sparse prior should favor sharper x explanations.
Superiority of MAPk over MAPx,k. Toy problem: y = kx + n. The uncertainty of p(k|y) decreases given multiple observations y_j = k x_j + n_j. The joint distribution p(x, k|y) is maximized as x → 0, k → ∞, whereas the marginal p(k|y) produces an optimum closer to the true k∗.
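The scalar toy problem can be checked numerically. The sketch below (grid-based, with made-up noise and prior parameters) contrasts the profile of the joint p(x, k|y), which keeps improving as k grows and x shrinks, with the marginal p(k|y), which peaks near the true k:

```python
import numpy as np

rng = np.random.default_rng(2)
k_true, sigma_n, sigma_x = 2.0, 0.1, 1.0
x_true = rng.laplace(scale=sigma_x, size=20)           # sparse latent samples
y = k_true * x_true + sigma_n * rng.standard_normal(20)

ks = np.linspace(1.0, 4.0, 151)
xs = np.linspace(-20.0, 20.0, 2001)
dx = xs[1] - xs[0]

log_marginal, log_joint_profile = [], []
for k in ks:
    lm = lj = 0.0
    for yj in y:
        # log p(yj | x, k) + log p(x) on the x grid (Laplacian prior on x)
        logp = -(yj - k * xs) ** 2 / (2 * sigma_n ** 2) - np.abs(xs) / sigma_x
        lm += np.log(np.exp(logp).sum() * dx)          # marginalize x  -> MAP_k score
        lj += logp.max()                               # maximize over x -> MAP_{x,k} profile
    log_marginal.append(lm)
    log_joint_profile.append(lj)

print("marginal argmax k:", ks[int(np.argmax(log_marginal))])        # typically near k_true
print("joint-profile argmax k:", ks[int(np.argmax(log_joint_profile))])  # drifts to large k
```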
Evaluation on 1D signals: MAPx,k favors the delta solution, while exact MAPk, MAPk with a Gaussian prior, and the MAPk variational approximation (Fergus et al.) favor the correct solution despite the wrong prior! Courtesy of Anat Levin CVPR 09 Slides
Intuition: dimensionality asymmetry. MAPx,k – estimation is unreliable: the number of measurements is always lower than the number of unknowns, #y < #x + #k (sharp image x: large, ~10^5 unknowns; kernel k: small, ~10^2 unknowns; blurred image y: ~10^5 measurements). MAPk – estimation is reliable: for large images there are many measurements per kernel unknown, #y >> #k. Courtesy of Anat Levin CVPR 09 Slides
Three sources of information Courtesy of Rob Fergus Slides
Image prior p(x) Courtesy of Rob Fergus Slides
Blur prior p(b) Courtesy of Rob Fergus Slides
The obvious thing to do Courtesy of Rob Fergus Slides
Variational Bayesian approach Courtesy of Rob Fergus Slides
Variational Bayesian methods • Variational Bayes (also known as ensemble learning) is a family of techniques for approximating intractable integrals arising in Bayesian inference and machine learning. • It lower-bounds the marginal likelihood (the "evidence") of several models with a view to performing model selection.
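A toy mean-field sketch of the idea for a single scalar observation y = k·x + n with Gaussian priors on x and k (illustrative only; the Fergus et al. algorithm applies a similar factorized approximation to gradient images with mixture-of-Gaussians priors). The posterior p(x, k|y) is approximated by q(x)q(k) and the two factors are updated by coordinate ascent:

```python
y = 1.8                         # single observation y = k*x + n
sigma_n = 0.1                   # noise std
mu_x0, s_x0 = 0.0, 1.0          # prior x ~ N(mu_x0, s_x0^2)
mu_k0, s_k0 = 1.0, 1.0          # prior k ~ N(mu_k0, s_k0^2)

m_x, v_x = mu_x0, s_x0 ** 2     # q(x) = N(m_x, v_x)
m_k, v_k = mu_k0, s_k0 ** 2     # q(k) = N(m_k, v_k)

for _ in range(100):
    # Update q(x) holding q(k) fixed: needs E[k] = m_k and E[k^2] = m_k^2 + v_k
    prec = 1.0 / s_x0 ** 2 + (m_k ** 2 + v_k) / sigma_n ** 2
    v_x = 1.0 / prec
    m_x = v_x * (mu_x0 / s_x0 ** 2 + y * m_k / sigma_n ** 2)
    # Update q(k) holding q(x) fixed (x and k play symmetric roles)
    prec = 1.0 / s_k0 ** 2 + (m_x ** 2 + v_x) / sigma_n ** 2
    v_k = 1.0 / prec
    m_k = v_k * (mu_k0 / s_k0 ** 2 + y * m_x / sigma_n ** 2)

print(f"E[x] ~ {m_x:.3f}, E[k] ~ {m_k:.3f}, product ~ {m_x * m_k:.3f} (close to y)")
```

Each update keeps the full approximate posterior over one unknown rather than a point estimate, which is what distinguishes this MAPk-style approach from the naive MAPx,k optimization discussed earlier.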