
Learning and Vision: Generative Methods




Presentation Transcript


  1. Learning and Vision: Generative Methods Andrew Blake, Microsoft Research Bill Freeman, MIT ICCV 2003 October 12, 2003

  2. Learning and vision: Generative Methods
  • Machine learning is an important direction in computer vision.
  • Our goal for this class:
    • Give overviews of useful techniques.
    • Show how these methods can be used in vision.
    • Provide references and pointers.
  • Note the afternoon companion class, Learning and Vision: Discriminative Methods, taught by Chris Bishop and Paul Viola.

  3. Learning and Vision: Generative Models
  Intro – roadmap for learning and inference in vision (WTF)
    Probabilistic inference introduction; integration of sensory data
    Applications: color constancy, Bayes Matte
  9:00 Basic estimation methods (WTF)
    MLE, Expectation Maximization
    Applications: two-line fitting
  Learning and inference in temporal and spatial Markov processes — techniques:
  9:25 (1) PCA, FA, TCA (AB)
    Inference: linear (Wiener) filter
    Learning: by Expectation Maximization (EM)
    Applications: face simulation, denoising, Weiss's intrinsic images
    And furthermore: Active Appearance Models, Simoncelli, ICA & non-Gaussianity, filter banks
  10:00 (2) Markov chain & HMM (AB)
    Inference: MAP by Dynamic Programming, Forward and Forward-Backward (FB) algorithms
    Learning: by EM – Baum-Welch
    Representations: pixels, patches
    Applications: stereo vision
    And furthermore: gesture models (Bobick-Wilson)
  < Break 10:30–10:45 >
  10:45 (3) AR models (AB)
    Inference: Kalman filter, Kalman smoother, particle filter
    Learning: by EM-FB
    Representations: patches, curves, chamfer maps, filter banks
    Applications: tracking (Isard-Blake, Black-Sidenbladh, Jepson-Fleet-El Maraghi); Fitzgibbon-Soatto textures
    And furthermore: EP
  11:30 (4) MRFs (WTF)
    Inference: ICM, loopy Belief Propagation (BP), generalised BP, graph cuts
    Parameter learning: pseudolikelihood maximisation
    Representations: color pixels, patches
    Applications: texture segmentation, super-resolution (Freeman-Pasztor), distinguishing shading from paint
    And furthermore: Gibbs sampling, Discriminative Random Field (DRF), low-level segmentation (Zhu et al.)

  4. What is the goal of vision? If you are asking, “Are there any faces in this image?”, then you would probably want to use discriminative methods.

  5. What is the goal of vision? If you are asking, “Are there any faces in this image?”, then you would probably want to use discriminative methods. If your goal is, “Find a 3-d model that describes the runner”, then you would use generative methods.

  6. Modeling So we want to look at high-dimensional visual data and fit models to it, forming summaries that let us understand what we see.

  7. The simplest data to model: a set of 1-d samples

  8. Fit this distribution with a Gaussian

  9. How do we find the parameters of the best-fitting Gaussian? Write the data points as $y = \{y_1, \ldots, y_N\}$ and the parameters as $\theta = (\mu, \sigma)$ (mean and std. dev.). By Bayes' rule,
  $$p(\theta \mid y) = \frac{p(y \mid \theta)\, p(\theta)}{p(y)},$$
  i.e. posterior probability = likelihood function × prior probability / evidence.

  10. How do we find the parameters of the best-fitting Gaussian? Maximum likelihood parameter estimation: choose the parameters that maximise the likelihood of the data,
  $$\hat{\theta} = \arg\max_\theta\, p(y \mid \theta).$$

  11. Derivation of MLE for Gaussians. Observation density:
  $$p(y_n \mid \mu, \sigma) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{(y_n - \mu)^2}{2\sigma^2}\right)$$
  Log likelihood:
  $$L(\mu, \sigma) = \sum_{n=1}^{N} \log p(y_n \mid \mu, \sigma)$$
  Maximisation: set $\partial L / \partial \mu = 0$ and $\partial L / \partial \sigma = 0$ and solve.
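As a quick sanity check on this derivation, here is a minimal NumPy sketch (the synthetic data, its true mean 2.0 and std. dev. 1.5, and the grid range are all illustrative assumptions) comparing the closed-form MLE mean against a brute-force grid search over the log-likelihood:

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(loc=2.0, scale=1.5, size=1000)  # synthetic 1-d samples

# Closed-form MLE from the derivation: sample mean and (biased) sample variance
mu_mle = y.mean()
var_mle = ((y - mu_mle) ** 2).mean()

# Numerical check: the Gaussian log-likelihood, evaluated on a grid of
# candidate means, peaks at (approximately) the sample mean
mus = np.linspace(0.0, 4.0, 2001)
loglik = np.array([-0.5 * np.sum((y - m) ** 2) / var_mle for m in mus])
mu_grid = mus[np.argmax(loglik)]
```

The grid maximiser agrees with the closed-form estimate up to the grid spacing, which is what the derivation predicts.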

  12. Basic Maximum Likelihood Estimate (MLE) of a Gaussian distribution. Mean: $\hat{\mu} = \frac{1}{N}\sum_{n=1}^{N} y_n$. Variance: $\hat{\sigma}^2 = \frac{1}{N}\sum_{n=1}^{N} (y_n - \hat{\mu})^2$.

  13. Basic Maximum Likelihood Estimate (MLE) of a Gaussian distribution. Mean: $\hat{\mu} = \frac{1}{N}\sum_n y_n$. Variance: $\hat{\sigma}^2 = \frac{1}{N}\sum_n (y_n - \hat{\mu})^2$. For vector-valued data, we have the covariance matrix $\hat{\Sigma} = \frac{1}{N}\sum_n (y_n - \hat{\mu})(y_n - \hat{\mu})^T$.
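These estimates are one-liners in practice. A small NumPy sketch for the vector-valued case (the 3-d synthetic data and sample size are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(size=(500, 3))  # N = 500 samples of 3-d data

N = y.shape[0]
mean = y.mean(axis=0)            # mu_hat = (1/N) sum_n y_n
centered = y - mean
cov = centered.T @ centered / N  # MLE covariance: note 1/N, not 1/(N-1)
```

Note the MLE uses the biased 1/N normaliser; NumPy's `np.cov(..., bias=True)` computes the same quantity.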

  14. Model fitting example 2: fit a line to observed data $(x_n, y_n)$.

  15. Maximum likelihood estimation for the slope of a single line (through the origin, with noise std. dev. $\sigma$). Data likelihood for point $n$:
  $$p(y_n \mid a) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{(y_n - a x_n)^2}{2\sigma^2}\right)$$
  Maximum likelihood estimate: $\hat{a} = \arg\max_a \prod_n p(y_n \mid a)$, which gives the regression formula
  $$\hat{a} = \frac{\sum_n x_n y_n}{\sum_n x_n^2}.$$
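A minimal sketch of this regression formula on synthetic data (the true slope 0.7 and the noise level are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
a_true = 0.7
x = rng.uniform(-1.0, 1.0, size=200)
y = a_true * x + rng.normal(scale=0.05, size=200)  # noisy line through origin

# ML slope for a single line through the origin:
# a_hat = sum_n x_n * y_n / sum_n x_n^2
a_hat = np.sum(x * y) / np.sum(x * x)
```

With this little noise, `a_hat` recovers the true slope closely; the same closed form reappears (with weights) inside the EM iterations for two lines later on.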

  16. Model fitting example 3: fitting two lines to observed data.

  17. MLE for fitting a line pair: the observation density becomes a form of mixture distribution for $y_n$,
  $$p(y_n \mid a_1, a_2) = \tfrac{1}{2}\, p(y_n \mid a_1) + \tfrac{1}{2}\, p(y_n \mid a_2).$$

  18. Fitting two lines: on the one hand… If we knew which points went with which lines (line 1 or line 2), we'd be back at the single-line fitting problem, twice.

  19. Fitting two lines, on the other hand… We could figure out the probability that any point came from either line if we just knew the two equations for the two lines.

  20. Expectation Maximization (EM): a solution to chicken-and-egg problems

  21. MLE with hidden/latent variables: Expectation Maximisation. General problem: data $y$, parameters $\theta$, hidden variables $z$. For MLE, we want to maximise the log likelihood
  $$\log p(y \mid \theta) = \log \sum_z p(y, z \mid \theta).$$
  The sum over $z$ inside the log gives a complicated expression for the ML solution.

  22. The EM algorithm. We don't know the values of the labels $z_n$, but let's use their expected value under the posterior with the current parameter values $\theta^{\text{old}}$. That gives us the "expectation step":
  “E-step”: $Q(\theta; \theta^{\text{old}}) = E_{z \mid y,\, \theta^{\text{old}}}\left[\log p(y, z \mid \theta)\right]$
  Now let's maximise this $Q$ function, an expected log-likelihood, over the parameter values, giving the "maximization step":
  “M-step”: $\theta^{\text{new}} = \arg\max_\theta\, Q(\theta; \theta^{\text{old}})$
  Each iteration increases the total log-likelihood $\log p(y \mid \theta)$.

  23. Expectation Maximisation applied to fitting the two lines. The hidden variables are labels $z_n \in \{1, 2\}$ associating data point $n$ with line 1 or line 2, each with prior probability $1/2$. The probabilities of association are the responsibilities
  $$r_{n,k} = p(z_n = k \mid y_n, a_1, a_2) = \frac{p(y_n \mid a_k)}{p(y_n \mid a_1) + p(y_n \mid a_2)},$$
  and maximising the expected log-likelihood then gives a responsibility-weighted regression for each line.

  24. EM fitting to two lines, with association probabilities $1/2$.
  “E-step”: compute the responsibilities $r_{n,k}$ for each point under the current slopes.
  “M-step”: the regression becomes a weighted regression,
  $$\hat{a}_k = \frac{\sum_n r_{n,k}\, x_n y_n}{\sum_n r_{n,k}\, x_n^2},$$
  and repeat.
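The full loop fits in a few lines of NumPy. This is a minimal sketch, not the tutorial's own code: the synthetic data, true slopes (1.5 and -0.5), noise level, initial guesses, and known-noise-variance assumption are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic data: points drawn from two lines through the origin, plus noise
n = 200
x = rng.uniform(-1.0, 1.0, size=n)
labels = rng.integers(0, 2, size=n)          # true (hidden) line assignments
a_true = np.array([1.5, -0.5])
y = a_true[labels] * x + rng.normal(scale=0.05, size=n)

a = np.array([2.0, -2.0])   # initial slope guesses
sigma2 = 0.05 ** 2          # noise variance, assumed known for this sketch

for _ in range(30):
    # E-step: responsibility of each line for each point (equal 1/2 priors)
    resid = y[None, :] - a[:, None] * x[None, :]   # shape (2, n)
    logp = -0.5 * resid ** 2 / sigma2
    logp -= logp.max(axis=0, keepdims=True)        # subtract max for stability
    r = np.exp(logp)
    r /= r.sum(axis=0, keepdims=True)              # normalise to probabilities
    # M-step: responsibility-weighted regression for each slope
    a = (r * x * y).sum(axis=1) / (r * x ** 2).sum(axis=1)
```

After a few iterations the two slopes converge to the true pair (up to a swap of the two labels), mirroring the E-step/M-step updates above.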

  25. Experiments: EM fitting to two lines (from a tutorial by Yair Weiss, http://www.cs.huji.ac.il/~yweiss/tutorials.html). [Figure: line weights for line 1 and line 2 over iterations 1, 2, 3.]

  26. Applications of EM in computer vision
  • Structure-from-motion with multiple moving objects
  • Motion estimation combined with perceptual grouping
  • Multiple layers or sprites in an image
