
Robust Estimator



  1. Robust Estimator Student: 范育瑋, Advisor: 王聖智

  2. Outline • Introduction • LS - Least Squares • LMS - Least Median of Squares • RANSAC - Random Sample Consensus • MLESAC - Maximum Likelihood Sample Consensus • MINPRAN - Minimize the Probability of Randomness

  3. Outline • Introduction • LS - Least Squares • LMS - Least Median of Squares • RANSAC - Random Sample Consensus • MLESAC - Maximum Likelihood Sample Consensus • MINPRAN - Minimize the Probability of Randomness

  4. Introduction • Objective: Robust fit of a model to a data set S which contains outliers.

  5. Outline • Introduction • LS - Least Squares • LMS - Least Median of Squares • RANSAC - Random Sample Consensus • MLESAC - Maximum Likelihood Sample Consensus • MINPRAN - Minimize the Probability of Randomness

  6. LS • Consider the data generating process Yi = b0 + b1Xi + ei, where ei is independent and identically distributed N(0, σ²). • If any outliers exist in the data, least squares performs poorly.
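
A minimal numerical sketch of this sensitivity (not from the slides; names and values are illustrative): an ordinary least-squares line fit on clean data, then the same fit after a single gross outlier is injected.

    import numpy as np

    # Data generating process y = b0 + b1*x + e, with e ~ N(0, sigma^2)
    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 10.0, 20)
    y = 2.0 + 0.5 * x + rng.normal(0.0, 0.2, x.size)

    A = np.column_stack([np.ones_like(x), x])
    clean_fit = np.linalg.lstsq(A, y, rcond=None)[0]

    y[-1] += 15.0                                  # one gross outlier
    outlier_fit = np.linalg.lstsq(A, y, rcond=None)[0]

    print(clean_fit)     # close to (2.0, 0.5)
    print(outlier_fit)   # intercept and slope pulled toward the outlier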

  7. Outline • Introduction • LS - Least Squares • LMS - Least Median of Squares • RANSAC - Random Sample Consensus • MLESAC - Maximum Likelihood Sample Consensus • MINPRAN - Minimize the Probability of Randomness

  8. LMS • The method tolerates the highest possible breakdown point of 50%.
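
As an illustration only (the slide itself gives no algorithm), a common randomized approximation of Least Median of Squares for a line: sample many two-point fits and keep the one whose median squared residual is smallest. Because the median ignores the largest half of the residuals, up to 50% of the points can be arbitrarily bad without dominating the chosen fit.

    import numpy as np

    def lms_line(x, y, trials=500, rng=np.random.default_rng(1)):
        # Least Median of Squares: among random 2-point fits, keep the line
        # whose median squared residual over all points is smallest.
        best_fit, best_med = None, np.inf
        for _ in range(trials):
            i, j = rng.choice(x.size, size=2, replace=False)
            if x[i] == x[j]:
                continue
            b1 = (y[j] - y[i]) / (x[j] - x[i])
            b0 = y[i] - b1 * x[i]
            med = np.median((y - (b0 + b1 * x)) ** 2)
            if med < best_med:
                best_fit, best_med = (b0, b1), med
        return best_fit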

  9. Outline • Introduction • LS - Least Squares • LMS - Least Median of Squares • RANSAC - Random Sample Consensus • MLESAC - Maximum Likelihood Sample Consensus • MINPRAN - Minimize the Probability of Randomness

  10. Main idea

  11. Algorithm • Randomly select a sample of s data points from S and instantiate the model from this subset. • Determine the set of data points Si which are within a distance threshold t of the model. • If the size of Si (number of inliers) is greater than a threshold T, re-estimate the model using all points in Si and terminate. • If the size of Si is less than T, select a new subset and repeat the above. • After N trials the largest consensus set Si is selected, and the model is re-estimated using all the points in Si.
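
A compact sketch of these steps for a 2D line model; the parameter names s, t, T, N mirror the slide, while the line model and defaults are illustrative assumptions.

    import numpy as np

    def ransac_line(x, y, s=2, t=0.05, T=None, N=1000, rng=np.random.default_rng(2)):
        # s: sample size, t: inlier distance threshold,
        # T: early-exit inlier count, N: maximum number of trials.
        best_inliers = np.array([], dtype=int)
        for _ in range(N):
            i, j = rng.choice(x.size, size=s, replace=False)
            if x[i] == x[j]:
                continue
            b1 = (y[j] - y[i]) / (x[j] - x[i])
            b0 = y[i] - b1 * x[i]
            inliers = np.flatnonzero(np.abs(y - (b0 + b1 * x)) < t)
            if inliers.size > best_inliers.size:
                best_inliers = inliers
            if T is not None and inliers.size >= T:
                break
        # Re-estimate by least squares on the largest consensus set.
        A = np.column_stack([np.ones(best_inliers.size), x[best_inliers]])
        return np.linalg.lstsq(A, y[best_inliers], rcond=None)[0]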

  12. Algorithm • What is the distance threshold? • How many samples? • How large is an acceptable consensus set?

  13. What is the distance threshold? • We would like to choose the distance threshold, t, such that with probability α the point is an inlier. • Assume the measurement error is Gaussian with zero mean and standard deviation σ. • In this case the squared point distance, d⊥², is a sum of squared Gaussian variables and follows a χ²_m distribution with m degrees of freedom.

  14. What is the distance threshold? Note: The probability that the value of the random variable is less than k² is given by the cumulative chi-squared distribution. Choose α as 0.95.
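
In code, this threshold can be read from the inverse cumulative chi-squared distribution; the sketch below assumes the relation t² = F⁻¹m(α)·σ² implied by the slide, with σ known.

    from scipy.stats import chi2

    def distance_threshold(sigma, m=1, alpha=0.95):
        # t^2 = F_m^{-1}(alpha) * sigma^2, F_m the cumulative chi-squared
        # distribution with m degrees of freedom.
        return (chi2.ppf(alpha, df=m) * sigma ** 2) ** 0.5

    print(distance_threshold(1.0, m=1))   # about 1.96 * sigma for m = 1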

  15. How many samples? • If w = proportion of inliers = 1 - ε • Prob(sample with all inliers) = w^s • Prob(sample contains an outlier) = 1 - w^s • Prob(all N samples contain an outlier) = (1 - w^s)^N • We want Prob(all N samples contain an outlier) < 1 - p, where p is usually chosen as 0.99. • (1 - w^s)^N < 1 - p • N > log(1 - p) / log(1 - w^s)
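
A small helper that evaluates this bound (the example values are illustrative, not from the slide):

    import numpy as np

    def num_samples(eps, s, p=0.99):
        # N > log(1 - p) / log(1 - (1 - eps)^s)
        w = 1.0 - eps                         # proportion of inliers
        return int(np.ceil(np.log(1.0 - p) / np.log(1.0 - w ** s)))

    print(num_samples(0.5, 2))                # 17 samples for 50% outliers, s = 2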

  16. How large is an acceptable consensus set? • If we know the fraction ε of the data consisting of outliers, use the rule of thumb T = (1 - ε)n. • For example: data points n = 12, fraction of outliers ε = 0.2, so T = (1 - 0.2) × 12 ≈ 10.

  17. Outline • Introduction • LS - Least Squares • LMS - Least Median of Squares • RANSAC - Random Sample Consensus • MLESAC - Maximum Likelihood Sample Consensus • MINPRAN - Minimize the Probability of Randomness

  18. THE TWO-VIEW RELATIONS

  19. Maximum Likelihood Estimation • Assume the measurement error is Gaussian with zero mean and standard deviation σ. • n: the number of correspondences. • M: the appropriate two-view relation. • D: the set of matches.
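
The likelihood itself appears only as an image on the original slide; under the stated Gaussian assumption it would take the standard form below, where e_i denotes the error of the i-th match under the relation M (a hedged reconstruction, not necessarily the slide's exact formula):

    \Pr(D \mid M) = \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi\sigma^{2}}} \exp\!\left(-\frac{e_i^{2}}{2\sigma^{2}}\right)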

  20. Maximum Likelihood Estimation

  21. Maximum Likelihood Estimation Modify the model: where γ is the mixing parameter and v is a constant. Here it is assumed that the outlier distribution is uniform, with [-v/2, v/2] being the pixel range within which outliers are expected to fall.
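
Written out, the mixture described here is the standard MLESAC error model (reconstructed, since the slide's formula is an image):

    \Pr(e_i) = \gamma \, \frac{1}{\sqrt{2\pi\sigma^{2}}} \exp\!\left(-\frac{e_i^{2}}{2\sigma^{2}}\right) + (1 - \gamma)\,\frac{1}{v}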

  22. How to estimate γ • The initial estimate of γ is ½. • Estimate the expectation of ηi from the current estimate of γ, where ηi = 1 if the ith correspondence is an inlier and ηi = 0 if the ith correspondence is an outlier.

  23. How to estimate γ • pi is the likelihood of a datum given that it is an inlier and po is the likelihood of a datum given that it is an outlier. • Make a new estimate of γ. • Repeat steps 2 and 3 until convergence.
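
A short sketch of this EM-style iteration (function and variable names are illustrative assumptions; errors, sigma and v come from the mixture model above):

    import numpy as np

    def estimate_gamma(errors, sigma, v, iters=20):
        gamma = 0.5                                    # initial estimate
        for _ in range(iters):
            # E-step: expectation of eta_i, the posterior inlier probability
            p_in = gamma * np.exp(-errors ** 2 / (2 * sigma ** 2)) / np.sqrt(2 * np.pi * sigma ** 2)
            p_out = (1.0 - gamma) / v
            resp = p_in / (p_in + p_out)
            # M-step: new estimate of gamma from the responsibilities
            gamma = resp.mean()
        return gamma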

  24. Outline • Introduction • LS - Least Squares • LMS - Least Median of Squares • RANSAC - Random Sample Consensus • MLESAC - Maximum Likelihood Sample Consensus • MINPRAN - Minimize the Probability of Randomness

  25. MINPRAN • The first technique that reliably tolerates more than 50% outliers without assuming a known inlier bound. • It assumes only that the outliers are uniformly distributed within the dynamic range of the sensor.

  26. MINPRAN

  27. MINPRAN • N: the number of points, drawn from a uniform distribution of z values in the range Zmin to Zmax. • r: a distance from the curve φ. • k: the number of points that randomly fall within the range φ ± r. • Z0 = (Zmax - Zmin)/2.
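
With these definitions, MINPRAN's randomness measure is the probability that at least k of N uniformly distributed points land within ±r of the fit φ. Since a single uniform point does so with probability r/Z0, this is a binomial tail (a reconstruction; the slide's own formula is an image):

    F(\phi, r, k) = \sum_{i=k}^{N} \binom{N}{i} \left(\frac{r}{Z_0}\right)^{i} \left(1 - \frac{r}{Z_0}\right)^{N-i}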

  28. MINPRAN

  29. Algorithm • Assume p points are required to completely instantiate a fit. • Choose S distinct, but not necessarily disjoint, subsets of p points from the data, and find the fit to each random subset to form S hypothesized fits φ1, …, φS. • Select the fit φ* that minimizes the randomness measure as the "best fit". • If its randomness is below the threshold F0, then φ* is accepted as a correct fit. • A final least-squares fit involving the p + i* inliers to φ* produces an accurate estimate of the model parameters.
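
A sketch of this procedure for a 2D line (so p = 2), using the binomial-tail randomness measure given above; names and defaults are illustrative assumptions rather than the paper's implementation.

    import numpy as np
    from scipy.stats import binom

    def randomness(residuals, z0):
        # Minimum over k of P(at least k of N uniform points fall within
        # the k-th smallest absolute residual); per-point probability r/z0.
        r = np.sort(np.abs(residuals))
        N = r.size
        k = np.arange(1, N + 1)
        p = binom.sf(k - 1, N, np.clip(r / z0, 0.0, 1.0))
        return p.min()

    def minpran_line(x, y, S, z0, F0, rng=np.random.default_rng(3)):
        # Evaluate S random two-point fits, keep the least random one, and
        # accept it only if its randomness falls below the threshold F0.
        best_f, best_fit = np.inf, None
        for _ in range(S):
            i, j = rng.choice(x.size, size=2, replace=False)
            if x[i] == x[j]:
                continue
            b1 = (y[j] - y[i]) / (x[j] - x[i])
            b0 = y[i] - b1 * x[i]
            f = randomness(y - (b0 + b1 * x), z0)
            if f < best_f:
                best_f, best_fit = f, (b0, b1)
        return best_fit if best_f < F0 else None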

  30. How many samples? • If w = proportion of inliers = 1 - ε • Prob(sample with all inliers) = w^p • Prob(sample contains an outlier) = 1 - w^p • Prob(all S samples contain an outlier) = (1 - w^p)^S • We want Prob(all S samples contain an outlier) < 1 - P, where the confidence P is usually chosen as 0.99 (here p is the subset size, distinct from the confidence P). • (1 - w^p)^S < 1 - P • S > log(1 - P) / log(1 - w^p)

  31. Randomness Threshold F0 Choose a probability P0 and solve the equation; this gives the randomness threshold F0.

  32. Randomness Threshold F0 • Define the equation • There is a unique value, which can be found by bisection search, such that • By Lemma 2, if and only if • In order for this to hold for all i, we get the constraints • ci, 0 ≤ i < N, denotes the number of residuals in the range (fi, fi+1]

  33. Randomness Threshold F0 • Because the residuals are uniformly distributed, the probability that any particular residual is in the range fi to fi+1 is , where • The probability that ci particular residuals are in the range fi to fi+1 is • Based on this, we can calculate the probability

  34. Randomness Threshold F0 • To make the analysis feasible, we assume the S fits and their residuals are independent. • So the probability that each of the S samples has is just • The probability that at least one sample has a minimum less than is • We can obtain F0, since we know N, S, and P0.
