
Nonlinear and Non-Gaussian Estimation with a Focus on Particle Filters


Presentation Transcript


  1. Nonlinear and Non-Gaussian Estimation with a Focus on Particle Filters • Prasanth Jeevan, Mary Knox • May 12, 2006

  2. Background • Optimal linear filters • Wiener → stationary processes • Kalman → Gaussian posterior, p(x|y) • Filters for nonlinear systems • Extended Kalman • Particle

  3. Extended Kalman Filter (EKF) • Locally linearize the nonlinear functions • Assume p(x_k | y_{1:k}) is Gaussian
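A minimal scalar EKF sketch, not from the presentation: `f` and `h` stand in for hypothetical nonlinear state and measurement functions, `F` and `H` return their derivatives, and `Q`, `R` are the noise variances.

```python
def ekf_step(x, P, y, f, F, h, H, Q, R):
    """One EKF cycle for a scalar state; all callables are placeholders."""
    # Predict: propagate the mean through f, the variance through F (Jacobian)
    x_pred = f(x)
    P_pred = F(x) ** 2 * P + Q
    # Update: linearize h at the predicted mean, then do a Kalman update
    Hp = H(x_pred)
    S = Hp ** 2 * P_pred + R        # innovation variance
    K = P_pred * Hp / S             # Kalman gain
    x_new = x_pred + K * (y - h(x_pred))
    P_new = (1.0 - K * Hp) * P_pred
    return x_new, P_new
```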

  4. Particle Filter (PF) • Weighted point-mass or “particle” representation of possibly intractable posterior probability density functions, p(x|y) • Estimates recursively in time, allowing for online calculation • Attempts to place particles in important regions of the posterior pdf • O(N) complexity in the number of particles

  5. Particle Filter Background [Ristic et al. 2004] • Monte Carlo estimation • Pick N >> 1 “particles” x^i with distribution p(x): I = ∫ g(x) p(x) dx ≈ (1/N) Σ_i g(x^i) • Assumption: the x^i are i.i.d.
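A minimal sketch of the Monte Carlo estimate above, with hypothetical choices p = N(0, 1) and g(x) = x², so the true value is I = 1:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100_000
x = rng.standard_normal(N)   # N i.i.d. particles x^i ~ p(x)
I_hat = np.mean(x**2)        # (1/N) * sum_i g(x^i)
print(I_hat)                 # ~1.0; error shrinks as O(1/sqrt(N))
```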

  6. Importance Sampling • Cannot sample directly from p(x) • Instead sample from a known importance density q(x), where p(x) > 0 implies q(x) > 0 (the support of q covers that of p) • Estimate I from samples x^i ~ q(x) and importance weights: I ≈ (1/N) Σ_i w(x^i) g(x^i), where w(x^i) = p(x^i) / q(x^i)
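The same estimate via importance sampling, again with hypothetical densities: target p = N(0, 1), importance density q = N(0, 4) (whose support covers p's), and g(x) = x²:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

N = 100_000
x = rng.normal(0.0, 2.0, size=N)                    # x^i ~ q = N(0, 2^2)
w = norm.pdf(x, 0.0, 1.0) / norm.pdf(x, 0.0, 2.0)   # w(x^i) = p(x^i)/q(x^i)
I_hat = np.mean(w * x**2)                           # (1/N) * sum_i w(x^i) g(x^i)
print(I_hat)                                        # ~1.0
```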

  7. Sequential Importance Sampling (SIS) • Iteratively represent the posterior density by random samples with associated weights: p(x_k | y_{1:k}) ≈ Σ_i w_k^i δ(x_k − x_k^i) • Weight recursion: w_k^i ∝ w_{k-1}^i p(y_k | x_k^i) p(x_k^i | x_{k-1}^i) / q(x_k^i | x_{k-1}^i, y_k) • Assumptions: x_k is a hidden Markov process; the y_k are conditionally independent given x_k
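A sketch of one SIS cycle implementing the recursion above; the four callables (importance sampler and density, transition density, likelihood) are placeholders for whatever model is at hand:

```python
import numpy as np

def sis_step(particles, weights, y, sample_q, q_pdf, trans_pdf, lik_pdf):
    """One SIS cycle:
    w_k^i ∝ w_{k-1}^i * p(y_k|x_k^i) p(x_k^i|x_{k-1}^i) / q(x_k^i|x_{k-1}^i, y_k)
    """
    new = sample_q(particles, y)                  # x_k^i ~ q(. | x_{k-1}^i, y_k)
    weights = weights * lik_pdf(y, new) * trans_pdf(new, particles) \
                      / q_pdf(new, particles, y)
    return new, weights / weights.sum()           # normalize
```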

  8. Degeneracy • Variance of the sample weights increases with time if the importance density is not optimal [Doucet 2000] • Within a few cycles, all but one particle will have negligible weight • The PF then keeps updating particles that contribute little to approximating the posterior • N_eff, an estimate of the effective sample size [Kong et al. 1994]: N_eff ≈ 1 / Σ_i (w_k^i)^2
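The N_eff estimate in code, with two illustrative weight vectors:

```python
import numpy as np

def effective_sample_size(weights):
    """N_eff ≈ 1 / sum_i (w_k^i)^2 for normalized weights [Kong et al. 1994]."""
    return 1.0 / np.sum(weights**2)

# Uniform weights give N_eff = N; one dominant weight drives N_eff toward 1.
print(effective_sample_size(np.full(100, 0.01)))                    # 100.0
print(effective_sample_size(np.array([0.97] + [0.03 / 99] * 99)))   # ~1.06
```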

  9. Optimal Importance Density [Doucet et al. 2000] • q(x_k | x_{k-1}^i, y_k) = p(x_k | x_{k-1}^i, y_k) minimizes the variance of the importance weights, preventing degeneracy • Rarely possible to sample from; instead one often uses the transitional prior, q(x_k | x_{k-1}^i, y_k) = p(x_k | x_{k-1}^i)

  10. Resampling • Generate a new set of samples from the discrete approximation p(x_k | y_{1:k}) ≈ Σ_i w_k^i δ(x_k − x_k^i) • Weights are equal (1/N) after i.i.d. sampling • O(N) complexity • Coupled with SIS, these are the two key components of a PF
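A sketch of the simplest (multinomial) scheme; strictly O(N) variants such as systematic resampling exist, but the i.i.d. version matches the slide's description:

```python
import numpy as np

def multinomial_resample(particles, weights, rng):
    """Draw N i.i.d. indices from the weighted discrete approximation,
    then reset all weights to 1/N."""
    N = len(weights)
    idx = rng.choice(N, size=N, p=weights)      # i.i.d. sampling by weight
    return particles[idx], np.full(N, 1.0 / N)  # equal weights afterwards
```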

  11. Sample Impoverishment • Resampling yields a set of particles with low diversity • Particles with high weights are selected many times, producing duplicates

  12. Sampling Importance Resampling (SIR) [Gordon et al. 1993] • Importance density is the transitional prior, q(x_k | x_{k-1}^i, y_k) = p(x_k | x_{k-1}^i) • Resampling at every time step

  13. SIR Pros and Cons • Pro: importance density and weight updates are easy to evaluate • Con: the current observation is not used when propagating the state to the next time step

  14. A Cycle of SIR
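The cycle shown on this slide, sketched in Python; `propagate` and `likelihood` are placeholders for the model at hand:

```python
import numpy as np

def sir_cycle(particles, y, propagate, likelihood, rng):
    """One SIR cycle [Gordon et al. 1993]: `propagate` samples the
    transitional prior p(x_k | x_{k-1}); `likelihood` evaluates p(y_k | x_k)."""
    # 1. Importance sampling from the transitional prior
    particles = propagate(particles, rng)
    # 2. With q = transitional prior, weights reduce to the likelihood
    w = likelihood(y, particles)
    w = w / w.sum()
    # 3. Resample every time step; weights return to 1/N
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]
```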

  15. Auxiliary SIR - Motivation [Pitt and Shephard 1999] • Want to use the observation y_k when exploring the state space (the x_k^i's) • To have particles in regions of high likelihood • Incorporate y_k into resampling at time k-1 • Looking one step ahead to choose particles

  16. ASIR - from SIR • From SIR we had p(x_k | y_{1:k}) ∝ p(y_k | x_k) Σ_i w_{k-1}^i p(x_k | x_{k-1}^i) • If we move the likelihood inside the sum we get p(x_k | y_{1:k}) ∝ Σ_i w_{k-1}^i p(y_k | x_k) p(x_k | x_{k-1}^i) • We don't have x_k yet, though • Use μ_k^i, a characterization of x_k given x_{k-1}^i • such as the mean μ_k^i = E[x_k | x_{k-1}^i] or a sample μ_k^i ~ p(x_k | x_{k-1}^i)

  17. ASIR continued • So the first-stage (auxiliary) weights become λ^i ∝ w_{k-1}^i p(y_k | μ_k^i), used to resample the parent indices i^j • And the new importance weight becomes w_k^j ∝ p(y_k | x_k^j) / p(y_k | μ_k^{i^j})
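A sketch of one ASIR cycle implementing the two stages above; `mu(particles)` returns the characterizations μ_k^i (e.g. transition means), and all callables are placeholders:

```python
import numpy as np

def asir_cycle(particles, weights, y, mu, propagate, likelihood, rng):
    """One ASIR cycle [Pitt and Shephard 1999]."""
    N = len(particles)
    mu_k = mu(particles)
    # First stage: resample parent indices using the *predicted* likelihood,
    # so the observation y_k is used before propagating the state.
    lam = weights * likelihood(y, mu_k)
    lam = lam / lam.sum()
    idx = rng.choice(N, size=N, p=lam)
    # Second stage: propagate the chosen parents and correct the weights by
    # the ratio of the true to the predicted likelihood.
    new = propagate(particles[idx], rng)
    w = likelihood(y, new) / likelihood(y, mu_k[idx])
    return new, w / w.sum()
```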

  18. ASIR Pros & Cons • Pro • Can be less sensitive to peaked likelihoods and outliers by using the observation • Outliers - model-improbable states that can result in a dramatic loss of high-weight particles • Cons • Added computation per cycle • If μ_k^i is a bad characterization of p(x_k | x_{k-1}^i) (i.e., large process noise), then resampling suffers and performance can degrade

  19. Simulation: Linear • System equations: a linear-Gaussian state-space model with process noise v ~ N(0, 6) and measurement noise w ~ N(0, 5)
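The slide's exact equations are not preserved in the transcript; as a stand-in, here is a minimal simulation of a hypothetical linear-Gaussian model, where only the noise variances (6 and 5) come from the slide and the coefficients `a`, `h` are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

a, h = 0.9, 1.0   # assumed state and measurement coefficients (hypothetical)
T = 100
x = np.zeros(T)
y = np.zeros(T)
for k in range(1, T):
    x[k] = a * x[k - 1] + rng.normal(0.0, np.sqrt(6.0))   # process noise v ~ N(0, 6)
    y[k] = h * x[k] + rng.normal(0.0, np.sqrt(5.0))       # measurement noise w ~ N(0, 5)
```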

  20. Simulation: Linear, 10 Samples

  21. Simulation: Linear, 50 Samples

  22. Simulation: Linear • Table 1: Mean Squared Error per Time Step

  23. Simulation: Nonlinear • System equations: a nonlinear state-space model with process noise v ~ N(0, 6) and measurement noise w ~ N(0, 5)
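Again the exact equations are not in the transcript; a common nonlinear benchmark from the PF literature (the univariate growth model used in Gordon et al. 1993) is shown as a plausible stand-in, with the slide's noise variances:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in dynamics, not confirmed to be the presentation's model
T = 100
x = np.zeros(T)
y = np.zeros(T)
for k in range(1, T):
    x[k] = (0.5 * x[k - 1] + 25.0 * x[k - 1] / (1.0 + x[k - 1] ** 2)
            + 8.0 * np.cos(1.2 * k) + rng.normal(0.0, np.sqrt(6.0)))  # v ~ N(0, 6)
    y[k] = x[k] ** 2 / 20.0 + rng.normal(0.0, np.sqrt(5.0))           # w ~ N(0, 5)
```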

  24. Simulation: Nonlinear, 10 Samples

  25. Simulation: Nonlinear, 50 Samples

  26. Simulation: Nonlinear, 100 Samples

  27. Simulation: Nonlinear, 1000 Samples

  28. Simulation: Nonlinear • Table 2: Mean Squared Error per Time Step

  29. Conclusion • PF approaches the KF's optimal estimates as N → ∞ (in the linear-Gaussian case) • PF outperforms the EKF for nonlinear systems • ASIR generates ‘better’ particles in certain conditions by incorporating the observation • PF is applicable to a broad class of system dynamics • Simulation approaches have their own limitations • Degeneracy and sample impoverishment

  30. Conclusion (2) • Particle filters are composed of SIS and resampling • Many variations exist to improve efficiency (both computationally and in generating ‘better’ particles) • Other PFs: Regularized PF, (EKF/UKF)+PF, etc.
