
Kalman/Particle Filters Tutorial




  1. Kalman/Particle Filters Tutorial Haris Baltzakis, November 2004

  2. Problem Statement • Examples • A mobile robot moving within its environment • A vision-based system tracking cars on a highway • Common characteristics • A state that changes dynamically • The state cannot be observed directly • Uncertainty due to noise in the state, in the way the state changes, and in the observations

  3. A Dynamic System • Most commonly available: • Initial state • Observations • System (motion) model • Measurement (observation) model

  4. Filters • Goal: compute the hidden state from the observations • "Filter": terminology from signal processing; a filter can be considered a data-processing algorithm • Classification: discrete time vs. continuous time • Desirable properties: sensor fusion, robustness to noise • Wanted: each filter to be optimal in some sense

  5. Example: Navigating Robot with Odometry • Input: motion model according to odometry or INS; observation model according to sensor measurements • Localization -> inference task • Mapping -> learning task

  6. Bayesian Estimation • Bayesian estimation: attempt to construct the posterior distribution of the state given all measurements • Inference task (localization): compute the probability that the system is at state z at time t given all observations up to time t • Note: the state depends only on the previous state (first-order Markov assumption)
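The recursion behind this slide can be written out explicitly. Using the common notation x_t for the state and z_{1:t} for all observations up to time t, Bayes' rule plus the first-order Markov assumption give:

```latex
\underbrace{p(x_t \mid z_{1:t})}_{\text{posterior}}
\;\propto\;
\underbrace{p(z_t \mid x_t)}_{\text{observation model}}
\int
\underbrace{p(x_t \mid x_{t-1})}_{\text{motion model}}
\;
\underbrace{p(x_{t-1} \mid z_{1:t-1})}_{\text{previous posterior}}
\, dx_{t-1}
```

The integral is the prediction step; multiplying by the observation likelihood and normalizing is the update step described on the next slide.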

  7. Recursive Bayes Filter • Two steps: prediction step and update step • Advantages over batch processing: online computation, faster, less memory, easy adaptation • Example: two states A, B
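The two-state example can be sketched directly as a discrete Bayes filter. The transition and observation probabilities below are illustrative values, not taken from the slides:

```python
# Minimal discrete Bayes filter over two states A and B.
# Transition and observation probabilities are made-up illustrative values.

def bayes_filter_step(belief, transition, likelihood):
    """One prediction + update step of a discrete Bayes filter.

    belief:     {state: P(state)} prior belief
    transition: {(prev, next): P(next | prev)} motion model
    likelihood: {state: P(observation | state)} for the current observation
    """
    states = list(belief)
    # Prediction step: push the belief through the motion model.
    predicted = {s: sum(transition[(p, s)] * belief[p] for p in states)
                 for s in states}
    # Update step: weight by the observation likelihood and normalize.
    unnorm = {s: likelihood[s] * predicted[s] for s in states}
    total = sum(unnorm.values())
    return {s: v / total for s, v in unnorm.items()}

belief = {"A": 0.5, "B": 0.5}                      # uniform prior
transition = {("A", "A"): 0.7, ("A", "B"): 0.3,    # P(next | prev)
              ("B", "A"): 0.3, ("B", "B"): 0.7}
likelihood = {"A": 0.9, "B": 0.2}                  # sensor reading favors A
belief = bayes_filter_step(belief, transition, likelihood)
print(belief)
```

Because the whole belief is just two numbers, the online advantage is visible: each step touches only the current belief, never the full measurement history.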

  8. Recursive Bayes Filter: Implementations • How is the prior distribution represented? How is the posterior distribution calculated? • Continuous representation: Gaussian distributions -> Kalman filters (Kalman, 1960) • Discrete representation: HMMs, solved numerically • Grid: (dynamic) grid-based approaches (e.g. Markov localization - Burgard, 1998) • Samples: particle filters (e.g. Monte Carlo localization - Fox, 1999)

  9. Example: State Representations for Robot Localization • Grid-based approaches (Markov localization) • Particle filters (Monte Carlo localization) • Kalman tracking

  10. Example: Localization – Grid Based • Initialize the grid (uniformly or according to prior knowledge) • At each time step, for each grid cell: • Use the observation model to compute the measurement likelihood in that cell • Use the motion model and the previous cell probabilities to compute the predicted probability • Normalize
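The grid loop above can be sketched in one dimension. The corridor map and sensor reliabilities below are invented for illustration (a robot sensing doors in a cyclic corridor, with perfect odometry for brevity):

```python
import numpy as np

# 1-D grid localization sketch: a robot in a cyclic corridor of 10 cells,
# sensing whether the current cell contains a door. The map and sensor
# reliabilities are illustrative assumptions.
doors = np.array([1, 0, 0, 1, 0, 0, 0, 1, 0, 0])  # hypothetical map
p_hit = 0.8    # P(sensor reads "door" | door present)
p_false = 0.1  # P(sensor reads "door" | no door)

belief = np.full(10, 0.1)  # initialize the grid uniformly

def grid_step(belief, moved, saw_door):
    # Motion model: shift the belief by `moved` cells (noise-free here;
    # real code would convolve with an odometry-noise kernel).
    belief = np.roll(belief, moved)
    # Observation model: weight each cell by the measurement likelihood.
    like = np.where(doors == 1, p_hit, p_false) if saw_door \
           else np.where(doors == 1, 1 - p_hit, 1 - p_false)
    belief = belief * like
    return belief / belief.sum()  # normalize

belief = grid_step(belief, moved=1, saw_door=True)
print(belief)  # probability mass concentrates on the door cells
```

After one "door" observation the belief is multi-modal (one mode per door cell), which is exactly the ambiguity-representing property credited to grid approaches later in the talk.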

  11. Kalman Filters – Equations • Process dynamics (motion model): x_t = A·x_{t-1} + w • Measurements (observation model): z_t = C·x_t + v • Where: A: state transition matrix (n × n), C: measurement matrix (m × n), w: process noise (∈ Rⁿ), v: measurement noise (∈ Rᵐ)

  12. Kalman Filters – Update • Predict: x̂⁻_t = A·x̂_{t-1}, P⁻_t = A·P_{t-1}·Aᵀ + Q • Compute gain: K_t = P⁻_t·Cᵀ·(C·P⁻_t·Cᵀ + R)⁻¹ • Compute innovation: z_t − C·x̂⁻_t • Update: x̂_t = x̂⁻_t + K_t·(z_t − C·x̂⁻_t), P_t = (I − K_t·C)·P⁻_t (Q, R: covariances of w and v)
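The predict/gain/innovation/update cycle above, written out with NumPy for the linear model x_t = A·x_{t-1} + w, z_t = C·x_t + v. This is a minimal 1-D constant-velocity tracking sketch; the noise covariances Q and R are illustrative choices, not from the slides:

```python
import numpy as np

# Linear Kalman filter: state x = [position, velocity], measuring position.
# A and C follow the slide's notation; Q and R are illustrative covariances.
dt = 1.0
A = np.array([[1.0, dt], [0.0, 1.0]])  # state transition (n x n)
C = np.array([[1.0, 0.0]])             # measurement matrix (m x n)
Q = 0.01 * np.eye(2)                   # process noise covariance
R = np.array([[0.5]])                  # measurement noise covariance

def kalman_step(x, P, z):
    # Predict
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Compute gain
    S = C @ P_pred @ C.T + R
    K = P_pred @ C.T @ np.linalg.inv(S)
    # Compute innovation
    innovation = z - C @ x_pred
    # Update
    x_new = x_pred + K @ innovation
    P_new = (np.eye(2) - K @ C) @ P_pred
    return x_new, P_new

x, P = np.array([0.0, 0.0]), np.eye(2)
for z in [np.array([1.1]), np.array([2.0]), np.array([2.9])]:
    x, P = kalman_step(x, P, z)
print(x)  # estimate moves toward position ~3 with velocity ~1
```

Note that the covariance update never looks at the measurements themselves, only at their noise statistics: the gain schedule could be computed offline, which is part of why the filter is so cheap.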

  13.–18. Kalman Filter – Example (figure slides: a tracking example stepped through Predict, Compute Innovation, Compute Gain, and Update over successive time steps)

  19. Non-Linear Case • The Kalman filter assumes that the system and measurement processes are linear • Extended Kalman Filter -> linearize about the current estimate

  20. Example: Localization – EKF • Initialize state: Gaussian distribution centered according to prior knowledge, with large variance • At each time step: • Use the previous state and the motion model to predict the new state (the mean of the Gaussian changes, the variance grows) • Compare observations with what you expected to see from the predicted state; compute the Kalman innovation/gain • Use the Kalman gain to update the prediction

  21. Extended Kalman Filter • Initial prediction • Project the state estimate forward (prediction step) • Predict the measurements • Compute the Kalman innovation • Compute the Kalman gain • Update
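The steps above can be sketched for a scalar problem with a nonlinear measurement: a robot on a line measuring its range to a landmark at a known position. Everything here (the landmark position, noise variances, and the additive motion model) is an illustrative assumption; the point is where the Jacobian of h replaces the C matrix:

```python
# EKF sketch: 1-D robot, linear motion, nonlinear measurement
# h(x) = |landmark - x| (range to a landmark at a known position).
# All model parameters are illustrative assumptions.
landmark = 10.0
Q, R = 0.05, 0.2   # process / measurement noise variances

def ekf_step(x, P, u, z):
    # 1. Project the state estimate forward (prediction step)
    x_pred = x + u                 # simple additive motion model
    P_pred = P + Q
    # 2. Predict the measurement, 3. compute the innovation
    h = abs(landmark - x_pred)
    innovation = z - h
    # Jacobian of h w.r.t. x, evaluated at the prediction
    # (this plays the role of C in the linear filter)
    H = -1.0 if landmark >= x_pred else 1.0
    # 4. Compute the Kalman gain
    S = H * P_pred * H + R
    K = P_pred * H / S
    # 5. Update
    x_new = x_pred + K * innovation
    P_new = (1 - K * H) * P_pred
    return x_new, P_new

x, P = 0.0, 1.0                      # initial prediction
x, P = ekf_step(x, P, u=1.0, z=8.7)  # moved ~1, measured range ~8.7
print(x, P)
```

The only difference from the linear filter is that the measurement prediction uses the full nonlinear h while the gain uses its linearization H; when h is strongly nonlinear over the spread of P, this approximation is what breaks.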

  22. EKF – Example: motion model for a mobile robot • Synchro-drive robot • Model range, drift, and turn errors

  23. Particle Filters • Often the models are non-linear and the noise is non-Gaussian • Use particles (samples) to represent the distribution • "Survival of the fittest" • Motion model -> proposal distribution • Observation model -> weight

  24. Particle Filters: SIS-R Algorithm • Initialize particles randomly (uniformly or according to prior knowledge) • At each time step: • Sequential importance sampling, for each particle: • Use the motion model to predict the new pose (sample from the transition prior) • Use the observation model to assign a weight to each particle (posterior/proposal) • Selection (re-sampling): create a new set of equally weighted particles by sampling the distribution of the weighted particles produced in the previous step
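The SIS-R loop above as a minimal 1-D sketch. The motion command, sensor model, and noise parameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# SIS-R particle filter sketch: 1-D robot, noisy odometry, noisy
# position sensor. All model parameters are illustrative assumptions.
N = 500
particles = rng.uniform(0.0, 10.0, N)   # initialize randomly (uniform)

def pf_step(particles, u, z, motion_std=0.2, meas_std=0.5):
    # Predict: sample each particle from the transition prior.
    particles = particles + u + rng.normal(0.0, motion_std, len(particles))
    # Weight: observation likelihood of each particle (Gaussian sensor).
    w = np.exp(-0.5 * ((z - particles) / meas_std) ** 2)
    w /= w.sum()
    # Re-sample: draw a new, equally weighted set from the weighted one.
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]

particles = pf_step(particles, u=1.0, z=4.0)  # moved +1, sensed position 4
print(particles.mean())  # the cloud collapses toward the measurement
```

"Survival of the fittest" is the re-sampling line: particles with high observation likelihood are duplicated, poorly matching ones die out.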

  25. Particle Filters – Example 1

  26. Particle Filters – Example 1 Use motion model to predict new pose (move each particle by sampling from the transition prior)

  27. Particle Filters – Example 1 Use measurement model to compute weights (weight: observation probability)

  28. Particle Filters – Example 1 Resample
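The re-sampling step shown here is often implemented with low-variance (systematic) re-sampling rather than independent draws: one random offset, then evenly spaced pointers through the cumulative weights. This variant is a standard alternative, not necessarily the one used in the original slides:

```python
import numpy as np

def systematic_resample(particles, weights, rng=None):
    """Low-variance (systematic) re-sampling: a single random offset,
    then n evenly spaced pointers through the cumulative weights."""
    rng = rng or np.random.default_rng(0)
    n = len(particles)
    positions = (rng.random() + np.arange(n)) / n
    cumsum = np.cumsum(weights)
    cumsum[-1] = 1.0  # guard against floating-point round-off
    idx = np.searchsorted(cumsum, positions)
    return particles[idx]

p = np.array([0.0, 1.0, 2.0, 3.0])
w = np.array([0.1, 0.1, 0.7, 0.1])
print(systematic_resample(p, w))  # the high-weight particle is duplicated
```

Compared with independent multinomial draws, this touches the random number generator once per step and guarantees that a particle with weight w_i appears either floor(n·w_i) or ceil(n·w_i) times, which reduces re-sampling variance.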

  29. Particle Filters – Example 2 Initialize particles uniformly

  30.–34. Particle Filters – Example 2 (figure slides showing successive predict, weight, and re-sample iterations of the filter)

  35. Continuous State Approaches • Perform very accurately if the inputs are precise (in the linear case, performance is optimal with respect to any criterion) • Computationally efficient • Require that the initial state is known • Unable to recover from catastrophic failures • Unable to track multiple hypotheses about the state (a Gaussian has only one mode)

  36. Discrete State Approaches • Able (to some degree) to operate even when the initial pose is unknown (start from a uniform distribution) • Able to deal with noisy measurements • Able to represent ambiguities (multi-modal distributions) • Computational cost scales heavily with the number of possible states (dimensionality of the grid, number of samples, size of the map) • Accuracy is limited by the size of the grid cells / the number of particles and the sampling method • The required number of particles is unknown

  37. Thanks for your attention!
