
Inferring High-Level Behavior from Low-Level Sensors


Presentation Transcript


  1. Inferring High-Level Behavior from Low-Level Sensors Don Patterson, Lin Liao, Dieter Fox, Henry Kautz Published in UBICOMP 2003 ICS 280

  2. Main References • Voronoi Tracking: Location Estimation Using Sparse and Noisy Sensor Data (Liao L., Fox D., Hightower J., Kautz H., Schulz D.) – in International Conference on Intelligent Robots and Systems (IROS), 2003 • Inferring High-Level Behavior from Low-Level Sensors (Patterson D., Liao L., Fox D., Kautz H.) – in UBICOMP 2003 • Learning and Inferring Transportation Routines (Liao L., Fox D., Kautz H.) – in AAAI 2004

  3. Outline • Motivation • Problem Definition • Modeling and Inference • Dynamic Bayesian Networks • Particle Filtering • Learning • Results • Conclusions

  4. Motivation • ACTIVITY COMPASS – software that indirectly monitors your activity and offers proactive advice to help you successfully accomplish inferred plans • Healthcare Monitoring • Automated Planning • Context-Aware Computing Support

  5. Research Goal • To bridge the gap between sensor data and symbolic reasoning. • To allow sensor data to help interpret symbolic knowledge. • To allow symbolic knowledge to aid sensor interpretation.

  6. Executive Summary • GPS data collection • 3 months, 1 user’s daily life • Inference Engine • Infers location and transportation “mode” on-line in real-time • Learning • Transportation patterns • Results • Better predictions • Conceptual understanding of routines

  7. Outline • Motivation • Problem Definition • Modeling and Inference • Dynamic Bayesian Networks • Particle Filtering • Learning • Results • Conclusions

  8. Tracking on a Graph • Tracking a person’s location and mode of transportation using street maps and GPS sensor data. • Formally, the world is modeled as a graph G = (V,E), where: • V is a set of vertices = intersections • E is a set of directed edges = roads / foot paths
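A minimal Python sketch of what this street graph might look like in code; the Edge class and its fields (start, end, length_m) are invented here for illustration, not taken from the paper:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Edge:
    """A directed road / foot-path segment between two intersections."""
    start: int       # vertex id of the starting intersection
    end: int         # vertex id of the ending intersection
    length_m: float  # segment length in meters

# Graph G = (V, E): vertices are intersections, edges are road segments.
vertices = {0, 1, 2}
edges = [Edge(0, 1, 120.0), Edge(1, 2, 85.0), Edge(1, 0, 120.0)]

# Outgoing edges indexed by their start vertex; used when a tracked
# hypothesis reaches the end of its current edge and must pick the next one.
out_edges = {}
for e in edges:
    out_edges.setdefault(e.start, []).append(e)
```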

  9. Example

  10. Outline • Motivation • Problem Definition • Modeling and Inference • Dynamic Bayesian Networks • Particle Filtering • Learning • Results • Conclusions

  11. State Space X = ‹Ls, Lp, V, Ox, Oy, M› • Location L = ‹Ls, Lp› • Ls: which street the user is on • Lp: position on that street • Velocity V • GPS offset error O = ‹Ox, Oy› • Transportation mode M ∈ {BUS, CAR, FOOT}
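One way to encode this state vector in code (a hedged Python sketch; the State dataclass, the Mode enum, and the field names are illustrative assumptions, not the paper's implementation):

```python
from dataclasses import dataclass
from enum import Enum

class Mode(Enum):
    BUS = "bus"
    CAR = "car"
    FOOT = "foot"

@dataclass
class State:
    """One hypothesis X = <Ls, Lp, V, Ox, Oy, M> about the user."""
    edge_id: int     # Ls: which street (graph edge) the user is on
    position: float  # Lp: distance traveled along that edge, in meters
    velocity: float  # V: speed along the edge, in m/s
    offset_x: float  # Ox: GPS offset error, east-west, in meters
    offset_y: float  # Oy: GPS offset error, north-south, in meters
    mode: Mode       # M: transportation mode

# Example: on foot, 30 m along edge 7, moving at walking speed.
x = State(edge_id=7, position=30.0, velocity=1.4,
          offset_x=2.5, offset_y=-1.0, mode=Mode.FOOT)
```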

  12. Dynamic Bayesian Networks • Extension of a Markov Model • Statistical model which handles • Sensor Error • Enormous but Structured State Spaces • Probabilistic • Temporal • A single framework to manage all levels of abstraction

  13. Model (I)

  14. Model (II)

  15. Model (III)

  16. Dependencies

  17. Inference We want to compute the posterior density:
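The density itself appeared as an image on the slide; the standard Bayes-filter posterior it refers to, reconstructed here rather than copied from the deck, is

p(x_t \mid z_{1:t}) \;\propto\; p(z_t \mid x_t) \int p(x_t \mid x_{t-1})\, p(x_{t-1} \mid z_{1:t-1})\, dx_{t-1}

where x_t = ‹l_t, v_t, o_t, m_t› is the state of slide 11 and z_{1:t} are the GPS readings up to time t.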

  18. Inference • Particle Filtering • A Technique for Solving DBNs • Approximate Solutions • Stochastic / Monte Carlo • In our case, a particle represents an instantiation of the random variables describing: • the transportation mode: mt • the location: lt (actually the edge et) • the velocity: vt

  19. Particle Filtering • Step 1 (SAMPLING) • Draw n samples xt-1 from the previous set St-1 and generate n new samples xt according to the dynamics p(xt|xt-1) (i.e. the motion model) • Step 2 (IMPORTANCE SAMPLING) • Assign each sample xt an importance weight according to the likelihood of the observation zt: ωt ∝ p(zt|xt) • Step 3 (RE-SAMPLING) • Draw samples with replacement according to the distribution defined by the importance weights ωt
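A minimal sequential importance resampling (SIR) sketch of these three steps in Python, written over a toy one-dimensional state so it stays self-contained; the motion_model and likelihood functions below are stand-ins, not the paper's models:

```python
import math
import random

def motion_model(x):
    """Sample x_t ~ p(x_t | x_{t-1}): constant drift plus Gaussian noise."""
    return x + 1.0 + random.gauss(0.0, 0.5)

def likelihood(z, x):
    """p(z_t | x_t): Gaussian observation noise, GPS-style."""
    return math.exp(-0.5 * ((z - x) / 2.0) ** 2)

def particle_filter_step(particles, z):
    # Step 1 (sampling): propagate every particle through the dynamics.
    proposed = [motion_model(x) for x in particles]
    # Step 2 (importance sampling): weight each particle by the
    # observation likelihood and normalize.
    weights = [likelihood(z, x) for x in proposed]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Step 3 (re-sampling): draw with replacement according to the weights.
    return random.choices(proposed, weights=weights, k=len(particles))

particles = [random.gauss(0.0, 1.0) for _ in range(1000)]
for z in [1.2, 2.1, 2.9, 4.2]:                 # simulated noisy readings
    particles = particle_filter_step(particles, z)
print(sum(particles) / len(particles))          # posterior mean estimate
```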

  20. Motion Model – p(xt|xt-1) • Advancing particles along the graph G • Sample the transportation mode mt from the distribution p(mt|mt-1,et-1) • Sample the velocity vt from the density p(vt|mt) (a mixture of Gaussian densities – see picture) • Sample the location using the current velocity: • draw at random the traveled distance d (from a Gaussian density centered at vt). If the distance implies an edge transition, select the next edge et with probability p(et|et-1,mt-1); otherwise stay on the same edge (et = et-1)
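A hedged Python sketch of one draw from this motion model; the edge lengths, transition tables, and per-mode velocity parameters below are invented for illustration (the paper learns the transition tables and models velocity with a mixture of Gaussians):

```python
import random

# Hypothetical stand-ins for the learned tables p(mt|mt-1,et-1) and
# p(et|et-1,mt-1); edge ids, lengths, and velocity parameters are made up.
EDGE_LENGTH = {"A": 120.0, "B": 85.0}                     # meters
NEXT_EDGES  = {"A": ["B"], "B": ["A"]}                    # outgoing edges
MODE_TRANS  = {"FOOT": {"FOOT": 0.95, "BUS": 0.03, "CAR": 0.02},
               "BUS":  {"FOOT": 0.03, "BUS": 0.97, "CAR": 0.00},
               "CAR":  {"FOOT": 0.03, "BUS": 0.00, "CAR": 0.97}}
VELOCITY    = {"FOOT": (1.4, 0.4), "BUS": (8.0, 3.0), "CAR": (12.0, 5.0)}  # m/s

def sample_discrete(dist):
    keys = list(dist)
    return random.choices(keys, weights=[dist[k] for k in keys], k=1)[0]

def advance(edge, pos, mode, dt=1.0):
    """One sample from the motion model p(xt | xt-1) on the street graph."""
    # 1. Sample the new transportation mode mt ~ p(mt | mt-1, et-1).
    mode = sample_discrete(MODE_TRANS[mode])
    # 2. Sample a velocity vt ~ p(vt | mt); a single Gaussian per mode here,
    #    whereas the paper uses a mixture of Gaussians.
    mu, sigma = VELOCITY[mode]
    vel = max(0.0, random.gauss(mu, sigma))
    # 3. Sample the traveled distance from a Gaussian centered at vt*dt and
    #    advance along the graph, switching edges when running off the end
    #    (a uniform choice stands in for p(et | et-1, mt-1) here).
    pos += max(0.0, random.gauss(vel * dt, 1.0))
    while pos > EDGE_LENGTH[edge]:
        pos -= EDGE_LENGTH[edge]
        edge = random.choice(NEXT_EDGES[edge])
    return edge, pos, vel, mode

print(advance("A", 30.0, "FOOT"))
```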

  21. Animation • Play short video clip

  22. Outline • Motivation • Problem Definition • Modeling and Inference • Dynamic Bayesian Networks • Particle Filtering • Learning • Results • Conclusions

  23. Learning • We want to learn from history the components of the motion model: • p(et|et-1,mt-1) – the edge transition probability on the graph, conditioned on the mode of transportation just prior to transitioning to the new edge • p(mt|mt-1,et-1) – the transportation mode transition probability; it depends on the previous mode mt-1 and the person’s location, described by the edge et-1 • Use a Monte Carlo version of the EM algorithm

  24. Learning • Each iteration performs both a forward and a backward (in time) particle filtering pass. • In each forward and backward filtering step the algorithm counts the number of particles transitioning between the different edges and modes. • To obtain probabilities for the different transitions, the counts of the forward and backward passes are normalized and then multiplied at the corresponding time slices.

  25. Implementation Details (I) • αt(et,mt) • the number of particles on edge et in mode mt at time t in the forward pass of particle filtering • βt(et,mt) • the number of particles on edge et in mode mt at time t in the backward pass of particle filtering • ξt-1(et,et-1,mt-1) • the probability of transitioning from edge et-1 to edge et at time t-1 while in mode mt-1 • ψt-1(mt,mt-1,et-1) • the probability of transitioning from mode mt-1 to mode mt on edge et-1 at time t-1
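The defining equations (5) and (6) were on the slide images. A plausible Baum–Welch-style reconstruction, consistent with the "normalize, then multiply forward and backward counts" description on slide 24 but offered here only as an assumption, is

\xi_{t-1}(e_t, e_{t-1}, m_{t-1}) \;\propto\; \alpha_{t-1}(e_{t-1}, m_{t-1})\; p(e_t \mid e_{t-1}, m_{t-1}) \sum_{m_t} \beta_t(e_t, m_t)

with ψt-1(mt,mt-1,et-1) formed analogously from α, the mode transition probability p(mt|mt-1,et-1), and β.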

  26. Implementation Details (II) After we have ξt-1 and ψt-1 for all t from 2 to T, we can update the parameters as:
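Equations (7) and (8) were likewise on the slide image; assuming the standard EM form (normalized expected transition counts), the updates amount to

p(e_t = j \mid e_{t-1} = i,\, m_{t-1} = m) \;\leftarrow\; \frac{\sum_{t=2}^{T} \xi_{t-1}(j, i, m)}{\sum_{t=2}^{T} \sum_{j'} \xi_{t-1}(j', i, m)}
\qquad
p(m_t = n \mid m_{t-1} = m,\, e_{t-1} = i) \;\leftarrow\; \frac{\sum_{t=2}^{T} \psi_{t-1}(n, m, i)}{\sum_{t=2}^{T} \sum_{n'} \psi_{t-1}(n', m, i)}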

  27. Implementation details (III)

  28. E-step • Generate n uniformly distributed samples • Perform forward particle filtering • Sampling: generate n new samples from the existing ones using the current parameter estimates p(et|et-1,mt-1) and p(mt|mt-1,et-1). • Re-weight each sample, re-sample, count and save αt(et,mt). • Move to the next time slice (t = t+1). • Perform backward particle filtering • Sampling: generate n new samples from the existing ones using the backward parameter estimates p(et-1|et,mt) and p(mt-1|mt,et). • Re-weight each sample, re-sample, count and save βt(et,mt). • Move to the previous time slice (t = t-1).

  29. M-step • Compute ξt-1(et,et-1,mt-1) and ψt-1(mt,mt-1,et-1) using (5) and (6), then normalize. • Update p(et|et-1,mt-1) and p(mt|mt-1,et-1) using (7) and (8). • LOOP: repeat the E-step and M-step with the updated parameters until the model converges.
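A compact Python sketch of the count-and-normalize update for p(et|et-1,mt-1); the expected counts xi below are made-up placeholders for what the forward/backward particle counts of the E-step would produce:

```python
from collections import defaultdict

# xi[(e_prev, mode)][e_next]: expected number of particles that moved from
# e_prev to e_next while in `mode`, summed over all time slices.
xi = {
    ("A", "FOOT"): {"A": 180.0, "B": 20.0},
    ("B", "BUS"):  {"A": 35.0,  "B": 265.0},
}

# M-step: turn expected counts into the edge transition table
# p(et | et-1, mt-1) by normalizing over the next edge.
edge_trans = defaultdict(dict)
for (e_prev, mode), counts in xi.items():
    total = sum(counts.values())
    for e_next, count in counts.items():
        edge_trans[(e_prev, mode)][e_next] = count / total

print(dict(edge_trans))

# The mode transition table p(mt | mt-1, et-1) is updated the same way from
# the psi counts; the E-step / M-step pair repeats until the parameters
# stop changing.
```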

  30. Outline • Motivation • Problem Definition • Modeling and Inference • Dynamic Bayesian Networks • Particle Filtering • Learning • Results • Conclusions

  31. Dataset • Single user • 3 months of daily life • GPS position and velocity data collected at 2- and 10-second sample intervals • Evaluation data: 29 “trips”, 12 hours of logs • All continuous outdoor data • Divided chronologically into 3 cross-validation groups

  32. Goals • Mode Estimation and Prediction • Learning a motion model that would be able to estimate and predict the mode of transportation at any given instant. • Location Prediction • Learning a motion model that would be able to predict the location of the person into the future.

  33. Results – Mode Estimation

  34. Results – Mode Prediction • Evaluates the ability to predict transitions between transportation modes. • The table shows the accuracy in predicting a qualitative change in transportation mode within 60 seconds of the actual transition (e.g. correctly predicting that the person gets off the bus). • PRECISION: the percentage of predicted transitions that actually occur. • RECALL: the percentage of real transitions that were correctly predicted.
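In standard terms (a restatement, with TP = predicted transitions matched by a real transition within 60 seconds, FP = predicted transitions with no matching real transition, FN = real transitions that were not predicted):

\text{precision} = \frac{TP}{TP + FP} \qquad \text{recall} = \frac{TP}{TP + FN}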

  35. Results – Mode Prediction

  36. Results – Location Prediction

  37. Results – Location Prediction

  38. Conclusions • We developed a single integrated framework to reason about transportation plans • Probabilistic • Successfully manages systematic GPS error • We integrate sensor data with high-level concepts such as bus stops • We developed an unsupervised learning technique which greatly improves performance • Our results show high predictive accuracy and interesting conceptual conclusions
