
Bayesian Indoor Positioning Systems


Presentation Transcript


  1. Bayesian Indoor Positioning Systems
  Presented by: Eiman Elnahrawy
  Joint work with: David Madigan, Richard P. Martin, Wen-Hua Ju, P. Krishnan, A. S. Krishnakumar
  Rutgers University and Avaya Labs. Infocom 2005

  2. Wireless Explosion
  • Technology trends are creating cheap wireless communication in every computing device
  • Device radios offer an "indoors" localization opportunity in 2D/3D
  • A new capability compared to traditional communication networks
  • Can we localize every device that has a radio, using only this existing infrastructure?
  • 3 technology communities:
    • WLAN (802.11x)
    • Sensor networks (ZigBee)
    • Cell carriers (3G)

  3. Radio-based Localization
  • Signal strength decays linearly with log distance in a laboratory setting:
    Sj = b0j + b1j log Dj, where Dj = sqrt((x - xj)^2 + (y - yj)^2)
  • Use triangulation to compute (x, y) » problem solved?
  • Not in real life! Noise, multipath, reflections, systematic errors, etc.
  [Figure: a fingerprint, or RSS vector, e.g. (-80, -67, -50), observed at an unknown location (x?, y?)]
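Under the idealized model on this slide, localization is just inverting the propagation equation and triangulating. A minimal sketch, assuming a natural log, propagation constants shared by all APs, and made-up AP coordinates (none of these come from the slides):

```python
import numpy as np
from scipy.optimize import least_squares

ap = np.array([[0.0, 0.0], [50.0, 0.0], [25.0, 40.0]])  # AP positions in feet (assumed)
b0, b1 = -30.0, -20.0         # assumed propagation constants, shared by all APs
rss = np.array([-80.0, -67.0, -50.0])                   # observed S_j

# Invert S_j = b0 + b1 * log(D_j) to a per-AP distance estimate.
d_hat = np.exp((rss - b0) / b1)

# Triangulate: find the (x, y) minimizing the distance residuals to all APs.
def residuals(p):
    return np.hypot(p[0] - ap[:, 0], p[1] - ap[:, 1]) - d_hat

sol = least_squares(residuals, x0=ap.mean(axis=0))
print("estimated (x, y):", sol.x)
```

In practice this fails for exactly the reasons listed above: the fitted line explains only part of the variance, so the distance estimates (and hence the intersection) can be badly off.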

  4. Supervised Learning-based Systems
  • Training (offline phase): collect "labeled" training data [(x, y), S1, S2, S3, ...]
  • Online phase: match "unlabeled" RSS [(?, ?), S1, S2, S3, ...] against the existing "labeled" training fingerprints
  [Figure: a fingerprint, or RSS vector, e.g. (-80, -67, -50), observed at an unknown location (x?, y?)]
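The simplest instance of this matching step is nearest neighbors in signal space (the idea behind RADAR, cited on the next slide). A sketch with illustrative data:

```python
import numpy as np

# Labeled training fingerprints: (x, y) plus RSS from 3 APs (illustrative values).
train_xy = np.array([[10.0, 5.0], [40.0, 5.0], [10.0, 30.0], [40.0, 30.0]])
train_rss = np.array([[-40, -55, -90], [-60, -56, -80],
                      [-80, -70, -30], [-64, -33, -70]], dtype=float)

def locate(query_rss, k=2):
    """Average the positions of the k training fingerprints closest
    to the query in signal space (Euclidean distance over RSS)."""
    dist = np.linalg.norm(train_rss - query_rss, axis=1)
    return train_xy[np.argsort(dist)[:k]].mean(axis=0)

print(locate(np.array([-45.0, -65.0, -40.0])))   # unlabeled query
```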

  5. Previous Work
  • People have tried almost all existing supervised learning approaches:
    • the well-known RADAR (nearest neighbor)
    • probabilistic, e.g., Bayes a posteriori likelihood
    • Support Vector Machines
    • multi-layer perceptrons
    • ... [Bahl00, Battiti02, Roos02, Youssef03, Krishnan04, ...]
  • All have a major drawback: labeled training fingerprints ("profiling")
    • labor intensive (286 fingerprints in 32 hours!)
    • may need to be repeated over time

  6. Contribution
  • Why not try Bayesian Graphical Models (BGM)? Any performance gains?
  • Performance-wise: comparable
  • Minimal labeled fingerprints
  • Adaptive
  • Simultaneously locates a set of objects
  • In fact, it can be a zero-profiling approach:
    • no "labeled" training data needed
    • unlabeled data can be obtained from existing data traffic

  7. Loocle™
  • General-purpose localization, analogous to general-purpose communication!
  • Can we make finding objects in physical space as easy as Google?

  8. Can this dream come true?
  • We believe we are very close:
    • only existing infrastructure
    • localize with only the radios on existing devices
    • additional resources as performance enhancements
    • ad hoc
    • no more labeled data (our contribution)

  9. Outline
  • Motivations and goals
  • Experimental setup
  • Bayesian background
  • Models of increasing complexity [labeled → unlabeled]
  • Comparison to previous work
  • Conclusion and future work

  10. Experimental Setup
  • 3 office buildings: BR, CA Up, CA Down
  • 802.11b radio
  • Different sessions, different days
  • All give similar performance
  • Focus on BR (no particular reason)
  • BR: 5 access points, 225 ft x 175 ft, 254 measurements

  11. Bayesian Graphical Models
  • Encode dependencies / conditional independence between variables
  • Vertices = random variables; edges = relationships
  • Example: [(X, Y), S], with the AP at (xb, yb)
  • Log-based signal-strength propagation:
    S = b0 + b1 log D
    D = sqrt((X - xb)^2 + (Y - yb)^2)
  [Figure: graph with nodes X, Y → D → S]

  12. Model 1 (Simple): labeled data
  [Figure: graphical model; the position variables (Xi, Yi) feed the distances D1..D5, which feed the observed signal strengths S1..S5; each base station i has unknown parameters b0i, b1i]
  Xi ~ Uniform(0, Length)
  Yi ~ Uniform(0, Width)
  Si ~ N(b0i + b1i log(Di), δi), i = 1, 2, 3, 4, 5
  b0i ~ N(0, 1000), i = 1, 2, 3, 4, 5
  b1i ~ N(0, 1000), i = 1, 2, 3, 4, 5
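The spec above maps nearly line-for-line onto a probabilistic-programming model. Below is a minimal sketch in PyMC (an assumed library choice, not the authors' implementation), with made-up AP coordinates and fingerprints; N(0, 1000) is read as a vague prior with variance 1000, and the slide's δi becomes a per-AP half-normal noise scale (an assumption, since Model 1's prior on δi is not shown).

```python
import numpy as np
import pymc as pm

L, W = 225.0, 175.0                                   # BR floor (Length, Width), feet
ap = np.array([[20., 20.], [200., 20.], [20., 150.],
               [200., 150.], [110., 90.]])            # 5 AP positions (assumed)

# Illustrative data: a few labeled fingerprints and one mobile's RSS.
xy_lab = np.array([[30., 40.], [120., 60.], [200., 150.], [60., 160.]])
S_lab = np.array([[-40., -55., -90., -70., -60.],
                  [-60., -56., -80., -65., -50.],
                  [-80., -70., -30., -55., -66.],
                  [-64., -33., -70., -72., -58.]])
S_mob = np.array([-45., -65., -40., -70., -55.])

with pm.Model() as model1:
    b0 = pm.Normal("b0", mu=0.0, sigma=np.sqrt(1000.0), shape=5)
    b1 = pm.Normal("b1", mu=0.0, sigma=np.sqrt(1000.0), shape=5)
    noise = pm.HalfNormal("noise", sigma=10.0, shape=5)   # stands in for δi

    # Labeled fingerprints: positions are known constants (natural log assumed).
    d_lab = np.hypot(xy_lab[:, None, 0] - ap[:, 0], xy_lab[:, None, 1] - ap[:, 1])
    pm.Normal("S_lab", mu=b0 + b1 * np.log(d_lab), sigma=noise, observed=S_lab)

    # Mobile object: its (x, y) is the unknown of interest.
    x = pm.Uniform("x", lower=0.0, upper=L)
    y = pm.Uniform("y", lower=0.0, upper=W)
    d = pm.math.sqrt((x - ap[:, 0]) ** 2 + (y - ap[:, 1]) ** 2)
    pm.Normal("S_mob", mu=b0 + b1 * pm.math.log(d), sigma=noise, observed=S_mob)

    idata = pm.sample()                                # MCMC draws; see slide 14

print(idata.posterior["x"].mean().item(), idata.posterior["y"].mean().item())
```

Labeled positions enter as constants, so one joint posterior both fits the propagation parameters and localizes the mobile, which is exactly why unlabeled data can help later.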

  13. Input and Output
  Input:
  • labeled (training): [(x1,y1),(-40,-55,-90,..)], [(x2,y2),(-60,-56,-80,..)], [(x3,y3),(-80,-70,-30,..)], [(x4,y4),(-64,-33,-70,..)]
  • unlabeled (mobile objects): [(?,?),(-45,-65,-40,..)], [(?,?),(-35,-45,-78,..)], [(?,?),(-75,-55,-65,..)]
  Output: probability distributions for all the unknown variables:
  • propagation constants b0i, b1i for each base station
  • (x, y) for each (?, ?)

  14. Great! But now how do we find the location of the mobile?
  • A closed-form solution doesn't usually exist » simulation / analytic approximation
  • We used MCMC (Markov chain Monte Carlo) simulation to generate predictive samples from the joint distribution for every unknown (x, y) location
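For intuition, here is a self-contained random-walk Metropolis sampler for a single mobile's (x, y), assuming the propagation parameters have already been estimated. The paper's sampler treats all unknowns jointly, and every numeric value here is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
ap = np.array([[20., 20.], [200., 20.], [20., 150.], [200., 150.], [110., 90.]])
b0, b1, sd = -30.0, -20.0, 4.0                # assumed; per-AP in the real model
S = np.array([-45., -65., -40., -70., -55.])  # observed RSS (illustrative)
L, W = 225.0, 175.0                           # BR floor dimensions, feet

def log_post(p):
    if not (0 <= p[0] <= L and 0 <= p[1] <= W):
        return -np.inf                        # uniform prior over the floor plan
    d = np.maximum(np.hypot(p[0] - ap[:, 0], p[1] - ap[:, 1]), 1e-6)
    return -0.5 * np.sum(((S - (b0 + b1 * np.log(d))) / sd) ** 2)

p, lp, samples = np.array([L / 2, W / 2]), -np.inf, []
for _ in range(20000):
    q = p + rng.normal(scale=5.0, size=2)     # random-walk proposal
    lq = log_post(q)
    if np.log(rng.uniform()) < lq - lp:       # Metropolis accept/reject
        p, lp = q, lq
    samples.append(p)

draws = np.array(samples[5000:])              # discard burn-in
print("posterior mean location:", draws.mean(axis=0))
```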

  15. Example Output

  16. Some Results
  [Figure: box plots (min, 25th percentile, median, 75th percentile, max) of location error; annotations: "Comparable" and "M1 is better"]

  17. Model 2 (Hierarchical): labeled data
  [Figure: same graph as Model 1, with all base-station parameters b0i, b1i tied to shared hyperparameters b]
  • Allowing arbitrary values for the b's is too unconstrained!
  • Assume all base-station parameters are normally distributed around a hidden variable, with a mean and a variance
  • Motivation: "borrowing strength" might provide some predictive benefit
  Xi ~ Uniform(0, Length)
  Yi ~ Uniform(0, Width)
  Si ~ N(b0i + b1i log(Di), δi)
  b0i ~ N(b0, δb0)
  b1i ~ N(b1, δb1)
  b0 ~ N(0, 1000)
  b1 ~ N(0, 1000)
  δ0 ~ Gamma(0.001, 0.001)
  δ1 ~ Gamma(0.001, 0.001)
  i = 1, 2, 3, 4, 5
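In a PyMC-style sketch (same caveats as before), only the parameter block changes relative to Model 1. The Gamma(0.001, 0.001) terms are read here as BUGS-style precisions and converted to noise scales, which is one plausible interpretation of the slide, not a confirmed one.

```python
import numpy as np
import pymc as pm

with pm.Model() as model2_params:
    # Hyperpriors shared by all base stations ("borrowing strength").
    b0_mu = pm.Normal("b0_mu", mu=0.0, sigma=np.sqrt(1000.0))
    b1_mu = pm.Normal("b1_mu", mu=0.0, sigma=np.sqrt(1000.0))
    b0_prec = pm.Gamma("b0_prec", alpha=0.001, beta=0.001)
    b1_prec = pm.Gamma("b1_prec", alpha=0.001, beta=0.001)

    # Per-base-station parameters now shrink toward the shared means.
    b0 = pm.Normal("b0", mu=b0_mu, sigma=1.0 / pm.math.sqrt(b0_prec), shape=5)
    b1 = pm.Normal("b1", mu=b1_mu, sigma=1.0 / pm.math.sqrt(b1_prec), shape=5)
    # ...the distance / signal-strength likelihood is unchanged from Model 1.
```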

  18. More Results
  [Figure: leave-one-out average error (feet) vs. size of labeled data, for Labeled M1, Labeled Hierarchical M2, and SmoothNN]
  • M2 behaves similarly to M1, but provides an improvement for very small training sets
  • Hmm, still comparable to SmoothNN

  19. Can we get reasonable estimates of position without any labeled data?
  • Yes!
  • Observe signal strengths from existing data packets (unlabeled by default)
  • No more running around collecting data... over and over... and over...

  20. Input and Output (revisited)
  Input:
  • labeled (training): [(x1,y1),(-40,-55,-90,..)], [(x2,y2),(-60,-56,-80,..)], [(x3,y3),(-80,-70,-30,..)], [(x4,y4),(-64,-33,-70,..)]
  • unlabeled (mobile objects): [(?,?),(-45,-65,-40,..)], [(?,?),(-35,-45,-78,..)], [(?,?),(-75,-55,-65,..)]
  Output: probability distributions for all the unknowns:
  • propagation constants b0i, b1i for each base station
  • (x, y) for each (?, ?)

  21. Model 3 (Zero Profiling)
  [Figure: same graphical model as Model 2]
  • Same graph as M2 (Hierarchical), but with unlabeled data
  • Hmm, why would this work?
    [1] prior knowledge about the distance / signal-strength relationship
    [2] prior knowledge that access points behave similarly
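Structurally, the zero-profiling model is the Model 1/2 sketch with the labeled block deleted: every position becomes latent, and only the RSS matrix is observed. A sketch in the same assumed PyMC setting; the tighter hyperpriors here are my addition to keep the toy model identifiable, not values from the paper:

```python
import numpy as np
import pymc as pm

L, W = 225.0, 175.0
ap = np.array([[20., 20.], [200., 20.], [20., 150.], [200., 150.], [110., 90.]])
S = np.array([[-45., -65., -40., -70., -55.],
              [-35., -45., -78., -60., -66.],
              [-75., -55., -65., -50., -58.]])   # unlabeled RSS only (illustrative)
n = S.shape[0]

with pm.Model() as model3:
    # Hierarchical parameter block as in Model 2 (abridged, tighter priors).
    b0_mu = pm.Normal("b0_mu", mu=-30.0, sigma=10.0)
    b1_mu = pm.Normal("b1_mu", mu=-20.0, sigma=5.0)
    b0 = pm.Normal("b0", mu=b0_mu, sigma=5.0, shape=5)
    b1 = pm.Normal("b1", mu=b1_mu, sigma=2.0, shape=5)
    noise = pm.HalfNormal("noise", sigma=10.0)

    # Every (x, y) is latent: nothing in the data carries a position label.
    x = pm.Uniform("x", lower=0.0, upper=L, shape=n)
    y = pm.Uniform("y", lower=0.0, upper=W, shape=n)
    d = pm.math.sqrt((x[:, None] - ap[:, 0]) ** 2 + (y[:, None] - ap[:, 1]) ** 2)
    pm.Normal("S", mu=b0 + b1 * pm.math.log(d), sigma=noise, observed=S)
```

This is the slide's two reasons in code: the priors carry the distance/signal-strength knowledge, and the shared hyperparameters encode that access points behave similarly.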

  22. Results
  [Figure: leave-one-out average error (feet) vs. size of input data, comparing Zero Profiling M3 (unlabeled) against SmoothNN (labeled)]
  • Close to SmoothNN, BUT...!! (M3 uses no labeled data at all)

  23. Other Ideas
  [Figure: graph extended with corridor-indicator nodes C1..C5 feeding the signal strengths S1..S5]
  • Corridor effects
    • What? RSS is stronger along corridors
    • Variable ci = 1 if the point shares an x or y coordinate with AP i
    • No improvements
  • Informative prior distributions
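The corridor indicator itself is a one-liner; the alignment tolerance below is an assumption, since the slide does not say how "shares x or y" was discretized:

```python
import numpy as np

def corridor_indicator(xy, ap, tol=3.0):
    """1 where the point shares an x or y coordinate with an AP (within tol feet)."""
    same_x = np.abs(xy[0] - ap[:, 0]) < tol
    same_y = np.abs(xy[1] - ap[:, 1]) < tol
    return (same_x | same_y).astype(int)

ap = np.array([[20., 20.], [200., 20.], [20., 150.], [200., 150.], [110., 90.]])
print(corridor_indicator(np.array([20.0, 90.0]), ap))   # -> [1 0 1 0 1]
```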

  24. Comparison to Previous Work
  [Figure: error CDF across algorithms; probability vs. distance in feet, for RADAR, Probabilistic, M1-labeled, M2-labeled, and M3-unlabeled]
  • More ad hoc
  • Adaptive
  • No labor investment
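For reference, an empirical error CDF like the one in this figure is just the sorted per-trial errors plotted against rank / n; the numbers below are placeholders, not the paper's results:

```python
import numpy as np
import matplotlib.pyplot as plt

errors = {"M3-unlabeled": np.array([8.0, 12.5, 15.0, 21.0, 30.0, 44.0]),
          "RADAR": np.array([6.0, 10.0, 14.0, 19.0, 27.0, 40.0])}  # placeholders

for name, e in errors.items():
    e = np.sort(e)
    plt.step(e, np.arange(1, e.size + 1) / e.size, where="post", label=name)

plt.xlabel("Distance in feet")
plt.ylabel("Probability")
plt.legend()
plt.show()
```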

  25. Conclusion and Future Work
  • First time BGMs have actually been used for this problem
  • Considerable promise for localization
  • Performance comparable to existing approaches
  • Zero profiling! (Can we localize anything with a radio?)
  • Where are we going?
    • other models
    • variational approximations
    • tracking applications
    • minimum extra infrastructure (more information - better accuracy)!

  26. Thanks! ☺
