
Distributed Inference: High Dimensional Consensus

Distributed Inference: High Dimensional Consensus. José M. F. Moura. Work with: Usman A. Khan (UPenn), Soummya Kar (Princeton). The Australian National University, RSISE Systems and Control Series Seminar, Canberra, Australia, July 30, 2010.


Presentation Transcript


  1. Distributed Inference: High Dimensional Consensus. José M. F. Moura. Work with: Usman A. Khan (UPenn), Soummya Kar (Princeton) • The Australian National University • RSISE Systems and Control Series Seminar • Canberra, Australia, July 30, 2010 • Acknowledgements: AFOSR grant #FA95501010291, NSF grant #CCF1011903, ONR MURI N000140710747

  2. Outline • Motivation for networked systems and distributed algorithms • Identify main characteristics of networked systems and distributed algorithms • Consensus algorithms and emerging behavior • Example: Localization • Conclusions

  3. Motivation • Networked systems: agents, sensors • Applications: inference (detection, estimation, filtering, …) • Distributed algorithms: • Consensus: • More general algorithms – High dimensional consensus • Realistic conditions: • Randomness: infrastructure (link failures), random protocols (gossip), communication noise • Quantization effects • Measurement updates • Issues: convergence – design topology to speed convergence; prove • Applications • Localization

  4. Networked Systems: Consensus Example • Neighborhoods define which nodes exchange states • If link (i,j) is not available, W_ij = 0 • W is symmetric and sparse; W reflects the topology of the network • In matrix form, consensus is x(k+1) = W x(k) • Consensus is linear and iterative – the issues are convergence and rate of convergence
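The linear iterative update above can be sketched in a few lines; a minimal example, assuming a 3-node path graph 1 – 2 – 3 with an illustrative doubly stochastic weight matrix (not taken from the slides):

```python
import numpy as np

# Sketch of the linear iterative consensus update x(k+1) = W x(k).
# The 3-node path graph and the weights below are an assumed example
# (W = I - 0.25 L for the path's Laplacian L): W is symmetric, sparse,
# and has W[i, j] = 0 where link (i, j) is unavailable.

def consensus(x0, W, iters=200):
    """Iterate x <- W x; each node mixes only with its neighbors."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = W @ x
    return x

# Doubly stochastic weight matrix for the path 1 - 2 - 3; the zeros
# reflect the missing link between nodes 1 and 3.
W = np.array([[0.75, 0.25, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.25, 0.75]])
```

Because W is symmetric and doubly stochastic, every node's state converges to the average of the initial values (e.g., `consensus([0, 6, 3], W)` approaches 3 at all nodes); the rate is governed by the second-largest eigenvalue modulus of W.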

  5. Consensus: Optimization • Consensus iteration x(k+1) = W x(k) • Convergence to the average of the initial states • Limit: lim_{k→∞} x(k) = (1/N) 1 1ᵀ x(0) • Spectral condition: ρ(W − 1 1ᵀ/N) < 1 • Kar & Moura, IEEE Transactions on Signal Processing, vol. 56, no. 6, June 2008

  6. Topology Design • Speed convergence by making the spectral radius governing convergence small • Choose where the nonzero entries of W are, and the actual values of those nonzero entries • Kar & Moura, IEEE Transactions on Signal Processing, vol. 56, no. 6, June 2008

  7. Topology Design • Design the Laplacian to minimize the convergence-governing spectral radius • Equal weights: every edge gets the same weight α (Xiao and Boyd, CDC, Dec 2003) • Graph design: subject to constraints, e.g., number of edges M, structure of graph • Kar & Moura, IEEE Transactions on Signal Processing, vol. 56, no. 6, June 2008
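For the equal-weight case with W = I − αL, Xiao and Boyd's best constant edge weight has the closed form α* = 2 / (λ₂(L) + λ_N(L)). A small sketch, assuming a 3-node path graph as the example topology:

```python
import numpy as np

# Sketch of the equal-weight design: with W = I - alpha * L, the best
# constant weight is alpha* = 2 / (lambda_2(L) + lambda_N(L)) (Xiao and
# Boyd), which minimizes the spectral radius governing convergence.
# The 3-node path graph below is an assumed example.

def best_constant_weight(L):
    """Return alpha* = 2 / (lambda_2 + lambda_N) for Laplacian L."""
    lam = np.sort(np.linalg.eigvalsh(L))
    return 2.0 / (lam[1] + lam[-1])

L = np.array([[ 1., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  1.]])   # Laplacian of the path 1 - 2 - 3
alpha = best_constant_weight(L)   # lambda_2 = 1, lambda_3 = 3 -> alpha = 0.5
W = np.eye(3) - alpha * L         # equal-weight consensus matrix
```

For this graph λ₂ = 1 and λ₃ = 3, so α* = 0.5 and the resulting spectral radius ρ(W − 1 1ᵀ/N) is 0.5.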

  8. Topology Design • Nonrandom topology (topology static or fixed): • Class 1: noiseless communication • Class 2: noisy communication • Random topology: Class 3 – links may fail intermittently • Random topology with communication costs and budget constraint: Class 4 • Communication on link (i,j) has a cost • Link (i,j) fails with some probability • Average communication network budget constraint per iteration • Kar & Moura, IEEE Transactions on Signal Processing, vol. 56, no. 6, June 2008

  9. Topology Design – Class 1: Ramanujan Graphs (LPS) • Srinivasa Ramanujan (22/12/1887 – 26/4/1920) • Constructions by Lubotzky, Phillips, Sarnak (LPS) (1988) and Margulis (1988) • Fig. 1: a non-bipartite LPS Ramanujan graph with degree k = 6 and N = 62 vertices (figure constructed using the software Pajek) • Kar & Moura, IEEE Transactions on Signal Processing, vol. 56, no. 6, June 2008

  10. Comparison Studies • Performance metric: convergence speed • LPS Ramanujan: we use a non-bipartite Ramanujan graph construction from LPS and call it LPS-II • Compared against: • Regular Ring Lattice (RRL): highly structured regular graphs with nearest-neighbor connectivity • Watts-Strogatz (WS-I): small-world networks using the Watts-Strogatz construction • Erdős-Rényi (ER): random networks

  11. LPS Ramanujan vs Regular Ring Lattice (RRL) • Figure: ratio of convergence speeds of the Ramanujan graph and the regular graph • Kar & Moura, IEEE Transactions on Signal Processing, vol. 56, no. 6, June 2008

  12. LPS-II vs Erdős-Rényi (ER) • The top blue line corresponds to the LPS-II graphs, which perform much better than the best ER graphs • The relative performance of the LPS-II graphs over the ER graphs increases steadily with increasing N • Kar & Moura, IEEE Transactions on Signal Processing, vol. 56, no. 6, June 2008

  13. LPS-II vs Watts-Strogatz (WS-I) Kar & Moura, Transactions Signal Processing, vol. 56, no. 6, June 2008

  14. Topology Design – Class 4: Communication Costs • Communication on link (i,j) has a cost • Link (i,j) fails with some probability • Average communication network budget constraint per iteration • Topology design posed as a convex optimization (SDP) • Kar & Moura, IEEE Transactions on Signal Processing, 56:7, July 2008

  15. Random Topology with Communication Costs • Fig. 4: per-step convergence gain S_g; N = 80 and |E| = 9N = 720 • Kar & Moura, IEEE Transactions on Signal Processing, 56:7, July 2008

  16. High Dimensional Consensus • LOCAL INTERACTIONS: each node updates its state using only its neighbors' states • GLOBAL BEHAVIOR: under what conditions does HDC converge, and to what limit, for some appropriate function w_l? • Khan, Kar, Moura, ICASSP '09, '10, Asilomar '09, IEEE TSP '10
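A hedged sketch of the linear special case of HDC: anchor states stay fixed while the remaining states iterate a linear map, and when that map is a contraction the global behavior is a fixed function of the anchor states. The matrices below are illustrative, not from the slides:

```python
import numpy as np

# Linear HDC sketch: anchor states u stay fixed, sensor states x follow
# x(k+1) = P x(k) + B u. If the spectral radius of P is below 1, the
# iterate converges to w = (I - P)^{-1} B u from any initialization --
# a limit that depends only on the anchor states. Random illustrative data.

rng = np.random.default_rng(0)
P = rng.random((4, 4))
P = 0.8 * P / P.sum(axis=1, keepdims=True)  # row sums 0.8 -> rho(P) <= 0.8 < 1
B = rng.random((4, 2))
u = rng.random(2)                            # fixed anchor states

x = np.zeros(4)
for _ in range(300):
    x = P @ x + B @ u                        # local linear HDC update

w = np.linalg.solve(np.eye(4) - P, B @ u)    # closed-form limit
```

The iterate `x` matches the closed-form limit `w` to numerical precision, illustrating the "global behavior" question: convergence holds exactly when ρ(P) < 1.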

  17. Distributed Localization • Localize M sensors with unknown locations in m-dimensional Euclidean space [1] • Minimal number, n = m+1, of anchors with known locations • Sensors communicate only within a neighborhood • Only local distances in the neighborhood are known to each sensor • There is no central fusion center • Example: m = 2, the 2-D plane • [1] Khan, Kar, Moura, "Distributed Sensor Localization in Random Environments using Minimal Number of Anchor Nodes," IEEE Transactions on Signal Processing, 57(5), pp. 2000-2016, May 2009

  18. Distributed Sensor Localization • Assumptions: • Sensors lie in the convex hull of the anchors • Anchors do not lie on a hyperplane • Each sensor finds m+1 neighbors such that it lies in their convex hull (triangulation) • Only local distances are available • Distributed localization (DILOC) algorithm: • Each sensor updates its position estimate as a convex linear combination of n = m+1 neighbors • The weights of the combination are barycentric coordinates • Barycentric coordinates: ratios of generalized volumes • Barycentric coordinates computed from Cayley-Menger determinants (local distances)

  19. Barycentric Coordinates & Cayley-Menger Determinants • Barycentric coordinates of a sensor l with respect to its m+1 neighbors: ratios of generalized volumes • Example in 2-D: ratios of triangle areas • The areas are computed from local distances via Cayley-Menger determinants
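The 2-D case can be sketched concretely: barycentric coordinates as ratios of triangle areas, and an area recovered from pairwise distances alone via a Cayley-Menger determinant (function names are illustrative):

```python
import numpy as np

# Sketch of barycentric coordinates as ratios of areas (2-D generalized
# volumes), and of recovering an area from pairwise distances alone
# with a Cayley-Menger determinant.

def area2(p, q, r):
    """Area of triangle pqr."""
    return 0.5 * abs((q[0]-p[0]) * (r[1]-p[1]) - (q[1]-p[1]) * (r[0]-p[0]))

def barycentric(l, p1, p2, p3):
    """Barycentric coordinates of l: each is the ratio of the area of
    the sub-triangle with l replacing one vertex to the full area."""
    A = area2(p1, p2, p3)
    return (area2(l, p2, p3) / A, area2(p1, l, p3) / A, area2(p1, p2, l) / A)

def cm_area(d12, d13, d23):
    """Triangle area from the three pairwise distances only: for three
    points the Cayley-Menger determinant equals -16 A^2."""
    D = np.array([[0., 1.,      1.,      1.     ],
                  [1., 0.,      d12**2,  d13**2 ],
                  [1., d12**2,  0.,      d23**2 ],
                  [1., d13**2,  d23**2,  0.     ]])
    return np.sqrt(-np.linalg.det(D) / 16.0)
```

The centroid of any triangle gets coordinates (1/3, 1/3, 1/3), and `cm_area(1, 1, sqrt(2))` recovers the area 1/2 of a unit right triangle: this is how DILOC obtains its weights from local distances without knowing any coordinates.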

  20. Set-up Phase: Triangulation • Test to find a triangulation set • Convex hull inclusion test, based on the following observation: a point lies inside a triangle exactly when the areas of the three sub-triangles it forms with the triangle's vertices sum to the area of the full triangle
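The inclusion test based on that observation is short; a sketch (areas computed from coordinates here for brevity; in DILOC they come from distances via Cayley-Menger determinants):

```python
# Convex-hull inclusion test: point l lies inside triangle (p1, p2, p3)
# exactly when the three sub-triangle areas sum to the full area.

def area(p, q, r):
    """Area of triangle pqr."""
    return 0.5 * abs((q[0]-p[0]) * (r[1]-p[1]) - (q[1]-p[1]) * (r[0]-p[0]))

def in_hull(l, p1, p2, p3, tol=1e-9):
    """True iff l is in the convex hull (triangle) of p1, p2, p3."""
    total = area(p1, p2, p3)
    parts = area(l, p2, p3) + area(p1, l, p3) + area(p1, p2, l)
    return abs(parts - total) <= tol * max(total, 1.0)
```

For a point outside the triangle, the sub-triangle areas overshoot the full area, so the test fails; equivalently, a point is inside exactly when all its barycentric coordinates are nonnegative.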

  21. Distributed Localization • Distributed localization algorithm (DILOC) • K anchors and M sensors (K + M = N) in m dimensions • Matrix form: the anchor states stay fixed while the sensor states update linearly

  22. Distributed Localization: DILOC • DILOC iteration: each sensor replaces its estimate by the barycentric-weighted combination of its triangulation set (triangulation; barycentric coordinates) • Theorem [Convergence]: under the above assumptions, the underlying Markov chain with transition probability matrix Υ is absorbing, and DILOC converges to the exact sensor coordinates
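A toy DILOC run illustrates the absorbing behavior; the geometry below is an assumed instance (three anchors, two sensors in the plane), and the barycentric weights are precomputed for it rather than derived from distances:

```python
import numpy as np

# Illustrative DILOC run on an assumed toy instance: sensor 1
# triangulates with {anchor 1, anchor 2, sensor 2}; sensor 2 with
# {anchor 1, anchor 3, sensor 1}. In DILOC the weights below would be
# barycentric coordinates obtained from local distances; here they are
# hard-coded for this geometry.

u = np.array([[0., 0.], [6., 0.], [0., 6.]])   # known anchor positions
true = np.array([[2., 1.], [1., 2.]])          # unknown sensor positions

P = np.array([[0.00, 0.50],                    # sensor-to-sensor weights
              [0.50, 0.00]])
B = np.array([[0.25, 0.25, 0.00],              # sensor-to-anchor weights
              [0.25, 0.00, 0.25]])

x = np.zeros((2, 2))                           # arbitrary initial estimates
for _ in range(100):
    x = P @ x + B @ u                          # convex combination of neighbors
```

Because the associated Markov chain is absorbing (the anchors absorb all probability mass), the estimates converge to the exact coordinates (2, 1) and (1, 2) from any initialization.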

  23. Distributed Localization: Simulations • N = 7 node network in the 2-D plane: M = 4 sensors, K = m+1 = 3 anchors • Larger example: M = 497 sensors

  24. Convergence of DILOC • Theorem [Convergence]: • Random network, connected on average • Noisy communication • Errors in inter-sensor distances • Persistence condition • Under these conditions, the distributed distance localization algorithm converges • Khan, Kar, and Moura, "DILAND: An Algorithm for Distributed Sensor Localization with Noisy Distance Measurements," IEEE Trans. Signal Processing, 58:3, pp. 1940-1947, March 2010

  25. Proof of Theorem • Proof: cannot use standard stochastic approximation techniques, because the iterates are functions of past measurements (non-Markovian) • Study the path behavior of the error process with respect to an idealized update • Define the error process with respect to the idealized update and derive its dynamics • Show the error goes to zero

  26. Conclusion • High dimensional consensus • Optimization: topology design • Distributed localization (DILOC): • Linear, iterative • Local communications • Barycentric coordinates • Cayley-Menger determinants • Convergence: • Deterministic networks (protocols): standard Markov chain arguments • Random networks: structural (link) failures, noisy communication, quantized data – standard stochastic approximation algorithms are not sufficient to prove convergence

  27. Bibliography • Soummya Kar, Saeed Aldosari, and José M. F. Moura, "Topology for Distributed Inference on Graphs," IEEE Transactions on Signal Processing, vol. 56, no. 6, pp. 2609-2613, June 2008. • Soummya Kar and José M. F. Moura, "Sensor Networks with Random Links: Topology Design for Distributed Consensus," IEEE Transactions on Signal Processing, vol. 56, no. 7, pp. 3315-3326, July 2008. • U. A. Khan, S. Kar, and J. M. F. Moura, "Distributed Sensor Localization in Random Environments using Minimal Number of Anchor Nodes," IEEE Transactions on Signal Processing, vol. 57, no. 5, pp. 2000-2016, May 2009; DOI: 10.1109/TSP.2009.2014812. • U. A. Khan, S. Kar, and J. M. F. Moura, "DILAND: An Algorithm for Distributed Sensor Localization with Noisy Distance Measurements," IEEE Transactions on Signal Processing, vol. 58, no. 3, pp. 1940-1947, March 2010. • U. A. Khan, S. Kar, and J. M. F. Moura, "Higher Dimensional Consensus: Learning in Large-Scale Networks," IEEE Transactions on Signal Processing, vol. 58, no. 5, pp. 2836-2849, May 2010.

  28. The End. Thanks!
