Advances in Sensor Data Fusion: A Review

  1. Advances in Sensor Data Fusion: A Review Bahador Khaleghi Pattern Analysis and Machine Intelligence Lab

  2. Outline • Introduction • Multisensor Data Fusion • Challenging Problems • MDF Algorithms • MDF Architectures • Discussion • Conclusion

  3. Introduction • Data fusion is the technology of combining information from several sources to form a unified picture • Originally developed for military applications, it is now widely used in a multitude of areas, e.g. sensor networks, robotics, etc. • The literature on data fusion is rich and diverse and has been the subject of research in many disciplines • We focus on sensor data fusion

  4. Multisensor Data Fusion • Many definitions have been proposed, e.g. by the JDL, Klein et al., Wald et al. “Information fusion is the study of efficient methods for automatically or semi-automatically transforming information from different sources and different points in time into a representation that provides effective support for human or automated decision making” - Boström et al. (2007)

  5. MDF Conceptualizations • JDL formalism: characterizes data I/O (popular) • Dasarathy’s framework: characterizes data I/O + processing as data flow • Random set based: characterizes data processing • Category theoretic formalism: most abstract and general

  6. Why Is It Challenging? • Data related issues • Imperfection: uncertainty, imprecision, ambiguity, vagueness • Correlation (dependence) • Inconsistency: conflict, disorder, outliers • Forms: multi-modality, human-generated data • Organizational frameworks • Centralized, distributed, etc. • Deployment issues • Data association and registration, operational timing, dynamic environment (phenomenon) • Other • Computational complexity, information overload, etc.

  7. MDF Algorithms

  8. Fusion of Imperfect Data • Most fundamental problem of MDF systems • Several taxonomies exist; ours is inspired by Smets et al. • Imperfection aspects • Uncertainty: associated confidence degree < 1 • Ambiguity: unable to clearly distinguish among several classes of objects • Vagueness: set membership is not crisp (i.e. 1: belongs, 0: does not belong) • Incompleteness: degree of confidence is unknown but an upper limit of confidence is given

  9. Fusion of Imperfect Data: Probability • Oldest and most established approach to dealing with uncertainty • Usually relies on Bayes rule to combine prior data with observations • Kalman filter and its extensions (EKF, UKF, etc.) • Most popular algorithms (esp. in tracking) • Assume linear system models and Gaussian additive noise • Efficient and easy to implement • Sequential Monte Carlo based (particle filter and its variants) • Applicable to non-linear models and non-Gaussian noise • Computationally expensive (requires a large number of particles to accurately represent probability distributions)
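A minimal sketch of one KF predict/update cycle may help make this slide concrete (illustrative only; the model matrices F, Q, H, R and the numbers below are hypothetical, assuming the linear-Gaussian case described above):

```python
import numpy as np

def kf_step(x, P, z, F, Q, H, R):
    """One Kalman filter cycle: predict with the linear model, then
    correct with the new measurement (linear-Gaussian assumptions)."""
    # Predict: propagate state estimate and covariance through the model
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: weigh prediction against measurement via the Kalman gain
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy constant-velocity example with made-up numbers
x = np.array([0.0, 1.0])                 # position, velocity
P = np.eye(2)
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # dt = 1
Q = 0.01 * np.eye(2)
H = np.array([[1.0, 0.0]])               # only position is observed
R = np.array([[0.5]])
x, P = kf_step(x, P, np.array([1.2]), F, Q, H, R)
```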

  10. Fusion of Imperfect Data: Evidential • Originally proposed by Dempster and mathematically formulated by Shafer in the 1970s (Dempster-Shafer theory) • Regarded as a generalization of probability theory; treats uncertainty and ambiguity using probability mass functions defined over the power set of possible events (i.e. the universe of discourse) • Typically fuses evidence using Dempster’s rule of combination • Issues • Computationally expensive (exponential in the worst case) • Counter-intuitive results for fusion of highly conflicting data
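Dempster's rule itself is compact enough to sketch in a few lines (an illustrative toy over the frame {a, b, c}; the mass functions below are made up):

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset -> mass)
    with Dempster's rule; fails if the sources are totally conflicting."""
    combined, conflict = {}, 0.0
    for (A, mA), (B, mB) in product(m1.items(), m2.items()):
        inter = A & B
        if inter:
            combined[inter] = combined.get(inter, 0.0) + mA * mB
        else:
            conflict += mA * mB             # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: Dempster's rule undefined")
    return {A: m / (1.0 - conflict) for A, m in combined.items()}

m1 = {frozenset('a'): 0.6, frozenset('ab'): 0.4}
m2 = {frozenset('b'): 0.3, frozenset('abc'): 0.7}
print(dempster_combine(m1, m2))
```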

  11. Fusion of Imperfect Data: Fuzzy Set • First proposed by Zadeh in the 1960s • Introduces the notion of partial set membership, which enables representation of vague data • Deploys fuzzy rules to produce fuzzy fusion output(s) • Useful for representing and fusing vague data produced by human experts in a linguistic fashion
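As a toy illustration of partial membership and fuzzy combination (the membership functions below are hypothetical, and min/max is just one common conjunctive/disjunctive choice, not a rule prescribed by the slides):

```python
def mu_warm(t):
    """Hypothetical triangular membership for 'warm' (degrees Celsius)."""
    if 15 <= t <= 25:
        return (t - 15) / 10
    if 25 < t <= 35:
        return (35 - t) / 10
    return 0.0

def mu_hot(t):
    """Hypothetical ramp membership for 'hot'."""
    return min(max((t - 25) / 10, 0.0), 1.0)

t = 28.0
# Conjunctive (min) and disjunctive (max) fusion of the two vague labels
print("warm AND hot:", min(mu_warm(t), mu_hot(t)))
print("warm OR  hot:", max(mu_warm(t), mu_hot(t)))
```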

  12. Fusion of Imperfect Data: Possibility • Also founded by Zadeh, in the 1970s • Based on fuzzy set theory, yet developed to represent incomplete data • Treatment of imperfect data is similar in spirit to probability and D-S theory, with a different quantification approach • Deploys fusion rules similar to fuzzy fusion • Arguably the most appropriate fusion approach in poorly informed environments (no statistical data available)
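A small sketch of the possibility-theoretic quantification (the possibility distribution below is made up): an event A is judged by its possibility, the supremum of the distribution over A, and its necessity, one minus the possibility of the complement:

```python
def possibility(pi, event):
    """Possibility of an event = max of the distribution over the event."""
    return max(pi[w] for w in event)

def necessity(pi, universe, event):
    """Necessity = 1 - possibility of the complement event."""
    return 1.0 - possibility(pi, universe - event)

universe = {'sunny', 'cloudy', 'rainy'}
pi = {'sunny': 1.0, 'cloudy': 0.7, 'rainy': 0.2}   # hypothetical distribution
event = {'sunny', 'cloudy'}
print(possibility(pi, event), necessity(pi, universe, event))
```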

  13. Fusion of Imperfect Data: Rough Set • Developed by Pawlak in the 1990s to handle ambiguous (indiscernible) data • Major advantage of rough set theory over the alternatives: it does not require any preliminary or additional information • Relatively new theory and not yet well understood • Rarely applied to data fusion problems
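A minimal sketch of the core rough-set construction (indiscernibility classes, then lower/upper approximations of a target set), using a hypothetical discretised sensor table:

```python
from collections import defaultdict

def approximations(objects, attributes, target):
    """Lower/upper approximation of a target set under the
    indiscernibility relation induced by the chosen attributes."""
    blocks = defaultdict(set)
    for obj, desc in objects.items():
        blocks[tuple(desc[a] for a in attributes)].add(obj)
    lower, upper = set(), set()
    for block in blocks.values():
        if block <= target:      # block certainly inside the target
            lower |= block
        if block & target:       # block possibly inside the target
            upper |= block
    return lower, upper

# Hypothetical objects described by discretised sensor attributes
objects = {
    'o1': {'temp': 'high', 'noise': 'low'},
    'o2': {'temp': 'high', 'noise': 'low'},   # indiscernible from o1
    'o3': {'temp': 'low',  'noise': 'low'},
}
print(approximations(objects, ['temp', 'noise'], target={'o1', 'o3'}))
```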

  14. Fusion of Imperfect Data: Hybridization • IDEA: the various imperfect-data fusion methods are complementary and should cooperate instead of compete • Hybrid Dempster-Shafer fuzzy theory (Yen 1990) • Frequently studied in the literature • Hybrid fuzzy rough set theory (Dubois and Prade 1990) • Recently generalized to arbitrary fuzzy relations (Yeung et al. 2006)

  15. Fusion of Imperfect Data: Random Set • The theoretical frameworks so far each lend themselves to specific type(s) of imperfect data • Many practical applications involve complex situations where different types of imperfect data must be fused • Random set theory is a promising candidate for representing (and fusing) all aspects of imperfect data • Particularly useful for the multi-source multi-target estimation problem

  16. Fusion of Imperfect Data: Big Picture

  17. Fusion of Correlated Data • Many data fusion algorithms, including the popular KF, require either independence or prior knowledge of the cross-covariance of data to produce consistent results • In practice correlation may be a priori unknown • Common noise acting on the observed phenomena • Rumour propagation (data incest) • If not addressed • Overconfidence in fusion results • Divergence of the fusion algorithm

  18. Eliminating Data Correlation • Explicit incest removal • Usually assumes a specific network topology as well as fixed communication delays • Recent extensions based on graph theory consider more general topologies with variable delays • Implicit incest removal • Attempts to form a decorrelated sequence of measurements by reconstructing them so that their correlation with previous intermediate state updates is removed

  19. Fusion in Correlation Presence • Covariance Intersection • Avoids the problem of covariance matrix under-estimation due to data incest • Issues • Requires non-linear optimization • Rather pessimistic (tends to over-estimate) • Solutions • Fast CI • Largest Ellipsoid
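Covariance intersection has a compact form: the fused information matrix is a convex combination of the two information matrices, P_CI^{-1} = w P_1^{-1} + (1 - w) P_2^{-1}, with the state fused accordingly and w chosen by some criterion such as minimising the determinant or trace of P_CI. A sketch with made-up numbers, using a simple determinant-minimising grid search for w in place of a full non-linear optimiser:

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, n_grid=101):
    """Fuse two estimates with unknown cross-covariance by a convex
    combination of their information matrices (covariance intersection)."""
    inv1, inv2 = np.linalg.inv(P1), np.linalg.inv(P2)
    # Pick the weight that minimises det(P_CI); grid search for simplicity
    omegas = np.linspace(0.0, 1.0, n_grid)
    w = min(omegas,
            key=lambda w: np.linalg.det(np.linalg.inv(w * inv1 + (1 - w) * inv2)))
    P = np.linalg.inv(w * inv1 + (1 - w) * inv2)
    x = P @ (w * inv1 @ x1 + (1 - w) * inv2 @ x2)
    return x, P, w

# Two hypothetical 2-D estimates whose cross-correlation is unknown
x1, P1 = np.array([1.0, 0.0]), np.diag([1.0, 4.0])
x2, P2 = np.array([1.5, 0.5]), np.diag([4.0, 1.0])
print(covariance_intersection(x1, P1, x2, P2))
```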

  20. Fusion of Inconsistent Data: Outliers • Data may be spurious due to unexpected situations • Permanent failures, short-duration spike faults, or slowly developing failures • If fused with correct data, can lead to dangerously inaccurate estimates! • Most work focuses on identification/prediction (using specific failure models) and subsequent elimination of outliers • Recent general Bayesian framework (Kumar et al. 2006): adds a new term to represent the belief that data is not spurious
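The exact formulation of Kumar et al. (2006) is not reproduced here; the toy sketch below only illustrates the general idea of carrying a belief that each reading is not spurious, by mixing each sensor's likelihood with a very broad "spurious" component (the grid, densities, and numbers are all made up):

```python
import numpy as np

def robust_fused_posterior(grid, prior, readings, sigma, p_valid):
    """Toy robust Bayesian fusion on a 1-D grid: each reading's likelihood is a
    mixture of a narrow 'valid' Gaussian and a very broad 'spurious' one."""
    post = prior.astype(float)
    for z in readings:
        valid = np.exp(-0.5 * ((grid - z) / sigma) ** 2) / sigma
        spurious = np.exp(-0.5 * ((grid - z) / (100 * sigma)) ** 2) / (100 * sigma)
        post *= p_valid * valid + (1 - p_valid) * spurious
    return post / post.sum()

grid = np.linspace(-10.0, 10.0, 2001)
prior = np.ones_like(grid)
# Two consistent readings plus one outlier; the outlier barely shifts the estimate
post = robust_fused_posterior(grid, prior, readings=[1.0, 1.2, 9.0],
                              sigma=0.5, p_valid=0.9)
print(grid[np.argmax(post)])   # close to 1.1 despite the spurious reading at 9.0
```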

  21. Fusion of Inconsistent Data: Disordered • Caused by • Variable propagation times • Heterogeneous sensors (operational timing) • Problem: how to use this (usually old) data to update the current estimate while accounting for the correlated process noise • Solutions • Discard the data, or reprocess everything • Most rely on retrodiction (backward prediction of the current state) • Assume single or multiple lags, various target dynamics, etc.

  22. Fusion of Inconsistent Data: Conflictual • Several experts (sensors) have very different ideas about the same phenomenon • Heavily studied for D-S theoretic fusion • Zadeh’s counter-example • Solutions • Attribute it to improper application of Dempster’s combination rule (its applicability constraints are not met!) • Many (ad-hoc) alternative combination rules • Transferable Belief Model (TBM): relies on an open-world assumption, allowing elements outside the frame of discernment to be represented by the empty set
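Zadeh's counter-example is easy to reproduce numerically, consistent with the combination rule sketched under slide 10: two experts put almost all of their mass on different hypotheses (A vs. B) and only a sliver on a third (C), yet Dempster's rule assigns all of the fused mass to C:

```python
# Zadeh's counter-example in numbers (frame {A, B, C})
m1 = {'A': 0.99, 'C': 0.01}        # expert 1: almost certainly A
m2 = {'B': 0.99, 'C': 0.01}        # expert 2: almost certainly B
# Only the (C, C) pairing has a non-empty intersection; all other mass is conflict
agreement = m1['C'] * m2['C']      # 0.0001
conflict = 1.0 - agreement         # 0.9999
print("fused mass on C:", agreement / (1.0 - conflict))   # -> 1.0 (up to rounding)
```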

  23. Fusion of Disparate Data • Fusion data can come in many forms • Multi-modality of sensors • Human-generated (soft) data • Fusion of soft data, as well as combined hard/soft data, is a hot recent topic and not yet well studied • Recent trends • Disambiguation of linguistic data (lexicons, grammars, and dictionaries) • Human-centered data fusion: allows humans to participate in the data fusion process not merely as soft sensors but also as hybrid computers and members of ad-hoc teams (social networks and virtual worlds)

  24. Data Fusion Architectures

  25. Centralized Data Fusion • Sensor fusion unit is treated as a central processor that collects all information from the different sensors • Pros • Can achieve optimal performance (theoretically) • Cons • Communication bottleneck • Scalability issue • Reliability issue • Inflexible

  26. Distributed Data Fusion • Each node in the sensor field can act as a local data fusion unit using local measurements • Fusion process is usually iterative and ideally converges to global results • Pros • Scalability, reliability and robustness, efficiency • Cons • Inherent lack of global structure, i.e. difficult to control and predict the behaviour • Recent approaches based on graphical models appear promising
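As a toy illustration of the iterative, ideally globally convergent character of distributed fusion (this is plain consensus averaging with Metropolis weights, not a specific algorithm from the review; the topology and values are made up):

```python
import numpy as np

# Hypothetical 4-node sensor network (chain topology); each node holds a local estimate
neighbours = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
x = np.array([1.0, 2.0, 4.0, 9.0])

# Metropolis-weighted consensus: each node repeatedly averages with its neighbours
for _ in range(200):
    x_next = x.copy()
    for i, nbrs in neighbours.items():
        for j in nbrs:
            w = 1.0 / (1.0 + max(len(nbrs), len(neighbours[j])))
            x_next[i] += w * (x[j] - x[i])
    x = x_next

print(x)   # all nodes converge toward the global mean, 4.0
```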

  27. Hierarchical Data Fusion • Adopts a top-down decomposition of fusion tasks • Pros • Reducing the communication/computational burden • Rather deterministic behaviour • Cons • Significant change of situation may lead to requirement for an entirely different hierarchy of tasks and therefore a large amount of overhead

  28. Federated Data Fusion • Hybrid architecture of hierarchical and distributed schemes • Offers the high fusion autonomy of sensor nodes found in distributed architectures • Requires data communication among nodes to go through dedicated middle nodes, as in hierarchical scheme • Pros: enhanced robustness and flexibility • Cons: does not support dynamic restructuring necessary for data fusion in extremely dynamic environments

  29. Discussion: Emerging Trends • Opportunistic fusion • Relies on new ubiquitous computing and communication technologies and treats sensors as shared resources • Fusion of negative information • Uses data related to the absence of any feature within the effective sensor range (e.g. robot localization) • Soft/hard data fusion • Emphasizes the role of the user (human) in the fusion process • Unified fusion framework • Search for a theoretical framework that enables formalization of all aspects of data fusion • Fusion and learning • Enhance performance by enabling the fusion algorithm to adapt to changes in the operational environment

  30. Discussion: On-going Research • Optimization-based fusion • Treats data fusion as optimization of an (often heuristically defined) objective function, e.g. using evolutionary algorithms • Automated fusion • Applies formal methods to autonomously develop fusion algorithms from formally presented specifications • Distributed data fusion • Highly scalable and adaptive to environment dynamics • Belief reliability • Studies the reliability of the underlying models producing beliefs about imperfect data • Evaluation framework • Most fusion evaluation is unrealistic (done in simulation) and there is no standard evaluation platform

  31. Conclusion • A brief overview of the vast literature on sensor data fusion was presented • Existing fusion algorithms were discussed according to a novel data-driven taxonomy • Common fusion architectures were described • Some of the existing gaps in the literature were identified • Active and emerging areas of research in the fusion community were introduced

  32. Questions?