Sensor management under uncertainty

Presentation Transcript


  1. Sensor management under uncertainty
  Mark P. Kolba and Leslie M. Collins, ECE Department, Duke University
  MURI Workshop, June 2006

  2. Sensor management
  • “[Directing] the right sensor on the right platform to the right target at the right time”1
  • GOAL: Development of an effective, realistic sensor management framework for the landmine detection problem
  • Manage an increasingly diverse and complex suite of sensors to achieve rapid detection of landmines
  • Keep operator out of harm’s way
  • Information-theoretic static target detection (IT-STAD) framework
  • An information-based formulation by Kastella is chosen as the basis for this work2
  • Computationally tractable
  • Suitable for realistic use
  • Maximization of a measure of information is reasonable
  • Mathematical framework for sensor management
  • Choice of information measure is flexible
  1 R. Mahler, “Objective functions for Bayesian control-theoretic sensor management, I: Multitarget first-moment approximation,” Proc. IEEE Aerospace Conf., vol. 4, pp. 4/1905-4/1923, 2002.
  2 K. Kastella, “Discrimination gain to optimize detection and classification,” IEEE Trans. Systems, Man, and Cybernetics—Part A: Systems and Humans, vol. 27, no. 1, pp. 112-116, 1997.

  3. IT-STAD framework
  • M sensors search for N targets in a grid
  • Binary cell states and sensor observations
  • State probabilities are calculated by Bayesian updating (see the sketch below)
  • The sensor takes its next observation to maximize the expected discrimination gain
  Notation: Sc = s denotes the state of cell c being s; xc,k is observation k in cell c; Xc,k is observations 1, 2, . . ., k in cell c
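  The update and gain equations on this slide appeared as images and are not in the transcript. A minimal LaTeX sketch of the standard Bayesian state update and the expected discrimination (Kullback-Leibler) gain, written in the notation above and assuming observations are conditionally independent given the cell state, is:

  \[
  P(S_c = s \mid X_{c,k}) =
  \frac{P(x_{c,k} \mid S_c = s)\, P(S_c = s \mid X_{c,k-1})}
       {\sum_{s'} P(x_{c,k} \mid S_c = s')\, P(S_c = s' \mid X_{c,k-1})}
  \]

  \[
  \mathbb{E}\left[\Delta D_{c,k+1}\right] =
  \sum_{x \in \{0,1\}} P(x_{c,k+1} = x \mid X_{c,k})
  \sum_{s} P(S_c = s \mid X_{c,k}, x)\,
  \ln \frac{P(S_c = s \mid X_{c,k}, x)}{P(S_c = s \mid X_{c,k})}
  \]

  Here P(x_{c,k+1} = x | X_{c,k}) is obtained by marginalizing the observation model over the current state probabilities; the sensor manager evaluates this expected gain for each candidate cell (and sensor) and observes the cell with the largest value.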

  4. Extensions – part 1
  • Incorporates constrained and unconstrained sensor motion
  • Allows sensor platforms that carry multiple sensing modalities
  • Incorporates sensor cost of use and greedily maximizes the ratio of expected discrimination gain to observation cost (see the sketch after this list)
  • Allows non-uniform priors to take advantage of a priori knowledge about the scenario at hand
  • Is robust to unknown target number in the initialization of state probabilities
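  A minimal Python sketch of the greedy gain-per-cost selection described in the third bullet. The function name, the expected_gain callable, and the cost table are illustrative assumptions, not the authors' implementation; constrained motion would simply restrict the candidate cells to those reachable from the platform's current position.

  def choose_next_observation(cells, sensors, expected_gain, cost):
      """Greedy selection: pick the (sensor, cell) pair whose expected
      discrimination gain per unit observation cost is largest.

      expected_gain(sensor, cell) -> expected KL gain of observing `cell`
                                     with `sensor` (assumed supplied elsewhere)
      cost[sensor]                -> cost of one observation with `sensor`
      """
      best_pair, best_ratio = None, float("-inf")
      for sensor in sensors:
          for cell in cells:
              ratio = expected_gain(sensor, cell) / cost[sensor]
              if ratio > best_ratio:
                  best_pair, best_ratio = (sensor, cell), ratio
      return best_pair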

  5. Uncertainty analysis (Extensions – part 2)
  • Consider a real-world scenario: unknown and irregular ground, unfamiliar obstacles, unknown target and clutter types, unknown propagation characteristics. Uncertainty is present in the problem
  • Uncertainty in Pd and Pf may be assumed and/or truly present, creating four cases:

                                   Pd/Pf assumption
                                   certain         uncertain
      Pd/Pf truly certain          Define bound    Robust?
      Pd/Pf truly uncertain        Perf. hit       Recovery

  • Uncertain Pd and Pf are given beta densities (the natural conjugate prior) with parameters r and k
  • Smaller k corresponds to more uncertainty (see the sketch below)
  • Consider three uncertainty levels: low, medium, and high, for which k = 100, k = 10, and k = 5, respectively
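  The beta-density parameterization is not spelled out in the transcript; a common (r, k) parameterization consistent with "smaller k corresponds to more uncertainty" is assumed here, with r the mean and k an effective-sample-size (concentration) parameter:

  \[
  P_d \sim \mathrm{Beta}\bigl(r k,\ (1 - r) k\bigr), \qquad
  \mathbb{E}[P_d] = r, \qquad
  \mathrm{Var}[P_d] = \frac{r (1 - r)}{k + 1}
  \]

  With this parameterization, k = 100 gives a density tightly concentrated around r, while k = 5 gives a broad density, matching the low, medium, and high uncertainty levels above.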

  6. New mathematics
  • Maintain densities for Pd and Pf in each cell: Pd,c and Pf,c
  • Probability of making an observation given the cell state
  • State probability update and expected discrimination calculation
  • Update Pd,c (or Pf,c) after an observation
  • Pd,c and Pf,c densities are maintained for each of the M sensors
  (The equations for these steps appeared as images on the slide; a sketch follows below.)
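  Under the (r, k) beta parameterization sketched above, the listed quantities would take roughly the following form; this is a reconstruction, not the authors' exact notation. The probability of an observation given the cell state is the beta density's mean:

  \[
  P(x_{c,k} = 1 \mid S_c = \text{target}) = \int_0^1 p\, f_{P_{d,c}}(p)\, dp = \mathbb{E}[P_{d,c}] = r_{d,c},
  \qquad
  P(x_{c,k} = 1 \mid S_c = \text{no target}) = \mathbb{E}[P_{f,c}] = r_{f,c}
  \]

  The state probability update and expected discrimination calculation then use these marginal observation probabilities in place of fixed Pd and Pf in the slide 3 expressions. After an observation x_{c,k}, the density for Pd,c (or Pf,c) is updated through the conjugate rule

  \[
  f_{P_{d,c}}(p \mid x_{c,k}) \propto p^{x_{c,k}} (1-p)^{1 - x_{c,k}}\, f_{P_{d,c}}(p),
  \]

  i.e., the beta parameters are incremented by the observation counts. In practice the update would plausibly be weighted by the posterior probability of the cell's state, since the true state is unknown; that detail is not recoverable from the transcript.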

  7. Certain Pd and Pf, uncertain processing
  • First consider the case in which uncertainty is truly not present in the problem
  [Simulation plots: 1 sensor, 5 targets; 3 sensors, 5 targets]
  • Performance does not degrade if you assume uncertainty

  8. Uncertain Pd and Pf, certain processing
  • Uncertainty is truly present in the problem
  [Simulation plots: 1 sensor, 5 targets, for k = 100, k = 10, and k = 5]

  9. Uncertainty on real data
  • Apply uncertainty modeling to GA Tech data
  • Performance is improved most significantly using k = 10, with nearly a 50% reduction in Pe at time = 1000
  • All uncertainty modeling provides some improvement

           Pd      Pf      cost
      S1:  0.850   0.323   1
      S2:  0.850   0.085   1
      S3:  0.950   0.056   1

  10. Mismatched beta densities
  • In the sensor management framework, beta densities are used to describe sensor Pd and Pf when uncertainty is present
  • Three different uncertainty levels: k = 100, k = 10, and k = 5
  • Examine performance when the true and assumed beta densities are mismatched
  • Truth: k = 100, k = 10, or k = 5; Assumption: k = 100, k = 10, or k = 5
  • For example: Truth: k = 10, Assumption: k = 100

  11. Mismatched beta results
  • When there is low uncertainty (k = 100), there is little performance difference for any of the assumptions
  • For medium uncertainty (k = 10), assuming that high uncertainty (k = 5) is present causes minimal performance loss, and vice versa
  • For medium and high uncertainty, assuming that low uncertainty is present causes a noticeable performance degradation
  • These results suggest the following:
  • Safer to assume higher uncertainty rather than lower if unsure
  • Performance is reasonably robust to mismatches in beta densities
  [Simulation plot: k = 5 density is true; 1 sensor, 5 targets]

  12. Declaration-based approach
  • GOAL: Development of an effective, realistic sensor management framework for the landmine detection problem
  • OBSERVATION: The current performance metric, Pe, displays a number of inadequacies when considering an applied setting
  • Does not give direct information about Pd and Pf, as is often desired in landmine detection applications
  • The Pe calculation as it has been formulated requires knowledge of the number of targets present in the scene
  • Estimated cell locations of the targets are selected based on the largest posterior state probabilities of containing a target
  • Reasonable if the target number is known . . . but consider the following example (searching for one target):

      Cell number:           1     2     3     4     5
      P(no target | data):   0.99  0.99  0.99  0.97  0.99
      P(target | data):      0.01  0.01  0.01  0.03  0.01

  • Would an operator actually wish to say that a target is present in cell 4?

  13. Declaration-based approach
  • Rather than estimate the target locations based on the largest posterior state probabilities, make declarations about the contents of each cell based on the data that has been observed
  • Possible declarations: target, no target, undecided (need more info)
  • Declarations model realistic behavior and also allow Pd and Pf to be calculated and compared to the total number of measurements or to a total cost measure for use as a performance metric
  • Benefits of the declaration-based approach:
  • Pd and Pf may be straightforwardly calculated
  • Knowledge of the number of targets in the scene is not required
  • Avoids the problem of choosing low-probability cells as containing targets
  • Easy to extend to non-binary data
  • To implement the declaration-based approach, use the sequential probability ratio test (SPRT)8
  8 A. Wald, Sequential Analysis. New York: John Wiley & Sons, Inc., 1947.

  14. SPRT implementation
  • Hypotheses: H0: no target present; H1: target present
  • Observations are binary
  • After m observations have been made in a cell, calculate the test statistic Zm (see the sketch below)
  • Thresholds A and B are defined from α (Type-I error: choose H1 when H0 is true) and β (Type-II error: choose H0 when H1 is true)
  • If Zm ≥ A, declare TARGET PRESENT
  • If Zm ≤ B, declare NO TARGET PRESENT
  • If B < Zm < A, declare UNDECIDED
  • Once a TARGET or NO TARGET declaration has been made, that declaration is final and will not be changed
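  The Zm and threshold formulas were images on the slide. The sketch below is a minimal Python rendering of the standard Wald SPRT for binary observations, assuming Zm is the cumulative log-likelihood ratio and that A = ln((1-β)/α) and B = ln(β/(1-α)), the usual Wald approximations; function and variable names are illustrative.

  import math

  def sprt_declare(observations, pd, pf, alpha=0.05, beta=0.05):
      """Run Wald's SPRT on a cell's binary observation history.

      observations: iterable of 0/1 sensor outputs for this cell
      pd: P(x = 1 | target present);  pf: P(x = 1 | no target)
      Returns "TARGET", "NO TARGET", or "UNDECIDED".
      """
      A = math.log((1.0 - beta) / alpha)   # upper threshold
      B = math.log(beta / (1.0 - alpha))   # lower threshold
      z = 0.0
      for x in observations:
          # Log-likelihood ratio contribution of one binary observation
          if x == 1:
              z += math.log(pd / pf)
          else:
              z += math.log((1.0 - pd) / (1.0 - pf))
          if z >= A:
              return "TARGET"        # final declaration
          if z <= B:
              return "NO TARGET"     # final declaration
      return "UNDECIDED"             # need more observations

  With the settings used on the next slide (α = 0.05, β = 0.004), the upper threshold would be roughly ln(0.996/0.05) ≈ 2.99 and the lower threshold roughly ln(0.004/0.95) ≈ -5.47.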

  15. Simulation results
  • Results are presented for the following search techniques (a sketch of the two direct searches follows this list):
  • Discrimination-directed search: uses the sensor manager; once a final declaration is made in a cell, that cell is never observed again
  • Direct search (w/o skipping): blind search that continues to observe all cells on each pass through the grid (no information from the sensors is incorporated into the search pattern)
  • Direct search (w/ skipping): blind search that sweeps through the grid but skips cells that have a final declaration (primitive sensor management)
  SPRT parameters: α = 0.05, β = 0.004
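  A minimal Python sketch of the two blind (direct) search baselines described above, reusing the hypothetical sprt_declare helper from the previous sketch; the observe(sensor, cell) callable is an assumed stand-in for the simulated sensor, not part of the original framework.

  def direct_search(grid_cells, sensor, observe, pd, pf, n_passes, skip_declared):
      """Blind raster search over the grid.  With skip_declared=True, cells
      that already carry a final SPRT declaration are skipped on later
      passes (the 'w/ skipping' variant); otherwise every cell is observed
      on every pass (the 'w/o skipping' variant)."""
      history = {cell: [] for cell in grid_cells}
      declarations = {cell: "UNDECIDED" for cell in grid_cells}
      for _ in range(n_passes):
          for cell in grid_cells:                       # fixed sweep order
              if skip_declared and declarations[cell] != "UNDECIDED":
                  continue                              # primitive management
              history[cell].append(observe(sensor, cell))
              if declarations[cell] == "UNDECIDED":     # declarations are final
                  declarations[cell] = sprt_declare(history[cell], pd, pf)
      return declarations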

  16. Simulation results
  • Now consider searches at 0 dB when the sensor Pd and Pf are uncertain
  [Results table not reproduced in the transcript]
  • Both discrimination-directed and direct search in the table are at 0 dB
  • Cost is given in arbitrary time units
  • NM: uncertainty not modeled; M: uncertainty modeled

  17. AMDS data
  • Data for 320 cells: 92 mines, 178 clutter objects, and 50 blanks
  • Two sensors: GPR and EMI
  • For each sensor, binary observations are generated by processing the sampled portion of raw data and comparing the resulting decision statistic to a threshold (see the sketch below)
  • GPR: summed, whitened energy
  • EMI: energy
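  The exact AMDS processing chain is not given in the transcript; the following is a simplified, hypothetical Python sketch of how a decision statistic could be thresholded into a binary observation, with placeholder feature computations (summed whitened energy for GPR, plain energy for EMI).

  import numpy as np

  def gpr_statistic(x, chol_noise_cov):
      """Summed, whitened energy (simplified): whiten the sampled GPR data
      with the noise-covariance Cholesky factor L, then sum the squares."""
      w = np.linalg.solve(chol_noise_cov, np.asarray(x))   # whitening: L^{-1} x
      return float(np.sum(w ** 2))

  def emi_statistic(x):
      """Energy of the sampled EMI data (simplified)."""
      return float(np.sum(np.asarray(x) ** 2))

  def binary_observation(statistic, threshold):
      """Compare the decision statistic to a threshold -> 0/1 observation."""
      return 1 if statistic > threshold else 0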

  18. AMDS results

            Pd      Pf
      EMI:  0.793   0.531
      GPR:  0.801   0.509

  • Performance is plotted as Pd vs. cost
  • The Pd vs. Pf curve is also given
  • Discrimination-directed search achieves the same Pd at lower cost than either of the direct search techniques
  SPRT parameters: α = 0.05, β = 0.05; each sensor has a cost of use equal to 1

  19. AMDS results
  • Now incorporate uncertainty modeling; sample results are presented for k = 10
  • Uncertainty modeling increases the cost, but allows better Pd performance to be achieved after a large number of observations

  20. AMDS results
  • Another useful performance metric to consider is the expected probability of detection after a large number of observations have been made
  • Uncertainty modeling improves the expected Pd
  • Discrimination-directed search provides the best expected Pd at the best expected cost (with and without uncertainty modeling)
  Expected costs are given in arbitrary time units (i.e., the same as the Pd vs. cost plots)

  21. Conclusions
  • The IT-STAD framework for sensor management, based on Kastella’s discrimination gain technique, has been presented; it incorporates multiple sensors and targets, realistic cost constraints, and uncertainty modeling
  • Extensive simulation has demonstrated that discrimination-directed search performance is superior to the performance of a direct search technique; the performance improvement is typically 3-6 dB
  • Performance of the sensor manager has been shown to be robust to reasonable errors in assumed information and to be computationally superior to an alternative sensor management technique (static-detection JMPD)
  • The sensor manager has been successfully implemented on real landmine data (GA Tech data and AMDS data), and the IT-STAD sensor manager has again outperformed direct search on the real datasets

  22. Other sensor managers
  • He, et al., use a partially observable Markov decision process (POMDP) framework to direct a multimodal sensing platform containing EMI and GPR sensors through a region of interest6
  • Target models are proposed using knowledge of characteristic landmine responses, and the POMDP is either trained offline using labeled data or its parameters are estimated online through observation and oracle (excavation) requests
  • It performs sensor management by operating at the level of individual data samples, processing each observed sample with the POMDP model in order to determine the optimal action to perform next
  • IT-STAD performs sensor management by dividing the region of interest into grid cells, collecting sensor observations within the cells, and then choosing the next cell to observe that will maximize the expected discrimination gain
  • The process that generates a “target present” or “target absent” observation within a cell is a black-box signal processing algorithm to the IT-STAD sensor manager and is not the focus of the research
  • Given the “target present” and “target absent” observations and the sensor Pd and Pf values that are provided, the sensor manager seeks to most effectively direct the movements of the sensor platform, subject to the specific cost constraints, so that landmines may be found as quickly as possible
  6 L. He, S. Ji, W. Scott, Jr., and L. Carin, Adaptive Multi-Modality Sensing of Landmines, 2006.

  23. Other sensor managers
  • Kreucher, et al., use the joint multitarget probability density (JMPD) to specify probabilities for specific target locations and target numbers; sensor management is then performed by maximizing the expected information gain that will be obtained with the next sensor observation5
  • Framework is formulated for the multitarget tracking application
  • Rényi information divergence is used as the information measure
  • Non-myopic search techniques have been explored
  • The JMPD is used to estimate the probabilities of all possible combinations of locations for all possible numbers of targets; to reduce computational complexity, a particle filter is used to approximate the full JMPD
  • IT-STAD calculates state probabilities in each cell and then chooses the cell to observe that will result in the largest expected discrimination gain
  • Both techniques use Kastella’s idea of maximizing expected information gain; IT-STAD uses the Kullback-Leibler information, or discrimination, as its information measure
  • IT-STAD is designed for the static target detection problem, with particular emphasis on applications to landmine detection
  • A comparison between IT-STAD and a modified version of the JMPD technique is presented
  5 C. Kreucher, K. Kastella, and A. O. Hero, “Information based sensor management for multitarget tracking,” Proc. SPIE, vol. 5204, pp. 480-489, 2003.

  24. JMPD comparison
  • Modifications to the latest JMPD framework are necessary for its implementation in the static target detection problem
  • Particle filtering may not be used to approximate the JMPD because there is no motion model present in a static detection problem
  • Kullback-Leibler information is used as the information measure (KL information is one of the measures explored in 5)
  • The JMPD framework implemented here is analogous to Kastella’s original presentation of the technique7
  • The central idea is to form the JMPD, where tci represents a target present in cell i, X is the entire observation history, and Tmax is the maximum number of targets
  • For example (in the notation sketched below): the posterior probability of no targets in the grid; the posterior probability of exactly one target in cell tc1; the posterior probability of exactly two targets in cells tc1, tc2; the posterior probability of exactly three targets in cells tc1, tc2, tc3
  5 C. Kreucher, K. Kastella, and A. O. Hero, “Information based sensor management for multitarget tracking,” Proc. SPIE, vol. 5204, pp. 480-489, 2003.
  7 K. Kastella, “Joint multitarget probabilities for detection and tracking,” Proc. SPIE, vol. 3086, pp. 122-128, 1997.
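  The probability expressions on this slide were images; the following LaTeX sketch gives notation consistent with the descriptions above, following the general form of Kastella's joint multitarget probability formulation (the exact symbols are assumed):

  \[
  p(\emptyset \mid X), \quad
  p(t_{c_1} \mid X), \quad
  p(t_{c_1}, t_{c_2} \mid X), \quad
  p(t_{c_1}, t_{c_2}, t_{c_3} \mid X), \ \ldots
  \]

  are, respectively, the posterior probabilities that the grid contains no targets, exactly one target in cell c1, exactly two targets in cells c1 and c2, and exactly three targets in cells c1, c2, and c3, with the list continuing up to Tmax targets; the JMPD assigns a probability to every such configuration given the entire observation history X.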

  25. JMPD comparison
  • First consider the probability of error performance
  • Pe performance is nearly identical for the two algorithms
  Parameters: 4x4 cell grid, 2 targets, 1 sensor, Tmax = 5

  26. JMPD comparison
  • Next consider the computation time performance
  [Computation time table not reproduced: grid sizes 3x3, 4x4, and 5x5; Tmax = 2, 3, 4, and 5; all computation times are given in seconds per realization of the simulation]
  • Without the particle filter present, the JMPD framework suffers from a computational explosion as the grid size and Tmax increase
  • IT-STAD is clearly superior in its computational efficiency and can be implemented on more realistically sized static detection problems
