
Progress in Using Entity-Based Monte Carlo Simulation With Explicit Treatment of C4ISR to Measure IS Metrics
Prepared by Dr. Bill Stevens, Metron, Simulation Sciences Division (SSD), for the IS Metrics Workshop, 28-29 March 2000.


Presentation Transcript


1. http://www.metsci.com
Simulation Sciences Division (SSD)
Progress in Using Entity-Based Monte Carlo Simulation With Explicit Treatment of C4ISR to Measure IS Metrics
Prepared by Dr. Bill Stevens, Metron, for the IS Metrics Workshop, 28-29 March 2000.
Corporate Headquarters: 11911 Freedom Drive, Suite 800, Reston, VA 20190-5602, (703) 787-8700 (Voice), (703) 787-3518 (FAX)
Simulation Sciences Division: 512 Via de la Valle, Suite 301, Solana Beach, CA 92075-2715, (858) 792-8904 (Voice), (858) 792-2719 (FAX)

2. OUTLINE
• Approach
• Key Metrics Related Details
  - Basic Monte Carlo Metrics and Statistics
  - Cause-and-Effect Analysis
  - Sensitivity Analysis
  - Hypothesis Testing
• Examples
  - CINCPACFLT IT-21 Assessment
  - FBE-D
• Lessons-Learned and Challenges

3. Entity-Based Monte Carlo Simulation with Explicit C4ISR
• Provides one means to directly measure relevant IS metrics in mission-to-campaign level scenarios.
• Assess impact of IT and WPR improvements on warfighting outcome.
• Explicit C4ISR includes representation of: platforms, systems, and commanders; command organization (group, mission, platform); commander's plans and doctrine; information collection; information dissemination; tactical picture processing; and warfighting interactions (sketched below).
• Provides a means to capture, simulate/view, and quantify the performance of alternate C4ISR architectures and warfighting plans.
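A minimal sketch of what "explicit representation" of platforms, commanders, and command organization can look like in an entity-based simulation. The class and field names below are illustrative assumptions, not the actual data model of the simulation described in this presentation:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Platform:
    """A simulated platform with its sensors and weapons (illustrative fields)."""
    name: str
    sensors: List[str] = field(default_factory=list)
    weapons: List[str] = field(default_factory=list)

@dataclass
class Commander:
    """A commander node in the group/mission/platform command organization."""
    name: str
    echelon: str                                   # "group", "mission", or "platform"
    plan: str = ""                                 # reference to the commander's plan/doctrine
    subordinates: List["Commander"] = field(default_factory=list)
    platforms: List[Platform] = field(default_factory=list)

# Example command organization: a group commander with one mission commander
# controlling a single strike platform (all names are hypothetical).
group_cdr = Commander("CVBG Commander", echelon="group", plan="strike-plan-A")
mission_cdr = Commander(
    "Strike Lead", echelon="mission",
    platforms=[Platform("F/A-18", sensors=["radar"], weapons=["laser-guided bomb"])],
)
group_cdr.subordinates.append(mission_cdr)
```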

4. Key Metrics Related Details: Basic Metrics and Statistics
Typical Monte Carlo metrics are random variables X which are computed for each replication (Xn = value in replication n). Examples: percent of threat subs tracked/trailed/killed on D+10, average threat sub AOU on D+0, etc. Three key quantities should be computed for each X:
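The transcript does not capture which three quantities the slide listed; a minimal sketch, assuming they are the standard per-metric Monte Carlo summary statistics (sample mean, standard error of the mean, and a confidence interval) computed over the replications X1, ..., XN:

```python
import numpy as np
from scipy import stats

def mc_summary(x, confidence=0.95):
    """Summarize one Monte Carlo metric X over its N replications.

    Assumed (not listed in the transcript): the three key quantities are the
    sample mean, its standard error, and a t-based confidence interval.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    mean = x.mean()
    std_err = x.std(ddof=1) / np.sqrt(n)                     # standard error of the mean
    half_width = stats.t.ppf(0.5 + confidence / 2, df=n - 1) * std_err
    return mean, std_err, (mean - half_width, mean + half_width)

# Illustrative values only (not study data): percent of threat subs killed on
# D+10, one value per replication.
pct_killed = [62.0, 71.0, 58.0, 66.0, 69.0, 64.0, 73.0, 60.0]
print(mc_summary(pct_killed))
```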

5. Key Metrics Related Details: Cause-and-Effect Analysis
Example C4ISR Operational Sequence: Threat emission, Wide-area sensor detection, Engagement sensor cue, Weapon allocation, Engagement, BDA collection, Re-engagement.
Cause-and-Effect Metrics (Force Level):
• WAS tasking loads vs. capacity vs. time; percent mis-allocations vs. time.
• Engagement sensor cueing loads vs. capacity vs. time; percent mis-allocations vs. time.
• Weapon system tasking loads vs. capacity vs. time; percent mis-allocations vs. time.
• BDA collection system tasking loads vs. capacity vs. time; percent mis-allocations vs. time.
Cause-and-Effect Metrics (For Each Threat Presentation; a recording sketch follows this slide):
• Record time(s) of key threat emissions.
• Record time, accuracy, and completeness of each WAS detection.
• Record cue receipt times and time from cue receipt to acquisition.
• Record weapon system allocation times.
• Record weapon launch and intercept times and engagement results.
• Record BDA collection times, associated engagement events, and BDA data.
Relate the data recorded above to force effectiveness metrics: force attrition and damage, resources expended, and commander's objectives attained.
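A sketch of how the per-threat-presentation timeline could be recorded during a replication so that the timing metrics above fall out directly; the field and method names are hypothetical, not the simulation's actual recording schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ThreatPresentation:
    """Cause-and-effect event times for one threat presentation (hypothetical fields)."""
    emission_time: Optional[float] = None          # key threat emission
    was_detection_time: Optional[float] = None     # wide-area sensor detection
    cue_receipt_time: Optional[float] = None       # engagement sensor cue received
    acquisition_time: Optional[float] = None       # engagement sensor acquisition
    weapon_allocation_time: Optional[float] = None
    launch_time: Optional[float] = None
    intercept_time: Optional[float] = None
    bda_time: Optional[float] = None

    def cue_to_acquisition(self) -> Optional[float]:
        """Time from cue receipt to acquisition, one of the per-presentation metrics."""
        if self.cue_receipt_time is None or self.acquisition_time is None:
            return None
        return self.acquisition_time - self.cue_receipt_time
```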

6. Key Metrics Related Details: Excursion Analysis
Monte Carlo runs can be organized in the form of a scenario baseline + scenario excursion sets + selected metrics and metric breakdowns.
Example excursion sets (sketched below):
• SA-10 Pk's: [0.0, 0.2, 0.4, 0.6]
• CV-68 VA Squadron: [squadron-x, squadron-y, squadron-z]
Resulting excursion set sensitivity graphs can be generated.
[Chart: Number of BLUE Fighters Killed vs. SA-10 Pk, one curve each for Squadron X, Squadron Y, and Squadron Z.]
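A sketch of driving the baseline-plus-excursion organization programmatically, using the two example excursion sets above; `run_replication` is a stand-in supplied by the caller for one run of the entity-based simulation:

```python
import itertools
import statistics
from typing import Callable, Dict, Tuple

SA10_PK = [0.0, 0.2, 0.4, 0.6]                          # example excursion set 1
SQUADRONS = ["squadron-x", "squadron-y", "squadron-z"]  # example excursion set 2

def excursion_analysis(run_replication: Callable[[float, str, int], float],
                       n_reps: int = 30) -> Dict[Tuple[float, str], float]:
    """Run every (Pk, squadron) excursion and return the mean metric per excursion.

    run_replication(pk, squadron, seed) is a placeholder for one Monte Carlo
    replication returning the selected metric (e.g., BLUE fighters killed).
    """
    results = {}
    for pk, squadron in itertools.product(SA10_PK, SQUADRONS):
        values = [run_replication(pk, squadron, seed) for seed in range(n_reps)]
        results[(pk, squadron)] = statistics.mean(values)
    return results

# Plotting results[(pk, squadron)] against pk, one curve per squadron, yields
# the excursion-set sensitivity graph described on the slide.
```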

7. Key Metrics Related Details: Hypothesis Testing
Many typical study objectives can be addressed through the use of statistical hypothesis testing. As an example, one could employ hypothesis testing to test H0 vs. H1:
H0: μX ≥ μY
H1: μX < μY
and thus determine whether or not squadron X is statistically more or less survivable than squadron Y for a given SAM configuration. Standard tests can be applied as a function of (α, β), where α is the probability of falsely rejecting H0 and β is the probability of falsely accepting H0.
[Chart: Number of BLUE Fighters Killed vs. SAM Pk for Squadrons X and Y, with means μX and μY marked at Pk'.]
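A hedged sketch of one way to carry out such a test: a one-sided Welch two-sample t-test on the per-replication kill counts (scipy >= 1.6 is assumed for the `alternative` keyword; the kill counts below are illustrative, not study data):

```python
import numpy as np
from scipy import stats

# Illustrative per-replication BLUE fighters killed at a fixed SAM Pk.
x_kills = np.array([3, 5, 4, 2, 6, 3, 4, 5, 3, 4])   # squadron X
y_kills = np.array([6, 7, 5, 8, 6, 7, 9, 6, 7, 8])   # squadron Y

# Test H0: mu_X >= mu_Y vs. H1: mu_X < mu_Y (fewer kills against X means X is
# more survivable), without assuming equal variances.
t_stat, p_value = stats.ttest_ind(x_kills, y_kills, equal_var=False,
                                  alternative="less")

alpha = 0.05   # probability of falsely rejecting H0
if p_value < alpha:
    print(f"Reject H0 (p = {p_value:.4f}): squadron X loses significantly fewer fighters.")
else:
    print(f"Cannot reject H0 (p = {p_value:.4f}) at alpha = {alpha}.")
```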

8. Examples: CINCPACFLT IT-21 Assessment
• IT-21 tactical picture is significantly improved: more tracks, and more tracks with ID info.
Simulation revealed that the IT-21 ground picture would have a much improved ID rate …

9. Examples: CINCPACFLT IT-21 Assessment
The on-the-fly ATO concept was proposed to leverage the improved ID rates …
IT-21 with on-the-fly ATO:
• More kills for the same number of sorties
• Fewer BLUE losses
Degree of improvement: 36% more kills, 46% fewer losses.

10. Examples: CINCPACFLT IT-21 Assessment
Combined IT and process improvements yield speed-of-command and commander's attrition goal timeline improvements …
Strike OODA Loop results:
• IT-21 Strike OODA Loop for high-priority targets reduced from 13.5 to 5.5 hours.
• Artillery attrition goal achieved in 34 vs. 64 hours.
• 50% increase in critical mobile target kills.
Contributing factors by OODA segment:
• Time Within Sensor Range: pre-positioned surveillance and engagement asset holding points.
• Initial Indication of Target Time: overhead/all-source detection of specific targets.
• IPB Time: faster processing and communication of annotated imagery data; faster and surer assertion of target types.
• Actionable Time: distributed fusion efficiencies decrease correlation times; rapid prioritization of targets; dynamic allocation of assets to identified targets; better weapon-to-target pairings.
• Engagement Time: better-positioned engagement assets; in-flight target updates lead to shorter localization times.
• Assessment Time: all-source BDA data married with the common tactical picture; quicker relay of BDA; anticipatory scheduling of pre- and post-strike imagery assets shortens the engagement cycle.

11. Examples: Fleet Battle Experiment Delta (FBE-D)
The MBC/C7F hypothesized that distributed surface picture management and distributed localization/prosecution asset allocation, leveraging planned IT-21 improvements, would result in significant improvements in CSOF mission effectiveness …
[Diagram: USN Surface Warfare Commander under traditional centralized C2, contrasted with FBE-D distributed C2 built from distributed surface picture management nodes (Picture Manager 1 … Picture Manager N) and distributed battle management nodes (Battle Manager 1 … Battle Manager N).]

12. Examples: Fleet Battle Experiment Delta (FBE-D)
M&S was employed to model the CSOF threat and US/ROK surveillance, localization, and prosecution assets. Live operators interacted with the simulation by making surveillance, localization, and prosecution asset allocations. These asset allocations were fed back into the simulation to provide operator feedback and to assess the effectiveness of the experimental distributed C2 architecture.
[Diagram: NSS SIMULATION and LIVE C2. Live side: Maritime CSOF Commander, C2 System (LAWS), and USN C2 ships receiving sensor reports and BDA reports and issuing prosecution tasking. Simulated side: sensors (P-3C and SH-60), fusion, targets, threat assets (nK SOF force transport boats), attack assets (USAF AC-130 a/c, AH-64 Apaches, USAF/ROK ACC strike a/c, and USN CV strike a/c), and weapons.]
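A minimal sketch of the human-in-the-loop cycle described above, with the simulation pushing sensor/BDA reports to the live C2 side and operator asset allocations flowing back as tasking; the object interfaces are hypothetical, not the NSS or LAWS APIs:

```python
def hitl_cycle(simulation, live_c2, n_steps: int) -> None:
    """Alternate between simulated world updates and live operator decisions."""
    for _ in range(n_steps):
        reports = simulation.step()          # simulated sensor reports and BDA reports
        live_c2.update_picture(reports)      # live operators see the updated picture
        tasking = live_c2.get_tasking()      # surveillance/localization/prosecution allocations
        simulation.apply_tasking(tasking)    # fed back to assess C2 architecture effectiveness
```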

13. Examples: Fleet Battle Experiment Delta (FBE-D)
A novel live operator-to-simulation voice- and GUI-based approach was employed to effect the desired virtual experimentation environment. Pictured here is the air asset interface ...

14. Examples: Fleet Battle Experiment Delta (FBE-D)
The FBE-D distributed C2 architecture plus new in-theater attack asset capabilities yielded the surprising result that the assessed CSOF threat could be countered on Day 01 of the Korean War Plan. Post-analysis, pictured below, was employed to assess the sensitivity of this result to different force laydowns.

15. Lessons-Learned and Challenges
Lessons-Learned:
• C4ISR architectures and C2 decision processes can be explicitly represented at the commander, platform, and system levels. Detailed alternatives can be explicitly represented and assessed.
• Simulation supports detailed observation of C4ISR architecture in n-sided campaign- and mission-level scenarios.
• Facilitates/forces the community to think through proposed C4ISR architectures.
• Identification of key performance drivers and assessment of the warfighting impact of technology initiatives using Monte Carlo simulation are feasible.
Challenges:
• Detailed C4ISR assessments require consideration of nearly all details associated with planning and executing a C4ISR exercise or experiment.
• Collection of valid platform, system, and (in particular) C2 data and assumptions for friendly and threat forces is an issue.
• Campaign-level decisions (e.g., determining commander's objectives) are not easily handled.
• Scenarios in which major re-planning (e.g., modifying commander's objectives) is warranted are not easily handled.
• Execution times limit the analyses that can reasonably be performed.
