
FAA EDR Standards Project RTCA Plenary Sub-Group 4 & 6 September 16, 2014




Presentation Transcript


  1. FAA EDR Standards Project • RTCA Plenary Sub-Group 4 & 6 • September 16, 2014 • Presented by: Sal Catapano - Exelis

  2. Outline • Background / FAA EDR Standards Project Report • Current In Situ EDR Algorithms / Aircraft Equipped • FAA EDR Standards Project Overview • 1-Minute Mean In Situ EDR Report Performance and Recommendations • 1-Minute Peak In Situ EDR Report Performance and Recommendations • Variability Analysis Findings • Follow-on Recommendations • Open Question / Discussion Period

  3. Background & Current EDR Algorithms / Aircraft Equipped

  4. Background • In 2001, ICAO made EDR the turbulence metric standard • In 2011, the ADS-B ARC recommended the FAA “establish performance standards for EDR” • In 2012, RTCA SC-206 developed an Operational Services and Environmental Definition (OSED) identifying the necessity for: • An international effort to develop performance standards for aircraft (in situ) EDR values, independent of computation approach • In response, the FAA sponsored an EDR Standards Project from July 2012 to September 2014 that: • Provides the analysis, inputs, and recommendations required to adopt in situ EDR performance standards

  5. FAA EDR Standards Final Report • Delivered to the FAA August 29, 2014 • Documents the research, analysis, and findings of the project • Includes performance tolerance recommendations for 1-minute mean and peak in situ EDR reports • Report and appendices submitted to RTCA on September 10, 2014

  6. Current Operational In Situ EDR Algorithms
  • NCAR Vertical Acceleration-Based — Input: TAS, Altitude, Vertical Acceleration, Weight, Frequency Response; Users: American Airlines, others; Windowing: 5 sec running window; Average Calc: N/A; Peak Calc: Largest EDR in 30 seconds
  • ATR Accelerometer-Based — Input: TAS, Altitude, Vertical Acceleration, Weight, Frequency Response, Mach, Flap Angle, Autopilot Status, QC Parameters; Users: United Airlines; Windowing: 10 sec window every 5 sec; Average Calc: Arithmetic mean over 1 min; Peak Calc: 95th percentile over 1 min
  • NCAR Vertical Wind-Based — Input: TAS, Altitude, Inertial Vertical Velocity, Body Axis AoA, Pitch Rate, Pitch, Roll Angle, QC, Filter Parameters; Users: Delta and Southwest Airlines; Windowing: 10 sec running; Average Calc: Median over 1 min; Peak Calc: Largest EDR in 1 min
  • Panasonic Longitudinal Wind-Based — Input: TAS, Roll Angle for QC, TAMDAR Icing for QC (if using TAMDAR sensor); Users: TAMDAR - Regional Airlines; Windowing: 9 sec window; Average Calc: 1, 3, 7 min; 300, 1500 ft; Peak Calc: Largest EDR in 1, 3, 7 min; 300, 1500 ft
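The implementations above differ mainly in how they aggregate windowed EDR estimates into 1-minute reports. The following Python sketch is purely illustrative (the function names and sample values are hypothetical, not any vendor's operational code); it contrasts the mean and peak statistics listed in the table:

```python
import statistics


def one_minute_mean(window_edrs, method="arithmetic"):
    """Aggregate windowed EDR estimates into a 1-minute mean report.

    Per the table above: the ATR algorithm uses an arithmetic mean,
    the NCAR vertical wind-based algorithm uses a median.
    """
    if method == "arithmetic":
        return statistics.mean(window_edrs)
    if method == "median":
        return statistics.median(window_edrs)
    raise ValueError(f"unknown method: {method}")


def one_minute_peak(window_edrs, method="largest"):
    """Aggregate windowed EDR estimates into a 1-minute peak report.

    The NCAR algorithms report the largest windowed EDR in the interval;
    the ATR algorithm reports the 95th percentile over the minute.
    """
    if method == "largest":
        return max(window_edrs)
    if method == "p95":
        # Nearest-rank 95th percentile (one simple convention of several).
        ordered = sorted(window_edrs)
        rank = max(0, round(0.95 * len(ordered)) - 1)
        return ordered[rank]
    raise ValueError(f"unknown method: {method}")


# Twelve hypothetical 5-second windows spanning one minute,
# with a single turbulence spike in the fourth window.
edrs = [0.05, 0.06, 0.05, 0.30, 0.07, 0.05,
        0.05, 0.06, 0.05, 0.05, 0.06, 0.05]
print(one_minute_mean(edrs, "arithmetic"))   # 0.075
print(one_minute_mean(edrs, "median"))       # 0.05
print(one_minute_peak(edrs, "largest"))      # 0.30
print(one_minute_peak(edrs, "p95"))          # 0.07
```

Note how a single spike dominates the "largest" peak statistic but is discarded by the 95th-percentile and median statistics — the kind of component difference the project's variability analysis examines.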

  7. Aircraft Equipped • Total: 1133

  8. FAA EDR Standards Project Overview

  9. Project Team and Key Stakeholders

  10. Standards Research Process (diagram: Raw Vertical Winds, Simulator Output, Commercial Output, Research Quality Data)

  11. Input Winds Development • Homogeneous – exercise mean EDR • Maintains a single EDR on average throughout the wind dataset (e.g., 0.5 EDR) • Non-Homogeneous – exercise peak EDR • Simulate a “burst” of turbulence embedded in a background field of ambient turbulence (figure: burst modulation across Minutes 1–3)

  12. Input Winds Datasets

  13. ‘Expected’ EDR Value • Homogeneous Wind Datasets – Mean EDR • Single ‘expected’ EDR value corresponding to each dataset (i.e., the .5 EDR dataset has an ‘expected’ EDR value of .5 EDR) • Non-homogeneous Wind Dataset – Peak EDR • Spectral Scaling Method developed by the project used to calculate EDR ‘expected’ values based on individual implementation window length and modulation characteristics

  14. Expected Mean vs. Sample Mean

  15. Statistical Sets (figure: sample means of the statistical sets)

  16. 1-Minute Mean EDR

  17. In Situ EDR Mean Report Findings • Implementations performed well compared with the EDR ‘expected’ value • Consistency across turbulence severity levels and length scales, as well as across implementations • Noise floor for the project’s simulation determined to be at or below 0.02 EDR • Below the noise floor, bias is high for all implementations • Above the noise floor, bias quickly improves as the signal-to-noise ratio increases • Note: real-world operational noise floors are dependent on avionics and sensor characteristics

  18. 1-Minute Mean EDR Performance & Recommendations

  19. 1-Minute Mean EDR Performance Tolerance Threshold Example

  20. Wake Vortex Decay Modeling Performance Objectives (99%-Band) Note: Wake decay modeling community has performance objectives for null to light turbulence

  21. 1-Minute Peak EDR

  22. 1-Minute Peak EDR Expected Value (figure: ‘representative’ expected value vs. window length in meters for the Spike, Mid, and Flat wind datasets, 5-sec and 10-sec windows; as window length increases, the expected value decreases from roughly .7 toward .5 EDR while bias and standard deviation increase)

  23. Performance Recommendations for 1-Minute Peak In Situ EDR Reports

  24. 1-Minute Peak EDR Example (figure: ‘representative’ expected values for the Spike, Mid, and Flat wind datasets at 5-sec and 10-sec window lengths, e.g., .51, .445, .38, and .51 EDR; bias increases with window length)

  25. Peak EDR Performance Objectives Today • Current user applications employ peak EDR data only for strategic forecasting and planning • Interested in receiving numerous peak EDR reports • User objectives for peak EDR are subjective and require further sensitivity analysis Future • EDR used in tactical decision making (e.g., cross-linking between aircraft) • Improved inter-implementation consistency may be required across EDR implementations • Today’s peak EDR performance may not meet future use performance requirements

  26. In Situ EDR Peak Report Findings • All implementations performed well compared with EDR ‘expected’ value • However, all existing in situ EDR implementations employ different window lengths, parameter settings, and calculation methodologies • Some of these component differences lead to inconsistent peak EDR values across implementations in non-homogeneous turbulence (i.e., convective, mountain wave) • Project elected to perform variability analysis on a set of EDR algorithm components to discover sources of variability

  27. Variability Analysis (figures: scatter plot of results; consistency performance curve)

  28. Consistency Improvement Potential

  29. Follow-on Recommendations Leverage momentum of Project Team’s Success • Performance standard adoption • Validate in situ recommendations • Determine how compliance will be enforced • Define operational requirements • Pursue broad ConOps for EDR • Perform application specific sensitivity analyses • Continue variability analyses • Research additional algorithm components • Define parameter values for all components • Pursue additional research into the science of EDR • Analyze impact of distorting assumptions • Define an approach to develop vertical EDR profiles • Consider non-in situ EDR performance standards Follow-on activities MUST have operational significance and benefit

  30. Questions / Discussions

  31. Back-up Slides

  32. Inertial Sub-Range
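For reference on this backup slide's topic: EDR is conventionally defined as the cube root of the eddy dissipation rate ε, which in the Kolmogorov inertial sub-range sets the level of the turbulence energy spectrum (C is the Kolmogorov constant; this standard relation is background context, not taken from the slides):

```latex
E(k) = C\,\varepsilon^{2/3} k^{-5/3}, \qquad
\mathrm{EDR} = \varepsilon^{1/3} \ \left[\mathrm{m}^{2/3}\,\mathrm{s}^{-1}\right]
```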

  33. Work Element Relationship

  34. EDR Standards Process

  35. Algorithm Input Data Development

  36. Modulation Pulse

  37. Notional Depiction of Relationships Associated with Differing Window Length

  38. Spike, Mid, and Flat Expected Values

  39. ISE Comparison of Project Generated Non-homogeneous Wind Data Sets

  40. Example: 0.1 Mean EDR Bias • Expected Value = 0.1 EDR; Results Center = 0.1021 EDR; Performance Stnd = 0.105 EDR • 70%-band (−35% / +35%): Sample Mean = 0.1 EDR; Left Tolerance Band = 0.0925 EDR; Right Tolerance Band = 0.1075 EDR; Performance Stnd = at least 70% of results must be between 0.09 EDR and 0.11 EDR • 99%-band (−49.5% / +49.5%): Sample Mean = 0.1 EDR; Left Tolerance Band = 0.0813 EDR; Right Tolerance Band = 0.1187 EDR; Performance Stnd = at least 99% of results must be between 0.08 EDR and 0.12 EDR
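The two-band tolerance check on this slide can be sketched in a few lines of Python. This is an illustrative sketch only (helper names and sample report values are hypothetical); the band edges are the slide's example for an expected value of 0.1 EDR:

```python
def band_compliance(results, lower, upper):
    """Fraction of 1-minute mean EDR reports falling inside a tolerance band."""
    inside = sum(lower <= r <= upper for r in results)
    return inside / len(results)


def meets_standard(results):
    """Apply the slide's example standard for an expected value of 0.1 EDR:
    at least 70% of results in [0.09, 0.11] EDR AND
    at least 99% of results in [0.08, 0.12] EDR.
    """
    return (band_compliance(results, 0.09, 0.11) >= 0.70
            and band_compliance(results, 0.08, 0.12) >= 0.99)


# Ten hypothetical 1-minute mean EDR reports from a 0.1 EDR wind dataset.
reports = [0.095, 0.101, 0.104, 0.099, 0.108,
           0.092, 0.100, 0.103, 0.097, 0.110]
print(meets_standard(reports))  # True: all ten lie within both bands
```

The nested-band structure mirrors the slide: the narrow 70% band constrains typical behavior, while the wide 99% band bounds outliers.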
