
Atmosphere Agenda



Presentation Transcript


  1. Atmosphere Agenda [Diagram: NPP science data segment elements (Science Team, SD3E, PSOE, NICSE, I&TSE; Ozone, Sounder, Atmosphere, Land, and Ocean PEATEs), with the Atmosphere PEATE highlighted; some labels TBS]

  2. NPP Science Team Counterparts for Atmosphere PEATE
Bryan Baum (UW): VIIRS Cloud Retrievals
Christina Hsu (GSFC): VIIRS Aerosol Retrievals
Hank Revercomb (SSEC): SDR Validation
Omar Torres (UMBC): Aerosol Validation
Paul Menzel (UW): VIIRS and heritage CDRs

  3. Atmosphere PEATE Organization
Project Management: Hank Revercomb (PI), Liam Gumley (Co-I, PM)
Algorithms & Validation: Bob Holz, Richard Frey, Bryan Baum, Paolo Antonelli, Andy Heidinger, Mike Pavolonis, Dave Tobin
Computing Systems: Scott Mindock, Steve Dutcher, Bruce Flynn, Rick Jensen
Operations: Jerry Robaidek, Rosie Spangler, Dee Wade

  4. NPP Environmental Data Records (EDRs)
Land PEATE: 1. Albedo (Surface); 2. Land Surface Temperature; 3. Snow Cover and Depth; 4. Surface Type; 5. Active Fires (ARP); 6. Ice Surface Temperature; 7. Vegetation Index; 8. Aerosol Optical Thickness; 9. Aerosol Particle Size
Ocean PEATE: 10. Ocean Color/Chlorophyll; 11. Sea Surface Temperature
Ozone PEATE: 12. Ozone Total Column/Profile
Atmosphere PEATE: 13. Suspended Matter; 14. Cloud Cover/Layers; 15. Cloud Effective Particle Size; 16. Cloud Top Height; 17. Cloud Top Pressure; 18. Cloud Top Temperature; 19. Cloud Base Height; 20. Cloud Optical Thickness
Sounder PEATE: 21. Atm Vertical Moisture Profile; 22. Atm Vertical Temp. Profile; 23. Atm Vertical Pressure Profile
VIIRS Intermediate Products: Cloud Mask

  5. Atmosphere Agenda [Diagram: NPP science data segment elements (Science Team, SD3E, PSOE, NICSE, I&TSE; Ozone, Sounder, Atmosphere, Land, and Ocean PEATEs), with the Atmosphere PEATE highlighted; some labels TBS]

  6. Atmosphere PEATE Design (TBR). The NPP Atmosphere PEATE will be implemented within the framework and facilities of the SSEC (TBR). This system has been successfully supporting operational, satellite-based remote-sensing missions since 1996, and its capabilities continue to evolve and expand to meet the demands and challenges of future missions.

  7. Element Overview
• Acquire VIIRS RDRs, SDRs, and Atmosphere EDRs from the SD3E and ADS/CLASS
• Assess the quality of the NPP Atmosphere EDRs for accomplishing NASA's climate research requirements (TBR)
• Provide suggested algorithm improvements to the IDPS

  8. Changes since PDR (TBR)

  9. Atmosphere PEATE Interface Diagram [Diagram: flows among the Atmosphere PEATE, SD3E, CLASS (ADS), Ocean PEATE, NICSE, I&TSE, PSOE, SeaBASS, RSMAS, VOST, Casa-NOSA, ancillary data providers, and the ocean science community: xDRs, IPs, and ancillary data (from CLASS if unavailable from SD3E); pre-flight algorithms, data, and info; software and data; in situ data and matchups; xDR evaluation results and algorithm updates; calibration updates and evaluations; analysis results and proposed algorithm updates; algorithm updates, test requests and results; management direction (TBR)]

  10. External Interfaces: SD3E
Messaging: Any request or report requiring email interaction will be handled by SSEC Data Center (DC) staff (dcstaff@ssec.wisc.edu). Problem reports, system status notices, subscription requests, and transfer errors all fall into this category. DC staff are available 0730-2300 Central, Mon-Fri, and will escalate issues to the PEATE team only if they cannot resolve them themselves.
File Transfers:
• Pull transfers: DC staff will initiate and monitor any regularly scheduled downloads from the SD3E 32-day data store. These downloads will be handled by the PEATE Ingest subsystem (see Part II of this presentation).
• Push transfers: DC staff will request and monitor any subscriptions that have been established with the SD3E.
Checksums and digital signatures generated at the SD3E will be ingested and verified automatically; requests for retransmission will be routed through DC staff. The PEATE Ingest subsystem will also generate in-house checksums based on MD5. SD3E file naming conventions will be maintained, and files will be compressed internally using bzip2 or similar.
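
To make the ingest verification concrete, here is a minimal Python sketch of chunked MD5 generation plus bzip2 compression as described above; the `ingest` helper and its file-handling details are illustrative assumptions, not the actual PEATE Ingest subsystem code.

```python
import bz2
import hashlib
from pathlib import Path

def md5_checksum(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute an MD5 digest in chunks so large granule files need not fit in memory."""
    digest = hashlib.md5()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def ingest(granule: Path, expected_md5: str) -> Path:
    """Verify the checksum delivered with the file, then compress it with bzip2.
    Hypothetical helper: the real subsystem also handles signatures and retransmits."""
    actual = md5_checksum(granule)
    if actual != expected_md5:
        raise IOError(f"checksum mismatch for {granule.name}: "
                      f"expected {expected_md5}, got {actual}")
    compressed = granule.with_suffix(granule.suffix + ".bz2")
    compressed.write_bytes(bz2.compress(granule.read_bytes()))
    return compressed
```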

  11. External Interfaces

  12. Network Performance: NASA GSFC to SSEC (Source: Andy Germain <Andy.Germain@gsfc.nasa.gov>)

  13. External Interfaces: NPP Science Team
Product Evaluation and Algorithm Testing/Improvement: The Atmosphere PEATE will provide a development server for use by the NPP Science Team Atmosphere subgroup for interactive product evaluation (e.g., Matlab, IDL, ENVI) and testing of improved EDR algorithms (C, C++, FORTRAN). The system will also provide source code version management (SVN) and a modest online disk archive (<10 TB) for algorithm testing.
EDR Product Generation: The Atmosphere PEATE Science Processing System is currently envisioned as a "super-user" system: NPP Science Team members will deliver compiled EDR code to the system, and a system manager will run the code on the dataset requested by the investigator. Products will be made available on a PEATE FTP site.
Data Search and Order: The Atmosphere PEATE will provide a web-based interface for the NPP Science Team to search current data holdings and order files for FTP push or pull. Products not available online will be recreated as necessary.

  14. Atmosphere PEATE Interfaces [bullets for previous slide]

  15. Atmosphere PEATE: Success Criteria
• Provide an environment for pre-launch testing and evaluation of operational atmosphere EDR algorithms
• Allow rapid post-launch evaluation and comparison of NPP atmosphere EDRs
• Create infrastructure using available validation data to allow rapid assessment of NPP EDR products
• Assist the NPP Science Team in assessing the suitability of NPP atmosphere EDRs for continuing the climate record of cloud observations from space
• Provide an environment where the NPP Science Team can test alternative EDR algorithms on climatologically significant samples of global proxy data

  16. Atmosphere PEATE Science Processing System
• Assess cloud EDRs for their ability to support long-term data trending
• Enable creation of consistent long-term cloud property datasets for EDR evaluation
• To evaluate the climate quality of atmosphere EDRs, a consistent version of the calibration and science algorithms must be used for a long-term dataset (e.g., one month, one year, the entire mission)
• EDRs must be created rapidly in order for the Science Team to give timely feedback to the NPP project on algorithm performance
• NPP Science Team members do not have the individual resources to host large datasets, integrate operational NPP algorithms, and test improved/alternative algorithms
• The NPP cloud products must be put into context with historical and ongoing global cloud property datasets (e.g., PATMOS-X, UW-HIRS, MODIS Terra/Aqua) to create self-consistent climate data records (CDRs)

  17. Processing System Trade Studies and Key Decisions
• Examined the NASA Ocean SDPS and MODIS Land processing systems. Lessons learned:
• Recipe-based approach to running science algorithms: the system doesn't care what the algorithm is, as long as it knows how to assemble the ingredients to make the recipe (see the sketch after this list)
• Cluster of compute resources (no need for a large shared-memory computer)
• Decouple the components of the processing system (store, compute, distribute)
• Use commodity hardware/software (e.g., rackmount Intel/AMD servers, Linux)
• Key design decisions:
• Create a system where individual components have loose dependencies on each other
• Leverage existing cluster processing hardware infrastructure and knowledge base
• Create a system which is scalable, efficient, and cost-effective
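
A minimal sketch of the recipe idea (Python; the class, field names, and example product are hypothetical): the scheduler knows only a product's ingredients and the command that assembles them, never the science inside the algorithm.

```python
from dataclasses import dataclass

@dataclass
class Recipe:
    """A product recipe: the system needs only ingredients and a command."""
    product: str            # e.g., a cloud mask EDR (hypothetical name)
    ingredients: list[str]  # input product types that must be on hand
    command: list[str]      # executable and arguments that cook the recipe

def runnable(recipe: Recipe, holdings: set[str]) -> bool:
    """A recipe can be scheduled once every ingredient is available."""
    return all(item in holdings for item in recipe.ingredients)

cloud_mask = Recipe(
    product="CloudMask_EDR",
    ingredients=["VIIRS_SDR", "GFS_Ancillary"],
    command=["run_cloud_mask", "--out", "cloudmask.hdf"],
)
print(runnable(cloud_mask, {"VIIRS_SDR", "GFS_Ancillary"}))  # True
```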

  18. Need Logical Design 1st

  19. Atmosphere PEATE Science Processing System (APSPS)

  20. APSPS: Atmosphere PEATE Science Processing System
DMS, Data Management System: stores data
CRG, Computational Resource Grid: processes data
ARM, Algorithm Rule Manager: applies product rules to data
ING, Ingest System: brings data into the system
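
The loose coupling among these four components could be expressed as minimal interfaces, sketched here in Python; all method names are hypothetical, chosen only to illustrate the division of responsibilities.

```python
from typing import Protocol

class ING(Protocol):
    """Ingest System: brings data into the system."""
    def fetch(self, source: str) -> str: ...

class DMS(Protocol):
    """Data Management System: stores data and answers queries."""
    def store(self, path: str) -> None: ...
    def query(self, product: str) -> list[str]: ...

class ARM(Protocol):
    """Algorithm Rule Manager: applies product rules to decide what to build."""
    def ready_jobs(self, holdings: list[str]) -> list[str]: ...

class CRG(Protocol):
    """Computational Resource Grid: runs jobs on the cluster."""
    def submit(self, job: str) -> int: ...
```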

  21. Science Processing System Preliminary Design [Diagram: Ingest Data, Manage Data, Manage Processing, Process Data]

  22. Science Processing System: Lines of Code (breaking out reused source). Why so small? 1. Java libraries; 2. Open-source code contributions are not counted (e.g., AXIS Web Services, Hibernate, Tomcat, Log4J).

  23. Algorithm Lifecycle. An algorithm can come from anywhere. Once qualified, it can be applied from the ARM.

  24. Algorithm Ingest
• Algorithm entered into Subversion
• Product created in Bugzilla
• Algorithm is ported and wrapped
• Tests are created

  25. Algorithm Qualification
• Write a script to execute the algorithm (a sketch follows below)
• The script manages the execution environment
• Algorithm name, inputs, and outputs are entered into the ARM
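
A sketch of what such a qualification script might look like (Python; staging inputs by symlink into a scratch directory is an illustrative assumption, not the actual ARM convention):

```python
import os
import subprocess
import tempfile

def run_algorithm(executable, inputs, outputs, env_overrides=None):
    """Run a wrapped EDR algorithm in a controlled scratch directory and
    report whether it exits cleanly and produces every expected output."""
    env = dict(os.environ, **(env_overrides or {}))
    with tempfile.TemporaryDirectory() as workdir:
        # Stage inputs by symlink so the algorithm sees a clean environment.
        for path in inputs:
            os.symlink(os.path.abspath(path),
                       os.path.join(workdir, os.path.basename(path)))
        result = subprocess.run([executable], cwd=workdir, env=env)
        produced = all(os.path.exists(os.path.join(workdir, name))
                       for name in outputs)
        return result.returncode == 0 and produced
```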

  26. LEOCAT: Low Earth Orbit Cloud Algorithm Testbed
LEOCAT History
• Developed by Mike Pavolonis at UW under VIIRS IGS funding to investigate differences between the operational VIIRS cloud algorithms and heritage algorithms in a manner that isolates algorithmic differences
• The best way to accomplish this is to apply each algorithm to the same Level 1B and ancillary data sets using the same radiative transfer model
• A secondary use of LEOCAT is to serve as an algorithm development tool and global EDR processing system
• The LEOCAT approach is also being used for GOES-R AWG work (GEOCAT)
LEOCAT Features
• Handles multiple imaging sensors (e.g., MODIS, VIIRS, AVHRR)
• Multiple algorithms with the same and/or different parameters can be executed in one instance
• Allows addition of new algorithms with minimal programming effort (they can be added as shared libraries; see the sketch after this list)
• Produces HDF4 output
• Can use CRTM or PLOD forward models
• Can use GFS, NCEP, or ECMWF ancillary data
• Optimized for efficiency
• Can process a single granule or multiple granules in one instance
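
LEOCAT itself is not written in Python, but the shared-library plug-in idea can be sketched with ctypes; the library name and entry-point signature below are hypothetical.

```python
import ctypes

# Load a science algorithm compiled as a shared library (hypothetical name).
lib = ctypes.CDLL("./libcloud_alg.so")

# The testbed needs only an agreed entry-point signature, not the science:
#   int run_algorithm(const float *radiances, int n, float *out)
lib.run_algorithm.argtypes = [ctypes.POINTER(ctypes.c_float),
                              ctypes.c_int,
                              ctypes.POINTER(ctypes.c_float)]
lib.run_algorithm.restype = ctypes.c_int

n = 4
radiances = (ctypes.c_float * n)(0.31, 0.27, 0.45, 0.38)
out = (ctypes.c_float * n)()
status = lib.run_algorithm(radiances, n, out)  # 0 on success, by convention
```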

  27. LEOCAT Architecture [Diagram: LEOCAT core with plug-in science algorithms]

  28. EDR Evaluation Demonstration
• Goal: demonstrate the workflow necessary to evaluate a VIIRS atmosphere EDR for climate product quality
• Proxy data: Aqua MODIS is the best available spectral simulation of VIIRS
• Products to be compared:
• VIIRS OPS Cloud Mask (versions 1.3 and 1.4)
• MODIS operational cloud mask (Collection 5)
• MODIS operational cloud mask with VIIRS bands only
• Components of the demonstration:
• Obtain products (from archive, or generate from scratch)
• Run a quality control process on each product
• Intercompare the products (internally)
• Validate the products (using external data)

  29. Demonstration Work Plan
• Process one global month of Aqua MODIS proxy data (day/night), starting with Level 1A data (RDR)
• Run MODISL1DB algorithms for geolocation and calibration
• Run the DAAC operational algorithm for Level 1B destriping
• Run the cloud mask algorithms
• Examine the quality of each individual product (e.g., algorithm success/failure and retrieval yield statistics; processing summaries per granule and per day; granule-based images; clear radiance composites (daily, 8-day, monthly))
• Intercompare the products (compute and map differences in clear-sky radiance statistics for the final retrieval and intermediate spectral tests)
• Validate each product by collocating with the CALIPSO lidar and comparing to the CALIPSO cloud mask (a collocation sketch follows below)
• Each step in the process must be straightforward and well documented
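
A simplified sketch of the lidar/imager collocation step (Python/NumPy; the flattened 1-D arrays and the 5 km / 5 min thresholds are illustrative assumptions):

```python
import numpy as np

def collocate(img_lat, img_lon, img_time, lid_lat, lid_lon, lid_time,
              max_km=5.0, max_seconds=300.0):
    """For each lidar shot, find the nearest imager pixel within distance
    and time thresholds; returns the pixel index, or -1 if no match."""
    earth_radius_km = 6371.0
    matches = np.full(lid_lat.shape, -1, dtype=int)
    for i, (lat, lon, t) in enumerate(zip(lid_lat, lid_lon, lid_time)):
        # Great-circle distance to every imager pixel (haversine formula).
        dlat = np.radians(img_lat - lat)
        dlon = np.radians(img_lon - lon)
        a = (np.sin(dlat / 2) ** 2 +
             np.cos(np.radians(lat)) * np.cos(np.radians(img_lat)) *
             np.sin(dlon / 2) ** 2)
        dist = 2 * earth_radius_km * np.arcsin(np.sqrt(a))
        ok = (dist <= max_km) & (np.abs(img_time - t) <= max_seconds)
        if ok.any():
            matches[i] = np.flatnonzero(ok)[dist[ok].argmin()]
    return matches
```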

  30. MODIS Band 2 (0.87 micron)

  31. MODIS C5 Cloud Mask

  32. VIIRS OPS v1.4 Cloud Mask (Preliminary/Unverified)

  33. VIIRS OPS v1.4 Cloud Mask: Preliminary/Unverified vs. Verified

  34. EDR Evaluation: Satellite and Ground Measurements
• Evaluate the effectiveness of proposed cloud algorithms and the resulting global cloud products generated from MODIS (proxy for VIIRS), AIRS (proxy for CrIS), CloudSat, and CALIPSO
• A subsequent test of algorithm robustness will be to apply the cloud algorithms to MetOp data (AVHRR, HIRS, and IASI) for a time period concurrent with the A-Train data analyses
• Build the capability to assess instrument issues such as out-of-band response, channels that perform out of spec, detector striping, etc.
• When VIIRS is launched, it is unlikely that a space-based lidar/radar will be in operation, and there will not be continuous coincident lidar/radar measurements with VIIRS
• A combined satellite and ground measurement plan provides a comprehensive evaluation capability to assess the VIIRS products

  35. EDR Evaluation Measurement Plan
• The NASA A-Train measurement platform, using MODIS as a proxy for VIIRS, will provide:
• a platform to compare the VIIRS algorithms directly with MODIS, CALIPSO, and CloudSat cloud retrievals (global)
• a "baseline" for our global performance expectations for VIIRS
• The assessment using ground measurements will provide well-calibrated point measurements that will be available at VIIRS launch
• The combined ground/satellite evaluation using MODIS will provide a measure of how representative the ground evaluation will be in determining the global performance of the VIIRS retrievals at launch

  36. EDR Evaluation Goals
• The PEATE will be designed to identify algorithm/instrument issues from the physical sensitivity differences between the evaluation and VIIRS products
• The VIIRS science team will be enlisted to establish protocols so that cloud product intercomparisons are performed similarly for all algorithms
• The goal is to automate the product intercomparison process
• The evaluation results will be compiled for each VIIRS processing run using established protocols
• Graphics (figures) and comparison statistics will be automatically generated for review, allowing rapid feedback on changes to the VIIRS algorithms (a sketch of one such statistic follows below)
• When new evaluation measurements/retrievals become available, they can be easily integrated into the evaluation system
• Well-documented evaluation protocols for each VIIRS product will be created
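
As one example of a statistic that could be generated automatically per granule, here is a Python sketch of binary cloud-mask agreement; the actual comparison protocols will be established by the Science Team.

```python
import numpy as np

def cloud_mask_agreement(mask_a, mask_b):
    """Per-granule agreement statistics for two binary cloud masks (1 = cloudy)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    n = a.size
    return {
        "agreement_pct":   100.0 * np.count_nonzero(a == b) / n,
        "both_cloudy_pct": 100.0 * np.count_nonzero(a & b) / n,
        "both_clear_pct":  100.0 * np.count_nonzero(~a & ~b) / n,
        "a_only_pct":      100.0 * np.count_nonzero(a & ~b) / n,
        "b_only_pct":      100.0 * np.count_nonzero(~a & b) / n,
    }
```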

  37. EDR Evaluation: Cloud Height Global Images (August 2006)

  38. Pre-launch Evaluation Approaches, by EDR
• Cloud mask (CALIPSO)
• Aerosol/cloud discrimination (CALIPSO)
• Aerosol layer height (CALIPSO)
• Cloud layering (CALIPSO/CloudSat)
• Cloud top height (CALIPSO/CloudSat)
• Cloud base (CALIPSO/CloudSat)
• Cloud thermodynamic phase (CALIPSO)
• Cloud optical thickness (for τ < ~3, from CALIPSO; for higher τ, from CloudSat)
• Cloud particle radius (ongoing research: CALIPSO/CloudSat)

  39. Evaluation: Post-NPP-Launch Evaluation Flow Diagram

  40. Atmosphere EDR Evaluation Summary
At VIIRS launch, the Atmosphere PEATE EDR evaluation system will have the capability to:
• Ingest and store global VIIRS RDRs, SDRs, and atmosphere EDRs
• Regenerate self-consistent SDR and EDR long-term datasets for evaluating the climate quality of atmosphere EDRs
• Ingest, process, and store the evaluation measurements (ground and satellite)
• Collocate (in space and time) the VIIRS SDRs and EDRs with the evaluation measurements (ground and satellite)
• Produce quantitative comparisons between the VIIRS SDRs/EDRs and the evaluation products (global, long-term)
• Produce quick-look images of the VIIRS SDRs/EDRs, evaluation products, and collocated products
• Distribute results to the NPP Science Team

  41. Atmosphere PEATE Development Schedule

  42. Atmosphere PEATE Development Schedule

  43. Atmosphere PEATE Development Schedule

  44. Atmosphere PEATE Development Schedule

  45. Atmosphere PEATE: Prototype Hardware (January 2007)
Based on commodity hardware: dual/quad-core servers, SATA RAID, Gigabit Ethernet. Prototype: 50 CPU cores, 40 TB disk. By NPP launch: 250 CPU cores, 215 TB disk. Aiming for a 100x processing rate for NPP atmosphere EDRs.

  46. At-Launch PEATE Hardware: Notional Configuration
Compute resources:
• 250 CPU cores, AMD or Intel, 2 GB RAM per core. Estimate: $250K.
• Justification: 50 CPU cores yielded a ~50x EDR generation rate in prototyping studies. We will have 10 atmosphere EDRs from NPP, and we desire a 100x EDR generation rate.
• 50 CPUs = 4 EDRs at 50x
• 100 CPUs = 4 EDRs at 100x
• 250 CPUs = 10 EDRs at 100x
Storage:
• 215 terabytes (TB). Estimate: $200K.
• Justification: Aqua MODIS L0 volume is 18.25 TB/year compressed (ref: Ocean SDPS). We desire the complete Aqua MODIS L0 archive online (7.5 years), plus 3 years of space for NPP RDRs (complete) and SDRs+EDRs (subset). Estimate NPP at 25 TB/year.
• (18.25 TB/year x 7.5 years) + (25 TB/year x 3 years) ≈ 215 TB (worked below)
Networking:
• Cisco Catalyst stackable Gigabit switch infrastructure (32 Gbps stack interconnect) with 4 x 48 ports. Estimate: $50K.
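
Working the slide's storage arithmetic explicitly (all figures taken from the slide itself):

```python
# Storage sizing in TB, using the figures quoted above.
aqua_l0_tb_per_year = 18.25  # Aqua MODIS L0, compressed (ref: Ocean SDPS)
aqua_years = 7.5             # complete Aqua MODIS L0 archive online
npp_tb_per_year = 25.0       # NPP RDR (complete) plus SDR+EDR (subset), estimated
npp_years = 3.0

total_tb = aqua_l0_tb_per_year * aqua_years + npp_tb_per_year * npp_years
print(total_tb)  # 211.875, rounded up to the 215 TB quoted on the slide
```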

  47. Atmosphere PEATE: Achievements to Date
• Processed one month of global Aqua MODIS proxy data using the MODIS, MODIS VIIRS-like, and VIIRS OPS cloud mask algorithms
• Collected Aqua MODIS global Level 1A data since Jan. 2006
• Adapted existing software (LEOCAT) to create infrastructure for running VIIRS OPS EDR code on Linux (32- and 64-bit) and OS X
• Demonstrated a 50x processing rate on the first-generation computing system
• Created a validation plan and demonstrated a validation approach for the cloud mask using CALIPSO CALIOP
• Completed the System Requirements Review for NASA
• Demonstrated the process for evaluating a VIIRS EDR (Cloud Mask)
• Completed a successful Preliminary Design Review for NASA
• Started deployment and testing of the Science Processing System

  48. Gap Analysis
EDR Algorithm Qualification/Verification: A major concern is the difficulty we have encountered in obtaining meaningful test suites to verify that EDR algorithms operate in the Atmosphere PEATE environment as intended by the algorithm designers. We must make a best guess as to which gridded IPs and ancillary datasets should be used as proxies in pre-launch testing.
Others?
