
Landsat Update





Presentation Transcript


  1. Landsat Update Rochester Institute of Technology John Schott Aaron Gerace Nina Raqueno Monica Cook

  2. Discussion Topics • Landsat DIRSIG: detector modeling (banding, streaking, linearity), side slither, TIRS, split window, next priorities • Landsat Surface Temperature: sensitivity review, current approach and run times, early results, confidence-band ideas, path forward • Landsat Calibration Automation: methodology, status, next steps

  3. Detector Modeling: Pixel-to-Pixel Uniformity Rochester Institute of Technology Aaron Gerace John Schott South Dakota State University Dennis Helder Michele Kuester

  4. OLI Pixel-to-Pixel Uniformity in DIRSIG: Overview Goal: Determine if OLI’s pixel-to-pixel uniformity meets the requirement of 0.5% across the full FOV. • Developed an OLI sensor model in DIRSIG that incorporates all non-uniformity effects: RSR, gain, bias, nonlinearity, noise/quantization, implemented at the detector level. • Created panels in DIRSIG whose spectral characteristics reflect the requirements: uniform sources with radiance levels above 2×Ltypical. • Spectral radiance from bare desert soil as observed through a dry atmosphere (excluding band 9). • Spectral radiance proportional to the TOA solar irradiance. • Spectral radiance from a dense vegetation target as observed through a moist atmosphere (excluding band 9). [Figure 5-3: TOA spectra for uniformity analyses]

  5. OLI Pixel-to-Pixel Uniformity in DIRSIG: Panels • TOA radiance spectra can be applied directly to panels. • Spectra from Figure 5-3 in the OLI specification manual were used to determine if the instrument meets requirement 5.6.2.3.1. • Also included panels whose spectra were derived from AVIRIS, used to determine OLI’s potential over dark targets, particularly water. [Figure: RGB rendering of radiance panels, with Figure 5-3 spectra (bare desert/dry atmosphere; vegetation/moist atmosphere; snow; exoatmospheric radiance) and AVIRIS spectra (vegetation; Lake Ontario; Long Pond)]

  6. OLI Pixel-to-Pixel Uniformity in DIRSIG: Pseudo-Arrays • Two pseudo-arrays were developed to image the panels. • 840 detectors per array, each with a unique RSR (60 detectors × 14 FPMs). • OLI bands 1-7 were used in the analysis. • All non-uniformity effects (nonlinearity, gains, bias, noise, quantization) were applied to the band-integrated radiance data; the resulting images represent 12-bit raw data. • The raw data were gain/bias corrected as indicated in the OLI requirements document, as sketched below. [Figure: two 840-detector arrays with a 20-detector-wide overlap region]
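The forward model on this slide can be illustrated with a short sketch. The Python below is a minimal, hypothetical version of the chain: per-detector gain, bias, and nonlinearity applied to band-integrated radiance, plus noise and 12-bit quantization, followed by the linear gain/bias correction. All parameter values (gain spread, bias, nonlinearity coefficient, DN scaling, noise level) are illustrative assumptions, not OLI characterization data.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n_det, n_lines = 840, 100                  # detectors across-track, lines along-track

# Illustrative per-detector parameters (NOT OLI characterization values).
gain = rng.normal(1.0, 0.003, n_det)       # ~0.3% gain non-uniformity
bias = rng.normal(20.0, 1.0, n_det)        # per-detector DN offset
beta = rng.normal(0.0, 1e-6, n_det)        # small quadratic nonlinearity term

L = np.full((n_lines, n_det), 60.0)        # band-integrated radiance (uniform panel)

# Forward model: radiance -> raw 12-bit DN, detector by detector
# (40 DN per radiance unit is an assumed scale factor).
dn = gain * (L * 40.0) + beta * (L * 40.0) ** 2 + bias
dn += rng.normal(0.0, 1.0, dn.shape)       # additive noise
dn = np.clip(np.round(dn), 0, 4095)        # 12-bit quantization

# Linear gain/bias correction back to radiance; nonlinearity residuals
# remain uncorrected in this sketch.
L_hat = (dn - bias) / (gain * 40.0)
```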

  7. OLI Pixel-to-Pixel Uniformity in DIRSIG: Results • “The standard deviation of all pixel column average radiances across the FOV within a band shall not exceed 0.5% of the average radiance.” • Determined the column averages over 100 pixels in each panel. • Plots show Std.Dev.(column averages)/Mean(column averages) for each band; a sketch of this metric follows. • Variability in the dark spectra is high where the signal is low. [Plots: required spectra and dark spectra]
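A minimal sketch of the quoted metric, assuming the input is a (lines × detectors) array of corrected radiances for one band (e.g., the hypothetical `L_hat` from the sketch above):

```python
import numpy as np

def pixel_uniformity(image):
    """Std. dev. of the pixel-column-average radiances across the FOV,
    as a fraction of their mean (requirement: <= 0.005, i.e., 0.5%)."""
    col_avg = image.mean(axis=0)   # average each detector column over ~100 lines
    return col_avg.std() / col_avg.mean()
```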

  8. OLI Pixel-to-Pixel Uniformity in DIRSIG: Future Work • Looking at uniformity over a realistic scene such as Lake Tahoe; generated Lake Tahoe data with panels. • Collaborating with Dennis Helder and Michele Kuester at SDSU. • Investigating methods to further reduce pixel-to-pixel variability: gain and bias drift model; correlated noise (spatial and spectral).

  9. Side-Slither: Determining the Variability in Relative Gains Introduced by Potential PIC Sites

  10. Side-Slither: An Exciting Application of DIRSIG • DIRSIG can be used to evaluate how favorable a potential site is for side-slither calibration. • Pixel-to-pixel variability is required to be less than 0.5%. • We wish to identify sites that introduce variability of less than 0.05% (an order of magnitude smaller than the requirement). • We need to identify sites whose brightness values span the dynamic range of each OLI band, to characterize nonlinearity. [Figures: radiance levels for the identified sites; detector linearity curve (DN vs. L)]

  11. Side-Slither: PICS Analysis • SDSU (Dennis Helder, Bikash Basnet) performed a statistical analysis that identified Pseudo-Invariant Calibration Sites (PICS) around the world. • Side-slither was simulated over these sites using DIRSIG to determine whether they will be suitable for the side-slither mission.

  12. Side-Slither: PICS Analysis • Landsat 5 data were downloaded from Earth Explorer and used as input to the DIRSIG model for the sites listed below. • PIC sites were excluded from the analysis if they were too small or did not lie in the center of the swath. • Other sites were included in the analysis to help span the dynamic range of OLI.

  13. Side-Slither: OLI Arrays • Two OLI arrays were used to image the radiance scenes. • LOS vectors for FPMs 7 and 8 were used. • 494 detectors per array; however, a single RSR was used for each band, since we wish to isolate the variability in relative gains introduced by the potential site, not by the sensor. • Bands 1-7 were used in the analysis; a sketch of the relative-gain estimate follows. [Figure: two 494-detector arrays]
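A minimal sketch of how relative gains might be estimated from side-slither data and how the site-introduced variability could be quantified. The (lines × detectors) layout and the use of a simple column mean are assumptions; in side-slither mode every detector sweeps the same ground track, so column-mean differences reflect gain differences plus site variability.

```python
import numpy as np

def relative_gains(slither_dn):
    """Per-detector relative gains from side-slither data, where
    slither_dn is (lines, detectors) and every detector has swept
    the same ground track."""
    col_mean = slither_dn.mean(axis=0)
    return col_mean / col_mean.mean()

def site_variability(slither_dn):
    """Spread in the estimated relative gains (target: < 0.05%)."""
    g = relative_gains(slither_dn)
    return g.std()   # fractional, since mean(g) == 1 by construction
```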

  14. Side-Slither: Variability Introduced by Potential Sites • Sources of overestimation of the gain variability: imaging 30 m data with a 30 m sensor, and TM instrument non-uniformities. [Figure: gain variability for the potential sites]

  15. Side-Slither: A Significant Issue A jump in relative gains occurs because the two arrays image different spots on the ground.

  16. Side-Slither: Future Work • Flat-field relative gains (within an array). • Use PIC sites for the overlapping pixels. • Average 1000 common in-track pixels for each detector. • Average all overlapping detectors in an array. • Force the “overlapping averages” to match, as sketched below. [Figure: average of 1000 pixels over Libya 4; overlapping detector columns R-1, R, R+1]
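A minimal sketch of the “force overlapping averages to match” step, assuming each array’s per-detector relative gains have already been estimated and that adjacent arrays share a 20-detector-wide overlap (per slide 6). The single multiplicative scale factor is an assumption about how the matching would be enforced.

```python
import numpy as np

def match_arrays(gains_a, gains_b, n_overlap=20):
    """Rescale array B's relative gains so the average over its overlap
    detectors matches array A's, removing the inter-array jump."""
    avg_a = gains_a[-n_overlap:].mean()   # trailing overlap detectors of array A
    avg_b = gains_b[:n_overlap].mean()    # leading overlap detectors of array B
    return gains_b * (avg_a / avg_b)      # one scale factor removes the jump
```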

  17. Side-Slither: Future Work • Address BRDF effects. • Ran MODTRAN to approximate the BRDF over desert: • Solar zenith angles [165, 170, 175, 180, 185, 190, 195]. • Azimuth angles of 0° (north) and 90° (east). • MODTRAN’s desert albedo used. • Solar position: longitude 0.8° east of Greenwich, latitude 23.2° north. [Figures: north-south and east-west slices of the BRDF]

  18. TIRS Side-Slither

  19. TIRS Side-Slither: All Non-Uniformity Effects • TIRS is required to image whenever OLI images; does TIRS side-slither data have potential value? • The Algeria 3 PIC site was imaged with the TIRS array. • Non-uniformity effects were applied (gains, “offsets”). • A random drift model was applied to the (nominal) ground-based measurements, as sketched below. • Flat-field the raw data using either the ground-based measurements or the side-slither gains.
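A minimal sketch of a random drift model of the kind described, applied to nominal ground-based gain measurements. The multiplicative form and the drift magnitude are illustrative assumptions, not TIRS values.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def drift_gains(nominal_gains, sigma=0.001):
    """Perturb nominal (ground-measured) per-detector gains with random
    drift, simulating gains that have wandered since prelaunch
    characterization; sigma is an assumed drift magnitude."""
    return nominal_gains * (1.0 + rng.normal(0.0, sigma, nominal_gains.shape))
```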

  20. TIRS Side-Slither: All Non-Uniformity Effects [Figure panels: Algeria 3 input image; raw data; nominal correction; side-slither correction] The side-slither correction leaves a residual variability of 0.2%.

  21. TIRS Spatial Shape: PSF Modeling on a Realistic Scene Matt Montanaro Brian Wenny Allen Lunsford Dennis Reuter Rochester Institute of Technology

  22. TIRS PSF Modeling: As-Measured Spatial Shape Representative spatial shape curves from TVAC2 data (from Brian Wenny’s analysis). [Plots: 10.8 µm band and 12.0 µm band]

  23. TIRS PSF Modeling: PSF Applied to a Realistic Synthetic Scene Mid-latitude summer atmosphere, 10.8 µm band. • Measured edge vs. required edge: RMS = 0.002531 • Required edge vs. ideal edge: RMS = 0.010541 [Figure: fractional difference] A sketch of the PSF application and edge-RMS comparison follows.
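A minimal sketch of the two operations behind these numbers: blurring a synthetic radiance scene with a measured PSF and computing the RMS difference between two edge-response curves. The function names and array layouts are assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def apply_psf(scene, psf):
    """Blur a synthetic radiance scene (2-D array) with a measured PSF,
    normalized to unit sum so total radiance is conserved."""
    return fftconvolve(scene, psf / psf.sum(), mode="same")

def edge_rms(edge_a, edge_b):
    """RMS difference between two edge-response curves
    (e.g., measured vs. required, or required vs. ideal)."""
    return np.sqrt(np.mean((np.asarray(edge_a) - np.asarray(edge_b)) ** 2))
```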

  24. Split Window Atmospheric Correction

  25. Split Window: Basalt Single Regression

  26. Split Window: Basalt and Salton Sea
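Slides 25 and 26 show single-regression results for basalt and for basalt plus the Salton Sea. As context, below is a minimal sketch of the classic split-window form and a least-squares fit of its coefficients; this is the generic technique, not the specific regression presented, and all inputs are placeholders.

```python
import numpy as np

def fit_split_window(T11, T12, T_surface):
    """Least-squares fit of the classic split-window form
        Ts ~ a0 + a1*T11 + a2*(T11 - T12),
    where T11 and T12 are brightness temperatures in the ~10.8 um
    and ~12.0 um bands (1-D arrays of training samples)."""
    A = np.column_stack([np.ones_like(T11), T11, T11 - T12])
    coeffs, *_ = np.linalg.lstsq(A, T_surface, rcond=None)
    return coeffs

def split_window(T11, T12, coeffs):
    """Apply fitted coefficients to retrieve surface temperature."""
    a0, a1, a2 = coeffs
    return a0 + a1 * T11 + a2 * (T11 - T12)
```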

  27. LDCM DIRSIG: Moving Forward • Short term (next 3 months) • Demonstrate that normal imaging mode can be used to flat-field between arrays. • Characterize non-linearities with side-slither. • Incorporate bias and gain drift models. • Continue to support/collaborate with SDSU. • Investigating methods to further reduce pixel-to-pixel variability. • Long term • Develop a correlated noise model. • Spatial/spectral? • Develop a DIRSIG scene to support the cross-calibration of thermal and reflective LDCM data. • Lake Mead. • Center-pivot irrigation circles.

  28. Land Surface Temperature Product

  29. [Flowchart: Landsat surface temperature processing] • Landsat scene metadata. • Download GRIB data: HGT_1, TMP_1, SHUM_1 and HGT_2, TMP_2, SHUM_2 [349x277x29]. • Interpolate to the Landsat acquisition time and restrict to the Landsat scene: hgt, tmp, shum [100x29]. • Build tape5 files (lat/lon, height, temperature, albedo): 6000 tape5 files. • MODTRAN runs: τ, Lu, Ld at each height at each NARR point; calculations with the MODTRAN output. • Determine the NARR points for each pixel; interpolate τ, Lu, Ld at the four NARR points. • Result: LT, h, τ, Lu, Ld for each pixel.

  30. Sensitivity Studies • Temporal interpolation: linearly interpolate NARR profiles to the Landsat acquisition time. • Height interpolation: linearly interpolate the lowest atmospheric layer to the MODTRAN ground altitude; linearly interpolate atmospheric parameters to the pixel elevation; evaluate at 9 heights at each NARR point. • Spatial interpolation: inverse-distance weighting of the 4 NARR points in atmospheric-parameter space, as sketched below. • Number of MODTRAN runs: generate τ, Lu, and Ld using three MODTRAN runs.
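A minimal sketch of the inverse-distance weighting step, assuming four surrounding NARR points per pixel; the weighting exponent (here 1) and the distance metric are assumptions.

```python
import numpy as np

def idw(values, distances, eps=1e-12):
    """Inverse-distance weighting of an atmospheric parameter
    (tau, Lu, or Ld) from the four NARR points surrounding a pixel.

    values:    (4,) parameter values at the four NARR points
    distances: (4,) distances from the pixel to those points
    """
    w = 1.0 / (np.asarray(distances) + eps)   # eps guards a pixel on a NARR point
    return np.sum(w * np.asarray(values)) / np.sum(w)
```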

  31. Processing • Submitting runs to RIT’s Research Computing Cluster. • Processing times depend on cluster use and availability (2.2 GHz processors and 64 GB memory; changing with the installation of new hardware). • Once on a node, a scene can take 2.5-4.5 hours. • Average run time: 3.754 hours.

  32. California Study Processing all scenes over all time over California. Span: paths 38-46, rows 31-37, years 1982-2011; approx. 9400 scenes total. • 16 cores → 3 months • 25 cores → 2 months • 100 cores → 2 weeks The arithmetic behind these estimates is sketched below.
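A back-of-the-envelope check of the quoted wall-clock estimates, using the 3.754-hour average run time from slide 31 and assuming perfect parallel scaling:

```python
# Total work: 9400 scenes x 3.754 h/scene ~= 35,300 core-hours.
scenes, hours_per_scene = 9400, 3.754
for cores in (16, 25, 100):
    days = scenes * hours_per_scene / cores / 24
    print(f"{cores:3d} cores -> {days:5.1f} days")
# 16 cores -> ~92 days (~3 months); 25 -> ~59 days (~2 months);
# 100 -> ~15 days (~2 weeks), matching the slide's estimates.
```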

  33. Initial Studies - California • Processing scenes over Lake Tahoe (503 scenes) and the Salton Sea (58 scenes) and comparing with ground-truth data: 4 buoys with available data in Lake Tahoe, 1 buoy with available data in the Salton Sea. 16 cores → 5.5 days. • Processing all scenes in 2003 (699 scenes): 16 cores → 7 days.

  34. Preliminary Results: Initial Flags • Clouds in band 6 (visual). • Retrieved temperature < 260 K. • Max relative humidity > 70%. • Max temperature > 305 K. • (Temperature - dew point) < 3 K. • Transmission < 0.8. Acceptable is error < 1 K. Of the 25 initial scenes: 5 unreliable buoy readings, 7 acceptable results without flags, 3 acceptable results with flags, 8 bad results with flags (5 imaging clouds), 2 bad results without flags. A sketch of this screening logic follows.
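A minimal sketch of this screening, assuming each scene’s retrieval is summarized in a dict; the field names are hypothetical, and the cloud flag (a visual check) is omitted.

```python
def initial_flags(scene):
    """Return the list of initial screening flags (slide 34) that fire
    for one scene's retrieval; `scene` field names are illustrative."""
    flags = []
    if scene["retrieved_temp_K"] < 260:
        flags.append("retrieved temperature < 260 K")
    if scene["max_rel_humidity_pct"] > 70:
        flags.append("max relative humidity > 70%")
    if scene["max_temp_K"] > 305:
        flags.append("max temperature > 305 K")
    if scene["temp_dewpoint_spread_K"] < 3:
        flags.append("(temperature - dew point) < 3 K")
    if scene["transmission"] < 0.8:
        flags.append("transmission < 0.8")
    return flags
```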

  35. Preliminary Statistics • Scenes without flags: average error from buoy temperature: -0.533 K; standard deviation: 2.221 K.

  36. Preliminary Results - 20 June 2010

  37. Preliminary Results - 28 Feb 2010

  38. Preliminary Results - 1 April 2010

  39. Preliminary Results - 15 January 2010

  40. Preliminary Results - 1 April 2010: 254 K at approx. 5 km in the temperature profile.

  41. Future Work: Reducing Processing Times • Investigate the necessary number of NARR points (extending beyond scene edges for interpolation). • Number of heights. • Pixel size: actually 60 m or 120 m pixels in Landsat band 6; fewer spatial interpolations with slowly varying NARR. • Number of MODTRAN runs.

  42. Future Work: Analysis • Per-pixel flagging? Temperature map with confidence. • Analyzing more results: tuning initial flags, confidence values on clear pixels, confidence values of clouds. • Extending the process to the world: data availability, resolution.

  43. Landsat Thermal Calibration: Update on Buoy Automation Process April 12, 2012 Nina Raqueno nina@cis.rit.edu

  44. NOAA’s National Data Buoy Center Buoys: yellow = active, red = inactive (April 24, 2011). Approx. 118 possible.

  45. . . . 110 NDBC buoys Mar 2012

  46. NDBC Buoy 45012 exists in Paths 16 and 17. The 110 buoys yield 162 opportunities, given that some buoys exist in multiple scenes.

  47. Landsat 7 Path/Rows Intersected with NDBC Buoys • Paths: 13, 29, 45, 61, 77, … • Cycle: 1 • March 11, 2012 • Buoys: 12 • Example: water temperature range 34.2-56.8.

  48. Landsat 7 Path/Rows Intersected with NDBC Buoys • Paths: 4, 20, 36, 52, 68, 84, … • Cycle: 2 • March 12, 2012 • Buoys: 15.

  49. Approach • Site and scene selection. • Computation of bulk-to-skin temperature. • Recreation of the atmospheric column: surface correction; upper-air interpolation and processing. • Obtaining the predicted at-sensor radiance (see the sketch below). • Compare to the image-derived radiance.
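A minimal sketch of the “predicted at-sensor radiance” step, using the standard single-band radiative transfer equation with a monochromatic Planck radiance as a stand-in for the band-integrated calculation. The band-effective wavelength and the omission of the RSR integration are simplifying assumptions.

```python
import numpy as np

# Physical constants for the Planck spectral radiance (SI units).
H, C, KB = 6.626e-34, 2.998e8, 1.381e-23

def planck(wavelength_m, T):
    """Blackbody spectral radiance, W / (m^2 sr m)."""
    return (2 * H * C**2 / wavelength_m**5
            / (np.exp(H * C / (wavelength_m * KB * T)) - 1.0))

def at_sensor_radiance(T_skin, emis, tau, Lu, Ld, wavelength_m=11.45e-6):
    """Predicted at-sensor radiance via the standard single-band
    radiative transfer equation:
        L = tau * [emis * B(T_skin) + (1 - emis) * Ld] + Lu
    T_skin comes from the bulk-to-skin correction; tau, Lu, Ld come
    from the recreated atmospheric column (MODTRAN)."""
    return tau * (emis * planck(wavelength_m, T_skin) + (1 - emis) * Ld) + Lu
```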
