
A Multimodel Streamflow Forecasting System for the Western U.S.


Presentation Transcript


  1. A Multimodel Streamflow Forecasting System for the Western U.S. Theodore J. Bohn, Andrew W. Wood, and Dennis P. Lettenmaier University of Washington, U.S.A. EGU Conference, Spring 2006 Session HS23/NP5.04

  2. Outline • Background • UW West-Wide Forecasting System • Bayesian Model Averaging • Multi-model vs Individual Models • Deterministic Retrospective Forecasts • ESP Retrospective Forecasts

  3. Background • UW West-Wide Streamflow Forecast System (Wood and Lettenmaier, in review; Wood et al., 2002) • Developed in partnership with the USDA/NRCS NWCC • Long-lead-time (1-12 month) streamflow forecasting for the western U.S. • Main component: the Variable Infiltration Capacity (VIC) large-scale hydrological model • Probabilistic forecasts • Uses forecasts from multiple climate models to account for climate uncertainty • Does not yet account for uncertainty in hydrologic model physics

  4. Forecast data flow. [Figure: schematic of the forecast data flow. Model spin-up runs from 1-2 years back to the start of month 0, driven by NCDC meteorological station observations (2000-3000 stations in the West, available up to 2-4 months behind real time) plus LDAS/other real-time meteorological forcings (~300-400 stations) for the remaining spin-up, with observed snow state information (e.g., SNOTEL) contributing to the initial conditions. Forecast ensembles then run from the start of month 0 to the end of month 6-12, driven by climate forecast information and a climatology ensemble. Forecast products include streamflow, soil moisture, runoff, snowpack, and derived products such as reservoir system forecasts.]

  5. Background • Immediate goal: improve forecast skill at long lead times (1-12 months) • Problems: • Uncertainty grows with lead time • Greater uncertainty when making forecasts before the snow pack has accumulated • How much of this uncertainty is due to hydrologic model physics?

  6. Relative importance of initial condition and climate forecast error in streamflow forecasts (Columbia R. and Rio Grande R. basins). RE = RMSE(perfect ICs, uncertain forecast) / RMSE(perfect forecast, uncertain ICs). Where RE > 1, the climate forecast is more important; where RE < 1, the initial conditions are more important.
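
A minimal sketch of this ratio, assuming the two RMSE series come from paired retrospective experiments (one with perfect initial conditions, one with a perfect climate forecast); the function and variable names are illustrative, not part of the forecast system's code.

```python
import numpy as np

def relative_error(rmse_perfect_ic, rmse_perfect_fcst):
    """RE = RMSE(perfect ICs, uncertain forecast) / RMSE(perfect forecast, uncertain ICs).

    rmse_perfect_ic:   RMSE from runs with perfect initial conditions but an uncertain climate forecast.
    rmse_perfect_fcst: RMSE from runs with a perfect climate forecast but uncertain initial conditions.
    RE > 1 means climate forecast error dominates; RE < 1 means initial-condition error dominates.
    """
    return np.asarray(rmse_perfect_ic, dtype=float) / np.asarray(rmse_perfect_fcst, dtype=float)
```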

  7. Expansion to multiple-model framework. [Figure: forecast-period timeline (Oct through Sep) showing where climate forecasts are more important and where ICs are more important for streamflow volume forecasts, and the corresponding split of the N ensembles between climate ensembles and IC ensembles.] It should be possible to balance the effort given to the climate vs. the IC part of the forecasts.

  8. How to quantify uncertainty and reduce bias? • Multi-model ensemble • Average the results of multiple models – reduces bias • Ensemble mean should be more stable than a single model • Combines the strengths of each model - generally as good as the best model at all times/locations • Provides estimates of model uncertainty

  9. Expansion to multiple-model framework. [Figure: seasonal climate forecast data sources feeding the VIC hydrology model: the CPC official outlooks and statistical tools (CCA, CAS, OCN, SMLR, CA), the NOAA Coupled Forecast System, the NASA NSIPP/GMAO dynamical model, ESP, ENSO, and UW ENSO/PDO forecasts.]

  10. Expansion to multiple-model framework. [Figure: the same seasonal climate forecast data sources (CPC official outlooks: CCA, CAS, OCN, SMLR, CA; the NOAA Coupled Forecast System; the NASA NSIPP/GMAO dynamical model; ESP; ENSO; UW ENSO/PDO) now drive multiple hydrologic models (Model 1, Model 2, Model 3), with model weightings calibrated via retrospective analysis.]

  11. Averaging of Forecasts: Bayesian Model Averaging (BMA) (Raftery et al., 2005) • Ensemble mean: E(y | f1, …, fK) = Σk wk fk, where y = observation, fk = forecast of the kth model, and wk = weight of the kth model = expected fraction of data points for which the kth model's forecast is the best of the ensemble • Ensemble variance, for the forecast at time t: Var(yt | f1t, …, fKt) = Σk wk (fkt − Σi wi fit)² + Σk wk σk², where σk² = uncertainty of the kth model, conditional on the kth model being the best = weighted mean square error (MSE), favoring data points for which the kth model's forecast is the best of the ensemble. The first term is the spread among models; the second term is the spread due to model uncertainty.
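
As a concrete illustration of these two formulas, here is a minimal sketch (not the forecast system's actual code) that evaluates the BMA mean and variance for a set of forecasts, assuming the weights wk and variances σk² have already been estimated; the array and function names are illustrative.

```python
import numpy as np

def bma_mean_and_variance(forecasts, weights, sigma2):
    """forecasts: (K, T) array of the K models' forecasts at T time steps.
    weights:   (K,) BMA weights w_k, summing to 1.
    sigma2:    (K,) per-model error variances sigma_k^2.
    Returns the BMA mean and variance, each of length T."""
    forecasts = np.asarray(forecasts, dtype=float)
    weights = np.asarray(weights, dtype=float)
    sigma2 = np.asarray(sigma2, dtype=float)

    # Ensemble mean: E(y | f1..fK) = sum_k w_k f_k
    mean = weights @ forecasts

    # Spread among models: sum_k w_k (f_kt - mean_t)^2
    between = weights @ (forecasts - mean) ** 2
    # Spread due to model uncertainty: sum_k w_k sigma_k^2 (constant in t)
    within = float(weights @ sigma2)
    return mean, between + within
```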

  12. Averaging of Forecasts. [Figure: each model k (k = 1, 2, 3) produces a forecast fk with conditional pdf p(y | fk) and spread σk; the multimodel average is Σk wk fk = w1 f1 + w2 f2 + w3 f3, with combined pdf p(y | f1, …, f3).] The wk and σk reflect uncertainty due to model physics.

  13. Computing Model Weights • The parameters wk and σk depend on each other and are computed via an iterative maximum-likelihood method • Currently: determined from model performance in a retrospective deterministic simulation • Future: determine them from the performance of retrospective probabilistic forecasts • The σk help define a distribution about the multimodel average, reflecting model uncertainty • This method assumes normally distributed data, but discharge tends to be positively skewed; therefore: • Generate monthly wk and σk from log-transformed discharge • Form the multimodel average from log-transformed forecasts • Transform the multimodel average (and its distribution) back to the flow domain
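
A hedged sketch of one way this iterative maximum-likelihood estimation could look, following the slide's definitions (weights as the expected fraction of points where a model is best, σk² as the MSE weighted toward those points) and working on log-transformed discharge. This is an EM-style illustration under those assumptions, not the system's actual implementation, and all names are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def fit_bma_weights(obs, forecasts, n_iter=200, eps=1e-9):
    """obs: (T,) observed monthly discharge; forecasts: (K, T) retrospective simulations.
    Returns (weights, sigma), estimated in log space."""
    y = np.log(np.asarray(obs, dtype=float) + eps)        # log-transform: discharge is positively skewed
    f = np.log(np.asarray(forecasts, dtype=float) + eps)
    K, _ = f.shape
    w = np.full(K, 1.0 / K)                                # start from equal weights
    sigma = np.std(y - f, axis=1)                          # rough starting spreads

    for _ in range(n_iter):
        # E-step: probability that model k gives the best forecast at each time step
        like = np.array([w[k] * norm.pdf(y, loc=f[k], scale=sigma[k]) for k in range(K)])
        z = like / (like.sum(axis=0, keepdims=True) + 1e-300)

        # M-step: w_k = expected fraction of points where model k is best;
        # sigma_k^2 = mean square error weighted toward those points
        w = z.mean(axis=1)
        sigma = np.sqrt((z * (y - f) ** 2).sum(axis=1) / (z.sum(axis=1) + 1e-300))
    return w, sigma
```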

  14. UW West-Wide Forecast Ensemble. Models: • VIC – Variable Infiltration Capacity model (UW) • SAC – Sacramento/SNOW17 model (National Weather Service) • NOAH – NCEP/OSU/Air Force/NWS Hydrology Lab model

  Model | Energy Balance | Snow Bands
  VIC   | Yes            | Yes
  SAC   | No             | Yes
  NOAH  | Yes            | No

  SAC does not compute PET; it uses the PET computed by NOAH. Data: • Calibration parameters from the NLDAS 1/8-degree grid (Mitchell et al., 2004); no further calibration performed • Meteorological inputs: Maurer et al. (2002), 1949-1999

  15. Three Test Basins • Salmon R. (above the Snake R.), drainage area 33,600 km² • Feather R. (above Oroville Reservoir), drainage area 8,600 km² • Colorado R. (above Grand Junction), drainage area 19,900 km²

  16. Deterministic Retrospective, 1956-1995. Training Period: Even Years. [Figure: monthly model weights, monthly mean discharge, and monthly RMSE for the Salmon, Colorado, and Feather basins.]

  17. Deterministic Retrospective, 1956-1995. Validation Period: Odd Years. [Figure: monthly model weights, monthly mean discharge, and monthly RMSE for the Salmon, Colorado, and Feather basins.]

  18. Deterministic Retrospective Results: Individual Models • VIC is best in general • Best at capturing autumn-winter base flow (all basins) → high weights • Best estimate of the snowmelt peak in the Colorado basin • Generally lowest RMSE • SAC is second • Low autumn/winter base flow → low weights • In the Salmon basin, its snowmelt peak arrives early, but its May magnitude is close to observed → high weight • Best estimate of the snowmelt peak in the Feather basin → high weight • NOAH is last • No autumn/winter base flow → low weights • In the Salmon and Colorado basins, its snowmelt peak is 1-2 months early and far too small (high snow sublimation, lack of elevation bands) → low weights • Competitive in the Feather basin (snowmelt is less dominant there) • Generally highest RMSE and lowest weights

  19. Deterministic Retrospective Results: Multimodel Ensemble Prediction • In general, ensemble bias and RMSE are at least as small as those of the best individual model • Notable exceptions: June in the Salmon basin and February in the Feather basin, where SAC beats the ensemble RMSE in both the training and validation sets – how? • Model weights reflect each model's best performance • SAC is consistently good here • VIC is not as consistent, but when it is good it is very good → it receives a weight equal to SAC's • This warrants further investigation

  20. ESP Forecasts • Extended Streamflow Prediction • Start with the initial conditions of the forecast year • Run the model with an ensemble of historical meteorological forcings (climatology) • The distribution of results indicates the uncertainty due to climate • (but it implicitly contains uncertainty due to the model) [Figure: a retrospective simulation is run up to the forecast date, where the state vector is saved; forecasts using climatology then start from the saved ICs, giving the ESP forecast distribution, which typically includes the median and quartile values.]
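
To make the ESP procedure concrete, here is a minimal sketch under the assumptions above; run_hydrologic_model stands in for any one of the hydrologic models and is a hypothetical placeholder, as are the other names.

```python
import numpy as np

def esp_forecast(initial_state, historical_forcings, run_hydrologic_model):
    """initial_state: model state saved at the forecast start date.
    historical_forcings: list of forcing time series drawn from climatology (one per historical year).
    run_hydrologic_model(state, forcing) -> (T,) monthly flows for the forecast period.
    Returns the ensemble members plus the median and quartiles per forecast month."""
    members = np.array([run_hydrologic_model(initial_state, forcing)
                        for forcing in historical_forcings])
    return {
        "members": members,                                   # (n_years, T)
        "median": np.percentile(members, 50, axis=0),
        "quartiles": np.percentile(members, [25, 75], axis=0),
    }
```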

  21. ESP Forecasts and Multimodel • Approach 1: • FIRST form the multimodel average of all models for each forcing • THEN form the ESP distribution of the multimodel averages • Use the weights determined in the training period. [Figure: for each forcing (Forcing 1, Forcing 2, …), the outputs of Model 1, Model 2, and Model 3 are combined into one multimodel distribution; these distributions are then added across forcings to give the ESP distribution of multimodel distributions (the "grand distribution").]
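
A small sketch of Approach 1 under the same assumptions as the earlier snippets: weights come from the training period, and ESP traces are already available per model and per historical forcing. Array names are illustrative.

```python
import numpy as np

def esp_of_multimodel_averages(model_esp_members, weights):
    """model_esp_members: (K, N, T) ESP traces (K models, N historical forcings, T months).
    weights: (K,) BMA weights from the training period.
    Returns the (N, T) ensemble of multimodel-average traces and its 25/50/75th percentiles."""
    weights = np.asarray(weights, dtype=float)
    # Step 1: weighted multimodel average for each forcing, sum_k w_k * trace_k
    averages = np.tensordot(weights, np.asarray(model_esp_members, dtype=float), axes=(0, 0))
    # Step 2: the ESP distribution of those multimodel averages
    quartiles = np.percentile(averages, [25, 50, 75], axis=0)
    return averages, quartiles
```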

  22. ESP Forecasts and Multimodel • Approach 2: • FIRST form the ESP distribution for each model • THEN form the multimodel average of the ESP distributions • Determine wk and σk based on each model's ESP distribution. [Figure: w1·ESP1 + w2·ESP2 + w3·ESP3, with spreads σ1, σ2, σ3, combine into the ESP distribution of multimodel distributions (the "grand distribution").]
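
And a corresponding sketch of Approach 2, forming the grand distribution as a weighted mixture of the per-model ESP distributions. The sampling-based approximation and all names are illustrative assumptions, not the system's actual method.

```python
import numpy as np

def grand_distribution_samples(model_esp_members, weights, sigma, n_samples=1000, rng=None):
    """model_esp_members: (K, N, T) ESP traces; weights: (K,) BMA weights;
    sigma: (K,) per-model spreads from the BMA fit.
    Returns (n_samples, T) samples approximating the grand distribution."""
    rng = np.random.default_rng() if rng is None else rng
    members = np.asarray(model_esp_members, dtype=float)
    K, N, T = members.shape
    weights = np.asarray(weights, dtype=float)
    # Pick a model according to its weight, pick a random historical forcing,
    # and add the model-uncertainty spread sigma_k around that trace.
    k = rng.choice(K, size=n_samples, p=weights / weights.sum())
    n = rng.integers(0, N, size=n_samples)
    noise = rng.normal(0.0, np.asarray(sigma, dtype=float)[k][:, None], size=(n_samples, T))
    return members[k, n, :] + noise
```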

  23. ESP Forecasts and Multimodel • Approach 2 incorporates model forecast performance into the computation of wk, σk • Should be more accurate • Approach 1 is simpler • We will start with approach 1

  24. Example ESP Forecast, 1966-1967. [Figure: monthly (Oct-Aug) distributions for the Salmon, Colorado, and Feather basins, comparing the ESP distribution of multimodel averages with the distributions of the individual models.] The spread of the multimodel averages is similar to the individual models' ESP spreads; it mainly reflects uncertainty in the climatological forcings. The average reflects which model is more reliable, but it does not quantify model uncertainty.

  25. Now add the multimodel "grand distribution". [Figure: the same monthly (Oct-Aug) panels for the Salmon, Colorado, and Feather basins, with the multimodel grand distribution overlaid on the ESP distribution of multimodel averages; grand-distribution error bars extend from the 1st to the 99th percentile.] The grand distribution has a larger spread than the distribution of multimodel averages, due to the addition of model uncertainty.

  26. Aggregate ESPs, odd years 1956-1995. [Figure: monthly (Oct-Aug) mean and 25th-75th percentile range for the Salmon, Colorado, and Feather basins.] The grand distribution's 25th-75th percentile range is larger than that of the distribution of multimodel means alone; this difference reflects the contribution of model uncertainty. The grand distribution's 25th-75th range is also larger than that of most individual models during the spring snowmelt peak (May-June), reflecting the range of the models' snow formulations.

  27. Conclusions • Multimodel averaging can • reduce the bias of a hydrological forecast • but not always – this depends on the weighting scheme • help quantify model uncertainty and/or identify where model uncertainty is important • e.g., the models' snow formulations in snowmelt-driven basins • Future work: • Weights based on forecast performance • The multimodel ensemble's influence on how skill depends on the forecast start date

  28. References
  Wood, A.W., E.P. Maurer, A. Kumar, and D.P. Lettenmaier, 2002: Long-Range Experimental Hydrologic Forecasting for the Eastern U.S. J. Geophysical Research, 107(D20), October.
  Raftery, A.E., F. Balabdaoui, T. Gneiting, and M. Polakowski, 2005: Using Bayesian Model Averaging to Calibrate Forecast Ensembles. Monthly Weather Review, 133, 1155-1174.

  29. Model Averaging: Process Flow
