
Better Data Assimilation through Gradient Descent


Presentation Transcript


  1. London Mathematical Society - EPSRC Durham Symposium Mathematics of Data Assimilation Better Data Assimilation through Gradient Descent Leonard A. Smith, Kevin Judd and Hailiang Du Centre for the Analysis of Time Series London School of Economics

  2. Outline • Perfect model scenario (PMS) • GD method • GD is NOT 4DVAR • Results compared with Ensemble KF • Imperfect model scenario (IPMS) • GD method with stopping criteria • GD is NOT WC4DVAR • Results compared with Ensemble KF • Conclusion & Further discussion

  3. Experiment Design (PMS)

  4. Ensemble techniques • Generate the ensemble directly, e.g. Particle Filter, Ensemble Kalman Filter • Generate the ensemble from perturbations of a reference trajectory, e.g. SVD on 4DVAR. Gradient Descent (GD) Method: K. Judd & L. A. Smith (2001), Indistinguishable States I: The Perfect Model Scenario, Physica D 151: 125-141.

  5. Gradient Descent (Shadowing Filter)

  6. Gradient Descent (Shadowing Filter)

  7. Gradient Descent (Shadowing Filter)

  8. Gradient Descent (Shadowing Filter)

  9. Gradient Descent (Shadowing Filter)
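
Slides 5-9 present the Gradient Descent (shadowing filter) method graphically. The sketch below illustrates the underlying idea in the spirit of Judd & Smith (2001): initialise a pseudo-orbit at the observation sequence and descend the model mismatch cost so that the pseudo-orbit approaches a model trajectory. The Ikeda map step, the parameter value 0.83, the finite-difference gradient, and the step size and iteration count are illustrative assumptions, not values taken from the talk.

```python
import numpy as np

def ikeda(z, a=0.83):
    """One step of the Ikeda map (used here only as an example model)."""
    x, y = z
    t = 0.4 - 6.0 / (1.0 + x * x + y * y)
    return np.array([1.0 + a * (x * np.cos(t) - y * np.sin(t)),
                     a * (x * np.sin(t) + y * np.cos(t))])

def mismatch_cost(u, f):
    """GD (shadowing filter) cost: L(u) = sum_i ||u_{i+1} - f(u_i)||^2."""
    steps = np.array([f(ui) for ui in u[:-1]])
    return float(np.sum((u[1:] - steps) ** 2))

def gd_shadowing(obs, f, n_iter=500, lr=0.05, eps=1e-6):
    """Gradient descent on the pseudo-orbit, initialised at the observations.

    The gradient is approximated by finite differences for brevity; an
    analytic Jacobian of f would normally be used instead.
    """
    u = np.array(obs, dtype=float)            # pseudo-orbit, shape (n+1, d)
    for _ in range(n_iter):
        base = mismatch_cost(u, f)
        grad = np.zeros_like(u)
        for i in range(u.shape[0]):
            for j in range(u.shape[1]):
                pert = u.copy()
                pert[i, j] += eps
                grad[i, j] = (mismatch_cost(pert, f) - base) / eps
        u -= lr * grad                         # move towards a model trajectory
    return u

# usage sketch: noisy observations of a short Ikeda trajectory
rng = np.random.default_rng(0)
truth = [np.array([0.1, 0.1])]
for _ in range(20):
    truth.append(ikeda(truth[-1]))
obs = np.array(truth) + 0.05 * rng.standard_normal((21, 2))
pseudo_orbit = gd_shadowing(obs, ikeda)
```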

  10. GD is NOT 4DVAR • Difference in cost function • Noise model assumption: the 4DVAR cost function depends on the observational noise model, whereas the GD cost function does not • Assimilation window, the 4DVAR dilemma: difficulty in locating the global minimum with a long assimilation window; loss of information from the model dynamics and the observations without a long window
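
To make the cost-function contrast concrete, here is a sketch in standard notation (the notation and the strong-constraint 4DVAR form are mine, not the slide's): 4DVAR penalises departures from the background and the observations weighted by the assumed noise covariances, while the GD cost measures only the dynamical mismatch of the pseudo-orbit.

```latex
% Strong-constraint 4DVAR: depends on the noise model through the covariances
% B and R, with x_{i+1} = F(x_i) enforced exactly.
J(x_0) = (x_0 - x_b)^\top B^{-1} (x_0 - x_b)
       + \sum_{i=0}^{n} \bigl(H(x_i) - s_i\bigr)^\top R^{-1} \bigl(H(x_i) - s_i\bigr)

% GD (shadowing filter) cost: a function of the pseudo-orbit u = (u_0,\dots,u_n)
% alone, with no reference to the observational noise model.
L(u) = \sum_{i=0}^{n-1} \lVert u_{i+1} - F(u_i) \rVert^{2}
```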

  11. Methodology

  12. Form ensemble: the GD result provides the reference trajectory, shown together with the observations at t = 0.

  13. Form ensemble: sample the local state space by perturbing the observations and running GD, yielding candidate trajectories at t = 0.

  14. Form ensemble: draw ensemble members from the candidate trajectories according to their likelihood, giving the ensemble trajectories at t = 0.

  15. Form ensemble: the resulting ensemble trajectories together with the observations at t = 0.
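
Slides 12-15 sketch the ensemble-formation procedure graphically. The code below is a minimal sketch of that loop: perturb the observations, re-run GD to obtain candidate trajectories that sample the local state space, then draw ensemble members according to their likelihood. The Gaussian likelihood for the implied noise, the candidate and member counts, and the resampling step are illustrative assumptions; `run_gd` stands for any routine such as the one sketched after slide 9.

```python
import numpy as np

def form_ensemble(obs, run_gd, noise_std, n_candidates=512, n_members=64, seed=0):
    """Form an ensemble around the GD result (sketch of slides 12-15)."""
    rng = np.random.default_rng(seed)
    obs = np.asarray(obs, dtype=float)
    candidates, log_like = [], []
    for _ in range(n_candidates):
        perturbed = obs + noise_std * rng.standard_normal(obs.shape)
        cand = run_gd(perturbed)                  # candidate pseudo-orbit
        implied_noise = obs - cand                # residual w.r.t. the observations
        log_like.append(-0.5 * np.sum((implied_noise / noise_std) ** 2))
        candidates.append(cand)
    log_like = np.asarray(log_like)
    weights = np.exp(log_like - log_like.max())   # likelihood of each candidate
    weights /= weights.sum()
    picks = rng.choice(n_candidates, size=n_members, replace=True, p=weights)
    return [candidates[i] for i in picks]
```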

  16. Ensemble members in state space Compare the ensemble members generated by the Gradient Descent method with those from the Ensemble Adjustment Kalman Filter in state space. A low-dimensional example is used for visualization; higher-dimensional results follow.

  17. Ikeda Map, Std of observational noise 0.05, 512 ensemble members

  18. Evaluate the ensemble via Ignorance The ensemble is converted into a forecast density p(·). The Ignorance score is defined by $S(p, Y) = -\log_2 p(Y)$, where Y is the verification. Ikeda Map and Lorenz96 System; the noise model is N(0, 0.4) and N(0, 0.05) respectively. Lower and Upper are the 90 percent bootstrap resampling bounds of the Ignorance score.
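
A minimal sketch of how the Ignorance score might be evaluated from an ensemble: the ensemble is converted into a density by Gaussian kernel dressing. The kernel width and the bootstrap routine below are assumptions for illustration; the slide does not specify how p(·) is constructed.

```python
import numpy as np

def ignorance(ensemble, verification, kernel_std):
    """Ignorance score -log2 p(Y) for a scalar verification Y,
    with p(.) obtained by Gaussian kernel dressing of the ensemble."""
    ensemble = np.asarray(ensemble, dtype=float)
    z = (verification - ensemble) / kernel_std
    density = np.mean(np.exp(-0.5 * z ** 2) / (np.sqrt(2.0 * np.pi) * kernel_std))
    return -np.log2(density)

def bootstrap_bounds(scores, n_boot=1000, seed=0):
    """90 percent bootstrap resampling bounds of the mean Ignorance score."""
    rng = np.random.default_rng(seed)
    scores = np.asarray(scores, dtype=float)
    means = [rng.choice(scores, size=scores.size, replace=True).mean()
             for _ in range(n_boot)]
    return np.percentile(means, [5.0, 95.0])
```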

  19. Imperfect Model Scenario

  20. Toy model-system pairs Ikeda system: $x_{n+1} = 1 + a\,(x_n \cos t_n - y_n \sin t_n)$, $y_{n+1} = a\,(x_n \sin t_n + y_n \cos t_n)$, with $t_n = 0.4 - 6/(1 + x_n^2 + y_n^2)$. The imperfect model is obtained by truncating the polynomial (Taylor) expansions of the trigonometric terms.
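
A sketch of an Ikeda system-model pair of the kind described here: the "system" is the full Ikeda map, and the "imperfect model" replaces the trigonometric terms with truncated Taylor polynomials. The parameter value and the truncation order are assumptions; the slide's exact truncation is not transcribed.

```python
import math
import numpy as np

A = 0.83  # Ikeda map parameter (an assumed value, not taken from the talk)

def ikeda_system(z):
    """'System': one iteration of the full Ikeda map."""
    x, y = z
    t = 0.4 - 6.0 / (1.0 + x * x + y * y)
    return np.array([1.0 + A * (x * math.cos(t) - y * math.sin(t)),
                     A * (x * math.sin(t) + y * math.cos(t))])

def ikeda_model(z, order=3):
    """'Imperfect model': cos/sin replaced by Taylor polynomials truncated
    after `order` terms (the truncation order is an illustrative assumption)."""
    x, y = z
    t = 0.4 - 6.0 / (1.0 + x * x + y * y)
    cos_t = sum((-1.0) ** k * t ** (2 * k) / math.factorial(2 * k)
                for k in range(order + 1))
    sin_t = sum((-1.0) ** k * t ** (2 * k + 1) / math.factorial(2 * k + 1)
                for k in range(order + 1))
    return np.array([1.0 + A * (x * cos_t - y * sin_t),
                     A * (x * sin_t + y * cos_t)])
```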

  21. Toy model-system pairs Lorenz96 system: Imperfect model:
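
The Lorenz96 equations on this slide were not transcribed. As a hedged illustration only, a commonly used pairing takes the two-scale Lorenz96 equations as the "system" and the one-scale equations as the "imperfect model"; the coupling constants below are conventional choices, not values from the talk.

```python
import numpy as np

def lorenz96_model(x, F=8.0):
    """One-scale Lorenz96 (candidate 'imperfect model'):
    dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def lorenz96_system(x, y, F=8.0, h=1.0, b=10.0, c=10.0):
    """Two-scale Lorenz96 (candidate 'system'): slow variables x coupled to
    fast variables y, with J = y.size // x.size fast variables per slow one."""
    J = y.size // x.size
    coupling = h * c / b
    dx = ((np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F
          - coupling * y.reshape(x.size, J).sum(axis=1))
    dy = (-c * b * (np.roll(y, -2) - np.roll(y, 1)) * np.roll(y, -1)
          - c * y + coupling * np.repeat(x, J))
    return dx, dy
```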

  22. Insight of Gradient Descent Define the implied noise to be $s_i - u_i$, the difference between the observations and the pseudo-orbit, and the imperfection error to be $u_{i+1} - F(u_i)$, the model mismatch along the pseudo-orbit.
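
Given these definitions, the quantities tracked on slide 26 can be computed directly from the pseudo-orbit; a small sketch follows (the function and variable names are mine, not the talk's).

```python
import numpy as np

def pseudo_orbit_statistics(pseudo_orbit, obs, model_step, truth=None):
    """RMS statistics per GD iteration: implied noise (obs - u), imperfection
    error (u_{i+1} - F(u_i)), and, in toy experiments only, distance to truth."""
    u = np.asarray(pseudo_orbit, dtype=float)
    implied_noise = np.asarray(obs, dtype=float) - u
    steps = np.array([model_step(ui) for ui in u[:-1]])
    imperfection_error = u[1:] - steps
    stats = {
        "implied_noise_rms": float(np.sqrt(np.mean(implied_noise ** 2))),
        "imperfection_error_rms": float(np.sqrt(np.mean(imperfection_error ** 2))),
    }
    if truth is not None:
        stats["distance_from_truth_rms"] = float(
            np.sqrt(np.mean((u - np.asarray(truth, dtype=float)) ** 2)))
    return stats
```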

  23. Insight of Gradient Descent

  24. Insight of Gradient Descent

  25. Insight of Gradient Descent

  26. Implied noise, imperfection error, and distance from the "truth": statistics of the pseudo-orbit as a function of the number of Gradient Descent iterations, for the higher-dimensional Lorenz96 system-model pair experiment (left) and the low-dimensional Ikeda system-model pair experiment (right).

  27. GD with stopping criteria • GD minimization with "intermediate" runs produces more consistent pseudo-orbits • Criteria need to be defined in advance to decide when to stop, or how to tune the number of iterations • The stopping criterion can be built by testing the consistency between the implied noise and the noise model, or by minimizing another relevant utility function
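
One way the consistency test mentioned above could be implemented, sketched under the assumption of an N(0, noise_std^2) observational noise model: stop the descent once the implied noise reaches the level expected under that noise model. Here `gd_step` stands for a single gradient-descent update of the pseudo-orbit; both names are mine.

```python
import numpy as np

def run_gd_with_stopping(obs, gd_step, noise_std, max_iter=10_000):
    """Run GD but stop at an 'intermediate' pseudo-orbit (cf. slide 27).

    Illustrative criterion: stop once the RMS of the implied noise
    (obs - pseudo-orbit) reaches the level expected under N(0, noise_std^2),
    i.e. once the implied noise is consistent with the noise model."""
    obs = np.asarray(obs, dtype=float)
    u = obs.copy()                                # pseudo-orbit starts at the obs
    for k in range(max_iter):
        u = gd_step(u)                            # one gradient-descent update
        implied_rms = float(np.sqrt(np.mean((obs - u) ** 2)))
        if implied_rms >= noise_std:              # consistency reached: stop
            return u, k + 1
    return u, max_iter
```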

  28. Imperfection error vs model error, observational noise level 0.01. The model error itself is not accessible in practice; the imperfection error provides an accessible estimate of it.

  29. Imperfection error vs model error, observational noise levels 0.002 and 0.05.

  30. GD vs WC4DVAR WC4DVAR requires a model error assumption; GD provides model error estimates.
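
For reference, the standard weak-constraint 4DVAR cost (notation mine, background term omitted) makes the contrast explicit: WC4DVAR must assume a model error covariance Q up front, whereas GD needs no such assumption and the residual imperfection error at the stopping point provides an estimate of the model error.

```latex
% Weak-constraint 4DVAR: model error terms \eta_i are penalised through an
% assumed covariance Q, so a model error assumption is required in advance.
J(x_0,\dots,x_n) = \sum_{i=0}^{n} \bigl(H(x_i) - s_i\bigr)^\top R^{-1} \bigl(H(x_i) - s_i\bigr)
                 + \sum_{i=0}^{n-1} \eta_i^\top Q^{-1} \eta_i ,
\qquad \eta_i = x_{i+1} - F(x_i)

% GD minimises \sum_i \lVert u_{i+1} - F(u_i) \rVert^2 with no Q; the residual
% u_{i+1} - F(u_i) at the stopping point is the model error estimate.
```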

  31. Forming ensemble • Apply the GD method to perturbed observations. • Apply the GD method to the perturbed pseudo-orbit. • Apply the GD method to the results of other data assimilation methods. Particle filter?

  32. Imperfect model experiment: Ikeda system-model pair, Std of observational noise 0.05, 1024 EnKF ensemble members, 64 GD ensemble members

  33. Evaluate the ensemble via Ignorance The Ignorance score is defined by $S(p, Y) = -\log_2 p(Y)$, where Y is the verification. Ikeda system-model pair and Lorenz96 system-model pair; the noise model is N(0, 0.5) and N(0, 0.05) respectively. Lower and Upper are the 90 percent bootstrap resampling bounds of the Ignorance score.

  34. Conclusion • The methodology of applying GD for data assimilation in PMS is demonstrated and shown to outperform the 4DVAR and Ensemble Kalman filter methods • Outside PMS, the methodology of applying GD for data assimilation with a stopping criterion is introduced and shown to outperform the WC4DVAR and Ensemble Kalman filter methods. • Applying the GD method with a stopping criterion also produces an informative estimate of the model error. No data assimilation without dynamics.

  35. Thank you! H.L.Du@lse.ac.uk Centre for the Analysis of Time Series: http://www2.lse.ac.uk/CATS/home.aspx
