Time Series Forecasting: The Case for the Single Source of Error State Space Model
J. Keith Ord, Georgetown University
Ralph D. Snyder, Monash University
Anne B. Koehler, Miami University
Rob J. Hyndman, Monash University
Mark Leeds, The Kellogg Group
http://www.buseco.monash.edu.au/depts/ebs/pubs/wpapers/2005
Outline of Talk
• Background
• General SSOE model
• Linear and nonlinear examples
• Estimation and model selection
• General linear state space model
• MSOE and SSOE forms
• Parameter spaces
• Convergence
• Equivalent models
• Explanatory variables
• ARCH and GARCH models
• Advantages of SSOE
Review Paper
A New Look at Models for Exponential Smoothing (2001). JRSS, Series D (The Statistician), 50, 147–159.
Chris Chatfield, Anne Koehler, Keith Ord & Ralph Snyder
Framework Paper
A State Space Framework for Automatic Forecasting Using Exponential Smoothing (2002). International Journal of Forecasting, 18, 439–454.
Rob Hyndman, Anne Koehler, Ralph Snyder & Simone Grose
Some background
• The Kalman filter: Kalman (1960), Kalman & Bucy (1961)
• Engineering: Jazwinski (1970), Anderson & Moore (1979)
• Regression approach: Duncan & Horn (JASA, 1972)
• Bayesian forecasting & the dynamic linear model: Harrison & Stevens (1976, JRSS B); West & Harrison (1997)
• Structural models: Harvey (1989)
• State space methods: Durbin & Koopman (2001)
Single Source of Error (SSOE) State Space Model
• Developed by Snyder (1985), among others
• Also known as the innovations representation
• Any Gaussian time series has an innovations representation [SSOE looks restrictive, but it is not!]
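As a concrete illustration, here is a minimal sketch of the innovations (SSOE) form for the local level model, the simplest member of the class, assuming Gaussian errors. The function name and parameterisation are illustrative, not taken from the paper:

```python
import numpy as np

def simulate_ssoe_local_level(n, alpha, sigma, level0=0.0, seed=0):
    """Simulate the SSOE local level model:
        y_t = l_{t-1} + e_t,    l_t = l_{t-1} + alpha * e_t,
    where one innovation e_t drives both equations (the single source of error)."""
    rng = np.random.default_rng(seed)
    e = rng.normal(0.0, sigma, size=n)   # the single source of error
    y = np.empty(n)
    level = level0
    for t in range(n):
        y[t] = level + e[t]              # observation equation
        level = level + alpha * e[t]     # state (transition) equation
    return y, e
```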
Why a structural model?
• Structural models let us formulate the model in terms of unobserved components and decompose the series into those components
• Structural models let us build schemes with nonlinear error structures, yet familiar forecast functions
Reduced ARIMA Form
For the local level model, the reduced form is ARIMA(0,1,1):
y_t = y_{t-1} + ε_t − (1 − α) ε_{t-1}
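The reduction can be checked numerically: differencing a simulated SSOE local level series reproduces the MA(1) term exactly, innovation for innovation. A self-contained sketch (parameter values are arbitrary):

```python
import numpy as np

# Check the reduced form: Delta y_t = e_t - (1 - alpha) * e_{t-1}
# for the SSOE local level model (recursion inlined for self-containment).
alpha, sigma, n = 0.3, 1.0, 500
rng = np.random.default_rng(1)
e = rng.normal(0.0, sigma, size=n)
y = np.empty(n)
level = 0.0
for t in range(n):
    y[t] = level + e[t]      # observation equation
    level += alpha * e[t]    # state equation

dy = np.diff(y)                       # Delta y_t
ma1 = e[1:] - (1.0 - alpha) * e[:-1]  # MA(1) in the same innovations
assert np.allclose(dy, ma1)           # reduced ARIMA(0,1,1), theta = 1 - alpha
```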
Model Selection
• Use information criteria of the form −2 log L + penalty, where p is the number of free states plus the number of parameters (see the sketch below)
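A sketch of how the AIC might be computed from the one-step innovations, assuming the Gaussian likelihood with the innovation variance concentrated out; the function name is illustrative:

```python
import numpy as np

def aic_ssoe(innovations, p):
    """AIC for a Gaussian SSOE model from its one-step innovations.

    p = number of free (estimated) seed states plus the number of
    smoothing/model parameters, as defined on the slide."""
    e = np.asarray(innovations, dtype=float)
    n = e.size
    sigma2_hat = np.mean(e**2)  # MLE of the innovation variance
    neg2loglik = n * (np.log(2 * np.pi * sigma2_hat) + 1.0)
    return neg2loglik + 2 * p
```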
Parameter Space 1
• Both correspond to the same ARIMA model in the steady state, BUT the parameter spaces differ
• SSOE has the same space as ARIMA
• MSOE space is a subset of the ARIMA space
• Example: for ARIMA(0,1,1), θ = 1 − α
• MSOE has 0 < α < 1
• SSOE has 0 < α < 2, equivalent to −1 < θ < 1
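Written out, the mapping between the two parameterisations and the resulting spaces (using the MA(1) sign convention implied by the reduced form above):

```latex
\Delta y_t = \varepsilon_t - \theta\,\varepsilon_{t-1}, \qquad \theta = 1 - \alpha,
\\[4pt]
\text{MSOE: } 0 < \alpha < 1 \iff 0 < \theta < 1, \qquad
\text{SSOE: } 0 < \alpha < 2 \iff -1 < \theta < 1
\;\text{(the full invertibility region).}
```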
Parameter Space 2
• In general, ρ = 1 (SSOE) yields the same parameter space as ARIMA; ρ = 0 (MSOE) yields a smaller space
• No other value of ρ yields a larger parameter space than ρ = 1 [Theorems 5.1 and 5.2]
• Restricted parameter spaces may lead to poor model choices [e.g. Morley et al., 2002]
Convergence 2
• The practical import of this result is that, provided t is not too small, we can approximate the state variable by its estimate
• That is, heuristic forecasting procedures, such as exponential smoothing, that generate forecast updates in the same form as the state equations are validated (see the sketch below)
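For instance, simple exponential smoothing written in the state-equation form the slide refers to; a sketch with illustrative names:

```python
import numpy as np

def simple_exponential_smoothing(y, alpha, level0):
    """Simple exponential smoothing in SSOE state-equation form:
    the update l_t = l_{t-1} + alpha * e_t with e_t = y_t - l_{t-1}
    is exactly the heuristic forecast update the convergence result validates."""
    level = level0
    fitted = np.empty(len(y))
    for t, obs in enumerate(y):
        fitted[t] = level   # one-step-ahead forecast of y_t
        e = obs - level     # observed innovation
        level += alpha * e  # state update, same form as the state equation
    return fitted, level
```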
Equivalence
• Equivalent linear state space models (West & Harrison) give rise to the same forecast distribution
• For the MSOE model, the equivalence transformation H of the state vector typically produces a non-diagonal covariance matrix
• For the SSOE model, the equivalence transformation H preserves the perfect correlation of the state vectors
Advantages of SSOE Models
• The mapping from model to forecasting equations is direct and easy to see
• ML estimation can be applied directly, without need for the Kalman updating procedure (see the sketch below)
• Nonlinear models are readily incorporated into the framework
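A minimal sketch of direct ML estimation for the Gaussian local level SSOE model: the likelihood is evaluated by running the innovation recursion, with no Kalman covariance update needed because the state is known exactly given the innovations. SciPy's Nelder-Mead optimiser is used purely for illustration:

```python
import numpy as np
from scipy.optimize import minimize

def neg2loglik(params, y):
    """-2 log-likelihood of the Gaussian SSOE local level model,
    computed by running the innovation recursion directly."""
    alpha, level0 = params
    level, e = level0, np.empty(len(y))
    for t, obs in enumerate(y):
        e[t] = obs - level     # one-step innovation
        level += alpha * e[t]  # state update
    n = len(y)
    sigma2_hat = np.mean(e**2)  # variance concentrated out of the likelihood
    return n * (np.log(2 * np.pi * sigma2_hat) + 1.0)

# Illustrative fit (y is any univariate series as a NumPy array):
# result = minimize(neg2loglik, x0=[0.5, y[0]], args=(y,), method="Nelder-Mead")
# alpha_hat, level0_hat = result.x
```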
Further Advantages of SSOE Models
• Akaike and Schwarz information criteria can be used to choose models, including choices among models with different numbers of unit roots in the reduced form
• Largest parameter space among state space models
• In the Kalman filter, the covariance matrix of the state vector converges to 0