
State Space Models



Presentation Transcript


  1. State Space Models

  2. Let {x_t : t ∈ T} and {y_t : t ∈ T} denote two vector-valued time series that satisfy the system of equations:
y_t = A_t x_t + v_t (the observation equation)
x_t = B_t x_{t-1} + u_t (the state equation)
The time series {y_t : t ∈ T} is said to have a state-space representation.

  3. Note: {u_t : t ∈ T} and {v_t : t ∈ T} denote two vector-valued time series satisfying:
• E(u_t) = E(v_t) = 0,
• E(u_t u_s′) = E(v_t v_s′) = 0 if t ≠ s,
• E(u_t u_t′) = Σ_u and E(v_t v_t′) = Σ_v,
• E(u_t v_s′) = E(v_t u_s′) = 0 for all t and s.
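
The following is a minimal simulation sketch of the two equations above, assuming a scalar state with constant coefficients A_t = 1.0, B_t = 0.9 and Gaussian white noise; all numerical values are illustrative only, not taken from the slides.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative values only: a scalar state-space model with constant
# coefficients A_t = A, B_t = B and Gaussian white noise u_t, v_t.
A, B = 1.0, 0.9
sd_u, sd_v = 0.5, 1.0           # standard deviations of u_t and v_t

T = 100
x = np.zeros(T)
y = np.zeros(T)
x_prev = 0.0                    # initial state x_0

for t in range(T):
    u = rng.normal(0.0, sd_u)
    v = rng.normal(0.0, sd_v)
    x[t] = B * x_prev + u       # state equation:       x_t = B_t x_{t-1} + u_t
    y[t] = A * x[t] + v         # observation equation: y_t = A_t x_t + v_t
    x_prev = x[t]
```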

  4. Example: One might be tracking an object with several radar stations. The process {x_t : t ∈ T} gives the position of the object at time t. The process {y_t : t ∈ T} denotes the observations made at time t by the several radar stations. As in the Hidden Markov Model, we will be interested in determining the position of the object, {x_t : t ∈ T}, from the observations, {y_t : t ∈ T}, made by the several radar stations.

  5. Example: Many of the models we have considered to date can be thought of as state-space models. Autoregressive model of order p: x_t = β_1 x_{t-1} + β_2 x_{t-2} + … + β_p x_{t-p} + u_t.

  6. Define the state vector X_t = (x_t, x_{t-1}, …, x_{t-p+1})′. Then the observation equation is y_t = (1, 0, …, 0) X_t and the state equation is X_t = B X_{t-1} + U_t, where U_t = (u_t, 0, …, 0)′ and B is the companion matrix with first row (β_1, β_2, …, β_p), ones on the subdiagonal, and zeros elsewhere.
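
A small sketch of this companion-form construction, assuming the state vector stacks the p most recent values as above; the function name ar_state_space and the example coefficients are illustrative, not from the slides.

```python
import numpy as np

def ar_state_space(betas):
    """Companion-form state-space matrices for an AR(p) model
    x_t = beta_1 x_{t-1} + ... + beta_p x_{t-p} + u_t (illustrative sketch)."""
    p = len(betas)
    B = np.zeros((p, p))
    B[0, :] = betas                 # first row carries the AR coefficients
    B[1:, :-1] = np.eye(p - 1)      # shift the remaining lags down one slot
    A = np.zeros((1, p))
    A[0, 0] = 1.0                   # observation picks off the first component
    return A, B

# Example: AR(2) with beta_1 = 0.6, beta_2 = 0.3 (made-up values)
A, B = ar_state_space([0.6, 0.3])
```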

  7. Hidden Markov Model: Assume that there are m states, and that the observations Y_t are discrete and take on n possible values. Suppose that the m states are denoted by the unit vectors e_1, e_2, …, e_m (the ith state is the m-vector with a 1 in position i and 0s elsewhere).

  8. Suppose that the n possible observations taken at each state are likewise denoted by the n unit vectors of length n.

  9. Let and Note

  10. Let So that The State Equation with

  11. Also Hence and where diag(v) = the diagonal matrix with the components of the vector v along the diagonal

  12. Since then and Thus

  13. We have defined Hence Let

  14. Then The Observation Equation with and

  15. Hence, with these definitions, the state sequence of a Hidden Markov Model satisfies a state equation of the form x_t = B x_{t-1} + u_t (for a suitable matrix B and noise u_t), and the observation sequence satisfies an observation equation of the form y_t = A x_t + v_t (for a suitable matrix A and noise v_t).
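
A hedged sketch of the indicator-vector formulation described in slides 7–15, assuming the convention E[x_t | x_{t-1}] = P′x_{t-1} and E[y_t | x_t] = Q′x_t for a transition matrix P and an emission matrix Q; the matrices, names, and numbers below are illustrative, since the slides' own matrices did not survive the transcript.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative HMM with m = 3 states and n = 4 possible observations.
# States are the unit vectors e_1, ..., e_m; P[i, j] is the probability of
# moving from state i to state j, Q[i, k] the probability of emitting
# observation k from state i (all values made up for demonstration).
m, n = 3, 4
P = np.array([[0.8, 0.1, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])
Q = rng.dirichlet(np.ones(n), size=m)      # each row sums to 1

def one_step(x_prev):
    """Given the indicator vector x_{t-1}, draw x_t and y_t and the noises."""
    i = int(np.argmax(x_prev))                 # current state index
    j = rng.choice(m, p=P[i])                  # next state
    k = rng.choice(n, p=Q[j])                  # observation
    x_next = np.eye(m)[j]
    y = np.eye(n)[k]
    # State equation:       x_t = P' x_{t-1} + u_t,  E[u_t | x_{t-1}] = 0
    # Observation equation: y_t = Q' x_t     + v_t,  E[v_t | x_t]     = 0
    u = x_next - P.T @ x_prev
    v = y - Q.T @ x_next
    return x_next, y, u, v

x0 = np.eye(m)[0]
x1, y1, u1, v1 = one_step(x0)
```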

  16. Kalman Filtering

  17. We are now interested in determining the state vector x_t in terms of some or all of the observation vectors y_1, y_2, y_3, …, y_T. We will consider finding the "best" linear predictor. We can include a constant term if, in addition, one of the observations (y_0 say) is the vector of 1's. We will consider estimation of x_t in terms of:
• y_1, y_2, y_3, …, y_{t-1} (the prediction problem)
• y_1, y_2, y_3, …, y_t (the filtering problem)
• y_1, y_2, y_3, …, y_T (t < T, the smoothing problem)

  18. For any vector x, define its best linear predictor based on y_0, y_1, y_2, …, y_s componentwise: the ith component is the best linear predictor of x(i), the ith component of x, based on y_0, y_1, y_2, …, y_s, i.e. the linear function of y_0, y_1, y_2, …, y_s that minimizes the mean squared prediction error.

  19. Remark: The best predictor is the unique vector of the form C_0 y_0 + C_1 y_1 + C_2 y_2 + … + C_s y_s, where C_0, C_1, C_2, …, C_s are selected so that the prediction error is uncorrelated with each of y_0, y_1, …, y_s.

  20. Remark: If x, y_1, y_2, …, y_s are normally distributed, then the best linear predictor of x coincides with the conditional expectation E[x | y_1, y_2, …, y_s].

  21. Remark: Let u and v be two random vectors. Then û is the optimal linear predictor of u based on v if the prediction error u − û has mean zero and is uncorrelated with v, i.e. E[(u − û) v′] = 0.
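
A short numerical sketch of the orthogonality characterization above: the best linear predictor built from sample means and covariances leaves a prediction error that is (numerically) uncorrelated with every component of y. The joint distribution used below is made up purely for demonstration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Made-up joint distribution: x is a linear function of y plus noise.
n = 10_000
y = rng.normal(size=(n, 2))
x = 1.0 + y @ np.array([0.5, -0.3]) + rng.normal(scale=0.2, size=n)

# Best linear predictor x_hat = mu_x + C_xy' C_yy^{-1} (y - mu_y).
mu_x, mu_y = x.mean(), y.mean(axis=0)
C_xy = np.cov(x, y.T)[0, 1:]            # Cov(x, y), shape (2,)
C_yy = np.cov(y.T)                      # Cov(y, y), shape (2, 2)
coef = np.linalg.solve(C_yy, C_xy)      # C_yy^{-1} C_xy
x_hat = mu_x + (y - mu_y) @ coef

# Orthogonality: the error is uncorrelated with each component of y.
err = x - x_hat
print(np.round([np.mean(err * y[:, 0]), np.mean(err * y[:, 1])], 6))
```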

  22. State Space Models

  23. Let {x_t : t ∈ T} and {y_t : t ∈ T} denote two vector-valued time series that satisfy the system of equations:
y_t = A_t x_t + v_t (the observation equation)
x_t = B_t x_{t-1} + u_t (the state equation)
The time series {y_t : t ∈ T} is said to have a state-space representation.

  24. Note: {u_t : t ∈ T} and {v_t : t ∈ T} denote two vector-valued time series satisfying:
• E(u_t) = E(v_t) = 0,
• E(u_t u_s′) = E(v_t v_s′) = 0 if t ≠ s,
• E(u_t u_t′) = Σ_u and E(v_t v_t′) = Σ_v,
• E(u_t v_s′) = E(v_t u_s′) = 0 for all t and s.

  25. Kalman Filtering: Let {x_t : t ∈ T} and {y_t : t ∈ T} denote two vector-valued time series that satisfy the system of equations:
y_t = A_t x_t + v_t
x_t = B x_{t-1} + u_t
Let x̂_{t|s} denote the best linear predictor of x_t based on y_0, y_1, …, y_s, and let P_{t|s} = E[(x_t − x̂_{t|s})(x_t − x̂_{t|s})′] denote the corresponding error covariance matrix.

  26. One also assumes that the initial vector x_0 has mean μ and covariance matrix Σ, and that x_0 is uncorrelated with u_t and v_t for all t.

  27. The covariance matrices are updated with

  28. Summary: The Kalman equations (1)–(5).
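
The five equations themselves are not recoverable from this transcript; the sketch below gives a standard textbook form of the prediction and update steps, consistent with the model y_t = A_t x_t + v_t, x_t = B_t x_{t-1} + u_t and the notation x̂_{t|s}, P_{t|s} introduced above, and may differ in detail from the slide's numbering (1)–(5).

```python
import numpy as np

def kalman_step(x_prev, P_prev, y, A, B, Su, Sv):
    """One forward step of the Kalman recursions for
    y_t = A x_t + v_t,  x_t = B x_{t-1} + u_t.
    A standard textbook form, sketched here because the slide's equations
    did not survive; x_prev, P_prev are x_{t-1|t-1} and P_{t-1|t-1}."""
    # Prediction: x_{t|t-1} and its error covariance P_{t|t-1}
    x_pred = B @ x_prev
    P_pred = B @ P_prev @ B.T + Su
    # Innovation e_t = y_t - A x_{t|t-1} and its covariance
    e = y - A @ x_pred
    S = A @ P_pred @ A.T + Sv
    # Kalman gain K_t = P_{t|t-1} A' S^{-1}
    K = P_pred @ A.T @ np.linalg.inv(S)
    # Update: x_{t|t} and P_{t|t}
    x_filt = x_pred + K @ e
    P_filt = (np.eye(len(x_prev)) - K @ A) @ P_pred
    return x_pred, P_pred, x_filt, P_filt
```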

  29. Proof: Now hence proving (4) Note

  30. Let Let Given y0, y1, y2, … , yt-1 the best linear predictor of dt using et is:

  31. Hence (5) where and Now

  32. Also hence (2)

  33. Thus (4) (5) where (2) Also

  34. Hence (3) holds. The proof of (1) is left as an exercise.

  35. Example: Suppose we have an AR(2) time series x_t = β_1 x_{t-1} + β_2 x_{t-2} + u_t. What we observe is the time series y_t = x_t + v_t. Here {u_t : t ∈ T} and {v_t : t ∈ T} are white noise time series with standard deviations σ_u and σ_v.

  36. This model can be expressed as a state-space model by defining the state vector X_t = (x_t, x_{t-1})′; then
y_t = (1, 0) X_t + v_t (the observation equation)
X_t = B X_{t-1} + U_t (the state equation)
where B is the 2×2 matrix with first row (β_1, β_2) and second row (1, 0), and U_t = (u_t, 0)′.

  37. The equation: can be written Note:
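
A sketch of the model matrices for this example; the coefficient values β_1, β_2 and the noise levels σ_u, σ_v are made up, since the slide's numbers are not in the transcript.

```python
import numpy as np

# Illustrative AR(2)-plus-noise model in state-space form.
beta1, beta2 = 0.6, 0.3
sd_u, sd_v = 1.0, 0.5

B = np.array([[beta1, beta2],       # state: (x_t, x_{t-1})' = B (x_{t-1}, x_{t-2})' + (u_t, 0)'
              [1.0,   0.0]])
A = np.array([[1.0, 0.0]])          # observation: y_t = x_t + v_t picks off the first component
Su = np.array([[sd_u ** 2, 0.0],    # covariance of (u_t, 0)'
               [0.0,       0.0]])
Sv = np.array([[sd_v ** 2]])        # Var(v_t)
```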

  38. The Kalman equations (1)–(5), applied to this example.

  39. The Kalman equations 1.

  40. 2.

  41. 3.

  42. 4.

  43. 5.
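
An end-to-end illustration of the forward recursions on simulated data from this example; the numerical values are again illustrative, and the recursion is the standard textbook form rather than the slide's exact equations (1)–(5).

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative coefficients and noise levels (the slide's numbers are lost).
beta1, beta2 = 0.6, 0.3
sd_u, sd_v = 1.0, 0.5
T = 200

# Simulate x_t = beta1 x_{t-1} + beta2 x_{t-2} + u_t and y_t = x_t + v_t.
x = np.zeros(T)
for t in range(2, T):
    x[t] = beta1 * x[t - 1] + beta2 * x[t - 2] + rng.normal(0.0, sd_u)
y = x + rng.normal(0.0, sd_v, size=T)

# State-space matrices; the state vector is (x_t, x_{t-1})'.
B = np.array([[beta1, beta2], [1.0, 0.0]])
A = np.array([[1.0, 0.0]])
Su = np.diag([sd_u ** 2, 0.0])
Sv = np.array([[sd_v ** 2]])

# Forward (filtering) recursion, from an assumed prior on x_0.
xf, Pf = np.zeros(2), 10.0 * np.eye(2)       # filtered mean and covariance
x_filt = np.zeros(T)
for t in range(T):
    x_pred = B @ xf                          # x_{t|t-1}
    P_pred = B @ Pf @ B.T + Su               # P_{t|t-1}
    S = A @ P_pred @ A.T + Sv                # innovation covariance
    K = P_pred @ A.T @ np.linalg.inv(S)      # Kalman gain (2x1)
    innov = y[t] - (A @ x_pred)[0]           # innovation e_t
    xf = x_pred + K[:, 0] * innov            # x_{t|t}
    Pf = (np.eye(2) - K @ A) @ P_pred        # P_{t|t}
    x_filt[t] = xf[0]

print("RMSE of filtered x_t:", np.sqrt(np.mean((x_filt - x) ** 2)))
```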

  44. Kalman Filtering (smoothing): Now consider finding the smoothed estimates x̂_{t|T} for t ≤ T. These can be found by successive backward recursions for t = T, T − 1, …, 2, 1, where:

  45. The covariance matrices satisfy the recursions

  46. The backward recursions (1)–(3). In the example, the required quantities are calculated in the forward recursion.
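
The backward equations themselves did not survive the transcript; the sketch below is a standard fixed-interval (Rauch–Tung–Striebel-type) smoother that is consistent with the forward recursion above, using the quantities x̂_{t|t}, P_{t|t}, x̂_{t|t-1}, P_{t|t-1} computed in the forward pass.

```python
import numpy as np

def backward_smoother(x_filt, P_filt, x_pred, P_pred, B):
    """Backward (smoothing) recursion; a standard fixed-interval form,
    sketched because the slide's equations are not in the transcript.
    x_filt[t], P_filt[t] are x_{t|t}, P_{t|t} from the forward pass and
    x_pred[t], P_pred[t] are x_{t|t-1}, P_{t|t-1}."""
    T = len(x_filt)
    x_sm = [None] * T
    P_sm = [None] * T
    x_sm[-1], P_sm[-1] = x_filt[-1], P_filt[-1]     # start at t = T
    for t in range(T - 2, -1, -1):
        J = P_filt[t] @ B.T @ np.linalg.inv(P_pred[t + 1])   # smoothing gain
        x_sm[t] = x_filt[t] + J @ (x_sm[t + 1] - x_pred[t + 1])
        P_sm[t] = P_filt[t] + J @ (P_sm[t + 1] - P_pred[t + 1]) @ J.T
    return x_sm, P_sm
```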
