Distribution of Estimates and Multivariate Regression


Presentation Transcript


  1. Distribution of Estimates and Multivariate Regression Lecture XXIX

  2. Models and Distributional Assumptions • The conditional normal model assumes that the observed random variables are distributed y_i ~ N(α + βx_i, σ²). • Thus, E[y_i | x_i] = α + βx_i and the variance of y_i equals σ². The conditional normal density can be expressed as
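  In the notation above, this conditional normal density takes the standard form

      f(y_i \mid x_i) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left[ -\frac{(y_i - \alpha - \beta x_i)^2}{2\sigma^2} \right]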

  3. Further, the ε_i are independently and identically distributed (consistent with our BLUE proof). • Given this formulation, the likelihood function for the simple linear model can be written:
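  With T independent observations, the likelihood is the product of these densities:

      L(\alpha, \beta, \sigma^2) = \prod_{i=1}^{T} \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left[ -\frac{(y_i - \alpha - \beta x_i)^2}{2\sigma^2} \right]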

  4. Taking the log of this likelihood function yields: • As discussed in Lecture XVII, this likelihood function can be concentrated in such a way that
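  The log-likelihood is

      \ln L = -\frac{T}{2}\ln(2\pi) - \frac{T}{2}\ln\sigma^2 - \frac{1}{2\sigma^2}\sum_{i=1}^{T}(y_i - \alpha - \beta x_i)^2

  and concentrating out \sigma^2 (substituting its maximizer \hat{\sigma}^2 = \frac{1}{T}\sum_{i}(y_i - \alpha - \beta x_i)^2) gives

      \ln L^c(\alpha, \beta) = -\frac{T}{2}\left[\ln(2\pi) + 1\right] - \frac{T}{2}\ln\left[\frac{1}{T}\sum_{i=1}^{T}(y_i - \alpha - \beta x_i)^2\right]

  so maximizing the concentrated likelihood is equivalent to minimizing the sum of squared errors.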

  5. So the least squares estimators are also maximum likelihood estimators if the error terms are normal. • The variance of β̂ can be derived from the Gauss-Markov results. Note from last lecture:
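  Presumably the result referenced here is the BLUE representation of the slope estimator as a linear function of the observations,

      \hat{\beta} = \sum_{i=1}^{T} d_i y_i, \qquad d_i = \frac{x_i - \bar{x}}{\sum_{j=1}^{T}(x_j - \bar{x})^2}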

  6. Remember that the objective function of the minimization problem we solved to get these results was the variance of the estimate:
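  For a linear estimator with fixed weights d_i, this variance is

      V(\hat{\beta}) = V\left(\sum_{i} d_i y_i\right) = \sum_{i} d_i^2\, V(y_i) + \sum_{i \neq j} d_i d_j\, \mathrm{Cov}(y_i, y_j)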

  7. This assumes that the errors are independently distributed. Thus, substituting the final result for d_i into this expression yields:
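  With independent errors the covariance terms vanish, giving the familiar variance of the slope estimator:

      V(\hat{\beta}) = \sigma^2 \sum_{i} d_i^2 = \sigma^2 \sum_{i}\left[\frac{x_i - \bar{x}}{\sum_{j}(x_j - \bar{x})^2}\right]^2 = \frac{\sigma^2}{\sum_{i}(x_i - \bar{x})^2}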

  8. Multivariate Regression Models • In general, the multivariate relationship can be written in matrix form as:
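  In standard matrix notation,

      y = X\beta + \varepsilon

  where y is the T x 1 vector of observations, X is the T x K matrix of explanatory variables, \beta is the K x 1 parameter vector, and \varepsilon is the T x 1 error vector.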

  9. If we expand the system to three observations, this system becomes:
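  The expanded system is not reproduced in the transcript; with an intercept and two regressors (one plausible layout, and the one that makes the three-observation system exactly identified on the next slide) it reads

      \begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix} =
      \begin{bmatrix} 1 & x_{12} & x_{13} \\ 1 & x_{22} & x_{23} \\ 1 & x_{32} & x_{33} \end{bmatrix}
      \begin{bmatrix} \beta_1 \\ \beta_2 \\ \beta_3 \end{bmatrix} +
      \begin{bmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \varepsilon_3 \end{bmatrix}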

  10. Expanding the exactly identified model, we get:
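  Under the same assumed layout, the three equations are

      y_1 = \beta_1 + \beta_2 x_{12} + \beta_3 x_{13} + \varepsilon_1
      y_2 = \beta_1 + \beta_2 x_{22} + \beta_3 x_{23} + \varepsilon_2
      y_3 = \beta_1 + \beta_2 x_{32} + \beta_3 x_{33} + \varepsilon_3

  three equations in three unknown parameters, hence "exactly identified."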

  11. In matrix form this can be expressed as • The sum of squared errors can then be written as:
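  That is, y = X\beta + \varepsilon, with

      SSE(\beta) = \varepsilon'\varepsilon = (y - X\beta)'(y - X\beta) = y'y - y'X\beta - \beta'X'y + \beta'X'X\beta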

  12. A little matrix calculus is a dangerous thing • Note that each term in this expansion is a scalar. Since the transpose of a scalar is itself, the expression can be rewritten as:
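  Since y'X\beta is a scalar, (y'X\beta)' = \beta'X'y, and the expansion collapses to

      SSE(\beta) = y'y - 2\beta'X'y + \beta'X'X\beta

  Differentiating with respect to \beta and setting the gradient to zero yields the normal equations and the least squares estimator:

      \frac{\partial SSE}{\partial \beta} = -2X'y + 2X'X\beta = 0 \quad\Longrightarrow\quad \hat{\beta} = (X'X)^{-1}X'y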

  13. Variance of the estimated parameters • The variance of the parameter matrix can be written as:
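  Presumably in the standard form

      V(\hat{\beta}) = E\left[(\hat{\beta} - \beta)(\hat{\beta} - \beta)'\right]

  Substituting y = X\beta + \varepsilon into \hat{\beta} = (X'X)^{-1}X'y gives the deviation needed on the next slide:

      \hat{\beta} - \beta = (X'X)^{-1}X'\varepsilon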

  14. Substituting this back into the variance relationship yields:
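  With \hat{\beta} - \beta = (X'X)^{-1}X'\varepsilon from the previous slide,

      V(\hat{\beta}) = E\left[(X'X)^{-1}X'\varepsilon\varepsilon'X(X'X)^{-1}\right] = (X'X)^{-1}X'\,E[\varepsilon\varepsilon']\,X(X'X)^{-1}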

  15. Note that E[εε'] = σ²I; therefore
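  the covariance matrix collapses to

      V(\hat{\beta}) = (X'X)^{-1}X'(\sigma^2 I)X(X'X)^{-1} = \sigma^2(X'X)^{-1}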

  16. Theorem 12.2.1 (Gauss-Markov): Let β* = C'y, where C is a T × K constant matrix such that C'X = I. Then β̂ is better than β* (that is, V(β*) - V(β̂) is positive semidefinite) if β* ≠ β̂.

  17. This choice of C guarantees that the estimator β* is an unbiased estimator of β. The variance of β* can then be written as:
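  Both steps are standard: unbiasedness follows from

      E[\beta^*] = E\left[C'(X\beta + \varepsilon)\right] = C'X\beta = \beta

  and, because C is a constant matrix,

      V(\beta^*) = E\left[C'\varepsilon\varepsilon'C\right] = \sigma^2 C'C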

  18. To complete the proof, we want to add a special form of zero. Specifically, we want to add σ²(X'X)⁻¹ - σ²(X'X)⁻¹ = 0.
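  That is,

      V(\beta^*) = \sigma^2 C'C - \sigma^2(X'X)^{-1} + \sigma^2(X'X)^{-1}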

  19. Focusing on the last terms, we note that by the orthogonality conditions for the C matrix

  20. Expanding these terms, the orthogonality condition C'X = I causes the cross-product terms to cancel:
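  A sketch of the algebra: expanding the right-hand side below gives C'C - C'X(X'X)^{-1} - (X'X)^{-1}X'C + (X'X)^{-1}X'X(X'X)^{-1}, and C'X = I (hence X'C = I) reduces the last three terms to -(X'X)^{-1}, so

      C'C - (X'X)^{-1} = \left[C - X(X'X)^{-1}\right]'\left[C - X(X'X)^{-1}\right]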

  21. Substituting backwards:
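  This expresses the variance of the alternative estimator as

      V(\beta^*) = \sigma^2(X'X)^{-1} + \sigma^2\left[C - X(X'X)^{-1}\right]'\left[C - X(X'X)^{-1}\right]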

  22. Thus,
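  the difference between the two variance matrices is

      V(\beta^*) - V(\hat{\beta}) = \sigma^2\left[C - X(X'X)^{-1}\right]'\left[C - X(X'X)^{-1}\right]

  which is positive semidefinite, so β̂ is better than β* whenever β* ≠ β̂.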

  23. The minimum-variance estimator is then obtained at C = X(X'X)⁻¹, so that β* = C'y = (X'X)⁻¹X'y, which is the ordinary least squares estimator.
