Mean Squared Error and Maximum Likelihood

  1. Mean Squared Error and Maximum Likelihood Lecture XVIII

  2. Mean Squared Error • As stated in our discussion on closeness, one potential measure for the goodness of an estimator is its mean squared error, $\mathrm{MSE}(\hat\theta) = E(\hat\theta - \theta)^2$.

  3. In the preceding example, the mean squared error of the estimate can be written as: $\mathrm{MSE}(T) = E(T - \theta)^2 = \sum_t (t - \theta)^2 P(T = t \mid \theta)$, where $\theta$ is the true parameter value between zero and one.

  4. This expected value is conditioned on the probability of each value of T at each value of $\theta$.
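
  As an illustrative sketch of this conditioning (assuming, as in the example, an estimator of the form T = X/n for X ~ Binomial(n, θ); the sample size n = 5 is hypothetical), the exact MSE can be computed by weighting each possible value of T by its probability:

```python
import numpy as np
from scipy.stats import binom

def mse_T(theta, n=5):
    """Exact MSE of T = X/n for X ~ Binomial(n, theta),
    computed by conditioning on each possible value of T."""
    x = np.arange(n + 1)
    t = x / n                       # possible values of the estimator
    p = binom.pmf(x, n, theta)      # probability of each value given theta
    return np.sum((t - theta) ** 2 * p)

for theta in (0.1, 0.5, 0.9):
    print(theta, mse_T(theta))      # matches theta * (1 - theta) / n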

  5. MSEs of Each Estimator [figure: MSE of each estimator plotted as a function of $\theta$]

  6. Definition 7.2.1. Let X and Y be two estimators of $\theta$. We say that X is better (or more efficient) than Y if $E(X - \theta)^2 \le E(Y - \theta)^2$ for all $\theta \in \Theta$, with strict inequality for at least one $\theta \in \Theta$.

  7. When an estimator is dominated by another estimator, the dominated estimator is inadmissible. • Definition 7.2.2. Let $\hat\theta$ be an estimator of $\theta$. We say that $\hat\theta$ is inadmissible if there is another estimator which is better in the sense that it produces a lower mean squared error. An estimator that is not inadmissible is admissible.
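
  To make Definitions 7.2.1 and 7.2.2 concrete, here is a minimal sketch comparing two hypothetical estimators of a proportion, T = X/n and the constant guess W = 1/2 (the specific estimators are assumptions, not necessarily the lecture's T, S, and W). Neither MSE curve lies below the other everywhere, so neither estimator is better in the sense of Definition 7.2.1:

```python
import numpy as np
from scipy.stats import binom

n = 5
thetas = np.linspace(0.01, 0.99, 99)

def mse_T(theta):                   # exact MSE of T = X/n
    x = np.arange(n + 1)
    return np.sum((x / n - theta) ** 2 * binom.pmf(x, n, theta))

mT = np.array([mse_T(t) for t in thetas])
mse_W = (0.5 - thetas) ** 2         # MSE of the constant estimator W = 1/2

# True, True: each estimator wins for some theta, so neither dominates
print((mT < mse_W).any(), (mse_W < mT).any())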

  8. Strategies for Choosing an Estimator: • Subjective strategy: This strategy considers the likely value of $\theta$ and selects the estimator that is best in that likely neighborhood. • Minimax strategy: According to the minimax strategy, we choose the estimator for which the largest possible value of the mean squared error is the smallest: $\min_{\hat\theta} \max_{\theta \in \Theta} E(\hat\theta - \theta)^2$.

  9. Definition 7.2.3: Let $\hat\theta$ be an estimator of $\theta$. It is a minimax estimator if, for any other estimator $\tilde\theta$ of $\theta$, we have: $\max_{\theta \in \Theta} E(\hat\theta - \theta)^2 \le \max_{\theta \in \Theta} E(\tilde\theta - \theta)^2$.
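
  Continuing the sketch above (same assumed estimators T and W), the minimax strategy compares worst-case MSEs over the parameter grid:

```python
# Worst-case MSE over the theta grid for each estimator (from the sketch above)
print("worst case for T:", mT.max())      # about 0.05, at theta = 0.5
print("worst case for W:", mse_W.max())   # about 0.24, near theta = 0 or 1
print("minimax choice:", "T" if mT.max() < mse_W.max() else "W")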

  10. Best Linear Unbiased Estimator: • Definition 7.2.4: $\hat\theta$ is said to be an unbiased estimator of $\theta$ if $E(\hat\theta) = \theta$ for all $\theta \in \Theta$. We call $E(\hat\theta) - \theta$ the bias.

  11. In our previous discussion T and S are unbiased estimators while W is biased. • Theorem 7.2.10: The mean squared error is the sum of the variance and the bias squared. That is, for any estimator $\hat\theta$ of $\theta$: $E(\hat\theta - \theta)^2 = V(\hat\theta) + \left[E(\hat\theta) - \theta\right]^2$.
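
  A quick Monte Carlo check of Theorem 7.2.10, using an arbitrary biased estimator $(X + 1)/(n + 2)$ of a binomial proportion (this estimator is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, theta = 5, 0.3
x = rng.binomial(n, theta, size=1_000_000)
est = (x + 1) / (n + 2)             # a deliberately biased estimator

mse = np.mean((est - theta) ** 2)
var = np.var(est)
bias = np.mean(est) - theta
print(mse, var + bias ** 2)         # the two agree up to Monte Carlo error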

  12. Theorem 7.2.11: Let $\{X_i\}$, $i = 1, 2, \dots, n$, be independent with common mean $\mu$ and variance $\sigma^2$. Consider the class of linear estimators of $\mu$, which can be written in the form $\hat\mu = \sum_{i=1}^n a_i X_i$, and impose the unbiasedness condition $E(\hat\mu) = \mu$.

  13. Then $V(\bar X) \le V\!\left(\sum_{i=1}^n a_i X_i\right)$ for all $a_i$ satisfying the unbiasedness condition. Further, this holds with equality only for $a_i = 1/n$.

  14. To prove these points, note that the $a_i$s must sum to one for unbiasedness: $E\!\left(\sum_i a_i X_i\right) = \mu \sum_i a_i = \mu$ requires $\sum_i a_i = 1$. • The final condition can be demonstrated through the identity $\sum_{i=1}^n a_i^2 = \sum_{i=1}^n \left(a_i - \frac{1}{n}\right)^2 + \frac{1}{n}$, which is minimized when every $a_i = 1/n$.

  15. Theorem 7.2.12: Consider the problem of minimizing $\sum_{i=1}^n a_i^2$ with respect to $\{a_i\}$ subject to the condition $\sum_{i=1}^n a_i = 1$.

  16. The solution to this problem is given by $a_i = 1/n$ for all $i$, so the sample mean is the best linear unbiased estimator of $\mu$.
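
  A small numerical check of Theorems 7.2.11 and 7.2.12 (the weights and sample size are illustrative): since $V\!\left(\sum_i a_i X_i\right) = \sigma^2 \sum_i a_i^2$ under the stated assumptions, it suffices to compare $\sum_i a_i^2$ for random unit-sum weights against $1/n$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10
for _ in range(5):
    a = rng.random(n)
    a /= a.sum()                    # enforce the unbiasedness condition
    print(np.sum(a ** 2) >= 1 / n)  # always True; equality only at a_i = 1/n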

  17. Asymptotic Properties • Definition 7.2.5. We say that $\hat\theta_n$ is a consistent estimator of $\theta$ if $\mathrm{plim}_{n \to \infty} \hat\theta_n = \theta$; that is, $\lim_{n \to \infty} P\!\left(|\hat\theta_n - \theta| > \epsilon\right) = 0$ for every $\epsilon > 0$.
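
  A simulation sketch of Definition 7.2.5 for the sample mean of standard normal draws (the distribution and the tolerance $\epsilon$ are illustrative): the probability of missing $\mu$ by more than $\epsilon$ vanishes as n grows:

```python
import numpy as np

rng = np.random.default_rng(2)
mu, eps = 0.0, 0.1
for n in (10, 100, 1000):
    xbar = rng.normal(mu, 1.0, size=(10_000, n)).mean(axis=1)
    print(n, np.mean(np.abs(xbar - mu) > eps))   # shrinks toward zero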

  18. Maximum Likelihood • The basic concept behind maximum likelihood estimation is to choose the set of parameters that maximizes the likelihood of drawing a particular sample. • Let the sample be X = {5, 6, 7, 8, 10}. Assuming a normal distribution, the probability of each of these points based on the unknown mean, $\mu$, can be written as $f(X_i \mid \mu) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left(-\frac{(X_i - \mu)^2}{2\sigma^2}\right)$.

  19. Assuming that the sample is independent, so that the joint distribution function can be written as the product of the marginal distribution functions, the probability of drawing the entire sample given the mean can then be written as: $L(\mu) = \prod_{i=1}^5 f(X_i \mid \mu)$.

  20. The value of $\mu$ that maximizes the likelihood function of the sample can then be defined by $\hat\mu = \arg\max_\mu L(\mu)$. Under the current scenario, we find it easier, however, to maximize the natural logarithm of the likelihood function: $\ln L(\mu) = \sum_{i=1}^5 \ln f(X_i \mid \mu)$.
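
  A numerical sketch of this maximization (assuming the normal density above with $\sigma = 1$; the grid bounds are arbitrary): the log-likelihood peaks at the sample mean, $\bar X = 7.2$:

```python
import numpy as np

X = np.array([5, 6, 7, 8, 10])
mus = np.linspace(4, 11, 7001)      # grid of candidate means

# log-likelihood for N(mu, 1), dropping the constant -n/2 * log(2*pi)
loglik = np.array([-0.5 * np.sum((X - m) ** 2) for m in mus])
print(mus[np.argmax(loglik)])       # approximately 7.2 = X.mean()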
