FEE Course, January 2013
Likelihood, Inference, and Model Comparison
Likelihood Methods and Models in Ecology
http://www.sortie-nd.org/lme/lme_course.html
Outline
• Probability and probability density functions
• Maximum likelihood estimates (versus traditional “method of moments” estimates)
• Statistical inference
• Classical “frequentist” statistics: limitations and mental gyrations...
• The “likelihood” alternative: basic principles and definitions
• Model comparison as a generalization of hypothesis testing
Probability defined more generally...
• Consider an outcome X from some process that has a set of possible outcomes S:
• If X and S are discrete (and all outcomes are equally likely), then P{X} is the number of outcomes in X divided by the number of outcomes in S
• If X is continuous, then the probability has to be defined over an interval: P(a ≤ X ≤ b) = ∫ g(x) dx, with the integral taken from a to b, where g(x) is a probability density function (PDF)
The Normal Probability Density Function (PDF)
prob(x) = 1/(σ√(2π)) · exp(−(x − μ)² / (2σ²))
μ = mean, σ² = variance
• Properties of a PDF:
• (1) prob(x) ≥ 0
• (2) ∫ prob(x) dx = 1
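A quick check of these properties in R (a minimal sketch; the mean, standard deviation, and interval endpoints are arbitrary). Note that a density value is not itself a probability and can exceed 1; it is the total area under the curve that must equal 1.
# the normal density integrates to 1 over the whole real line
integrate(dnorm, -Inf, Inf, mean = 4.5, sd = 1.2)
# the probability of an interval is the area under the PDF over that interval
integrate(dnorm, 3, 5, mean = 4.5, sd = 1.2)$value
pnorm(5, 4.5, 1.2) - pnorm(3, 4.5, 1.2)   # same value, via the cumulative distribution
# a density can exceed 1 even though the total area is 1
dnorm(0, mean = 0, sd = 0.1)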
Common PDFs...
• For continuous data:
• Normal
• Lognormal
• Gamma
• For discrete data:
• Poisson
• Binomial
• Multinomial
• Negative Binomial
See McLaughlin (1993) “A compendium of common probability distributions” in the reading list
Why are PDFs important? Answer: because they are used to calculate likelihood… (And in that case, they are called “likelihood functions”)
Statistical “Estimators” A statistical estimator is a function applied to a sample of data, and used to estimate an unknown population parameter (and an “estimate” is just the result of applying an “estimator” to a sample)
Properties of Estimators
• Some desirable properties of “point estimators” (functions used to estimate a fixed parameter):
• Bias: if the average error is zero, the estimator is unbiased
• Efficiency: the estimator with the minimum variance is the most efficient (note: the most efficient estimator is often biased)
• Consistency: as sample size increases, the probability that the estimate is close to the true parameter increases
• Asymptotic normality: a consistent estimator whose distribution around the true parameter θ approaches a normal distribution, with standard deviation shrinking in proportion to 1/√n as the sample size n grows
Maximum likelihood (ML) estimates versus method of moments (MOM) estimates
Bottom line: MOM dates from the era before computers and served well enough; ML needs computing power, but has more desirable statistical properties…
What’s wrong with MOM’s way?
• Nothing, if all you are interested in is calculating properties of your sample…
• But MOM’s formulas are generally not the best way* to infer estimates of the statistical properties of the population from which the sample was drawn…
For example: the population variance (because the second central moment is a biased underestimate of the population variance)
* … in the formal terms of bias, efficiency, consistency, and asymptotic normality
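A small simulation illustrating the variance example (a sketch; the population parameters and sample size are arbitrary): the MOM estimator divides the sum of squared deviations by n and underestimates the true variance on average, while the n - 1 version used by R’s var() does not.
set.seed(42)
n <- 5                       # small samples make the bias obvious; the true variance is 4
mom_var <- replicate(10000, { s <- rnorm(n, 10, 2); sum((s - mean(s))^2) / n })
unb_var <- replicate(10000, var(rnorm(n, 10, 2)))
mean(mom_var)                # biased low: expected value is (n - 1)/n * 4 = 3.2
mean(unb_var)                # close to the true variance of 4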
The Maximum Likelihood alternative…
Going back to PDFs: in plain language, a PDF allows you to calculate the probability that an observation will take on a value (x), given the underlying (true?) parameters of the population
Inference defined... “a: the act of passing from one proposition, statement, or judgment considered as true to another whose truth is believed to follow from that of the former b: the act of passing from statistical sample data to generalizations (as of the value of population parameters) usually with calculated degrees of certainty” Source: Merriam-Webster Online Dictionary
Statistical Inference...
... Typically concerns inferring properties of an unknown distribution from data generated by that distribution
... Components:
-- Point estimation
-- Hypothesis testing
-- Model comparison
Probability and Inference
• How do you choose the “correct inference” from your data, given inevitable uncertainty and error?
• Can you assign a probability to your certainty in the correctness of a given inference?
• (hint: if this is really important to you, then you should consider becoming a Bayesian, as long as you can accept what I consider to be some fairly objectionable baggage…)
• How do you choose between alternate hypotheses?
• Can you assess the strength of your evidence for alternate hypotheses?
The crux of the problem... “Thus, our general problem is to assess the relative merits of rival hypotheses in the light of observational or experimental data that bear upon them....” (Edwards, pg 1). Edwards, A.W.F. 1992. Likelihood. Expanded Edition. Johns Hopkins University Press.
Assigning Probabilities to Hypotheses
• Unfortunately, hypotheses (or even different parameter estimates) cannot generally be treated as “data” (outcomes of trials)
• Statisticians have debated alternate solutions to this problem for centuries
• (with no generally agreed upon solution)
One Way Out: Classical “Frequentist” Statistics and Tests of Null Hypotheses
• Probability is defined in terms of the outcome of a series of repeated trials...
• Hypothesis testing via “significance” of pre-defined “statistics”:
• What is the probability of observing a particular value of a predefined test statistic, given an assumed hypothesis about the underlying scientific model, and assumptions about the probability model of the test statistic?
• Hypotheses are never “accepted”, but are “rejected” (categorically) if the probability of obtaining the observed value of the test statistic is very small (the “p-value”)
An Implicit Assumption • The data are an approximate “sample” of an underlying “true” reality – i.e., there is a true population mean, and the sample provides an estimate of it...
Limitations of Frequentist Statistics
• They do not provide a means of measuring the relative strength of observational support for alternate hypotheses (they merely help you decide when to “reject” individual hypotheses in comparison to a single “null” hypothesis...)
• So you conclude the slope of the line is not 0. How strong is your evidence that the slope is really 0.45 vs. 0.50?
• Extremely non-intuitive: just what is a “confidence interval”, anyway?
So what is our alternative? Likelihood as a basis for inference
• Remember that the PDF defines the probability of observing an outcome (x), given that you already know the true population parameter (θ)
• But we want to generate an estimate of θ, given our data (x)
• And, unfortunately, the two are not identical:
prob(x | θ) ≠ prob(θ | x)
Fisher and the concept of “Likelihood”...
The “Likelihood Principle”:
L(θ | x) ∝ prob(x | θ)
In plain English: “The likelihood (L) of the parameter estimates (θ), given a sample (x), is proportional to the probability of observing the data, given the parameters...”
{and this probability is something we can calculate, using the appropriate underlying probability model (i.e. a PDF)}
R.A. Fisher (1890-1962) [photo: Fisher at age 22]
“Likelihood and Probability in R. A. Fisher’s Statistical Methods for Research Workers” (John Aldrich): a good summary of the evolution of Fisher’s ideas on probability, likelihood, and inference… Contains links to PDFs of Fisher’s early papers… A second page shows the evolution of his ideas through changes in successive editions of Fisher’s books…
http://www.economics.soton.ac.uk/staff/aldrich/fisherguide/prob+lik.htm
Calculating Likelihood and Log-Likelihood for Datasets
From basic probability theory: if two events (A and B) are independent, then P(A,B) = P(A)P(B)
More generally, for i = 1..n independent observations and a vector X of observations (xi):
L(θ | X) = ∏ g(xi | θ)
where g(xi | θ) is the appropriate PDF
But logarithms are easier to work with, so...
ln L(θ | X) = Σ ln g(xi | θ)
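One practical reason for working with logs, shown with a throwaway example (the sample size and parameters are arbitrary): multiplying many small probabilities underflows to zero, while summing their logs stays finite.
z <- rnorm(1000, mean = 4.5, sd = 1.2)     # 1000 simulated observations
prod(dnorm(z, 4.5, 1.2))                   # the product underflows to 0
sum(dnorm(z, 4.5, 1.2, log = TRUE))        # the log-likelihood is still finite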
A simple example…
A sample of 10 observations…
Assume they are normally distributed, with an unknown population mean and standard deviation.
What is the (log) likelihood of a mean of 4.5 and a standard deviation of 1.2, given the sample?
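The original sample values are not reproduced here, so this sketch uses made-up numbers; the recipe is what matters: evaluate the normal PDF at each observation for the proposed parameter values, and sum the logs.
obs <- c(3.2, 4.8, 5.1, 4.4, 3.9, 5.6, 4.1, 4.9, 3.7, 5.3)   # hypothetical data
# log-likelihood of mean = 4.5 and sd = 1.2, given these observations
sum(dnorm(obs, mean = 4.5, sd = 1.2, log = TRUE))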
Likelihood “Surfaces”
The variation in likelihood across the set of possible parameter values defines a likelihood “surface”...
For a model with just 1 parameter, the surface is simply a curve (aka a “likelihood profile”)
“Support” and “Support Limits” Log-likelihood = “Support” (Edwards 1992)
Another (still somewhat trivial) example…
• MOM vs ML estimates of the probability of survival for a population:
• Data: a quadrat in which 16 of 20 seedlings survived during a census interval. (Note that in this case, the quadrat is the unit of observation…, so sample size = 1)
i.e. given N = 20 and x = 16, what is p?
# evaluate the binomial likelihood across candidate survival probabilities
p <- seq(0, 1, 0.005)
likelihood <- dbinom(16, 20, p)
plot(p, likelihood)
# the ML estimate is the value of p that maximizes the likelihood
p[which.max(likelihood)]
A more realistic example
# Create some data (5 quadrats)
N <- c(11, 14, 8, 22, 50)
x <- c(8, 7, 5, 17, 35)
# Calculate the log-likelihood for each candidate probability of survival
p <- seq(0, 1, 0.005)
log_likelihood <- rep(0, length(p))
for (i in 1:length(p)) {
  log_likelihood[i] <- sum(dbinom(x, N, p[i], log = TRUE))
}
# Plot the likelihood profile
plot(p, log_likelihood)
# What probability of survival maximizes the log-likelihood?
p[which.max(log_likelihood)]
[1] 0.685
# How does this compare to the average across the 5 quadrats?
mean(x/N)
[1] 0.665
Focus in on the MLE…
# what is the log-likelihood at the MLE?
max(log_likelihood)
[1] -9.46812
• Things to note about log-likelihoods:
• For discrete data they should always be negative, because probabilities are ≤ 1 (if not, you have a problem with your likelihood function); for continuous data, densities can exceed 1, so positive log-likelihoods are possible
• The absolute magnitude of the log-likelihood increases as sample size increases
An example with continuous data…
The normal PDF (as before):
prob(x | μ, σ²) = 1/(σ√(2π)) · exp(−(x − μ)² / (2σ²))
x = observed value, μ = mean, σ² = variance
In R: dnorm(x, mean = 0, sd = 1, log = FALSE)
> dnorm(2, 2.5, 1)
[1] 0.3520653
> dnorm(2, 2.5, 1, log = TRUE)
[1] -1.043939
Problem: now there are TWO unknowns needed to calculate the likelihood (the mean and the variance)!
Solution: treat the variance just like another parameter in the model, and find the ML estimate of the variance just like you would any other parameter…
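A minimal sketch of that solution (using the same made-up data as above): write the negative log-likelihood as a function of both parameters and let optim() search for the values that maximize the likelihood. Note that the ML estimate of the variance divides by n rather than n - 1, so it is slightly smaller than the value returned by var().
obs <- c(3.2, 4.8, 5.1, 4.4, 3.9, 5.6, 4.1, 4.9, 3.7, 5.3)   # hypothetical data
# negative log-likelihood for a normal model with unknown mean and sd
negLL <- function(par, x) {
  -sum(dnorm(x, mean = par[1], sd = par[2], log = TRUE))
}
# minimize the negative log-likelihood, keeping the sd strictly positive
fit <- optim(c(mean(obs), sd(obs)), negLL, x = obs,
             method = "L-BFGS-B", lower = c(-Inf, 1e-6))
fit$par       # ML estimates of the mean and sd
-fit$value    # the maximized log-likelihood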
Likelihood and Model Comparison as a basis for Hypothesis Testing • When and where is “strong inference” really useful? • When is it just an impediment to progress? Platt, J. R. 1964. Strong inference. Science 146:347-353 Stephens et al. 2005. Information theory and hypothesis testing: a call for pluralism. Journal of Applied Ecology 42:4-12.
Chamberlin’s alternative: multiple working hypotheses
• Science rarely progresses through a series of dichotomously branched decisions…
• Instead, we are constantly trying to choose among a large set of alternate hypotheses
• The concept is very old, but the computational power needed to adopt this approach has only recently become available…
“Conscientiously followed, the method of the (single) working hypothesis … has some serious defects. … To avoid this grave danger, the method of multiple working hypotheses is urged. It differs… in that it distributes the effort and divides the affections. …the effort is to bring up into review every rational explanation of the phenomenon in hand… and to give to all of these as impartially as possible a working form and a due place in the investigation.”
Chamberlin, T. C. 1890. The method of multiple working hypotheses. Science 15:92.
Hypothesis testing and “significance”
Nester’s (1996) Creed:
• TREATMENTS: all treatments differ
• FACTORS: all factors interact
• CORRELATIONS: all variables are correlated
• POPULATIONS: no two populations are identical in any respect
• NORMALITY: no data are normally distributed
• VARIANCES: variances are never equal
• MODELS: all models are wrong
• EQUALITY: no two numbers are the same
• SIZE: many numbers are very small
Nester, M. R. 1996. An applied statistician’s creed. Applied Statistics 45:401-410
Hypothesis testing vs. estimation
“The problem of estimation is of more central importance (than hypothesis testing)… for in almost all situations we know that the effect whose significance we are measuring is perfectly real, however small; what is at issue is its magnitude.” (Edwards, 1992, pg. 2)
“An insignificant result, far from telling us that the effect is non-existent, merely warns us that the sample was not large enough to reveal it.” (Edwards, 1992, pg. 2)
The most important point of the lecture… Any hypothesis test can be framed as a comparison of alternate models… (and being free of the constraints imposed by the alternate models embedded in classical statistical tests is perhaps the most important benefit of the likelihood approach…)
Differences in Frequentist vs. Likelihood Approaches
• Traditional frequentist approach:
• Report the “significance” of a test that …, based on a test statistic calculated from sums of squares (an F statistic), with the necessary assumption of a homogeneous and normally distributed error
• Likelihood approach:
• Compare a set of alternate models, assess the strength of evidence in your data for each of them, and identify the “best” model
• If the assumption about the error term isn’t appropriate, use a different error term!
Remember that the error term is part of the model…
And you don’t just have to accept that a simple, normally distributed, homogeneous error is appropriate…
• Estimate a separate error term for each group
• Or an error term that varies as a function of the predicted value
• Or where the error isn’t normally distributed
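As an illustration of the second option, a sketch with simulated data and an arbitrarily chosen error model: fit a linear regression by maximum likelihood in which the residual standard deviation grows with the predicted value, simply by writing that structure into the likelihood function.
set.seed(1)
x <- runif(100, 0, 10)
y <- 2 + 1.5 * x + rnorm(100, sd = 0.5 + 0.3 * (2 + 1.5 * x))   # error grows with the mean
# negative log-likelihood: normal error whose sd is a linear function of the predicted value
negLL <- function(par, x, y) {
  mu    <- par[1] + par[2] * x        # predicted values
  sigma <- par[3] + par[4] * mu       # sd increases with the prediction
  if (any(sigma <= 0)) return(Inf)    # keep the sd positive
  -sum(dnorm(y, mean = mu, sd = sigma, log = TRUE))
}
fit <- optim(c(1, 1, 1, 0.1), negLL, x = x, y = y)
fit$par   # intercept, slope, and the two error-term parameters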
An Example: Analysis of Covariance
• A traditional frequentist ANCOVA model (homogeneous slopes): y_ij = μ + α_i + β·x_ij + ε_ij
• What is restrictive about this model?
• How would you generalize this in a likelihood framework? (see the sketch below)
• What alternate models are you testing with the standard frequentist statistics?
• What more general alternate models might you like to test?
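One way to frame those alternate models, sketched with lm() on invented data: the homogeneous-slopes ANCOVA is just one member of a set of candidate models that also includes a single common slope with no group effect and a separate slope for each group, and the support for each can be compared directly through their log-likelihoods.
set.seed(2)
group <- factor(rep(c("A", "B", "C"), each = 30))
slope <- rep(c(1.0, 1.4, 0.8), each = 30)        # the true slope differs among groups
x <- runif(90, 0, 10)
y <- 2 + slope * x + rnorm(90, sd = 1)
m1 <- lm(y ~ x)            # single slope, no group effect
m2 <- lm(y ~ x + group)    # homogeneous slopes, different intercepts (classic ANCOVA)
m3 <- lm(y ~ x * group)    # separate slope (and intercept) for each group
sapply(list(m1, m2, m3), logLik)   # compare the support for each model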
But is likelihood enough? The challenge of parsimony The importance of seeking simple answers... “It will not be sufficient, when faced with a mass of observations, to plead special creation, even though, as we shall see, such a hypothesis commands a higher numerical likelihood than any other.” (Edwards, 1992, pg. 1, in explaining the need for a rigorous basis for scientific inference, given uncertainty in nature...)
Models, Truth, and “Full Reality” (The Burnham and Anderson view...)
“We believe that “truth” (full reality) in the biological sciences has essentially infinite dimension, and hence ... cannot be revealed with only ... finite data and a “model” of those data... ... We can only hope to identify a model that provides a good approximation to the data available.” (Burnham and Anderson 2002, pg. 20)
The “full” model
• What I irreverently call the “god” model: everything is the way it is because it is…
• In statistical terms, this is simply a model with as many parameters as observations
• i.e.: x_i = θ_i
This will always be the model with the highest likelihood! (but it won’t be the most parsimonious)…
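A quick demonstration with count data (the counts are made up): a saturated Poisson model that gives every observation its own parameter (λ_i = x_i) always matches or beats any simpler model, such as one with a single shared mean.
counts <- c(3, 7, 2, 9, 4, 6, 1, 8)                      # hypothetical counts
sum(dpois(counts, lambda = mean(counts), log = TRUE))    # one shared parameter
sum(dpois(counts, lambda = counts, log = TRUE))          # "full" model: one parameter per observation (higher)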
Parsimony, Ockham’s razor, and drawing elephants...
William of Ockham (1285-1349): “Pluralitas non est ponenda sine necessitate” (“entities should not be multiplied unnecessarily”)
“Parsimony: ... 2: economy in the use of means to an end; especially: economy of explanation in conformity with Occam's razor” (Merriam-Webster Online Dictionary)
So how many parameters DOES it take to draw an elephant...?*
Information theory perspective: “How much information is lost when using a simple model to approximate reality?”
Answer: the Kullback-Leibler distance (generally unknowable)
More practical answer: Akaike’s Information Criterion (AIC) identifies the model that minimizes KL distance
*30 would “carry a chemical engineer into preliminary design” (Wei, 1975) (cited in B&A, pg 30)
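A rough sketch of how the likelihood/parsimony trade-off plays out (polynomial fits to invented data; AIC = -2·logLik + 2K, where K is the number of estimated parameters): the most complex model always has the highest likelihood, but not necessarily the lowest AIC.
set.seed(3)
x <- runif(50, 0, 10)
y <- 2 + 0.8 * x + rnorm(50, sd = 2)           # the true relationship is linear
fits <- list(linear    = lm(y ~ x),
             quadratic = lm(y ~ poly(x, 2)),
             quintic   = lm(y ~ poly(x, 5)))
sapply(fits, logLik)   # the log-likelihood always improves with more parameters
sapply(fits, AIC)      # ...but AIC penalizes the extra parameters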
The brave new world… • Science is the development of simplified models as explanations (approximations) of reality… • The “quality” of the explanation (the model) will be a balance of many factors (both quantitative and qualitative)