
Statistical Inference I: Hypothesis testing; sample size


Presentation Transcript


  1. Statistical Inference I: Hypothesis testing; sample size

  2. Statistics Primer • Statistical Inference • Hypothesis testing • P-values • Type I error • Type II error • Statistical power • Sample size calculations

  3. What is a statistic? • A statistic is any value that can be calculated from the sample data. • Sample statistics are calculated to give us an idea about the larger population.

  4. Examples of statistics: • mean • The average cost of a gallon of gas in the US is $2.65. • difference in means • The difference in the average gas price in Los Angeles ($2.96) compared with Des Moines, Iowa ($2.32) is 64 cents. • proportion • 67% of high school students in the U.S. exercise regularly. • difference in proportions • The difference in the proportion of Democrats who approve of Obama (83%) versus Republicans who do (14%) is 69%.

  5. What is a statistic? • Sample statistics are estimates of population parameters.

  6. Sample statistics estimate population parameters: Truth (not observable): the mean IQ of some population of 100,000 people = 100. Sample (observation): the mean IQ of 5 subjects, the sample statistic. We use the sample statistic to make guesses about the whole population.

  7. What is sampling variation? • Statistics vary from sample to sample due to random chance. • Example: • A population of 100,000 people has an average IQ of 100 (If you actually could measure them all!) • If you sample 5 random people from this population, what will you get?
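A quick way to see the answer is to simulate it. The sketch below (an illustration, not from the slides; a population SD of 15 is assumed, since the slide states only the mean) draws repeated samples of 5 people from a simulated population:

```python
import random

random.seed(42)  # for reproducibility

# Simulated population of 100,000 people with mean IQ 100.
# (An SD of 15 is an assumption; the slide states only the mean.)
population = [random.gauss(100, 15) for _ in range(100_000)]

# Each sample of 5 people gives a different sample mean.
for _ in range(3):
    sample = random.sample(population, 5)
    print(round(sum(sample) / len(sample), 1))
```

Each draw prints a different sample mean, typically within several points of 100 but rarely exactly 100.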

  8. Sampling Variation Truth (not observable): mean IQ = 100. Repeated samples drawn from this same population yield different sample means.

  9. Sampling Variation and Sample Size • Do you expect more or less sampling variability in samples of 10 people? • Of 50 people? • Of 1000 people? • Of 100,000 people?

  10. Sampling Distributions Most experiments are one-shot deals. So, how do we know if an observed effect from a single experiment is real or just an artifact of sampling variability (chance variation)? Answering that requires a priori knowledge about how sampling variability works… Question: Why have I made you learn about probability distributions and about how to calculate and manipulate expected value and variance? Answer: Because they form the basis of describing the distribution of a sample statistic.

  11. Standard error • Standard Error is a measure of sampling variability. • Standard error is the standard deviation of a sample statistic. • It’s a theoretical quantity! What would the distribution of my statistic be if I could repeat my experiment many times (with fixed sample size)? How much chance variation is there? • Standard error decreases with increasing sample size and increases with increasing variability of the outcome (e.g., IQ). • Standard errors can be predicted by computer simulation or mathematical theory (formulas). • The formula for standard error is different for every type of statistic (e.g., mean, difference in means, odds ratio).
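Both claims on this slide can be checked by simulation (an illustrative sketch for the sample mean, whose standard error follows the formula SD/√n; the population mean of 100 and SD of 15 are assumed values):

```python
import math
import random

random.seed(1)

def simulated_se(pop_sd, n, reps=2000):
    """SD of the sample mean across many repeated samples of size n."""
    means = []
    for _ in range(reps):
        sample = [random.gauss(100, pop_sd) for _ in range(n)]
        means.append(sum(sample) / n)
    grand = sum(means) / reps
    return math.sqrt(sum((m - grand) ** 2 for m in means) / reps)

for n in (10, 50, 1000):
    theory = 15 / math.sqrt(n)  # formula for the SE of a mean
    print(n, round(simulated_se(15, n), 2), round(theory, 2))
```

The simulated and formula values agree, and both shrink as the sample size grows, which is exactly the behavior the slide describes.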

  12. What is statistical inference? • The field of statistics provides guidance on how to make conclusions in the face of chance variation (sampling variability).

  13. Example 1: Difference in proportions • Research Question: Are antidepressants a risk factor for suicide attempts in children and adolescents? • Example modified from: “Antidepressant Drug Therapy and Suicide in Severely Depressed Children and Adults”; Olfson et al. Arch Gen Psychiatry. 2006;63:865-872.

  14. Example 1 • Design: Case-control study • Methods: Researchers used Medicaid records to compare prescription histories between 263 children and teenagers (6-18 years) who had attempted suicide and 1241 controls who had never attempted suicide (all subjects suffered from depression). • Statistical question: Is a history of use of antidepressants more common among cases than controls?

  15. Example 1 • Statistical question: Is a history of use of particular antidepressants more common among cases than controls? What will we actually compare? • Proportion of cases who used antidepressants in the past vs. proportion of controls who did

  16. Results • Any antidepressant drug ever: 120 of 263 cases (46%) vs. 448 of 1241 controls (36%) • Difference = 10%

  17. What does a 10% difference mean? • Before we perform any formal statistical analysis on these data, we already have a lot of information. • Look at the basic numbers first; THEN consider statistical significance as a secondary guide.

  18. Is the association statistically significant? • This 10% difference could reflect a true association or it could be a fluke in this particular sample. • The question: is 10% bigger or smaller than the expected sampling variability?

  19. What is hypothesis testing? • Statisticians try to answer this question with a formal hypothesis test

  20. Hypothesis testing Step 1: Assume the null hypothesis. Null hypothesis: There is no association between antidepressant use and suicide attempts in the target population (= the difference is 0%)

  21. Hypothesis Testing Step 2: Predict the sampling variability assuming the null hypothesis is true—math theory (formula): The standard error of the difference in two proportions (using the pooled proportion p̂ under the null) is: SE = sqrt[ p̂(1 − p̂)(1/n1 + 1/n2) ] ≈ 3.3% here. Thus, we expect to see differences between the groups as big as about 6.6% (2 standard errors) just by chance…
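That standard error can be computed directly (a sketch assuming the usual pooled-proportion formula under the null, with the counts 120/263 and 448/1241 from the results slide):

```python
import math

# Pooled proportion of antidepressant use under the null hypothesis.
p = (120 + 448) / (263 + 1241)

# Standard error of the difference in two proportions.
se = math.sqrt(p * (1 - p) * (1 / 263 + 1 / 1241))
print(round(se, 3))  # ≈ 0.033, i.e. about 3.3%
```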

  22. Hypothesis Testing • In computer simulation, you simulate taking repeated samples of the same size from the same population and observe the sampling variability. • I used computer simulation to take 1000 samples of 263 cases and 1241 controls assuming the null hypothesis is true (i.e., no difference in antidepressant use between the groups). Step 2: Predict the sampling variability assuming the null hypothesis is true—computer simulation:
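The simulation described above can be sketched as follows (Python; the common exposure rate under the null is taken as the pooled rate from the observed counts, an assumption consistent with the results slide):

```python
import random

random.seed(7)
p_null = (120 + 448) / (263 + 1241)  # common proportion under the null

diffs = []
for _ in range(1000):  # 1000 simulated case-control studies
    p_cases = sum(random.random() < p_null for _ in range(263)) / 263
    p_controls = sum(random.random() < p_null for _ in range(1241)) / 1241
    diffs.append(p_cases - p_controls)

# The SD of the simulated differences is the standard error.
mean = sum(diffs) / len(diffs)
se = (sum((d - mean) ** 2 for d in diffs) / len(diffs)) ** 0.5
print(round(se, 3))  # close to 0.033 (3.3%)
```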

  23. Computer Simulation Results

  24. What is standard error? Standard error: a measure of the variability of a sample statistic. Here, the standard error is about 3.3%.

  25. Hypothesis Testing Step 3: Do an experiment We observed a difference of 10% between cases and controls.

  26. Hypothesis Testing Step 4: Calculate a p-value P-value=the probability of your data or something more extreme under the null hypothesis.

  27. Hypothesis Testing Step 4: Calculate a p-value—mathematical theory: The difference in proportions follows a normal distribution, so Z = (observed difference between the groups) / (standard error) = 10% / 3.3% ≈ 3.0. A Z-value of 3.0 corresponds to a p-value of .003.
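The same calculation in code (a sketch; the standard-normal CDF is obtained from math.erf, and the slide's p-value of .003 corresponds to Z = 3.0):

```python
import math

se = 0.033                 # standard error from the previous step
z = 0.10 / se              # observed difference / standard error ≈ 3.0

# Two-sided p-value from the standard normal distribution.
phi = 0.5 * (1 + math.erf(z / math.sqrt(2)))  # P(Z <= z)
p_value = 2 * (1 - phi)
print(round(z, 1), round(p_value, 3))
```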

  28. When we ran this study 1000 times, we got 1 result as big or bigger than 10%. We also got 2 results as small or smaller than –10%. The p-value from computer simulation…

  29. P-value P-value=the probability of your data or something more extreme under the null hypothesis. From our simulation, we estimate the p-value to be: 3/1000 or .003

  30. Hypothesis Testing Step 5: Reject or do not reject the null hypothesis. Here we reject the null. Alternative hypothesis: There is an association between antidepressant use and suicide attempts in the target population.

  31. What does a 10% difference mean? • Is it “statistically significant”? YES • Is it clinically significant? • Is this a causal association?

  32. What does a 10% difference mean? • Is it “statistically significant”? YES • Is it clinically significant? MAYBE • Is this a causal association? MAYBE Statistical significance does not necessarily imply clinical significance. Statistical significance does not necessarily imply a cause-and-effect relationship.

  33. What would a lack of statistical significance mean? • If this study had sampled only 50 cases and 50 controls, the sampling variability would have been much higher—as shown in this computer simulation…

  34. With 263 cases and 1241 controls, the standard error is about 3.3%. With 50 cases and 50 controls, the standard error is about 10%.
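The jump from 3.3% to 10% follows directly from the sample sizes (a sketch, assuming the usual pooled-proportion formula for the SE of a difference in proportions, with the pooled rate taken from the original counts):

```python
import math

p = (120 + 448) / (263 + 1241)  # pooled proportion under the null

se_large = math.sqrt(p * (1 - p) * (1 / 263 + 1 / 1241))
se_small = math.sqrt(p * (1 - p) * (1 / 50 + 1 / 50))
print(round(se_large, 3), round(se_small, 2))  # ≈ 0.033 and ≈ 0.10
```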

  35. With only 50 cases and 50 controls, the standard error is about 10%. If we ran this study 1000 times, we would expect to get values of 10% or higher 170 times (or 17% of the time).

  36. Two-tailed p-value = 17% × 2 = 34%

  37. What does a 10% difference mean (50 cases/50 controls)? • Is it “statistically significant”? NO • Is it clinically significant? MAYBE • Is this a causal association? MAYBE No evidence of an effect ≠ evidence of no effect.

  38. Example 2: Difference in means • Example: Rosenthal, R. and Jacobson, L. (1966). Teachers’ expectancies: Determinants of pupils’ I.Q. gains. Psychological Reports, 19, 115-118.

  39. The Experiment (note: exact numbers have been altered) • Grade 3 students at Oak School were given an IQ test at the beginning of the academic year (n=90). • Classroom teachers were given a list of names of students in their classes who had supposedly scored in the top 20 percent; these students were identified as “academic bloomers” (n=18). • BUT: the children on the teachers’ lists had actually been randomly assigned to the list. • At the end of the year, the same I.Q. test was re-administered.

  40. Example 2 • Statistical question: Do students in the treatment group have more improvement in IQ than students in the control group? What will we actually compare? • One-year change in IQ score in the treatment group vs. one-year change in IQ score in the control group.

  41. Results: Change in IQ score • “Academic bloomers” (n=18): 12.2 points (SD 2.0) • Controls (n=72): 8.2 points (SD 2.0) • Difference = 4 points The standard deviation of change scores was 2.0 in both groups; this affects statistical significance…

  42. What does a 4-point difference mean? • Before we perform any formal statistical analysis on these data, we already have a lot of information. • Look at the basic numbers first; THEN consider statistical significance as a secondary guide.

  43. Is the association statistically significant? • This 4-point difference could reflect a true effect or it could be a fluke. • The question: is a 4-point difference bigger or smaller than the expected sampling variability?

  44. Hypothesis testing Step 1: Assume the null hypothesis. Null hypothesis: There is no difference between “academic bloomers” and normal students (= the difference is 0 points)

  45. Hypothesis Testing Step 2: Predict the sampling variability assuming the null hypothesis is true—math theory: The standard error of the difference in two means is: SE = sqrt(s1²/n1 + s2²/n2) = sqrt(2.0²/18 + 2.0²/72) ≈ 0.53 We expect to see differences between the groups as big as about 1.0 (2 standard errors) just by chance…
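That standard error can be computed from the numbers on the results slide (a sketch, assuming the usual formula sqrt(s1²/n1 + s2²/n2) for the SE of a difference in two means):

```python
import math

# SD of change scores was 2.0 in both groups; n = 18 treated, 72 controls.
se = math.sqrt(2.0**2 / 18 + 2.0**2 / 72)
print(round(se, 2))  # ≈ 0.53, consistent with the simulated value of about 0.52
```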

  46. Hypothesis Testing • In computer simulation, you simulate taking repeated samples of the same size from the same population and observe the sampling variability. • I used computer simulation to take 1000 samples of 18 treated and 72 controls, assuming the null hypothesis (that the treatment doesn’t affect IQ). Step 2: Predict the sampling variability assuming the null hypothesis is true—computer simulation:
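That simulation can be sketched as follows (Python; under the null, both groups' change scores come from the same distribution, so the common mean is arbitrary here — only the SD of 2.0 from the slide matters for the spread of the difference):

```python
import random

random.seed(3)

diffs = []
for _ in range(1000):  # 1000 simulated experiments under the null
    treated = [random.gauss(0, 2.0) for _ in range(18)]
    controls = [random.gauss(0, 2.0) for _ in range(72)]
    diffs.append(sum(treated) / 18 - sum(controls) / 72)

# The SD of the simulated differences is the standard error.
mean = sum(diffs) / len(diffs)
se = (sum((d - mean) ** 2 for d in diffs) / len(diffs)) ** 0.5
print(round(se, 2))  # close to 0.52
```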

  47. Computer Simulation Results

  48. What is the standard error? Standard error: a measure of the variability of a sample statistic. Here, the standard error is about 0.52.

  49. Hypothesis Testing Step 3: Do an experiment We observed a difference of 4 points between treated and controls.

  50. Hypothesis Testing Step 4: Calculate a p-value P-value=the probability of your data or something more extreme under the null hypothesis.
