
One Sample and Two Sample T-tests



  1. One Sample and Two Sample T-tests • Introduce t test • Hypothesis testing using a t test • Paired t test • Independent samples t test

  2. Recap • Single score compared to a known distribution • Sample mean compared to a known sampling distribution (central limit theorem)

  3. Example test of a hypothesis • Say the 8:00 stats students average 6.85 hours of sleep on weeknights. • UNT Daily reports that students as a whole average 7.5 hours of sleep per weeknight (standard deviation of 2.5). • Do the stats students get less sleep on average?

  4. Example test of a hypothesis • Hypothesis: UNT students average 7.5 hours of sleep (σ = 2.5) • Data from the stats students:

  5. Example test of a hypothesis • Z = -2.10 • Consult the table: what is the probability of coming up with a z value of -2.10 (which is just a transformation of our mean of 6.85) or more extreme? • p = .018
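
Not part of the original slides: a minimal Python sketch (assuming scipy is available; the lecture itself uses a printed z table) of how the p = .018 above could be reproduced in code.

```python
# Reproduce the z-test result from the slide: z = -2.10, one-tailed p of about .018
from scipy.stats import norm

z = -2.10                 # z computed from the sample mean of 6.85
p_lower = norm.cdf(z)     # P(Z <= -2.10): that value or more extreme in the lower tail
print(round(p_lower, 3))  # 0.018, matching the slide
```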

  6. Example test of a hypothesis • Another way to phrase our answer is that the probability of getting a mean of 6.85 hours of sleep or less when we were expecting 7.5 is .018. • Question: are 8:00am stats students typical in their sleeping habits? • Is their mean sleeping average significantly different from what we'd expect?

  7. Example test of a hypothesis • Formally • Ho : μ = 7.5 • H1 : μ ≠ 7.5 • If the probability of the result is less than some criterion we've set up (e.g. p = .10), then we reject the null hypothesis (Ho) and conclude that our sample mean comes from a population with a mean that is different from the hypothesized population mean. • The difference between the sample mean (x̄) and the hypothesized population mean (μ) is statistically significant • Not just due to sampling error

  8. One and two-tailed tests • There are two ways to state our alternative hypothesis • For example • Ho : μ = 7.5 • H1 : μ ≠ 7.5 • However, seeing as we were dealing with an 8:00am class, it wouldn't have been unreasonable to expect these folks to sleep less than typical UNT students. In other words • Ho : μ = 7.5 • H1 : μ < 7.5

  9. One and two-tailed tests • In the first case we have a non-directional hypothesis, also called a two-tailed test • We are expecting extreme results that are some distance greater than or less than the hypothesized population mean (but we're not sure which) • In the second case we had a directional hypothesis, a one-tailed test • We expect our result to be greater than the hypothesized mean • Or we expect our result to be less than the hypothesized mean • We only test one of these two outcomes

  10. One and two-tailed tests • What difference does it make? • Say I was going to state that any outcome with a probability of < .05 would be deemed statistically significant • What z-score am I hoping to find for a one-tailed test vs. a two-tailed test in order to claim significance?

  11. One-tailed test • H1: μ < some value • H1: μ > some value

  12. Two-tailed test • H1: μ ≠ some value • We don't know if it'll be greater than or less than the hypothesized value

  13. One and two-tailed tests • Note that the critical z-value that we'd need to reach the .05 level is different for each test. • In the one-tailed situation, we only need a z score of +1.645 or –1.645 (depending on whether our alternative hypothesis states that the result will be greater than or less than the hypothesized population mean) • Gist: easier to find a result that qualifies as significant (more statistical power) • In the two-tailed situation, we need a z score of ±1.96 • Tougher
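
As an aside not in the slides, the two critical z-values quoted above can be recovered with scipy's inverse normal CDF instead of a printed table (a sketch, assuming scipy):

```python
# Critical z-values at alpha = .05 for one- and two-tailed tests
from scipy.stats import norm

alpha = 0.05
z_one_tailed = norm.ppf(1 - alpha)       # ~1.645: all of alpha sits in one tail
z_two_tailed = norm.ppf(1 - alpha / 2)   # ~1.960: alpha is split across both tails
print(round(z_one_tailed, 3), round(z_two_tailed, 3))
```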

  14. Two-tailed rejection • Question: can you claim a directional result from a two-tailed test?

  15. One and two-tailed tests • Why don’t we just do a one-tailed test all the time? • Sometimes we don’t know one way or another and/or are just looking for a difference of some kind • Even when we have a good idea, our theory could be wrong • Cover our ignorance

  16. Limitation to the z-test • We need to know σ (population standard deviation) to compare things using our normal distribution tables • Most situations we do not know σ

  17. Solution • Remember that the standard deviation has properties that make it a very good estimate of the population value • We can use our sample standard deviation to estimate the population standard deviation

  18. T test • Which leads to: t = (x̄ − μ) / (s / √n) • where s (the sample standard deviation) stands in for σ, so s / √n is the estimated standard error of the mean • and remember degrees of freedom: df = n − 1 • Look familiar?
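
A small sketch of that formula as code (Python, not part of the lecture); the numbers plugged in simply anticipate the worked example on slide 30 rather than being new data.

```python
# One-sample t statistic: t = (x-bar - mu) / (s / sqrt(n)), with df = n - 1
import math

def one_sample_t(sample_mean, mu, s, n):
    standard_error = s / math.sqrt(n)    # estimated standard error of the mean
    return (sample_mean - mu) / standard_error

# Values from the later sleep example: mean 7.2, mu = 8, s = 1.5, n = 25
print(one_sample_t(7.2, 8.0, 1.5, 25))   # about -2.667
```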

  19. t distribution • Gosset, who worked crunching numbers for the Guinness brewing company, had a few too many one day and stumbled upon the t distribution. Having enough wits about him to figure he might regret his actions the next day, he sent off his work to the journal under the pseudonym "Student". • However the work, like Guinness itself, has stood the test of time as a quality product • Student's t distribution • We need to use a different t distribution for each df (which depends on sample size)

  20. What's the difference? • Why a "t" now and not a "z"? • The difference involves using our sample standard deviation to estimate the population standard deviation • The sampling distribution of the standard deviation is positively skewed, so s tends to slightly underestimate the population value. • The standard error in the denominator will then also be smaller than it should be, which would lead to a larger value of z than there should be

  21. How is t different? • Since s is likely to be smaller than the appropriate value of σ, we are more likely to produce a larger z than if we had used the actual σ • So instead we use the t-distribution, which accounts for this • Because we are trying to estimate σ, how well s does depends on the sample size • The t-distribution is normal for an infinitely large sample size • With larger samples, our 'important' t-scores will be close to the 'important' z-scores (1.645, 1.96, 2.58)

  22. What about when n is not large? • Most of the time we do not have a very large n • With smaller samples, s is more likely to underestimate σ • positively skewed sampling distribution of s • As n gets larger and larger, the t distribution more closely approximates the normal distribution
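
A quick simulation sketch (assuming numpy; not part of the lecture) of the claim that s tends to underestimate σ when n is small:

```python
# With n = 5 draws from a normal population with sigma = 2.5, the sample
# standard deviation s averages below 2.5 and falls below it more often than not.
import numpy as np

rng = np.random.default_rng(0)
sigma_true = 2.5
n = 5
samples = rng.normal(loc=0.0, scale=sigma_true, size=(100_000, n))
s = samples.std(axis=1, ddof=1)   # one sample standard deviation per simulated sample

print(s.mean())                   # roughly 2.35, below sigma = 2.5
print(np.mean(s < sigma_true))    # roughly 0.6: s underestimates sigma more often than not
```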

  23. So now that we have something we call a t, how do we know what the critical value of t will be to reject the null hypothesis?

  24. Looking up t distributions • Table A.2 in book • Use both df and α to determine the critical value • α?? What the heck is α? • I mean, what the heck is alpha? • The alpha level is simply going to be whatever our criterion for statistical significance is going to be • Our alpha level is associated with a particular t-value (critical value) • It is also associated with making a type of error (but we'll get to that later)

  25. Looking up t distributions • Normally, if the printout of our results shows the p-value to be less than some criterion we've set up (e.g. .05), we reject the null hypothesis • But I don't have a printout! • You calculate the t-score. If the t-score is beyond the critical t-value listed in the table that is associated with the .05 or .01 alpha level (you get to pick your cutoff) for a one- or two-tailed test, you reject the null hypothesis • Saves having to tabulate a t-score for every possible p-value like we did for the normal distribution • Note: you would get a p-value from a stats program and should report it • More info is better
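
The two decision rules described above (compare t to the tabled critical value, or compare the p-value to alpha) always agree; here is a hypothetical scipy sketch with made-up numbers, not values from the lecture:

```python
from scipy.stats import t as t_dist

t_score, df, alpha = 2.30, 20, 0.05           # made-up values for illustration

t_critical = t_dist.ppf(1 - alpha / 2, df)    # two-tailed critical value, ~2.086
p_value = 2 * t_dist.sf(abs(t_score), df)     # two-tailed p-value, ~0.032

print(abs(t_score) > t_critical)              # True  -> reject H0
print(p_value < alpha)                        # True  -> same decision either way
```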

  26. Important point • We will talk more about this later, but alpha level and the p-value are not the same thing • We are entering dangerous territory!

  27. Comparing t to z • When α = .05, then z (two tails) = 1.96 • What are the comparable values of t with: 1 df, 3 df, 5 df, 15 df, 30 df, 60 df, 120 df? • Note your table: As with the z-test we performed, t-tests can be one-tailed or two
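
One way to fill in that comparison without the book's Table A.2 (a scipy sketch, not part of the lecture):

```python
# Two-tailed critical t-values at alpha = .05 for the df listed on the slide
from scipy.stats import t as t_dist

for df in [1, 3, 5, 15, 30, 60, 120]:
    print(df, round(t_dist.ppf(0.975, df), 3))
# 12.706, 3.182, 2.571, 2.131, 2.042, 2.000, 1.980: closing in on z = 1.96
```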

  28. One vs. two-tailed test • For which is it easier to obtain a 'significant' result: a .05 one-tailed or a .05 two-tailed test?

  29. Previous Example: New version • The UNT Daily claims in its recruiting literature that its students get an average of 8 hours of sleep a night (no σ reported) • We collected sleep data from 25 grad students; this sample has a mean of 7.2 hours of sleep, s = 1.5

  30. Plug in the numbers • t = (x̄ − μ) / (s / √n) = (7.2 − 8) / (1.5 / √25) = −0.8 / 0.3 = −2.667 • What else do we need to know?

  31. Critical value of t • Degrees of freedom (df) = n − 1 = 25 − 1 = 24 • Critical value at the .05 alpha level • t.05(24) = ±2.064 • In this situation we are looking for a value greater than +2.064 or less than –2.064 (a two-tailed test) • The t score observed from our data (−2.667) falls beyond the critical value • Therefore p < .05 • Conclusion?
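
The same worked example done in scipy rather than with the printed table (a sketch; the lecture itself only uses the table lookup):

```python
from math import sqrt
from scipy.stats import t as t_dist

mu, sample_mean, s, n = 8.0, 7.2, 1.5, 25
df = n - 1

t_score = (sample_mean - mu) / (s / sqrt(n))   # -2.667, as on slide 30
t_critical = t_dist.ppf(0.975, df)             # 2.064 at the .05 level, two-tailed
p_value = 2 * t_dist.sf(abs(t_score), df)      # ~0.013, consistent with p < .05

print(round(t_score, 3), round(t_critical, 3), round(p_value, 3))
```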

  32. Summary of hypothesis testing (or at least one way to go about it) 1) State the research question 2) State the null hypothesis, which simply gives us a value to test, and alternative hypothesis 3) Construct a sampling distribution based on the null hypothesis and locate region of rejection (i.e. find the critical value on your table) 4) Calculate the test statistic (t) and see where it falls along the distribution in relation to the critical value (tcv) 5) Reach a decision and state your conclusion
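
A hypothetical helper (names and structure are mine, not the lecture's) that wraps steps 3 through 5 for a two-tailed one-sample t-test from summary statistics:

```python
from math import sqrt
from scipy.stats import t as t_dist

def one_sample_t_test(sample_mean, s, n, mu_null, alpha=0.05):
    df = n - 1
    t_score = (sample_mean - mu_null) / (s / sqrt(n))  # step 4: test statistic
    t_cv = t_dist.ppf(1 - alpha / 2, df)               # step 3: critical value / region of rejection
    p_value = 2 * t_dist.sf(abs(t_score), df)
    reject = abs(t_score) > t_cv                       # step 5: decision
    return t_score, t_cv, p_value, reject

# The sleep example again: reject H0 at alpha = .05
print(one_sample_t_test(7.2, 1.5, 25, 8.0))
```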

  33. Your turn... The average grade from year to year in statistics courses at UNT is an 81. This year the stats students (200 thus far) have an average of 83 w/ s = 10. Is this unusual? tcv = ? p = ? t = ? Conclusion?
