
CHAPTER 7




  1. CHAPTER 7 Procedures for Estimating Reliability

  2. *TYPES OF RELIABILITY

  3. Procedures for Estimating/Calculating Reliability • Procedures Requiring 2 Test Administrations • Procedures Requiring 1 Test Administration

  4. Procedures for Estimating Reliability • *Procedures Requiring Two (2) Test Administrations • 1. Test-Retest Reliability Method measures Stability. • 2. Parallel (Alternate/Equivalent) Forms Reliability Method measures Equivalence. • 3. Test-Retest with Alternate Forms Reliability Method measures Stability and Equivalence.

  5. Procedures Requiring 2 Test Administrations • 1. Test-Retest Reliability Method The same test is administered twice to the same group of participants, and the two sets of scores are correlated with each other. The correlation coefficient (r) between the two sets of scores is called the coefficient of stability. The problem with this method is time sampling: factors related to the passage of time become sources of measurement error, e.g., changes in exam conditions such as noise or weather, and changes in the examinees such as illness, fatigue, worry, or mood.

  6. How to Measure Test-Retest Reliability • Class IQ Scores • Students: X (first time) / Y (second time) • John 125 / 120 • Jo 110 / 112 • Mary 130 / 128 • Kathy 122 / 120 • David 115 / 120 • r(first time · second time) = coefficient of stability
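The correlation itself is just a Pearson r between the two score columns. A minimal Python sketch using the slide's illustrative data (the helper name pearson_r is ours, not from the text):

```python
# Pearson r between two administrations of the same test (test-retest).
# Scores are the illustrative class IQ data from the slide.
from statistics import mean, stdev

def pearson_r(x, y):
    """Sample Pearson correlation between paired score lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

first_time  = [125, 110, 130, 122, 115]  # X: first administration
second_time = [120, 112, 128, 120, 120]  # Y: second administration
print(round(pearson_r(first_time, second_time), 3))  # coefficient of stability
```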

  7. Procedures Requiring 2 Test Administrations • 2. Parallel (Alternate) Forms Reliability Method Different forms of the same test are given to the same group of participants, and the two sets of scores are correlated. The correlation coefficient (r) between the two sets of scores is called the coefficient of equivalence.

  8. How to Measure Parallel Forms Reliability • Class Test Scores • Students: X (Form A) / Y (Form B) • John 95 / 90 • Jo 80 / 85 • Mary 78 / 82 • Kathy 82 / 88 • David 75 / 72 • r(Form A · Form B) = coefficient of equivalence
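The computation is identical to the test-retest case; reusing the hypothetical pearson_r helper sketched above with the slide's Form A/Form B scores:

```python
form_a = [95, 80, 78, 82, 75]  # X: Form A scores from the slide
form_b = [90, 85, 82, 88, 72]  # Y: Form B scores from the slide
print(round(pearson_r(form_a, form_b), 3))  # coefficient of equivalence
```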

  9. Procedures Requiring 2 Test Administrations • 3. Test-Retest with Alternate Forms Reliability Method • It is a combination of the test-retest and alternate-forms reliability methods. • On Monday, you administer Form A to the first half of the group and Form B to the second half. • On Friday, you administer Form B to the first half of the group and Form A to the second half. • The correlation coefficient (r) between the two sets of scores is called the coefficient of stability and equivalence.

  10. Procedures Requiring 1 Test Administration • A. Internal Consistency Reliability (ICR) • Examines the unidimensional nature of a set of items in a test; that is, how unified the items are in a test or an assessment. • Ex. If we administer a 100-item personality test, we want the items to relate to one another and to reflect the same construct (personality). • This unity among the items is called "item homogeneity."

  11. Procedures for Estimating Reliability • *Procedures Requiring One (1) Test Administration • A. Internal Consistency Reliability • B. Inter-Rater Reliability

  12. A. Internal Consistency Reliability (ICR) • *4 Different ways to measure ICR • 1. Guttman Split-Half Reliability Method (with the Spearman-Brown prophecy formula) • 2. Cronbach's Alpha Method • 3. Kuder-Richardson Method • 4. Hoyt's Method • These are different statistical procedures for calculating the reliability of a test.

  13. Procedures Requiring 1 Test Administration • A. Internal Consistency Reliability (ICR) • 1. Guttman Split-Half Reliability Method (most popular), usually used for dichotomously scored exams. • First, administer the test; then divide the test items into 2 subtests (there are four popular methods); then find the correlation between the 2 subtests and place it in the formula.

  14. 1. Split Half Reliability Method
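The formula image on this slide did not survive the transcript; the standard Spearman-Brown split-half correction it presumably showed is:

```latex
r_{xx'} = \frac{2\,r_{\text{half}}}{1 + r_{\text{half}}}
\quad\text{where } r_{\text{half}} \text{ is the correlation between the two half-tests.}
```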

  15. 1. Split Half Reliability Method • *The 4 popular methods are: • 1. Assign all odd-numbered items to form 1 and all even-numbered items to form 2. • 2. Rank order the items in terms of their difficulty levels (p-values) based on the responses of the examinees; then assign items with odd-numbered ranks to form 1 and those with even-numbered ranks to form 2.

  16. 1. Split Half Reliability Method • The four popular methods are: • 3. Randomly assign items to the two half-test forms. • 4. Assign items to half-test forms so that the forms are "matched" in content, e.g., if there are 6 items on reliability, each half will get 3.

  17. 1. Split Half Reliability Method • A high split-half reliability coefficient (e.g., >0.90) indicates a homogeneous test.

  18. 1. Split Half Reliability Method • *Use the split-half reliability method to calculate the reliability estimate of a test when the correlation between the 2 halves of the test is 0.25.
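A worked solution using the Spearman-Brown correction above:

```latex
r_{xx'} = \frac{2(0.25)}{1 + 0.25} = \frac{0.50}{1.25} = 0.40
```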

  19. 1. Split Half Reliability Method

  20. 1. Split Half Reliability Method • A = X and B = Y (the two half-tests serve as X and Y in the correlation formula).

  21. Procedures Requiring 1 Test Administration • A. Internal Consistency Reliability (ICR) • 2. Cronbach's Alpha Method (used for a wide range of scoring, such as non-dichotomously and dichotomously scored exams). • Cronbach's alpha (α) is a preferred statistic. • Lee Cronbach

  22. Procedures Requiring 1 Test Administration

  23. Cronbach's α for composite tests • k is the number of tests/subtests.

  24. A. Internal Consistency Reliability (ICR) • 2. *Cronbach's Alpha Method (Coefficient α is a preferred statistic) • Ex. Suppose that the examinees are tested on 4 essay items and the maximum score for each is 10 points. The variances for the items are as follows: σ²1 = 9, σ²2 = 4.8, σ²3 = 10.2, and σ²4 = 16. If the total score variance is σ²X = 100, use Cronbach's Alpha Method to calculate the internal consistency of this test. A high α coefficient (e.g., >0.90) indicates a homogeneous test.
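A worked solution using the standard Cronbach's alpha formula (k = number of items):

```latex
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_i \sigma^2_i}{\sigma^2_X}\right)
       = \frac{4}{3}\left(1 - \frac{9 + 4.8 + 10.2 + 16}{100}\right)
       = \frac{4}{3}(1 - 0.40) = 0.80
```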

  25. Cronbach’s Alpha Method

  26. Procedures Requiring 1 Test Administration • A. Internal Consistency Reliability (ICR) • 3. Kuder-Richardson Method • *The Kuder-Richardson Formula 20 (KR-20), first published in 1937, is a measure of internal consistency reliability for measures with dichotomous choices. It is analogous to Cronbach's α, except that Cronbach's α is also used for non-dichotomous tests. For a dichotomous item, pq = σ²i. A high KR-20 coefficient (e.g., >0.90) indicates a homogeneous test.
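A minimal Python sketch of KR-20, computed as (k/(k−1))(1 − Σpq/σ²X); the function name and the small response matrix are illustrative, not from the text:

```python
# KR-20 for a 0/1 item-response matrix (rows = examinees, columns = items).
def kr20(responses):
    k = len(responses[0])                          # number of items
    n = len(responses)                             # number of examinees
    totals = [sum(row) for row in responses]       # total score per examinee
    mean_t = sum(totals) / n
    # Population variance of total scores, consistent with the pq item variances.
    var_total = sum((t - mean_t) ** 2 for t in totals) / n
    sum_pq = 0.0
    for i in range(k):
        p = sum(row[i] for row in responses) / n   # proportion answering item i correctly
        sum_pq += p * (1 - p)                      # dichotomous item variance: pq
    return (k / (k - 1)) * (1 - sum_pq / var_total)

data = [[1, 1, 0, 1],
        [1, 0, 0, 1],
        [0, 1, 1, 1],
        [1, 1, 1, 1],
        [0, 0, 0, 1]]
print(round(kr20(data), 3))
```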

  27. *Procedures Requiring 1 Test Administration

  28. Procedures Requiring 1 Test Administration

  29. 3. *Kuder-Richardson Method (KR-20 and KR-21) • See Table 7.1 or the data on p. 136 (next).

  30. Variance = square of the standard deviation = 4.08

  31. Procedures Requiring 1 Test Administration • A. Internal Consistency Reliability (ICR) • 3. Kuder-Richardson Method (KR-21) It is used only with dichotomously scored items. It does not require computing each item's variance; you compute the variance once for all items (the total test score variance σ²X). See Table 7.1 for the standard deviation and variance for all items. • It assumes all items are equal in difficulty (see the formula below).
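The KR-21 formula image did not survive the transcript; the standard formula, which needs only the number of items k, the test mean M, and the total score variance σ²X (under the equal-difficulty assumption), is:

```latex
KR\text{-}21 = \frac{k}{k-1}\left(1 - \frac{M\,(k - M)}{k\,\sigma^2_X}\right)
```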

  32. Procedures Requiring 1 Test Administration

  33. Procedures Requiring 1 Test Administration • A. Internal Consistency Reliability (ICR) • 4. *Hoyt's (1941) Method • Hoyt used ANOVA to obtain the variances, or mean squares (MS), used to calculate Hoyt's coefficient. • MS = σ² = S² = variance

  34. Procedures Requiring 1 Test Administration

  35. 4. *Hoyt’s (1941) MethodMS person MS withinMS items MS betweenMS residual has it’s own calculations, it is not =MS total

  36. Procedures Requiring 1 Test Administration • B. Inter-Rater Reliability It is a measure of consistency from rater to rater, i.e., a measure of agreement between the raters.

  37. Procedures Requiring 1 Test Administration • B. Inter-Rater Reliability • Items: Rater 1 / Rater 2 • 1: 4 / 3 • 2: 3 / 5 • 3: 5 / 5 • 4: 4 / 2 • 5: 1 / 2 • First compute r(rater1 · rater2), then multiply by 100.

  38. Procedures Requiring 1 Test Administration • B. Inter-Rater Reliability • More than 2 raters: • Raters 1, 2, and 3 • Calculate r for 1 & 2 = .6 • Calculate r for 1 & 3 = .7 • Calculate r for 2 & 3 = .8 • Mean: µ = .7 × 100 = 70% (see the sketch below)
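A minimal Python sketch of this pairwise-average procedure, reusing the hypothetical pearson_r helper from the test-retest sketch and the two-rater data from the previous slide:

```python
from itertools import combinations

# Ratings from slide 37; with more raters, just add more entries.
ratings = {
    "rater1": [4, 3, 5, 4, 1],
    "rater2": [3, 5, 5, 2, 2],
}

# Average the Pearson r over every pair of raters, then express as a percentage.
pairs = list(combinations(ratings, 2))
mean_r = sum(pearson_r(ratings[a], ratings[b]) for a, b in pairs) / len(pairs)
print(f"inter-rater reliability: {mean_r * 100:.0f}%")
```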

  39. *Factors that Affect Reliability Coefficients • 1. Group Homogeneity • 2. Test length • 3. Time limit

  40. *Factors that Affect Reliability Coefficients • 1. Group Homogeneity If a sample of examinees is highly homogeneous on the construct being measured, the reliability estimate will be lower than if the sample were more heterogeneous. • 2. Test length Longer tests are more reliable than shorter tests. The effect of changing test length can be estimated with the Spearman-Brown prophecy formula (shown below). • 3. Time limit When a test has a rigid time limit, some examinees finish but others do not; this artificially inflates the test's reliability coefficient.
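The general Spearman-Brown prophecy formula for a test lengthened by a factor n (the split-half correction earlier is the n = 2 special case):

```latex
r_{\text{new}} = \frac{n\,r_{xx'}}{1 + (n - 1)\,r_{xx'}}
```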

  41. Reporting Reliability Data According to the Standards for Educational and Psychological Testing • 1. Results of different reliability studies should be reported to take into account the different sources of measurement error that are most relevant to score use. • 2. The standard error of measurement and score bands for different confidence intervals should accompany each reliability estimate. • 3. Reliability and standard error estimates should be reported for subtest scores as well as the total test score.

  42. Reporting Reliability Data • 4. Procedures and samples used in reliability studies should be sufficiently described to permit users to determine the similarity between the conditions of the reliability study and their local situations. • 5. When a test is normally used for a particular population of examinees (e.g., those within a grade level or those who have a particular handicap), reliability estimates and standard errors of measurement should be reported separately for such specialized populations.

  43. Reporting Reliability Data • 6. When test scores are used primarily for describing or comparing group performance, reliability and standard errors of measurement for aggregated observations should be reported. • 7. If standard errors of measurement are estimated by using a model such as the binomial model, this should be clearly indicated; otherwise, users will probably assume that the classical standard error of measurement is being reported. A binomial model is characterized by trials that end in either success (heads) or failure (tails).

  44. CHAPTER 8 Introduction to Generalizability Theory • Cronbach (1963)

  45. CHAPTER 8 Introduction to Generalizability Theory, Cronbach (1963) • Generalizability is another way to calculate the reliability of a test, by using ANOVA. • Generalizability refers to the degree to which a particular set of measurements of an examinee generalizes to a more extensive set of measurements of that examinee (just like conducting inferential research).

  46. Introduction to Generalizability • Generalizability Coefficient • FYI: In Classical True Score Theory, reliability was defined as the ratio of true score variance to observed score variance: Reliability = σ²T / (σ²T + σ²E). • An examinee's true score is defined as the average (mean) of a large number of strictly parallel measurements, and the true score variance σ²T is defined as the variance of these averages. • Reliability coefficient: ρX1X2 = σ²T / σ²X

  47. Introduction to Generalizability • *Generalizability Coefficient • In generalizability theory, an examinee's universe score is defined as the average (mean) of the measurements in the universe of generalization. (The universe score is the counterpart of the true score in classical test theory.)

  48. Introduction to Generalizability • *Generalizability Coefficient • The generalizability coefficient ρ is defined as the ratio of universe score variance (σ²μ) to expected observed score variance (Eσ²X): • ρ = σ²μ / Eσ²X • Ex. If the expected observed score variance Eσ²X = 10 and the universe score variance σ²μ = 5, then the generalizability coefficient is 5/10 = 0.5.
