Item Response Theory and Longitudinal Modeling: The Real World is Less Complicated than We Fear
Marty McCall, Northwest Evaluation Association
Presented to the MSDE/MARCES Conference: ASSESSING & MODELING COGNITIVE DEVELOPMENT IN SCHOOL: INTELLECTUAL GROWTH AND STANDARD SETTING
October 19, 2006
Examining constructs through vertical scales • What are vertical scales? • Who uses them and why? • Who doesn’t use them and why not?
What are vertical scales? • In the IRT context, they are: • scales measuring a construct from the easiest through the most difficult tasks • equal interval scales spanning ages or grades • also called developmental scales • a common framework for measurement of a construct over time
Why use vertical scales? • To model growth: Tests that are vertically scaled are intended to support valid inferences regarding growth over time. --Patz, Yao, Chia, Lewis, & Hoskins (CTB/McGraw)
Why use vertical scales? • To study cognitive changes: When people acquire new skills, they are changing in fundamental, interesting ways. By being able to measure change over time, it is possible to map phenomena at the heart of the educational enterprise. --John Willett
Who uses vertical scales? • CTB McGraw • TerraNova Series • Comprehensive Test of Basic Skills (CTBS) • California Achievement Test • Harcourt • Stanford Achievement Test • Metropolitan Achievement Test • Statewide NCLB tests • All states using CTB or Harcourt’s tests • Mississippi, North Carolina, Oregon, Idaho • Woodcock cognitive batteries
Development and use Note that many of these scales were developed prior to NCLB and before cognitive psychology had gained currency. Achievement tests began in an era of normative interpretation. Policymakers are now catching up to content and standards-based interpretations.
Assumptions – implicit and explicit • The construct is a unified continuum of learning culminating in mature expertise • Domain coverage areas are not necessarily statistical dimensions • Scale building models the sequence of skills and the relationship between them • The construct embodies a complex unidimensional ability
What? The construct embodies a complex unidimensional ability • A mature ability such as reading or solving algebra problems involves many component skills • The ability itself is unlike any of its component skills • Complex skills are emergent properties of simpler skills and in turn become components of still more complex skills
Who doesn’t use vertical scales? Why not? In recent years, there have been challenges to the validity of vertical scales. Much of this criticism comes from the viewpoint of standards-based tests, including those developed for NCLB purposes. Many critical studies use no data, or data simulated to exhibit multidimensionality.
Assumptions – implicit and explicit • Subject matter at each grade forms a unique construct described in content standards documents • Topics not explicitly covered in standards documents are not tested • Content categories represent independent or quasi-independent abilities
How assumptions affect vertical scaling issues • Cross-grade linking blocks detract from grade-specific content validity • Changes in content descriptions indicate differences in dimensionality for different grades • Vertical linking connects unlike constructs to a scale that may be mathematically tractable but lacks validity
Vertical scale critics ask: “How can you put unlike structures together and expect to get meaningful scores and coherent achievement standards?” Vertical scale proponents ask: “If you believe the constructs are different, how can you talk about change over time? Without growth modeling, how can you get coherent achievement standards?”
Criticism centers on two major issues • Linking error • Violations of dimensionality assumptions
Issue #1: Linking creates error There is some error associated with all measurement, but current methods of vertical scaling greatly minimize it. These methods include: --triangulation with multiple forms or common person links --comprehensive and well-distributed linking blocks --continuous adjacent linking --fixed parameter linking in adaptive context
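In the Rasch context, several of the linking methods above come down to estimating a single shift constant from a block of common items, since two Rasch calibrations of the same construct should differ only in their arbitrary scale origin. A minimal sketch of that mean/mean linking step (function names are illustrative, not NWEA's actual tooling):

```python
import numpy as np

def rasch_linking_shift(b_anchor_base, b_anchor_new):
    """Mean/mean linking constant for the Rasch model: the shift that
    places new-form difficulty estimates on the base-form scale."""
    return float(np.mean(b_anchor_base) - np.mean(b_anchor_new))

def link_to_base(b_new, shift):
    """Apply the shift to every new-form item difficulty."""
    return np.asarray(b_new, dtype=float) + shift
```

With comprehensive, well-distributed linking blocks, the shift computed from the anchors places the entire new form on the common vertical scale; triangulating across multiple forms or common-person links then gives a check on the linking error.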
How do people actually create and maintain vertical scales? • Harcourt – common person for SAT and comprehensive linking blocks • CTB – methods include concurrent calibration, non-equivalent anchor tests (NEAT), innovative linking methods • ETS (the king of NEAT) – also uses an integrated IRT method (von Davier & von Davier)
How do we do it? The scale establishment method is extensively described in Probability in the Measurement of Achievement by George Ingebo.
How do we do it? Extensive initial linking [Diagram: test forms A, B, C, and D connected through shared linking blocks 1–4]
Adaptive Continuous Vertical Linking [Diagram: item pools at Benchmark X and Benchmark X+1 linked continuously]
Issue #2: Dimensionality Reading and mathematics at grade 3 look very different from those subjects at grade 8. In addition, the curricular topics differ at each grade. How can they be on the same scale?
Study of Dimensionality: Research Questions 1. Does essential unidimensionality hold throughout the scale? 2. Do content areas within scales form statistical dimensions?
1. Does essential unidimensionality hold throughout the scale? • Examine a set of items that comprised forms for state tests in reading and mathematics in grades 3 through 8 • Use Yen’s Q3 statistic to assess dimensionality in an exploratory dimensionality study
Basic concept: When the assumption of unidimensionality is satisfied, responses exhibit local independence. That is, when the effects of theta are taken into account, correlation between responses is zero. Q3 is the correlation between residuals of response pairs.
d_ik is the residual: d_ik = u_ik − P_i(θ_k), where u_ik is the score of the kth examinee on the ith item, and P_i(θ_k) is the probability of a correct response as given in the Rasch model: P_i(θ_k) = exp(θ_k − b_i) / [1 + exp(θ_k − b_i)]
The correlation taken over examinees who have taken both item i and item j is: Q3_ij = r(d_i, d_j). Fisher’s r-to-z′ transformation, z = ½ ln[(1 + r) / (1 − r)], gives the correlations an approximately normal distribution. Q3 values tend to be slightly negative (Kingston & Dorans).
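As a minimal sketch of these computations (assuming item difficulties b and examinee abilities θ are already available from a Rasch calibration; names are illustrative):

```python
import numpy as np

def rasch_prob(theta, b):
    """Rasch probability of a correct response:
    P_i(theta) = exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def q3(u_i, u_j, theta, b_i, b_j):
    """Yen's Q3 for items i and j: the correlation of residuals
    d = u - P(theta), taken over examinees who answered both items."""
    d_i = u_i - rasch_prob(theta, b_i)
    d_j = u_j - rasch_prob(theta, b_j)
    return float(np.corrcoef(d_i, d_j)[0, 1])

def fisher_z(r):
    """Fisher's r-to-z transformation, giving the correlations
    an approximately normal distribution."""
    return 0.5 * np.log((1.0 + r) / (1.0 - r))
```

Under local independence, Q3 hovers near zero (slightly negative); clusters of large positive Q3 values among item pairs point to a shared secondary dimension.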
Pairs of responses from adaptive tests – NWEA’s Measures of Academic Progress • Over 49 million response pairs per subject
2. Do content areas within scales form statistical dimensions? Used method from Bejar (1980), “A procedure for investigating the unidimensionality of achievement tests based on item parameter estimates,” Journal of Educational Measurement, 17(4), 283–296. Calibrate each item twice: once using responses to all items on the test (the usual method), and again using only responses to items in the same goal area.
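One way to sketch the comparison step of Bejar's procedure (an illustrative reading of the method, not the paper's exact estimator): after the two calibrations, center both sets of difficulty estimates to remove the arbitrary scale origin, then look for items whose estimates disagree by more than noise. Systematic disagreement suggests the goal area behaves as a separate statistical dimension.

```python
import numpy as np

def bejar_compare(b_full, b_goal, flag_sd=2.0):
    """Compare two Rasch calibrations of the same items (after Bejar, 1980).

    b_full: difficulties estimated from responses to the whole test.
    b_goal: difficulties estimated from the goal-area items alone.
    Centers both sets (the Rasch scale origin is arbitrary), then flags
    items whose difference exceeds flag_sd standard deviations of the
    differences across items."""
    b_full = np.asarray(b_full, dtype=float)
    b_goal = np.asarray(b_goal, dtype=float)
    diff = (b_goal - b_goal.mean()) - (b_full - b_full.mean())
    flagged = np.abs(diff) > flag_sd * diff.std(ddof=1)
    return diff, flagged
```

If essential unidimensionality holds, the centered differences are all near zero and the two calibrations agree up to a scale shift.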
2. Do content areas within scales form statistical dimensions? Data are from a fixed-form statewide accountability test of reading and mathematics.
What we have found regarding skill development: • New topics build on earlier ones and show up statistically as part of the construct • Although they may not be specified in later standards, early topics and skills are embedded in later ones (e.g., phonemics, number sense) • Essential unidimensionality (Stout’s terminology) holds throughout the scale with minor dimensions of interest
Thank you for your attention.
Marty McCall
Northwest Evaluation Association
5885 SW Meadows Road, Suite 200
Lake Oswego, Oregon 97035-3256
Phone: 503-624-1951
FAX: 503-639-7873
Marty.McCall@nwea.org