
Using Quality Indicators to Identify Quality Research


Presentation Transcript


  1. Using Quality Indicators to Identify Quality Research Melody Tankersley, PhD Kent State University Bryan G. Cook, PhD University of Hawaii at Manoa

  2. The No Child Left Behind Act of 2001 stipulates that federally funded educational programs and practices must be grounded in scientifically based research "that involves the application of rigorous, systematic, and objective procedures to obtain reliable and valid knowledge relevant to education activities and programs" (NCLB; p. 126). This raises two questions:
  • How do we know if a research study involves rigorous, systematic, and objective procedures?
  • How do we use the results of such research studies to identify educational practices that are effective for improving student outcomes?

  3. How do we know if a research study involves rigorous, systematic, and objective procedures?
  • The CEC Division for Research sponsored prominent researchers to author papers proposing:
  • Parameters for establishing that reported research has been conducted with high quality (quality indicators)
  • Criteria for determining whether a practice has been studied sufficiently (enough high-quality research studies conducted on its effectiveness) and shown to improve student outcomes (effects are strong enough)
  Graham, S. (2005). Criteria for evidence-based practice in special education [special issue]. Exceptional Children, 71.

  4. Exceptional Children (2005), volume 71(2)
  • Group Experimental and Quasi-Experimental Research (Gersten, Fuchs, Compton, Coyne, Greenwood, & Innocenti)
  • Single-Subject Research (Horner, Carr, Halle, McGee, Odom, & Wolery)
  • Correlational Research (Thompson, Diamond, McWilliam, Snyder, & Snyder)
  • Qualitative Studies (Brantlinger, Jimenez, Klingner, Pugach, & Richardson)

  5. Quality Indicators (QIs) for Experimental (and Quasi-Experimental) Research
  • Describing Participants
  • Sufficient information about participants and interventionists, selection procedures, and comparability across conditions
  • Implementation of Intervention and Description of Comparison Conditions
  • Clear description of the intervention (and comparison conditions), with implementation fidelity assessed
  • Outcome Measures
  • Use of multiple measures at appropriate times
  • Data Analysis
  • Analysis techniques appropriate to the questions and unit of analysis, with effect sizes calculated

  6. QIs for Single-Subject Research
  • Description of Participants and Settings
  • Participants, selection procedures, and setting precisely described
  • Dependent Variables
  • Variables described with operational precision, quantifiable, measured repeatedly over time with interobserver agreement
  • Measurement technique valid and described with precision
  • Independent Variables
  • Described with precision, systematically manipulated; fidelity of implementation highly desirable
  • Baseline
  • Repeated measurement of the dependent variable with an established pattern to predict future performance
  • Described with precision
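  The interobserver agreement the QIs call for is, in its simplest interval-by-interval form, the proportion of observation intervals on which two independent observers record the same code. A minimal sketch (illustrative data only, not from the studies reviewed here):

```python
def interval_ioa(observer_a, observer_b):
    """Interval-by-interval interobserver agreement (IOA).

    Each argument is a list of 0/1 codes, one per observation
    interval (1 = target behavior observed). IOA is the proportion
    of intervals on which the two observers' codes match.
    """
    if len(observer_a) != len(observer_b):
        raise ValueError("Observers must code the same number of intervals")
    agreements = sum(a == b for a, b in zip(observer_a, observer_b))
    return agreements / len(observer_a)

# Two hypothetical observers coding 10 intervals of on-task behavior;
# they disagree on one interval, so IOA is 90%.
a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
b = [1, 1, 0, 1, 1, 1, 1, 0, 1, 1]
print(f"IOA = {interval_ioa(a, b):.0%}")  # IOA = 90%
```

  Researchers often report occurrence/nonoccurrence or point-by-point variants of this same agreements-over-intervals ratio; the computation above is the common baseline form.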

  7. QIs for Single-Subject Research (cont.)
  • Experimental Control/Internal Validity
  • At least 3 demonstrations of experimental effect, documenting a pattern of experimental control
  • Controls for common threats to internal validity
  • External Validity
  • Experimental effects across participants, settings, materials
  • Social Validity
  • DV and magnitude of change are socially important
  • Implementation of the IV is practical and cost-effective
  • Enhanced when the IV is implemented over time, by typical interventionists, in typical contexts

  8. Reviewing the QIs
  • Initial review
  • Conducted at the 2004 Office of Special Education Programs (OSEP) Research Project Directors' Meeting
  • Next step: piloting the quality indicators (QIs)
  • Applying the Experimental (Quasi-Experimental) QIs to two studies
  • Applying the Single-Subject QIs to two studies

  9. Applying Experimental QIs
  • Kamps, Kravits, Stolze, & Swaggart (1999)
  • Examined the effect of universal interventions (classroom management, social skills training, peer tutoring in reading) for students with and at risk for emotional and behavioral disorders in urban schools
  • Walker, Kavanagh, Stiller, Golly, Severson, & Feil (1998)
  • Exposed two cohorts of 46 kindergarten students at risk for antisocial behavior to a 3-month First Step to Success program consisting of three modules: screening, school intervention, and home intervention

  10. Results of Applying the Experimental QIs
  • Few experimental or quasi-experimental studies to choose from in the EBD literature
  • Neither study came close to meeting all 10 of the QIs (Kamps et al. = 3; Walker et al. = 5)

  11. Issues in Applying Experimental QIs
  • Rigorous requirements of some aspects of QIs
  • Completeness of reporting research
  • Clarity of requirements of some QIs

  12. Issues in Applying Experimental QIs (cont.)
  • Rigorous requirements of some aspects of QIs
  • "Interventions should be clearly described on a number of salient dimensions, including conceptual underpinnings; detailed instructional procedures; teacher actions and language (e.g., modeling, corrective feedback); use of instructional materials (e.g., task difficulty, example selection); and student behaviors (e.g., what students are required to do and say)" (p. 156).
  • Conceptual underpinnings: not described, or described for some but not all elements of a multi-faceted IV
  • Intervention description: references for implementation provided; appendix for instructional procedures

  13. Issues in Applying Experimental QIs (cont.)
  • Completeness of reporting research
  • Rationale for data analysis techniques
  • Comparison of demographics of experimental and control groups
  • Description of comparison conditions
  • An effect size not calculated, but sufficient data reported for calculating an effect size

  14. Issues in Applying Experimental QIs (cont.)
  • Completeness of reporting research (cont.)
  • Examples
  • Rationale for data analysis technique: "A brief rationale for major analyses and for selected secondary analyses is critical" (p. 161).
  • Not typically done. If an acceptable analysis is used, must it be justified?
  • Effect size: "Statistical analyses are accompanied with presentation of effect sizes" (p. 161).
  • An effect size was not calculated for one study, but sufficient data were reported for calculating an effect size.
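  When a study reports group means, standard deviations, and sample sizes but no effect size, a standardized mean difference (Cohen's d with a pooled standard deviation) can be computed from those reported statistics. A minimal sketch, using made-up numbers for illustration:

```python
import math

def cohens_d(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardized mean difference (Cohen's d) using the pooled SD
    of the treatment (t) and comparison (c) groups."""
    pooled_sd = math.sqrt(
        ((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2)
    )
    return (mean_t - mean_c) / pooled_sd

# Hypothetical posttest statistics for treatment and comparison groups
d = cohens_d(mean_t=52.0, sd_t=10.0, n_t=25, mean_c=46.0, sd_c=10.0, n_c=25)
print(f"d = {d:.2f}")  # d = 0.60
```

  This is why "sufficient data reported" matters: readers can recover the effect size only if the means, SDs, and ns for each condition appear in the report.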

  15. Issues in Applying Experimental QIs (cont.)
  • Clarity of requirements of some QIs
  • "For some studies, it may be appropriate to collect data at only pre- and posttest. In many cases, however, researchers should consider collecting data at multiple points across the course of the study, including follow-up measures" (p. 160).
  • Must pre- and posttest measures be analyzed? This is implied when advocating the use of follow-up measures, but it is not stated unequivocally.
  • "At a minimum, researchers should examine comparison groups to determine what instructional events are occurring, what texts are being used, and what professional development and support is provided to teachers" (p. 158).
  • Must all components (instructional events, texts, professional development, support) be described?

  16. Issues in Applying Experimental QIs (cont.)
  • Clarity of requirements of some QIs (cont.)
  • "The optimal method for assigning participants to study conditions is through random assignment, although in some situations, this is impossible. It is then the researchers' responsibility to describe how participants were assigned to study conditions (convenience, similar classrooms, preschool programs in comparable districts, etc.)" (p. 155).
  • Under what conditions is non-random assignment of participants to groups justifiable? When random assignment is not feasible, what procedures for assigning participants to groups are acceptable?

  17. Issues in Applying Experimental QIs (cont.)
  • Clarity of requirements of some QIs (cont.)
  • Other questions:
  • Is there a minimum level of implementation fidelity that is acceptable?
  • What is meant by a broad and robust outcome measure that is not tightly aligned with the dependent variable?

  18. Applying Single-Subject QIs
  • Hall, Lund, & Jackson (1968)
  • Investigated the effects of contingent teacher attention on the study behavior of six students nominated for participation by their teachers for disruptive and dawdling behaviors
  • Sutherland, Wehby, & Copeland (2000)
  • Investigated the effects of a teacher's behavior-specific praise on the on-task behavior of nine students identified with emotional and behavioral disorders

  19. Results of Applying Single-Subject QIs
  • The literature base of interventions addressing behavioral concerns using single-subject research methods is robust
  • Neither Hall et al. nor Sutherland et al. met all of the QIs; each met only one of the seven

  20. Issues in Applying Single-Subject QIs
  • Rigorous requirements of some aspects of QIs
  • Completeness of reporting research
  • Clarity of requirements of some QIs

  21. Issues in Applying Single-Subject QIs
  • Rigorous requirements of some aspects of QIs
  • Cost-effectiveness and practicality of the IV
  • "Implementation of the independent variable is practical and cost effective" (p. 174).
  • Instruments and processes used to identify disability
  • "…operational participant descriptions of individuals with a disability would require that the specific disability…and the specific instrument and process used to determine their disability…be identified. Global descriptions such as identifying participants as having developmental disabilities would be insufficient" (p. 167).

  22. Issues in Applying Single-Subject QIs (cont.)
  • Completeness of reporting research
  • "The process for selecting participants is described with replicable precision" (p. 174).
  • QIs prescribe integrating information about the "level, trend, and variability of performance occurring during baseline and intervention conditions" (p. 171) to determine the extent to which a functional relationship between the dependent and independent variables exists.
  • Can it be estimated through visual inspection of the graphed data? Must the researchers discuss experimental control?

  23. Issues in Applying Single-Subject QIs (cont.)
  • Clarity of requirements of some QIs
  • Discrepancies between what is listed in the tables identifying QIs and the text descriptions of them
  • Measurement of intervention implementation fidelity
  • Table: "highly desirable" (p. 174)
  • Text: "documentation of adequate implementation fidelity is expected…" (p. 168)
  • Subjective terminology: sufficient detail, replicable precision, critical features, cost-effectiveness, socially important

  24. Issues in Applying Single-Subject QIs (cont.)
  • Clarity of requirements of some QIs (cont.)
  • Equivocal guidelines: 5 or more data points for the baseline condition (not specified for other conditions), "but fewer data points are acceptable in specific cases" (p. 168)
  • "Burden of proof"
  • Are researchers required to discuss each point of analysis, or can readers determine whether experimental control exists based on their own visual inspection of graphed data?
  • Must researchers document the social validity of the dependent variable?
  • If fewer than five data points exist for a certain phase, is it the responsibility of the authors to justify having fewer than five?

  25. Considerations for Applying QIs
  We agree with the spirit of all the QIs; in applying them, however, we have a few points for consideration…
  • Identifying QIs vs. operationally defining QIs
  • Published QIs were intended as conceptual guideposts rather than as fully defined, ready-to-apply indicators
  • Strictly applying these QIs may have been an unfair test
  • Restrictions of dissemination outlets
  • Page restrictions of journals
  • "Readability" of reports

  26. Considerations for Applying QIs (cont.)
  • Fundamental (rather than preferable) QIs for determining effectiveness of practices
  • Indicators should reflect all, but only, the most essential methodological considerations
  • Requirements for elements that may enhance a study, but do not affect the trustworthiness of the research, should be reduced

  27. Considerations for Applying QIs (cont.)
  • Retro-fitting
  • Applying criteria determined today to previously reported studies will not allow all good studies to pass through
  • Grandfather criteria for previously reported research?
  • New research held to contemporary requirements (e.g., APA effect size reporting)

  28. Recommendations
  • Operationally define the QIs, perhaps with exemplars and non-exemplars for illustration
  • Continue to pilot-test the utility of the QIs, with revisions as needed
  • Finalize the QIs with reliable interobserver agreement established
  • Determine fundamental QIs for previously conducted research (to eliminate retro-fitting issues)
  • Acquire professional support for the QIs so that future research (and publishers) will use them in reports

  29. Essential Quality Indicators for Group Experimental and Quasi-Experimental Research Proposed by Gersten et al. (2005)

  30. Essential Quality Indicators for Single-Subject Research Proposed by Horner et al. (2005)
