A Procedure for Assessing Fidelity of Implementation in Experiments Testing Educational Interventions


Presentation Transcript


  1. A Procedure for Assessing Fidelity of Implementation in Experiments Testing Educational Interventions Michael C. Nelson¹, David S. Cordray¹, Chris S. Hulleman², Catherine L. Darrow¹, & Evan C. Sommer¹ (¹Vanderbilt University, ²James Madison University)

  2. Purposes of Paper: • To argue for a model-based approach for assessing implementation fidelity • To provide a template for assessing implementation fidelity that can be used by intervention developers, researchers, and implementers as a standard approach.

  3. Presentation Outline • What is implementation fidelity? • Why assess implementation fidelity? • A five-step process for assessing implementation fidelity • Concluding points

  4. A Note on Examples: • Examples are drawn from our review of (mainly) elementary math intervention studies, which we are currently deepening and expanding to other subject areas • Examples for many areas are imperfect or lacking • As our argument depends on having good examples of the most complicated cases, we would appreciate being referred to any good examples (michael.nelson@vanderbilt.edu)

  5. What Is Implementation Fidelity?

  6. What is implementation fidelity? • Implementation fidelity is the extent to which the intervention has been implemented as expected • Assessing fidelity raises the question: fidelity to what? • Our answer: fidelity to the intervention model • This view has its background in “theory-based evaluation” (e.g., Chen, 1990; Donaldson & Lipsey, 2006)

  7. Why Assess Implementation Fidelity?

  8. Fidelity vs. the Black Box The intent-to-treat (ITT) experiment identifies the effects of causes: [Diagram: Assignment to Condition → Treatment “Black Box” (Intervention’s Causal Processes) → Outcome Measures → Outcomes, and Assignment to Condition → Control “Black Box” (Business-As-Usual Causal Processes) → Outcome Measures → Outcomes]

  9. Fidelity vs. the Black Box …while fidelity assessment “opens up” the black box to explain the effects of causes: [Diagram: Assignment to Condition → Intervention “Black Box” (Intervention Component → Mediator → Outcome), with Fidelity Measure 1, Fidelity Measure 2, and an Outcome Measure attached to the component, mediator, and outcome, respectively]

  10–13. Fidelity assessment allows us to: • Determine the extent of construct validity and external validity, contributing to the generalizability of results • For significant results, describe what exactly did work (the actual difference between Tx and C) • For non-significant results, help explain why, beyond simply concluding that “the intervention doesn’t work” • Potentially improve understanding of results and of future implementations

  14–17. Limitations of Fidelity Assessment: • Not a causal analysis, but it does provide evidence for answering important questions • Addresses secondary questions • The field is still developing and validating methods and tools for measurement and analysis • Cannot be a single, one-size-fits-all approach

  18. A Five-Step Process for Assessing Fidelity of Implementation • Specify the intervention model • Identify fidelity indices • Determine index reliability and validity • Combine fidelity indices* • Link fidelity measures to outcomes* (*Not always possible or necessary)

  19. Step 1: Specify the Intervention Model

  20–22. The Change Model • A hypothetical set of constructs, and relationships among them, representing the core components of the intervention and the causal processes that produce outcomes • Should be based on theory, empirical findings, discussion with the developer, and actual implementation • Start with the change model because it is abstract enough to be generalizable, yet it specifies the important components and processes, thus guiding operationalization, measurement, and analysis

  23. Change Model: Generic Example [Diagram: Intervention Component → Mediator → Outcome; e.g., teacher training in the use of educational software → teachers assist students in using the software → improved student learning]

  24. Change Model: Project LINCS (adapted from Swafford, Jones, and Thornton, 1997) [Diagram: instruction in geometry → increase in teacher knowledge of geometry, and instruction in student cognition of geometry → increase in teacher knowledge of student cognition; both lead to improved teacher instructional practice]

  25–27. The Logic Model • The set of resources and activities that operationalize the change model for a particular implementation • A roadmap for implementation • Derived from the change model with input from developer and other sources (literature, implementers, etc.)

  28. Logic Model: Project LINCS (adapted from Swafford, Jones, and Thornton, 1997) [Diagram: a geometry content course operationalizes instruction in geometry, increasing teacher knowledge of geometry; a research seminar on the van Hiele model operationalizes instruction in student cognition of geometry, increasing teacher knowledge of student cognition; both feed improved teacher instructional practice, reflected in what is taught, how it is taught, and the characteristics teachers display]

  29. A Note on Models and Analysis: Recall that one can specify models for both the treatment and control conditions. The “true” cause is the difference between conditions, as reflected in the model for each. Using the change model as a guide, one may design equivalent indices for each condition to determine the relative strength of the intervention (Achieved Relative Strength, ARS). This approach will be discussed in the next presentation (Hulleman).
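
To make the ARS idea concrete, here is a minimal sketch that treats achieved relative strength as an effect-size-like quantity: the difference in mean fidelity between the treatment and control conditions, divided by a pooled standard deviation. The function name and the illustrative scores are our own assumptions, not material from the presentation; the exact estimator is developed in the companion presentation (Hulleman).

```python
import numpy as np

def achieved_relative_strength(tx_fidelity, c_fidelity):
    """Standardized treatment-control difference in a fidelity index.

    Illustrative sketch only: ARS is computed here as the difference in
    mean fidelity between conditions divided by the pooled standard
    deviation, assuming equivalent indices were scored in both conditions.
    """
    tx = np.asarray(tx_fidelity, dtype=float)
    c = np.asarray(c_fidelity, dtype=float)
    pooled_var = ((len(tx) - 1) * tx.var(ddof=1) + (len(c) - 1) * c.var(ddof=1)) / (len(tx) + len(c) - 2)
    return (tx.mean() - c.mean()) / np.sqrt(pooled_var)

# Hypothetical classroom-level fidelity scores on a 0-1 scale
tx_scores = [0.85, 0.90, 0.70, 0.95, 0.80]
c_scores = [0.20, 0.35, 0.25, 0.40, 0.30]
print(f"Achieved relative strength: {achieved_relative_strength(tx_scores, c_scores):.2f}")
```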

  30. Steps 2 and 3: Develop Reliable and Valid Fidelity Indices and Apply Them to the Model

  31. Examples of Fidelity Indices • Self-report surveys • Interviews • Participant logs • Observations • Examination of permanent products created during the implementation process

  32–33. Index Reliability and Validity • Both are reported inconsistently • Report reliability at a minimum, because unreliable indices cannot be valid • Validity is probably best established from pre-existing information or side studies • We should be as careful in measuring the cause as we are in measuring its effects!
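
As one way of reporting index reliability, the sketch below computes Cohen's kappa (chance-corrected inter-rater agreement) for a dichotomous observation checklist scored by two raters. The choice of kappa, the function name, and the ratings are illustrative assumptions on our part; the presentation does not prescribe a particular reliability statistic.

```python
import numpy as np

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters on a categorical index."""
    a = np.asarray(rater_a)
    b = np.asarray(rater_b)
    observed = np.mean(a == b)  # raw percent agreement
    expected = sum(np.mean(a == k) * np.mean(b == k) for k in np.union1d(a, b))
    return (observed - expected) / (1 - expected)

# Hypothetical: two observers rate ten lessons on whether a core
# component was implemented (1) or not (0)
rater_1 = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]
rater_2 = [1, 1, 0, 1, 0, 0, 1, 1, 1, 1]
print(f"Cohen's kappa: {cohens_kappa(rater_1, rater_2):.2f}")
```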

  34–36. Selecting Indices • Guided foremost by the change model: identify as core components those that differ substantially between conditions and on which the causal processes are thought to depend • Use the logic model to determine fidelity indicator(s) for each change-model component • Base the number and type of indices on the nature and importance of each component

  37. Selecting Indices: Project LINCS (adapted from Swafford, Jones, and Thornton, 1997)

  38. Step 4: Combining Fidelity Indices*

  39–40. Why Combine Indices? • *May not be possible for the simplest models • *Depends on the particular questions asked • Combine within a component to assess fidelity to a construct • Combine across components to assess a phase of implementation • Combine across the model to characterize overall fidelity and facilitate comparisons across studies

  41–43. Some Approaches to Combining Indices: • Total percentage of steps implemented • Average number of steps implemented HOWEVER: these approaches may underestimate or overestimate the importance of some components! • Weighting components based on the intervention model • Sensitivity analysis
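
Below is a minimal sketch of model-based weighting together with a crude sensitivity check. The component names, fidelity scores, and weights are hypothetical (the actual MAP weights shown on the next slide are not reproduced here); the point is only that a weighted composite can diverge from the unweighted mean, and that perturbing the weights reveals which components drive the index.

```python
import numpy as np

# Hypothetical component-level fidelity scores (each on a 0-1 scale) and
# model-based importance weights; neither comes from an actual study.
component_scores = {"training": 0.90, "software_use": 0.60, "coaching": 0.75}
model_weights = {"training": 0.25, "software_use": 0.50, "coaching": 0.25}

def weighted_fidelity(scores, weights):
    """Composite index: sum of component scores times normalized weights."""
    total = sum(weights.values())
    return sum(scores[k] * weights[k] / total for k in scores)

print(f"Unweighted mean fidelity: {np.mean(list(component_scores.values())):.2f}")
print(f"Model-weighted fidelity:  {weighted_fidelity(component_scores, model_weights):.2f}")

# Crude sensitivity analysis: bump each weight and see how much the
# composite moves, flagging components whose weighting drives the index.
for name in model_weights:
    bumped = dict(model_weights, **{name: model_weights[name] + 0.10})
    print(f"+0.10 weight on {name}: composite = {weighted_fidelity(component_scores, bumped):.2f}")
```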

  44. MAP Example: weighting of training sessions for the MAP intervention (Cordray et al., unpublished)

  45. Step 5: Linking Fidelity Measures to Outcomes*

  46. Linking Fidelity and Outcomes • *Not possible in (rare) cases of perfect fidelity (no covariation without variation) • *Depends on particular questions • Provide evidence supporting the model (or not) • Identify “weak links” in implementation • Point to opportunities for “boosting” strength • Identify incorrectly specified components of the model

  47. Assessment to Instruction (A2i) • Teacher use of web-based software for differentiation of reading instruction • Professional development → students use A2i and teachers use A2i recommendations for grouping and lesson planning → students improve learning • Measures: time teachers logged in, observation of instruction, pre/post reading assessments (Connor, Morrison, Fishman, Schatschneider, and Underwood, 1997)

  48. Assessment to Instruction (A2i) • Used hierarchical linear modeling (HLM) for the analysis • Overall effect size of .25 for Tx vs. C • Pooling Tx and C, teacher time using A2i accounted for 15% of the variance in student performance • Since gains were greatest among teachers who both attended PD and logged in more, the authors concluded that both components were necessary for the outcome (Connor, Morrison, Fishman, Schatschneider, and Underwood, 1997)
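
As a rough illustration of the kind of multilevel analysis described above (not the authors' actual model or data), the sketch below fits a two-level mixed model with students nested within teachers, using a teacher-level fidelity index (time logged in to A2i) as a predictor of student posttest scores. All variable names and values are simulated assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulate 20 teachers with 15 students each; a2i_minutes is the
# teacher-level fidelity index, posttest is the student outcome.
n_teachers, n_students = 20, 15
teacher_id = np.repeat(np.arange(n_teachers), n_students)
a2i_minutes = np.repeat(rng.uniform(0, 300, n_teachers), n_students)
pretest = rng.normal(100, 15, n_teachers * n_students)
teacher_effect = np.repeat(rng.normal(0, 5, n_teachers), n_students)
posttest = (20 + 0.8 * pretest + 0.05 * a2i_minutes + teacher_effect
            + rng.normal(0, 10, n_teachers * n_students))

df = pd.DataFrame({"teacher_id": teacher_id, "a2i_minutes": a2i_minutes,
                   "pretest": pretest, "posttest": posttest})

# Students (level 1) nested in teachers (level 2); the fidelity index
# enters as a level-2 predictor of the student outcome.
model = smf.mixedlm("posttest ~ pretest + a2i_minutes", df,
                    groups=df["teacher_id"]).fit()
print(model.summary())
```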

  49. Some Other Approaches to Linking from the Literature • Compare results of hypothesis testing (e.g., ANOVA) when “low fidelity” classrooms are included or excluded • Correlate overall fidelity index with each student outcome • Correlate each fidelity indicator with the single outcome • Calculate Achieved Relative Strength (ARS) and use HLM to link to outcomes
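
Two of the simpler linking approaches listed above can be sketched with hypothetical classroom-level data: correlating an overall fidelity index with the outcome, and re-estimating the outcome after excluding "low fidelity" classrooms. The data, the 0.5 cutoff, and the variable names below are illustrative assumptions, not values from any study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical treatment classrooms: a 0-1 overall fidelity index and a
# mean outcome gain per classroom.
fidelity = rng.uniform(0.3, 1.0, 30)
gain = 5 + 8 * fidelity + rng.normal(0, 3, 30)

# Approach: correlate the overall fidelity index with the outcome.
r = np.corrcoef(fidelity, gain)[0, 1]
print(f"Fidelity-outcome correlation: r = {r:.2f}")

# Approach: re-estimate the mean gain including vs. excluding
# "low fidelity" classrooms (here, fidelity below an arbitrary 0.5).
print(f"Mean gain, all classrooms:       {gain.mean():.2f}")
print(f"Mean gain, fidelity >= 0.5 only: {gain[fidelity >= 0.5].mean():.2f}")
```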

  50. Concluding points
