Knowing When and How to Conduct an Impact Evaluation


Presentation Transcript


  1. Knowing When and How to Conduct an Impact Evaluation Anita Singh, PhD, RD USDA, Food and Nutrition Service Office of Research and Analysis Arizona Nutrition Network Partner’s Meeting January 28, 2010

  2. Session Objectives • Learn about the different types of evaluations • Understand the importance of formative and process evaluation • Understand the difference between “outcome measures” and an “outcome evaluation.” • Learn how to conduct an impact evaluation

  3. Why Evaluate? • To obtain ongoing, systematic information about a project that supports: • Project management • Efficiency • Accountability

  4. Types of Evaluation • Formative • Process • Outcome • Impact

  5. Formative Evaluation • Typically occurs when an intervention is being developed • Results are used in designing the intervention • Results are informative, not definitive • Examples – focus groups, literature reviews, etc.

  6. Process Evaluation • Tracking the actual implementation (e.g. delivery, resources) • Used to determine if intervention was delivered as designed • Helps identify barriers to implementation and strategies to overcome barriers

  7. Outcome Evaluation • Addresses whether anticipated changes occurred in conjunction with the intervention • Example: pre-/post-intervention test of nutrition knowledge (a small analysis sketch follows below) • Indicates the degree of change, but it is not conclusive evidence
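
A minimal sketch of how such a pre-/post comparison might be analyzed, assuming paired knowledge scores from the same participants before and after the lessons; the scores, variable names, and the choice of a paired t-test are all illustrative assumptions.

```python
from scipy import stats

# Hypothetical nutrition knowledge scores (0-10) for the same eight participants
pre_scores = [4, 5, 3, 6, 5, 4, 6, 5]
post_scores = [6, 7, 5, 7, 6, 6, 8, 6]

# Paired t-test: did scores change between pre and post?
t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)
mean_change = sum(b - a for a, b in zip(pre_scores, post_scores)) / len(pre_scores)

print(f"Mean change: {mean_change:.2f} points, t = {t_stat:.2f}, p = {p_value:.3f}")
# A change here shows association with the intervention period, but without a
# comparison group it is not conclusive evidence of impact.
```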

  8. Outcome Evaluation versus Outcome Measure • An outcome measure is a tool for answering the question posed by each type of evaluation • There are “outcome measures” for all four types of evaluation – formative, process, outcome, and impact

  9. Impact Evaluation • Allows one to conclude authoritatively that the observed outcomes are due to the intervention • Can draw cause and effect conclusions by isolating the intervention from other factors that might contribute to the outcome.

  10. Planning for an Impact Evaluation IS THE INTERVENTION EVALUABLE? • What are the objectives? • What is the expected size of the impact? • Why, how and when is the intervention expected to achieve the objectives? • Will the intervention be implemented as intended?

  11. Planning for an Impact Evaluation • Build on available research • Engage stakeholders • Describe the intervention (e.g. develop a Logic Model; develop a conceptual framework)

  12. Simplest form of logic model: INPUTS → OUTPUTS → OUTCOMES (Source: University of Wisconsin-Extension, Program Development and Evaluation)

  13. A bit more detail: INPUTS (program investments – what we invest) → OUTPUTS (activities and participation – what we do and who we reach) → OUTCOMES (short-, medium-, and long-term – what results). Outcomes answer “So what? What is the value?” (Source: University of Wisconsin-Extension, Program Development and Evaluation)

  14. Planning for an Impact Evaluation – Study Design Considerations • Experimental – strongest type of design for cause and effect; uses random assignment (see the assignment sketch below); cost considerations • Quasi-experimental designs – do not use random assignment; can have a control group; may include multiple groups and/or multiple waves of data collection
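
To illustrate the random-assignment step that distinguishes an experimental design, here is a minimal sketch that splits a hypothetical roster of sites into treatment and control groups; the site names and the fixed seed are placeholders.

```python
import random

# Hypothetical roster of participating sites
participants = ["site_01", "site_02", "site_03", "site_04", "site_05", "site_06"]

random.seed(2010)            # fixed seed so the assignment can be reproduced
random.shuffle(participants)

half = len(participants) // 2
treatment_group = participants[:half]  # receives the nutrition intervention
control_group = participants[half:]    # receives usual services / no intervention

print("Treatment:", sorted(treatment_group))
print("Control:  ", sorted(control_group))
```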

  15. Planning for an Impact Evaluation Prepare the study plan • Develop “SMART” objectives • Select outcome measures that fit the intervention • Sampling plan • Data collection plan • Address protection of human subjects – IRB, privacy, and confidentiality issues • Data analysis plan

  16. Measurement Selection Includes • Knowing the information needs • Understanding the campaign/intervention rationale • Basing measures on a theoretical model of behavior change • Selecting an approach (e.g., mail survey, phone, in-person interview, records) – reliability, response rate, cost • Selecting measurement tools – validity and reliability of instruments (one reliability check is sketched below)
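
One common reliability check for a multi-item instrument is Cronbach's alpha; the sketch below computes it for hypothetical item responses. This is an illustrative assumption about how reliability might be examined, not a prescribed FNS method.

```python
import numpy as np

# Hypothetical responses: rows = respondents, columns = items on the scale
items = np.array([
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [3, 3, 2, 3],
    [5, 4, 5, 4],
])

k = items.shape[1]
sum_item_variances = items.var(axis=0, ddof=1).sum()   # variance of each item, summed
total_score_variance = items.sum(axis=1).var(ddof=1)   # variance of respondents' total scores

alpha = (k / (k - 1)) * (1 - sum_item_variances / total_score_variance)
print(f"Cronbach's alpha = {alpha:.2f}")  # values around 0.7+ are often treated as acceptable
```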

  17. Study Design – Sample Size • Statistical power – based on the amount of change that could be expected • Once the desired magnitude of change has been established, select/calculate a sample size with the statistical power to determine whether the change is due to the intervention and not random chance (see Hersey et al.); a sample-size sketch follows below
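
A minimal sketch of such a sample-size calculation for a two-group comparison; the effect size, alpha, and power values are illustrative assumptions, and an actual study should follow the guidance in Hersey et al.

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.3,  # expected standardized difference (Cohen's d) - an assumption
    alpha=0.05,       # two-sided significance level
    power=0.8,        # chance of detecting the difference if it truly exists
)
print(f"Roughly {n_per_group:.0f} participants per group would be needed.")
```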

  18. Sample Size Depends on: • Difference that is expected to be detected • Measurement tool • Study design – cross-sectional versus longitudinal study

  19. Other Considerations • Response rate – the higher the response rate, the greater the likelihood that the sample is representative of the study population • Example: a survey with 30 percent completed versus 80 percent completed

  20. Other Considerations • Low response rate – deal with issues such as “intention to treat” • Intention-to-treat analyses are done to avoid the effects of crossover and drop-out, which may break the randomization to the treatment groups in a study • An intention-to-treat analysis provides information about the potential effects of a treatment policy rather than the effects of a specific treatment
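
The sketch below contrasts an intention-to-treat analysis (everyone analyzed in the group to which they were randomized) with a per-protocol analysis (completers only); the participant records are hypothetical.

```python
# (assigned_group, completed_intervention, outcome_score) - hypothetical records
participants = [
    ("treatment", True, 7), ("treatment", True, 8), ("treatment", False, 5),
    ("control", True, 5), ("control", True, 6), ("control", False, 6),
]

def mean_outcome(rows):
    return sum(score for _, _, score in rows) / len(rows)

# Intention to treat: dropouts and crossovers stay in their assigned group
itt_treatment = [r for r in participants if r[0] == "treatment"]
itt_control = [r for r in participants if r[0] == "control"]

# Per protocol: only participants who actually completed as assigned
pp_treatment = [r for r in itt_treatment if r[1]]
pp_control = [r for r in itt_control if r[1]]

print("ITT difference:         ", mean_outcome(itt_treatment) - mean_outcome(itt_control))
print("Per-protocol difference:", mean_outcome(pp_treatment) - mean_outcome(pp_control))
```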

  21. Other Considerations • Selection bias: the sample is not truly representative of the study population • Repeated interviews/testing • Sample attrition • If the attrition rate is high, compare pre/baseline scores of non-dropouts with dropouts (a small sketch follows below) • May need to adjust for differences
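
A minimal sketch of the attrition check described above, comparing baseline scores of dropouts with non-dropouts; the scores are hypothetical and a two-sample t-test is only one illustrative way to make the comparison.

```python
from scipy import stats

# Hypothetical baseline knowledge scores
baseline_completers = [5, 6, 4, 5, 6, 5, 7, 6]
baseline_dropouts = [3, 4, 4, 3, 5]

t_stat, p_value = stats.ttest_ind(baseline_completers, baseline_dropouts)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A meaningful baseline difference suggests the remaining sample may no longer be
# representative, and results may need adjustment or cautious interpretation.
```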

  22. Other Considerations • Seasonal effects – fresh fruit and vegetable consumption • Maturation – children • Other – another campaign, health-related publicity, etc.

  23. As the Intervention Begins • Collect impact data after start-up problems have been resolved, but do not wait until implementation is widespread • Follow-up (interest/resources) After the Intervention • Report the findings • Use the findings

  24. Current FNS Nutrition Education Evaluation Projects • To demonstrate that nutrition education through SNAP can bring about meaningful behavioral change • To show that nutrition education implementers can conduct meaningful intervention evaluations • Two studies – Models of SNAP-Ed and Evaluation, Wave I & Wave II – results are intended to identify an initial set of promising practices for both nutrition education and evaluation

  25. Models of SNAP-Ed and Evaluation – Wave I • Four demonstration projects - competitively selected. • Each project has a self-evaluation component. • FNS contractor will also conduct impact evaluations • Baseline data collection to begin shortly • Final report expected in Fall, 2011.

  26. Models of SNAP-Ed and Evaluation – Wave I • The four demonstration projects are: • The University of Nevada at Reno’s “All 4 Kids” intervention, which targets pre-kindergarten children attending Las Vegas Head Start centers • The Chickasaw Nation Nutrition Service’s “Eagle Play” intervention, which targets 1st through 3rd grade children in Pontotoc County, Oklahoma

  27. Models of SNAP-Ed and Evaluation – Wave I • The Pennsylvania State University’s “Eating Competencies” web-based intervention promotes Satter’s eating competencies as an outcome for SNAP-eligible women, ages 18-45 • The New York State Department of Health’s “Eat Well, Play Hard” in Childcare Settings intervention targets 3- to 4-year-old low-income children and their caregivers

  28. Models of SNAP-Ed and Evaluation – Wave II • Applications are being reviewed; 3 will be selected. • SNAP-Ed connection web site has project overview http://snap.nal.usda.gov/nal_display/index.php?info_center=15&tax_level=1

  29. Summary • Evaluation can provide valuable, ongoing systematic information about a project • Common evaluation features across delivery types • Choice of features and evaluation type(s) will be driven by your information needs • Cost and resource considerations are important

  30. Evaluation Resources • Nutrition Education: Principles of Sound Impact Evaluation, FNS, Sept. 05 http://www.fns.usda.gov/oane/menu/Published/NutritionEducation/Files/EvaluationPrinciples.pdf • Building Capacity in Evaluating Outcomes – UW Extension, Oct 08 http://www.uwex.edu/ces/pdande/evaluation/bceo/index.html

  31. Resources continued • WK Kellogg Foundation Evaluation Handbook, Jan 98 http://www.wkkf.org/default.aspx?tabid=75&CID=281&NID=61&LanguageID=0 • Developing a logic model: Teaching and training guide: E. Taylor-Powell and E. Henert; UW Extension Feb 08 http://www.uwex.edu/ces/pdande/evaluation/evallogicmodel.html

  32. Resources continued • Harris et al. An introduction to qualitative research for food and nutrition professionals. J Am Diet Assoc. 2009;109:80-90 • National Collaborative on Obesity Research – Policy Evaluation Webinar Series: http://www.nccor.org/ “Enhancing the Usefulness of Evidence to Inform Practice”

  33. Resources continued • CDC online course (evaluation) http://www.cdc.gov/nccdphp/dnpa/socialmarketing/training/phase5/index.htm • Evaluating Social Marketing in Nutrition: A Resource Manual by Hersey et al. http://www.fns.usda.gov/oane/MENU/Published/nutritioneducation/Files/evalman-2.PDF

  34. Information on FNS’s Completed and Planned Research Studies • Office of Research and Analysis: http://www.fns.usda.gov/fns/research.htm • See 2010 Study and Evaluation Plans – Identifying High-Performance Nutrition Promotion Strategies
