
Examining Value-Added Models to Measure Teacher Effectiveness



Presentation Transcript


  1. Examining Value-Added Models to Measure Teacher Effectiveness
  Laura Goe, Ph.D.
  Research Scientist, ETS, and Principal Investigator for the National Comprehensive Center for Teacher Quality
  Hofstra University Doctoral Policy Forum
  October 15, 2011, Hempstead, NY

  2. The goal of teacher evaluation

  3. Trends in teacher evaluation
  • Policy is way ahead of the research on teacher evaluation measures and models
  • Though we don’t yet know which model and combination of measures will identify effective teachers, many states and districts are compelled to move forward at a rapid pace
  • Inclusion of student achievement growth data represents a huge “culture shift” in evaluation
  • Communication and teacher/administrator participation and buy-in are crucial to ensure change
  • The implementation challenges are enormous
  • Few models exist for states and districts to adopt or adapt
  • Many districts have limited capacity to implement comprehensive systems, and states have limited resources to help them

  4. How did we get here?
  • Value-added research shows that teachers vary greatly in their contributions to student achievement (Rivkin, Hanushek, & Kain, 2005).
  • The Widget Effect report (Weisberg et al., 2009) “…examines our pervasive and longstanding failure to recognize and respond to variations in the effectiveness of our teachers” (from the Executive Summary).

  5. A concise definition of teacher effectiveness
  • Anderson (1991) stated that “… an effective teacher is one who quite consistently achieves goals which either directly or indirectly focus on the learning of their students” (p. 18).

  6. Validity and use of assessments to evaluate teachers
  • Tests, systems, etc. do not have validity in themselves; validity lies in how they are used
  • A test designed to measure student knowledge and skills in a specific grade and subject may be valid for determining where that student is relative to his/her peers at a given point in time
  • However, there are questions about validity when such test results are used to measure teachers
  • What part of a student’s score is attributable solely to the teacher’s instruction and effort?

  7. Growth vs. Proficiency Models
  [Figure: achievement trajectories from the start to the end of the school year against a proficiency line. Teacher A’s students cross the line (“success” on achievement levels); Teacher B’s students do not (“failure” on achievement levels). In terms of growth, Teachers A and B are performing equally.]
  Slide courtesy of Doug Harris, Ph.D., University of Wisconsin-Madison

  8. Growth vs. Proficiency Models (2)
  [Figure: same axes as the previous slide. A teacher with low-proficiency students can still be high in terms of GROWTH, and vice versa.]
  Slide courtesy of Doug Harris, Ph.D., University of Wisconsin-Madison
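
A toy calculation makes the distinction in these two slides concrete. Everything below is invented for illustration: the scores, the 300-point proficiency cutoff, and the one-student-per-teacher simplification.

```python
# Toy illustration of the growth-vs-proficiency distinction drawn above.
# All numbers (scores, the proficiency cutoff) are hypothetical.

PROFICIENCY_CUTOFF = 300

# (start-of-year score, end-of-year score) for one student per teacher
classes = {
    "Teacher A": (260, 310),  # ends above the cutoff: "success" on levels
    "Teacher B": (180, 230),  # ends below the cutoff: "failure" on levels
}

for teacher, (start, end) in classes.items():
    growth = end - start
    status = "proficient" if end >= PROFICIENCY_CUTOFF else "not proficient"
    print(f"{teacher}: growth = {growth}, end-of-year status = {status}")

# Both teachers produce identical growth (50 points), but only Teacher A
# looks successful under a proficiency-only (status) model.
```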

  9. Most popular growth models: Value-added and the Colorado Growth Model
  • EVAAS uses prior test scores to predict the next score for a student
  • A teacher’s value-added is the difference between actual and predicted scores for a set of students
  • http://www.sas.com/govedu/edu/k12/evaas/index.html
  • Colorado Growth Model (Betebenner, 2008): focuses on “growth to proficiency” and measures students against their “academic peers”
  • www.nciea.org
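
The EVAAS bullets describe a predict-then-compare logic. The sketch below is a bare-bones illustration of that idea, not the actual EVAAS model (which uses multiple prior years and a far richer specification); all scores and teacher assignments are invented.

```python
# Minimal sketch of the value-added idea described on this slide:
# predict each student's score from a prior score, then average the
# actual-minus-predicted residuals by teacher. Data are made up.
import numpy as np

prior   = np.array([410, 455, 390, 520, 480, 430])   # last year's scores
current = np.array([440, 470, 400, 560, 530, 445])   # this year's scores
teacher = np.array(["A", "A", "A", "B", "B", "B"])   # student-teacher links

# Simple linear prediction of the current score from the prior score
slope, intercept = np.polyfit(prior, current, 1)
predicted = slope * prior + intercept
residual = current - predicted                       # actual minus predicted

# A teacher's value-added estimate is the mean residual of their students
for t in np.unique(teacher):
    print(f"Teacher {t}: value-added estimate {residual[teacher == t].mean():+.1f}")
```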

  10. Linking student learning results to professional growth opportunities
  Slide courtesy of Damian Betebenner at www.nciea.org

  11. What value-added and growth models cannot tell you
  • Value-added and growth models are really measuring classroom effects, not teacher effects
  • Value-added models can’t tell you why a particular teacher’s students are scoring higher than expected
  • Maybe the teacher is focusing instruction narrowly on test content
  • Or maybe the teacher is offering a rich, engaging curriculum that fosters deep student learning
  • How the teacher is achieving results matters!

  12. Value-Added: Student effects
  • “A teacher who teaches less advantaged students in a given course or year typically receives lower-effectiveness ratings than the same teacher teaching more advantaged students in a different course or year.”
  • “Models that fail to take student demographics into account further disadvantage teachers serving large numbers of low-income, limited English proficient, or lower-tracked students.” (Newton et al., 2010, p. 2)

  13. Value-Added: Error rates and stability
  • “Type I and II error rates for comparing a teacher’s performance to the average are likely to be about 25 percent with three years of data and 35 percent with one year of data.”
  • “Any practical application of value-added measures should make use of confidence intervals in order to avoid false precision, and should include multiple years of value-added data in combination with other sources of information to increase reliability and validity.” (Schochet & Chiang, 2010, abstract)
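
A back-of-the-envelope calculation shows why the recommendation above has teeth. The effect size and one-year standard error below are hypothetical, and years are treated as independent; real value-added data need not satisfy either assumption.

```python
# Rough sketch of why confidence intervals and multiple years matter.
# Numbers are illustrative, not taken from Schochet & Chiang (2010).
import math

estimate   = 0.18   # hypothetical teacher effect, in student-level SD units
sigma_year = 0.20   # hypothetical std. error of a one-year estimate

for years in (1, 3, 5):
    se = sigma_year / math.sqrt(years)      # pooling years shrinks the SE
    low, high = estimate - 1.96 * se, estimate + 1.96 * se
    verdict = "distinguishable from average" if low > 0 or high < 0 else "not distinguishable"
    print(f"{years} year(s): 95% CI [{low:+.3f}, {high:+.3f}] -> {verdict}")

# With one or three years the interval straddles zero; only with five
# (hypothetical) years does this teacher separate from the average.
```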

  14. Value-Added: Subscales
  • Teachers’ scores on subscales of a test can yield very different results, which also raises the question of how to weight subscale results (Lockwood et al., 2007)
  • Lockwood et al. found substantial variation in teachers’ rankings based on the subscales (“Problem Solving” and “Procedures”)
  • There was more variation within teachers (across subscales) than across teachers
  • “Our results provide a clear example that caution is needed when interpreting estimated teacher effects because there is the potential for teacher performance to depend on the skills that are measured by the achievement tests” (Lockwood et al., 2007, p. 55)
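
The weighting question can be made concrete with a small sketch: the same three hypothetical teachers rank differently depending on how two subscale effect estimates are combined. The numbers are invented, not Lockwood et al.’s.

```python
# Illustration of the subscale-weighting problem raised above: rankings
# change with the mix of "Problem Solving" (PS) and "Procedures" (Proc).

effects = {                 # hypothetical (PS, Proc) effect estimates
    "T1": (0.30, -0.10),
    "T2": (0.05, 0.25),
    "T3": (-0.20, 0.15),
}

def ranking(weight_ps):
    """Rank teachers by the composite weight_ps*PS + (1 - weight_ps)*Proc."""
    composite = {t: weight_ps * ps + (1 - weight_ps) * proc
                 for t, (ps, proc) in effects.items()}
    return sorted(composite, key=composite.get, reverse=True)

for w in (1.0, 0.5, 0.0):
    print(f"Problem Solving weight {w}: ranking {ranking(w)}")
# T1 is first when Problem Solving dominates and last when Procedures does.
```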

  15. Value-Added: Test content
  • Polikoff and colleagues (2011) found that only about half of standards are tested
  • If half the standards teachers are teaching are not tested, how can the test accurately reflect teachers’ contribution to student learning?
  • They also found that only about half of test content corresponds with grade/subject standards
  • If half of test content is material that is not in the standards teachers are supposed to be teaching, is it fair to hold teachers accountable for test results?
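
The two “about half” findings are simply overlap ratios between two sets. A toy version, with invented standard and item labels:

```python
# Toy version of the alignment calculation behind the findings above.
# Standard labels (S1..S6) and off-standard test content (X1..X3) are invented.

standards    = {"S1", "S2", "S3", "S4", "S5", "S6"}   # what teachers must teach
test_content = {"S1", "S2", "S3", "X1", "X2", "X3"}   # what the test measures

tested_share  = len(standards & test_content) / len(standards)
aligned_share = len(standards & test_content) / len(test_content)

print(f"share of standards that are tested: {tested_share:.0%}")        # 50%
print(f"share of test content in the standards: {aligned_share:.0%}")   # 50%
```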

  16. Value-Added: Multiple teachers
  • In one study, 21% of teachers in Washington, DC had students who had also been in another math teacher’s class that year (Hock & Isenberg, 2011)
  • This covered all situations, including students who changed classes or schools as well as co-teaching and other cases where students were taught by more than one teacher
  • Hock & Isenberg determined that the best estimates were obtained by counting students multiple times, once for each teacher the student had, rather than trying to apportion how much each teacher contributed to students’ scores
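
A minimal sketch of that “count the student once per teacher” idea, with invented names and residuals: the co-taught student’s gain simply enters both teachers’ averages.

```python
# Sketch of the full-roster approach described above. Residuals are
# actual-minus-predicted scores; all values here are made up.
from collections import defaultdict

links = [                        # (student, teacher, residual)
    ("s1", "Teacher X", 4.0),
    ("s2", "Teacher X", -1.0),
    ("s2", "Teacher Y", -1.0),   # s2 had both teachers, so s2 appears twice
    ("s3", "Teacher Y", 2.5),
]

by_teacher = defaultdict(list)
for student, teacher, residual in links:
    by_teacher[teacher].append(residual)

for teacher, residuals in by_teacher.items():
    print(f"{teacher}: mean residual {sum(residuals) / len(residuals):+.2f}")
```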

  17. Value-Added: Possible responses to technical challenges
  • Use multiple years of data to mitigate sorting bias and gain stability in estimates (Koedel & Betts, 2009; McCaffrey et al., 2009; Glazerman et al., 2010)
  • Use confidence intervals and other sources of information to improve the reliability and validity of teacher effectiveness ratings (Glazerman et al., 2010)
  • Have teachers and administrators verify rosters to ensure scores are calculated with students the teachers actually taught
  • Consider the importance of subscores in teacher rankings
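
One concrete way to combine the first two bullets: pool several years of a teacher’s estimates, weighting each year by its precision, and report the pooled estimate with an interval. The yearly estimates and standard errors below are hypothetical.

```python
# Precision-weighted pooling of multi-year value-added estimates, as one
# possible response to the challenges above. All inputs are hypothetical.

yearly = [            # (estimate, standard error) for one teacher, by year
    (0.15, 0.20),
    (0.05, 0.18),
    (0.12, 0.22),
]

weights = [1 / se**2 for _, se in yearly]            # precision = 1 / SE^2
pooled = sum(w * est for (est, _), w in zip(yearly, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5                # shrinks as years accumulate

print(f"pooled estimate {pooled:+.3f} +/- {1.96 * pooled_se:.3f} (95% CI half-width)")
```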

  18. Growth Models
  • Wisconsin’s Value-Added Research Center (VARC): http://varc.wceruw.org/
  • SAS Education Value-Added Assessment System (EVAAS): http://www.sas.com/govedu/edu/k12/evaas/index.html
  • Mathematica: http://www.mathematica-mpr.com/education/value_added.asp
  • American Institutes for Research (AIR): http://www.air.org/
  • Colorado Growth Model: www.nciea.org

  19. References
  • Betebenner, D. W. (2008). A primer on student growth percentiles. Dover, NH: National Center for the Improvement of Educational Assessment (NCIEA). http://www.cde.state.co.us/cdedocs/Research/PDF/Aprimeronstudentgrowthpercentiles.pdf
  • Braun, H., Chudowsky, N., & Koenig, J. A. (2010). Getting value out of value-added: Report of a workshop. Washington, DC: National Academies Press. http://www.nap.edu/catalog.php?record_id=12820
  • Glazerman, S., Goldhaber, D., Loeb, S., Raudenbush, S., Staiger, D. O., & Whitehurst, G. J. (2010). Evaluating teachers: The important role of value-added. Washington, DC: Brown Center on Education Policy at Brookings. http://www.brookings.edu/reports/2010/1117_evaluating_teachers.aspx
  • Glazerman, S., Goldhaber, D., Loeb, S., Raudenbush, S., Staiger, D. O., & Whitehurst, G. J. (2011). Passing muster: Evaluating evaluation systems. Washington, DC: Brown Center on Education Policy at Brookings. http://www.brookings.edu/reports/2011/0426_evaluating_teachers.aspx#
  • Herman, J. L., Heritage, M., & Goldschmidt, P. (2011). Developing and selecting measures of student growth for use in teacher evaluation. Los Angeles, CA: University of California, National Center for Research on Evaluation, Standards, and Student Testing (CRESST). http://www.aacompcenter.org/cs/aacc/view/rs/26719

  20. References (continued)
  • Hock, H., & Isenberg, E. (2011). Methods for accounting for co-teaching in value-added models. Princeton, NJ: Mathematica Policy Research. http://www.aefpweb.org/sites/default/files/webform/Hock-Isenberg%20Co-Teaching%20in%20VAMs.pdf
  • Koedel, C., & Betts, J. R. (2009). Does student sorting invalidate value-added models of teacher effectiveness? An extended analysis of the Rothstein critique. Cambridge, MA: National Bureau of Economic Research. http://economics.missouri.edu/working-papers/2009/WP0902_koedel.pdf
  • Linn, R., Bond, L., Darling-Hammond, L., Harris, D., Hess, F., & Shulman, L. (2011). Student learning, student achievement: How do teachers measure up? Arlington, VA: National Board for Professional Teaching Standards. http://www.nbpts.org/index.cfm?t=downloader.cfm&id=1305
  • Lockwood, J. R., McCaffrey, D. F., Hamilton, L. S., Stecher, B. M., Le, V.-N., & Martinez, J. F. (2007). The sensitivity of value-added teacher effect estimates to different mathematics achievement measures. Journal of Educational Measurement, 44(1), 47-67. http://www.rand.org/pubs/reprints/RP1269.html
  • McCaffrey, D., Sass, T. R., Lockwood, J. R., & Mihaly, K. (2009). The intertemporal stability of teacher effect estimates. Education Finance and Policy, 4(4), 572-606. http://www.mitpressjournals.org/doi/abs/10.1162/edfp.2009.4.4.572

  21. References (continued)
  • New York State Education Department (2011). Summary of provisions in 3012c regulations: May 2011 (revised September 14, 2011 for impact of August court decision and other clarifications). http://usny.nysed.gov/rttt/teachers-leaders/greenmemo.pdf
  • Newton, X. A., Darling-Hammond, L., Haertel, E., & Thomas, E. (2010). Value-added modeling of teacher effectiveness: An exploration of stability across models and contexts. Education Policy Analysis Archives, 18(23). http://epaa.asu.edu/ojs/article/view/810
  • Policy Analysis for California Education and Rennie Center for Education Research and Policy (2011). The road ahead for state assessments. Cambridge, MA: Rennie Center for Education Research and Policy. http://renniecenter.issuelab.org/research/listing/road_ahead_for_state_assessments
  • Polikoff, M. S. (2011). How well aligned are state assessments of student achievement with state content standards? American Educational Research Journal, 48(4), 965-995. http://aer.sagepub.com/content/48/4/965.abstract?rss=1
  • Race to the Top application. http://www2.ed.gov/programs/racetothetop/resources.html
  • Rivkin, S. G., Hanushek, E. A., & Kain, J. F. (2005). Teachers, schools, and academic achievement. Econometrica, 73(2), 417-458. http://www.econ.ucsb.edu/~jon/Econ230C/HanushekRivkin.pdf

  22. References (continued)
  • Sanders, W. L., & Horn, S. P. (1998). Research findings from the Tennessee Value-Added Assessment System (TVAAS) database: Implications for educational evaluation and research. Journal of Personnel Evaluation in Education, 12(3), 247-256. http://www.sas.com/govedu/edu/ed_eval.pdf
  • Schochet, P. Z., & Chiang, H. S. (2010). Error rates in measuring teacher and school performance based on student test score gains. Washington, DC: National Center for Education Evaluation and Regional Assistance, Institute of Education Sciences, U.S. Department of Education. http://ies.ed.gov/ncee/pubs/20104004/pdf/20104004.pdf
  • Weisberg, D., Sexton, S., Mulhern, J., & Keeling, D. (2009). The widget effect: Our national failure to acknowledge and act on differences in teacher effectiveness. Brooklyn, NY: The New Teacher Project. http://widgeteffect.org/downloads/TheWidgetEffect.pdf

  23. Questions?

  24. Laura Goe, Ph.D.
  • Phone: 609-734-1076
  • E-mail: lgoe@ets.org
  • Website: www.tqsource.org
