
Formative and Interim Assessment as Parts of a Coherent Assessment System, by Lorrie A. Shepard


Presentation Transcript


  1. Formative and Interim Assessment as Parts of a Coherent Assessment System. Lorrie A. Shepard, University of Colorado at Boulder. Stakeholders Formative & Interim Subcommittee, August 3, 2010.

  2. Overview • Definition of formative assessment • Research base for formative assessment • Definition of interim assessment • Beginnings of research on interim assessment • Curricular coherence and comprehensive assessment systems

  3. Perie, Marion, & Gong (2009) “Moving Toward a Comprehensive Assessment System”

  4. Council of Chief State School Officers • “Formative assessment is a process used by teachers and students during instruction that provides feedback to adjust ongoing teaching and learning to improve students’ achievement of intended instructional outcomes.” • “Formative assessment is not an adjunct to teaching but, rather, is integrated into instruction and learning with teachers and students receiving frequent feedback.”

  5. Paul Black & Dylan Wiliam (1998). Assessment and Classroom Learning. “Assessment becomes ‘formative assessment’ when evidence is actually used to adapt the teaching work to meet student needs.” Formative assessment experiments produce effect sizes of .40 - .70, larger than found for most educational interventions. Many studies show that improved formative assessment helps low achievers most. [Cover image: Inside the Black Box: Raising Standards Through Classroom Assessment, Paul Black & Dylan Wiliam, King’s College London School of Education.]
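
For reference, the effect sizes reported here are standardized mean differences; the exact estimator varies across the studies Black & Wiliam review, but a common form (Cohen’s d) is

\[ d \;=\; \frac{\bar{X}_{\text{formative assessment}} - \bar{X}_{\text{control}}}{s_{\text{pooled}}} \]

so an effect of .40 to .70 corresponds to moving the average student’s performance up by roughly half a standard deviation of the outcome measure.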

  6. Knowing What Students Know Pellegrino, Chudowsky, & Glaser (NRC, 2001). Cognitive science findings on key aspects of learning processes can be translated into targeted features of formative assessment: • Accessing prior knowledge • Strategic use of feedback • Teaching and assessing for transfer • Meta-cognitive benefits of self-assessment

  7. Motivation and Participation Are Inextricably Entwined with Learning • Historically, research on motivation was separate from research on learning until studies of metacognition showed the importance of motivation (Resnick & Klopfer, 1989). • “Performance-oriented” students work for grades, to please the teacher, and to appear competent; normative grading practices and extrinsic rewards induce these behaviors (Stipek, 1996). • Activity theory and Vygotsky’s notion of socially supported learning provide a wholly different view of what might “motivate” students to devote their hearts and minds to learning (Shepard, 2001).

  8. These theories not only bring coherence to elements of formative assessment practice, they also explain why formative assessment works when it works. • Kluger and DeNisi’s meta-analysis cautions that in one-third of studies feedback worsened performance (typically when feedback evaluated the person rather than the task). • Research on intrinsic motivation, e.g., Deci & Ryan (2000), shows negative effects of extrinsic rewards; children can learn to be extrinsically motivated.

  9. An Intervention Study Elawar & Corno (1985). A factorial experiment in teachers’ written feedback on student homework: Changing teacher behavior a little rather than a lot. Journal of Educational Psychology. Study design: • Teachers were trained to give written feedback focused on specific errors and poor strategy, with suggestions about how to improve. • The control group received grades on homework but no comments. Findings: • The effect of focused feedback on final achievement was as great as the effect of prior achievement. • There were also large positive effects on attitudes toward mathematics, and the initial superiority of boys over girls was reduced.

  10. Self-assessment • Student self-assessment promises to increase students’ responsibility for their own learning. • In case studies, students became more interested in the criteria and substantive feedback than in grades… more honest about their own work, fair with other students, and able to defend their opinions in terms of the evidence (Klenowski, 1995).

  11. An Intervention Study White & Frederiksen (1996). The Thinker Tools Inquiry Project: Making Scientific Inquiry Accessible to Students. Center for Performance Assessment, Educational Testing Service. Assessment criteria were developed for attributes desired while conducting investigations in science, and students engaged in a set of activities to foster “reflective assessment.” • At several stages in the Inquiry Cycle curriculum, students evaluated their own work in terms of the criteria. • Each time, they applied the criteria AND wrote a brief rationale pointing to the features of their work that supported their rating. • Students in the reflective assessment classrooms also used the criteria to give feedback to classmates after oral presentations. Compared to controls, students in reflective assessment classrooms produced more highly rated projects (with the greatest gains for low-achieving students). Low-achieving students also showed dramatic gains on a measure of conceptual understanding.

  12. To Be Effective, Formative Assessment Tools Must • “Embody” learning goals. • Be curriculum-embedded (both in timing and substance): tasks are instructional tasks, so no instructional time is lost, and assessment occurs “midstream”* to inform instruction, not as a unit summative test. • Enable the supportive learning processes invoked in the formative assessment literature. *See Stuart Kahl, Ed Week, 9.21.05.

  13. Substantively, Good Formative Assessment Tools • Can never be all multiple-choice. • Provide qualitative insights about understandings and misconceptions, not just a numeric score. • Have immediate implications for what to do, besides re-teaching every missed item (the 1,000 mini-lessons problem).

  14. Perie, Marion, & Gong (2009) “Moving Toward a Comprehensive Assessment System” Definition: “Interim assessments are assessments administered during instruction to evaluate students’ knowledge and skills relative to a specific set of academic goals in order to inform policymaker or educator decisions at the classroom, school, or district level.” “The specific interim assessment designs are driven by the purposes and intended uses (instructional, evaluative, predictive), but the results of any interim assessment must be reported in a manner allowing aggregation across students, occasions, or concepts.” Think of interim assessments as quasi-summative, coming at the end of a unit of study.

  15. Substantive insights are rare. • Blanc et al., “Instructional Communities”: In one school, the principal was clearly the moral and organizational star, and a coherent emphasis on core curriculum helped to link Benchmark results to instructional strengths and weaknesses. • Olah et al., “Analyzing Benchmark Data”: Two of 25 teachers connected item and standards data back to the Everyday Mathematics curriculum. Diagnostic insights from items, e.g., problems with regrouping, were rare.

  16. Item-by-item teaching • Bulkley et al., “Role of the District”: Assumed that teachers would be able to figure out what to do by looking at the data, talking with other teachers, and drawing inferences about their own needs for professional development. • Blanc et al.: High-stakes state testing talk pervaded grade-level meetings, with a focus on “bubble kids”; reteaching usually used the same instructional strategy, and alternative strategies were not informed by assessment results. • Olah et al.: Teachers invented their own thresholds to determine priorities for reteaching; analyses of items were most often procedural or about item validity, and reteaching focused procedurally on the missed items, step by step.

  17. Knowing What Students Know • Pellegrino, Chudowsky, & Glaser (NRC, 2001) • Coherence between classroom & large-scale assessments requires that both share the same underlying model of learning, i.e., both must share the same conception of developing competence in a domain.

  18. The Importance of Content: Instructional and Assessment Tasks that Embody Learning Goals Rather than alignment, embodiment might be a better term to characterize the more complete and substantive alignment that occurs when the tasks, problems, and projects in which students are engaged represent the range and depth of what we want students to understand and be able to do. Wiggins and McTighe use “backward design” to get from intended goals to compelling evidence or demonstrations of learning.

  19. A campground has a large lawn with a soccer field that measures 100 × 50 meters (Figure 1). The park manager decides to keep the field open at night. Therefore, a decision needs to be made about where to place some light posts. Standard lamp posts are 13 meters high and light a circular region with a radius of 50 meters (Figure 2). The diagram (Figure 3) shows the lighting of the field when lights are placed at points D and B. What is the area of the soccer field that is NOT lit when these two light posts are used? Show your work. [Figures 1–3 not reproduced in this transcript.] Dutch examination item courtesy of the Freudenthal Institute.
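
Because the figures are not reproduced, the exact positions of D and B must be assumed. As an illustrative sketch only: suppose D and B are opposite corners of the field, so each lamp lights a quarter circle of radius 50 m that lies entirely inside the 100 × 50 m rectangle, and the two lit regions do not overlap (the diagonal, about 111.8 m, exceeds the combined radii of 100 m). The unlit area would then be

\[ A_{\text{unlit}} \;=\; 100 \times 50 \;-\; 2 \cdot \tfrac{1}{4}\pi (50)^2 \;=\; 5000 - 1250\pi \;\approx\; 1073 \ \text{m}^2. \]

With a different placement of D and B the two lit regions could overlap, and the overlap area would have to be added back in.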

  20. Assessments Can/Should Be Coherently Linked to Curriculum by Shared Learning Progressions. Learning progressions “describe skills, understandings, and knowledge in the sequence in which they typically develop: a picture of what it means to ‘improve’ in an area of learning” (Masters & Forster, 1996). Learning progressions or Progress Maps provide an underlying model of learning to coherently link classroom and large-scale assessments: a criterion-referenced growth model. See Knowing What Students Know (2001).

  21. BEAR Assessment System (Wilson & Sloane, 2000): the best example in the US, illustrating the Knowing What Students Know concept of coherence. Coherence between classroom and external tests requires a shared understanding of the construct at the level of the progress dimension and at the level of specific assessment tasks and scoring guides (Wilson & Draney, 2004). Progress variables help to align learning goals, instruction, and assessment. Embedded assessments help to compare students’ thinking with curricular expectations. Student performance gains on classroom assessments can be linked to expected gains on an external large-scale assessment without “teaching to the test” (Kennedy, Brown, Draney, & Wilson). [Figure: a progress variable ranging from less sophisticated to more sophisticated understanding of buoyancy.]
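
As background to how this linking works: the BEAR system reports performance on progress variables using Rasch-family measurement models (Wilson & Sloane, 2000). A minimal sketch, assuming the simplest dichotomous case, places each student’s proficiency θ and each task’s difficulty δ on the same progress scale:

\[ P(X_{ij} = 1 \mid \theta_i, \delta_j) \;=\; \frac{\exp(\theta_i - \delta_j)}{1 + \exp(\theta_i - \delta_j)} \]

Because curriculum-embedded tasks and external test items are calibrated to this common scale, growth observed on classroom assessments can be mapped to expected growth on the large-scale assessment without teaching to the external test’s specific items.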

  22. What happens if you cut the nail in half? [Figure: diagram of the nail annotated with alternating + and − labels.] Otero, Jalovec, & HerManyHorses (2008).

  23. Designing Effective and Coherent Assessment Systems • Formative Assessment: curriculum-embedded, real-time processes used for feedback, to adapt instruction, and to engage students in self-assessment; substantive insight > numerical scores. • Benchmark/Interim Assessments: could be culminating assessments for specific units of study or scored elements for summative accountability, but current examples are of poorer quality than current state assessments. • Summative Assessments: used for program evaluation and accountability.

  24. The Importance of Curriculum • The NAEd Standards paper argues for the concurrent, coherent design of curriculum, assessments (both large-scale and classroom level), and teacher professional development. • Top-performing countries in TIMSS have leaner, more hierarchically sequenced curricula leading to progressively more advanced topics and deeper understanding (Schmidt et al., 2005). • A recent analysis of content standards in 14 states found they did not focus on big ideas or build from grade to grade (Porter et al., 2009). Common Standards are a help but are still only a skeletal framework. • National control is not required for coherence (Schmidt & Prawat, 2006). Rather, coherence leads to effective outcomes if it is achieved at whatever level of governance has authority over policy instruments.

  25. Curricula deepen the meaning of standards and provide teachers and students with a roadmap of how to reach proficiency. • Instead of turning Standards over to test makers for implementation, the NRC report on state science assessments calls for: • Horizontal coherence (linking curriculum, instruction, and assessment) • Vertical coherence (shared vision at classroom, school, district, and state levels) • Developmental coherence (taking account of how understanding develops over time) • If Race to the Top (RTTT) funds cannot be spent directly on curriculum development, the DOE could nonetheless require applicants to demonstrate how they will link assessment design to new or existing curricula.

  26. Lessons from Research on Teaching Teachers need better access to materials that model teaching for understanding – with extended instructional activities, formative assessment tasks, scoring rubrics, and summative assessments built in. And they need extended support while attempting to use these materials and teach in new ways. Involve expert teacher-leaders in design of assessment tasks and ensure 100% teacher participation in non-burdensome scoring, focused on capacity building and teacher learning. Avoid mistakes of current data systems that focus on scores and item-by-item reteaching and leave teachers to their own devices to improve.

  27. “Agreeing on Scope and Sequence Won’t Be Easy” • Every state has to attend to political processes that have led to everything-but-kitchen-sink standards and wide variation in assessment quality. • Focusing on just one content area in a limited number of grades may help mitigate political difficulties, especially if considered a pilot. • Skill areas such as reading, writing, and scientific inquiry may be easier to agree upon than specific content knowledge. • The problem of agreeing on curriculum could be further softened by developing only a limited number of big-idea instructional units per grade. (From Shepard, 2009, RTTT testimony.)
