
Innovation in Assessment and Evaluation


Presentation Transcript


  1. Innovation in Assessment and Evaluation Prof. dr. Martin Valcke http://allserv.ugent.be/~mvalcke/CV/CVMVA.htm Ghent University Maputo July 2011

  2. Structure • Advance organizer • Traditional approaches • Trends in assessment & evaluation • Assumptions in relation to these trends • Research • Practices • Conclusions

  3. Advance organizer

  4. Advance organizer (slide images: transmission of your message, eye contact)

  5. Can I learn this?

  6. The importance of assessment & evaluation

  7. Traditional approaches

  8. Critical issues • Validity of evaluation approach in view of assessment of skills and complex knowledge • Fant et al. (1985) • Rating scales, daily logs, anecdotal records, behavior coding, and self-assessment for evaluating student teachers. • Oral examinations, portfolio assessment, central assessment centres, 360° assessment • …

  9. Recent developments (diagram: the assessment system relates the individual learner, group learner, teachers, expert teacher, external institution and the institutional level)

  10. Recent developments • Stronger focus on “consequential validity” of measurement (Gielen, Dochy & Dierick, 2003) • Stronger emphasis on the feedback value of assessment • What is the “learning potential” of the assessment approach?

  11. Recent developments • Stiggins (1987): performance assessment • Performance assessment is expected to be better geared to assessing complex behavior in medical, legal, engineering, … and educational contexts (Sluijsmans et al., 2004).

  12. Concrete examples • Self- and peer assessment • Rubrics based assessment

  13. Self- and peer-assessment

  14. How to help the student to assess him/herself?

  15. Self- and peer assessment • Learn about your own learning process. • Schmitz (1994): “assessment-as-learning”. • ~ self corrective feedback

  16. See the experiential learning cycle of Kolb. • Boekaerts (1991): self-evaluation as a competency. • Development of metacognitive knowledge and skills (see Brown, Bull & Pendlebury, 1998, p.181). • Freeman & Lewis (1998, p.56-59): developing pro-active learners

  17. The Learning Cycle Model

  18. Is it possible? Group evaluations tend to fluctuate around the mean

  19. Learning to evaluate • Develop checklists • Give criteria • Ask to look for quality indicators. • Analysis of examples of good and less good practice: develop a quality “nose”

  20. Learning to evaluate • Freeman & Lewis (1998, p.127): • Learner develops a list of criteria. • Pairs of learners compare the listed criteria. • Pairs develop a criterion checklist. • Individual application of the checklist. • Use of the checklist to evaluate the work of another learner. • Individual reworks his/her work. • Final result checked by the teacher and compared to the learner's evaluation. • Pairs recheck their work on the basis of teacher feedback.
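A minimal sketch of how the later steps of this cycle could be supported, assuming a simple met/not-met checklist. The criteria and ratings below are invented for illustration and are not taken from Freeman & Lewis.

```python
# Hypothetical criterion checklist (invented for illustration).
checklist = ["clear research question", "arguments supported by sources",
             "correct referencing", "coherent structure"]

def apply_checklist(ratings):
    """ratings: dict mapping each criterion to True (met) or False (not met)."""
    return {criterion: ratings.get(criterion, False) for criterion in checklist}

# Individual application of the checklist by the learner ...
self_check = apply_checklist({"clear research question": True,
                              "arguments supported by sources": True,
                              "correct referencing": False,
                              "coherent structure": True})

# ... and the teacher's final check of the same work.
teacher_check = apply_checklist({"clear research question": True,
                                 "arguments supported by sources": False,
                                 "correct referencing": False,
                                 "coherent structure": True})

# Final step of the cycle: compare the learner's evaluation with the
# teacher's and flag the criteria the pair should rework.
disagreements = [c for c in checklist if self_check[c] != teacher_check[c]]
print(disagreements)  # ['arguments supported by sources']
```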

  21. Learning to evaluate • Peer evaluation is not the same as peer grading • The final score is given by the teacher • Part of the score could build on the accuracy of the self/peer evaluation and self-correction
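One hypothetical way to let part of the score build on the accuracy of the self/peer evaluation: the teacher's mark stays dominant, and a small weight rewards how closely the peer's mark matched it. The 90/10 weighting and the accuracy formula are assumptions for illustration, not part of the slides.

```python
def final_score(teacher_score, peer_score, max_score=20, accuracy_weight=0.1):
    """Teacher keeps the final say; a small part of the mark rewards how
    closely the peer/self evaluation matched the teacher's judgement."""
    accuracy = 1 - abs(teacher_score - peer_score) / max_score
    return (1 - accuracy_weight) * teacher_score + accuracy_weight * accuracy * max_score

print(final_score(teacher_score=14, peer_score=16))  # 14.4 -> peer mark was close
print(final_score(teacher_score=14, peer_score=6))   # 13.8 -> peer mark was far off
```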

  22. Rubrics

  23. Rubrics • Rubrics focus on the relationship between competencies, criteria, and indicators and are organized along mastery levels (Morgan, 1999).

  24. http://web.njit.edu/~ronkowit/teaching/rubrics/samples/rubric_apa_research.pdf

  25. Rubrics • Rubric: scoring tool for a qualitative assessment of the quality level of an authentic or complex activity • A rubric builds on criteria, enriched with a scale to indicate a mastery level. • For each level, standards are indicated that reflect this level. • A rubric tells both teacher and student what is concretely expected. • Rubrics are used for “high stake assessment” and “formative assessment” (Arter & McTighe, 2001; Busching, 1998; Perlman, 2003). • Rubrics focus on the relationship between competencies, criteria, and indicators and are organized along mastery levels (Morgan, 1999).

  26. Rubrics: indicator-based assessment • Assessment objective • Criteria • Enriched with indicators in terms of observable behavior • Limited number of indicators

  27. Critical thinking rubric http://academic.pgcc.edu/~wpeirce/MCCCTR/Designingrubricsassessingthinking.html

  28. Assumptions about rubrics • Larger consistency in scores (reliability). • More valid assessment of complex behavior. • Positive impact on related learning process.

  29. Critical issues • Adoption of this assessment approach is hampered by teacher beliefs about the nature of evaluation (see e.g., Chong, Wong, & Lang, 2004) • The same holds for student beliefs (Joram & Gabriele, 1998) • Validity of the criteria and indicators (Linn, 1990) • Reliability of performance evaluation, e.g., when multiple evaluators assess and score performance (Flowers & Hancock, 2003).

  30. Research about rubrics use • Review article covering 75 studies about rubrics usage • Jonsson, A., & Svingby, G. (2007). The use of scoring rubrics: Reliability, validity and educational consequences. Educational Research Review, 2, 130–144. • (1) the reliable scoring of performance assessments can be enhanced by the use of rubrics, especially if they are analytic, topic-specific, and complemented with exemplars and/or rater training; • (2) rubrics do not facilitate valid judgment of performance assessments per se. However, valid assessment could be facilitated by using a more comprehensive framework of validity; • (3) rubrics seem to have the potential of promoting learning and/or improving instruction. The main reason for this potential lies in the fact that rubrics make expectations and criteria explicit, which also facilitates feedback and self-assessment.

  31. Conditions for effective usage • Check the frame of reference for the rubric: tasks, objectives • Train the users • Use multiple assessors: check inter-rater agreement • Developed by teacher and/or students!
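A minimal sketch of checking inter-rater agreement between two assessors who score the same set of works on the same rubric levels. The scores below are invented; Cohen's kappa is used here as one common statistic that corrects raw agreement for agreement expected by chance.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters scoring the same works on the same scale."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[level] * freq_b[level] for level in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical rubric scores (0-5) of two trained assessors on ten essays.
rater_a = [3, 4, 2, 5, 3, 4, 1, 3, 4, 2]
rater_b = [3, 4, 2, 4, 3, 4, 2, 3, 4, 2]
print(round(cohens_kappa(rater_a, rater_b), 2))  # ≈ 0.73
```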

  32. Development of rubrics • Choose quality criteria: 4 to 15 statements describing the nature of a criterion • Determine bandwidth to judge differences in quality related to the criterion: e.g., 0-5 or qualitative descriptors • Elaborate descriptors for each bandwidth level: concrete operational terms • Start from available student work!

  33. Rubrics: example • Writing a fiction story • Complex skill • Criteria?
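As one possible answer to the “Criteria?” question, a minimal sketch of a fiction-story rubric built along the development steps above: a handful of criteria, a 0-3 bandwidth, and a descriptor per level. The criteria, descriptors, and levels are invented for illustration.

```python
# Hypothetical fiction-story rubric: criterion -> {level: descriptor}.
rubric = {
    "plot": {
        0: "no recognisable storyline",
        1: "storyline present but with gaps",
        2: "coherent storyline",
        3: "coherent storyline with an original twist",
    },
    "characterisation": {
        0: "characters only named",
        1: "characters described superficially",
        2: "main character developed through actions and dialogue",
        3: "all main characters developed and consistent",
    },
    "language use": {
        0: "many errors hinder reading",
        1: "frequent errors, limited vocabulary",
        2: "few errors, varied vocabulary",
        3: "virtually error-free, rich and precise vocabulary",
    },
}

def score(work_levels):
    """work_levels: dict mapping each criterion to the attained level."""
    total = sum(work_levels[c] for c in rubric)
    maximum = sum(max(levels) for levels in rubric.values())
    return total, maximum

print(score({"plot": 2, "characterisation": 3, "language use": 1}))  # (6, 9)
```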

  34. Rubrics

  35. http://isucomm.iastate.edu/files/image/OV3-WrittenAssignmentRubric.png

  36. Rubrics • Rubric: scoring tool for a qualitative assessment of the level of an authentic or complex activity. • A rubric builds on criteria that are enriched with a scale on which mastery levels are indicated. • For each mastery level, standards are indicated that reflect that level. • A rubric indicates to both the teacher and the student what is concretely expected. • Rubrics are used for “high stake assessment” and for “formative assessment” (in function of learning) (Arter & McTighe, 2001; Busching, 1998; Perlman, 2003). • Rubrics focus on the relationship between competencies, criteria, and indicators and are organized along mastery levels (Morgan, 1999).

  37. Performance assessment • Rubrics focus on the relationship between competencies, criteria, and indicators and are organized along mastery levels (Morgan, 1999).

  38. Approach to rubric development • Choose criteria for the expected behaviour • 4 to 15 statements describing the criterion • Determine the bandwidth that expresses differences in attaining the criterion • E.g., 0-5 or qualitative terms • Work out a description for each value in the bandwidth • Concretely observable/ascertainable qualifications

  39. http://www.teach-nology.com/cgi-bin/presentation.cgi/

  40. http://teachers.teach-nology.com/cgi-bin/research.cgi

  41. More information? • Overview of tools, examples, theory, background, research: http://school.discoveryeducation.com/schrockguide/assess.html • Critical thinking rubrics: http://academic.pgcc.edu/~wpeirce/MCCCTR/Designingrubricsassessingthinking.html • Rubric generators: http://www.teach-nology.com/web_tools/rubrics/ • Interesting rubric sites: http://web.njit.edu/~ronkowit/teaching/rubrics/index.htm • Rubric APA research paper: http://web.njit.edu/~ronkowit/teaching/rubrics/samples/rubric_apa_research.pdf • K12 examples: http://school.discoveryeducation.com/schrockguide/assess.html • General intro and overview: http://web.njit.edu/~ronkowit/teaching/rubrics/index.htm

  42. http://www.teach-nology.com/web_tools/rubrics/

  43. Statements about evaluation • Learners should be trained to develop such rubrics themselves. • Staff should collaborate in developing formative and summative assessment rubrics. • Rubrics will help staff to be more concrete about their teaching and learning focus.

  44. Innovation in Assessment and Evaluation Prof. dr. Martin Valcke http://allserv.ugent.be/~mvalcke/CV/CVMVA.htm Ghent University Maputo July 2011
