
Systematic Methods of Evaluation and Monitoring



Presentation Transcript


  1. Systematic Methods of Evaluation and Monitoring Early Reading First Conference Wild Horse Pass Chandler, Arizona October 2004

  2. ACE Partners • Gadsden Elementary School District - lead fiscal agency • Cocopah Indian Tribe Head Start • Somerton Elementary School District Head Starts • WACOG-identified Head Starts • Arizona State University - lead, curriculum and alignment • Southwest Institute - lead, community development and evaluation

  3. Olivia Zepeda - Project Director • James Christie - Co-Project Director • Jay Blanchard - Co-Project Director • Tanis Bryan - Co-Project Director • Karen Burstein - Evaluator • Robert Bernhart - Business Director • Marla Chamberlain - Mentor • Elizabeth “Bibi” Garza - Mentor • Lolinda Lee - Mentor • Roselia Ramirez - Mentor • 5 Parent Liaisons • 16 Head Start Teachers • 19 Instructional Aides • Technology staff from ASU • Consultants in Early Literacy and Head Start Faculty and Staff

  4. Academic Centers of Excellence (ACE)* • Rosalia Ramirez, Teacher Mentor, WACOG • Marla Chamberlain, Teacher Mentor, Gadsden Schools • Brenda Casillas, Teacher, WACOG • Connie Felix, Teacher, WACOG • Tanis Bryan, Karen Burstein, SWI • James Christie, Jay Blanchard, ASU • Cevriye Ergul, Pen-Chiang Chao, ASU/SWI • *ACE is a collaborative project between Gadsden School District, WACOG Head Start, Somerton School District, Cocopah Indian Reservation, Southwest Institute for Families and Children with Special Needs (SWI), and Arizona State University

  5. Evaluation and Monitoring • Standardized, Norm-Referenced • Twice annually • All participants and a matched control group • Measure the progress of the PROJECT • Curriculum-Based Measurement • Weekly • A small group of children from each class, selected on the basis of standardized test results and teacher report • Monitor and adjust instruction for individual children

  6. STANDARDIZED ASSESSMENTS • Individually or group administered • Compares individual to group • Used for program evaluation and planning • CURRICULUM-BASED ASSESSMENTS • Individually administered • Compares individual to curriculum • Used for individual evaluation and instructional planning • EFFECTIVE PROGRAM EVALUATION AND MONITORING CONSISTS OF A BALANCE BETWEEN STANDARDIZED, NORM-REFERENCED AND CURRICULUM-BASED ASSESSMENTS

  7. Standardized Testing

  8. ACE Assessment • Initial assessment: we want to know what children know at the beginning of the project in the areas of: • Oral Language - understanding of language and spoken vocabulary • Phonological Awareness - recognizes/demonstrates that language is made up of individual sounds • Alphabet Knowledge - demonstrates the relationship between sounds and letters • Concepts of Print - demonstrates how to use printed materials, e.g., top of page, bottom, beginning, end, word vs. letter

  9. Standardized assessments are professionally developed tests administered under standard conditions, producing scores that can be used to evaluate programs. The type of standardized test required by ERF is designed to determine whether groups of children within a program are progressing or meeting learning standards as compared to children not engaged in ERF programs. • Scores on standardized tests also indicate how children within each tested program at each site performed, how groups performed, and how groups of children across the state performed. • NCLB requires that scores for schools and districts be disaggregated so that the performance of children from different subgroups can be examined. • Standardized tests aligned with state standards are essential for administrators to determine whether schools/sites are meeting their goals under NCLB.

  10. Standardized testing achieves standardization by norming practices, machine scoring of multiple-choice questions, precise instructions for administration, and standard formats for tests and recording of responses. The results can then be used to draw inferences about the state of cohorts or individuals as compared to an established standard.

  11. Standardization Procedures • Pilot testing • Controlled environment (no distractions) • Trained observers and evaluators • Written administration instructions • Guidelines for motivating children and responding to their questions • Cross-check procedures for scoring and data input
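The last bullet, cross-checking scoring and data input, is commonly handled by double entry: two people independently key in the same score sheets, and a script flags any disagreements. A minimal sketch in Python; the file names and the child_id column are hypothetical, not part of the ACE project's actual procedures:

```python
import csv

def cross_check(file_a, file_b, key="child_id"):
    """Compare two independently entered score files and report mismatches."""
    def load(path):
        with open(path, newline="") as f:
            return {row[key]: row for row in csv.DictReader(f)}

    entry_a, entry_b = load(file_a), load(file_b)
    mismatches = []
    for child_id, row_a in entry_a.items():
        row_b = entry_b.get(child_id)
        if row_b is None:
            mismatches.append((child_id, "missing in second file", None, None))
            continue
        for field, value_a in row_a.items():
            if row_b.get(field) != value_a:
                mismatches.append((child_id, field, value_a, row_b.get(field)))
    return mismatches

# Hypothetical usage: both files hold the same scores, keyed in by different people.
for child_id, field, a, b in cross_check("scores_entry1.csv", "scores_entry2.csv"):
    print(f"Check {child_id}: {field!r} entered as {a!r} vs {b!r}")
```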

  12. Assessing Measures • Make sure that the tests measure what you want them to measure (content validity) • Choose measures that have been validated on populations of children with similar characteristics of age, language, cognitive ability, and regional background • Train assessors to administer tests consistently, and compare results across assessors so that examiner effects can be detected and controlled

  13. ACE Measures Individually administered, twice annually, approximately 25 minutes per child. • IDEA Individual Proficiency Test - Oral Language, English Proficiency • Peabody Picture Vocabulary Test-III - Receptive Vocabulary • Predictive Assessment of Reading - Vocabulary, Phonemic Awareness, Rapid Object Naming (Spanish and English) • Head Start Letter Naming • Get Ready to Read - Vocabulary, Concepts of Print, Letter Naming, Phonemic Awareness • Get It, Got It, Go! (“Baby DIBELS”) - Vocabulary, Phonemic Awareness

  14. Norm-Referenced Tests • Standardized tests whose results compare individual scores to those of a norm, or group average, typically gathered from individuals of like age. • Results are reported in scores such as percentiles, stanines, and normal curve equivalents.
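All of these score types are transformations of the same underlying z-score, i.e., the raw score expressed relative to the norm group's mean and standard deviation. A short sketch of the standard conversions, assuming the norm mean and SD are known:

```python
from statistics import NormalDist

def norm_referenced_scores(raw, norm_mean, norm_sd):
    """Convert a raw score to common norm-referenced score types."""
    z = (raw - norm_mean) / norm_sd              # standard (z) score
    percentile = NormalDist().cdf(z) * 100       # percent of norm group scoring lower
    nce = 50 + 21.06 * z                         # normal curve equivalent (mean 50, SD 21.06)
    stanine = min(9, max(1, round(z * 2 + 5)))   # 1-9 scale with mean 5, SD 2
    return {"z": z, "percentile": percentile, "nce": nce, "stanine": stanine}

# Example: a child scores 85 where the norm group averages 100 (SD 15):
# z = -1.0, roughly the 16th percentile, NCE about 28.9, stanine 3.
print(norm_referenced_scores(85, 100, 15))
```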

  15. Standardization is of little importance if the results of assessment are to be used in isolation from all other factors. In other words, if the purpose is simply to learn about the state of a single subject/child, a unique assessment (CBM) might be devised to furnish the information desired. However, if the assessment is to be used for the purpose of comparison, generalization, or decision-making, then standardization is essential.

  16. Monitoring Children’s Learning • A major responsibility of educators is to teach children academic skills. • Teaching includes: • Monitoring whether children are learning what is being taught and • Making changes in instruction when children are not making adequate progress.

  17. Testing to Monitor Children’s Learning • Testing is one way of monitoring what children are learning. • When professionals and the public think about testing, they tend to think in terms of standardized test scores. • “No Child Left Behind” includes extensive mandates to use standardized tests to hold schools accountable for gains in children’s academic performance.

  18. Limitations of Standardized Tests The tests typically used: • are normed on a national average • lack overlap with the local curriculum • are given infrequently • are not sensitive to short-term academic gains • do not provide teachers with the information they need to make classroom instructional decisions

  19. Curriculum-Based Measurement (CBM) • Teachers need, and value, information that is directly applicable to their day-to-day practice and linked to specific instructional objectives and learning concepts. • An alternative to standardized tests is Curriculum-Based Measurement (CBM).

  20. CBM • CBM is a method of monitoring student educational progress using materials taken from the child's classroom curriculum. • CBM tests are constructed from the lessons taught. • CBM is Fast! (2 minutes, weekly) • CBM is Inexpensive and Easy!

  21. CBM • CBM allows teachers to: • Continuously measure their students' growth in performance • Determine whether their students are learning at the expected rate • CBM provides data for teachers to evaluate their instructional strategies when students fail to demonstrate expected growth.
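The "expected rate" is commonly operationalized as the slope of a trend line fit to the weekly scores. A minimal sketch (Python 3.10+), with hypothetical weekly data:

```python
import statistics

def weekly_growth_rate(scores):
    """Slope of the least-squares trend line through weekly CBM scores:
    items correct gained per week."""
    weeks = range(1, len(scores) + 1)
    return statistics.linear_regression(weeks, scores).slope

# Hypothetical weekly letter-naming scores for one child.
print(weekly_growth_rate([3, 4, 4, 6, 7, 8]))  # about 1.03 letters/week
```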

  22. Advantages of CBM over other testing methods • Based on classroom curriculum • Individual referenced • Peer referenced • Provides teachers with information to make instructional change decisions • Allows for direct and continuous monitoring of student achievement related to expected curriculum outcomes • Highly sensitive to student growth • Time efficient • Cost effective • Produces results that are easy to understand

  23. Purpose of CBM in the ACE • Each teacher in the ACE project is using weekly CBM in order to: • Monitor children’s acquisition of receptive and expressive vocabulary, letter recognition, and phonemic awareness skills. • Monitor how well children with special needs and at-risk children are doing relative to higher-achieving children. • Make adaptations in instruction based on comparisons of individual and group data.

  24. CBM Procedures Steps to CBM: • 1. Identify children (each ACE classroom is monitoring 4 children: 1 receiving special education services, 1 at risk for special education (referred but did not qualify), 1 typical achiever, and 1 “super star”). • The “super star” child defines what is possible; the average-achieving child “sets the standard” for what is reasonable. • Teachers’ selections of average and super-star achievers will be compared with the results of standardized tests administered at the beginning of school.

  25. CBM Procedures 2. Develop CBM: Collect samples from each week’s lessons (e.g., receptive vocabulary, expressive vocabulary, alliteration). 3. Put alliteration items on cards for easier administration. 4. Prepare a separate score sheet for each child. The score sheet includes the vocabulary and letters being tested as well as the materials needed to assess (e.g., posters, small books).

  26. Sample Score Sheet
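The original slide displayed a score sheet image. Purely as an illustration of the kind of record described above (all fields and items here are hypothetical, not the ACE project's actual sheet), a weekly sheet might be structured like this:

```python
# A hypothetical weekly CBM score sheet as a simple data structure.
score_sheet = {
    "child": "Child A",
    "week": 6,
    "materials": ["unit poster", "small book"],
    "items": {
        "receptive_vocabulary": {"tested": ["tractor", "harvest"], "correct": 2},
        "expressive_vocabulary": {"tested": ["seed", "field"], "correct": 1},
        "letter_naming": {"tested": ["M", "S", "T"], "correct": 3},
        "alliteration": {"tested": 4, "correct": 2},
    },
}

total_correct = sum(task["correct"] for task in score_sheet["items"].values())
print(f'{score_sheet["child"]}, week {score_sheet["week"]}: {total_correct} correct')
```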

  27. Curriculum-Based Measurement Procedures • Administer CBM weekly. • Teachers select a time that allows them to be consistent each week in administering the CBM. • Ideally, teachers should use the same wording, speed, and wait time for each child. In reality, there are time differences based on the child’s needs. • Record the number of items answered correctly on the score sheet.

  28. Aim Line • Graph the results for the children in each classroom. • The average-achieving child’s performance defines the “aim line.” • In the ACE project, we define the aim line as the expected performance based on the typically developing child’s performance.
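A minimal sketch of such a graph, assuming matplotlib and using hypothetical weekly scores; the typically developing child's scores supply the aim line:

```python
import matplotlib.pyplot as plt

weeks = list(range(1, 9))
typical = [4, 5, 5, 6, 7, 7, 8, 9]    # typically developing child: defines the aim line
monitored = [2, 3, 3, 3, 4, 5, 6, 6]  # child being monitored

plt.plot(weeks, typical, "k--", label="Aim line (typical child)")
plt.plot(weeks, monitored, "o-", label="Monitored child")
plt.xlabel("Week")
plt.ylabel("Items correct")
plt.legend()
plt.show()
```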

  29. Curriculum-Based Measurement (Cont.) 8. Develop a line for each child on a graph

  30. Interpreting the Results Decision Rule 1: If three consecutive data points fall below the aim line for a child (see sample graph), the teacher asks why: • Upsetting events in the home? • Illness? • Absences? • Behavior difficulties? • CBM items too difficult?

  31. Interpreting the Results

  32. Interpreting the Results Instructional changes for Decision Rule 1: • Change the classroom environment • Change the student's seating • Change the time of instruction or testing • Modify instruction • Change the type of instruction (small-group or one-on-one) • Give more time for guided and independent practice of skills taught • Reteach when necessary • Review skills • Simplify directions

  33. Interpreting the Results Decision Rule 2: If the child's performance is neither consistently above nor consistently below the aim line (i.e., no run of more than three consecutive data points on one side), don't make any instructional changes (see the sample graph for Decision Rule 2).

  34. Administration of Curriculum-Based Measurement (Cont.)

  35. Interpreting the Results Decision Rule 3: If the child's performance is above the aim line for three consecutive data points, consider enhancing and enriching the instructional program (see the sample graph for Decision Rule 3).
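Taken together, the three decision rules reduce to counting consecutive data points on one side of the aim line. A minimal sketch, assuming weekly scores and aim-line values are stored in parallel lists:

```python
def decision(scores, aim_line, run_length=3):
    """Apply the three CBM decision rules to the most recent data points."""
    recent = list(zip(scores, aim_line))[-run_length:]
    if len(recent) < run_length:
        return "Keep collecting data."
    if all(score < aim for score, aim in recent):
        return "Rule 1: ask why, then modify instruction."
    if all(score > aim for score, aim in recent):
        return "Rule 3: consider enhancing and enriching instruction."
    return "Rule 2: no instructional change."

aim = [4, 5, 5, 6, 7, 7]
print(decision([2, 3, 3, 3, 4, 5], aim))  # three weeks below the aim line -> Rule 1
```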

  36. Interpreting the Results

  37. Using the Results • Continue to measure and monitor. • Regularly inform parents about their child's progress toward the goals. • A written explanation of the individual child’s graph might be sent home, describing the child’s progress. • Note positive changes in the child’s progress on the graph. • Arrange conferences for parents who might need assistance in understanding the concept and an explanation of the CBM.

  38. Data from ACE CBM— Feb to May, 2004

  39. CBM Mean Percentage of Correct Responses for the Three Groups (standard deviations in parentheses)

                                Typical (n = 25)   At Risk (n = 7)   Special Needs (n = 21)
  Letter Identification Unit 1  81.00 (30.4)       59.53 (34.5)      50.20 (37.4)
  Letter Identification Unit 2  72.92 (30.4)       61.63 (27.3)      51.15 (34.8)
  Letter Identification Unit 3  76.08 (28.7)       50.00 (32.3)      60.04 (27.2)
  Show Me Unit 1                73.29 (14.9)       48.19 (14.3)      69.23 (24.0)
  Show Me Unit 2                78.83 (13.4)       56.70 (19.4)      70.66 (25.5)
  Show Me Unit 3                82.37 (13.5)       46.67 (27.1)      68.64 (21.7)
  Tell Me Unit 1                59.38 (20.3)       47.56 (17.1)      50.71 (26.0)
  Tell Me Unit 2                59.83 (17.6)       47.23 (15.1)      52.69 (25.7)
  Tell Me Unit 3                75.07 (19.0)       57.51 (14.2)      57.48 (25.0)
  Alliteration                  70.31 (27.1)       69.96 (21.7)      50.49 (29.6)

  40. CBM Mean Percentage of Correct Responses for the Three Groups [Bar chart comparing Typical (n = 25), At Risk (n = 7), and Special Needs (n = 21) groups on Letter Identification, Show Me, Tell Me, and Alliteration]

  41. An Example of CBM Performance: Typical vs. Special Needs [Chart comparing one typical child and one child with special needs on Letter Identification, Show Me, Tell Me, and Alliteration]

  42. Comparisons Between the Three Groups
  Statistically Significant Group Differences (ANOVA):
  Letter Identification Unit 1: F (2, 50) = 4.87, p = .012
  Letter Identification Unit 3: F (2, 53) = 3.25, p = .047
  Show Me Unit 1: F (2, 50) = 4.80, p = .012
  Show Me Unit 2: F (2, 53) = 3.73, p = .031
  Show Me Unit 3: F (2, 53) = 10.61, p < .001
  Tell Me Unit 3: F (2, 53) = 4.83, p = .012
  Alliteration: F (2, 53) = 3.43, p = .040
  Significant Post Hoc Comparisons:
  Letter Identification Unit 1: Typical (M = 81.0) > Special Needs (M = 50.2), p = .009
  Show Me Unit 1: Typical (M = 73.3) > At Risk (M = 48.2), p = .009
  Show Me Unit 2: Typical (M = 78.8) > At Risk (M = 56.7), p = .028
  Show Me Unit 3: Typical (M = 82.4) > At Risk (M = 46.7), p < .001; Typical (M = 82.4) > Special Needs (M = 60.6), p = .039; Special Needs (M = 60.6) > At Risk (M = 46.7), p = .027
  Tell Me Unit 3: Typical (M = 75.1) > Special Needs (M = 57.5), p = .015
  Alliteration: Typical (M = 70.3) > Special Needs (M = 50.5), p = .040
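Group comparisons of this kind can be run as a one-way ANOVA. A minimal sketch using scipy; the scores below are placeholders, not the project's data. A significant F would then be followed by post hoc pairwise comparisons (e.g., Tukey's HSD via statsmodels).

```python
from scipy import stats

# Hypothetical percent-correct scores on one CBM task, one value per child.
typical = [81, 92, 75, 88, 70, 95, 85]
at_risk = [48, 55, 60, 42, 58]
special_needs = [50, 45, 62, 55, 40, 58]

f_stat, p_value = stats.f_oneway(typical, at_risk, special_needs)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```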

  43. Comparisons Within the Three Groups
  Typical Children (n = 25):
  Letter Identification: No significant changes over time
  Show Me: Statistically significant, Wilks’ Λ = .71, F (2, 23) = 4.75, p = .019, multivariate η² = .29 (effect size); children’s “Show Me” performance improved over time: Unit 3 (M = 82.4) > Unit 2 (M = 78.8) > Unit 1 (M = 73.3)
  Tell Me: Statistically significant, Wilks’ Λ = .54, F (2, 23) = 9.92, p = .001, multivariate η² = .46 (effect size); children’s “Tell Me” performance improved over time: Unit 3 (M = 75.1) > Unit 2 (M = 59.8) > Unit 1 (M = 53.4)
  At-Risk Children (n = 7): No significant changes
  Children with Special Needs (n = 21): No significant changes
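The within-group tests above are repeated-measures analyses. As a rough stand-in, statsmodels' AnovaRM runs the univariate repeated-measures ANOVA; it does not reproduce the multivariate Wilks' Lambda statistic reported on the slide, and the data below are hypothetical:

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format data: one "Show Me" score per child per unit.
data = pd.DataFrame({
    "child": [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
    "unit":  ["u1", "u2", "u3"] * 4,
    "score": [70, 78, 84, 65, 70, 80, 72, 75, 81, 68, 74, 79],
})

result = AnovaRM(data, depvar="score", subject="child", within=["unit"]).fit()
print(result)
```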
