
Advancing Assessment of Quantitative and Scientific Reasoning

Presentation Transcript


  1. Advancing Assessment of Quantitative and Scientific Reasoning: A Progress Report on an NSF Project. Donna L. Sundre, Douglas Hall, Karen Smith

  2. Advancing Assessment of Quantitative and Scientific Reasoning Donna L. Sundre Center for Assessment and Research Studies (CARS) James Madison University www.jmu.edu/assessment/

  3. Overview of talk • Current NSF Research project • History of the test instrument • Phase I: Generalizability of the instrument • Phase II: Assessment Practice and Validity • Results from some of our partners: • James Madison University • Truman State University • St. Mary’s University

  4. Current NSF Project • 3-year grant funded by the National Science Foundation: “Advancing assessment of scientific and quantitative reasoning” • Hersh & Benjamin (2002) listed four barriers to assessing general education learning outcomes: • confusion; • definitional drift; • lack of adequate measures; and • the misconception that general education cannot be measured • This project addresses all of these concerns, with special emphasis on the dearth of adequate measures

  5. Partner Institutions • Michigan State University: State-supported; Research institution • Truman State University: State-supported; Midwestern liberal arts institution • St. Mary’s University (Texas): Independent; Roman-Catholic; Hispanic Serving institution • Virginia State University: State-supported; Historically Black institution

  6. Objectives of NSF project • Explore psychometric quality and generalizability of the QR and SR instruments • Build scientifically based assessment plans • Build assessment capacity at partner institutions • Develop new assessment models for adoption and adaptation • Document potential barriers to assessment practice and explore solutions • Create scholarly communities of assessment practitioners to sustain work

  7. History of the instrument • Tests have been under development since 1997 at JMU • Quantitative Reasoning (QR – 26 items) and • Scientific Reasoning (SR – 49 items) • Designed to measure 8 general education learning objectives • Test information and manuals available at http://www.jmu.edu/assessment/resources/prodserv/cbts.htm

  8. Project phases • Phase I: First Faculty institute (conducted July 2007 at JMU); followed by data collection, identification of barriers, and reporting of results • Phase II: Assessment practice and validity studies; research questions developed at July 2008 Faculty Institute; dissemination of findings and institutional reports

  9. Early content validity evidence • Results strongly support generalizability of test items • Truman State: 100% of items mapped • Michigan State: 98% (1 item not mapped) • Virginia State: 97% (2 items unmapped) • St. Mary’s: 92% (5 items not mapped) • Mapping of items alone is not sufficient • Balance across objectives must be obtained • Teams then created additional items to cover identified gaps in content coverage • 14 for MSU; 11 for St. Mary’s; 10 for Truman State; 4 for VSU

  10. Research at JMU • Highlights of our Findings: • Grades in relevant courses are positively correlated with QR and SR scores • Student QR and SR scores improve with additional course work • AP and JMU credits show greater improvement • Gains associated with transfer credits are not as marked • Students completing their requirements perform better than those who have not • Sophomores and juniors score higher than entering first-year students

  11. Research at JMU • Highlights of our Findings: • Sophomore students who have completed 3 or 4 courses score higher than sophomores who have not • We have established faculty ‘standards’ for performance • Many of our students are not meeting those high expectations • Of those completing requirements, 70% meet the QR standard and 73% meet the SR standard • These percentages are much higher than those observed for entering students or students who have not completed their requirements • We ‘filter’ our data using motivation Effort scores (a filtering sketch follows below) • This removes about 30-35 scores out of 1,100
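The motivation filter mentioned on this slide can be expressed as a simple threshold rule on the SOS Effort score. A minimal Python sketch, assuming examinee records in a pandas DataFrame and a hypothetical cutoff value (the slides do not state the threshold JMU actually applies):

```python
import pandas as pd

# Hypothetical examinee records: QR totals paired with SOS Effort scores.
scores = pd.DataFrame({
    "student_id": [1, 2, 3, 4, 5],
    "qr_score":   [18, 22, 9, 20, 15],
    "effort":     [21, 24, 8, 19, 23],
})

# Assumed cutoff -- 13 is used here purely for illustration, not JMU's actual rule.
EFFORT_CUTOFF = 13

filtered = scores[scores["effort"] >= EFFORT_CUTOFF]
removed = len(scores) - len(filtered)
print(f"Removed {removed} low-effort record(s); {len(filtered)} retained for analysis.")
```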

  12. Research Plan • Administration of QRSR to incoming freshman classes (Fall 2007 & 2008) • Administration to students with junior standing Spring 2008 and 2009 • Link results to various student groups and other academic data

  13. QRSR Freshman Results [results table not reproduced in the transcript] Notes: a – p < 0.05 vs. SET; b – p < 0.01; c – effect size

  14. QRSR Score Correlations [correlation table not reproduced in the transcript] Note: a – California Critical Thinking Skills Test

  15. QRSR Junior Results [results table not reproduced in the transcript] Notes: a – p < 0.01 vs. SET; b – vs. SET; c – vs. HSS

  16. Test Scores and Student Motivation Level • Student Opinion Survey (SOS) developed by JMU • 10 items, each rated on a 1-5 scale • Score range: 10-50 • 3 scores: Effort, Importance, and Total Motivation
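A minimal sketch of how the three SOS scores could be computed from the 10 item responses. The slide does not say which items form each subscale, so the 5/5 split below is an assumption for illustration, and any reverse scoring is ignored:

```python
import numpy as np

# One student's responses to the 10 SOS items (each rated 1-5).
responses = np.array([4, 5, 3, 4, 5, 2, 4, 3, 5, 4])

# Assumed item assignment: illustrative only.
effort_items = [0, 2, 4, 6, 8]
importance_items = [1, 3, 5, 7, 9]

effort = responses[effort_items].sum()           # subscale range 5-25
importance = responses[importance_items].sum()   # subscale range 5-25
total_motivation = responses.sum()               # total range 10-50

print(effort, importance, total_motivation)
```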

  17. Freshman QRSR And Motivation Levels

  18. Spring 2009 Junior QRSR And Motivation Levels

  19. Junior QRSR And Motivation Levels [chart not reproduced in the transcript] Note: a – p < 0.05 vs. SET

  20. Truman State University QRSR results

  21. Institution Characteristics • Public liberal arts • Highly selective • High economic diversity • Low ethnic diversity – predominantly white • Long history of assessment • Good infrastructure for data collection

  22. Questions • Reliability of QRSR vs. CAAP? • Correlations with number of science and quantitative classes? • Correlations with ACT? • Comparison of majors vs. non-majors? • Correlation with STAT 190 performance? • Comparison of juniors’ scores to first-year students’?

  23. QRSR Administration • Juniors • Part of normal junior testing • Spanned two academic years: Fall 2007 to Spring 2009 • All juniors participate – roughly a 50/50 split between the JMU QRSR and the CAAP science and math tests • Paper-and-pencil administration • 2,283 total • Smaller-scale study of first-year students • Invitations to instructors of first-year experience courses • Online administration • 135 total • Both versions include 10 additional items for coverage of outcomes

  24. RQ 1: How does the reliability of the QRSR compare to the CAAP? Overall reliability is comparable • CAAP: .84 - .86* • QRSR, Juniors 2007-2008: .80 (calculated) • QRSR, Juniors 2008-2009: .81 (calculated) • QRSR, First-year students Fall 2008: .86 (calculated) (*http://www.act.org/caap/pdf/handbook/Chapter4.pdf; no item data available)
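The slide does not name the reliability coefficient used; for dichotomously scored multiple-choice items a common choice is Cronbach's alpha (equivalent to KR-20 for 0/1 data). A sketch of that calculation on made-up item-score data, purely to illustrate how the "(calculated)" values could be obtained:

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Cronbach's alpha for an examinees-by-items matrix of 0/1 item scores."""
    k = item_scores.shape[1]
    item_variances = item_scores.var(axis=0, ddof=1)
    total_score_variance = item_scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_score_variance)

# Made-up data: 6 examinees by 5 dichotomously scored items.
X = np.array([
    [1, 1, 0, 1, 1],
    [1, 0, 0, 1, 0],
    [0, 0, 0, 0, 1],
    [1, 1, 1, 1, 1],
    [0, 1, 0, 0, 0],
    [1, 1, 1, 0, 1],
])
print(round(cronbach_alpha(X), 3))
```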

  25. Reliability of outcome-based SUBSCALES (Juniors 07-08) • Physical science outcome 1 – 38 items: .639 • Physical science outcome 2 – 26 items: .579 • Physical science outcome 3 – 9 items: .460 • Physical science outcome 4 – 8 items: .189 • Life science outcome 1 – 38 items: .639 • Life science outcome 2 – 26 items: .579 • Life science outcome 3 – 19 items: .563 • Life science outcome 4 – 5 items: .174 • Life science outcome 5 – 16 items: .518 • Math outcome 1 – 27 items: .666 • Math outcome 2 – 5 items: .274 • Math outcome 3 – 22 items: .609 • Math outcome 4 – 5 items: .439 • Math outcome 5 – 5 items: .396
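The lower coefficients for the short subscales are largely a test-length effect. The Spearman-Brown prophecy formula (standard psychometrics, not something presented on the slides) projects what a subscale's reliability would be at a different length; for example, the 5-item Math outcome 2 subscale (.274) projects to roughly .67 at 27 items, in line with the 27-item Math outcome 1 value:

```python
def spearman_brown(reliability: float, length_factor: float) -> float:
    """Projected reliability when test length is multiplied by `length_factor`."""
    return (length_factor * reliability) / (1 + (length_factor - 1) * reliability)

# Math outcome 2: 5 items, alpha = .274, projected to 27 items (factor 27/5).
print(round(spearman_brown(0.274, 27 / 5), 3))  # ~0.67
```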

  26. RQ 2:How do QRSR scores and CAAP scores correlate with number of classes taken (at Truman) in science and quantitative areas?

  27. RQ 3: How do ACT science and math subscores correlate with science and math subscores on the two assessment instruments? [correlation table not reproduced in the transcript]

  28. RQ 4: Does the QRSR discriminate science/math majors from non-science/math majors? Yes. • Science and math majors: 85.3% • Other majors average 79.7% These differences are statistically significant for the overall score (t(584) = 5.85, p < .001) and for each of the outcome subscores.
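A hedged sketch of the independent-samples t-test behind the reported t(584) = 5.85. The scores, group sizes, and spread below are simulated stand-ins (the reported degrees of freedom imply 586 students in total, but the slide does not give the split), so the printed statistic will not reproduce the reported value:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated percent-correct scores; sample sizes and SDs are invented for illustration.
majors = rng.normal(loc=85.3, scale=9.0, size=200)       # science/math majors
non_majors = rng.normal(loc=79.7, scale=9.0, size=386)   # all other majors

t, p = stats.ttest_ind(majors, non_majors)  # independent-samples t-test
print(f"t({majors.size + non_majors.size - 2}) = {t:.2f}, p = {p:.3g}")
```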

  29. RQ 5: Does STAT 190 predict student performance on the QRSR? Too few students without STAT 190 credit to compare those with the course vs. those without. Correlations with Truman STAT 190 course grades: • QRSR: .318 • CAAP Math: .374 • CAAP Science: .282
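The values on this slide are presumably Pearson correlations between course grades and test scores. A minimal sketch with invented paired data, just to show the calculation:

```python
from scipy import stats

# Invented paired data: STAT 190 grades (4-point scale) and QRSR totals for the same students.
grades = [4.0, 3.3, 3.7, 2.0, 3.0, 2.7, 4.0, 1.7]
qrsr = [68, 61, 66, 52, 59, 55, 70, 50]

r, p = stats.pearsonr(grades, qrsr)
print(f"r = {r:.3f}, p = {p:.3f}")
```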

  30. RQ 6: How do scores of juniors compare to those of first-year students? First-year scores and junior scores are significantly different, p < .01, effect size .215 (for juniors 2007-08)
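The slide does not say which effect-size index the .215 refers to; a standardized mean difference such as Cohen's d is a common choice for a two-group comparison. A sketch of that calculation on invented scores:

```python
import numpy as np

def cohens_d(group_a, group_b) -> float:
    """Standardized mean difference using the pooled standard deviation."""
    a, b = np.asarray(group_a, float), np.asarray(group_b, float)
    pooled_sd = np.sqrt(((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
                        / (len(a) + len(b) - 2))
    return (a.mean() - b.mean()) / pooled_sd

# Invented scores for illustration only; the slide reports an effect size of .215.
juniors = [62, 58, 65, 60, 59, 63, 61]
first_years = [57, 60, 55, 59, 58, 56, 61]
print(round(cohens_d(juniors, first_years), 3))
```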

  31. Challenges • Collecting data from first-year students • Estimates of student motivation • Sharing the model outside quantitative and scientific disciplines • Using the data in a changing curriculum

  32. Uses & Future directions • Considered as part of general education curriculum reform • Data analysis from 2008-09 juniors continues

  33. Thank you for coming! Questions? All slides will be made available from the NASPA website in a week or two.
