Developing Progress Variables and Learning Progressions for the Carbon Cycle
The BEAR Assessment System
Karen Draney, Mark Wilson, Jinnie Choi, and Yongsang Lee, UC Berkeley

General Level Structure

Progress variables
A progress variable is used to represent a cognitive theory of learning consistent with a developmental perspective. It is grounded in the principle that assessments should be designed with a developmental view of student learning: the underlying purpose of assessment is to determine how students are progressing from less expert to more expert in the domain of interest, rather than limiting assessment to measuring competence after learning activities have been completed. With a progress variable, we seek to describe a continuum of qualitatively different levels of knowledge, from a relatively naïve level to a more expert one.

Outcome space
The “outcome space” refers to the mapping of all possible responses to an assessment item onto particular levels of the progress variable. The outcome space must be exhaustive (every student response must be mappable) and finite. A careful study of this mapping has engaged our group for many meetings. To ensure the reliability of our scoring methods and the stability of our variable structure, we have taken sets of student responses to many of our assessment items and had teams at both Michigan State University and UC Berkeley score them; all differences were resolved through discussion (below left). In addition, we have attempted to select from our data an “exemplary” student response for each item at every scoring level of the variable(s) with which it is associated, and to annotate why it is exemplary (below right).

Students’ actual responses (total: 81,819):
• It turns to ash. The heat makes it lose weight.
• It loses weight because if the fire goes on it, it burns it and turns into ashes.
• Because wood burns into ash, takes out the water and sap.
• It becomes ashes. Ashes don’t weigh anything.
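The double-scoring procedure described above can be checked quantitatively with exact agreement and a chance-corrected statistic such as Cohen’s kappa. The Python sketch below is a minimal illustration of that check; the ten response levels assigned to each team are invented for the example and are not the project’s actual data.

```python
# Sketch: checking scoring reliability between two rating teams.
# The level assignments below are hypothetical, for illustration only.

from collections import Counter

def exact_agreement(rater_a, rater_b):
    """Proportion of responses both teams scored at the same level."""
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters."""
    n = len(rater_a)
    p_obs = exact_agreement(rater_a, rater_b)
    count_a, count_b = Counter(rater_a), Counter(rater_b)
    levels = set(rater_a) | set(rater_b)
    # Expected agreement if both teams assigned levels independently.
    p_exp = sum((count_a[l] / n) * (count_b[l] / n) for l in levels)
    return (p_obs - p_exp) / (1 - p_exp)

# Scoring levels assigned by each team to the same ten responses (invented).
ucb = [2, 2, 3, 1, 2, 4, 3, 2, 1, 3]
msu = [2, 3, 3, 1, 2, 4, 3, 2, 2, 3]

print(exact_agreement(ucb, msu))            # 0.8
print(round(cohens_kappa(ucb, msu), 3))     # 0.71
```

In practice one would compute these per item and per variable; low kappa on a particular item (as with the match item discussed below) flags a scoring rule that the teams interpret differently.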
Carbon Cycle progress variables
• Structure of Systems
• Tracing Matter
• Tracing Energy
• Citizenship
• Change over Time

Current Work

Tracing Matter (TM) item scoring issue: BURNING A MATCH
Q: What happens to the wood of a match as the match burns? Why does the match lose weight as it burns?

UCB: Level 2
• It could be a simple description of visible events.
• Students might simply mention ashes without recognizing a chemical reaction, so we should not assume that students recognize a chemical reaction from these responses.

MSU: More than Level 3
• Ashes are macroscopic products, and students who identify ashes should be distinguished from students who don’t.
• “Burning into ash” is a chemical reaction, so we can assume that students identify a chemical process from these responses.

Future Work

Defining the levels
To define the levels of the progress variables, we started with a description of the types of reasoning necessary to function at a high level of environmental literacy. In addition, we examined both the previous literature and written and interview-based accounts from elementary, middle school, and high school students to define additional levels. The lowest levels of reasoning, generally used by middle to upper elementary school students, we refer to as the lower anchor. These include a lack of awareness of various “invisible” mechanisms (including microscopic and atomic-molecular structure, large-scale structure, gases, etc.); reliance on the senses rather than on data; and narrative rather than model-based reasoning. As the upper anchor, we selected the highest levels of performance seen in high school students after completing relevant science units.

Item design
Each item must represent some number of levels of one or more progress variables. The design of new items, and the selection of existing items, to represent the variables has occupied much of our time. In this process, the progress variables take form.

Measurement Model
Graphical representation of item-level difficulties and person performance levels for Structure of Systems

In the graphical representation to the right, each X represents a person’s performance on a collection of items related to Structure of Systems. The numbers on the right represent scoring levels (from the general level structure above) on individual items (e.g., 5.2 indicates a Level 2 performance on item 5). Less difficult tasks and less proficient persons appear toward the bottom of the page.

The use of a formal measurement model like this one to analyze our data allows us to examine our expectations for the assessment empirically. We can check that items we expect to be particularly easy or difficult in fact are, and we can find and examine item and person performances that are unusual or unexpected. In this case, we expected that students would use similar levels of reasoning across a wide variety of assessment tasks, leading to a “banding” effect in which the same scoring levels occur together across most or all of the items. In our preliminary analysis of the Structure of Systems data, we have indeed observed this.
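As a concrete illustration of the kind of display described above, here is a minimal Python sketch of a text-mode Wright map: person ability estimates (X) and item-step difficulties are placed on one logit scale, hardest at the top, so that “banding” is visible at a glance. All ability and difficulty values below are invented for illustration; in a real analysis they would come from fitting a Rasch-family (e.g., partial credit) model to the scored responses.

```python
# Minimal text-mode "Wright map" sketch.  Person abilities and item-step
# difficulties (both in logits) share one vertical scale, hardest at the
# top.  All numbers below are hypothetical, for illustration only.

# Hypothetical person ability estimates (logits).
person_abilities = [-1.4, -0.9, -0.3, 0.1, 0.2, 0.8, 1.1, 1.5]

# Hypothetical item-step difficulties: "5.2" = level 2 threshold of item 5.
item_steps = {
    "1.2": -1.0, "2.2": -0.8, "5.2": -0.7,   # level-2 steps band together
    "1.3":  0.1, "2.3":  0.3, "5.3":  0.2,   # level-3 steps band together
    "1.4":  1.2, "2.4":  1.4, "5.4":  1.3,   # level-4 steps band together
}

def wright_map(persons, steps, lo=-2.0, hi=2.0, band=0.5):
    """Return the rows of a crude Wright map, one row per logit band."""
    rows = []
    edge = hi
    while edge > lo:
        floor = edge - band
        xs = "X" * sum(floor <= p < edge for p in persons)
        labels = " ".join(sorted(l for l, d in steps.items()
                                 if floor <= d < edge))
        rows.append(f"{edge:5.1f} | {xs:<5} | {labels}")
        edge = floor
    return rows

for row in wright_map(person_abilities, item_steps):
    print(row)
```

With these invented values, the map shows the banding pattern the text describes: the level-2 thresholds of items 1, 2, and 5 cluster in one band, the level-3 thresholds in another, and the level-4 thresholds in a third.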