
ARE STUDENTS BETTER AT VALIDATION AFTER AN INQUIRY-BASED TRANSITION-TO-PROOF COURSE?


Presentation Transcript


  1. ARE STUDENTS BETTER AT VALIDATION AFTER AN INQUIRY-BASED TRANSITION-TO-PROOF COURSE? Annie Selden & John Selden, New Mexico State University. 17th RUME Conference, Denver, CO, February 28, 2014

  2. Research Question Would taking an inquiry-based transition-to-proof course that emphasized proof construction significantly enhance students’ proof validation abilities?

  3. Why would anyone think they would be better at validation after a transition-to-proof course? Proving requires at least implicitly validating (i.e., reading & checking) one's own proof before submitting it (for a grade or publication).

  4. Setting: The Course and the Students • Students were interviewed at the end of a transition-to-proof course. • The course was inquiry-based, i.e., students were given notes with definitions, requests for examples, and statements of theorems to prove. They proved the theorems outside of class, presented their proofs in class, and received extensive critiques. • Topics included sets, functions, continuity, and beginning abstract algebra. The teaching aim was to have students experience constructing as many different kinds of proofs as possible. • In addition to the homework, which consisted of attempts to prove the next 2 or 3 theorems in the notes, and the in-class presentations of their attempted proofs, the students had mid-term and final examinations.

  5. The Students in the Study • Sixteen of the 17 enrolled students opted to participate in the study for extra credit. • Of these, 81% (13 of 16) were mathematics majors, secondary education mathematics majors, or majors in mathematics-related fields (e.g., electrical engineering, civil engineering, computer science).

  6. Methodology: The Conduct of the Interviews • Interviews were conducted outside of class during the final two weeks of the course. • The students received extra credit for participating and signed up for one-hour time slots. They were told that they need not study for this extra credit session. • The protocol was the same as that of our earlier validation study (Selden & Selden, 2003).

  7. Upon arrival, participants were first informed that they were going to validate four student-constructed "proofs" of a single number theory theorem. • They were asked to think aloud and to decide whether the purported proofs were indeed proofs. • Participants were encouraged to ask clarification questions and informed that the interviewer would decide whether a question could be answered. • They were given a Fact Sheet about multiples of 3.

  8. There were four phases to the interview: • A warm-up phase during which the students gave examples of the theorem: For any positive integer n, if n² is a multiple of 3, then n is a multiple of 3, and then tried to prove it (a standard argument is sketched below); • A second phase during which they validated, one-by-one, the four purported (student-constructed) proofs of the theorem; • A third phase during which they were able to reconsider the four purported proofs (presented all together on one sheet of paper); and • A fourth, debrief phase during which they answered questions about how they normally read proofs.
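For reference, a standard argument for the interview theorem goes by contraposition. The sketch below is our own illustration and is not one of the four purported proofs the participants saw.

  \begin{proof}
  % Contrapositive sketch of the interview theorem (illustrative only).
  Suppose $n$ is a positive integer that is not a multiple of $3$. Then
  $n = 3k + 1$ or $n = 3k + 2$ for some nonnegative integer $k$. In the first
  case, $n^2 = 9k^2 + 6k + 1 = 3(3k^2 + 2k) + 1$; in the second,
  $n^2 = 9k^2 + 12k + 4 = 3(3k^2 + 4k + 1) + 1$. Either way, $n^2$ leaves a
  remainder of $1$ on division by $3$, so $n^2$ is not a multiple of $3$.
  By contraposition, if $n^2$ is a multiple of $3$, then $n$ is a multiple of $3$.
  \end{proof}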

  9. Participants wrote as much or as little as they wanted on the sheets with the purported proofs. • Participants took as much time as they wanted to validate each proof, with one participant initially taking 25 minutes to validate "Proof (a)". • The interviewer answered an occasional clarification question, such as the meaning of the vertical bar in 3|n², but otherwise only took notes and handed the participants the next printed page when they were ready for it. • The interviews were audio recorded. N.B. Because the purported proofs were constructed by undergraduate students and because the participants in this study were also undergraduate students, henceforth we refer to the undergraduates in this study as "participants" to avoid confusion.

  10. Analysis of the Data The following data were collected: • The sheets on which the participants wrote; • The interviewer's notes; and • The recordings of the interviews. These data were analyzed multiple times to note anything that might be of interest. If one wanted to give this a fancy name, one might call this the "constant comparative" method.

  11. Observations and Tallies Were Made of Participants' Actions • The number of correct judgments made by each participant individually; • The percentage of correct judgments made by the participants (as a group) at the end of Phase 2 and again at the end of Phase 3; • The validation behaviors of the participants that were observed by the interviewer; • The validation comments that the participants proffered;

  12. The amount of time taken by each participant to validate each of the purported proofs; • The number of times each participant reread each purported proof; • The number of participants who underlined or circled parts of the purported proofs; • The number of times the participants substituted numbers for n; and • The number of times the participants consulted the Fact Sheet.

  13. Observed Participants' Validation Behaviors • Participants took the task very seriously. Table 1: Time (in minutes) taken initially to validate the purported proofs (during Phase 2)

  14. Beneficial participant behaviors • Underlined, or circled, parts of the purported proofs (100%, 16); • Pointed with their pencils or fingers to words or phrases as they read along linearly (50%, 8); • Checked the algebra, for example, by "foiling" (3n+1)² (62.5%, 10; a worked check appears below); • Substituted numbers for n to check the purported equalities (37.5%, 6); • Reread all, or parts of, the purported proofs (87.5%, 14); • Consulted the Fact Sheet to check something about multiples of 3 (56.25%, 9).
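To illustrate the kind of checking involved (our example, not a participant's written work), expanding the square and substituting a small value of n both confirm the relevant computation:

  % "Foiling" check and a numerical spot-check (illustrative only).
  \[
    (3n+1)^2 = 9n^2 + 6n + 1 = 3(3n^2 + 2n) + 1 ,
  \]
  so $(3n+1)^2$ is one more than a multiple of $3$. A spot-check with $n = 2$:
  $(3 \cdot 2 + 1)^2 = 49$, $9 \cdot 4 + 6 \cdot 2 + 1 = 49$, and
  $49 = 3 \cdot 16 + 1$ is not a multiple of $3$, consistent with the Fact Sheet.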

  15. Summarizing the Above Participants used: • Focus/reflection aids (1. & 2.); • Checked computations or tested examples (3. & 4.); • Revisited important points, perhaps as a protection against "mind wandering" (5.); and • Checked their own knowledge (6.).

  16. Participants’ Evaluative Comments • For example, CY objected to “Proof (b)” being referred to as a proof by contradiction. He insisted it was a contrapositive proof and twice crossed out the final words “we have a proof by contradiction”. • Fourteen (87.5%) mentioned the lack of a proof framework, or an equivalent, even though the interviewer had informed them at the outset that the students who wrote the purported proofs had not been taught to construct proof frameworks.
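For readers unfamiliar with the term, a (first-level) proof framework consists, roughly, of writing the hypotheses at the top and the conclusion at the bottom of the proof, leaving the mathematical interior to be filled in. A minimal sketch for the interview theorem, in our own wording rather than any participant's, might look like this:

  \begin{proof}
  % First-level proof framework sketch (illustrative only; the interior
  % of the argument still has to be supplied).
  Let $n$ be a positive integer and suppose $n^2$ is a multiple of $3$.
  % ... the body of the proof goes here ...
  Therefore $n$ is a multiple of $3$.
  \end{proof}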

  17. Lack of clarity in the way the purported proofs were written, with parts of them described as "confusing", "convoluted", "a mess", or not "making sense" (68.75%); • The notation, which one participant called "wacky"; • The fact that "Proof (d)" started with n, then introduced m, and did not go back to n; • Not knowing what the students who had constructed the purported proofs knew or were allowed to assume; • Having too much, or too little, information in a purported proof. For example, one participant said there was "not enough evidence for a contradiction" in "Proof (b)"; • The "gap" in "Proof (c)", which was remarked on by six participants.

  18. Individual Participants' Comments • Local comments on "Proof (a)": "[I] don't like the string of = s." (MO). "3n+1, if n=1, is not odd, [rather it] would be even." (KW). "This [pointing to n² = 9n²] isn't equal." (AF). • Overall comments on "Proof (a)": "[It] needs more explanation -- I can't see where they are going." (CL). "[The] first case doesn't seem right." (CY). "Not going where they need to go." (KW). "Not a proper proof". (FR). "Partial proof". (MO).

  19. Participants' comments, above, do not focus just on whether the theorem has been proved, that is, on validation. We suspect participants might have had difficulty separating matters of validity from matters of style and personal preference, or even from their own confusion.

  20. What They Say They Do When Reading Proofs • All said they reread a proof several times or as many times as needed. • Some volunteered that they work through proofs with an example, write on scratch paper, read aloud, or look for the framework. • Furthermore, ten (62.5%) said they tell if a proof is correct by whether it “makes sense” or they “understand it”. • Four (25%) said a proof is incorrect if it has a mistake, and four (25%) said a proof is correct “if they prove what they set out to prove.” • All said that they check every step of a proof or read a proof line-by-line.

  21. Answer to Research Question • The participants in this study took their task very seriously, but made fewer final correct judgments (73% vs. 81%) than the undergraduates we studied earlier despite, as a group, being somewhat further along academically. • In this study, 56% (9 of 16) of the participants were in their 4th year of university, whereas just 37.5% (3 of 8) of the undergraduates in our earlier study were in their 4th year.

  22. Because the participants in this study were completing an inquiry-based transition-to-proof course emphasizing proof construction, we conjectured they would be better at proof validation than those at the beginning of our earlier transition-to-proof course, but they weren't. • If one wants undergraduates to learn to validate "messy" student-constructed purported proofs in a reliable way, one needs to teach validation explicitly.

  23. We stress this because it is counterintuitive. • As students, most mathematicians have received considerable implicit proof construction instruction through feedback on assessments. However, most have received no validation instruction, yet are very skilled at it.

  24. Future Research • In addition to proof validation, there are three additional related concepts in the literature: proof comprehension, proof construction, and proof evaluation. • There has been little research on how these four concepts are related. • In this study, we investigated one of these relationships: whether improving undergraduates' proof construction abilities would enhance their proof validation abilities, and we obtained some negative evidence.

  25. Proof comprehension means understanding a (textbook or lecture) proof. Mejia-Ramos, Fuller, Weber, Rhoads, and Samkoff (2012) have given an assessment model for proof comprehension, and thereby described it in practical terms. • Examples of their assessment items include: Write the given statement in your own words. Identify the type of proof framework. Make explicit an implicit warrant in the proof. Provide a summary of the proof.

  26. Proof construction means constructing correct proofs at the level expected of mathematics students (depending on which year of their program of study they are in).

  27. Proof evaluation was described by Pfeiffer (2011) as "determining whether a proof is correct and establishes the truth of a statement (validation) and also how good it is regarding a wider range of features such as clarity, context, sufficiency without excess, insight, convincingness or enhancement of understanding." (p. 5).

  28. Finally, we feel that there is a need to develop characteristics of a reasonable learning progression for tertiary proof construction, going from novice (lower-division mathematics students) to competent (upper-division mathematics students), on to proficient (mathematics graduate students), and eventually to expert (mathematicians). N.B. The terms novice, competent, proficient, and expert have been adapted from the Dreyfus and Dreyfus (1986) novice-to-expert scale of skill acquisition.

  29. References • Dreyfus, H. L., & Dreyfus, S. E. (1986). Mind over machine: The power of human intuition and expertise in the age of the computer. Oxford: Blackwell. • Inglis, M., & Alcock, L. (2012). Expert and novice approaches to reading mathematical proofs. Journal for Research in Mathematics Education, 43(4), 358-390. • Ko, Y.-Y., & Knuth, E. J. (2013). Validating proofs and counterexamples across content domains: Practices of importance for mathematics majors. Journal of Mathematical Behavior, 32, 20-35. • Mejia-Ramos, J. P., Fuller, E., Weber, K., Rhoads, K., & Samkoff, A. (2012). An assessment model for proof comprehension in undergraduate mathematics. Educational Studies in Mathematics, 79(1), 3-18.

  30. Pfeiffer, K. (2011). Features and purposes of mathematical proofs in the view of novice students: Observations from proof validation and evaluation performances. Doctoral dissertation, National University of Ireland, Galway. • Selden, A., & Selden, J. (2003). Validations of proofs written as texts: Can undergraduates tell whether an argument proves a theorem? Journal for Research in Mathematics Education, 34(1), 4-36. • Selden, J., & Selden, A. (1995). Unpacking the logic of mathematical statements. Educational Studies in Mathematics, 29, 123-151. • Weber, K. (2008). How do mathematicians determine if an argument is correct? Journal for Research in Mathematics Education, 39(4), 431-439.

  31. Thank you Questions/Comments?
