I'm still struggling to find a few spare synapses to think about scientific research, but the teaching demands will begin to diminish soon. In the meantime here's a post about the only research I'm currently paying much attention to.
I'm teaching freshman biology to about 380 students, in two sections. With one of our wonderful teaching fellows (supported by UBC's Carl Wieman Science Education Initiative), I'm carrying out an experiment to find out how homework might improve students' understanding of the course material and ability to explain their understanding in writing.
It's been widely assumed, but perhaps never explicitly tested, that doing homework helps students understand course material. In this experiment we're going to examine whether students whose homework requires them to formulate their ideas in correctly written sentences, and who get detailed feedback on their errors, perform better on midterm and final exams than students whose homework requires only that they recognize correct answers.
The course I teach (BIOL 121, Genetics, Evolution, Ecology) has no tutorials and no TAs, just graders for midterms and finals; it's never had homework before. This term we've split the students randomly into two homework groups that both get weekly assignments with very similar content but different requirements. Each assignment is built around a single theme, more of a 'case study' than a series of unrelated questions: students are given some information, asked one or two questions, given a bit more information, asked more questions, and so on.
Group B's questions are in formats that can be automatically graded by our Blackboard course management system - mostly multiple-choice questions, with some matching and fill-in-the-blank questions. Group A is asked many of the same questions, but the students have to think up their own answers and write them out. If a question does not require writing (e.g. has a numerical answer), the students are asked to give a written explanation of their answer.
Group B students can check the grading of their answers through the online system but get no specific feedback about their errors. Group A students get individual feedback about both their writing errors and their content errors. Part of this feedback is a very detailed grading key that gives, for each question, a sample answer, a numbered list of points a good answer should contain and of errors it should avoid, and often a reference to lecture notes, textbook pages, or other sources of clarifying information. On each student's submission, every answer that did not earn full marks is annotated with codes indicating which problems it contained. For example, Writing error A is 'grammar errors', and Content error 4a is 'misinterpreting pedigree symbols or relationships'.
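To give a concrete feel for the annotation step, here's a toy version of the error-code lookup (Python; only the two codes quoted above come from our real key - the rest of the machinery is invented for illustration):

    # Toy version of the error-code key used to annotate Group A answers.
    # Only the codes 'A' and '4a' come from the real grading key.
    writing_errors = {"A": "grammar errors"}
    content_errors = {"4a": "misinterpreting pedigree symbols or relationships"}

    def annotate(codes):
        """Expand a list of error codes into readable feedback lines."""
        notes = []
        for code in codes:
            if code in writing_errors:
                notes.append(f"Writing error {code}: {writing_errors[code]}")
            elif code in content_errors:
                notes.append(f"Content error {code}: {content_errors[code]}")
        return notes

    print(annotate(["A", "4a"]))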
At the end of term we will compare the performance of the two groups on both the midterm and the final exam. Both of these assessments include questions with written answers, allowing us to evaluate both students' mastery of course material and their ability to write clearly and correctly. Students were also given a pre-quiz in the first class, and some of its questions are repeated on the midterm or final, allowing a direct before-and-after comparison.
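We haven't committed to a particular statistical test yet, but a minimal sketch of the comparisons we have in mind might look like this (Python with scipy; the t-tests and the toy numbers are illustrative assumptions, not a fixed analysis plan):

    from scipy import stats

    # Exam scores (percentages) for the two homework groups -- toy data.
    group_a = [72.0, 65.5, 81.0, 77.5]   # written answers plus feedback
    group_b = [68.0, 70.5, 74.0, 69.5]   # auto-graded questions only

    # Between-group comparison: do mean exam scores differ?
    t, p = stats.ttest_ind(group_a, group_b)
    print(f"groups: t = {t:.2f}, p = {p:.3f}")

    # Before-and-after comparison on the repeated pre-quiz questions,
    # paired within students -- toy data again.
    pre = [40.0, 55.0, 35.0, 60.0]
    post = [65.0, 70.0, 50.0, 72.0]
    t, p = stats.ttest_rel(pre, post)
    print(f"pre/post: t = {t:.2f}, p = {p:.3f}")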
To make sure students feel they have been treated fairly, the course grades will be normalized across the two groups before being officially submitted to the Registrar's Office: the group with the lower course mean will have its grades raised to match the mean of the other group. Announcing this policy seems to be having the desired effect. We had expected some students to protest the unequal treatment, complaining either that Group A had to work harder or that Group A was going to learn more, but no such complaints have materialized.
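For concreteness, here's a minimal sketch of that normalization (Python; the additive shift is one way we might implement 'raising grades to match the mean' - rescaling would be an equally defensible choice):

    # Raise every grade in the lower-mean group by the difference
    # between the two group means. An additive shift is assumed here.
    def normalize_up(lower_group, higher_group):
        shift = (sum(higher_group) / len(higher_group)
                 - sum(lower_group) / len(lower_group))
        # Cap at 100 so no adjusted grade exceeds the maximum
        # (capping can leave the means very slightly different).
        return [min(grade + shift, 100.0) for grade in lower_group]

    print(normalize_up([60.0, 70.0, 80.0], [68.0, 78.0, 88.0]))
    # -> [68.0, 78.0, 88.0]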