I have a series of low-stakes assignments in which students are asked in one week to submit a short argument on a recent course topic, and then in the next week to evaluate three other students’ short arguments, using the PeerMark assignment feature on Turnitin. All submissions and evaluations are anonymous (unless students deliberately put their names in the body of their work).
I have been tweaking this assignment model off and on for a few years, with mixed results, but this semester it is going very well. In their evaluations, I ask students to give the argument they are evaluating a score on various parameters and then to give constructive, helpful comments to explain their score and to help the arguer improve. I also ask them to present a counterargument.
I’ve been pleased with the effort students have been putting into these assignments this semester, but I am considering two changes in how I do this next time, and would love suggestions or comments on how best to carry them out (or for alternatives):
1. Have students create the rubric for what each score should represent (giving them a blank or pared-down version to collaborate on completing).
2. Find ways to encourage follow-up or opening communication about the submissions after the initial argument/evaluation cycle. (I’m not sure how to do this without undermining or outright destroying the benefits of having everything be anonymous initially.)
Regarding the first change, the evaluation questions and current rubric are below.
EVALUATION QUESTIONS (Those that require a scored response using the rubric are in bold.)
- As I understand it, the thesis is that: (fill in or cut and paste from the argument itself)
- How rationally persuasive is the author’s argument? Does it provide good evidence and reasoning to support the thesis?
Scale: Highest = very persuasive; Lowest = not persuasive
- What advice would you give for improving the rational persuasiveness of the argument? (Where, if anywhere, does the argument fail to rationally persuade? Or, if it is successful, what makes it work? Note that this is not the same as a counterargument; this is advice for how to improve the argument, versus a counterargument which gives reasons to reject the argument.)
- How accurately does the author present course material from readings or lecture? Are terms defined, philosophers’ views explained, and issues interpreted correctly?
Scale: Highest = completely accurate; Lowest = mostly inaccurate
- Identify and explain any inaccuracies noted above. If you gave a 3 or less, clarify exactly what you think is wrong in the definitions or presentation of class material!
- Is the writing clear and well organized? Assess the quality of the writing itself (considering the grammar, spelling, style, and so on, rather than the content).
Scale: Highest = very clear; Lowest = very unclear
- What suggestions would you make to improve the writing clarity or organization? Or, if you have no suggestions, identify what makes it successful.
- How original is the thinking shown? That is, does the author mostly repeat ideas from others, such as in the readings and class discussion, or does the author show independent thought, with new arguments or examples (or at least new twists or personalization of them)?
Scale: Highest = very original; Lowest = not at all original
- Provide the best counterargument(s) you can, which must include at least one piece of supporting evidence (NO questions, and NO mere disagreement).
- Optional: other comments you wish to share with the arguer? An overall ‘grade’ for the argument?
THE CURRENT RUBRIC: (The formatting seems to have gone a bit wonky as posted on the blog – it’s not this ugly, really!)
Argument Evaluation Rubric

| Criterion | 1 (lowest) | 2 | 3 | 4 | 5 (highest) |
| --- | --- | --- | --- | --- | --- |
| Clarity and Organization | The meaning was very difficult to understand. | The meaning and/or connections between ideas were often unclear. | Relatively easy to follow, with decent grammar, etc., but should be edited to bring it above average. | The meaning was mostly clear, with some bits that could be rewritten for greater clarity or logical flow. | The meaning is clear throughout, and the ideas are well organized with good logical flow. The writing is efficient and effective. |
| Originality | Arguments, examples, or ideas seem to come straight from the reading or class discussion. | Only minimal efforts have been made to personalize the arguments. | Average. | While perhaps a bit derivative, the author has clearly made the arguments his or her own. | Examples or ideas show innovative, independent thought. |
2 replies on “Feedback Frenzy”
To your first point–a student-informed rubric–I’ll say I’ve had really good experiences with this and one big failure. Most classes, when I’ve built a collaborative rubric into our weekly plans, have taken to it with zeal. I tend to ask small groups to come up with 4-5 key things (concepts, outcomes, etc.) for the categories (the design of the rubric varies with the course), and then, collectively, we refine all 20-30 suggestions down into a rubric. I’ve really found it useful. My lone exception: one class just didn’t want to do it. Getting feedback from them about how we could refine all the terms was like pulling teeth.
Thanks for your information and insight on this, Kyllikki, both here and in yesterday’s meeting!