Academic Journals Database
Disseminating quality controlled scientific knowledge

Eliciting formative assessment in peer review

Author(s): Ilya M. Goldin & Kevin D. Ashley

Journal: Journal of Writing Research
ISSN 2030-1006

Volume: 4
Issue: 2
Start page: 203
Date: 2012

Keywords: computer-supported peer-review | rubrics | peer assessment validity

ABSTRACT
Computer-supported peer review systems can support reviewers and authors in many different ways, including through the use of different kinds of reviewing criteria. It has become an increasingly important empirical question whether reviewers are sensitive to different criteria and whether some kinds of criteria are more effective than others. In this work, we compared the differential effects of two types of rating prompts, each focused on a different set of criteria for evaluating writing: prompts that focus on domain-relevant aspects of writing composition versus prompts that focus on issues directly pertaining to the assigned problem and to the substantive issues under analysis. We found evidence that reviewers are sensitive to the differences between the two types of prompts; that reviewers distinguish among problem-specific issues but not among domain-writing ones; that both types of ratings correlate with instructor scores; and that problem-specific ratings are more likely to be helpful and informative to peer authors in that they are less redundant.
