Evaluating the effect of peer feedback on the quality of online discourse
-
Results indicate that continuous, anonymous, aggregated feedback had no effect on either the students' or the instructors' perception of discussion quality.
-
Abstract: This study explores the effect on discussion quality of adding a feedback mechanism that presents users with an aggregate peer rating of the usefulness of a participant's contributions in online, asynchronous discussion. Participants in the study groups could specify the degree to which they thought any posted comment was useful to the discussion. Individuals were regularly presented with feedback (aggregated and anonymous) summarizing their peers' assessment of the usefulness of their contributions, along with a summary of how they had rated their peers. Results indicate that continuous, anonymous, aggregated feedback had no effect on either the students' or the instructors' perception of discussion quality.

This is something of a show-stopper. It's just one study, but the results show no effect whatsoever from peers giving feedback about the usefulness of discussion posts, nor any perceived improvement in the quality of the discussions as evaluated by faculty. It looks like we'll need to begin looking carefully at just what kinds of feedback really make a difference. This follows up on Corinna's earlier post http://blogs.hbr.org/cs/2010/03/twitters_potential_as_microfee.html about short, immediate feedback being more effective than lengthier feedback, which can actually hinder performance. The trick will be figuring out just what kinds of feedback will actually work in embedded situations. It's interesting that an assessment of utility wasn't useful...?