Developing and Implementing Calibrated Peer Assessment for Large Classes

Peer assessment is defined as “an arrangement for learners to consider and specify the level, value, or quality of a product or performance of other equal-status learners” (Topping, 2009, pp. 20-21). In other words, peer assessment is a learning process whereby students evaluate, or are evaluated by, their peers against a set of assessment criteria.

Extensive research has examined various aspects and best practices of peer assessment in traditional classrooms (see Falchikov & Goldfinch, 2000; Topping, 1998, 2009, for reviews). These reviews offer substantial evidence that peer assessment, when implemented successfully, develops students’ metacognitive, personal, and professional skills and results in improved learning.

Recent years have seen growing interest in peer assessment on digital platforms, given the rapidly increasing offerings of online and blended courses (Li et al., 2016). In massive open online courses (MOOCs), peer assessment has been widely used to provide feedback, as it is virtually impossible for instructors and teaching assistants to give individual feedback when thousands, or even hundreds of thousands, of students across the world are enrolled in a course. It also seems to be an economical solution, since there is no need to hire a large pool of instructional staff for support. However, because of their scale and fully online environment, MOOCs differ greatly from traditional classroom instruction, and instructors who teach MOOCs or other large classes face logistical challenges when implementing peer assessment (Suen, 2014):

  1. Linking raters to assignments is logistically complex at the scale of a MOOC (see the sketch after this list).
  2. The credibility of peer assessment becomes questionable when students receive little or no guidance and supervision from the instructor.
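As a rough illustration of the first challenge, the sketch below shows one simple way to rotate submissions among peer raters so that no student grades their own work and every submission receives the same number of reviews. This is a hypothetical Python snippet, not the system that will be discussed at the session; at MOOC scale, even a minimal scheme like this has to be layered with anonymization, deadline handling, and recovery when assigned reviewers never submit their reviews.

```python
import random

def assign_peer_reviews(student_ids, reviews_per_submission=3, seed=0):
    """Assign each student's submission to several peer raters.

    Simple rotation scheme: shuffle the roster once, then give each
    student's submission to the next k classmates around the shuffled
    circle. This guarantees no self-review (as long as the class is
    larger than k) and an equal review load for every student.
    """
    rng = random.Random(seed)
    roster = list(student_ids)
    rng.shuffle(roster)
    n = len(roster)
    assignments = {}  # submission author -> list of peer raters
    for i, author in enumerate(roster):
        raters = [roster[(i + offset) % n]
                  for offset in range(1, reviews_per_submission + 1)]
        assignments[author] = raters
    return assignments

# Example: 6 students, each submission reviewed by 3 peers.
print(assign_peer_reviews(["s1", "s2", "s3", "s4", "s5", "s6"]))
```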

Researchers in the Georgia Tech Physics Education Research Group faced similar challenges when they offered an introductory physics course in different classroom contexts: a MOOC, a large “blended” on-campus section, and small-enrollment online-only courses. One of the researchers in the group, Scott Douglas, has been invited to CTL’s Teaching with Technology Spotlight session on November 7, 2017, where he will discuss how the group developed and implemented a calibrated peer assessment system to evaluate students’ video lab reports, and how students became more reliable graders and gained a more expert-like attitude toward peer assessment. Scott is currently writing his dissertation in physics education research under the direction of Professor Mike Schatz.
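The details of the Georgia Tech system are the subject of the session, but the general idea behind calibration can be sketched briefly: before grading peers, each student grades a few “calibration” submissions that the instructor has already scored, and the student’s agreement with the instructor is then used to weight that student’s subsequent peer grades. The Python sketch below is a minimal, hypothetical illustration of that weighting step; it is not the actual algorithm used in the course.

```python
def calibration_weight(student_scores, instructor_scores, max_points=10.0):
    """Weight a rater by how closely their calibration grades match the instructor's.

    Returns a value in [0, 1]: 1.0 means perfect agreement on the
    calibration submissions, lower values mean larger average disagreement.
    """
    errors = [abs(s - i) for s, i in zip(student_scores, instructor_scores)]
    mean_error = sum(errors) / len(errors)
    return max(0.0, 1.0 - mean_error / max_points)

def calibrated_grade(peer_grades, weights):
    """Combine peer grades as a weighted average using calibration weights."""
    total_weight = sum(weights)
    if total_weight == 0:
        return sum(peer_grades) / len(peer_grades)  # fall back to a plain average
    return sum(g * w for g, w in zip(peer_grades, weights)) / total_weight

# Example: three peers graded the same video lab report out of 10 points.
# Each peer's weight comes from how they scored three calibration reports
# that the instructor had scored 8, 7, and 9.
weights = [
    calibration_weight([8, 6, 9], [8, 7, 9]),     # close to the instructor
    calibration_weight([10, 10, 10], [8, 7, 9]),  # consistently too generous
    calibration_weight([7, 7, 8], [8, 7, 9]),     # reasonably close
]
print(calibrated_grade([9, 10, 8], weights))
```

In a scheme like this, a generous or careless grader still contributes to a peer’s grade, but with less influence than a grader whose calibration scores track the instructor’s.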

Please join us for a discussion of how calibrated peer assessment might be applied in your own courses and beyond.

References

Falchikov, N., & Goldfinch, J. (2000). Student peer assessment in higher education: A meta-analysis comparing peer and teacher marks. Review of Educational Research, 70(3), 287-322.

Li, H., Xiong, Y., Zang, X., Kornhaber, M. L., Lyu, Y., Chung, K. S., & Suen, H. K. (2016). Peer assessment in the digital age: A meta-analysis comparing peer and teacher ratings. Assessment & Evaluation in Higher Education, 41(2), 245-264.

Suen, H. K. (2014). Peer assessment for massive open online courses (MOOCs). The International Review of Research in Open and Distributed Learning, 15(3).

Topping, K. (1998). Peer assessment between students in colleges and universities. Review of Educational Research, 68(3), 249-276.

Topping, K. J. (2009). Peer assessment. Theory Into Practice, 48(1), 20-27.

 
