Marking essays on screen: an investigation into the reliability of marking extended subjective texts


  • Create Date August 2, 2018
  • Last Updated August 2, 2018


There is a growing body of research literature that considers how the mode of assessment, either computer- or paper-based, might affect candidates’ performances (Paek, 2005). Despite this, only a fairly narrow literature shifts the focus of attention to those making assessment judgements and considers issues of assessor consistency when dealing with extended textual answers in different modes. This research project explored whether the mode in which a set of extended essay texts was accessed and read systematically influenced the assessment judgements made about them. During the project, twelve experienced English Literature assessors marked two matched samples of ninety essay exam scripts on screen and on paper. A variety of statistical methods was used to compare the reliability of the essay marks given by the assessors across modes. It was found that mode did not systematically influence marking reliability. The analyses also compared examiners’ marks with a gold-standard mark for each essay and found no shifts in the location of the standard of recognised attainment across modes.
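The abstract does not specify which statistical methods were used, but one simple measure in this style of reliability comparison is each examiner's average absolute deviation from the gold-standard mark in each mode. The sketch below illustrates that idea only; the mark values and variable names are invented for illustration and are not the study's data.

```python
from statistics import mean

# Hypothetical marks for illustration only -- not taken from the study.
gold = [22, 18, 25, 30, 15, 27]          # gold-standard mark per essay
on_screen = [21, 19, 24, 29, 16, 26]     # one examiner's on-screen marks
on_paper = [23, 18, 26, 30, 14, 28]      # the same examiner's paper marks

def mean_abs_dev(marks, reference):
    """Average absolute deviation of an examiner's marks from the reference."""
    return mean(abs(m - r) for m, r in zip(marks, reference))

# Similar per-mode deviations would suggest mode does not shift marking
# away from the recognised standard of attainment.
screen_dev = mean_abs_dev(on_screen, gold)
paper_dev = mean_abs_dev(on_paper, gold)
print(f"on-screen deviation from gold standard: {screen_dev:.2f}")
print(f"on-paper deviation from gold standard:  {paper_dev:.2f}")
```

In practice such a comparison would be run per examiner over the full matched samples, alongside the study's other reliability statistics.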

Attached Files

paper_30171921f.pdf