The Role of Errors in Validating a Large-Scale Assessment of Adolescent English Writing in Austria

  • Samuel Hafner, Universität Klagenfurt
  • Günther Sigott, Universität Klagenfurt

Abstract

This study investigates errors in a sample of 50 written performances by Austrian learners of English collected in the 2009 baseline study for the Austrian Educational Standards-Based Writing Test for English at grade 8 (E8 Standards Writing Test). The research aims to contribute to the validation of this large-scale assessment by studying the relationship between errors (described using the Scope–Substance error taxonomy) and the human ratings awarded to the writing performances. The results add to the validity evidence for the E8 Standards Writing Test. There is a negative relationship between human ratings and the presence of errors: a low error density is associated with higher ratings and a high error density with lower ratings. Substance word, clause, and text error densities play an important role in the ratings on most dimensions; errors with a larger scope also have a strong effect. By highlighting the aspects of errors to which raters appear to be sensitive, these findings constitute evidence of context validity. At the same time, the findings are relevant to theory-based validity by concretising the areas of competence that learners need to develop in order to receive higher ratings. While errors are important determinants of the ratings, additional factors, presumably positive features, must also be at play, since the accuracy of the regression models is only low to moderate. This is in fact to be expected, as the E8 rating scale refers to negative as well as positive features.

Published
2021-10-07
Section
Language and Linguistics: Results