Abstract
Determining translation quality requires a precise measure of the traits being examined. This article applies a new framework for translation quality evaluation, Multidimensional Quality Metrics (MQM), to the task of grading student translations. It demonstrates the viability (i.e. the practicality, validity and reliability) of having novice raters use the MQM framework to judge translations based on the American Translators Association’s translator certification exam grading system. The data gathered for this study were drawn from 29 student translations of a single news story, rated by nine novice and two expert raters. The study used average time on task, correlations between novice and expert ratings, and Many-Facet Rasch Measurement to identify the extent to which this use of the MQM framework was viable. The findings indicate that this implementation of MQM can be viable with novice raters under the right conditions.

This work is licensed under a Creative Commons Attribution 4.0 International License.