The impact of Google Neural Machine Translation on Post-editing by student translators

How to Cite

Yamada, M. (2019). The impact of Google Neural Machine Translation on Post-editing by student translators. JoSTrans: The Journal of Specialised Translation, (31), 87–106. https://doi.org/10.26034/cm.jostrans.2019.178

Abstract

The author of this study reused the design of a 2014 experiment that investigated college students' post-editing potential, replacing the raw Google statistical machine translation (SMT) output used in the 2014 experiment with raw Google neural machine translation (NMT) output of the same source text. A comparison of the results of the two studies yielded the following observations: 1) A quantitative evaluation of post-editing (PE) showed no significant difference in cognitive effort between the studies, but a significant difference in the amount of editing. Overall, NMT+PE produces a better final product than SMT+PE, with fewer errors; however, NMT+PE does not enable college students to meet professional standards of translation quality. 2) Students exhibit a poorer error correction rate in the NMT+PE condition despite similar perceived cognitive effort, possibly because NMT produces human-like errors that are more difficult for students to post-edit. 3) NMT+PE requires almost the same competence as translating a text 'from scratch' or editing a human translation. Translation training is therefore necessary for students to be able to shift their attention to the right problems (such as mistranslations) and become effective post-editors. The results of this study suggest that the more advanced, human-like translation abilities of NMT make it even more challenging for student translators to meet a professional standard of post-editing quality.

This work is licensed under a Creative Commons Attribution 4.0 International License.

Copyright (c) 2019 Masaru Yamada