The Journal of Specialised Translation
https://www.jostrans.org/
<p>JoSTrans is a multilingual diamond-open-access journal on specialised translation and interpreting issues. Launched in 2004, it is free, electronic, double-blind peer-reviewed and published bi-annually. JoSTrans does not charge authors to publish their work.</p> <p>E-ISSN: 1740-357X</p>

Proposal for a Triple Bottom Line for Translation Automation and Sustainability
https://www.jostrans.org/article/view/4706
<p>This article is both an editorial introduction to the guest-edited special issue of JoSTrans on Translation Automation and Sustainability, and a position paper in which we propose a model for evaluating the sustainable use of automation technology in translation and beyond. As grounding notions, the article reviews definitions of automation and considers the urgency of sustainability. Thereafter we propose an adaptation of Elkington’s (1997) triple bottom line, giving equal weight to evaluation based on people, planet, and performance, describing each of these elements in turn. Finally, we introduce the articles from this special issue, in which authors describe various aspects of automation technology in translation with a focus on sustainability.</p>
Joss Moorkens, Sheila Castilho, Federico Gaspari, Antonio Toral, Maja Popović
Copyright (c) 2024
https://creativecommons.org/licenses/by/4.0
2024-01-30 · Vol. 41, pp. 2–25 · DOI: 10.26034/cm.jostrans.2024.4706

Roundtable: Translation Automation and Sustainability
https://www.jostrans.org/article/view/4737
Joss Moorkens, Sheila Castilho, Federico Gaspari, Antonio Toral, Maja Popović
Copyright (c) 2024 Joss Moorkens, Sheila Castilho, Federico Gaspari, Antonio Toral, Maja Popović
https://creativecommons.org/licenses/by/4.0
2024-01-25 · Vol. 41 · DOI: 10.26034/cm.jostrans.2024.4737

José Javier Ávila-Cabrera (2023). The Challenge of Subtitling Offensive and Taboo Language into Spanish. A Theoretical and Practical Guide
https://www.jostrans.org/article/view/4728
J. David González-Iglesias González
Copyright (c) 2024 J. David González-Iglesias González
https://creativecommons.org/licenses/by/4.0
2024-01-25 · Vol. 41, pp. 255–258 · DOI: 10.26034/cm.jostrans.2024.4728

Rothwell, Andrew, Joss Moorkens, María Fernández-Parra, Joanna Drugan and Frank Austermuehl (2023). Translation Tools and Technologies
https://www.jostrans.org/article/view/4730
Alina Secară
Copyright (c) 2024 Alina Secară
https://creativecommons.org/licenses/by/4.0
2024-01-25 · Vol. 41, pp. 259–264 · DOI: 10.26034/cm.jostrans.2024.4730

Moniz, Helena and Carla Parra Escartín (eds) (2023). Towards Responsible Machine Translation: Ethical and Legal Considerations in Machine Translation
https://www.jostrans.org/article/view/4731
Marian Flanagan
Copyright (c) 2024 Marian Flanagan
https://creativecommons.org/licenses/by/4.0
2024-01-25 · Vol. 41, pp. 265–270 · DOI: 10.26034/cm.jostrans.2024.4731

Tomáš Svoboda, Łucja Biel and Vilelmini Sosoni (eds) (2023). Institutional Translator Training
https://www.jostrans.org/article/view/4732
Huidan Liu, Panpan Chen
Copyright (c) 2024 Huidan Liu, Panpan Chen
https://creativecommons.org/licenses/by/4.0
2024-01-25 · Vol. 41, pp. 271–278 · DOI: 10.26034/cm.jostrans.2024.4732

Almanna, Ali and Juliane House (eds) (2023). Translation Politicised and Politics Translated
https://www.jostrans.org/article/view/4735
Le Cheng, Ming Xu
Copyright (c) 2024 Le Cheng, Ming Xu
https://creativecommons.org/licenses/by/4.0
2024-01-25 · Vol. 41, pp. 279–286 · DOI: 10.26034/cm.jostrans.2024.4735

Editorial
https://www.jostrans.org/article/view/4725
David Orrego-Carmona
Copyright (c) 2024 David Orrego-Carmona
https://creativecommons.org/licenses/by/4.0
2024-01-25 · Vol. 41, p. 1 · DOI: 10.26034/cm.jostrans.2024.4725

Re-thinking Machine Translation Post-Editing Guidelines
https://www.jostrans.org/article/view/4696
<p>Machine Translation Post-Editing (MTPE) is a challenging task. It frequently creates tension between what the industry expects in terms of quality and what translators are willing to deliver as an end product. Conventional approaches to MTPE take as a point of departure the distinction between light and full MTPE, but the division gets blurred when implemented in an actual MTPE project, where translators find it difficult to differentiate between essential and preferential changes. At the time MTPE guidelines were designed, the role of the human translator in the MT process was perceived as ancillary, a view inherited from the first days of MT research aiming at the so-called <em>Fully Automatic High Quality Machine Translation</em> (FAHQMT). My proposal challenges the traditional division of MTPE levels and presents a new way of looking at MTPE guidelines. In view of the latest developments in neural machine translation and the higher quality level of its output, it is my contention that the traditional division of MTPE levels is no longer valid. In this contribution I advance a proposal for redefining MTPE guidelines in the framework of an ecosystem specifically designed for this purpose.</p>
Celia Rico Pérez
Copyright (c) 2024
https://creativecommons.org/licenses/by/4.0
2024-01-30 · Vol. 41, pp. 26–47 · DOI: 10.26034/cm.jostrans.2024.4696

Speech-to-text Recognition for the Creation of Subtitles in Basque
https://www.jostrans.org/article/view/4711
<p>This contribution analyses the speech-to-text recognition of news programmes on the regional channel ETB1 for subtitling in Basque using ADITU (2024), a technology developed by the Elhuyar foundation, applying the NER model of analysis (Romero-Fresco and Martínez 2015). A total of 20 samples of approximately 5 minutes each were recorded from ETB1 in May 2022; 97 minutes and 1737 subtitles were analysed by applying criteria from the NER model. The results show an average accuracy rate of 94.63% if all errors are taken into account, and 96.09% if punctuation errors are excluded. A qualitative analysis based on the quantitative data identifies room for improvement in the software’s language models, punctuation, recognition of proper nouns and speaker identification. From the evidence it may be concluded that, although the quantitative data do not reach the threshold for the quality of recognition to be considered <em>fair</em> or comprehensible under the NER model, the results seem promising. When presenters speak with clear diction and standard language, accuracy rates are sufficient for a minority language like Basque, for which speech recognition software is still in an early phase of development.</p>
Ana Tamayo, Alejandro Ros Abaurrea
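The NER accuracy rate cited in the abstract (Romero-Fresco and Martínez 2015) is computed as (N − E − R) / N × 100, where N is the number of words in the subtitles and E and R are severity-weighted edition and recognition errors. A minimal sketch of that headline calculation; the weights and example figures below are illustrative, not the study's data:

```python
def ner_accuracy(n_words: float, edition_errors: float, recognition_errors: float) -> float:
    """NER accuracy rate: (N - E - R) / N * 100.

    E and R are severity-weighted sums of edition and recognition
    errors (the full model weights errors by seriousness, e.g.
    0.25 / 0.5 / 1); N is the number of words in the subtitles.
    """
    if n_words <= 0:
        raise ValueError("N must be positive")
    return (n_words - edition_errors - recognition_errors) / n_words * 100

# Illustrative figures only (not the study's data): 1000 words,
# weighted edition errors 12.5, weighted recognition errors 20.
print(f"{ner_accuracy(1000, 12.5, 20):.2f}%")  # 96.75%
```

The NER model's thresholds (e.g. 98% for acceptable live subtitling) are then applied to this rate.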
Copyright (c) 2024 Ana Tamayo, Alejandro Ros Abaurrea
https://creativecommons.org/licenses/by/4.0
2024-01-30 · Vol. 41, pp. 48–73 · DOI: 10.26034/cm.jostrans.2024.4711

Does training in post-editing affect creativity?
https://www.jostrans.org/article/view/4712
<p>This article presents the results of an experiment with eleven students from two universities who translated and post-edited three literary texts distributed on the first and last days of their translation technology modules. The source texts were marked with units of creative potential to assess creativity in the target texts (before and after training). The texts were subsequently reviewed by an independent professional literary translator and translation trainer. The results show that there is no quantitative evidence to conclude that the training significantly affects students’ creativity. However, after the training, a change is observed both in the quantitative data and in the reflective essays: the students are more willing to try creative shifts and feel more confident tackling machine translation (MT) issues, while also showing a higher number of errors. Further, we observe that students show a higher degree of creativity in human translation (HT), but make significantly fewer errors overall in post-editing (PE) than in HT, especially at the start of the training.</p>
Ana Guerberof-Arenas, Susana Valdez, Aletta G. Dorst
Copyright (c) 2024 Ana Guerberof-Arenas, Susana Valdez, Aletta G. Dorst
https://creativecommons.org/licenses/by/4.0
2024-01-30 · Vol. 41, pp. 74–97 · DOI: 10.26034/cm.jostrans.2024.4712

Data-driven Asian Adapted MQM Typology and Automation in Translation Quality Workflows
https://www.jostrans.org/article/view/4713
<p>In this study we test the impact of applying translation error taxonomies oriented towards European languages to the annotation of Asian languages. We aim to demonstrate that an error typology adapted to the latter languages not only results in more linguistically accurate annotations, but can also be applied to automating and scaling translation quality evaluation.</p> <p>To this end, we propose a Translation Errors Typology that addresses the shortcomings of the Multidimensional Quality Metrics (MQM) framework (Lommel et al. 2014) with regard to the annotation of the East Asian languages Mandarin, Japanese and Korean. The effectiveness of the proposed typology was tested by analysing the inter-annotator agreement (IAA) scores obtained, in contrast with the typology proposed by Ye and Toral (2020) and the Unbabel Error Typology<a href="#bu7qzyzh51n8"><sup>1</sup></a>. Finally, we propose a way of automating translation quality workflows through a Quality Estimation (QE) technology that predicts the MQM scores of machine translation output at scale, with a fair correlation with the human judgements produced by applying the East Asian languages MQM module proposed in this study.</p>
Beatriz Silva, Marianna Buchicchio, Daan van Stigt, Craig Stewart, Helena Moniz, Alon Lavie
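Inter-annotator agreement in annotation studies is usually reported with a chance-corrected coefficient. The abstract does not specify which coefficient was used here, so purely as an illustration, a common choice for two annotators is Cohen's kappa (the category labels below are hypothetical):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: chance-corrected agreement for two annotators
    labelling the same items."""
    assert labels_a and len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    if expected == 1.0:  # degenerate case: both used a single label
        return 1.0
    return (observed - expected) / (1 - expected)

# Two annotators assigning hypothetical error categories to 5 spans
a = ["Fluency", "Accuracy", "Accuracy", "Style", "Accuracy"]
b = ["Fluency", "Accuracy", "Style", "Style", "Accuracy"]
print(round(cohens_kappa(a, b), 3))  # ≈ 0.688
```

Multi-annotator studies would instead use a generalisation such as Fleiss' kappa or Krippendorff's alpha.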
Copyright (c) 2024 Beatriz Silva, Marianna Buchicchio, Daan van Stigt, Craig Stewart, Helena Moniz, Alon Lavie
https://creativecommons.org/licenses/by/4.0
2024-01-30 · Vol. 41, pp. 98–126 · DOI: 10.26034/cm.jostrans.2024.4713

Decisions in projects using machine translation and post-editing
https://www.jostrans.org/article/view/4715
<p>Machine translation (MT) and post-editing (PE) have become increasingly important in the professional language industry in recent years. However, not every translation job is suitable for MT, and there are many options for carrying out translation/post-editing projects, e.g. no PE, light PE, full PE, full PE plus revision, or translation without MT assistance. In 2019, we published a decision tree for post-editing projects (Nitzke et al. 2019) that aimed to take all considerations into account and guide the stakeholders in charge of deciding whether a job is suitable for MT and PE and, if so, what kind of quality assurance might lead to fit-for-purpose translations.</p> <p>To test our decision tree model empirically, we developed a semi-structured interview with 21 questions and a scoring task addressing stakeholders who work on MT projects and have to make the decisions that are essential to our model. The interview was carried out with 19 interview partners. In the article, we discuss the interview findings against the background of our model. Further, we present qualitative findings on strategic decisions and risk considerations, as well as the value of translation, working conditions and job profiles. Finally, we present our revised model motivated by the empirical findings.</p>
Jean Nitzke, Carmen Canfora, Silvia Hansen-Schirra, Dimitrios Kapnas
Copyright (c) 2024 Jean Nitzke, Carmen Canfora, Silvia Hansen-Schirra, Dimitrios Kapnas
https://creativecommons.org/licenses/by/4.0
2024-01-30 · Vol. 41, pp. 127–148 · DOI: 10.26034/cm.jostrans.2024.4715

Data Augmentation with Translation Memories for Desktop Machine Translation Fine-tuning in 3 Language Pairs
https://www.jostrans.org/article/view/4716
<p>This study investigates the effect of data augmentation through translation memories for desktop machine translation (MT) fine-tuning in OPUS-CAT. It also assesses the usefulness of desktop MT for professional translators. Engines in three language pairs (English → Turkish, English → Spanish, and English → Catalan) are fine-tuned with corpora of two different sizes. The translation quality of each engine is measured through automatic evaluation metrics (BLEU, chrF2, TER and COMET) and human evaluation metrics (ranking, adequacy and fluency). Overall evaluation results indicate promising quality improvements in all three language pairs and imply that using desktop MT applications such as OPUS-CAT and fine-tuning MT engines with custom data on a translator’s desktop can potentially provide high-quality translations, in addition to advantages such as privacy, confidentiality and low use of computation power.</p>
Gokhan Dogru, Joss Moorkens
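Of the automatic metrics listed (BLEU, chrF2, TER, COMET), BLEU is the simplest to sketch: a geometric mean of modified n-gram precisions, scaled by a brevity penalty. The toy version below is illustrative only — single reference, whitespace tokenisation, no smoothing — and not the sacrebleu implementation typically used for reporting:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def corpus_bleu(hypotheses, references, max_n=4):
    """Corpus BLEU: geometric mean of modified n-gram precisions
    times a brevity penalty (single reference per segment)."""
    matches, totals = [0] * max_n, [0] * max_n
    hyp_len = ref_len = 0
    for hyp, ref in zip(hypotheses, references):
        h, r = hyp.split(), ref.split()
        hyp_len, ref_len = hyp_len + len(h), ref_len + len(r)
        for n in range(1, max_n + 1):
            h_counts, r_counts = ngrams(h, n), ngrams(r, n)
            # clipped matches: each hypothesis n-gram counts at most
            # as often as it appears in the reference
            matches[n - 1] += sum(min(c, r_counts[g]) for g, c in h_counts.items())
            totals[n - 1] += max(len(h) - n + 1, 0)
    if 0 in matches:
        return 0.0  # any zero precision zeroes unsmoothed BLEU
    log_prec = sum(math.log(m / t) for m, t in zip(matches, totals)) / max_n
    bp = 1.0 if hyp_len >= ref_len else math.exp(1 - ref_len / hyp_len)
    return 100 * bp * math.exp(log_prec)

ref = ["the cat sat on the mat"]
print(corpus_bleu(["the cat sat on the mat"], ref))  # 100.0
print(corpus_bleu(["a dog stood by a door"], ref))   # 0.0
```

chrF2 applies the same idea to character n-grams, while COMET is a trained neural metric with no comparably short closed form.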
Copyright (c) 2024 Gokhan Dogru, Joss Moorkens
https://creativecommons.org/licenses/by/4.0
2024-01-30 · Vol. 41, pp. 149–178 · DOI: 10.26034/cm.jostrans.2024.4716

When minoritized languages encounter MT
https://www.jostrans.org/article/view/4718
<p>Machine translation (MT) is improving even for low-resource minoritized languages such as Basque, for which free online engines are available. However, the level of adoption and common practices involving the technology are unknown, even though it has the potential to disrupt a carefully planned Basque language revitalization and sustainability process. To shed light on MT usage habits and perceptions among the Basque community, we report on the results of a survey of language specialists and general users, and a focus group with professional translators and interpreters. The data shows that MT is already becoming more popular among users of all backgrounds and that, overall, the attitude towards the technology is positive, which might result in increased use in the future. However, participants express concern about the impact MT will have on the development of Basque. The results call for further research on the language impact of MT and for MT literacy initiatives.</p>
Nora Aranberri, Uxoa Iñurrieta
Copyright (c) 2024 Nora Aranberri, Uxoa Iñurrieta
https://creativecommons.org/licenses/by/4.0
2024-01-30 · Vol. 41, pp. 179–205 · DOI: 10.26034/cm.jostrans.2024.4718

Towards Predicting Post-editing Effort with Source Text Readability
https://www.jostrans.org/article/view/4723
<p>This paper investigates the impact of source text readability on the effort of post-editing English-Chinese Neural Machine Translation (NMT) output. Six readability formulas, both traditional and newer, were employed to measure readability, and their predictive power for post-editing effort was evaluated. Keystroke logging, self-report questionnaires, and retrospective protocols were used to collect post-editing data for a general text type from thirty-four student translators. The results reveal that: 1) readability has a significant yet weak effect on cognitive effort, while its impact on temporal and technical effort is less pronounced; 2) high NMT quality may alleviate the effect of readability; 3) readability formulas can predict post-editing effort to a certain extent, and newer formulas such as the Crowdsourced Algorithm of Reading Comprehension (CAREC) outperform traditional formulas in most cases. Apart from readability formulas, the study shows that some fine-grained reading-related linguistic features are good predictors of post-editing time. Finally, the paper discusses implications for automatic effort estimation in the translation industry.</p>
Guangrong Dai, Siqi Liu
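The best-known traditional readability formula is Flesch Reading Ease (the abstract does not list the six formulas used, so this is an example of the genre rather than the study's tooling). A rough sketch with a naive vowel-run syllable counter; production implementations use dictionaries or better heuristics:

```python
import re

def count_syllables(word: str) -> int:
    """Crude heuristic: count runs of vowels, at least one per word."""
    return max(len(re.findall(r"[aeiouy]+", word.lower())), 1)

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease:
    206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

# Short monosyllabic text scores very high (the scale can exceed 100)
print(round(flesch_reading_ease("The cat sat on the mat."), 1))  # ≈ 116.1
```

Newer data-driven measures such as CAREC instead regress over richer linguistic features, which is why they tend to predict comprehension (and, per this study, post-editing effort) better.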
Copyright (c) 2024 Guangrong Dai, Siqi Liu
https://creativecommons.org/licenses/by/4.0
2024-01-30 · Vol. 41, pp. 206–229 · DOI: 10.26034/cm.jostrans.2024.4723

“A Spanish version of EastEnders”: a reception study of a telenovela subtitled using MT
https://www.jostrans.org/article/view/4724
<p>This article presents the results of three AVT reception experiments with over 200 English-speaking participants who watched a 20-minute clip of a Mexican telenovela in three different translation modalities: human-translated (HT), post-edited (PE) and machine-translated (MT). Participants answered a questionnaire on narrative engagement, enjoyment, and translation reception of the subtitles. The results show that viewers have a higher engagement with PE than HT, but there is only a statistically significant difference when PE is compared to MT. When it comes to enjoyment, the differences are more pronounced, and viewers enjoy MT significantly less than PE and HT. In translation reception, the gap is even wider between MT and both PE and HT. However, the high HTER scores demonstrate that a substantial number of edits is necessary to render the automatic MT subtitles publishable. It is not clear that results would be comparable were subtitlers not given sufficient time or remuneration for the post-editing task.</p>
Ana Guerberof-Arenas, Joss Moorkens, David Orrego-Carmona
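HTER, cited in the abstract, measures post-editing effort as the minimum number of edits needed to turn the raw MT output into its post-edited version, divided by the length of the post-edited version. A simplified sketch using word-level Levenshtein distance (full TER/HTER also counts block shifts as single edits, so this slightly overestimates; the sentence pair is hypothetical):

```python
def hter(mt_output: str, post_edited: str) -> float:
    """Approximate HTER: word-level Levenshtein edits needed to turn
    the raw MT output into its post-edited version, divided by the
    post-edited length. (Full TER/HTER also counts block shifts.)"""
    hyp, ref = mt_output.split(), post_edited.split()
    if not ref:
        raise ValueError("post-edited text must be non-empty")
    d = list(range(len(ref) + 1))  # single-row dynamic programme
    for i, h in enumerate(hyp, 1):
        prev, d[0] = d[0], i
        for j, r in enumerate(ref, 1):
            # deletion, insertion, or (possibly free) substitution
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (h != r))
    return d[len(ref)] / len(ref)

# Hypothetical pair: one substitution over five words
print(hter("she said the quick answer", "she gave the quick answer"))  # 0.2
```

A "high HTER" in the study's sense means this ratio is large, i.e. post-editors rewrote a substantial share of the raw subtitles.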
Copyright (c) 2024 Ana Guerberof-Arenas, Joss Moorkens, David Orrego-Carmona
https://creativecommons.org/licenses/by/4.0
2024-01-25 · Vol. 41, pp. 230–254 · DOI: 10.26034/cm.jostrans.2024.4724