The Journal of Specialised Translation
https://www.jostrans.org/issue/feed
Published: 2024-01-30. Editor: Łucja Biel (ed@jostrans.org). Open Journal Systems.

JoSTrans is a multilingual diamond-open-access journal on specialised translation and interpreting issues. Launched in 2004, it is free, electronic, double-blind peer-reviewed and published bi-annually. JoSTrans does not charge authors to publish their work.

E-ISSN: 1740-357X


Proposal for a Triple Bottom Line for Translation Automation and Sustainability
https://www.jostrans.org/article/view/4706
Joss Moorkens (joss.moorkens@dcu.ie), Sheila Castilho (sheila.castilho@dcu.ie), Federico Gaspari (federico.gaspari@unina.it), Antonio Toral (a.toral.ruiz@rug.nl), Maja Popović (maja.popovic@adaptcentre.ie)
Published 2024-01-30. Copyright (c) 2024.

Abstract: This article is both an editorial introduction to the guest-edited special issue of JoSTrans on Translation Automation and Sustainability and a position paper in which we propose a model for evaluating the sustainable use of automation technology in translation and beyond. As grounding notions, the article reviews definitions of automation and considers the urgency of sustainability. We then propose an adaptation of Elkington’s (1997) triple bottom line, giving equal weight to evaluation based on people, planet, and performance, and describe each of these elements in turn. Finally, we introduce the articles in this special issue, in which the authors examine various aspects of automation technology in translation with a focus on sustainability.


Roundtable: Translation Automation and Sustainability
https://www.jostrans.org/article/view/4737
Joss Moorkens (joss.moorkens@dcu.ie), Sheila Castilho (sheila.castilho@dcu.ie), Federico Gaspari (federico.gaspari@unina.it), Antonio Toral (a.toral.ruiz@rug.nl), Maja Popović (maja.popovic@adaptcentre.ie)
Published 2024-01-25. Copyright (c) 2024 Joss Moorkens, Sheila Castilho, Federico Gaspari, Antonio Toral, Maja Popović.


Book review: José Javier Ávila-Cabrera (2023). The Challenge of Subtitling Offensive and Taboo Language into Spanish. A Theoretical and Practical Guide
https://www.jostrans.org/article/view/4728
Reviewed by J. David González-Iglesias González (juandgon@ucm.es)
Published 2024-01-25. Copyright (c) 2024 J. David González-Iglesias González.


Book review: Rothwell, Andrew, Joss Moorkens, María Fernández-Parra, Joanna Drugan and Frank Austermuehl (2023). Translation Tools and Technologies
https://www.jostrans.org/article/view/4730
Reviewed by Alina Secară (alina.secara@univie.ac.at)
Published 2024-01-25. Copyright (c) 2024 Alina Secară.


Book review: Moniz, Helena and Carla Parra Escartín (eds) (2023). Towards Responsible Machine Translation: Ethical and Legal Considerations in Machine Translation
https://www.jostrans.org/article/view/4731
Reviewed by Marian Flanagan (marianflanagan@hum.ku.dk)
Published 2024-01-25. Copyright (c) 2024 Marian Flanagan.


Book review: Svoboda, Tomáš, Łucja Biel and Vilelmini Sosoni (eds) (2023). Institutional Translator Training
https://www.jostrans.org/article/view/4732
Reviewed by Huidan Liu (hdliu@shmtu.edu.cn) and Panpan Chen (202230810125@stu.shmtu.edu.cn)
Published 2024-01-25. Copyright (c) 2024 Huidan Liu, Panpan Chen.


Book review: Almanna, Ali and Juliane House (eds) (2023). Translation Politicised and Politics Translated
https://www.jostrans.org/article/view/4735
Reviewed by Le Cheng (chengle163@hotmail.com) and Ming Xu (xuming0833@foxmail.com)
Published 2024-01-25. Copyright (c) 2024 Le Cheng, Ming Xu.


Editorial
https://www.jostrans.org/article/view/4725
David Orrego-Carmona (david.orrego-carmona@warwick.ac.uk)
Published 2024-01-25. Copyright (c) 2024 David Orrego-Carmona.


Re-thinking Machine Translation Post-Editing Guidelines
https://www.jostrans.org/article/view/4696
Celia Rico Pérez (celrico@ucm.es)
Published 2024-01-30. Copyright (c) 2024.

Abstract: Machine Translation Post-Editing (MTPE) is a challenging task. It frequently creates tension between what the industry expects in terms of quality and what translators are willing to deliver as an end product. Conventional approaches to MTPE take as their point of departure the distinction between light and full MTPE, but the division becomes blurred in actual MTPE projects, where translators find it difficult to differentiate between essential and preferential changes. When MTPE guidelines were designed, the role of the human translator in the MT process was perceived as ancillary, a view inherited from the first days of MT research, which aimed at so-called Fully Automatic High Quality Machine Translation (FAHQMT). My proposal challenges the traditional division of MTPE levels and presents a new way of looking at MTPE guidelines. In view of the latest developments in neural machine translation and the higher quality of its output, it is my contention that the traditional division of MTPE levels is no longer valid. In this contribution I advance a proposal for redefining MTPE guidelines within the framework of an ecosystem specifically designed for this purpose.


Speech-to-text Recognition for the Creation of Subtitles in Basque
https://www.jostrans.org/article/view/4711
Ana Tamayo (ana.tamayo@ehu.eus), Alejandro Ros Abaurrea (alejandro.ros@ehu.eus)
Published 2024-01-30. Copyright (c) 2024 Ana Tamayo, Alejandro Ros Abaurrea.

Abstract: This contribution analyses the speech-to-text recognition of news programmes on the regional channel ETB1 for subtitling in Basque using ADITU (2024), a technology developed by the Elhuyar foundation, applying the NER model of analysis (Romero-Fresco and Martínez 2015). A total of 20 samples of approximately 5 minutes each were recorded from ETB1 in May 2022, and 97 minutes and 1,737 subtitles were analysed using criteria from the NER model. The results show an average accuracy rate of 94.63% when all errors are taken into account, and 96.09% when punctuation errors are excluded. A qualitative analysis based on the quantitative data reveals room for improvement in the software’s language models, punctuation, recognition of proper nouns and speaker identification. From the evidence it may be concluded that, although the quantitative data do not reach the NER model’s threshold for recognition quality to be considered fair or comprehensible, the results seem promising. When presenters speak with clear diction and in standard language, accuracy rates are sufficient for a minority language like Basque, for which speech recognition software is still in the early phases of development.


Does training in post-editing affect creativity?
https://www.jostrans.org/article/view/4712
Ana Guerberof-Arenas (a.guerberof.arenas@rug.nl), Susana Valdez (s.valdez@hum.leidenuniv.nl), Aletta G. Dorst (a.g.dorst@hum.leidenuniv.nl)
Published 2024-01-30. Copyright (c) 2024 Ana Guerberof-Arenas, Susana Valdez, Aletta G. Dorst.

Abstract: This article presents the results of an experiment with eleven students from two universities who translated and post-edited three literary texts distributed on the first and last days of their translation technology modules. The source texts were marked with units of creative potential to assess creativity in the target texts (before and after training). The texts were subsequently reviewed by an independent professional literary translator and translation trainer. The results show no quantitative evidence that the training significantly affects students’ creativity. However, after the training a change is observed both in the quantitative data and in the reflective essays: the students are more willing to try creative shifts and feel more confident tackling machine translation (MT) issues, while also producing a higher number of errors. Further, we observe that students show a higher degree of creativity in human translation (HT) but make significantly fewer errors in post-editing (PE) overall, especially at the start of the training, than in HT.


Data-driven Asian Adapted MQM Typology and Automation in Translation Quality Workflows
https://www.jostrans.org/article/view/4713
Beatriz Silva (beatriz.silva@unbabel.com), Marianna Buchicchio (marianna@unbabel.com), Daan van Stigt (daan.stigt@unbabel.com), Craig Stewart (craig.stewart@phrase.com), Helena Moniz (helena@unbabel.com), Alon Lavie (alon@cmu.edu)
Published 2024-01-30. Copyright (c) 2024 Beatriz Silva, Marianna Buchicchio, Daan van Stigt, Craig Stewart, Helena Moniz, Alon Lavie.

Abstract: In this study we test the impact of applying translation error taxonomies oriented towards European languages to the annotation of Asian languages. We aim to demonstrate how an error typology adapted to the latter not only results in more linguistically accurate annotations, but can also be applied to automating and scaling translation quality evaluation. We therefore propose a Translation Errors Typology that addresses the shortcomings of the Multidimensional Quality Metrics (MQM) framework (Lommel et al. 2014) in the annotation of the East Asian languages Mandarin, Japanese and Korean. The effectiveness of the proposed typology was tested by analysing the inter-annotator agreement (IAA) scores obtained, in contrast with the typology proposed by Ye and Toral (2020) and the Unbabel Error Typology. Finally, we propose a way of automating translation quality workflows through Quality Estimation (QE) technology that can predict the MQM scores of machine translation outputs at scale, with a fair correlation with the human judgements produced by applying the East Asian Languages MQM module proposed in this study.


Decisions in projects using machine translation and post-editing
https://www.jostrans.org/article/view/4715
Jean Nitzke (jean.nitzke@uia.no), Carmen Canfora (canfora@uni-mainz.de), Silvia Hansen-Schirra (hansens@uni-mainz.de), Dimitrios Kapnas (dikapnas@uni-mainz.de)
Published 2024-01-30. Copyright (c) 2024 Jean Nitzke, Carmen Canfora, Silvia Hansen-Schirra, Dimitrios Kapnas.

Abstract: Machine translation (MT) and post-editing (PE) have become increasingly important in the professional language industry in recent years. However, not every translation job is suitable for MT, and there are many options for carrying out translation/post-editing projects, e.g. no PE, light PE, full PE, full PE plus revision, or translation without MT assistance. In 2019, we published a decision tree for post-editing projects (Nitzke et al. 2019) that aimed to take all such considerations into account and to guide the stakeholders in charge of deciding whether a job is suitable for MT and PE and, if so, what kind of quality assurance might lead to fit-for-purpose translations. To test our decision tree model empirically, we developed a semi-structured interview with 21 questions and a scoring task addressing stakeholders who work with MT projects and have to make the decisions central to our model. The interview was carried out with 19 interview partners. In the article, we discuss the interviews’ findings against the background of our model. Further, we present qualitative findings on strategic decisions and risk considerations, as well as on the value of translation, working conditions and job profiles. Finally, we present our revised model, motivated by the empirical findings.


Data Augmentation with Translation Memories for Desktop Machine Translation Fine-tuning in 3 Language Pairs
https://www.jostrans.org/article/view/4716
Gokhan Dogru (gokhan.dogru@uab.cat), Joss Moorkens (joss.moorkens@dcu.ie)
Published 2024-01-30. Copyright (c) 2024 Gokhan Dogru, Joss Moorkens.

Abstract: This study investigates the effect of data augmentation with translation memories on desktop machine translation (MT) fine-tuning in OPUS-CAT, and assesses the usefulness of desktop MT for professional translators. Engines in three language pairs (English → Turkish, English → Spanish, and English → Catalan) were fine-tuned with corpora of two different sizes. The translation quality of each engine was measured with automatic evaluation metrics (BLEU, chrF2, TER and COMET) and human evaluation metrics (ranking, adequacy and fluency). Overall, the evaluation results indicate promising quality improvements in all three language pairs and imply that using desktop MT applications such as OPUS-CAT and fine-tuning MT engines with custom data on a translator’s desktop can provide high-quality translations, in addition to advantages such as privacy, confidentiality and low computational requirements.


When minoritized languages encounter MT
https://www.jostrans.org/article/view/4718
Nora Aranberri (nora.aranberri@ehu.eus), Uxoa Iñurrieta (u.inurrieta@ueu.eus)
Published 2024-01-30. Copyright (c) 2024 Nora Aranberri, Uxoa Iñurrieta.

Abstract: Machine translation (MT) is improving even for low-resource minoritized languages such as Basque, for which free online engines are available. However, the level of adoption and common practices involving the technology are unknown, even though it has the potential to disrupt a carefully planned Basque language revitalization and sustainability process. To shed light on MT usage habits and perceptions among the Basque community, we report on the results of a survey of language specialists and general users, and of a focus group with professional translators and interpreters. The data show that MT is already becoming more popular among users of all backgrounds and that, overall, the attitude towards the technology is positive, which might result in increased use in the future. However, participants express concern about the impact MT will have on the development of Basque. The results call for further research on the language impact of MT and for MT literacy initiatives.


Towards Predicting Post-editing Effort with Source Text Readability
https://www.jostrans.org/article/view/4723
Guangrong Dai (carldy@163.com), Siqi Liu (20211210023@gdufs.edu.cn)
Published 2024-01-30. Copyright (c) 2024 Guangrong Dai, Siqi Liu.

Abstract: This paper investigates the impact of source text readability on the effort of post-editing English-Chinese neural machine translation (NMT) output. Six readability formulas, both traditional and newer, were employed to measure readability, and their power to predict post-editing effort was evaluated. Keystroke logging, self-report questionnaires and retrospective protocols were used to collect post-editing data for a general text type from thirty-four student translators. The results reveal that: 1) readability has a significant yet weak effect on cognitive effort, while its impact on temporal and technical effort is less pronounced; 2) high NMT quality may attenuate the effect of readability; 3) readability formulas can predict post-editing effort to a certain extent, with newer formulas such as the Crowdsourced Algorithm of Reading Comprehension (CAREC) outperforming traditional formulas in most cases. Beyond readability formulas, the study shows that some fine-grained reading-related linguistic features are good predictors of post-editing time. Finally, the paper discusses implications for automatic effort estimation in the translation industry.


“A Spanish version of EastEnders”: a reception study of a telenovela subtitled using MT
https://www.jostrans.org/article/view/4724
Ana Guerberof-Arenas (a.guerberof.arenas@rug.nl), Joss Moorkens (joss.moorkens@dcu.ie), David Orrego-Carmona (david.orrego-carmona@warwick.ac.uk)
Published 2024-01-25. Copyright (c) 2024 Ana Guerberof-Arenas, Joss Moorkens, David Orrego-Carmona.

Abstract: This article presents the results of three AVT reception experiments with over 200 English-speaking participants who watched a 20-minute clip of a Mexican telenovela in three different translation modalities: human-translated (HT), post-edited (PE) and machine-translated (MT). Participants answered a questionnaire on narrative engagement, enjoyment and translation reception of the subtitles. The results show that viewers engage more with PE than with HT, but the difference is statistically significant only when PE is compared to MT. When it comes to enjoyment, the differences are more pronounced: viewers enjoy MT significantly less than PE and HT. In translation reception, the gap between MT and both PE and HT is even wider. However, the high HTER scores demonstrate that a substantial number of edits are necessary to render the raw MT subtitles publishable. It is not clear that the results would be comparable if subtitlers were not given sufficient time or remuneration for the post-editing task.