InContext - Volume 5, Issue 1, 2025
Human-centredness in translating with technology
Author(s): Andrea Bergantino and James Luke Hadley
pp. 18–41
Abstract: What are the potential impacts of new technological advancements on research into the lives and work of literary translators? The emerging concept of human-centredness in translating with Artificial Intelligence (AI) offers unique opportunities to address this question from the existing perspective of Literary Translator Studies. Whereas much research in Translation Studies focuses on texts and processes, Translator Studies delves into the individuals practically engaged in translation. Ever since the term “Translator Studies” was employed by Andrew Chesterman in 2009, the interplay between humans and technology in the context of literary translation has remained a potential topic of research. However, studies in this area have historically focused on sociological, cultural, and cognitive aspects of the translation industry.
The sudden widespread availability of AI since the end of 2022, more than any previous technological development, carries with it substantial potential implications for literary translators. This article assesses the ways in which human-centredness is being studied at the beginning of the era of translation with AI. It synthesises existing Translator Studies research, identifying patterns and lacunae. It also takes stock of current conversations in the context of Literary Machine Translation and Computer-Assisted Literary Translation, in order to identify research questions and emerging methodological innovations at the point when the practical usability of generative AI in the context of literary translation is first emerging.
At present, research in Translator Studies that is relevant to both literature and technology does not appear to have changed substantially in response to the introduction of generally available Large Language Models. Based on these findings, the article looks to the future, suggesting research topics that move beyond the isolated case-study model and integrate the impact of emerging technologies within and beyond Literary Translator Studies. It also suggests that human-centred research on the interplay of literary translators and technology could make extensive use of data drawn from direct interaction with the humans involved.
Human-centered AI
Author(s): María Jiménez-Andrés and Aseel Ibrahim
pp. 42–64
Abstract: The integration of artificial intelligence (AI) and machine translation (MT) technologies into the language service industry is reshaping professional roles, workflows, and expectations. This study examines how AI is discursively constructed by two key groups within the translation profession: freelance translators and Language Service Providers (LSPs). Although both groups engage with similar AI tools, their perspectives differ due to varying professional priorities, constraints, and positionalities. Methodologically, the study uses a mixed-methods approach that combines sentiment analysis with qualitative linguistic and thematic analysis of online content, such as blog posts and social media discussions, to explore how these groups conceptualize AI’s impact on their work. Blogs, forums, and social media posts offer real-time reflections on technological change, making them valuable sources for understanding grassroots responses. The corpus is made up of a convenience sample of 45 blog and social media posts, with comments, discussing GenAI and new AI tools in the translation field. Findings reveal a clear divide in perceptions: LSPs tend to view AI as a beneficial tool that enhances efficiency, scalability, and competitiveness, while freelance translators often express concerns regarding translation quality, job insecurity, and diminishing professional standards. These concerns reflect broader anxieties about the technologization and platformization of the profession, with translators emphasizing the loss of control and autonomy in increasingly algorithm-driven workflows. An important insight from the study is the translators’ active resistance to anthropomorphizing MT, evident in their insistence that the designation of ‘translator’ applies exclusively to humans, and that translations are only those carried out by human translators.
The research highlights the need for more inclusive, user-centered approaches in the design and implementation of AI tools. Specifically, it advocates for participatory design, usability testing, and greater engagement with diverse stakeholders to ensure AI technologies address both industry needs and the professional concerns of translators. By aligning with Human-Centered AI principles, future AI systems could better augment human capabilities, improve work conditions, and foster collaboration within the profession.
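The sentiment-analysis step described in this abstract can be pictured with a deliberately minimal, lexicon-based sketch. The abstract does not name the tooling used, so the word lists and the scoring rule below are purely hypothetical illustrations of the general technique, not the study’s method.

```python
# Toy lexicon-based sentiment scorer for short posts. The word lists and
# the (pos - neg) / total scoring rule are illustrative assumptions only.
POSITIVE = {"beneficial", "efficient", "scalable", "competitive", "helpful"}
NEGATIVE = {"insecurity", "loss", "concern", "concerns", "diminishing"}

def sentiment_score(text: str) -> float:
    """Return a score in [-1, 1]: positive minus negative hits, normalized
    by the total number of sentiment-bearing words found."""
    words = [w.strip(".,!?;:").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos - neg) / max(pos + neg, 1)

print(sentiment_score("AI is a beneficial, efficient tool"))   # LSP-style post
print(sentiment_score("Job insecurity and loss of autonomy"))  # freelancer-style post
```

In practice a study like this would use an established sentiment lexicon or model rather than a hand-rolled word list; the sketch only shows the shape of the computation.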
Using critical posthumanist methods to navigate human translators’ roles in the AI era
Author(s): Haohong Lai
pp. 65–86
Abstract: The rapid development of generative artificial intelligence (AI) has led to increased academic inquiry into the ethical role of humans in translation activities from a posthumanist perspective. However, studies that reconceptualize translation or the translator through this lens remain limited, partly due to the marginalization of posthumanist perspectives within the predominantly human-centered discourse of AI. In this context, this article first outlines the three main branches of posthumanism (reactive posthumanism, transhumanism, and critical posthumanism) and seeks to establish preliminary connections between these branches and existing frameworks in translation studies. This foundational discussion provides essential context for readers unfamiliar with the subject, enabling deeper engagement with the subsequent analysis, and the comparison highlights divergent approaches to technology and human identity. The article then examines perspectives from transhumanism and critical posthumanism, highlighting why critical posthumanism may become a crucial influence in future translation research. In essence, critical posthumanism encourages translators to dismantle the barriers created by self-centered individualism, to seek ways to enhance interdisciplinary and professional skills for navigating complex human-machine workflows, and to recognize the significant contributions of non-human actors as co-participants in the translation process. This study proposes the “Round Table Hypothesis,” which explores the prospective roles and new responsibilities of future translators (termed ‘post-translators’) within evolving, AI-shaped translation practices. The hypothesis also contributes to expanding the theoretical framework that future research on translator competence and training should take into consideration, particularly regarding interaction with AI.
This paper posits that translators and students should adopt a critical posthumanist stance as a vital strategy for navigating present and future shifts in the translation market. This involves recognizing technology not just as a tool but as a co-evolving agent, necessitating new skills and adaptability. Such an approach not only helps translators adapt to technological advances but also fosters effective and ethical human-machine collaboration, and it requires updating training to include AI interaction strategies and interdisciplinary knowledge (e.g., computer science, marketing), thereby ensuring translators’ competitiveness in future translation ecosystems.
Risk in AI-mediated medical translation
Author(s): Maribel Tercedor-Sánchez
pp. 87–115
Abstract: Concerns about the future of AI implementation, particularly with the explosion of generative AI practices, stem from the high impact AI is having across all areas of society and the resulting need for debate and reflection about the role of technology in human practices. This paper addresses the medical translation field and the risks associated with the use and integration of AI technologies. To do so, it takes an interdisciplinary perspective that includes the human-centered AI (HCAI) paradigm in translation studies (e.g., Jiménez-Crespo, 2023; O’Brien, 2023), responsible AI (Arrieta et al., 2020), and AI for Social Good (Hager et al., 2019). More specifically, it reflects on areas where human agency is key at the lexical level in AI-mediated translation processes. To this end, the paper reviews the notion of risk from the perspective of the translation of medical texts and their users, with emphasis on multimodal forms of communication, which are growing continuously and are often at the center of unsupervised machine translation practices. This is illustrated with examples from a corpus analysis of human and AI solutions in English-Spanish translations of multimodal texts on mental health, an extremely sensitive and high-stakes domain where existing biases and stigma demand special attention. The analysis focuses on the comparison of professional and AI translators’ solutions when dealing with terminological variation, interference, metaphor, cultural adequacy, and multidimensionality. These key areas illustrate the importance of the human role in rendering appropriate solutions for different user profiles, including the role of creativity, an introspective human-specific skill, in promoting critical thinking and avoiding the bias and stigma attached to information on mental illness.
Ultimately, the results highlight the need for a closer collaboration between technology and the humanities. This collaboration is needed to guarantee ethical practices in AI as well as to develop AI literacy in Translation Studies, and it should include the analysis of high-stakes areas in specific domains and the detection of risk and ways to tackle it.
An empirical study on GenAI use in speech difficulty evaluation
Author(s): Lihan Wang and Weiwei Wang
pp. 116–145
Abstract: This study examines the use of Artificial Intelligence Generated Content (AIGC) tools for assessing speech difficulty in interpreter training. Twenty-five students were invited to interpret three materials consecutively from English into Chinese and then to evaluate the difficulty levels of those speeches, while ChatGPT was provided with the transcripts and the duration of the speeches. Speech evaluations by students were compared to those made by ChatGPT within a standardized framework, the Speech Difficulty Index (SDI). Statistical analyses, specifically one-sample t-tests and one-sample Wilcoxon signed-rank tests, were conducted to determine any significant differences between the assessments of students and ChatGPT. For the total scores, the results indicate a consensus between students and ChatGPT on the difficulty of a moderately challenging speech. However, divergences were observed for the other two speeches, classified as more and less difficult. Further comparison of the scores on three breakdown dimensions indicates that students’ evaluations can differ from ChatGPT’s in “Subject Matter”, while there is no significant difference in the scores for “Speed of Delivery”. For “Density and Style,” the trend is consistent with that of the total scores. A follow-up interview presents students’ perspectives on evaluating speech difficulty, showing that they form judgements using their subjective perceptions as standards. Given ChatGPT’s capabilities to analyze delivery speed and minimize subjective biases, the integration of AIGC tools in educational settings is recommended. Moreover, interpreter trainers should note this divergence and balance students’ subjective perceptions against the objective evaluation of speech difficulty, to compensate for AIGC tools’ blindness to subjective factors.
Providing AIGC tools with reliable frameworks for speech difficulty evaluation could refine material selection, ensuring better alignment with learners’ proficiency levels and thereby optimizing the educational outcomes of interpreter training. Based on the findings and limitations of this study, several promising directions for future research are proposed.
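The one-sample comparison described in this abstract can be sketched in a few lines. The scores below are invented for illustration, and only the t statistic is computed; the study also used one-sample Wilcoxon signed-rank tests, which in practice would come from a statistics package rather than a hand computation.

```python
# Minimal sketch of a one-sample t test, of the kind used to compare student
# SDI scores against a single ChatGPT score. All numbers here are hypothetical.
from math import sqrt
from statistics import mean, stdev

def one_sample_t(scores, hypothesized_mean):
    """t statistic: distance of the sample mean from the hypothesized mean,
    in standard-error units. Compare against a t distribution with n-1 df."""
    n = len(scores)
    return (mean(scores) - hypothesized_mean) / (stdev(scores) / sqrt(n))

# Hypothetical SDI totals from students vs. a ChatGPT total of 6.0
student_scores = [5.8, 6.1, 6.4, 5.9, 6.2, 6.0, 5.7, 6.3, 6.1, 5.9]
print(round(one_sample_t(student_scores, 6.0), 3))
```

A t statistic near zero (as with these invented scores) would indicate the consensus the study reports for the moderately difficult speech, while a large absolute value would indicate the divergence observed for the other two.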
Cross-cultural adaptation in translating popular science books
Author(s): Mengxuan Yu and Rong Chen
pp. 146–166
Abstract: Cultural adaptation presents a significant challenge in translation, particularly when translating texts across diverse cultural contexts. Artificial intelligence (AI) technologies have introduced new approaches for translators to manage this issue, reshaping the way cultural adaptation is handled in various domains. This study investigates the role of human-centered artificial intelligence (HCAI) in improving translation quality, specifically in the context of translating popular science books. The research explores how AI tools improve both the accuracy and fluency of the translation while maintaining the integrity of cultural features when applied to culturally loaded content. In this study, the translation of culture-specific elements is initiated through machine translation. Subsequently, AI tools are employed to detect and proofread potential errors or inconsistencies, focusing on cultural adaptation. The case studies are drawn from a popular science book, providing a practical foundation for examining how AI tools assist translators in navigating the cultural challenges inherent to such texts. The findings reveal that while AI tools offer substantial support in managing the complexities of cultural adaptation, their effectiveness is optimized when they are used as complementary tools rather than as a replacement for human intervention. The research emphasizes that the translation process must remain fundamentally human-centered. Furthermore, the study underscores that while AI technologies can significantly improve efficiency and consistency, the human translator must retain a central role in ensuring that the translation meets the cultural and contextual expectations of the target audience. AI should be viewed as a supplementary tool that enhances the translator’s work so that cultural adaptation is both accurate and sensitive.
This synergy between human translators and AI technology paves the way for more effective and high-quality cross-cultural communication and exchange. Overall, the utilization of AI tools in the translation process holds the potential to improve translation quality, but it must be applied within a framework that prioritizes human expertise and experience, particularly when addressing culturally specific content.
Human-centered pedagogies in the age of generative AI
Author(s): Nune Ayvazyan
pp. 167–193
Abstract: Since its release in November 2022, ChatGPT has been used by students, alongside other AI-powered tools like machine translation, for various language-related tasks. Despite its growing use, educators and researchers have not yet fully monitored or understood its impact within academic settings. This study investigates the perceptions of undergraduate students enrolled in a translation course in an English Degree program, focusing on how they view, experience, and use generative AI, with particular emphasis on ChatGPT as the most widely used tool. Conducted as part of the teaching innovation project Multilingual Competence: Implementing AI (ChatGPT) for Multilingual Classroom Success at Universitat Rovira i Virgili, Spain, the study spanned five 90-minute sessions, during which students engaged with ChatGPT through three types of exercises involving translation, post-editing, and comprehension of texts generated by the tool. Pre- and post-experiment questionnaires were administered to examine the impact of ChatGPT on students’ perceptions of progress and sense of agency. The findings indicate that students generally hold neutral-to-positive views regarding the effectiveness of ChatGPT in translating, writing texts, and language learning. However, ChatGPT received particular criticism as a post-editing tool, as evidenced by both quantitative and qualitative data. The students emphasized the need for additional training, particularly in prompt generation. Some students also expressed concerns regarding data privacy and ethical issues such as the environmental impact of ChatGPT. In terms of agency, quantitative and qualitative data show that most students believe they retain significant control in their interactions with ChatGPT. These results suggest that while students recognize the potential of ChatGPT, they are also aware of its limitations.
These findings align with Human-Centered Artificial Intelligence (HCAI) approaches, which emphasize the importance of human control and critical thinking as fundamental principles in fostering effective and responsible human-machine interactions.