Interpreting - Volume 25, Issue 1, 2023
- Interpreter ideology
  Author(s): Fei Gao and Jeremy Munday, pp. 1–26
  Abstract: This study empirically investigates how interpreter ideology is manifested in the evaluative language of the World Economic Forum's Annual Meeting in China in 2016 (English–Chinese language pair). Methodologically, van Dijk's Ideological Square and Martin and White's Appraisal framework have been operationalised for the analysis of positive or negative evaluative language in 'us' vs. 'them' discourses. The results reveal an overall positive-'us' and negative-'them' pattern in the interpreter's ideological positioning. This is manifested in three ways: (i) negative, pejorative, and sensitive discourses about China are self-censored; (ii) positivity is accentuated and negativity is neutralised in China-related discourses; and (iii) negative tones in the discourses of other countries are amplified. The speaker's discourse is 'edited' when interpreter ideology is at work during the simultaneous interpreting process. However, the linguistic patterns can provide only partial indications of the possible relationship between interpreter ideology and cognitive operations.
- From remote control to tweets
  Author(s): Özüm Arzık-Erzurumlu and Gamze Yilmaz, pp. 27–60
  Abstract: This study examined the role Twitter reactions play in (re)shaping the quality criteria in TV interpreting (TVI) during the Oscars ceremony in Turkey, over a ten-year period. Its secondary goal was to shed light on the extent to which tweets affected interpreting practice, the recruitment of interpreters, and the discourse about TVI in general. Thematic analysis of the tweets showed that viewers generally criticized interpreters for their delivery, use of voice, and word choices. Complementary interviews conducted with TV interpreters and one executive revealed that tweets were instrumentalized by executives and recruiters both in real time and during the recruitment process; in this way they became a quality instrument with which to evaluate TV interpreters' performance. The findings are discussed in the light of the literature on TVI and are used to advance both practical and theoretical implications for the practice of TVI.
- "Feel sorry for Miss translator!!!"
  Author(s): Yuhong Yang, pp. 61–86
  Abstract: This article approaches user reception of interpreting events by investigating video users' parasocial interactions concerning a specific interpreter on the danmu interface. The video of Chinese athlete Sun Yang's public hearing (facilitated by an ad hoc interpreter), hosted on the Chinese video-sharing site Bilibili, was chosen as a case study. Proceeding from an adapted parasocial interaction framework that features nine underlying parasocial processes, this study categorizes and analyses users' danmu comments directed at the interpreter, mainly qualitatively but also quantitatively. It also examines these comments as manifestations or verbalizations of users' parasocial interactions with the interpreter on screen in cognitive and affective dimensions. The findings show users' noticeable preference for engaging in evaluation- and comprehension-oriented interactions and for expressing sympathy and empathy towards the interpreter. They also show that the parasocial interaction framework usefully accommodates a plethora of user reactions to the interpreter and their performance, and offers a way of systematically investigating relevant utterances in a seemingly chaotic danmu space.
- The right to a fair trial and the right to interpreting
  Author(s): Eva Ng, pp. 87–108
  Abstract: The right to a fair trial for defendants in the criminal process is internationally recognised as a fundamental human right that includes, among others, the right of defendants to the free assistance of an interpreter if they cannot understand or speak the language used in court. The failure to provide the required interpreting service, or a deficiency in the service provided, can be raised as grounds of appeal on the basis that defendants' right to a fair trial has been denied or compromised. This article discusses the limitations of chuchotage, a mode of interpreting commonly used in domestic courts. These limitations potentially compromise interpreting accuracy and, specifically, the absence of a record of the interpretation can create problems for appellate courts dealing with appeals advanced on the ground of deficient interpreting in this mode. The study reviews four such appeals in Hong Kong and reveals inconsistencies in the appellate courts' rulings and the reasoning behind their decisions. It argues that these inconsistencies can lead to problems with implementing the principle of stare decisis, while at the same time sending confusing messages about the standard of interpreting required to safeguard a defendant's right to a fair trial and about the future use of chuchotage in court.
- Automatic assessment of spoken-language interpreting based on machine-translation evaluation metrics
  Author(s): Xiaolei Lu and Chao Han, pp. 109–143
  Abstract: Automated metrics for machine translation (MT), such as BLEU, are customarily used because they are quick to compute and sufficiently valid to be useful in MT assessment. Whereas the instantaneity and reliability of such metrics are made possible by automatic computation based on predetermined algorithms, their validity depends primarily on a strong correlation with human assessments. Despite the popularity of such metrics in MT, little research has explored their usefulness in the automatic assessment of human translation or interpreting. In the present study, we therefore seek to provide an initial insight into how MT metrics would function in assessing spoken-language interpreting by human interpreters. Specifically, we selected five representative metrics – BLEU, NIST, METEOR, TER, and BERT – to evaluate 56 bidirectional consecutive English–Chinese interpretations produced by 28 student interpreters of varying abilities. We correlated the automated metric scores with the scores assigned by different types of raters using different scoring methods (i.e., multiple assessment scenarios). The major finding is that BLEU, NIST, and METEOR had moderate-to-strong correlations with the human-assigned scores across the assessment scenarios, especially in the English-to-Chinese direction. Finally, we discuss the possibility and caveats of using MT metrics in assessing human interpreting.
- Review of Cho (2022): Intercultural communication in interpreting: Power and choices
  Author(s): Jim Hlavac, pp. 144–151
  This article reviews Intercultural communication in interpreting: Power and choices.
- Review of Albl-Mikasa & Tiselius (2022): The Routledge handbook of conference interpreting
  Author(s): Robin Setton, pp. 152–158
  This article reviews The Routledge handbook of conference interpreting.