Volume 37, Issue 4
ISSN 0924-1884 | E-ISSN 1569-9986

Abstract

In this exploratory study, we investigated rater cognition in English–Chinese translation assessment, drawing on think-aloud, eye-tracking, and interview data. We designed a 3 × 2 × 2 experiment in which experienced raters assessed eighteen renditions at three quality levels in each translation direction, using either a Likert-type scale or an analytic rubric. We found that: (a) the raters attended to meaning transfer more frequently than to other content areas; (b) they utilized a variety of processing actions, but a core subset of eight actions constituted the mainstay; (c) to reach a scoring decision, the raters mainly consulted the source text, the target texts, and the rating scale, but also displayed other patterns of interaction (e.g., relying on the target texts only); (d) they fixated more frequently per unit of time, and proportionally longer, on the target texts; and (e) translation direction and scoring method appeared to modulate rater cognition. The implications of these findings for translation assessment are discussed.

/content/journals/10.1075/target.23040.han
2025-09-11
2026-03-14