Volume 16, Issue 1
  • ISSN: 1932-2798
  • E-ISSN: 1876-2700



Usability is a key factor in increasing the adoption of machine translation. This study aims to measure the usability of machine translation in the classroom context by comparing translation students’ machine translation post-editing output with their manual translation in two comparable translation tasks. Three dimensions of usability were empirically measured: efficiency, effectiveness, and satisfaction. The findings suggest that machine translation post-editing is more efficient than human translation and produces fewer errors. While the types of errors vary, errors related to accuracy outnumber those related to fluency. In addition, participants perceive the amount of time and work saved when post-editing to be a greater benefit than the overall utility of post-editing. Likewise, students report a strong desire to learn post-editing skills in training programs.
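The comparison the abstract describes can be made concrete with a small sketch. The code below is purely illustrative (the function names, formulas, and numbers are assumptions, not the study’s materials): it operationalizes efficiency as processing speed and effectiveness as an error rate, the two dimensions derivable from task logs; satisfaction would instead be elicited with a questionnaire.

```python
# Hypothetical sketch only -- not the study's instruments or data.
# Illustrates how two of the three ISO 9241-11 usability dimensions
# could be computed from task logs.

def efficiency(source_words: int, seconds: float) -> float:
    """Temporal efficiency: source words processed per minute."""
    return source_words / (seconds / 60)

def effectiveness(error_count: int, source_words: int) -> float:
    """Effectiveness proxy: errors per 100 source words (lower is better)."""
    return error_count / source_words * 100

# Illustrative, made-up numbers for one participant's two tasks.
post_editing = {"words": 300, "seconds": 1200, "errors": 6}
from_scratch = {"words": 300, "seconds": 1800, "errors": 12}

print(efficiency(post_editing["words"], post_editing["seconds"]))    # 15.0 words/min
print(efficiency(from_scratch["words"], from_scratch["seconds"]))    # 10.0 words/min
print(effectiveness(post_editing["errors"], post_editing["words"]))  # 2.0 errors/100 words
```

On these made-up figures, post-editing is both faster and less error-prone, mirroring the direction of the study’s reported findings.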




  1. Bevan, Nigel, James Carter, and Susan Harker
    2015 “ISO 9241-11 revised: What have we learnt about usability since 1998?” In Human Computer Interaction: Design and Evaluation, ed. by Masaaki Kurosu, 143–151. Cham: Springer.
    https://doi.org/10.1007/978-3-319-20901-2_13 [Google Scholar]
  2. Bowker, Lynne
    2020 “Fit-for-purpose translation.” In The Routledge Handbook of Translation and Technology, ed. by Minako O’Hagan, 453–468. London: Routledge.
    [Google Scholar]
  3. Bowker, Lynne and Jairo Buitrago Ciro
    2015 “Investigating the usefulness of machine translation for newcomers at the public library.” Translation and Interpreting Studies 10(2): 165–186. doi: 10.1075/tis.10.2.01bow
    https://doi.org/10.1075/tis.10.2.01bow [Google Scholar]
  4. Cadwell, Patrick, Sharon O’Brien, and Carlos Teixeira
    2018 “Resistance and accommodation: Factors for the (non-) adoption of machine translation among professional translators.” Perspectives 26(3): 301–321. doi: 10.1080/0907676X.2017.1337210
    https://doi.org/10.1080/0907676X.2017.1337210 [Google Scholar]
  5. Carl, Michael
    2012 “Translog-II: A program for recording user activity data for empirical reading and writing research.” In The 8th International Conference on Language Resources and Evaluation, ed. by Nicoletta Calzolari et al., 21–27. Istanbul.
    [Google Scholar]
  6. Carl, Michael and Cristina Toledo Báez
    2019 “Machine translation errors and the translation process: A study across different languages.” The Journal of Specialised Translation 31: 107–132.
    [Google Scholar]
  7. Cohen, Jacob
    1988 Statistical Power Analysis for the Behavioral Sciences, 2nd ed. Hillsdale, NJ: Erlbaum.
    [Google Scholar]
  8. Daems, Joke, et al.
    2016 “The effectiveness of consulting external resources during translation and post-editing of general text types.” In New Directions in Empirical Translation Process Research, ed. by Michael Carl, Srinivas Bangalore, and Moritz Schaeffer, 111–133. Cham: Springer.
    https://doi.org/10.1007/978-3-319-20358-4_6 [Google Scholar]
  9. Daems, Joke, Sonia Vandepitte, Robert Hartsuiker, and Lieve Macken
    2017a “Translation methods and experience: A comparative analysis of human translation and post-editing with student and professional translators.” Meta 62(2): 246–270. doi: 10.7202/1041023ar
    https://doi.org/10.7202/1041023ar [Google Scholar]
  10. 2017b “Identifying the machine translation error types with the greatest impact on post-editing effort.” Frontiers in Psychology 8: 1282. doi: 10.3389/fpsyg.2017.01282
    https://doi.org/10.3389/fpsyg.2017.01282 [Google Scholar]
  11. Davis, Fred
    1989 “Perceived usefulness, perceived ease of use, and user acceptance of information technology.” MIS Quarterly 13(3): 319–340. doi: 10.2307/249008
    https://doi.org/10.2307/249008 [Google Scholar]
  12. Davis, Fred, Richard Bagozzi, and Paul Warshaw
    1989 “User acceptance of computer technology: A comparison of two theoretical models.” Management Science 35(8): 982–1003. doi: 10.1287/mnsc.35.8.982
    https://doi.org/10.1287/mnsc.35.8.982 [Google Scholar]
  13. De Almeida, Giselle and Sharon O’Brien
    2010 “Analysing post-editing performance: Correlations with years of translation experience.” In Proceedings of the 14th Annual Conference of the European Association for Machine Translation, ed. by François Yvon and Viggo Hansen, 1–8. Saint-Raphaël.
    [Google Scholar]
  14. Doherty, Stephen and Dorothy Kenny
    2014 “The design and evaluation of a statistical machine translation syllabus for translation students.” The Interpreter and Translator Trainer 8(2): 295–315. doi: 10.1080/1750399X.2014.937571
    https://doi.org/10.1080/1750399X.2014.937571 [Google Scholar]
  15. Doherty, Stephen and Sharon O’Brien
    2014 “Assessing the usability of raw machine translated output: A user-centered study using eye tracking.” International Journal of Human-Computer Interaction 30(1): 40–51. doi: 10.1080/10447318.2013.802199
    https://doi.org/10.1080/10447318.2013.802199 [Google Scholar]
  16. Ducar, Cynthia and Deborah Houk Schocket
    2018 “Machine translation and the L2 classroom: Pedagogical solutions for making peace with Google Translate.” Foreign Language Annals 51(4): 779–795. doi: 10.1111/flan.12366
    https://doi.org/10.1111/flan.12366 [Google Scholar]
  17. Fiederer, Rebecca and Sharon O’Brien
    2009 “Quality and machine translation: A realistic objective?” The Journal of Specialised Translation 11: 52–74.
    [Google Scholar]
  18. Flanagan, Marian and Tina Paulsen Christensen
    2014 “Testing post-editing guidelines: How translation trainees interpret them and how to tailor them for translator training purposes.” The Interpreter and Translator Trainer 8(2): 257–275. doi: 10.1080/1750399X.2014.936111
    https://doi.org/10.1080/1750399X.2014.936111 [Google Scholar]
  19. Flesch, Rudolf
    1948 “A new readability yardstick.” Journal of Applied Psychology 32: 221–223. doi: 10.1037/h0057532
    https://doi.org/10.1037/h0057532 [Google Scholar]
  20. García, Ignacio
    2010 “Is machine translation ready yet?” Target 22(1): 7–21. doi: 10.1075/target.22.1.02gar
    https://doi.org/10.1075/target.22.1.02gar [Google Scholar]
  21. 2011 “Translating by post-editing: Is it the way forward?” Machine Translation 25: 217–237. doi: 10.1007/s10590-011-9115-8
    https://doi.org/10.1007/s10590-011-9115-8 [Google Scholar]
  22. Germann, Ulrich
    2008 “Yawat: Yet another word alignment tool.” In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics on Human Language Technologies, 20–23. Columbus, Ohio.
    https://doi.org/10.3115/1564144.1564150 [Google Scholar]
  23. Gile, Daniel
    1994 “Methodological aspects of interpretation and translation research.” In Bridging the Gap: Empirical Research in Simultaneous Interpretation, ed. by Sylvie Lambert and Barbara Moser-Mercer, 39–56. Philadelphia, PA: John Benjamins.
    https://doi.org/10.1075/btl.3.06gil [Google Scholar]
  24. Guerberof Arenas, Ana
    2012 “Productivity and Quality in the Post-editing of Outputs from Translation Memories and Machine Translation.” Ph.D. dissertation. Universitat Rovira i Virgili, Tarragona.
    [Google Scholar]
  25. 2014 “Correlations between productivity and quality when post-editing in a professional context.” Machine Translation 28(3–4): 165–186. doi: 10.1007/s10590-014-9155-y
    https://doi.org/10.1007/s10590-014-9155-y [Google Scholar]
  26. Hansen, Gyde
    2008 “The dialogue in translation process research.” In Translation and Cultural Diversity: Selected proceedings of the XVII FIT World Congress, 386–397. Shanghai: Foreign Language Press.
    [Google Scholar]
    2013 “The translation process as object of research.” In The Routledge Handbook of Translation Studies, ed. by Carmen Millán and Francesca Bartrina, 88–101. London/New York: Routledge.
    [Google Scholar]
  28. Harrati, Nouzha, et al.
    2016 “Exploring user satisfaction for e-learning systems via usage-based metrics and system usability scale analysis.” Computers in Human Behavior 61: 463–471. doi: 10.1016/j.chb.2016.03.051
    https://doi.org/10.1016/j.chb.2016.03.051 [Google Scholar]
  29. ISO 9241-11
    ISO 9241-11 2018 “Ergonomics of human-system interaction – Part 11: Usability: Definitions and concepts.” ISO 9241-11.
    [Google Scholar]
  30. Jia, Yanfang, Michael Carl, and Xiangling Wang
    2019 “How does the post-editing of neural machine translation compare with from-scratch translation? A product and process study.” The Journal of Specialised Translation 31: 60–85.
    [Google Scholar]
  31. Kenny, Dorothy
    2018 “Sustaining disruption? The transition from statistical to neural machine translation.” Revista Tradumàtica 16: 59–70. doi: 10.5565/rev/tradumatica.221
    https://doi.org/10.5565/rev/tradumatica.221 [Google Scholar]
  32. Kenny, Dorothy and Stephen Doherty
    2014 “Statistical machine translation in the translation curriculum: Overcoming obstacles and empowering translators.” The Interpreter and Translator Trainer 8(2): 276–294. doi: 10.1080/1750399X.2014.936112
    https://doi.org/10.1080/1750399X.2014.936112 [Google Scholar]
  33. Kingscott, Geoffrey
    2002 “Technical translation and related disciplines.” Perspectives 10(4): 247–255. doi: 10.1080/0907676X.2002.9961449
    https://doi.org/10.1080/0907676X.2002.9961449 [Google Scholar]
  34. Koponen, Maarit
    2010 “Assessing machine translation quality with error analysis.” In Electronic Proceedings of the KäTu Symposium on Translation and Interpreting Studies 4: 1–12.
    [Google Scholar]
  35. 2015 “How to teach machine translation post-editing? Experiences from a post-editing course.” In 4th Workshop on Post-Editing Technology and Practice (WPTP4), 2–15. Miami, Florida.
    [Google Scholar]
  36. 2016 “Is machine translation post-editing worth the effort? A survey of research into post-editing and effort.” The Journal of Specialised Translation 25: 131–147.
    [Google Scholar]
  37. Kortum, Philip and Frederick Oswald
    2017 “The impact of personality on the subjective assessment of usability.” International Journal of Human-Computer Interaction 34: 177–186. doi: 10.1080/10447318.2017.1336317
    https://doi.org/10.1080/10447318.2017.1336317 [Google Scholar]
  38. Krüger, Ralph
    2019 “A model for measuring the usability of computer-assisted translation tools.” In Challenging Boundaries: New Approaches to Specialized Communication, ed. by Heike Elisabeth Jüngst, Lisa Link, Klaus Schubert, and Christiane Zehrer, 93–117. Berlin: Frank & Timme.
    [Google Scholar]
  39. Lacruz, Isabel, Michael Denkowski, and Alon Lavie
    2014 “Cognitive demand and cognitive effort in post-editing.” In Third Workshop on Post-Editing Technology and Practice, ed. by Sharon O’Brien, Michel Simard, and Lucia Specia, 73–84. AMTA.
    [Google Scholar]
  40. Lewis, James R.
    2012 “Usability testing.” In Handbook of Human Factors and Ergonomics, ed. by Gavriel Salvendy, 1267–1312. New York: Wiley.
    https://doi.org/10.1002/9781118131350.ch46 [Google Scholar]
  41. Lexile
    Lexile 2007 The Lexile Framework for Reading: Theoretical Framework and Development (Tech. Rep.). Durham, NC: MetaMetrics, Inc.
    [Google Scholar]
  42. Lommel, Arle, Hans Uszkoreit, and Aljoscha Burchardt
    2014 “Multidimensional Quality Metrics (MQM): A framework for declaring and describing translation quality metrics.” Tradumàtica 12: 455–463. doi: 10.5565/rev/tradumatica.77
    https://doi.org/10.5565/rev/tradumatica.77 [Google Scholar]
  43. Mariana, Valerie, Troy Cox, and Alan Melby
    2015 “The Multidimensional Quality Metrics (MQM) framework: A new framework for translation quality assessment.” The Journal of Specialised Translation 23: 137–161.
    [Google Scholar]
  44. Mellinger, Christopher D.
    2017 “Translators and machine translation: Knowledge and skills gaps in translator pedagogy.” The Interpreter and Translator Trainer 11(4): 280–293. doi: 10.1080/1750399X.2017.1359760
    https://doi.org/10.1080/1750399X.2017.1359760 [Google Scholar]
  45. Mellinger, Christopher D. and Gregory M. Shreve
    2016 “Match evaluation and over-editing in a translation memory environment.” In Reembedding Translation Process Research, ed. by Ricardo Muñoz Martín, 132–148. Amsterdam: John Benjamins.
    https://doi.org/10.1075/btl.128.07mel [Google Scholar]
  46. Mellinger, Christopher D. and Thomas A. Hanson
    2017 Quantitative Research Methods in Translation and Interpreting Studies. New York: Routledge.
    [Google Scholar]
  47. 2018 “Interpreter traits and the relationship with technology and visibility.” Translation and Interpreting Studies 13(3): 366–392. doi: 10.1075/tis.00021.mel
    https://doi.org/10.1075/tis.00021.mel [Google Scholar]
  48. MetaMetrics
    MetaMetrics 2018 About Lexile® Measures for Reading. https://lexile.com/educators/understanding-lexile-measures/about-lexile-measures-for-reading. Last accessed 23 May 2020.
    [Google Scholar]
  49. Moorkens, Joss
    2018 “What to expect from neural machine translation: A practical in-class translation evaluation exercise.” The Interpreter and Translator Trainer 12(4): 375–387. doi: 10.1080/1750399X.2018.1501639
    https://doi.org/10.1080/1750399X.2018.1501639 [Google Scholar]
  50. Moorkens, Joss, Antonio Toral, Sheila Castilho, and Andy Way
    2018 “Translators’ perceptions of literary post-editing using statistical and neural machine translation.” Translation Spaces 7(2): 240–262. doi: 10.1075/ts.18014.moo
    https://doi.org/10.1075/ts.18014.moo [Google Scholar]
  51. O’Brien, Sharon
    2004 “Machine translatability and post-editing effort: How do they relate?” Translating and the Computer 26: 1–31.
    [Google Scholar]
  52. 2007 “An empirical investigation of temporal and technical post-editing effort.” Translation and Interpreting Studies 2(1): 83–136. doi: 10.1075/tis.2.1.03ob
    https://doi.org/10.1075/tis.2.1.03ob [Google Scholar]
  53. 2011 “Towards predicting post-editing productivity.” Machine Translation 25: 197–215. doi: 10.1007/s10590-011-9096-7
    https://doi.org/10.1007/s10590-011-9096-7 [Google Scholar]
  54. Plitt, Mirko and François Masselot
    2010 “A productivity test of statistical machine translation post-editing in a typical localization context.” Prague Bulletin of Mathematical Linguistics 93: 7–16. doi: 10.2478/v10108-010-0010-x
    https://doi.org/10.2478/v10108-010-0010-x [Google Scholar]
  55. Pym, Anthony
    2013 “Translation skill-sets in a machine-translation age.” Meta 58(3): 487–503. doi: 10.7202/1025047ar
    https://doi.org/10.7202/1025047ar [Google Scholar]
  56. R Core Team
    R Core Team 2018 “R: A language and environment for statistical computing.” R Foundation for Statistical Computing. Vienna. www.R-project.org. Last accessed 23 May 2020.
  57. Raita, Eeva and Antti Oulasvirta
    2011 “Too good to be bad: Favorable product expectations boost subjective usability ratings.” Interacting with Computers 23: 363–371. doi: 10.1016/j.intcom.2011.04.002
    https://doi.org/10.1016/j.intcom.2011.04.002 [Google Scholar]
  58. Rossi, Caroline
    2017 “Introducing statistical machine translation in translator training: From users and perceptions to course design, and back again.” Tradumàtica 15: 48–62. doi: 10.5565/rev/tradumatica.195
    https://doi.org/10.5565/rev/tradumatica.195 [Google Scholar]
  59. Rossi, Caroline and Jean-Pierre Chevrot
    2019 “Uses and perceptions of machine translation at the European Commission.” The Journal of Specialised Translation 31: 201–216.
    [Google Scholar]
  60. Sakamoto, Akiko
    2019 “Unintended consequences of translation technologies: From project managers’ perspectives.” Perspectives 27(1): 58–73. doi: 10.1080/0907676X.2018.1473452
    https://doi.org/10.1080/0907676X.2018.1473452 [Google Scholar]
  61. Sánchez-Gijón, Pilar and Olga Torres-Hostench
    2014 “MT post-editing into the mother tongue or into a foreign language? Spanish-to-English MT translation output post-edited by translation trainees.” In Proceedings of the Third Workshop on Post-editing Technology and Practice, ed. by Sharon O’Brien, Michel Simard, and Lucia Specia, 5–17. Vancouver.
    [Google Scholar]
  62. Shuttleworth, Mark
    2002 “Combining MT and TM on a technology-oriented translation master’s: Aims and perspectives.” In Proceedings of the 6th EAMT Workshop on Teaching Machine Translation, 123–129. Manchester.
    [Google Scholar]
  63. Suojanen, Tytti, Kaisa Koskinen, and Tiina Tuominen
    2015 User-Centered Translation. London/New York: Routledge.
    [Google Scholar]
  64. Temizöz, Özlem
    2016 “Postediting machine translation output: Subject-matter experts versus professional translators.” Perspectives 24(4): 2–18. doi: 10.1080/0907676X.2015.1119862
    https://doi.org/10.1080/0907676X.2015.1119862 [Google Scholar]
  65. Temnikova, Irina
    2010 “Cognitive evaluation approach for a controlled language post-editing experiment.” In Proceedings of the 7th International Conference on Language Resources and Evaluation, ed. by Nicoletta Calzolari, Khalid Choukri, Bente Maegaard, Joseph Mariani, Jan Odijk, Stelios Piperidis, Mike Rosner, and Daniel Tapias, 3485–3490. Valletta.
    [Google Scholar]
  66. Tirkkonen-Condit, Sonja
    1990 “Professional vs. non-professional translation: A think-aloud protocol study.” In Learning, Keeping and Using Language: Selected papers from the 8th World Congress of Applied Linguistics, ed. by M. A. K. Halliday, John Gibbons, and Howard Nicholas, 381–394. Amsterdam: John Benjamins.
    https://doi.org/10.1075/z.lkul2.28tir [Google Scholar]
  67. Thode, Henry
    2002 Testing for Normality. New York: Marcel Dekker.
    https://doi.org/10.1201/9780203910894 [Google Scholar]
  68. Trace, Jonathan, Gerriet Janssen, and Valerie Meier
    2015 “Measuring the impact of rater negotiation in writing performance assessment.” Language Testing 34: 3–22. doi: 10.1177/0265532215594830
    https://doi.org/10.1177/0265532215594830 [Google Scholar]
  69. Van der Heijden, Hans
    2004 “User acceptance of hedonic information systems.” MIS Quarterly 28(4): 695–704. doi: 10.2307/25148660
    https://doi.org/10.2307/25148660 [Google Scholar]
  70. Wang, Huashu
    2018 “The development of translation technology in the era of big data.” In Restructuring Translation Education: Implications from China for the Rest of the World, ed. by Feng Yue et al., 13–26. Singapore: Springer.
    [Google Scholar]
  71. Wu, Jen-Her and Shu-Ching Wang
    2005 “What drives mobile commerce? An empirical evaluation of the revised technology acceptance model.” Information & Management 42: 719–729. doi: 10.1016/j.im.2004.07.001
    https://doi.org/10.1016/j.im.2004.07.001 [Google Scholar]
  72. Yamada, Masaru
    2019 “The impact of Google neural machine translation on post-editing by student translators.” The Journal of Specialised Translation 31: 87–105.
    [Google Scholar]
  73. Yang, Yanxia and Xiangling Wang
    2019 “Modeling the intention to use machine translation for student translators: An extension of technology acceptance model.” Computers & Education 133: 116–126. doi: 10.1016/j.compedu.2019.01.015
    https://doi.org/10.1016/j.compedu.2019.01.015 [Google Scholar]
  74. Zaharias, Panagiotis
    2009 “Developing a usability evaluation method for e-learning applications: From functional usability to motivation to learn.” International Journal of Human-Computer Interaction 25(1): 75–98. doi: 10.1080/10447310802546716
    https://doi.org/10.1080/10447310802546716 [Google Scholar]
  75. Zhai, Yuming, Aurélien Max, and Anne Vilnat
    2018 “Construction of a multilingual corpus annotated with translation relations.” In Proceedings of the First Workshop on Linguistic Resources for Natural Language Processing, ed. by Peter Machonis, Anabela Barreiro, Kristina Kocijan, and Max Silberztein, 102–111. Santa Fe, New Mexico.
    [Google Scholar]


  • Article Type: Research Article
Keyword(s): human translation; machine translation; post-editing; usability