Volume 28, Issue 2
  • ISSN: 0924-1884
  • E-ISSN: 1569-9986


The volume of Audiovisual Translation (AVT) is increasing to meet the rising demand for audiovisual content that needs to be accessible around the world. Machine Translation (MT) is one of the most innovative technologies deployed in the field of translation, but it is still too early to predict how it will support the creativity and productivity of professional translators in the future. Currently, MT is more widely used in (non-AV) text translation than in AVT. In this article, we discuss MT technology and demonstrate why its use in AVT scenarios is particularly challenging. We also present some potentially useful methods and tools for measuring MT quality that were developed primarily for text translation. The ultimate objective is to bridge the gap between the tech-savvy AVT community, on the one hand, and researchers and developers in the field of high-quality MT, on the other.
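Among the text-translation quality metrics the article refers to, BLEU is one of the most widely used: it scores MT output by clipped n-gram precision against a human reference, scaled by a brevity penalty. The sketch below is an illustration only, not the article's own tooling; it assumes a single reference per sentence (real BLEU supports multiple references and corpus-level aggregation), and the function names are ours.

```python
from collections import Counter
from math import exp, log

def modified_ngram_precision(candidate, reference, n):
    """Clipped n-gram precision: each candidate n-gram counts at most
    as often as it occurs in the reference."""
    cand = Counter(tuple(candidate[i:i + n]) for i in range(len(candidate) - n + 1))
    ref = Counter(tuple(reference[i:i + n]) for i in range(len(reference) - n + 1))
    overlap = sum(min(count, ref[ng]) for ng, count in cand.items())
    total = sum(cand.values())
    return overlap / total if total else 0.0

def bleu(candidate, reference, max_n=4):
    """Single-reference, sentence-level BLEU sketch: brevity penalty times
    the geometric mean of 1..max_n clipped n-gram precisions."""
    precisions = [modified_ngram_precision(candidate, reference, n)
                  for n in range(1, max_n + 1)]
    if min(precisions) == 0.0:
        return 0.0  # geometric mean collapses if any precision is zero
    # Penalize candidates shorter than the reference.
    bp = 1.0 if len(candidate) >= len(reference) else exp(1 - len(reference) / len(candidate))
    return bp * exp(sum(log(p) for p in precisions) / max_n)
```

The clipping step is what separates BLEU from naive precision: a candidate such as "the the the" scored against "the cat" earns unigram precision 1/3 rather than 1, because "the" occurs only once in the reference. This sensitivity to fluent but short, repetitive output is one reason subtitle translation, with its heavily condensed segments, is hard to evaluate with metrics designed for full text.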






  • Article Type: Research Article
Keyword(s): audiovisual translation; evaluation; machine translation; translation quality