Exploring the potential of textually-enhanced captioned video to direct learners’ attention to challenging sound contrasts

Abstract

This study investigates the potential of textually-enhanced captioned video to direct EFL learners’ attention to a difficult L2 vowel contrast (English /æ/-/ʌ/) while they watched a 30-minute episode of Ted Lasso. Spanish EFL learners (n = 89) were randomly assigned to one of five viewing conditions: (1) unenhanced captions; enhanced captions with /æ/ and /ʌ/ in two different colours and the target words in either phonetic symbols (2) or orthography (3); or enhanced captions with /æ/ and /ʌ/ in the same colour, again in either phonetic symbols (4) or orthography (5). The participants’ eye movements were recorded with a Tobii TX-1200 eye-tracker. The textual enhancement was effective in directing learners’ attention to the target words, and this attention was generally maintained throughout the episode. The enhanced conditions promoted higher fixation rates and longer fixation durations than the unenhanced condition. Additionally, responses to a post-viewing questionnaire revealed that participants considered these types of enhancement useful for spotting instances of the target sounds and did not find the enhanced captions overwhelming.

/content/journals/10.1075/jslp.24043.fou
2025-01-14
2025-02-15
