Volume 10, Issue 1 · ISSN 2215-1478 · E-ISSN 2215-1486

Abstract

Learner and L2 user corpora are increasingly valued in language testing and assessment because they can inform test design, revision, and validation. This paper illustrates the benefits of using an L2 corpus to explore patterns of epistemic stance marking in computer-mediated speaking tests with no live human interlocutor. Drawing on the British Council–Lancaster Aptis Speaking corpus – comprising over 630,000 words of L2 speech – we explored the frequency of epistemic stance markers (adverbial, adjectival, and verbal) across proficiency levels and speaking task types. The analysis revealed that epistemic stance was prevalent in test-taker discourse and that marker frequency was influenced by both L2 proficiency and task type. The findings demonstrate that computer-mediated speaking tests can elicit expressions of epistemic stance in a way comparable to tests involving human–human interaction. Implications are drawn for examiner training, test preparation, and an enriched understanding of the elements of pragmatic competence that can be elicited in computer-mediated speaking assessment.

Available under the CC BY 4.0 license.
DOI: 10.1075/ijlcr.00044.gab
2024-06-28 · 2024-07-23
