Volume 32, Issue 3

Abstract

This paper discusses the development of an assessment designed to satisfy the International Civil Aviation Organization (ICAO) Language Proficiency Requirements. The Versant Aviation English Test uses speech recognition technology and a computerized testing platform, so that test administration and scoring are fully automated. Developed in collaboration with the U.S. Federal Aviation Administration, the 25-minute test is delivered via telephone or computer. Two issues of interest are discussed. The first concerns the practicalities of assessing candidates on each of six separate dimensions of spoken proficiency: Pronunciation, Structure, Vocabulary, Fluency, Comprehension, and Interactions. Although an automated scoring system can objectively separate these skills, we question whether human raters have the capacity to do so in oral interviews. The second issue is how an automated test can provide a valid assessment of spoken interactions. Tasks were designed to simulate the information exchange between pilots and controllers, so that candidates’ proficiency in ‘Interactions’ could be measured, for example, by eliciting functions such as correcting miscommunications and providing clarification. It is argued that candidate ability can be probed and estimated in a fair and standardized way by presenting a series of independent items targeted in difficulty at the various ICAO levels.
