Volume 18, Issue 1
  • ISSN: 0155-0640
  • E-ISSN: 1833-7139

Abstract

This paper investigates the empirical validity of the Monash-Melbourne computer adaptive test for French (French CAT), a single-parameter Rasch-model measure of underlying morphosyntactic proficiency. It focuses, in particular, on the accuracy of the French CAT as a tool for streaming incoming university students into three levels of a first-year (post-high-school) French course. Psychometric ability estimations of the Rasch model are compared against instructors’ assessments of students’ overall linguistic competence. A comparison is also made between the theoretical confidence interval of predicted abilities and the actual distribution of testee scores. Finally, individual student French CAT scores are correlated with end-of-semester language examination results. In all instances, Item Response Theory, upon which the French CAT is based, is shown to provide a highly valid means of determining linguistic ability for the purposes of course placement. Moreover, given the significant correlation between initial streaming and end-of-semester results, the French CAT is also demonstrated to be a good predictor of short-term achievement.
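The abstract refers to a single-parameter (Rasch) model and its psychometric ability estimates with theoretical confidence intervals. The paper itself contains no code; as a generic illustration only (not the French CAT's actual implementation), the sketch below shows the standard Rasch probability function, a maximum-likelihood ability estimate with its standard error, and the usual adaptive-testing rule of selecting the unused item whose difficulty is nearest the current ability estimate. All function names are hypothetical.

```python
import math

def p_correct(theta: float, b: float) -> float:
    """Rasch (one-parameter) probability that a testee of ability `theta`
    answers an item of difficulty `b` correctly (both on the logit scale)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def estimate_ability(responses, max_iter=50, tol=1e-6):
    """Maximum-likelihood ability estimate via Newton-Raphson, from a list of
    (difficulty, answered_correctly) pairs. Requires at least one correct and
    one incorrect response; otherwise the ML estimate diverges."""
    theta = 0.0
    info = 0.0
    for _ in range(max_iter):
        # Gradient of the log-likelihood, and test information, at theta.
        grad = sum(x - p_correct(theta, b) for b, x in responses)
        info = sum(p_correct(theta, b) * (1.0 - p_correct(theta, b))
                   for b, _ in responses)
        step = grad / info
        theta += step
        if abs(step) < tol:
            break
    # Standard error of the ability estimate is 1 / sqrt(test information);
    # this is what yields the "theoretical confidence interval" of a score.
    return theta, 1.0 / math.sqrt(info)

def next_item(theta, difficulties, administered):
    """Adaptive item selection: under the Rasch model the most informative
    item is the unused one whose difficulty is closest to theta."""
    candidates = [i for i in range(len(difficulties)) if i not in administered]
    return min(candidates, key=lambda i: abs(difficulties[i] - theta))
```

For example, a testee who passes items at difficulties -1.0 and 0.0 but fails items at 0.0 and 1.0 receives an ability estimate near 0 logits, with a standard error of roughly one logit for so short a test; a real CAT continues administering items until that standard error falls below a preset threshold.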

DOI: 10.1075/aral.18.1.04bur
Published: 1995-01-01
Online: 2019-12-15
References

  1. Burston, J. (1993a) The validity of French VCE results in tertiary level language placement. Carrefours 11(1): 27–30.
  2. Burston, J. (1993b) The validity of French VCE results in tertiary level language placement: Part 2. Carrefours 11(2): 42–47.
  3. Burston, J., M. Monville-Burston and J. Harfouch (1994) Comparison of evaluation measures in first year students’ placement. Paper presented at the Second Conference of the Australian Society of French Studies, Melbourne, 12–14 July 1994.
  4. Burston, J. and M. Monville-Burston (forthcoming) Practical design and implementation considerations of a computer-adaptive foreign language test: the Monash/Melbourne French CAT. CALICO Journal.
  5. Hambleton, R. K. and L. L. Cook (1977) Latent trait models and their use in the analysis of educational test data. Journal of Educational Measurement 14(2): 75–96. https://doi.org/10.1111/j.1745-3984.1977.tb00030.x
  6. Hatch, E. and A. Lazaraton (1991) The research manual: Design and statistics for applied linguistics. New York: Newbury House.
  7. Lord, F. M. (1980) Applications of item response theory to practical testing problems. Hillsdale, NJ: Lawrence Erlbaum.
  8. Lunz, M. E. and B. A. Bergstrom (1991) Comparability of decisions for computer adaptive and written examinations. Journal of Allied Health 20(1): 15–23.
  9. Lunz, M. E., J. A. Stahl and B. A. Bergstrom (1993) Targeting, test length, test precision and decision accuracy for computerized adaptive tests. Paper presented at the annual meeting of the American Educational Research Association, Atlanta, April 1993.
  10. Rasch, G. (1960) Probabilistic models for some intelligence and attainment tests. Chicago: University of Chicago Press.
  11. Traub, R. E. and R. G. Wolfe (1981) Latent trait theories and the assessment of educational achievement. In D. C. Berliner (ed.) Review of research in education. Washington, DC: American Educational Research Association. https://doi.org/10.2307/1167189
  12. Weiss, D. J. (1982) Improving measurement quality and efficiency with adaptive testing. Applied Psychological Measurement 6(4): 473–492. https://doi.org/10.1177/014662168200600408
  13. Weiss, D. J. and G. G. Kingsbury (1984) Application of computerized adaptive testing to educational problems. Journal of Educational Measurement 21(4): 361–375. https://doi.org/10.1111/j.1745-3984.1984.tb01040.x
  14. Wright, B. D. and M. H. Stone (1979) Best test design. Chicago, IL: MESA.
  15. Wright, B. D. and G. Masters (1982) Rating scale analysis. Chicago, IL: MESA.
  • Article Type: Research Article