Volume 23, Issue 1
  • ISSN: 1384-6655
  • E-ISSN: 1569-9811

Abstract

Current syntactic annotation of large-scale learner corpora relies mainly on “standard parsers” trained on native-language data. Understanding how these parsers perform on learner data is important for downstream research and applications related to learner language. This study evaluates the performance of multiple standard probabilistic parsers on learner English. Our contributions are threefold. Firstly, we demonstrate that the common practice of constructing a gold standard – by manually correcting the pre-annotation of a single parser – can introduce bias into parser evaluation, and we propose an alternative annotation method that controls for this bias. Secondly, we quantify the influence of learner errors on parsing errors and identify the learner errors that impact parsing most. Finally, we compare the performance of the parsers on learner English and native English. Our results have useful implications for how to select a standard parser for learner English.
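Dependency-parser performance against a gold standard is conventionally scored with unlabeled and labeled attachment scores (UAS/LAS). As a minimal illustration of this kind of evaluation (not code from the article; the token format of `(head, label)` pairs is a simplifying assumption), the scores can be computed as follows:

```python
def attachment_scores(gold, predicted):
    """Return (UAS, LAS) for token-aligned gold and predicted analyses.

    gold, predicted: sequences of (head_index, relation_label) pairs.
    UAS counts tokens whose head is correct; LAS additionally
    requires the dependency label to match.
    """
    assert len(gold) == len(predicted)
    uas_hits = las_hits = 0
    for (g_head, g_label), (p_head, p_label) in zip(gold, predicted):
        if g_head == p_head:
            uas_hits += 1
            if g_label == p_label:
                las_hits += 1
    n = len(gold)
    return uas_hits / n, las_hits / n

# Hypothetical example: a four-token sentence in which the parser
# mis-attaches one determiner (token 3: gold head 4, predicted head 2).
gold = [(2, "nsubj"), (0, "root"), (4, "det"), (2, "obj")]
pred = [(2, "nsubj"), (0, "root"), (2, "det"), (2, "obj")]
print(attachment_scores(gold, pred))  # (0.75, 0.75)
```

Comparing such scores across parsers on the same learner-English gold standard is the basic operation the study builds on; the study's point about annotation bias is that the gold standard itself can favor the parser whose output was manually corrected to produce it.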

https://doi.org/10.1075/ijcl.16080.hua
Published online: 2018-05-31

  • Article Type: Research Article
Keyword(s): annotation bias; dependency parsing; learner English; learner error; parsing accuracy