Volume 51, Issue 2
  • ISSN: 1810-7478
  • E-ISSN: 2589-5230

Abstract

The wh-expressions ‘who’ and ‘what’ in Mandarin Chinese not only convey an interrogative meaning but also exhibit existential and universal readings in specific contexts (Huang 1982, Cheng 1991, 1995, Li 1992, Tsai 1994, Lin 1996, 1998). Focusing on the universal interpretation, this paper has three objectives. First, we demonstrate that current state-of-the-art large language models (LLMs), such as ChatGPT, lack reliability in distinguishing these three readings. Second, we develop a specialized natural language processing and understanding (NLP/NLU) system capable of processing and interpreting these wh-expressions across diverse contexts with greater accuracy, transparency, and consistency. Unlike current LLMs, our system is built upon Wang et al.’s (2019a, 2019b) generative linguistics-based NLP/NLU software tools, Articut and Loki, enabling it to require significantly less training data to interpret the universal reading. Third, we compare our model’s performance with that of ChatGPT, demonstrating its superior accuracy and robustness in interpreting the universal reading.
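As a rough illustration of how surface licensing contexts determine the reading of a Mandarin wh-word, the toy Python sketch below classifies shei ‘who’ as universal when followed by dou 都 ‘all’, as existential under a nonveridical licensor such as negation or the yes/no particle ma 吗, and as interrogative otherwise, following the generalizations in Huang (1982) and Lin (1998). This is only an illustrative sketch, not the authors’ Articut/Loki-based system; the function name and licensor lists are invented for exposition.

```python
# Toy illustration (not the paper's system): classify the reading of
# Mandarin "shei" (谁 'who') from surface licensing cues.
#   - "shei ... dou" -> universal ('everyone')      (dou 都 'all')
#   - negation / yes-no particle -> existential ('anyone/someone')
#   - otherwise -> interrogative ('who?')

UNIVERSAL_LICENSORS = ("都",)               # dou 'all', must follow shei
EXISTENTIAL_LICENSORS = ("没", "不", "吗")   # negation and question particle

def classify_shei_reading(sentence: str) -> str:
    """Return 'universal', 'existential', or 'interrogative'."""
    wh = "谁" if "谁" in sentence else "誰" if "誰" in sentence else None
    if wh is None:
        raise ValueError("sentence does not contain shei")
    after_wh = sentence[sentence.find(wh) + 1:]
    # shei ... dou -> universal quantification over persons
    if any(lic in after_wh for lic in UNIVERSAL_LICENSORS):
        return "universal"
    # a nonveridical licensor anywhere -> existential polarity reading
    if any(lic in sentence for lic in EXISTENTIAL_LICENSORS):
        return "existential"
    return "interrogative"

print(classify_shei_reading("谁都喜欢他"))  # universal: 'everyone likes him'
print(classify_shei_reading("他没看见谁"))  # existential: 'he didn't see anyone'
print(classify_shei_reading("谁来了？"))    # interrogative: 'who came?'
```

A real system must of course handle scope, a fuller inventory of nonveridical licensors, and dou associating with other material, which is precisely where grammar-rule-based tools such as Articut and Loki come in.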

Available under the CC BY-NC 4.0 license.
https://doi.org/10.1075/consl.24041.chu
2025-11-06
2025-12-04

References

  1. Atil, Berk, Alexa Chittams, Liseng Fu, Ferhan Ture, Lixinyu Xu, and Breck Baldwin
    2024 LLM Stability: A Detailed Analysis with some Surprises. Retrieved November 1, 2024, from https://arxiv.org/html/2408.04667v2
  2. Attali, Yigal, and Maya Bar-Hillel
    2003 Guess where: The position of correct answers in multiple-choice test items as a psychometric variable. Journal of Educational Measurement 40.2:109–128. https://doi.org/10.1111/j.1745-3984.2003.tb01099.x
  3. Bender, Emily M.
    2013 Linguistic Fundamentals for Natural Language Processing: 100 Essentials from Morphology and Syntax. San Rafael, CA: Morgan & Claypool Publishers. https://doi.org/10.1007/978-3-031-02150-3
  4. Bender, Emily M., and Alexander Koller
    2020 Climbing towards NLU: On meaning, form, and understanding in the age of data. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ed. by Dan Jurafsky, Joyce Chai, Natalie Schluter and Joel Tetreault, 5185–5198. Seattle, WA: Association for Computational Linguistics. https://doi.org/10.18653/v1/2020.acl-main.463
  5. Berent, Iris, and Gary Marcus
    2019 No integration without structured representations: Response to Pater. Language 95.1:75–86. https://doi.org/10.1353/lan.2019.0011
  6. Berwick, Robert C., Noam Chomsky, and Massimo Piattelli-Palmarini
    2013 Poverty of the stimulus stands: Why recent challenges fail. Rich Languages from Poor Inputs, ed. by Massimo Piattelli-Palmarini and Robert C. Berwick, 19–42. New York & Oxford: Oxford University Press.
  7. Blair-Stanek, Andrew, and Benjamin van Durme
    2025 LLMs Provide Unstable Answers to Legal Questions. Retrieved November 1, 2024, from https://arxiv.org/abs/2502.05196
  8. Burch, Robert
    2001 Charles Sanders Peirce. Stanford Encyclopedia of Philosophy, ed. by Edward Zalta and Uri Nodelman. Retrieved November 1, 2024, from https://plato.stanford.edu/entries/peirce/#dia
  9. Chen, Haifeng
    2012 Lun feizhenxing xunwen “shei” tezhi yiwenju [On non-veridical interrogatives with “who” specificity]. Qiqihaer Daxue Xuebao [Journal of Qiqihar University] 6:86–87.
  10. Chen, Lei, Bobo Li, Li Zheng, Haining Wang, Zixiang Meng, Runfeng Shi, Hao Fei, Jun Zhou, Fei Li, Chong Teng, and Donghong Ji
    2024 What factors influence LLMs’ judgments? A case study on question answering. Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), ed. by Nicoletta Calzolari, Min-Yen Kan, Veronique Hoste, Alessandro Lenci, Sakriani Sakti and Nianwen Xue, 17473–17485. Torino, Italy: European Language Resources Association (ELRA) and International Committee on Computational Linguistics (ICCL).
  11. Cheng, Chieh-Chih
    2014 A Developmental Study on the Non-Interrogative Interpretations of Mandarin Wh-words. MA thesis, National Tsing Hua University, Hsinchu.
  12. Cheng, Lisa Lai-Shen
    1991 On the Typology of Wh-questions. Doctoral dissertation, Massachusetts Institute of Technology, Cambridge, MA.
  13. 1995 On dou quantification. Journal of East Asian Linguistics 4.3:197–234. https://doi.org/10.1007/BF01731509
  14. Cheng, Lisa Lai-Shen, and Cheng-Teh James Huang
    1996 Two types of donkey sentences. Natural Language Semantics 4.2:121–163. https://doi.org/10.1007/BF00355411
  15. 2020 Revisiting donkey anaphora in Mandarin Chinese: A reply to Pan and Jiang (2015). International Journal of Chinese Linguistics 7.2:167–186. https://doi.org/10.1075/ijchl.19020.che
  16. Chomsky, Noam
    1970 Remarks on nominalization. Readings in English Transformational Grammar, ed. by Roderick Jacobs and Peter Rosenbaum, 184–221. Waltham, MA: Ginn & Co.
  17. 1973 Conditions on transformations. A Festschrift for Morris Halle, ed. by Stephen R. Anderson and Paul Kiparsky, 232–286. New York: Holt, Rinehart and Winston.
  18. 1980 Rules and Representations. New York: Columbia University Press.
  19. Cui, Songren, and Kuo-Ming Sung
    2022 Negations and questions. A Reference Grammar for Teaching Chinese: Syntax and Discourse, ed. by Songren Cui and Kuo-Ming Sung, 71–115. Singapore: Springer Publishing. https://doi.org/10.1007/978-981-33-4207-1_3
  20. Dentella, Vittoria, Fritz Günther, and Evelina Leivada
    2023 Systematic testing of three language models reveals low language accuracy, absence of response stability, and a yes-response bias. Proceedings of the National Academy of Sciences (PNAS) 120.51, ed. by May Berenbaum, article number e2309583120. Washington, D.C.: National Academy of Sciences (NAS). https://doi.org/10.1073/pnas.2309583120
  21. Diebold, Francis X.
    2012 On the origin(s) and development of the term “Big Data.” PIER Working Paper, ed. by Penn Institute for Economic Research, article number 12-037. Philadelphia, PA: University of Pennsylvania. https://doi.org/10.2139/ssrn.2152421
  22. Douven, Igor
    2017 Peirce on abduction. Stanford Encyclopedia of Philosophy, ed. by Edward Zalta and Uri Nodelman. Retrieved November 1, 2024, from https://plato.stanford.edu/entries/abduction/peirce.html
  23. Everaert, Martin B., Marinus Antonius Christianus Huybregts, Noam Chomsky, Robert C. Berwick, and Johan J. Bolhuis
    2015 Structures, not strings: Linguistics as part of the cognitive sciences. Trends in Cognitive Sciences 19.12:729–743. https://doi.org/10.1016/j.tics.2015.09.008
  24. Fodor, Jerry A., and Zenon W. Pylyshyn
    1988 Connectionism and cognitive architecture: A critical analysis. Cognition 28.1–2:3–71. https://doi.org/10.1016/0010-0277(88)90031-5
  25. Gao, Wencheng, and Xiaofeng Zhang
    2021 A study of negative polarity items in Chinese existential sentences. Linguistics and Literature Studies 9.1:12–21. https://doi.org/10.13189/lls.2021.090102
  26. Gundersen, Odd Erik, and Sigbjørn Kjensmo
    2018 State of the art: Reproducibility in artificial intelligence. Proceedings of the 32nd AAAI Conference on Artificial Intelligence and 30th Innovative Applications of Artificial Intelligence Conference and 8th AAAI Symposium on Educational Advances in Artificial Intelligence, ed. by Sheila McIlraith and Kilian Weinberger, 1644–1651. Washington, D.C.: AAAI Press. https://doi.org/10.1609/aaai.v32i1.11503
  27. Hendrycks, Dan, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt
    2020 Measuring Massive Multitask Language Understanding. Retrieved November 1, 2024, from https://arxiv.org/abs/2009.03300
  28. Huang, Cheng-Teh James
    1982 Logical Relations in Chinese and the Theory of Grammar. Doctoral dissertation, Massachusetts Institute of Technology, Cambridge, MA.
  29. Huang, Rui-Heng Ray
    2012 On two types of existential subjects in Chinese A-not-A questions. Language and Linguistics 13.6:1171–1210.
  30. Huang, Haiquan, Peng Zhou, and Stephen Crain
    2018 Wh-questions, universal statements and free choice inferences in child Mandarin. Journal of Psycholinguistic Research 47.6:1391–1409. https://doi.org/10.1007/s10936-017-9535-6
  31. Jackendoff, Ray
    1977 X-bar Syntax: A Study of Phrase Structure. Cambridge, MA: MIT Press.
  32. Kambhampati, Subbarao
    2024 Can large language models reason and plan? Annals of the New York Academy of Sciences 1534.1:15–18. https://doi.org/10.1111/nyas.15125
  33. Lake, Brenden, and Marco Baroni
    2018 Generalization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks. Proceedings of the 35th International Conference on Machine Learning, ed. by Jennifer Dy and Andreas Krause, 4487–4499. Stockholm, Sweden: International Machine Learning Society (IMLS).
  34. LeCun, Yann, Yoshua Bengio, and Geoffrey Hinton
    2015 Deep learning. Nature 521.7553:436–444. https://doi.org/10.1038/nature14539
  35. Lee, Hun-Tak Thomas
    1986 Studies on Quantification in Chinese. Doctoral dissertation, University of California, Los Angeles.
  36. Leivada, Evelina, Elliot Murphy, and Gary Marcus
    2023 DALL-E 2 fails to reliably capture common syntactic processes. Social Sciences & Humanities Open 8.1:1–10.
  37. Leivada, Evelina, Gary Marcus, Fritz Günther, and Elliot Murphy
    2024a A Sentence is Worth a Thousand Pictures: Can Large Language Models Understand Hum4n L4ngu4ge and the W0rld behind W0rds? Retrieved November 1, 2024, from https://arxiv.org/abs/2308.00109
  38. Leivada, Evelina, Vittoria Dentella, and Fritz Günther
    2024b Evaluating the language abilities of Large Language Models vs. humans: Three caveats. Biolinguistics 18:1–12. https://doi.org/10.5964/bioling.14391
  39. Li, Yen-Hui Audrey
    1992 Indefinite wh in Mandarin Chinese. Journal of East Asian Linguistics 1.2:125–155. https://doi.org/10.1007/BF00130234
  40. Lin, Jo-Wang
    1996 Polarity Licensing and Wh-phrase Quantification in Chinese. Doctoral dissertation, University of Massachusetts at Amherst, Amherst, MA.
  41. 1998 On existential polarity wh-phrases in Chinese. Journal of East Asian Linguistics 7.3:219–255. https://doi.org/10.1023/A:1008284513325
  42. 2004 Choice functions and scope of existential polarity wh-phrases in Mandarin Chinese. Linguistics and Philosophy 27.4:451–491. https://doi.org/10.1023/B:LING.0000024407.76999.f7
  43. 2014 Wh-expressions in Mandarin Chinese. The Handbook of Chinese Linguistics, ed. by Cheng-Teh James Huang, Yen-Hui Audrey Li and Andrew Simpson, 180–207. Hoboken, NJ: John Wiley & Sons. https://doi.org/10.1002/9781118584552.ch8
  44. Linzen, Tal
    2019 What can linguistics and deep learning contribute to each other? Response to Pater. Language 95.1:99–108. https://doi.org/10.1353/lan.2019.0015
  45. Linzen, Tal, and Marco Baroni
    2021 Syntactic structure from deep learning. Annual Review of Linguistics 7:195–212. https://doi.org/10.1146/annurev-linguistics-032020-051035
  46. Liu, Mingming
    2019 Unifying universal and existential wh’s in Mandarin. Proceedings of the 29th Semantics and Linguistic Theory Conference (SALT-29), ed. by Katherine Blake, Forrest Davis, Kaelyn Lamp and Joseph Rhyne, 258–278. Los Angeles, CA: University of California. https://doi.org/10.3765/salt.v29i0.4611
  47. Lu, Sin-En, Bo-Han Lu, Chao-Yi Lu, and Richard Tzong-Han Tsai
    2022 Exploring methods for building dialects-Mandarin code-mixing corpora: A case study in Taiwanese Hokkien. Findings of the Association for Computational Linguistics: EMNLP 2022, ed. by Yoav Goldberg, Zornitsa Kozareva and Yue Zhang, 6287–6305. Abu Dhabi, United Arab Emirates: Association for Computational Linguistics. https://doi.org/10.18653/v1/2022.findings-emnlp.469
  48. Marcus, Gary F.
    2001 The Algebraic Mind: Integrating Connectionism and Cognitive Science. Cambridge, MA: MIT Press. https://doi.org/10.7551/mitpress/1187.001.0001
  49. 2018 Deep Learning: A Critical Appraisal. Retrieved November 1, 2024, from https://arxiv.org/abs/1801.00631
  50. 2024 Taming Silicon Valley: How We Can Ensure That AI Works for Us. Cambridge, MA: MIT Press. https://doi.org/10.7551/mitpress/15782.001.0001
  51. Marcus, Gary F., Ursula Brinkmann, Harald Clahsen, Richard Wiese, and Steven Pinker
    1995 German inflection: The exception that proves the rule. Cognitive Psychology 29.3:189–256. https://doi.org/10.1006/cogp.1995.1015
  52. Marcus, Gary F., and Ernest Davis
    2019 Rebooting AI: Building Artificial Intelligence We Can Trust. New York & London: Vintage Books.
  53. Mirzadeh, Iman, Keivan Alizadeh, Hooman Shahrokhi, Oncel Tuzel, Samy Bengio, and Mehrdad Farajtabar
    2024 GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models. Retrieved November 1, 2024, from https://arxiv.org/abs/2410.05229. https://doi.org/10.48550/arXiv.2410.05229
  54. Moravec, Hans
    1988 Mind Children. Cambridge, MA: Harvard University Press.
  55. 1999 Rise of the robots. Scientific American 281.6:124–135. https://doi.org/10.1038/scientificamerican1299-124
  56. Murphy, Elliot, and Evelina Leivada
    2022 A model for learning strings is not a model of language. Proceedings of the National Academy of Sciences (PNAS) 119.23, ed. by May Berenbaum, article number e2201651119. Washington, D.C.: National Academy of Sciences (NAS). https://doi.org/10.1073/pnas.2201651119
  57. Nunan, David
    1993 Introducing Discourse Analysis. London: Penguin English.
  58. OpenAI
    2023 GPT-4 Technical Report. Retrieved November 1, 2024, from https://arxiv.org/abs/2303.08774
  59. OpenAI
    2024 GPT-4. Retrieved November 1, 2024, from https://openai.com/gpt-4
  60. Pater, Joe
    2019 Generative linguistics and neural networks at 60: Foundation, friction, and fusion. Language 95.1:41–74. https://doi.org/10.1353/lan.2019.0009
  61. Pinker, Steven
    1994 The Language Instinct. New York: William Morrow and Company. https://doi.org/10.1037/e412952005-009
  62. 1999 Words and Rules: The Ingredients of Language. New York: Basic Books.
  63. Pinker, Steven, and Alan Prince
    1988 On language and connectionism: Analysis of a parallel distributed processing model of language acquisition. Cognition 28.1–2:73–193. https://doi.org/10.1016/0010-0277(88)90032-7
  64. Qin, Tian, Naomi Saphra, and David Alvarez-Melis
    2024 Sometimes I am a Tree: Data Drives Unstable Hierarchical Generalization. Retrieved November 1, 2024, from https://arxiv.org/abs/2412.04619
  65. Renze, Matthew, and Erhan Guven
    2024 The effect of sampling temperature on problem solving in large language models. Findings of the Association for Computational Linguistics: EMNLP 2024, ed. by Yaser Al-Onaizan, Mohit Bansal and Yun-Nung Chen, 7346–7356. Miami, FL: Association for Computational Linguistics. https://doi.org/10.18653/v1/2024.findings-emnlp.432
  66. Saffran, Jenny R., Richard N. Aslin, and Elissa L. Newport
    1996 Statistical learning by 8-month-old infants. Science 274.5294:1926–1928. https://doi.org/10.1126/science.274.5294.1926
  67. Shan, Wei
    2010 Tezhiwen biao fouding yongfa yanjiu [The study on the negative usage of wh-questions]. Jiamusi Daxue Shehuikexue Xuebao [Journal of Social Science of Jiamusi University] 28.5:63–65.
  68. Shi, Hongli
    2021 Hanyu feiyiwen yongfa yiwenci de jufa weizhi ji yuyi fenxi [The syntactic positions and semantics of non-interrogative wh-items in Mandarin Chinese]. Zhongguo Yuwen Tongxun [Current Research in Chinese Linguistics] 100.1:41–53.
  69. Stowell, Tim
    1981 Origins of Phrase Structure. Doctoral dissertation, Massachusetts Institute of Technology, Cambridge, MA.
  70. Su, Yi Esther, Yu Jin, Guo-Bin Wan, Ji-Shui Zhang, and Lin-Yan Su
    2014 Interpretation of wh-words in Mandarin-speaking high-functioning children with autism spectrum disorders. Research in Autism Spectrum Disorders 8.10:1364–1372. https://doi.org/10.1016/j.rasd.2014.07.008
  71. Tang, Ke
    2011 Zhiren Yiwenci Gongxian Xianxiang de Renzhi Yanjiu [A Cognitive Research on the Co-occurrence of Interrogatives Denoting Persons]. MA thesis, Hunan Normal University, Hunan, China.
  72. Trinh, Trieu H., and Minh-Thang Luong
    2024 AlphaGeometry: An Olympiad-level AI System for Geometry. London: Google DeepMind.
  73. Tsai, Wei-Tien Dylan
    1994 On Economizing the Theory of A-Bar Dependencies. Doctoral dissertation, Massachusetts Institute of Technology, Cambridge, MA.
  74. 2001 On subject specificity and theory of syntax-semantics interface. Journal of East Asian Linguistics 10.2:129–168. https://doi.org/10.1023/A:1008321327978
  75. Wang, Peiyi, Lei Li, Liang Chen, Dawei Zhu, Binghuai Lin, Yunbo Cao, Qi Liu, Tianyu Liu, and Zhifang Sui
    2023 Large language models are not fair evaluators. Findings of the Association for Computational Linguistics: ACL 2024, ed. by Lun-Wei Ku, Andre Martins and Vivek Srikumar, 9440–9450. Bangkok, Thailand: Association for Computational Linguistics (ACL).
  76. Wang, Wen-jet, Chia-jung Chen, Chia-ming Lee, Chien-yu Lai, and Hsin-hung Lin
    2019a Articut: Chinese Word Segmentation and POS Tagging System [Computer program]. Retrieved February 1, 2024, from https://api.droidtown.co
  77. 2019b Linguistics-Oriented Keyword Interface NLU System [Computer program]. Retrieved February 1, 2024, from https://api.droidtown.co
  78. Wang, Yaxue, and Zhaoting Li
    2013 Guoyu yiwenci de feiyiwen yongfa ertong xide yanjiu — yi “shenme” han “shei” wei li [A study on child acquisition of non-interrogative use of Chinese wh-words “shenme” and “shei”]. Shaoguan Xueyuan Xuebao [Journal of Shaoguan College] 34.9:132–138.
  79. Wei, Sheng-Lun, Cheng-Kuang Wu, Hen-Hsen Huang, and Hsin-Hsi Chen
    2024 Unveiling selection biases: Exploring order and token sensitivity in Large Language Models. Findings of the Association for Computational Linguistics: ACL 2024, ed. by Lun-Wei Ku, Andre Martins and Vivek Srikumar, 5598–5621. Bangkok, Thailand: Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2024.findings-acl.333
  80. Wu, Tianyu, Shizhu He, Jingping Liu, Siqi Sun, Kang Liu, Qing-Long Han, and Yang Tang
    2023 A brief overview of ChatGPT: The history, status quo and potential future development. IEEE/CAA Journal of Automatica Sinica 10.5:1122–1136. https://doi.org/10.1109/JAS.2023.123618
  81. Xie, Zhiguo
    2007 Nonveridicality and existential polarity wh-phrases in Mandarin. Proceedings from the 43rd Annual Meeting of the Chicago Linguistic Society, ed. by Malcolm Elliott, James Kirby, Osamu Sawada, Eleni Staraki and Suwon Yoon, 121–135. Chicago, IL: Chicago Linguistic Society.
  82. Yang, Chung-Yu Barry
    2024 Revisiting sentence-final adjunct WHAT. Language and Linguistics 25.1:162–186.
  83. Yang, Charles
    2004 Universal Grammar, statistics, or both? Trends in Cognitive Sciences 8.10:451–456. https://doi.org/10.1016/j.tics.2004.08.006
  84. Yang, Yang, Leticia Pablos, and Lisa Lai-Shen Cheng
    2023 The processing mechanisms of Mandarin wh-questions. Journal of Chinese Linguistics 51.1:147–171. https://doi.org/10.1353/jcl.2023.0009
  85. Zhang, Junge
    2006 “Shei,” “nage(ren),” “shenmeren” zhi yitong [The similarities and differences of “who”, “which person”, and “what person”]. Fuyang Shifan Xueyuan Xuebao [Journal of Fuyang Normal University] 6:69–72.
  86. Zheng, Lianmin, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, and Eric Xing
    2023 Judging LLM-as-a-judge with MT-bench and Chatbot Arena. Retrieved November 1, 2024, from https://arxiv.org/abs/2306.05685