ChatGPT as an informant
Source: Nota Bene, Volume 1, Issue 2, Dec 2024, pp. 242-260
05 Apr 2024
08 Jun 2024
24 Jan 2025
Abstract
While previous machine learning protocols have failed to achieve even observational adequacy in acquiring natural language, generative large language models (LLMs) now produce large amounts of free text with few grammatical errors. This is surprising in view of what is known as “the logical problem of language acquisition”. Given the likely absence of negative evidence in the training process, how would an LLM acquire the information that certain strings are to be avoided as ill-formed? We attempt to employ Dutch-speaking ChatGPT as a linguistic informant by capitalizing on the documented “few-shot learning” ability of LLMs. We then investigate whether ChatGPT has acquired familiar island constraints, in particular the Complex Noun Phrase Constraint (CNPC), and compare its performance to that of native speakers. Although descriptive and explanatory adequacy may remain out of reach, initial results indicate that ChatGPT performs well above chance in detecting island violations.