Doing historical linguistics using contemporary data


Retrieving linguistic data from earlier stages of a language is a notoriously difficult task. Using large electronic corpora combined with frequency data, this task can to some extent be solved. In this article I focus on the use of token frequency as described in functional Grammaticalization Theory. Deverbal nouns are non-prototypical members of the noun class; as they get older, they tend to develop into more prototypical nouns, a process that Grammaticalization Theory calls lexicalization. This was tested in 2004 on a set of zero-suffix nouns in the Norwegian newspaper corpus, using modern texts only. In this article I test these findings against older texts from the same corpus.
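As a minimal sketch of the corpus method described above, token frequency can be computed by tokenizing each text and counting occurrences of a given word form. The sample sentences and the word form "kast" (a Norwegian zero-suffix deverbal noun) below are invented for illustration and are not drawn from the actual newspaper corpus:

```python
from collections import Counter
import re

# Invented mini-corpus for illustration; a real study would query
# the Norwegian newspaper corpus instead.
corpus = [
    "et kast med terningene",
    "hun tok et langt kast",
    "de kastet ballen",
]

def token_frequency(texts, form):
    """Count occurrences of one word form (its token frequency) across texts."""
    tokens = Counter()
    for text in texts:
        # \w+ matches Unicode word characters in Python 3, so it
        # handles Norwegian letters such as æ, ø, å.
        tokens.update(re.findall(r"\w+", text.lower()))
    return tokens[form.lower()]

print(token_frequency(corpus, "kast"))  # → 2 ("kastet" is a distinct token)
```

Note that exact string matching treats inflected forms like "kastet" as separate tokens; a lemmatized corpus would be needed to aggregate them.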

