Lexical signals of word relations
Editors’ introduction<br /> Renouf’s contribution, like those of Scott and Berber Sardinha, uses Corpus Linguistics (CL) techniques, analysing large amounts of text by computer. Her paper focuses on the identification of signals of semantic relations. Thus the frame “or, more exactly” might signal a general-particular relationship in “we stopped by the side of a lake, or, more exactly, a loch”. <i>Lake</i> is more general (a superordinate) than <i>loch</i> (a hyponym of <i>lake</i>). Renouf’s endeavour was to trace signals like “more exactly” in large text databases, teasing out the meaning relations that emerge. Her work thus complements Jordan’s analysis of Basis-Assessment in this volume, using quite different methods. <br />At the same time, she attempts to establish how close a match can be made between lexical signal and meaning: it is quite possible in language for one form to carry numerous functions (and, conversely, for the same meaning relation to be realised by numerous forms). This implies a need to pin down the meanings identified.<br />The backdrop to this work is Cruse’s influential (1986) account, which takes a non-CL view of meaning, heavily dependent on logic and on the notion of contextual normality, by which Cruse means the test a native or near-native speaker may apply as to whether a given string like “notable events such as a solar eclipse” seems normal. This is by no means a straightforward decision. Thus in<br /> (1) notable events such as meeting the President<br /> (2) notable events such as drinking tea<br />the degree of contextual normality would vary in rather unpredictable ways, depending on whether one regularly drank tea or worked in the President’s office.<br />The approach Renouf takes is instead to examine large numbers of texts by computer, to see what forms are actually attested.
In so doing, she finds that Cruse’s neat and logical patterns are not so neatly reflected in the evidence of large numbers of examples from newspaper text.<br />These two methods are not really a simple matter of pre-CL and post-CL, nor are they alternatives, in our view. For a start, the notion of “attested” examples in Linguistics is not new at all: much early work before computers went into collecting slips of paper recording heard or read examples of words. Perhaps the best-known example is the many thousands of slips collected by innumerable contributors for the construction of the Oxford English Dictionary. Second, we would argue that to use CL techniques does not constitute a new Linguistics, any more than using a spade constitutes “Spade Gardening”. It is merely a matter of accessing resources.<br />Third, it is not possible to use CL techniques without recourse to one’s intuitions, e.g. as to what is contextually normal. A CL method can only identify positive hits: when one finds numerous examples of a given string of words, one may conclude that it must be contextually normal. If one finds only one, one does not know whether it is a joke, a nonce construction, or just rather unusual; and if the string is not found even in a large database, that in itself does not guarantee that it is contextually abnormal, since new contextually normal strings can be created at any time.<br />Renouf’s paper thus shows that lexical semantics needs insights from logic and intuitions of contextual normality, together with CL methods enabling access to large numbers of examples.
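The corpus method described above — tracing a fixed lexical signal through a large text collection and inspecting the items that flank it — can be sketched as a minimal concordancer. The toy corpus, the signal string, and the window size below are all illustrative inventions, not Renouf’s actual data or software:

```python
# Minimal concordancer sketch: find a lexical signal ("or, more exactly")
# in a corpus and show the words on either side of it. In real CL work
# the corpus would be millions of words of newspaper text, not three
# invented sentences.

CORPUS = [
    "We stopped by the side of a lake, or, more exactly, a loch.",
    "He is a linguist, or, more exactly, a lexicographer.",
    "The tea was cold, and nobody minded.",
]

SIGNAL = "or, more exactly"  # the lexical signal to trace


def concordance(sentences, signal, window=3):
    """Return (left-context, signal, right-context) triples, keeping
    up to `window` words on either side of each occurrence."""
    hits = []
    for s in sentences:
        idx = s.lower().find(signal)
        if idx == -1:
            continue
        left = s[:idx].split()[-window:]
        right = s[idx + len(signal):].split()[:window]
        hits.append((" ".join(left), signal, " ".join(right)))
    return hits


for left, sig, right in concordance(CORPUS, SIGNAL):
    print(f"{left:>30} | {sig} | {right}")
```

A researcher would then read down the right-hand column to see whether the items introduced (here <i>loch</i>, <i>lexicographer</i>) are in fact more particular than those on the left — the manual “teasing out” step that no purely mechanical search can replace.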