Volume 3, Issue 2
  • ISSN: 0929-9971
  • E-ISSN: 1569-9994

Abstract

This paper examines evaluation criteria for term-extraction software. These tools have gained popularity over the past few years, but they vary widely in design, and their performance does not compare (qualitatively) to that of humans performing the same task. The lists obtained from automated extraction must therefore always be filtered by users. The evaluation form proposed here consists of a number of preprocessing criteria (such as the language analyzed by the software and the identification strategies used) and a postprocessing criterion (software performance) that users must take into account before adopting such systems. Each criterion is defined and illustrated with examples. Commercial tools were also tested.
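
The postprocessing criterion above concerns software performance, measured against the term lists that users obtain after filtering the raw extractor output. As a minimal sketch of that kind of comparison (the abstract does not name specific metrics; precision/recall-style scoring is an assumption here, and all term lists below are hypothetical), the following Python snippet scores a raw extracted list against a human-filtered reference list:

    # Minimal sketch, not from the paper: scoring raw extractor output against
    # a human-filtered reference list. The metric names (precision, recall,
    # noise) are assumptions; the abstract only speaks of software performance.
    def score_extraction(extracted, reference):
        """Compare the raw extracted term list with the user-filtered list."""
        extracted_set = {t.lower() for t in extracted}
        reference_set = {t.lower() for t in reference}
        hits = extracted_set & reference_set
        precision = len(hits) / len(extracted_set) if extracted_set else 0.0
        recall = len(hits) / len(reference_set) if reference_set else 0.0
        # "Noise" here is the share of candidates the user had to discard.
        return {"precision": precision, "recall": recall, "noise": 1.0 - precision}

    if __name__ == "__main__":
        raw_output = ["term extraction", "the software", "complex term", "user"]  # hypothetical extractor output
        user_filtered = ["term extraction", "complex term"]  # hypothetical human-filtered list
        print(score_extraction(raw_output, user_filtered))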


DOI: 10.1075/term.3.2.04hom
Publication date: 1996-01-01
  • Article Type: Research Article
  • Keyword(s): Complex Terms; Computational Terminology; Evaluation; Term-Extraction Software