Abstract
In this study, we applied and evaluated a scoring method known as comparative judgement to assess spoken-language interpreting. This methodological exploration extends previous efforts to optimise scoring methods for assessing interpreting. Essentially, comparative judgement requires judges to compare two similar objects and make a binary decision about their relative quality. To evaluate its reliability, validity and usefulness in the assessment of interpreting, we recruited two groups of judges (novice and experienced) to assess 66 two-way English/Chinese interpretations on a computerised comparative judgement system. Our data analysis shows that the new method produced reliable and valid results across judge types and interpreting directions. However, the judges held polarised opinions about the method's usefulness: while some considered it convenient, efficient and reliable, others held the opposite view. We discuss the results through an integrated analysis of the data collected, outline the method's perceived drawbacks and propose possible solutions to them. We call for more evidence-based, substantive investigation into comparative judgement as a potentially useful method for assessing spoken-language interpreting in certain settings.
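The abstract does not specify how the binary judgements are converted into scores, but comparative judgement systems conventionally fit a pairwise-comparison model such as Bradley–Terry to the win/loss data. The sketch below is a hypothetical illustration of that general technique (not the authors' actual scoring procedure), estimating a latent quality score for each interpretation from a list of (winner, loser) judgements via the standard iterative MM algorithm:

```python
def bradley_terry(n_items, comparisons, iters=200):
    """Estimate latent quality scores from binary pairwise judgements
    using the Bradley-Terry model fitted by the standard MM algorithm.

    comparisons: list of (winner_index, loser_index) pairs, one per
    judgement. Returns a list of positive strengths, normalised so
    they sum to n_items; higher means judged better more often.
    """
    wins = [0] * n_items          # total wins per item
    pair_counts = {}              # total games per unordered pair
    for w, l in comparisons:
        wins[w] += 1
        key = (min(w, l), max(w, l))
        pair_counts[key] = pair_counts.get(key, 0) + 1

    p = [1.0] * n_items           # initial strengths
    for _ in range(iters):
        new_p = []
        for i in range(n_items):
            # MM update: wins_i / sum over opponents j of n_ij/(p_i+p_j)
            denom = 0.0
            for (a, b), n in pair_counts.items():
                if i == a:
                    denom += n / (p[i] + p[b])
                elif i == b:
                    denom += n / (p[i] + p[a])
            new_p.append(wins[i] / denom if denom > 0 else p[i])
        s = sum(new_p)
        p = [x * n_items / s for x in new_p]   # renormalise each pass
    return p


# Toy usage: three interpretations, eight judgements.
judgements = [(0, 1), (0, 1), (0, 1), (1, 0),
              (0, 2), (0, 2), (2, 1), (2, 1)]
scores = bradley_terry(3, judgements)
# Item 0 wins most often, so it receives the highest score.
```

In practice the fitted strengths (or their logarithms) serve as the interpretations' scores, and scale-separation statistics computed from the model underpin the reliability figures reported in studies like this one.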