
Abstract
We introduce a vocalization-based behavioral coding protocol designed to assess engagement in in-the-wild child-robot interactions. We evaluate inter-coder agreement among 3–4 coders applying the protocol to two training data sets and two experimental data sets in two languages (English and Japanese), assessing the results both as coded and after grouping behavior codes into broader categories. Using the coding results, we analyze segments of the four experimental interactions. We find that this methodology has merit for vocalization-based behavioral analysis, especially when used to build consensus among multiple behavioral coders to account for ambiguity. It still has several limitations, including a generally low inter-coder agreement rate even when the controls are in agreement, which we attribute to the ambiguity of voice recordings of group interactions. Consequently, using multiple coders to build consensus is not an option but a necessity for eliminating clearly subjective results.
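The abstract does not name the agreement statistic used. As an illustration only, the sketch below computes Fleiss' kappa, a common measure of agreement among three or more coders, on a matrix of per-segment rating counts; the category labels, segment counts, and ratings are invented for the example and are not from the paper.

```python
import numpy as np

def fleiss_kappa(counts: np.ndarray) -> float:
    """Fleiss' kappa for a (segments x categories) matrix of rating counts.

    counts[i, j] = number of coders who assigned segment i to category j.
    Every row must sum to the same number of coders.
    """
    n_items, _ = counts.shape
    raters_per_item = counts.sum(axis=1)
    assert np.all(raters_per_item == raters_per_item[0]), \
        "each segment needs the same number of coders"
    n = raters_per_item[0]

    # Observed agreement: fraction of coder pairs agreeing on each segment.
    p_i = (np.sum(counts ** 2, axis=1) - n) / (n * (n - 1))
    p_bar = p_i.mean()

    # Chance agreement from the marginal category proportions.
    p_j = counts.sum(axis=0) / (n_items * n)
    p_e = np.sum(p_j ** 2)

    return (p_bar - p_e) / (1 - p_e)

# Hypothetical example: 4 coders label 5 vocalization segments into
# 3 broad categories (e.g., engaged / neutral / disengaged).
ratings = np.array([
    [4, 0, 0],
    [2, 2, 0],
    [1, 3, 0],
    [0, 1, 3],
    [3, 1, 0],
])
print(f"Fleiss' kappa: {fleiss_kappa(ratings):.3f}")
```

Grouping fine-grained behavior codes into broader categories, as the abstract describes, amounts to summing the corresponding columns of such a count matrix before computing the statistic.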