Pragmatics & Cognition - Volume 8, Issue 1, 2000
- Putting names to faces: A review and tests of the models
Author(s): Derek R. Carson, A. Mike Burton and Vicki Bruce
pp. 9–62
It is well established that the retrieval of names is harder than the retrieval of other identity-specific information. This paper reviews the more influential accounts put forward to explain why names are so difficult to retrieve, and a series of five experiments tests a number of these accounts. Experiments One to Three examine the claims that names are hard to recall because they are typically meaningless (Cohen 1990) or unique (Burton and Bruce 1992; Brédart, Valentine, Calder, and Gassi 1995). Participants are shown photographs of unfamiliar people (Experiments One and Two) or familiar people (Experiment Three) and given three pieces of information about each: a name, a unique piece of information, and a shared piece of information. Learning follows an incidental procedure, and participants are given a surprise recall test. In each experiment shared information is recalled most often, followed by unique information, followed by names. Experiment Four tests both the ‘uniqueness’ account and an account based on the specificity of the naming response (Brédart 1993). Participants are presented with famous faces and asked to categorise them by semantic group (occupation). Results indicate that less time is needed to perform this task when the group is a subset of a larger semantic category. A final experiment examines the claim that names might take longer to access because they are retrieved less often than other classes of information. Latencies show that participants remain more efficient when categorising faces by occupation than by name, even after extra practice at naming the faces. We conclude that the explanation best able to account for the data is that names are stored separately from other semantic information and can only be accessed after other identity-specific information has been retrieved. However, we also argue that the demands we make of these explanations make it likely that no single theory will account for all existing data.
- A linked aggregate code for processing faces
Author(s): Michael J. Lyons, Kazunori Morikawa and Shigeru Akamatsu
pp. 63–81
A model of face representation, inspired by the known biology of the visual system, is compared to experimental data on the perception of facial similarity. The model represents a face using aggregate primary visual cortex (V1) cell responses topographically linked to a grid covering the face, allowing comparison of shape and texture at corresponding points in two facial images. When a set of relatively similar faces was used as stimuli, this “linked aggregate code” (LAC) predicted human performance in similarity judgment experiments. When faces of different categories were used, natural facial dimensions such as sex and race emerged from the LAC model without training. The dimensional structure of the LAC similarity measure for the mixed-category task displayed some psychologically plausible features, but also highlighted shortcomings of the proposed representation. The results suggest that the LAC-based similarity measure may be a useful starting point for further modeling studies of face representation in higher visual areas.
- Exploring the relations between categorization and decision making with regard to realistic face stimuli
Author(s): James T. Townsend, Kam M. Silva, Jesse Spencer-Smith and Michael J. Wenger
pp. 83–105
Categorization and decision making are combined in a task with photorealistic faces. Two different types of face stimuli were assigned probabilistically to one of two fictitious groups; based on the category, faces were further probabilistically assigned to be hostile or friendly. In Part I, participants are asked to categorize a face into one of two categories and to make a decision concerning interaction. A Markov model of categorization followed by decision making provides reasonable fits to the Part I data. A Markov model predicting decision making followed by categorization is rejected. In Part II, a no-parameter model predicts decisions using categorization and decision responses collected in separate trials, suggesting that the Part I results are not an artifact of presenting the categorization and decision questions within a single trial. Decisions concerning interaction (defensive/friendly) appear to be based on information from the category decision, and not from the face stimuli alone.
- Female facial beauty: The fertility hypothesis
Author(s): Victor S. Johnston
pp. 107–122
Prior research on facial beauty has suggested that the average female face in a population is perceived to be the most attractive face. This finding, however, is based on an image processing methodology that appears to be flawed. An alternative method for generating attractive faces is described and the findings using this procedure are compared with the reports of other experimenters. The results suggest that (1) beautiful female faces are not average, but vary from the average in a systematic manner, and (2) female beauty can best be explained by a sexual selection viewpoint, whereby selection favors cues that are reliable indicators of fertility.
- Recognizing expression from familiar and unfamiliar faces
Author(s): Jean-Yves Baudouin, Stéphane Sansone and Guy Tiberghien
pp. 123–146
The aim of this study was to clarify the relationship between accessing the identity of a face and making decisions about its expression. Three experiments are reported in which undergraduate subjects made expression decisions about familiar and unfamiliar faces. The decision was slowed either by concealing the mouth region with a black rectangle (Experiment 1) or by using a short presentation time (Experiments 2 and 3). Results of Experiment 1 showed that subjects recognized the displayed expressions of celebrities better than those of unknown persons when information from the mouth was not available. Results of Experiment 2 showed that they recognized the expressions displayed by celebrities more easily when the presentation time was short. Experiment 3, using familiarized faces, replicated the results of Experiments 1 and 2 and ruled out a possible explanation of these results in terms of identity-specific representations that are themselves expressive. Implications for face recognition models are discussed.
- The semantics of human facial expressions
Author(s): Anna Wierzbicka
pp. 147–183
This paper points out that a major shift of paradigm is currently going on in the study of the human face and it seeks to articulate and to develop the fundamental assumptions underlying this shift. The main theses of the paper are: 1) Facial expressions can convey meanings comparable to the meanings of verbal utterances. 2) Semantic analysis (whether of verbal utterances or of facial expressions) must distinguish between the context-independent invariant and its contextual interpretations. 3) Certain components of facial behavior (“facial gestures”) do have constant context-independent meanings. 4) The meanings of facial components and configurations of components have an inherent first-person and present tense orientation. 5) The basis for the interpretation of facial gestures is, above all, experiential. 6) The meanings of some facial expressions are universally intelligible and can be interpreted without reference to any local conventions. 7) To be fruitful, the semantic analysis of facial expressions needs a methodology. This can be derived from the methodological experience of linguistic semantics. The author illustrates and supports these theses by analyzing a range of universally interpretable facial expressions such as the following: “brow furrowed” (i.e. eyebrows drawn together); eyebrows raised; eyes wide open; corners of the mouth raised; corners of the mouth lowered; mouth open (while not speaking); lips pressed together; upper lip and nose “raised” (and, consequently, nose wrinkled).
- Automatic facial expression interpretation: Where human-computer interaction, artificial intelligence and cognitive science intersect
Author(s): Christine L. Lisetti and Diane J. Schiano
pp. 185–235
We discuss here one of our projects, aimed at developing an automatic facial expression interpreter, mainly in terms of signaled emotions. We present some of the relevant findings on facial expressions from cognitive science and psychology that can be understood by, and be useful to, researchers in Human-Computer Interaction and Artificial Intelligence. We then give an overview of HCI applications involving automated facial expression recognition, survey some of the latest progress in this area achieved by various approaches in computer vision, and describe the design of our facial expression recognizer. We also give some background on our motivation for understanding facial expressions and propose an architecture for a multimodal intelligent interface capable of recognizing and adapting to computer users’ affective states. Finally, we discuss current interdisciplinary issues and research questions that will need to be addressed for further progress in the promising area of computational facial expression recognition.
- Living with difficulties of facial processing: Some ontological consequences of clinical facial problems
Author(s): Jonathan Cole
pp. 237–260
The present paper considers the processing of facial information from a personal and narrative perspective, attempting to address the effects that deficits in such processing have on people’s perceptions of themselves and of others. The approach adopted is narrative and mainly subjective, entering the experience of several subjects with facial problems to tease out the interactions between those problems and their relations with others. The subjects are people with blindness (either congenital or acquired), autism, Moebius syndrome (the congenital absence of facial expression), Bell’s palsy, and facial disfigurement. From these biographical accounts, the effect of facial problems on people’s perception of self and on their social existence is explored. Facial information processing is being examined to brilliant effect scientifically; the effects of problems in the system on individuals’ self-esteem may be informed, in part, by a clinical, descriptive approach.