<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" version="2.0">
<channel>
<title>SAM</title>
<link>https://sam.ensam.eu:443</link>
<description>The DSpace digital repository system captures, stores, indexes, preserves, and distributes digital research material.</description>
<pubDate>Thu, 12 Mar 2026 13:41:04 GMT</pubDate>
<dc:date>2026-03-12T13:41:04Z</dc:date>
<item>
<title>The effects of speech–gesture cooperation in animated agents’ behavior in multimedia presentations</title>
<link>http://hdl.handle.net/10985/6758</link>
<description>The effects of speech–gesture cooperation in animated agents’ behavior in multimedia presentations
BUISINE, Stéphanie; MARTIN, Jean-Claude
Until now, research on the arrangement of verbal and non-verbal information in multimedia presentations has not considered the multimodal behavior of animated agents. In this paper, we present an experiment exploring the effects of different types of speech–gesture cooperation in agents’ behavior: redundancy (gestures duplicate pieces of information conveyed by speech), complementarity (information is distributed across speech and gestures), and a control condition in which gesture conveys no semantic information. Using a Latin-square design, these strategies were assigned to agents of different appearances to present different objects. Fifty-four male and fifty-four female users attended three short presentations performed by the agents, recalled the content of the presentations, and evaluated both the presentations and the agents. Although speech–gesture cooperation was not consciously perceived, it proved to influence users’ recall performance and subjective evaluations: redundancy increased verbal information recall, ratings of the quality of explanation, and the perceived expressiveness of the agents. Redundancy also resulted in higher likeability scores for the agents and a more positive perception of their personality. Users’ gender had no influence on this set of results.
</description>
<pubDate>Mon, 01 Jan 2007 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10985/6758</guid>
<dc:date>2007-01-01T00:00:00Z</dc:date>
<dc:creator>BUISINE, Stéphanie</dc:creator>
<dc:creator>MARTIN, Jean-Claude</dc:creator>
<dc:description>Until now, research on the arrangement of verbal and non-verbal information in multimedia presentations has not considered the multimodal behavior of animated agents. In this paper, we present an experiment exploring the effects of different types of speech–gesture cooperation in agents’ behavior: redundancy (gestures duplicate pieces of information conveyed by speech), complementarity (information is distributed across speech and gestures), and a control condition in which gesture conveys no semantic information. Using a Latin-square design, these strategies were assigned to agents of different appearances to present different objects. Fifty-four male and fifty-four female users attended three short presentations performed by the agents, recalled the content of the presentations, and evaluated both the presentations and the agents. Although speech–gesture cooperation was not consciously perceived, it proved to influence users’ recall performance and subjective evaluations: redundancy increased verbal information recall, ratings of the quality of explanation, and the perceived expressiveness of the agents. Redundancy also resulted in higher likeability scores for the agents and a more positive perception of their personality. Users’ gender had no influence on this set of results.</dc:description>
</item>
<item>
<title>Multimodal complex emotions: gesture expressivity and blended facial expressions</title>
<link>http://hdl.handle.net/10985/6779</link>
<description>Multimodal complex emotions: gesture expressivity and blended facial expressions
MARTIN, Jean-Claude; NIEWIADOMSKI, Radoslaw; DEVILLERS, Laurence; BUISINE, Stéphanie; PELACHAUD, Catherine
One of the challenges in designing virtual humans is the definition of appropriate models of the relation between realistic emotions and the coordination of behaviors across several modalities. In this paper, we present the annotation, representation and modeling of multimodal visual behaviors occurring during complex emotions. We illustrate our work using a corpus of TV interviews. This corpus has been annotated at several levels of information: communicative acts, emotion labels, and multimodal signs. We have defined a copy-synthesis approach to drive an Embodied Conversational Agent from these different levels of information. The second part of our paper focuses on a model of complex (superposition and masking of) emotions in the agent’s facial expressions. We explain how the complementary aspects of our work on the corpus and the computational model are used to specify complex emotional behaviors.
</description>
<pubDate>Sun, 01 Jan 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10985/6779</guid>
<dc:date>2006-01-01T00:00:00Z</dc:date>
<dc:creator>MARTIN, Jean-Claude</dc:creator>
<dc:creator>NIEWIADOMSKI, Radoslaw</dc:creator>
<dc:creator>DEVILLERS, Laurence</dc:creator>
<dc:creator>BUISINE, Stéphanie</dc:creator>
<dc:creator>PELACHAUD, Catherine</dc:creator>
<dc:description>One of the challenges in designing virtual humans is the definition of appropriate models of the relation between realistic emotions and the coordination of behaviors across several modalities. In this paper, we present the annotation, representation and modeling of multimodal visual behaviors occurring during complex emotions. We illustrate our work using a corpus of TV interviews. This corpus has been annotated at several levels of information: communicative acts, emotion labels, and multimodal signs. We have defined a copy-synthesis approach to drive an Embodied Conversational Agent from these different levels of information. The second part of our paper focuses on a model of complex (superposition and masking of) emotions in the agent’s facial expressions. We explain how the complementary aspects of our work on the corpus and the computational model are used to specify complex emotional behaviors.</dc:description>
</item>
<item>
<title>Impact of Expressive Wrinkles on Perception of a Virtual Character’s Facial Expressions of Emotions</title>
<link>http://hdl.handle.net/10985/6733</link>
<description>Impact of Expressive Wrinkles on Perception of a Virtual Character’s Facial Expressions of Emotions
COURGEON, Matthieu; BUISINE, Stéphanie; MARTIN, Jean-Claude
Facial animation has reached a high level of photorealism: skin is rendered with grain and translucency, and wrinkles are accurate and dynamic. These recent visual improvements have not been fully tested for their contribution to the perceived expressiveness of virtual characters. This paper presents a perceptual study assessing the impact of different rendering modes of expressive wrinkles on users’ perception of facial expressions of basic and complex emotions. Our results suggest that realistic wrinkles increase the agent’s expressivity and users’ preference, but not the recognition of emotion categories. This study was conducted using our real-time facial animation platform, which is designed for perceptual evaluations of affective interaction.
</description>
<pubDate>Thu, 01 Jan 2009 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10985/6733</guid>
<dc:date>2009-01-01T00:00:00Z</dc:date>
<dc:creator>COURGEON, Matthieu</dc:creator>
<dc:creator>BUISINE, Stéphanie</dc:creator>
<dc:creator>MARTIN, Jean-Claude</dc:creator>
<dc:description>Facial animation has reached a high level of photorealism: skin is rendered with grain and translucency, and wrinkles are accurate and dynamic. These recent visual improvements have not been fully tested for their contribution to the perceived expressiveness of virtual characters. This paper presents a perceptual study assessing the impact of different rendering modes of expressive wrinkles on users’ perception of facial expressions of basic and complex emotions. Our results suggest that realistic wrinkles increase the agent’s expressivity and users’ preference, but not the recognition of emotion categories. This study was conducted using our real-time facial animation platform, which is designed for perceptual evaluations of affective interaction.</dc:description>
</item>
<item>
<title>Fusion of children’s speech and 2D gestures when conversing with 3D characters</title>
<link>http://hdl.handle.net/10985/6778</link>
<description>Fusion of children’s speech and 2D gestures when conversing with 3D characters
MARTIN, Jean-Claude; BUISINE, Stéphanie; PITEL, Guillaume; BERNSEN, Niels Ole
Most existing multi-modal prototypes enabling users to combine 2D gestures and speech input are task-oriented. They help adult users solve particular information tasks, often in standard 2D Graphical User Interfaces. This paper describes the NICE Andersen system, which aims at demonstrating multi-modal conversation between humans and embodied historical and literary characters. The target users are children and teenagers aged 10–18. We discuss issues in 2D gesture recognition and interpretation, as well as the temporal and semantic dimensions of input fusion, ranging from system and component design through technical evaluation and user evaluation with two different user groups. We observed that recognition and understanding of spoken deictics were quite robust and that spoken deictics were always used in multimodal input. We identified the causes of the most frequent failures of input fusion and suggest possible improvements for removing these errors. The concluding discussion summarises the knowledge provided by the NICE Andersen system on how children gesture and combine their 2D gestures with speech when conversing with a 3D character, and looks at some of the challenges facing theoretical solutions aimed at supporting unconstrained speech/2D gesture fusion.
</description>
<pubDate>Sun, 01 Jan 2006 00:00:00 GMT</pubDate>
<guid isPermaLink="false">http://hdl.handle.net/10985/6778</guid>
<dc:date>2006-01-01T00:00:00Z</dc:date>
<dc:creator>MARTIN, Jean-Claude</dc:creator>
<dc:creator>BUISINE, Stéphanie</dc:creator>
<dc:creator>PITEL, Guillaume</dc:creator>
<dc:creator>BERNSEN, Niels Ole</dc:creator>
<dc:description>Most existing multi-modal prototypes enabling users to combine 2D gestures and speech input are task-oriented. They help adult users solve particular information tasks, often in standard 2D Graphical User Interfaces. This paper describes the NICE Andersen system, which aims at demonstrating multi-modal conversation between humans and embodied historical and literary characters. The target users are children and teenagers aged 10–18. We discuss issues in 2D gesture recognition and interpretation, as well as the temporal and semantic dimensions of input fusion, ranging from system and component design through technical evaluation and user evaluation with two different user groups. We observed that recognition and understanding of spoken deictics were quite robust and that spoken deictics were always used in multimodal input. We identified the causes of the most frequent failures of input fusion and suggest possible improvements for removing these errors. The concluding discussion summarises the knowledge provided by the NICE Andersen system on how children gesture and combine their 2D gestures with speech when conversing with a 3D character, and looks at some of the challenges facing theoretical solutions aimed at supporting unconstrained speech/2D gesture fusion.</dc:description>
</item>
</channel>
</rss>
