Affective multimodal engagement
Start date: 01 Aug 2007
End date: 31 Jul 2009
The long-term aim of this project is to contribute to the development of technologies that improve their users' sense of engagement (positive usability) by taking their affective states into account. The proposal describes a comprehensive framework for the study of affective postural displays as an indicator of human affective states.

The proposed project has three goals. The first goal is to define a computationally tractable model of emotion in which an emotion is described in terms of the intensities of its autonomic response, its communicative intent, and the influence of cultural factors. These factors reflect the physiological nature of emotions and the known influence of social and task context. To address the scarcity of data in this area, we aim to collect posture data in a quantitative and principled manner, through three case studies that systematically vary the three factors of our putative model, either individually or in combination.

A major difficulty in constructing our model is accurately determining the intended signal of affective displays. The second goal of this project is therefore to propose a robust alternative to the methods currently used in the literature. We propose multimodal cross-validation: complementing the motion-capture data with recordings from other modalities, such as biofeedback and eye tracking, and studying the perception of synthetic avatars in which the congruence of different modalities of emotion expression (e.g., facial expressions and body postures) is manipulated.

Completing these two steps will open the way for the final goal of the project: the design and implementation of a computational model for the contextual recognition of affect from body posture. This is an essential step toward designing systems that can recognize, and therefore regulate, the affective states of their users.
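To make the three-factor description concrete, the sketch below represents an emotion as a point in a three-dimensional intensity space and labels an observed state with its nearest prototype. This is a minimal illustration only: all names, fields, and prototype values are hypothetical and are not specified by the proposal.

```python
from dataclasses import dataclass

@dataclass
class EmotionState:
    """Hypothetical three-factor description of an affective state.

    Each field is an intensity in [0, 1]; the field names mirror the
    three factors of the proposed model and are illustrative only.
    """
    autonomic: float       # intensity of the autonomic (physiological) response
    communicative: float   # strength of the communicative intent
    cultural: float        # degree of cultural modulation of the display

def distance(a: EmotionState, b: EmotionState) -> float:
    """Euclidean distance between two states in the three-factor space."""
    return ((a.autonomic - b.autonomic) ** 2
            + (a.communicative - b.communicative) ** 2
            + (a.cultural - b.cultural) ** 2) ** 0.5

# Toy "recognition": label an observed state with the nearest prototype.
# The prototype coordinates are invented for illustration, not data.
PROTOTYPES = {
    "fear":  EmotionState(0.9, 0.3, 0.2),
    "pride": EmotionState(0.4, 0.8, 0.7),
}

def recognise(observed: EmotionState) -> str:
    """Return the label of the nearest prototype to the observed state."""
    return min(PROTOTYPES, key=lambda k: distance(observed, PROTOTYPES[k]))
```

A real recognizer would of course estimate these intensities from posture data rather than take them as given; the sketch only fixes the shape of the representation the first goal calls for.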