Feedback facial expressions and emotions

Publication: Contribution to journal › Journal article › Research › peer-reviewed

Standard

Feedback facial expressions and emotions. / Navarretta, Costanza.

In: Journal on Multimodal User Interfaces, Vol. 8, 2014, p. 135.

Publication: Contribution to journal › Journal article › Research › peer-reviewed

Harvard

Navarretta, C 2014, 'Feedback facial expressions and emotions', Journal on Multimodal User Interfaces, vol. 8, p. 135. https://doi.org/10.1007/s12193-013-0145-9

APA

Navarretta, C. (2014). Feedback facial expressions and emotions. Journal on Multimodal User Interfaces, 8, 135. https://doi.org/10.1007/s12193-013-0145-9

Vancouver

Navarretta C. Feedback facial expressions and emotions. Journal on Multimodal User Interfaces. 2014;8:135. https://doi.org/10.1007/s12193-013-0145-9

Author

Navarretta, Costanza. / Feedback facial expressions and emotions. In: Journal on Multimodal User Interfaces. 2014; Vol. 8. p. 135.

Bibtex

@article{be0eef24892646bd857e2446e6938ef1,
title = "Feedback facial expressions and emotions",
abstract = "The paper investigates the relation between emotions and feedback facial expressions in video- and audio-recorded Danish dyadic first encounters. In particular, we train a classifier on the manual annotations of the corpus in order to investigate to what extent the encoding of emotions contributes to the prediction of the feedback functions of facial expressions. This work builds upon and extends previous research on (a) the annotation and analysis of emotions in the corpus, in which it was suggested that emotions are related to specific communicative functions, and (b) the prediction of feedback head movements using multimodal information. The results of the experiments show that information on multimodal behaviours which co-occur with the facial expressions improves the classifier performance. The improvement of the F-measure with respect to the unimodal baseline is 0.269, and this result parallels that obtained for head movements in the same corpus. The experiments also show that the annotations of emotions contribute further to the prediction of feedback facial expressions, confirming this relation. The best results are obtained by training the classifier on the shape of facial expressions and co-occurring head movements, emotion labels, and the gesturer{\textquoteright}s and the interlocutor{\textquoteright}s speech, and they can be used in applied systems.",
author = "Costanza Navarretta",
year = "2014",
doi = "10.1007/s12193-013-0145-9",
language = "English",
volume = "8",
pages = "135",
journal = "Journal on Multimodal User Interfaces",
issn = "1783-7677",
publisher = "Springer",
}
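
The abstract above describes the method at a high level: a classifier is trained on categorical annotation features (the shape of a facial expression, co-occurring head movements, emotion labels, and the speech of both participants), and its F-measure is compared against a unimodal baseline. The following Python sketch illustrates that general setup with scikit-learn. It is a minimal, hypothetical illustration: the feature names, toy data, and Naive Bayes classifier are invented here and do not reproduce the paper's actual corpus or pipeline.

# Hypothetical sketch of the setup described in the abstract: predict
# whether a facial expression has a feedback function from categorical
# annotation features, and compare a unimodal baseline (expression shape
# only) against a multimodal feature set. All data below are invented.
from sklearn.feature_extraction import DictVectorizer
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB

# One record per facial expression: its shape plus the co-occurring
# head movement, emotion label, and the gesturer's speech token.
data = [
    ({"shape": "smile", "head": "nod", "emotion": "joy", "speech": "yes"}, "feedback"),
    ({"shape": "smile", "head": "nod", "emotion": "interest", "speech": "mm"}, "feedback"),
    ({"shape": "raise", "head": "tilt", "emotion": "surprise", "speech": "oh"}, "feedback"),
    ({"shape": "frown", "head": "nod", "emotion": "doubt", "speech": "ok"}, "feedback"),
    ({"shape": "smile", "head": "none", "emotion": "joy", "speech": "ha"}, "other"),
    ({"shape": "frown", "head": "none", "emotion": "doubt", "speech": "well"}, "other"),
    ({"shape": "raise", "head": "shake", "emotion": "none", "speech": "no"}, "other"),
    ({"shape": "smile", "head": "none", "emotion": "none", "speech": "so"}, "other"),
]
features = [f for f, _ in data]
labels = [label for _, label in data]

def f1_for(keys):
    # Train on the selected annotation layers only and score the
    # "feedback" class on a held-out split.
    subset = [{k: feat[k] for k in keys} for feat in features]
    X = DictVectorizer().fit_transform(subset)  # one-hot encode categories
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, labels, test_size=0.25, random_state=0)
    model = MultinomialNB().fit(X_tr, y_tr)
    return f1_score(y_te, model.predict(X_te), pos_label="feedback")

print("unimodal baseline (shape only):", f1_for(["shape"]))
print("multimodal features:", f1_for(["shape", "head", "emotion", "speech"]))

Calling f1_for with different feature subsets mirrors the abstract's comparison of a unimodal baseline against multimodal features; a real experiment would of course use the full annotated corpus and cross-validation rather than a toy split.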

RIS

TY - JOUR

T1 - Feedback facial expressions and emotions

AU - Navarretta, Costanza

PY - 2014

Y1 - 2014

N2 - The paper investigates the relation between emotions and feedback facial expressions in video- and audio-recorded Danish dyadic first encounters. In particular, we train a classifier on the manual annotations of the corpus in order to investigate to what extent the encoding of emotions contributes to the prediction of the feedback functions of facial expressions. This work builds upon and extends previous research on (a) the annotation and analysis of emotions in the corpus, in which it was suggested that emotions are related to specific communicative functions, and (b) the prediction of feedback head movements using multimodal information. The results of the experiments show that information on multimodal behaviours which co-occur with the facial expressions improves the classifier performance. The improvement of the F-measure with respect to the unimodal baseline is 0.269, and this result parallels that obtained for head movements in the same corpus. The experiments also show that the annotations of emotions contribute further to the prediction of feedback facial expressions, confirming this relation. The best results are obtained by training the classifier on the shape of facial expressions and co-occurring head movements, emotion labels, and the gesturer’s and the interlocutor’s speech, and they can be used in applied systems.

AB - The paper investigates the relation between emotions and feedback facial expressions in video- and audio-recorded Danish dyadic first encounters. In particular, we train a classifier on the manual annotations of the corpus in order to investigate to what extent the encoding of emotions contributes to the prediction of the feedback functions of facial expressions. This work builds upon and extends previous research on (a) the annotation and analysis of emotions in the corpus, in which it was suggested that emotions are related to specific communicative functions, and (b) the prediction of feedback head movements using multimodal information. The results of the experiments show that information on multimodal behaviours which co-occur with the facial expressions improves the classifier performance. The improvement of the F-measure with respect to the unimodal baseline is 0.269, and this result parallels that obtained for head movements in the same corpus. The experiments also show that the annotations of emotions contribute further to the prediction of feedback facial expressions, confirming this relation. The best results are obtained by training the classifier on the shape of facial expressions and co-occurring head movements, emotion labels, and the gesturer’s and the interlocutor’s speech, and they can be used in applied systems.

U2 - 10.1007/s12193-013-0145-9

DO - 10.1007/s12193-013-0145-9

M3 - Journal article

VL - 8

SP - 135

JO - Journal on Multimodal User Interfaces

JF - Journal on Multimodal User Interfaces

SN - 1783-7677

ER -

ID: 111097639