Israelsson, Alexandra
Publications (4 of 4)
Israelsson, A. (2025). Understanding the Recognition of Dynamic Multimodal Expressions of Single and Blended Emotions. (Doctoral dissertation). Stockholm: Department of Psychology, Stockholm University
Understanding the Recognition of Dynamic Multimodal Expressions of Single and Blended Emotions
2025 (English) Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Nonverbal emotion expressions are multimodal patterns of behavior that unfold dynamically. However, several key questions remain about the nature of dynamic multimodal emotion expressions. This thesis aimed to better understand how such nonverbal expressions—both single and blended emotions—are recognized.

Study I investigated individual differences in the ability to recognize single emotions in the general population using the ERAM test. Study 1 focused on emotion recognition in relation to emotional competencies, personality, and socio-emotional dysfunction, and on evaluating the psychometric properties of the ERAM test. Study 2 examined emotion recognition in relation to metacognitive judgments using an online version of the ERAM test. Overall, the results suggested that emotion recognition is related to empathy, emotion understanding, openness, and alexithymia, as well as to metacognitive skills. The results further revealed that accuracy was highest in the multimodal condition and positively correlated across modalities. Lastly, no differences were found between the in-lab and online versions of the test, indicating that the ERAM can be reliably administered online.

Study II investigated the ability to recognize blended emotions (pairwise combinations of 5 emotions). Recordings of actors displaying two emotions in equal proportions (50:50 expressions) were used. The test combined a rating scale and a forced-choice task, and participants were instructed to choose 2 out of 5 available scales. Study 1 examined multimodal emotion recognition, whereas Study 2 examined emotion recognition in unimodal conditions (video-only and audio-only). Recognition accuracy was highest when expressions were presented multimodally, and higher in the video-only than in the audio-only condition, consistent with the findings on single emotions in Study I. Both studies further showed that all combinations were recognized above chance level, regardless of presentation modality.

Study III investigated the recognition of blended emotions with varying proportions (70:30, 50:50, and 30:70 expressions), using recordings from the 6 best-recognized actors in Study II. Study 1 examined emotion recognition using a restricted response format (participants were prompted to use 2 out of 5 scales, as in Study II), while Study 2 used an unrestricted version of the same test (participants were free to choose any number of the 5 scales). The results showed that the majority of blends across all proportions were recognized above chance level in both formats (restricted and unrestricted). The results further revealed that the more prominent emotion received higher ratings than the less prominent one in most combinations. These findings replicated and extended those of Study II and suggest that both the quality and quantity of emotions can be recognized.

Together, the results showed that, in the general population, individual differences in the recognition of single emotions are related to broader affective, personality, and metacognitive processes. They also revealed that recognition accuracy for both single and blended emotions is highest when dynamic expressions are presented multimodally. This thesis contributes to a growing body of work that underscores the significance of studying emotions conveyed dynamically through the face, voice, and body, and emphasizes the need to increase the number and complexity of emotions under study, as emotion recognition ability appears to be more nuanced and flexible than previously thought.

Place, publisher, year, edition, pages
Stockholm: Department of Psychology, Stockholm University, 2025. p. 71
Keywords
blended emotions, emotion recognition ability, facial expression, individual difference, multimodal expression, non-linguistic vocalizations, single emotions
National Category
Psychology
Research subject
Psychology
Identifiers
urn:nbn:se:su:diva-244601 (URN), 978-91-8107-314-0 (ISBN), 978-91-8107-315-7 (ISBN)
Public defence
2025-09-12, Lärosal 19, Albano, hus 2, Albanovägen 18, Stockholm, and online via Zoom (public link available at the department website), 13:00 (English)
Available from: 2025-08-20 Created: 2025-06-24 Last updated: 2025-09-04 Bibliographically approved
Israelsson, A., Seiger, A. & Laukka, P. (2023). Blended Emotions can be Accurately Recognized from Dynamic Facial and Vocal Expressions. Journal of Nonverbal Behavior, 47(3), 267-284
Blended Emotions can be Accurately Recognized from Dynamic Facial and Vocal Expressions
2023 (English) In: Journal of Nonverbal Behavior, ISSN 0191-5886, Vol. 47, no 3, p. 267-284. Article in journal (Refereed) Published
Abstract [en]

People frequently report feeling more than one emotion at the same time (i.e., blended emotions), but studies on nonverbal communication of such complex states remain scarce. Actors (N = 18) expressed blended emotions consisting of all pairwise combinations of anger, disgust, fear, happiness, and sadness – using facial gestures, body movement, and vocal sounds – with the intention that both emotions should be equally prominent in the resulting expression. Accuracy of blended emotion recognition was assessed in two preregistered studies using a combined forced-choice and rating scale task. For each recording, participants were instructed to choose two scales (out of 5 available scales: anger, disgust, fear, happiness, and sadness) that best described their perception of the emotional content and judge how clearly each of the two chosen emotions was perceived. Study 1 (N = 38) showed that all emotion combinations were accurately recognized from multimodal (facial/bodily/vocal) expressions, with significantly higher ratings on scales corresponding to intended vs. non-intended emotions. Study 2 (N = 51) showed that all emotion combinations were also accurately perceived when the recordings were presented in unimodal visual (facial/bodily) and auditory (vocal) conditions, although accuracy was lower in the auditory condition. To summarize, results suggest that blended emotions, including combinations of both same-valence and other-valence emotions, can be accurately recognized from dynamic facial/bodily and vocal expressions. The validated recordings of blended emotion expressions are freely available for research purposes.

Place, publisher, year, edition, pages
Springer Nature, 2023
Keywords
blended emotions, compound emotions, facial expression, mixed emotions, multimodal expression, non-linguistic vocalizations
National Category
Psychology (excluding Applied Psychology)
Research subject
Psychology
Identifiers
urn:nbn:se:su:diva-220275 (URN), 10.1007/s10919-023-00426-9 (DOI), 000988996500001 (), 2-s2.0-85159703037 (Scopus ID)
Note

This research was supported by the Marianne and Marcus Wallenberg Foundation (MMW 2018.0059). Open access funding provided by Stockholm University.

Available from: 2023-08-21 Created: 2023-08-21 Last updated: 2025-06-24 Bibliographically approved
Laukka, P., Bänziger, T., Israelsson, A., Sanchez Cortes, D., Tornberg, C., Scherer, K. R. & Fischer, H. (2021). Investigating individual differences in emotion recognition ability using the ERAM test. Acta Psychologica, 220, Article ID 103422.
Investigating individual differences in emotion recognition ability using the ERAM test
2021 (English) In: Acta Psychologica, ISSN 0001-6918, E-ISSN 1873-6297, Vol. 220, article id 103422. Article in journal (Refereed) Published
Abstract [en]

Individuals vary in emotion recognition ability (ERA), but the causes and correlates of this variability are not well understood. Previous studies have largely focused on unimodal facial or vocal expressions and a small number of emotion categories, which may not reflect how emotions are expressed in everyday interactions. We investigated individual differences in ERA using a brief test containing dynamic multimodal (facial and vocal) expressions of 5 positive and 7 negative emotions (the ERAM test). Study 1 (N = 593) showed that ERA was positively correlated with emotional understanding, empathy, and openness, and negatively correlated with alexithymia. Women also had higher ERA than men. Study 2 was conducted online and replicated the recognition rates from Study 1 (which was conducted in the lab) in a different sample (N = 106). Study 2 also showed that participants who had higher ERA were more accurate in their meta-cognitive judgments about their own accuracy. Recognition rates for visual, auditory, and audio-visual expressions were substantially correlated in both studies. Results provide further clues about the underlying structure of ERA and its links to broader affective processes. The ERAM test can be used for both lab and online research, and is freely available for academic research.

Place, publisher, year, edition, pages
Elsevier, 2021
Keywords
emotion recognition test, emotion understanding, empathy, meta-cognitive judgments, multimodal expressions, personality, sex differences
National Category
Psychology
Research subject
Psychology
Identifiers
urn:nbn:se:su:diva-197170 (URN), 10.1016/j.actpsy.2021.103422 (DOI), 000706372300017 ()
Note

This study was funded by the Swedish Research Council through a grant to PL (grant no. 2012-801). Open access publication fees were covered by Stockholm University.

Available from: 2021-09-28 Created: 2021-09-28 Last updated: 2025-06-24 Bibliographically approved
Israelsson, A., Sandberg, G., Zeitler, S. & Laukka, P. Which emotions and how much of each? Judging both quality and quantity of blended emotions conveyed through dynamic multimodal expressions.
Which emotions and how much of each? Judging both quality and quantity of blended emotions conveyed through dynamic multimodal expressions
(English) Manuscript (preprint) (Other academic)
Abstract [en]

Few studies explore nonverbal communication of blended emotions, despite common reports of experiencing multiple emotions simultaneously. In two pre-registered experiments, we investigated whether participants can accurately judge how prominently different emotions are expressed in dynamic multimodal portrayals of blended emotion pairs. Actors portrayed pairwise combinations of anger, disgust, fear, happiness, and sadness in varying proportions, using facial gestures, body movement, and vocal sounds. In Study 1, participants were instructed to choose two scales (out of 5 available scales: anger, disgust, fear, happiness, and sadness) that best described their impression of the emotional content of the portrayals, and to rate how clearly both of the two chosen emotions were perceived. In Study 2, participants were instead free to choose any number of scales (out of the 5 available ones) in the emotion rating task. Results were consistent across both studies and showed that all blended emotions were accurately perceived, with significantly higher ratings on scales corresponding to intended vs. non-intended emotions. Participants could also accurately judge which emotion was dominant and which was less dominant. To summarize, the results revealed nuanced perception of both emotional quality and quantity, which may help us navigate a complex social environment.

Keywords
blended emotions, compound emotions, facial expression, mixed emotions, multimodal expression, non-linguistic vocalizations
National Category
Psychology
Research subject
Psychology
Identifiers
urn:nbn:se:su:diva-244597 (URN)
Funder
Marianne and Marcus Wallenberg Foundation, 2018.0059
Note

The study was conceived by PL and AI and all authors contributed to the design. PL and AI contributed materials, and GS and SZ contributed to data collection. All authors contributed to data analysis. PL and AI wrote the paper with input from GS and SZ.

We thank Anni Fredriksson for assistance with the data collection for Study 1, and Tim Lachmann for programming the judgment tasks.

Available from: 2025-06-23 Created: 2025-06-23 Last updated: 2025-06-24