Publications (10 of 73)
Gustafsson, P. U., Lachmann, T. & Laukka, P. (2025). Machine Learning Predicts Accuracy in Eyewitnesses’ Voices. Journal of Nonverbal Behavior, 49, 155-169
2025 (English). In: Journal of Nonverbal Behavior, ISSN 0191-5886, E-ISSN 1573-3653, Vol. 49, p. 155-169. Article in journal (Refereed). Published
Abstract [en]

An important task in criminal justice is to evaluate the accuracy of eyewitness testimony. In this study, we examined whether machine learning could be used to detect accuracy. Specifically, we examined whether support vector machines (SVMs) could accurately classify testimony statements as correct or incorrect based purely on the nonverbal aspects of the voice. We analyzed 3,337 statements (76.61% accurate) from 51 eyewitness testimonies on 94 acoustic variables. We also examined the relative importance of each of the acoustic variables, using Lasso regression. Results showed that the machine learning algorithms were able to predict accuracy between 20 and 40% above chance level (AUC = 0.50). The most important predictors included acoustic variables related to the amplitude (loudness) of speech and the duration of pauses, with higher amplitude predicting correct recall and longer pauses predicting incorrect recall. Taken together, we find that machine learning methods are capable of predicting whether eyewitness testimonies are correct or incorrect with above-chance accuracy, comparable to human performance but without detrimental human biases. This offers a proof-of-concept for machine learning in evaluations of eyewitness accuracy, and opens up new avenues of research that we hope might improve social justice.

Keywords
eyewitness accuracy, eyewitness testimony, machine learning, forensic voice comparison, non-verbal cues
National Category
Psychology
Research subject
Psychology
Identifiers
urn:nbn:se:su:diva-235733 (URN), 10.1007/s10919-024-00474-9 (DOI), 001308307600001 (), 2-s2.0-105002998145 (Scopus ID)
Note

Open access funding provided by Stockholm University.

Available from: 2024-11-20. Created: 2024-11-20. Last updated: 2025-05-23. Bibliographically approved.
Asperholm, M., Strandqvist, A., Falhammar, H., Frankling, K., Åkerlund, I., Laukka, P., . . . Herlitz, A. (2025). Spatial ability, episodic memory, and emotion recognition in women with congenital adrenal hyperplasia or complete androgen insensitivity syndrome. Hormones and Behavior, 173, Article ID 105747.
2025 (English). In: Hormones and Behavior, ISSN 0018-506X, E-ISSN 1095-6867, Vol. 173, article id 105747. Article in journal (Refereed). Published
Abstract [en]

Conditions like congenital adrenal hyperplasia (CAH) and complete androgen insensitivity syndrome (CAIS) may provide information that helps explain sex differences in cognition. Using online tests and questionnaires, we examined how prenatal androgen exposure and/or sex chromosomes influence spatial ability, episodic memory, and emotion recognition in women with classic CAH (C-CAH; n = 29), non-classic CAH (NC-CAH; n = 13), CAIS (n = 11), and female (n = 147) and male (n = 142) controls. Results showed that (1) female and male controls differed on most cognitive tasks, whereas (2) women with C-CAH or CAIS did not consistently differ from either female or male controls. Investigating the relative advantage on either the female-favoring (episodic memory, emotion recognition) or male-favoring (spatial ability) tasks indicated that women with (3) C-CAH had a cognitive profile that differed from both female and male controls, (4) CAIS were not different from male controls, whereas (5) NC-CAH had a relative advantage on female-favoring tasks. These findings suggest that excessive prenatal androgen exposure (C-CAH) may shift cognitive performance toward a male-typical pattern, though not to the male level. Additionally, aspects associated with having a 46,XY karyotype but lacking prenatal androgen receptivity (CAIS) may also influence cognition in a male-typical direction, providing mixed support for the prenatal androgen hypothesis.

Keywords
CAH, CAIS, Cognition, Disorder of sex development, Emotion recognition, Episodic memory, Sex differences, Spatial
National Category
Psychology (excluding Applied Psychology); Neurosciences
Identifiers
urn:nbn:se:su:diva-243892 (URN), 10.1016/j.yhbeh.2025.105747 (DOI), 001501065000001 (), 2-s2.0-105006735772 (Scopus ID)
Available from: 2025-06-09. Created: 2025-06-09. Last updated: 2025-06-09. Bibliographically approved.
Jang, D., Lybeck, M., Sanchez Cortes, D., Elfenbein, H. A. & Laukka, P. (2024). Estrogen predicts multimodal emotion recognition accuracy across the menstrual cycle. PLOS ONE, 19(10), Article ID e0312404.
2024 (English). In: PLOS ONE, E-ISSN 1932-6203, Vol. 19, no 10, article id e0312404. Article in journal (Refereed). Published
Abstract [en]

Researchers have proposed that variation in sex hormones across the menstrual cycle modulates the ability to recognize emotions in others. Existing research suggests that accuracy is higher during the follicular phase and ovulation compared to the luteal phase, but findings are inconsistent. Using a repeated measures design with a sample of healthy naturally cycling women (N = 63), we investigated whether emotion recognition accuracy varied between the follicular and luteal phases, and whether accuracy related to levels of estrogen (estradiol) and progesterone. Two tasks assessed recognition of a range of positive and negative emotions via brief video recordings presented in visual, auditory, and multimodal blocks, and non-linguistic vocalizations (e.g., laughter, sobs, and sighs). Multilevel models did not show differences in emotion recognition between cycle phases. However, coefficients for estrogen were significant for both emotion recognition tasks. Higher within-person levels of estrogen predicted lower accuracy, whereas higher between-person estrogen levels predicted greater accuracy. This suggests that in general having higher estrogen levels increases accuracy, but that higher-than-usual estrogen at a given time decreases it. Within-person estrogen further interacted with cycle phase for both tasks and showed a quadratic relationship with accuracy for the multimodal task. In particular, women with higher levels of estrogen were more accurate in the follicular phase and middle of the menstrual cycle. We propose that the differing roles of within- and between-person hormone levels could explain some of the inconsistency in previous findings.

Keywords
emotions, estrogens, progesterone, menstrual cycle, facial expressions, sex hormones, face recognition, vocalization
National Category
Psychology
Research subject
Psychology
Identifiers
urn:nbn:se:su:diva-235732 (URN), 10.1371/journal.pone.0312404 (DOI), 001358579900021 (), 39436872 (PubMedID), 2-s2.0-85207185098 (Scopus ID)
Funder
Swedish Research Council, 2012-801
Available from: 2024-11-20. Created: 2024-11-20. Last updated: 2025-02-05. Bibliographically approved.
Bojner Horwitz, E., Thorarinn Johnson, D., Lyberg-Åhlander, V., Sahlén, B., Laukka, P. & Bygdéus, P. (2024). Making space for singing in the 21st century classroom: A focus group interview study with primary school music teachers in Sweden. British Journal of Music Education, 41(2), 209-223
2024 (English). In: British Journal of Music Education, ISSN 0265-0517, E-ISSN 1469-2104, Vol. 41, no 2, p. 209-223. Article in journal (Refereed). Published
Abstract [en]

The present study aimed to increase understanding of how singing activities may be initiated in primary school, and what support and assistance teachers require to conduct singing activities as an integrated part of the school day. Five music teachers participated in a focus group interview. The following main themes were identified: 1) pedagogical and methodological flexibility, 2) the role of routines and familiarity, 3) the embodied and multimodal dimensions of singing, 4) the importance of accompaniment and instruments, 5) the experience of insecurity and obstacles, and 6) the perceived synergies between singing and other learning activities. This knowledge may be important to integrate within music teacher education in order to secure singing’s place in schools.

Keywords
singing activities, primary school, music teacher, education, sånghälsa (singing health)
National Category
Psychology
Research subject
Psychology
Identifiers
urn:nbn:se:su:diva-225115 (URN), 10.1017/s0265051723000360 (DOI), 001128965000001 (), 2-s2.0-85180935407 (Scopus ID)
Available from: 2024-01-08. Created: 2024-01-08. Last updated: 2024-11-20. Bibliographically approved.
Laukka, P., Månsson, K. N., Sanchez Cortes, D., Manzouri, A., Frick, A., Fredborg, W. & Fischer, H. (2024). Neural correlates of individual differences in multimodal emotion recognition ability. Cortex, 175, 1-11
2024 (English). In: Cortex, ISSN 0010-9452, E-ISSN 1973-8102, Vol. 175, p. 1-11. Article in journal (Refereed). Published
Abstract [en]

Studies have reported substantial variability in emotion recognition ability (ERA) – an important social skill – but possible neural underpinnings for such individual differences are not well understood. This functional magnetic resonance imaging (fMRI) study investigated neural responses during emotion recognition in young adults (N = 49) who were selected for inclusion based on their performance (high or low) during previous testing of ERA. Participants were asked to judge brief video recordings in a forced-choice emotion recognition task, wherein stimuli were presented in visual, auditory and multimodal (audiovisual) blocks. Emotion recognition rates during brain scanning confirmed that individuals with high (vs. low) ERA achieved higher accuracy for all presentation blocks. fMRI analyses focused on key regions of interest (ROIs) involved in the processing of multimodal emotion expressions, based on previous meta-analyses. In neural responses to emotional stimuli contrasted with neutral stimuli, individuals with high (vs. low) ERA showed higher activation in the following ROIs during the multimodal condition: right middle superior temporal gyrus (mSTG), right posterior superior temporal sulcus (PSTS), and right inferior frontal cortex (IFC). Overall, results suggest that individual variability in ERA may be reflected across several stages of decisional processing, including extraction (mSTG), integration (PSTS) and evaluation (IFC) of emotional information.

Keywords
emotion recognition, facial expression, fMRI, individual differences, vocal expression
National Category
Psychology
Research subject
Psychology
Identifiers
urn:nbn:se:su:diva-228739 (URN), 10.1016/j.cortex.2024.03.009 (DOI), 001236628700001 (), 38691922 (PubMedID), 2-s2.0-85191491399 (Scopus ID)
Note

This work was supported by the Swedish Research Council (2012-00801 to P.L. and 2013-00854 to H.F.) and the Marianne and Marcus Wallenberg Foundation (MMW 2018.0059 to P.L.). Open access funding was provided by Stockholm University.

Available from: 2024-04-24. Created: 2024-04-24. Last updated: 2024-07-01. Bibliographically approved.
Israelsson, A., Seiger, A. & Laukka, P. (2023). Blended Emotions can be Accurately Recognized from Dynamic Facial and Vocal Expressions. Journal of Nonverbal Behavior, 47(3), 267-284
2023 (English). In: Journal of Nonverbal Behavior, ISSN 0191-5886, Vol. 47, no 3, p. 267-284. Article in journal (Refereed). Published
Abstract [en]

People frequently report feeling more than one emotion at the same time (i.e., blended emotions), but studies on nonverbal communication of such complex states remain scarce. Actors (N = 18) expressed blended emotions consisting of all pairwise combinations of anger, disgust, fear, happiness, and sadness – using facial gestures, body movement, and vocal sounds – with the intention that both emotions should be equally prominent in the resulting expression. Accuracy of blended emotion recognition was assessed in two preregistered studies using a combined forced-choice and rating scale task. For each recording, participants were instructed to choose two scales (out of 5 available scales: anger, disgust, fear, happiness, and sadness) that best described their perception of the emotional content and judge how clearly each of the two chosen emotions was perceived. Study 1 (N = 38) showed that all emotion combinations were accurately recognized from multimodal (facial/bodily/vocal) expressions, with significantly higher ratings on scales corresponding to intended vs. non-intended emotions. Study 2 (N = 51) showed that all emotion combinations were also accurately perceived when the recordings were presented in unimodal visual (facial/bodily) and auditory (vocal) conditions, although accuracy was lower in the auditory condition. To summarize, results suggest that blended emotions, including combinations of both same-valence and other-valence emotions, can be accurately recognized from dynamic facial/bodily and vocal expressions. The validated recordings of blended emotion expressions are freely available for research purposes.

Place, publisher, year, edition, pages
Springer Nature, 2023
Keywords
blended emotions, compound emotions, facial expression, mixed emotions, multimodal expression, non-linguistic vocalizations
National Category
Psychology (excluding Applied Psychology)
Research subject
Psychology
Identifiers
urn:nbn:se:su:diva-220275 (URN), 10.1007/s10919-023-00426-9 (DOI), 000988996500001 (), 2-s2.0-85159703037 (Scopus ID)
Note

This research was supported by the Marianne and Marcus Wallenberg Foundation (MMW 2018.0059). Open access funding provided by Stockholm University.

Available from: 2023-08-21. Created: 2023-08-21. Last updated: 2024-01-12. Bibliographically approved.
Högman, L., Gavalova, G., Laukka, P., Kristiansson, M., Källman, M. V., Fischer, H. & Johansson, A. G. M. (2023). Cognition, prior aggression, and psychopathic traits in relation to impaired multimodal emotion recognition in psychotic spectrum disorders. Frontiers in Psychiatry, 14, Article ID 1111896.
2023 (English). In: Frontiers in Psychiatry, E-ISSN 1664-0640, Vol. 14, article id 1111896. Article in journal (Refereed). Published
Abstract [en]

Background: Psychopathic traits have been associated with impaired emotion recognition in criminal, clinical and community samples. A recent study, however, suggested that cognitive impairment reduced the relationship between psychopathy and emotion recognition. We therefore investigated whether reasoning ability and psychomotor speed affected emotion recognition in individuals with psychotic spectrum disorders (PSD), with and without a history of aggression, as well as in healthy individuals, more than self-rated psychopathy on the Triarchic Psychopathy Measure (TriPM) did.

Methods: Eighty individuals with PSD (schizophrenia, schizoaffective disorder, delusional disorder, other psychoses, psychotic bipolar disorder) and documented history of aggression (PSD+Agg) were compared with 54 individuals with PSD without prior aggression (PSD-Agg) and with 86 healthy individuals on the Emotion Recognition Assessment in Multiple Modalities (ERAM test). Individuals were psychiatrically stable and in remission from possible substance use disorders. Scaled scores on matrix reasoning, averages of dominant hand psychomotor speed and self-rated TriPM scores were obtained.

Results: Low reasoning ability, low psychomotor speed, patient status, and prior aggression were all associated with total accuracy on the ERAM test. PSD groups performed worse than the healthy group. Across the whole sample, total and subscale TriPM scores correlated with ERAM performance, but no associations with TriPM scores remained within each group, or in general linear models accounting for reasoning ability, psychomotor speed, understanding of emotion words, and prior aggression.

Conclusion: Self-rated psychopathy was not independently linked to emotion recognition in PSD groups when considering prior aggression, patient status, reasoning ability, psychomotor speed and emotion word understanding.

Keywords
emotion recognition, psychosis, schizophrenia, aggression, violence, psychopathy
National Category
Psychiatry
Identifiers
urn:nbn:se:su:diva-238960 (URN), 10.3389/fpsyt.2023.1111896 (DOI), 001019838000001 (), 2-s2.0-85164474694 (Scopus ID)
Available from: 2025-02-03. Created: 2025-02-03. Last updated: 2025-02-03. Bibliographically approved.
Gustafsson, P. U., Laukka, P. & Lindholm, T. (2023). The Voice of Eyewitness Accuracy. In: ICPS 2023 Brussels: Poster Brochure. Paper presented at International Convention of Psychological Science (ICPS) 2023, Brussels, March 9–11, 2023. (pp. 41-41). Association for Psychological Science
2023 (English). In: ICPS 2023 Brussels: Poster Brochure, Association for Psychological Science, 2023, p. 41. Conference paper, Poster (with or without abstract) (Other academic)
Abstract [en]

In two studies, we examined vocal characteristics of accuracy. Participants watched a staged-crime film and were interviewed as eyewitnesses. A mega-analysis showed that correct responses were uttered with 1) a higher pitch, 2) greater energy in the first formant region, 3) higher speech rate, and 4) shorter pauses.

Place, publisher, year, edition, pages
Association for Psychological Science, 2023
Keywords
eyewitness accuracy, vocal characteristics, staged crime film
National Category
Psychology
Research subject
Psychology
Identifiers
urn:nbn:se:su:diva-221868 (URN)
Conference
International Convention of Psychological Science (ICPS) 2023, Brussels, March 9–11, 2023.
Available from: 2023-10-05. Created: 2023-10-05. Last updated: 2023-10-06. Bibliographically approved.
Döllinger, L., Letellier, I., Högman, L., Laukka, P., Fischer, H. & Hau, S. (2023). Trainee psychotherapists’ emotion recognition accuracy during 1.5 years of psychotherapy education compared to a control group: No improvement after psychotherapy training. PeerJ, 11, Article ID e16235.
2023 (English). In: PeerJ, E-ISSN 2167-8359, Vol. 11, article id e16235. Article in journal (Refereed). Published
Abstract [en]

The ability to recognize and work with patients’ emotions is considered an important part of most psychotherapy approaches. Surprisingly, there is little systematic research on psychotherapists’ ability to recognize other people’s emotional expressions. In this study, we compared trainee psychotherapists’ non-verbal emotion recognition accuracy to a control group of undergraduate students at two time points: at the beginning and at the end of one and a half years of theoretical and practical psychotherapy training. Emotion recognition accuracy (ERA) was assessed using two standardized computer tasks, one for recognition of dynamic multimodal (facial, bodily, vocal) expressions and one for recognition of facial micro expressions. Initially, 154 participants enrolled in the study, 72 of whom also took part in the follow-up. The trainee psychotherapists were moderately better at recognizing multimodal expressions, and slightly better at recognizing facial micro expressions, than the control group at the first test occasion. However, mixed multilevel modeling indicated that the ERA change trajectories for the two groups differed significantly. While the control group improved in their ability to recognize multimodal emotional expressions from pretest to follow-up, the trainee psychotherapists did not. Both groups improved their micro expression recognition accuracy, but the slope for the control group was significantly steeper than that for the trainee psychotherapists. These results suggest that psychotherapy education and clinical training do not always contribute to improved emotion recognition accuracy beyond what could be expected due to time or other factors. Possible reasons for this finding, as well as implications for psychotherapy education, are discussed.

National Category
Applied Psychology
Identifiers
urn:nbn:se:su:diva-221021 (URN), 10.7717/peerj.16235 (DOI), 001177956500001 (), 2-s2.0-85180968441 (Scopus ID)
Funder
Marcus and Amalia Wallenberg Foundation, MAW 2013.0130
Forte, Swedish Research Council for Health, Working Life and Welfare, 2013-02727
Available from: 2023-09-13. Created: 2023-09-13. Last updated: 2024-03-27. Bibliographically approved.
Elfenbein, H. A., Laukka, P., Althoff, J., Chui, W., Iraki, F. K., Rockstuhl, T. & Thingujam, N. S. (2022). What Do We Hear in the Voice? An Open-Ended Judgment Study of Emotional Speech Prosody. Personality and Social Psychology Bulletin, 48(7), 1087-1104
2022 (English). In: Personality and Social Psychology Bulletin, ISSN 0146-1672, E-ISSN 1552-7433, Vol. 48, no 7, p. 1087-1104. Article in journal (Refereed). Published
Abstract [en]

The current study investigated what can be understood from another person's tone of voice. Participants from five English-speaking nations (Australia, India, Kenya, Singapore, and the United States) listened to vocal expressions of nine positive and nine negative affective states recorded by actors from their own nation. In response, they wrote open-ended judgments of what they believed the actor was trying to express. Responses cut across the chronological emotion process and included descriptions of situations, cognitive appraisals, feeling states, physiological arousal, expressive behaviors, emotion regulation, and attempts at social influence. Accuracy in terms of emotion categories was overall modest, whereas accuracy in terms of valence and arousal was more substantial. Coding participants' 57,380 responses yielded a taxonomy of 56 categories, which included affective states as well as person descriptors, communication behaviors, and abnormal states. Open-ended responses thus reveal a wide range of ways in which people spontaneously perceive the intent behind emotional speech prosody.

Keywords
appraisal theory, emotion process, emotion recognition, free responses, speech prosody, vocal expression
National Category
Psychology
Research subject
Psychology
Identifiers
urn:nbn:se:su:diva-197120 (URN), 10.1177/01461672211029786 (DOI), 000677396800001 (), 34296644 (PubMedID), 2-s2.0-85111399533 (Scopus ID)
Available from: 2021-09-28. Created: 2021-09-28. Last updated: 2022-08-16. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0001-8771-6818
