1 - 38 of 38
  • 1. Bhatara, Anjali
    et al.
    Laukka, Petri
    Stockholm University, Faculty of Social Sciences, Department of Psychology, Cognitive psychology. Södertörn University, Sweden.
    Boll-Avetisyan, Natalie
    Granjon, Lionel
    Anger Elfenbein, Hillary
    Bänziger, Tanja
    Second Language Ability and Emotional Prosody Perception. 2016. In: PLoS ONE, ISSN 1932-6203, E-ISSN 1932-6203, Vol. 11, no 6, e0156855. Article in journal (Refereed)
    Abstract [en]

    The present study examines the effect of language experience on vocal emotion perception in a second language. Native speakers of French with varying levels of self-reported English ability were asked to identify emotions from vocal expressions produced by American actors in a forced-choice task, and to rate their pleasantness, power, alertness and intensity on continuous scales. Stimuli included emotionally expressive English speech (emotional prosody) and non-linguistic vocalizations (affect bursts), and a baseline condition with Swiss-French pseudo-speech. Results revealed effects of English ability on the recognition of emotions in English speech but not in non-linguistic vocalizations. Specifically, higher English ability was associated with less accurate identification of positive emotions, but not with the interpretation of negative emotions. Moreover, higher English ability was associated with lower ratings of pleasantness and power, again only for emotional prosody. This suggests that second language skills may sometimes interfere with emotion recognition from speech prosody, particularly for positive emotions.

  • 2. Bhatara, Anjali
    et al.
    Laukka, Petri
    Stockholm University, Faculty of Social Sciences, Department of Psychology.
    Levitin, Daniel J.
    Expression of emotion in music and vocal communication. 2014. Collection (editor) (Other academic)
    Abstract [en]

    Two of the most important social skills in humans are the ability to determine the moods of those around us, and to use this to guide our behavior. To accomplish this, we make use of numerous cues. Among the most important are vocal cues from both speech and non-speech sounds. Music is also a reliable method for communicating emotion. It is often present in social situations and can serve to unify a group's mood for ceremonial purposes (funerals, weddings) or general social interactions. Scientists and philosophers have speculated on the origins of music and language, and the possible common bases of emotional expression through music, speech and other vocalizations. They have found increasing evidence of commonalities among them. However, the domains in which researchers investigate these topics do not always overlap or share a common language, so communication between disciplines has been limited. The aim of this Research Topic is to bring together research across multiple disciplines related to the production and perception of emotional cues in music, speech, and non-verbal vocalizations. This includes natural sounds produced by human and non-human primates as well as synthesized sounds. Research methodology includes survey, behavioral, and neuroimaging techniques investigating adults as well as developmental populations, including those with atypical development. Studies using laboratory tasks as well as studies in more naturalistic settings are included.

  • 3. Bhatara, Anjali
    et al.
    Laukka, Petri
    Stockholm University, Faculty of Social Sciences, Department of Psychology.
    Levitin, Daniel J.
    Expression of emotion in music and vocal communication: introduction to the research topic. 2014. In: Frontiers in Psychology, ISSN 1664-1078, Vol. 5, Article 399. Article in journal (Refereed)
    Abstract [en]

    In social interactions, we must gauge the emotional state of others in order to behave appropriately. We rely heavily on auditory cues, specifically speech prosody, to do this. Music is also a complex auditory signal with the capacity to communicate emotion rapidly and effectively and often occurs in social situations or ceremonies as an emotional unifier.

    In sum, the main contribution of this Research Topic, along with highlighting the variety of research being done already, is to show the places of contact between the domains of music and vocal expression that occur at the level of emotional communication. In addition, we hope it will encourage future dialog among researchers interested in emotion in fields as diverse as computer science, linguistics, musicology, neuroscience, psychology, speech and hearing sciences, and sociology, who can each contribute knowledge necessary for studying this complex topic.

  • 4.
    Cortes, Diana S.
    et al.
    Stockholm University, Faculty of Social Sciences, Department of Psychology, Biological psychology.
    Laukka, Petri
    Stockholm University, Faculty of Social Sciences, Department of Psychology, Cognitive psychology.
    Asperholm, M.
    Fredborg, W.
    Döllinger, Lillian
    Stockholm University, Faculty of Social Sciences, Department of Psychology, Clinical psychology.
    Xiao, Shanshan
    Stockholm University, Faculty of Social Sciences, Department of Psychology, Biological psychology.
    Högman, Lennart
    Stockholm University, Faculty of Social Sciences, Department of Psychology, Biological psychology.
    Dang, J.
    Fischer, Håkan
    Stockholm University, Faculty of Social Sciences, Department of Psychology, Biological psychology.
    Intranasal Oxytocin and Response Inhibition in Young and Older Adults. 2017. Conference paper (Refereed)
    Abstract [en]

    In normal aging, people are confronted with impairments in both socioemotional and cognitive abilities. Specifically, there are age-related declines in the inhibitory processes that regulate attention towards irrelevant material. In recent years, intranasal administration of the neuropeptide oxytocin has mainly been related to improvements in several domains such as emotion recognition and memory, but to date the effects of oxytocin in aging remain largely unknown. In a randomized, double-blind, placebo-controlled, within-subjects study design, we investigated whether oxytocin facilitates inhibitory processing in older adults compared to younger adults. In total, 41 older adults (51% women; age range 65-75 years) and 37 younger adults (49% women; age range 20-30 years) participated in this study twice, receiving a single intranasal dose of 40 IU of placebo and oxytocin in randomized order 45 minutes before engaging in the task. Participants were tested approximately a month apart and mostly at the same hour on both occasions. Inhibition was measured with a Go/NoGo task which included happy and neutral faces as targets (Go stimuli) and distractors (NoGo stimuli) shown on a computer screen. Participants were instructed to press a button any time they saw a target and to remain passive when encountering a distractor. Preliminary results indicate effects for happy and neutral faces, but only in the distractor condition. For happy distractors, women correctly rejected happy faces more accurately than men did, in both the placebo and oxytocin conditions. A main effect of age was observed for the neutral distractors, where older adults were more successful in inhibiting responses than younger adults during both oxytocin and placebo treatments. We did not observe effects of oxytocin in the different tasks, and its role could not be clearly distinguished. In sum, our findings showed that age and gender can influence inhibition, but their effects depend on the displayed emotions. This suggests that the ability to inhibit interfering distractors may remain intact despite age and that deficits in inhibition may be selective. The role of oxytocin in inhibition needs to be further investigated, since its effects may be context dependent.

  • 5.
    Cortes, Diana S.
    et al.
    Stockholm University, Faculty of Social Sciences, Department of Psychology, Biological psychology.
    Laukka, Petri
    Stockholm University, Faculty of Social Sciences, Department of Psychology, Cognitive psychology.
    Fischer, Håkan
    Stockholm University, Faculty of Social Sciences, Department of Psychology, Biological psychology.
    Age differences in judgments of attractiveness, likeability, and trustworthiness of faces. 2016. In: Program of SANS 2016, 2016, p. 58, B-23. Conference paper (Other academic)
    Abstract [en]

    People constantly evaluate faces to obtain social information. However, the link between aging and social evaluation of faces is not well understood. Todorov and colleagues introduced a data-driven model defined by valence and dominance as the two main components underlying social judgments of faces. They also created a stimulus set consisting of computer-generated faces which systematically vary along various social dimensions (e.g., Todorov et al., 2013, Emotion, 13, 724-38). We utilized a selection of these facial stimuli to investigate age-related differences in judgments of the following dimensions: attractiveness, competence, dominance, extraversion, likeability, threat, and trustworthiness. Participants rated how well the faces represented the intended social dimensions on 9-point scales ranging from not at all to extremely well. Results from 71 younger (YA; mean age = 23.42 years) and 60 older adults (OA; mean age = 69.19 years) showed that OA evaluated untrustworthy faces as more trustworthy, dislikeable faces as more likeable, and unattractive faces as more attractive compared to YA. OA also evaluated attractive faces as more attractive compared to YA, whereas YA rated likeable and trustworthy faces as more likeable and trustworthy than OA did. In summary, our findings showed that OA evaluated negative social features less negatively compared to YA. This suggests that older and younger persons may use different cues for social evaluation of faces, and is in line with prior research suggesting age-related decline in the ability to recognize negative emotion expressions.

  • 6.
    Cortes, Diana S.
    et al.
    Stockholm University, Faculty of Social Sciences, Department of Psychology, Biological psychology.
    Laukka, Petri
    Stockholm University, Faculty of Social Sciences, Department of Psychology, Cognitive psychology.
    Lindahl, Christina
    Stockholm University, Faculty of Social Sciences, Department of Psychology, Cognitive psychology.
    Fischer, Håkan
    Stockholm University, Faculty of Social Sciences, Department of Psychology, Biological psychology.
    Memory for faces and voices varies as a function of sex and expressed emotion. 2017. In: PLoS ONE, ISSN 1932-6203, E-ISSN 1932-6203, Vol. 12, no 6, e0178423. Article in journal (Refereed)
    Abstract [en]

    We investigated how memory for faces and voices (presented separately and in combination) varies as a function of sex and emotional expression (anger, disgust, fear, happiness, sadness, and neutral). At encoding, participants judged the expressed emotion of items in forced-choice tasks, followed by incidental Remember/Know recognition tasks. Results from 600 participants showed that accuracy (hits minus false alarms) was consistently higher for neutral compared to emotional items, whereas accuracy for specific emotions varied across the presentation modalities (i.e., faces, voices, and face-voice combinations). For the subjective sense of recollection (“remember” hits), neutral items received the highest hit rates only for faces, whereas for voices and face-voice combinations anger and fear expressions instead received the highest recollection rates. We also observed better accuracy for items by female expressers, and own-sex bias where female participants displayed memory advantage for female faces and face-voice combinations. Results further suggest that own-sex bias can be explained by recollection, rather than familiarity, rates. Overall, results show that memory for faces and voices may be influenced by the expressions that they carry, as well as by the sex of both items and participants. Emotion expressions may also enhance the subjective sense of recollection without enhancing memory accuracy.
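
    The accuracy measure above (hits minus false alarms) is a standard bias-corrected recognition score. A minimal sketch of the arithmetic, using hypothetical counts that are not taken from the study:

    # Corrected recognition accuracy: hit rate minus false-alarm rate.
    # The counts below are hypothetical and only illustrate the computation.
    def corrected_recognition(hits, n_old, false_alarms, n_new):
        hit_rate = hits / n_old          # proportion of old items correctly recognized
        fa_rate = false_alarms / n_new   # proportion of new items wrongly "recognized"
        return hit_rate - fa_rate

    # e.g., 42 hits out of 48 old items, 6 false alarms out of 48 new items
    print(corrected_recognition(42, 48, 6, 48))  # 0.875 - 0.125 = 0.75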

  • 7.
    Döllinger, Lillian
    et al.
    Stockholm University, Faculty of Social Sciences, Department of Psychology.
    Bänziger, Tanja
    Högman, Lennart
    Stockholm University, Faculty of Social Sciences, Department of Psychology.
    Makower, Irena
    Laukka, Petri
    Stockholm University, Faculty of Social Sciences, Department of Psychology.
    Cortes, Diana S.
    Stockholm University, Faculty of Social Sciences, Department of Psychology.
    Fischer, Håkan
    Stockholm University, Faculty of Social Sciences, Department of Psychology.
    Hau, Stephan
    Stockholm University, Faculty of Social Sciences, Department of Psychology.
    Improving psychotherapeutic competencies using socioemotional perceptual training procedures. 2016. Conference paper (Other academic)
  • 8. Eyben, Florian
    et al.
    Scherer, Klaus R.
    Schuller, Björn W.
    Sundberg, Johan
    André, Elisabeth
    Busso, Carlos
    Devillers, Laurence Y.
    Epps, Julien
    Laukka, Petri
    Stockholm University, Faculty of Social Sciences, Department of Psychology, Perception and psychophysics.
    Narayanan, Shrikanth S.
    Truong, Khiet P.
    The Geneva Minimalistic Acoustic Parameter Set (GeMAPS) for Voice Research and Affective Computing. 2016. In: IEEE Transactions on Affective Computing, ISSN 1949-3045, E-ISSN 1949-3045, Vol. 7, no 2, 190-202 p. Article in journal (Refereed)
    Abstract [en]

    Work on voice sciences over recent decades has led to a proliferation of acoustic parameters that are used quite selectively and are not always extracted in a similar fashion. With many independent teams working in different research areas, shared standards become an essential safeguard to ensure compliance with state-of-the-art methods allowing appropriate comparison of results across studies and potential integration and combination of extraction and recognition systems. In this paper we propose a basic standard acoustic parameter set for various areas of automatic voice analysis, such as paralinguistic or clinical speech analysis. In contrast to a large brute-force parameter set, we present a minimalistic set of voice parameters here. These were selected based on a) their potential to index affective physiological changes in voice production, b) their proven value in former studies as well as their automatic extractability, and c) their theoretical significance. The set is intended to provide a common baseline for evaluation of future research and eliminate differences caused by varying parameter sets or even different implementations of the same parameters. Our implementation is publicly available with the openSMILE toolkit. Comparative evaluations of the proposed feature set and large baseline feature sets of INTERSPEECH challenges show a high performance of the proposed set in relation to its size.
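
    The abstract above notes that the GeMAPS implementation is distributed with the openSMILE toolkit. As a hedged illustration, assuming openSMILE's Python wrapper (the opensmile package) is installed and that "speech_sample.wav" is a placeholder file name, the functionals of the minimalistic set can be extracted roughly like this:

    # Sketch: extract GeMAPS functionals for one audio file via the openSMILE
    # Python wrapper (pip install opensmile). The file path is a placeholder.
    import opensmile

    smile = opensmile.Smile(
        feature_set=opensmile.FeatureSet.GeMAPSv01b,        # minimalistic GeMAPS parameters
        feature_level=opensmile.FeatureLevel.Functionals,   # one row of statistics per file
    )
    features = smile.process_file("speech_sample.wav")      # pandas DataFrame, one row
    print(features.shape)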

  • 9. Feingold, Daniel
    et al.
    Hasson-Ohayon, Ilanit
    Laukka, Petri
    Stockholm University, Faculty of Social Sciences, Department of Psychology, Cognitive psychology. Södertörn University, Sweden.
    Vishne, Tali
    Dembinsky, Yael
    Kravets, Shlomo
    Emotion recognition deficits among persons with schizophrenia: Beyond stimulus complexity level and presentation modality. 2016. In: Psychiatry Research, ISSN 0165-1781, E-ISSN 1872-7123, Vol. 240, 60-65 p. Article in journal (Refereed)
    Abstract [en]

    Studies have shown that persons with schizophrenia have lower accuracy in emotion recognition compared to persons without schizophrenia. However, the impact of the complexity level of the stimuli or the modality of presentation has not been extensively addressed. Forty-three persons with a diagnosis of schizophrenia and 43 healthy controls, matched for age and gender, were administered tests assessing emotion recognition from stimuli with low and high levels of complexity presented via visual, auditory and semantic channels. For both groups, recognition rates were higher for high-complexity stimuli compared to low-complexity stimuli. Additionally, both groups obtained higher recognition rates for visual and semantic stimuli than for auditory stimuli, but persons with schizophrenia obtained lower accuracy than persons in the control group for all presentation modalities. Persons diagnosed with schizophrenia did not show a complexity-level-specific or modality-specific deficit compared to healthy controls. Results suggest that emotion recognition deficits in schizophrenia extend beyond the complexity level of the stimuli and the modality of presentation, and reflect a global difficulty in cognitive functioning.

  • 10. Gold, Rinat
    et al.
    Butler, Pamela
    Revheim, Nadine
    Leitman, David I.
    Hansen, John A.
    Gur, Ruben C.
    Kantrowitz, Joshua T.
    Laukka, Petri
    Stockholm University, Faculty of Social Sciences, Department of Psychology.
    Juslin, Patrik N.
    Silipo, Gail S.
    Javitt, Daniel C.
    Auditory Emotion Recognition Impairments in Schizophrenia: Relationship to Acoustic Features and Cognition. 2012. In: American Journal of Psychiatry, ISSN 0002-953X, E-ISSN 1535-7228, Vol. 169, no 4, 424-432 p. Article in journal (Refereed)
    Abstract [en]

    Objective: Schizophrenia is associated with deficits in the ability to perceive emotion based on tone of voice. The basis for this deficit remains unclear, however, and relevant assessment batteries remain limited. The authors evaluated performance in schizophrenia on a novel voice emotion recognition battery with well-characterized physical features, relative to impairments in more general emotional and cognitive functioning. Method: The authors studied a primary sample of 92 patients and 73 comparison subjects. Stimuli were characterized according to both intended emotion and acoustic features (e.g., pitch, intensity) that contributed to the emotional percept. Parallel measures of visual emotion-recognition, pitch perception, general cognition, and overall outcome were obtained. More limited measures were obtained in an independent replication sample of 36 patients, 31 age-matched comparison subjects, and 188 general comparison subjects. Results: Patients showed statistically significant large-effect-size deficits in voice emotion recognition (d=1.1) and were preferentially impaired in recognition of emotion based on pitch features but not intensity features. Emotion recognition deficits were significantly correlated with pitch perception impairments both across (r=0.56) and within (r=0.47) groups. Path analysis showed both sensory-specific and general cognitive contributions to auditory emotion recognition deficits in schizophrenia. Similar patterns of results were observed in the replication sample. Conclusions: The results demonstrate that patients with schizophrenia show a significant deficit in the ability to recognize emotion based on tone of voice and that this deficit is related to impairment in detecting the underlying acoustic features, such as change in pitch, required for auditory emotion recognition. This study provides tools for, and highlights the need for, greater attention to physical features of stimuli used in studying social cognition in neuropsychiatric disorders.

  • 11. Henningsson, Susanne
    et al.
    Zettergren, Anna
    Hovey, Daniel
    Jonsson, Lina
    Svärd, Joakim
    Stockholm University, Faculty of Social Sciences, Aging Research Center (ARC), (together with KI).
    Cortes, Diana S.
    Stockholm University, Faculty of Social Sciences, Department of Psychology.
    Melke, Jonas
    Ebner, Natalie C.
    Laukka, Petri
    Stockholm University, Faculty of Social Sciences, Department of Psychology.
    Fischer, Håkan
    Stockholm University, Faculty of Social Sciences, Department of Psychology. Stockholm University, Faculty of Social Sciences, Aging Research Center (ARC), (together with KI).
    Westberg, Lars
    Association between polymorphisms in NOS3 and KCNH2 and social memory. 2015. In: Frontiers in Neuroscience, ISSN 1662-4548, E-ISSN 1662-453X, Vol. 9, Article 393. Article in journal (Refereed)
    Abstract [en]

    Social memory, including the ability to recognize faces and voices, is essential for social relationships. It has a large heritable component, but the knowledge about the contributing genes is sparse. The genetic variation underlying inter-individual differences in social memory was investigated in an exploratory sample (n = 55), genotyped with a chip comprising approximately 200,000 single nucleotide polymorphisms (SNPs), and in a validation sample (n = 582), where 30 SNPs were targeted. In the exploratory study face identity recognition was measured. The validation study also measured vocal sound recognition, as well as recognition of faces and vocal sounds combined (multimodal condition). In the exploratory study, the 30 SNPs that were associated with face recognition at p (uncorrected) < 0.001 and located in genes were chosen for further study. In the validation study two of these SNPs showed significant associations with recognition of faces, vocal sounds, and multimodal stimuli: rs1800779 in the gene encoding nitric oxide synthase 3 (NOS3) and rs3807370 in the gene encoding the voltage-gated channel, subfamily H, member 2 (KCNH2), in strong linkage disequilibrium with each other. The uncommon alleles were associated with superior performance, and the effects were present for men only (p < 0.0002). The exploratory study also showed a weaker but significant association with (non-emotional) word recognition, an effect that was independent of the effect on face recognition. This study demonstrates evidence for an association between NOS3 and KCNH2 SNPs and social memory.

  • 12. Holding, Benjamin C.
    et al.
    Laukka, Petri
    Stockholm University, Faculty of Social Sciences, Department of Psychology, Cognitive psychology.
    Fischer, Håkan
    Stockholm University, Faculty of Social Sciences, Department of Psychology, Biological psychology.
    Bänziger, Tanja
    Axelsson, John
    Stockholm University, Faculty of Social Sciences, Stress Research Institute. Karolinska Institutet.
    Sundelin, Tina
    Stockholm University, Faculty of Social Sciences, Stress Research Institute. Karolinska Institutet.
    Multimodal Emotion Recognition Is Resilient to Insufficient Sleep: Results From Cross-Sectional and Experimental Studies. 2017. In: Sleep, ISSN 0161-8105, E-ISSN 1550-9109, Vol. 40, no 11, zsx145. Article in journal (Refereed)
    Abstract [en]

    Objectives: Insufficient sleep has been associated with impaired recognition of facial emotions. However, previous studies have found inconsistent results, potentially stemming from the type of static picture task used. We therefore examined whether insufficient sleep was associated with decreased emotion recognition ability in two separate studies using a dynamic multimodal task.

    Methods: Study 1 used a cross-sectional design consisting of 291 participants with questionnaire measures assessing sleep duration and self-reported sleep quality for the previous night. Study 2 used an experimental design involving 181 participants where individuals were quasi-randomized into either a sleep-deprivation (N = 90) or a sleep-control (N = 91) condition. All participants from both studies were tested on the same forced-choice multimodal test of emotion recognition to assess the accuracy of emotion categorization.

    Results: Sleep duration, self-reported sleep quality (study 1), and sleep deprivation (study 2) did not predict overall emotion recognition accuracy or speed. Similarly, the responses to each of the twelve emotions tested showed no evidence of impaired recognition ability, apart from one positive association suggesting that greater self-reported sleep quality could predict more accurate recognition of disgust (study 1).

    Conclusions: The studies presented here involve considerably larger samples than previous studies and the results support the null hypotheses. Therefore, we suggest that the ability to accurately categorize the emotions of others is not associated with short-term sleep duration or sleep quality and is resilient to acute periods of insufficient sleep.

  • 13. Holding, J.B.C.
    et al.
    Laukka, Petri
    Stockholm University, Faculty of Social Sciences, Department of Psychology, Cognitive psychology.
    Fischer, Håkan
    Stockholm University, Faculty of Social Sciences, Department of Psychology, Biological psychology.
    Axelsson, John
    Sundelin, Tina
    Total sleep deprivation does not impact emotion categorisation in dynamic stimuli. 2016. In: Abstracts of the 23rd Congress of the European Sleep Research Society, 13–16 September 2016, Bologna, Italy. Journal of Sleep Research, 2016, Vol. 25(S1), p. 152, P193. Conference paper (Refereed)
    Abstract [en]

    Previous studies have highlighted a deficit in facial emotion recognition after sleep loss. However, while some studies suggest an overall deficit in ability, others have only found effects in individual emotions, or no effect at all. The aim of this study was to investigate this relationship in a large sample and to utilise a dynamic test of emotion recognition in multiple modalities. 145 individuals (91 female, ages 18–45) participated in a sleep-deprivation experiment. Participants were randomised into: one night of total sleep deprivation (TSD) or normal sleep (8–9 h in bed). The following day participants completed a computerised emotional recognition test, consisting of 72 visual, audio, and audio-visual clips, representing 12 different emotions. The stimuli were divided into “easy” and “hard” depending on the intensity of emotional display. A mixed ANOVA revealed significant main effects of modality and difficulty, P < 0.001, but no main effect of condition, P = 0.31, on emotional recognition accuracy. Additionally, there was no interaction between condition and difficulty, P = 0.96, or modality, P = 0.67. This study indicates that sleep deprivation does not reduce the ability to recognise emotions. Given that some studies have only found effects on single emotions, it is possible that the effects of sleep loss are more specific than investigated here. However, it is also possible that previous findings relate to the types of static stimuli used. The ability to recognise emotions is key to social perception; this study suggests that this ability is resilient to one night of sleep deprivation.
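
    A simplified sketch of the between-subjects (sleep condition) by within-subjects (modality) part of the mixed ANOVA reported above, using the pingouin package. The data are synthetic stand-ins, since the abstract does not provide the dataset, and the difficulty factor is omitted for brevity:

    # Synthetic long-format data: 40 subjects, 2 between-subject conditions,
    # 3 within-subject modalities. Values do not reflect the actual study.
    import numpy as np
    import pandas as pd
    import pingouin as pg

    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "subject": np.repeat(np.arange(40), 3),
        "condition": np.repeat(["TSD", "sleep"], 60),
        "modality": np.tile(["visual", "audio", "audio-visual"], 40),
        "accuracy": rng.normal(0.6, 0.1, size=120),
    })

    aov = pg.mixed_anova(data=df, dv="accuracy", within="modality",
                         subject="subject", between="condition")
    print(aov.round(3))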

  • 14. Juslin, Patrik N.
    et al.
    Liljeström, Simon
    Laukka, Petri
    Stockholm University, Faculty of Social Sciences, Department of Psychology.
    Västfjäll, Daniel
    Lundqvist, Lars-Olov
    Emotional reactions to music in a nationally representative sample of Swedish adults: Prevalence and causal influences. 2011. In: Musicae scientiae, ISSN 1029-8649, Vol. 15, no 2, 174-207 p. Article in journal (Refereed)
    Abstract [en]

    Empirical studies have indicated that listeners value music primarily for its ability to arouse emotions. Yet little is known about which emotions listeners normally experience when listening to music, or about the causes of these emotions. The goal of this study was therefore to explore the prevalence of emotional reactions to music in everyday life and how this is influenced by various factors in the listener, the music, and the situation. A self-administered mail questionnaire was sent to a random and nationally representative sample of 1,500 Swedish citizens between the ages of 18 and 65, and 762 participants (51%) responded to the questionnaire. Thirty-two items explored both musical emotions in general (semantic estimates) and the most recent emotion episode featuring music for each participant (episodic estimates). The results revealed several variables (e.g., personality, age, gender, listener activity) that were correlated with particular emotions. A multiple discriminant analysis indicated that three of the most common emotion categories in a set of musical episodes (i.e., happiness, sadness, nostalgia) could be predicted with a mean accuracy of 70% correct based on data obtained from the questionnaire. The results may inform theorizing about musical emotions and guide the selection of causal variables for manipulation in future experiments.
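
    A hedged sketch of the kind of discriminant analysis described above, predicting the reported emotion category from questionnaire variables with scikit-learn; the predictors and labels are simulated placeholders, not the study's data:

    # Linear discriminant analysis predicting emotion category from questionnaire
    # variables, evaluated with cross-validation. Data are random placeholders,
    # so accuracy will sit near the chance level of ~0.33.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 8))   # e.g., personality, age, gender, listener activity
    y = rng.choice(["happiness", "sadness", "nostalgia"], size=300)

    lda = LinearDiscriminantAnalysis()
    print(cross_val_score(lda, X, y, cv=5).mean())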

  • 15. Kantrowitz, J. T.
    et al.
    Scaramello, N.
    Jakubovitz, A.
    Lehrfeld, J. M.
    Laukka, Petri
    Stockholm University, Faculty of Social Sciences, Department of Psychology.
    Elfenbein, H. A.
    Silipo, G.
    Javitt, D. C.
    Amusia and protolanguage impairments in schizophrenia. 2014. In: Psychological Medicine, ISSN 0033-2917, E-ISSN 1469-8978, Vol. 44, no 13, 2739-2748 p. Article in journal (Refereed)
    Abstract [en]

    Background. Both language and music are thought to have evolved from a musical protolanguage that communicated social information, including emotion. Individuals with perceptual music disorders (amusia) show deficits in auditory emotion recognition (AER). Although auditory perceptual deficits have been studied in schizophrenia, their relationship with musical/protolinguistic competence has not previously been assessed. Method. Musical ability was assessed in 31 schizophrenia/schizo-affective patients and 44 healthy controls using the Montreal Battery for Evaluation of Amusia (MBEA). AER was assessed using a novel battery in which actors provided portrayals of five separate emotions. The Disorganization factor of the Positive and Negative Syndrome Scale (PANSS) was used as a proxy for language/thought disorder and the MATRICS Consensus Cognitive Battery (MCCB) was used to assess cognition. Results. Highly significant deficits were seen between patients and controls across auditory tasks (p<0.001). Moreover, significant differences were seen in AER between the amusia and intact music-perceiving groups, which remained significant after controlling for group status and education. Correlations with AER were specific to the melody domain, and correlations between protolanguage (melody domain) and language were independent of overall cognition. Discussion. This is the first study to document a specific relationship between amusia, AER and thought disorder, suggesting a shared linguistic/protolinguistic impairment. Once amusia was considered, other cognitive factors were no longer significant predictors of AER, suggesting that musical ability in general and melodic discrimination ability in particular may be crucial targets for treatment development and cognitive remediation in schizophrenia.

  • 16. Kantrowitz, Joshua T.
    et al.
    Hoptman, Matthew J.
    Leitman, David I.
    Moreno-Ortega, Marta
    Lehrfeld, Jonathan M.
    Dias, Elisa
    Sehatpour, Pejman
    Laukka, Petri
    Stockholm University, Faculty of Social Sciences, Department of Psychology.
    Silipo, Gail
    Javitt, Daniel C.
    Neural Substrates of Auditory Emotion Recognition Deficits in Schizophrenia. 2015. In: Journal of Neuroscience, ISSN 0270-6474, E-ISSN 1529-2401, Vol. 35, no 44, 14909-14921 p. Article in journal (Refereed)
    Abstract [en]

    Deficits in auditory emotion recognition (AER) are a core feature of schizophrenia and a key component of social cognitive impairment. AER deficits are tied behaviorally to impaired ability to interpret tonal (“prosodic”) features of speech that normally convey emotion, such as modulations in base pitch (F0M) and pitch variability (F0SD). These modulations can be recreated using synthetic frequency modulated (FM) tones that mimic the prosodic contours of specific emotional stimuli. The present study investigates neural mechanisms underlying impaired AER using a combined event-related potential/resting-state functional connectivity (rsfMRI) approach in 84 schizophrenia/schizoaffective disorder patients and 66 healthy comparison subjects. Mismatch negativity (MMN) to FM tones was assessed in 43 patients/36 controls. rsfMRI between auditory cortex and medial temporal (insula) regions was assessed in 55 patients/51 controls. The relationship between AER, MMN to FM tones, and rsfMRI was assessed in the subset who performed all assessments (14 patients, 21 controls). As predicted, patients showed robust reductions in MMN across FM stimulus type (p = 0.005), particularly to modulations in F0M, along with impairments in AER and FM tone discrimination. MMN source analysis indicated dipoles in both auditory cortex and anterior insula, whereas rsfMRI analyses showed reduced auditory-insula connectivity. MMN to FM tones and functional connectivity together accounted for ∼50% of the variance in AER performance across individuals. These findings demonstrate that impaired preattentive processing of tonal information and reduced auditory-insula connectivity are critical determinants of social cognitive dysfunction in schizophrenia, and thus represent key targets for future research and clinical intervention.

  • 17. Kantrowitz, Joshua T.
    et al.
    Jakubovitz, Aaron
    Scaramello, Nayla
    Laukka, Petri
    Stockholm University, Faculty of Social Sciences, Department of Psychology.
    Silipo, Gail
    Javitt, Daniel C.
    Are Schizophrenia Patients Amusical?: The Role of Pitch and Rhythm in Auditory Emotion Recognition Impairments in Schizophrenia. 2013. In: Biological Psychiatry, ISSN 0006-3223, E-ISSN 1873-2402, Vol. 73, no 9, Suppl., p. 18. Article in journal (Other academic)
  • 18. Kantrowitz, Joshua T.
    et al.
    Leitman, David I.
    Lehrfeld, Jonathan M.
    Laukka, Petri
    Stockholm University, Faculty of Social Sciences, Department of Psychology.
    Juslin, Patrik N.
    Butler, Pamela D.
    Silipo, Gail
    Javitt, Daniel C.
    Reduction in Tonal Discriminations Predicts Receptive Emotion Processing Deficits in Schizophrenia and Schizoaffective Disorder. 2013. In: Schizophrenia Bulletin, ISSN 0586-7614, E-ISSN 1745-1701, Vol. 39, no 1, 86-93 p. Article in journal (Refereed)
    Abstract [en]

    Introduction: Schizophrenia patients show decreased ability to identify emotion based upon tone of voice (voice emotion recognition), along with deficits in basic auditory processing. Interrelationship among these measures is poorly understood. Methods: Forty-one patients with schizophrenia/schizoaffective disorder and 41 controls were asked to identify the emotional valence (happy, sad, angry, fear, or neutral) of 38 synthesized frequency-modulated (FM) tones designed to mimic key acoustic features of human vocal expressions. The mean (F0M) and variability (F0SD) of fundamental frequency (pitch) and absence or presence of high frequency energy (HF500) of the tones were independently manipulated to assess contributions on emotion identification. Forty patients and 39 controls also completed tone-matching and voice emotion recognition tasks. Results: Both groups showed a nonrandom response pattern (P < .0001). Stimuli with highest and lowest F0M/F0SD were preferentially identified as happy and sad, respectively. Stimuli with low F0M and midrange F0SD values were identified as angry. Addition of HF500 increased rates of angry and decreased rates of sad identifications. Patients showed less differentiation of response across frequency changes, leading to a highly significant between-group difference in response pattern to maximally identifiable stimuli (d = 1.4). The differential identification pattern for FM tones correlated with deficits in basic tone-matching ability (P = .01), voice emotion recognition (P < .001), and negative symptoms (P < .001). Conclusions: Specific FM tones conveyed reliable emotional percepts in both patients and controls and correlated highly with deficits in ability to recognize information based upon tone of voice, suggesting significant bottom-up contributions to social cognition and negative symptom impairments in schizophrenia.
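
    A rough, hypothetical illustration of how a frequency-modulated tone with independently controlled mean pitch (F0M) and pitch variability (F0SD), plus an optional high-frequency component standing in for HF500, could be synthesized; this is not the study's actual stimulus-generation procedure:

    # Synthesize a tone whose instantaneous pitch follows a slow sinusoidal
    # contour around f0_mean with standard deviation f0_sd. The optional fourth
    # harmonic is a crude stand-in for energy above 500 Hz (HF500).
    import numpy as np

    def fm_tone(f0_mean=200.0, f0_sd=20.0, hf=False, dur=1.0, sr=16000, mod_rate=4.0):
        t = np.arange(int(dur * sr)) / sr
        inst_f0 = f0_mean + f0_sd * np.sqrt(2) * np.sin(2 * np.pi * mod_rate * t)
        phase = 2 * np.pi * np.cumsum(inst_f0) / sr
        tone = np.sin(phase)
        if hf:
            tone += 0.3 * np.sin(4 * phase)
        return tone / np.max(np.abs(tone))

    stimulus = fm_tone(f0_mean=180.0, f0_sd=40.0, hf=True)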

  • 19. Karlsson, Sara
    et al.
    Henningsson, Susanne
    Hovey, Daniel
    Zettergren, Anna
    Jonsson, Lina
    Cortes, Diana S.
    Stockholm University, Faculty of Social Sciences, Department of Psychology, Biological psychology.
    Melke, Jonas
    Laukka, Petri
    Stockholm University, Faculty of Social Sciences, Department of Psychology, Cognitive psychology.
    Fischer, Håkan
    Stockholm University, Faculty of Social Sciences, Department of Psychology, Biological psychology.
    Westberg, Lars
    Social memory associated with estrogen receptor polymorphisms in women. 2016. In: Social Cognitive & Affective Neuroscience, ISSN 1749-5016, E-ISSN 1749-5024, Vol. 11, no 6, 877-883 p. Article in journal (Refereed)
    Abstract [en]

    The ability to recognize the identity of faces and voices is essential for social relationships. Although the heritability of social memory is high, knowledge about the contributing genes is sparse. Since sex differences and rodent studies support an influence of estrogens and androgens on social memory, polymorphisms in the estrogen and androgen receptor genes (ESR1, ESR2, AR) are candidates for this trait. Recognition of faces and vocal sounds, separately and combined, was investigated in 490 subjects, genotyped for 10 single nucleotide polymorphisms (SNPs) in ESR1, four in ESR2 and one in the AR. Four of the associations survived correction for multiple testing: women carrying rare alleles of the three ESR2 SNPs, rs928554, rs1271572 and rs1256030, in linkage disequilibrium with each other, displayed superior face recognition compared with non-carriers. Furthermore, the uncommon genotype of the ESR1 SNP rs2504063 was associated with better recognition of identity through vocal sounds, also specifically in women. This study demonstrates evidence for associations in women between face recognition and variation in ESR2, and recognition of identity through vocal sounds and variation in ESR1. These results suggest that estrogen receptors may regulate social memory function in humans, in line with what has previously been established in mice.

  • 20.
    Laukka, Petri
    Stockholm University, Faculty of Social Sciences, Department of Psychology, Cognitive psychology.
    Vocal Communication of Emotion. 2017. In: Encyclopedia of Personality and Individual Differences / [ed] Virgil Zeigler-Hill, Todd K. Shackelford, Springer, 2017. Chapter in book (Refereed)
    Abstract [en]

    Vocal communication of emotion refers to the process wherein speakers express emotions by modulating nonverbal aspects of their speech, and listeners utilize the nonverbal aspects of speech in order to make inferences about the emotional experience of the speaker.

  • 21.
    Laukka, Petri
    et al.
    Stockholm University, Faculty of Social Sciences, Department of Psychology.
    Audibert, Nicolas
    Aubergé, Véronique
    Exploring the determinants of the graded structure of vocal emotion expressions. 2012. In: Cognition & Emotion, ISSN 0269-9931, E-ISSN 1464-0600, Vol. 26, no 4, 710-719 p. Article in journal (Refereed)
    Abstract [en]

    We examined what determines the typicality, or graded structure, of vocal emotion expressions. Separate groups of judges rated acted and spontaneous expressions of anger, fear, and joy with regard to their typicality and three main determinants of the graded structure of categories: category members’ similarity to the central tendency of their category (CT); category members’ frequency of instantiation, i.e., how often they are encountered as category members (FI); and category members’ similarity to ideals associated with the goals served by its category, i.e., suitability to express particular emotions. Partial correlations and multiple regression analysis revealed that similarity to ideals, rather than CT or FI, explained most variance in judged typicality. Results thus suggest that vocal emotion expressions constitute ideal-based goal-derived categories, rather than taxonomic categories based on CT and FI. This could explain how prototypical expressions can be acoustically distinct and highly recognisable but occur relatively rarely in everyday speech.
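
    The analysis logic above (do ideals predict typicality over and above CT and FI?) can be sketched with a partial correlation and a multiple regression; the ratings below are simulated placeholders, not data from the study:

    # Partial correlation of ideals with typicality controlling for CT and FI,
    # followed by a multiple regression with all three determinants entered.
    import numpy as np
    import pandas as pd
    import pingouin as pg
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 100
    df = pd.DataFrame({"ct": rng.normal(size=n),
                       "fi": rng.normal(size=n),
                       "ideals": rng.normal(size=n)})
    df["typicality"] = 0.7 * df["ideals"] + 0.1 * df["ct"] + rng.normal(scale=0.5, size=n)

    print(pg.partial_corr(data=df, x="ideals", y="typicality", covar=["ct", "fi"]))
    X = sm.add_constant(df[["ct", "fi", "ideals"]])
    print(sm.OLS(df["typicality"], X).fit().params)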

  • 22.
    Laukka, Petri
    et al.
    Stockholm University, Faculty of Social Sciences, Department of Psychology.
    Eerola, Tuomas
    Thingujam, Nutankumar S.
    Yamasaki, Teruo
    Beller, Gregory
    Universal and Culture-Specific Factors in the Recognition and Performance of Musical Affect Expressions. 2013. In: Emotion, ISSN 1528-3542, E-ISSN 1931-1516, Vol. 13, no 3, 434-449 p. Article in journal (Refereed)
    Abstract [en]

    We present a cross-cultural study on the performance and perception of affective expression in music. Professional bowed-string musicians from different musical traditions (Swedish folk music, Hindustani classical music, Japanese traditional music, and Western classical music) were instructed to perform short pieces of music to convey 11 emotions and related states to listeners. All musical stimuli were judged by Swedish, Indian, and Japanese participants in a balanced design, and a variety of acoustic and musical cues were extracted. Results first showed that the musicians' expressive intentions could be recognized with accuracy above chance both within and across musical cultures, but communication was, in general, more accurate for culturally familiar versus unfamiliar music, and for basic emotions versus nonbasic affective states. We further used a lens-model approach to describe the relations between the strategies that musicians use to convey various expressions and listeners' perceptions of the affective content of the music. Many acoustic and musical cues were similarly correlated with both the musicians' expressive intentions and the listeners' affective judgments across musical cultures, but the match between musicians' and listeners' uses of cues was better in within-cultural versus cross-cultural conditions. We conclude that affective expression in music may depend on a combination of universal and culture-specific factors.

  • 23.
    Laukka, Petri
    et al.
    Stockholm University, Faculty of Social Sciences, Department of Psychology.
    Elfenbein, H.A.
    Chui, W.
    Thingujam, N.S.
    Iraki, F.K.
    Rockstuhl, T.
    Althoff, J.
    Presenting the VENEC corpus: Development of a cross-cultural corpus of vocal emotion expressions and a novel method of annotating emotion appraisals. 2010. In: Proceedings of the LREC 2010 Workshop on Corpora for Research on Emotion and Affect / [ed] L. Devillers, B. Schuller, R. Cowie, E. Douglas-Cowie, & A. Batliner, Valletta, Malta: European Language Resources Association, 2010, 53-57 p. Conference paper (Refereed)
    Abstract [en]

    We introduce the Vocal Expressions of Nineteen Emotions across Cultures (VENEC) corpus and present results from initial evaluation efforts using a novel method of annotating emotion appraisals. The VENEC corpus consists of 100 professional actors from 5 English speaking cultures (USA, India, Kenya, Singapore, and Australia) who vocally expressed 19 different affects/emotions (affection, amusement, anger, contempt, disgust, distress, fear, guilt, happiness, interest, lust, negative surprise, neutral, positive surprise, pride, relief, sadness, serenity, and shame), each with 3 levels of emotion intensity, by enacting finding themselves in various emotion-eliciting situations. In all, the corpus contains approximately 6,500 stimuli offering great variety of expressive styles for each emotion category due to speaker, culture, and emotion intensity effects. All stimuli have further been acoustically analyzed regarding pitch, intensity, voice quality, and durational cues. In the appraisal rating study, listeners rated a selection of VENEC-stimuli with regard to the characteristics of the emotion eliciting situation, described in terms of 8 emotion appraisal dimensions (novelty, intrinsic pleasantness, goal conduciveness, urgency, power, self- and other-responsibility, and norm compatibility). First, results showed that the inter-rater reliability was acceptable for all scales except responsibility. Second, the perceived appraisal profiles for the different vocal expressions were generally in accord with predictions based on appraisal theory. Finally, listeners’ appraisal ratings on each scale were significantly correlated with several acoustic characteristics. The results show that listeners can reliably infer several aspects of emotion-eliciting situations from vocal affect expressions, and thus suggest that vocal affect expressions may carry cognitive representational information.

  • 24.
    Laukka, Petri
    et al.
    Stockholm University, Faculty of Social Sciences, Department of Psychology.
    Elfenbein, Hillary Anger
    Emotion appraisal dimensions can be inferred from vocal expressions. 2012. In: Social Psychological and Personality Science, ISSN 1948-5506, Vol. 3, no 5, 529-536 p. Article in journal (Refereed)
    Abstract [en]

    Vocal expressions are thought to convey information about speakers’ emotional states but may also reflect the antecedent cognitive appraisal processes that produced the emotions. We investigated the perception of emotion-eliciting situations on the basis of vocal expressions. Professional actors vocally portrayed different emotions by enacting emotion-eliciting situations. Judges then rated these expressions with respect to the emotion-eliciting situation described in terms of appraisal dimensions (i.e., novelty, intrinsic pleasantness, goal conduciveness, urgency, power, self- and other responsibility, and norm compatibility), achieving good agreement. The perceived appraisal profiles for the different emotions were generally in accord with predictions based on appraisal theory. The appraisal ratings also correlated with a variety of acoustic measures related to pitch, intensity, voice quality, and temporal characteristics. Results suggest that several aspects of emotion-eliciting situations can be inferred reliably and validly from vocal expressions which, thus, may carry information about the cognitive representation of events.

  • 25.
    Laukka, Petri
    et al.
    Stockholm University, Faculty of Social Sciences, Department of Psychology.
    Elfenbein, Hillary Anger
    Söder, Nela
    Stockholm University, Faculty of Social Sciences, Department of Psychology.
    Nordström, Henrik
    Stockholm University, Faculty of Social Sciences, Department of Psychology.
    Althoff, Jean
    Chui, Wanda
    Iraki, Frederick K.
    Rockstuhl, Thomas
    Thingujam, Nutankumar S.
    Cross-cultural decoding of positive and negative non-linguistic emotion vocalizations. 2013. In: Frontiers in Psychology, ISSN 1664-1078, Vol. 4, Article 353. Article in journal (Refereed)
    Abstract [en]

    Which emotions are associated with universally recognized non-verbal signals? We address this issue by examining how reliably non-linguistic vocalizations (affect bursts) can convey emotions across cultures. Actors from India, Kenya, Singapore, and USA were instructed to produce vocalizations that would convey nine positive and nine negative emotions to listeners. The vocalizations were judged by Swedish listeners using a within-valence forced-choice procedure, where positive and negative emotions were judged in separate experiments. Results showed that listeners could recognize a wide range of positive and negative emotions with accuracy above chance. For positive emotions, we observed the highest recognition rates for relief, followed by lust, interest, serenity and positive surprise, with affection and pride receiving the lowest recognition rates. Anger, disgust, fear, sadness, and negative surprise received the highest recognition rates for negative emotions, with the lowest rates observed for guilt and shame. By way of summary, results showed that the voice can reveal both basic emotions and several positive emotions other than happiness across cultures, but self-conscious emotions such as guilt, pride, and shame seem not to be well recognized from non-linguistic vocalizations.

  • 26.
    Laukka, Petri
    et al.
    Stockholm University, Faculty of Social Sciences, Department of Psychology, Cognitive psychology.
    Elfenbein, Hillary Anger
    Thingujam, Nutankumar S.
    Rockstuhl, Thomas
    Iraki, Frederick K.
    Chui, Wanda
    Althoff, Jean
    The expression and recognition of emotions in the voice across five nations: A lens model analysis based on acoustic features. 2016. In: Journal of Personality and Social Psychology, ISSN 0022-3514, E-ISSN 1939-1315, Vol. 111, no 5, 686-705 p. Article in journal (Refereed)
    Abstract [en]

    This study extends previous work on emotion communication across cultures with a large-scale investigation of the physical expression cues in vocal tone. In doing so, it provides the first direct test of a key proposition of dialect theory, namely that greater accuracy of detecting emotions from one’s own cultural group—known as in-group advantage—results from a match between culturally specific schemas in emotional expression style and culturally specific schemas in emotion recognition. Study 1 used stimuli from 100 professional actors from five English-speaking nations vocally conveying 11 emotional states (anger, contempt, fear, happiness, interest, lust, neutral, pride, relief, sadness, and shame) using standard-content sentences. Detailed acoustic analyses showed many similarities across groups, and yet also systematic group differences. This provides evidence for cultural accents in expressive style at the level of acoustic cues. In Study 2, listeners evaluated these expressions in a 5 × 5 design balanced across groups. Cross-cultural accuracy was greater than expected by chance. However, there was also in-group advantage, which varied across emotions. A lens model analysis of fundamental acoustic properties examined patterns in emotional expression and perception within and across groups. Acoustic cues were used relatively similarly across groups both to produce and judge emotions, and yet there were also subtle cultural differences. Speakers appear to have a culturally nuanced schema for enacting vocal tones via acoustic cues, and perceivers have a culturally nuanced schema in judging them. Consistent with dialect theory’s prediction, in-group judgments showed a greater match between these schemas used for emotional expression and perception.

  • 27.
    Laukka, Petri
    et al.
    Stockholm University, Faculty of Social Sciences, Department of Psychology.
    Neiberg, Daniel
    Elfenbein, Hillary Anger
    Evidence for cultural dialects in vocal emotion expression: acoustic classification within and across five nations. 2014. In: Emotion, ISSN 1528-3542, E-ISSN 1931-1516, Vol. 14, no 3, 445-449 p. Article in journal (Refereed)
    Abstract [en]

    The possibility of cultural differences in the fundamental acoustic patterns used to express emotion through the voice is an unanswered question central to the larger debate about the universality versus cultural specificity of emotion. This study used emotionally inflected standard-content speech segments expressing 11 emotions produced by 100 professional actors from 5 English-speaking cultures. Machine learning simulations were employed to classify expressions based on their acoustic features, using conditions where training and testing were conducted on stimuli coming from either the same or different cultures. A wide range of emotions were classified with above-chance accuracy in cross-cultural conditions, suggesting vocal expressions share important characteristics across cultures. However, classification showed an in-group advantage with higher accuracy in within- versus cross-cultural conditions. This finding demonstrates cultural differences in expressive vocal style, and supports the dialect theory of emotions according to which greater recognition of expressions from in-group members results from greater familiarity with culturally specific expressive styles.
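
    A hedged sketch of the within- versus cross-cultural classification logic described above, using a generic scikit-learn classifier; the acoustic features, labels, and classifier choice are illustrative assumptions rather than the study's actual setup:

    # Train on one culture's acoustic features and test either within the same
    # culture (cross-validation) or on another culture. Features and labels are
    # simulated, so both accuracies will hover around chance (1/11).
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    cultures = {c: (rng.normal(size=(200, 20)), rng.integers(0, 11, size=200))
                for c in ["US", "Australia", "India", "Kenya", "Singapore"]}

    X_us, y_us = cultures["US"]
    X_in, y_in = cultures["India"]

    within = cross_val_score(SVC(), X_us, y_us, cv=5).mean()
    across = SVC().fit(X_us, y_us).score(X_in, y_in)
    print(f"within-culture: {within:.2f}, cross-culture: {across:.2f}")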

  • 28.
    Laukka, Petri
    et al.
    Stockholm University, Faculty of Social Sciences, Department of Psychology.
    Quick, Lina
    Emotional and motivational uses of music in sports and exercise: A questionnaire study among athletes. 2013. In: Psychology of Music, ISSN 0305-7356, E-ISSN 1741-3087, Vol. 41, no 2, 198-215 p. Article in journal (Refereed)
    Abstract [en]

    Music is present in many sport and exercise situations, but empirical investigations on the motives for listening to music in sports remain scarce. In this study, Swedish elite athletes (N = 252) answered a questionnaire that focused on the emotional and motivational uses of music in sports and exercise. The questionnaire contained both quantitative items that assessed the prevalence of various uses of music, and open-ended items that targeted specific emotional episodes in relation to music in sports. Results showed that the athletes most often reported listening to music during pre-event preparations, warm-up, and training sessions; and the most common motives for listening to music were to increase pre-event activation, positive affect, motivation, performance levels and to experience flow. The athletes further reported that they mainly experienced positive affective states (e.g., happiness, alertness, confidence, relaxation) in relation to music in sports, and also reported on their beliefs about the causes of the musical emotion episodes in sports. In general, the results suggest that the athletes used music in purposeful ways in order to facilitate their training and performance.

  • 29.
    Laukka, Petri
    et al.
    Stockholm University, Faculty of Social Sciences, Department of Psychology.
    Åhs, Fredrik
    Furmark, Tomas
    Fredrikson, Mats
    Neurofunctional correlates of expressed vocal affect in social phobia. 2011. In: Cognitive, Affective, & Behavioral Neuroscience, ISSN 1530-7026, Vol. 11, no 3, 413-425 p. Article in journal (Refereed)
    Abstract [en]

    We investigated the neural correlates of expressed vocal affect in patients with social phobia. A group of 36 patients performed an anxiogenic public-speaking task while regional cerebral blood flow (rCBF) was assessed using oxygen-15 positron emission tomography. The patients’ speech was recorded and content masked using low-pass filtering (which obscures linguistic content but preserves nonverbal affective cues). The content-masked speech samples were then evaluated with regard to their level of vocally expressed nervousness. We hypothesized that activity in prefrontal and subcortical brain areas previously implicated in emotion regulation would be associated with the degree of expressed vocal affect. Regression analyses accordingly revealed significant negative correlations between expressed vocal affect and rCBF in inferior frontal gyrus, putamen, and hippocampus. Further, functional connectivity was revealed between inferior frontal gyrus and (a) anterior cingulate cortex and (b) amygdala and basal ganglia. We suggest that brain areas important for emotion regulation may also form part of a network associated with the modulation of affective prosody in social phobia.
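
    Content masking by low-pass filtering, as mentioned above, can be sketched with a Butterworth filter; the cutoff frequency and the use of noise as a stand-in for a speech recording are assumptions for illustration only:

    # Low-pass filter a waveform so that linguistic content becomes unintelligible
    # while gross prosodic cues (pitch contour, rhythm, loudness) are preserved.
    import numpy as np
    from scipy.signal import butter, filtfilt

    def content_mask(signal, sr, cutoff_hz=400.0, order=4):
        b, a = butter(order, cutoff_hz / (sr / 2), btype="low")
        return filtfilt(b, a, signal)

    sr = 16000
    speech = np.random.default_rng(0).normal(size=sr)  # placeholder for a real recording
    masked = content_mask(speech, sr)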

  • 30. Leitman, David I.
    et al.
    Wolf, Daniel H.
    Laukka, Petri
    Stockholm University, Faculty of Social Sciences, Department of Psychology.
    Ragland, J. Daniel
    Valdez, Jeffrey N.
    Turetsky, Bruce I.
    Gur, Raquel E.
    Gur, Ruben C.
    Not pitch perfect: Sensory contributions to affective communication impairment in schizophrenia. 2011. In: Biological Psychiatry, ISSN 0006-3223, E-ISSN 1873-2402, Vol. 70, no 7, 611-618 p. Article in journal (Refereed)
    Abstract [en]

    Background: Schizophrenia patients have vocal affect (prosody) deficits that are treatment resistant and associated with negative symptoms and poor outcome. The neural correlates of this dysfunction are unclear. Prior study has suggested that schizophrenia vocal affect perception deficits stem from an inability to use acoustic cues, notably pitch, in decoding emotion.

    Methods: Functional magnetic resonance imaging was performed in 24 schizophrenia patients and 28 healthy control subjects, during the performance of a four-choice (happiness, fear, anger, neutral) vocal affect identification task in which items for each emotion varied parametrically in affective salient acoustic cue levels.

    Results: We observed that parametric increases in cue levels in schizophrenia failed to produce the same identification rate increases as in control subjects. These deficits correlated with diminished reciprocal activation changes in superior temporal and inferior frontal gyri and reduced temporo-frontal connectivity. Task activation also correlated with independent measures of pitch perception and negative symptom severity.

    Conclusions: These findings illustrate the interplay between sensory and higher-order cognitive dysfunction in schizophrenia. Sensory contributions to vocal affect deficits also suggest that this neurobehavioral marker could be targeted by pharmacological or behavioral remediation of acoustic feature discrimination.

  • 31. Leitman, David I.
    et al.
    Wolf, Daniel H.
    Ragland, J. Daniel
    Laukka, Petri
    Stockholm University, Faculty of Social Sciences, Department of Psychology.
    Loughead, James
    Valdez, Jeffrey N.
    Javitt, Daniel C.
    Turetsky, Bruce I.
    Gur, Ruben C.
    “It’s not what you say, but how you say it”: A reciprocal temporo-frontal network for affective prosody2010In: Frontiers in Human Neuroscience, ISSN 1662-5161, Vol. 4, no Article 19Article in journal (Refereed)
    Abstract [en]

    Humans communicate emotion vocally by modulating acoustic cues such as pitch, intensity and voice quality. Research has documented how the relative presence or absence of such cues alters the likelihood of perceiving an emotion, but the neural underpinnings of acoustic cue-dependent emotion perception remain obscure. Using functional magnetic resonance imaging in 20 subjects we examined a reciprocal circuit consisting of superior temporal cortex, amygdala and inferior frontal gyrus that may underlie affective prosodic comprehension. Results showed that increased saliency of emotion-specific acoustic cues was associated with increased activation in superior temporal cortex (planum temporale (PT), posterior superior temporal gyrus (pSTG), and posterior middle temporal gyrus (pMTG)) and amygdala, whereas decreased saliency of acoustic cues was associated with increased inferior frontal activity and temporo-frontal connectivity. These results suggest that sensory-integrative processing is facilitated when the acoustic signal is rich in affective information, yielding increased activation in temporal cortex and amygdala. Conversely, when the acoustic signal is ambiguous, greater evaluative processes are recruited, increasing activation in inferior frontal gyrus (IFG) and IFG-STG connectivity. Auditory regions may thus integrate acoustic information with amygdala input to form emotion-specific representations, which are evaluated within inferior frontal regions.

  • 32. Letellier, Isabelle
    et al.
    Döllinger, Lillian
    Stockholm University, Faculty of Social Sciences, Department of Psychology.
    Högman, Lennart
    Stockholm University, Faculty of Social Sciences, Department of Psychology.
    Neal, Emma
    Laukka, Petri
    Stockholm University, Faculty of Social Sciences, Department of Psychology.
    Bänziger, Tanja
    Makower, Irena
    Fischer, Håkan
    Stockholm University, Faculty of Social Sciences, Department of Psychology.
    Hau, Stephan
    Stockholm University, Faculty of Social Sciences, Department of Psychology.
    Assessing the impact of attachment on emotion recognition: Accuracy scores and types of confusion2016Conference paper (Other academic)
  • 33. Letellier, Isabelle
    et al.
    Döllinger, Lillian
    Stockholm University, Faculty of Social Sciences, Department of Psychology.
    Högman, Lennart
    Stockholm University, Faculty of Social Sciences, Department of Psychology.
    Neal, Emma
    Laukka, Petri
    Stockholm University, Faculty of Social Sciences, Department of Psychology.
    Bänziger, Tanja
    Makower, Irena
    Fischer, Håkan
    Stockholm University, Faculty of Social Sciences, Department of Psychology.
    Hau, Stephan
    Stockholm University, Faculty of Social Sciences, Department of Psychology.
    Avoidant attachment impairs global accuracy in emotion recognition2016Conference paper (Other academic)
  • 34. Letellier, Isabelle
    et al.
    Döllinger, Lillian
    Stockholm University, Faculty of Social Sciences, Department of Psychology.
    Högman, Lennart
    Stockholm University, Faculty of Social Sciences, Department of Psychology.
    Neal, Emma
    Laukka, Petri
    Stockholm University, Faculty of Social Sciences, Department of Psychology.
    Bänziger, Tanja
    Makower, Irena
    Fischer, Håkan
    Stockholm University, Faculty of Social Sciences, Department of Psychology.
    Hau, Stephan
    Stockholm University, Faculty of Social Sciences, Department of Psychology.
    The role of the psychotherapist’s perception of emotion in therapy: Presentation of a Forte-Marie Curie Project2016Conference paper (Other academic)
  • 35. Neiberg, Daniel
    et al.
    Laukka, Petri
    Stockholm University, Faculty of Social Sciences, Department of Psychology.
    Ananthakrishnan, Gopal
    Classification of affect in speech using normalized time-frequency cepstra2010In: Speech Prosody 2010, 2010, 100071-1-4 p.Conference paper (Refereed)
    Abstract [en]

    Subtle temporal and spectral differences between categorical realizations of para-linguistic phenomena (e.g., affective vocal expressions) are hard to capture and describe. In this paper we present a signal representation based on Time-Varying Constant-Q Cepstral Coefficients (TVCQCC) derived for this purpose. A method which utilizes the special properties of the constant-Q transform for mean F0 estimation and normalization is described. The coefficients are invariant to segment length, and as a special case, a representation for prosody is considered. Speaker-independent classification results using ν-SVM are reported for the Berlin EMO-DB and for two closed sets of basic (anger, disgust, fear, happiness, sadness, neutral) and social/interpersonal (affection, pride, shame) emotions recorded by forty professional actors from two English dialect areas. The accuracy for the Berlin EMO-DB was 71.2%, while the accuracy was 44.6% for the first set (basic emotions) and 31.7% for the second set (basic and social emotions). F0 normalization was found to boost performance, and a combined feature set showed the best performance.
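
    As a rough illustration of the general recipe described in this abstract (and not the authors' actual TVCQCC implementation), the sketch below computes a constant-Q spectrogram with librosa, converts the log spectrum to cepstra via a DCT, resamples the coefficient trajectories to a fixed number of frames so that segments of any length yield equal-sized feature vectors, and trains scikit-learn's NuSVC. F0 normalization is omitted, and all file names, labels, and parameter values are placeholders.

        # Hedged sketch of constant-Q cepstral features + nu-SVM classification.
        # Approximates the idea in the abstract; it is NOT the paper's TVCQCC
        # method, and corpus paths/labels below are hypothetical.
        import numpy as np
        import librosa
        from scipy.fftpack import dct
        from scipy.signal import resample
        from sklearn.svm import NuSVC

        def constant_q_cepstra(path, n_bins=84, bins_per_octave=12,
                               n_ceps=20, n_frames=50):
            # Log constant-Q spectrum -> DCT cepstra, resampled along time to a
            # fixed number of frames (a simple form of segment-length invariance).
            y, sr = librosa.load(path, sr=22050)
            C = np.abs(librosa.cqt(y, sr=sr, n_bins=n_bins,
                                   bins_per_octave=bins_per_octave))
            log_C = np.log(C + 1e-8)                          # (n_bins, T)
            ceps = dct(log_C, axis=0, norm='ortho')[:n_ceps]  # first n_ceps cepstra
            ceps = resample(ceps, n_frames, axis=1)           # time normalization
            return ceps.flatten()

        # Hypothetical corpus of (wav_path, emotion_label) pairs; in practice this
        # would be a full labeled set of vocal expressions.
        corpus = [("clip_anger_01.wav", "anger"),
                  ("clip_happiness_01.wav", "happiness")]
        X = np.array([constant_q_cepstra(path) for path, _ in corpus])
        labels = np.array([label for _, label in corpus])

        # nu-SVM classifier; proper evaluation (e.g., leave-one-speaker-out folds
        # for speaker-independent results) is omitted from this sketch.
        clf = NuSVC(nu=0.5, kernel='rbf', gamma='scale')
        clf.fit(X, labels)
        print(clf.predict(X[:1]))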

  • 36. Neiberg, Daniel
    et al.
    Laukka, Petri
    Stockholm University, Faculty of Social Sciences, Department of Psychology.
    Elfenbein, Hillary Anger
    Intra-, inter-, and cross-cultural classification of vocal affect2011In: Proceedings of INTERSPEECH 2011: 12th Annual Conference of the International Speech Communication Association, International Speech Communication Association, 2011, 1581-1584 p.Conference paper (Refereed)
    Abstract [en]

    We present intra-, inter- and cross-cultural classifications of vocal expressions. Stimuli were selected from the VENEC corpus and consisted of portrayals of 11 emotions, each expressed with 3 levels of intensity. Classification (nu-SVM) was based on acoustic measures related to pitch, intensity, formants, voice source and duration. Results showed that mean recall across emotions was around 2.4-3 times higher than chance level for both intra- and inter-cultural conditions. For cross-cultural conditions, the relative performance dropped 26%, 32%, and 34% for high, medium, and low emotion intensity, respectively. This suggests that intra-cultural models were more sensitive to mismatched conditions for low emotion intensity. Preliminary results further indicated that recall rate varied as a function of emotion, with lust and sadness showing the smallest performance drops in the cross-cultural condition.
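
    To illustrate the intra- versus cross-cultural evaluation logic described in this abstract, the sketch below trains a nu-SVM on acoustic features from one cultural group and tests it either on held-out data from the same group (intra-cultural) or on data from another group (cross-cultural). It assumes precomputed feature matrices (pitch, intensity, formant, voice-source, and duration measures) and emotion labels stored in hypothetical .npy files; it does not reproduce the VENEC setup or the exact feature set.

        # Hedged sketch: intra- vs. cross-cultural classification with a nu-SVM.
        # Feature files and labels are hypothetical placeholders.
        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import NuSVC
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import recall_score

        def mean_recall(X_train, y_train, X_test, y_test):
            # Unweighted average recall across emotion categories.
            clf = make_pipeline(StandardScaler(), NuSVC(nu=0.5, gamma='scale'))
            clf.fit(X_train, y_train)
            return recall_score(y_test, clf.predict(X_test), average='macro')

        # Hypothetical precomputed acoustic features and emotion labels for two
        # cultural groups (rows = utterances, columns = acoustic measures).
        X_a, y_a = np.load("features_culture_a.npy"), np.load("labels_culture_a.npy")
        X_b, y_b = np.load("features_culture_b.npy"), np.load("labels_culture_b.npy")

        # Intra-cultural condition: train and test within the same culture.
        Xtr, Xte, ytr, yte = train_test_split(X_a, y_a, test_size=0.3,
                                              stratify=y_a, random_state=0)
        print("intra-cultural mean recall:", mean_recall(Xtr, ytr, Xte, yte))

        # Cross-cultural condition: train on one culture, test on the other.
        print("cross-cultural mean recall:", mean_recall(X_a, y_a, X_b, y_b))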

  • 37. Thingujam, Nutankumar S.
    et al.
    Laukka, Petri
    Stockholm University, Faculty of Social Sciences, Department of Psychology.
    Elfenbein, Hillary Anger
    Distinct emotional abilities converge: Evidence from emotional understanding and emotion recognition through the voice2012In: Journal of Research in Personality, ISSN 0092-6566, E-ISSN 1095-7251, Vol. 46, no 3, 350-354 p.Article in journal (Refereed)
    Abstract [en]

    One key criterion for whether Emotional Intelligence (EI) truly fits the definition of "intelligence" is that individual branches of EI should converge. However, for performance tests that measure actual ability, such convergence has been elusive. Consistent with theoretical perspectives for intelligence, we approach this question using EI measures that have objective standards for right answers. Examining emotion recognition through the voice—that is, the ability to judge an actor's intended portrayal—and emotional understanding—that is, the ability to understand relationships and transitions among emotions—we find substantial convergence, r = .53. Results provide new data to inform the often heated debate about the validity of EI, and further the basis of optimism that EI may truly be considered intelligence.

  • 38.
    Yamasaki, Teruo
    et al.
    Osaka Shoin Women’s University, Japan.
    Yamada, Keiko
    Osaka Shoin Women’s University, Japan.
    Laukka, Petri
    Stockholm University, Faculty of Social Sciences, Department of Psychology.
    Viewing the world through the prism of music: Effects of music on perceptions of the environment2015In: Psychology of Music, ISSN 0305-7356, E-ISSN 1741-3087, Vol. 43, no 1, 61-74 p.Article in journal (Refereed)
    Abstract [en]

    Questionnaire and interview studies suggest that music is valued for its role in managing the listener’s impression of the environment, but systematic investigations on the topic are scarce. We present a field experiment wherein participants were asked to rate their impression of four different environments (a quiet residential area, traveling by train in the suburbs, at a busy crossroads, and in a tranquil park area) on bipolar adjective scales, while listening to music (which varied regarding level of perceived activation and valence) or in silence. Results showed that the evaluation of the environment was in general affected in the direction of the characteristics of the music, especially in conditions where the perceived characteristics of the music and environment were incongruent. For example, highly active music increased the activation ratings of environments which were perceived as inactive without music, whereas inactive music decreased the activation ratings of environments which were perceived as highly active without music. Also, highly positive music increased the positivity ratings of the environments. In sum, the findings suggest that music may function as a prism that modifies the impression of one’s surroundings. Different theoretical explanations of the results are discussed.
