1 - 7 of 7
  • 1. Friberg, Anders
    Hellwagner, Martin
    Helgason, Pétur
    Salomão, Gláucia Laís
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics. Stockholm University, Faculty of Humanities, Department of Linguistics, SUBIC - Stockholm University Brain Imaging Centre. Speech, Music and Hearing, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Lindstedtsvägen 24, 10044 Stockholm, Sweden.
    Elowsson, Anders
    Lemaitre, Guillaume
    Ternström, Sten
    Prediction of three articulatory categories in vocal sound imitations using models for auditory receptive fields (2018). In: Journal of the Acoustical Society of America, ISSN 0001-4966, E-ISSN 1520-8524, Vol. 144, no. 3, p. 1467-1483. Article in journal (Refereed)
    Abstract [en]

    Vocal sound imitations provide a new challenge for understanding the coupling between articulatory mechanisms and the resulting audio. In this study, the classification of three articulatory categories (phonation, supraglottal myoelastic vibrations, and turbulence) has been modeled from audio recordings. Two data sets were assembled, consisting of different vocal imitations by four professional imitators and four non-professional speakers in two different experiments. The audio data were manually annotated by two experienced phoneticians using a detailed articulatory description scheme. A separate set of audio features was developed specifically for each category using both time-domain and spectral methods. For all time-frequency transformations, and for some secondary processing, the recently developed Auditory Receptive Fields Toolbox was used. Three different machine learning methods were applied for predicting the final articulatory categories. The best generalization was obtained with an ensemble of multilayer perceptrons. The cross-validated classification accuracy was 96.8% for phonation, 90.8% for supraglottal myoelastic vibrations, and 89.0% for turbulence using all 84 developed features. A final reduction to 22 features yielded similar results.
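    The ensemble-of-multilayer-perceptrons idea described in the abstract can be sketched in miniature. The synthetic data, network sizes, and training details below are illustrative assumptions only, not the authors' acoustic features, toolbox, or actual pipeline:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy stand-in for the acoustic feature vectors: 22 features per sample,
    # with a binary label (e.g. phonation present / absent) that depends on
    # only the first two features.
    n, d = 400, 22
    X = rng.normal(size=(n, d))
    y = (X[:, 0] + X[:, 1] > 0).astype(float)

    def train_mlp(X, y, hidden=8, epochs=500, lr=1.0, seed=0):
        """Train one small MLP (tanh hidden layer, sigmoid output) by
        full-batch gradient descent on the logistic loss."""
        r = np.random.default_rng(seed)
        W1 = r.normal(scale=0.2, size=(X.shape[1], hidden))
        W2 = r.normal(scale=0.2, size=(hidden,))
        for _ in range(epochs):
            H = np.tanh(X @ W1)                  # hidden activations
            p = 1.0 / (1.0 + np.exp(-(H @ W2)))  # predicted probabilities
            g = (p - y) / len(y)                 # dLoss/dlogits, averaged
            W1 -= lr * X.T @ (np.outer(g, W2) * (1.0 - H**2))
            W2 -= lr * H.T @ g
        return W1, W2

    def ensemble_predict(models, X):
        """Soft voting: average the member probabilities, then threshold."""
        probs = [1.0 / (1.0 + np.exp(-(np.tanh(X @ W1) @ W2)))
                 for W1, W2 in models]
        return (np.mean(probs, axis=0) > 0.5).astype(float)

    # Five members with different random initializations form the ensemble.
    ensemble = [train_mlp(X, y, seed=s) for s in range(5)]
    acc = float(np.mean(ensemble_predict(ensemble, X) == y))
    print(f"training accuracy: {acc:.2f}")
    ```

    The toy example only shows the voting mechanics of an MLP ensemble; the study itself reports cross-validated accuracies of roughly 89-97% across the three articulatory categories.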

  • 2. Gordon, Amy R.
    Lundström, Johan N.
    Stockholm University, Faculty of Humanities, Department of Linguistics, SUBIC - Stockholm University Brain Imaging Centre. Karolinska Institutet, Sweden; Monell Chemical Senses Center, USA.
    Kimball, Bruce A.
    Karshikoff, Bianka
    Sorjonen, Kimmo
    Axelsson, John
    Stockholm University, Faculty of Social Sciences, Department of Psychology, Stress Research Institute.
    Lekander, Mats
    Stockholm University, Faculty of Social Sciences, Department of Psychology, Stress Research Institute. Stockholm University, Faculty of Social Sciences, Department of Political Science.
    Olsson, Mats J.
    Human scent as a first-line defense against disease (2023). In: Scientific Reports, E-ISSN 2045-2322, Vol. 13, no. 1, article id 16709. Article in journal (Refereed)
    Abstract [en]

    Individuals may have a different body odor when they are sick than when they are healthy. In the non-human animal literature, olfactory cues have been shown to predict avoidance of sick individuals. We tested whether the mere experimental activation of the innate immune system in healthy humans can make an individual's body odor be perceived as more aversive (intense, unpleasant, and disgusting). Following an endotoxin injection (lipopolysaccharide; 0.6 ng/kg) that creates a transient systemic inflammation, individuals smelled more unpleasant compared to a placebo group (saline injection). Behavioral and chemical analyses of the body odor samples suggest that the volatile components of samples from sick individuals changed qualitatively rather than quantitatively. Our findings support the hypothesis that odor cues of inflammation in axillary sweat are detectable just a few hours after experimental activation of the innate immune system. As such, they may trigger behavioral avoidance, thus constituting a first line of defense against pathogens from infected conspecifics.

  • 3.
    Salomão, Gláucia Laís
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics. Stockholm University, Faculty of Humanities, Department of Linguistics, SUBIC - Stockholm University Brain Imaging Centre.
    How voice mirrors emotions: relationships between emotions and physiological, aerodynamic and acoustic characteristics of phonation (2020). In: Proceedings of the Virtual Congress of the Otorhinolaryngology Foundation / [ed] International Archives of Otorhinolaryngology, 2020. Conference paper (Refereed)
  • 4.
    Salomão, Gláucia Laís
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics. Stockholm University, Faculty of Humanities, Department of Linguistics, SUBIC - Stockholm University Brain Imaging Centre.
    Sundberg, Johan
    Stockholm University, Faculty of Humanities, Department of Linguistics.
    Scherer, Klaus
    Assessing Physiological and Acoustic Indicators of Two Opera Singers’ Voice Characteristics (2023). In: Journal of the Acoustical Society of America, ISSN 0001-4966, E-ISSN 1520-8524. Article in journal (Refereed)
  • 5.
    Salomão, Gláucia Laís
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics. Stockholm University, Faculty of Humanities, Department of Linguistics, SUBIC - Stockholm University Brain Imaging Centre.
    Sundberg, Johan
    Scherer, Klaus
    Emotion Expression in Singing Voice Source: Characteristics and Emotional Coloring Used by Three World-Renowned Male Singers (2017). In: Proceedings of The Voice Foundation's 46th Annual Symposium, 2017. Conference paper (Refereed)
  • 6.
    Salomão, Gláucia Laís
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics. Stockholm University, Faculty of Humanities, Department of Linguistics, SUBIC - Stockholm University Brain Imaging Centre.
    Sundberg, Johan
    Scherer, Klaus
    Emotional Expressivity in Singing: Assessing Physiological and Acoustic Indicators of Two Opera Singers’ Voice (2023). In: Journal of the Acoustical Society of America, ISSN 0001-4966, E-ISSN 1520-8524. Article in journal (Refereed)
  • 7.
    Salomão, Gláucia Laís
    Stockholm University, Faculty of Humanities, Department of Linguistics, SUBIC - Stockholm University Brain Imaging Centre. Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Sundberg, Johan
    Scherer, Klaus
    What does LTAS tell about the voice source? (2018). In: Proceedings of The Voice Foundation's 46th Annual Symposium, 2018. Conference paper (Refereed)