  • 1.
    Burnham, Denis
    et al.
    University of Western Sydney, Australia.
    Kasisopa, Benjawan
    University of Western Sydney, Australia.
    Reid, Amanda
    University of Western Sydney, Australia.
    Luksaneeyanawin, Sudaporn
    Chulalongkorn University, Thailand.
    Lacerda, Francisco
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Attina, Virginia
    University of Western Sydney, Australia.
    Rattanasone, Nan Xu
    Macquarie University, Australia.
    Schwarz, Iris-Corinna
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Webster, Diane
    University of Western Sydney, Australia.
    Universality and language-specific experience in the perception of lexical tone and pitch. 2015. In: Applied Psycholinguistics, ISSN 0142-7164, E-ISSN 1469-1817, Vol. 36, no 6, p. 1459-1491. Article in journal (Refereed)
    Abstract [en]

    Two experiments focus on Thai tone perception by native speakers of tone languages (Thai, Cantonese, and Mandarin), a pitch–accent language (Swedish), and a nontonal language (English). In Experiment 1, there was better auditory-only and auditory–visual discrimination by tone and pitch–accent language speakers than by nontone language speakers. Conversely and counterintuitively, there was better visual-only discrimination by nontone language speakers than by tone and pitch–accent language speakers. Nevertheless, visual augmentation of auditory tone perception in noise was evident for all five language groups. In Experiment 2, involving discrimination in three fundamental frequency equivalent auditory contexts, tone and pitch–accent language participants showed equivalent discrimination for normal Thai speech, filtered speech, and violin sounds. In contrast, nontone language listeners had significantly better discrimination for violin sounds than for filtered speech, and for filtered speech than for normal speech. Together, the results show that tone perception is determined by both auditory and visual information, by acoustic and linguistic contexts, and by universal and experiential factors.

  • 2.
    Cortes, Elisabet Eir
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Gerholm, Tove
    Stockholm University, Faculty of Humanities, Department of Linguistics, General Linguistics.
    Marklund, Ellen
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Marklund, Ulrika
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Molnar, Monika
    Nilsson Björkenstam, Kristina
    Stockholm University, Faculty of Humanities, Department of Linguistics, Computational Linguistics.
    Schwarz, Iris-Corinna
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Sjons, Johan
    Stockholm University, Faculty of Humanities, Department of Linguistics, Computational Linguistics.
    WILD 2015: Book of Abstracts. 2015. Conference proceedings (editor) (Other academic)
    Abstract [en]

    WILD 2015 is the second Workshop on Infant Language Development, held June 10-12 2015 in Stockholm, Sweden. WILD 2015 was organized by Stockholm Babylab and the Department of Linguistics, Stockholm University. About 150 delegates met over three conference days, convening on infant speech perception, social factors of language acquisition, bilingual language development in infancy, early language comprehension and lexical development, neurodevelopmental aspects of language acquisition, methodological issues in infant language research, modeling infant language development, early speech production, and infant-directed speech. Keynote speakers were Alejandrina Cristia, Linda Polka, Ghislaine Dehaene-Lambertz, Angela D. Friederici and Paula Fikkert.

    Organizing this conference would of course not have been possible without our funding agencies Vetenskapsrådet and Riksbankens Jubileumsfond. We would like to thank Francisco Lacerda, Head of the Department of Linguistics, and the Departmental Board for agreeing to host WILD this year. We would also like to thank the administrative staff for their help and support in this undertaking, especially Ann Lorentz-Baarman and Linda Habermann.

    The WILD 2015 Organizing Committee: Ellen Marklund, Iris-Corinna Schwarz, Elísabet Eir Cortes, Johan Sjons, Ulrika Marklund, Tove Gerholm, Kristina Nilsson Björkenstam and Monika Molnar.

  • 3.
    Gerholm, Tove
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics.
    Gustavsson, Lisa
    Stockholm University, Faculty of Humanities, Department of Linguistics.
    Salomão, Gláucia Laís
    Stockholm University, Faculty of Humanities, Department of Linguistics.
    Schwarz, Iris-Corinna
    Stockholm University, Faculty of Humanities, Department of Linguistics.
    The Swedish MINT Project: modelling infant language acquisition from parent-child interaction. 2015. Conference paper (Other academic)
    Abstract [en]

    The MINT project is a longitudinal study of verbal and nonverbal interaction between 73 Swedish children and their parents, recorded in a lab environment from 3 months to 3 years of age. The overall goal of the project is to deepen our understanding of how language acquisition takes place in a multimodal and interactional framework.

  • 4.
    Gustavsson, Lisa
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Klintfors, Eeva
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Schwarz, Iris-Corinna
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Gerholm, Tove
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Marklund, Ulrika
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Assessing language acquisition from parent-child interaction: An event-related potential study on perception of intonation contours in infancy. 2013. Conference paper (Other academic)
    Abstract [en]

    The aim of this paper is to present our multidisciplinary project to study parent-child interaction. The goal of the project is to identify, test, and simulate components of child and adult speech and gestures and the consequences they might have on child language acquisition. Since typical parent-child interaction is built upon both interlocutors’ intention-reading, responsiveness to joint attention, and imitation of speech/gestures, we make video recordings along with recordings of speech data to grasp the integration of semantic and pragmatic aspects of language acquisition. The understanding of parent-child interaction benefits further from information on brain activation involved in speech processing. As a first step to achieve the project goals, an electroencephalography/event-related potential (EEG/ERP) study exploring children’s early perception of intonation contours involved in human interactions was performed. This paper discusses the characteristics of integration of multimodal social-emotional (speech, prosody, faces, posture) signals as part of the dynamics of communication in typically developing children. Possible application fields are social signal processing (SSP; an emerging research domain that aims to provide computers with the ability to understand human social signals), and improvement of diagnosis of late or atypical language development in pathologies that affect the dynamics of social interaction (such as autism spectrum disorders).

  • 5. Kalashnikova, Marina
    et al.
    Schwarz, Iris-Corinna
    Stockholm University, Faculty of Humanities, Department of Linguistics.
    Burnham, Denis
    OZI: Australian English Communicative Development Inventory. 2016. In: First language, ISSN 0142-7237, E-ISSN 1740-2344, Vol. 36, no 4, p. 407-427. Article in journal (Refereed)
    Abstract [en]

    For more than 20 years, the MacArthur–Bates Communicative Development Inventory (CDI) and its adaptations for languages other than English have been used as reliable measures of infants’ and toddlers’ early receptive and productive vocabulary size. This article introduces the OZI, the Australian English adaptation of the MacArthur–Bates CDI, now normed for 12- to 30-month-old children. The findings of two studies are presented: (1) a comparison study that demonstrated that toddlers (N = 64) acquiring Australian English (24- and 30-month-olds) obtain higher productive vocabulary scores on the OZI than the MacArthur–Bates CDI; and (2) an OZI norming study that included 12- to 30-month-old Australian infants and toddlers (N = 1496). These studies provide (i) evidence for the greater applicability of the OZI for infants and toddlers learning Australian English and (ii) productive vocabulary acquisition norms for Australian English for ages 12–30 months, a research and diagnostic tool highly anticipated by researchers and clinicians around Australia.

  • 6.
    Klintfors, Eeva
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Gustavsson, Lisa
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Schwarz, Iris-Corinna
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Gerholm, Tove
    Stockholm University, Faculty of Humanities, Department of Linguistics, General Linguistics. Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Marklund, Ulrika
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Modellering av förälder-barn interaktion (MINT): Komponenter hos audio-visuella ledtrådar och deras konsekvenser för språkinlärning [Modelling parent-child interaction (MINT): Components of audio-visual cues and their consequences for language learning]. 2013. Conference paper (Other academic)
  • 7.
    Klintfors, Eeva
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Gustavsson, Lisa
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Schwarz, Iris-Corinna
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Gerholm, Tove
    Stockholm University, Faculty of Humanities, Department of Linguistics, General Linguistics.
    Marklund, Ulrika
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    The Stockholm Babylab Multimodal Approach: Modelling Infant Language Acquisition Longitudinally from Parent-Child Interaction. 2013. Conference paper (Refereed)
    Abstract [en]

    Auditory communicative interaction is in general best analyzed with the help of simultaneously recorded visual information about discourse objects and the positioning of interlocutors in space. Access to visual information is even more important in parent-child interaction since this type of communication is largely based on use of contextual gestures, gaze and imitation. The understanding of parent-child interaction benefits further from information on brain activation involved in speech processing. This paper introduces the Stockholm Babylab approach to study multimodal language learning in typically developing infants and young children. Our effort is to build a multimodal corpus that incorporates EEG (electroencephalography) data in the model. Application fields are social signal processing (SSP), improvement of diagnosis of late or atypical language development, and further development of habilitation methods for individuals with neurocognitive and language deficits.

  • 8.
    Klintfors, Eeva
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Sundberg, Ulla
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Lacerda, Francisco
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Marklund, Ellen
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Gustavsson, Lisa
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Bjursäter, Ulla
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Schwarz, Iris-Corinna
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Söderlund, Göran
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Development of communicative skills in 8- to 16-month-old children: A longitudinal study. 2008. In: Proceedings of The 9th International Conference on Spoken Language Processing, Interspeech 2008, Brisbane, September 2008, p. 1972-1975. Conference paper (Refereed)
    Abstract [en]

    The aim of this study was to assess development of communicative skills in 8- to 16-month-old children. Information on 24 Swedish children’s speech comprehension and production, as well as their utilization of communicative gestures, was collected. A version of the MacArthur Communicative Development Inventory, the Swedish Early Communicative Development Inventory (SECDI), was used. The number of comprehended phrases, size of receptive and productive vocabularies, as well as the subjects’ gesture score was estimated according to standardized scoring instructions. The study was performed longitudinally based on 71 completed forms. The children’s performance was validated against existing norm data collected from a large set of randomly selected children. The results showed an overall agreement with the norm data. The performance of the subjects was, however, less stable and delayed by about one month compared to the norm data. The adequacy of SECDI for screening for language delay is discussed.

  • 9.
    Lacerda, Francisco
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Sundberg, Ulla
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Schwarz, Iris-Corinna
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Walberg, Heléne
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Bjursäter, Ulla
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Marklund, Ellen
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Klintfors, Eeva
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Söderlund, Göran
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Gustavsson, Lisa
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Development of lateralised speech perception: Brain activation in 4-month-olds and adults. 2008. In: Papers from the first Birgit Rausing Language Program Conference in Linguistics, Lund, June 2008 / [ed] Kai-Uwe Alter, Lund: Centre for Languages and Literature, Lund University, 2008, p. 45-46. Conference paper (Other academic)
    Abstract [en]

    Previous fMRI and PET studies indicated left hemispheric activation for the processing of speech and phonetic features, and right hemispheric activation for the processing of emotions and melody. EEG measures are not commonly employed in lateralisation research, nor has lateralisation been extensively investigated from a developmental perspective. This study aims to fill this gap. Although ERP measures cannot precisely localize task-activated cortical regions, differences in voltage reflect scalp electrical activity in response to the activation of a distinct neural network (Giard et al., 1995). With EEG, we study the lateralisation of speech processing in 4- and 9-month-olds and adults in two conditions, spoken sentences and their spectrally rotated version. Spectral rotation is a technique that renders speech unintelligible while preserving language-specific characteristics such as rhythm. A previous EEG study showed no lateralisation in 4-month-olds for the processing of speech versus nonspeech sounds as measured with 65 electrodes (Dehaene-Lambertz, 2000). For refined measurement, we employ 128 electrodes and use infant-directed speech stimuli for both conditions. It has been established that language comprehension tasks elicit bilateral hemispheric activation (Awad, Warren, Scott, Turkheimer, & Wise, 2007; Davis et al., 2007), showing intrahemispheric localisation differences within the left hemisphere for phonetic features and comprehension in speech processing tasks (Obleser, Zimmermann, Van Meter, & Rauschecker, 2007; Scott, Blank, Rosen, & Wise, 2000). As sentence comprehension increases with age, we predict an increase of activation in the left hemisphere when comparing 4- to 9-month-olds and 9-month-olds to adults. The melodic and emotional components of the infant-directed speech stimuli should elicit right-hemispheric activation, which is predicted to decrease with age. Therefore, the measured differences between left- and right-hemispheric activation are predicted to increase with age. Left-hemispheric activation is also expected for the rotated speech condition due to the speech-likeness of the stimuli, though to a lesser degree than in the natural sentence condition (Ischebeck, Friederici, & Alter, 2007). This activation difference between the speech and rotated speech conditions should also increase with age as comprehension increases. The results are presented with regard to these hypotheses and discussed in the light of recent research.

  • 10. Lam-Cassettari, Christa
    et al.
    Marklund, Ellen
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Schwarz, Iris-Corinna
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Daddy counts: Australian and Swedish fathers' early speech input reflects infants' receptive vocabulary at 12 months. 2017. Conference paper (Other academic)
    Abstract [en]

    Parental input is known to predict language development. This study uses the LENA input duration estimates for female and male voices in two infant language environments, Australian English and Swedish, to predict receptive vocabulary size at 12 months. The Australian English-learning infants were 6 months old (N = 18, 8 girls), and the Swedish-learning infants were 8 months old (N = 12, 6 girls). Their language environment was recorded on two days: one weekday in the primary care of the mother, and one weekend day when the father also spent time with the family. At 12 months, parents filled in a CDI form, the OZI for Australian English and the SECDI-I for Swedish. In multiple regressions across languages, only male speech input duration predicted vocabulary scores significantly (β = .56, p = .01). Analysing boys and girls separately, male speech input predicts only boys’ vocabulary (β = .79, p = .01). Analysing languages separately for boys, the Australian English results are similar (β = .74, p = .02). Discussed in terms of differences in infant age, sample size, sex distribution and language, these findings can still contribute to the growing list of benefits of talker variability for early language acquisition.
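    As a rough illustration of the analysis described above (standardized multiple regression of 12-month vocabulary scores on LENA-estimated male and female input durations), a minimal sketch follows. The column names and synthetic data are assumptions for the example, not the study's materials or analysis code.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data: daily input durations (minutes) and a CDI score.
rng = np.random.default_rng(0)
n = 30
df = pd.DataFrame({
    "male_input_min": rng.normal(60, 20, n),
    "female_input_min": rng.normal(180, 40, n),
})
df["cdi_score"] = 20 + 0.4 * df["male_input_min"] + rng.normal(0, 10, n)

# z-score everything so the fitted coefficients are standardized betas.
z = (df - df.mean()) / df.std(ddof=0)

model = smf.ols("cdi_score ~ male_input_min + female_input_min", data=z).fit()
print(model.params)   # standardized coefficients (betas)
print(model.pvalues)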

  • 11.
    Marklund, Ellen
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Lacerda, Francisco
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Schwarz, Iris-Corinna
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Using rotated speech to approximate the acoustic mismatch negativity response to speech. 2018. In: Brain and Language, ISSN 0093-934X, E-ISSN 1090-2155, Vol. 176, p. 26-35. Article in journal (Refereed)
    Abstract [en]

    The mismatch negativity (MMN) response is influenced by the magnitude of the acoustic difference between standard and deviant, and the response is typically larger to linguistically relevant changes than to linguistically irrelevant changes. Linguistically relevant changes between standard and deviant typically co-occur with differences between the two acoustic signals. It is therefore not straightforward to determine the contribution of each of those two factors to the MMN response. This study investigated whether spectrally rotated speech can be used to determine the impact of the acoustic difference on the MMN response to a combined linguistic and acoustic change between standard and deviant. Changes between rotated vowels elicited an MMN of comparable amplitude to the one elicited by a within-category vowel change, whereas the between-category vowel change resulted in an MMN amplitude of greater magnitude. A change between rotated vowels resulted in an MMN amplitude more similar to that of a within-vowel change than a complex tone change did. This suggests that the MMN amplitude reflecting the acoustic difference between two speech sounds can be well approximated by the MMN amplitude elicited in response to their rotated counterparts, in turn making it possible to estimate the part of the response specific to the linguistic difference.

  • 12.
    Marklund, Ellen
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Schwarz, Iris-Corinna
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    MMR categorization effect at 8 months is related to receptive vocabulary size at 12 to 14 months. 2017. In: Many Paths to Language (MPaL), 2017, p. 91-92. Conference paper (Refereed)
  • 13.
    Marklund, Ellen
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics.
    Schwarz, Iris-Corinna
    Stockholm University, Faculty of Humanities, Department of Linguistics.
    Lacerda, Francisco
    Stockholm University, Faculty of Humanities, Department of Linguistics.
    Amount of speech exposure predicts vowel perception in four- to eight-month-olds. 2019. In: Developmental Cognitive Neuroscience, ISSN 1878-9293, E-ISSN 1878-9307, Vol. 36, article id 100622. Article in journal (Refereed)
    Abstract [en]

    During the first year of life, infants shift their focus in speech perception from acoustic to linguistic information. This perceptual reorganization is related to exposure, and a direct relation has previously been demonstrated between amount of daily language exposure and mismatch response (MMR) amplitude to a native consonant contrast at around one year of age. The present study investigates the same relation between amount of speech exposure and MMR amplitude to a native vowel contrast at four to eight months of age. Importantly, the present study uses spectrally rotated speech in an effort to take general neural maturation into account. The amplitude of the part of the MMR that is tied specifically to speech processing correlates with amount of daily speech exposure, as estimated using the LENA system.

  • 14.
    Marklund, Ellen
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Schwarz, Iris-Corinna
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Lacerda, Francisco
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Amount of speech exposure predicts vowel perception in four- to eight-month-olds. Manuscript (preprint) (Other academic)
    Abstract [en]

    During the first year of life, infants shift their focus in speech perception from acoustic to linguistic information. This perceptual reorganization is related to exposure, and a direct relation has been demonstrated previously between amount of daily language exposure and mismatch response (MMR) amplitude to a native consonant contrast at around one year of age. The present study investigates the same relation between amount of speech exposure and MMR amplitude to a native vowel contrast at four to eight months of age. Importantly, the present study uses spectrally rotated speech in an effort to take general neural maturation into account. The amplitude of the part of the MMR that is tied specifically to speech processing correlates with amount of daily speech exposure, as estimated using the LENA system.

  • 15.
    Marklund, Ellen
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Schwarz, Iris-Corinna
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Lacerda, Francisco
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Mismatch negativity at Fz in response to within-category changes of the vowel /i/. 2014. In: NeuroReport, ISSN 0959-4965, E-ISSN 1473-558X, Vol. 25, no 10, p. 756-759. Article in journal (Refereed)
    Abstract [en]

    The amplitude of the mismatch negativity response for acoustic within-category deviations in speech stimuli was investigated by presenting participants with different exemplars of the vowel /i/ in an oddball paradigm. The deviants differed from the standard either in terms of fundamental frequency, the first formant, or the second formant. Changes in fundamental frequency are generally more salient than changes in the first formant, which in turn are more salient than changes in the second formant. The mismatch negativity response was expected to reflect this with greater amplitude for more salient deviations. The fundamental frequency deviants did indeed result in greater amplitude than both first formant deviants and second formant deviants, but no difference was found between the first formant deviants and the second formant deviants. It is concluded that a greater difference between standard and within-category deviants across different acoustic dimensions results in greater mismatch negativity amplitude, suggesting that linguistically irrelevant changes in speech sounds may be processed similarly to nonspeech sound changes.

  • 16.
    Marklund, Ellen
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Schwarz, Iris-Corinna
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Lacerda, Francisco
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Vowel categorization correlates with speech exposure in 8-month-olds. 2017. Conference paper (Refereed)
    Abstract [en]

    During the first year of life, infants' ability to discriminate non-native speech contrasts attenuates, whereas their ability to discriminate native contrasts improves. This transition reflects the development of speech sound categorization, and is hypothesized to be modulated by exposure to spoken language. The ERP mismatch response has been used to quantify discrimination ability in infants, and its amplitude has been shown to be sensitive to amount of speech exposure on group level (Rivera-Gaxiola et al., 2011). In the present ERP study, the difference in mismatch response amplitudes for spoken vowels and for spectrally rotated vowels quantifies categorization in 8-month-old infants (N = 15, 7 girls). This categorization measure was tested for correlation with infants' daily exposure to male speech, female speech, and the sum of male and female speech, as measured by all-day home recordings and analyzed using LENA software. A positive correlation was found between the categorization measure and total amount of daily speech exposure (r = .526, p = .044). The present study is the first to report a relation between speech exposure and speech sound categorization in infants on subject level, and the first to compensate for the acoustic part of the mismatch response in this context.
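    A minimal sketch of the analysis logic described above: a per-infant categorization measure computed as the difference between mismatch response amplitudes to vowels and to their spectrally rotated counterparts, then correlated with LENA-estimated daily speech exposure. All variable names and values below are hypothetical.

import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n_infants = 15
mmr_vowel = rng.normal(-3.0, 1.0, n_infants)    # MMR amplitude to vowel change (µV)
mmr_rotated = rng.normal(-2.0, 1.0, n_infants)  # MMR amplitude to rotated-vowel change (µV)
exposure_min = rng.normal(300, 60, n_infants)   # LENA-estimated daily speech exposure (minutes)

# Categorization measure: the part of the response beyond the purely acoustic change.
categorization = mmr_vowel - mmr_rotated

r, p = pearsonr(categorization, exposure_min)
print(f"r = {r:.3f}, p = {p:.3f}")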

  • 17.
    Marklund, Ulrika
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Lacerda, Francisco
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Schwarz, Iris-Corinna
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Relationship between parent-rated productive vocabulary size and phonological complexity in Swedish infants. 2011. Conference paper (Refereed)
    Abstract [en]

    The Swedish Communicative Development Inventory (SECDI) is an important tool to assess infants’ productive vocabulary as reported by parents. The instructions SECDI gives to parents and their intuitive judgements naturally favour a strong semantic perspective. This study investigates the relationship between the reported productive vocabulary size and the phonological complexity of infant utterances. Productive vocabulary size was assessed in 17- to 18-month-olds (N=330) and in 20- to 21-month-olds (N=85). It is hypothesised that words with low phonological complexity are more frequently reported by parents and that phonological complexity will increase with infant age. Productive vocabulary size was measured from parental reports submitted via an online version of SECDI. To evaluate phonological complexity, only the single-word part of the inventory was used; 16 items consisting of lexicalised phrases, family names or multiple alternative utterances were excluded. Phonological complexity was computed as the sum of the number of syllables (1 to 4), consonant clusters (0 to 4), and fricatives (0 to 3) occurring in each of the remaining 694 words. It ranged from 1 to 9 (low 1-3; high 7-9). Parents reported significantly more words with low phonological complexity. There is a significant interaction between the complexity level of the reported words and infant age. Words with more syllables, consonant clusters or fricatives were less frequent in the parental reports. This shows that data acquired with SECDI is not necessarily limited to a semantic perspective but can also provide information about phonological complexity.
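    The complexity score described above is a simple sum of three counts; a minimal sketch of that arithmetic follows. The per-word counts are assumed to come from a human annotator, and the "mid" band for scores 4-6 is an interpolation between the low and high bands named in the abstract.

# Phonological complexity as described above:
# syllables (1-4) + consonant clusters (0-4) + fricatives (0-3), range 1-9.
def phonological_complexity(n_syllables: int, n_clusters: int, n_fricatives: int) -> int:
    assert 1 <= n_syllables <= 4 and 0 <= n_clusters <= 4 and 0 <= n_fricatives <= 3
    return n_syllables + n_clusters + n_fricatives

def complexity_band(score: int) -> str:
    if score <= 3:
        return "low"
    if score >= 7:
        return "high"
    return "mid"  # middle band, assumed here for scores 4-6

# Example: a two-syllable word with one consonant cluster and one fricative.
score = phonological_complexity(2, 1, 1)
print(score, complexity_band(score))  # 4 mid
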

  • 18.
    Marklund, Ulrika
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Marklund, Ellen
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Lacerda, Francisco
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Schwarz, Iris-Corinna
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Pause and utterance duration in child-directed speech in relation to child vocabulary size. 2015. In: Journal of Child Language, ISSN 0305-0009, E-ISSN 1469-7602, Vol. 42, no 5, p. 1158-1171. Article in journal (Refereed)
    Abstract [en]

    This study compares parental pause and utterance duration in conversations with Swedish-speaking children at age 1;6 who have either a large, typical, or small expressive vocabulary, as measured by the Swedish version of the MacArthur-Bates CDI. The adjustments that parents make when they speak to children are similar across all three vocabulary groups; they use longer utterances than when speaking to adults, and respond faster to children than they do to other adults. However, overall pause duration varies with the vocabulary size of the children, and as a result durational aspects of the language environment to which the children are exposed differ between groups. Parents of children in the large vocabulary size group respond faster to child utterances than do parents of children in the typical vocabulary size group, who in turn respond faster to child utterances than do parents of children in the small vocabulary size group.

  • 19.
    Marklund, Ulrika
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Marklund, Ellen
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Schwarz, Iris-Corinna
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Relationship between parental communicative adjustments and vocabulary production in Swedish infants. 2011. Conference paper (Refereed)
    Abstract [en]

    Parental linguistic input and interaction style are essential to infant language development. The current study investigates the relationship between Swedish children’s productive vocabulary size and parental communicative adjustments at 18 months (N = 60) and 24 months (N = 61). Vocabulary size is reported with the Swedish adaptation of the MacArthur CDI Words and Sentences (SECDI) while parental communicative adjustments are measured by parental inclination to wait for infants’ vocal communicative initiative and parental inclination to adjust utterance duration to match the duration of infant vocalization. Pauses between utterances and utterance duration of parents and children are tagged in audio recordings of daily-life situations involving parent and child at the family home, such as mealtime, playtime, or reading time. Infants with large productive vocabularies are expected to have parents who are more inclined to wait for communicative initiatives on the part of the infant and to adjust utterance duration to match infant vocalizations. On the other hand, infants with small productive vocabularies are expected to have parents who are less inclined to give room to communicative initiatives and to match input duration to infant production. Small vocabularies are defined by the lowest quartile (0-25%), while large vocabularies are represented by the highest quartile (75-100%) of SECDI scores. Parental communicative adjustments show differential effects on productive vocabulary size at the two ages 18 and 24 months. This indicates a relationship between parental communicative input, as measured in the aspects of turn-taking pauses and duration adjustment, and child vocabulary development.

  • 20.
    Marklund, Ulrika
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics. Danderyds Hospital, Sweden.
    Marklund, Ellen
    Stockholm University, Faculty of Humanities, Department of Linguistics.
    Schwarz, Iris-Corinna
    Stockholm University, Faculty of Humanities, Department of Linguistics.
    Lacerda, Francisco
    Stockholm University, Faculty of Humanities, Department of Linguistics.
    Introducing WCM-SE: The word complexity measure phonetically justified and adapted to Swedish. 2018. In: Clinical Linguistics & Phonetics, ISSN 0269-9206, E-ISSN 1464-5076, Vol. 32, no 11, p. 1042-1053. Article in journal (Refereed)
    Abstract [en]

    This paper presents the Word Complexity Measure for Swedish (WCM-SE), an adaptation of the original WCM developed for English by Stoel-Gammon. These measures are used to calculate the phonological complexity of words or vocalizations, based on a number of phonological complexity parameters. Each production receives a complexity score based on how many of the parameters are present in the production. Using phonological complexity scores to measure expressive phonology is suitable for assessing very young children, children with early phonology and children with phonological deficits. It is useful for both relational and independent analyses and enables comparisons between children and across development. The original WCM uses eight phonological complexity parameters in three domains: word patterns, syllable structures and sound classes. The parameters selected are phonological characteristics that are acquired late in development among English-speaking children. In the WCM-SE, complexity parameters in the domain sound classes were modified or added according to Swedish or universal patterns of phonology development. The parameters' complexity is accounted for in terms of language-general phonetic characteristics.
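    A minimal sketch of the scoring idea described above (a production is scored by the complexity parameters it exhibits), assuming one point per parameter present; the parameter names below are placeholders for illustration, not the published WCM-SE parameter set.

from typing import Dict

# A production is represented by flags for which complexity parameters it exhibits;
# the score is the number of parameters present (one point each, a simplifying assumption).
def wcm_style_score(parameters_present: Dict[str, bool]) -> int:
    return sum(1 for present in parameters_present.values() if present)

production = {
    "more_than_two_syllables": True,   # placeholder parameter names
    "non_initial_stress": False,
    "consonant_cluster": True,
    "fricative": True,
    "liquid": False,
}
print(wcm_style_score(production))  # 3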

  • 21.
    Marklund, Ulrika
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics.
    Schwarz, Iris-Corinna
    Stockholm University, Faculty of Humanities, Department of Linguistics.
    Marklund, Ellen
    Stockholm University, Faculty of Humanities, Department of Linguistics.
    Contingent turn-taking between parents and 6-month-olds: Primary caregivers respond faster than secondary caregivers. Manuscript (preprint) (Other academic)
    Abstract [en]

    In this study, temporal contingency in parent-infant turn-taking was investigated. Six-month-old infants (n = 14; 7 girls) were recorded when interacting with their primary and secondary caregivers in separate ten-minute sessions. Infant vocalizations and adjacent caregiver utterances were identified in the recordings and duration was calculated for caregiver-infant and infant-caregiver switching pauses. Primary caregivers respond significantly faster to infant vocalizations than do secondary caregivers. Further, infants respond faster to the primary caregiver than to the secondary caregiver, but the difference failed to reach significance. No effects were found for infant sex. Switching pause duration in interaction between infants and their primary caregiver is shorter than between infants and their secondary caregiver.

  • 22.
    Mårback, Sebastian
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Sjöberg, Gustav
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Schwarz, Iris-Corinna
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Eklund, Robert
    Karolinska Institute.
    Uhm... What's going on? An EEG study on perception of filled pauses in spontaneous Swedish speech. 2009. In: Proceedings of the 22nd Swedish Phonetics Conference / [ed] Department of Linguistics, Phonetics group, 2009, p. 92-95. Conference paper (Refereed)
    Abstract [en]

    Filled pauses have been shown to play a significant role in comprehension and long-term storage of speech. Behavioral and neurophysiological studies suggest that filled pauses can help mitigate semantic and/or syntactic incongruity in spoken language. The purpose of the present study was to explore how filled pauses affect the processing of spontaneous speech in the listener. Brain activation of eight subjects was measured by electroencephalography (EEG), while they listened to recordings of Wizard-of-Oz travel booking dialogues.

    The results show a P300 component in the Primary Motor Cortex, but not in the Broca or Wernicke areas. A possible interpretation could be that the listener is preparing to engage in speech. However, a larger sample is currently being collected.

  • 23.
    Renner, Lena
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Markelius, Marie
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Schwarz, Iris-Corinna
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Sundberg, Ulla
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Productional abilities can affect the perception of mispronounced words: An eye-tracking study with Swedish two-year-old children. 2014. Conference paper (Refereed)
  • 24.
    Renner, Lena
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Schwarz, Iris-Corinna
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Sundberg, Ulla
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Combining EEG signals and Eye-tracking data to investigate the relationship between phonological and lexical acquisition. 2012. Conference paper (Refereed)
  • 25.
    Schwarz, Iris-Corinna
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Measuring vocabulary development in Australian toddlers with the Australian English vocabulary inventory OZI. 2006. In: Karitane STEPS Conference 2006: Little steps - big progress: Toddlers, 15 months - 5 years, 2006, p. 15. Conference paper (Other academic)
    Abstract [en]

    The MacArthur Communicative Development Inventory CDI (Fenson et al., 1993), an American English parental vocabulary checklist, was adapted to Australian English, resulting in the Australian English vocabulary inventory OZI, validated for 16- to 30-month-old children. High correlations between OZI and CDI were observed in a first study (r = .99 for 24-month-olds, r = .97 for 30-month-olds); however, the OZI assessed the vocabulary of Australian toddlers more appropriately, as the scores were significantly higher than on the CDI. A frequency count for the first 500 words as well as preliminary vocabulary norms for age groups from several hundred children are presented from the ongoing second study.

  • 26.
    Schwarz, Iris-Corinna
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Speech perception, phonological sensitivity, and articulation in early vocabulary development. 2007. Doctoral thesis, monograph (Other academic)
    Abstract [en]

    Speech perception, articulation, and word learning are three major tiers of language development in young children, integrating perceptual and productive language abilities. Infant speech perception precedes speech production and is the basis for native language learning. In speech production, children refine their articulation skills beginning with their first vocalic utterances until they reach adult performance level. The third tier describes children’s vocabulary development from their first words to their established receptive and productive lexicon after the vocabulary spurt. Speech perception, articulation, and word learning interact at the level of lexical representations.

    By investigating the relationship between the attention to phonological detail in speech and word learning, the degree of phonological detail in the lexical representations can be inferred. This relationship can be described by two models: the vocabulary-driven and the phonology-driven model. The vocabulary-driven model proposes that the structure of the lexicon influences attention to phonological detail in speech perception, and this model is consistent with the Lexical Restructuring Model. On the other hand, the phonology-driven model proposes that vocabulary increases as a result of increased attention to phonological detail in speech.

    To infer the phonological specifications of lexical representations of words in 2½- to 3-year-olds, the variables vocabulary, phonological sensitivity, language-specific speech perception and articulation accuracy were tested in a longitudinal study with 60 participants. For these variables, new measures were developed, adapted, and tested. It was found that phonological sensitivity at 30 months predicted vocabulary at 33 months, but not the opposite. This supports the prediction of the phonology-driven model. However, in an augmented version of the vocabulary-driven model that included all variables, articulation at 30 months was found to predict phonological sensitivity at 33 months. These results are discussed in the light of the Lexical Restructuring Model, and the interaction of speech perception, articulation skills, and lexical representations, and suggestions for future research are offered.

  • 27.
    Schwarz, Iris-Corinna
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Botros, Noor
    Lord, Alekzandra
    Marcusson, Amelie
    Tidelius, Henrik
    Marklund, Ellen
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    The LENA™ system applied to Swedish: Reliability of the Adult Word Count estimate. 2017. In: Proceedings of Interspeech 2017 / [ed] Francisco Lacerda, David House, Mattias Heldner, Joakim Gustafson, Sofia Strömbergsson, Marcin Włodarczak, The International Speech Communication Association (ISCA), 2017, p. 2088-2092. Conference paper (Refereed)
    Abstract [en]

    The Language Environment Analysis system LENA™ is used to capture day-long recordings of children’s natural audio environment. The system performs automated segmentation of the recordings and provides estimates for various measures. One of those measures is Adult Word Count (AWC), an approximation of the number of words spoken by adults in close proximity to the child. The LENA system was developed for and trained on American English, but it has also been evaluated on its performance when applied to Spanish, Mandarin and French. The present study is the first evaluation of the LENA system applied to Swedish, and focuses on the AWC estimate. Twelve five-minute segments were selected at random from each of four day-long recordings of 30-month-old children. Each of these 48 segments was transcribed by two transcribers, and both number of words and number of vowels were calculated (inter-transcriber reliability for words: r = .95, vowels: r = .93). Both counts correlated with the LENA system’s AWC estimate for the same segments (words: r = .67, vowels: r = .66). The reliability of the AWC as estimated by the LENA system when applied to Swedish is therefore comparable to its reliability for Spanish, Mandarin and French.
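    The reliability figures above are Pearson correlations between human transcription counts and the LENA AWC estimate per segment; a minimal sketch of that comparison follows, with invented per-segment counts standing in for the real transcriptions.

import numpy as np
from scipy.stats import pearsonr

# Invented word counts per five-minute segment (not the study's data).
transcriber_1 = np.array([210, 95, 340, 120, 60, 280, 150, 400])
transcriber_2 = np.array([220, 90, 330, 130, 55, 300, 140, 390])
lena_awc      = np.array([180, 120, 300, 100, 90, 250, 170, 350])  # LENA AWC estimate

mean_human = (transcriber_1 + transcriber_2) / 2

r_transcribers, _ = pearsonr(transcriber_1, transcriber_2)  # inter-transcriber reliability
r_awc, _ = pearsonr(mean_human, lena_awc)                   # AWC vs. human word counts
print(f"inter-transcriber r = {r_transcribers:.2f}, AWC vs. human r = {r_awc:.2f}")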

  • 28.
    Schwarz, Iris-Corinna
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Bowey, Judith A.
    Burnham, Denis
    MARCS Auditory Laboratories, University of Western Sydney, Australia.
    Phoneme sensitivity predicts vocabulary size in 2 1/2- to 3-year-olds. 2006. In: 11th Australasian International Conference on Speech Science and Technology, Auckland, New Zealand, 2006, p. 28. Conference paper (Refereed)
    Abstract [en]

    Sixty Australian English speaking toddlers were tested in a longitudinal study at 30, 33, and 36 months on vocabulary size, phoneme sensitivity, language-specific speech perception, and articulation accuracy. Vocabulary size was measured with the Australian English adaptation of the MacArthur Communicative Development Inventory and the Peabody Picture Vocabulary Test III. Phoneme Sensitivity (PS) comprised scores from mispronunciation detection, rhyme detection, and nonword repetition tasks. Language Specific Speech Perception (LSSP) was calculated by subtracting the score for nonnative speech perception from the native score, indicating the degree of specialisation in the native language. Articulation accuracy (AA) was measured with an adaptation of the Queensland Articulation Test. Results showed (i) linear improvements in all new measures, appropriately depicting the developmental trend; (ii) significant correlations between AA and vocabulary size; (iii) predictability of vocabulary size by PS and vice versa at 30, 33, and 36 months. The results provide further evidence for the important role phoneme-sensitive speech perception plays in the process of lexical acquisition.

  • 29.
    Schwarz, Iris-Corinna
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Burnham, Denis
    MARCS Auditory Laboratories, University of Western Sydney, Australia.
    New measures to chart toddlers' speech perception and language development: A test of the lexical restructuring hypothesis. 2006. In: 9th International Conference on Spoken Language Processing, Interspeech 2006, Pittsburgh, PA, 2006, p. 89. Conference paper (Refereed)
    Abstract [en]

    Language acquisition factors at work in toddlers between 2 1/2 and 3 years of age were investigated in the first longitudinal study of this kind. New age-appropriate tasks were devised to measure the development of vocabulary size, articulation accuracy, sensitivity to the phonemic features of (in this case) Australian English, and the degree of specialisation towards the native tongue as measured by language-specific speech perception (LSSP), with 45 Australian English-learning toddlers (18 male, 27 female) at 30, 33, and 36 months of age. Results indicated (i) that nearly all measures improved linearly over age; (ii) that there were significant correlations between articulation ability and vocabulary size; and (iii) that, in confirmation of the lexical restructuring hypothesis, vocabulary size is significantly predicted by the broad range of native language abilities under the rubric of phoneme sensitivity, but not under the more specific measure of LSSP.

  • 30.
    Schwarz, Iris-Corinna
    et al.
    MARCS Auditory Laboratories, University of Western Sydney, Australia.
    Burnham, Denis
    MARCS Auditory Laboratories, University of Western Sydney, Australia.
    Bowey, Judith A.
    An expressive vocabulary inventory adaptation to Australian English. 2003. In: 14th Australian Language and Speech Conference, Brisbane, 2003, p. 43. Conference paper (Other academic)
    Abstract [en]

    The MacArthur Communicative Development Inventory CDI (Fenson et al., 1993) is widely used to assess early expressive vocabulary development. Its toddler form, Words and Sentences, is validated as a parental vocabulary checklist for 16- to 30-month-old children. The CDI was developed for American English, but has been adapted to assess British (Hamilton, Plunkett, & Schafer, 2000) and New Zealand English vocabularies (Reese & Read, 2000). In this study, the CDI is adapted to Australian English, and various limitations, such as length and imbalance of gender-based items, are overcome. This inventory omits grammatical aspects of the original CDI in order to shorten the checklist (see WORDS short form; Corkum & Dunham, 1996) and replaces inappropriate words with Australian English equivalents. This Australian adaptation of the CDI, and its correlations with the original CDI based on approximately 100 children (24 and 30 months), will be presented.

  • 31.
    Schwarz, Iris-Corinna
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Clausnitzer, Ann-Christin
    Marklund, Ulrika
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Marklund, Ellen
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Phonetic correlates of perceived affect in mothers’ and fathers’ speech to Swedish 12-month-olds. 2018. In: Abstract Book: Day 1, Sunday, July 1st, 2018, p. 262-263. Conference paper (Refereed)
    Abstract [en]

    Infants prefer to listen to infant-directed speech (IDS) over adult-directed speech (ADS). IDS contains a greater amount of affect than ADS (Singh, Morgan & Best, 2002). Affect in infant-directed speech has been said to foster social bonds, maintain attention and teach language. In order to identify phonetic correlates of affect, prosodic features such as fundamental frequency, pitch range, pitch contour, vowel duration and rhythm have been tried (Katz, Cohn & Moore, 1996; Trainor, Austin & Desjardins, 2000). However, affect ratings are typically carried out on low-pass filtered speech in order to obscure semantic cues to affect. It is possible that more than semantic meaning is distorted by the filtering process. In the present study, acoustic-phonetic correlates to affect were studied in un-filtered short speech segments. One-syllable speech segments were rated on a scale ranging from highly negative via neutral to highly positive affect. Formant (F1, F2, F3), pitch (mean, maximum, minimum, range, contour), and vowel duration measures were obtained from the speech samples, and relations between acoustic measures and rated affect were analyzed. The speech samples were the syllables /mo/, /na/, and /li/ produced by Swedish mothers (n = 29) and fathers (n = 21) when talking to their 12-month-old children. Recordings of IDS took place during free play in a laboratory setting, and the syllables were the names of soft toys that the parents were asked to use when interacting with their child. Parents and children participated in a longitudinal interaction study, and this was their fourth visit at the laboratory, so they were familiar with task, setting and toys. ADS exemplars of the syllables were also selected from a sub-sample of the mothers (n = 14), recorded at their first visit to the laboratory. Participants in the perceptual rating experiment (n = 35; 21 female; mean age = 28.6 years; age range = 19-45 years) were presented with one syllable at a time and asked to rate the affect conveyed on a scale from -4 (high negative affect) to +4 (high positive affect), with 0 as midpoint (neutral affect). The experiment was self-paced, and participants could listen to each syllable as many times as they liked. Each experiment session lasted between 30 and 50 minutes. A mixed-effects model was designed with AffectRating as dependent variable, Rater as random effects variable, and RaterGender, RaterHasChildren, F1, F2, F3, MeanPitch, PitchRange as well as VowelDuration as fixed effects variables. Minimum pitch, maximum pitch and pitch contour were excluded from the analysis since they were correlated with pitch range. Significant results were found for F1, F3, MeanPitch, PitchRange and VowelDuration. Higher F1 and/or F3 resulted in more negative perceived affect whereas higher mean pitch, greater pitch range, and/or longer vowel duration resulted in more positive perceived affect. The relation between perceived affect and formant values could be related to differences in perceived affect for different vowels, rather than variations in the formant values per se. It would be interesting to look at variation within separate vowel categories. The relation between positive affect and prosodic exaggerations suggests that some acoustic characteristics of IDS could be a result of parents conveying positive affect to their children.
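    The mixed-effects model described above (AffectRating as dependent variable, Rater as random effect, acoustic and rater variables as fixed effects) could be specified roughly as in the sketch below; the data are synthetic and the formula is an approximation of the reported design, not the authors' analysis script.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data for the rating experiment described above.
rng = np.random.default_rng(2)
n = 400
df = pd.DataFrame({
    "Rater": rng.integers(0, 35, n),                      # 35 raters
    "RaterGender": rng.choice(["f", "m"], n),
    "RaterHasChildren": rng.choice(["yes", "no"], n),
    "F1": rng.normal(500, 80, n),                         # formants (Hz)
    "F2": rng.normal(1500, 200, n),
    "F3": rng.normal(2500, 250, n),
    "MeanPitch": rng.normal(250, 50, n),                  # Hz
    "PitchRange": rng.normal(150, 40, n),                 # Hz
    "VowelDuration": rng.normal(0.25, 0.05, n),           # seconds
})
df["AffectRating"] = (0.01 * (df["MeanPitch"] - 250) + rng.normal(0, 1, n)).round()

# Random intercept per rater; acoustic and rater variables as fixed effects.
model = smf.mixedlm(
    "AffectRating ~ RaterGender + RaterHasChildren + F1 + F2 + F3"
    " + MeanPitch + PitchRange + VowelDuration",
    data=df,
    groups=df["Rater"],
)
print(model.fit().summary())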

  • 32.
    Schwarz, Iris-Corinna
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Forsén, Malin
    Institutionen för klinisk vetenskap, intervention och teknik.
    Johansson, Linnea
    Institutionen för klinisk vetenskap, intervention och teknik.
    Lång, Catarina
    Institutionen för klinisk vetenskap, intervention och teknik.
    Narel, Anna
    Institutionen för klinisk vetenskap, intervention och teknik.
    Valdés, Tanya
    Institutionen för klinisk vetenskap, intervention och teknik.
    Lacerda, Francisco
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Language-specific speech perception as mismatch negativity in 10-month-olds' ERP data. 2009. In: Proceedings FONETIK 2009 - The XXIIth Swedish Phonetics Conference / [ed] Department of Linguistics, Stockholm University, Stockholm: Department of Linguistics, 2009, p. 130-135. Conference paper (Other academic)
    Abstract [en]

    Discrimination of native and nonnative speech contrasts, the heart of the concept of language-specific speech perception, is sensitive to developmental change in speech perception during infancy. Using the mismatch negativity paradigm, seven 10-month-olds from a Swedish language environment were tested on their perception of six different consonantal and tonal Thai speech contrasts, native and nonnative to the infants. Infant brain activation in response to the speech contrasts was measured with event-related potentials (ERPs). They show mismatch negativity at 300 ms, significant for contrast change in the native condition, but not for contrast change in the nonnative condition. Differences in native and nonnative speech discrimination are clearly reflected in the ERPs and confirm earlier findings obtained by behavioural techniques. ERP measurement thus suitably complements infant speech discrimination research.

  • 33.
    Schwarz, Iris-Corinna
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Lacerda, Francisco
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Walberg, Heléne
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Marklund, Ellen
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Cerebral activation patterns of speech perception in 4-month-olds and adults2009In: BrainTalk: Discourse with and in the Brain / [ed] Kai Alter, Merle Horne, Magnus Lindgren, Mikael Roll, Janne von Koss Torkildsen, Lund, Sweden: Centre for Languages and Literature, Lund University, 2009, p. 231-246Chapter in book (Other (popular science, discussion, etc.))
    Abstract [en]

    Speech perception in infancy is language-general, but language-specific in adulthood. Is this reflected in event-related brain potentials (ERPs)? ERPs from 128 electrodes were recorded for three speech perception conditions in 4-month-old Swedish infants (N=9) and adults (N=13). The stimuli were simple sentences in infant-directed speech (IDS) in Swedish, familiar in semantic content and prosodic characteristics; in spectrally rotated Swedish with incomprehensible semantic content but familiar prosodic characteristics; and in Portuguese, unfamiliar in both aspects. As predicted, infants show bilateral activation across conditions, but also present two left-hemispheric advantages as potential first signs of native language specialisation. In adults, left-hemispheric activation shows condition-sensitive ERP patterns: processing of rotated Swedish is similar to Swedish, but different from Portuguese. The right-hemispheric dominance in adults is explained by the strong prosodic and weak semantic characteristics of the IDS stimuli. Infants' language-general and adults' language-specific speech perception is thus reflected in the ERPs, although 4-month-olds show first signs of specialisation.
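
    A small Python sketch of one generic way to express hemispheric differences of the kind discussed above, by averaging amplitude over left- and right-hemisphere electrode groups. The electrode subset and data array are placeholders, not the montage or analysis used in the chapter.

        import numpy as np

        # Placeholder ERP array: channels x samples, averaged over trials.
        channels = ["F3", "C3", "P3", "F4", "C4", "P4"]   # assumed electrode subset
        erp = np.random.randn(len(channels), 300)

        left = erp[[channels.index(c) for c in ("F3", "C3", "P3")]].mean()
        right = erp[[channels.index(c) for c in ("F4", "C4", "P4")]].mean()
        print(f"left mean = {left:.3f}, right mean = {right:.3f}, left-right = {left - right:.3f}")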

  • 34.
    Schwarz, Iris-Corinna
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics.
    Lam-Cassettari, Christa
    Marklund, Ulrika
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Marklund, Ellen
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Positive affect in Swedish and Australian mothers’ speech to their 3- to 12-month-old infants2018Conference paper (Refereed)
    Abstract [en]

    Affect is an important feature of infant-directed speech (IDS). IDS towards infants during the first year of life varies in degree of affect. In Australian English (AuE), positive affect in mothers’ IDS increases over age from birth to twelve months, with a dip at nine months (Kitamura & Burnham, 2003).

    This study investigates whether affect in Swedish (Swe) mothers’ IDS towards their infants follows a similar developmental pattern to the Australian English data. It also introduces a cross-linguistic perspective on affect perception in IDS, as Swedish native speakers rate both the Swe and AuE IDS samples.

    The adult raters (N=16; 8 female, mean age 36.4 years; SD = 10.1) assessed affect polarity and affect degree in low-pass filtered IDS samples on a scale from -4 to +4 (highly negative to highly positive). The 25 s long samples were cut from interactions between mothers and their infants at three, six, nine and twelve months and low-pass filtered. The Australian material was sampled from the same dataset as used in Kitamura and Burnham (2003); the Swedish material was recorded at Stockholm Babylab (Gerholm et al., 2015).

    Separate repeated measures ANOVAs were conducted on the mean affect ratings of AuE and Swe IDS, with infant age as within-subject factor, followed up with polynomial contrasts. For AuE IDS, a significant main effect was found for age (F(3,45)=10.356; p<.001), with a linear (F(1,15)=20.542; p<.001) and a cubic trend (F(1,15)=7.780; p=.014). For Swe IDS, a significant main effect was found for age (F(3,45)=4.186; p=.011), with a linear (F(1,15)=10.993; p=.005) and a quadratic trend (F(1,15)=6.124; p=.026). In both languages, positive affect decreases over age.
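
    A Python sketch of how a within-subject age effect and polynomial trends of the kind reported above could be tested. The abstract does not specify the software used, so the statsmodels/scipy approach, the file name and the column names are illustrative assumptions.

        import numpy as np
        import pandas as pd
        from statsmodels.stats.anova import AnovaRM
        from scipy.stats import ttest_1samp

        # Hypothetical long-format table: one mean rating per rater and infant age.
        df = pd.read_csv("ids_affect_ratings.csv")   # columns: rater, age, rating

        print(AnovaRM(df, depvar="rating", subject="rater", within=["age"]).fit())

        # Orthogonal polynomial contrast weights for four equally spaced ages (3, 6, 9, 12 months).
        weights = {"linear": [-3, -1, 1, 3], "quadratic": [1, -1, -1, 1], "cubic": [-1, 3, -3, 1]}
        wide = df.pivot(index="rater", columns="age", values="rating").sort_index(axis=1)
        for name, w in weights.items():
            res = ttest_1samp(wide.to_numpy() @ np.array(w, dtype=float), 0.0)
            print(f"{name} trend: t = {res.statistic:.2f}, p = {res.pvalue:.3f}")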

    While cross-linguistic affect perception of AuE IDS is still broadly similar to the original ratings, Kitamura and Burnham’s data show a more pronounced cubic trend and a general increase of affect in IDS over the first year. In this study, affect development in AuE IDS shows a steep increase from three to six months, followed by a decrease from six to nine months and a slight recovery from nine to twelve months. Affect in Swe IDS follows a different developmental trajectory, as it decreases from three to nine months and recovers with an increase from nine to twelve months. This is a first indication of language-specific differences in IDS affect over the first year. Future ratings of the same material by AuE native speakers will show whether the difference in the AuE results is an effect of rater language.

  • 35.
    Schwarz, Iris-Corinna
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Marklund, Ellen
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Dybäck, Matilda
    Wallgren, Johanna
    Uhlén, Inger
    Pupil dilation indicates auditory signal detection - towards an objective hearing test based on eye-tracking2016Conference paper (Refereed)
    Abstract [en]

    Purpose: The long-term objective of this project is to develop an objective hearing threshold test that can be used in early infancy, using pupil dilation as an indicator of hearing. The study purposes are 1) to identify relevant time-windows for analysis of pupillary responses to various auditory stimuli in adults, and 2) to evaluate a trial-minus-baseline approach to deal with unrelated pupillary responses in adults. Method: Participants’ pupil size is recorded using a Tobii T120 eye-tracker. In the first test, participants fixate on a blank screen while sound stimuli are presented. From these data, typical pupillary responses and the relevant analysis time-window are determined and used in future tests. In the second test, participants watch movie clips while sound stimuli are presented. Visually identical sound and no-sound trials will be compared in order to isolate the pupillary changes tied to hearing a sound from those related to changes in brightness in the visual stimuli. Results and conclusion: Data are currently being collected. Results from the pilot study indicate that the pupillary response related to sound detection occurs at around 900 ms after stimulus onset, and that a trial-minus-baseline approach is a viable option for eliminating unrelated pupillary responses.
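
    A minimal Python sketch of the trial-minus-baseline idea outlined above. The trial counts, epoch length and placeholder data are assumptions; only the 120 Hz sampling rate of the Tobii T120 and the approximate 900 ms window are taken from the abstract.

        import numpy as np

        fs = 120  # Tobii T120 samples pupil size at 120 Hz

        # Placeholder pupil traces (trials x samples), aligned to stimulus onset:
        # sound trials and visually identical no-sound trials.
        sound = np.random.randn(40, 3 * fs)
        no_sound = np.random.randn(40, 3 * fs)

        # Trial minus baseline: brightness-driven pupil changes are shared by both
        # trial types and cancel in the subtraction, leaving the sound-related change.
        corrected = sound.mean(axis=0) - no_sound.mean(axis=0)

        window = slice(int(0.8 * fs), int(1.0 * fs))  # around the 900 ms effect noted above
        print("mean corrected pupil change 800-1000 ms:", corrected[window].mean())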

  • 36.
    Schwarz, Iris-Corinna
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics.
    Marklund, Ellen
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Schwitteck, Annika
    Mismatch negativity as tool in language acquisition research: Discrimination on auditory-phonetic and/or semantic levels?2013Conference paper (Other academic)
    Abstract [en]

    Phoneme discrimination is one of the first steps on the infants’ path to identifying word boundaries and building a lexicon. Within the first year of life, infants turn their early global discrimination skills into language-specific tools, paving the way for fast word acquisition. This development has been demonstrated in countless behavioural discrimination studies, but also with electrophysiological measures such as event-related potentials (ERPs). The typical ERP-component that responds to a deviant in a series of standard stimuli, thereby indicating discrimination between standard and deviant, is called mismatch negativity (MMN). MMN is generally an early ERP-component at around 150 to 200 ms after stimulus onset, representing automatic processing on a pre-attentive level. It can be elicited in a passive listening paradigm while watching a silenced movie and is therefore especially suitable for infant studies. However, higher cognitive processing such as semantic discrimination can also be indicated by MMN, with a latency of 400 to 450 ms after stimulus onset. This study tests whether auditory-phonetic and semantic processing levels are reflected in the MMN by comparing a simple syllable discrimination task to a word discrimination task, after the standard and deviant syllable stimuli have been supplied with semantic content. The syllables [be]-[de] are expected to be discriminated in the first experimental block (50 trials), indicated by a typical MMN in the participants’ ERPs. In a second block (50 naming trials), these syllables become names for two different jackalope soft toys while the participants’ eye movements are recorded. In the third block (50 trials), the MMN paradigm is run again, but this time with the expectation that higher-level processing is indicated in the MMN response. This could be reflected either by a significantly more pronounced MMN curve or by a greater latency of the MMN response. We report pilot results from adult participants (N=7), collected to evaluate the experimental paradigm before testing toddlers.

  • 37.
    Schwarz, Iris-Corinna
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Marklund, Ellen
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Schwitteck, Annika
    Mismatch negativity reflects phonemic as well as lexical discrimination: Does this also hold for toddlers?2014Conference paper (Refereed)
    Abstract [en]

    Phoneme discrimination is an important factor in language acquisition. Within the first year of life, infants turn their early global discrimination skills into language-specific tools, facilitating lexical growth. However, word learning impedes phoneme discrimination between 14 and 17 months of age (e.g., Stager & Werker, 1997; Yoshida, Fennell, Swingley, & Werker, 2009).

    Phoneme discrimination can also be demonstrated by studying event-related potentials (ERPs) (e.g., Rivera-Gaxiola, Silva-Pereyra, & Kuhl, 2005). The typical ERP-component that responds to a deviant in a series of standard stimuli, thereby indicating discrimination between standard and deviant, is called mismatch negativity (MMN). MMN is an early ERP-component at around 150 to 250 ms after stimulus onset, representing automatic processing on a pre-attentive level (Näätänen, Paavilainen, Rinne, & Alho, 2007). It can be elicited in a passive listening paradigm and is therefore especially suitable for infant studies. However, even higher-level cognition such as the processing of lexical information can be reflected in the MMN (Shtyrov, Hauk, & Pulvermüller, 2004). In 5-year-old children, lexical processing as part of the MMN was indicated at a latency of 400 to 450 ms after stimulus onset (Korpilahti, Krause, Holopainen, & Lang, 2001).

    In the present study, a simple auditory syllable discrimination task is converted into a word discrimination task by supplying semantic content to standard and deviant syllables. The first experimental block contained 50 discrimination trials with four to seven repetitions of the standard syllable between deviants and an interstimulus interval of 500 ms. The stimuli consisted of one exemplar per syllable, [be] and [de] (400 ms duration each), recorded in infant-directed speech by a female speaker. In a second block (50 naming trials), these syllables gained a semantic dimension through associative pairing with two different jackalope soft toys. Four different pictures per soft toy were presented together with the naming syllable in randomised order. The third experimental block was identical to the first. As the study was designed for toddlers, it was kept to a suitable duration of about 10 minutes.
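
    A Python sketch of how an oddball block of the kind described above could be generated, with four to seven standards between deviants and a 500 ms interstimulus interval; anything beyond those stated parameters is an illustrative assumption, not the authors' stimulus-presentation code.

        import random

        def oddball_block(n_trials=50, standard="be", deviant="de"):
            """Build a trial list with 4-7 standards between successive deviants."""
            trials = []
            while len(trials) < n_trials:
                trials.extend([standard] * random.randint(4, 7))
                trials.append(deviant)
            return trials[:n_trials]

        isi_ms = 500          # interstimulus interval stated in the abstract
        block = oddball_block()
        print(block)
        print("deviants in block:", block.count("de"))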

    It was predicted that the first block would elicit a typical MMN, indicating discrimination on a phonemic level, whereas in the third block a greater MMN latency was expected, indicating discrimination on a lexical level. Adult pilot data (N=6), collected to evaluate the extremely short MMN paradigm and provide a reference group, revealed a strong frontocentral MMN candidate for discrimination on a phonemic level in the first experimental block, with deviant negativity between 180 and 250 ms, as well as on a lexical level in the third block, indicated by a deviant negativity with a latency of 360 ms. This also shows that just 50 trials can elicit an MMN, setting the stage for testing toddlers. The results of 20-month-olds are predicted to conform to the adult pilot data, possibly with a greater latency in both the phonemic and the lexical MMN. MMN may therefore offer an alternative method to investigate the development of phoneme discrimination in word learning contexts between the ages of 14 and 20 months.

  • 38.
    Schwarz, Iris-Corinna
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Marklund, Ulrika
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Marklund, Ellen
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Contingency differences in parent-infant turn-taking between primary and secondary caregivers in relation to turn-taking experience2017In: Many Paths to Language (MPaL), 2017, p. 59-60Conference paper (Refereed)
    Abstract [en]

    Contingent turn-taking between parents and infants is positively correlated with child language outcome (Tamis-LeMonda, Bornstein & Baumwell, 2001; Marklund, Marklund, Lacerda & Schwarz, 2015). Many studies focus exclusively on mothers (e.g., Sung, Fausto-Sterling, Garcia Coll & Seifer, 2013). However, infants in Western countries acquire language with input from both mothers and fathers to varying degrees, depending on how the family chooses to organize its parental leave. Sweden is an ideal country in which to study both mothers and fathers as caregivers for infants.

    Parental contingency is often reported as response frequency within a time window after infant vocalizations (e.g., Johnson, Caskey, Rand, Tucker & Vohr, 2014). In this study, turn-taking contingency is measured as the duration of parent-infant and infant-parent switching pauses around infant vocalizations with potential communicative intent. Fourteen infants (7 girls) and their primary and secondary caregivers were recorded in the family home when the infant was six months old (M = 5 months 29 days, range: 5 months 3 days – 6 months 16 days). The audio recordings were collected on two different days and lasted approximately ten minutes each. One of the days was a typical weekday on which the primary caregiver – in all cases the mother – was at home with the infant. The other day was a typical weekend day on which the secondary caregiver – in all cases the father – was also at home and spent time with the infant. On each of these days, a daylong LENA recording was also made to estimate the amount of exposure to female and male speech input on a typical day. Using Wavesurfer 1.8.5 (Sjölander & Beskow, 2010), the on- and offsets of all infant vocalizations were tagged, as well as the on- and offsets of the surrounding switching pauses. If a parent utterance and an infant vocalization overlapped, the switching pause duration received a negative value.
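
    A Python sketch of the switching-pause measure described above, computed from tagged on- and offsets and negative when parent and infant overlap. The example times are placeholders, not data from the recordings.

        def switching_pause(prev_offset, next_onset):
            """Gap between turns in seconds; negative when the turns overlap."""
            return next_onset - prev_offset

        # Placeholder (onset, offset) times in seconds for two infant-parent exchanges.
        infant_vocalizations = [(12.30, 13.10), (20.05, 20.80)]
        parent_utterances = [(13.45, 15.00), (20.60, 22.10)]

        for (_, infant_off), (parent_on, _) in zip(infant_vocalizations, parent_utterances):
            print(f"infant-parent switching pause: {switching_pause(infant_off, parent_on):+.2f} s")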

    Two repeated measures ANOVAs were used to determine the effects of caregiver type (primary/secondary) and infant sex (girl/boy) on pause duration in infant-parent and parent-infant switching pauses. A main effect was found for caregiver type in infant-parent switching pauses (F(1,12) = 5.214; p = .041), as primary caregivers responded on average about 500 ms faster to infant vocalizations than secondary caregivers, with no effect of or interaction with infant sex. In parent-infant switching pauses, the main effect of caregiver type approached significance (F(1,12) = 4.574; p = .054), with no effect of or interaction with infant sex. It is therefore fair to say that turn-taking between primary caregivers and 6-month-olds is more contingent than turn-taking between secondary caregivers and 6-month-olds.

    Four linear regressions were then used to predict parent-infant and infant-parent switching pause duration from the average duration of female speech exposure and the average duration of male speech exposure across the two days, with the assumption that female speech duration equals speech input from the primary caregiver and male speech duration the secondary caregiver. None of the regression analyses turned out to be significant. However, it is likely that the greater contingency between primary caregivers and the infant is a function of greater turn-taking experience, that is, conversational turns rather than mere exposure to speech. Therefore, we will look next at the number of conversational turns for each caregiver separately and investigate whether they predict parental response contingency.

    The present study shows that vocal turn-taking is more contingent between infants and primary caregivers than between infants and secondary caregivers. Primary caregivers respond significantly faster to infant vocalizations than secondary caregivers do, and infants in turn show a tendency to respond faster to primary caregivers. It is likely that this relationship is mediated by turn-taking experience, although this could not be shown with regression analyses using LENA estimates of the total duration of speech exposure to the primary and secondary caregiver.

  • 39.
    Schwarz, Iris-Corinna
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Nazem, Atena
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Olsson, Sofia
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Marklund, Ellen
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Uhlén, Inger
    Karolinska Institutet, Sweden.
    Towards a contingent anticipatory infant hearing test using eye-tracking2014In: Proceedings from FONETIK 2014: Stockholm, June 9-11, 2014 / [ed] Mattias Heldner, Stockholm: Department of Linguistics, Stockholm University, 2014, p. 35-40Conference paper (Other academic)
    Abstract [en]

    Early identification of infant hearing impairment is imperative to prevent developmental language difficulties. The current diagnostic method is Visual Reinforcement Audiometry (VRA), in which the infant response to sound is observed to establish hearing thresholds. Together with the Karolinska Institute, we are developing an observer-independent, contingent anticipatory infant hearing test using eye-tracking to increase the reliability and significance levels of current clinical practice. The present pilot study addresses in particular the first phase of the test, in which the eye response is conditioned to occur at sound detection. The aim is to establish how well 6.5-month-olds associate the presence of sound with a certain location via a visual reward.

  • 40.
    Schwarz, Iris-Corinna
    et al.
    Catholic University Eichstätt, Germany.
    Sulz, Serge K.D.
    Kinder- und Jugendanamnese: Fragebogen für den Patienten VDS-KJ2000Book (Refereed)
    Abstract [de]

    A complete anamnesis system for therapy planning in child and adolescent behavioural therapy. It consists of:

    Basic anamnesis questionnaire (16 pages)

    Supplementary form for infants and toddlers (4 pages)

    Supplementary form for children of kindergarten age (4 pages)

    Supplementary form for children of primary school age (6 pages)

    Supplementary form for school-age children and adolescents (7 pages)

    Self-report anamnesis for school age and adolescence (6 pages)

    Guide for treatment applications and case documentation

  • 41.
    Uhlén, Inger
    et al.
    Karolinska University Hospital.
    Sköld, Birgitta
    Karolinska University Hospital.
    Schwarz, Iris-Corinna
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Lacerda, Francisco
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Mattson, Louise
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Eye tracking for establishing hearing thresholds in infants - evaluation of a new methodology2010Conference paper (Other academic)
    Abstract [en]

    Hearing testing in small children is a challenge during the first 1-2 years, and in children with other disabilities even longer. With neonatal hearing screening, hearing aids can be fitted as early as two months of age. Programming of the hearing aid then has to be based on ABR thresholds until the child is old enough to give a distinct behavioral response, typically at 4-6 months. However, ABR is not frequency-specific and it requires a quiet or sleeping child. Visual reinforcement audiometry (VRA) is based on the head-turn paradigm and involves the infant building up an association between the presence of a sound stimulus and a reward display. This behavioral observation test suffers from poor reliability, lengthy test times across several sessions, heavy experimenter bias, and interpretative ambiguity of the broad variety of possible infant responses.

    This presentation describes a new method to objectively, automatically and adaptively determine reactions to sound stimuli. With an eye tracker and a computer-based set-up, the infant's response, in anticipation of a reward at the noted presence of an auditory stimulus (similar to VRA), can be registered using eye movements instead of head turns. High test reliability and experimenter independence are achieved by the program's automatic detection of the infant response and adaptation of the next stimulus level. Result objectivity is improved by increasing the number of test trials for each frequency and hearing level, as well as by providing a significance level for each tested frequency depending on the number of trials.
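
    A Python sketch of the two ingredients mentioned above, adapting the next stimulus level to the detected response and attaching a significance level to the trials at a given frequency. The step sizes, chance level and scipy-based binomial test are illustrative assumptions, not the implemented procedure.

        from scipy.stats import binomtest   # requires scipy >= 1.7

        def next_level(current_db, responded):
            """Lower the stimulus level after a detected anticipatory look, raise it otherwise."""
            return current_db - 10 if responded else current_db + 5

        def detection_p_value(looks_to_target, n_trials, chance=0.5):
            """One-sided binomial test: more target looks than expected by chance?"""
            return binomtest(looks_to_target, n_trials, chance, alternative="greater").pvalue

        level = 60  # dB HL starting level (illustrative)
        for responded in (True, True, False, True):
            level = next_level(level, responded)
        print("next stimulus level:", level, "dB HL")
        print("p-value for 14 anticipatory looks in 18 trials:", round(detection_p_value(14, 18), 4))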

  • 42.
    Zora, Hatice
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Heldner, Mattias
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Schwarz, Iris-Corinna
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Perceptual correlates of Turkish word stress and their contribution to automatic lexical access: Evidence from early ERP components2016In: Frontiers in Neuroscience, ISSN 1662-4548, E-ISSN 1662-453X, Vol. 10, article id 7Article in journal (Refereed)
    Abstract [en]

    Perceptual correlates of Turkish word stress and their contribution to lexical access were studied using the mismatch negativity (MMN) component in event-related potentials (ERPs). The MMN was expected to indicate if segmentally identical Turkish words were distinguished on the sole basis of prosodic features such as fundamental frequency (f0), spectral emphasis (SE) and duration. The salience of these features in lexical access was expected to be reflected in the amplitude of MMN responses. In a multi-deviant oddball paradigm, neural responses to changes in f0, SE, and duration individually, as well as to all three features combined, were recorded for words and pseudowords presented to 14 native speakers of Turkish. The word and pseudoword contrast was used to differentiate language-related effects from acoustic-change effects on the neural responses. First and in line with previous findings, the overall MMN was maximal over frontal and central scalp locations. Second, changes in prosodic features elicited neural responses both in words and pseudowords, confirming the brain’s automatic response to any change in auditory input. However, there were processing differences between the prosodic features, most significantly in f0: While f0 manipulation elicited a slightly right-lateralized frontally-maximal MMN in words, it elicited a frontal P3a in pseudowords. Considering that P3a is associated with involuntary allocation of attention to salient changes, the manipulations of f0 in the absence of lexical processing lead to an intentional evaluation of pitch change. f0 is therefore claimed to be lexically specified in Turkish. Rather than combined features, individual prosodic features differentiate language-related effects from acoustic-change effects. The present study confirms that segmentally identical words can be distinguished on the basis of prosodic information alone, and establishes the salience of f0 in lexical access.

  • 43.
    Zora, Hatice
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Riad, Tomas
    Stockholm University, Faculty of Humanities, Department of Swedish Language and Multilingualism, Scandinavian Languages.
    Schwarz, Iris-Corinna
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Heldner, Mattias
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Lexical Specification of Prosodic Information in Swedish: Evidence from Mismatch Negativity2016In: Frontiers in Neuroscience, ISSN 1662-4548, E-ISSN 1662-453X, Vol. 10, article id 533Article in journal (Refereed)
    Abstract [en]

    Like that of many other Germanic languages, the stress system of Swedish has mainly undergone phonological analysis. Recently, however, researchers have begun to recognize the central role of morphology in these systems. Similar to the lexical specification of tonal accent, the Swedish stress system is claimed to be morphologically determined and morphemes are thus categorized as prosodically specified and prosodically unspecified. Prosodically specified morphemes bear stress information as part of their lexical representations and are classified as tonic (i.e., lexically stressed), pretonic and posttonic, whereas prosodically unspecified morphemes receive stress through a phonological rule that is right-edge oriented, but is sensitive to prosodic specification at that edge. The presence of prosodic specification is inferred from vowel quality and vowel quantity; if stress moves elsewhere, vowel quality and quantity change radically in phonologically stressed morphemes, whereas traces of stress remain in lexically stressed morphemes. The present study is the first to investigate whether stress is a lexical property of Swedish morphemes by comparing mismatch negativity (MMN) responses to vowel quality and quantity changes in phonologically stressed and lexically stressed words. In a passive oddball paradigm, 15 native speakers of Swedish were presented with standards and deviants, which differed from the standards in formant frequency and duration. Given that vowel quality and quantity changes are associated with morphological derivations only in phonologically stressed words, MMN responses are expected to be greater in phonologically stressed words than in lexically stressed words that lack such an association. The results indicated that the processing differences between phonologically and lexically stressed words were reflected in the amplitude and topography of MMN responses. Confirming the expectation, MMN amplitude was greater for the phonologically stressed word than for the lexically stressed word and showed a more widespread topographic distribution. The brain did not only detect vowel quality and quantity changes but also used them to activate memory traces associated with derivations. The present study therefore implies that morphology is directly involved in the Swedish stress system and that changes in phonological shape due to stress shift cue upcoming stress and potential addition of a morpheme.

  • 44.
    Zora, Hatice
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Schwarz, Iris-Corinna
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Heldner, Mattias
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Neural correlates of lexical stress: mismatch negativity reflects fundamental frequency and intensity2015In: NeuroReport, ISSN 0959-4965, E-ISSN 1473-558X, Vol. 26, no 13, p. 791-796Article in journal (Refereed)
    Abstract [en]

    Neural correlates of lexical stress were studied using the mismatch negativity (MMN) component in event-related potentials. The MMN responses were expected to reveal the encoding of stress information into long-term memory and the contributions of prosodic features such as fundamental frequency (F0) and intensity toward lexical access. In a passive oddball paradigm, neural responses to changes in F0, intensity, and in both features together were recorded for words and pseudowords. The findings showed significant differences not only between words and pseudowords but also between prosodic features. Early processing of prosodic information in words was indexed by an intensity-related MMN and an F0-related P200. These effects were stable at right-anterior and mid-anterior regions. At a later latency, MMN responses were recorded for both words and pseudowords at the mid-anterior and posterior regions. The P200 effect observed for F0 at the early latency for words developed into an MMN response. Intensity elicited smaller MMN for pseudowords than for words. Moreover, a larger brain area was recruited for the processing of words than for the processing of pseudowords. These findings suggest earlier and higher sensitivity to prosodic changes in words than in pseudowords, reflecting a language-related process. The present study, therefore, not only establishes neural correlates of lexical stress but also confirms the presence of long-term memory traces for prosodic information in the brain.
