  • 1.
    Schwarz, Iris-Corinna
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics. Stockholm University, Faculty of Social Sciences, Department of Special Education.
    Marklund, Ellen
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Marklund, Ulrika
    Gustavsson, Lisa
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Lam-Cassettari, Christa
    Affect in Infant-Directed Speech of Swedish-Speaking Mothers and Fathers to 3-, 6-, 9-, and 12-Month-Old Infants (2023). In: Language Learning and Development, ISSN 1547-5441, E-ISSN 1547-3341. Article in journal (Refereed)
    Abstract [en]

    Infant-directed speech (IDS) is characterized by a range of register-typical characteristics. Many of those can be objectively measured, such as acoustic-prosodic and structural-linguistic modifications. Perceived vocal affect, however, is a socio-emotional IDS characteristic and is subjectively assessed. Vocal affect goes beyond acoustic-prosodic and structural-linguistic IDS features and includes a perceptive-subjective component in the listener. This study describes vocal affect valence in Swedish IDS during the first year of life, and compares vocal affect between mothers' and fathers' IDS and their adult-directed speech. Adult native speakers of Swedish (N = 16) rated affect valence in low-pass filtered IDS samples from free play interactions of mothers with infants at 3, 6, 9, and 12 months (N = 10) and fathers with their 12-month-olds (N = 6). Across the first year, the trajectory of mothers' positive affect in Swedish IDS shows the highest affect scores toward the youngest infants and the lowest at 9 months of age. This follows a pattern comparable to that in Australian English IDS, showing that mothers express vocal affect similarly across different languages and cultures. Both mothers' and fathers' IDS to 12-month-olds have higher positive vocal affect valence than their adult-directed speech. There was no difference in positive affect valence between mothers' and fathers' IDS, that is, mothers and fathers express vocal affect to the same extent when talking to their infants. In conclusion, the findings of this study indicate that high positive vocal affect characterizes IDS across different languages and speakers.

  • 2.
    Marklund, Ellen
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics.
    Marklund, Ulrika
    Gustavsson, Lisa
    Stockholm University, Faculty of Humanities, Department of Linguistics. Karolinska Institutet, Sweden.
    An Association Between Phonetic Complexity of Infant Vocalizations and Parent Vowel Hyperarticulation (2021). In: Frontiers in Psychology, E-ISSN 1664-1078, Vol. 12, article id 693866. Article in journal (Refereed)
    Abstract [en]

    Extreme or exaggerated articulation of vowels, or vowel hyperarticulation, is a characteristic commonly found in infant-directed speech (IDS). High degrees of vowel hyperarticulation in parent IDS have been tied to better speech sound category development and bigger vocabulary size in infants. In the present study, the relationship between vowel hyperarticulation in Swedish IDS to 12-month-olds and the phonetic complexity of infant vocalizations is investigated. Articulatory adaptation toward hyperarticulation is quantified as the difference in vowel space area between IDS and adult-directed speech (ADS). Phonetic complexity is estimated using the Word Complexity Measure for Swedish (WCM-SE). The results show that vowels in IDS were more hyperarticulated than vowels in ADS, and that parents' articulatory adaptation in terms of hyperarticulation correlates with the phonetic complexity of infant vocalizations. This can be explained either by the parents' articulatory behavior impacting the infants' vocalization behavior, by the infants' social and communicative cues eliciting hyperarticulation in the parents' speech, or by the two variables being impacted by a third, underlying variable such as the parents' general communicative adaptiveness.
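    The articulatory-adaptation measure described above can be sketched numerically. The following is a minimal illustration: the corner-vowel formant values are invented, and the shoelace/triangle approach is one common way to compute vowel space area, not necessarily the paper's exact procedure.

```python
def vowel_space_area(corners):
    """Area (Hz^2) of the polygon spanned by (F1, F2) corner-vowel means,
    computed with the shoelace formula."""
    area = 0.0
    n = len(corners)
    for i in range(n):
        f1_a, f2_a = corners[i]
        f1_b, f2_b = corners[(i + 1) % n]
        area += f1_a * f2_b - f1_b * f2_a
    return abs(area) / 2.0

# Hypothetical mean formants (Hz) for the corner vowels /i/, /a/, /u/.
ads = [(300, 2300), (750, 1300), (350, 800)]   # adult-directed speech
ids_ = [(280, 2500), (820, 1250), (320, 700)]  # infant-directed speech

# Positive adaptation = larger vowel space in IDS, i.e. hyperarticulation.
adaptation = vowel_space_area(ids_) - vowel_space_area(ads)
```

    A larger IDS polygon than ADS polygon yields a positive adaptation value, which is the quantity correlated with infant phonetic complexity in the study.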

  • 3. Marklund, Ulrika
    et al.
    Marklund, Ellen
    Stockholm University, Faculty of Humanities, Department of Linguistics.
    Gustavsson, Lisa
    Stockholm University, Faculty of Humanities, Department of Linguistics. Karolinska Institutet, Sweden.
    Relationship Between Parent Vowel Hyperarticulation in Infant-Directed Speech and Infant Phonetic Complexity on the Level of Conversational Turns (2021). In: Frontiers in Psychology, E-ISSN 1664-1078, Vol. 12, article id 688242. Article in journal (Refereed)
    Abstract [en]

    When speaking to infants, parents typically use infant-directed speech, a speech register that in several aspects differs from that directed to adults. Vowel hyperarticulation, that is, extreme articulation of vowels, is one characteristic sometimes found in infant-directed speech, and it has been suggested that there exists a relationship between how much vowel hyperarticulation parents use when speaking to their infant and infant language development. In this study, the relationship between parent vowel hyperarticulation and phonetic complexity of infant vocalizations is investigated. Previous research has shown that on the level of subject means, a positive correlational relationship exists. However, the previous findings do not provide information about the directionality of that relationship. In this study the relationship is investigated on a conversational turn level, which makes it possible to draw conclusions on whether the behavior of the infant is impacting the parent, the behavior of the parent is impacting the infant, or both. Parent vowel hyperarticulation was quantified using the vhh-index, a measure that allows vowel hyperarticulation to be estimated for individual vowel tokens. Phonetic complexity of infant vocalizations was calculated using the Word Complexity Measure for Swedish. Findings were unexpected in that a negative relationship was found between parent vowel hyperarticulation and phonetic complexity of the immediately following infant vocalization. Directionality was suggested by the fact that no such relationship was found between infant phonetic complexity and vowel hyperarticulation of the immediately following parent utterance. A potential explanation for these results is that high degrees of vowel hyperarticulation either provide, or co-occur with, large amounts of phonetic and/or linguistic information, which may occupy processing resources to an extent that affects production of the next vocalization.

  • 4.
    Marklund, Ellen
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics.
    Gustavsson, Lisa
    Stockholm University, Faculty of Humanities, Department of Linguistics.
    Kallioinen, Petter
    Stockholm University, Faculty of Humanities, Department of Linguistics.
    Schwarz, Iris-Corinna
    Stockholm University, Faculty of Humanities, Department of Linguistics.
    N1 Repetition-Attenuation for Acoustically Variable Speech and Spectrally Rotated Speech (2020). In: Frontiers in Human Neuroscience, E-ISSN 1662-5161, Vol. 14, article id 534804. Article in journal (Refereed)
    Abstract [en]

    The amplitude of the event-related N1 wave decreases with repeated stimulation. This repetition-attenuation has not previously been investigated in response to variable auditory stimuli, nor has the relative impact of acoustic vs. perceptual category repetition been studied. In the present study, N1 repetition-attenuation was investigated for speech and spectrally rotated speech with varying degrees of acoustic and perceptual category variation. In the speech condition, participants (n = 19) listened to stimulus trains consisting of either the same vowel exemplar (no variability condition), different exemplars of the same vowel (low variability condition), or different exemplars of two different vowels (high variability condition). In the rotated speech condition, the spectrally rotated counterparts of the vowels were presented. Findings show N1 repetition-attenuation in the face of acoustic and perceptual category variability, but no impact of the degree of variability on the degree of N1 attenuation. Speech stimuli resulted in less attenuation than the acoustically matched non-speech stimuli, which is in line with previous findings. It remains unclear if the attenuation of the N1 wave is reduced as a result of stimuli being perceived as belonging to perceptual categories or as a result of some other characteristic of speech.

  • 5.
    Marklund, Ellen
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics.
    Gustavsson, Lisa
    Stockholm University, Faculty of Humanities, Department of Linguistics.
    The Dynamics of Vowel Hypo- and Hyperarticulation in Swedish Infant-Directed Speech to 12-Month-Olds (2020). In: Frontiers in Communication, E-ISSN 2297-900X, Vol. 5, article id 523768. Article in journal (Refereed)
    Abstract [en]

    Vowel hypo- and hyperarticulation (VHH) was investigated in Swedish infant-directed speech (IDS) to Swedish 12-month-olds using a measure that normalizes across speakers and vowels: the vhh-index. The vhh-index gives the degree of VHH for each individual vowel token, which allows for analysis of the dynamics of VHH within a conversation. Using both the vhh-index and traditional measures of VHH, the degree of VHH was compared between Swedish IDS and ADS. The vowel space area was larger in IDS than in ADS, and both the average and the modal vhh-index were higher in IDS than in ADS. Further, the proportion of vowel tokens that were highly hyperarticulated (vhh-index > 75th percentile) was higher in IDS than in ADS. Vowels in Swedish IDS to 12-month-olds are thus concluded to be hyperarticulated compared to vowels in Swedish ADS, both in terms of degree and frequency. Findings are in line with previous reports on VHH in Swedish IDS as well as on VHH in IDS to infants around 12 months in other languages. The study considers the importance of robust formant estimation, highlights the need for replication of studies on VHH in IDS on previously studied languages and ages, and discusses the benefits of the vhh-index. Those benefits include that it normalizes across speakers and vowels, can be used for dynamic measures within speech samples, and permits analyses at the token level.

  • 6.
    Marklund, Ellen
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics.
    Schwarz, Iris-Corinna
    Stockholm University, Faculty of Humanities, Department of Linguistics.
    Lacerda, Francisco
    Stockholm University, Faculty of Humanities, Department of Linguistics.
    Amount of speech exposure predicts vowel perception in four- to eight-month-olds (2019). In: Developmental Cognitive Neuroscience, ISSN 1878-9293, E-ISSN 1878-9307, Vol. 36, article id 100622. Article in journal (Refereed)
    Abstract [en]

    During the first year of life, infants shift their focus in speech perception from acoustic to linguistic information. This perceptual reorganization is related to exposure, and a direct relation has previously been demonstrated between amount of daily language exposure and mismatch response (MMR) amplitude to a native consonant contrast at around one year of age. The present study investigates the same relation between amount of speech exposure and MMR amplitude to a native vowel contrast at four to eight months of age. Importantly, the present study uses spectrally rotated speech in an effort to take general neural maturation into account. The amplitude of the part of the MMR that is tied specifically to speech processing correlates with amount of daily speech exposure, as estimated using the LENA system.

  • 7.
    Schwarz, Iris-Corinna
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics.
    Lam-Cassettari, Christa
    Marklund, Ellen
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Gustavsson, Lisa
    Stockholm University, Faculty of Humanities, Department of Linguistics.
    Does positive affect promote word learning in Australian English learning and Swedish learning 16-month-olds? (2019). Conference paper (Refereed)
  • 8.
    Marklund, Ellen
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Sjons, Johan
    Stockholm University, Faculty of Humanities, Department of Linguistics, Computational Linguistics.
    Gustavsson, Lisa
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Cortes, Elísabet Eir
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    No distributional learning in adults from attended listening to non-speech (2019). Conference paper (Refereed)
    Abstract [en]

    Distributional learning is a perceptual process hypothesized to underlie the phenomena of phonetic recalibration and selective adaptation, as well as infant speech sound category learning. However, in order to be conclusively tied to the earliest stages of speech sound category development, that is, the formation of novel perceptual categories, distributional learning must be shown to operate on stimuli for which there are no pre-existing categories. We investigated this in a previous study, finding no evidence of distributional learning in adults from unattended listening to non-speech. Since attention to stimuli impacts distributional learning, the present study focused on distributional learning from attended listening to non-speech. The same paradigm was used as in the previous study, except that participants’ attention was directed towards stimuli by means of a cover task. Non-speech stimuli were spectrally rotated vowels and the mismatch negativity was used to measure perceptual categorization. No distributional learning was found, that is, no effect of attention on distributional learning was demonstrated. This could mean that the distributional learning process does not operate on stimuli where perceptual categories do not already exist, or that the mismatch negativity measure does not capture the earliest stages of perceptual category development.

  • 9.
    Marklund, Ellen
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Schwarz, Iris-Corinna
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Marklund, Ulrika
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Lam-Cassettari, Christa
    Amount of speech exposure early in infancy is related to receptive vocabulary size at twelve months (2018). In: Abstract Book: Day 2, Monday, July 2nd, 2018, p. 190-192. Conference paper (Refereed)
  • 10. Lam-Cassettari, Christa
    et al.
    Gates, Sophie
    Marklund, Ellen
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Gustavsson, Lisa
    Stockholm University, Faculty of Humanities, Department of Linguistics.
    Does exaggerated affect in infant-directed speech support infant word learning? (2018). Conference paper (Refereed)
  • 11.
    Marklund, Ulrika
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics. Danderyds Hospital, Sweden.
    Marklund, Ellen
    Stockholm University, Faculty of Humanities, Department of Linguistics.
    Schwarz, Iris-Corinna
    Stockholm University, Faculty of Humanities, Department of Linguistics.
    Lacerda, Francisco
    Stockholm University, Faculty of Humanities, Department of Linguistics.
    Introducing WCM-SE: The word complexity measure phonetically justified and adapted to Swedish (2018). In: Clinical Linguistics & Phonetics, ISSN 0269-9206, E-ISSN 1464-5076, Vol. 32, no 11, p. 1042-1053. Article in journal (Refereed)
    Abstract [en]

    This paper presents the Word Complexity Measure for Swedish (WCM-SE), an adaptation of the original WCM developed for English by Stoel-Gammon. These measures are used to calculate the phonological complexity of words or vocalizations, based on a number of phonological complexity parameters. Each production receives a complexity score based on how many of the parameters are present in the production. Using phonological complexity scores to measure expressive phonology is suitable for assessing very young children, children with early phonology and children with phonological deficits. It is useful for both relational and independent analyses and enables comparisons between children and across development. The original WCM uses eight phonological complexity parameters in three domains: word patterns, syllable structures and sound classes. The parameters selected are phonological characteristics that are acquired late in development among English-speaking children. In the WCM-SE, complexity parameters in the domain sound classes were modified or added according to Swedish or universal patterns of phonology development. The parameters' complexity is accounted for in terms of language-general phonetic characteristics.
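    As a toy illustration of this scoring scheme: each parameter is a predicate over a production, and the score is the number of parameters present. The three parameters below are simplified stand-ins, not the actual WCM-SE (or WCM) parameter set.

```python
# Simplified stand-in parameters over an orthographic form (not the
# real WCM-SE parameters, which operate on phonological transcriptions).
VOWELS = set("aeiouyåäö")

def more_than_two_syllables(word):
    # Crude syllable count: one syllable per vowel letter.
    return sum(ch in VOWELS for ch in word) > 2

def has_consonant_cluster(word):
    run = 0
    for ch in word:
        run = 0 if ch in VOWELS else run + 1
        if run >= 2:
            return True
    return False

def ends_in_consonant(word):
    return word[-1] not in VOWELS

PARAMETERS = [more_than_two_syllables, has_consonant_cluster, ends_in_consonant]

def wcm_score(word):
    """Complexity score: number of parameters present in the production."""
    return sum(p(word.lower()) for p in PARAMETERS)
```

    With these toy parameters, "sol" scores 1 (final consonant), "katt" scores 2 (cluster + final consonant), and "elefant" scores 3.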

  • 12. Lam-Cassettari, Christa
    et al.
    Marklund, Ellen
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Schwarz, Iris-Corinna
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Paternal speech at 6 months is associated with receptive vocabulary at 12 months: Evidence from Australian and Swedish families (2018). Conference paper (Refereed)
  • 13.
    Schwarz, Iris-Corinna
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Clausnitzer, Ann-Christin
    Marklund, Ulrika
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Marklund, Ellen
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Phonetic correlates of perceived affect in mothers’ and fathers’ speech to Swedish 12-month-olds (2018). In: Abstract Book: Day 1, Sunday, July 1st, 2018, p. 262-263. Conference paper (Refereed)
    Abstract [en]

    Infants prefer to listen to infant-directed speech (IDS) over adult-directed speech (ADS). IDS contains a greater amount of affect than ADS (Singh, Morgan & Best, 2002). Affect in infant-directed speech has been said to foster social bonds, maintain attention and teach language. In order to identify phonetic correlates of affect, prosodic features such as fundamental frequency, pitch range, pitch contour, vowel duration and rhythm have been tried (Katz, Cohn & Moore, 1996; Trainor, Austin & Desjardins, 2000). However, affect ratings are typically carried out on low-pass filtered speech in order to obscure semantic cues to affect. It is possible that more than semantic meaning is distorted by the filtering process. In the present study, acoustic-phonetic correlates of affect were studied in unfiltered short speech segments. One-syllable speech segments were rated on a scale ranging from highly negative via neutral to highly positive affect. Formant (F1, F2, F3), pitch (mean, maximum, minimum, range, contour), and vowel duration measures were obtained from the speech samples, and relations between acoustic measures and rated affect were analyzed. The speech samples were the syllables /mo/, /na/, and /li/ produced by Swedish mothers (n = 29) and fathers (n = 21) when talking to their 12-month-old children. Recordings of IDS took place during free play in a laboratory setting, and the syllables were the names of soft toys that the parents were asked to use when interacting with their child. Parents and children participated in a longitudinal interaction study, and this was their fourth visit to the laboratory, so they were familiar with the task, setting and toys. ADS exemplars of the syllables were also selected from a sub-sample of the mothers (n = 14), recorded at their first visit to the laboratory.
Participants in the perceptual rating experiment (n = 35; 21 female; mean age = 28.6 years; age range = 19-45 years) were presented with one syllable at a time and asked to rate the affect conveyed on a scale from -4 (high negative affect) to +4 (high positive affect), with 0 as midpoint (neutral affect). The experiment was self-paced, and participants could listen to each syllable as many times as they liked. Each experiment session lasted between 30 and 50 minutes. A mixed-effects model was designed with AffectRating as dependent variable, Rater as random effects variable, and RaterGender, RaterHasChildren, F1, F2, F3, MeanPitch, PitchRange as well as VowelDuration as fixed effects variables. Minimum pitch, maximum pitch and pitch contour were excluded from the analysis since they were correlated with pitch range. Significant results were found for F1, F3, MeanPitch, PitchRange and VowelDuration. Higher F1 and/or F3 resulted in more negative perceived affect whereas higher mean pitch, greater pitch range, and/or longer vowel duration resulted in more positive perceived affect. The relation between perceived affect and formant values could be related to differences in perceived affect for different vowels, rather than variations in the formant values per se. It would be interesting to look at variation within separate vowel categories. The relation between positive affect and prosodic exaggerations suggests that some acoustic characteristics of IDS could be a result of parents conveying positive affect to their children.

  • 14.
    Schwarz, Iris-Corinna
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics.
    Lam-Cassettari, Christa
    Marklund, Ulrika
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Marklund, Ellen
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Positive affect in Swedish and Australian mothers’ speech to their 3- to 12-month-old infants (2018). Conference paper (Refereed)
    Abstract [en]

    Affect is an important feature of infant-directed speech (IDS). IDS towards infants during the first year of life varies in degree of affect. In Australian English (AuE), positive affect in mothers’ IDS increases over age from birth to twelve months, with a dip at nine months (Kitamura & Burnham, 2003).

    This study investigates whether affect in Swedish (Swe) mothers’ IDS towards their infants develops in a pattern similar to the Australian English data. It also introduces a cross-linguistic perspective on affect perception in IDS, as Swedish native speakers rate both the Swe and AuE IDS samples.

    The adult raters (N=16; 8 female, mean age 36.4 years; SD = 10.1) assessed affect polarity and affect degree in low-pass filtered IDS samples on a scale from -4 to +4 (highly negative to highly positive). The 25 s long samples were cut from interactions between mothers and their infants at three, six, nine and twelve months and low-pass filtered. The Australian material was sampled from the same dataset as used in Kitamura and Burnham (2003); the Swedish material was recorded at Stockholm Babylab (Gerholm et al., 2015).

    Separate repeated measures ANOVAs were conducted on the mean affect ratings of AuE and Swe IDS, with infant age as within-subject factor, followed up with polynomial contrasts. For AuE IDS, a significant main effect was found for age (F(45,3)=10.356; p<.001), with a linear (F(15,1)=20.542; p<.001) and a cubic trend (F(15,1)=7.780; p=.014). For Swe IDS, a significant main effect was found for age (F(45,3)=4.186; p=.011), with a linear (F(15,1)=10.993; p=.005) and a quadratic trend (F(15,1)=6.124; p=.026). In both languages, positive affect decreases over age.

    While the cross-linguistic affect ratings of AuE IDS remain broadly similar to the original findings, Kitamura and Burnham’s data show a more pronounced cubic trend and a general increase of affect in IDS over the first year. In this study, affect development in AuE IDS shows a steep increase from three to six months, followed by a decrease from six to nine months and a slight recovery from nine to twelve months. Affect in Swe IDS follows a different developmental trajectory, as it decreases from three to nine months and recovers with an increase from nine to twelve months. This is a first indication of language-specific differences in IDS affect over the first year. Future ratings of the same material with AuE native speakers will show if the difference in the AuE results is an effect of rater language.

  • 15.
    Marklund, Ellen
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Lacerda, Francisco
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Schwarz, Iris-Corinna
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Using rotated speech to approximate the acoustic mismatch negativity response to speech (2018). In: Brain and Language, ISSN 0093-934X, E-ISSN 1090-2155, Vol. 176, p. 26-35. Article in journal (Refereed)
    Abstract [en]

    The mismatch negativity (MMN) response is influenced by the magnitude of the acoustic difference between standard and deviant, and the response is typically larger to linguistically relevant changes than to linguistically irrelevant changes. Linguistically relevant changes between standard and deviant typically co-occur with differences between the two acoustic signals. It is therefore not straightforward to determine the contribution of each of those two factors to the MMN response. This study investigated whether spectrally rotated speech can be used to determine the impact of the acoustic difference on the MMN response to a combined linguistic and acoustic change between standard and deviant. Changes between rotated vowels elicited an MMN of comparable amplitude to the one elicited by a within-category vowel change, whereas the between-category vowel change resulted in an MMN amplitude of greater magnitude. A change between rotated vowels resulted in an MMN amplitude more similar to that of a within-vowel change than a complex tone change did. This suggests that the MMN amplitude reflecting the acoustic difference between two speech sounds can be well approximated by the MMN amplitude elicited in response to their rotated counterparts, in turn making it possible to estimate the part of the response specific to the linguistic difference.
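    The decomposition implied above can be written as simple arithmetic (the amplitudes below are invented for illustration): if the rotated-speech MMN approximates the acoustic contribution, subtracting it from the speech MMN estimates the part specific to the linguistic difference.

```python
# Hypothetical MMN amplitudes in microvolts (the MMN is a negative-going
# component, so more negative = larger response).
mmn_speech_change = -3.2    # between-category vowel change: acoustic + linguistic
mmn_rotated_change = -1.9   # rotated counterpart: acoustic difference only

# Estimated linguistic contribution to the MMN response (here, about -1.3 uV).
mmn_linguistic = mmn_speech_change - mmn_rotated_change
```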

  • 16.
    Marklund, Ellen
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Pagmar, David
    Stockholm University, Faculty of Humanities, Department of Linguistics, General Linguistics.
    Gerholm, Tove
    Stockholm University, Faculty of Humanities, Department of Linguistics, General Linguistics.
    Gustavsson, Lisa
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Computational simulations of temporal vocalization behavior in adult-child interaction (2017). In: Proceedings of Interspeech 2017 / [ed] Francisco Lacerda, David House, Mattias Heldner, Joakim Gustafson, Sofia Strömbergsson, Marcin Włodarczak, The International Speech Communication Association (ISCA), 2017, p. 2208-2212. Conference paper (Refereed)
    Abstract [en]

    The purpose of the present study was to introduce a computational simulation of timing in child-adult interaction. The simulation uses temporal information from real adult-child interactions as default temporal behavior of two simulated agents. Dependencies between the agents’ behavior are added, and how the resulting simulated interactions compare to real interaction data is investigated. In the present study, the real data consisted of transcriptions of a mother interacting with her 12-month-old child, and the simulated data consisted of vocalizations. The first experiment shows that although the two agents generate vocalizations according to the temporal characteristics of the interlocutors in the real data, simulated interaction with no contingencies between the two agents’ behavior differs from real interaction data. In the second experiment, a contingency was introduced to the simulation: the likelihood that the adult agent initiated a vocalization if the child agent was already vocalizing. Overall, the simulated data is more similar to the real interaction data when the adult agent is less likely to start speaking while the child agent vocalizes. The results are in line with previous studies on turn-taking in parent-child interaction at comparable ages. This illustrates that computational simulations are useful tools when investigating parent-child interactions.
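    A toy sketch of the kind of simulation described above; all probabilities and the discrete-time setup are invented for illustration, not taken from the paper. Two agents vocalize in discrete time steps, and a single contingency parameter controls how likely the adult agent is to start vocalizing while the child agent is already vocalizing.

```python
import random

def simulate_overlap(steps, p_start, p_stop, p_adult_start_during_child, seed=0):
    """Return the number of time steps in which both agents vocalize."""
    rng = random.Random(seed)
    child_on, adult_on = False, False
    overlap = 0
    for _ in range(steps):
        # Child agent: starts or continues vocalizing independently.
        child_on = (rng.random() > p_stop) if child_on else (rng.random() < p_start)
        # Adult agent: start probability depends on whether the child vocalizes.
        p_adult = p_adult_start_during_child if child_on else p_start
        adult_on = (rng.random() > p_stop) if adult_on else (rng.random() < p_adult)
        overlap += child_on and adult_on
    return overlap

# A contingent adult agent (rarely starts during child vocalization)
# should produce less overlap than a non-contingent one.
contingent = simulate_overlap(20000, 0.10, 0.20, 0.01)
independent = simulate_overlap(20000, 0.10, 0.20, 0.10)
```

    The study's second experiment varied the corresponding parameter, the likelihood that the adult agent initiated a vocalization while the child agent was vocalizing; here it is reduced from 0.10 to 0.01.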

  • 17.
    Schwarz, Iris-Corinna
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Marklund, Ulrika
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Marklund, Ellen
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Contingency differences in parent-infant turn-taking between primary and secondary caregivers in relation to turn-taking experience (2017). In: Many Paths to Language (MPaL), 2017, p. 59-60. Conference paper (Refereed)
    Abstract [en]

    Contingent turn-taking between parents and infants is positively correlated with child language outcome (Tamis-LeMonda, Bornstein & Baumwell, 2001; Marklund, Marklund, Lacerda & Schwarz, 2015). Many studies focus exclusively on mothers (e.g., Sung, Fausto-Sterling, Garcia Coll & Seifer, 2013). However, infants in Western countries acquire language with input from both mothers and fathers in varying degrees, depending on how the family chooses to organize its parental leave. Sweden is an ideal country to study both mothers and fathers as caregivers for infants.

    Parental contingency is often reported as response frequency within a time window after infant vocalizations (e.g., Johnson, Caskey, Rand, Tucker & Vohr, 2014). In this study, turn-taking contingency is measured by the duration of parent-child and child-parent switching pauses around infant vocalizations with potential communicative intent. Fourteen infants (7 girls) and their primary and secondary caregivers were recorded in the family home when the infant was six months old (M = 5 months 29 days, range: 5 months 3 days – 6 months 16 days). The audio recordings were collected on two different days and lasted approximately ten minutes each. One of the days was a typical weekday on which the primary caregiver – in all cases the mother – was at home with the infant. The other day was a typical weekend day on which the secondary caregiver – in all cases the father – was also at home and spent time with the infant. On each of these days, a daylong LENA recording was also made to estimate the amount of exposure to female and male speech input on a typical day. Using Wavesurfer 1.8.5 (Sjölander & Beskow, 2010), on- and offset of all infant vocalizations were tagged, as well as on- and offset of the surrounding switching pauses. If parent utterance and infant vocalization overlapped, switching pause duration received a negative value.

    Two repeated measures ANOVAs were used to determine the effects of caregiver type (primary/secondary) and infant sex (girl/boy) on pause duration in infant-parent and parent-infant switching pauses. A main effect was found for caregiver type in infant-parent switching pauses (F(1,12) = 5.214; p = .041): primary caregivers responded on average about 500 ms faster to infant vocalizations than secondary caregivers, with no effect of or interaction with infant sex. In parent-infant switching pauses, the main effect of caregiver type approached significance (F(1,12) = 4.574; p = .054), again with no effect of or interaction with infant sex. It is therefore fair to say that turn-taking between primary caregivers and 6-month-olds is more contingent than turn-taking between secondary caregivers and 6-month-olds.

    Four linear regressions were then used to predict parent-infant and infant-parent switching pause duration from the average duration of female speech exposure and the average duration of male speech exposure across the two days, with the assumption that female speech duration equals speech input from the primary caregiver and male speech duration the secondary caregiver. None of the regression analyses turned out to be significant. However, it is likely that the greater contingency between primary caregivers and the infant is a function of greater turn-taking experience, that is, conversational turns rather than mere exposure to speech. Therefore, we will look next at the number of conversational turns for each caregiver separately and investigate whether they predict parental response contingency.

    The present study shows that vocal turn-taking is more contingent between infants and primary caregivers than between infants and secondary caregivers. Primary caregivers respond significantly faster to infant vocalizations than secondary caregivers do, and in turn, infants tend to respond faster to primary caregivers. It is likely that this relationship is mediated by turn-taking experience, although this could not be shown with regression analyses using LENA estimates of the total duration of speech exposure to the primary and secondary caregiver.


  • 18. Lam-Cassettari, Christa
    et al.
    Marklund, Ellen
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Schwarz, Iris-Corinna
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Daddy counts: Australian and Swedish fathers' early speech input reflects infants' receptive vocabulary at 12 months 2017Conference paper (Other academic)
    Abstract [en]

    Parental input is known to predict language development. This study uses LENA input duration estimates for female and male voices in two infant language environments, Australian English and Swedish, to predict receptive vocabulary size at 12 months. The Australian English-learning infants were 6 months old (N = 18, 8 girls); the Swedish-learning infants were 8 months old (N = 12, 6 girls). Their language environment was recorded on two days: one weekday in the primary care of the mother, and one weekend day on which the father also spent time with the family. At 12 months, parents filled in a CDI form, the OZI for Australian English and the SECDI-I for Swedish. In multiple regressions across languages, only male speech input duration predicted vocabulary scores significantly (β = .56; p = .01). Analysing boys and girls separately, male speech input predicted only boys' vocabulary (β = .79; p = .01). Analysing languages separately for boys, the Australian English results are similar (β = .74; p = .02). Discussed in terms of differences in infant age, sample size, sex distribution and language, these findings can still contribute to the growing list of benefits of talker variability for early language acquisition.

  • 19.
    Marklund, Ellen
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Cortes, Elísabet Eir
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Sjons, Johan
    Stockholm University, Faculty of Humanities, Department of Linguistics, Computational Linguistics.
    MMN responses in adults after exposure to bimodal and unimodal frequency distributions of rotated speech2017In: Proceedings of Interspeech 2017 / [ed] Francisco Lacerda, David House, Mattias Heldner, Joakim Gustafson, Sofia Strömbergsson, Marcin Włodarczak, The International Speech Communication Association (ISCA), 2017, p. 1804-1808Conference paper (Refereed)
    Abstract [en]

    The aim of the present study is to further the understanding of the relationship between perceptual categorization and exposure to different frequency distributions of sounds. Previous studies have shown that speech sound discrimination proficiency is influenced by exposure to different distributions of speech sound continua varying along one or several acoustic dimensions, both in adults and in infants. In the current study, adults were presented with either a bimodal or a unimodal frequency distribution of spectrally rotated sounds along a continuum (a vowel continuum before rotation). Categorization of the sounds, quantified as the amplitude of the event-related potential (ERP) component mismatch negativity (MMN) in response to two of the sounds, was measured before and after exposure. It was expected that the bimodal group would show a larger MMN amplitude after exposure, whereas the unimodal group would show a smaller MMN amplitude after exposure. Contrary to expectations, the MMN amplitude was smaller overall after exposure, and no difference was found between groups. This suggests that either the previously reported sensitivity to frequency distributions of speech sounds is not present for non-speech sounds, or the MMN amplitude is not a sensitive enough measure of categorization to detect an influence from passive exposure, or both.
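
The exposure manipulation described above presents the same continuum steps with different frequency distributions. A hypothetical sketch of how such a presentation sequence could be built (the step counts and 8-step continuum are invented for illustration, not taken from the study):

```python
# Toy exposure sequences: tokens along a sound continuum are presented with
# either a bimodal (two-peaked) or unimodal (one-peaked) frequency distribution.
import random

steps = list(range(1, 9))                         # hypothetical 8-step continuum
bimodal_counts = [5, 15, 30, 10, 10, 30, 15, 5]   # peaks near steps 3 and 6
unimodal_counts = [5, 10, 15, 30, 30, 15, 10, 5]  # single central peak

def build_sequence(counts, seed=0):
    """Shuffled presentation order with the given per-step token counts."""
    seq = [s for s, n in zip(steps, counts) for _ in range(n)]
    random.Random(seed).shuffle(seq)
    return seq

bimodal_seq = build_sequence(bimodal_counts)
print(len(bimodal_seq), bimodal_seq[:10])
```

Both conditions expose listeners to the same total number of tokens; only the shape of the distribution differs, which is what distributional-learning accounts predict should drive category formation.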

  • 20.
    Marklund, Ellen
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Schwarz, Iris-Corinna
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    MMR categorization effect at 8 months is related to receptive vocabulary size at 12 to 14 months2017In: Many Paths to Language (MPaL), 2017, p. 91-92Conference paper (Refereed)
  • 21.
    Marklund, Ellen
    Stockholm University, Faculty of Humanities, Department of Linguistics.
    Perceptual reorganization of vowels: Separating the linguistic and acoustic parts of the mismatch response2017Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    During the first year of life, infants go from perceiving speech sounds primarily based on their acoustic characteristics, to perceiving speech sounds as belonging to speech sound categories relevant in their native language(s). The transition is apparent in that very young infants typically discriminate both native and non-native speech sound contrasts, whereas older infants show better discrimination for native contrasts and worse or no discrimination for non-native contrasts. The rate of this perceptual reorganization depends, among other things, on the salience of the relevant speech sounds within the speech signal. As such, the perceptual reorganization of vowels and lexical tone typically precedes the perceptual reorganization of consonants.

    Perceptual reorganization of speech sounds is often demonstrated by measuring infants' discrimination of specific speech sound contrasts across development. One way of measuring discriminatory ability is to use the mismatch response (MMR), a brain response that can be measured using external electroencephalography recordings. Presenting an oddball (deviant) stimulus among a series of standard stimuli elicits a response that, in adults, correlates well with behavioral discrimination. When the two stimuli are speech sounds contrastive in the listeners' language, the response arguably reflects both acoustic and linguistic processing. In infants, the response is less studied, but it has nevertheless already proven useful for studies on the perceptual reorganization of speech sounds.

    The present thesis documents a series of studies with the end goal of investigating how the amount of speech exposure influences the perceptual reorganization, and whether the learning mechanisms involved in speech sound category learning are specific to speech or domain-general. In order to compare MMR results across different age groups in infancy, however, a non-speech control condition needed to be devised, to account for changes in the MMR across development that are attributable to general brain maturation rather than language development specifically.

    Findings of studies incorporated in the thesis show that spectrally rotated speech can be used to approximate the acoustic part of the MMR in adults. Subtracting the acoustic part of the MMR from the full MMR thus estimates the part of the MMR that is linked to linguistic, rather than acoustic, processing. The strength of this linguistic part of the MMR in four- and eight-month-old infants is directly related to the daily amount of speech that the infants are exposed to. No evidence of distributional learning of non-speech auditory categories was demonstrated in adults, but the results together with previous research generated hypotheses for future study.
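
The subtraction logic described above is simple arithmetic: the MMR to spectrally rotated speech approximates the acoustic part, so the linguistic part is estimated as the difference between the full (speech) MMR and the rotated-speech MMR. A toy illustration with hypothetical per-subject amplitudes (the thesis itself works with full ERP waveforms, not single numbers):

```python
# Toy estimate of the "linguistic" MMR: full (speech) MMR minus the acoustic
# part approximated by the rotated-speech MMR. Amplitudes (µV) are invented.

full_mmr = [-2.8, -3.1, -2.5]        # hypothetical speech-condition MMR, per subject
acoustic_mmr = [-1.2, -1.0, -1.4]    # hypothetical rotated-speech MMR, per subject

linguistic_mmr = [round(full - acoustic, 2)
                  for full, acoustic in zip(full_mmr, acoustic_mmr)]
print(linguistic_mmr)  # [-1.6, -2.1, -1.1]
```

A more negative difference here corresponds to a stronger estimated linguistic contribution, which is the quantity the thesis relates to daily speech exposure.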

    In conclusion, the research performed within the scope of this thesis highlights the need for a non-speech control condition for use in developmental speech perception studies using the MMR, demonstrates the viability of one such non-speech control condition, and points toward relevant future research within the topic of speech sound category development.

  • 22.
    Schwarz, Iris-Corinna
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Botros, Noor
    Lord, Alekzandra
    Marcusson, Amelie
    Tidelius, Henrik
    Marklund, Ellen
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    The LENA™ system applied to Swedish: Reliability of the Adult Word Count estimate2017In: Proceedings of Interspeech 2017 / [ed] Francisco Lacerda, David House, Mattias Heldner, Joakim Gustafson, Sofia Strömbergsson, Marcin Włodarczak, The International Speech Communication Association (ISCA), 2017, p. 2088-2092Conference paper (Refereed)
    Abstract [en]

    The Language Environment Analysis system LENA™ is used to capture day-long recordings of children's natural audio environment. The system performs automated segmentation of the recordings and provides estimates for various measures. One of those measures is Adult Word Count (AWC), an approximation of the number of words spoken by adults in close proximity to the child. The LENA system was developed for and trained on American English, but its performance has also been evaluated when applied to Spanish, Mandarin and French. The present study is the first evaluation of the LENA system applied to Swedish, and focuses on the AWC estimate. Twelve five-minute segments were selected at random from each of four day-long recordings of 30-month-old children. Each of these 48 segments was transcribed by two transcribers, and both the number of words and the number of vowels were calculated (inter-transcriber reliability for words: r = .95, vowels: r = .93). Both counts correlated with the LENA system's AWC estimate for the same segments (words: r = .67, vowels: r = .66). The reliability of the AWC as estimated by the LENA system when applied to Swedish is therefore comparable to its reliability for Spanish, Mandarin and French.

  • 23.
    Marklund, Ellen
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Schwarz, Iris-Corinna
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Lacerda, Francisco
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Vowel categorization correlates with speech exposure in 8-month-olds2017Conference paper (Refereed)
    Abstract [en]

    During the first year of life, infants' ability to discriminate non-native speech contrasts attenuates, whereas their ability to discriminate native contrasts improves. This transition reflects the development of speech sound categorization, and is hypothesized to be modulated by exposure to spoken language. The ERP mismatch response has been used to quantify discrimination ability in infants, and its amplitude has been shown to be sensitive to the amount of speech exposure at the group level (Rivera-Gaxiola et al., 2011). In the present ERP study, the difference in mismatch response amplitudes for spoken vowels and for spectrally rotated vowels quantifies categorization in 8-month-old infants (N=15, 7 girls). This categorization measure was tested for correlation with infants' daily exposure to male speech, female speech, and the sum of male and female speech, as measured by all-day home recordings and analyzed using LENA software. A positive correlation was found between the categorization measure and the total amount of daily speech exposure (r = .526, p = .044). The present study is the first to report a relation between speech exposure and speech sound categorization in infants at the subject level, and the first to compensate for the acoustic part of the mismatch response in this context.

  • 24.
    Schwarz, Iris-Corinna
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Marklund, Ellen
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Dybäck, Matilda
    Wallgren, Johanna
    Uhlén, Inger
    Pupil dilation indicates auditory signal detection - towards an objective hearing test based on eye-tracking2016Conference paper (Refereed)
    Abstract [en]

    Purpose: The long-term objective of this project is to develop an objective hearing threshold test that can be used in early infancy, using pupil dilation as an indicator of hearing. The study purposes are 1) to identify relevant time-windows for analysis of pupillary responses to various auditory stimuli in adults, and 2) to evaluate a trial-minus-baseline approach to deal with unrelated pupillary responses in adults. Method: Participants' pupil size is recorded using a Tobii T120 eye-tracker. In the first test, participants fixate on a blank screen while sound stimuli are presented. From this data, typical pupillary responses and the relevant analysis time-window are determined and used in future tests. In the second test, participants watch movie clips while sound stimuli are presented. Visually identical sound and no-sound trials will be compared in order to isolate the pupillary changes tied to hearing sound from those related to changes in brightness in the visual stimuli. Results and conclusion: Data is currently being collected. Results from the pilot study indicate that the pupillary response related to sound detection occurs at around 900 ms after stimulus onset, and that a trial-minus-baseline approach is a viable option to eliminate unrelated pupillary responses.
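
The trial-minus-baseline idea above can be illustrated with a sample-by-sample subtraction: because the sound and no-sound trials are visually identical, pupil changes driven by the visual stimulus cancel out, leaving the sound-related dilation. The traces below are hypothetical pupil diameters, not recorded data:

```python
# Toy trial-minus-baseline subtraction: visually identical sound and no-sound
# trials are subtracted sample by sample; what remains is the pupillary
# response attributable to the sound. Diameters (mm) are invented.

sound_trial = [3.10, 3.12, 3.20, 3.35, 3.40]     # hypothetical samples over time
no_sound_trial = [3.10, 3.11, 3.12, 3.13, 3.12]  # matched no-sound baseline

sound_effect = [round(s - b, 2) for s, b in zip(sound_trial, no_sound_trial)]
print(sound_effect)  # [0.0, 0.01, 0.08, 0.22, 0.28] - dilation emerging over time
```

In the actual paradigm the interesting window would be centered around the ~900 ms post-onset latency reported in the pilot results.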

  • 25.
    Marklund, Ulrika
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Marklund, Ellen
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Lacerda, Francisco
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Schwarz, Iris-Corinna
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Pause and utterance duration in child-directed speech in relation to child vocabulary size2015In: Journal of Child Language, ISSN 0305-0009, E-ISSN 1469-7602, Vol. 42, no 5, p. 1158-1171Article in journal (Refereed)
    Abstract [en]

    This study compares parental pause and utterance duration in conversations with Swedish-speaking children at age 1;6 who have either a large, typical, or small expressive vocabulary, as measured by the Swedish version of the MacArthur-Bates CDI. The adjustments that parents make when they speak to children are similar across all three vocabulary groups; they use longer utterances than when speaking to adults, and respond faster to children than they do to other adults. However, overall pause duration varies with the vocabulary size of the children, and as a result durational aspects of the language environment to which the children are exposed differ between groups. Parents of children in the large vocabulary size group respond faster to child utterances than do parents of children in the typical vocabulary size group, who in turn respond faster to child utterances than do parents of children in the small vocabulary size group.

  • 26.
    Cortes, Elisabet Eir
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Gerholm, Tove
    Stockholm University, Faculty of Humanities, Department of Linguistics, General Linguistics.
    Marklund, Ellen
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Marklund, Ulrika
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Molnar, Monika
    Nilsson Björkenstam, Kristina
    Stockholm University, Faculty of Humanities, Department of Linguistics, Computational Linguistics.
    Schwarz, Iris-Corinna
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Sjons, Johan
    Stockholm University, Faculty of Humanities, Department of Linguistics, Computational Linguistics.
    WILD 2015: Book of Abstracts2015Conference proceedings (editor) (Other academic)
    Abstract [en]

    WILD 2015 is the second Workshop on Infant Language Development, held June 10-12 2015 in Stockholm, Sweden. WILD 2015 was organized by Stockholm Babylab and the Department of Linguistics, Stockholm University. About 150 delegates met over three conference days, convening on infant speech perception, social factors of language acquisition, bilingual language development in infancy, early language comprehension and lexical development, neurodevelopmental aspects of language acquisition, methodological issues in infant language research, modeling infant language development, early speech production, and infant-directed speech. Keynote speakers were Alejandrina Cristia, Linda Polka, Ghislaine Dehaene-Lambertz, Angela D. Friederici and Paula Fikkert.

    Organizing this conference would of course not have been possible without our funding agencies Vetenskapsrådet and Riksbankens Jubileumsfond. We would like to thank Francisco Lacerda, Head of the Department of Linguistics, and the Departmental Board for agreeing to host WILD this year. We would also like to thank the administrative staff for their help and support in this undertaking, especially Ann Lorentz-Baarman and Linda Habermann.

    The WILD 2015 Organizing Committee: Ellen Marklund, Iris-Corinna Schwarz, Elísabet Eir Cortes, Johan Sjons, Ulrika Marklund, Tove Gerholm, Kristina Nilsson Björkenstam and Monika Molnar.

  • 27.
    Marklund, Ellen
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Schwarz, Iris-Corinna
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Lacerda, Francisco
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Mismatch negativity at Fz in response to within-category changes of the vowel /i/2014In: NeuroReport, ISSN 0959-4965, E-ISSN 1473-558X, Vol. 25, no 10, p. 756-759Article in journal (Refereed)
    Abstract [en]

    The amplitude of the mismatch negativity response for acoustic within-category deviations in speech stimuli was investigated by presenting participants with different exemplars of the vowel /i/ in an oddball paradigm. The deviants differed from the standard either in fundamental frequency, the first formant, or the second formant. Changes in fundamental frequency are generally more salient than changes in the first formant, which in turn are more salient than changes in the second formant. The mismatch negativity response was expected to reflect this with greater amplitude for more salient deviations. The fundamental frequency deviants did indeed result in greater amplitude than both first formant deviants and second formant deviants, but no difference was found between the first formant deviants and the second formant deviants. It is concluded that a greater difference between standard and within-category deviants across different acoustic dimensions results in greater mismatch negativity amplitude, suggesting that linguistically irrelevant changes in speech sounds may be processed similarly to nonspeech sound changes.

  • 28.
    Schwarz, Iris-Corinna
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Marklund, Ellen
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Schwitteck, Annika
    Mismatch negativity reflects phonemic as well as lexical discrimination: Does this also hold for toddlers?2014Conference paper (Refereed)
    Abstract [en]

    Phoneme discrimination is an important factor in language acquisition. Within the first year of life, infants turn their early global discrimination skills into language-specific tools, facilitating lexical growth. However, word learning impedes phoneme discrimination between 14 and 17 months of age (e.g., Stager & Werker, 1997; Yoshida, Fennell, Swingley, & Werker, 2009).

    Phoneme discrimination can also be demonstrated by studying event-related potentials (ERPs) (e.g., Rivera-Gaxiola, Silva-Pereyra, & Kuhl, 2005). The typical ERP-component that responds to a deviant in a series of standard stimuli, thereby indicating discrimination between standard and deviant, is called mismatch negativity (MMN). MMN is an early ERP-component at around 150 to 250 ms after stimulus onset, representing automatic processing on a pre-attentive level (Näätänen, Paavilainen, Rinne, & Alho, 2007). It can be elicited in a passive listening paradigm and is therefore especially suitable for infant studies. However, even higher-level cognition such as the processing of lexical information can be reflected in the MMN (Shtyrov, Hauk, & Pulvermüller, 2004). In 5-year-old children, lexical processing as part of the MMN was indicated at a latency of 400 to 450 ms after stimulus onset (Korpilahti, Krause, Holopainen, & Lang, 2001).

    In the present study, a simple auditory syllable discrimination task is converted into a word discrimination task by supplying semantic content to standard and deviant syllables. The first experimental block contained 50 discrimination trials with four to seven repetitions of the standard syllable and an interstimulus interval of 500 ms. The stimuli consisted of one exemplar per syllable, [be] and [de] (400 ms duration each), recorded in infant-directed speech by a female speaker. In a second block (50 naming trials), these syllables gained a semantic dimension through associative pairing with two different jackalope soft toys. Four different pictures per soft toy were presented together with the naming syllable in randomised order. The third experimental block equalled the first. As the study was designed for toddlers, it was kept to a suitable duration of about 10 minutes.

    It was predicted that the first block would elicit a typical MMN, indicating discrimination on a phonemic level, whereas in the third block a greater MMN latency was expected, indicating discrimination on a lexical level. Adult pilot data (N=6), collected to evaluate the extremely short MMN paradigm and provide a reference group, revealed a strong frontocentral MMN candidate for discrimination on a phonemic level in the first experimental block, with deviant negativity between 180 and 250 ms, as well as on a lexical level in the third block, indicated by a deviant negativity with a latency of 360 ms. This also shows that just 50 trials can elicit an MMN, setting the stage to test toddlers. The results of 20-month-olds are predicted to conform to the adult pilot data, possibly with a greater latency in both the phonemic and the lexical MMN.

  • 29.
    Schwarz, Iris-Corinna
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Nazem, Atena
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Olsson, Sofia
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Marklund, Ellen
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Uhlén, Inger
    Karolinska Institutet, Sweden.
    Towards a contingent anticipatory infant hearing test using eye-tracking2014In: Proceedings from FONETIK 2014: Stockholm, June 9-11, 2014 / [ed] Mattias Heldner, Stockholm: Department of Linguistics, Stockholm University , 2014, p. 35-40Conference paper (Other academic)
    Abstract [en]

    Early identification of infant hearing impairment is imperative to prevent developmental language difficulties. The current diagnostic method is Visual Reinforcement Audiometry (VRA), in which the infant's response to sound is observed to establish hearing thresholds. Together with the Karolinska Institute, we are developing an observer-independent contingent anticipatory infant hearing test using eye-tracking, to increase the reliability and significance levels of current clinical practice. The present pilot study addresses in particular the first phase of the test, in which the eye response is conditioned to occur at sound detection. The aim is to establish how well 6.5-month-olds associate the presence of sound with a certain location via a visual reward.

  • 30.
    Schwarz, Iris-Corinna
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics.
    Marklund, Ellen
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Schwitteck, Annika
    Mismatch negativity as tool in language acquisition research: Discrimination on auditory-phonetic and/or semantic levels?2013Conference paper (Other academic)
    Abstract [en]

    Phoneme discrimination is one of the first steps on the infant's path to identifying word boundaries and building a lexicon. Within the first year of life, infants turn their early global discrimination skills into language-specific tools, paving the way to fast word acquisition. This development has been demonstrated in countless behavioural discrimination studies, but also with electrophysiological measures such as event-related potentials (ERPs). The typical ERP component that responds to a deviant in a series of standard stimuli, thereby indicating discrimination between standard and deviant, is called mismatch negativity (MMN). MMN is generally an early ERP component at around 150 to 200 ms after stimulus onset, representing automatic processing on a pre-attentive level. It can be elicited in a passive listening paradigm while watching a silenced movie and is therefore especially suitable for infant studies. However, higher cognitive processing such as semantic discrimination can also be indicated by the MMN, with a latency of 400 to 450 ms after stimulus onset. This study tests whether auditory-phonetic and semantic processing levels are reflected in the MMN when comparing a simple syllable discrimination task to a word discrimination task, after the standard and deviant syllable stimuli have been supplied with semantic content. The syllables [be]-[de] are expected to be discriminated in the first experimental block (50 trials), indicated by a typical MMN in the participants' ERPs. In a second block (50 naming trials), these syllables become names for two different jackalope soft toys while the participants' eye movements are recorded. In the third block (50 trials), the MMN paradigm is run again, but this time with the expectation that higher-level processing is indicated in the MMN response. This could be reflected either by a significantly more pronounced MMN curve or by a greater latency of the MMN response.
We report pilot results of adult participants (N=7), run to evaluate the experimental paradigm before testing toddlers.

  • 31.
    Edlund, Jens
    et al.
    KTH Speech, Music and Hearing.
    Alexandersson, Simon
    Beskow, Jonas
    KTH Speech, Music and Hearing.
    Gustavsson, Lisa
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Heldner, Mattias
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Hjalmarsson, Anna
    KTH Speech, Music and Hearing.
    Kallioinen, Petter
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Marklund, Ellen
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    3rd party observer gaze as a continuous measure of dialogue flow2012Conference paper (Refereed)
    Abstract [en]

    We present an attempt at using 3rd party observer gaze to obtain a measure of how appropriate each segment in a dialogue is for a speaker change. The method is a step away from the current dependency on speaker turns or talkspurts, towards a more general view of speaker changes. We show that 3rd party observers do indeed largely look at the same thing (the speaker), and how this can be captured and utilized to provide insights into human communication. In addition, the results suggest that there might be differences in the distribution of 3rd party observer gaze depending on how information-rich an utterance is.

  • 32.
    Engdahl, Johan
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Bjerva, Johannes
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Marklund, Ellen
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Byström, Emil
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Lacerda, Francisco
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Acoustic analysis of adults imitating infants: a cross-linguistic perspective2012In: Journal of the Acoustical Society of America, ISSN 0001-4966, E-ISSN 1520-8524, Vol. 131, no 4Article in journal (Refereed)
    Abstract [en]

    The present study investigates adult imitations of infant vocalizations in a cross-linguistic perspective. Japanese-learning and Swedish-learning infants were recorded at ages 16-21 and 78-79 weeks. Vowel-like utterances (n=210) were selected from the recordings and presented to Japanese (n=3) and Swedish (n=3) adults. The adults were asked to imitate what they heard, simulating a spontaneous feedback situation between caregiver and infant. Formant data (F1 and F2) was extracted from all utterances and validated by comparing original and formant re-synthesized utterances. The data was normalized for fundamental frequency and time, and the accumulated spectral difference was calculated between each infant utterance and each imitation of that utterance. The mean spectral difference was calculated and compared, grouped by native language of infant and adult, as well as age of the infant. Preliminary results show smaller spectral difference in the imitations of older infants compared to imitations of the younger group, regardless of infant and adult native language. This may be explained by the increasing stability and more speech-like quality of infants' vocalizations as they grow older (and thus have been exposed to their native language for a longer period of time), making their utterances easier for adults to imitate.
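
The accumulated spectral difference described above can be illustrated as a summed per-frame distance between two normalized formant tracks. This is a simplified sketch, not the study's actual pipeline (which normalizes for fundamental frequency and time first); all F1/F2 frame values are hypothetical:

```python
# Toy accumulated spectral difference: sum the per-frame Euclidean distance
# between an infant utterance's (F1, F2) track and an adult imitation's.
# Formant values (Hz) are invented; a smaller total means a closer imitation.
import math

infant = [(600, 1200), (650, 1300), (700, 1400)]  # hypothetical (F1, F2) frames
adult = [(620, 1250), (640, 1280), (710, 1450)]   # hypothetical imitation frames

accumulated = sum(math.hypot(f1_i - f1_a, f2_i - f2_a)
                  for (f1_i, f2_i), (f1_a, f2_a) in zip(infant, adult))
print(round(accumulated, 1))  # 127.2 for this toy data
```

Under this kind of measure, the study's finding of smaller spectral differences for older infants corresponds to smaller accumulated totals.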

  • 33.
    Bjerva, Johannes
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Marklund, Ellen
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Engdahl, Johan
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Tengstrand, Lisa
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Lacerda, Francisco
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Preceding non-linguistic stimuli affect categorisation of Swedish plosives2012In: Journal of the Acoustical Society of America, ISSN 0001-4966, E-ISSN 1520-8524, Vol. 131, no 4Article in journal (Refereed)
    Abstract [en]

    Speech perception is highly context-dependent. Sounds preceding speech stimuli affect how listeners categorise the stimuli, regardless of whether the context consists of speech or non-speech. This effect is acoustically contrastive; a preceding context with high-frequency acoustic energy tends to skew categorisation towards speech sounds possessing lower-frequency acoustic energy and vice versa (Mann, 1980; Holt, Lotto, & Kluender, 2000; Holt, 2005). Partially replicating Holt's study from 2005, the present study investigates the effect of non-linguistic contexts in different frequency bands on speech categorisation. Adult participants (n=15) were exposed to Swedish syllables from a speech continuum ranging from /da/ to /ga/, varying in the onset frequencies of the second and third formants in equal steps. Contexts preceding the speech stimuli consisted of sequences of sine tones distributed in different frequency bands: high, mid and low. Participants were asked to categorise the syllables as /da/ or /ga/. As hypothesised, high-frequency contexts shift the category boundary towards /da/, while lower-frequency contexts shift the boundary towards /ga/, compared to the mid-frequency context.

  • 34.
    Bjerva, Johannes
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Marklund, Ellen
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Engdahl, Johan
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Lacerda, Francisco
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Anticipatory Looking in Infants and Adults2011In: Proceedings of EyeTrackBehavior 2011, 2011Conference paper (Other academic)
    Abstract [en]

    Infant language acquisition research faces the challenge of dealing with subjects who are unable to provide spoken answers to research questions. To obtain comprehensible data from such subjects, eye tracking is a suitable research tool, as the infants’ gaze can be interpreted as behavioural responses. The purpose of the current study was to investigate the amount of training necessary for participants to learn an audio-visual contingency and present anticipatory looking behaviour in response to an auditory stimulus. Infants (n=22) and adults (n=16) were presented with training sequences, every fourth of which was followed by a test sequence. Training sequences contained implicit audio-visual contingencies consisting of a syllable (/da/ or /ga/) followed by an image appearing on the left/right side of the screen. Test sequences were identical to training sequences except that no image appeared. The latency to first fixation towards the non-target area during test sequences was used as a measure of whether the participants had grasped the contingency. Infants were found to present anticipatory looking behaviour after 24 training trials. Adults were found to present anticipatory looking behaviour after 28-36 training trials. In future research, a more interactive experiment design will be employed in order to individualise the amount of training, which will increase the time span available for testing.
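The latency measure used above can be sketched as a small helper. This is a hypothetical illustration, not the authors' analysis code; the fixation data format (time-ordered `(timestamp_ms, area_label)` tuples) is an assumption.

```python
def first_fixation_latency(fixations, area, onset_ms):
    """Latency in ms from stimulus onset to the first fixation landing in a
    given area of interest (e.g. 'non-target'); None if no such fixation.
    fixations: time-ordered list of (timestamp_ms, area_label) tuples."""
    for t, label in fixations:
        if t >= onset_ms and label == area:
            return t - onset_ms
    return None
```

Comparing this latency between early and late test trials is one way to quantify when anticipatory looking behaviour emerges.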

  • 35.
    Klintfors, Eeva
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Marklund, Ellen
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Kallioinen, Petter
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Lacerda, Francisco
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Cortical N400-potentials generated by adults in response to semantic incongruities2011In: Proceedings of Fonetik 2011: Speech, Music and Hearing TMH-QPSR Vol. 51 / [ed] Björn Granström, David House, Daniel Neiberg, Sofia Strömbergsson, Stockholm, Sweden: Universitetsserveice AB , 2011, p. 121-124Conference paper (Other academic)
    Abstract [en]

    Eight adult participants were investigated in a pre-experiment for the future assessment of semantic N400 effects in children. The materials were words presented in semantically incongruent vs. congruent picture contexts. For example, the word duck was played while a picture of a tree was shown in the incongruent test condition, whereas the word duck was played while a picture of a duck was shown in the congruent test condition. A larger N400 effect was expected in response to the incongruent audio-visual pairings. The results showed temporally extended peak-to-peak differences between congruent and incongruent audio-visual pairings at the centroparietal, parietal and parieto-occipital recording sites. This study was performed to validate the current materials, which will be used to answer questions on the appearance of the N400 component in children.

  • 36.
    Marklund, Ellen
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Event-related gaze analysis in infant eye-tracking studies2011Conference paper (Refereed)
    Abstract [en]

    Eye-tracking is useful in behavioural first language acquisition studies, since it makes it possible to interpret responses of pre-verbal infants. Although data obtained from a high-resolution eye-tracking system offers a range of possible analyses, total looking time towards different areas of the screen is still the most commonly used measurement in infant studies. The present study investigates the possibilities of event-related gaze analysis in an attempt to better make use of the advancing technology. Infants’ ability to match intensity across modalities was tested using both traditional visual preference analysis and event-related gaze analysis. Total looking time towards target (visual size and auditory intensity match) versus non-target was measured, and it was established that infants (n=58, 7.3 months, range 4 to 10 months) look significantly longer to target. The visual stimuli were then organized horizontally on the screen according to size, and auditory stimuli with different intensity levels were presented to the infants. The average horizontal gaze shift (normalized for initial position on the screen) at auditory stimuli onset was plotted over time. As predicted, infants (n=25, mean age 7.9 months, range 7 to 9 months) shift their gaze towards the visual stimuli matching the auditory stimuli in magnitude, although the results are only significant during very short intervals after stimuli onset. In conclusion, the event-related gaze analysis method seems promising since the results are in line with those from the traditional preferential looking paradigm in the present study, suggesting that further fine-tuning of the analysis method is worth pursuing.
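The event-related averaging described above can be sketched as follows, assuming one row of horizontal gaze positions per trial, time-locked to auditory stimulus onset; the names and the exact normalization are assumptions made for illustration.

```python
import numpy as np

def event_related_gaze(trials, onset_idx):
    """Grand-average horizontal gaze shift time-locked to stimulus onset.
    trials: 2D array-like (n_trials, n_samples) of horizontal gaze positions.
    Each trial is re-referenced to its own position at onset, so the average
    reflects gaze shift rather than absolute screen location."""
    trials = np.asarray(trials, dtype=float)
    baseline = trials[:, onset_idx][:, None]  # position at onset, per trial
    return (trials - baseline).mean(axis=0)   # average shift over time
```

Plotting the returned curve against time, and testing it against zero per time bin, is one way to obtain the kind of interval-wise significance described in the abstract.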

  • 37.
    Marklund, Ellen
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Fundamental frequency as basis for speech segmentation modeling2011Independent thesis Advanced level (degree of Master (One Year)), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    The present study investigates the relevance of fundamental frequency in speech segmentation models intended to simulate infants. Speech from three different conditions (infant-directed speech to 3- and 12-month-olds, and adult-directed speech) was segmented based on fundamental frequency information, using a variant of the dpn-gram segmenting technique (highlighting similar segments as lexical candidates). The spectral distance between segments that were found based on fundamental frequency similarity was calculated, and compared to the spectral distance between segments that were found using transcription as basis for segmentation, as well as to the spectral distance between randomly paired segments from the same speech materials. The results show the greatest within-condition difference in speech directed to 3-month-olds, in which segmenting based on fundamental frequency similarity generated segment pairs with smaller spectral distance than did transcription-based segmentation or random segment pairs. Speech directed to 12-month-olds resulted in a somewhat smaller difference when using fundamental frequency data compared to when using transcriptions. For adult-directed speech, no difference was found in spectral distance between pairs generated by the different bases for segmentation. Neither segmenting speech by highlighting similar segments as lexical candidates, nor using fundamental frequency as basis for segmentation is optimal for a speech segmentation model intended to simulate 12-month-olds or adults. These groups are more likely to segment speech based on their already present or growing linguistic experience than on acoustic similarity only. However, for a model simulating a 3-month-old infant, the present segmentation procedure and its basis for segmentation are more plausible. 
When modeling speech segmentation in an infant-like manner it is important to take into account both that the cognitive abilities of infants develop rapidly during the first year of life, and that some aspects of their linguistic environment vary during this period.
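As a toy illustration of similarity-based segmentation (the actual dpn-gram technique is more elaborate; the window length, threshold and RMS metric here are arbitrary assumptions), one can slide a fixed window over an f0 contour and flag non-overlapping window pairs that are acoustically close:

```python
import numpy as np

def similar_f0_segments(f0, win=20, threshold=5.0):
    """Return index pairs (i, j) of non-overlapping windows of length `win`
    whose root-mean-square f0 difference (in Hz) is below `threshold` --
    a crude stand-in for highlighting similar segments as lexical candidates."""
    f0 = np.asarray(f0, dtype=float)
    pairs = []
    for i in range(len(f0) - win + 1):
        for j in range(i + win, len(f0) - win + 1):
            rms = np.sqrt(np.mean((f0[i:i + win] - f0[j:j + win]) ** 2))
            if rms < threshold:
                pairs.append((i, j))
    return pairs
```

In the study's terms, each returned pair marks two stretches of speech similar enough in f0 to be treated as candidate recurrences of the same lexical item.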

  • 38.
    Klintfors, Eeva
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Marklund, Ellen
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Lacerda, Francisco
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Neurolinguistic responses to perception of speech in incongruent picture context2011In: Proceedings of the 17th International Congress of Phonetic Sciences (ICPhS), August 17-21 2011, Hong Kong, China / [ed] Eric Zee, Hong Kong, China, 2011, p. 1110-1113Conference paper (Refereed)
    Abstract [en]

    This study investigates N400, the event-related potential (ERP) that reflects semantic processing in cortex. The stimuli were semantically congruent vs. incongruent audio-visual pairings of objects (i.e. words presented in familiar picture context). As predicted, the results showed temporally extended, significant peak-to-peak differences between congruent and incongruent picture-word pairings at the centroparietal and the parietal recording sites. The rationale for this study is our initial intent to verify the use of ERPs in response to the current materials. After extension of this research, the data will be used in comparisons to answer questions on the appearance of the semantic component in young children. Based on earlier research, we expect that the N400 component in children will be greater in amplitude, delayed in latency and more widely distributed across the scalp.

  • 39.
    Klintfors, Eeva
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Marklund, Ellen
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Lacerda, Francisco
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Parallels in infants' attention to speech articulation and to physical changes in speech-unrelated objects2011In: Proceedings of the 12 th Annual Conference of the International Speech Communication Association (Interspeech 2011). / [ed] Piero Cosi, Renato De Mori, Giuseppe Di Fabbrizio, Roberto Pieraccini, Firenze, Italy: International Speech Communication Association (ISCA), Causal Productions Pty Ltd , 2011, p. 2197-2200Conference paper (Refereed)
    Abstract [en]

    The mechanisms by which children develop the capacity to make use of speech articulation cues to support interpretation of the speech signal are not exhaustively explored. The purpose of this study is to investigate whether there are parallels between infants’ way of attending to speech articulation and their perception of physical changes in speech-unrelated objects. The current research questions grew out of an earlier study in which it was found that perception of speech in infants was based on a match between auditory and visual prominence, as opposed to a match between a sound and its corresponding face. Data suggested that speech perception in infancy may function as described by Stevens’ power law, and two methodological supplements to test the validity of this hypothesis were made in the current study. First, a non-articulatory test condition was added to investigate infants’ perception of speech-unrelated objects. Second, amplitude-manipulated stimuli were added to introduce systematic changes in loudness. Results confirmed our hypothesis; the visually prominent articulations were favored, and the same pattern was found in response to speech-unrelated objects.

  • 40.
    Marklund, Ulrika
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Marklund, Ellen
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Schwarz, Iris-Corinna
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Relationship between parental communicative adjustments and vocabulary production in Swedish infants2011Conference paper (Refereed)
    Abstract [en]

    Parental linguistic input and interaction style are essential to infant language development. The current study investigates the relationship between Swedish children’s productive vocabulary size and parental communicative adjustments at 18 months (N = 60) and 24 months (N = 61). Vocabulary size is reported with the Swedish adaptation of the MacArthur CDI Words and Sentences (SECDI) while parental communicative adjustments are measured by parental inclination to wait for infants’ vocal communicative initiative and parental inclination to adjust utterance duration to match the duration of infant vocalization. Pauses between utterances and utterance duration of parents and children are tagged in audio recordings of daily-life situations involving parent and child at the family home, such as mealtime, playtime, or reading time. Infants with large productive vocabularies are expected to have parents who are more inclined to wait for communicative initiatives on the part of the infant and to adjust utterance duration to match infant vocalizations. On the other hand, infants with small productive vocabularies are expected to have parents who are less inclined to give room to communicative initiatives and to match input duration to infant production. Small vocabularies are defined by the lowest quartile (0-25%), while large vocabularies are represented by the highest quartile (75-100%) of SECDI scores. Parental communicative adjustments show differential effects on productive vocabulary size at the two ages 18 and 24 months. This indicates a relationship between parental communicative input, as measured in the aspects of turn-taking pauses and duration adjustment, and child vocabulary development.

  • 41.
    Bjerva, Johannes
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Marklund, Ellen
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Lacerda, Francisco
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Training in Anticipatory Looking Experiments with Adult Participants2011In: Proceedings of the 17th International Congress of Phonetic Sciences / [ed] Wai-Sum Lee & Eric Zee, 2011, p. 316-319Conference paper (Refereed)
    Abstract [en]

    The amount of training necessary to trigger anticipatory looking was investigated in adults (n=16) using a simple testing paradigm, in order to create a baseline for studies on infants’ language acquisition. Participants were presented with training containing implicit associations between two syllables (/da/ and /ga/) and visual events displayed on different areas on the screen. The training series were periodically interrupted by test trials where a syllable was presented but no visual event was displayed. Significantly altered looking behaviour, as measured by participants’ first gaze fixation latency towards the Non-target area (where the visual event should not be expected), was found after 28-36 training trials.

  • 42.
    Marklund, Ellen
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Ericsson, Anna
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Lacerda, Francisco
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Cross-modal magnitude matching as a precursor of multi-modal speech perception2010In: Proceedings from Fonetik 2010, Lund, June 2-4, 2010 / [ed] Susanne Schötz and Gilbert Ambrazaitis, Lund: Department of Linguistics and Phonetics, Lund University , 2010, p. 69-73Conference paper (Refereed)
    Abstract [en]

    Four- to ten-month-old infants (n=58) were examined on their ability to match magnitude across modalities. Their looking behaviour was recorded as they were presented with an intensity-modulated auditory stimulus and three possible visual matches. The mean looking times towards a visual target (size envelope matching the intensity envelope of the auditory stimulus) and a non-target were calculated. Five-month-olds and seven- to ten-month-olds show a significant looking preference towards the target, as does an adult control group. Four- and six-month-olds do not.

  • 43.
    Marklund, Ellen
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Lacerda, Francisco
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Ericsson, Anna
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Speech Categorization Context Effects in Seven- to Nine-Month-Old Infants2010In: INTERSPEECH-2010, 2010, p. 1233-1236Conference paper (Refereed)
    Abstract [en]

    Adults have been shown to categorize an ambiguous syllable differently depending on which sound precedes it. The present paper reports preliminary results from an on-going experiment investigating seven- to nine-month-olds’ sensitivity to non-speech contexts when perceiving an ambiguous syllable. The results suggest that the context effect is present already in infancy. Additional data is currently being collected, and results will be presented in full at the conference.

  • 44.
    Söderlund, Göran
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Marklund, Ellen
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Lacerda, Francisco
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Auditory White Noise Enhances Cognitive Performance Under Certain Conditions: Examples from Visuo-Spatial Working Memory and Dichotic Listening Tasks2009In: Proceedings FONETIK 2009 / [ed] Peter Branderud & Hartmut Traunmüller, Stockholm: Department of Linguistics, Stockholm Universitet , 2009, p. 160-164Conference paper (Refereed)
    Abstract [en]

    This study examines when external auditory noise can enhance performance in a dichotic listening task and a visuo-spatial working memory task. Noise is typically conceived of as detrimental to cognitive performance; however, given the mechanism of stochastic resonance (SR), a certain amount of noise can benefit performance. In particular, we predict that low performers will be aided by noise, whereas high performers will decline in performance under the same condition. Data from two experiments will be presented; participants were students at Stockholm University.

  • 45.
    Schwarz, Iris-Corinna
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Lacerda, Francisco
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Walberg, Heléne
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Marklund, Ellen
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Cerebral activation patterns of speech perception in 4-month-olds and adults2009In: BrainTalk: Discourse with and in the Brain / [ed] Kai Alter, Merle Horne, Magnus Lindgren, Mikael Roll, Janne von Koss Torkildsen, Lund, Sweden: Centre for Languages and Literature, Lund University , 2009, p. 231-246Chapter in book (Other (popular science, discussion, etc.))
    Abstract [en]

    Speech perception in infancy is language-general, but language-specific in adulthood. Is this reflected in event-related brain potentials (ERPs)? ERPs from 128 electrodes were recorded for three speech perception conditions in 4-month-old Swedish infants (N=9) and adults (N=13). The stimuli were simple sentences in infant-directed speech (IDS) in Swedish, familiar in semantic content and prosodic characteristics; in spectrally rotated Swedish with incomprehensible semantic content, but familiar prosodic characteristics; and in Portuguese, unfamiliar in both aspects. As predicted, infants show bilateral activation across conditions, but also present two left-hemispheric advantages as potential first signs of native language specialisation. In adults, left-hemispheric activation shows condition-sensitive ERP patterns: processing rotated Swedish is similar to Swedish, but different to Portuguese. The right-hemispheric dominance in adults is explained by the strong prosodic and weak semantic characteristics of the IDS stimuli. Infants' language-general and adults' language-specific speech perception is reflected in the ERPs, although 4-month-olds show first signs of specialisation.

  • 46.
    Klintfors, Eeva
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Sundberg, Ulla
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Lacerda, Francisco
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Marklund, Ellen
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Gustavsson, Lisa
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Bjursäter, Ulla
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Schwarz, Iris-Corinna
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Söderlund, Göran
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Development of communicative skills in 8- to 16-month-old children: A longitudinal study2008In: Proceedings of The 9th International Conference on Spoken Language Processing, Interspeech2008, Brisbane, September, 2008., 2008, p. 1972-1975Conference paper (Refereed)
    Abstract [en]

    The aim of this study was to assess the development of communicative skills in 8- to 16-month-old children. Information on 24 Swedish children’s speech comprehension and production, as well as their utilization of communicative gestures, was collected. A version of the MacArthur Communicative Development Inventory, the Swedish Early Communicative Development Inventory (SECDI), was used. The number of comprehended phrases, the size of receptive and productive vocabularies, as well as the subjects’ gesture score, was estimated according to standardized scoring instructions. The study was performed longitudinally, based on 71 completed forms. The children’s performance was validated against existing norm-data collected from a large set of randomly selected children. The results showed an overall agreement with the norm-data. The subjects’ performance was, however, less stable and delayed by about one month compared to the norm-data. The adequacy of SECDI for screening language delay is discussed.

  • 47.
    Lacerda, Francisco
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Sundberg, Ulla
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Schwarz, Iris-Corinna
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Walberg, Heléne
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Bjursäter, Ulla
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Marklund, Ellen
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Klintfors, Eeva
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Söderlund, Göran
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Gustavsson, Lisa
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Development of lateralised speech perception: Brain activation in 4-month-olds and adults2008In: Papers from the first Birgit Rausing Language Program Conference in Linguistics, Lund, June 2008 / [ed] Kai-Uwe Alter, Lund: Centre for Languages and Literature, Lund University , 2008, p. 45-46Conference paper (Other academic)
    Abstract [en]

    Previous fMRI and PET studies indicated left-hemispheric activation for the processing of speech and phonetic features, and right-hemispheric activation for the processing of emotions and melody. EEG measures are not commonly employed in lateralisation research, nor has lateralisation been extensively investigated in a developmental perspective. This study aims to fill this gap. Although ERP measures cannot precisely localize task-activated cortical regions, differences in voltage reflect scalp electricity in response to the activation of a distinct neural network (Giard et al., 1995). With EEG, we study the lateralisation of speech processing in 4- and 9-month-olds and adults in two conditions: spoken sentences and their spectrally rotated version. Rotation is a technique to render speech unintelligible while preserving language-specific characteristics such as rhythm. A previous EEG study showed no lateralisation in 4-month-olds for the processing of speech versus nonspeech sounds as measured with 65 electrodes (Dehaene-Lambertz, 2000). For refined measurement, we employ 128 electrodes and use infant-directed speech stimuli for both conditions. It has been established that language comprehension tasks elicit bilateral hemispheric activation (Awad, Warren, Scott, Turkheimer, & Wise, 2007; Davis et al., 2007), showing intrahemispheric localisation differences within the left hemisphere for phonetic features and comprehension in speech processing tasks (Obleser, Zimmermann, Van Meter, & Rauschecker, 2007; Scott, Blank, Rosen, & Wise, 2000). As sentence comprehension increases with age, we predict an increase of activation in the left hemisphere when comparing 4- to 9-month-olds, and 9-month-olds to adults. The melodic and emotional components of the infant-directed speech stimuli should elicit right-hemispheric activation, which is predicted to decrease with age. Therefore, the measured differences between left- and right-hemispheric activation are predicted to increase with age. Left-hemispheric activation is also expected for the rotated speech condition due to the speech-likeness of the stimuli, however to a lesser degree than in the natural sentence condition (Ischebeck, Friederici, & Alter, 2007). This activation difference between the speech and rotated speech conditions should also increase with age as comprehension increases. The results are presented in regard to these hypotheses and discussed in the light of recent research.

  • 48.
    Marklund, Ellen
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Infants' ability to form verb-action associations2008Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Four- to eight-month-old infants (n=56) were examined on their ability to acquire verb meaning. In a visual preference procedure, they were tested on their ability to form verb-action associations by detecting the correlation between auditory speech stimuli and actions presented in short movie clips on a screen. If associations were formed, infants were expected to significantly modify their looking behavior after exposure, looking closer to the target than during baseline. Instead of total looking time, distance to target was the chosen response measure. Eight-month-olds, as well as a reference group of adults, acquired the verb-action associations. Thus, eight months is the youngest age at which verb meaning acquisition has been demonstrated so far.

  • 49.
    Marklund, Ellen
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Bjursäter, Ulla
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Gustavsson, Lisa
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Lacerda, Francisco
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    An exploration of the consequences of vocalizations in early infancy2007In: The Scandinavian Workshop on Applied Eye-Tracking, 2007Conference paper (Refereed)
  • 50.
    Klintfors, Eeva
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Lacerda, Francisco
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Marklund, Ellen
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Investigating the Emergence of Speech Communication: A Study of Infants' Ability to Predict Phonetic Information2007In: Proceedings of EuroCogSci07: The European Cognitive Science Conference 2007, European Cultural Center of Delphi, Delphi/Greece, May 23-27, 2007, East Sussex: Hove , 2007, p. 694-699Conference paper (Refereed)
    Abstract [en]

    The introduction of this paper provides an overview of infants' prediction skills for action goals, as well as their ability to predict perceptual acoustic information. The neurological correlates of prediction skills in general are discussed. A central hypothesis under investigation is that there are commonalities between the development of speech and manipulation. The current research focuses on the communication mode, investigating infants' ability to associate images of familiar objects with auditory stimuli presented both as intact whole words and as disrupted (partly noise-replaced) spoken words. The looking behaviour of the infants was measured with the Tobii eye-tracking device. The results suggest that 11- to 16-month-old infants recognize the target object when the word referring to it was intact, i.e. when the name of the object was presented in its entirety. However, the infants did not seem to recognize the target object when the word referring to it was partially masked so that only its initial phonetic information was presented. These results indicate that young infants are sensitive to the phonetic information of words and may need more extensive linguistic experience in order to derive full lexical forms from partially masked words. The paper concludes with suggestions for future demonstrations of anticipation of speech.
