51 - 67 of 67
  • 51.
    Lacerda, Francisco
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Sundberg, Ulla
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Schwarz, Iris-Corinna
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Walberg, Heléne
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Bjursäter, Ulla
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Marklund, Ellen
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Klintfors, Eeva
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Söderlund, Göran
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Gustavsson, Lisa
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Development of lateralised speech perception: Brain activation in 4-month-olds and adults (2008). In: Papers from the first Birgit Rausing Language Program Conference in Linguistics, Lund, June 2008 / [ed] Kai-Uwe Alter, Lund: Centre for Languages and Literature, Lund University, 2008, p. 45-46. Conference paper (Other academic)
    Abstract [en]

    Previous fMRI and PET studies indicated left hemispheric activation for the processing of speech and phonetic features, and right hemispheric activation for the processing of emotions and melody. EEG measures are not commonly employed in lateralisation research, nor has lateralisation been extensively investigated from a developmental perspective. This study aims to fill this gap. Although ERP measures cannot precisely localise task-activated cortical regions, differences in voltage reflect scalp electrical activity in response to the activation of a distinct neural network (Giard et al., 1995). With EEG, we study the lateralisation of speech processing in 4- and 9-month-olds and adults in two conditions, spoken sentences and their spectrally rotated version. Rotation is a technique that renders speech unintelligible while preserving language-specific characteristics such as rhythm. A previous EEG study showed no lateralisation in 4-month-olds for the processing of speech versus nonspeech sounds as measured with 65 electrodes (Dehaene-Lambertz, 2000). For refined measurement, we employ 128 electrodes and use infant-directed speech stimuli for both conditions. It has been established that language comprehension tasks elicit bilateral hemispheric activation (Awad, Warren, Scott, Turkheimer, & Wise, 2007; Davis et al., 2007), with intrahemispheric localisation differences within the left hemisphere for phonetic features and comprehension in speech processing tasks (Obleser, Zimmermann, Van Meter, & Rauschecker, 2007; Scott, Blank, Rosen, & Wise, 2000). As sentence comprehension increases with age, we predict an increase of activation in the left hemisphere when comparing 4- to 9-month-olds and 9-month-olds to adults. The melodic and emotional components of the infant-directed speech stimuli should elicit right-hemispheric activation, which is predicted to decrease with age. Therefore, the measured differences between left- and right-hemispheric activation are predicted to increase with age. Left-hemispheric activation is also expected for the rotated speech condition due to the speech-likeness of the stimuli, although to a lesser degree than in the natural sentence condition (Ischebeck, Friederici, & Alter, 2007). This activation difference between the speech and rotated speech conditions should also increase with age as comprehension increases. The results are presented in regard to these hypotheses and discussed in the light of recent research.
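    The spectral rotation mentioned above is not described in detail in the abstract. As a hedged illustration, the sketch below shows one common way such rotation is implemented (low-pass filtering followed by ring modulation, in the spirit of Blesser's technique); the function name, cut-off frequency and filter order are assumptions for illustration, not values taken from the study.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def rotate_spectrum(signal, fs, f_rot=4000.0):
    """Spectrally rotate `signal` so that energy at frequency f moves to
    f_rot - f (rotation about f_rot / 2). The result is unintelligible but
    preserves the amplitude envelope and rhythm of the original speech.
    f_rot = 4 kHz is an illustrative choice, not taken from the study."""
    # Keep only the band that will be rotated (0 .. f_rot).
    sos = butter(8, f_rot, btype="low", fs=fs, output="sos")
    band = sosfiltfilt(sos, signal)
    # Ring modulation with a sinusoid at f_rot mirrors the band around f_rot / 2
    # and creates an image above f_rot ...
    t = np.arange(len(band)) / fs
    rotated = band * np.sin(2.0 * np.pi * f_rot * t)
    # ... which the same low-pass filter removes, leaving the flipped spectrum.
    return sosfiltfilt(sos, rotated)
```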

  • 52.
    Lim, Sung-joo
    et al.
    Carnegie Mellon University.
    Holt, Lori
    Carnegie Mellon University.
    Lacerda, Francisco
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Learning acoustically complex word-like units within a video-game training paradigm (2011). In: 161st Meeting of the Acoustical Society of America / [ed] Allan D. Pierce, 2011, p. 2661-2661. Conference paper (Refereed)
    Abstract [en]

    Over the course of language development, infants learn native speech categories and word boundaries from speech input. Although speech category learning and word segmentation learning occur in parallel, most investigations have focused on one, assuming somewhat mature development of the other. To investigate the extent to which listeners can simultaneously solve the categorization and segmentation learning challenges, we created an artificial, non-linguistic stimulus space that modeled the acoustic complexities of natural speech by recording a single talker's multiple utterances of a set of sentences containing four keywords. There was acoustic variability across utterances, presenting a categorization challenge. The keywords were embedded in continuous speech, presenting a segmentation challenge. Sentences were spectrally rotated, rendering them wholly unintelligible, and presented within a video-game training paradigm that does not rely upon explicit feedback and yet is effective in training non-speech and non-native speech categorization [Wade & Holt, 2005; Lim & Holt, submitted]. With just 2 h of play, adult listeners could reliably extract word-length sound categories from continuous sound streams and generalized learning to novel tokens. The amount of "sentence" variability within training did not influence learning. [Research supported by NIH, NSF, and Riksbanken.]

  • 53.
    Lund, Kristina
    et al.
    SVT: Hjärnkontoret.
    Lacerda, Francisco
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Rösten: (Hjärnkontoret, SVT, den 10 oktober 2007, 18.30) (2007). Other (Other (popular science, discussion, etc.))
    Abstract [sv]

    The programme includes a segment on the voice, in which Per-Åke Lindblad (Department of Phoniatrics, Karolinska Institutet) and Francisco Lacerda (Phonetics, Department of Linguistics, Stockholm University) take part and talk about the voice. The recordings were made at Huddinge Hospital, Karolinska Institutet, and in the phonetics laboratory at the Department of Linguistics, Stockholm University.

    The programme is aimed at schoolchildren, around 12 years old. The presenter is Kristina Lund.

  • 54.
    Marklund, Ellen
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Bjursäter, Ulla
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Gustavsson, Lisa
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Lacerda, Francisco
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    An exploration of the consequences of vocalizations in early infancy (2007). In: The Scandinavian Workshop on Applied Eye-Tracking, 2007. Conference paper (Refereed)
  • 55.
    Marklund, Ellen
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Ericsson, Anna
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Lacerda, Francisco
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Cross-modal magnitude matching as a precursor of multi-modal speech perception (2010). In: Proceedings from Fonetik 2010, Lund, June 2-4, 2010 / [ed] Susanne Schötz and Gilbert Ambrazaitis, Lund: Department of Linguistics and Phonetics, Lund University, 2010, p. 69-73. Conference paper (Refereed)
    Abstract [en]

    Four- to ten-month-old infants (n=58) were examined on their ability to match magnitude across modalities. Their looking behaviour was recorded as they were presented with an intensity-modulated auditory stimulus and three possible visual matches. The mean looking times towards a visual target (size envelope matching the intensity envelope of the auditory stimulus) and a non-target were calculated. Five-month-olds and seven- to ten-month-olds show a significant looking preference towards the target, as does an adult control group. Four- and six-month-olds do not.

  • 56.
    Marklund, Ellen
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Lacerda, Francisco
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Klintfors, Eeva
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Gustavsson, Lisa
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Sundberg, Ulla
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Bjursäter, Ulla
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Potential relevance of general purpose mechanisms to the onset of language: Audio-visual integration of nouns and verbs (2005). In: ESF Research Conference on Brain Development and Cognition in Human Infants: From Action to Cognition, 2005. Conference paper (Other academic)
  • 57.
    Marklund, Ellen
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Lacerda, Francisco
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Schwarz, Iris-Corinna
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Using rotated speech to approximate the acoustic mismatch negativity response to speech (2018). In: Brain and Language, ISSN 0093-934X, E-ISSN 1090-2155, Vol. 176, p. 26-35. Article in journal (Refereed)
    Abstract [en]

    The mismatch negativity (MMN) response is influenced by the magnitude of the acoustic difference between standard and deviant, and the response is typically larger to linguistically relevant changes than to linguistically irrelevant changes. Linguistically relevant changes between standard and deviant typically co-occur with differences between the two acoustic signals. It is therefore not straightforward to determine the contribution of each of those two factors to the MMN response. This study investigated whether spectrally rotated speech can be used to determine the impact of the acoustic difference on the MMN response to a combined linguistic and acoustic change between standard and deviant. Changes between rotated vowels elicited an MMN of comparable amplitude to the one elicited by a within-category vowel change, whereas the between-category vowel change resulted in an MMN amplitude of greater magnitude. A change between rotated vowels resulted in an MMN amplitude more similar to that of a within-vowel change than a complex tone change did. This suggests that the MMN amplitude reflecting the acoustic difference between two speech sounds can be well approximated by the MMN amplitude elicited in response to their rotated counterparts, in turn making it possible to estimate the part of the response specific to the linguistic difference.
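    The subtraction logic described in this abstract can be summarised in a short sketch. This is only an illustration of the idea, assuming averaged ERP difference waves as NumPy arrays; the function names and the analysis pipeline are not taken from the article.

```python
import numpy as np

def mismatch_response(deviant_erp: np.ndarray, standard_erp: np.ndarray) -> np.ndarray:
    """Difference wave (deviant minus standard) for one condition, given
    averaged ERPs as arrays of voltage over time."""
    return deviant_erp - standard_erp

def linguistic_component(mmn_speech: np.ndarray, mmn_rotated: np.ndarray) -> np.ndarray:
    """Estimate of the part of the MMN specific to the linguistic change:
    the MMN to a between-category vowel change minus the MMN to the same
    change after spectral rotation, which approximates the purely acoustic
    contribution (the approximation argued for in the abstract)."""
    return mmn_speech - mmn_rotated
```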

  • 58.
    Marklund, Ellen
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Schwarz, Iris-Corinna
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Lacerda, Francisco
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Amount of speech exposure predicts vowel perception in four-to-eight-month-olds. Manuscript (preprint) (Other academic)
    Abstract [en]

    During the first year of life, infants shift their focus in speech perception from acoustic to linguistic information. This perceptual reorganization is related to exposure, and a direct relation has been demonstrated previously between amount of daily language exposure and mismatch response (MMR) amplitude to a native consonant contrast at around one year of age. The present study investigates the same relation between amount of speech exposure and MMR amplitude to a native vowel contrast at four to eight months of age. Importantly, the present study uses spectrally rotated speech in an effort to take general neural maturation into account. The amplitude of the part of the MMR that is tied specifically to speech processing correlates with amount of daily speech exposure, as estimated using the LENA system.

  • 59.
    Marklund, Ellen
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Schwarz, Iris-Corinna
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Lacerda, Francisco
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Mismatch negativity at Fz in response to within-category changes of the vowel /i/ (2014). In: NeuroReport, ISSN 0959-4965, E-ISSN 1473-558X, Vol. 25, no 10, p. 756-759. Article in journal (Refereed)
    Abstract [en]

    The amplitude of the mismatch negativity response for acoustic within-category deviations in speech stimuli was investigated by presenting participants with different exemplars of the vowel /i/ in an oddball paradigm. The deviants differed from the standard either in terms of fundamental frequency, the first formant, or the second formant. Changes in fundamental frequency are generally more salient than changes in the first formant, which in turn are more salient than changes in the second formant. The mismatch negativity response was expected to reflect this with greater amplitude for more salient deviations. The fundamental frequency deviants did indeed result in greater amplitude than both first formant deviants and second formant deviants, but no difference was found between the first formant deviants and the second formant deviants. It is concluded that a greater difference between standard and within-category deviants across different acoustic dimensions results in greater mismatch negativity amplitude, suggesting that linguistically irrelevant changes in speech sounds may be processed similarly to nonspeech sound changes.

  • 60.
    Marklund, Ellen
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Schwarz, Iris-Corinna
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Lacerda, Francisco
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Vowel categorization correlates with speech exposure in 8-month-olds (2017). Conference paper (Refereed)
    Abstract [en]

    During the first year of life, infants' ability to discriminate non-native speech contrasts attenuates, whereas their ability to discriminate native contrasts improves. This transition reflects the development of speech sound categorization, and is hypothesized to be modulated by exposure to spoken language. The ERP mismatch response has been used to quantify discrimination ability in infants, and its amplitude has been shown to be sensitive to amount of speech exposure at the group level (Rivera-Gaxiola et al., 2011). In the present ERP study, the difference in mismatch response amplitudes for spoken vowels and for spectrally rotated vowels quantifies categorization in 8-month-old infants (N=15, 7 girls). This categorization measure was tested for correlation with infants' daily exposure to male speech, female speech, and the sum of male and female speech, as measured by all-day home recordings and analyzed using LENA software. A positive correlation was found between the categorization measure and total amount of daily speech exposure (r = .526, p = .044). The present study is the first to report a relation between speech exposure and speech sound categorization in infants at the subject level, and the first to compensate for the acoustic part of the mismatch response in this context.
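    A minimal sketch of the reported correlation analysis is given below, assuming one categorization value (MMR to vowels minus MMR to rotated vowels) and one LENA-derived exposure estimate per infant. The numbers are randomly generated placeholders, not the study's data, and the variable names are assumptions.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_infants = 15  # the abstract reports N = 15

# Placeholder values for illustration only.
exposure = rng.uniform(60, 240, n_infants)                          # minutes of speech per day
categorization = 0.005 * exposure + rng.normal(0, 0.3, n_infants)   # MMR difference amplitude

r, p = pearsonr(categorization, exposure)
print(f"r = {r:.3f}, p = {p:.3f}")  # the real data gave r = .526, p = .044
```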

  • 61.
    Marklund, Ulrika
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Lacerda, Francisco
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Schwarz, Iris-Corinna
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Relationship between parent-rated productive vocabulary size and phonological complexity in Swedish infants (2011). Conference paper (Refereed)
    Abstract [en]

    The Swedish Communicative Development Inventory (SECDI) is an important tool to assess infants' productive vocabulary as reported by parents. The instructions SECDI gives to parents and their intuitive judgements naturally favour a strong semantic perspective. This study investigates the relationship between the reported productive vocabulary size and the phonological complexity of infant utterances. Productive vocabulary size was assessed in 17- to 18-month-olds (N=330) and in 20- to 21-month-olds (N=85). It is hypothesised that words with low phonological complexity are more frequently reported by parents and that phonological complexity will increase with infant age. Productive vocabulary size was measured from parental reports submitted via an online version of SECDI. To evaluate phonological complexity, only the single-word section was used; 16 items consisting of lexicalised phrases, family names or multiple alternative utterances were excluded. Phonological complexity was computed as the sum of the number of syllables (1 to 4), consonant clusters (0 to 4), and fricatives (0 to 3) occurring in each of the remaining 694 words. It ranged from 1 to 9 (low 1-3; high 7-9). Parents reported significantly more words with low phonological complexity. There is a significant interaction between the complexity level of the reported words and infant age. Words with more syllables, consonant clusters or fricatives were less frequent in the parental reports. This shows that data acquired with SECDI are not necessarily limited to a semantic perspective but can also provide information about phonological complexity.
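    The scoring rule described in the abstract translates directly into a small helper, shown below as a hypothetical re-implementation; the 'mid' band label and the example word are assumptions, while the score ranges and the low/high bands follow the abstract.

```python
def phonological_complexity(n_syllables: int, n_clusters: int, n_fricatives: int) -> int:
    """Complexity score as described above: number of syllables (1-4) plus
    consonant clusters (0-4) plus fricatives (0-3), giving a value from 1 to 9."""
    return n_syllables + n_clusters + n_fricatives

def complexity_band(score: int) -> str:
    """Low (1-3) and high (7-9) bands as in the abstract; 'mid' is an assumed
    label for the remaining scores (4-6)."""
    if score <= 3:
        return "low"
    if score >= 7:
        return "high"
    return "mid"

# Example: a one-syllable word with no consonant clusters and no fricatives
# (e.g. Swedish "katt") scores 1 + 0 + 0 = 1, i.e. low complexity.
assert complexity_band(phonological_complexity(1, 0, 0)) == "low"
```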

  • 62.
    Marklund, Ulrika
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Marklund, Ellen
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Lacerda, Francisco
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Schwarz, Iris-Corinna
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Pause and utterance duration in child-directed speech in relation to child vocabulary size (2015). In: Journal of Child Language, ISSN 0305-0009, E-ISSN 1469-7602, Vol. 42, no 5, p. 1158-1171. Article in journal (Refereed)
    Abstract [en]

    This study compares parental pause and utterance duration in conversations with Swedish-speaking children at age 1;6 who have either a large, typical, or small expressive vocabulary, as measured by the Swedish version of the MacArthur-Bates CDI. The adjustments that parents make when they speak to children are similar across all three vocabulary groups; they use longer utterances than when speaking to adults, and respond faster to children than they do to other adults. However, overall pause duration varies with the vocabulary size of the children, and as a result durational aspects of the language environment to which the children are exposed differ between groups. Parents of children in the large vocabulary size group respond faster to child utterances than do parents of children in the typical vocabulary size group, who in turn respond faster to child utterances than do parents of children in the small vocabulary size group.

  • 63. Markowski, Anne
    et al.
    Lacerda, Francisco
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Gustavsson, Lisa
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Lära språk kan vara en barnlek (2007). Other (Other (popular science, discussion, etc.))
    Abstract [sv]

    With the help of infant babbling and a robot, Swedish researchers want to show how we learn language.

  • 64.
    Schwarz, Iris-Corinna
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Forsén, Malin
    Department of Clinical Science, Intervention and Technology.
    Johansson, Linnea
    Department of Clinical Science, Intervention and Technology.
    Lång, Catarina
    Department of Clinical Science, Intervention and Technology.
    Narel, Anna
    Department of Clinical Science, Intervention and Technology.
    Valdés, Tanya
    Department of Clinical Science, Intervention and Technology.
    Lacerda, Francisco
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Language-specific speech perception as mismatch negativity in 10-month-olds' ERP data (2009). In: Proceedings FONETIK 2009 - The XXIIth Swedish Phonetics Conference / [ed] Department of Linguistics, Stockholm University, Stockholm: Department of Linguistics, 2009, p. 130-135. Conference paper (Other academic)
    Abstract [en]

    Discrimination of native and nonnative speech contrasts, the heart of the concept of language-specific speech perception, is sensitive to developmental change in speech perception during infancy. Using the mismatch negativity paradigm, seven 10-month-olds from a Swedish language environment were tested on their perception of six different consonantal and tonal Thai speech contrasts, native and nonnative to the infants. Infant brain activation in response to the speech contrasts was measured with event-related potentials (ERPs). The ERPs show mismatch negativity at 300 ms, significant for contrast change in the native condition, but not for contrast change in the nonnative condition. Differences in native and nonnative speech discrimination are clearly reflected in the ERPs and confirm earlier findings obtained by behavioural techniques. ERP measurement thus suitably complements infant speech discrimination research.

  • 65.
    Schwarz, Iris-Corinna
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Lacerda, Francisco
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Walberg, Heléne
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Marklund, Ellen
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Cerebral activation patterns of speech perception in 4-month-olds and adults (2009). In: BrainTalk: Discourse with and in the Brain / [ed] Kai Alter, Merle Horne, Magnus Lindgren, Mikael Roll, Janne von Koss Torkildsen, Lund, Sweden: Centre for Languages and Literature, Lund University, 2009, p. 231-246. Chapter in book (Other (popular science, discussion, etc.))
    Abstract [en]

    Speech perception in infancy is language-general, but language-specific in adulthood. Is this reflected in event-related brain potentials (ERPs)? ERPs from 128 electrodes were recorded for three speech perception conditions in 4-month-old Swedish infants (N=9) and adults (N=13). The stimuli were simple sentences in infant-directed speech (IDS) in Swedish, familiar in semantic content and prosodic characteristics; in spectrally rotated Swedish with incomprehensible semantic content, but familiar prosodic characteristics; and in Portuguese, unfamiliar in both aspects. As predicted, infants show bilateral activation across conditions, but also present two left-hemispheric advantages as potential first signs of native language specialisation. In adults, left-hemispheric activation shows condition-sensitive ERP patterns: processing rotated Swedish is similar to Swedish, but different to Portuguese. The right-hemispheric dominance in adults is explained by the strong prosodic and weak semantic characteristics of the IDS stimuli. Infants' language-general and adults' language-specific speech perception is reflected in the ERPs, although 4-month-olds show first signs of specialisation.

  • 66.
    Söderlund, Göran
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Marklund, Ellen
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Lacerda, Francisco
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Auditory White Noise Enhances Cognitive Performance Under Certain Conditions: Examples from Visuo-Spatial Working Memory and Dichotic Listening Tasks (2009). In: Proceedings FONETIK 2009 / [ed] Peter Branderud & Hartmut Traunmüller, Stockholm: Department of Linguistics, Stockholm University, 2009, p. 160-164. Conference paper (Refereed)
    Abstract [en]

    This study examines when external auditory noise can enhance performance in a dichotic listening task and a visuo-spatial working memory task. Noise is typically conceived of as detrimental to cognitive performance; however, given the mechanism of stochastic resonance (SR), a certain amount of noise can benefit performance. In particular, we predict that low performers will be aided by noise, whereas high performers will decline in performance under the same condition. Data from two experiments will be presented; participants were students at Stockholm University.
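    The stochastic resonance argument can be illustrated with a toy threshold detector, a sketch only and not the model used in the study: a weak, subthreshold signal becomes detectable at a moderate noise level, while stronger noise degrades detection again.

```python
import numpy as np

def detectability(signal_amp: float, noise_sd: float, threshold: float = 1.0,
                  n_trials: int = 50_000, seed: int = 0) -> float:
    """Hit rate minus false-alarm rate for a simple threshold detector.
    For a subthreshold signal this peaks at an intermediate noise level
    (the stochastic-resonance effect) and falls off for stronger noise."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(n_trials) * noise_sd
    hits = np.mean(signal_amp + noise > threshold)   # signal-plus-noise trials
    false_alarms = np.mean(noise > threshold)        # noise-only trials
    return hits - false_alarms

for sd in (0.05, 0.4, 3.0):
    print(f"noise sd = {sd}: detectability = {detectability(0.8, sd):.3f}")
```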

  • 67.
    Traunmüller, Hartmut
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Lacerda, Francisco
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Perceptual relativity in identification of two-formant vowels (1987). In: Speech Communication, ISSN 0167-6393, E-ISSN 1872-7182, Vol. 6, no 2, p. 143-157. Article in journal (Refereed)
    Abstract [en]

    It is attempted to reduce the phonetic quality of vowels to the positions of the peaks in their tonotopical spectra relative to the other peaks, simultaneous or preceding in context. Synthetic two-formant vowels were identified by speakers of languages that differentiate richly among high vowels (Swedish, Turkish). The parameters F1 (204-801 Hz) and F2' (509-3702 Hz) were systematically varied in steps of 0.75 critical bandwidth. F0 was kept close below F1 in all vowels. These were presented in two orders with subsequently rising or falling F2'. Most subjects heard predominantly close vowels. The "isophones" of most subjects could be described in a uniform manner implying a normalization with respect to two reference points, one at a distance of 3 critical bands above F0 and the other one at an absolute position corresponding to 2.8 kHz. It is speculated that this second reference point might represent a default position of the third formant or the like.
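    A short sketch of the normalization idea in the abstract is given below, using Traunmüller's analytical approximation of the critical-band (Bark) scale; the function names and the example frequencies are illustrative assumptions, while the 3-critical-band offset above F0 and the 2.8 kHz reference come from the abstract.

```python
def hz_to_bark(f_hz: float) -> float:
    """Traunmüller's analytical approximation of the critical-band (Bark) scale."""
    return 26.81 * f_hz / (1960.0 + f_hz) - 0.53

def relative_peak_positions(f0: float, f1: float, f2_prime: float,
                            high_ref_hz: float = 2800.0):
    """Positions (in Bark) of the two spectral peaks relative to the two reference
    points suggested in the abstract: F1 relative to a point 3 critical bands above
    F0, and F2' relative to a fixed reference near 2.8 kHz. A sketch of the idea,
    not the paper's exact model."""
    low_ref = hz_to_bark(f0) + 3.0
    return hz_to_bark(f1) - low_ref, hz_to_bark(f2_prime) - hz_to_bark(high_ref_hz)

# Example with frequencies inside the stimulus ranges given in the abstract
# (F1: 204-801 Hz, F2': 509-3702 Hz); F0 is assumed just below F1.
print(relative_peak_positions(f0=190.0, f1=204.0, f2_prime=2300.0))
```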
