  • 1.
    Alvarsson, Jesper J.
    et al.
    Stockholm University, Faculty of Social Sciences, Department of Psychology.
    Nordström, Henrik
    Stockholm University, Faculty of Social Sciences, Department of Psychology.
    Lundén, Peter
    Nilsson, Mats E.
    Stockholm University, Faculty of Social Sciences, Department of Psychology.
    Aircraft noise and speech intelligibility in an outdoor living space (2014). In: Journal of the Acoustical Society of America, ISSN 0001-4966, E-ISSN 1520-8524, Vol. 135, no 6, p. 3455-3462. Article in journal (Refereed)
    Abstract [en]

    Studies of effects on speech intelligibility from aircraft noise in outdoor places are currently lacking. To explore these effects, first-order ambisonic recordings of aircraft noise were reproduced outdoors in a pergola. The average background level was 47 dB LAeq. Lists of phonetically balanced words (LASmax,word = 54 dB) were reproduced simultaneously with aircraft passage noise (LASmax,noise = 72-84 dB). Twenty individually tested listeners wrote down each presented word while seated in the pergola. The main results were (i) aircraft noise negatively affects speech intelligibility at sound pressure levels that exceed those of the speech sound (signal-to-noise ratio, S/N < 0), and (ii) the simple A-weighted S/N ratio was nearly as good an indicator of speech intelligibility as were two more advanced models, the Speech Intelligibility Index and Glasberg and Moore's [J. Audio Eng. Soc. 53, 906-918 (2005)] partial loudness model. This suggests that any of these indicators is applicable for predicting effects of aircraft noise on speech intelligibility outdoors.
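
    To make the simple A-weighted S/N indicator above concrete, here is a minimal sketch (Python) of the signal-to-noise ratio implied by the reported levels; the particular noise levels picked within the 72-84 dB range are illustrative, not conditions from the experiment.

        # Minimal sketch: A-weighted signal-to-noise ratio (S/N) for the reported levels.
        # The speech level is taken from the abstract; the noise levels are example
        # values within the reported 72-84 dB range.
        L_speech = 54.0                    # LASmax of the words, dB
        noise_levels = [72.0, 78.0, 84.0]  # illustrative LASmax values of aircraft passages, dB

        for L_noise in noise_levels:
            snr = L_speech - L_noise       # A-weighted S/N in dB (negative: noise exceeds speech)
            print(f"Noise {L_noise:.0f} dB -> S/N = {snr:+.0f} dB")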

  • 2.
    Axelsson, Östen
    Stockholm University, Faculty of Social Sciences, Department of Psychology.
    Progress in soundscape research requires a common agenda (2011). In: Journal of the Acoustical Society of America, ISSN 0001-4966, E-ISSN 1520-8524, Vol. 130, no 4, p. 2495-2495. Article in journal (Other academic)
    Abstract [en]

    It is commonly believed that progress and success in any field require competition. This is probably true, but this belief implies that all competitors have a common view on the objectives. There would not be much competition if all parties ran off in opposite directions, striving to achieve different goals. Nor would it lead to much progress. The present session calls for networking and international collaboration in soundscape research. For such collaboration to be successful, it is critical to agree on a common agenda; a mission; an objective. Recent developments in soundscape research make it evident that the objective must be practical and applicable. Our minds must be set to implementing soundscape research in practice to avoid exhausting academic debates, which tend to be ends in themselves and do not contribute to progress. Two excellent, recent examples of international collaboration in soundscape research, contributing to progress, are ISO/TC 43/SC 1/WG 54 and the European COST Action TD0804 “Soundscape of European Cities and Landscapes.” Both illustrate the need for international and interdisciplinary collaboration among acousticians, architects, and urban planners to accelerate progress in soundscape research. The present paper presents possible topics for a common agenda in soundscape research.

  • 3.
    Axelsson, Östen
    Stockholm University, Faculty of Social Sciences, Department of Psychology.
    The ISO 12913 series on soundscape: An update, May 2012 (2012). In: Journal of the Acoustical Society of America, ISSN 0001-4966, E-ISSN 1520-8524, Vol. 131, no 4, article id 3381. Article in journal (Refereed)
    Abstract [en]

    In February 2009 the working group ISO/TC 43/SC 1/WG 54 “Perceptual assessment of soundscape quality”, of the International Organization for Standardization (ISO), began preparing the first International Standard on soundscape, “ISO 12913-1 Acoustics — Soundscape — Part 1: Definition and conceptual framework”. This paper presents the latest version of the definition of “soundscape” and its conceptual framework. At its current state of development the framework highlights seven general concepts and their relationships: (1) sound sources, (2) acoustic environment, (3) auditory sensations, (4) interpretation of auditory sensations, (5) responses, (6) context, and (7) outcomes. By providing a standard reference, the working group aims at international consensus in order to avoid ambiguity, and to enable conceptual progress in soundscape research. ISO 12913-1 is expected to be published as an International Standard in 2015. Subsequent parts of the ISO 12913 series will deal with minimum reporting requirements in soundscape research, and methods for measuring soundscape quality.

  • 4.
    Axelsson, Östen
    Stockholm University, Faculty of Social Sciences, Department of Psychology.
    Tower of Babel, or why bother about international standards? (2011). In: Journal of the Acoustical Society of America, ISSN 0001-4966, E-ISSN 1520-8524, Vol. 130, no 4, p. 2467-2467. Article in journal (Other academic)
    Abstract [en]

    While a vast nation like the USA has the capacity to be self-sufficient, many countries lack this privilege. Take Sweden as an example, a small country in northern Europe with a population of 9 million. Swedes are proud to be international. And they should be, because how could a nation that has depended on international trade since the days of the Vikings sustain itself without a global economy? International standards support development within this global economy, just like English as a business language facilitates global collaboration. Imagine humanity without these common frames of reference.

  • 5.
    Axelsson, Östen
    et al.
    Stockholm University, Faculty of Social Sciences, Department of Psychology.
    Nilsson, Mats E.
    Stockholm University, Faculty of Social Sciences, Department of Psychology.
    Water features and acoustic diversity of urban parks (2011). In: Journal of the Acoustical Society of America, ISSN 0001-4966, E-ISSN 1520-8524, Vol. 130, no 4, p. 2533-2533. Article in journal (Other academic)
    Abstract [en]

    Water features are well-acknowledged in architecture and urban planning for their visual characteristics. But, how do water features contribute to acoustic diversity and soundscape quality? Visitors in an urban park were recruited to complete a questionnaire on how they perceived the park including its soundscape. Meanwhile, the soundscape was manipulated by turning a fountain on or off at irregular hours. The fountain sounds had a positive effect on soundscape quality in an area close to the fountain, by masking background road-traffic noise. The fountain sound also masked other natural sounds, which may have a negative influence on acoustic diversity and soundscape quality. In addition, some participants may have mistaken the fountain sounds for distant road-traffic noise. Hence, when introducing a water feature in an urban park it is necessary to consider the acoustic characteristics of the water sounds, as well as the placement of the water feature.

  • 6.
    Axelsson, Östen
    et al.
    Stockholm University, Faculty of Social Sciences, Department of Psychology.
    Nilsson, Mats E.
    Stockholm University, Faculty of Social Sciences, Department of Psychology.
    Berglund, Birgitta
    Stockholm University, Faculty of Social Sciences, Department of Psychology.
    A principal components model of soundscape perception (2010). In: Journal of the Acoustical Society of America, ISSN 0001-4966, E-ISSN 1520-8524, Vol. 128, no 5, p. 2836-2846. Article in journal (Refereed)
    Abstract [en]

    There is a need for a model that identifies underlying dimensions of soundscape perception, and which may guide measurement and improvement of soundscape quality. With the purpose to develop such a model, a listening experiment was conducted. One hundred listeners measured 50 excerpts of binaural recordings of urban outdoor soundscapes on 116 attribute scales. The average attribute scale values were subjected to principal components analysis, resulting in three components: Pleasantness, eventfulness, and familiarity, explaining 50, 18 and 6% of the total variance, respectively. The principal-component scores were correlated with physical soundscape properties, including categories of dominant sounds and acoustic variables. Soundscape excerpts dominated by technological sounds were found to be unpleasant, whereas soundscape excerpts dominated by natural sounds were pleasant, and soundscape excerpts dominated by human sounds were eventful. These relationships remained after controlling for the overall soundscape loudness (Zwicker’s N10), which shows that ‘informational’ properties are substantial contributors to the perception of soundscape. The proposed principal components model provides a framework for future soundscape research and practice. In particular, it suggests which basic dimensions are necessary to measure, how to measure them by a defined set of attribute scales, and how to promote high-quality soundscapes.
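
    As a rough illustration of the analysis described above, the following is a minimal sketch (Python, with NumPy and scikit-learn assumed available) of principal components extracted from an excerpt-by-attribute matrix of averaged scale values; the random matrix is a placeholder for the study's ratings.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.preprocessing import StandardScaler

        # Placeholder data: 50 soundscape excerpts rated on 116 attribute scales,
        # averaged over listeners. Random numbers stand in for real ratings.
        rng = np.random.default_rng(0)
        ratings = rng.normal(size=(50, 116))

        # Standardize the attribute scales, then extract three components
        # (interpreted in the paper as pleasantness, eventfulness, and familiarity).
        pca = PCA(n_components=3)
        component_scores = pca.fit_transform(StandardScaler().fit_transform(ratings))
        print(pca.explained_variance_ratio_)  # proportion of variance per component
        print(component_scores.shape)         # (50, 3): one score per excerpt and component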

  • 7.
    Bjerva, Johannes
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Marklund, Ellen
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Engdahl, Johan
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Tengstrand, Lisa
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Lacerda, Francisco
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Preceding non-linguistic stimuli affect categorisation of Swedish plosives (2012). In: Journal of the Acoustical Society of America, ISSN 0001-4966, E-ISSN 1520-8524, Vol. 131, no 4. Article in journal (Refereed)
    Abstract [en]

    Speech perception is highly context-dependent. Sounds preceding speech stimuli affect how listeners categorise the stimuli, regardless of whether the context consists of speech or non-speech. This effect is acoustically contrastive; a preceding context with high-frequency acoustic energy tends to skew categorisation towards speech sounds possessing lower-frequency acoustic energy and vice versa (Mann, 1980; Holt, Lotto, Kluender, 2000; Holt, 2005). Partially replicating Holt's study from 2005, the present study investigates the effect of non-linguistic contexts in different frequency bands on speech categorisation. Adult participants (n=15) were exposed to Swedish syllables from a speech continuum ranging from /da/ to /ga/ varying in the onset frequencies of the second and third formants in equal steps. Contexts preceding the speech stimuli consisted of sequences of sine tones distributed in different frequency bands: high, mid and low. Participants were asked to categorise the syllables as /da/ or /ga/. As hypothesised, high frequency contexts shift the category boundary towards /da/, while lower frequency contexts shift the boundary towards /ga/, compared to the mid frequency context.

  • 8.
    De Coensel, B.
    et al.
    University of California, Berkeley.
    Botteldooren, Dick
    Ghent University.
    Berglund, Birgitta
    Stockholm University, Faculty of Social Sciences, Department of Psychology.
    Nilsson, Mats E.
    Stockholm University, Faculty of Social Sciences, Department of Psychology.
    A computational model for auditory saliency of environmental sound (2009). In: Journal of the Acoustical Society of America, ISSN 0001-4966, E-ISSN 1520-8524, Vol. 125, no 4 (part 2), p. 2528-2528. Article in journal (Refereed)
    Abstract [en]

    Because the information flow received by the human auditory system exceeds the processing capacity of the brain, neural mechanisms engage and guide attention toward prominent parts of the auditory scene. Several computational models for auditory saliency have been proposed recently. Most of these are concerned with speech recognition, and therefore apply high temporal and spectral precision to relatively short sound fragments. Here, a simplified model is described that specifically targets the long exposure times usually considered in soundscape research. The model trades temporal and spectral accuracy for computational speed, but nevertheless implements the key elements that are present in the calculation of complex auditory saliency maps. A simplified “cochleagram” is calculated from the 1/3-octave band spectrogram using the Zwicker model for specific loudness. Saliency is determined based on spectro-temporal irregularities, extracted in parallel at different feature scales, using a center-surround mechanism. Finally, conspicuous peaks are selected using within-feature and between-feature competitions. The model is shown to behave as expected for a number of typical sounds. As an illustration, saliency calculation results for a set of recordings in urban parks are compared with other acoustical descriptors and with perceptual attribute scales from questionnaire studies.
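
    As a minimal sketch (Python, with NumPy and SciPy assumed available) of one building block named above, the following computes a center-surround contrast map from a 1/3-octave band cochleagram; the input array and smoothing scales are placeholders, not the model's actual parameters.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        # Placeholder cochleagram: specific loudness in 28 third-octave bands x 600 time frames.
        rng = np.random.default_rng(1)
        cochleagram = rng.random((28, 600))

        # Center-surround contrast: the difference between a fine-scale and a coarse-scale
        # smoothed version of the cochleagram; positive peaks mark spectro-temporal
        # irregularities that stand out from their surroundings.
        center = gaussian_filter(cochleagram, sigma=(1, 2))
        surround = gaussian_filter(cochleagram, sigma=(4, 16))
        saliency = np.clip(center - surround, 0.0, None)
        print(saliency.shape, float(saliency.max()))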

  • 9.
    De Coensel, Bert
    et al.
    University of California, Berkeley, USA.
    Botteldooren, Dick
    Ghent University, Belgium.
    De Muer, Tom
    Ghent University, Belgium.
    Berglund, Birgitta
    Gösta Ekman Laboratory for Sensory Research, Karolinska Institutet and Stockholm University, Stockholm, Sweden.
    Nilsson, Mats E.
    Gösta Ekman Laboratory for Sensory Research, Karolinska Institutet and Stockholm University, Stockholm, Sweden.
    Lercher, Peter
    Medical University Innsbruck, Austria.
    A model for the perception of environmental sound based on notice-events (2009). In: Journal of the Acoustical Society of America, ISSN 0001-4966, E-ISSN 1520-8524, Vol. 126, no 2, p. 656-665. Article in journal (Refereed)
    Abstract [en]

    An approach is proposed to shed light on the mechanisms underlying human perception of environmental sound that intrudes in everyday living. Most research on exposure-effect relationships aims at relating overall effects to overall exposure indicators in an epidemiological fashion, without including available knowledge on the possible underlying mechanisms. Here, it is proposed to start from available knowledge on audition and perception to construct a computational framework for the effect of environmental sound on individuals. Obviously, at the individual level additional mechanisms (inter-sensory, attentional, cognitive, emotional) play a role in the perception of environmental sound. As a first step, current knowledge is made explicit by building a model mimicking some aspects of human auditory perception. This model is grounded in the hypothesis that long-term perception of environmental sound is determined primarily by short notice-events. The applicability of the notice-event model is illustrated by simulating a synthetic population exposed to typical Flemish environmental noise. From these simulation results, it is demonstrated that the notice-event model is able to mimic the differences between the annoyance caused by road traffic noise exposure and railway traffic noise exposure that are also observed empirically in other studies, and thus could provide an explanation for these differences.

  • 10.
    De Coensel, Bert
    et al.
    Nilsson, Mats E.
    Stockholm University, Faculty of Social Sciences, Department of Psychology. Karolinska Institutet, Sweden.
    Berglund, Birgitta
    Stockholm University, Faculty of Social Sciences, Department of Psychology. Karolinska Institutet, Sweden.
    Brown, A. L.
    Perceptual constancy in auditory perception of distance to railway tracks (2013). In: Journal of the Acoustical Society of America, ISSN 0001-4966, E-ISSN 1520-8524, Vol. 134, no 1, p. 474-480. Article in journal (Refereed)
    Abstract [en]

    Distance to a sound source can be accurately estimated solely from auditory information. With a sound source such as a train that is passing by at a relatively large distance, the most important auditory information for the listener for estimating its distance consists of the intensity of the sound, spectral changes in the sound caused by air absorption, and the motion-induced rate of change of intensity. However, these cues are relative, because prior information/experience of the sound source (its source power, its spectrum, and the typical speed at which it moves) is required for such distance estimates. This paper describes two listening experiments that allow investigation of further prior contextual information taken into account by listeners, viz., whether they are indoors or outdoors. Asked to estimate the distance to the track of a railway, it is shown that listeners assessing sounds heard inside the dwelling based their distance estimates on the expected train passby sound level outdoors rather than on the passby sound level actually experienced indoors. This form of perceptual constancy may have consequences for the assessment of annoyance caused by railway noise.
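
    The intensity cue mentioned above can be made concrete with a minimal sketch (Python): under spherical spreading plus a simple air-absorption term, the received level drops by about 6 dB per doubling of distance, plus a small extra loss. The source level, reference distance, and absorption coefficient below are illustrative assumptions, not values from the experiments.

        import math

        def received_level(source_level_db, distance_m, ref_distance_m=10.0,
                           absorption_db_per_100m=1.0):
            """Level at the listener under spherical spreading plus linear air absorption.
            All parameter values are illustrative assumptions."""
            spreading = 20.0 * math.log10(distance_m / ref_distance_m)
            absorption = absorption_db_per_100m * (distance_m - ref_distance_m) / 100.0
            return source_level_db - spreading - absorption

        # Example: a passby measured as 80 dB at 10 m, heard from 100 m and 200 m away.
        for d in (100.0, 200.0):
            print(d, round(received_level(80.0, d), 1))  # ~59.1 dB and ~52.1 dB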

  • 11.
    Engdahl, Johan
    et al.
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Bjerva, Johannes
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Marklund, Ellen
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Byström, Emil
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Lacerda, Francisco
    Stockholm University, Faculty of Humanities, Department of Linguistics, Phonetics.
    Acoustic analysis of adults imitating infants: a cross-linguistic perspective (2012). In: Journal of the Acoustical Society of America, ISSN 0001-4966, E-ISSN 1520-8524, Vol. 131, no 4. Article in journal (Refereed)
    Abstract [en]

    The present study investigates adult imitations of infant vocalizations in a cross-linguistic perspective. Japanese-learning and Swedish-learning infants were recorded at ages 16-21 and 78-79 weeks. Vowel-like utterances (n=210) were selected from the recordings and presented to Japanese (n=3) and Swedish (n=3) adults. The adults were asked to imitate what they heard, simulating a spontaneous feedback situation between caregiver and infant. Formant data (F1 and F2) was extracted from all utterances and validated by comparing original and formant re-synthesized utterances. The data was normalized for fundamental frequency and time, and the accumulated spectral difference was calculated between each infant utterance and each imitation of that utterance. The mean spectral difference was calculated and compared, grouped by native language of infant and adult, as well as age of the infant. Preliminary results show smaller spectral difference in the imitations of older infants compared to imitations of the younger group, regardless of infant and adult native language. This may be explained by the increasing stability and more speech-like quality of infants' vocalizations as they grow older (and thus have been exposed to their native language for a longer period of time), making their utterances easier for adults to imitate.

  • 12.
    Nilsson, Mats E.
    Stockholm University, Faculty of Social Sciences, Department of Psychology, Perception and psychophysics.
    Learning to extract a large inter-aural level difference in lag clicks (2018). In: Journal of the Acoustical Society of America, ISSN 0001-4966, E-ISSN 1520-8524, Vol. 143, no 6, p. EL456-EL462. Article in journal (Refereed)
    Abstract [en]

    Many blind people learn to use sound reflections to localize objects. However, precedence-effect research has reported evidence both for and against the possibility to improve lateralization of lag clicks preceded by lead clicks. This training study used stimuli more relevant to human echolocation than did previous training studies. One participant, the author, practiced lateralizing a lag-click inter-aural level difference (ILD) of 10 dB for 60 days, with performance measured in the lag-lead peak amplitude ratio at threshold. Clear improvements were observed at interclick intervals of 2-18 ms, suggesting that extracting a large lag-click ILD may improve with practice.

  • 13.
    Nilsson, Mats E.
    et al.
    Stockholm University, Faculty of Social Sciences, Department of Psychology, Perception and psychophysics.
    Tirado, Carlos
    Stockholm University, Faculty of Social Sciences, Department of Psychology, Perception and psychophysics.
    Szychowska, Malina
    Stockholm University, Faculty of Social Sciences, Department of Psychology, Perception and psychophysics.
    Psychoacoustic evidence for stronger discrimination suppression of spatial information conveyed by lag-click interaural time than interaural level differences (2019). In: Journal of the Acoustical Society of America, ISSN 0001-4966, E-ISSN 1520-8524, Vol. 145, no 1, p. 512-524. Article in journal (Refereed)
    Abstract [en]

    Listeners have limited access to spatial information in lagging sound, a phenomenon known as discrimination suppression. It is unclear whether discrimination suppression works differently for interaural time differences (ITDs) and interaural level differences (ILDs). To explore this, three listeners assessed the lateralization (left or right) and detection (present or not) of lag clicks with a large fixed ITD (350 µs) or ILD (10 dB) following a diotic lead click, with inter-click intervals (ICIs) of 0.125-256 ms. Performance was measured on a common scale for both cues: the lag-lead amplitude ratio [dB] at 75% correct answers. The main finding was that the lateralization thresholds, but not detection thresholds, were more strongly elevated for ITD-only than ILD-only clicks at intermediate ICIs (1-8 ms) in which previous research has found the strongest discrimination suppression effects. Altogether, these findings suggest that discrimination suppression involves mechanisms that make spatial information conveyed by lag-click ITDs less accessible to listeners than spatial information conveyed by lag-click ILDs.
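
    A minimal sketch (Python, with NumPy assumed available) of the kind of lead-lag click pair used in this and the preceding lag-click study: a diotic lead click followed, after an inter-click interval, by a lag click carrying an ITD and/or ILD. Sample rate, click duration, and amplitudes are illustrative assumptions, not the studies' stimulus code.

        import numpy as np

        FS = 96000  # sample rate in Hz (assumed)

        def click_pair(ici_ms, itd_us=0.0, ild_db=0.0, lag_lead_ratio_db=0.0, dur_ms=0.1):
            """Stereo lead-lag click pair: diotic lead, then a lag click with an ITD and/or ILD.
            Returns an array of shape (n_samples, 2). Illustrative only."""
            n_click = max(1, int(FS * dur_ms / 1000))
            out = np.zeros((int(FS * (ici_ms + 10) / 1000), 2))
            lag_amp = 10 ** (lag_lead_ratio_db / 20)  # lag level relative to a unit lead click
            out[:n_click, :] += 1.0                   # diotic lead click at time zero
            lag_start = int(FS * ici_ms / 1000)       # lag click after the inter-click interval
            out[lag_start:lag_start + n_click, 0] += lag_amp
            right_start = lag_start + int(FS * itd_us / 1e6)  # right ear delayed by the ITD
            out[right_start:right_start + n_click, 1] += lag_amp * 10 ** (-ild_db / 20)  # attenuated by the ILD
            return out

        stim = click_pair(ici_ms=4.0, itd_us=350.0, lag_lead_ratio_db=-5.0)
        print(stim.shape)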

  • 14.
    Nordström, Henrik
    et al.
    Stockholm University, Faculty of Social Sciences, Department of Psychology, Perception and psychophysics.
    Laukka, Petri
    Stockholm University, Faculty of Social Sciences, Department of Psychology, Perception and psychophysics.
    The time course of emotion recognition in speech and music (2019). In: Journal of the Acoustical Society of America, ISSN 0001-4966, E-ISSN 1520-8524, Vol. 145, no 5, p. 3058-3074. Article in journal (Refereed)
    Abstract [en]

    The auditory gating paradigm was adopted to study how much acoustic information is needed to recognize emotions from speech prosody and music performances. In Study 1, brief utterances conveying ten emotions were segmented into temporally fine-grained gates and presented to listeners, whereas Study 2 instead used musically expressed emotions. Emotion recognition accuracy increased with increasing gate duration and generally stabilized after a certain duration, with different trajectories for different emotions. Above-chance accuracy was observed for ≤ 100 ms stimuli for anger, happiness, neutral, and sadness, and for ≤ 250 ms stimuli for most other emotions, for both speech and music. This suggests that emotion recognition is a fast process that allows discrimination of several emotions based on low-level physical characteristics. The emotion identification points, which reflect the amount of information required for stable recognition, were shortest for anger and happiness for both speech and music, but recognition took longer to stabilize for music vs speech. This, in turn, suggests that acoustic cues that develop over time also play a role for emotion inferences (especially for music). Finally, acoustic cue patterns were positively correlated between speech and music, suggesting a shared acoustic code for expressing emotions.
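
    A minimal sketch (Python, with NumPy assumed available) of the gating manipulation described above: cumulative onset segments of increasing duration cut from a stimulus. The gate durations and sample rate are illustrative assumptions.

        import numpy as np

        def make_gates(signal, fs, gate_durations_ms):
            """Return cumulative onset segments ('gates') of the given durations in ms."""
            return [signal[: int(fs * d / 1000)] for d in gate_durations_ms]

        # Example: 1 s of placeholder audio at 44.1 kHz cut into gates of 50-1000 ms.
        fs = 44100
        audio = np.random.default_rng(2).normal(size=fs)
        gates = make_gates(audio, fs, [50, 100, 250, 500, 1000])
        print([len(g) / fs for g in gates])  # gate durations in seconds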

  • 15.
    Rådsten-Ekman, Maria
    et al.
    Stockholm University, Faculty of Social Sciences, Department of Psychology.
    Lundén, Peter
    Nilsson, Mats E.
    Stockholm University, Faculty of Social Sciences, Department of Psychology.
    Similarity and pleasantness assessments of water-fountain sounds recorded in urban public spaces (2015). In: Journal of the Acoustical Society of America, ISSN 0001-4966, E-ISSN 1520-8524, Vol. 138, no 5, p. 3043-3052. Article in journal (Refereed)
    Abstract [en]

    Water fountains are potential tools for soundscape improvement, but little is known about their perceptual properties. To explore this, sounds were recorded from 32 fountains installed in urban parks. The sounds were recorded with a sound-field microphone and were reproduced using an ambisonic loudspeaker setup. Fifty-seven listeners assessed the sounds with regard to similarity and pleasantness. Multidimensional scaling of similarity data revealed distinct groups of soft variable and loud steady-state sounds. Acoustically, the soft variable sounds were characterized by low overall levels and high temporal variability, whereas the opposite pattern characterized the loud steady-state sounds. The perceived pleasantness of the sounds was negatively related to their overall level and positively related to their temporal variability, whereas spectral centroid was weakly correlated to pleasantness. However, the results of an additional experiment, using the same sounds set equal in overall level, found a negative relationship between pleasantness and spectral centroid, suggesting that spectral factors may influence pleasantness scores in experiments where overall level does not dominate pleasantness assessments. The equal-level experiment also showed that several loud steady-state sounds remained unpleasant, suggesting an inherently unpleasant sound character. From a soundscape design perspective, it may be advisable to avoid fountains generating such sounds.
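
    The three acoustic descriptors referred to above can be illustrated with a minimal sketch (Python, with NumPy assumed available): overall level, temporal variability of the short-term level, and spectral centroid for a mono signal. The frame length and the absence of frequency weighting are simplifying assumptions, not the study's analysis settings.

        import numpy as np

        def descriptors(x, fs, frame_s=0.125):
            """Rough acoustic descriptors for a mono signal x (unweighted, illustrative)."""
            n = int(fs * frame_s)
            frames = x[: len(x) // n * n].reshape(-1, n)
            frame_levels = 10 * np.log10(np.mean(frames ** 2, axis=1) + 1e-12)
            overall_level = 10 * np.log10(np.mean(x ** 2) + 1e-12)       # dB re full scale
            temporal_variability = float(np.std(frame_levels))           # dB
            spectrum = np.abs(np.fft.rfft(x)) ** 2
            freqs = np.fft.rfftfreq(len(x), 1 / fs)
            spectral_centroid = float(np.sum(freqs * spectrum) / np.sum(spectrum))  # Hz
            return overall_level, temporal_variability, spectral_centroid

        fs = 44100
        x = np.random.default_rng(3).normal(size=5 * fs)  # 5 s of placeholder noise
        print(descriptors(x, fs))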

  • 16.
    Sigray, Peter
    et al.
    Stockholm University, Faculty of Science, Department of Meteorology.
    Andersson, Mathias H.
    Stockholm University, Faculty of Science, Department of Zoology.
    Particle motion measured at an operational wind turbine in relation to hearing sensitivity in fish (2011). In: Journal of the Acoustical Society of America, ISSN 0001-4966, E-ISSN 1520-8524, Vol. 130, no 1, p. 200-207. Article in journal (Refereed)
    Abstract [en]

    The effect of sound pressure on the hearing of fish has been extensively investigated in laboratory studies as well as in field trials; in contrast, few studies have examined particle motion. To remedy this dearth of knowledge, an instrument for measuring particle motion was developed and used in a field trial. The particle motion is measured using a neutrally buoyant sphere, which co-oscillates with the fluid motion. The unit was deployed in close vicinity to a wind turbine foundation at Utgrunden wind farm in the Baltic Sea. Measurements of particle motion were undertaken at different distances from the turbine as well as at varying wind speeds. Levels of particle motion were compared to audiograms for cod (Gadus morhua L.) and plaice (Pleuronectes platessa L.).
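
    For context on the measured quantity: in a free plane wave, particle velocity follows from sound pressure through the characteristic impedance of the medium, u = p / (ρc); close to a foundation or in shallow water this simple relation no longer holds, which is one motivation for measuring particle motion directly. A minimal sketch (Python) of the plane-wave conversion with approximate seawater constants:

        RHO_SEAWATER = 1025.0  # density, kg/m^3 (approximate)
        C_SEAWATER = 1500.0    # sound speed, m/s (approximate)

        def plane_wave_particle_velocity(pressure_pa):
            """Particle velocity (m/s) of a free plane wave: u = p / (rho * c).
            Valid only far from boundaries, which is why the study measures
            particle motion directly rather than inferring it from pressure."""
            return pressure_pa / (RHO_SEAWATER * C_SEAWATER)

        print(plane_wave_particle_velocity(1.0))  # ~6.5e-7 m/s for a pressure of 1 Pa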

  • 17.
    Traunmüller, Hartmut
    Stockholm University, Faculty of Humanities, Department of Linguistics.
    Analytical expressions for the tonotopic sensory scale (1990). In: Journal of the Acoustical Society of America, ISSN 0001-4966, E-ISSN 1520-8524, Vol. 88, no 1, p. 97-100. Article in journal (Refereed)
    Abstract [en]

    Accuracy and simplicity of analytical expressions for the relations between frequency and critical bandwidth as well as critical-band rate (in Bark) are assessed for the purpose of applications in speech perception research and in speech technology. The equivalent rectangular bandwidth (ERB) is seen as a measure of frequency resolution, while the classical critical-band rate is considered a measure of tonotopic position. For the conversion of frequency to critical-band rate, and vice versa, the invertible formula z = [26.81 / (1 + 1960/f)] − 0.53 is proposed. Within the frequency range of the perceptually essential vowel formants (0.2–6.7 kHz), it agrees to within ±0.05 Bark with the Bark scale, originally published in the form of a table.
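
    The proposed conversion is straightforward to apply; below is a minimal sketch (Python) of the forward and inverse mapping stated in the abstract.

        def hz_to_bark(f):
            """Critical-band rate z (Bark) from frequency f (Hz): z = 26.81 / (1 + 1960/f) - 0.53."""
            return 26.81 / (1.0 + 1960.0 / f) - 0.53

        def bark_to_hz(z):
            """Inverse mapping: frequency f (Hz) from critical-band rate z (Bark)."""
            return 1960.0 / (26.81 / (z + 0.53) - 1.0)

        print(hz_to_bark(1000.0))              # ~8.53 Bark at 1 kHz
        print(bark_to_hz(hz_to_bark(1000.0)))  # ~1000.0, round trip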

  • 18.
    van Kempen, Elise E.M.M.
    et al.
    Van Kamp, Irene
    Stellato, Rebecca K.
    Lopez-Barrio, Isabel
    Haines, Mary M.
    Nilsson, Mats E.
    Stockholm University, Faculty of Social Sciences, Department of Psychology. Karolinska institutet, Stockholm, Sweden.
    Clark, Charlotte
    Houthuijs, Danny
    Brunekreef, Bert
    Berglund, Birgitta
    Stockholm University, Faculty of Social Sciences, Department of Psychology. Karolinska institutet, Stockholm, Sweden.
    Stansfeld, Stephen A.
    Children's annoyance reactions to aircraft and road traffic noise (2009). In: Journal of the Acoustical Society of America, ISSN 0001-4966, E-ISSN 1520-8524, Vol. 125, no 2, p. 895-904. Article in journal (Refereed)
    Abstract [en]

    Since annoyance reactions of children to environmental noise have rarely been investigated, no source-specific exposure-response relations are available. The aim of this paper is to investigate children's reactions to aircraft and road traffic noise and to derive exposure-response relations. To this end, children's annoyance reactions to aircraft and road traffic noise in both the home and the school setting were investigated using the data gathered in a cross-sectional multicenter study, carried out among 2844 children aged 9–11 years attending 89 primary schools around three European airports. An exposure-response relation was demonstrated between exposure to aircraft noise at school (LAeq,7–23h) and severe annoyance in children: after adjustment for confounders, the percentage of severely annoyed children was predicted to increase from about 5.1% at 50 dB to about 12.1% at 60 dB. The findings were consistent across the three samples. Aircraft noise at home (LAeq,7–23h) demonstrated a similar relation with severe annoyance. Children attending schools with higher road traffic noise (LAeq,7–23h) were more annoyed. Although children were less annoyed at levels above 55 dB, the shapes of the exposure-response relations found among children were comparable to those found in their parents.

  • 19.
    van Kempen, Elise
    et al.
    van Kamp, Irene
    Nilsson, Mats
    Stockholm University, Faculty of Social Sciences, Department of Psychology. Karolinska institutet, Stockholm, Sweden.
    Lammers, Jan
    Emmen, Harry
    Clark, Charlotte
    Stansfeld, Stephen A.
    The role of annoyance in the relation between transportation noise and children's health and cognition (2010). In: Journal of the Acoustical Society of America, ISSN 0001-4966, E-ISSN 1520-8524, Vol. 128, no 5, p. 2817-2828. Article in journal (Refereed)
    Abstract [en]

    On the basis of this study it cannot be ruled out that the appraisal of the noise affects the association between air and road traffic noise exposure and children's health and cognition. However, the conclusion is limited due to the relatively small group of annoyed children, which may have influenced our group comparisons. Furthermore, the observed relation between annoyance and perceived health is possibly biased due to the fact that both were measured within the same questionnaire. These are the main conclusions of a cross-sectional multi-center study carried out among 2,844 schoolchildren (age 9–11 years) attending 89 primary schools around three European airports. The aim was to investigate how annoyance affects the relation between air and road traffic noise exposure and children's health and cognition. Different, sometimes competing, working mechanisms of how noise affects children's health are suggested. Some effects are supposed to be precipitated through (chronic) stress, while others may arise directly. There is still no theory that can adequately account for the circumstances in which noise will affect cognitive performance.
