Modelling speech imitation
Stockholms universitet, Humanistiska fakulteten, Institutionen för lingvistik, Avdelningen för fonetik. ORCID iD: 0000-0003-3981-2551
Stockholms universitet, Humanistiska fakulteten, Institutionen för lingvistik, Avdelningen för fonetik. (Jollergruppen) ORCID iD: 0000-0002-7980-3601
2008 (English) In: IROS 2008: From motor to interaction learning in robots, 2008. Conference paper, Published paper (Other academic)
Abstract [en]

The concept of imitation is often pointed out as one of the cornerstones of infants' early language acquisition. Still, few studies of vocal adult-child imitation are reported in the literature, and their results are often inconsistent. One reason for the inconsistencies is the lack of a stringent model for what should be classified as an imitation. This is a problem not only when trying to learn something about adult-child interactions, but also when trying to build robots that can learn to interact naturally with humans. In order to interact vocally, a robot is typically equipped with artificial models of the ear and the vocal tract, connected by an artificial neural network. This model is inspired by the motor theory of speech perception [1] and the more recent discovery of mirror neurons [2]. While the robot can use babbling to create an initial map between the acoustic signal and the corresponding vocal tract positions, it needs to overcome interspeaker differences and to acquire key positions of the vocal tract to be able to communicate with humans or other robots. Imitation games are therefore used to train the networks [3, 4, 5]. As we have shown in our previous work [6], these imitation games should preferably go both ways. Having the robot imitate the caregiver is useful for directing the robot towards key points, while having the caregiver imitate the robot is more important for learning the map and overcoming interspeaker differences. As we will show in this work, both types of imitation can also be found in adult-child interactions. However, while robots usually follow very strict imitation games with predefined turn-taking behaviors, adult-child interactions tend to be much more complex. For the robot to be able to learn its maps under such natural conditions, it has to be able to separate imitations from non-imitations. The question we want to answer in this work is therefore the following:
How can the robot decide when a pair of utterances should be considered vocal imitations of each other?
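To make the closing question concrete, the sketch below shows one simple baseline decision rule. It is an illustration only, not the model proposed in the paper: two utterances, represented here as 1-D feature trajectories (e.g. pitch contours), are aligned with dynamic time warping (DTW), and the pair counts as an imitation when the length-normalised alignment cost falls below a threshold. The contour values, the threshold, and the function names are all assumptions for the example.

```python
def dtw_distance(a, b):
    """Classic DTW between two 1-D feature sequences,
    normalised by the combined sequence length."""
    n, m = len(a), len(b)
    inf = float("inf")
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # step in a
                                 cost[i][j - 1],      # step in b
                                 cost[i - 1][j - 1])  # match
    return cost[n][m] / (n + m)

def is_imitation(utt_a, utt_b, threshold=5.0):
    """Treat a pair of utterances as mutual imitations when their
    normalised DTW distance falls below a (tunable) threshold."""
    return dtw_distance(utt_a, utt_b) < threshold

# Toy example: two similar rising pitch contours vs. a falling one.
rise1 = [100, 120, 140, 160, 180]
rise2 = [105, 125, 150, 170, 185]
fall  = [180, 160, 140, 120, 100]
print(is_imitation(rise1, rise2))  # similar contours -> True
print(is_imitation(rise1, fall))   # dissimilar contours -> False
```

A fixed global threshold is of course exactly the kind of rigid criterion the abstract argues against for natural adult-child data; in practice the decision boundary would have to be learned or adapted per speaker pair.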

Place, publisher, year, edition, pages
2008.
Keywords [en]
speech; imitation; motor; interaction; learning; infants; infant; language; language acquisition; ear; vocal tract; motor theory of speech perception; speech perception
HSV category
Identifiers
URN: urn:nbn:se:su:diva-14789
OAI: oai:DiVA.org:su-14789
DiVA, id: diva2:181309
Conference
IROS 2008
Available from: 2008-10-31 Created: 2008-10-31 Last updated: 2018-01-13 Bibliographically approved

Open Access in DiVA

Full text is not available in DiVA

Other links

http://webia.lip6.fr/~sigaud/papers/hornsteinIROS08.pdf

Search in DiVA

By author/editor
Gustavsson, Lisa; Lacerda, Francisco
By organisation

Search outside of DiVA

Google; Google Scholar
