Reference to Objects in Longitudinal Parent-Child Interaction
2012 (English). In: Workshop on Language, Action and Perception (APL), 2012. Conference paper (Refereed).
A cognitive model of language learning needs to be dialogue-driven and multimodal to reflect how parent and child interact, using words, eye gaze, and object manipulation.
In this paper, we present a scheme for multimodal annotation of parent-child interaction. We use this annotation to study invariance across modalities. Our basic hypothesis is that perceiving invariance (or synchrony) in multimodal patterns of auditory-visual speech is the primary device for reducing complexity in language learning.
To this end, we have added verbal and non-verbal annotation to a corpus of longitudinal video and sound recordings of parent-child dyads. We use these data to determine whether the amount of synchrony across modalities of parent-child interaction decreases as the child grows older and learns more language and gestures.
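One way to operationalise the synchrony measure sketched above is as temporal overlap between time-aligned annotation tiers. The sketch below is purely illustrative and not the paper's actual method: the tier names, interval values, and the normalisation choice (overlap divided by the shorter tier's total duration) are assumptions for the example.

```python
def overlap(tier_a, tier_b):
    """Total time (s) during which intervals from the two tiers co-occur."""
    total = 0.0
    for s1, e1 in tier_a:
        for s2, e2 in tier_b:
            # Overlap of two intervals is positive only if they intersect.
            total += max(0.0, min(e1, e2) - max(s1, s2))
    return total

def synchrony(tier_a, tier_b):
    """Cross-modal synchrony in [0, 1]: overlap normalised by the
    shorter tier's total annotated duration (an assumed normalisation)."""
    dur = min(sum(e - s for s, e in t) for t in (tier_a, tier_b))
    return overlap(tier_a, tier_b) / dur if dur else 0.0

# Hypothetical tiers: parent speech naming an object vs. child gaze at it,
# each a list of (start, end) times in seconds.
speech = [(0.0, 1.5), (3.0, 4.0)]
gaze = [(0.5, 2.0), (3.2, 4.5)]
print(round(synchrony(speech, gaze), 2))  # → 0.72
```

Tracking such a score per recording session would let one plot synchrony against the child's age, which is the trend the study examines.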
Keywords: child language acquisition, parent-child interaction, multimodal corpora, multimodal annotation, synchrony
National Category: Language Technology (Computational Linguistics); General Language Studies and Linguistics
Research subject: Computational Linguistics
Identifiers: URN: urn:nbn:se:su:diva-82006; OAI: oai:DiVA.org:su-82006; DiVA: diva2:565060
Conference: The Fourth Swedish Language Technology Conference, October 24–26, 2012, Lund
Projects: Modelling the emergence of linguistic structures in early childhood
Funder: Swedish Research Council, 2011-675-86010-31