NeuroEngage: A Multimodal Dataset Integrating fMRI for Analyzing Conversational Engagement in Human-Human and Human-Robot Interactions
Stockholm University, Faculty of Humanities, Department of Linguistics. ORCID iD: 0000-0001-5503-2657
Stockholm University, Faculty of Humanities, Department of Linguistics.
Stockholm University, Faculty of Humanities, Department of Linguistics; Stockholm University, Faculty of Social Sciences, Department of Psychology, Biological Psychology. ORCID iD: 0000-0001-6672-1298
Number of Authors: 5
2025 (English)
In: 2025 20th ACM/IEEE International Conference on Human-Robot Interaction (HRI), IEEE Computer Society, 2025, p. 849-858
Conference paper, Published paper (Refereed)
Abstract [en]

This study aimed to deepen our understanding of the behavioral and neurocognitive processes involved in human-human and human-robot communication in a more ecologically valid setting than traditional neurolinguistic paradigms. We collected a novel open-source dataset (N=30 for human-human and N=20 for human-robot interactions) that includes fMRI, eye-tracking, segmented audio, video, and behavioral data, resulting in 30 minutes of free conversation per participant. To enable unrestricted, spontaneous robot behavior, we employed a novel VR-mediated teleoperation system. Our mixed design allowed us to compare participants' perception of humans and robots across three within-subject conditions of conversational engagement: Engaged Communicator, Active Listener, and Passive Listener. We provide an open-access dataset, replicable code for the teleoperation system, and an initial analysis of fMRI, behavioral, and speech data. We observed distinct neural profiles: speaking to the human agent recruited more higher-level frontal regions associated with socio-pragmatic processes, while listening to the robot recruited more sensory areas, including auditory and visual regions. Engagement levels and agent types also affected speech and behavioral patterns, offering valuable insights into conversational dynamics in human-human and human-robot interactions.

Place, publisher, year, edition, pages
IEEE Computer Society, 2025. p. 849-858
Series
ACM/IEEE International Conference on Human-Robot Interaction, E-ISSN 2167-2148
Keywords [en]
conversation, dataset, engagement, fMRI, human-robot interaction, neuroimaging
National Category
Human Computer Interaction
Identifiers
URN: urn:nbn:se:su:diva-243368
DOI: 10.1109/HRI61500.2025.10974251
Scopus ID: 2-s2.0-105004876905
OAI: oai:DiVA.org:su-243368
DiVA id: diva2:1959676
Conference
2025 20th ACM/IEEE International Conference on Human-Robot Interaction (HRI), March 4-6, 2025
Available from: 2025-05-21. Created: 2025-05-21. Last updated: 2025-05-21. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Arvidsson, Caroline; Berrebi, Jonathan; Uddén, Julia
