Designing with Gaze: Tama – a Gaze-Aware Smart Speaker Platform
Stockholm University, Faculty of Social Sciences, Department of Computer and Systems Sciences.
2019 (English). In: Proceedings of the ACM on Human-Computer Interaction, E-ISSN 2573-0142, Vol. 3, article id 176. Article in journal (Refereed). Published.
Abstract [en]

Recent developments in gaze tracking present new opportunities for social computing. This paper presents a study of Tama, a gaze-actuated smart speaker. Tama was designed to take advantage of research on gaze in conversation. Rather than being activated with a wake word (such as "Ok Google"), Tama detects the gaze of a user, moving an articulated 'head' to achieve mutual gaze. We tested Tama's use in a multi-party conversation task, with users successfully activating and receiving a response to 371 queries (over 10 trials). When Tama worked well, there was no significant difference in the length of interaction. However, interactions with Tama had a higher rate of repeated queries, causing longer interactions overall. Video analysis lets us explain the problems users had interacting with gaze. In the discussion, we describe implications for designing new gaze systems, using gaze both as input and output. We also discuss the relationship to anthropomorphic design and taking advantage of learned skills of interaction. Finally, two paths for future work are proposed, one in the field of speech agents, and the second in using human gaze as an interaction modality more widely.
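The core mechanism the abstract describes — replacing a wake word with detected gaze — can be sketched as a simple dwell-time trigger: the agent starts listening only once a user's gaze has rested on the device for a sustained stretch, and stops when gaze is broken. This is a hypothetical illustration, not the Tama implementation; the class name, frame-based interface, and `dwell_frames` parameter are all assumptions.

```python
# Hypothetical sketch of gaze-based activation: instead of a wake word,
# the device starts listening after gaze dwells on it for a threshold
# number of consecutive gaze-tracker frames. Names and parameters are
# illustrative only, not taken from the Tama system.

class GazeActivator:
    """Trigger a listening state from sustained gaze rather than a wake word."""

    def __init__(self, dwell_frames=5):
        self.dwell_frames = dwell_frames  # consecutive on-device gaze frames required
        self.streak = 0                   # current run of on-device gaze frames
        self.listening = False

    def update(self, gaze_on_device: bool) -> bool:
        """Feed one gaze-tracker frame; return whether the device is listening."""
        if gaze_on_device:
            self.streak += 1
            if self.streak >= self.dwell_frames:
                self.listening = True   # dwell threshold met: start listening
        else:
            self.streak = 0
            self.listening = False      # gaze broken: stop listening
        return self.listening


# Example: four frames of gaze are not enough; the fifth activates.
activator = GazeActivator(dwell_frames=5)
states = [activator.update(True) for _ in range(5)]
print(states)  # [False, False, False, False, True]
```

A dwell threshold like this is one simple way to avoid false activations from glances; the paper's video analysis of failed activations suggests why tuning such a threshold matters in multi-party conversation.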

Place, publisher, year, edition, pages
2019. Vol. 3, article id 176
Keywords [en]
Smart Speaker, Voice Assistant, Gaze Interaction, Gaze Detection
National Category
Information Systems
Research subject
Man-Machine-Interaction (MMI)
Identifiers
URN: urn:nbn:se:su:diva-177146
DOI: 10.1145/3359278
OAI: oai:DiVA.org:su-177146
DiVA, id: diva2:1379866
Available from: 2019-12-17. Created: 2019-12-17. Last updated: 2019-12-23. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text

Search in DiVA

By author/editor
McMillan, Donald; Brown, Barry; Jaber, Razan; Solsona Belenguer, Jordi
By organisation
Department of Computer and Systems Sciences
Information Systems
