2025 (English). In: Telematics and Informatics, ISSN 0736-5853, E-ISSN 1879-324X, Vol. 101, article id 102299. Article in journal (Refereed). Published.
Abstract [en]
A deep understanding of Generative Artificial Intelligence (GAI) is crucial not only for technological development but also for formulating effective risk response strategies. However, previous studies have mainly focused on how individual factors affect GAI risk perception, while the technical functions and features that are the root causes of user concerns about GAI remain unclear. To address this gap, the current study, grounded in affordance theory, explored how perceived affordances of GAI influence user risk perceptions across six dimensions: information, security, technical, social, ethical, and legal. A hierarchical regression analysis was conducted on a survey of 1,031 GAI users to examine the impact of interactivity, agency, and security affordances on these risk dimensions. The results indicate that higher perceptions of affordances such as bandwidth, synchrony, and transparency are significantly associated with lower risk perceptions across all dimensions. Notably, women reported higher perceived risks than men in most categories, whereas age and GAI usage experience did not significantly affect these perceptions. These findings highlight the importance of enhancing user control, transparency, and privacy protections in GAI system design to effectively mitigate perceived risks. This study contributes to the literature by providing a multidimensional analysis of risk perception in the context of GAI, offering practical insights for the development of inclusive, transparent, and user-centered artificial intelligence systems.
Keywords
Generative artificial intelligence, Human-computer interaction, Perceived affordances, User risk perception
National Category
Artificial Intelligence
Identifiers
urn:nbn:se:su:diva-244358 (URN)
10.1016/j.tele.2025.102299 (DOI)
001507536000001 ()
2-s2.0-105007502439 (Scopus ID)
2025-06-23. Bibliographically approved.