Banal deception and human-AI ecosystems: A study of people’s perceptions of LLM-generated deceptive behaviour
Zhan, Xiao; Xu, Yifan; Abdi, Noura; Collenette, Joe; Sarkadi, Stefan
Publication Date
2025-10-09
Files
Published version
Adobe PDF, 1.33 MB
Abstract
Large language models (LLMs) can provide users with false, inaccurate, or misleading information, and we consider the output of this type of information to be what Natale calls ‘banal’ deceptive behaviour [53]. Here, we investigate people’s perceptions of ChatGPT-generated deceptive behaviour and how this affects their behaviour and trust. To do this, we use a mixed-methods approach comprising (i) an online survey with 220 participants and (ii) semi-structured interviews with 12 participants. Our results show that (i) the most common types of deceptive information encountered were over-simplifications and outdated information; (ii) people’s perceptions of trust and chat-worthiness of ChatGPT are impacted by ‘banal’ deceptive behaviour; (iii) the perceived responsibility for deception is influenced by education level and the perceived frequency of deceptive information; and (iv) users become more cautious after encountering deceptive information, but they come to trust the technology more when they identify advantages of using it. Our findings contribute to understanding human-AI interaction dynamics in the context of Deceptive AI Ecosystems and highlight the importance of user-centric approaches to mitigating the potential harms of deceptive AI technologies.
Citation
Zhan, X., Xu, Y., Abdi, N., Collenette, J., & Sarkadi, S. (2025). Banal deception and human-AI ecosystems: A study of people's perceptions of LLM-generated deceptive behaviour. Journal of Artificial Intelligence Research, 84. https://doi.org/10.1613/jair.1.18724
Publisher
AI Access Foundation
Journal
Journal of Artificial Intelligence Research
Type
Article
Description
©2025 Copyright held by the owner/author(s).
EISSN
1076-9757
Sponsors
Unfunded
