Gender stereotypes in artificial intelligence within the accounting profession using large language models
Abstract
This study investigates how artificial intelligence (AI) perpetuates gender stereotypes in the accounting profession. Through experiments employing large language models (LLMs), we scrutinize how these models assign gender labels to accounting job titles. Our findings reveal differing tendencies among LLMs, with one favouring male labels, another female labels, and a third showing a balanced approach. Statistical analyses indicate significant disparities in labelling patterns, and job titles classified as male are associated with higher salary ranges, suggesting gender-related bias in economic outcomes. This study reaffirms existing literature on gender stereotypes in LLMs and uncovers specific biases in the accounting context. It underscores the transfer of biases from the physical to the digital realm through LLMs and highlights broader implications across various sectors. We propose raising public awareness as a means to mitigate these biases, advocating for proactive measures over relying solely on human intervention.
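The abstract does not describe the authors' exact prompts, models, or statistical tests. As a rough illustration only, the sketch below shows one way the reported comparison could be framed: tallying the gender labels each model assigns to the same set of job titles and applying a chi-squared test of independence to the counts. The model names and counts are invented assumptions, not data from the study.

    # Illustrative sketch only -- not the authors' method or data.
    # Tests whether gender-label counts differ significantly across models.
    from scipy.stats import chi2_contingency

    # Hypothetical label counts (male, female, neutral) returned by three LLMs
    # asked to assign a gender to the same set of accounting job titles.
    observed = [
        [62, 25, 13],   # "model A": skews towards male labels
        [21, 64, 15],   # "model B": skews towards female labels
        [34, 33, 33],   # "model C": roughly balanced
    ]

    chi2, p_value, dof, expected = chi2_contingency(observed)
    print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4g}")
    # A small p-value would indicate that labelling patterns differ
    # significantly between models, the kind of disparity the abstract reports.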
Citation
Leong, K., & Sung, A. (2024). Gender stereotypes in artificial intelligence within the accounting profession using large language models. Humanities and Social Sciences Communications, 11, Article 1141. https://doi.org/10.1057/s41599-024-03660-8
Publisher
Springer
Additional Links
https://www.nature.com/articles/s41599-024-03660-8
Type
Article
EISSN
2662-9992
Sponsors
University of Chester
DOI
10.1057/s41599-024-03660-8
Except where otherwise noted, this article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 licence: http://creativecommons.org/licenses/by-nc-nd/4.0/