
Explainability does not mitigate the negative impact of incorrect AI advice in a personnel selection task

Cecil, Julia; Lermer, Eva; Hudecek, Matthias FC; Sauer, Jan; Gaube, Susanne; (2024) Explainability does not mitigate the negative impact of incorrect AI advice in a personnel selection task. Scientific Reports, 14, Article 9736. 10.1038/s41598-024-60220-5. Green open access

Text: Gaube_s41598-024-60220-5.pdf - Download (2MB)

Abstract

Despite the rise of decision support systems enabled by artificial intelligence (AI) in personnel selection, their impact on decision-making processes is largely unknown. Consequently, we conducted five experiments (N = 1403 students and Human Resource Management (HRM) employees) investigating how people interact with AI-generated advice in a personnel selection task. In all pre-registered experiments, we presented correct and incorrect advice. In Experiments 1a and 1b, we manipulated the source of the advice (human vs. AI). In Experiments 2a, 2b, and 2c, we further manipulated the type of explainability of AI advice (2a and 2b: heatmaps and 2c: charts). We hypothesized that accurate and explainable advice improves decision-making. Task performance, perceived advice quality, and confidence ratings were regressed on the independent variables. The results consistently showed that incorrect advice negatively impacted performance, as people failed to dismiss it (i.e., overreliance). Additionally, we found that the effects of source and explainability of advice on the dependent variables were limited. The lack of reduction in participants’ overreliance on inaccurate advice when the systems’ predictions were made more explainable highlights the complexity of human-AI interaction and the need for regulation and quality standards in HRM.
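The analysis described in the abstract regresses each dependent measure on the manipulated factors. The snippet below is a minimal, hypothetical sketch of that kind of model, assuming trial-level data with invented column names (performance, advice_correct, source_ai) and a linear probability model fitted with statsmodels; it illustrates the general technique and is not the authors' actual analysis code.

# Hypothetical sketch only: invented data and variable names, not the study's dataset.
import pandas as pd
import statsmodels.formula.api as smf

# Invented trial-level data: one row per selection decision.
df = pd.DataFrame({
    "performance":    [1, 0, 1, 1, 0, 1, 0, 1],  # 1 = correct selection decision
    "advice_correct": [1, 0, 1, 1, 0, 1, 0, 0],  # 1 = the advice shown was correct
    "source_ai":      [1, 1, 0, 0, 1, 0, 1, 0],  # 1 = AI advice, 0 = human advice
})

# Linear probability model: performance regressed on the manipulated factors.
# A logistic model (smf.logit) would be the conventional choice for a binary
# outcome; OLS is used here only to keep the toy example simple.
model = smf.ols("performance ~ advice_correct + source_ai", data=df).fit()
print(model.params)  # estimated effects of advice correctness and advice source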

Type: Article
Title: Explainability does not mitigate the negative impact of incorrect AI advice in a personnel selection task
Location: England
Open access status: An open access version is available from UCL Discovery
DOI: 10.1038/s41598-024-60220-5
Publisher version: https://doi.org/10.1038/s41598-024-60220-5
Language: English
Additional information: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Keywords: Artificial intelligence, Personnel selection, Decision-making
UCL classification: UCL
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences
URI: https://discovery-pp.ucl.ac.uk/id/eprint/10192009