Franklin, M. and Lagnado, D. (2022) Human-AI Interaction Paradigm for Evaluating Explainable Artificial Intelligence. In: HCI International 2022 Posters. HCII 2022. Communications in Computer and Information Science, pp. 404-411. Springer Nature.
Abstract
This article proposes a framework and corresponding paradigm for evaluating explanations provided by explainable artificial intelligence (XAI). The article argues for the need for evaluation paradigms: different people performing different tasks in different contexts will react differently to different explanations. It reviews previous research evaluating XAI explanations and identifies the main contribution of this work – a flexible paradigm researchers can use to evaluate XAI models, rather than a list of factors. The article then outlines a framework that posits causal relationships between five key factors – mental models, probability estimates, trust, knowledge, and performance – followed by a paradigm consisting of training, testing, and evaluation phases. The work is discussed in relation to predictive models, guidelines for XAI developers, and adaptive explainable artificial intelligence – a recommender system capable of predicting the preferred explanations for a specific domain expert on a particular task.