
Quantitative Ethics in Healthcare Artificial Intelligence

Straw, Isabel; (2024) Quantitative Ethics in Healthcare Artificial Intelligence. Doctoral thesis (Ph.D), UCL (University College London).

Text: Straw_10200061_thesis_sigs_removed.pdf (17MB)
Access restricted to UCL open access staff until 1 June 2025.

Abstract

The deployment of Artificial Intelligence (AI) in medicine has brought issues of health equity to the forefront. Inequitable performance of medical AI algorithms across demographic groups may widen health inequalities, negatively impacting historically marginalised populations. In this research, I identify and characterise bias in healthcare algorithms. My research provides three key contributions to the domain of Machine Learning (ML) fairness and Healthcare AI. First, I provide a conceptual analysis, evaluating the roots of AI bias in healthcare from an anthropological and sociological perspective. Second, I establish a quantitative framework for evaluating and addressing demographic inequities in algorithmic performance. Third, I introduce a novel application of causal modelling for evaluating bias in AI models, taking into account the nuanced challenges associated with achieving ML fairness in medicine.

This research significantly contributes to our understanding of AI bias in healthcare by differentiating between inequities arising from (1) unintentional harms (e.g. a lack of representation in datasets) and (2) intentional harms (e.g. politically shaped medical scoring systems). In taking such an approach, I demonstrate that resolving AI bias in healthcare depends on identifying and targeting the origin of the inequity. First, for AI bias that stems from under-representation and the misuse of statistical averages, I evaluate the (in)applicability of traditional fairness methods and explore the role of high-dimensional representation learning in improving model individuation. Second, for biases stemming from harmful medical tools, I demonstrate that causal modelling can be an effective approach for uncovering and counteracting these inequities.

This study has limitations, including small datasets, missing demographic data, and a narrow focus on two medical domains, which together limit the generalisability of the results. Despite these constraints, my work highlights the need for context-specific solutions to create equitable AI systems in healthcare, and for socio-technical methodologies that integrate an anthropological understanding of the roots of AI bias.
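The quantitative framework described in the abstract centres on measuring demographic inequities in algorithmic performance. As an illustration only (this is not the thesis's own code; the function, data, and metric choice here are invented for the example), one common starting point is comparing a classifier's true-positive rate across demographic groups, whose spread is sometimes called the equal-opportunity gap:

```python
# Hypothetical sketch: per-group true-positive rate (TPR) for a binary
# classifier, and the gap between the best- and worst-served groups.
from collections import defaultdict

def group_tpr(y_true, y_pred, groups):
    """Return {group: TPR} over parallel lists of labels, predictions, groups."""
    stats = defaultdict(lambda: [0, 0])  # group -> [true positives, positives]
    for t, p, g in zip(y_true, y_pred, groups):
        if t == 1:
            stats[g][1] += 1          # count actual positives per group
            if p == 1:
                stats[g][0] += 1      # count correctly flagged positives
    return {g: tp / pos for g, (tp, pos) in stats.items() if pos}

# Toy data: group B's positives are detected less often than group A's.
y_true = [1, 1, 0, 1, 1, 0, 1, 1]
y_pred = [1, 0, 0, 1, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

rates = group_tpr(y_true, y_pred, groups)
gap = max(rates.values()) - min(rates.values())  # equal-opportunity gap
```

A gap near zero suggests the model serves the groups' positive cases equally well on this metric; as the thesis argues, though, the appropriate remedy depends on whether the disparity originates in dataset under-representation or in the clinical scoring system itself.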

Type: Thesis (Doctoral)
Qualification: Ph.D
Title: Quantitative Ethics in Healthcare Artificial Intelligence
Language: English
Additional information: Copyright © The Author 2024. Original content in this thesis is licensed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) Licence (https://creativecommons.org/licenses/by-nc/4.0/). Any third-party copyright material present remains the property of its respective owner(s) and is licensed under its existing terms. Access may initially be restricted at the author’s request.
UCL classification: UCL
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences > Faculty of Population Health Sciences > Institute of Health Informatics
URI: https://discovery-pp.ucl.ac.uk/id/eprint/10200061
