Straw, Isabel; Callison-Burch, Chris; (2020) Artificial Intelligence in mental health and the biases of language based models. PLOS ONE, 15 (12), Article e0240376. 10.1371/journal.pone.0240376.
Abstract
Background: The rapid integration of Artificial Intelligence (AI) into the healthcare field has occurred with little communication between computer scientists and doctors. The impact of AI on health outcomes and inequalities calls for health professionals and data scientists to make a collaborative effort to ensure that historic health disparities are not encoded into the future. We present a study that evaluates bias in existing Natural Language Processing (NLP) models used in psychiatry and discuss how these biases may widen health inequalities. Our approach systematically evaluates each stage of model development to explore how biases arise from a clinical, data-science and linguistic perspective.

Design/Methods: A literature review of the uses of NLP in mental health was carried out across multiple disciplinary databases with defined MeSH terms and keywords. Our primary analysis evaluated biases within ‘GloVe’ and ‘Word2Vec’ word embeddings. Euclidean distances were measured to assess relationships between psychiatric terms and demographic labels, and vector similarity functions were used to solve analogy questions relating to mental health.

Results: Our primary analysis of mental health terminology in GloVe and Word2Vec embeddings demonstrated significant biases with respect to religion, race, gender, nationality, sexuality and age. Our literature review returned 52 papers, of which none addressed all the areas of possible bias that we identify in model development. In addition, only one article appeared in more than one research database, demonstrating the isolation of research within disciplinary silos, which inhibits cross-disciplinary collaboration and communication.

Conclusion: Our findings are relevant to professionals who wish to minimize the health inequalities that may arise as a result of AI and data-driven algorithms. We offer primary research identifying biases within these technologies and provide recommendations for avoiding these harms in the future.
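The embedding analysis described in the methods can be illustrated with a minimal sketch. This is not the authors' code: it assumes pretrained GloVe vectors loaded through `gensim`, and the probe words ("depression", "woman", "man", "doctor") are hypothetical stand-ins for the psychiatric terms and demographic labels the study compares.

```python
# Minimal sketch (hypothetical, not the authors' code): Euclidean distance
# between a psychiatric term and demographic labels, plus an analogy query
# solved with vector similarity, using pretrained GloVe vectors via gensim.
import numpy as np
import gensim.downloader as api

# Any pretrained embedding exposed by gensim's downloader would work here;
# the paper evaluates both GloVe and Word2Vec embeddings.
model = api.load("glove-wiki-gigaword-100")

def euclidean_distance(word_a: str, word_b: str) -> float:
    """Euclidean distance between two word vectors (smaller = closer)."""
    return float(np.linalg.norm(model[word_a] - model[word_b]))

# How close does a psychiatric term sit to each demographic label?
for label in ("woman", "man"):
    print(label, euclidean_distance("depression", label))

# Analogy question answered via vector arithmetic and cosine similarity:
# "man" is to "doctor" as "woman" is to ...?
print(model.most_similar(positive=["doctor", "woman"], negative=["man"], topn=1))
```

In a probe of this kind, a systematic asymmetry in distances across demographic labels for the same clinical term is the signal the study treats as bias.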
| Type: | Article |
|---|---|
| Title: | Artificial Intelligence in mental health and the biases of language based models |
| Location: | United States |
| Open access status: | An open access version is available from UCL Discovery |
| DOI: | 10.1371/journal.pone.0240376 |
| Publisher version: | https://doi.org/10.1371/journal.pone.0240376 |
| Language: | English |
| Additional information: | © 2020 Straw, Callison-Burch. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. https://creativecommons.org/licenses/by/4.0/ |
| UCL classification: | UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences > Faculty of Population Health Sciences > Institute of Health Informatics |
| URI: | https://discovery-pp.ucl.ac.uk/id/eprint/10203731 |