Miles, Oliver; West, Robert; Nadarzynski, Tom; (2021) Health chatbots acceptability moderated by perceived stigma and severity: A cross-sectional survey. Digital Health, 7. pp. 1-7. 10.1177/20552076211063012.
Text: miles-et-al-2021-health-chatbots-acceptability-moderated-by-perceived-stigma-and-severity-a-cross-sectional-survey.pdf - Published Version (565kB)
Abstract
Background: Chatbots and virtual voice assistants are increasingly common in primary care, yet there is insufficient evidence for their feasibility and effectiveness. We aimed to assess how the perceived stigma and severity of various health issues are associated with the acceptability of three sources of health information and consultation: an automated chatbot, a General Practitioner (GP), or a combination of both.

Methods: Between May and June 2019, we conducted an online study of UK citizens, advertised via Facebook. It was a factorial simulation experiment with three within-subject factors (perceived health issue stigma, severity, and consultation source) and six between-subject covariates. The acceptability rating for each consultation source was the dependent variable. A single mixed-model ANOVA was performed.

Results: Amongst 237 participants (65% aged over 45 years, 73% women), GP consultations were seen as the most acceptable, followed by the combined GP-chatbot service. Chatbots were seen as the least acceptable consultation source for severe health issues, while their acceptability was significantly higher for stigmatised health issues. No associations between participants' characteristics and acceptability were found.

Conclusions: Although healthcare professionals are perceived as the most desired sources of health information, chatbots may be useful for sensitive health issues in which disclosure of personal information is challenging. However, chatbots are less acceptable for health issues of higher severity and should not be recommended for use in that context. Policymakers and digital service designers need to recognise the limitations of health chatbots. Future research should establish the set of health topics most suitable for chatbot-led interventions and primary healthcare services.
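The mixed-model ANOVA described in the Methods (three within-subject factors rated by each participant) can be approximated with a linear mixed-effects model on long-format data. The sketch below is purely illustrative and is not the authors' analysis code: the column names (participant_id, source, stigma, severity, acceptability), the simulated data, and the use of a random intercept per participant via statsmodels rather than a classical ANOVA routine are all assumptions.

```python
# Illustrative sketch only: all variable names and data below are hypothetical,
# not taken from the published study.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulate long-format ratings: each participant rates every combination of
# consultation source x perceived stigma x perceived severity.
rows = []
for pid in range(60):
    for source in ["chatbot", "gp", "gp_plus_chatbot"]:
        for stigma in ["low", "high"]:
            for severity in ["low", "high"]:
                rows.append({
                    "participant_id": pid,
                    "source": source,
                    "stigma": stigma,
                    "severity": severity,
                    # Placeholder acceptability score
                    "acceptability": 3 + rng.normal(scale=1.0),
                })
df = pd.DataFrame(rows)

# Fixed effects for the three within-subject factors and their interactions,
# with a random intercept per participant to account for repeated measures.
model = smf.mixedlm(
    "acceptability ~ C(source) * C(stigma) * C(severity)",
    data=df,
    groups=df["participant_id"],
)
result = model.fit()
print(result.summary())
```

A random intercept per participant captures the repeated-measures structure that a within-subject ANOVA handles; between-subject covariates such as age or gender could be added to the formula as further fixed effects.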
Type: Article
Title: Health chatbots acceptability moderated by perceived stigma and severity: A cross-sectional survey
Location: United States
Open access status: An open access version is available from UCL Discovery
DOI: 10.1177/20552076211063012
Publisher version: http://dx.doi.org/10.1177/20552076211063012
Language: English
Additional information: Creative Commons CC BY: This article is distributed under the terms of the Creative Commons Attribution 4.0 License (https://creativecommons.org/licenses/by/4.0/), which permits any use, reproduction and distribution of the work without further permission provided the original work is attributed as specified on the SAGE and Open Access page (https://us.sagepub.com/en-us/nam/open-access-at-sage).
Keywords: Science & Technology, Life Sciences & Biomedicine, Health Care Sciences & Services, Health Policy & Services, Public, Environmental & Occupational Health, Medical Informatics, Artificial intelligence, chatbots, bot, healthcare, acceptability, health bot, ARTIFICIAL-INTELLIGENCE
UCL classification: UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences > Faculty of Population Health Sciences > Institute of Epidemiology and Health > Behavioural Science and Health
URI: https://discovery-pp.ucl.ac.uk/id/eprint/10187017