UCL Discovery Stage

Like Trainer, Like Bot? Inheritance of Bias in Algorithmic Content Moderation

Binns, R; Veale, M; Van Kleek, M; Shadbolt, N; (2017) Like Trainer, Like Bot? Inheritance of Bias in Algorithmic Content Moderation. In: Ciampaglia, GL and Mashhadi, A and Yasseri, T, (eds.) Social Informatics. SocInfo 2017. Lecture Notes in Computer Science, Vol. 10540. (pp. 405-415). Springer: Cham, Switzerland. Green open access

Text: Veale_finaldraft-liketrainer.pdf - Accepted Version. Download (151kB)

Abstract

The internet has become a central medium through which ‘networked publics’ express their opinions and engage in debate. Offensive comments and personal attacks can inhibit participation in these spaces. Automated content moderation aims to overcome this problem using machine learning classifiers trained on large corpora of texts manually annotated for offence. While such systems could help encourage more civil debate, they must navigate inherently normatively contestable boundaries, and are subject to the idiosyncratic norms of the human raters who provide the training data. An important objective for platforms implementing such measures might be to ensure that they are not unduly biased towards or against particular norms of offence. This paper provides some exploratory methods by which the normative biases of algorithmic content moderation systems can be measured, by way of a case study using an existing dataset of comments labelled for offence. We train classifiers on comments labelled by different demographic subsets (men and women) to understand how differences in conceptions of offence between these groups might affect the performance of the resulting models on various test sets. We conclude by discussing some of the ethical choices facing the implementers of algorithmic moderation systems, given various desired levels of diversity of viewpoints amongst discussion participants.
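To make the cross-training setup in the abstract concrete, the sketch below trains one offence classifier per annotator-demographic group and then evaluates each model on each group's held-out labels. This is a minimal illustration, not the paper's actual pipeline: the dataset file, column names (comment_text, annotator_gender, is_offensive), the TF-IDF + logistic regression model, and the majority-vote label aggregation are all assumptions made for the example.

```python
# Hypothetical sketch: train separate "offence" classifiers on labels aggregated
# from different annotator demographics, then cross-evaluate them.
# The file name and column names are illustrative, not the paper's actual schema.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# One row per (comment, annotator) judgement, with the annotator's demographic group.
annotations = pd.read_csv("annotations.csv")

def majority_labels(df):
    """Aggregate per-annotator judgements into one majority-vote label per comment."""
    return (df.groupby("comment_text")["is_offensive"]
              .mean().round().astype(int).reset_index())

# Build one labelled dataset per demographic subset of annotators.
groups = {g: majority_labels(annotations[annotations["annotator_gender"] == g])
          for g in ("male", "female")}

models, test_sets = {}, {}
for group, data in groups.items():
    train, test = train_test_split(data, test_size=0.2, random_state=0)
    model = make_pipeline(TfidfVectorizer(min_df=2),
                          LogisticRegression(max_iter=1000))
    model.fit(train["comment_text"], train["is_offensive"])
    models[group], test_sets[group] = model, test

# Cross-evaluation: how well does each group's classifier reproduce the other
# group's notion of offence?
for trained_on, model in models.items():
    for tested_on, test in test_sets.items():
        scores = model.predict_proba(test["comment_text"])[:, 1]
        auc = roc_auc_score(test["is_offensive"], scores)
        print(f"trained on {trained_on}, tested on {tested_on}: AUC = {auc:.3f}")
```

Divergence between the on-diagonal and off-diagonal scores in such a matrix is one way to surface the kind of inherited normative bias the paper discusses.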

Type: Proceedings paper
Title: Like Trainer, Like Bot? Inheritance of Bias in Algorithmic Content Moderation
Event: 9th International Conference on Social Informatics (SocInfo 2017)
Location: Oxford
Dates: 13 September 2017 - 15 September 2017
ISBN-13: 978-3-319-67255-7
Open access status: An open access version is available from UCL Discovery
DOI: 10.1007/978-3-319-67256-4_32
Publisher version: https://doi.org/10.1007/978-3-319-67256-4_32
Language: English
Additional information: This version is the author accepted manuscript. For information on re-use, please refer to the publisher’s terms and conditions.
Keywords: algorithmic accountability, discrimination, censorship, platforms, content moderation, machine learning
UCL classification: UCL
UCL > Provost and Vice Provost Offices > UCL SLASH
UCL > Provost and Vice Provost Offices > UCL SLASH > Faculty of Laws
URI: https://discovery-pp.ucl.ac.uk/id/eprint/1572271
Downloads since deposit: 14,592
