UCL Discovery Stage

Learn2Agree: Fitting with Multiple Annotators Without Objective Ground Truth

Berthouze, Nadia; Wang, C; Gao, Y; Fan, C; Hu, J; Lam, TL; Lane, ND; (2023) Learn2Agree: Fitting with Multiple Annotators Without Objective Ground Truth. In: Trustworthy Machine Learning for Healthcare: First International Workshop, TML4H 2023, Virtual Event, May 4, 2023, Proceedings. (pp. 147-162). Springer Nature. Green open access.

Berthouze_Learn2AgreeICLRWK.pdf - Accepted Version (601kB)
Abstract

Annotation by domain experts is important for medical applications where an objective ground truth is difficult to define, e.g., rehabilitation for chronic conditions, or prescreening of musculoskeletal abnormalities without further medical examination. However, improper use of such annotations may hinder the development of reliable models. On one hand, forcing the use of a single ground truth derived from multiple annotations discards information available for modeling. On the other hand, feeding the model all annotations without proper regularization introduces noise, given the disagreements between annotators. To address these issues, we propose a novel Learn2Agree framework to tackle the challenge of learning from multiple annotators without objective ground truth. The framework has two streams: one stream fits the multiple annotators, while the other learns agreement information between annotators. In particular, the agreement learning stream provides regularization information to the classifier stream, tuning its decisions to be better aligned with the agreement between annotators. The proposed method can easily be added to existing backbones; experiments on two medical datasets show improved agreement levels with annotators.
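The record page does not give the paper's exact formulation, but the two-stream idea in the abstract can be sketched roughly as a combined objective: a classifier stream fit against every annotator's labels, plus an agreement stream whose estimate regularizes the classifier's confidence. All names, the loss composition, and the agreement target below are illustrative assumptions, not the authors' actual method.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def two_stream_loss(logits, annotations, agreement_logits, lam=1.0):
    """Illustrative two-stream objective (assumed, not the paper's exact loss).

    logits           : (N,)   classifier-stream scores for N samples
    annotations      : (N, A) binary labels from A annotators
    agreement_logits : (N,)   agreement-stream scores
    """
    eps = 1e-9
    p = sigmoid(logits)  # classifier probability per sample

    # Stream 1: average cross-entropy against every annotator's labels
    ce = -(annotations * np.log(p[:, None] + eps)
           + (1 - annotations) * np.log(1 - p[:, None] + eps))
    fit_loss = ce.mean()

    # Observed agreement: fraction of annotators giving the majority label
    pos_frac = annotations.mean(axis=1)
    observed_agreement = np.maximum(pos_frac, 1.0 - pos_frac)

    # Stream 2: agreement head regresses toward the observed agreement
    a_hat = sigmoid(agreement_logits)
    agree_loss = ((a_hat - observed_agreement) ** 2).mean()

    # Regularization: align classifier confidence with estimated agreement
    confidence = np.maximum(p, 1.0 - p)
    reg = ((confidence - a_hat) ** 2).mean()

    return fit_loss + lam * (agree_loss + reg)
```

In this sketch, samples where annotators disagree (observed agreement near 0.5) pull the classifier toward a less confident decision, which is one plausible way a regularizer could keep predictions "in line with the agreement between annotators" as the abstract describes.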

Type: Proceedings paper
Title: Learn2Agree: Fitting with Multiple Annotators Without Objective Ground Truth
Event: First International Workshop, TML4H 2023: Trustworthy Machine Learning for Healthcare
ISBN-13: 978-3-031-39538-3
Open access status: An open access version is available from UCL Discovery
DOI: 10.1007/978-3-031-39539-0_13
Publisher version: https://doi.org/10.1007/978-3-031-39539-0_13
Language: English
Additional information: This version is the author accepted manuscript. For information on re-use, please refer to the publisher’s terms and conditions.
UCL classification: UCL
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences > Faculty of Brain Sciences
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences > Faculty of Brain Sciences > Div of Psychology and Lang Sciences
UCL > Provost and Vice Provost Offices > School of Life and Medical Sciences > Faculty of Brain Sciences > Div of Psychology and Lang Sciences > UCL Interaction Centre
URI: https://discovery-pp.ucl.ac.uk/id/eprint/10166180
