
Rethinking Semi-supervised Learning with Language Models

Shi, Zhengxiang; Tonolini, Francesco; Aletras, Nikolaos; Yilmaz, Emine; Kazai, Gabriella; Jiao, Yunlong; (2023) Rethinking Semi-supervised Learning with Language Models. In: Findings of the Association for Computational Linguistics: ACL 2023 (pp. 5614-5634). Association for Computational Linguistics.

Full text: 2023.findings-acl.347.pdf - Published Version (557kB)

Abstract

Semi-supervised learning (SSL) is a popular setting aiming to effectively utilize unlabelled data to improve model performance in downstream natural language processing (NLP) tasks. Currently, there are two popular approaches to make use of unlabelled data: Self-training (ST) and Task-adaptive pre-training (TAPT). ST uses a teacher model to assign pseudo-labels to the unlabelled data, while TAPT continues pre-training on the unlabelled data before fine-tuning. To the best of our knowledge, the effectiveness of TAPT in SSL tasks has not been systematically studied, and no previous work has directly compared TAPT and ST in terms of their ability to utilize the pool of unlabelled data. In this paper, we provide an extensive empirical study comparing five state-of-the-art ST approaches and TAPT across various NLP tasks and data sizes, including in- and out-of-domain settings. Surprisingly, we find that, compared to more sophisticated ST approaches, TAPT is a strong and more robust SSL learner, even when using just a few hundred unlabelled samples or in the presence of domain shifts, and that it tends to bring greater improvements in SSL than in fully-supervised settings. Our further analysis demonstrates the risks of using ST approaches when the size of the labelled or unlabelled data is small or when domain shifts exist. We offer a fresh perspective for future SSL research, suggesting the use of unsupervised pre-training objectives over reliance on pseudo-labels.
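To make the two approaches concrete, below is a minimal sketch using the Hugging Face transformers library as one possible implementation; the model name (roberta-base), the two-class head, the 0.9 confidence threshold, and all training hyperparameters are illustrative assumptions, not the paper's actual configuration.

```python
# A minimal sketch contrasting TAPT and ST. All hyperparameters, model
# names, and the confidence threshold are illustrative placeholders.
import torch
from datasets import Dataset
from transformers import (AutoModelForMaskedLM,
                          AutoModelForSequenceClassification, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
unlabelled_texts = ["placeholder task sentence one.",
                    "placeholder task sentence two."]

# --- TAPT: continue masked-LM pre-training on the unlabelled task text ---
mlm_model = AutoModelForMaskedLM.from_pretrained("roberta-base")
ds = Dataset.from_dict({"text": unlabelled_texts}).map(
    lambda b: tokenizer(b["text"], truncation=True, max_length=128),
    batched=True, remove_columns=["text"])
trainer = Trainer(
    model=mlm_model,
    args=TrainingArguments(output_dir="tapt-ckpt", num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=ds,
    # Standard 15% random masking, as in BERT/RoBERTa pre-training.
    data_collator=DataCollatorForLanguageModeling(
        tokenizer=tokenizer, mlm=True, mlm_probability=0.15),
)
trainer.train()  # the adapted encoder is then fine-tuned on labelled data

# --- ST: a teacher (assumed already fine-tuned on the labelled data)
# assigns pseudo-labels to the unlabelled pool ---
teacher = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=2)
batch = tokenizer(unlabelled_texts, padding=True, truncation=True,
                  return_tensors="pt")
with torch.no_grad():
    probs = teacher(**batch).logits.softmax(dim=-1)
confidence, pseudo_labels = probs.max(dim=-1)
keep = confidence > 0.9  # train the student only on confident pseudo-labels
```

Note the structural difference: the TAPT objective (masked-language modelling) never consumes the model's own predictions, whereas ST trains on pseudo-labels, which is where the risks identified in the abstract arise when labelled or unlabelled data is scarce or domains shift.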

Type: Proceedings paper
Title: Rethinking Semi-supervised Learning with Language Models
Event: The 61st Annual Meeting of the Association for Computational Linguistics (ACL 2023)
Location: Toronto, Canada
Open access status: An open access version is available from UCL Discovery
DOI: 10.18653/v1/2023.findings-acl.347
Publisher version: http://dx.doi.org/10.18653/v1/2023.findings-acl.347
Language: English
Additional information: © 2023 ACL; other materials are copyrighted by their respective copyright holders. Permission is granted to make copies for the purposes of teaching and research. Materials published in or after 2016 are licensed on a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/).
UCL classification: UCL
UCL > Provost and Vice Provost Offices > UCL BEAMS
UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science
UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science > Dept of Civil, Environ and Geomatic Eng
URI: https://discovery-pp.ucl.ac.uk/id/eprint/10178315
Downloads since deposit: 504
