UCL Discovery Stage

Don't Stop Pretraining? Make Prompt-based Fine-tuning Powerful Learner

Shi, Zhengxiang; Lipani, Aldo; (2023) Don't Stop Pretraining? Make Prompt-based Fine-tuning Powerful Learner. In: Proceedings of the 37th Conference on Neural Information Processing Systems (NeurIPS 2023). New Orleans, LA, USA. Green open access

Text: 2305.01711.pdf - Accepted Version (857kB)
Abstract

Language models (LMs) trained on vast quantities of unlabelled data have greatly advanced the field of natural language processing (NLP). In this study, we revisit the widely accepted notion in NLP that continued pre-training of LMs on task-related texts improves the performance of fine-tuning (FT) on downstream tasks. Through experiments on eight single-sentence tasks and eight sentence-pair tasks in both semi-supervised and fully-supervised settings, we find that conventional continued pre-training does not consistently provide benefits and can even be detrimental for sentence-pair tasks or when prompt-based FT is used. To tackle these issues, we propose Prompt-based Continued Pre-training (PCP), which combines the idea of instruction tuning with conventional continued pre-training. Our approach aims to improve the performance of prompt-based FT by presenting both task-related texts and prompt templates to LMs through unsupervised pre-training objectives before fine-tuning for the target task. Our empirical evaluations on 21 benchmarks demonstrate that the PCP consistently improves the performance of state-of-the-art prompt-based FT approaches (up to 20.1% absolute) in both semi-supervised and fully-supervised settings, even with only hundreds of unlabelled examples. Additionally, prompt-based FT with the PCP outperforms state-of-the-art semi-supervised approaches with greater simplicity, eliminating the need for an iterative process and extra data augmentation. Our further analysis explores the performance lower bound of the PCP and reveals that the advantages of PCP persist across different sizes of models and datasets.
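To make the PCP idea in the abstract concrete, below is a minimal sketch (not the authors' released code) of continued pre-training a masked LM on task-related texts that have been wrapped in the downstream prompt template, before any prompt-based fine-tuning. The model name, prompt template, and example sentences are illustrative assumptions, as is the use of a standard masked-language-modelling objective.

```python
# Sketch of prompt-based continued pre-training (PCP): run an unsupervised MLM
# step over task-related texts that are already formatted with the prompt template
# used later for prompt-based fine-tuning. All names below are illustrative.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "roberta-base"  # assumption: any masked LM could stand in here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Unlabelled task-related texts (placeholder examples).
texts = [
    "The plot was gripping from start to finish.",
    "I would not recommend this film to anyone.",
]

# Wrap each text in the downstream prompt template, keeping the verbaliser slot
# as the tokenizer's mask token, so the LM sees the template during pre-training.
template = "{text} It was {mask}."
inputs = tokenizer(
    [template.format(text=t, mask=tokenizer.mask_token) for t in texts],
    padding=True, truncation=True, return_tensors="pt",
)

# Standard MLM-style step: randomly mask 15% of the remaining tokens and train
# the LM to reconstruct them (unmasked positions are ignored in the loss).
labels = inputs["input_ids"].clone()
candidates = (labels != tokenizer.pad_token_id) & (labels != tokenizer.mask_token_id)
masked = torch.bernoulli(torch.full(labels.shape, 0.15)).bool() & candidates
labels[~masked] = -100
inputs["input_ids"] = inputs["input_ids"].masked_fill(masked, tokenizer.mask_token_id)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
loss = model(**inputs, labels=labels).loss
loss.backward()
optimizer.step()
print(f"continued pre-training MLM loss: {loss.item():.4f}")
```

After this unsupervised stage, the same checkpoint would be handed to a prompt-based fine-tuning method on the target task; the paper's point is that exposing the LM to both the task texts and the prompt format beforehand is what makes that fine-tuning stronger.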

Type: Proceedings paper
Title: Don't Stop Pretraining? Make Prompt-based Fine-tuning Powerful Learner
Event: NeurIPS 2023, Conference on Neural Information Processing Systems
Location: New Orleans, United States
Open access status: An open access version is available from UCL Discovery
Publisher version: https://papers.nips.cc/paper_files/paper/2023/hash...
Language: English
Additional information: This version is the author accepted manuscript. For information on re-use, please refer to the publisher's terms and conditions.
UCL classification: UCL
UCL > Provost and Vice Provost Offices > UCL BEAMS
UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science
UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science > Dept of Civil, Environ and Geomatic Eng
URI: https://discovery-pp.ucl.ac.uk/id/eprint/10178314
