RMIT University

Uncertainty Estimation and Reduction of Pre-trained Models for Text Regression

journal contribution
posted on 2024-11-02, 20:46 authored by Yuxia Wang, Daniel Beck, Timothy Baldwin, Cornelia Verspoor
State-of-the-art classification and regression models are often not well calibrated and cannot reliably provide uncertainty estimates, limiting their utility in safety-critical applications such as clinical decision-making. While recent work has focused on the calibration of classifiers, there is almost no work in NLP on calibration in a regression setting. In this paper, we quantify the calibration of pre-trained language models for text regression, both intrinsically and extrinsically. We further apply uncertainty estimates to augment training data in low-resource domains. Our experiments on three regression tasks in both self-training and active-learning settings show that uncertainty estimation can be used to increase overall performance and enhance model generalization.
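To illustrate the idea of using uncertainty estimates to augment training data, the following is a minimal sketch in plain NumPy. It uses a bootstrap ensemble of ridge regressors on toy numeric features as a stand-in for model uncertainty, then keeps only the lowest-variance pseudo-labeled examples for self-training. The data, the ensemble size, and the ridge model are all illustrative assumptions; the paper itself works with pre-trained language models on text regression tasks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy labeled/unlabeled regression data (illustrative stand-in only).
X_lab = rng.normal(size=(50, 3))
w_true = np.array([1.5, -2.0, 0.5])
y_lab = X_lab @ w_true + rng.normal(scale=0.1, size=50)
X_unlab = rng.normal(size=(200, 3))

def fit_ridge(X, y, lam=1e-3):
    """Closed-form ridge regression weights."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Bootstrap ensemble: each member is trained on a resampled labeled set,
# and disagreement across members serves as the uncertainty estimate.
K = 20
preds = np.empty((K, len(X_unlab)))
for k in range(K):
    idx = rng.integers(0, len(X_lab), size=len(X_lab))
    w = fit_ridge(X_lab[idx], y_lab[idx])
    preds[k] = X_unlab @ w

mean = preds.mean(axis=0)  # ensemble prediction -> pseudo-label
std = preds.std(axis=0)    # ensemble spread -> uncertainty

# Self-training step: augment the labeled set with only the most
# confident (lowest-uncertainty) pseudo-labeled examples.
keep = std < np.quantile(std, 0.25)
X_aug = np.vstack([X_lab, X_unlab[keep]])
y_aug = np.concatenate([y_lab, mean[keep]])
```

In an active-learning setting the same `std` scores would be used the other way around: the *highest*-uncertainty examples are sent for human annotation rather than pseudo-labeled.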

History

Journal

Transactions of the Association for Computational Linguistics

Volume

10

Start page

680

End page

696

Total pages

17

Publisher

MIT Press

Place published

United States

Language

English

Copyright

© 2022 Association for Computational Linguistics. Distributed under a CC-BY 4.0 license.

Former Identifier

2006116032

Esploro creation date

2022-09-09
