Cockayne, J; Graham, MM; Oates, CJ; Sullivan, TJ; Teymur, O; (2022) Testing Whether a Learning Procedure is Calibrated. Journal of Machine Learning Research, 23, pp. 1-36.
Text: 21-1065.pdf - Published Version (1MB)
Abstract
A learning procedure takes as input a dataset and performs inference for the parameters θ of a model that is assumed to have given rise to the dataset. Here we consider learning procedures whose output is a probability distribution, representing uncertainty about θ after seeing the dataset. Bayesian inference is a prime example of such a procedure, but one can also construct other learning procedures that return distributional output. This paper studies conditions for a learning procedure to be considered calibrated, in the sense that the true data-generating parameters are plausible as samples from its distributional output. A learning procedure whose inferences and predictions are systematically over- or under-confident will fail to be calibrated. On the other hand, a learning procedure that is calibrated need not be statistically efficient. A hypothesis-testing framework is developed in order to assess, using simulation, whether a learning procedure is calibrated. Several vignettes are presented to illustrate different aspects of the framework.
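The framework described in the abstract is simulation-based. The sketch below illustrates the general idea of such a check under stated assumptions: it uses a rank-uniformity statistic tested with a Kolmogorov–Smirnov test as one concrete instantiation, which is not necessarily the hypothesis test developed in the paper, and all function names are illustrative.

```python
import numpy as np
from scipy import stats

# Illustrative sketch of a simulation-based calibration check, in the spirit of
# the abstract. The specific statistic (rank uniformity assessed by a KS test)
# is an assumption for illustration, not the paper's own test.

def rank_calibration_test(prior_sampler, data_simulator, learning_procedure,
                          n_trials=200, n_posterior_samples=100, seed=0):
    """Return a KS p-value for uniformity of rank statistics.

    prior_sampler(rng)                   -> true scalar parameter theta*
    data_simulator(theta, rng)           -> dataset generated under theta
    learning_procedure(data, m, rng)     -> m samples from the distributional output
    """
    rng = np.random.default_rng(seed)
    ranks = np.empty(n_trials)
    for i in range(n_trials):
        theta_true = prior_sampler(rng)
        data = data_simulator(theta_true, rng)
        samples = learning_procedure(data, n_posterior_samples, rng)
        # Rank of the true parameter among the output samples; if the procedure
        # is calibrated, these normalised ranks should be approximately uniform.
        ranks[i] = np.mean(samples < theta_true)
    return stats.kstest(ranks, "uniform").pvalue


# Toy usage: exact conjugate Bayesian inference for a Gaussian mean with known
# unit variance, which should pass the check (large p-value expected).
if __name__ == "__main__":
    prior = lambda rng: rng.normal(0.0, 1.0)
    simulate = lambda theta, rng: rng.normal(theta, 1.0, size=20)

    def bayes_gaussian(data, m, rng):
        n = len(data)
        post_var = 1.0 / (1.0 + n)        # prior N(0,1), likelihood N(theta,1)
        post_mean = post_var * data.sum()
        return rng.normal(post_mean, np.sqrt(post_var), size=m)

    print(rank_calibration_test(prior, simulate, bayes_gaussian))
```

Replacing the exact posterior with a deliberately over- or under-confident one (e.g. scaling the posterior standard deviation) would make the rank distribution non-uniform, illustrating the kind of miscalibration the paper's framework is designed to detect.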
| Field | Value |
|---|---|
| Type | Article |
| Title | Testing Whether a Learning Procedure is Calibrated |
| Open access status | An open access version is available from UCL Discovery |
| Publisher version | http://jmlr.org/papers/v23/21-1065.html |
| Language | English |
| Additional information | © 2022 the Authors. Original content in this paper is licensed under the terms of the Creative Commons Attribution 4.0 International (CC BY 4.0) Licence (https://creativecommons.org/licenses/by/4.0/). |
| Keywords | calibratedness, credible sets, uncertainty quantification |
| UCL classification | UCL |
| URI | https://discovery-pp.ucl.ac.uk/id/eprint/10166609 |