Robust Uncertainty Quantification Using Conformalised Monte Carlo Prediction

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Deploying deep learning models in safety-critical applications remains a very challenging task, mandating the provision of assurances for the dependable operation of these models. Uncertainty quantification (UQ) methods estimate the model's confidence per prediction, informing decision-making by considering the effect of randomness and model misspecification. Despite the advances of state-of-the-art UQ methods, they are either computationally expensive or produce conservative prediction sets/intervals. We introduce MC-CP, a novel hybrid UQ method that combines a new adaptive Monte Carlo (MC) dropout method with conformal prediction (CP). MC-CP adaptively modulates the traditional MC dropout at runtime to save memory and computational resources, enabling predictions to be consumed by CP and yielding robust prediction sets/intervals. Through comprehensive experiments, we show that MC-CP delivers significant improvements over comparable UQ methods, such as MC dropout, RAPS and CQR, in both classification and regression benchmarks. MC-CP can be easily added to existing models, making its deployment simple. The MC-CP code and replication package are available at https://github.com/team-daniel/MC-CP.
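The abstract describes two composable ingredients: an MC dropout loop that stops sampling adaptively once the predictive mean stabilises, and split conformal prediction applied to the resulting probabilities to produce sets with coverage guarantees. The sketch below illustrates these two ideas with NumPy only; the stochastic "network" (`mc_dropout_probs`), the synthetic data, and the stopping tolerance are all illustrative assumptions, not the authors' implementation (see the linked repository for that).

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_probs(x, n_max=100, tol=1e-3):
    """Hypothetical stand-in for a dropout-enabled classifier: each call draws
    a stochastic softmax output, and sampling stops early once the running
    mean changes by less than `tol` (illustrating adaptive MC dropout)."""
    running = np.zeros(3)
    for i in range(1, n_max + 1):
        logits = x + rng.normal(scale=0.5, size=3)  # simulated stochastic forward pass
        p = np.exp(logits) / np.exp(logits).sum()
        new = running + (p - running) / i           # incremental mean update
        if i > 10 and np.abs(new - running).max() < tol:
            return new                              # converged: stop sampling
        running = new
    return running

def conformal_quantile(cal_probs, cal_labels, alpha=0.1):
    """Split conformal calibration: score = 1 - probability of the true class."""
    scores = 1.0 - cal_probs[np.arange(len(cal_labels)), cal_labels]
    n = len(scores)
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(scores, level, method="higher")

def prediction_set(probs, qhat):
    """Include every class whose probability clears the calibrated threshold."""
    return set(np.where(probs >= 1.0 - qhat)[0])

# Synthetic calibration data: all points belong to class 0.
cal_X = rng.normal(size=(200, 3)) + np.array([2.0, 0.0, 0.0])
cal_y = np.zeros(200, dtype=int)
cal_probs = np.vstack([mc_dropout_probs(x) for x in cal_X])
qhat = conformal_quantile(cal_probs, cal_y, alpha=0.1)

test_probs = mc_dropout_probs(np.array([2.0, 0.0, 0.0]))
pred_set = prediction_set(test_probs, qhat)
```

The conformal step is the standard split-CP recipe (this simple score yields the baseline method; RAPS and CQR, mentioned above, are more refined variants): the `(n+1)(1-alpha)/n` quantile of calibration scores gives a threshold such that prediction sets contain the true class with probability at least `1 - alpha` on exchangeable data.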
Original language: English
Title of host publication: Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24)
Publication status: Published - 25 Feb 2024

Publication series

Name: Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24)

Bibliographical note

This is an author-produced version of the published paper. Uploaded in accordance with the University’s Research Publications and Open Access policy.

Keywords

  • Uncertainty estimation
  • Deep learning
  • Monte Carlo
