References

The sections below provide an exhaustive list of references for the models, metrics, and datasets used in this library.

Uncertainty Models

The following uncertainty models are implemented.

Deep Evidential Classification

For Deep Evidential Classification, consider citing:

Evidential Deep Learning to Quantify Classification Uncertainty

  • Authors: Murat Sensoy, Lance Kaplan, Melih Kandemir

  • Paper: NeurIPS 2018.
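
For orientation, the method reads the network output as non-negative evidence and places a Dirichlet distribution over class probabilities. Below is a minimal plain-PyTorch sketch of that mapping (all names are illustrative; this is not this library's API):

    import torch
    import torch.nn.functional as F

    def dirichlet_uncertainty(logits: torch.Tensor):
        """Map raw logits to Dirichlet parameters, expected class
        probabilities, and a per-sample uncertainty (Sensoy et al., 2018)."""
        evidence = F.softplus(logits)                 # non-negative evidence e_k
        alpha = evidence + 1.0                        # Dirichlet parameters
        strength = alpha.sum(dim=-1, keepdim=True)    # S = sum_k alpha_k
        probs = alpha / strength                      # E[p_k] = alpha_k / S
        uncertainty = logits.shape[-1] / strength     # u = K / S
        return probs, uncertainty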

Beta NLL in Deep Regression

For Beta NLL in Deep Regression, consider citing:

On the Pitfalls of Heteroscedastic Uncertainty Estimation with Probabilistic Neural Networks

  • Authors: Maximilian Seitzer, Arash Tavakoli, Dimitrije Antic, Georg Martius

  • Paper: ICLR 2022.
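
The paper's key proposal is to reweight the Gaussian negative log-likelihood by the predicted variance raised to a power beta, with the weight detached from the computational graph. A minimal sketch (function name is illustrative):

    import torch

    def beta_nll_loss(mean, variance, target, beta=0.5):
        """Beta-NLL (Seitzer et al., 2022): per-point Gaussian NLL
        reweighted by a detached variance**beta factor.
        beta = 0 recovers the plain NLL; beta = 1 gives MSE-like weighting."""
        nll = 0.5 * (variance.log() + (target - mean) ** 2 / variance)
        weight = variance.detach() ** beta            # stop-gradient on the weight
        return (weight * nll).mean()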

Deep Evidential Regression

For Deep Evidential Regression, consider citing:

Deep Evidential Regression

  • Authors: Alexander Amini, Wilko Schwarting, Ava Soleimany, Daniela Rus

  • Paper: NeurIPS 2020.
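
Here the network outputs the four parameters of a Normal-Inverse-Gamma distribution, from which the prediction and both uncertainty types follow in closed form. A small sketch of that read-out (assumes alpha > 1; names are illustrative):

    import torch

    def nig_uncertainties(gamma, nu, alpha, beta):
        """Read prediction, aleatoric, and epistemic uncertainty off
        Normal-Inverse-Gamma outputs (Amini et al., 2020)."""
        prediction = gamma                            # E[mu]
        aleatoric = beta / (alpha - 1.0)              # E[sigma^2]
        epistemic = beta / (nu * (alpha - 1.0))       # Var[mu]
        return prediction, aleatoric, epistemic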

Bayesian Neural Networks

For Bayesian Neural Networks, consider citing:

Weight Uncertainty in Neural Networks

  • Authors: Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra

  • Paper: ICML 2015.
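
Bayes by Backprop learns a distribution over each weight and samples from it with the reparameterization trick. A stripped-down variational linear layer, with the prior and KL term omitted for brevity (class name is illustrative):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class BayesLinear(nn.Module):
        """Linear layer with a Gaussian posterior per weight, sampled via
        the reparameterization trick (Blundell et al., 2015).
        The KL regularizer against the prior is omitted here."""
        def __init__(self, in_features, out_features):
            super().__init__()
            self.mu = nn.Parameter(torch.zeros(out_features, in_features))
            self.rho = nn.Parameter(torch.full((out_features, in_features), -3.0))

        def forward(self, x):
            sigma = F.softplus(self.rho)                        # sigma > 0
            weight = self.mu + sigma * torch.randn_like(sigma)  # w = mu + sigma * eps
            return F.linear(x, weight)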

Deep Ensembles

For Deep Ensembles, consider citing:

Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles

  • Authors: Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell

  • Paper: NeurIPS 2017.
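
At inference, a deep ensemble simply averages the predictive distributions of independently trained members; disagreement between members serves as the uncertainty signal. A minimal sketch:

    import torch

    @torch.no_grad()
    def ensemble_predict(models, x):
        """Average the softmax outputs of independently trained members
        (Lakshminarayanan et al., 2017)."""
        probs = torch.stack([model(x).softmax(dim=-1) for model in models])
        return probs.mean(dim=0)          # disagreement signal: probs.var(dim=0)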

BatchEnsemble

For BatchEnsemble, consider citing:

BatchEnsemble: An Alternative Approach to Efficient Ensemble and Lifelong Learning

  • Authors: Yeming Wen, Dustin Tran, and Jimmy Ba

  • Paper: ICLR 2020.
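
BatchEnsemble shares one weight matrix across all members and gives each member a cheap rank-1 modulation W_i = W * (r_i s_i^T). A sketch of the corresponding linear layer (the class name and one-member-per-call convention are illustrative simplifications):

    import torch
    import torch.nn as nn

    class BatchEnsembleLinear(nn.Module):
        """Shared weight with per-member rank-1 factors (Wen et al., 2020):
        y = ((x * s_i) W^T) * r_i  ==  x (W * r_i s_i^T)^T."""
        def __init__(self, in_features, out_features, num_members):
            super().__init__()
            self.shared = nn.Linear(in_features, out_features, bias=False)
            self.r = nn.Parameter(torch.ones(num_members, out_features))
            self.s = nn.Parameter(torch.ones(num_members, in_features))

        def forward(self, x, member):
            return self.shared(x * self.s[member]) * self.r[member]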

Masksembles

For Masksembles, consider citing:

Masksembles for Uncertainty Estimation

  • Authors: Nikita Durasov, Timur Bagautdinov, Pierre Baqué, and Pascal Fua

  • Paper: CVPR 2021.

MIMO

For MIMO, consider citing:

Training independent subnetworks for robust prediction

  • Authors: Marton Havasi, Rodolphe Jenatton, Stanislav Fort, Jeremiah Zhe Liu, Jasper Snoek, Balaji Lakshminarayanan, Andrew M. Dai, and Dustin Tran

  • Paper: ICLR 2021.

Packed-Ensembles

For Packed-Ensembles, consider citing:

Packed-Ensembles for Efficient Uncertainty Estimation

  • Authors: Olivier Laurent, Adrien Lafage, Enzo Tartaglione, Geoffrey Daniel, Jean-Marc Martinez, Andrei Bursuc, and Gianni Franchi

  • Paper: ICLR 2023.

Monte-Carlo Dropout

For Monte-Carlo Dropout, consider citing:

Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning

  • Authors: Yarin Gal and Zoubin Ghahramani

  • Paper: ICML 2016.
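
The trick is to keep dropout stochastic at test time and average several forward passes. A plain-PyTorch sketch (function name is illustrative):

    import torch

    def mc_dropout_predict(model, x, num_samples=20):
        """Average stochastic forward passes with dropout left on
        (Gal & Ghahramani, 2016); the variance is an uncertainty estimate."""
        model.eval()
        for module in model.modules():
            if isinstance(module, torch.nn.Dropout):
                module.train()            # re-enable dropout only
        with torch.no_grad():
            probs = torch.stack(
                [model(x).softmax(dim=-1) for _ in range(num_samples)]
            )
        return probs.mean(dim=0), probs.var(dim=0)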

Data Augmentation Methods

Mixup

For Mixup, consider citing:

mixup: Beyond Empirical Risk Minimization

  • Authors: Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, and David Lopez-Paz

  • Paper: ICLR 2018.
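
Mixup forms convex combinations of pairs of inputs and their labels with a Beta-distributed coefficient. A minimal sketch assuming one-hot targets:

    import torch

    def mixup(x, y_onehot, alpha=1.0):
        """Mix a batch with a shuffled copy of itself (Zhang et al., 2018)."""
        lam = torch.distributions.Beta(alpha, alpha).sample().item()
        index = torch.randperm(x.size(0))
        mixed_x = lam * x + (1.0 - lam) * x[index]
        mixed_y = lam * y_onehot + (1.0 - lam) * y_onehot[index]
        return mixed_x, mixed_y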

RegMixup

For RegMixup, consider citing:

RegMixup: Mixup as a Regularizer Can Surprisingly Improve Accuracy and Out-of-Distribution Robustness

  • Authors: Francesco Pinto, Harry Yang, Ser-Nam Lim, Philip H.S. Torr, Puneet K. Dokania

  • Paper: NeurIPS 2022.

MixupIO

For MixupIO, consider citing:

On the Pitfall of Mixup for Uncertainty Calibration

  • Authors: Deng-Bao Wang, Lanqing Li, Peilin Zhao, Pheng-Ann Heng, and Min-Ling Zhang

  • Paper: CVPR 2023 (https://openaccess.thecvf.com/content/CVPR2023/papers/Wang_On_the_Pitfall_of_Mixup_for_Uncertainty_Calibration_CVPR_2023_paper.pdf).

Warping Mixup

For Warping Mixup, consider citing:

Tailoring Mixup to Data using Kernel Warping functions

  • Authors: Quentin Bouniot, Pavlo Mozharovskyi, and Florence d’Alché-Buc

  • Paper: arXiv preprint, 2023.

Post-Processing Methods

Temperature, Vector, & Matrix Scaling

For temperature, vector, & matrix scaling, consider citing:

On Calibration of Modern Neural Networks

  • Authors: Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger

  • Paper: ICML 2017.
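
Temperature scaling fits a single scalar T on held-out validation logits by minimizing the negative log-likelihood, then divides test logits by T. A sketch (function name is illustrative):

    import torch
    import torch.nn.functional as F

    def fit_temperature(val_logits, val_labels, max_iter=50):
        """Learn T > 0 minimizing NLL on validation logits (Guo et al., 2017).
        Calibrated probabilities are softmax(test_logits / T)."""
        log_t = torch.zeros(1, requires_grad=True)    # T = exp(log_t) stays positive
        optimizer = torch.optim.LBFGS([log_t], max_iter=max_iter)

        def closure():
            optimizer.zero_grad()
            loss = F.cross_entropy(val_logits / log_t.exp(), val_labels)
            loss.backward()
            return loss

        optimizer.step(closure)
        return log_t.exp().item()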

Monte-Carlo Batch Normalization

For Monte-Carlo Batch Normalization, consider citing:

Bayesian Uncertainty Estimation for Batch Normalized Deep Networks

  • Authors: Mathias Teye, Hossein Azizpour, and Kevin Smith

  • Paper: ICML 2018.

Metrics

The following metrics are used/implemented.

Expected Calibration Error

For the expected calibration error, consider citing:

Obtaining Well Calibrated Probabilities Using Bayesian Binning

  • Authors: Mahdi Pakdaman Naeini, Gregory F. Cooper, and Milos Hauskrecht

  • Paper: AAAI 2015.
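
The metric bins predictions by confidence and averages the per-bin gap between accuracy and confidence, weighted by bin size. A minimal equal-width-binning sketch:

    import torch

    def expected_calibration_error(probs, labels, num_bins=15):
        """ECE with equal-width confidence bins (Naeini et al., 2015)."""
        confidence, prediction = probs.max(dim=-1)
        correct = prediction.eq(labels).float()
        edges = torch.linspace(0.0, 1.0, num_bins + 1)
        ece = torch.zeros(())
        for lo, hi in zip(edges[:-1], edges[1:]):
            in_bin = (confidence > lo) & (confidence <= hi)
            if in_bin.any():
                gap = (correct[in_bin].mean() - confidence[in_bin].mean()).abs()
                ece += in_bin.float().mean() * gap
        return ece.item()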

Grouping Loss

For the grouping loss, consider citing:

Beyond Calibration: Estimating the Grouping Loss of Modern Neural Networks

  • Authors: Alexandre Perez-Lebel, Marine Le Morvan, and Gaël Varoquaux

  • Paper: ICLR 2023.

Datasets

The following datasets are used/implemented.

MNIST

Gradient-based learning applied to document recognition

  • Authors: Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner

  • Paper: Proceedings of the IEEE, 1998.

MNIST-C

MNIST-C: A Robustness Benchmark for Computer Vision

  • Authors: Norman Mu, and Justin Gilmer

  • Paper: ICMLW 2019.

Not-MNIST

  • Author: Yaroslav Bulatov

CIFAR-10 & CIFAR-100

Learning multiple layers of features from tiny images

  • Author: Alex Krizhevsky

  • Paper: Technical report, University of Toronto, 2009.

CIFAR-C, Tiny-ImageNet-C, ImageNet-C

Benchmarking neural network robustness to common corruptions and perturbations

  • Authors: Dan Hendrycks and Thomas Dietterich

  • Paper: ICLR 2019.

CIFAR-10H

Human uncertainty makes classification more robust

  • Authors: Joshua C. Peterson, Ruairidh M. Battleday, Thomas L. Griffiths, and Olga Russakovsky

  • Paper: ICCV 2019.

CIFAR-10N / CIFAR-100N

Learning with Noisy Labels Revisited: A Study Using Real-World Human Annotations

  • Authors: Jiaheng Wei, Zhaowei Zhu, Hao Cheng, Tongliang Liu, Gang Niu, Yang Liu

  • Paper: ICLR 2022.

SVHN

Reading digits in natural images with unsupervised feature learning

  • Authors: Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y. Ng

  • Paper: NeurIPS Workshops 2011.

ImageNet

ImageNet: A large-scale hierarchical image database

  • Authors: Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei

  • Paper: CVPR 2009.

ImageNet-A & ImageNet-O

Natural adversarial examples

  • Authors: Dan Hendrycks, Kevin Zhao, Steven Basart, Jacob Steinhardt, and Dawn Song

  • Paper: CVPR 2021.

ImageNet-R

The many faces of robustness: A critical analysis of out-of-distribution generalization

  • Authors: Dan Hendrycks, Steven Basart, Norman Mu, Saurav Kadavath, Frank Wang, Evan Dorundo, Rahul Desai, Tyler Zhu, Samyak Parajuli, Mike Guo, et al.

  • Paper: ICCV 2021.

Textures

ViM: Out-of-distribution with virtual-logit matching

  • Authors: Haoqi Wang, Zhizhong Li, Litong Feng, and Wayne Zhang

  • Paper: CVPR 2022.

OpenImage-O

Curation:

ViM: Out-of-distribution with virtual-logit matching

  • Authors: Haoqi Wang, Zhizhong Li, Litong Feng, and Wayne Zhang

  • Paper: CVPR 2022.

Original Dataset:

The Open Images Dataset V4: Unified image classification, object detection, and visual relationship detection at scale

  • Authors: Alina Kuznetsova, Hassan Rom, Neil Alldrin, Jasper Uijlings, Ivan Krasin, Jordi Pont-Tuset, Shahab Kamali, et al.

  • Paper: IJCV 2020.

MUAD

MUAD: Multiple Uncertainties for Autonomous Driving Dataset

  • Authors: Gianni Franchi, Xuanlong Yu, Andrei Bursuc, et al.

  • Paper: BMVC 2022 (https://arxiv.org/pdf/2203.01437.pdf).

Architectures

ResNet

Deep Residual Learning for Image Recognition

  • Authors: Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun

  • Paper: CVPR 2016.

Wide-ResNet

Wide Residual Networks

  • Authors: Sergey Zagoruyko and Nikos Komodakis

  • Paper: BMVC 2016.

VGG

Very Deep Convolutional Networks for Large-Scale Image Recognition

  • Authors: Karen Simonyan and Andrew Zisserman

  • Paper: ICLR 2015.

Layers

Filter Response Normalization Layer: Eliminating Batch Dependence in the Training of Deep Neural Networks

  • Authors: Saurabh Singh and Shankar Krishnan

  • Paper: CVPR 2020.