API Reference#
Routines#
The routines are the main building blocks of the library. They define the framework in which the models are trained and evaluated, and they provide easy computation of the metrics crucial for uncertainty estimation in each context, namely classification, segmentation, regression, and pixelwise regression.
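In practice, a routine wraps a standard PyTorch model and is trained and evaluated with the library's Lightning-based trainer. Below is a minimal sketch; the `ClassificationRoutine`, `TUTrainer`, and `MNISTDataModule` arguments shown are the common ones, but check the class documentation for the exact signatures.

```python
from torch import nn

from torch_uncertainty import TUTrainer
from torch_uncertainty.datamodules import MNISTDataModule
from torch_uncertainty.routines import ClassificationRoutine

# Any nn.Module can be wrapped by a routine; a tiny MLP keeps the sketch short.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))

# The routine bundles the model, the loss, and the uncertainty-related metrics.
routine = ClassificationRoutine(
    model=model,
    num_classes=10,
    loss=nn.CrossEntropyLoss(),
)

# Train and evaluate with the Lightning-based trainer and a datamodule.
datamodule = MNISTDataModule(root="./data", batch_size=128)
trainer = TUTrainer(accelerator="cpu", max_epochs=1)
trainer.fit(routine, datamodule=datamodule)
trainer.test(routine, datamodule=datamodule)
```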
Classification#
- Routine for training & testing on classification tasks.
Segmentation#
- Routine for training & testing on segmentation tasks.
Regression#
- Routine for training & testing on regression tasks.
Pixelwise Regression#
- Routine for training & testing on pixel regression tasks.
Baselines#
Warning
The baselines will soon be removed from the library to avoid confusion with the routines.
TorchUncertainty provides Lightning-based models that can be easily trained and evaluated. These models inherit from the routines and are specifically designed to benchmark different methods in comparable settings, here with constant architectures.
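For instance, a classification baseline is instantiated directly and then trained like any routine. The sketch below assumes a `ResNetBaseline` class taking `num_classes`, `in_channels`, `loss`, `version`, and `arch`; the exact argument names may differ between releases.

```python
from torch import nn

from torch_uncertainty.baselines.classification import ResNetBaseline

# A standard (single-model) ResNet-18 baseline for 10-class classification on
# RGB images. The argument names below are assumptions; see the class docs.
baseline = ResNetBaseline(
    num_classes=10,
    in_channels=3,
    loss=nn.CrossEntropyLoss(),
    version="std",  # method variant, e.g. "std" or "packed" (assumed values)
    arch=18,        # ResNet depth
)
```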
Classification#
- ResNet backbone baseline for classification providing support for various versions and architectures.
- VGG backbone baseline for classification providing support for various versions and architectures.
- Wide-ResNet28x10 backbone baseline for classification providing support for various versions.
Regression#
- MLP baseline for regression providing support for various versions.
Segmentation#
- SegFormer backbone baseline for segmentation providing support for various versions and architectures.
Monocular Depth Estimation#
Layers#
Ensemble layers#
- Packed-Ensembles-style Linear layer.
- Packed-Ensembles-style Conv2d layer.
- Packed-Ensembles-style MultiheadAttention layer.
- Packed-Ensembles-style LayerNorm layer.
- Packed-Ensembles-style TransformerEncoderLayer (made up of self-attention followed by a feedforward network).
- Packed-Ensembles-style TransformerDecoderLayer (made up of self-attention, multi-head attention, and a feedforward network).
- BatchEnsemble-style Linear layer.
- BatchEnsemble-style Conv2d layer.
- Masksembles-style Linear layer.
- Masksembles-style Conv2d layer.
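For example, these layers are drop-in replacements for their standard PyTorch counterparts. The sketch below uses `PackedLinear`; the `alpha`, `num_estimators`, and `first`/`last` arguments follow the Packed-Ensembles usage, but the exact output layout produced by the flags is an assumption here, so check the layer documentation.

```python
import torch

from torch_uncertainty.layers import PackedLinear

# Replacement for nn.Linear(64, 10) hosting 4 estimators with a width
# multiplier alpha=2. The first/last flags mark the boundary layers of the
# network (how they reshape inputs/outputs is detailed in the layer docs).
layer = PackedLinear(64, 10, alpha=2, num_estimators=4, first=True, last=True)

x = torch.randn(8, 64)
out = layer(x)
print(out.shape)
```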
Bayesian layers#
- Bayesian Linear layer with Gaussian mixture prior and Normal posterior.
- Bayesian Conv1d layer with Gaussian mixture prior and Normal posterior.
- Bayesian Conv2d layer with Gaussian mixture prior and Normal posterior.
- Bayesian Conv3d layer with Gaussian mixture prior and Normal posterior.
- LPBNN-style linear layer.
- LPBNN-style 2D convolutional layer.
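As an illustration, Bayesian layers sample their weights at each forward pass, so two calls on the same input generally differ. A minimal sketch, assuming a `BayesLinear(in_features, out_features)` constructor:

```python
import torch

from torch_uncertainty.layers.bayesian import BayesLinear

# A single Bayesian linear layer: the weights are sampled from the (learned)
# posterior at every forward pass.
layer = BayesLinear(20, 5)

x = torch.randn(8, 20)
out1 = layer(x)
out2 = layer(x)
print(torch.allclose(out1, out2))  # expected to print False due to weight sampling
```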
Density layers#
Linear Layers#
- Normal Distribution Linear Density Layer.
- Laplace Distribution Linear Density Layer.
- Cauchy Distribution Linear Density Layer.
- Student's T-Distribution Linear Density Layer.
- Normal-Inverse-Gamma Distribution Linear Density Layer.
Convolution Layers#
- Normal Distribution Convolutional Density Layer.
- Laplace Distribution Convolutional Density Layer.
- Cauchy Distribution Convolutional Density Layer.
- Student's T-Distribution Convolutional Density Layer.
- Normal-Inverse-Gamma Distribution Convolutional Density Layer.
Models#
Wrappers#
Functions#
- BatchEnsemble wrapper for a model.
- Build a Deep Ensemble out of the original models.
- MC Dropout wrapper for a model.
Classes#
- Wrap a BatchEnsemble model to ensure correct batch replication.
- Ensemble of models at different points in the training trajectory.
- Exponential Moving Average (EMA).
- MC Dropout wrapper for a model containing nn.Dropout modules.
- Stochastic Weight Averaging.
- Stochastic Weight Averaging Gaussian (SWAG).
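For example, the functional wrappers turn a deterministic model into an ensemble-like predictor. The sketch below assumes `mc_dropout(model, num_estimators)` and `deep_ensembles(model, num_estimators)` signatures from `torch_uncertainty.models`; check the documentation for the full argument lists.

```python
import torch
from torch import nn

from torch_uncertainty.models import deep_ensembles, mc_dropout

# A small network containing nn.Dropout, as required by the MC Dropout wrapper.
net = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Dropout(0.5), nn.Linear(32, 10))

# Keep dropout active at inference and draw 8 stochastic forward passes.
mc_net = mc_dropout(net, num_estimators=8)
mc_net.eval()
out = mc_net(torch.randn(2, 16))  # per-estimator predictions stacked along the batch dimension
print(out.shape)

# Alternatively, build a deep ensemble from copies of the base model.
ensemble = deep_ensembles(net, num_estimators=4)
```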
Metrics#
Classification#
Proper Scores#
- Compute the Brier score.
- Computes the Negative Log-Likelihood (NLL) metric for classification tasks.
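These metrics follow the TorchMetrics update/compute interface. A minimal sketch with the Brier score, assuming it is importable from `torch_uncertainty.metrics.classification` and takes `num_classes`:

```python
import torch

from torch_uncertainty.metrics.classification import BrierScore

# Update with predicted probabilities and integer targets, then aggregate.
metric = BrierScore(num_classes=3)

probs = torch.tensor([[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]])
targets = torch.tensor([0, 2])

metric.update(probs, targets)
print(metric.compute())
```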
Out-of-Distribution Detection#
Selective Classification#
- Calculate the Area Under the Generalized Risk-Coverage curve (AUGRC).
- Calculate the Area Under the Risk-Coverage curve (AURC).
- Provide the coverage at x% risk.
- Provide the coverage at 5% risk.
- Compute the risk at a specified coverage threshold.
- Compute the risk at 80% coverage.
Calibration#
- Computes the Adaptive Top-label Calibration Error (ACE) for classification tasks.
- Computes the Calibration Error for classification tasks.
Conformal Predictions#
- Empirical coverage rate metric.
- Prediction set size metric to assess the efficiency of conformal prediction methods.
Diversity#
- Calculate the Disagreement Metric.
- The Shannon Entropy Metric to estimate the confidence of a single model or the mean confidence across estimators.
- Compute the Mutual Information Metric.
- Compute the Variation Ratio.
Others#
- Metric to estimate the Top-label Grouping Loss.
Regression#
- Computes the Negative Log-Likelihood (NLL) metric for regression tasks.
- Computes the Log10 metric.
- Mean Absolute Error of the inverse predictions (iMAE).
- Compute the Mean Absolute Error relative to the Ground Truth (MAErel or ARErel).
- Compute the Mean Squared Error relative to the Ground Truth (MSErel or SRE).
- Mean Squared Error of the inverse predictions (iMSE).
- Computes the Mean Squared Logarithmic Error (MSLE) regression metric.
- Computes the Scale-Invariant Logarithmic Loss metric.
- Computes the Threshold Accuracy metric, also referred to as d1, d2, or d3.
Segmentation#
- Computes the Mean Intersection over Union (IoU) score.
- SegmentationBinaryAUROC computes the Area Under the Receiver Operating Characteristic Curve (AUROC) for binary segmentation tasks.
- SegmentationBinaryAveragePrecision computes the Average Precision (AP) for binary segmentation tasks.
- FPR95 metric for segmentation tasks.
Others#
- The Area Under the Sparsification Error curve (AUSE) metric to evaluate the quality of the uncertainty estimates, i.e., how much they coincide with the true errors.
Losses#
- Binary Cross Entropy with Logits Loss with label smoothing.
- The Beta Negative Log-likelihood loss.
- The Conflictual Loss.
- The Confidence Penalty Loss.
- The deep evidential classification loss.
- The Deep Evidential Regression loss.
- Negative Log-Likelihood loss using given distributions as inputs.
- The Evidence Lower Bound (ELBO) loss for Bayesian Neural Networks.
- Focal-Loss for classification tasks.
- KL divergence loss for Bayesian Neural Networks.
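As an example, the ELBO loss wraps a task loss and adds the KL divergence contributed by the Bayesian layers of the model. The sketch below follows the Bayesian classification workflow; the `kl_weight` and `num_samples` values are illustrative.

```python
from torch import nn

from torch_uncertainty.layers.bayesian import BayesLinear
from torch_uncertainty.losses import ELBOLoss
from torch_uncertainty.routines import ClassificationRoutine

# A tiny Bayesian classifier built from the library's Bayesian layers.
bayesian_model = nn.Sequential(nn.Flatten(), BayesLinear(28 * 28, 10))

# The ELBO loss combines the inner task loss with the KL divergence between
# the weight posterior and the prior of the Bayesian layers.
criterion = ELBOLoss(
    model=bayesian_model,
    inner_loss=nn.CrossEntropyLoss(),
    kl_weight=1 / 50000,  # commonly set to 1 / number of training samples
    num_samples=3,        # Monte Carlo weight samples per training step
)

routine = ClassificationRoutine(model=bayesian_model, num_classes=10, loss=criterion)
```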
Post-Processing Methods#
- Laplace approximation for uncertainty estimation.
- Monte Carlo Batch Normalization wrapper.
Scaling Methods#
- Matrix scaling post-processing for calibrated probabilities.
- Temperature scaling post-processing for calibrated probabilities.
- Vector scaling post-processing for calibrated probabilities.
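For instance, temperature scaling learns a single temperature on a held-out calibration split and is then used as a drop-in replacement for the model. A minimal sketch, assuming that `TemperatureScaler.fit` accepts a torch `Dataset` of calibration samples:

```python
import torch
from torch import nn
from torch.utils.data import TensorDataset

from torch_uncertainty.post_processing import TemperatureScaler

# A trained classifier to calibrate (a random stand-in for this sketch).
model = nn.Linear(16, 10)

# A held-out calibration set, synthetic here for the sake of the example.
calibration_set = TensorDataset(torch.randn(100, 16), torch.randint(0, 10, (100,)))

# Fit the temperature, then call the scaler instead of the model at test time.
scaler = TemperatureScaler(model=model)
scaler.fit(calibration_set)  # argument name/type assumed, see the class docs
probs = scaler(torch.randn(4, 16)).softmax(dim=-1)
```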
OOD Scores#
- Abstract base class for Out-of-Distribution (OOD) criteria.
- OOD criterion based on the maximum logit value.
- OOD criterion based on the energy function.
- OOD criterion based on maximum softmax probability.
- OOD criterion based on entropy.
- OOD criterion based on mutual information.
- OOD criterion based on maximum softmax probability.
- OOD criterion based on variation ratio.
Datamodules#
Classification#
- DataModule for CIFAR10.
- DataModule for CIFAR100.
- DataModule for the ImageNet dataset.
- DataModule for MNIST.
- DataModule for the Tiny-ImageNet dataset.
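The datamodules follow the Lightning DataModule API, bundling the splits, default transforms, and dataloaders. A minimal sketch with CIFAR-10 (arguments beyond `root` and `batch_size` are left to their defaults):

```python
from torch_uncertainty.datamodules import CIFAR10DataModule

# Download (if needed), set up the splits, and get the training dataloader.
datamodule = CIFAR10DataModule(root="./data", batch_size=128)
datamodule.prepare_data()
datamodule.setup("fit")
train_loader = datamodule.train_dataloader()
```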
UCI Tabular Classification#
- The Bank Marketing UCI classification datamodule.
- The Dota2 Games UCI classification datamodule.
- The HTRU2 UCI classification datamodule.
- The Online Shoppers Intention UCI classification datamodule.
- The SpamBase UCI classification datamodule.
Regression#
- DataModule for the UCI regression datasets.
Segmentation#
- DataModule for the CamVid dataset.
- DataModule for the Cityscapes dataset.
- Segmentation DataModule for the MUAD dataset.
Datasets#
Classification#
- The corrupted MNIST-C Dataset.
- The notMNIST dataset.
- The corrupted CIFAR-10-C Dataset.
- The corrupted CIFAR-100-C Dataset.
- CIFAR-10H Dataset.
- CIFAR-10N Dataset.
- CIFAR-100N Dataset.
- The ImageNet-A dataset.
- The ImageNet-C dataset.
- The ImageNet-O dataset.
- The ImageNet-R dataset.
- The TinyImageNet dataset, inspired by https://gist.github.com/z-a-f/b862013c0dc2b540cf96a123a6766e54.
- The corrupted TinyImageNet-C Dataset.
- The OpenImage-O dataset.
UCI Tabular Classification#
- The Bank Marketing UCI classification dataset.
- The DOTA 2 Games UCI classification dataset.
- The HTRU2 UCI classification dataset.
- The Online Shoppers Intention UCI classification dataset.
- The SpamBase UCI classification dataset.
Regression#
- The UCI regression datasets.
Segmentation#
- CamVid Dataset.
Others & Cross-Categories#
- Dataset used for PixMix augmentations.
- The MUAD Dataset.
- NYUv2 depth dataset.