API Reference
Routines
The routines are the main building blocks of the library. They define the framework in which models are trained and evaluated, and they provide easy computation of the metrics crucial for uncertainty estimation in different contexts, namely classification, regression, and segmentation.
Classification
Routine for efficient training and testing on classification tasks using LightningModule.
Regression
Routine for efficient training and testing on regression tasks using LightningModule.
Segmentation
Routine for efficient training and testing on segmentation tasks using LightningModule.
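As a rough illustration of the kind of quantities these routines report at evaluation time, the sketch below computes two of them (accuracy and negative log-likelihood) from predicted class probabilities. This is a minimal plain-Python sketch; the function name and input format are illustrative assumptions, not the library's API.

```python
import math

def evaluate_classification(probs, labels):
    """Toy stand-in for a classification evaluation pass: returns the
    accuracy and average negative log-likelihood of a batch of predicted
    class-probability vectors against integer labels."""
    correct = 0
    nll = 0.0
    for p, y in zip(probs, labels):
        pred = max(range(len(p)), key=lambda k: p[k])  # argmax class
        correct += int(pred == y)
        nll -= math.log(p[y])  # NLL of the true class
    return correct / len(labels), nll / len(labels)

probs = [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1], [0.3, 0.3, 0.4]]
labels = [0, 1, 2]
acc, nll = evaluate_classification(probs, labels)  # acc = 1.0
```

The real routines compute many more uncertainty-aware metrics than these two, but the pattern (probabilities in, scalar metrics out) is the same.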
Baselines
TorchUncertainty provides Lightning-based models that can be easily trained and evaluated. These models inherit from the routines and are specifically designed to benchmark different methods in comparable settings, here with constant architectures.
Classification
ResNet backbone baseline for classification providing support for various versions and architectures.
VGG backbone baseline for classification providing support for various versions and architectures.
Wide-ResNet28x10 backbone baseline for classification providing support for various versions.
Regression
MLP baseline for regression providing support for various versions.
Segmentation
SegFormer backbone baseline for segmentation providing support for various versions and architectures.
Layers
Ensemble layers
Packed-Ensembles-style Linear layer.
Packed-Ensembles-style Conv2d layer.
BatchEnsemble-style Linear layer.
BatchEnsemble-style Conv2d layer.
Masksembles-style Linear layer.
Masksembles-style Conv2d layer.
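To convey the idea behind these memory-efficient ensemble layers, here is a conceptual NumPy sketch of the BatchEnsemble trick: each ensemble member's weight matrix is the shared matrix W modulated elementwise by a rank-1 factor r_i s_iᵀ, so each extra member costs only two vectors instead of a full matrix. All names and shapes below are illustrative assumptions, not the library's layer.

```python
import numpy as np

rng = np.random.default_rng(0)

def batch_ensemble_linear(x, W, r, s, member):
    """BatchEnsemble-style linear forward pass for one ensemble member:
    (x * s_i) @ W.T * r_i, which equals x @ (W * outer(r_i, s_i)).T
    without ever materializing the per-member weight matrix."""
    return (x * s[member]) @ W.T * r[member]

in_f, out_f, n_members = 4, 3, 2
W = rng.standard_normal((out_f, in_f))      # shared weight
r = rng.standard_normal((n_members, out_f)) # per-member output factors
s = rng.standard_normal((n_members, in_f))  # per-member input factors
x = rng.standard_normal((5, in_f))

y_fast = batch_ensemble_linear(x, W, r, s, member=0)
# Reference: the explicit per-member weight gives the same result.
W0 = W * np.outer(r[0], s[0])
y_ref = x @ W0.T
```

Packed-Ensembles and Masksembles use different tricks (grouped convolutions and structured masks, respectively) but pursue the same goal of fitting an ensemble into roughly one model's footprint.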
Bayesian layers
Bayesian Linear Layer with Mixture of Normals prior and Normal posterior.
Bayesian Conv1d Layer with Mixture of Normals prior and Normal posterior.
Bayesian Conv2d Layer with Mixture of Normals prior and Normal posterior.
Bayesian Conv3d Layer with Mixture of Normals prior and Normal posterior.
Models
Deep Ensembles
Build a Deep Ensembles model out of the original models.
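At prediction time, a deep ensemble simply averages the predictive probabilities of independently trained models. A minimal sketch of that aggregation step (the function name is illustrative, not the library wrapper):

```python
import numpy as np

def deep_ensemble_predict(prob_list):
    """Average the class probabilities predicted by each ensemble
    member; the mixture is better calibrated than any single member."""
    return np.mean(np.stack(prob_list), axis=0)

p1 = np.array([[0.9, 0.1]])  # member 1, one sample, two classes
p2 = np.array([[0.5, 0.5]])  # member 2
p_ens = deep_ensemble_predict([p1, p2])  # [[0.7, 0.3]]
```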
Monte Carlo Dropout
MC Dropout wrapper for a model.
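The idea the wrapper implements is to keep dropout active at test time and average several stochastic forward passes; the spread across passes provides a cheap uncertainty signal. A conceptual NumPy sketch of that procedure (a hand-rolled linear layer with input dropout, not the library's wrapper):

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_predict(x, W, p=0.5, n_samples=100):
    """Run n_samples forward passes with a fresh dropout mask each time
    and return the mean prediction and its standard deviation."""
    outs = []
    for _ in range(n_samples):
        mask = rng.random(x.shape) > p          # Bernoulli keep-mask
        outs.append(((x * mask) / (1 - p)) @ W.T)  # inverted dropout
    outs = np.stack(outs)
    return outs.mean(axis=0), outs.std(axis=0)

x = np.ones((1, 8))
W = np.ones((2, 8))
mean, std = mc_dropout_predict(x, W)  # mean ~ 8, std > 0
```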
Metrics
The Area Under the Sparsification Error curve (AUSE) metric to estimate the quality of the uncertainty estimates, i.e., how much they coincide with the true errors.
The Brier Score Metric.
The Negative Log Likelihood Metric.
The Disagreement Metric to estimate the confidence of an ensemble of estimators.
The Negative Log Likelihood Metric for predicted distributions.
The Shannon Entropy Metric to estimate the confidence of a single model or the mean confidence across estimators.
The False Positive Rate at 95% Recall metric.
The Log10 metric.
Compute Mean Absolute Error relative to the Ground Truth (MAErel or ARE).
Compute Mean Squared Error relative to the Ground Truth (MSErel or SRE).
The Mutual Information Metric to estimate the epistemic uncertainty of an ensemble of estimators.
The Scale-Invariant Logarithmic Loss metric.
The Threshold Accuracy metric.
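Two of the simpler metrics above can be stated in a few lines, which may clarify what the others measure as well. The sketch below computes the Brier score (mean squared distance between the predicted probability vector and the one-hot label) and the Shannon entropy of a predictive distribution; it is an illustrative plain-Python sketch, not the library's metric classes.

```python
import math

def brier_score(probs, labels):
    """Mean squared error between probability vectors and one-hot labels;
    0 for perfectly confident correct predictions, 2 at worst."""
    total = 0.0
    for p, y in zip(probs, labels):
        total += sum((pk - (1.0 if k == y else 0.0)) ** 2
                     for k, pk in enumerate(p))
    return total / len(labels)

def entropy(p):
    """Shannon entropy of one predictive distribution, in nats."""
    return -sum(pk * math.log(pk) for pk in p if pk > 0)

score = brier_score([[0.8, 0.2], [0.3, 0.7]], [0, 1])  # 0.13
zero = brier_score([[1.0, 0.0]], [0])                  # 0.0
```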
Losses
Negative Log-Likelihood loss using given distributions as inputs.
KL divergence loss for Bayesian Neural Networks.
The Evidence Lower Bound (ELBO) loss for Bayesian Neural Networks.
The Beta Negative Log-likelihood loss.
The deep evidential classification loss.
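For the regression-oriented losses, the common building block is the negative log-likelihood of the target under a predicted Gaussian; the Beta-NLL variant reweights it by the predicted variance raised to a power beta, tempering the gradient imbalance between low- and high-variance points (beta = 0 recovers the plain NLL). A scalar sketch under those assumptions, not the library's loss classes:

```python
import math

def gaussian_nll(mu, var, y):
    """Negative log-likelihood of y under N(mu, var)."""
    return 0.5 * (math.log(2 * math.pi * var) + (y - mu) ** 2 / var)

def beta_nll(mu, var, y, beta=0.0):
    """Beta-NLL: the Gaussian NLL weighted by var**beta (the weight is
    treated as a constant w.r.t. gradients in the actual training loss)."""
    return (var ** beta) * gaussian_nll(mu, var, y)

nll = gaussian_nll(0.0, 1.0, 0.0)  # 0.5 * log(2*pi)
```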
Post-Processing Methods
Temperature scaling post-processing for calibrated probabilities.
Vector scaling post-processing for calibrated probabilities.
Matrix scaling post-processing for calibrated probabilities.
Monte Carlo Batch Normalization wrapper.
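Temperature scaling, the simplest of these calibrators, fits a single scalar T on held-out data so that softmax(logits / T) minimizes the validation NLL. The sketch below uses a grid search where an LBFGS fit is more common; it is an illustrative plain-Python sketch, not the library's class.

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax: T > 1 softens, T < 1 sharpens."""
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def fit_temperature(logit_set, labels, grid=None):
    """Pick the T in the grid minimizing the NLL of the scaled softmax."""
    grid = grid or [0.1 * k for k in range(1, 51)]  # T in (0, 5]
    def nll(T):
        return -sum(math.log(softmax(z, T)[y])
                    for z, y in zip(logit_set, labels))
    return min(grid, key=nll)

# Overconfident logits: the model is sometimes wrong despite large
# margins, so the fitted temperature should come out above 1.
logits = [[4.0, 0.0], [4.0, 0.0], [0.0, 4.0], [3.5, 0.0]]
labels = [0, 1, 1, 0]
T = fit_temperature(logits, labels)
```

Vector and matrix scaling generalize the same recipe with a per-class scale vector and a full affine map on the logits, respectively.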
Datamodules
Classification
DataModule for CIFAR10.
DataModule for CIFAR100.
DataModule for MNIST.
DataModule for ImageNet.
Regression
The UCI regression datasets.
Segmentation
DataModule for the CamVid dataset.
DataModule for the Cityscapes dataset.
Segmentation DataModule for the MUAD dataset.