.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "auto_tutorials/Post_Hoc_Methods/tutorial_scalers.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        :ref:`Go to the end <sphx_glr_download_auto_tutorials_Post_Hoc_Methods_tutorial_scalers.py>`
        to download the full example code.

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_auto_tutorials_Post_Hoc_Methods_tutorial_scalers.py:

Histogram Binning, Isotonic Regression, and BBQ tutorial
========================================================

This notebook-style script demonstrates how to *use* existing post-processing
scalers from the package to calibrate a pretrained ResNet-18 on CIFAR-100.

.. GENERATED FROM PYTHON SOURCE LINES 11-19

1. Loading the Utilities
~~~~~~~~~~~~~~~~~~~~~~~~

We import:

- ``CIFAR100DataModule`` for data handling
- ``CalibrationError`` to compute ECE and plot reliability diagrams
- the ``resnet`` builder and ``load_hf`` to fetch pretrained weights
- ``BBQScaler``, ``HistogramBinningScaler``, and ``IsotonicRegressionScaler`` to calibrate predictions

.. GENERATED FROM PYTHON SOURCE LINES 19-33

.. code-block:: Python

    import torch
    from torch.utils.data import DataLoader, random_split

    from torch_uncertainty.datamodules import CIFAR100DataModule
    from torch_uncertainty.metrics import CalibrationError
    from torch_uncertainty.models.classification import resnet
    from torch_uncertainty.post_processing import (
        BBQScaler,
        HistogramBinningScaler,
        IsotonicRegressionScaler,
    )
    from torch_uncertainty.utils import load_hf

.. GENERATED FROM PYTHON SOURCE LINES 34-39

2. Loading a pretrained model from the hub
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Build a ResNet-18 (CIFAR style) and download pretrained weights from the hub.
The returned ``config`` isn't required for this demo but is shown for completeness.

.. GENERATED FROM PYTHON SOURCE LINES 39-45
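The ``CalibrationError`` metric imported above reports the Expected Calibration Error (ECE): predictions are grouped into confidence bins, and the per-bin gaps between accuracy and mean confidence are averaged, weighted by bin population. As background, a minimal NumPy sketch of this computation (the function name and the equal-width binning are illustrative assumptions, not the library's implementation) could look like:

```python
import numpy as np


def expected_calibration_error(confidences, correct, num_bins=15):
    """Binned ECE: population-weighted average of |accuracy - confidence| per bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, num_bins + 1)
    n = len(confidences)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        # Half-open bins (lo, hi]; each prediction falls in exactly one bin.
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            acc = correct[in_bin].mean()
            conf = confidences[in_bin].mean()
            ece += (in_bin.sum() / n) * abs(acc - conf)
    return ece
```

A bin where 80% confidence meets 80% accuracy contributes zero; a bin that is confident but wrong inflates the score. The library metric applies the same idea to top-label confidences.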
.. code-block:: Python

    model = resnet(in_channels=3, num_classes=100, arch=18, style="cifar", conv_bias=False)
    weights, config = load_hf("resnet18_c100")
    model.load_state_dict(weights)
    model = model.eval()

.. GENERATED FROM PYTHON SOURCE LINES 46-51

3. Setting up the Datamodule and Dataloaders
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Prepare the CIFAR-100 test set and create DataLoaders. We split the test set
into a calibration subset and a held-out test subset for reliable ECE
computation.

.. GENERATED FROM PYTHON SOURCE LINES 51-62

.. code-block:: Python

    dm = CIFAR100DataModule(root="./data", eval_ood=False, batch_size=32)
    dm.prepare_data()
    dm.setup("test")

    dataset = dm.test

    cal_dataset, test_dataset = random_split(dataset, [5000, len(dataset) - 5000])
    test_dataloader = DataLoader(test_dataset, batch_size=128)
    calibration_dataloader = DataLoader(cal_dataset, batch_size=128)

4. Histogram Binning
~~~~~~~~~~~~~~~~~~~~

Fit a ``HistogramBinningScaler`` on the calibration split. The ``num_bins``
hyperparameter trades off smoothness against flexibility: fewer bins ->
smoother result, more bins -> more flexible. If you run on GPU you can pass
``device=torch.device("cuda")`` or let the scaler infer the device from the
calibration data by passing ``device=None``.

.. GENERATED FROM PYTHON SOURCE LINES 117-136

.. code-block:: Python

    # Re-create the ECE metric so this block is self-contained
    # (its original definition was in an earlier section).
    ece = CalibrationError(task="multiclass", num_classes=100)

    hist_scaler = HistogramBinningScaler(model=model, num_bins=10, device=None)
    hist_scaler.fit(dataloader=calibration_dataloader)

    # Evaluate the histogram-binned model on the held-out test set.
    ece.reset()
    with torch.no_grad():
        for sample, target in test_dataloader:
            # For multiclass, this scaler is expected to return log-probabilities; apply softmax.
            calibrated_out = hist_scaler(sample)
            probs = calibrated_out.softmax(-1)
            ece.update(probs, target)

    print(f"ECE after Histogram Binning - {ece.compute():.3%}.")
    fig, ax = ece.plot()
    fig.tight_layout()
    fig.show()
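To make the method concrete: histogram binning partitions the confidence range into equal-width bins on the calibration set, then replaces each score by the empirical accuracy of its bin. A minimal one-dimensional sketch (the ``SimpleHistogramBinning`` class is hypothetical; the library's ``HistogramBinningScaler`` additionally handles multiclass outputs, tensors, and device placement) might look like:

```python
import numpy as np


class SimpleHistogramBinning:
    """Histogram binning for a single confidence score in [0, 1].

    fit() stores, for each equal-width bin, the empirical accuracy of the
    calibration samples falling in it; transform() maps each score to its
    bin's accuracy.
    """

    def __init__(self, num_bins=10):
        self.num_bins = num_bins
        self.bin_acc = np.full(num_bins, np.nan)

    def _bin_index(self, scores):
        # Map each score to a bin index in [0, num_bins - 1]; 1.0 goes to the last bin.
        scores = np.asarray(scores, dtype=float)
        return np.clip((scores * self.num_bins).astype(int), 0, self.num_bins - 1)

    def fit(self, scores, correct):
        correct = np.asarray(correct, dtype=float)
        idx = self._bin_index(scores)
        for b in range(self.num_bins):
            mask = idx == b
            if mask.any():
                self.bin_acc[b] = correct[mask].mean()
        return self

    def transform(self, scores):
        # Scores falling in an empty (never-fitted) bin map to NaN.
        return self.bin_acc[self._bin_index(scores)]
```

For example, if calibration samples with ~0.95 confidence are correct 90% of the time, any test score in that bin is mapped to 0.9, which is exactly how overconfidence gets corrected.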
.. image-sg:: /auto_tutorials/Post_Hoc_Methods/images/sphx_glr_tutorial_scalers_003.png
   :alt: Reliability Diagram
   :srcset: /auto_tutorials/Post_Hoc_Methods/images/sphx_glr_tutorial_scalers_003.png
   :class: sphx-glr-single-img

References
~~~~~~~~~~

- Guo, C., Pleiss, G., Sun, Y., & Weinberger, K. Q. (2017). On calibration of modern neural networks. *ICML 2017*.
- Naeini, M. P., Cooper, G. F., & Hauskrecht, M. (2015). Obtaining Well Calibrated Probabilities Using Bayesian Binning. *AAAI 2015*.

.. rst-class:: sphx-glr-timing

   **Total running time of the script:** (1 minutes 52.621 seconds)

.. _sphx_glr_download_auto_tutorials_Post_Hoc_Methods_tutorial_scalers.py:

.. only:: html

  .. container:: sphx-glr-footer sphx-glr-footer-example

    .. container:: sphx-glr-download sphx-glr-download-jupyter

      :download:`Download Jupyter notebook: tutorial_scalers.ipynb <tutorial_scalers.ipynb>`

    .. container:: sphx-glr-download sphx-glr-download-python

      :download:`Download Python source code: tutorial_scalers.py <tutorial_scalers.py>`

    .. container:: sphx-glr-download sphx-glr-download-zip

      :download:`Download zipped: tutorial_scalers.zip <tutorial_scalers.zip>`

.. only:: html

 .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_