batched_wideresnet28x10

torch_uncertainty.models.batched_wideresnet28x10(in_channels, num_classes, num_estimators, conv_bias=True, dropout_rate=0.3, groups=1, style='imagenet', activation_fn=<function relu>, normalization_layer=<class 'torch.nn.modules.batchnorm.BatchNorm2d'>, repeat_strategy='paper')[source]

BatchEnsemble of Wide-ResNet-28x10.

Parameters:
  • in_channels (int) – Number of input channels.

  • num_classes (int) – Number of classes to predict.

  • num_estimators (int) – Number of estimators in the ensemble.

  • conv_bias (bool) – Whether to use bias in convolutions. Defaults to True.

  • dropout_rate (float, optional) – Dropout rate. Defaults to 0.3.

  • groups (int) – Number of groups in the convolutions. Defaults to 1.

  • style (str, optional) – Whether to use the ImageNet structure. Defaults to "imagenet".

  • activation_fn (Callable, optional) – Activation function. Defaults to torch.nn.functional.relu.

  • normalization_layer (nn.Module, optional) – Normalization layer. Defaults to torch.nn.BatchNorm2d.

  • repeat_strategy ("legacy" | "paper", optional) –

    The input-repeat strategy to use:

    • "legacy": Repeat inputs for each estimator during both training and evaluation.

    • "paper" (default): Repeat inputs for each estimator only during evaluation.
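The effect of input repetition can be sketched in plain PyTorch (a minimal illustration, not the library's implementation; the helper name and the mean-aggregation step are assumptions):

```python
import torch


def ensemble_forward(model, x, num_estimators, repeat_inputs=True):
    # Hypothetical helper sketching the "paper" strategy: at evaluation,
    # the input batch is tiled so every estimator sees the full batch.
    if repeat_inputs:
        x = x.repeat(num_estimators, *([1] * (x.dim() - 1)))
    logits = model(x)  # shape: (num_estimators * batch_size, num_classes)
    # Regroup the per-estimator predictions and average them, one common
    # way to aggregate a BatchEnsemble (assumed here for illustration).
    return logits.reshape(num_estimators, -1, logits.shape[-1]).mean(dim=0)
```

Under the "legacy" strategy the same tiling would also happen during training; under "paper" the training batch is passed through untouched and each estimator sees a slice of it.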

Returns:

A BatchEnsemble-style Wide-ResNet-28x10.

Return type:

_BatchWideResNet