masked_wideresnet28x10#
- torch_uncertainty.models.masked_wideresnet28x10(in_channels, num_classes, num_estimators, scale, conv_bias=True, dropout_rate=0.3, groups=1, style='imagenet', activation_fn=<function relu>, normalization_layer=<class 'torch.nn.modules.batchnorm.BatchNorm2d'>, repeat_strategy='paper')[source]#
Masksembles of Wide-ResNet-28x10.
- Parameters:
in_channels (int) – Number of input channels.
num_classes (int) – Number of classes to predict.
num_estimators (int) – Number of estimators in the ensemble.
scale (float) – Expansion factor affecting the width of the estimators.
conv_bias (bool) – Whether to use bias in convolutions. Defaults to True.
dropout_rate (float, optional) – Dropout rate. Defaults to 0.3.
groups (int) – Number of groups within each estimator. Defaults to 1.
style (str, optional) – Whether to use the ImageNet structure. Defaults to "imagenet".
activation_fn (Callable, optional) – Activation function. Defaults to torch.nn.functional.relu.
normalization_layer (nn.Module, optional) – Normalization layer. Defaults to torch.nn.BatchNorm2d.
repeat_strategy ("legacy" | "paper", optional) – The repeat strategy to use. Defaults to "paper":
"legacy": Repeat inputs for each estimator during both training and evaluation.
"paper" (default): Repeat inputs for each estimator only during evaluation.
- Returns:
A Masksembles-style Wide-ResNet-28x10.
- Return type:
_MaskedWideResNet
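Below is a minimal usage sketch. The style="cifar" value, the scale=2.0 choice, and the exact output shape are illustrative assumptions, not guarantees from the signature above; only the import path and required parameters come from this entry.

```python
import torch
from torch_uncertainty.models import masked_wideresnet28x10

num_estimators = 4

# Build a Masksembles Wide-ResNet-28x10 for 10-class classification on RGB images.
# scale controls the width expansion of the estimators (2.0 is an illustrative value).
model = masked_wideresnet28x10(
    in_channels=3,
    num_classes=10,
    num_estimators=num_estimators,
    scale=2.0,
    style="cifar",  # assumption: "cifar" selects the non-ImageNet stem; "imagenet" is the default
)

model.eval()
x = torch.rand(8, 3, 32, 32)  # batch of 8 CIFAR-sized images

# With the default "paper" repeat strategy, inputs are repeated per estimator at
# evaluation time, so the leading dimension of the output is num_estimators * batch_size.
with torch.no_grad():
    logits = model(x)

print(logits.shape)  # expected: torch.Size([32, 10])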