PackedConv2d

class torch_uncertainty.layers.PackedConv2d(in_channels, out_channels, kernel_size, alpha, num_estimators, gamma=1, stride=1, padding=0, dilation=1, groups=1, minimum_channels_per_group=64, bias=True, padding_mode='zeros', first=False, last=False, device=None, dtype=None)[source]

Packed-Ensembles-style Conv2d layer.

Parameters:
  • in_channels (int) – Number of channels in the input image.

  • out_channels (int) – Number of channels produced by the convolution.

  • kernel_size (int or tuple) – Size of the convolving kernel.

  • alpha (float) – The channel multiplier of the convolutional layer.

  • num_estimators (int) – Number of estimators in the ensemble.

  • gamma (int, optional) – The number of groups within each estimator; increasing it increases the grouping of the convolution and reduces the parameter count. Defaults to 1.

  • stride (int or tuple, optional) – Stride of the convolution. Defaults to 1.

  • padding (int, tuple or str, optional) – Padding added to all four sides of the input. Defaults to 0.

  • dilation (int or tuple, optional) – Spacing between kernel elements. Defaults to 1.

  • groups (int, optional) – Number of blocked connections from input channels to output channels for each estimator. Defaults to 1.

  • minimum_channels_per_group (int, optional) – Smallest possible number of channels per group. Defaults to 64.

  • bias (bool, optional) – If True, adds a learnable bias to the output. Defaults to True.

  • padding_mode (str, optional) – 'zeros', 'reflect', 'replicate' or 'circular'. Defaults to 'zeros'.

  • first (bool, optional) – Whether this is the first layer of the network. Defaults to False.

  • last (bool, optional) – Whether this is the last layer of the network. Defaults to False.

  • device (torch.device, optional) – The device to use for the layer’s parameters. Defaults to None.

  • dtype (torch.dtype, optional) – The dtype to use for the layer’s parameters. Defaults to None.

Note

Increasing alpha increases the number of channels in the ensemble, and therefore its representation capacity. Increasing gamma increases the number of groups in the network, and therefore reduces the number of parameters.
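The trade-off can be made concrete with the standard grouped-convolution weight count, out_ch × (in_ch / groups) × k², applied to the packed channel counts. This is a back-of-the-envelope model (bias and the layer's divisibility adjustments are ignored), not the library's exact bookkeeping:

```python
def packed_weight_count(in_channels, out_channels, kernel_size,
                        alpha, num_estimators, gamma):
    """Approximate weight count of a packed conv layer, modeled as a
    grouped Conv2d over alpha-widened channels (bias omitted)."""
    ext_in = int(in_channels * alpha)
    ext_out = int(out_channels * alpha)
    groups = num_estimators * gamma
    return ext_out * (ext_in // groups) * kernel_size ** 2

# Doubling gamma doubles the group count and halves the weights:
print(packed_weight_count(64, 64, 3, alpha=2, num_estimators=4, gamma=1))  # 36864
print(packed_weight_count(64, 64, 3, alpha=2, num_estimators=4, gamma=2))  # 18432
```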

Note

Each ensemble member will only see \(\frac{\text{in_channels}}{\text{num_estimators}}\) channels, so when using groups you should make sure that in_channels and out_channels are both divisible by num_estimators \(\times\) gamma \(\times\) groups. If they are not, the number of input and output channels will be adjusted to comply with this constraint.
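A minimal sketch of that adjustment, rounding a channel count up to the nearest valid multiple (the library's exact rounding logic may differ; the helper name is illustrative):

```python
def round_to_valid_channels(channels, num_estimators, gamma, groups=1):
    # Round up to the nearest multiple of num_estimators * gamma * groups
    # so the grouped convolution divides its channels evenly.
    divisor = num_estimators * gamma * groups
    return ((channels + divisor - 1) // divisor) * divisor

print(round_to_valid_channels(64, num_estimators=4, gamma=1))   # 64 (already valid)
print(round_to_valid_channels(100, num_estimators=4, gamma=2))  # 104
```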

property bias

The bias of the underlying convolutional layer.

property weight

The weight of the underlying convolutional layer.