MaskedConv2d

class torch_uncertainty.layers.MaskedConv2d(in_channels, out_channels, kernel_size, num_estimators, scale, stride=1, padding=0, dilation=1, groups=1, bias=True, device=None, dtype=None)[source]

Masksembles-style Conv2d layer.

Parameters:
  • in_channels (int) – Number of channels in the input image.

  • out_channels (int) – Number of channels produced by the convolution.

  • kernel_size (int or tuple) – Size of the convolving kernel.

  • num_estimators (int) – Number of estimators in the ensemble.

  • scale (float) – The scale parameter of the masks, which controls the amount of overlap between the estimators' channel masks.

  • stride (int or tuple, optional) – Stride of the convolution. Defaults to 1.

  • padding (int, tuple or str, optional) – Padding added to all four sides of the input. Defaults to 0.

  • dilation (int or tuple, optional) – Spacing between kernel elements. Defaults to 1.

  • groups (int, optional) – Number of blocked connections from input channels to output channels for each estimator. Defaults to 1.

  • bias (bool, optional) – If True, adds a learnable bias to the output. Defaults to True.

  • device (Any, optional) – The desired device of the layer's parameters. Defaults to None.

  • dtype (Any, optional) – The desired data type of the layer's parameters. Defaults to None.

Warning

If you use MaskedConv2d, be sure to repeat the input batch along the batch dimension (once per estimator) at the start of training, so that each estimator receives its own copy of every sample.
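The batch repeat the warning describes can be sketched as follows. This is an illustrative example using a plain nn.Conv2d as a stand-in; in practice you would instantiate torch_uncertainty.layers.MaskedConv2d with the same convolution arguments plus num_estimators and scale:

```python
import torch
from torch import nn

num_estimators = 4
batch_size = 8

# Stand-in for the masked layer. With torch_uncertainty installed you would use:
# conv = MaskedConv2d(3, 16, kernel_size=3, num_estimators=4, scale=2.0, padding=1)
conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)

x = torch.rand(batch_size, 3, 32, 32)
# Repeat the batch so each of the num_estimators sub-models sees every sample.
x = x.repeat(num_estimators, 1, 1, 1)  # batch dim: 8 -> 32

out = conv(x)
print(out.shape)  # torch.Size([32, 16, 32, 32])
```

At evaluation time the predictions for the same input can then be recovered by splitting the output back into num_estimators chunks along the batch dimension and averaging them.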

Reference:

Nikita Durasov, Timur Bagautdinov, Pierre Baque, and Pascal Fua. Masksembles for Uncertainty Estimation. In CVPR 2021.