CamVidDataModule#

class torch_uncertainty.datamodules.segmentation.CamVidDataModule(root, batch_size, eval_batch_size=None, crop_size=640, eval_size=(720, 960), train_transform=None, test_transform=None, group_classes=True, basic_augment=True, val_split=None, num_workers=1, pin_memory=True, persistent_workers=True)[source]#

DataModule for the CamVid dataset.

Parameters:
  • root (str or Path) – Root directory of the datasets.

  • batch_size (int) – Number of samples per batch during training.

  • eval_batch_size (int | None) – Number of samples per batch during evaluation (val and test). Set to batch_size if None. Defaults to None.

  • crop_size (sequence or int, optional) – Desired input image and segmentation mask sizes during training. If crop_size is an int instead of a sequence like \((H, W)\), a square crop \((\text{size},\text{size})\) is made. If a sequence of length \(1\) is provided, it is interpreted as \((\text{size[0]},\text{size[0]})\). Only used if train_transform is not provided; otherwise it has no effect. Defaults to 640.

  • eval_size (sequence or int, optional) – Desired input image and segmentation mask sizes during evaluation. If eval_size is an int, the smaller edge of the images will be matched to this number while keeping the aspect ratio; i.e., if \(\text{height}>\text{width}\), the image will be rescaled to \((\text{size}\times\text{height}/\text{width},\text{size})\). Only used if test_transform is not provided; otherwise it has no effect. Defaults to (720, 960).

  • train_transform (nn.Module | None) – Custom training transform. Defaults to None. If not provided, a default transform is used.

  • test_transform (nn.Module | None) – Custom test transform. Defaults to None. If not provided, a default transform is used.

  • group_classes (bool, optional) – Whether to group the 32 classes into 11 superclasses. Defaults to True.

  • basic_augment (bool) – Whether to apply base augmentations. Defaults to True. Only used if train_transform is not provided.

  • val_split (float or None, optional) – Share of training samples to use for validation. Defaults to None.

  • num_workers (int, optional) – Number of worker processes to use for the dataloaders. Defaults to 1.

  • pin_memory (bool, optional) – Whether to pin memory. Defaults to True.

  • persistent_workers (bool, optional) – Whether to use persistent workers. Defaults to True.
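As a hedged illustration of the size conventions above, the following sketch shows how an int or length-1 crop_size resolves to an \((H, W)\) pair, and how an int eval_size maps to a smaller-edge resize. Both helpers are hypothetical, written only to mirror the documented behavior; they are not part of torch_uncertainty:

```python
def normalize_size(size):
    """Hypothetical helper mirroring the documented crop_size handling.

    An int becomes a square (size, size); a length-1 sequence is
    treated as (size[0], size[0]); a length-2 sequence is kept as-is.
    """
    if isinstance(size, int):
        return (size, size)
    size = tuple(size)
    if len(size) == 1:
        return (size[0], size[0])
    if len(size) == 2:
        return size
    raise ValueError("size must be an int or a sequence of length 1 or 2")


def smaller_edge_resize(height, width, size):
    """Hypothetical helper mirroring an int eval_size: match the smaller
    edge of the image to `size`, keeping the aspect ratio."""
    if height > width:
        return (round(size * height / width), size)
    return (size, round(size * width / height))


print(normalize_size(640))                 # (640, 640)
print(normalize_size([512]))               # (512, 512)
print(smaller_edge_resize(720, 960, 480))  # (480, 640)
```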

Note

By default this datamodule injects the following transforms into the training and validation/test datasets:

import torch
from torchvision import tv_tensors
from torchvision.transforms import v2

v2.Compose(
    [
        v2.Resize(640),
        v2.ToDtype(
            dtype={
                tv_tensors.Image: torch.float32,
                tv_tensors.Mask: torch.int64,
                "others": None,
            },
            scale=True,
        ),
    ]
)

This behavior can be modified by passing custom train_transform and test_transform arguments at initialization.