CamVidDataModule

class torch_uncertainty.datamodules.segmentation.CamVidDataModule(root, batch_size, crop_size=640, eval_size=(720, 960), group_classes=True, basic_augment=True, val_split=None, num_workers=1, pin_memory=True, persistent_workers=True)[source]

DataModule for the CamVid dataset.

Parameters:
  • root (str or Path) – Root directory of the datasets.

  • batch_size (int) – Number of samples per batch.

  • crop_size (sequence or int, optional) – Desired input image and segmentation mask sizes during training. If crop_size is an int instead of a sequence like \((H, W)\), a square crop \((\text{size},\text{size})\) is made. If a sequence of length \(1\) is provided, it is interpreted as \((\text{size[0]},\text{size[0]})\). Defaults to 640.

  • eval_size (sequence or int, optional) – Desired input image and segmentation mask sizes during evaluation. If eval_size is an int, the smaller edge of the images will be matched to this number, i.e., if \(\text{height}>\text{width}\), then the image will be rescaled to \((\text{size}\times\text{height}/\text{width},\text{size})\). Defaults to (720, 960).

  • group_classes (bool, optional) – Whether to group the 32 classes into 11 superclasses. Defaults to True.

  • basic_augment (bool, optional) – Whether to apply basic augmentations. Defaults to True.

  • val_split (float or None, optional) – Share of training samples to use for validation. Defaults to None.

  • num_workers (int, optional) – Number of worker processes to use for data loading. Defaults to 1.

  • pin_memory (bool, optional) – Whether to pin memory. Defaults to True.

  • persistent_workers (bool, optional) – Whether to use persistent workers. Defaults to True.
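
A minimal usage sketch follows. It assumes the standard Lightning DataModule workflow (prepare_data, setup, then the dataloader methods); the root path, batch size, and number of workers are illustrative.

from torch_uncertainty.datamodules.segmentation import CamVidDataModule

# Illustrative settings; point root to where CamVid is (or should be) stored.
dm = CamVidDataModule(
    root="data/camvid",
    batch_size=8,
    crop_size=640,
    eval_size=(720, 960),
    group_classes=True,
    num_workers=4,
)

dm.prepare_data()   # download or verify the dataset on disk
dm.setup("fit")     # build the training and validation datasets

for images, masks in dm.train_dataloader():
    # images are float32 and masks are int64, per the transforms in the note below
    break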

Note

This datamodule injects the following transforms into the training and validation/test datasets:

import torch
from torchvision import tv_tensors
from torchvision.transforms import v2

v2.Compose(
    [
        # Resize so the shorter image edge matches 640 pixels.
        v2.Resize(640),
        # Cast images to float32 (rescaled to [0, 1]) and masks to int64.
        v2.ToDtype(
            dtype={
                tv_tensors.Image: torch.float32,
                tv_tensors.Mask: torch.int64,
                "others": None,
            },
            scale=True,
        ),
    ]
)

This behavior can be modified by overriding self.train_transform and self.test_transform after initialization.
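
As an example of such an override, the sketch below replaces the training transform with one that adds a random horizontal flip and keeps a deterministic pipeline for evaluation. The composition itself is illustrative, and it assumes the override is applied before setup() builds the datasets.

import torch
from torchvision import tv_tensors
from torchvision.transforms import v2

from torch_uncertainty.datamodules.segmentation import CamVidDataModule

dm = CamVidDataModule(root="data/camvid", batch_size=8)

# Override before setup() so the new transforms are used when the datasets are built.
dm.train_transform = v2.Compose(
    [
        v2.Resize(640),
        v2.RandomHorizontalFlip(p=0.5),  # flips image and mask together
        v2.ToDtype(
            dtype={
                tv_tensors.Image: torch.float32,
                tv_tensors.Mask: torch.int64,
                "others": None,
            },
            scale=True,
        ),
    ]
)

# Deterministic preprocessing for validation and test.
dm.test_transform = v2.Compose(
    [
        v2.Resize((720, 960)),
        v2.ToDtype(
            dtype={
                tv_tensors.Image: torch.float32,
                tv_tensors.Mask: torch.int64,
                "others": None,
            },
            scale=True,
        ),
    ]
)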