VariationRatio#

class torch_uncertainty.metrics.classification.VariationRatio(probabilistic=True, reduction='mean', **kwargs)[source]#

Compute the Variation Ratio.

The Variation Ratio is a measure of the uncertainty or disagreement among predictions from multiple estimators. It is defined as the proportion of predicted class labels that are not the chosen (most frequent) class.
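
For reference, the classical (hard-vote) definition of the variation ratio for a single sample is

\(\mathrm{VR} = 1 - \frac{f_{\hat{y}}}{N},\)

where \(N\) is the number of estimators and \(f_{\hat{y}}\) is the number of estimators predicting the modal (most frequent) class \(\hat{y}\). With probabilistic=True, the estimators' predicted probabilities are used in place of hard votes.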

Parameters:
  • probabilistic (bool, optional) – Whether to use probabilistic predictions. Defaults to True.

  • reduction (Literal["mean", "sum", "none", None], optional) –

    Determines how to reduce over the batch dimension:

    • 'mean' [default]: Averages the score across samples

    • 'sum': Sums the score across samples

    • 'none' or None: Returns the score per sample

  • kwargs – Additional keyword arguments, see Advanced metric settings.

Inputs:
  • probs: \((B, N, C)\)

    where \(B\) is the batch size, \(N\) is the number of estimators, and \(C\) is the number of classes.

Note

A higher variation ratio indicates higher uncertainty or disagreement among the estimators.
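
For intuition, the hard-vote quantity can be computed by hand. The following minimal sketch is an illustration only, not the metric's internal implementation; the tensor shapes and values are made up.

import torch

probs = torch.rand(4, 5, 3)                   # B=4 samples, N=5 estimators, C=3 classes
votes = probs.argmax(dim=-1)                  # (B, N) hard class vote of each estimator
counts = torch.nn.functional.one_hot(votes, num_classes=3).sum(dim=1)  # (B, C) vote counts
modal = counts.max(dim=-1).values.float()     # votes received by the most frequent class
variation_ratio = 1 - modal / votes.shape[1]  # (B,) per-sample variation ratios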

Warning

The VariationRatio metric stores all predictions in a buffer. For large datasets, this may lead to a large memory footprint.

Raises:

ValueError – If reduction is not one of 'mean', 'sum', 'none' or None.

Example:

import torch

from torch_uncertainty.metrics.classification import VariationRatio

probs = torch.tensor(
    [
        [[0.7, 0.3], [0.6, 0.4], [0.8, 0.2]],  # Example 1, 3 estimators
        [[0.4, 0.6], [0.5, 0.5], [0.3, 0.7]],  # Example 2, 3 estimators
    ]
)

vr = VariationRatio(probabilistic=True, reduction="mean")
vr.update(probs)
result = vr.compute()
print(result)
# output: tensor(0.4500)
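
As a further illustration, reusing the probs tensor above and only the documented arguments, reduction="none" returns one score per sample instead of the batch average:

vr_none = VariationRatio(probabilistic=True, reduction="none")
vr_none.update(probs)
print(vr_none.compute())  # tensor of shape (B,): one score per example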
compute()[source]#

Computes the variation ratio, which amounts to the proportion of predicted class labels that are not the chosen class.

Returns:

Mean disagreement between estimators.

Return type:

Tensor