Disagreement

class torch_uncertainty.metrics.classification.Disagreement(reduction='mean', **kwargs)[source]

Calculate the Disagreement Metric.

The Disagreement Metric estimates the confidence of an ensemble of estimators: the more the estimators disagree on the predicted class, the less confident the ensemble is.
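The formula is not spelled out in this docstring; assuming the metric is the pairwise disagreement rate of the estimators' predicted classes (an assumption that is consistent with the example output below), it can be written as

\[\mathrm{Dis} = \frac{2}{N(N-1)} \sum_{1 \le i < j \le N} \mathbb{1}\left[\hat{y}_i \neq \hat{y}_j\right],\]

where \(\hat{y}_i = \operatorname{arg\,max}_c \, p_{i,c}\) is the class predicted by the \(i\)-th estimator. An ensemble whose members all predict the same class scores 0, and one whose members all predict distinct classes scores 1.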

Parameters:
  • reduction (str, optional) –

Determines how to reduce over the \(B\)/batch dimension (the three options are contrasted in the sketch after this parameter list):

    • 'mean' [default]: Averages score across samples

    • 'sum': Sums score across samples

    • 'none' or None: Returns score per sample

  • kwargs – Additional keyword arguments, see Advanced metric settings.
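A minimal sketch contrasting the reductions, reusing the probabilities from the example at the bottom of this page (the printed values assume the pairwise-disagreement definition sketched above):

import torch

from torch_uncertainty.metrics.classification import Disagreement

probs = torch.tensor(
    [
        [[0.7, 0.3], [0.6, 0.4], [0.8, 0.2]],
        [[0.4, 0.6], [0.5, 0.5], [0.3, 0.7]],
    ]
)

per_sample = Disagreement(reduction="none")  # one score per sample
per_sample.update(probs)
print(per_sample.compute())
# expected: tensor([0.0000, 0.6667])

summed = Disagreement(reduction="sum")  # adds the per-sample scores
summed.update(probs)
print(summed.compute())
# expected: tensor(0.6667)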

Inputs:
  • probs: \((B, N, C)\)

where \(B\) is the batch size, \(C\) is the number of classes and \(N\) is the number of estimators.

Note

A higher disagreement means a lower confidence.

Warning

Make sure that the probabilities in probs are normalized to sum to one over the class dimension.
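If the model outputs logits rather than probabilities, a softmax over the class dimension produces valid inputs. A minimal sketch (logits is a placeholder tensor):

import torch

logits = torch.randn(8, 5, 10)  # (B, N, C) raw scores, not normalized
probs = logits.softmax(dim=-1)  # normalize over the C class scores
# every probs[b, n] now sums to one, as update() expects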

Raises:

ValueError – If reduction is not one of 'mean', 'sum', 'none' or None.

Example:

import torch

from torch_uncertainty.metrics.classification import Disagreement

probs = torch.tensor(
    [
        [[0.7, 0.3], [0.6, 0.4], [0.8, 0.2]],  # Example 1, 3 estimators
        [[0.4, 0.6], [0.5, 0.5], [0.3, 0.7]],  # Example 2, 3 estimators
    ]
)

ds = Disagreement(reduction="mean")
ds.update(probs)
result = ds.compute()
print(result)
# output: tensor(0.3333)
compute()[source]

Compute Disagreement based on inputs passed in to update.

update(probs)[source]

Update state with prediction probabilities.

Parameters:

probs (torch.Tensor) – Probabilities from the model.
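Like other torchmetrics-style metrics, update can be called once per batch and compute once at the end. A minimal sketch with random stand-in probabilities:

import torch

from torch_uncertainty.metrics.classification import Disagreement

ds = Disagreement(reduction="mean")
for _ in range(4):  # stand-in for a validation loop
    batch_probs = torch.randn(16, 5, 10).softmax(dim=-1)  # (B, N, C)
    ds.update(batch_probs)  # accumulate state across batches
result = ds.compute()  # reduce over every sample seen so far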