Entropy#

class torch_uncertainty.metrics.classification.Entropy(reduction='mean', **kwargs)[source]#

The Shannon entropy metric to estimate the confidence of a single model or the mean confidence across the estimators of an ensemble.
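
For a probability vector \(p\) over \(C\) classes, the metric computes the Shannon entropy \(H(p) = -\sum_{c=1}^{C} p_c \log p_c\) with the natural logarithm. For ensemble inputs of shape \((B, N, C)\), the entropy is computed per estimator and then averaged over the \(N\) estimators; both conventions are consistent with the example outputs below.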

Parameters:
  • reduction (str, optional) –

    Determines how to reduce over the \(B\)/batch dimension (the three modes are compared in the sketch after this parameter list):

    • 'mean' [default]: Averages the score across samples

    • 'sum': Sums the score across samples

    • 'none' or None: Returns the score per sample

  • kwargs – Additional keyword arguments, see Advanced metric settings.
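
A minimal sketch comparing the three reduction modes on the same probabilities (the single-estimator tensor reused in the example below; printed values up to float rounding):

import torch

from torch_uncertainty.metrics.classification import Entropy

probs = torch.tensor([[0.7, 0.3], [0.4, 0.6]])
for reduction in ("mean", "sum", "none"):
    metric = Entropy(reduction=reduction)
    metric.update(probs)
    print(reduction, metric.compute())
# mean tensor(0.6419)
# sum tensor(1.2839)
# none tensor([0.6109, 0.6730])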

Inputs:
  • probs: \((B, C)\) or \((B, N, C)\)

where \(B\) is the batch size, \(C\) is the number of classes and \(N\) is the number of estimators.

Note

Higher entropy indicates lower confidence.

Raises:

ValueError – If reduction is not one of 'mean', 'sum', 'none' or None.

Example:

import torch

from torch_uncertainty.metrics.classification import Entropy

probs = torch.tensor(
    [
        [[0.7, 0.3], [0.6, 0.4], [0.8, 0.2]],  # Example 1, 3 estimators
        [[0.4, 0.6], [0.5, 0.5], [0.3, 0.7]],  # Example 2, 3 estimators
    ]
)
metric = Entropy(reduction="mean")
metric.update(probs)
result = metric.compute()
print(result)  # Mean entropy across samples and estimators
# tensor(0.6269)

# Using single-estimator probabilities
probs = torch.tensor(
    [
        [0.7, 0.3],  # Example 1
        [0.4, 0.6],  # Example 2
    ]
)
metric = Entropy(reduction=None)
metric.update(probs)
result = metric.compute()
print(result)  # Per-sample entropy values
# tensor([0.6109, 0.6730])
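
The per-sample values can be cross-checked by hand with plain torch (a minimal sketch; the natural logarithm matches the outputs above):

import torch

probs = torch.tensor([[0.7, 0.3], [0.4, 0.6]])
entropy = -(probs * probs.log()).sum(dim=-1)
print(entropy)  # tensor([0.6109, 0.6730]) -- matches the metric output
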
compute()[source]#

Computes the entropy based on the inputs previously passed to update.

update(probs)[source]#

Update the current entropy with a new tensor of probabilities.

Parameters:

probs (torch.Tensor) – Probabilities from the model, of shape \((B, C)\) or \((B, N, C)\).
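
Like other torchmetrics-style metrics, update can be called once per batch, and compute aggregates everything seen since the last reset. A minimal sketch of that pattern (assuming the standard torchmetrics update/compute/reset API this metric follows):

import torch

from torch_uncertainty.metrics.classification import Entropy

metric = Entropy(reduction="mean")
# State accumulates across successive calls to update()
metric.update(torch.tensor([[0.7, 0.3]]))
metric.update(torch.tensor([[0.4, 0.6]]))
print(metric.compute())  # tensor(0.6419), mean over both accumulated samples
metric.reset()  # clear the accumulated state before the next evaluation run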