Entropy
- class torch_uncertainty.metrics.classification.Entropy(reduction='mean', **kwargs)
The Shannon Entropy Metric to estimate the confidence of a single model or the mean confidence across estimators.
- Parameters:
reduction (str, optional) – Determines how to reduce over the \(B\)/batch dimension:
  - 'mean' [default]: Averages score across samples
  - 'sum': Sums score across samples
  - 'none' or None: Returns score per sample
kwargs – Additional keyword arguments, see Advanced metric settings.
- Inputs:
  probs: \((B, C)\) or \((B, N, C)\), where \(B\) is the batch size, \(C\) is the number of classes, and \(N\) is the number of estimators.
Note: A higher entropy means a lower confidence.
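To make the relation between entropy and confidence concrete, the sketch below computes the Shannon entropy \(H(p) = -\sum_c p_c \log p_c\) by hand (natural logarithm, consistent with the values printed in the example below) for a peaked and a uniform prediction. The shannon_entropy helper is illustrative only and is not part of the library.

import torch

def shannon_entropy(p: torch.Tensor) -> torch.Tensor:
    # Shannon entropy H(p) = -sum_c p_c * log(p_c), using the natural logarithm.
    return -(p * p.log()).sum(dim=-1)

confident = torch.tensor([0.95, 0.05])  # peaked prediction -> high confidence
uncertain = torch.tensor([0.50, 0.50])  # uniform prediction -> low confidence

print(shannon_entropy(confident))  # tensor(0.1985) -> low entropy
print(shannon_entropy(uncertain))  # tensor(0.6931) -> high entropy (log 2)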
- Raises:
  ValueError – If reduction is not one of 'mean', 'sum', 'none' or None.
Example:
import torch
from torch_uncertainty.metrics.classification import Entropy

probs = torch.tensor(
    [
        [[0.7, 0.3], [0.6, 0.4], [0.8, 0.2]],  # Example 1, 3 estimators
        [[0.4, 0.6], [0.5, 0.5], [0.3, 0.7]],  # Example 2, 3 estimators
    ]
)
metric = Entropy(reduction="mean")
metric.update(probs)
result = metric.compute()
print(result)  # Mean entropy value across samples
# tensor(0.6269)

# Using single-estimator probabilities
probs = torch.tensor(
    [
        [0.7, 0.3],  # Example 1
        [0.4, 0.6],  # Example 2
    ]
)
metric = Entropy(reduction=None)
metric.update(probs)
result = metric.compute()
print(result)  # Per-sample entropy values
# tensor([0.6109, 0.6730])
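As a sanity check on how the \((B, N, C)\) input is reduced, the sketch below reproduces the 0.6269 value by hand: entropy is computed per estimator, averaged over the \(N\) estimators, and then averaged over the batch. This ordering is inferred from the printed value above, not from the library's internals.

import torch

probs = torch.tensor(
    [
        [[0.7, 0.3], [0.6, 0.4], [0.8, 0.2]],
        [[0.4, 0.6], [0.5, 0.5], [0.3, 0.7]],
    ]
)
per_estimator = -(probs * probs.log()).sum(dim=-1)  # shape (B, N), natural log
per_sample = per_estimator.mean(dim=-1)             # average over estimators
print(per_sample.mean())  # tensor(0.6269), matching Entropy(reduction="mean")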