RiskAtxCov

class torch_uncertainty.metrics.classification.RiskAtxCov(cov_threshold, **kwargs)[source]

Compute the risk at a specified coverage threshold.

This metric calculates the error rate (risk) over the predictions retained at a given coverage level. The coverage threshold determines the fraction of samples kept, selected in decreasing order of model confidence. The metric is useful for evaluating the trade-off between coverage and risk in predictive models.

Parameters:
  • cov_threshold (float) – The coverage threshold at which to compute the risk.

  • kwargs – Additional arguments to pass to the metric class.

Example:

import torch

from torch_uncertainty.metrics.classification import RiskAtxCov

# Initialize the metric with a coverage threshold of 0.5 (50%)
metric = RiskAtxCov(cov_threshold=0.5)

# Simulated predicted probabilities (N samples, C classes)
predicted_probs = torch.tensor(
    [
        [0.9, 0.1],  # Correct (class 0)
        [0.7, 0.3],  # Incorrect (class 1)
        [0.95, 0.05],  # Correct (class 0)
        [0.8, 0.2],  # Incorrect (class 1)
        [0.6, 0.4],  # Correct (class 0)
        [0.3, 0.7],  # Correct (class 1)
        [0.85, 0.15],  # Correct (class 0)
        [0.2, 0.8],  # Correct (class 1)
    ]
)

# Simulated ground truth labels
ground_truth = torch.tensor([0, 1, 0, 1, 0, 1, 0, 1])

# Update the metric with the probabilities and labels
metric.update(predicted_probs, ground_truth)

# Compute the risk at the specified coverage threshold
risk_at_cov = metric.compute()

# Output the result
print(f"Risk at coverage threshold: {risk_at_cov.item():.2f}")

# Output: Risk at coverage threshold: 0.25
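
For reference, the 0.25 above can be reproduced by hand. The following sketch (plain PyTorch, not the library's implementation) sorts the samples by confidence, keeps the most confident 50%, and reports the error rate over that subset; the rounding of the retained count and the stable tie-breaking at confidence 0.8 are assumptions made here to match the example output.

import torch

probs = torch.tensor(
    [[0.9, 0.1], [0.7, 0.3], [0.95, 0.05], [0.8, 0.2],
     [0.6, 0.4], [0.3, 0.7], [0.85, 0.15], [0.2, 0.8]]
)
targets = torch.tensor([0, 1, 0, 1, 0, 1, 0, 1])

# Confidence = probability of the predicted class; error = 1 if misclassified.
confidences, preds = probs.max(dim=-1)
errors = (preds != targets).float()

# Keep the most confident 50% of the samples (4 out of 8 here); a stable sort
# keeps the original order for the two samples tied at confidence 0.8.
_, order = torch.sort(confidences, descending=True, stable=True)
num_kept = int(0.5 * len(targets))

# The risk is the error rate over the retained subset.
risk = errors[order][:num_kept].mean()
print(risk)  # tensor(0.2500): one error among the four retained samples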
compute()[source]

Compute the risk at the given coverage.

Returns:

The risk at the given coverage.

Return type:

Tensor

update(probs, targets)[source]

Store the scores and their associated errors for later computation.

Parameters:
  • probs (Tensor) – The predicted probabilities of shape \((N, C)\).

  • targets (Tensor) – The ground truth labels of shape \((N,)\).
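
Assuming the usual torchmetrics accumulation pattern (the update()/compute() split above suggests it, but this is an assumption), update() can be called once per batch and compute() then evaluates the risk over all accumulated samples. A minimal sketch with two synthetic batches:

import torch

from torch_uncertainty.metrics.classification import RiskAtxCov

metric = RiskAtxCov(cov_threshold=0.5)

# Two synthetic batches of softmax probabilities and their labels.
batches = [
    (torch.tensor([[0.9, 0.1], [0.4, 0.6]]), torch.tensor([0, 0])),
    (torch.tensor([[0.2, 0.8], [0.7, 0.3]]), torch.tensor([1, 1])),
]

# Accumulate scores and errors batch by batch, then compute once at the end.
for probs, targets in batches:
    metric.update(probs, targets)

risk_at_cov = metric.compute()
print(risk_at_cov)

If the metric object is reused across evaluation runs, calling reset() between runs would clear the accumulated state (again assuming the standard torchmetrics base class).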