FPRx

class torch_uncertainty.metrics.classification.FPRx(recall_level, pos_label, **kwargs)[source]

Compute the False Positive Rate at x% Recall.

The False Positive Rate at x% Recall (FPR@x) is a metric used in tasks such as anomaly detection, out-of-distribution (OOD) detection, and binary classification. It measures the proportion of negative samples (e.g., in-distribution inputs) misclassified as positive when the decision threshold is set so that the model achieves the specified recall on the positive class (e.g., anomalies or OOD samples). The most common instance is FPR@95, the false positive rate when 95% of true positives are detected.
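
To make the threshold-sweeping definition concrete, here is a minimal reference sketch of the computation (assuming binary integer targets with 1 as the positive label; fpr_at_recall is a hypothetical helper, not part of torch_uncertainty):

import torch

def fpr_at_recall(conf, target, recall_level):
    # Sort scores from most to least confident, carrying the labels along.
    labels = target[torch.argsort(conf, descending=True)]
    # Cumulative true/false positive counts as the threshold is lowered.
    tps = torch.cumsum(labels, dim=0)
    fps = torch.cumsum(1 - labels, dim=0)
    recall = tps / labels.sum()
    # Index of the highest threshold at which the requested recall is reached.
    idx = torch.searchsorted(recall, torch.tensor(recall_level))
    return (fps[idx] / (1 - labels).sum()).item()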

Parameters:
  • recall_level (float) – The recall level at which to compute the FPR.

  • pos_label (int) – The positive label.

  • kwargs – Additional keyword arguments to pass to the metric class.

Reference:

Improved from hendrycks/anomaly-seg and translated to PyTorch.

Example

import torch

from torch_uncertainty.metrics.classification import FPRx

# Initialize the metric with 95% recall and positive label as 1 (e.g., OOD)
metric = FPRx(recall_level=0.95, pos_label=1)

# Simulated model predictions (confidence scores) and ground-truth labels
conf = torch.tensor([0.9, 0.8, 0.7, 0.6, 0.4, 0.2, 0.1])
targets = torch.tensor([1, 0, 1, 0, 0, 1, 0])  # 1: OOD, 0: In-Distribution

# Update the metric with predictions and labels
metric.update(conf, targets)

# Compute FPR at 95% recall
result = metric.compute()
print(f"FPR at 95% Recall: {result.item()}")
# Output: FPR at 95% Recall: 0.75
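
In this example, reaching 95% recall requires flagging all three OOD samples, so the threshold must drop to 0.2; at that threshold the in-distribution samples scored 0.8, 0.6, and 0.4 are also flagged, giving an FPR of 3/4 = 0.75.
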
compute()[source]

Compute the False Positive Rate at x% Recall.

Returns:

The value of the FPRx.

Return type:

Tensor

update(conf, target)[source]

Update the metric state.

Parameters:
  • conf (Tensor) – The confidence scores.

  • target (Tensor) – The target labels, 0 if in-distribution (ID), 1 if out-of-distribution (OOD).
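
The metric state accumulates across calls, so predictions can be fed batch by batch before a single compute(). A short sketch (assuming the standard torchmetrics update/compute accumulation pattern used throughout torch_uncertainty):

import torch

from torch_uncertainty.metrics.classification import FPRx

metric = FPRx(recall_level=0.95, pos_label=1)
# Two batches; internal states are concatenated across updates.
metric.update(torch.tensor([0.9, 0.8, 0.7]), torch.tensor([1, 0, 1]))
metric.update(torch.tensor([0.6, 0.4, 0.2, 0.1]), torch.tensor([0, 0, 1, 0]))
print(metric.compute())  # same value as a single update over all samples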