MeanGTRelativeAbsoluteError

class torch_uncertainty.metrics.regression.MeanGTRelativeAbsoluteError(**kwargs)[source]

Compute Mean Absolute Error relative to the Ground Truth (MAErel or ARErel).

This metric is commonly used in tasks where the relative deviation of predictions with respect to the ground truth is important.

\[\text{MAErel} = \frac{1}{N}\sum_{i=1}^{N} \frac{|y_i - \hat{y}_i|}{y_i}\]

where \(y\) is a tensor of \(N\) target values and \(\hat{y}\) is the corresponding tensor of predictions.
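
For intuition, the value can be reproduced directly with elementwise tensor operations; the following is a minimal sketch of the formula above, not the metric's actual implementation:

import torch

# Elementwise |y - y_hat| / y, averaged over all N elements
preds = torch.tensor([2.5, 1.0, 2.0, 8.0])
target = torch.tensor([3.0, 1.5, 2.0, 7.0])
mae_rel = torch.mean(torch.abs(target - preds) / target)
print(mae_rel)  # tensor(0.1607)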

As input to forward and update, the metric accepts the following input:

  • preds (Tensor): Predictions from model

  • target (Tensor): Ground truth values

As output of forward and compute, the metric returns the following output:

  • rel_mean_absolute_error (Tensor): A tensor with the relative mean absolute error over the state
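
As with any TorchMetrics metric, calling the metric object invokes forward, which updates the accumulated state and also returns the value for the current batch; a short sketch:

import torch
from torch_uncertainty.metrics.regression import MeanGTRelativeAbsoluteError

metric = MeanGTRelativeAbsoluteError()
# forward: updates the accumulated state and returns the batch-level value
batch_value = metric(torch.tensor([2.5, 1.0]), torch.tensor([3.0, 1.5]))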

Parameters:

kwargs – Additional keyword arguments, see Advanced metric settings.
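
For instance, standard TorchMetrics options such as compute_on_cpu or sync_on_compute can be passed through these kwargs; a sketch under that assumption (see Advanced metric settings for the authoritative list):

from torch_uncertainty.metrics.regression import MeanGTRelativeAbsoluteError

# compute_on_cpu and sync_on_compute are standard TorchMetrics keyword
# arguments, forwarded here through **kwargs
metric = MeanGTRelativeAbsoluteError(compute_on_cpu=True, sync_on_compute=False)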

Reference:

[1] Lee, J. H., Han, M.-K., Ko, D. W., & Suh, I. H. From big to small: Multi-scale local planar guidance for monocular depth estimation. arXiv, 2019.

Example:

import torch
from torch_uncertainty.metrics.regression import MeanGTRelativeAbsoluteError

# Initialize the metric
mae_rel_metric = MeanGTRelativeAbsoluteError()

# Example predictions and targets
preds = torch.tensor([2.5, 1.0, 2.0, 8.0])
target = torch.tensor([3.0, 1.5, 2.0, 7.0])

# Update the metric state
mae_rel_metric.update(preds, target)

# Compute the Relative Mean Absolute Error
result = mae_rel_metric.compute()
print(f"Relative Mean Absolute Error: {result.item()}")
# Output: 0.1607142984867096
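
Because the internal state accumulates across calls, update can be applied batch by batch and compute called once at the end, with reset clearing the state between epochs. A sketch continuing the example above (the two half-batches reproduce the single-batch result):

# Accumulate the same values over two batches
mae_rel_metric.reset()
batches = [
    (torch.tensor([2.5, 1.0]), torch.tensor([3.0, 1.5])),
    (torch.tensor([2.0, 8.0]), torch.tensor([2.0, 7.0])),
]
for preds_batch, target_batch in batches:
    mae_rel_metric.update(preds_batch, target_batch)
print(mae_rel_metric.compute())  # tensor(0.1607), same as above
mae_rel_metric.reset()  # clear the state before the next epoch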
update(pred, target)[source]

Update state with predictions and targets.
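
Note that the denominator is the target itself, so targets are assumed to be strictly positive (as with the depth maps in [1]); a zero target yields a non-finite value. A quick illustration, assuming the formula is applied directly:

import torch
from torch_uncertainty.metrics.regression import MeanGTRelativeAbsoluteError

metric = MeanGTRelativeAbsoluteError()
metric.update(torch.tensor([1.0]), torch.tensor([0.0]))
print(metric.compute())  # non-finite (inf): |1 - 0| / 0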