QuantileCalibrationError

class torch_uncertainty.metrics.regression.QuantileCalibrationError(num_bins=15, norm='l1', ignore_index=None, validate_args=True, **kwargs)

Quantile Calibration Error for regression tasks.

This metric computes the calibration error of quantile predictions against the ground-truth values: a model is quantile-calibrated when, for every level q in (0, 1), its predicted q-quantile upper-bounds the target with empirical frequency q. A usage sketch follows the parameter list below.

Parameters:
  • num_bins (int, optional) – Number of bins to use for calibration. Defaults to 15.

  • norm (str, optional) – Norm to use for calibration error computation. Defaults to 'l1'.

  • ignore_index (int, optional) – Index to ignore during calibration. Defaults to None.

  • validate_args (bool, optional) – Whether to validate the input arguments. Defaults to True.

  • kwargs – Additional keyword arguments, see Advanced metric settings.
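
A minimal usage sketch, assuming the class is importable from the module path shown above; the Gaussian predictive distribution and synthetic data are purely illustrative:

    import torch
    from torch.distributions import Normal
    from torch_uncertainty.metrics.regression import QuantileCalibrationError

    metric = QuantileCalibrationError(num_bins=15, norm="l1")

    # Predicted Gaussian distribution over 100 regression targets.
    means = torch.randn(100)
    stds = torch.rand(100) + 0.1
    dist = Normal(means, stds)  # Normal implements icdf(), so compute() is well-defined

    # Targets drawn from the predicted distribution are calibrated by construction.
    target = dist.sample()

    metric.update(dist, target)
    qce = metric.compute()  # scalar Tensor; close to 0 for calibrated predictions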

compute()

Compute the quantile calibration error.

Returns:
    The quantile calibration error.

Return type:
    Tensor

Warning

If the distribution does not support the icdf() method, the metric will return NaN values; a guard for this case is sketched below.
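
A hypothetical guard for this case, checking icdf() support before trusting compute(); the helper supports_icdf is ours, not part of the library:

    import torch
    from torch.distributions import Gamma, Normal

    def supports_icdf(dist, q=0.5):
        """Return True if `dist` implements icdf(); the base class raises otherwise."""
        try:
            dist.icdf(torch.as_tensor(q))
            return True
        except NotImplementedError:
            return False

    supports_icdf(Normal(0.0, 1.0))                             # True
    supports_icdf(Gamma(torch.tensor(2.0), torch.tensor(1.0)))  # False: no closed-form icdf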

plot()

Plot the quantile calibration reliability diagram.

Raises:
    NotImplementedError – If the distribution does not support the icdf() method.
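
A plotting sketch, reusing the metric from the example above and assuming the usual torchmetrics convention that plot() returns a Matplotlib (figure, axes) pair; that return value is an assumption here, not documented above:

    import matplotlib.pyplot as plt

    fig, ax = metric.plot()  # assumption: (fig, ax) return, as in torchmetrics plots
    fig.savefig("quantile_calibration.png")
    plt.close(fig)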

update(dist, target, padding_mask=None)

Update the metric with new predictions and targets; a batched usage sketch follows the parameter list below.

Parameters:
  • dist (Distribution) – The predicted distribution.

  • target (Tensor) – The ground truth values.

  • padding_mask (Tensor | None, optional) – A mask indicating which values to ignore during the update. Defaults to None.
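
A streaming-update sketch over several mini-batches with a padding mask; we assume here that True entries flag padded positions to ignore, which the documentation above does not spell out:

    import torch
    from torch.distributions import Normal
    from torch_uncertainty.metrics.regression import QuantileCalibrationError

    metric = QuantileCalibrationError()

    for _ in range(5):  # accumulate statistics over 5 mini-batches
        mean, std = torch.randn(32), torch.rand(32) + 0.1
        target = torch.randn(32)
        mask = torch.zeros(32, dtype=torch.bool)
        mask[-4:] = True  # assumed semantics: last 4 entries are padding and are skipped
        metric.update(Normal(mean, std), target, padding_mask=mask)

    qce = metric.compute()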