---
title: ECE
datasets:
  - null
tags:
  - evaluate
  - metric
description: binned estimator of expected calibration error
sdk: gradio
sdk_version: 3.0.2
app_file: app.py
pinned: false
---

# Metric Card for ECE


## Metric Description

The Expected Calibration Error (ECE) is a standard metric for evaluating top-1 prediction miscalibration. The binned estimator partitions predictions into equal-width confidence bins and averages, weighted by bin size, the absolute gap between each bin's accuracy and its mean confidence. Lower values indicate better calibration.
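In symbols, with $n$ samples partitioned into $B$ equal-width confidence bins $\mathcal{B}_1, \dots, \mathcal{B}_B$, the binned estimator of [1] and [2] is:

```latex
\mathrm{ECE} \;=\; \sum_{b=1}^{B} \frac{|\mathcal{B}_b|}{n}\,
\Bigl| \operatorname{acc}(\mathcal{B}_b) - \operatorname{conf}(\mathcal{B}_b) \Bigr|
```

where $\operatorname{acc}(\mathcal{B}_b)$ is the top-1 accuracy of the samples in bin $b$ and $\operatorname{conf}(\mathcal{B}_b)$ is their mean top-1 confidence.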

## How to Use

### Inputs

### Output Values

### Examples
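The sections above are still to be filled in. In the meantime, here is a minimal, self-contained NumPy sketch of the binned estimator; it illustrates the metric only and is not the module's actual code (the function name `ece` and its signature are this sketch's own):

```python
import numpy as np

def ece(probs, labels, n_bins=10):
    """Binned estimator of the Expected Calibration Error.

    probs:  (n_samples, n_classes) array of predicted probabilities.
    labels: (n_samples,) array of integer class labels.
    """
    probs = np.asarray(probs)
    labels = np.asarray(labels)
    confidences = probs.max(axis=1)     # top-1 confidence per sample
    predictions = probs.argmax(axis=1)  # top-1 predicted class
    correct = (predictions == labels).astype(float)

    # Equal-width bins over (0, 1]; each sample falls in exactly one bin.
    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    total = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            acc = correct[in_bin].mean()        # accuracy within the bin
            conf = confidences[in_bin].mean()   # mean confidence within the bin
            total += in_bin.mean() * abs(acc - conf)
    return total

# Two predictions at 90% confidence, only one correct:
# one occupied bin with |acc - conf| = |0.5 - 0.9| = 0.4
ece([[0.9, 0.1], [0.9, 0.1]], [0, 1])
```

A perfectly calibrated model (e.g. four predictions at 75% confidence with exactly three correct) yields an ECE of 0.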

## Limitations and Bias

The binned estimate is sensitive to the choice of binning scheme (number of bins, equal-width vs. equal-mass) and can be a biased estimate of the true calibration error; see [3], [4] and [5] for detailed analyses.

## Citation

[1] Naeini, M.P., Cooper, G. and Hauskrecht, M., 2015. Obtaining well calibrated probabilities using Bayesian binning. In Twenty-Ninth AAAI Conference on Artificial Intelligence.

[2] Guo, C., Pleiss, G., Sun, Y. and Weinberger, K.Q., 2017. On calibration of modern neural networks. In International Conference on Machine Learning (pp. 1321-1330). PMLR.

[3] Nixon, J., Dusenberry, M.W., Zhang, L., Jerfel, G. and Tran, D., 2019. Measuring calibration in deep learning. In CVPR Workshops (Vol. 2, No. 7).

[4] Kumar, A., Liang, P.S. and Ma, T., 2019. Verified uncertainty calibration. Advances in Neural Information Processing Systems, 32.

[5] Vaicenavicius, J., Widmann, D., Andersson, C., Lindsten, F., Roll, J. and Schön, T., 2019. Evaluating model calibration in classification. In The 22nd International Conference on Artificial Intelligence and Statistics (pp. 3459-3467). PMLR.

## Further References
