small edit to check formula
README.md CHANGED
@@ -22,8 +22,9 @@ pinned: false
 -->
 Expected Calibration Error (`ECE`) is a standard metric to evaluate top-1 prediction miscalibration.
 It measures the L^p norm difference between a model’s posterior and the true likelihood of being correct.
-
-
+```
+$$ ECE_p(f)^p = \mathbb{E}_{(X,Y)}\left[\left\| \mathbb{E}[Y = \hat{y} \mid f(X) = \hat{p}] - f(X) \right\|_p^p\right] $$, where $$ \hat{y} = \argmax_{y'} [f(X)]_{y'} $$ is the class prediction with associated posterior probability $$ \hat{p} = \max_{y'} [f(X)]_{y'} $$.
+```
 It is generally implemented as a binned estimator that discretizes predicted probabilities into a range of possible values (bins) for which conditional expectation can be estimated.
 
 As a metric of calibration *error*, the lower the value, the better calibrated a model is.
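
As a sanity check for the formula being edited above, here is a minimal sketch of the binned estimator the README describes. It is not the Space's actual implementation: the `binned_ece` name and NumPy interface are illustrative, and it assumes equal-width confidence bins over [0, 1] with the common L^1 (p = 1) case as the default.

```python
import numpy as np

def binned_ece(probs, labels, n_bins=10, p=1):
    """Sketch of a binned ECE_p estimator (equal-width bins assumed).

    probs:  (N, K) array of predicted posteriors, each row summing to 1.
    labels: (N,) array of integer ground-truth classes.
    """
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels)

    # Top-1 prediction y-hat and its confidence p-hat, as in the formula.
    preds = probs.argmax(axis=1)
    confs = probs.max(axis=1)
    correct = (preds == labels).astype(float)

    # Discretize confidences into equal-width bins over [0, 1].
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    bin_ids = np.digitize(confs, edges[1:-1])  # values in {0, ..., n_bins - 1}

    ece_p = 0.0
    for b in range(n_bins):
        mask = bin_ids == b
        if not mask.any():
            continue  # empty bins carry zero weight in the estimate
        # Per-bin accuracy estimates E[Y = y-hat | p-hat in bin b]; its gap
        # to the bin's mean confidence is the per-bin calibration error.
        acc = correct[mask].mean()
        conf = confs[mask].mean()
        ece_p += mask.mean() * abs(acc - conf) ** p

    return ece_p ** (1.0 / p)

# Tiny usage example: a slightly overconfident two-class model.
probs = np.array([[0.9, 0.1], [0.8, 0.2], [0.4, 0.6]])
labels = np.array([0, 1, 1])
print(binned_ece(probs, labels, n_bins=5))
```

The sketch replaces the conditional expectation in the formula with each bin's empirical accuracy and weights bins by their share of samples, so, as the README states, lower values mean better calibration.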