---
title: Bias AUC
emoji: 🏆
colorFrom: gray
colorTo: blue
sdk: gradio
sdk_version: 3.19.1
app_file: app.py
pinned: false
license: apache-2.0
---
# Bias AUC
## Description of Metric
A suite of threshold-agnostic metrics that provides a nuanced view of unintended model bias by considering the various ways that a classifier's score distribution can vary across designated identity groups.
The following are computed, where $D^{-}$ is the set of negative examples in the background data, $D^{+}$ is the set of positive examples in the background data, $D^{-}_{g}$ is the set of negative examples in the identity subgroup, $D^{+}_{g}$ is the set of positive examples in the identity subgroup, and $+$ denotes set union:
$$
\begin{aligned}
\text{Subgroup AUC} &= \text{AUC} (D^{-}_{g} + D^{+}_{g} ) &(1)\\
\text{BPSN AUC} &= \text{AUC} (D^{+} + D^{-}_{g} ) &(2)\\
\text{BNSP AUC} &= \text{AUC} (D^{-} + D^{+}_{g} ) &(3)
\end{aligned}
$$
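To make the set notation concrete, here is a minimal sketch (not the metric's implementation) of the three values for a single subgroup, built on `scikit-learn`'s `roc_auc_score`. All variable names (`labels`, `scores`, `in_subgroup`) are illustrative, and the membership mask follows the `'Islam'` subgroup from the usage example below:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

labels = np.array([0, 0, 1, 1])                     # binary ground truth
scores = np.array([0.56, 0.57, 0.60, 0.62])         # positive-class scores
in_subgroup = np.array([True, False, False, True])  # example mentions subgroup g

pos, neg = labels == 1, labels == 0
bg = ~in_subgroup  # background set: examples outside the subgroup

# (1) Subgroup AUC: subgroup positives vs. subgroup negatives.
subgroup_mask = in_subgroup
# (2) BPSN AUC: background positives vs. subgroup negatives.
bpsn_mask = (bg & pos) | (in_subgroup & neg)
# (3) BNSP AUC: background negatives vs. subgroup positives.
bnsp_mask = (bg & neg) | (in_subgroup & pos)

for name, mask in [("Subgroup AUC", subgroup_mask),
                   ("BPSN AUC", bpsn_mask),
                   ("BNSP AUC", bnsp_mask)]:
    print(name, roc_auc_score(labels[mask], scores[mask]))
```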
## How to Use
```python
from evaluate import load

# Identity subgroups mentioned in each example.
target = [['Islam'],
          ['Sexuality'],
          ['Sexuality'],
          ['Islam']]
# Binary ground-truth labels for each example.
label = [0, 0, 1, 1]
# Model scores per example (e.g. class probabilities summing to 1).
output = [[0.44452348351478577, 0.5554765462875366],
          [0.4341845214366913, 0.5658154487609863],
          [0.400595098733902, 0.5994048714637756],
          [0.3840397894382477, 0.6159601807594299]]

metric = load('Intel/bias_auc')
metric.add_batch(target=target,
                 label=label,
                 output=output)
# The batch is already stored, so `compute` only needs the metric's options;
# passing the inputs again here would add the batch a second time.
metric.compute(subgroups=None)
```
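Equivalently, since the standard `evaluate` API lets `compute` ingest inputs directly, the whole batch can be scored in a single call (a sketch assuming `compute` forwards inputs the same way `add_batch` does):

```python
metric = load('Intel/bias_auc')
results = metric.compute(target=target,
                         label=label,
                         output=output,
                         subgroups=None)
```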