Update README.md
README.md
CHANGED
@@ -188,7 +188,9 @@ print(results)
 ```
 
 ### Benchmarking
-Below is a table that highlights the performance of UTC models on the [CrossNER](https://huggingface.co/datasets/DFKI-SLT/cross_ner) dataset. The values represent the Micro F1 scores, with the estimation done at the word level
+Below is a table that highlights the performance of UTC models on the [CrossNER](https://huggingface.co/datasets/DFKI-SLT/cross_ner) dataset. The values represent the Micro F1 scores, with the estimation done at the word level.
+
+| Model                | AI     | Literature | Music  | Politics | Science |
 |----------------------|--------|------------|--------|----------|---------|
 | UTC-DeBERTa-small    | 0.8492 | 0.8792     | 0.864  | 0.9008   | 0.85    |
 | UTC-DeBERTa-base     | 0.8452 | 0.8587     | 0.8711 | 0.9147   | 0.8631  |
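For reference, the Micro F1 values in the added table are estimated at the word level. The evaluation code itself is not part of this commit, so the following is only a minimal sketch of a word-level Micro F1 computation using scikit-learn; the per-word labels are hypothetical.

```python
# Minimal sketch (not part of this repository) of a word-level Micro F1
# computation for NER predictions, using scikit-learn.
from sklearn.metrics import f1_score

# Hypothetical gold and predicted per-word labels for one sentence.
gold = ["O", "B-field", "I-field", "O", "B-researcher"]
pred = ["O", "B-field", "I-field", "O", "O"]

# Micro-average over entity labels only, so that "O" words
# do not inflate the score.
labels = sorted({label for label in gold + pred if label != "O"})
print(f1_score(gold, pred, labels=labels, average="micro"))
```

Excluding the `O` label from the micro average keeps the score focused on entity words, which matches the usual convention for word-level NER scoring.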