---
language:
- en
tags:
- efficiency
- coreference-resolution
- maverick
- efficient
- accurate
license:
- cc-by-nc-sa-4.0
datasets:
- LitBank
metrics:
- CoNLL
task_categories:
- coreference-resolution
model-index:
- name: sapienzanlp/maverick-mes-litbank
results:
- task:
type: coreference-resolution
name: coreference-resolution
dataset:
name: litbank
type: coreference
metrics:
- name: Avg. F1
type: CoNLL
value: 78.0
---
# Maverick mes LitBank
Official weights for *Maverick-mes* trained on LitBank, based on DeBERTa-large.
This model achieves an average CoNLL-F1 score of 78.0 on LitBank.
Other models are available on the [SapienzaNLP huggingface hub](https://huggingface.co/collections/sapienzanlp/maverick-coreference-resolution-66a750a50246fad8d9c7086a):
| hf_model_name | training dataset | Score | Singletons |
|:-----------------------------------:|:----------------:|:-----:|:----------:|
| ["sapienzanlp/maverick-mes-ontonotes"](https://huggingface.co/sapienzanlp/maverick-mes-ontonotes) | OntoNotes | 83.6 | No |
| ["sapienzanlp/maverick-mes-litbank"](https://huggingface.co/sapienzanlp/maverick-mes-litbank) | LitBank | 78.0 | Yes |
| ["sapienzanlp/maverick-mes-preco"](https://huggingface.co/sapienzanlp/maverick-mes-preco) | PreCo | 87.4 | Yes |
N.B.: each dataset follows different annotation guidelines (e.g. whether singleton mentions are annotated), so choose the model that matches your use case.
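The model can be loaded through the [maverick-coref](https://pypi.org/project/maverick-coref/) Python package linked below. A minimal inference sketch follows; the constructor arguments, the `predict` method, and the `clusters_token_text` output key are assumptions based on the package's documented interface, and the input sentence is an invented example:

```python
# Hedged usage sketch: requires `pip install maverick-coref`.
# Argument names below are assumed from the maverick-coref package docs.
from maverick import Maverick

# Load the LitBank checkpoint from the Hugging Face hub.
model = Maverick(
    hf_name_or_path="sapienzanlp/maverick-mes-litbank",
    device="cpu",  # or "cuda" if a GPU is available
)

text = "Elizabeth closed her book. She had read it twice already."
result = model.predict(text)

# The prediction is expected to contain the coreference clusters,
# e.g. grouping "Elizabeth" / "her" / "She" into one cluster.
print(result["clusters_token_text"])
```

Since this checkpoint was trained on LitBank, singleton mentions are also predicted (see the table above).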
## Maverick: Efficient and Accurate Coreference Resolution Defying Recent Trends
[![Conference](https://img.shields.io/badge/ACL%202024%20Paper-red)](https://arxiv.org/pdf/2407.21489)
[![License: CC BY-NC 4.0](https://img.shields.io/badge/License-CC%20BY--NC%204.0-green.svg)](https://creativecommons.org/licenses/by-nc/4.0/)
[![Pip Package](https://img.shields.io/badge/🐍%20Python%20package-blue)](https://pypi.org/project/maverick-coref/)
[![git](https://img.shields.io/badge/Git%20Repo%20-yellow.svg)](https://github.com/SapienzaNLP/maverick-coref)
### Citation
```bibtex
@inproceedings{martinelli-etal-2024-maverick,
title = "Maverick: Efficient and Accurate Coreference Resolution Defying Recent Trends",
author = "Martinelli, Giuliano and
Barba, Edoardo and
Navigli, Roberto",
booktitle = "Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL 2024)",
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
}
```