---
language:
  - en
tags:
  - coreference-resolution
license:
  - cc-by-nc-sa-4.0
datasets:
  - PreCo
metrics:
  - CoNLL
task_categories:
  - coreference-resolution
model-index:
  - name: sapienzanlp/maverick-mes-preco
    results:
      - task:
          type: coreference-resolution
          name: coreference-resolution
        dataset:
          name: preco
          type: coreference
        metrics:
          - name: Avg. F1
            type: CoNLL
            value: 87.4
---

# Maverick mes PreCo

Official weights for Maverick-mes, trained on PreCo and based on DeBERTa-large. This model achieves 87.4 average CoNLL-F1 on the PreCo coreference resolution dataset.
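A minimal inference sketch is shown below. It assumes the Maverick pip package (`maverick-coref`); the constructor arguments and the structure of the prediction output are assumptions drawn from the Maverick codebase and may differ across package versions.

```python
# Minimal sketch, assuming the `maverick-coref` pip package is installed
# (pip install maverick-coref); argument names and output fields may
# differ across versions.
from maverick import Maverick

# Load the PreCo-trained checkpoint from the Hugging Face hub
# (weights are downloaded on first use).
model = Maverick(hf_name_or_path="sapienzanlp/maverick-mes-preco")

text = "Barack Obama was born in Hawaii. He was elected president in 2008."

# Run end-to-end coreference resolution on raw text.
prediction = model.predict(text)

# The prediction is expected to contain the detected coreference clusters
# (e.g., mention offsets and mention strings); inspect it for the exact keys.
print(prediction)
```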

Other models available on the SapienzaNLP Hugging Face hub:

| hf_model_name                          | Training dataset | Avg. CoNLL F1 | Singletons |
|----------------------------------------|------------------|---------------|------------|
| `"sapienzanlp/maverick-mes-ontonotes"` | OntoNotes        | 83.6          | No         |
| `"sapienzanlp/maverick-mes-litbank"`   | LitBank          | 78.0          | Yes        |
| `"sapienzanlp/maverick-mes-preco"`     | PreCo            | 87.4          | Yes        |

N.B. Each dataset follows different annotation guidelines; choose the model that matches your use case.

## Maverick: Efficient and Accurate Coreference Resolution Defying Recent Trends

Conference: ACL 2024 · License: CC BY-NC-SA 4.0 · Pip package · git

## Citation

@inproceedings{martinelli-etal-2024-maverick,
    title     = "Maverick: Efficient and Accurate Coreference Resolution Defying Recent Trends",
    author    = "Martinelli, Giuliano and
                 Barba, Edoardo and
                 Navigli, Roberto",
    booktitle = "Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL 2024)",
    year      = "2024",
    address   = "Bangkok, Thailand",
    publisher = "Association for Computational Linguistics",
}