---
language:
  - en
tags:
  - efficiency
  - coreference-resolution
  - maverick
  - efficient
  - accurate
license:
  - cc-by-nc-sa-4.0
datasets:
  - LitBank
metrics:
  - CoNLL
task_categories:
  - coreference-resolution
model-index:
  - name: sapienzanlp/maverick-mes-litbank
    results:
      - task:
          type: coreference-resolution
          name: coreference-resolution
        dataset:
          name: litbank
          type: coreference
        metrics:
          - name: Avg. F1
            type: CoNLL
            value: 78
---

# Maverick-mes LitBank

Official weights for Maverick-mes, trained on LitBank and based on DeBERTa-large. The model achieves 78.0 Avg. CoNLL-F1 on LitBank.

Other models available on the SapienzaNLP Hugging Face hub:

| hf_model_name | Training dataset | Score | Singletons |
| --- | --- | --- | --- |
| `sapienzanlp/maverick-mes-ontonotes` | OntoNotes | 83.6 | No |
| `sapienzanlp/maverick-mes-litbank` | LitBank | 78.0 | Yes |
| `sapienzanlp/maverick-mes-preco` | PreCo | 87.4 | Yes |

N.B.: Each dataset follows different annotation guidelines; choose the model that matches your use case.
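These weights can be loaded through the Maverick pip package (`maverick-coref`). A minimal sketch, assuming the package's `Maverick` class, its `hf_name_or_path`/`device` arguments, and a `predict` method returning the coreference clusters (the exact signatures and output keys should be checked against the package documentation):

```python
# Install first: pip install maverick-coref
from maverick import Maverick

# Load the LitBank weights from the Hugging Face hub; any model name
# from the table above can be substituted here.
model = Maverick(
    hf_name_or_path="sapienzanlp/maverick-mes-litbank",
    device="cpu",  # or "cuda" if a GPU is available
)

text = "Elizabeth walked to the window. She looked out at the garden."

# predict() returns the tokenized text together with the predicted
# coreference clusters, e.g. a cluster linking "Elizabeth" and "She".
result = model.predict(text)
print(result)
```

Because this checkpoint was trained on LitBank, singleton mentions (entities mentioned only once) are also predicted, unlike the OntoNotes model.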

## Maverick: Efficient and Accurate Coreference Resolution Defying Recent Trends


## Citation

```bibtex
@inproceedings{martinelli-etal-2024-maverick,
    title     = "Maverick: Efficient and Accurate Coreference Resolution Defying Recent Trends",
    author    = "Martinelli, Giuliano and Barba, Edoardo and Navigli, Roberto",
    booktitle = "Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL 2024)",
    year      = "2024",
    address   = "Bangkok, Thailand",
    publisher = "Association for Computational Linguistics",
}
```