
AMRBART (large-sized model)

AMRBART is a BART-based model that is continually pre-trained on English text and AMR graphs. It was introduced in the paper Graph Pre-training for AMR Parsing and Generation by Bai et al. at ACL 2022 and first released in this repository.

Model description

AMRBART follows the BART model, which uses a transformer encoder-decoder architecture. AMRBART is pre-trained with six tasks:

  • learning to reconstruct the text based on the corrupted text.
  • learning to reconstruct AMR graphs based on the corrupted AMR graph.
  • learning to reconstruct the text based on the corrupted text and its corresponding AMR graph.
  • learning to reconstruct an AMR graph based on the corrupted AMR graph and its corresponding text.
  • learning to reconstruct the text based on the corrupted text and its corresponding corrupted AMR graph.
  • learning to reconstruct an AMR graph based on the corrupted AMR graph and its corresponding corrupted text.

AMRBART is particularly effective when fine-tuned for AMR parsing and AMR-to-text generation tasks.

Training data

The AMRBART model is pre-trained on AMR 3.0 (a dataset of 55,635 training instances) and English Gigaword (from which 200,000 sentences were randomly sampled).

Intended uses & limitations

You can use the raw model for either AMR encoding or AMR parsing, but it's mostly intended to be fine-tuned on a downstream task.

How to use

Here is how to initialize this model in PyTorch:

from transformers import BartForConditionalGeneration

# Load the pre-trained AMRBART-large checkpoint, which uses the standard BART architecture
model = BartForConditionalGeneration.from_pretrained("xfbai/AMRBART-large")

Please refer to this repository for tokenizer initialization and data preprocessing.
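
As a minimal sketch only (not the authors' full pipeline), the snippet below assumes the Hub checkpoint ships BART-compatible tokenizer files and simply runs the model through the standard seq2seq generate API; the AMR graph linearization, special tokens, and pre/post-processing required for real AMR parsing or AMR-to-text generation are provided by the repository linked above.

from transformers import BartTokenizer, BartForConditionalGeneration

# Assumption: the checkpoint provides BART-compatible tokenizer files;
# the AMR-specific tokenizer and preprocessing live in the authors' repository.
tokenizer = BartTokenizer.from_pretrained("xfbai/AMRBART-large")
model = BartForConditionalGeneration.from_pretrained("xfbai/AMRBART-large")

text = "The boy wants to go to New York."
inputs = tokenizer(text, return_tensors="pt")

# Without task-specific fine-tuning and AMR preprocessing this only exercises
# the seq2seq interface; the output is not a meaningful AMR graph.
output_ids = model.generate(**inputs, max_length=128, num_beams=5)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))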

BibTeX entry and citation info

Please cite this paper if you find this model helpful:

@inproceedings{bai-etal-2022-graph,
    title = "Graph Pre-training for {AMR} Parsing and Generation",
    author = "Bai, Xuefeng  and
      Chen, Yulong and
      Zhang, Yue",
    booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "todo",
    doi = "todo",
    pages = "todo"
}