---
language: en
tags:
  - summarization
license: apache-2.0
datasets:
  - cnn_dailymail
  - xsum
thumbnail: https://huggingface.co/front/thumbnails/distilbart_medium.png
model-index:
  - name: sshleifer/distilbart-cnn-6-6
    results:
      - task:
          type: summarization
          name: Summarization
        dataset:
          name: cnn_dailymail
          type: cnn_dailymail
          config: 3.0.0
          split: test
        metrics:
          - name: ROUGE-1
            type: rouge
            value: 42.7007
            verified: true
          - name: ROUGE-2
            type: rouge
            value: 20.1823
            verified: true
          - name: ROUGE-L
            type: rouge
            value: 29.7085
            verified: true
          - name: ROUGE-LSUM
            type: rouge
            value: 39.6714
            verified: true
          - name: loss
            type: loss
            value: 3.0600528717041016
            verified: true
          - name: gen_len
            type: gen_len
            value: 67.5926
            verified: true
---

## Usage

This checkpoint should be loaded with `BartForConditionalGeneration.from_pretrained`. See the BART docs for more information.

## Metrics for DistilBART models

| Model Name                 | MM Params | Inference Time (MS) | Speedup | Rouge 2 | Rouge-L |
|----------------------------|-----------|---------------------|---------|---------|---------|
| distilbart-xsum-12-1       | 222       | 90                  | 2.54    | 18.31   | 33.37   |
| distilbart-xsum-6-6        | 230       | 132                 | 1.73    | 20.92   | 35.73   |
| distilbart-xsum-12-3       | 255       | 106                 | 2.16    | 21.37   | 36.39   |
| distilbart-xsum-9-6        | 268       | 136                 | 1.68    | 21.72   | 36.61   |
| bart-large-xsum (baseline) | 406       | 229                 | 1       | 21.85   | 36.50   |
| distilbart-xsum-12-6       | 306       | 137                 | 1.68    | 22.12   | 36.99   |
| bart-large-cnn (baseline)  | 406       | 381                 | 1       | 21.06   | 30.63   |
| distilbart-12-3-cnn        | 255       | 214                 | 1.78    | 20.57   | 30.00   |
| distilbart-12-6-cnn        | 306       | 307                 | 1.24    | 21.26   | 30.59   |
| distilbart-6-6-cnn         | 230       | 182                 | 2.09    | 20.17   | 29.70   |
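The Speedup column appears to be each baseline's inference time divided by the distilled model's (an assumption inferred from the numbers, not stated in the card). For this checkpoint against bart-large-cnn:

```python
# Assumption: Speedup = baseline inference time / model inference time.
baseline_ms = 381  # bart-large-cnn (baseline)
model_ms = 182     # distilbart-6-6-cnn (this checkpoint)
speedup = round(baseline_ms / model_ms, 2)
print(speedup)  # → 2.09, matching the table
```

The same ratio reproduces the XSUM rows (e.g. 229 / 90 ≈ 2.54 for distilbart-xsum-12-1), so the table trades roughly 1 ROUGE-2 point for about a 2x reduction in CNN/DailyMail inference time.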