---
license: mit
base_model: facebook/bart-large-cnn
tags:
  - generated_from_trainer
metrics:
  - rouge
  - bleu
model-index:
  - name: HealthScienceBARTPrincipal
    results: []
---

# HealthScienceBARTPrincipal

This model is a fine-tuned version of facebook/bart-large-cnn on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 3.8639
- Rouge1: 57.9681
- Rouge2: 23.5702
- Rougel: 42.298
- Rougelsum: 54.4306
- Bertscore Precision: 83.6132
- Bertscore Recall: 84.9752
- Bertscore F1: 84.2861
- Bleu: 0.1834
- Gen Len: 234.8649
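
As a quick sanity check, the model can be loaded for summarization via the `transformers` pipeline. A minimal sketch, assuming the checkpoint is published under the repo id `MarPla/HealthScienceBARTPrincipal` (inferred from this card's title and author, not confirmed); the generation settings are illustrative, not the ones used to produce the scores above.

```python
from transformers import pipeline

# Repo id is an assumption based on this card's title and author.
summarizer = pipeline("summarization", model="MarPla/HealthScienceBARTPrincipal")

article = (
    "Regular physical activity has been associated with lower incidence of "
    "cardiovascular disease across several large cohort studies..."
)

# max_length/min_length are illustrative choices; the evaluation above
# produced outputs averaging ~235 tokens (Gen Len).
result = summarizer(article, max_length=256, min_length=64, do_sample=False)
print(result[0]["summary_text"])
```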

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
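
For orientation, a minimal sketch of how these values map onto `Seq2SeqTrainingArguments`; only the hyperparameters listed above come from this card, and `output_dir` is a placeholder.

```python
from transformers import Seq2SeqTrainingArguments

# Mirrors the hyperparameters listed above; output_dir is a placeholder.
training_args = Seq2SeqTrainingArguments(
    output_dir="HealthScienceBARTPrincipal",
    learning_rate=5e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    gradient_accumulation_steps=16,  # effective train batch size: 1 * 16 = 16
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=1,
    # Trainer's Adam defaults already match betas=(0.9, 0.999), eps=1e-8.
)
```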

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Bertscore Precision | Bertscore Recall | Bertscore F1 | Bleu   | Gen Len  |
|:-------------:|:------:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------------------:|:----------------:|:------------:|:------:|:--------:|
| 5.8822        | 0.0826 | 100  | 5.6709          | 48.7438 | 17.0806 | 33.6533 | 45.9891   | 80.2777             | 82.0236          | 81.1384      | 0.1276 | 234.8898 |
| 5.2537        | 0.1653 | 200  | 5.1160          | 48.5934 | 17.8578 | 34.7839 | 45.8306   | 80.2943             | 82.3539          | 81.3073      | 0.1358 | 234.8898 |
| 4.8915        | 0.2479 | 300  | 4.7665          | 53.8863 | 19.5528 | 37.1095 | 50.6262   | 81.6835             | 83.1962          | 82.4302      | 0.1499 | 234.8898 |
| 4.6879        | 0.3305 | 400  | 4.5500          | 53.0399 | 20.3314 | 38.0481 | 49.3041   | 81.2912             | 83.6329          | 82.4409      | 0.1582 | 234.8898 |
| 4.4472        | 0.4131 | 500  | 4.3787          | 55.7809 | 21.4354 | 39.5787 | 52.1713   | 82.3882             | 84.0466          | 83.2062      | 0.1663 | 234.8898 |
| 4.4391        | 0.4958 | 600  | 4.2267          | 55.0551 | 21.5312 | 39.9051 | 51.3866   | 82.1951             | 84.1433          | 83.1541      | 0.1686 | 234.8898 |
| 4.386         | 0.5784 | 700  | 4.1013          | 56.2812 | 22.3834 | 40.9161 | 52.93     | 82.9407             | 84.4308          | 83.6764      | 0.1738 | 234.8898 |
| 4.198         | 0.6610 | 800  | 4.0168          | 56.3251 | 22.6045 | 41.1441 | 52.8715   | 83.2275             | 84.6518          | 83.931       | 0.1762 | 234.8898 |
| 3.9607        | 0.7436 | 900  | 3.9377          | 57.4072 | 22.9187 | 41.6959 | 53.899    | 83.4352             | 84.8095          | 84.1141      | 0.1787 | 234.8898 |
| 3.9771        | 0.8263 | 1000 | 3.8963          | 58.1506 | 23.5231 | 42.1596 | 54.4019   | 83.6132             | 85.0153          | 84.3057      | 0.1842 | 234.8898 |
| 3.8807        | 0.9089 | 1100 | 3.8447          | 57.9746 | 23.8219 | 42.4743 | 54.4461   | 83.6303             | 85.03            | 84.3217      | 0.1867 | 234.8898 |
| 4.0011        | 0.9915 | 1200 | 3.8214          | 58.2153 | 23.8513 | 42.5964 | 54.7631   | 83.7005             | 85.0498          | 84.3672      | 0.1867 | 234.8898 |
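
The ROUGE, BLEU, and BERTScore columns above can be computed with the `evaluate` library. A hedged sketch with toy inputs; the exact tokenization and aggregation used during this training run are not recorded in this card.

```python
import evaluate

rouge = evaluate.load("rouge")
bleu = evaluate.load("bleu")
bertscore = evaluate.load("bertscore")

# Toy placeholders; in practice these would be model outputs and gold summaries.
predictions = ["exercise lowers cardiovascular risk in large cohort studies"]
references = ["physical activity reduces the risk of cardiovascular disease"]

print(rouge.compute(predictions=predictions, references=references))
print(bleu.compute(predictions=predictions, references=references))
print(bertscore.compute(predictions=predictions, references=references, lang="en"))
```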

### Framework versions

- Transformers 4.41.2
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1