LifeSciencePegasusLargeModel

This model is a fine-tuned version of google/pegasus-large on an unknown dataset. It achieves the following results on the evaluation set (a minimal usage sketch follows the metrics list):

  • Loss: 5.6523
  • ROUGE-1: 44.7761
  • ROUGE-2: 12.6726
  • ROUGE-L: 29.0847
  • ROUGE-Lsum: 40.7566
  • BERTScore Precision: 77.9283
  • BERTScore Recall: 81.5854
  • BERTScore F1: 79.7092
  • BLEU: 0.0886
  • Gen Len: 225.7220
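
To make the card immediately usable, here is a minimal inference sketch using the transformers Auto classes. Only the repository id is taken from this card; the input text, truncation limit, beam count, and max_length are illustrative assumptions (max_length is chosen to roughly match the reported mean generation length of ~226 tokens), not settings confirmed by the card.

```python
# Minimal inference sketch. Assumptions: generation settings (num_beams,
# max_length) are illustrative; only the repo id is taken from this card.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "MarPla/LifeSciencePegasusLargeModel"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

article = "Paste a life-science article or abstract here."
inputs = tokenizer(article, truncation=True, max_length=1024, return_tensors="pt")

# Gen Len ~226 on the eval set suggests long summaries, hence max_length=256.
summary_ids = model.generate(**inputs, num_beams=4, max_length=256)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```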

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a Seq2SeqTrainingArguments sketch follows this list):

  • learning_rate: 5e-05
  • train_batch_size: 1
  • eval_batch_size: 1
  • seed: 42
  • gradient_accumulation_steps: 16
  • total_train_batch_size: 16
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • num_epochs: 1
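
For readers who want to reproduce this setup, the list above maps onto Seq2SeqTrainingArguments roughly as follows. This is a sketch, not the exact training script: output_dir is a placeholder, the dataset and tokenization code are omitted, and anything not in the list above is an assumption.

```python
# Sketch: the hyperparameters above as Seq2SeqTrainingArguments (transformers 4.41).
# output_dir is a placeholder; all other values mirror the list on this card.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="pegasus-large-lifescience",  # assumption: not stated on the card
    learning_rate=5e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    gradient_accumulation_steps=16,  # effective batch size: 1 * 16 = 16
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=1,
)
```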

Training results

| Training Loss | Epoch | Step | Validation Loss | ROUGE-1 | ROUGE-2 | ROUGE-L | ROUGE-Lsum | BERTScore Precision | BERTScore Recall | BERTScore F1 | BLEU | Gen Len |
|---------------|-------|------|-----------------|---------|---------|---------|------------|---------------------|------------------|--------------|------|---------|
| 6.2586 | 0.2643 | 300 | 6.0453 | 40.1947 | 11.1082 | 26.9714 | 36.2747 | 76.6344 | 80.8385 | 78.6731 | 0.0775 | 225.7220 |
| 6.0213 | 0.5286 | 600 | 5.7899 | 43.2445 | 12.1722 | 28.4564 | 39.1524 | 77.5194 | 81.3755 | 79.3945 | 0.0856 | 225.7220 |
| 5.9018 | 0.7929 | 900 | 5.6523 | 44.7761 | 12.6726 | 29.0847 | 40.7566 | 77.9283 | 81.5854 | 79.7092 | 0.0886 | 225.7220 |
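
The metric columns above can be computed with the evaluate library; the sketch below shows the relevant calls on placeholder predictions and references. Scaling ROUGE and BERTScore by 100 to match the table's percentage-style numbers is an assumption about how the card's values were produced.

```python
# Sketch: computing ROUGE, BERTScore, and BLEU with the `evaluate` library.
# `predictions` and `references` are placeholders, not data from this card.
import evaluate

predictions = ["a model-generated summary ..."]
references = ["the corresponding reference summary ..."]

rouge = evaluate.load("rouge")
bertscore = evaluate.load("bertscore")
bleu = evaluate.load("bleu")

rouge_scores = rouge.compute(predictions=predictions, references=references)
bert_scores = bertscore.compute(predictions=predictions, references=references, lang="en")
bleu_scores = bleu.compute(predictions=predictions, references=references)

print({k: v * 100 for k, v in rouge_scores.items()})           # ROUGE-1/2/L/Lsum
print(sum(bert_scores["f1"]) / len(bert_scores["f1"]) * 100)   # mean BERTScore F1
print(bleu_scores["bleu"])                                     # BLEU on a 0-1 scale
```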

Framework versions

  • Transformers 4.41.2
  • PyTorch 2.1.2
  • Datasets 2.2.1
  • Tokenizers 0.19.1

Model tree for MarPla/LifeSciencePegasusLargeModel

Fine-tuned from google/pegasus-large.