
fine-tuned-BioBART-20-epochs-1500-input-256-output

This model is a fine-tuned version of GanjinZero/biobart-base on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.9257
  • ROUGE-1: 0.1655
  • ROUGE-2: 0.0291
  • ROUGE-L: 0.1256
  • ROUGE-Lsum: 0.1266
  • Gen Len: 34.62 (average length of generated sequences, in tokens)
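
The repository name suggests the model performs abstractive summarization of biomedical text, with the "1500-input-256-output" suffix presumably referring to input/output length limits. Below is a minimal inference sketch; the generation budget of 256 tokens is an assumption inferred from the model name, not a confirmed setting:

```python
# Minimal inference sketch, assuming the model is used for abstractive
# summarization (inferred from the ROUGE metrics and generation length above).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "tanatapanun/fine-tuned-BioBART-20-epochs-1500-input-256-output"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "..."  # a biomedical document to summarize (placeholder)
inputs = tokenizer(text, truncation=True, return_tensors="pt")

# max_length=256 mirrors the "256-output" part of the model name; this is an
# assumption about the intended generation budget, not a verified setting.
summary_ids = model.generate(**inputs, max_length=256, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```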

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training; a sketch of the corresponding Trainer configuration follows the list:

  • learning_rate: 0.0001
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 20
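
These values map onto the Hugging Face Trainer API roughly as follows. This is a sketch only: the model, tokenizer, and dataset wiring are omitted, and everything not listed above is an assumption.

```python
# Sketch of how the listed hyperparameters map onto Seq2SeqTrainingArguments.
# Only the values from the list above are taken from the card; the rest are
# assumptions (marked in comments). Adam with betas=(0.9, 0.999) and
# epsilon=1e-08 is the Trainer's default optimizer, so it needs no flag here.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="fine-tuned-BioBART-20-epochs-1500-input-256-output",
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=20,
    evaluation_strategy="epoch",  # assumption: the results table shows one eval per epoch
    predict_with_generate=True,   # assumption: required to compute ROUGE during eval
)
```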

Training results

| Training Loss | Epoch | Step | Validation Loss | ROUGE-1 | ROUGE-2 | ROUGE-L | ROUGE-Lsum | Gen Len |
|---------------|-------|------|-----------------|---------|---------|---------|------------|---------|
| No log        | 1.0   | 151  | 6.1052          | 0.0511  | 0.0     | 0.047   | 0.0474     | 22.48   |
| No log        | 2.0   | 302  | 1.1483          | 0.077   | 0.0156  | 0.0673  | 0.0678     | 11.56   |
| No log        | 3.0   | 453  | 0.9767          | 0.0744  | 0.0182  | 0.0537  | 0.0557     | 23.57   |
| 4.0217        | 4.0   | 604  | 0.9160          | 0.1355  | 0.033   | 0.1053  | 0.1042     | 37.77   |
| 4.0217        | 5.0   | 755  | 0.8850          | 0.1682  | 0.0352  | 0.1342  | 0.1342     | 41.92   |
| 4.0217        | 6.0   | 906  | 0.8736          | 0.1342  | 0.0308  | 0.1037  | 0.1037     | 35.34   |
| 0.761         | 7.0   | 1057 | 0.8582          | 0.144   | 0.0361  | 0.1082  | 0.1095     | 39.27   |
| 0.761         | 8.0   | 1208 | 0.8551          | 0.165   | 0.0392  | 0.1233  | 0.1254     | 39.55   |
| 0.761         | 9.0   | 1359 | 0.8623          | 0.141   | 0.0302  | 0.1169  | 0.1179     | 23.69   |
| 0.5257        | 10.0  | 1510 | 0.8642          | 0.1715  | 0.0436  | 0.1249  | 0.1267     | 45.78   |
| 0.5257        | 11.0  | 1661 | 0.8705          | 0.1702  | 0.0331  | 0.1386  | 0.1385     | 30.28   |
| 0.5257        | 12.0  | 1812 | 0.8761          | 0.169   | 0.035   | 0.1247  | 0.1254     | 42.74   |
| 0.5257        | 13.0  | 1963 | 0.8938          | 0.1719  | 0.0376  | 0.139   | 0.1389     | 29.73   |
| 0.368         | 14.0  | 2114 | 0.8907          | 0.1716  | 0.0402  | 0.1371  | 0.1377     | 36.07   |
| 0.368         | 15.0  | 2265 | 0.9027          | 0.1677  | 0.0324  | 0.1329  | 0.134      | 36.82   |
| 0.368         | 16.0  | 2416 | 0.9141          | 0.16    | 0.0322  | 0.1268  | 0.1281     | 32.87   |
| 0.2635        | 17.0  | 2567 | 0.9177          | 0.1702  | 0.0324  | 0.1312  | 0.1323     | 35.4    |
| 0.2635        | 18.0  | 2718 | 0.9194          | 0.1713  | 0.0333  | 0.1297  | 0.1312     | 37.75   |
| 0.2635        | 19.0  | 2869 | 0.9234          | 0.1693  | 0.0294  | 0.1293  | 0.1299     | 35.69   |
| 0.2141        | 20.0  | 3020 | 0.9257          | 0.1655  | 0.0291  | 0.1256  | 0.1266     | 34.62   |
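
("No log" entries mean the training loss had not yet been logged at that evaluation step.) Note that validation loss bottoms out at epoch 8 (0.8551) and rises steadily afterwards, so the final-epoch checkpoint whose metrics head this card is not the best one by validation loss. The ROUGE scores were most likely produced with the Hugging Face `evaluate` library; a minimal sketch of that computation, with placeholder predictions and references:

```python
# Sketch of the ROUGE computation, assuming the scores above come from the
# Hugging Face `evaluate` library (pip install evaluate rouge_score).
import evaluate

rouge = evaluate.load("rouge")
predictions = ["model-generated summary"]  # placeholder
references = ["reference summary"]         # placeholder
print(rouge.compute(predictions=predictions, references=references))
# -> {'rouge1': ..., 'rouge2': ..., 'rougeL': ..., 'rougeLsum': ...}
```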

Framework versions

  • Transformers 4.36.2
  • Pytorch 1.12.1+cu113
  • Datasets 2.16.1
  • Tokenizers 0.15.0
