
maximo-t5-normalize

This model is a fine-tuned version of google/flan-t5-small on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 1.9638
  • Rouge1: 38.9057
  • Rouge2: 19.0476
  • RougeL: 39.5803
  • RougeLsum: 39.467
  • Gen Len: 15.2857

Model description

More information needed

Intended uses & limitations

More information needed
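
While the card gives no usage details, the checkpoint is a standard text-to-text (seq2seq) model, so a minimal inference sketch might look like the following. The repo id maxadmin/maximo-t5-normalize comes from the model tree at the end of this card; the input string and generation settings are assumptions, since the expected prompt format is undocumented.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "maxadmin/maximo-t5-normalize"  # repo id from the model tree below
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Hypothetical input: a raw Maximo-style work-order description to normalize.
# The actual prompt format used in fine-tuning is not documented here.
text = "PUMP LEAKING AT SEAL - REPLACE ASAP"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```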

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch mapping them onto training arguments follows the list):

  • learning_rate: 0.0005
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
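
As a rough sketch, the settings above map onto transformers' Seq2SeqTrainingArguments as follows. The output_dir value and predict_with_generate flag are placeholders/assumptions, not values reported by the card.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="maximo-t5-normalize",  # placeholder, not reported
    learning_rate=5e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    predict_with_generate=True,  # assumed, since ROUGE requires generated text
)
```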

Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | RougeL  | RougeLsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log        | 1.0   | 8    | 2.2357          | 17.3817 | 0.0     | 17.3817 | 17.8983   | 10.5714 |
| No log        | 2.0   | 16   | 2.1585          | 23.8431 | 14.2857 | 23.5882 | 24.101    | 13.2857 |
| No log        | 3.0   | 24   | 1.9795          | 33.3557 | 22.3971 | 34.1438 | 33.9448   | 11.7143 |
| No log        | 4.0   | 32   | 1.8823          | 29.3267 | 15.2381 | 28.5832 | 30.4088   | 13.5714 |
| No log        | 5.0   | 40   | 1.8984          | 40.4849 | 14.2857 | 40.3257 | 40.1475   | 15.8571 |
| No log        | 6.0   | 48   | 1.9456          | 32.562  | 19.0476 | 33.3501 | 33.7775   | 16.1429 |
| No log        | 7.0   | 56   | 1.9990          | 35.1207 | 19.0476 | 35.8838 | 36.2473   | 17.0    |
| No log        | 8.0   | 64   | 1.9615          | 38.9057 | 19.0476 | 39.5803 | 39.467    | 15.2857 |
| No log        | 9.0   | 72   | 1.9613          | 38.9057 | 19.0476 | 39.5803 | 39.467    | 15.2857 |
| No log        | 10.0  | 80   | 1.9638          | 38.9057 | 19.0476 | 39.5803 | 39.467    | 15.2857 |
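
The ROUGE figures above are the kind produced by the evaluate library's rouge metric. A sketch of that computation, with hypothetical prediction and reference strings, is:

```python
import evaluate

rouge = evaluate.load("rouge")

# Hypothetical examples; the actual evaluation data is not published here.
predictions = ["pump seal leaking, replace seal"]
references = ["pump leaking at seal, replace seal"]

scores = rouge.compute(predictions=predictions, references=references)
print(scores)  # keys: 'rouge1', 'rouge2', 'rougeL', 'rougeLsum'
```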

Framework versions

  • Transformers 4.35.2
  • Pytorch 2.1.0+cu121
  • Datasets 2.15.0
  • Tokenizers 0.15.0

Model tree for maxadmin/maximo-t5-normalize

This model was fine-tuned from google/flan-t5-small.