EU_AI_ACT_T5_Model

This model is a fine-tuned version of google/flan-t5-base, trained as a PEFT adapter; the training dataset is not documented. It achieves the following results on the evaluation set:

  • Loss: 2.1507

Model description

More information needed

Intended uses & limitations

More information needed
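
Because this checkpoint is published as a PEFT adapter on top of google/flan-t5-base, it must be loaded together with the base model. Below is a minimal inference sketch; the example prompt and generation settings are illustrative assumptions, not documented usage.

```python
# Minimal inference sketch, assuming this repo hosts a PEFT adapter
# for google/flan-t5-base (see "Framework versions" below for the
# library versions used in training).
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import PeftModel

base_model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")
model = PeftModel.from_pretrained(base_model, "Komal-patra/EU_AI_ACT_T5_Model")
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")

# Hypothetical prompt; the exact task format used in training is not documented.
inputs = tokenizer(
    "What does the EU AI Act say about high-risk systems?",
    return_tensors="pt",
)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```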

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.001
  • train_batch_size: 4
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 3
  • total_train_batch_size: 12
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 16
  • training_steps: 1698
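
For reference, the settings above map onto Hugging Face TrainingArguments roughly as in the sketch below. The actual training script is not published, so treat this as an illustrative reconstruction; in particular, output_dir is an assumption.

```python
# Sketch of the hyperparameters above expressed as TrainingArguments;
# the real training script is not published, so this is illustrative only.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="results",              # assumed; not stated in the card
    learning_rate=1e-3,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=3,     # effective train batch size: 4 * 3 = 12
    lr_scheduler_type="linear",
    warmup_steps=16,
    max_steps=1698,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```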

Training results

Training Loss | Epoch | Step | Validation Loss
3.0084        | 0.59  |   50 | 2.5425
2.7308        | 1.18  |  100 | 2.4483
2.6435        | 1.76  |  150 | 2.3925
2.5873        | 2.35  |  200 | 2.3558
2.5247        | 2.94  |  250 | 2.3276
2.5323        | 3.53  |  300 | 2.3003
2.4288        | 4.12  |  350 | 2.2771
2.4247        | 4.71  |  400 | 2.2659
2.4014        | 5.29  |  450 | 2.2439
2.3761        | 5.88  |  500 | 2.2336
2.3056        | 6.47  |  550 | 2.2236
2.3443        | 7.06  |  600 | 2.2182
2.2877        | 7.65  |  650 | 2.2066
2.3028        | 8.24  |  700 | 2.1953
2.2589        | 8.82  |  750 | 2.1958
2.2306        | 9.41  |  800 | 2.1834
2.2571        | 10.0  |  850 | 2.1826
2.2109        | 10.59 |  900 | 2.1782
2.2216        | 11.18 |  950 | 2.1802
2.1881        | 11.76 | 1000 | 2.1734
2.1794        | 12.35 | 1050 | 2.1691
2.1933        | 12.94 | 1100 | 2.1654
2.1340        | 13.53 | 1150 | 2.1682
2.1698        | 14.12 | 1200 | 2.1564
2.1477        | 14.71 | 1250 | 2.1599
2.1353        | 15.29 | 1300 | 2.1573
2.1206        | 15.88 | 1350 | 2.1525
2.1175        | 16.47 | 1400 | 2.1520
2.1142        | 17.06 | 1450 | 2.1531
2.1152        | 17.65 | 1500 | 2.1529
2.1073        | 18.24 | 1550 | 2.1529
2.0990        | 18.82 | 1600 | 2.1520
2.1061        | 19.41 | 1650 | 2.1507
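
Validation loss falls steadily from 2.5425 at step 50 and plateaus near 2.15 over the final epochs, consistent with the reported evaluation loss of 2.1507.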

Framework versions

  • PEFT 0.8.2
  • Transformers 4.38.1
  • Pytorch 2.3.0+cu121
  • Datasets 2.17.0
  • Tokenizers 0.15.2