
kaytoo2022/t5_technical_qa_codet5_tokenizer

This model is a fine-tuned version of google/flan-t5-base on an unknown dataset. It achieves the following results at the end of training:

  • Train Loss: 0.6959
  • Validation Loss: 2.6033
  • Epoch: 9

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 0.0002, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
  • training_precision: float32
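The `AdamWeightDecay` optimizer applies *decoupled* weight decay: the decay term is added to the parameter update directly rather than folded into the gradient. A minimal pure-Python sketch of one update step using the hyperparameters listed above (the actual training used the Transformers/Keras implementation, not this code):

```python
# AdamW-style update step with the hyperparameters from this card.
# Pure Python, single scalar parameter, for illustration only.

LR = 2e-4            # learning_rate
BETA_1 = 0.9
BETA_2 = 0.999
EPSILON = 1e-7
WEIGHT_DECAY = 0.01  # weight_decay_rate


def adamw_step(param, grad, m, v, t):
    """Return updated (param, m, v) after one AdamW step at 1-based timestep t."""
    m = BETA_1 * m + (1 - BETA_1) * grad       # first-moment (mean) estimate
    v = BETA_2 * v + (1 - BETA_2) * grad ** 2  # second-moment estimate
    m_hat = m / (1 - BETA_1 ** t)              # bias correction
    v_hat = v / (1 - BETA_2 ** t)
    # Decoupled weight decay: applied to the parameter itself,
    # separate from the adaptive gradient term.
    param = param - LR * (m_hat / (v_hat ** 0.5 + EPSILON) + WEIGHT_DECAY * param)
    return param, m, v
```

With a positive gradient, a single step shrinks the parameter by roughly `LR * (1 + WEIGHT_DECAY * param)` after bias correction, which is what distinguishes AdamW from plain Adam with L2 regularization.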

Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.9033     | 3.4115          | 0     |
| 3.1530     | 3.0668          | 1     |
| 2.6561     | 2.8018          | 2     |
| 2.2084     | 2.6603          | 3     |
| 1.8253     | 2.5558          | 4     |
| 1.5026     | 2.4479          | 5     |
| 1.2387     | 2.4164          | 6     |
| 1.0228     | 2.5312          | 7     |
| 0.8322     | 2.5353          | 8     |
| 0.6959     | 2.6033          | 9     |
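Note that while train loss decreases monotonically, validation loss bottoms out at epoch 6 and rises afterwards, suggesting the later epochs overfit. A quick check over the table's numbers:

```python
# Validation loss per epoch, copied from the training-results table above.
val_loss = [3.4115, 3.0668, 2.8018, 2.6603, 2.5558,
            2.4479, 2.4164, 2.5312, 2.5353, 2.6033]

# Index of the lowest validation loss (epochs are 0-based in the table).
best_epoch = min(range(len(val_loss)), key=val_loss.__getitem__)
```

`best_epoch` comes out as 6 (validation loss 2.4164), so an earlier checkpoint than the final one may generalize better.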

Framework versions

  • Transformers 4.42.4
  • TensorFlow 2.17.0
  • Datasets 2.20.0
  • Tokenizers 0.19.1
