question-generator-v2

This model is a PEFT adapter fine-tuned from microsoft/Phi-3.5-mini-instruct on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 0.7497

Model description

More information needed

Intended uses & limitations

More information needed
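
Since usage is not documented, here is a minimal loading sketch (not from the model author): the adapter is applied on top of the base model via PEFT. The prompt format is an assumption, as the training data is not described.

```python
# Minimal loading sketch. Assumes a causal-LM PEFT adapter; the prompt
# format below is a guess, since the training data is not documented.
import torch
from transformers import AutoTokenizer
from peft import AutoPeftModelForCausalLM

model = AutoPeftModelForCausalLM.from_pretrained(
    "emdemor/question-generator-v2",  # fetches base weights + adapter
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3.5-mini-instruct")

prompt = "Generate a question about the following text:\n..."  # hypothetical format
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```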

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 0.0005
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 3
  • mixed_precision_training: Native AMP
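
These values map almost one-to-one onto transformers.TrainingArguments fields. A reconstruction sketch follows; the actual training script is not included in this card, and the output directory is hypothetical.

```python
# Sketch mapping the listed hyperparameters onto TrainingArguments.
# The Adam betas (0.9, 0.999) and epsilon 1e-08 above are the library
# defaults, so they need no explicit arguments here.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="question-generator-v2",  # hypothetical
    learning_rate=5e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=3,
    fp16=True,  # assumption: "Native AMP" could equally have been bf16
)
```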

Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.0483        | 0.0967 | 50   | 0.9260          |
| 0.8577        | 0.1934 | 100  | 0.8202          |
| 0.7996        | 0.2901 | 150  | 0.7895          |
| 0.7802        | 0.3868 | 200  | 0.7784          |
| 0.7671        | 0.4836 | 250  | 0.7721          |
| 0.761         | 0.5803 | 300  | 0.7688          |
| 0.7587        | 0.6770 | 350  | 0.7663          |
| 0.7529        | 0.7737 | 400  | 0.7637          |
| 0.7562        | 0.8704 | 450  | 0.7616          |
| 0.7507        | 0.9671 | 500  | 0.7602          |
| 0.7274        | 1.0638 | 550  | 0.7589          |
| 0.7422        | 1.1605 | 600  | 0.7574          |
| 0.735         | 1.2573 | 650  | 0.7571          |
| 0.7367        | 1.3540 | 700  | 0.7555          |
| 0.7471        | 1.4507 | 750  | 0.7549          |
| 0.7404        | 1.5474 | 800  | 0.7541          |
| 0.742         | 1.6441 | 850  | 0.7533          |
| 0.7385        | 1.7408 | 900  | 0.7530          |
| 0.7352        | 1.8375 | 950  | 0.7525          |
| 0.7323        | 1.9342 | 1000 | 0.7516          |
| 0.7328        | 2.0309 | 1050 | 0.7515          |
| 0.7264        | 2.1277 | 1100 | 0.7510          |
| 0.704         | 2.2244 | 1150 | 0.7505          |
| 0.7242        | 2.3211 | 1200 | 0.7510          |
| 0.7203        | 2.4178 | 1250 | 0.7502          |
| 0.7285        | 2.5145 | 1300 | 0.7499          |
| 0.7192        | 2.6112 | 1350 | 0.7502          |
| 0.7204        | 2.7079 | 1400 | 0.7497          |

Framework versions

  • PEFT 0.12.0
  • Transformers 4.42.3
  • PyTorch 2.1.2
  • Datasets 2.20.0
  • Tokenizers 0.19.1
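
Because this repository holds a PEFT adapter rather than full model weights, it can optionally be merged into the base model for adapter-free serving. A sketch, assuming a LoRA-style adapter (the PEFT method is not stated in this card):

```python
# Sketch: fold the adapter into the base weights and save a standalone model.
from peft import AutoPeftModelForCausalLM

model = AutoPeftModelForCausalLM.from_pretrained("emdemor/question-generator-v2")
merged = model.merge_and_unload()  # requires a mergeable adapter type, e.g. LoRA
merged.save_pretrained("question-generator-v2-merged")  # hypothetical path
```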