
v3_mistral_balance1_lora

This model is a LoRA adapter (PEFT) fine-tuned from peiyi9979/math-shepherd-mistral-7b-prm on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0130
  • Accuracy: 0.9980
  • Precision: 0.9818
  • Recall: 0.9474
  • F1: 0.9643
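
Because this repository contains a PEFT/LoRA adapter rather than full model weights, it needs to be loaded on top of the base PRM. The snippet below is a minimal loading sketch, not an official usage example: the model class (AutoModelForCausalLM) and dtype are assumptions, since the card does not state how the PRM head is applied.

```python
# Minimal loading sketch (assumptions: causal-LM head, bfloat16).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "peiyi9979/math-shepherd-mistral-7b-prm"   # base PRM named in this card
adapter_id = "mtzig/v3_mistral_balance1_lora"        # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)

# Attach the LoRA weights from this repository on top of the frozen base model.
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()
```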

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 8569382
  • distributed_type: multi-GPU
  • num_devices: 4
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 64
  • total_eval_batch_size: 32
  • optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 1
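
For reference, these settings map roughly onto the transformers TrainingArguments shown below. This is a sketch rather than the exact training script (the output_dir and whatever Trainer subclass implements the PRM objective are assumptions); the per-device batch size of 8 combines with 4 GPUs and 2 accumulation steps to give the total train batch size of 64.

```python
# Hyperparameter sketch mirroring the list above (output_dir is an assumption).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="v3_mistral_balance1_lora",
    learning_rate=2e-05,
    per_device_train_batch_size=8,   # 8 per device * 4 GPUs * 2 accumulation = 64 total
    per_device_eval_batch_size=8,    # 8 per device * 4 GPUs = 32 total
    gradient_accumulation_steps=2,
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    seed=8569382,
)
```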

Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy | Precision | Recall | F1     |
|:-------------:|:------:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log        | 0      | 0    | 0.3256          | 0.9369   | 0.1429    | 0.2456 | 0.1806 |
| 0.4529        | 0.0258 | 20   | 0.2883          | 0.9484   | 0.1493    | 0.1754 | 0.1613 |
| 0.2483        | 0.0515 | 40   | 0.1461          | 0.9672   | 0.2       | 0.0526 | 0.0833 |
| 0.1622        | 0.0773 | 60   | 0.1080          | 0.9687   | 0.35      | 0.1228 | 0.1818 |
| 0.1243        | 0.1031 | 80   | 0.0879          | 0.9697   | 0.4524    | 0.3333 | 0.3838 |
| 0.0678        | 0.1289 | 100  | 0.0700          | 0.9692   | 0.4719    | 0.7368 | 0.5753 |
| 0.0301        | 0.1546 | 120  | 0.0474          | 0.9836   | 0.65      | 0.9123 | 0.7591 |
| 0.0105        | 0.1804 | 140  | 0.0342          | 0.9911   | 0.8421    | 0.8421 | 0.8421 |
| 0.041         | 0.2062 | 160  | 0.0333          | 0.9926   | 0.875     | 0.8596 | 0.8673 |
| 0.0291        | 0.2320 | 180  | 0.0268          | 0.9930   | 0.8308    | 0.9474 | 0.8852 |
| 0.0366        | 0.2577 | 200  | 0.0262          | 0.9916   | 0.7941    | 0.9474 | 0.864  |
| 0.0133        | 0.2835 | 220  | 0.0206          | 0.9921   | 0.8154    | 0.9298 | 0.8689 |
| 0.0075        | 0.3093 | 240  | 0.0188          | 0.9955   | 0.9444    | 0.8947 | 0.9189 |
| 0.0036        | 0.3351 | 260  | 0.0168          | 0.9945   | 0.9107    | 0.8947 | 0.9027 |
| 0.0081        | 0.3608 | 280  | 0.0182          | 0.9960   | 0.9153    | 0.9474 | 0.9310 |
| 0.0155        | 0.3866 | 300  | 0.0145          | 0.9980   | 0.9818    | 0.9474 | 0.9643 |
| 0.0075        | 0.4124 | 320  | 0.0165          | 0.9975   | 0.9643    | 0.9474 | 0.9558 |
| 0.0033        | 0.4381 | 340  | 0.0139          | 0.9975   | 0.9643    | 0.9474 | 0.9558 |
| 0.01          | 0.4639 | 360  | 0.0136          | 0.9970   | 0.9474    | 0.9474 | 0.9474 |
| 0.0018        | 0.4897 | 380  | 0.0146          | 0.9970   | 0.9474    | 0.9474 | 0.9474 |
| 0.0006        | 0.5155 | 400  | 0.0138          | 0.9975   | 0.9815    | 0.9298 | 0.9550 |
| 0.003         | 0.5412 | 420  | 0.0135          | 0.9965   | 0.9310    | 0.9474 | 0.9391 |
| 0.0035        | 0.5670 | 440  | 0.0141          | 0.9965   | 0.9808    | 0.8947 | 0.9358 |
| 0.0024        | 0.5928 | 460  | 0.0148          | 0.9965   | 0.9808    | 0.8947 | 0.9358 |
| 0.0203        | 0.6186 | 480  | 0.0136          | 0.9970   | 0.9474    | 0.9474 | 0.9474 |
| 0.0293        | 0.6443 | 500  | 0.0164          | 0.9970   | 0.9811    | 0.9123 | 0.9455 |
| 0.0078        | 0.6701 | 520  | 0.0149          | 0.9970   | 0.9474    | 0.9474 | 0.9474 |
| 0.0291        | 0.6959 | 540  | 0.0147          | 0.9975   | 0.9643    | 0.9474 | 0.9558 |
| 0.0119        | 0.7216 | 560  | 0.0136          | 0.9970   | 0.9474    | 0.9474 | 0.9474 |
| 0.002         | 0.7474 | 580  | 0.0138          | 0.9980   | 0.9818    | 0.9474 | 0.9643 |
| 0.0009        | 0.7732 | 600  | 0.0140          | 0.9980   | 0.9818    | 0.9474 | 0.9643 |
| 0.0022        | 0.7990 | 620  | 0.0134          | 0.9980   | 0.9818    | 0.9474 | 0.9643 |
| 0.0149        | 0.8247 | 640  | 0.0136          | 0.9980   | 0.9818    | 0.9474 | 0.9643 |
| 0.0397        | 0.8505 | 660  | 0.0140          | 0.9980   | 0.9818    | 0.9474 | 0.9643 |
| 0.0058        | 0.8763 | 680  | 0.0135          | 0.9980   | 0.9818    | 0.9474 | 0.9643 |
| 0.0153        | 0.9021 | 700  | 0.0132          | 0.9980   | 0.9818    | 0.9474 | 0.9643 |
| 0.0122        | 0.9278 | 720  | 0.0132          | 0.9980   | 0.9818    | 0.9474 | 0.9643 |
| 0.0276        | 0.9536 | 740  | 0.0132          | 0.9980   | 0.9818    | 0.9474 | 0.9643 |
| 0.0042        | 0.9794 | 760  | 0.0130          | 0.9980   | 0.9818    | 0.9474 | 0.9643 |

Framework versions

  • PEFT 0.13.2
  • Transformers 4.46.0
  • Pytorch 2.5.1+cu124
  • Datasets 3.1.0
  • Tokenizers 0.20.3