
Hyponatremia_M2_1000steps_1e7rate_01beta_CSFTDPO

This model is a fine-tuned version of tsavage68/Summary4500_M2_200steps_1e7rate_SFT on an unknown dataset. It achieves the following results on the evaluation set (a note on how to read the reward metrics follows the list):

  • Loss: 0.0020
  • Rewards/chosen: -4.7124
  • Rewards/rejected: -19.2249
  • Rewards/accuracies: 0.9980
  • Rewards/margins: 14.5125
  • Logps/rejected: -344.9792
  • Logps/chosen: -140.8642
  • Logits/rejected: -2.0739
  • Logits/chosen: -2.0387
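
For orientation: under the standard DPO formulation, each reward reported above is the β-scaled log-probability ratio between the trained policy and the frozen SFT reference. This card does not name the preference-optimization library or state β explicitly, so treat the definitions below as an assumption; the "01beta" suffix in the model name suggests β = 0.1.

```latex
% Assumed DPO metric definitions (pi_theta = trained policy, pi_ref = frozen SFT reference,
% y_w = chosen completion, y_l = rejected completion).
\[
r_{\text{chosen}}   = \beta\,\bigl(\log \pi_\theta(y_w \mid x) - \log \pi_{\text{ref}}(y_w \mid x)\bigr),\qquad
r_{\text{rejected}} = \beta\,\bigl(\log \pi_\theta(y_l \mid x) - \log \pi_{\text{ref}}(y_l \mid x)\bigr)
\]
\[
\text{Rewards/margins} = r_{\text{chosen}} - r_{\text{rejected}},\qquad
\mathcal{L}_{\text{DPO}} = -\log \sigma\bigl(r_{\text{chosen}} - r_{\text{rejected}}\bigr)
\]
```

Under this reading, the final margin of 14.5125 with 0.9980 accuracy means the policy ranks the chosen completion above the rejected one on essentially every evaluation pair, by a wide reference-relative likelihood gap.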

Model description

More information needed

Intended uses & limitations

More information needed
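
No usage guidance is given on this card, so the snippet below is a minimal inference sketch under common assumptions: the checkpoint is a standard causal LM loadable with transformers, the hub repository id shown is assumed rather than confirmed, and the prompt is only a placeholder.

```python
# Minimal inference sketch (assumptions: standard causal LM checkpoint, FP16 weights,
# assumed hub repository id; the prompt is a placeholder).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tsavage68/Summary4500_M2_1000steps_1e7rate_01beta_CSFTDPO"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,   # card lists FP16 tensors
    device_map="auto",           # requires accelerate; drop for CPU-only use
)

prompt = "Summarize the following clinical note: ..."  # placeholder input
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256, do_sample=False)

# Strip the prompt tokens and decode only the generated continuation.
generated = output_ids[0][inputs["input_ids"].shape[-1]:]
print(tokenizer.decode(generated, skip_special_tokens=True))
```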

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 1e-07
  • train_batch_size: 1
  • eval_batch_size: 1
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 100
  • training_steps: 1000
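
The card lists Transformers but not the preference-optimization library. Assuming the run used trl's DPOTrainer (a common setup for SFT + DPO pipelines), the hyperparameters above would map onto a configuration roughly like the sketch below; the dataset path, column layout, and β = 0.1 (inferred from the "01beta" suffix) are assumptions, not facts from the card.

```python
# Hedged sketch: mapping the listed hyperparameters onto trl's DPOConfig / DPOTrainer.
# Assumptions: trl is the training framework, beta = 0.1 (from the model name), and the
# preference data has the standard "prompt" / "chosen" / "rejected" columns.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base_id = "tsavage68/Summary4500_M2_200steps_1e7rate_SFT"  # SFT base listed on this card

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

# Placeholder dataset; the actual training data is not documented on this card.
train_dataset = load_dataset("json", data_files="preference_pairs.json")["train"]

config = DPOConfig(
    output_dir="Summary4500_M2_1000steps_1e7rate_01beta_CSFTDPO",
    learning_rate=1e-7,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=1000,
    beta=0.1,  # assumed from the "01beta" suffix
)

trainer = DPOTrainer(
    model=model,
    ref_model=None,          # with None, trl keeps a frozen copy of the policy as the reference
    args=config,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```

The Adam settings listed above (betas 0.9/0.999, epsilon 1e-8) match the Trainer defaults, so they are not set explicitly in this sketch.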

Training results

| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.5233 | 0.0112 | 50 | 0.5232 | -0.0091 | -0.3925 | 0.9980 | 0.3833 | -156.6543 | -93.8310 | -2.3405 | -2.2932 |
| 0.0383 | 0.0224 | 100 | 0.0430 | -0.7708 | -4.7747 | 0.9980 | 4.0039 | -200.4769 | -101.4475 | -2.2287 | -2.1837 |
| 0.0 | 0.0336 | 150 | 0.0070 | -2.4385 | -10.5541 | 0.9980 | 8.1156 | -258.2711 | -118.1250 | -2.1599 | -2.1186 |
| 0.0016 | 0.0448 | 200 | 0.0031 | -3.4937 | -14.8511 | 0.9980 | 11.3574 | -301.2408 | -128.6765 | -2.1144 | -2.0760 |
| 0.0002 | 0.0559 | 250 | 0.0029 | -3.4966 | -15.1134 | 0.9980 | 11.6168 | -303.8634 | -128.7055 | -2.1174 | -2.0790 |
| 0.0 | 0.0671 | 300 | 0.0026 | -3.6799 | -15.9182 | 0.9980 | 12.2384 | -311.9122 | -130.5386 | -2.1042 | -2.0666 |
| 0.0012 | 0.0783 | 350 | 0.0024 | -3.9841 | -16.8321 | 0.9980 | 12.8480 | -321.0512 | -133.5813 | -2.0951 | -2.0582 |
| 0.0001 | 0.0895 | 400 | 0.0022 | -4.3249 | -17.8761 | 0.9980 | 13.5512 | -331.4908 | -136.9887 | -2.0841 | -2.0480 |
| 0.0 | 0.1007 | 450 | 0.0022 | -4.4809 | -18.3463 | 0.9980 | 13.8653 | -336.1925 | -138.5490 | -2.0802 | -2.0445 |
| 0.0 | 0.1119 | 500 | 0.0022 | -4.5041 | -18.4203 | 0.9980 | 13.9162 | -336.9331 | -138.7807 | -2.0778 | -2.0423 |
| 0.0 | 0.1231 | 550 | 0.0021 | -4.5894 | -18.8015 | 0.9980 | 14.2121 | -340.7446 | -139.6336 | -2.0767 | -2.0412 |
| 0.0 | 0.1343 | 600 | 0.0021 | -4.6515 | -19.0151 | 0.9980 | 14.3636 | -342.8809 | -140.2545 | -2.0750 | -2.0398 |
| 0.0 | 0.1454 | 650 | 0.0020 | -4.6765 | -19.1006 | 0.9980 | 14.4240 | -343.7354 | -140.5052 | -2.0753 | -2.0401 |
| 0.0 | 0.1566 | 700 | 0.0020 | -4.6869 | -19.1397 | 0.9980 | 14.4528 | -344.1270 | -140.6091 | -2.0750 | -2.0398 |
| 0.0 | 0.1678 | 750 | 0.0020 | -4.6998 | -19.1976 | 0.9980 | 14.4978 | -344.7062 | -140.7377 | -2.0747 | -2.0396 |
| 0.0 | 0.1790 | 800 | 0.0020 | -4.7132 | -19.2365 | 0.9980 | 14.5233 | -345.0950 | -140.8720 | -2.0739 | -2.0389 |
| 0.0096 | 0.1902 | 850 | 0.0020 | -4.7099 | -19.2301 | 0.9980 | 14.5202 | -345.0307 | -140.8386 | -2.0740 | -2.0389 |
| 0.0 | 0.2014 | 900 | 0.0020 | -4.7077 | -19.2206 | 0.9980 | 14.5129 | -344.9359 | -140.8168 | -2.0737 | -2.0386 |
| 0.0 | 0.2126 | 950 | 0.0020 | -4.7125 | -19.2249 | 0.9980 | 14.5125 | -344.9792 | -140.8644 | -2.0739 | -2.0387 |
| 0.0 | 0.2238 | 1000 | 0.0020 | -4.7124 | -19.2249 | 0.9980 | 14.5125 | -344.9792 | -140.8642 | -2.0739 | -2.0387 |

Framework versions

  • Transformers 4.42.4
  • Pytorch 2.0.0+cu117
  • Datasets 2.20.0
  • Tokenizers 0.19.1

Model size

  • 7.24B params (Safetensors, FP16 tensors)

Model tree for tsavage68/Summary4500_M2_1000steps_1e7rate_01beta_CSFTDPO