---
license: apache-2.0
base_model: alignment-handbook/zephyr-7b-sft-full
tags:
- trl
- dpo
- alignment-handbook
- generated_from_trainer
model-index:
- name: zephyr-7b-dpo-full-gpt_consistent-reward-scale-1-rpo
  results: []
---

# zephyr-7b-dpo-full-gpt_consistent-reward-scale-1-rpo

This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1132
- Rewards/chosen: -0.3948
- Rewards/rejected: -0.9598
- Rewards/accuracies: 0.7543
- Rewards/margins: 0.5650
- Logps/rejected: -342.4998
- Logps/chosen: -324.5692
- Logits/rejected: 1.4353
- Logits/chosen: 0.4067
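
As a minimal inference sketch: the repo id below is an assumption derived from the model name in this card, so substitute the actual Hugging Face Hub path if it differs.

```python
# Minimal inference sketch. The repo id is a hypothetical placeholder based on
# the model name above; replace it with the real Hub path if different.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "zephyr-7b-dpo-full-gpt_consistent-reward-scale-1-rpo"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Zephyr models ship a chat template, so format prompts through the tokenizer.
messages = [{"role": "user", "content": "Explain DPO in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, return_tensors="pt", add_generation_prompt=True
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```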

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 55
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
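
As a configuration sketch, the hyperparameters above map onto TRL's `DPOConfig` roughly as follows. The dataset, `beta`, and the RPO mixing weight are not reported in this card, so the values flagged in comments are assumptions, not the run's actual settings.

```python
# Sketch of the training configuration using TRL's DPOTrainer. Only the
# hyperparameters listed above are taken from the card; everything marked
# "assumption" is a placeholder.
from trl import DPOConfig, DPOTrainer

training_args = DPOConfig(
    output_dir="zephyr-7b-dpo-full-gpt_consistent-reward-scale-1-rpo",
    learning_rate=5e-7,
    per_device_train_batch_size=8,   # train_batch_size
    per_device_eval_batch_size=8,    # eval_batch_size
    gradient_accumulation_steps=2,   # 8 GPUs x 8 x 2 = 128 total train batch
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=55,
    optim="adamw_torch",             # Adam with betas=(0.9, 0.999), eps=1e-8 (defaults)
    rpo_alpha=1.0,                   # assumption: "rpo" in the name suggests TRL's RPO term; value unknown
    bf16=True,                       # assumption: typical for multi-GPU Zephyr runs
)

# trainer = DPOTrainer(model=..., ref_model=..., args=training_args,
#                      train_dataset=..., eval_dataset=..., tokenizer=...)
# trainer.train()
```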

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.1732        | 0.1147 | 50   | 0.1641          | 0.0166         | -0.1092          | 0.7069             | 0.1259          | -257.4446      | -283.4267    | -2.4943         | -2.5737       |
| 0.1412        | 0.2294 | 100  | 0.1347          | -0.4342        | -0.7962          | 0.7284             | 0.3620          | -326.1406      | -328.5106    | -0.1538         | -0.4184       |
| 0.1307        | 0.3440 | 150  | 0.1261          | -0.3553        | -0.8583          | 0.7284             | 0.5030          | -332.3533      | -320.6210    | 0.7144          | 0.0181        |
| 0.1238        | 0.4587 | 200  | 0.1199          | -0.4108        | -0.9476          | 0.7328             | 0.5368          | -341.2862      | -326.1717    | 1.2989          | 0.2969        |
| 0.1185        | 0.5734 | 250  | 0.1166          | -0.3086        | -0.8924          | 0.7543             | 0.5838          | -335.7633      | -315.9550    | 0.8516          | -0.1745       |
| 0.1228        | 0.6881 | 300  | 0.1155          | -0.3695        | -0.9267          | 0.7457             | 0.5571          | -339.1875      | -322.0434    | 0.8574          | -0.1316       |
| 0.1213        | 0.8028 | 350  | 0.1136          | -0.4396        | -1.0157          | 0.7629             | 0.5762          | -348.0973      | -329.0486    | 1.5740          | 0.5152        |
| 0.12          | 0.9174 | 400  | 0.1132          | -0.3948        | -0.9598          | 0.7543             | 0.5650          | -342.4998      | -324.5692    | 1.4353          | 0.4067        |
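
For reference, the Rewards/* columns follow the standard DPO implicit-reward definition (assuming TRL's default logging; per-token details may differ for the RPO variant, and the run's $\beta$ is not reported in this card):

$$
r(x, y) = \beta \left( \log \pi_\theta(y \mid x) - \log \pi_{\mathrm{ref}}(y \mid x) \right)
$$

Rewards/margins is Rewards/chosen minus Rewards/rejected, and Rewards/accuracies is the fraction of evaluation pairs where the chosen reward exceeds the rejected reward. The final row checks out: $-0.3948 - (-0.9598) = 0.5650$.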

### Framework versions

- Transformers 4.44.0.dev0
- Pytorch 2.1.2
- Datasets 2.20.0
- Tokenizers 0.19.1
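
As a quick sanity check, the snippet below prints the installed versions for comparison against the list above. Note that Transformers 4.44.0.dev0 was a development build, so matching it exactly may require installing Transformers from source.

```python
# Print installed versions to compare against the ones listed above.
import datasets
import tokenizers
import torch
import transformers

for name, module in [
    ("Transformers", transformers),
    ("Pytorch", torch),
    ("Datasets", datasets),
    ("Tokenizers", tokenizers),
]:
    print(f"{name}: {module.__version__}")
```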