This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct on the Syed-Hasan-8503/orpo-40k-train-test dataset. It achieves the following results on the evaluation set (final evaluation, step 200):

- Loss: 0.9751
- Rewards/chosen: -3.4539
- Rewards/rejected: -5.6604
- Rewards/accuracies: 0.7613
- Rewards/margins: 2.2065
- Logps/rejected: -2.2642
- Logps/chosen: -1.3816
- Logits/rejected: -1.3683
- Logits/chosen: -1.2117

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
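The dataset name and the reward columns logged below indicate ORPO training. For context, ORPO (Hong et al., 2024) adds an odds-ratio preference term to the usual NLL loss; a minimal sketch of that term, treating the logged Logps as per-token average log-probabilities (the function name is ours, not from any library):

```python
import math

def orpo_odds_ratio_loss(avg_logp_chosen: float, avg_logp_rejected: float) -> float:
    """Odds-ratio term of the ORPO loss.

    odds(y|x) = p / (1 - p) with p = exp(average token log-prob);
    the term is -log sigmoid(log odds_chosen - log odds_rejected).
    """
    def log_odds(avg_logp: float) -> float:
        p = math.exp(avg_logp)
        return math.log(p) - math.log1p(-p)

    z = log_odds(avg_logp_chosen) - log_odds(avg_logp_rejected)
    return math.log1p(math.exp(-z))  # == -log sigmoid(z)

# Logps/chosen and Logps/rejected from the final evaluation row below:
loss_or = orpo_odds_ratio_loss(-1.3816, -2.2642)
```

When the chosen completion is more likely than the rejected one, the log-odds gap `z` is positive and the term drops below `log(2)`; at `z = 0` it equals exactly `log(2)`.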
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
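The metric names in the results table (Rewards/chosen, Logps/rejected, etc.) match what TRL's `ORPOTrainer` logs, so the run was presumably set up along these lines. This is a hypothetical reconstruction: the card does not list the actual hyperparameters, so every value below is an assumption, and the output directory name is a placeholder.

```python
# Hypothetical reconstruction -- the actual hyperparameters are not
# listed in this card, so all values here are assumptions.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

dataset = load_dataset("Syed-Hasan-8503/orpo-40k-train-test", split="train")

args = ORPOConfig(
    output_dir="orpo-llama3-8b",    # placeholder name
    beta=0.1,                       # ORPO lambda weight; assumed
    learning_rate=8e-6,             # assumed
    per_device_train_batch_size=2,  # assumed
    max_length=1024,                # assumed
)

trainer = ORPOTrainer(model=model, args=args, train_dataset=dataset, tokenizer=tokenizer)
trainer.train()
```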
### Training results

| Training Loss | Epoch  | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 1.645         | 0.0140 | 50   | 1.2563          | -2.7945        | -3.7325          | 0.7027             | 0.9380          | -1.4930        | -1.1178      | -1.3468         | -1.1841       |
| 0.8722        | 0.0280 | 100  | 1.0619          | -3.0769        | -4.7343          | 0.7320             | 1.6574          | -1.8937        | -1.2308      | -1.3817         | -1.2196       |
| 1.0404        | 0.0419 | 150  | 0.9883          | -3.4545        | -5.6160          | 0.7545             | 2.1615          | -2.2464        | -1.3818      | -1.3639         | -1.2082       |
| 1.4672        | 0.0559 | 200  | 0.9751          | -3.4539        | -5.6604          | 0.7613             | 2.2065          | -2.2642        | -1.3816      | -1.3683         | -1.2117       |
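As a quick arithmetic check on the table above, the Rewards/margins column is, in every row, Rewards/chosen minus Rewards/rejected (up to rounding in the fourth decimal place):

```python
# Rows copied from the training-results table above:
# (rewards_chosen, rewards_rejected, reported_margin)
rows = [
    (-2.7945, -3.7325, 0.9380),
    (-3.0769, -4.7343, 1.6574),
    (-3.4545, -5.6160, 2.1615),
    (-3.4539, -5.6604, 2.2065),
]

for chosen, rejected, margin in rows:
    # The margin column is simply chosen minus rejected.
    assert abs((chosen - rejected) - margin) < 5e-4, (chosen, rejected, margin)
print("all margins consistent")
```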