Configurations choice
Collection · 52 items

Choice of configuration based on the results of different fine-tuning runs. All configurations give more or less the same results, but configurations 1 and 2 are much faster (learning rate).
This model is a fine-tuned version of meta-llama/Meta-Llama-3.1-8B-Instruct on the GaetanMichelet/chat-60_ft_task-2 and the GaetanMichelet/chat-120_ft_task-2 datasets. Its results on the evaluation set are reported in the per-epoch training table below.
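For reference, here is a minimal inference sketch assuming the fine-tuned weights are published as a standard transformers checkpoint; the repository id below is a hypothetical placeholder, not the actual model name.

```python
# Minimal inference sketch. The repo id is a HYPOTHETICAL placeholder;
# substitute the actual fine-tuned checkpoint name.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "GaetanMichelet/llama-3.1-8b-task-2"  # hypothetical placeholder

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision so the 8B model fits on one GPU
    device_map="auto",
)

messages = [{"role": "user", "content": "Summarize the plot of Hamlet in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Since the base model is the Instruct variant, the tokenizer applies the Llama 3.1 chat template, so prompts should be passed as role/content messages rather than raw strings.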
Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
Training results

The following results were recorded at each epoch of training:
| Training Loss | Epoch | Step | Validation Loss |
|---|---|---|---|
| 1.0459 | 1.0 | 11 | 1.1227 |
| 1.0223 | 2.0 | 22 | 1.1149 |
| 1.0795 | 3.0 | 33 | 1.1018 |
| 0.9982 | 4.0 | 44 | 1.0787 |
| 0.9702 | 5.0 | 55 | 1.0444 |
| 0.9509 | 6.0 | 66 | 0.9990 |
| 0.9573 | 7.0 | 77 | 0.9500 |
| 0.8624 | 8.0 | 88 | 0.9071 |
| 0.8804 | 9.0 | 99 | 0.8747 |
| 0.8515 | 10.0 | 110 | 0.8457 |
| 0.7864 | 11.0 | 121 | 0.8208 |
| 0.8648 | 12.0 | 132 | 0.8018 |
| 0.736 | 13.0 | 143 | 0.7867 |
| 0.7882 | 14.0 | 154 | 0.7728 |
| 0.7452 | 15.0 | 165 | 0.7604 |
| 0.6818 | 16.0 | 176 | 0.7485 |
| 0.7119 | 17.0 | 187 | 0.7387 |
| 0.7107 | 18.0 | 198 | 0.7307 |
| 0.6405 | 19.0 | 209 | 0.7238 |
| 0.6075 | 20.0 | 220 | 0.7188 |
| 0.6323 | 21.0 | 231 | 0.7152 |
| 0.557 | 22.0 | 242 | 0.7139 |
| 0.5692 | 23.0 | 253 | 0.7158 |
| 0.558 | 24.0 | 264 | 0.7198 |
| 0.5153 | 25.0 | 275 | 0.7296 |
| 0.4964 | 26.0 | 286 | 0.7367 |
| 0.4713 | 27.0 | 297 | 0.7403 |
| 0.4144 | 28.0 | 308 | 0.7620 |
| 0.4184 | 29.0 | 319 | 0.7954 |
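The validation loss bottoms out at 0.7139 at epoch 22 and rises afterwards while the training loss keeps falling, the usual signature of overfitting. As a sketch, the best checkpoint can be located directly from the logged values (numbers transcribed from the table above):

```python
# Locate the epoch with the lowest validation loss from the logged results.
# Values transcribed from the training results table: {epoch: validation loss}.
val_loss = {
    1: 1.1227, 2: 1.1149, 3: 1.1018, 4: 1.0787, 5: 1.0444,
    6: 0.9990, 7: 0.9500, 8: 0.9071, 9: 0.8747, 10: 0.8457,
    11: 0.8208, 12: 0.8018, 13: 0.7867, 14: 0.7728, 15: 0.7604,
    16: 0.7485, 17: 0.7387, 18: 0.7307, 19: 0.7238, 20: 0.7188,
    21: 0.7152, 22: 0.7139, 23: 0.7158, 24: 0.7198, 25: 0.7296,
    26: 0.7367, 27: 0.7403, 28: 0.7620, 29: 0.7954,
}

best_epoch = min(val_loss, key=val_loss.get)
print(f"best epoch: {best_epoch}, validation loss: {val_loss[best_epoch]:.4f}")
# -> best epoch: 22, validation loss: 0.7139
```

This is why, when comparing configurations like those in this collection, the checkpoint from an intermediate epoch rather than the final one is the fair point of comparison.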