This model is a fine-tuned version of meta-llama/Meta-Llama-3.1-8B on the belle_math dataset. It achieves the following results on the evaluation set:
- Loss: 0.9967
More information needed
The following hyperparameters were used during training:
| Training Loss | Epoch  | Step | Validation Loss |
|---------------|--------|------|-----------------|
| 0.6272        | 4.4444 | 500  | 0.9967          |
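Since the evaluation loss above is a mean cross-entropy, a common way to interpret it is via perplexity, its exponential. This is a minimal sketch of that conversion (the 0.9967 value is the validation loss from the table; the perplexity figure is derived here, not reported on the card):

```python
import math

# Validation cross-entropy loss from the training-results table above
val_loss = 0.9967

# Perplexity is the exponential of the mean cross-entropy loss,
# i.e. the effective branching factor of the model's predictions.
perplexity = math.exp(val_loss)

print(f"validation perplexity ~ {perplexity:.3f}")
```

A validation loss of about 1.0 therefore corresponds to a perplexity of roughly 2.7 on the evaluation split.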