This model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on the FreedomIntelligence/Evol-Instruct-Chinese-GPT4 dataset. It achieves the following results on the evaluation set:

- Loss: 0.9519

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
The following hyperparameters were used during training:
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.0979        | 0.0   | 1    | 1.0964          |
| 0.9735        | 0.25  | 82   | 0.9782          |
| 0.9577        | 0.5   | 164  | 0.9619          |
| 0.9281        | 0.75  | 246  | 0.9536          |
| 0.8988        | 1.0   | 328  | 0.9519          |
The following `bitsandbytes` quantization config was used during training:
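The actual quantization values did not survive in this card. As a sketch only, a typical 4-bit QLoRA-style `bitsandbytes` config for a Mistral-7B finetune might look like the following; every value here is an assumption, not this model's recorded config:

```python
import torch
from transformers import BitsAndBytesConfig

# Hypothetical 4-bit NF4 setup; the card's real bitsandbytes settings
# were lost, so each parameter below is an assumption.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantize weights to 4-bit on load
    bnb_4bit_quant_type="nf4",              # NormalFloat4 quantization
    bnb_4bit_use_double_quant=True,         # also quantize the quantization constants
    bnb_4bit_compute_dtype=torch.bfloat16,  # dtype used for matmuls during forward
)
```

Such a config would be passed as `quantization_config=bnb_config` to `AutoModelForCausalLM.from_pretrained`.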
Base model: mistralai/Mistral-7B-v0.1
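The card does not include a usage snippet. Since the model was tuned on an Evol-Instruct-style instruction dataset, a minimal prompt-formatting helper might look like this; the Alpaca-like template below is an assumption, so check the repository's actual chat template before relying on it:

```python
def build_prompt(instruction: str) -> str:
    """Format a single-turn Evol-Instruct-style prompt.

    The exact template this finetune expects is not documented in the
    card; this Alpaca-like layout is only an assumption.
    """
    return (
        "### Instruction:\n"
        f"{instruction}\n\n"
        "### Response:\n"
    )

prompt = build_prompt("用中文解释什么是量化。")
# The resulting string can then be tokenized and passed to model.generate().
```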