StarChat2 15B
Model, datasets, and demo for StarChat2 15B. For code to train the models, see: https://github.com/huggingface/alignment-handbook
This model is a fine-tuned version of bigcode/starcoder2-15b on the HuggingFaceH4/airoboros-3.2, HuggingFaceH4/Code-Feedback, HuggingFaceH4/orca-math-word-problems-200k, HuggingFaceH4/SystemChat, and HuggingFaceH4/capybara datasets. It achieves the following results on the evaluation set:

- Loss: 0.6614
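The SystemChat and multi-turn chat datasets above use a system/user/assistant conversation format. As a minimal sketch of how a prompt for such a model is assembled, the helper below renders a conversation with ChatML-style `<|im_start|>`/`<|im_end|>` markers. The exact markers are an assumption here; in practice you should call `tokenizer.apply_chat_template`, which applies the template shipped with the model checkpoint.

```python
def render_chat(messages):
    """Render a list of {"role", "content"} messages into a single prompt
    string using ChatML-style markers (assumed format; prefer the model's
    own chat template via tokenizer.apply_chat_template in real use)."""
    parts = []
    for msg in messages:
        parts.append(f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>\n")
    # Open an assistant turn so generation continues from here.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)


messages = [
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Write a function that reverses a string."},
]
print(render_chat(messages))
```

This mirrors what the chat template does under the hood; using the tokenizer's built-in template keeps the prompt consistent with how the fine-tuning data was formatted.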
The training run produced the following results:
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.6422        | 1.0   | 910  | 0.6910          |
| 0.5701        | 2.0   | 1820 | 0.6639          |
| 0.5227        | 3.0   | 2730 | 0.6614          |
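Since the validation loss is a per-token cross-entropy, it can be read as a perplexity via `exp(loss)`. A quick check on the table's values shows the perplexity improving each epoch:

```python
import math

# Validation losses from the training-results table above (epochs 1-3).
val_losses = {1: 0.6910, 2: 0.6639, 3: 0.6614}

# Perplexity is exp(cross-entropy loss); lower is better.
perplexities = {epoch: math.exp(loss) for epoch, loss in val_losses.items()}

for epoch, ppl in perplexities.items():
    print(f"epoch {epoch}: perplexity ~ {ppl:.4f}")
```

The small gap between epochs 2 and 3 suggests the validation loss was close to plateauing by the final epoch.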
Base model: bigcode/starcoder2-15b