Part of the Handbook v0.1 models and datasets collection: models and datasets for v0.1 of the alignment handbook.
This model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on the HuggingFaceH4/ultrachat_200k dataset. Its results on the evaluation set are summarized in the training results table below.
Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
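Since the card does not include a usage example, here is a minimal inference sketch. The repository id `your-org/mistral-7b-sft-ultrachat` is a placeholder (the actual checkpoint name is not stated above), and the snippet assumes the fine-tuned tokenizer ships a chat template.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/mistral-7b-sft-ultrachat"  # placeholder: the card does not name the repository

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 weights fit the available hardware
    device_map="auto",
)

# Build a chat-style prompt; assumes the fine-tuned tokenizer defines a chat template.
messages = [{"role": "user", "content": "Explain supervised fine-tuning in one paragraph."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

If the tokenizer does not define a chat template, the prompt has to be formatted manually (or a template assigned to `tokenizer.chat_template`) before calling `apply_chat_template`.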
Training hyperparameters

The following hyperparameters were used during training:
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9427        | 1.0   | 2179 | 0.9502          |
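For context on how a run like this is typically set up, the sketch below shows one way to supervised-fine-tune mistralai/Mistral-7B-v0.1 on HuggingFaceH4/ultrachat_200k with the plain `transformers` Trainer. It assumes the dataset's `train_sft`/`test_sft` splits and `messages` column; the chat template, sequence length, and every hyperparameter value are illustrative placeholders, not the settings used to produce this checkpoint.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_model = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token  # the base tokenizer defines no pad token

# Assumption: a simple chat template, since the base tokenizer does not ship one.
tokenizer.chat_template = (
    "{% for message in messages %}"
    "<|{{ message['role'] }}|>\n{{ message['content'] }}</s>\n"
    "{% endfor %}"
)

dataset = load_dataset("HuggingFaceH4/ultrachat_200k")

def to_features(example):
    # Render the multi-turn conversation into one string, then tokenize and
    # truncate to an illustrative context length.
    text = tokenizer.apply_chat_template(example["messages"], tokenize=False)
    return tokenizer(text, truncation=True, max_length=2048)

train_ds = dataset["train_sft"].map(to_features, remove_columns=dataset["train_sft"].column_names)
eval_ds = dataset["test_sft"].map(to_features, remove_columns=dataset["test_sft"].column_names)

model = AutoModelForCausalLM.from_pretrained(base_model)

args = TrainingArguments(
    output_dir="mistral-7b-sft-ultrachat",   # placeholder output directory
    num_train_epochs=1,                      # the table above reports a single epoch
    per_device_train_batch_size=4,           # illustrative
    gradient_accumulation_steps=4,           # illustrative
    learning_rate=2e-5,                      # illustrative
    lr_scheduler_type="cosine",              # illustrative
    warmup_ratio=0.1,                        # illustrative
    bf16=True,
    logging_steps=10,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
print(trainer.evaluate())  # reports eval_loss, comparable to the validation loss above
```

TRL's `SFTTrainer` would make this shorter, but its constructor arguments have shifted between releases, so the sketch sticks to the stable `Trainer` API.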
Base model: mistralai/Mistral-7B-v0.1