End of training
- README.md +11 -1
- adapter_model.bin +3 -0
README.md
CHANGED
@@ -2,6 +2,7 @@
 license: apache-2.0
 library_name: peft
 tags:
+- axolotl
 - generated_from_trainer
 base_model: mistralai/Mistral-7B-Instruct-v0.2
 model-index:
@@ -110,7 +111,9 @@ fsdp_config:
 
 # mistral-lora
 
-This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on
+This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the None dataset.
+It achieves the following results on the evaluation set:
+- Loss: 0.2163
 
 ## Model description
 
@@ -143,6 +146,13 @@ The following hyperparameters were used during training:
 - lr_scheduler_warmup_steps: 10
 - num_epochs: 1
 
+### Training results
+
+| Training Loss | Epoch | Step | Validation Loss |
+|:-------------:|:-----:|:----:|:---------------:|
+| 0.149         | 1.0   | 304  | 0.2163          |
+
+
 ### Framework versions
 
 - PEFT 0.8.2
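The hyperparameters in the README diff include `lr_scheduler_warmup_steps: 10`. As a minimal sketch (assuming a linear warmup schedule, which this excerpt does not confirm), the per-step learning-rate multiplier during warmup could be computed like this:

```python
def warmup_factor(step: int, warmup_steps: int = 10) -> float:
    """Linear warmup: ramp the LR multiplier from 0 to 1 over warmup_steps.

    Illustrative only; the actual scheduler type is not shown in the diff.
    """
    if step >= warmup_steps:
        return 1.0
    return step / warmup_steps

# e.g. halfway through a 10-step warmup the base LR is scaled by 0.5
print(warmup_factor(5))
```

After the warmup window the multiplier stays at 1.0 (or hands off to whatever decay schedule the run actually configured).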
adapter_model.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:66b571c4631beb855e0dea3a3307b656a9f3026c8ba4caa06e49954c69549f3a
+size 1132691034
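The three lines committed for `adapter_model.bin` are a Git LFS pointer, not the binary itself. As a small sketch, the `key value` lines of such a pointer can be split into fields (the pointer text below is copied from the diff; `parse_lfs_pointer` is an illustrative helper, not part of any library):

```python
# Git LFS pointer text as committed for adapter_model.bin (copied from the diff).
POINTER = """\
version https://git-lfs.github.com/spec/v1
oid sha256:66b571c4631beb855e0dea3a3307b656a9f3026c8ba4caa06e49954c69549f3a
size 1132691034
"""

def parse_lfs_pointer(text: str) -> dict:
    """Split each 'key value' line of an LFS pointer into a dict entry."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

fields = parse_lfs_pointer(POINTER)
print(fields["size"])  # adapter weight file size in bytes -> 1132691034
```

The `oid` field is the SHA-256 of the real file, which LFS-aware clients use to fetch the ~1.1 GB adapter weights from LFS storage.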