# Mistral-7B-Instruct-v0.2-FaVe-rank32-10epochs
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.4888
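
Because this checkpoint was trained with PEFT, it ships as a LoRA adapter rather than a full set of model weights. Below is a minimal inference sketch, assuming the adapter is published at `Ferdi/Mistral-7B-Instruct-v0.2-FaVe-rank32-10epochs` (the repository this card belongs to) and that the base model can be downloaded; exact loading code may differ from what the author used.

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Adapter repo (this model card); the base model is resolved
# automatically from the adapter's config.
adapter_id = "Ferdi/Mistral-7B-Instruct-v0.2-FaVe-rank32-10epochs"

model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id,
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")

# Mistral-Instruct expects the [INST] ... [/INST] chat format,
# which apply_chat_template produces.
messages = [{"role": "user", "content": "Summarize what LoRA fine-tuning does."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```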
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a reproduction sketch follows the list):
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 10
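
As a rough guide, the sketch below shows how these values map onto a PEFT + `transformers` `Trainer` run. The LoRA target modules, alpha, and dropout are assumptions (only the rank-32 adapter is implied by the model name), the dataset is left as a placeholder because the card does not identify it, and `eval_steps`/`logging_steps` are inferred from the results table below.

```python
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, Trainer, TrainingArguments

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)

# Rank-32 LoRA adapter, as the model name suggests; target modules,
# alpha, and dropout here are assumptions, not taken from the card.
peft_config = LoraConfig(
    r=32,
    lora_alpha=64,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, peft_config)

# Values copied from the hyperparameter list above; the effective batch
# size is 1 sample x 4 gradient-accumulation steps = 4.
args = TrainingArguments(
    output_dir="Mistral-7B-Instruct-v0.2-FaVe-rank32-10epochs",
    learning_rate=1e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,
    num_train_epochs=10,
    lr_scheduler_type="linear",
    warmup_steps=10,
    seed=42,
    evaluation_strategy="steps",
    eval_steps=10,     # inferred from the eval cadence in the table
    logging_steps=20,  # inferred from the training-loss cadence
)

# "train_ds" / "eval_ds" are placeholders; the card does not say which
# dataset was used.
# trainer = Trainer(model=model, args=args,
#                   train_dataset=train_ds, eval_dataset=eval_ds)
# trainer.train()
```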
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.2685 | 10 | 2.1111 |
| 2.3843 | 0.5369 | 20 | 1.1734 |
| 2.3843 | 0.8054 | 30 | 0.7606 |
| 0.8872 | 1.0738 | 40 | 0.6557 |
| 0.8872 | 1.3423 | 50 | 0.5843 |
| 0.5911 | 1.6107 | 60 | 0.5225 |
| 0.5911 | 1.8792 | 70 | 0.4987 |
| 0.4691 | 2.1477 | 80 | 0.4733 |
| 0.4691 | 2.4161 | 90 | 0.4418 |
| 0.3738 | 2.6846 | 100 | 0.4333 |
| 0.3738 | 2.9530 | 110 | 0.4321 |
| 0.3494 | 3.2215 | 120 | 0.4451 |
| 0.3494 | 3.4899 | 130 | 0.4063 |
| 0.3139 | 3.7584 | 140 | 0.3914 |
| 0.3139 | 4.0268 | 150 | 0.4101 |
| 0.2543 | 4.2953 | 160 | 0.4173 |
| 0.2543 | 4.5638 | 170 | 0.4167 |
| 0.2209 | 4.8322 | 180 | 0.4115 |
| 0.2209 | 5.1007 | 190 | 0.3911 |
| 0.2083 | 5.3691 | 200 | 0.4141 |
| 0.2083 | 5.6376 | 210 | 0.4179 |
| 0.1781 | 5.9060 | 220 | 0.4247 |
| 0.1781 | 6.1745 | 230 | 0.4417 |
| 0.172 | 6.4430 | 240 | 0.4323 |
| 0.172 | 6.7114 | 250 | 0.4222 |
| 0.1614 | 6.9799 | 260 | 0.4341 |
| 0.1614 | 7.2483 | 270 | 0.4423 |
| 0.1348 | 7.5168 | 280 | 0.4605 |
| 0.1348 | 7.7852 | 290 | 0.4410 |
| 0.1449 | 8.0537 | 300 | 0.4475 |
| 0.1449 | 8.3221 | 310 | 0.4836 |
| 0.1211 | 8.5906 | 320 | 0.4939 |
| 0.1211 | 8.8591 | 330 | 0.4805 |
| 0.1268 | 9.1275 | 340 | 0.4744 |
| 0.1268 | 9.3960 | 350 | 0.4832 |
| 0.1129 | 9.6644 | 360 | 0.4882 |
| 0.1129 | 9.9329 | 370 | 0.4888 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1