# lc-7b-sft-lora-full
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the generator dataset. It achieves the following results on the evaluation set:
- Loss: 1.5104
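
The framework versions below list PEFT, so this repository most likely hosts a LoRA adapter rather than merged full weights. Here is a minimal loading sketch under that assumption; the repo id comes from this card, while the dtype and `device_map` settings are illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the frozen base model, then attach the LoRA adapter from this repo.
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    torch_dtype=torch.bfloat16,  # illustrative; choose what your hardware supports
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "nerottt/lc-7b-sft-lora-full")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

inputs = tokenizer("Hello,", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```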
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 20
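
These values map onto `transformers.TrainingArguments` roughly as follows. This is a reconstruction rather than the original training script: the output directory is a placeholder, and the Adam settings listed above are the library defaults, so they need no explicit override.

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="lc-7b-sft-lora-full",  # placeholder, not the original path
    learning_rate=2e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    lr_scheduler_type="cosine",
    num_train_epochs=20,
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the transformers
    # defaults (adam_beta1, adam_beta2, adam_epsilon), left unchanged.
)
```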
### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5657        | 1.0   | 31   | 1.6190          |
| 1.5249        | 2.0   | 62   | 1.5615          |
| 1.4901        | 3.0   | 93   | 1.5294          |
| 1.4397        | 4.0   | 124  | 1.5111          |
| 1.3942        | 5.0   | 155  | 1.5020          |
| 1.3161        | 6.0   | 186  | 1.4975          |
| 1.3081        | 7.0   | 217  | 1.4987          |
| 1.352         | 8.0   | 248  | 1.4962          |
| 1.3162        | 9.0   | 279  | 1.4946          |
| 1.3019        | 10.0  | 310  | 1.5006          |
| 1.296         | 11.0  | 341  | 1.5009          |
| 1.2174        | 12.0  | 372  | 1.5040          |
| 1.3063        | 13.0  | 403  | 1.5075          |
| 1.294         | 14.0  | 434  | 1.5082          |
| 1.2651        | 15.0  | 465  | 1.5086          |
| 1.2766        | 16.0  | 496  | 1.5095          |
| 1.24          | 17.0  | 527  | 1.5098          |
| 1.2455        | 18.0  | 558  | 1.5100          |
| 1.33          | 19.0  | 589  | 1.5099          |
| 1.2447        | 20.0  | 620  | 1.5104          |
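
Validation loss bottoms out at epoch 9 (1.4946) and creeps back up over the remaining epochs while training loss keeps falling, suggesting mild overfitting toward the end of the run.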
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.1
- PyTorch 2.1.0+cu118
- Datasets 2.19.1
- Tokenizers 0.19.1