# UTI_M2_1000steps_1e5rate_SFT
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset. It achieves the following result on the evaluation set (a brief loading sketch follows):
- Loss: 1.9597
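The snippet below is a minimal, illustrative sketch of loading this checkpoint for inference with Transformers; the bare repository id, prompt, and generation settings are placeholder assumptions, not part of the original card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id (assumption): substitute the full
# "<namespace>/UTI_M2_1000steps_1e5rate_SFT" path of this repository.
model_id = "UTI_M2_1000steps_1e5rate_SFT"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Mistral-7B-Instruct checkpoints ship a chat template; apply it to build the prompt.
messages = [{"role": "user", "content": "Hello, how are you?"}]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

`device_map="auto"` spreads the weights across available devices via accelerate; drop it to load on a single device.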
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a matching `TrainingArguments` sketch follows the list):
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
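For reference, here is a hedged sketch of a `TrainingArguments` configuration mirroring the values above. The output directory, optimizer name, and evaluation/logging cadence are assumptions (the 25-step interval is inferred from the results table), not taken from the original training script.

```python
from transformers import TrainingArguments

# Sketch only: values mirror the hyperparameters listed above; fields marked
# "assumed" are not from the original run.
training_args = TrainingArguments(
    output_dir="UTI_M2_1000steps_1e5rate_SFT",  # assumed output directory
    learning_rate=1e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=2,   # total train batch size: 2 * 2 = 4
    max_steps=1000,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    seed=42,
    optim="adamw_torch",             # Adam-style optimizer, default betas=(0.9, 0.999), eps=1e-8
    eval_strategy="steps",           # assumed: evaluate every 25 steps, as in the results table
    eval_steps=25,
    logging_steps=25,
)
```

Passing these arguments to a `Trainer` (or `SFTTrainer`) together with the unspecified dataset would reproduce the schedule documented above.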
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.0555 | 0.3333 | 25 | 0.9919 |
| 1.0461 | 0.6667 | 50 | 1.0778 |
| 1.1438 | 1.0 | 75 | 1.4514 |
| 0.925 | 1.3333 | 100 | 1.1656 |
| 0.9875 | 1.6667 | 125 | 1.1640 |
| 0.9859 | 2.0 | 150 | 1.6609 |
| 0.7898 | 2.3333 | 175 | 1.2420 |
| 0.7246 | 2.6667 | 200 | 1.2557 |
| 0.7078 | 3.0 | 225 | 1.1831 |
| 0.4316 | 3.3333 | 250 | 1.3381 |
| 0.4621 | 3.6667 | 275 | 1.3760 |
| 0.5094 | 4.0 | 300 | 1.3134 |
| 0.2873 | 4.3333 | 325 | 1.3968 |
| 0.267 | 4.6667 | 350 | 1.5584 |
| 0.292 | 5.0 | 375 | 1.4604 |
| 0.1967 | 5.3333 | 400 | 1.5440 |
| 0.2125 | 5.6667 | 425 | 1.5934 |
| 0.2141 | 6.0 | 450 | 1.5512 |
| 0.1391 | 6.3333 | 475 | 1.6320 |
| 0.1735 | 6.6667 | 500 | 1.6144 |
| 0.1688 | 7.0 | 525 | 1.6714 |
| 0.1265 | 7.3333 | 550 | 1.6959 |
| 0.1334 | 7.6667 | 575 | 1.6998 |
| 0.1245 | 8.0 | 600 | 1.7298 |
| 0.1066 | 8.3333 | 625 | 1.7505 |
| 0.0982 | 8.6667 | 650 | 1.7773 |
| 0.1014 | 9.0 | 675 | 1.8197 |
| 0.0829 | 9.3333 | 700 | 1.8606 |
| 0.0774 | 9.6667 | 725 | 1.8651 |
| 0.0846 | 10.0 | 750 | 1.8653 |
| 0.0739 | 10.3333 | 775 | 1.9064 |
| 0.0786 | 10.6667 | 800 | 1.9323 |
| 0.0691 | 11.0 | 825 | 1.9367 |
| 0.0648 | 11.3333 | 850 | 1.9448 |
| 0.0649 | 11.6667 | 875 | 1.9546 |
| 0.0672 | 12.0 | 900 | 1.9559 |
| 0.06 | 12.3333 | 925 | 1.9592 |
| 0.0606 | 12.6667 | 950 | 1.9597 |
| 0.0606 | 13.0 | 975 | 1.9597 |
| 0.0601 | 13.3333 | 1000 | 1.9597 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.0.0+cu117
- Datasets 2.19.2
- Tokenizers 0.19.1