---
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.2
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: UTI_L3_1000steps_1e6rate_SFT
  results: []
---

# UTI_L3_1000steps_1e6rate_SFT

This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2252

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000

### Training results

| Training Loss | Epoch   | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 2.2806        | 0.3333  | 25   | 2.1563          |
| 1.8955        | 0.6667  | 50   | 1.8431          |
| 1.7404        | 1.0     | 75   | 1.7251          |
| 1.5351        | 1.3333  | 100  | 1.7245          |
| 1.5176        | 1.6667  | 125  | 1.7039          |
| 1.4594        | 2.0     | 150  | 1.6754          |
| 0.9453        | 2.3333  | 175  | 1.8231          |
| 1.0165        | 2.6667  | 200  | 1.8012          |
| 0.9754        | 3.0     | 225  | 1.7995          |
| 0.4991        | 3.3333  | 250  | 2.1098          |
| 0.4971        | 3.6667  | 275  | 2.1236          |
| 0.5339        | 4.0     | 300  | 2.1092          |
| 0.2532        | 4.3333  | 325  | 2.2857          |
| 0.2919        | 4.6667  | 350  | 2.3444          |
| 0.3192        | 5.0     | 375  | 2.3734          |
| 0.1858        | 5.3333  | 400  | 2.5514          |
| 0.1947        | 5.6667  | 425  | 2.5828          |
| 0.1984        | 6.0     | 450  | 2.5324          |
| 0.1429        | 6.3333  | 475  | 2.7141          |
| 0.1573        | 6.6667  | 500  | 2.6237          |
| 0.1502        | 7.0     | 525  | 2.6715          |
| 0.1168        | 7.3333  | 550  | 2.8434          |
| 0.1306        | 7.6667  | 575  | 2.7996          |
| 0.1182        | 8.0     | 600  | 2.8128          |
| 0.1009        | 8.3333  | 625  | 2.9270          |
| 0.1053        | 8.6667  | 650  | 2.9832          |
| 0.0983        | 9.0     | 675  | 2.9935          |
| 0.0887        | 9.3333  | 700  | 3.0662          |
| 0.0894        | 9.6667  | 725  | 3.0845          |
| 0.0914        | 10.0    | 750  | 3.0977          |
| 0.0829        | 10.3333 | 775  | 3.1662          |
| 0.0775        | 10.6667 | 800  | 3.1832          |
| 0.0841        | 11.0    | 825  | 3.1821          |
| 0.0753        | 11.3333 | 850  | 3.2082          |
| 0.078         | 11.6667 | 875  | 3.2170          |
| 0.0745        | 12.0    | 900  | 3.2223          |
| 0.0788        | 12.3333 | 925  | 3.2260          |
| 0.0743        | 12.6667 | 950  | 3.2258          |
| 0.0718        | 13.0    | 975  | 3.2253          |
| 0.0744        | 13.3333 | 1000 | 3.2252          |

### Framework versions

- Transformers 4.41.1
- Pytorch 2.0.0+cu117
- Datasets 2.19.1
- Tokenizers 0.19.1
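The hyperparameters listed above can be expressed in the parameter names used by `transformers.TrainingArguments` (which TRL's `SFTTrainer` accepts). The sketch below is a plain-dictionary reconstruction, not the actual training script, since the dataset and script are not included in this card; the arithmetic check at the end shows how the reported `total_train_batch_size` of 4 follows from the other settings.

```python
# Hypothetical reconstruction of the configuration reported in this card.
# The dataset and the original training script are not published here,
# so only the listed hyperparameters are mirrored. Keys follow the
# transformers.TrainingArguments naming convention.
training_config = {
    "learning_rate": 1e-6,
    "per_device_train_batch_size": 2,   # train_batch_size in the card
    "per_device_eval_batch_size": 1,    # eval_batch_size in the card
    "seed": 42,
    "gradient_accumulation_steps": 2,
    # Card reports Adam with betas=(0.9, 0.999) and epsilon=1e-08.
    "lr_scheduler_type": "cosine",
    "warmup_steps": 100,
    "max_steps": 1000,                  # training_steps in the card
}

# The reported total_train_batch_size (4) is the per-device train batch
# size multiplied by the gradient accumulation steps (single device).
effective_batch = (
    training_config["per_device_train_batch_size"]
    * training_config["gradient_accumulation_steps"]
)
print(effective_batch)  # 4
```

As a side note, the results table evaluates every 25 steps and shows 75 optimizer steps per epoch; with an effective batch of 4, that implies a training set of roughly 300 examples.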