---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.3
datasets:
- HachiML/oasst1_for_self-rewarding_IFT
- HachiML/oasst1_for_self-rewarding_EFT_MSv0.3
---
# Model Card

- This model merges the LoRA adapter from HachiML/Mistral-7B-v0.3-sft-lora_sr_5ep into the base model (a merge sketch follows below).
- It is a fine-tuned version of mistralai/Mistral-7B-v0.3 on the following datasets:
  - HachiML/oasst1_for_self-rewarding_IFT
  - HachiML/oasst1_for_self-rewarding_EFT_MSv0.3
- It achieves the following results on the evaluation set:
  - Loss: 0.4237
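For reference, this is a minimal sketch of how such an adapter merge can be reproduced with PEFT; the output directory name is an assumption, and this is not necessarily the exact script used:

```python
# Minimal sketch: fold the LoRA adapter into the base weights with PEFT.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.3", torch_dtype="auto")
model = PeftModel.from_pretrained(base, "HachiML/Mistral-7B-v0.3-sft-lora_sr_5ep")
merged = model.merge_and_unload()  # merges the LoRA deltas into the base weights

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.3")
merged.save_pretrained("Mistral-7B-v0.3-sft-lora_sr_5ep-merged")     # output dir is an assumption
tokenizer.save_pretrained("Mistral-7B-v0.3-sft-lora_sr_5ep-merged")
```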
## Model Details

### Model Description
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by: HachiML
- Model type: Mistral-7B
- Language(s) (NLP): Japanese
- License: Apache-2.0
- Finetuned from model: mistralai/Mistral-7B-v0.3
- Finetuning type: SFT (LoRA)
- Finetuning datasets:
  - HachiML/oasst1_for_self-rewarding_IFT
  - HachiML/oasst1_for_self-rewarding_EFT_MSv0.3
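A minimal inference sketch with 🤗 transformers; the repo id below is a placeholder (the card does not state this model's Hub id), and the prompt is just an example:

```python
# Minimal inference sketch; replace repo_id with this model's actual Hub id (assumed here).
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "HachiML/Mistral-7B-v0.3-sft-lora_sr_5ep"  # placeholder Hub id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto", torch_dtype="auto")

prompt = "日本の首都はどこですか？"  # example prompt; the model targets Japanese
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```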
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 5
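For reference, a sketch of how these hyperparameters could be wired into a TRL SFTTrainer run; the LoRA configuration, dataset text field, and output directory are assumptions not stated in the card, and only the listed hyperparameter values come from it:

```python
# Hedged sketch: the listed hyperparameters mapped onto a TRL SFTTrainer setup.
from datasets import load_dataset
from peft import LoraConfig
from transformers import TrainingArguments
from trl import SFTTrainer

train_dataset = load_dataset("HachiML/oasst1_for_self-rewarding_IFT", split="train")

args = TrainingArguments(
    output_dir="sft-lora_sr_5ep",      # assumption: output directory name
    learning_rate=5.5e-6,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",               # Adam-style, betas=(0.9, 0.999), eps=1e-8 (defaults)
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    num_train_epochs=5,
)

trainer = SFTTrainer(
    model="mistralai/Mistral-7B-v0.3",
    args=args,
    train_dataset=train_dataset,
    dataset_text_field="text",                       # assumption: column with formatted prompts
    peft_config=LoraConfig(task_type="CAUSAL_LM"),   # assumption: actual LoRA ranks unknown
)
trainer.train()
```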
### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1951        | 1.0   | 262  | 0.4563          |
| 0.9304        | 2.0   | 524  | 0.4279          |
| 0.9129        | 3.0   | 786  | 0.4242          |
| 0.9088        | 4.0   | 1048 | 0.4237          |
| 0.9089        | 5.0   | 1310 | 0.4237          |
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1