---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
- pytorch
- Mistral
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: fennec-7b-alpha
  results: []
datasets:
- HuggingFaceH4/ultrachat_200k
- openbmb/UltraFeedback
- gsm8k
language:
- en
pipeline_tag: text-generation
---
# fennec-7b-alpha

This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the ultrachat_200k, UltraFeedback, and gsm8k datasets.
## Model description

The model was trained with supervised fine-tuning (SFT) on nearly the same data mix as [Zephyr-7B-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta).
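Since this is a PEFT adapter on top of Mistral-7B, it can be loaded for inference with `AutoPeftModelForCausalLM`. Below is a minimal sketch; the repo id `your-username/fennec-7b-alpha` is a hypothetical placeholder for wherever the adapter weights are hosted, and it assumes the tokenizer was saved alongside the adapter.

```python
# Minimal inference sketch for a PEFT (LoRA) adapter on Mistral-7B.
# The adapter repo id below is a hypothetical placeholder.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "your-username/fennec-7b-alpha"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(adapter_id)
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id,
    torch_dtype=torch.bfloat16,  # assumes a GPU with bf16 support
    device_map="auto",
)

prompt = "What is supervised fine-tuning?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```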
## Training and evaluation data

The evaluation metrics logged during training can be found here.

The model's evaluation on the Hugging Face Leaderboard can be found here.
## Training procedure

The full training procedure can be found here.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 7
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 14
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 500
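For reference, here is a hedged sketch of how these hyperparameters map onto `transformers.TrainingArguments`. The original training script is not included in this card, so `output_dir` and the `optim` name are assumptions, not the authors' exact configuration.

```python
# Sketch only: maps the listed hyperparameters onto TrainingArguments.
# output_dir is a hypothetical path; the original script may differ.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="fennec-7b-alpha",
    learning_rate=2e-4,
    per_device_train_batch_size=7,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # effective train batch size: 7 * 2 = 14
    seed=42,
    lr_scheduler_type="constant",
    warmup_ratio=0.03,              # as listed; note a constant scheduler applies no warmup
    max_steps=500,
    optim="adamw_torch",            # Adam with betas=(0.9, 0.999), epsilon=1e-8 (the defaults)
)
```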
### Training results
### Framework versions
- PEFT 0.8.2
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.1