# gemma-7b-ultrachat-sft
gemma-7b-ultrachat-sft is a version of google/gemma-7b fine-tuned with supervised fine-tuning (SFT) on the stingning/ultrachat dataset.
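A minimal way to load the checkpoint for inference with the transformers library. The repo id below is a placeholder, not the confirmed Hub path; substitute wherever this model is actually hosted:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id -- replace with the actual Hub path of this checkpoint.
model_id = "gemma-7b-ultrachat-sft"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # Gemma weights are published in bfloat16
    device_map="auto",
)

inputs = tokenizer("What is supervised fine-tuning?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```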
## Fine-tuning configuration

### LoRA
- LoRA r: 8
- LoRA alpha: 16
- LoRA dropout: 0.1
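For reference, these hyperparameters map directly onto peft's `LoraConfig`. This is a sketch: the card does not say which modules were adapted, so `target_modules` below is an assumption:

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.1,
    bias="none",
    task_type="CAUSAL_LM",
    # Assumed: the card does not list the adapted modules;
    # the attention projections are a common choice for Gemma.
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
```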
### Training arguments
- Epochs: 1
- Batch size: 4
- Gradient accumulation steps: 6
- Optimizer: paged_adamw_32bit
- Max steps: 100
- Learning rate: 0.0002
- Weight decay: 0.001
- Learning rate scheduler type: constant
- Max seq length: 2048
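Taken together, the arguments above translate into roughly the following trl `SFTTrainer` setup. This is a sketch under assumptions, not the exact training script: `output_dir`, the dataset split, and the chat-formatting step are not stated on the card, and the `SFTTrainer` signature (e.g. where `max_seq_length` goes) varies across trl versions:

```python
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

# Assumed split; ultrachat records may also need a formatting step
# to turn conversations into training text, which is omitted here.
dataset = load_dataset("stingning/ultrachat", split="train")

training_args = TrainingArguments(
    output_dir="gemma-7b-ultrachat-sft",  # assumed output path
    num_train_epochs=1,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=6,
    optim="paged_adamw_32bit",
    max_steps=100,                 # overrides num_train_epochs when set
    learning_rate=2e-4,
    weight_decay=0.001,
    lr_scheduler_type="constant",
)

trainer = SFTTrainer(
    model="google/gemma-7b",
    args=training_args,
    train_dataset=dataset,
    peft_config=lora_config,       # the LoraConfig sketched above
    max_seq_length=2048,
)
trainer.train()
```

Note that with `max_steps=100` set, training stops after 100 optimizer steps regardless of the one-epoch setting.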