We finetuned facebook/opt-125m on the tatsu-lab/alpaca dataset for 10 epochs using the MonsterAPI no-code LLM finetuner.
The dataset is an unfiltered variant of tatsu-lab/alpaca, with 36 instances of blatant alignment removed.
The finetuning run completed in 40 minutes and cost only $4 in total!
Hyperparameters & run details (see the sketch below the list):
- Model: facebook/opt-125m
- Dataset: tatsu-lab/alpaca
- Learning rate: 0.0003
- Number of epochs: 10
- Data split: Training: 90% / Validation: 10%
- Gradient accumulation steps: 1
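MonsterAPI is a no-code service, so the exact training script isn't exposed. For readers who want a local equivalent, here is a minimal sketch using the Hugging Face `Trainer` with the hyperparameters above; the dataset's `text` column and the per-device batch size (not reported on this card) are assumptions.

```python
# Minimal sketch of a roughly equivalent run with the Hugging Face Trainer.
# MonsterAPI's internal pipeline is not public, so treat this as illustrative.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "facebook/opt-125m"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# 90% train / 10% validation split of tatsu-lab/alpaca, as in the run details.
dataset = load_dataset("tatsu-lab/alpaca", split="train").train_test_split(test_size=0.1)

def tokenize(batch):
    # Assumption: the dataset's "text" column holds the full prompt+response string.
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset["train"].column_names)

args = TrainingArguments(
    output_dir="opt125m-alpaca",    # illustrative output path
    learning_rate=3e-4,             # 0.0003, as listed above
    num_train_epochs=10,
    gradient_accumulation_steps=1,
    per_device_train_batch_size=8,  # assumption: batch size is not reported
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```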
License: apache-2.0
Finetuned model: monsterapi/opt125M_alpaca (base model: facebook/opt-125m)
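The finetuned checkpoint loads like any other Hub model. Below is a minimal inference sketch with the `transformers` pipeline; the Alpaca-style prompt template is assumed from the base dataset and is not specified on this card.

```python
# Minimal sketch: load the finetuned checkpoint from the Hub and generate text.
from transformers import pipeline

generator = pipeline("text-generation", model="monsterapi/opt125M_alpaca")

# Assumption: Alpaca-style instruction prompt, per the base dataset's format.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nGive three tips for staying healthy.\n\n"
    "### Response:\n"
)
print(generator(prompt, max_new_tokens=100)[0]["generated_text"])
```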