First version of the Bloomz-7B1 model, instruction-tuned on the Stanford Alpaca instruction-tuning dataset (52k examples) using HF DeepSpeed.
Base Model: bigscience/bloomz-7b1
Training Details (a configuration sketch follows this list):
- Epochs: 4
- Batch Size: 5 per device × 3 gradient accumulation steps × 8 GPUs = 120 effective
- Max Length: 1024
- Weight Decay: 0
- Learning Rate: 5e-5
- Learning Rate Scheduler Type: Linear
- Number of Warmup Steps: 40
- Machine: 8× A100 80GB
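The training script itself is not part of this card. As a minimal sketch, assuming the standard Hugging Face `Trainer` with the DeepSpeed integration, the hyperparameters above translate to roughly the following configuration; the output directory and `ds_config.json` path are hypothetical placeholders:

```python
# Sketch of the training setup under the assumptions stated above.
# The actual training script is not published with this model card.
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_name = "bigscience/bloomz-7b1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

training_args = TrainingArguments(
    output_dir="bloomz-7b1-alpaca",   # hypothetical output path
    num_train_epochs=4,
    per_device_train_batch_size=5,    # x 3 accumulation x 8 GPUs = 120 effective
    gradient_accumulation_steps=3,
    learning_rate=5e-5,
    weight_decay=0.0,
    lr_scheduler_type="linear",
    warmup_steps=40,
    deepspeed="ds_config.json",       # hypothetical DeepSpeed config file
    bf16=True,                        # assumption: bf16 on A100s
)

# The 1024 max length is applied at tokenization time, e.g.
# tokenizer(text, truncation=True, max_length=1024).
# `train_dataset` would be the tokenized Alpaca data (see the dataset sketch below):
# trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset)
# trainer.train()
```

A script like this is typically launched with the DeepSpeed launcher, e.g. `deepspeed --num_gpus=8 train.py`, which matches the 8× A100 machine listed above.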
Dataset Details:
Dataset: iamplus/Instruction_Tuning
Files:
- stanford_alpaca_it.csv
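For reference, the file above can be pulled with the `datasets` library. This is a sketch only: the instruction/input/output column names below are the standard Alpaca fields and are an assumption about the CSV schema.

```python
# Load the instruction-tuning CSV from the dataset repo.
from datasets import load_dataset

dataset = load_dataset(
    "iamplus/Instruction_Tuning",
    data_files="stanford_alpaca_it.csv",
    split="train",
)

def format_example(example):
    # Assumed Alpaca-style columns: instruction / input / output.
    prompt = example["instruction"]
    if example.get("input"):
        prompt += "\n" + example["input"]
    return {"text": prompt + "\n" + example["output"]}

dataset = dataset.map(format_example)
```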
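To try the fine-tuned checkpoint, a standard `transformers` generation loop applies. The repo id below is a hypothetical placeholder for wherever this checkpoint is published.

```python
# Minimal inference sketch; repo id is a hypothetical placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "iamplus/bloomz-7b1-alpaca"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,
    device_map="auto",  # requires the `accelerate` package
)

inputs = tokenizer(
    "Explain instruction tuning in one sentence.",
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```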