
Pythia 1.4b Deduped with 8k Context Window

This model fine-tunes the Pythia 1.4b (deduped) model with a context window of 8k tokens. With optimizations such as Flash Attention and bitsandbytes, the entire model fits on a single A100 (40 GB) with a batch size of 1. Fine-tuning took ~30 hours, after which the loss was similar to that of fine-tuning at a 2k-token context window.
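To see why the full model fits on a single A100 (40 GB), a rough back-of-envelope memory estimate helps. The sketch below uses the 1.52B parameter count stated on this card; the byte-per-parameter figures are standard (2 bytes for fp16 weights, 1 byte for 8-bit bitsandbytes quantization), but the exact training footprint also depends on optimizer state and activations, which are not accounted for here.

```python
# Rough memory estimates for a 1.52B-parameter model (count from this card).
# These cover weights only; optimizer state, gradients, and activations
# (which grow with the 8k context) add substantially on top.

PARAMS = 1.52e9  # parameter count reported on the model card


def weight_gb(params: float, bytes_per_param: float) -> float:
    """Memory for the weights alone, in GB."""
    return params * bytes_per_param / 1e9


fp16_gb = weight_gb(PARAMS, 2)   # fp16: 2 bytes per parameter
int8_gb = weight_gb(PARAMS, 1)   # bitsandbytes 8-bit: ~1 byte per parameter

print(f"fp16 weights:  {fp16_gb:.2f} GB")
print(f"8-bit weights: {int8_gb:.2f} GB")
```

Weights alone are only a few GB; the remaining headroom on a 40 GB card goes to optimizer state and the long-context activations, which is why Flash Attention and a batch size of 1 are still needed at 8k tokens.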

Model size: 1.52B params (Safetensors)
Tensor types: FP16, BOOL
