
VanillaKD-Pretrain-Qwen-500M

paper | code

VanillaKD-Pretrain-Qwen-500M is a 500M-parameter model with the Qwen architecture, pre-trained with vanilla token-level knowledge distillation on the Pile for 50B tokens. The teacher model is Qwen1.5-1.8B.
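For reference, vanilla token-level knowledge distillation trains the student to match the teacher's next-token distribution at every position. Below is a minimal PyTorch sketch of such a loss; the function name, the temperature handling, and how (or whether) this term is mixed with the standard language-modeling loss are assumptions for illustration, not the exact recipe used to train this checkpoint.

```python
import torch
import torch.nn.functional as F

def vanilla_kd_loss(student_logits, teacher_logits, temperature=1.0):
    """Token-level KD loss (illustrative sketch).

    student_logits, teacher_logits: [batch, seq_len, vocab_size] tensors.
    Computes the forward KL from the teacher's per-token distribution
    to the student's, averaged over all token positions.
    """
    t = temperature
    teacher_probs = F.softmax(teacher_logits / t, dim=-1)
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)
    # KL(teacher || student) per token, summed over the vocabulary
    kl = F.kl_div(student_log_probs, teacher_probs, reduction="none").sum(-1)
    return (t ** 2) * kl.mean()
```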

We also open-source the tokenized pre-training corpus for reproducibility.

It serves as the baseline for MiniLLM-Qwen-500M.
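A minimal usage sketch, assuming the checkpoint works with the standard Hugging Face transformers interface for Qwen-architecture causal LMs (the prompt string is only an example):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MiniLLM/VanillaKD-Pretrain-Qwen-500M"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Knowledge distillation is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```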

Evaluation

MiniPLM models achieve better performance given the same training computation and scale well across model sizes.

Other Baselines

Citation

@article{miniplm,
    title={MiniPLM: Knowledge Distillation for Pre-Training Language Models}, 
    author={Yuxian Gu and Hao Zhou and Fandong Meng and Jie Zhou and Minlie Huang},
    journal={arXiv preprint arXiv:2410.17215},
    year={2024}
}