---
library_name: transformers
license: apache-2.0
datasets:
- monology/pile-uncopyrighted
- MiniLLM/pile-tokenized
language:
- en
metrics:
- accuracy
pipeline_tag: text-generation
---
# VanillaKD-Pretrain-Qwen-1.2B
VanillaKD-Pretrain-Qwen-1.2B is a 1.2B-parameter model with the Qwen architecture, pre-trained with vanilla token-level knowledge distillation on the Pile for 50B tokens. The teacher model is Qwen1.5-1.8B.
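For orientation, the sketch below shows what a vanilla token-level knowledge distillation objective computes: at every token position, the student's next-token distribution is trained to match the teacher's via a forward KL divergence. This is a minimal illustration, not the authors' training code; the function name, temperature argument, and masking scheme are assumptions.

```python
# Minimal sketch of token-level knowledge distillation (illustrative, not the authors' code).
import torch
import torch.nn.functional as F

def token_level_kd_loss(student_logits, teacher_logits, attention_mask, temperature=1.0):
    """student_logits, teacher_logits: (batch, seq, vocab); attention_mask: (batch, seq)."""
    log_s = F.log_softmax(student_logits / temperature, dim=-1)
    p_t = F.softmax(teacher_logits / temperature, dim=-1)
    # Forward KL(teacher || student), summed over the vocabulary at each position.
    kl = (p_t * (p_t.clamp_min(1e-8).log() - log_s)).sum(dim=-1)
    # Average over non-padding positions only.
    mask = attention_mask.float()
    return (kl * mask).sum() / mask.sum()
```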
We also open-source the tokenized pre-training corpus (MiniLLM/pile-tokenized) for reproducibility.
It is used as the baseline for MiniLLM-Qwen-1.2B.
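A minimal text-generation example with transformers is sketched below; the repository id is an assumption inferred from this card's model and dataset names, so adjust it if the checkpoint is hosted elsewhere.

```python
# Minimal usage sketch; the repository id below is assumed, not confirmed by this card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MiniLLM/VanillaKD-Pretrain-Qwen-1.2B"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("The Pile is a large-scale dataset of", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```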
## Evaluation
MiniPLM models achieve better performance given the same computation and scale well across model sizes.
## Other Baselines
## Citation
```bibtex
@article{miniplm,
    title={MiniPLM: Knowledge Distillation for Pre-Training Language Models},
    author={Yuxian Gu and Hao Zhou and Fandong Meng and Jie Zhou and Minlie Huang},
    journal={arXiv preprint arXiv:2410.17215},
    year={2024}
}
```