
Quantization made by Richard Erkhov.

Github | Discord | Request more models

MiniPLM-Qwen-1.2B - GGUF

Original model description:

library_name: transformers
license: apache-2.0
datasets:
- monology/pile-uncopyrighted
- MiniLLM/pile-diff_samp-qwen_1.8B-qwen_104M-r0.5
language:
- en
metrics:
- accuracy
pipeline_tag: text-generation

MiniPLM-Qwen-1.2B

paper | code

MiniPLM-Qwen-1.2B is a 1.2B-parameter model with the Qwen architecture, pre-trained from scratch on the Pile using the MiniPLM knowledge distillation framework, with the official Qwen1.5-1.8B as the teacher model.
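
As a rough usage sketch (not part of the original card), the base checkpoint can be loaded with `transformers`; the repo id below is an assumption inferred from the model name:

```python
# Minimal sketch: loading the base (non-GGUF) checkpoint with Hugging Face transformers.
# The repo id "MiniLLM/MiniPLM-Qwen-1.2B" is an assumption based on the card's naming.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MiniLLM/MiniPLM-Qwen-1.2B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("The Pile is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```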

We also open-source the pre-training corpus refined by Difference Sampling in MiniPLM for reproducibility.
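
For reference, a minimal sketch of streaming that corpus with the Hugging Face `datasets` library; the dataset id is taken from the card's metadata, and the `train` split is an assumption:

```python
# Minimal sketch: streaming the Difference Sampling corpus listed in the card metadata.
from datasets import load_dataset

ds = load_dataset(
    "MiniLLM/pile-diff_samp-qwen_1.8B-qwen_104M-r0.5",
    split="train",   # assumed split name
    streaming=True,  # avoid downloading the full corpus up front
)
print(next(iter(ds)))
```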

Evaluation

MiniPLM models achieve better performance given the same training compute and scale well across model sizes.

Baseline Models

Citation

@article{miniplm,
    title={MiniPLM: Knowledge Distillation for Pre-Training Language Models}, 
    author={Yuxian Gu and Hao Zhou and Fandong Meng and Jie Zhou and Minlie Huang},
    journal={arXiv preprint arXiv:2410.17215},
    year={2024}
}
GGUF
Model size: 1.16B params
Architecture: qwen2

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit.
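
A minimal sketch of running one of these GGUF files with llama-cpp-python; the exact filename depends on which quantization you download, so the name below is an assumption:

```python
# Minimal sketch: running a downloaded GGUF quant with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="MiniPLM-Qwen-1.2B.Q4_K_M.gguf",  # assumed filename; use your downloaded quant
    n_ctx=2048,  # context window
)
out = llm("The Pile is", max_tokens=32)
print(out["choices"][0]["text"])
```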
