QuantFactory/web-doc-refining-lm-GGUF
This is a quantized (GGUF) version of gair-prox/web-doc-refining-lm, created using llama.cpp.
Original Model Card
Web-doc-refining-lm
Web-doc-refining-lm is an adapted 0.3B ProX model, fine-tuned for document-level refining via program generation: given a raw web document, it emits a small refining program that is then executed to clean or discard the document.
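As a minimal sketch of the program-generation idea, the snippet below executes a tiny document-level refining program against a raw document. The operation names (`keep_doc`, `drop_doc`, `remove_lines`) and the argument format are illustrative assumptions for this sketch, not the model's actual output vocabulary.

```python
# Sketch: interpret a document-level refining program emitted by the LM.
# Operation names and syntax here are assumptions for illustration only.

def apply_refining_program(doc, program):
    """Apply a tiny refining program to `doc`.

    Returns the refined document, or None if the program drops it entirely.
    """
    lines = doc.splitlines()
    drop_ranges = []
    for stmt in program.splitlines():
        stmt = stmt.strip()
        if not stmt:
            continue
        if stmt == "drop_doc()":
            return None                      # discard the whole document
        elif stmt == "keep_doc()":
            continue                         # keep the document as-is
        elif stmt.startswith("remove_lines("):
            # e.g. remove_lines(start=1, end=1) -- inclusive 0-based range
            args = stmt[len("remove_lines("):-1]
            kv = dict(p.split("=") for p in args.replace(" ", "").split(","))
            drop_ranges.append((int(kv["start"]), int(kv["end"])))
    kept = [ln for i, ln in enumerate(lines)
            if not any(s <= i <= e for s, e in drop_ranges)]
    return "\n".join(kept)


doc = "Title\nclick here to subscribe\nActual content paragraph."
program = "remove_lines(start=1, end=1)"
print(apply_refining_program(doc, program))  # boilerplate line removed
```

In practice the refining program would come from the model itself (e.g. run via llama.cpp on the GGUF weights), with the raw document as the prompt; this sketch only shows how such a program could be applied once generated.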
Citation
@article{zhou2024programming,
  title={Programming Every Example: Lifting Pre-training Data Quality like Experts at Scale},
  author={Zhou, Fan and Wang, Zengzhi and Liu, Qian and Li, Junlong and Liu, Pengfei},
  journal={arXiv preprint arXiv:2409.17115},
  year={2024}
}
Downloads last month: 361
Model tree for QuantFactory/web-doc-refining-lm-GGUF
Base model: gair-prox/RedPJ-ProX-0.3B