ProX General Models: a collection of base models trained on ProX curated data.
C4-ProX-1.7B is a small language model. It was trained on C4-pro, the ProX-curated version of C4, for 50B tokens.
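The model can be loaded with the Hugging Face `transformers` library. A minimal sketch follows; the repository id `gair-prox/C4-ProX-1.7B` is an assumption based on the collection's naming and may differ from the actual repository.

```python
# Minimal sketch: load the model and generate a short continuation.
# The repo id below is assumed from the ProX collection naming; adjust as needed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gair-prox/C4-ProX-1.7B"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("The quick brown fox", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```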
ProX models are evaluated on 10 language model benchmarks in a zero-shot setting.
| | ARC-c | ARC-e | CSQA | HellaS | MMLU | OBQA | PiQA | SIQA | WinoG | SciQ | AVG |
|---|---|---|---|---|---|---|---|---|---|---|---|
| raw | 25.3 | 48.8 | 30.1 | 52.4 | 28.8 | 32.2 | 72.0 | 40.6 | 53.6 | 71.7 | 45.5 |
| ours | 31.1 | 56.0 | 28.4 | 55.2 | 31.1 | 36.2 | 74.0 | 41.0 | 54.1 | 76.8 | 48.4 |
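The numbers above can in principle be reproduced with a standard zero-shot harness. The sketch below uses EleutherAI's lm-evaluation-harness (`pip install lm-eval`); the task names and the harness itself are assumptions rather than the authors' documented setup, so scores may deviate slightly from the table.

```python
# Hedged sketch: zero-shot evaluation of the ten benchmarks with
# lm-evaluation-harness. Task names follow the harness's conventions and are
# assumptions; the paper's exact evaluation pipeline may differ.
import lm_eval

TASKS = [
    "arc_challenge", "arc_easy", "commonsense_qa", "hellaswag", "mmlu",
    "openbookqa", "piqa", "social_iqa", "winogrande", "sciq",
]

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=gair-prox/C4-ProX-1.7B",  # assumed repo id
    tasks=TASKS,
    num_fewshot=0,  # zero-shot, matching the table above
    batch_size=8,
)

for task, metrics in results["results"].items():
    print(task, metrics)
```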
@article{zhou2024programming,
title={Programming Every Example: Lifting Pre-training Data Quality like Experts at Scale},
author={Zhou, Fan and Wang, Zengzhi and Liu, Qian and Li, Junlong and Liu, Pengfei},
journal={arXiv preprint arXiv:2409.17115},
year={2024}
}