ArXiv | Models | Data | Code | Blog | Sample Explorer

Zhangir Azerbayev, Hailey Schoelkopf, Keiran Paster, Marco Dos Santos, Stephen McAleer, Albert Q. Jiang, Jia Deng, Stella Biderman, Sean Welleck

Proof-Pile-2 is a 55-billion-token dataset of mathematical and scientific documents, created to train the Llemma 7B and Llemma 34B models. It consists of three subsets:

  • arxiv (29B tokens): the ArXiv subset of RedPajama
  • open-web-math (15B tokens): The OpenWebMath dataset, which contains much of the high-quality mathematical text from the internet.
  • algebraic-stack (11B tokens): A new dataset of mathematical code, including numerical computing, computer algebra, and formal mathematics.

You can download the dataset as follows:

from datasets import load_dataset
ds = load_dataset("EleutherAI/proof-pile-2")

# To load only a specific subset, pass it as an argument, e.g.
ds_arxiv = load_dataset("EleutherAI/proof-pile-2", "arxiv")
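
The full dataset is large, so if you only want to inspect a few documents you can stream it instead of downloading everything up front. A minimal sketch using the streaming mode of load_dataset (the "train" split name is assumed; depending on your version of the datasets library you may also need trust_remote_code=True):

from datasets import load_dataset
from itertools import islice

# Stream the "arxiv" subset instead of downloading it to disk.
# Depending on your `datasets` version, trust_remote_code=True may be required.
ds_stream = load_dataset("EleutherAI/proof-pile-2", "arxiv", split="train", streaming=True)

# Peek at the first few documents
for row in islice(ds_stream, 3):
    print(row["text"][:200])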

Schema

Each dataset row has the following structure:

{
  "text": ..., # document text
  "meta": ..., # JSON string of metadata, schema specific to data source
}
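
Since meta is stored as a JSON string, it needs to be decoded per row before use. A minimal sketch (field names taken from the schema above; the "train" split is an assumption):

import json

# "text" holds the raw document; "meta" is a JSON-encoded string whose schema
# depends on the source subset (arxiv, open-web-math, or algebraic-stack).
def decode_meta(row):
    return {"text": row["text"], "meta": json.loads(row["meta"])}

# Example usage:
# first = decode_meta(next(iter(ds["train"])))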

Dataset Contents

For detailed documentation of the ArXiv and web subsets, refer to RedPajama and OpenWebMath. The following table enumerates the contents of the AlgebraicStack by programming language. The AlgebraicStack is filtered to include only documents that contain mathematics, as judged by hand-crafted, language-specific heuristics (a toy sketch of this kind of filter appears after the table).

Language      AlgebraicStack tokens
------------  ---------------------
Agda          35.2 M
C             25.1 M
C++           954.1 M
Coq           281.9 M
Fortran       724.9 M
GAP           3.6 M
Haskell       9.1 M
Idris         10.9 M
Isabelle      1,089.7 M
Julia         531.0 M
Jupyter       199.1 M
Lean          285.6 M
Maple         2.0 M
Matlab        65.8 M
Python        6,098.8 M
R             71.3 M
Tex           567.7 M
Total         10,955.7 M
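
As a toy illustration of what such a language-specific filter might look like (this is not the actual AlgebraicStack heuristic; see the Llemma paper and code for the real filters), one could keep a Python file only if it imports a numerical or symbolic library or uses mathematical vocabulary:

import re

# Toy illustration only; the real AlgebraicStack filters are hand-crafted per
# language and differ from this sketch.
MATH_IMPORTS = re.compile(r"^\s*(?:import|from)\s+(?:numpy|scipy|sympy|sage)\b", re.MULTILINE)
MATH_KEYWORDS = re.compile(r"\b(?:theorem|lemma|integral|eigenvalue|polynomial)\b", re.IGNORECASE)

def looks_mathematical(source: str) -> bool:
    """Keep a Python file only if it imports a math library or uses math terms."""
    return bool(MATH_IMPORTS.search(source)) or bool(MATH_KEYWORDS.search(source))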

License

We do not alter the license of any of the underlying data.

Version History

v1.1.0: Contains an updated version of OpenWebMath, namely the version available at open-web-math/open-web-math. This version of OpenWebMath has slightly improved filtering, for example the removal of very short documents.

v1.0.0: The data used to train the Llemma 7B and Llemma 34B models. Uses a development version of OpenWebMath.
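
If you need a specific release for reproducibility, load_dataset accepts a revision argument pointing at a tag or commit of the dataset repo. A minimal sketch (the tag name "v1.0.0" is assumed from the version history above and may not match the repo's actual tags):

from datasets import load_dataset

# Pin the download to a specific release of the dataset repo.
ds_v1 = load_dataset("EleutherAI/proof-pile-2", revision="v1.0.0")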

Citation

For the entire Proof-Pile-2, cite:

@misc{azerbayev2023llemma,
      title={Llemma: An Open Language Model For Mathematics}, 
      author={Zhangir Azerbayev and Hailey Schoelkopf and Keiran Paster and Marco Dos Santos and Stephen McAleer and Albert Q. Jiang and Jia Deng and Stella Biderman and Sean Welleck},
      year={2023},
      eprint={2310.10631},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}

For the ArXiv subset, cite:

@software{together2023redpajama,
  author = {Together Computer},
  title = {RedPajama: An Open Source Recipe to Reproduce LLaMA training dataset},
  month = apr,
  year = 2023,
  url = {https://github.com/togethercomputer/RedPajama-Data}
}

For OpenWebMath, cite:

@misc{paster2023openwebmath,
      title={OpenWebMath: An Open Dataset of High-Quality Mathematical Web Text}, 
      author={Keiran Paster and Marco Dos Santos and Zhangir Azerbayev and Jimmy Ba},
      year={2023},
      eprint={2310.06786},
      archivePrefix={arXiv},
      primaryClass={cs.AI}
}