
Languages: English
Size: n>1T
Commit f606534 by kyleclo
1 Parent(s): fa8ae12

add more sources

Files changed (1)
  1. README.md +15 -4
README.md CHANGED
@@ -52,12 +52,23 @@ At the moment, there are six versions of Dolma available:
  | **Source** | **Provenance** | **New?** | **Documents** (millions) | **OLMo tokens** (billions) | **Sample Proportion** | **Cutoff Date** | **Processing**
  |--|--|--|--|--|--|--|--|
  | Dolma's CC | [Common Crawl](https://commoncrawl.org/) via Dolma v1.6 | Updated | | 1,195.5 | 50% | Mar 2023 | Extracted using the Dolma pipeline; new quality filtering and deduplication steps. |
- | Refined Web | [Refined Web](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) | Yes | | 456.4 | 100% | Feb 2023 | |
- | StarCoder | [StarCoder](https://huggingface.co/blog/starcoder) | Yes | | 263.8 | 100% | May 2023 | No further processing |
+ | Refined Web | [Refined Web](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) | Yes | | 456.4 | 100% | Feb 2023 | Filtered using the Dolma pipeline; new quality filtering and deduplication steps. |
+ | StarCoder | [StarCoder](https://huggingface.co/blog/starcoder) | Yes | | 263.8 | 100% | May 2023 | No further processing. |
  | C4 | [C4](https://huggingface.co/datasets/c4) via Dolma v1.6 | Updated | | 138.4 | 50% | Apr 2019 | Filtered using the Dolma pipeline; new quality filtering and deduplication steps. |
  | Reddit | [PushShift API](https://github.com/pushshift/api) | Updated | | 79.9 | 100% | Mar 2023 | Extracted using the Dolma pipeline; new quality filtering and deduplication steps. |
- | Semantic Scholar | [S2AG/S2ORC](https://www.semanticscholar.org/product/api)/[peS2o](https://huggingface.co/datasets/allenai/peS2o) via Dolma v1.6 | No | 38.8 | 57.2 | 100% | Mar 2023 | Same as Dolma v1.6 |
- | Project Gutenberg | [Project Gutenberg](https://www.gutenberg.org/) | No | 0.056 | 6.0 | 100% | Mar 2023 | Same as Dolma v1.6 |
+ | Semantic Scholar ([S2ORC](https://aclanthology.org/2020.acl-main.447/) & [S2AG](https://www.semanticscholar.org/product/api)) | [peS2o](https://huggingface.co/datasets/allenai/peS2o) via Dolma v1.6 | No | 38.8 | 57.2 | 100% | Mar 2023 | Same as Dolma v1.6 |
+ | arXiv | [RedPajama v1](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) | Yes | | 28.0 | 100% | Mar 2023 | No further processing. |
+ | StackExchange | [RedPajama v1](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) | Yes | | 19.6 | 100% | Mar 2023 | No further processing. |
+ | Flan | [Flan](https://arxiv.org/abs/2301.13688) via [Tulu](https://huggingface.co/datasets/allenai/tulu-v2-sft-mixture) | Yes | | 16.5 | 100% | Mar 2023 | |
+ | CC News | [Common Crawl](https://commoncrawl.org/blog/news-dataset-available) | Yes | | 14.3 | 100% | Mar 2023 | Extracted using the Dolma pipeline; new quality filtering and deduplication steps. |
+ | OpenWebMath | [OpenWebMath](https://huggingface.co/datasets/open-web-math/open-web-math) via [Proof Pile II](https://huggingface.co/datasets/EleutherAI/proof-pile-2) | Yes | | 12.6 | 100% | Oct 2023 | Training subset; no further processing. |
+ | Algebraic Stack | [Proof Pile II](https://huggingface.co/datasets/EleutherAI/proof-pile-2) | Yes | | 12.6 | 100% | Oct 2023 | Training subset; no further processing. |
+ | Project Gutenberg | [Project Gutenberg](https://www.gutenberg.org) via Dolma v1.6 | No | 0.0556 | 5.3 | 100% | Mar 2023 | Same as Dolma v1.6 |
+ | MegaWika | [MegaWika](https://huggingface.co/datasets/hltcoe/megawika) | Yes | | 4.6 | 100% | Jul 2023 | English web pages cited from Wikipedia; curated using the full Dolma pipeline. |
+ | Wikipedia & Wikibooks | [Wikimedia](https://dumps.wikimedia.org) via Dolma v1.6 | No | 6.2 | 3.7 | 200% | Mar 2023 | Same as Dolma v1.6 |
+ | **Total** | | | | **2,308.5** | **1,715.1** | | |
+
+ (A subset of the total data was used to train OLMo 7B-v1.7. The token counts above cover the full dataset; applying each source's sample proportion gives the actual number of tokens used in training: 1.715 trillion.)


  ## Summary Statistics (v1.6)
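To make the **Sample Proportion** column in the diff above concrete: a source's contribution to the training mix is its full token count scaled by its proportion, so 50% means half the source is used and 200% means it is upsampled (seen roughly twice). Below is a minimal Python sketch of that arithmetic using two rows from the v1.7 table; the `effective_tokens` helper is illustrative, not part of any Dolma tooling.

```python
def effective_tokens(full_billions: float, sample_proportion: float) -> float:
    """Tokens actually contributed to the training mix:
    full token count scaled by the source's sample proportion."""
    return full_billions * sample_proportion

# Dolma's CC: 1,195.5B tokens sampled at 50% -> 597.75B in the training mix.
print(effective_tokens(1195.5, 0.50))

# Wikipedia & Wikibooks: 3.7B tokens upsampled to 200% -> 7.4B (seen ~twice).
print(effective_tokens(3.7, 2.00))
```

Applied across every source, this scaling is what takes the full 2,308.5B-token corpus to the roughly 1.715T tokens reported as used for OLMo 7B-v1.7 training.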