Maurice Weber committed
Commit • 1a4d6db
1 Parent(s): 7d6032b
update table

README.md CHANGED
@@ -17,7 +17,8 @@ documents coming from 84 CommonCrawl snapshots and processed using
 the [CCNet](https://github.com/facebookresearch/cc_net) pipeline. Out of these, there are 30B documents in the corpus
 that additionally come with quality signals, and 20B documents that are deduplicated.

-Check out our [blog post](https://together.ai/blog/redpajama-data-v2) for more details on the build process, dataset structure and schema.
+Check out our [blog post](https://together.ai/blog/redpajama-data-v2) for more details on the build process, dataset
+structure and schema.

 To familiarize yourself with the dataset, you can load the sample dataset using:

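The context line above points readers at loading the sample dataset, but the snippet itself falls outside this hunk. A minimal sketch of what that load looks like with the Hugging Face `datasets` library; the dataset id `togethercomputer/RedPajama-Data-V2` and the `sample` configuration name are assumptions not shown in this diff, so check the dataset card for the exact identifiers:

```python
from datasets import load_dataset

# Assumed names: "togethercomputer/RedPajama-Data-V2" and the "sample" config;
# verify both against the dataset card before running.
ds = load_dataset("togethercomputer/RedPajama-Data-V2", name="sample")

# Each record carries the raw document text plus, for the annotated part of the
# corpus, its quality-signal fields.
print(ds["train"][0].keys())
```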
@@ -201,10 +202,10 @@ RedPajama-V2 is an open dataset for training large language models and includes o
 | rps_doc_frac_chars_top_4gram | The fraction of characters in the top word 4gram. | Repetitiveness | [RefinedWeb](https://arxiv.org/abs/2306.01116), [Gopher](https://arxiv.org/abs/2112.11446) |
 | rps_doc_ldnoobw_words | The number of sequences of words that are contained in the List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words blocklist. The blocklist is obtained from the [LDNOOBW](https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words) repo. | toxicity | [C4](https://arxiv.org/abs/1910.10683) |
 | rps_doc_ut1_blacklist | A categorical id corresponding to the list of categories of the domain of the document. Categories are obtained from the UT1 blacklist. The list is obtained from [UT-Capitole](https://dsi.ut-capitole.fr/blacklists/). | toxicity | [RefinedWeb](https://arxiv.org/abs/2306.01116) |
-| minhash_signature_0.7 | Banded minhash signature of the document, for fuzzy deduplication at Jaccard similarity 0.7 | Deduplication |
-| minhash_signature_0.8 | Banded minhash signature of the document, for fuzzy deduplication at Jaccard similarity 0.8 | Deduplication |
-| minhash_signature_0.9 | Banded minhash signature of the document, for fuzzy deduplication at Jaccard similarity 0.9 | Deduplication |
-| minhash_signature_1.0 | Banded minhash signature of the document, for fuzzy deduplication at Jaccard similarity 1.0 | Deduplication |
+| minhash_signature_0.7 | Banded minhash signature of the document, for fuzzy deduplication at Jaccard similarity 0.7. The signature is based on 128 hash functions and grouped into 14 bands and 9 rows for LSH. | Deduplication |
+| minhash_signature_0.8 | Banded minhash signature of the document, for fuzzy deduplication at Jaccard similarity 0.8. The signature is based on 128 hash functions and grouped into 9 bands and 13 rows for LSH. | Deduplication |
+| minhash_signature_0.9 | Banded minhash signature of the document, for fuzzy deduplication at Jaccard similarity 0.9. The signature is based on 128 hash functions and grouped into 5 bands and 25 rows for LSH. | Deduplication |
+| minhash_signature_1.0 | Banded minhash signature of the document, for fuzzy deduplication at Jaccard similarity 1.0. The signature is based on 128 hash functions and grouped into 1 band and 128 rows for LSH. | Deduplication |

 #### Document and Token Counts for the Annotated and deduplicated `head_middle` part of the dataset

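The updated rows describe each signature as 128 hash functions grouped into bands and rows for locality-sensitive hashing. A rough way to see why those (bands, rows) pairs line up with the stated Jaccard targets is the standard LSH threshold approximation (1/b)^(1/r); the sketch below is illustrative only and assumes nothing about the actual deduplication code used for the dataset:

```python
# With b bands of r rows each, two documents with true Jaccard similarity s become
# duplicate candidates (collide in at least one band) with probability 1 - (1 - s^r)^b.
# The inflection point of that S-curve is roughly (1/b)**(1/r).
configs = {0.7: (14, 9), 0.8: (9, 13), 0.9: (5, 25), 1.0: (1, 128)}

for target, (bands, rows) in configs.items():
    threshold = (1 / bands) ** (1 / rows)
    p_candidate = 1 - (1 - target ** rows) ** bands
    print(f"target {target}: b={bands}, r={rows} ({bands * rows}/128 hashes used), "
          f"LSH threshold ~ {threshold:.2f}, P(candidate | s={target}) ~ {p_candidate:.2f}")
```

For (b, r) = (14, 9), for example, (1/14)^(1/9) ≈ 0.75, close to the 0.7 target, while the (1, 128) configuration only pairs documents whose 128 minhash values all agree, i.e. near-exact duplicates.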
@@ -357,11 +358,17 @@ To cite RedPajama, please use:
 ```

 ## Acknowledgements
-We are appreciative of so many partners and collaborators that together are pushing forward the frontier of open LLM models.
-- Thank you to the OLMo team at AI2 and friends at OpenGPT-X for the insightful discussions about datasets and data quality! Also for everyone who builds on the RedPajama dataset, including Cerebras for their SlimPajama efforts, and the over 500 models built on RedPajama to date by the open-source AI community.
-- We are grateful to the great team at EleutherAI for paving the path on open training datasets with The Pile and for open-sourcing code we use in training some of the RedPajama models.
-- Thank you to our partners of RedPajama-v1, including Ontocord.ai, MILA Québec AI Institute, ETH DS3Lab, Université de Montréal, Stanford Center for Research on Foundation Models (CRFM), Stanford Hazy Research research group and LAION.
-
+
+We are appreciative of so many partners and collaborators that together are pushing forward the frontier of open LLM
+models.
+
+- Thank you to the OLMo team at AI2 and friends at OpenGPT-X for the insightful discussions about datasets and data
+quality! Also for everyone who builds on the RedPajama dataset, including Cerebras for their SlimPajama efforts, and
+the over 500 models built on RedPajama to date by the open-source AI community.
+- We are grateful to the great team at EleutherAI for paving the path on open training datasets with The Pile and for
+open-sourcing code we use in training some of the RedPajama models.
+- Thank you to our partners of RedPajama-v1, including Ontocord.ai, MILA Québec AI Institute, ETH DS3Lab, Université de
+Montréal, Stanford Center for Research on Foundation Models (CRFM), Stanford Hazy Research research group and LAION.

 ## License
