Maurice Weber committed
Commit 7d6032b
Parent: af8c310

add acknowledgments, citation, blog url

Files changed (1):
  1. README.md +7 -4
README.md CHANGED
@@ -17,7 +17,7 @@ documents coming from 84 CommonCrawl snapshots and processed using
 the [CCNet](https://github.com/facebookresearch/cc_net) pipeline. Out of these, there are 30B documents in the corpus
 that additionally come with quality signals, and 20B documents that are deduplicated.
 
-Check out our [blog post](XXXXX) for more details on the build process, dataset structure and schema.
+Check out our [blog post](https://together.ai/blog/redpajama-data-v2) for more details on the build process, dataset structure and schema.
 
 To familiarize yourself with the dataset, you can load the sample dataset using:
 
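(The hunk above ends at the README's loading instructions; the snippet itself is unchanged by this commit and not shown in the diff. For reference, loading the sample configuration with the Hugging Face `datasets` library looks roughly like the sketch below; the repo id and the `sample` config name follow the dataset card and should be treated as assumptions here.)

```python
from datasets import load_dataset

# Minimal sketch: load the small "sample" configuration of RedPajama-Data-V2.
# The repo id and config name are assumptions based on the dataset card.
ds = load_dataset("togethercomputer/RedPajama-Data-V2", name="sample")
print(ds)  # inspect the available splits and features
```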
@@ -344,10 +344,10 @@ deduplicated based on the text, using a Bloomfilter. The duplicates were kept in
 
 ## Citation
 
-To cite RedPajama-V2, please use:
+To cite RedPajama, please use:
 
 ```
-@software{together2023redpajama-v2,
+@software{together2023redpajama,
  author = {Together Computer},
  title = {RedPajama: an Open Dataset for Training Large Language Models},
  month = October,
@@ -357,8 +357,11 @@ To cite RedPajama-V2, please use:
 ```
 
 ## Acknowledgements
 
--- TODO --
+We are appreciative of the many partners and collaborators who together are pushing forward the frontier of open LLM models.
+- Thank you to the OLMo team at AI2 and friends at OpenGPT-X for the insightful discussions about datasets and data quality! Thanks also to everyone who builds on the RedPajama dataset, including Cerebras for their SlimPajama efforts, and to the open-source AI community for the over 500 models built on RedPajama to date.
+- We are grateful to the great team at EleutherAI for paving the path on open training datasets with The Pile and for open-sourcing code we use in training some of the RedPajama models.
+- Thank you to our partners of RedPajama-v1, including Ontocord.ai, MILA Québec AI Institute, ETH DS3Lab, Université de Montréal, Stanford Center for Research on Foundation Models (CRFM), Stanford Hazy Research group, and LAION.
 
 ## License
 
 
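(A note on the Bloom-filter deduplication mentioned in the second hunk's context line: as a rough illustration only, not the actual RedPajama-V2 pipeline code, text-level dedup with a Bloom filter can be sketched as below. The `BloomFilter` class, its sizing formulas, and the `is_duplicate` flag are all assumptions for illustration; duplicates are flagged rather than dropped, mirroring the note that "the duplicates were kept in".)

```python
import hashlib
import math

class BloomFilter:
    """Simple Bloom filter for approximate membership tests over document texts."""

    def __init__(self, capacity: int, error_rate: float = 0.01):
        # Standard sizing: m = -n*ln(p)/ln(2)^2 bits, k = (m/n)*ln(2) hash functions.
        self.num_bits = math.ceil(-capacity * math.log(error_rate) / math.log(2) ** 2)
        self.num_hashes = max(1, round(self.num_bits / capacity * math.log(2)))
        self.bits = bytearray((self.num_bits + 7) // 8)

    def _positions(self, item: str):
        # Double hashing: derive k bit positions from two independent digests.
        h1 = int.from_bytes(hashlib.sha256(item.encode()).digest()[:8], "big")
        h2 = int.from_bytes(hashlib.md5(item.encode()).digest()[:8], "big")
        for i in range(self.num_hashes):
            yield (h1 + i * h2) % self.num_bits

    def add(self, item: str) -> bool:
        """Insert item; return True if it was (probably) seen before."""
        seen = True
        for pos in self._positions(item):
            byte_idx, bit_idx = divmod(pos, 8)
            if not (self.bits[byte_idx] >> bit_idx) & 1:
                seen = False
                self.bits[byte_idx] |= 1 << bit_idx
        return seen

# Flag duplicates instead of dropping them ("the duplicates were kept in").
bloom = BloomFilter(capacity=1_000_000)
docs = [{"text": "hello world"}, {"text": "hello world"}, {"text": "another doc"}]
for doc in docs:
    doc["is_duplicate"] = bloom.add(doc["text"])
print([d["is_duplicate"] for d in docs])  # [False, True, False]
```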