Maurice Weber committed
Commit 2c09767
1 Parent(s): bd77c17

update README.md

README.md CHANGED
@@ -21,6 +21,11 @@ used to create a dataset with 20B deduplicated documents.
 Check out our [blog post](https://together.ai/blog/redpajama-data-v2) for more details on the build process, dataset
 structure and schema.
 
+A full set of scripts to recreate the dataset, including the quality signals, can be
+found [here](https://github.com/togethercomputer/RedPajama-Data).
+
+#### Downloading the raw Dataset with Quality Annotations
+
 To familiarize yourself with the dataset, you can load the sample dataset using:
 
 ```python
@@ -29,10 +34,13 @@ from datasets import load_dataset
 ds = load_dataset("togethercomputer/RedPajama-Data-V2", name="sample")
 ```
 
-To download a the dataset for a specific combination of `{partition} x {snapshot_id} x {language}
-
-
-
+To download the dataset for a specific combination of `{partition} x {snapshot_id} x {language}`, you can use the
+following command, which downloads the raw (i.e., *not* deduplicated) part of the dataset and the corresponding quality
+signals. In the example below, we use English and German data from the `head_middle` partition of the 2023-06 and the
+2022-49 snapshots. The full set of available snapshots is specified in `_CC_SNAPSHOT_IDS`. The available partitions
+are `tail` and `head_middle`. The available language tags are `en`, `de`, `fr`, `es`, `it`.
+_Note that this will download the entire snapshots specified in the `snapshots` argument and requires ~1TB of disk space
+per snapshot_.
 
 ```python
 from datasets import load_dataset
@@ -44,10 +52,29 @@ ds = load_dataset("togethercomputer/RedPajama-Data-V2",
 languages=["en", "de"])
 ```
 
-
-
-
-
+#### Downloading the dataset via wget
+
+If you prefer to download the full dataset via wget, you can download the following lists of URLs and use them to
+download the dataset:
+
+```bash
+# get list of URLs pointing to the text documents
+wget "https://data.together.xyz/redpajama-data-v2/v1.0.0/urls/document-urls.txt" -O "document-urls.txt"
+
+# get list of URLs pointing to the quality signals
+wget "https://data.together.xyz/redpajama-data-v2/v1.0.0/urls/quality_signals-urls.txt" -O "quality_signals-urls.txt"
+
+# get list of URLs pointing to the ids of duplicate documents
+wget "https://data.together.xyz/redpajama-data-v2/v1.0.0/urls/duplicates-urls.txt" -O "duplicates-urls.txt"
+
+# get list of URLs pointing to the minhash signatures
+wget "https://data.together.xyz/redpajama-data-v2/v1.0.0/urls/minhash-urls.txt" -O "minhash-urls.txt"
+```
+
+You can also directly download subsets of the dataset using the following instructions. Here we use English
+data from the `2023-06` snapshot and the `head_middle` partition as an example. The full set of CC snapshots included in
+the dataset is given in `_CC_SNAPSHOT_IDS`. The available partitions are `tail` and `head_middle`. The available
+language tags are `en`, `de`, `fr`, `es`, `it`.
 
 To download the plain text data, available for both the `head_middle` and `tail` partitions, you can run
 
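Between the two hunks above, the diff elides the middle of the `load_dataset` call, since unchanged context lines are not shown. Going by the visible first and last lines of the call and the added prose, the full invocation presumably looks like the sketch below; the `name="default"` configuration label is an assumption, as the diff never shows it.

```python
from datasets import load_dataset

# Presumed full call for one {partition} x {snapshot_id} x {language}
# combination. partition, snapshots and languages follow the added prose
# and visible context lines; name="default" is assumed, not shown here.
ds = load_dataset("togethercomputer/RedPajama-Data-V2",
                  name="default",
                  partition="head_middle",
                  snapshots=["2023-06", "2022-49"],
                  languages=["en", "de"])
```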
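The four lists fetched in the wget block above are plain text files, and the sketch below assumes one URL per line. Mirroring each URL's path below the `v1.0.0/` prefix locally is a choice made here for illustration, not a layout the README prescribes.

```python
import os
import urllib.request

# Read the newline-delimited list of document URLs fetched above
# (one URL per line is an assumption about the file's layout).
with open("document-urls.txt") as f:
    urls = [line.strip() for line in f if line.strip()]

# Download a handful of files for illustration, reproducing each URL's
# path below the v1.0.0/ prefix in the local directory tree.
for url in urls[:3]:
    rel_path = url.split("/v1.0.0/")[-1]
    os.makedirs(os.path.dirname(rel_path) or ".", exist_ok=True)
    urllib.request.urlretrieve(url, rel_path)
```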
@@ -106,9 +133,6 @@ for comp in "${COMPS[@]}"; do
 done
 ```
 
-A full set of scripts to recreate the dataset, including the quality signals, can be
-found [here](https://github.com/togethercomputer/RedPajama-Data).
-
 ### Applying Filtering Rules
 
 You can use the quality signals to filter the raw RedPajama-V2 dataset for a given set of rules. For example, consider
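The context line above breaks off mid-example. As a sketch of what such a rule-based filter can look like on the sample configuration: the field name `quality_signals`, the signal key `rps_doc_word_count`, and the `[start, end, value]` span layout are assumptions about the schema here, and the 50-word floor is arbitrary.

```python
import json
from datasets import load_dataset

# Work on the small sample configuration for illustration.
ds = load_dataset("togethercomputer/RedPajama-Data-V2", name="sample")

def keep(example):
    # quality_signals is assumed to be a JSON string mapping each signal
    # name to a list of [start, end, value] spans over the document.
    signals = json.loads(example["quality_signals"])
    word_count = signals["rps_doc_word_count"][0][2]
    return word_count is not None and word_count >= 50

filtered = ds["train"].filter(keep)
print(len(ds["train"]), "->", len(filtered))
```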