jurassicpark committed
Commit 8ead72f
Parent: 7c94531

Change the Download Command to account for duplicate filenames


Downloading the files directly via wget causes problems because the Common Crawl datasets have duplicate filenames across the various years. This makes it difficult to pause and resume the download process, and to verify checksums.

This PR changes the download command so that each file is saved under the subdirectory structure present in its URL.

e.g.
`https://data.together.xyz/redpajama-data-1T/v1.0.0/arxiv/arxiv_023827cd-7ee8-42e6-aa7b-661731f4c70f.jsonl`

downloads to

`arxiv/arxiv_023827cd-7ee8-42e6-aa7b-661731f4c70f.jsonl`
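
The relative path is derived by stripping the fixed base-URL prefix with Bash parameter expansion, as in the new command below. A minimal sketch of just that mechanism (variable names here are illustrative, not part of the commit):

```bash
# Illustrative sketch: derive the local path by stripping the base-URL prefix.
base='https://data.together.xyz/redpajama-data-1T/v1.0.0/'
url='https://data.together.xyz/redpajama-data-1T/v1.0.0/arxiv/arxiv_023827cd-7ee8-42e6-aa7b-661731f4c70f.jsonl'

rel="${url#"$base"}"   # ${var#pattern} removes the shortest matching prefix
echo "$rel"            # -> arxiv/arxiv_023827cd-7ee8-42e6-aa7b-661731f4c70f.jsonl
```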

Files changed (1)
  1. README.md +8 -1
README.md CHANGED
````diff
@@ -15,10 +15,17 @@ ds = load_dataset("togethercomputer/RedPajama-Data-1T")
 ```
 
 Or you can directly download the files using the following command:
+
 ```
-wget -i https://data.together.xyz/redpajama-data-1T/v1.0.0/urls.txt
+wget 'https://data.together.xyz/redpajama-data-1T/v1.0.0/urls.txt'
+while read line; do
+    dload_loc=${line#https://data.together.xyz/redpajama-data-1T/v1.0.0/}
+    mkdir -p $(dirname $dload_loc)
+    wget "$line" -O "$dload_loc"
+done < urls.txt
 ```
 
+
 A smaller 1B-token sample of the dataset can be found [here](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T-Sample).
 
 A full set of scripts to recreate the dataset from scratch can be found [here](https://github.com/togethercomputer/RedPajama-Data).
````
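
Since the commit message calls out pausing and resuming, a slightly hardened variant of the same loop may be useful. This is a sketch under stated assumptions, not part of the commit: expansions are quoted so paths with special characters survive, and `wget -c` lets an interrupted download resume instead of restarting.

```bash
#!/usr/bin/env bash
set -euo pipefail

base='https://data.together.xyz/redpajama-data-1T/v1.0.0/'
wget "${base}urls.txt"

while read -r line; do
    dload_loc="${line#"$base"}"          # path relative to the base URL
    mkdir -p "$(dirname "$dload_loc")"   # create the subdirectory, e.g. arxiv/
    wget -c "$line" -O "$dload_loc"      # -c continues a partial download
done < urls.txt
```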