Dmitry Chaplinsky committed cdfd766 (parent: 77f91c1): "More useful info to the readme"

README.md (updated section):
You'll need to apply extra cleansing/evaluation of the instructions you've got because, you know, the internet is still full of crap.
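A minimal sketch of what such an extra cleansing pass could look like, assuming the dataset is JSON lines; the `instruction` field name and the length/alphabetic-ratio heuristic are made up for illustration, not prescribed by the project:

```python
import json

from smart_open import open  # transparently reads/writes .bz2 and .gz


def looks_sane(text: str) -> bool:
    """Crude quality heuristic: long enough and mostly alphabetic."""
    return len(text) > 20 and sum(c.isalpha() for c in text) / len(text) > 0.6


with open("extruct_out.jsonlines.bz2", "rt") as src, \
        open("cleaned.jsonlines.bz2", "wt") as dst:
    for line in src:
        record = json.loads(line)
        # `instruction` is a hypothetical field name used for illustration
        if looks_sane(record.get("instruction", "")):
            dst.write(line)
```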
## Recreating the results

1. Clone the repo without the LFS files.
2. Install requirements from `requirements.txt`.
3. Install `pv` and `parallel`.
4. Run `bin/get_seed_urls.sh` to filter URLs of interest out of 1.6 TB of compressed data. Don't worry about disk space; worry about the traffic. This will take around 5 hours on a decent connection.
5. Run the scrapy spider like this: `scrapy crawl webdatacommons_org -s WEB_DATA_COMMONS=web_data_commons_urls_sample.txt -L INFO -o webdatacommons.jsonlines`, with `WEB_DATA_COMMONS` pointing to the list of seed URLs from step 4. This might take up to a few weeks.
6. Run `python extract_relevant_structured_data.py --num-threads 12 webdatacommons.jsonlines relevant.jsonlines.bz2`. That's fast, probably around 30 minutes.
7. Run `python export_structured_data.py relevant.jsonlines.bz2 extruct_out.jsonlines.bz2` to obtain the final version of the dataset (a quick sanity-check sketch follows this list).
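One quick way to eyeball the result of step 7 (a hedged sketch, not a script from the repo): count the records and peek at the keys of the first one, with `smart_open` handling the bz2 decompression.

```python
import json

from smart_open import open  # handles .bz2 transparently

count = 0
first_keys = None
with open("extruct_out.jsonlines.bz2", "rt") as fp:
    for line in fp:
        if first_keys is None:
            # Keys of the first record; the actual schema comes from the scripts.
            first_keys = sorted(json.loads(line))
        count += 1

print(f"{count} records; first record keys: {first_keys}")
```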
## Advice

If you want to recreate the results:
* Get yourself a server or VPS with enough disk space (80 GB should be enough).
* Look at the code; you'd probably want to make changes here and there.
* All the Python scripts take extra parameters to control the number of threads and the chunk size, and both accept compressed input and output files with the help of the `smart_open` lib (see the sketch below).
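For illustration, here is roughly how a workers/chunk-size pair can be wired together with `smart_open` and `multiprocessing`; the actual scripts may structure this differently (including threads instead of processes), and `handle_chunk` is a stand-in for the real per-record work:

```python
import json
from itertools import islice
from multiprocessing import Pool

from smart_open import open  # compressed input/output by file extension


def handle_chunk(lines):
    # Stand-in for the real work done on each record.
    return [json.loads(line) for line in lines]


def chunks(fp, size):
    # Yield lists of `size` lines until the file is exhausted.
    while batch := list(islice(fp, size)):
        yield batch


if __name__ == "__main__":
    # 12 workers and a chunk size of 1000, mirroring the CLI knobs above.
    with open("webdatacommons.jsonlines", "rt") as fp, Pool(12) as pool:
        for parsed in pool.imap(handle_chunk, chunks(fp, 1000)):
            pass  # filter / write out with another smart_open handle
```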
## License

**Code** of the project has an MIT license.