---
license: mit
task_categories:
- question-answering
pretty_name: Every Prompt
size_categories:
- 1M<n<10M
multilinguality:
- multilingual
---
## Every Prompt
Every Prompt is a data-driven approach to mining instructions from the web.
It contains over a million FAQs and HowTos from around the world in a structured format.
It also includes basic pre-processing to calculate the length of the useful text and to identify its language with the help of [GCLD3](https://github.com/google/cld3).
It relies on the [Web Data Commons](http://webdatacommons.org) dataset (from October 2022) to find the seed list of sites with [**HowTo**](https://schema.org/HowTo) and [**FAQPage**](https://schema.org/FAQPage) items.
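
For reference, the language-identification part of that pre-processing can be reproduced with the GCLD3 Python bindings (the `gcld3` package). This is a minimal sketch under that assumption, not the project's exact code:

```python
import gcld3

# Minimal sketch of language identification with the GCLD3 bindings.
# The min/max byte limits control how much of the text is inspected.
detector = gcld3.NNetLanguageIdentifier(min_num_bytes=0, max_num_bytes=1000)

def identify_language(text: str) -> tuple[str, float]:
    result = detector.FindLanguage(text=text)
    # result.is_reliable is worth checking before trusting very short snippets
    return result.language, result.probability

print(identify_language("How do I reset my password?"))  # ('en', ~0.99)
```
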
The general pipeline looks like this:
* Download 1.6 TB of structured data from Web Data Commons to identify the pages with the structured data we need (wget/parallel). That gives us 1,985,925 seed pages.
* Crawl the seed pages and try to extract structured data with the [extruct](https://pypi.org/project/extruct/#description) package. That leaves 1,358,638 pages that are alive and well-formed.
* Extract only the relevant structured data of the HowTo/FAQPage type with the help of jmespath (see the sketch after this list). That boils down to 1,266,926 JSON documents.
* Extract the textual information from the structured data to identify the text's language, the length of the textual data, and the text/data ratio.
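
A minimal sketch of the crawl-and-filter idea from the second and third steps, assuming a plain `requests` fetch instead of the project's scrapy spider; the actual jmespath expressions live in `extract_relevant_structured_data.py` and may differ:

```python
import extruct
import jmespath
import requests

# Keep only HowTo/FAQPage items from the JSON-LD extracted by extruct.
# Note: "@type" can also be a list in JSON-LD, which this simple
# expression does not handle.
RELEVANT = jmespath.compile(
    "\"json-ld\"[?\"@type\" == 'FAQPage' || \"@type\" == 'HowTo']"
)

def extract_relevant(url: str) -> list:
    html = requests.get(url, timeout=30).text
    data = extruct.extract(html, base_url=url, syntaxes=["json-ld"])
    return RELEVANT.search(data) or []
```
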
You can use the resulting dataset by filtering by language and the amount of useful text. You need to convert the structured data into instructions yourself.
You'll need to apply extra cleansing/evaluation of the instructions you've got because, you know, the internet is still full of crap.
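
A minimal sketch of such filtering over the exported file produced in step 7 below; the field names for the detected language and text length are assumptions and may not match the real records:

```python
import bz2
import json

def iter_filtered(path: str, language: str = "en", min_length: int = 500):
    """Yield records in the given language with enough useful text."""
    with bz2.open(path, "rt", encoding="utf-8") as fh:
        for line in fh:
            record = json.loads(line)
            if record.get("language") != language:          # hypothetical field name
                continue
            if record.get("text_length", 0) < min_length:   # hypothetical field name
                continue
            yield record

for record in iter_filtered("extruct_out.jsonlines.bz2"):
    ...  # converting the structured data into instructions is up to you
```
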
## Recreating the results
1. Clone the repo without the LFS files.
2. Install requirements from `requirements.txt`.
3. Install `pv` and `parallel`.
4. Run `bin/get_seed_urls.sh` to filter URLs of interest out of 1.6 TB of compressed data. Don't worry about disk space; worry about the traffic.
5. Run the scrapy spider like this: `scrapy crawl webdatacommons_org -s WEB_DATA_COMMONS=web_data_commons_urls_sample.txt -L INFO -o webdatacommons.jsonlines`, with `WEB_DATA_COMMONS` pointing to the list of seed URLs from step 4.
6. Run `python extract_relevant_structured_data.py --num-threads 12 webdatacommons.jsonlines relevant.jsonlines.bz2`.
7. Run `python export_structured_data.py relevant.jsonlines.bz2 extruct_out.jsonlines.bz2` to obtain the final version of the dataset.
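
As noted above, converting the structured data into instructions is left to you. A hedged sketch of what that could look like for a schema.org **FAQPage** item; the key names follow schema.org conventions and are assumptions about the record layout:

```python
def faq_to_instructions(item: dict) -> list[dict]:
    """Turn a schema.org FAQPage item into instruction/response pairs."""
    main_entity = item.get("mainEntity", [])
    if isinstance(main_entity, dict):   # a single question may not be wrapped in a list
        main_entity = [main_entity]
    pairs = []
    for question in main_entity:
        answer = question.get("acceptedAnswer") or {}
        if isinstance(answer, list):    # multiple accepted answers: take the first one
            answer = answer[0] if answer else {}
        q_text = (question.get("name") or "").strip()
        a_text = (answer.get("text") or "").strip()
        if q_text and a_text:
            # Answer texts frequently contain HTML; strip/clean it before training.
            pairs.append({"instruction": q_text, "response": a_text})
    return pairs
```

HowTo items need a similar, slightly more involved treatment (name, steps, and tools), and both cases benefit from the extra cleansing mentioned above.
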
## License
The **code** of the project is released under the MIT license.