|
---
license: mit
task_categories:
- question-answering
pretty_name: Every Prompt
size_categories:
- 1M<n<10M
multilinguality:
- multilingual
---
|
|
|
## Every Prompt |
|
Every Prompt is a data-driven approach to mining instructions from the web. |
|
It contains over a million FAQs and HowTos from around the world in a structured format. |
|
It also includes basic pre-processing that calculates the length of the useful text and identifies its language with the help of [GCLD3](https://github.com/google/cld3).
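For illustration, here is a minimal sketch of language identification with GCLD3's Python bindings; the byte limits and the sample text are placeholders, not the exact parameters used to build this dataset:

```python
# Rough sketch of the language-identification step (pip install gcld3).
# The byte limits and sample string below are placeholders.
import gcld3

detector = gcld3.NNetLanguageIdentifier(min_num_bytes=0, max_num_bytes=1000)

sample = "Wie installiere ich Python unter Windows?"
result = detector.FindLanguage(text=sample)

# -> language code, reliability flag, and length of the useful text
print(result.language, result.is_reliable, len(sample))
```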
|
|
|
It relies on the [Web Data Commons](http://webdatacommons.org) dataset to find the seed list of sites with [**HowTo**](https://schema.org/HowTo) and [**FAQPage**](https://schema.org/FAQPage) items. |
|
The general pipeline looks like this: |
|
* Downloads 1.6 TB of structured data from Web Data Commons to identify the pages with the structured data we need (wget/parallel). That gives us 1,985,925 seed pages.
|
* Crawls the seed pages and tries to extract structured data using the [extruct](https://pypi.org/project/extruct/#description) package. That leaves 1,358,638 pages that are alive and well-formed.
|
* Extracts only the relevant structured data of the HowTo/FAQPage type with the help of [jmespath](https://pypi.org/project/jmespath/) (a sketch of this step follows the list). That boils down to 1,266,926 JSON documents.
|
* Extracts the textual information from the structured data to determine the text's language, the length of the textual data, and the text/data ratio.
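
For a rough idea of what the per-page extraction looks like, here is a minimal sketch using extruct and jmespath. The URL, the jmespath expression, and the text-collecting helper are illustrative assumptions, not the project's actual code:

```python
# Sketch of the per-page extraction step, assuming a fetched HTML page.
import json

import extruct
import jmespath
import requests

url = "https://example.com/some-howto-page"  # placeholder, not a real seed page
html = requests.get(url, timeout=30).text

# Pull embedded structured data (JSON-LD and microdata) out of the page.
data = extruct.extract(html, base_url=url, syntaxes=["json-ld", "microdata"])

# Keep only items typed as HowTo or FAQPage. Real pages vary (e.g. @type can
# be a list or a full URL), so treat this filter as a starting point.
relevant = jmespath.search(
    '[?"@type" == `"HowTo"` || "@type" == `"FAQPage"`]',
    data.get("json-ld", []),
)


def collect_text(node):
    """Concatenate every string value in a nested JSON structure."""
    if isinstance(node, str):
        return node + " "
    if isinstance(node, dict):
        return "".join(collect_text(v) for v in node.values())
    if isinstance(node, list):
        return "".join(collect_text(v) for v in node)
    return ""


for item in relevant:
    text = collect_text(item)
    ratio = len(text) / max(len(json.dumps(item)), 1)  # text/data ratio
    print(len(text), round(ratio, 2))
```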
|
|
|
You can use the resulting dataset by filtering by language and the amount of text. You need to convert the structured data into instructions yourself.
|
You'll need to apply extra cleansing/evaluation of the instructions you've got because, you know, the internet is still full of crap. |
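
As a starting point, filtering with the `datasets` library could look like the sketch below. The repository id and the column names (`language`, `text_length`) are assumptions; check the actual dataset schema and adjust accordingly:

```python
# Sketch of filtering the dataset by language and text length.
# The repo id and column names are assumptions, not the actual schema.
from datasets import load_dataset

ds = load_dataset("<namespace>/every-prompt", split="train")  # placeholder repo id

english_longish = ds.filter(
    lambda row: row["language"] == "en" and row["text_length"] > 500
)
print(len(english_longish))
```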
|
|
|
## License |
|
The **code** of the project is released under the MIT license.