---
license: mit
task_categories:
- question-answering
pretty_name: Every Prompt
size_categories:
- 1M<n<10M
multilinguality:
  - multilingual
---

## Every Prompt
Every Prompt is a data-driven approach to mining instructions from the web.
It contains more than a million FAQs and HowTos from around the world in a structured format.
It also includes basic pre-processing to calculate the length of the useful text and to identify the language of that text with the help of [GCLD3](https://github.com/google/cld3).
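
A minimal sketch of that pre-processing, assuming the `gcld3` Python bindings and a naive text flattener (the exact heuristics of the real pipeline are not documented in this card):

```python
import json
import gcld3

# GCLD3 neural-network language identification; max_num_bytes caps how
# much of the text is inspected per call.
detector = gcld3.NNetLanguageIdentifier(min_num_bytes=0, max_num_bytes=1000)

def flatten_text(value) -> str:
    """Naively collect all string values from a JSON-like structure."""
    if isinstance(value, str):
        return value
    if isinstance(value, dict):
        return " ".join(flatten_text(v) for v in value.values())
    if isinstance(value, list):
        return " ".join(flatten_text(v) for v in value)
    return ""

def preprocess(item: dict) -> dict:
    """Attach text length, language, and text/data ratio to a structured item."""
    text = flatten_text(item)
    result = detector.FindLanguage(text=text)
    return {
        "text_length": len(text),
        "language": result.language,                # e.g. "en"
        "language_is_reliable": result.is_reliable,
        "text_to_data_ratio": len(text) / max(len(json.dumps(item)), 1),
    }
```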

It relies on the [Web Data Commons](http://webdatacommons.org) dataset to find the seed list of sites with [**HowTo**](https://schema.org/HowTo) and [**FAQPage**](https://schema.org/FAQPage) items.
The general pipeline looks like this:
* Download 1.6TB of structured data from Web Data Commons to identify the pages with the structured data we need (wget/parallel). That gives us 1,985,925 seed pages.
* Crawl the seed pages and try to extract structured data using the [extruct](https://pypi.org/project/extruct/#description) package. That leaves around 1,358,638 pages which are alive and well-formed.
* Extract only the relevant structured data of the HowTo/FAQPage type with the help of jmespath (this step and the previous one are sketched after the list). That boils down to 1,266,926 JSON documents.
* Extract the textual information out of the structure to identify the language of the text, the length of the textual data, and the text/data ratio.
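
A minimal sketch of the crawl-and-filter steps, assuming `requests` for fetching and that the relevant items arrive as JSON-LD (real pages may also carry microdata or RDFa, and `@type` can be a list rather than a string):

```python
import extruct
import jmespath
import requests

# JMESPath filter: keep only the JSON-LD items typed HowTo or FAQPage.
HOWTO_OR_FAQ = "\"json-ld\"[?\"@type\" == 'HowTo' || \"@type\" == 'FAQPage']"

def extract_items(url: str) -> list:
    """Fetch a seed page and return its HowTo/FAQPage structured data."""
    html = requests.get(url, timeout=30).text
    data = extruct.extract(html, base_url=url, syntaxes=["json-ld"])
    return jmespath.search(HOWTO_OR_FAQ, data) or []
```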

You can use the resulting dataset by filtering on the language and the amount of text. You need to convert the structured data into instructions yourself; one possible conversion is sketched below.
You'll also need to apply extra cleansing/evaluation to the instructions you get because, you know, the internet is still full of crap.
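
As an illustration only, here is one possible conversion for FAQPage items. The field names follow the [schema.org](https://schema.org/FAQPage) vocabulary, but real pages deviate from it, and the instruction/response framing below is an assumption, not part of the dataset:

```python
def faq_to_instructions(doc: dict) -> list:
    """Turn a schema.org FAQPage document into instruction/response pairs."""
    questions = doc.get("mainEntity", [])
    if isinstance(questions, dict):   # some pages use a single object
        questions = [questions]
    pairs = []
    for question in questions:
        answer = question.get("acceptedAnswer") or {}
        if isinstance(answer, list):  # or a list of answers: take the first
            answer = answer[0] if answer else {}
        q = (question.get("name") or "").strip()
        a = (answer.get("text") or "").strip()
        if q and a:
            pairs.append({"instruction": q, "response": a})
    return pairs
```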

## License
The **code** of the project is released under the MIT license.