---
license: mit
task_categories:
  - question-answering
pretty_name: Every Prompt
size_categories:
  - 1M<n<10M
multilinguality:
  - multilingual
---

# Every Prompt

Every Prompt is a data-driven approach to mining instructions from the web. It contains more than a million FAQs and HowTos from all around the world in a structured format. It also includes basic pre-processing to calculate the length of the useful text and identify its language with the help of GCLD3.

It relies on the Web Data Commons dataset to find the seed list of sites with HowTo and FAQPage items. The general pipeline looks like this:

- Download 1.6 TB of structured data from webdatacommons to identify the pages with the structured data we need (wget/parallel). This gives us 1,985,925 seed pages.
- Crawl the seed pages and try to extract structured data using the extruct package. This leaves around 1,358,638 pages that are alive and well-formed.
- Extract only the relevant structured data of the HowTo/FAQPage type with the help of jmespath. This boils down to 1,266,926 JSON documents.
- Extract the textual information out of the structure to identify the language of the text, the length of the textual data, and the text/data ratio.
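The last step above can be sketched in pure Python. The recursive walk and the text/data ratio below are an illustrative reconstruction, assuming one JSON-LD item as input, not the project's actual code:

```python
import json

def gather_text(node):
    """Recursively collect string values from a parsed JSON-LD item.

    Keys starting with "@" (e.g. "@type", "@context") carry schema
    metadata rather than content, so they are skipped.
    """
    if isinstance(node, dict):
        return " ".join(gather_text(v) for k, v in node.items()
                        if not k.startswith("@"))
    if isinstance(node, list):
        return " ".join(gather_text(item) for item in node)
    if isinstance(node, str):
        return node
    return ""

def text_stats(doc):
    """Return (useful_text_length, text_to_data_ratio) for one item."""
    raw = json.dumps(doc, ensure_ascii=False)
    text = " ".join(gather_text(doc).split())  # normalize whitespace
    return len(text), len(text) / len(raw)

# A toy FAQPage item in the shape schema.org prescribes.
faq = {
    "@type": "FAQPage",
    "mainEntity": [{
        "@type": "Question",
        "name": "Is the dataset multilingual?",
        "acceptedAnswer": {"@type": "Answer",
                           "text": "Yes, it covers many languages."},
    }],
}

length, ratio = text_stats(faq)
```

The ratio separates pages whose structured data is mostly real text from pages that are mostly schema scaffolding.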

You can use the resulting dataset by filtering on the language and the amount of text. You need to convert the structured data into instructions yourself. You should also apply extra cleansing and evaluation to the instructions you get, because the internet is still full of junk.
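A minimal sketch of such filtering and conversion, assuming hypothetical record fields `language`, `text_length`, and `data` (the real dataset schema may differ), with FAQ entries following the schema.org FAQPage layout:

```python
def faq_to_instructions(item):
    """Turn one FAQPage item into (instruction, response) pairs."""
    pairs = []
    for entity in item.get("mainEntity", []):
        question = entity.get("name")
        answer = (entity.get("acceptedAnswer") or {}).get("text")
        if question and answer:  # skip incomplete Q&A entries
            pairs.append((question.strip(), answer.strip()))
    return pairs

# Toy records standing in for dataset rows.
records = [
    {"language": "en", "text_length": 64,
     "data": {"@type": "FAQPage",
              "mainEntity": [{"name": "What license is used?",
                              "acceptedAnswer": {"text": "MIT, for the code."}}]}},
    {"language": "uk", "text_length": 12,
     "data": {"@type": "FAQPage", "mainEntity": []}},
]

# Keep English records with enough useful text, then convert.
instructions = [
    pair
    for record in records
    if record["language"] == "en" and record["text_length"] >= 50
    for pair in faq_to_instructions(record["data"])
]
# instructions == [("What license is used?", "MIT, for the code.")]
```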

## License

The code of the project is licensed under MIT.