---
license: mit
task_categories:
- question-answering
pretty_name: Every Prompt
size_categories:
- 1M<n<10M
multilinguality:
- multilingual
---

## Every Prompt

Every Prompt is a data-driven approach to mining instructions from the web.
It contains more than a million FAQs and HowTos from around the world in a structured format.
It also includes basic pre-processing to calculate the length of the useful text and to identify its language with [GCLD3](https://github.com/google/cld3).

It relies on the [Web Data Commons](http://webdatacommons.org) dataset to find the seed list of sites with [**HowTo**](https://schema.org/HowTo) and [**FAQPage**](https://schema.org/FAQPage) items.
The general pipeline looks like this:
* Download 1.6 TB of structured data from Web Data Commons to identify pages with the structured data we need (wget/parallel). That gives us 1,985,925 seed pages.
* Crawl the seed pages and try to extract structured data with the [extruct](https://pypi.org/project/extruct/#description) package. That leaves around 1,358,638 pages which are alive and well-formed.
* Extract only the relevant structured data of the HowTo/FAQPage type with the help of jmespath. That boils down to 1,266,926 JSON documents.
* Extract the textual information out of the structure to identify the language of the text, the length of the textual data, and the text/data ratio.
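
The type-filtering step above can be sketched in plain Python. This is a hedged, illustrative equivalent of the jmespath extraction, assuming extruct has already returned a list of JSON-LD blocks (the function name and sample data are hypothetical, not taken from the project's code):

```python
def extract_relevant_items(jsonld_blocks):
    """Keep only HowTo/FAQPage items from a list of JSON-LD blocks."""
    wanted = {"HowTo", "FAQPage"}
    relevant = []
    for block in jsonld_blocks:
        # "@type" may be a single string or a list of strings in JSON-LD
        types = block.get("@type", [])
        if isinstance(types, str):
            types = [types]
        if wanted & set(types):
            relevant.append(block)
    return relevant

blocks = [
    {"@type": "FAQPage", "mainEntity": []},
    {"@type": ["WebPage", "Article"]},
    {"@type": "HowTo", "name": "Change a tire"},
]
print(len(extract_relevant_items(blocks)))  # 2
```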

To use the resulting dataset, filter on the language and the amount of text. You need to convert the structured data into instructions yourself.
You'll also need to apply extra cleansing/evaluation to the instructions you get because, you know, the internet is still full of junk.
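
A minimal sketch of that conversion step, assuming the structured data follows the schema.org FAQPage shape (the helper name is hypothetical, and real crawled pages vary enough to need more defensive parsing):

```python
def faq_to_pairs(faq_page):
    """Flatten a schema.org FAQPage dict into (question, answer) tuples."""
    pairs = []
    for entity in faq_page.get("mainEntity", []):
        question = entity.get("name", "").strip()
        answer = entity.get("acceptedAnswer", {}).get("text", "").strip()
        if question and answer:
            pairs.append((question, answer))
    return pairs

faq = {
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "What license does the code use?",
            "acceptedAnswer": {"@type": "Answer", "text": "MIT."},
        }
    ],
}
print(faq_to_pairs(faq))  # [('What license does the code use?', 'MIT.')]
```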

## License
The **code** of the project is released under the MIT license.