dchaplinsky committed
Commit 8b11c67
1 Parent(s): c378c7a

Update README.md

Files changed (1)
  1. README.md +7 -8
README.md CHANGED
  ---
 
  ## Every Prompt
Every Prompt is a data-driven approach to mining instructions from the web.
It contains over a million FAQs and HowTos from around the world in a structured format.
It also includes basic pre-processing to calculate the length of the useful text and to identify its language with the help of [GCLD3](https://github.com/google/cld3).
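
As an illustration of this step, here is a minimal sketch of such pre-processing with the gcld3 bindings; the helper function and the byte limits are assumptions for illustration, not the project's actual code:

```python
# Rough sketch of the language/length pre-processing step,
# not the project's exact implementation.
import gcld3

# GCLD3 neural language identifier; the byte limits here are arbitrary.
detector = gcld3.NNetLanguageIdentifier(min_num_bytes=0, max_num_bytes=2000)

def preprocess(useful_text: str) -> dict:
    """Return the text length and the detected language for one document."""
    result = detector.FindLanguage(text=useful_text)
    return {
        "text_length": len(useful_text),
        "language": result.language,           # e.g. "en", "uk"
        "language_reliable": result.is_reliable,
    }

print(preprocess("How to bake bread: mix flour, water, salt and yeast..."))
```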
 
  It relies on the [Web Data Commons](http://webdatacommons.org) dataset to find the seed list of sites with [**HowTo**](https://schema.org/HowTo) and [**FAQPage**](https://schema.org/FAQPage) items.
  The general pipeline looks like this:
* Download 1.6TB of structured data from webdatacommons to identify the pages with the structured data we need (wget/parallel). That gives us 1,985,925 seed pages.
* Crawl the seed pages and try to extract structured data using the [extruct](https://pypi.org/project/extruct/#description) package. That leaves around 1,358,638 pages that are alive and well-formed (see the extraction sketch after this list).
* Extract only the relevant structured data of the HowTo/FAQPage type with the help of jmespath. That boils down to 1,266,926 JSON documents.
* Extract the textual information from the structure to identify the text's language, the length of the textual data, and the text/data ratio.
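
As a rough illustration of the crawl-and-filter steps above, here is a minimal sketch using extruct and jmespath; the URL and the exact jmespath expression are assumptions for illustration, not the project's actual crawler code:

```python
# Sketch of extracting structured data from one seed page and keeping
# only HowTo/FAQPage items. The URL and jmespath expression are illustrative.
import requests
import extruct
import jmespath

url = "https://example.com/some-howto-page"   # a page from the seed list
html = requests.get(url, timeout=30).text

# Pull all embedded structured data (JSON-LD, microdata) out of the page.
data = extruct.extract(html, base_url=url, syntaxes=["json-ld", "microdata"])

# Keep only HowTo/FAQPage items from the JSON-LD blocks.
relevant = jmespath.search(
    "[?\"@type\" == 'HowTo' || \"@type\" == 'FAQPage']",
    data.get("json-ld", []),
)
print(relevant)
```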
 
You can use the resulting dataset by filtering on the language and the amount of text. You need to convert the structured data into instructions yourself.
You'll need to apply extra cleansing/evaluation of the instructions you've got because, you know, the internet is still full of crap.
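
For illustration, converting one filtered FAQPage record into instruction/response pairs could look roughly like this; the record field names (`language`, `text_length`, `json`) are assumptions about the dataset layout, while the `mainEntity`/`acceptedAnswer` part follows the standard schema.org FAQPage structure:

```python
# Sketch of turning one filtered record into instruction/response pairs.
# The record field names ("language", "text_length", "json") are assumptions
# about the dataset layout; mainEntity/acceptedAnswer follow schema.org FAQPage.
import json

def faq_to_instructions(record: dict, min_length: int = 500) -> list[dict]:
    """Convert one FAQPage record into (instruction, response) pairs."""
    if record["language"] != "en" or record["text_length"] < min_length:
        return []

    page = json.loads(record["json"])
    pairs = []
    for question in page.get("mainEntity", []):
        answer = question.get("acceptedAnswer", {})
        pairs.append({
            "instruction": question.get("name", ""),
            "response": answer.get("text", ""),
        })
    return pairs
```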
 
  ## License
The **code** of the project is released under the MIT license.