### Process for Creating CulturaY
To create CulturaY, we began with the HPLT dataset (version 1.1). This is a notable difference between CulturaX and CulturaY: while CulturaX was generated by cleaning data derived from Common Crawl (mC4, OSCAR), CulturaY was generated by cleaning raw data from the Internet Archive (HPLT). Although Common Crawl is quite popular, data from the Internet Archive is less well known and less exploited, even though the data from both sources are similar. HPLT and CulturaY can be considered among the first publicly released datasets originating from the Internet Archive. Using CulturaX and CulturaY together gives your model a more diverse source of data.
Our pipeline is built on Bloom's data-cleaning pipeline: each document in the dataset is evaluated against criteria such as document length, perplexity, and bad-words ratio, and any document that fails one of these criteria is removed.
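The filtering logic above can be sketched roughly as follows. This is a minimal illustration, not the actual CulturaY pipeline: the threshold values, the `perplexity` field, and the bad-word list are all hypothetical placeholders, and the real pipeline applies more criteria.

```python
# Sketch of a Bloom-style document filter. All thresholds and field
# names below are illustrative assumptions, not the real pipeline's values.

def bad_words_ratio(text, bad_words):
    """Fraction of tokens that appear in the bad-word list."""
    words = text.split()
    if not words:
        return 1.0
    return sum(w.lower() in bad_words for w in words) / len(words)

def keep_document(doc, min_length=200, max_perplexity=1000.0,
                  max_bad_ratio=0.05, bad_words=frozenset({"spamword"})):
    """Return True only if the document passes every criterion."""
    if len(doc.get("text", "")) < min_length:          # document length
        return False
    if doc.get("perplexity", 0.0) > max_perplexity:    # LM perplexity
        return False
    if bad_words_ratio(doc["text"], bad_words) > max_bad_ratio:
        return False
    return True

# A document is dropped as soon as it fails any single criterion.
docs = [
    {"text": "too short", "perplexity": 50.0},
    {"text": "a " * 200, "perplexity": 120.0},
]
kept = [d for d in docs if keep_document(d)]
```

Note the "fail any one criterion" semantics: a document is removed on its first failing check, so the filters compose as a conjunction rather than a score.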
See our [Blog](https://www.ontocord.ai/blog/cultura-y) for more details.
### Citation