metadata
language:
  - de
  - es
  - fr
  - pt
  - it
  - nl
  - el
  - pl
  - cs
  - sk
task_categories:
  - text-generation
pretty_name: Occiglot Fineweb v0.5
size_categories:
  - 10B<n<100B
extra_gated_prompt: >-
  By filling the form below I understand that occiglot-fineweb is a derivative
  collection of multiple datasets which use individual licenses and their
  respective terms and conditions apply. I understand that all uses of the
  textual content in occiglot-fineweb are subject to the terms of use. I
  understand that reusing the textual content in occiglot-fineweb might not be
  legal in all countries/regions and for all use cases. I understand that
  occiglot-fineweb is mainly targeted towards researchers and meant to be used
  in research. Occiglot reserves the right to revoke my access to this data.
  Occiglot reserves the right to modify this data at any time in accordance
  with take-down requests.
extra_gated_fields:
  Name: text
  Email: text
  Affiliation: text
  Country: text
  Usecase: text
  I have explicitly checked that downloading occiglot-fineweb is legal in my jurisdiction, in the country/region where I am located right now, and for the use case that I have described above, I have also read and accepted the relevant Terms of Use: checkbox

Occiglot Fineweb v0.5

We present a preliminary version of the multilingual Occiglot Fineweb corpus. In this early form, the dataset contains roughly 230M heavily cleaned documents in 10 languages. Occiglot Fineweb builds on our existing collection of curated datasets and pre-filtered web data. All documents were then filtered with language-specific derivatives of the FineWeb processing pipeline and globally deduplicated.

We are actively working on extending this dataset with more data and further languages. For more information, please refer to our blog post or join our Discord server.

Unfortunately, some of the datasets we used do not allow for re-distribution. Consequently, we had to exclude them from this version of our dataset. We are exploring different avenues to make this data available to the public as well.
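
For quick experimentation, the snippet below sketches one way to stream a single language subset with the datasets library. The dataset ID and the per-language configuration names are assumptions (shown here as occiglot/occiglot-fineweb-v0.5 with ISO-code configs), not confirmed identifiers, and access is gated, so an authenticated Hugging Face login is required.

```python
# Minimal sketch: streaming one language subset of Occiglot Fineweb.
# Assumptions: the dataset is published as "occiglot/occiglot-fineweb-v0.5"
# with per-language configs named by ISO code (e.g. "de") and a "text" field;
# adjust to the actual repository layout. Log in first (`huggingface-cli login`).
from datasets import load_dataset

ds = load_dataset(
    "occiglot/occiglot-fineweb-v0.5",  # hypothetical dataset ID
    "de",                              # hypothetical per-language config
    split="train",
    streaming=True,                    # avoid downloading the full subset
)

for i, doc in enumerate(ds):
    print(doc["text"][:200])           # field name assumed to be "text"
    if i == 2:
        break
```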

Datasources

We mainly relied on two sources of data.

1. LLM-Datasets

From LLM-Datasets we took all available datasets for the considered languages (excluding OSCAR). This collection of data for LLM training is curated from various sources and contains multiple high-quality datasets.

2. Web-Data

We sourced web-crawled data from 12 Common Crawl releases spanning 2015 to June 2023. All releases were then processed with OSCAR's Ungoliant pipeline.

Filtering

All data was rigorously filtered using language-specific pipelines built upon Hugging Face's FineWeb filters. In addition to some minor hyper-parameter adjustments, we mainly modified three aspects to ensure language-specific quality filtering (a simplified sketch follows the list below):

  1. Adjusted the average word-length filters according to the linguistic characteristics of each language
  2. Added language-specific stop words
  3. Added a language-specific filter for policy and cookie notices
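
To make these adjustments concrete, the following simplified per-document filter shows how such language-specific thresholds and word lists could be applied. It is an illustrative sketch, not the actual FineWeb-derived pipeline; all thresholds, stop words, and policy phrases are placeholder values.

```python
# Illustrative, simplified per-language quality filter (not the production
# FineWeb-derived pipeline). Thresholds and word lists are placeholders.
LANG_CONFIG = {
    "de": {
        "avg_word_len": (3.0, 12.0),            # German words tend to be long
        "stop_words": {"der", "die", "und", "ist"},
        "policy_phrases": {"cookie-richtlinie", "datenschutzerklärung"},
    },
    "it": {
        "avg_word_len": (2.5, 10.0),
        "stop_words": {"il", "la", "e", "che"},
        "policy_phrases": {"informativa sui cookie", "privacy policy"},
    },
}

def keep_document(text: str, lang: str) -> bool:
    cfg = LANG_CONFIG[lang]
    words = text.split()
    if not words:
        return False

    # 1. Average word length must fall inside the language-specific range.
    avg_len = sum(len(w) for w in words) / len(words)
    lo, hi = cfg["avg_word_len"]
    if not (lo <= avg_len <= hi):
        return False

    # 2. Natural-language text should contain some language-specific stop words.
    lowered = {w.lower().strip(".,;:!?") for w in words}
    if not lowered & cfg["stop_words"]:
        return False

    # 3. Drop boilerplate cookie/privacy-policy pages.
    text_lower = text.lower()
    if any(phrase in text_lower for phrase in cfg["policy_phrases"]):
        return False

    return True
```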

Deduplication

We performed MinHash deduplication on all data of each language. Importantly, within each duplicate cluster we always retain the copy that does not come from the web-crawled data. For example, if a Wikipedia page is also contained in OSCAR, we drop the OSCAR duplicate, thus keeping the Wikipedia subset complete. This dataset structure makes it possible to reliably over- or undersample the curated subsets.
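
This retention rule can be approximated with a standard MinHash/LSH index if curated documents are indexed before web-crawled ones, so that whenever a near-duplicate pair exists, the curated copy is the one that survives. The sketch below uses the datasketch library with placeholder shingling and thresholds; it is a simplified stand-in for the actual deduplication setup.

```python
# Minimal sketch of source-priority MinHash deduplication using datasketch.
# Shingle size, num_perm, and the similarity threshold are placeholder values.
from datasketch import MinHash, MinHashLSH

NUM_PERM = 128

def doc_minhash(text: str, shingle_size: int = 5) -> MinHash:
    """Build a MinHash signature from word shingles of one document."""
    words = text.lower().split()
    m = MinHash(num_perm=NUM_PERM)
    for i in range(max(len(words) - shingle_size + 1, 1)):
        shingle = " ".join(words[i:i + shingle_size])
        m.update(shingle.encode("utf-8"))
    return m

def deduplicate(documents):
    """documents: iterable of dicts with 'id', 'text', and 'source' keys.

    Documents must be ordered curated-first, web-crawled last, so the non-web
    copy of every duplicate cluster is the one that is kept.
    """
    lsh = MinHashLSH(threshold=0.8, num_perm=NUM_PERM)
    kept = []
    for doc in documents:
        sig = doc_minhash(doc["text"])
        if lsh.query(sig):          # a near-duplicate was already kept
            continue                # drop this lower-priority copy
        lsh.insert(doc["id"], sig)
        kept.append(doc)
    return kept

# Usage: sort so curated subsets (e.g. Wikipedia) precede web data (OSCAR).
# docs = sorted(all_docs, key=lambda d: d["source"] == "web")
# clean = deduplicate(docs)
```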

Statistics

| Language   | lang-code | # Documents | # Tokens (Llama-3) |
|------------|-----------|-------------|--------------------|
| German     | de        | 43.40M      | 65.02B             |
| Spanish    | es        | 42.05M      | 50.96B             |
| French     | fr        | 31.44M      | 39.90B             |
| Portuguese | pt        | 22.58M      | 26.06B             |
| Italian    | it        | 17.32M      | 25.21B             |
| Dutch      | nl        | 16.15M      | 16.90B             |
| Greek      | el        | 12.74M      | 16.07B             |
| Polish     | pl        | 10.03M      | 15.51B             |
| Czech      | cs        | 32.83M      | 12.74B             |
| Slovak     | sk        | 2.86M       | 9.03B              |
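
Token counts refer to the Llama-3 tokenizer. A minimal way to reproduce such a count for a batch of documents is sketched below, assuming the tokenizer is loaded from a Llama-3 checkpoint such as meta-llama/Meta-Llama-3-8B (gated on the Hugging Face Hub).

```python
# Sketch: counting Llama-3 tokens for a list of documents.
# Assumes access to a Llama-3 checkpoint on the Hugging Face Hub
# (e.g. "meta-llama/Meta-Llama-3-8B", which is gated).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")

def count_tokens(texts):
    """Return the total number of tokens over all documents."""
    total = 0
    for text in texts:
        # add_special_tokens=False counts only the content tokens
        total += len(tokenizer(text, add_special_tokens=False)["input_ids"])
    return total

print(count_tokens(["Ein kurzes Beispiel.", "Un exemple court."]))
```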

Acknowledgements

The dataset creation was supported by a compute grant on the 42 supercomputer, which is a central component in the development of hessian AI, the AI Innovation Lab (funded by the Hessian Ministry of Higher Education, Research and the Arts (HMWK) and the Hessian Ministry of the Interior, for Security and Homeland Security (HMinD)), and the AI Service Centers (funded by the German Federal Ministry for Economic Affairs and Climate Action (BMWK)). Some preliminary computations were conducted on the DFKI Pegasus Cluster. Parts of the preliminary data curation were funded by the German Federal Ministry for Economic Affairs and Climate Action (BMWK) through the project OpenGPT-X (project no. 68GX21007D).