---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
pretty_name: wikipedia-de-splits
paperswithcode_id: null
license:
- cc-by-sa-3.0
- gfdl
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
source_datasets:
- wikipedia
size_categories:
- n<1K
- 1K<n<10K
- 10K<n<100K
- 100K<n<1M
- 1M<n<10M
language:
- de
configs:
- "1"
- "2"
- "3"
- "4"
- "5"
- "6"
- "7"
- "8"
- "9"
- "10"
- "11"
- "12"
- "13"
- "14"
- "15"
- "16"
- "17"
- "18"
- "19"
- "20"
- "21"
- "all"
---
# Dataset Card for yaakov/wikipedia-de-splits
## Dataset Description
The only goal of this dataset is to provide random German Wikipedia articles at
various dataset sizes: small splits for fast development and large splits for statistically meaningful measurements.
For this purpose, I loaded the 2665357 articles in the `train` set of the pre-processed German Wikipedia dump from 2022-03-01, randomly permuted the articles, and created splits of sizes `2**n`: `1, 2, 4, 8, ...`. The split names are strings. The split `'all'` contains all 2665357 available articles.
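For example, a small split can be loaded by name for quick development runs. This is a minimal sketch: the split name `"4"` and the `title` field are assumptions based on the creation script below and the fields of the underlying `wikipedia` dataset.

```python
from datasets import load_dataset

# Load the 2**4 = 16-article split for fast development iterations
# (split names are strings; "4" is assumed from the creation script below).
small = load_dataset("yaakov/wikipedia-de-splits", split="4")

print(small.num_rows)       # expected: 16
print(small[0]["title"])    # articles keep the wikipedia fields: id, url, title, text
```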
## Dataset creation
This dataset has been created with the following script:
```python
!apt install git-lfs
!pip install -q transformers datasets

# Log in so that push_to_hub can write to the Hub
from huggingface_hub import notebook_login
notebook_login()

# Load the pre-processed German Wikipedia dump and shuffle it reproducibly
from datasets import load_dataset, DatasetDict
wikipedia_de = load_dataset("wikipedia", "20220301.de")['train']
shuffled = wikipedia_de.shuffle(seed=42)

# Create splits of sizes 2**k: the split named str(k) holds the first 2**k shuffled articles
res = DatasetDict()
k, n = 0, 1
while n <= shuffled.num_rows:
    res[str(k)] = shuffled.select(range(n))
    k += 1; n *= 2
res['all'] = shuffled
res.push_to_hub('yaakov/wikipedia-de-splits')
```
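Because every split is taken from the same shuffled order via `select(range(n))`, each split is a prefix of the next larger one, so results measured on a small split use a subset of the articles contained in every larger split. A rough sanity check (a sketch, assuming the split names written by the script above; note it downloads all splits):

```python
from datasets import load_dataset

# Sanity check (sketch): the split named str(k) should hold exactly 2**k articles,
# and 'all' should hold every available article.
ds = load_dataset("yaakov/wikipedia-de-splits")
for name, split in ds.items():
    if name != "all":
        assert split.num_rows == 2 ** int(name), (name, split.num_rows)
print(ds["all"].num_rows)
```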