---
license: cc-by-sa-4.0
dataset_info:
  features:
  - name: id
    dtype: int64
  - name: title
    dtype: string
  - name: summary
    dtype: string
  - name: text
    dtype: string
  - name: categories
    sequence: string
  splits:
  - name: train
    num_bytes: 447696713.49705654
    num_examples: 67573
  - name: test
    num_bytes: 49749968.50294345
    num_examples: 7509
  download_size: 298225345
  dataset_size: 497446682.0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
---
|
|
|
A dataset of Wikipedia pages built from Wikipedia's category tree.
|
|
|
Pages are collected by exploring the 40 root categories and their sub-categories; the resulting dataset provides up to 2000 pages per category.
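The collection strategy above can be sketched as a breadth-first walk of the category tree. This is a minimal sketch, not the actual mwcat implementation: the `subcategories` and `pages` mappings are hypothetical stand-ins for lookups that would really be made against the MediaWiki API.

```python
from collections import deque

PAGES_PER_CATEGORY = 2000  # per-category cap stated in the card


def collect_pages(root, subcategories, pages, limit=PAGES_PER_CATEGORY):
    """Breadth-first walk of a category tree, gathering page ids up to `limit`.

    `subcategories` maps a category to its child categories and `pages` maps
    a category to the page ids it directly contains (both hypothetical here;
    in practice they would come from the MediaWiki API).
    """
    seen, collected = set(), []
    queue = deque([root])
    while queue and len(collected) < limit:
        cat = queue.popleft()
        if cat in seen:  # category trees can contain cycles
            continue
        seen.add(cat)
        for page in pages.get(cat, []):
            if len(collected) >= limit:
                break
            collected.append(page)
        queue.extend(subcategories.get(cat, []))
    return collected
```

For example, with `subcategories = {"Science": ["Physics", "Biology"]}` and a few pages per category, `collect_pages("Science", ...)` returns the root's pages first, then those of its sub-categories, stopping once the cap is reached.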
|
|
|
See https://github.com/tarekziade/mwcat
|
|
|
|