---
dataset_info:
  features:
  - name: id
    dtype: string
  - name: url
    dtype: string
  - name: title
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 44521478
    num_examples: 63076
  download_size: 23091608
  dataset_size: 44521478
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: apache-2.0
task_categories:
- text-classification
- token-classification
- summarization
language:
- ku
size_categories:
- 10K<n<100K
pretty_name: Kurdish Wikipedia Articles
---
## Summary
Kurdish Wikipedia articles extracted from the Wikipedia dump. The dump also contains a summary and categories for each article; those fields may be added to this dataset later.
## Usage
```python
from datasets import load_dataset

ds = load_dataset("nazimali/kurdish-wikipedia-articles", split="train")
ds
```

```
Dataset({
    features: ['id', 'url', 'title', 'text'],
    num_rows: 63076
})
```
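
Each row exposes the four string fields from the schema above. Below is a minimal sketch of inspecting a record and filtering by article length; the 200-character preview and 500-character threshold are arbitrary illustrations, not properties of the dataset.

```python
# Inspect the first record; every feature is a plain string.
example = ds[0]
print(example["title"])
print(example["url"])
print(example["text"][:200])  # preview the first 200 characters

# Keep only articles with a longer body, e.g. as a rough
# preprocessing step for summarization or classification work
# (the 500-character cutoff is an arbitrary example value).
long_articles = ds.filter(lambda row: len(row["text"]) >= 500)
print(long_articles.num_rows)
```

For the tasks listed in the metadata, a held-out split can be created with `ds.train_test_split(test_size=0.1)`, since the dataset ships with a single `train` split.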