---
dataset_info:
  features:
  - name: id
    dtype: string
  - name: url
    dtype: string
  - name: title
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 44521478
    num_examples: 63076
  download_size: 23091608
  dataset_size: 44521478
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: apache-2.0
task_categories:
- text-classification
- token-classification
- summarization
language:
- ku
size_categories:
- 10K<n<100K
pretty_name: Kurdish Wikipedia Articles
---
## Summary
Kurdish Wikipedia articles extracted from the Wikipedia dump. The dump also contains a summary and categories for each article; these may be added to the dataset in a later update.
## Usage
```python
from datasets import load_dataset

# Load the train split (the only split in this dataset)
ds = load_dataset("nazimali/kurdish-wikipedia-articles", split="train")
ds
```
```python
Dataset({
    features: ['id', 'url', 'title', 'text'],
    num_rows: 63076
})
```
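Each row is a dict with the four string features listed above. A minimal sketch of inspecting a record (the field values shown in comments are illustrative, not actual dataset contents):

```python
# Grab the first article; indexing returns a dict of the four string features
example = ds[0]

# Print the article title and the first 200 characters of its body text
print(example["title"])
print(example["text"][:200])
```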