---
dataset_info:
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 2180800569
    num_examples: 384589
  download_size: 980379692
  dataset_size: 2180800569
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
language:
- fa
tags:
- farsi
- persian
- corpus
---
# Dataset Summary
The Persian data in this dataset is a collection of 400k blog posts ([RohanAiLab/persian_blog](https://huggingface.co/datasets/RohanAiLab/persian_blog/blob/main/README.md)). These posts have been gathered from more than 10 websites. This dataset can be used for NLP tasks such as language modeling, tokenizer training, and text generation.
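The dataset exposes a single `text` feature with one `train` split, so it can be loaded directly with the 🤗 `datasets` library. Below is a minimal loading sketch; the repository id is a placeholder inferred from the related English/Persian dataset linked at the end of this card, so replace it with this dataset's actual Hub path:

```python
from datasets import load_dataset

# NOTE: placeholder repo id; replace with this dataset's actual path on the Hub.
dataset = load_dataset("ali619/corpus-dataset-normalized-for-persian", split="train")

print(dataset.num_rows)          # 384589 examples in the train split
print(dataset[0]["text"][:200])  # each example is a single normalized "text" string
```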
* **The data in this dataset has been normalized and unnecessary tokens have been removed.**
**Note:** If you need a combined Persian and English corpus, click [here](https://huggingface.co/datasets/ali619/corpus-dataset-normalized-for-persian-and-english).