---
dataset_info:
  features:
    - name: text
      dtype: string
  splits:
    - name: train
      num_bytes: 2180800569
      num_examples: 384589
  download_size: 980379692
  dataset_size: 2180800569
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
language:
  - fa
tags:
  - farsi
  - persian
  - corpus
  - normalized
---

# Dataset Summary

This dataset is a collection of roughly 400k Persian blog posts (RohanAiLab/persian_blog), gathered from more than 10 websites. It can be used for NLP tasks such as language modeling, tokenizer training, and text generation.

  • The data in this dataset have been normalized, and unnecessary tokens have been removed.
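
The card does not specify the exact normalization pipeline, but a minimal sketch of the kind of cleanup commonly applied to Persian corpora (unifying Arabic-script variants of Persian letters, stripping diacritics, collapsing whitespace) might look like this; the character mappings and function name here are illustrative assumptions, not the dataset's actual code:

```python
import re

# Common Arabic-script codepoints mapped to their Persian equivalents
# (assumed mappings for illustration; not taken from this dataset's pipeline).
ARABIC_TO_PERSIAN = {
    "\u064a": "\u06cc",  # Arabic Yeh  -> Persian Yeh
    "\u0643": "\u06a9",  # Arabic Kaf  -> Persian Keheh
}

def normalize_fa(text: str) -> str:
    """Normalize a Persian string: unify letters, drop diacritics, fix spacing."""
    for src, dst in ARABIC_TO_PERSIAN.items():
        text = text.replace(src, dst)
    # Strip Arabic diacritics (fathatan .. sukun)
    text = re.sub(r"[\u064b-\u0652]", "", text)
    # Collapse runs of whitespace into single spaces
    text = re.sub(r"\s+", " ", text).strip()
    return text

print(normalize_fa("سلام  دنيا"))  # Arabic Yeh replaced, double space collapsed
```

Libraries such as hazm provide a more complete normalizer for production use.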

Note: If you need a combined Persian and English corpus, click here