---
language:
- fa
size_categories:
- 100K<n<1M
task_categories:
- image-to-text
pretty_name: ParsynthOCR-200K
tags:
- hezar
dataset_info:
features:
- name: image_path
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 560135371.667
num_examples: 179999
- name: test
num_bytes: 63380889
num_examples: 20000
download_size: 568073396
dataset_size: 623516260.667
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---

ParsynthOCR is a synthetic dataset for Persian OCR. This version is a preview of the original 4-million-sample dataset (ParsynthOCR-4M).
## Usage
### 🤗 Datasets

```python
from datasets import load_dataset

dataset = load_dataset("hezarai/parsynth-ocr-200k")
```
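
As a quick check, here is a minimal sketch that reads one sample, using the feature names declared in the metadata above (`image_path` decodes to a PIL image, `text` holds the transcription):

```python
from datasets import load_dataset

dataset = load_dataset("hezarai/parsynth-ocr-200k")

# Inspect one training sample; field names follow the dataset_info metadata.
sample = dataset["train"][0]
print(sample["text"])             # Persian ground-truth transcription
print(sample["image_path"].size)  # (width, height) of the rendered text image
```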
### Hezar

```
pip install hezar
```

```python
from hezar.data import Dataset

dataset = Dataset.load("hezarai/parsynth-ocr-200k", split="train")
```
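
As a light sanity check after loading, assuming Hezar's `Dataset` follows the standard PyTorch map-style dataset interface (an assumption, not stated on this card):

```python
from hezar.data import Dataset

dataset = Dataset.load("hezarai/parsynth-ocr-200k", split="train")

# Assumption: the object supports len(); the train split should report
# 179,999 samples according to the metadata above.
print(len(dataset))
```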