Burmese OCR data
This repository contains a dataset of Burmese text images and their corresponding ground truth text generated from real-life documents, suitable for training Optical Character Recognition (OCR) models.
Processing
The data was curated from the Burma Library archive, which collects and preserves government and NGO documents. These documents were processed using Google Document AI to extract text and bounding boxes. Images of the identified text were then cropped and organized in this dataset.
Organization
Data is stored in the "./data" directory, which contains two zip files:
- cleaned.zip: This file contains 9,065 images that have been manually validated by two individuals. While efforts have been made to ensure accuracy, some errors may still be present.
- uncleaned.zip: This file contains 162,863 raw images cropped directly from the output of Google Document AI, without any manual review or correction.
Each zip file contains the directories:
- pic: This directory contains single-line text images in .png format, cropped from each document.
- gt: This directory contains ground truth text files with the ".gt.txt" extension; each file shares its base name with the corresponding image.
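The following is a minimal sketch of one way to pair the cropped images with their ground truth text after extraction. It assumes that pic/ and gt/ sit at the top level of each archive, as described above; the load_pairs helper and its paths are illustrative, not part of the dataset.

```python
import zipfile
from pathlib import Path

def load_pairs(zip_path="data/cleaned.zip", out_dir="data/cleaned"):
    """Hypothetical helper: extract an archive and pair each image in pic/
    with the ground truth file of the same base name in gt/."""
    out_dir = Path(out_dir)
    if not out_dir.exists():
        with zipfile.ZipFile(zip_path) as zf:
            zf.extractall(out_dir)

    pairs = []
    for img_path in sorted((out_dir / "pic").glob("*.png")):
        gt_path = out_dir / "gt" / (img_path.stem + ".gt.txt")
        if gt_path.exists():
            pairs.append((img_path, gt_path.read_text(encoding="utf-8").strip()))
    return pairs

pairs = load_pairs()
print(f"{len(pairs)} image/text pairs")
```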
Specification
- All images are in .png format and all ground truth files use the .gt.txt extension.
- Images are stored as RGB; you may need to convert them to the color encoding your pipeline expects, such as grayscale (see the sketch after this list).
- The character set is not limited to Burmese: a line may contain any character supported by Google Document AI, as long as the source document contains Burmese script.
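For example, here is a minimal Pillow sketch for converting the RGB crops to single-channel grayscale. Treating grayscale as the target encoding is an assumption about your training pipeline, not a property of the dataset, and the data/cleaned/gray output directory is made up for illustration.

```python
from pathlib import Path
from PIL import Image

src = Path("data/cleaned/pic")    # extracted images, as described above
dst = Path("data/cleaned/gray")   # illustrative output directory
dst.mkdir(parents=True, exist_ok=True)

# Convert each RGB crop to single-channel grayscale ("L" mode in Pillow).
for png in src.glob("*.png"):
    Image.open(png).convert("L").save(dst / png.name)
```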
Tesseract
If you plan to use this dataset to train Tesseract, please note the following:
- Avoid using the tesstrain package. A known bug in tesstrain prevents it from correctly decoding the virama in Burmese script.
- Use the lstmtraining tool directly from Tesseract. This will ensure proper handling of Burmese script and avoid the tesstrain bug.
- Curriculum training is recommended: start with synthetic data, then move to the uncleaned data, and finish with the cleaned data, using an appropriate train/test split at each stage.
- Tested with Tesseract version 5.4.1.
Example:
lstmtraining \
  --continue_from ./data/mya_plus/checkpoints/mya_checkpoint \
  --model_output ./data/mya/mya \
  --train_listfile ./data/mya/all-lstmf \
  --max_iterations 100000 \
  --learning_rate 0.0001 \
  --target_error_rate 0.01 \
  --traineddata ./usr/share/tessdata/mya.traineddata \
  --net_mode 192
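The --train_listfile argument expects a plain-text file with one .lstmf sample path per line. Below is a minimal sketch for building the ./data/mya/all-lstmf list used above, assuming you have already generated .lstmf files from the cropped images (for example via Tesseract's lstm.train output); the eval-lstmf file name and the 90/10 split are assumptions, not part of this dataset.

```python
import random
from pathlib import Path

# Collect all .lstmf samples and split them into train/eval lists for lstmtraining.
lstmf_files = sorted(str(p) for p in Path("data/mya").glob("**/*.lstmf"))
random.seed(0)
random.shuffle(lstmf_files)

split = int(len(lstmf_files) * 0.9)  # assumed 90/10 train/eval ratio
Path("data/mya/all-lstmf").write_text("\n".join(lstmf_files[:split]) + "\n")
Path("data/mya/eval-lstmf").write_text("\n".join(lstmf_files[split:]) + "\n")
```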
License
The data in this repository is in the public domain. You are free to use, modify, and distribute it for any purpose. Citations are welcome; please use the following format:
@misc{alexanderbeatson,
  author    = {{Alexander Beatson}},
  title     = {Burmese OCR data},
  year      = {2024},
  url       = {https://huggingface.co/datasets/alexbeatson/burmese_ocr_data},
  doi       = {10.57967/hf/3361},
  publisher = {Hugging Face},
  note      = {ORCID: 0000-0002-1829-5965}
}
Notes
- The dataset contains a variety of fonts, document styles, and image qualities, reflecting the diversity of the source documents.
- The uncleaned files have received some automatic cleanup, including removal of certain ethnic-name keywords that Google Document AI cannot OCR and of samples affected by a known multi-line issue.
- While the cleaned data has been validated, it is recommended to perform your own quality checks and consider further cleaning or preprocessing steps as needed for your specific OCR task.
- Images were cropped only from documents that contain Burmese characters somewhere; an individual image therefore may not contain Burmese characters, even though its source document does.
- Contributions to improve the dataset, such as additional validation, error correction, or new data sources, are welcome.
- Some images have a non-horizontal orientation (e.g. tilted or circular text); you may need to filter these out, for example with OpenCV (see the sketch after this list).
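Here is a rough OpenCV heuristic for flagging such crops, assuming tilted or circular text shows up as a squarish image or as a strongly rotated box around the dark pixels. The aspect-ratio and skew thresholds are illustrative, not tuned on this dataset.

```python
import cv2

def looks_horizontal(image_path, min_aspect=1.5, max_skew_deg=10.0):
    """Rough filter: keep crops that are wider than tall and whose dark
    (text) pixels do not form a strongly rotated bounding box."""
    gray = cv2.imread(str(image_path), cv2.IMREAD_GRAYSCALE)
    if gray is None:
        return False
    h, w = gray.shape
    if w / h < min_aspect:  # circular or vertical text tends to be squarish or tall
        return False
    # Text is dark on a light background, so invert before Otsu thresholding.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    coords = cv2.findNonZero(binary)
    if coords is None:
        return False
    angle = cv2.minAreaRect(coords)[2]
    # Normalize the angle to its deviation from the nearest axis so the check
    # works with both pre- and post-4.5 OpenCV angle conventions.
    skew = min(abs(angle) % 90, 90 - abs(angle) % 90)
    return skew <= max_skew_deg
```

Crops rejected by a check like this can then be reviewed manually or dropped before training.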