---
tags:
- generated_from_trainer
- TrOCR
model-index:
- name: trocr-large-printed-e13b_tesseract_MICR_ocr
results: []
license: bsd-3-clause
language:
- en
metrics:
- cer
---
# trocr-large-printed-e13b_tesseract_MICR_ocr
This model is a fine-tuned version of [microsoft/trocr-large-printed](https://huggingface.co/microsoft/trocr-large-printed).
It achieves the following results on the evaluation set:
- Loss: 0.2432
- CER: 0.0036
## Model description
This checkpoint fine-tunes TrOCR-large-printed to transcribe E13B MICR lines (the magnetic-ink character font printed along the bottom of bank checks). For details on how it was created, see the training notebook: https://github.com/DunnBC22/Vision_Audio_and_Multimodal_Projects/blob/main/Optical%20Character%20Recognition%20(OCR)/Tesseract%20MICR%20(E15B%20Dataset)/TrOCR-e13b%20-%20tesseractMICR.ipynb
## Intended uses & limitations
This model is intended to demonstrate my ability to solve a complex problem (here, OCR of E13B MICR check lines) with machine learning.
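The checkpoint loads like any other TrOCR model. Below is a minimal inference sketch; the Hub repo id and the image path are assumptions (adjust them to wherever this checkpoint and your cropped MICR-line image actually live):

```python
from PIL import Image
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

# Assumed repo id, matching this card's name; adjust if hosted elsewhere.
model_id = "DunnBC22/trocr-large-printed-e13b_tesseract_MICR_ocr"

processor = TrOCRProcessor.from_pretrained(model_id)
model = VisionEncoderDecoderModel.from_pretrained(model_id)

# Load a cropped image of a single MICR line (path is a placeholder).
image = Image.open("micr_line.png").convert("RGB")

# Preprocess to pixel values and generate the transcription.
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values)
text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(text)
```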
## Training and evaluation data
Dataset Source: https://github.com/DoubangoTelecom/tesseractMICR/tree/master/datasets/e13b
__Histogram of Label Character Lengths__
![Histogram of Label Character Lengths](https://raw.githubusercontent.com/DunnBC22/Vision_Audio_and_Multimodal_Projects/main/Optical%20Character%20Recognition%20(OCR)/Tesseract%20MICR%20(E15B%20Dataset)/Images/Histogram%20of%20Label%20Character%20Length.png)
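The linked notebook contains the full preprocessing pipeline. As a rough orientation only, TrOCR fine-tuning commonly pairs each image with its label string through a small PyTorch dataset like the sketch below; the `file_name`/`text` DataFrame columns are assumptions, not necessarily how the linked repo is laid out:

```python
import pandas as pd
import torch
from PIL import Image
from torch.utils.data import Dataset

class MICRDataset(Dataset):
    """Pairs MICR line images with label strings for TrOCR fine-tuning."""

    def __init__(self, root_dir, df: pd.DataFrame, processor, max_target_length=64):
        self.root_dir = root_dir
        self.df = df  # assumed columns: file_name, text
        self.processor = processor
        self.max_target_length = max_target_length

    def __len__(self):
        return len(self.df)

    def __getitem__(self, idx):
        row = self.df.iloc[idx]
        image = Image.open(f"{self.root_dir}/{row['file_name']}").convert("RGB")
        pixel_values = self.processor(images=image, return_tensors="pt").pixel_values

        labels = self.processor.tokenizer(
            row["text"],
            padding="max_length",
            max_length=self.max_target_length,
        ).input_ids
        # Replace pad token ids with -100 so they are ignored by the loss.
        labels = [
            tok if tok != self.processor.tokenizer.pad_token_id else -100
            for tok in labels
        ]

        return {"pixel_values": pixel_values.squeeze(0), "labels": torch.tensor(labels)}
```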
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged reconstruction as `Seq2SeqTrainingArguments` follows the list):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
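The settings above expressed as `Seq2SeqTrainingArguments`; `output_dir`, the evaluation cadence, and `predict_with_generate` are assumptions not stated in this card (the Adam betas and epsilon listed are the Transformers defaults):

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./trocr-large-printed-e13b_tesseract_MICR_ocr",  # assumption
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
    evaluation_strategy="epoch",  # assumption; matches the per-epoch results below
    predict_with_generate=True,   # needed to compute CER from generated text
)
```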
### Training results
| Training Loss | Epoch | Step | Validation Loss | CER |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.486 | 1.0 | 841 | 0.5168 | 0.0428 |
| 0.2187 | 2.0 | 1682 | 0.2432 | 0.0036 |
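CER is the character error rate: character-level edit distance divided by the number of reference characters, so the final 0.0036 corresponds to roughly 4 character mistakes per 1,000. A minimal sketch of computing it with the Hugging Face `evaluate` library (the example strings are hypothetical):

```python
import evaluate  # pip install evaluate jiwer

cer_metric = evaluate.load("cer")

predictions = ["0123456789"]  # hypothetical model output
references = ["0123456709"]   # hypothetical ground truth
print(cer_metric.compute(predictions=predictions, references=references))  # 0.1
```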
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.1
- Datasets 2.13.1
- Tokenizers 0.13.3