---
license: apache-2.0
---

# image_gen_ocr_eval

**Author:** Peter J. Bevan

**Date:** 15/12/23

**github:** [https://github.com/pbevan1/image-gen-spelling-eval](https://github.com/pbevan1/image-gen-spelling-eval)

---
*Table 1: Normalised Levenshtein similarity scores between instructed text and text present in image (as identified by OCR)*

| Model | object | signage | natural | long | Overall |
| --- | --- | --- | --- | --- | --- |
| DALLE3 | 0.62 | 0.62 | 0.62 | 0.58 | 0.61 |
| DeepFloydIF | 0.57 | 0.56 | 0.66 | 0.39 | 0.54 |
| DALLE2 | 0.44 | 0.35 | 0.42 | 0.22 | 0.36 |
| SDXL | 0.3 | 0.33 | 0.4 | 0.21 | 0.31 |
| SD | 0.28 | 0.26 | 0.32 | 0.22 | 0.27 |
| PlayGroundV2 | 0.19 | 0.23 | 0.17 | 0.2 | 0.2 |
| Wuerstchen | 0.14 | 0.19 | 0.19 | 0.19 | 0.18 |
| Kandinsky | 0.13 | 0.2 | 0.18 | 0.17 | 0.17 |


---

This is a proof of concept that calculates the normalised Levenshtein similarity between prompted text and the text present in the generated image (as recognised by OCR).

To use this as a metric, we create a dataset of prompts, each instructing the model to include some text in the image. We also provide a ground-truth column containing only the instructed text. The below scorer is then run on the generated images, comparing the target text with the OCR-recognised text and outputting a score per image. These scores are averaged to give a benchmark score; a score of 1 indicates a perfect match to the instructed text.
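The scoring logic described above can be sketched as follows. This is a minimal illustration, not the repository's actual scorer: it assumes the OCR text has already been extracted from the image, and the function names (`levenshtein`, `normalised_similarity`) are hypothetical.

```python
def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance between two strings.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        cur = [i]
        for j, cb in enumerate(b, start=1):
            cur.append(min(
                prev[j] + 1,              # deletion
                cur[j - 1] + 1,           # insertion
                prev[j - 1] + (ca != cb)  # substitution (0 if chars match)
            ))
        prev = cur
    return prev[-1]

def normalised_similarity(target: str, ocr_text: str) -> float:
    # 1.0 = OCR text exactly matches the instructed text; 0.0 = no overlap.
    if not target and not ocr_text:
        return 1.0
    dist = levenshtein(target, ocr_text)
    return 1.0 - dist / max(len(target), len(ocr_text))
```

The per-image scores would then be averaged across the dataset to produce the benchmark numbers shown in Table 1.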

You can find the dataset at https://huggingface.co/datasets/pbevan11/image_gen_ocr_evaluation_data

Since this metric looks solely at the text within generated images and not at overall image quality, it should be used alongside other benchmarks such as those in https://karine-h.github.io/T2I-CompBench/.

---

![Image generation model spelling comparison](model_comparison.png)


```bibtex
@misc {peter_j._bevan_2024,
	author       = { {Peter J. Bevan} },
	title        = { image_gen_ocr_evaluation_data (Revision 6182779) },
	year         = 2024,
	url          = { https://huggingface.co/datasets/pbevan11/image_gen_ocr_evaluation_data },
	doi          = { 10.57967/hf/1944 },
	publisher    = { Hugging Face }
}
```