---
license: mit
tags:
- nlp
---

# **DP-Bench: Document Parsing Benchmark**

<div align="center">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/6524ab1e27d1f3d84ad07705/Q7CC2z4CAJzZ4-CGaSnBO.png" width="800px">
</div>


Document parsing refers to the process of converting complex documents, such as PDFs and scanned images, into structured text formats like HTML and Markdown.
It is especially useful as a preprocessor for [RAG](https://en.wikipedia.org/wiki/Retrieval-augmented_generation) systems, as it preserves key structural information from visually rich documents.

While various parsers are available on the market, there is currently no standard evaluation metric to assess their performance.
To address this gap, we propose a set of new evaluation metrics along with a benchmark dataset designed to measure parser performance.

## Metrics
We propose assessing parser performance with three key metrics: 
NID for element detection and serialization, and TEDS and TEDS-S for table structure recognition.

### Element detection and serialization

**NID (Normalized Indel Distance).**
NID evaluates how well a parser detects and serializes document elements according to human reading order.
NID is similar to the [normalized edit distance](https://en.wikipedia.org/wiki/Levenshtein_distance) metric but excludes substitutions during evaluation, making it more sensitive to length differences between strings.

The NID metric is computed as follows:

$$
NID = 1 - \frac{\text{distance}}{\text{len(reference)} + \text{len(prediction)}}
$$

Here, the distance counts the character-level insertions and deletions needed to turn the predicted text into the reference text. 
The normalized term ranges from 0 to 1, where 0 indicates perfect alignment and 1 denotes complete dissimilarity.
A higher NID score reflects better performance in both recognizing and ordering the text within the document's detected layout regions.
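
As a minimal sketch (not the benchmark's `evaluate.py` implementation), the NID formula above can be reproduced in plain Python using the identity that the insertion/deletion (Indel) distance equals `len(a) + len(b) - 2 * LCS(a, b)`:

```
def lcs_length(a: str, b: str) -> int:
    """Length of the longest common subsequence, via dynamic programming."""
    prev = [0] * (len(b) + 1)
    for ca in a:
        curr = [0]
        for j, cb in enumerate(b, start=1):
            curr.append(prev[j - 1] + 1 if ca == cb else max(prev[j], curr[j - 1]))
        prev = curr
    return prev[-1]


def nid(reference: str, prediction: str) -> float:
    """NID = 1 - IndelDistance / (len(reference) + len(prediction))."""
    total = len(reference) + len(prediction)
    if total == 0:
        return 1.0  # two empty strings are trivially identical
    indel = total - 2 * lcs_length(reference, prediction)  # insertions + deletions only
    return 1.0 - indel / total


print(nid("document parsing", "document parsing"))  # 1.0 (perfect match)
print(nid("abc", "xyz"))                            # 0.0 (no characters in common)
```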

### Table structure recognition

Tables are one of the most complex elements in documents, often presenting both structural and content-related challenges. 
Yet, during NID evaluation, table elements (as well as figures and charts) are excluded, allowing the metric to focus on text elements such as paragraphs, headings, indexes, and footnotes. To specifically evaluate table structure and content extraction, we use the [TEDS](https://arxiv.org/abs/1911.10683) and [TEDS-S](https://arxiv.org/abs/1911.10683) metrics.

The [traditional metric](https://ieeexplore.ieee.org/document/1227792) fails to account for the hierarchical nature of tables (rows, columns, cells), but TEDS/TEDS-S measures the similarity between the predicted and ground-truth tables by comparing both structural layout and content, offering a more comprehensive evaluation. 

**TEDS (Tree Edit Distance-based Similarity).**
The TEDS metric is computed as follows:

$$
TEDS(T_a, T_b) = 1 - \frac{EditDist(T_a, T_b)}{\max(|T_a|, |T_b|)}
$$

The equation evaluates the similarity between two tables by modeling them as tree structures \\(T_a\\) and \\(T_b\\).
This metric evaluates how accurately the table structure is predicted, including the content of each cell.
A higher TEDS score indicates better overall performance in capturing both the table layout and the textual content.
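
For illustration only, the following toy example applies the TEDS formula to two hand-built table trees using the [zss](https://github.com/timtadh/zhang-shasha) library's Zhang-Shasha tree edit distance; the benchmark itself parses HTML tables and also compares cell content, which this sketch omits:

```
from zss import Node, simple_distance


def tree_size(node) -> int:
    """Number of nodes in the tree (|T| in the formula above)."""
    return 1 + sum(tree_size(child) for child in node.children)


def teds(t_a, t_b) -> float:
    """TEDS = 1 - EditDist(T_a, T_b) / max(|T_a|, |T_b|)."""
    return 1.0 - simple_distance(t_a, t_b) / max(tree_size(t_a), tree_size(t_b))


# Reference table: <table><tr><td/><td/></tr></table>
ref = Node("table", [Node("tr", [Node("td"), Node("td")])])
# Prediction that misses one cell.
pred = Node("table", [Node("tr", [Node("td")])])

print(teds(ref, pred))  # 0.75: one deleted node out of a maximum tree size of 4
```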

**TEDS-S (Tree Edit Distance-based Similarity-Struct).**
TEDS-S measures the structural similarity between the predicted and reference tables.
While the metric formulation is identical to TEDS, it uses modified tree representations, denoted as \\(T_a'\\) and \\(T_b'\\), where the nodes correspond solely to the table structure, omitting any cell-level content.
This allows TEDS-S to concentrate on assessing the structural similarity of the tables, such as row and column alignment, without being influenced by the textual data contained within the cells.

## Benchmark dataset

### Document sources
The benchmark dataset is gathered from three sources: 90 samples from the Library of Congress; 90 samples from Open Educational Resources; 
and 20 samples from Upstage's internal documents.
Together, these sources cover both general and specialized document types.

<div style="width: 500px;">
  
| Sources                    | Count|
|:---------------------------|:----:|
| Library of Congress        | 90   |
| Open educational resources | 90   |
| Upstage                    | 20   |

</div>

### Layout elements

While works like [ReadingBank](https://github.com/doc-analysis/ReadingBank) often focus solely on text conversion in document parsing, we have taken a more detailed approach by dividing the document into specific elements, with a particular emphasis on table performance. 

This benchmark dataset was created by extracting pages with various layout elements from multiple types of documents. The layout elements consist of 12 element types: **Table, Paragraph, Figure, Chart, Header, Footer, Caption, Equation, Heading1, List, Index, Footnote**. This diverse set of layout elements ensures that our evaluation covers a wide range of document structures and complexities, providing a comprehensive assessment of document parsing capabilities.

Note that only Heading1 is included among various heading sizes because it represents the main structural divisions in most documents, serving as the primary section title. 
This high-level segmentation is sufficient for assessing the core structure without adding unnecessary complexity. 
Detailed heading levels like Heading2 and Heading3 are omitted to keep the evaluation focused and manageable.

<div style="width: 500px;">
  
| Category   | Count |
|:-----------|------:|
| Paragraph  | 804   |
| Heading1   | 194   |
| Footer     | 168   |
| Caption    | 154   |
| Header     | 101   |
| List       | 91    |
| Chart      | 67    |
| Footnote   | 63    |
| Equation   | 58    |
| Figure     | 57    |
| Table      | 55    |
| Index      | 10    |

</div>

### Dataset format

The dataset is in JSON format, representing elements extracted from a PDF file, with each element defined by its position, layout class, and content. 
The **category** field represents various layout classes, including but not limited to text regions, headings, footers, captions, tables, and more.
The **content** field provides three representations: **text** contains the plain-text content; **html** encodes layout regions with equations in LaTeX and tables in HTML; and **markdown** distinguishes headings such as Heading1 from other text-based regions such as paragraphs, captions, and footers.
Each element includes coordinates (x, y), a unique ID, and the page number it appears on. The dataset’s structure supports flexible representation of layout classes and content formats for document parsing.

```
{
    "01030000000001.pdf": {
        "elements": [
            {
                "coordinates": [
                    {
                        "x": 170.9176246670229,
                        "y": 102.3493458064781
                    },
                    {
                        "x": 208.5023846755278,
                        "y": 102.3493458064781
                    },
                    {
                        "x": 208.5023846755278,
                        "y": 120.6598699131856
                    },
                    {
                        "x": 170.9176246670229,
                        "y": 120.6598699131856
                    }
                ],
                "category": "Header",
                "id": 0,
                "page": 1,
                "content": {
                    "text": "314",
                    "html": "",
                    "markdown": ""
                }
            },
            ...
    ...
```
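
As a quick, hypothetical example of consuming this format, the sketch below loads an annotation file and reduces each element's polygon to an axis-aligned bounding box (the path `dataset/reference.json` is an assumption; substitute the actual annotation file in the `dataset` folder):

```
import json

# Hypothetical path; point this at the actual annotation file in the dataset folder.
with open("dataset/reference.json", encoding="utf-8") as f:
    annotations = json.load(f)

for doc_name, doc in annotations.items():
    for element in doc["elements"]:
        # Each element stores a four-point polygon; collapse it to (x1, y1, x2, y2).
        xs = [point["x"] for point in element["coordinates"]]
        ys = [point["y"] for point in element["coordinates"]]
        bbox = (min(xs), min(ys), max(xs), max(ys))
        print(doc_name, element["page"], element["category"], bbox,
              element["content"]["text"][:40])
```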

<div style="width: 800px;">
  
### Document domains
| Domain                               | Subdomain               | Count |
|:-------------------------------------|:------------------------|------:|
| Social Sciences                      | Economics               | 26    |
| Social Sciences                      | Political Science       | 18    |
| Social Sciences                      | Sociology               | 16    |
| Social Sciences                      | Law                     | 12    |
| Social Sciences                      | Cultural Anthropology   | 11    |
| Social Sciences                      | Education               | 8     |
| Social Sciences                      | Psychology              | 4     |
| Natural Sciences                     | Environmental Science   | 26    |
| Natural Sciences                     | Biology                 | 10    |
| Natural Sciences                     | Astronomy               | 4     |
| Technology                           | Technology              | 33    |
| Mathematics and Information Sciences | Mathematics             | 13    |
| Mathematics and Information Sciences | Informatics             | 9     |
| Mathematics and Information Sciences | Computer Science        | 8     |
| Mathematics and Information Sciences | Statistics              | 2     |

</div>

## Usage

### Setup

Before setting up the environment, make sure to [install Git LFS](https://git-lfs.com/), which is required for handling large files.
Once installed, you can clone the repository and install the necessary dependencies by running the following commands:

```
$ git clone https://huggingface.co/datasets/upstage/dp-bench.git
$ cd dp-bench
$ pip install -r requirements.txt
```
The repository includes the scripts needed for inference and evaluation, as described in the following sections.

### Inference
We offer inference scripts that let you request results from various document parsing services.
For more details, refer to this [README](https://huggingface.co/datasets/upstage/dp-bench/blob/main/scripts/README.md).

### Evaluation
The benchmark dataset can be found in the `dataset` folder. 
It contains a wide range of document layouts, from text-heavy pages to complex tables, enabling a thorough evaluation of the parser’s performance. 
The dataset comes with annotations for layout elements such as paragraphs, headings, and tables.


#### Element detection and serialization evaluation
This evaluation computes the NID metric to assess how accurately the document text is recognized, taking into account the structure and reading order of the detected layout.
To evaluate the document layout results, run the following command:

```
$ python evaluate.py \
  --label_path <path to the reference json file> \
  --pred_path <path to the predicted json file> \
  --mode layout
```


#### Table structure recognition evaluation
This will compute TEDS-S (structural accuracy) and TEDS (structural and textual accuracy).
To evaluate table recognition performance, use the following command:

```
$ python evaluate.py \
  --label_path <path to the reference json file> \
  --pred_path <path to the predicted json file> \
  --mode table
```

## Leaderboard
<div style="width: 800px;">
  
| Source               | Request date | TEDS       | TEDS-S    | NID ⬇️      |  Avg. Time  |
|:---------------------|:------------:|-----------:|----------:|------------:|------------:|
| upstage              | 2024-09-26   | **91.01**      | **93.47**     | **96.27**       |  **3.79**       |
| aws                  | 2024-09-26   | 86.39      | 90.22     | 95.94       |  14.47      |
| llamaparse           | 2024-09-26   | 68.90      | 70.86     | 90.92       |  4.14       |
| unstructured         | 2024-09-26   | 64.49      | 69.90     | 90.42       |  13.14      |
| google               | 2024-09-26   | 62.44      | 68.75     | 90.09       |  5.85       |
| microsoft            | 2024-09-26   | 85.54      | 89.07     | 87.03       |  4.44       |

</div>