What units did you use for the coordinates in reference.json?

#1 opened by Icycream

The "coordinates" in reference.json are given as follows:
"coordinates": [
{
"x": 777.662857668408,
"y": 93.92663804866078
},
{
"x": 1041.2393444021122,
"y": 93.92663804866078
},
{
"x": 1041.2393444021122,
"y": 124.9356364879201
},
{
"x": 777.662857668408,
"y": 124.9356364879201
}
]

Are they in points or pixels? If they are in pixels, what DPI did you use? If they are in points, what scaling or transformation did you use for the PDFs?
The official documentation (https://developers.upstage.ai/docs/apis/document-parse) does not help, as it specifies that the coordinates are relative coordinates, which is clearly not the case here.

upstage org

Hi, thank you for opening the first discussion!
I've updated the reference.json file so that the coordinate information is now given in relative coordinates. This should help visualize the file with the correct alignment.

To convert PDF files to images, we used the pdf2image library with its default DPI setting of 200.
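For anyone visualizing the boxes, here is a minimal sketch of that conversion, assuming the coordinates are relative values in [0, 1] as described above. The file name and the element indexing are hypothetical placeholders, since the exact schema of reference.json is not shown in this thread:

```python
# Minimal sketch (assumptions noted above): render a page with pdf2image at
# the same 200 DPI mentioned in the reply, then scale the relative
# coordinates by the image size to get pixel coordinates.
import json

from pdf2image import convert_from_path

with open("reference.json") as f:
    reference = json.load(f)

# Render the first page of one of the benchmark PDFs at 200 DPI.
page = convert_from_path("01030000000110.pdf", dpi=200)[0]
width, height = page.size

element = reference[0]  # hypothetical: assumes a top-level list of elements
pixel_polygon = [
    (pt["x"] * width, pt["y"] * height) for pt in element["coordinates"]
]
print(pixel_polygon)
```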

Thank you for the kind (and fast) reply!
I have another question, if you don't mind.

As I am interested in table-parsing models, I have been going through the samples in this dataset that contain a table, one by one.
While doing so, I found a few samples that I want to bring up.

  1. There are a couple of PDFs with tables that cannot be parsed; that is, the text inside the table cells cannot be obtained from the PDF itself. This might be because the table is embedded as an image (.jpg, .png) inside the PDF file. Some table-parsing models do not have a built-in OCR head; instead, they rely on text parsed from the PDF.
    My question is: did you intend this benchmark dataset to be used only for models that do not rely on PDF-parsed text, or is it for all models, including ones that rely on PDF parsing?
    Below are the PDF file names and IDs for tables that cannot be parsed with pymupdf (a minimal reproduction is sketched after this list).
    PDF Name: 01030000000110.pdf, ID: 7
    PDF Name: 01030000000122.pdf, ID: 3

  2. There are a few table ground truths in reference.json that differ from the table image. Below are the PDF file names and IDs for these tables.
    PDF Name: 01030000000170.pdf, ID: 8
    PDF Name: 01030000000170.pdf, ID: 1
    PDF Name: 01030000000197.pdf, ID: 3 (this one has an incorrect bounding box)
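For reference, the check behind item 1 can be reproduced roughly as follows with PyMuPDF; the page index and table rectangle are hypothetical placeholders:

```python
# Rough sketch: extract text from a table's region with PyMuPDF. An empty
# result suggests the table is embedded as an image rather than as text.
import fitz  # PyMuPDF

doc = fitz.open("01030000000110.pdf")
page = doc[0]  # hypothetical: the page containing the table with ID 7

# Hypothetical bounding box of the table, in PDF points.
table_rect = fitz.Rect(70, 90, 520, 380)
text = page.get_text("text", clip=table_rect)

if not text.strip():
    print("No extractable text: the table is likely an embedded image.")
else:
    print(text)
```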

Could you kindly let me know whether there are any plans to fix these errors?

I am deeply grateful for your decision to make the benchmark publicly available.
Thank you.

upstage org
  1. The benchmark is intended to evaluate the performance of document parsing products, including both model-based solutions and traditional PDF parsing libraries. We understand that some layout elements, such as tables embedded as images, might not be parsed correctly by certain PDF parsing tools that rely solely on text extraction. However, regardless of whether the table is represented as an image within the document, we expect the document parsing product to identify it as a table. This benchmark is designed to be applicable to all models, whether or not they rely on pre-parsed PDF text.

  2. Thank you for bringing this to our attention. We've identified that a few PDF files were not properly synchronized with the reference.json file. We have since updated the benchmark PDF files and verified that the issues you reported with the following files are now resolved, with the table data correctly aligned to the bounding boxes in reference.json:
    01030000000170.pdf, ID: 8
    01030000000170.pdf, ID: 1
    01030000000197.pdf, ID: 3 (incorrect bounding box)
    Please let us know if you encounter any further issues.
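As a quick way to double-check the fix, one can crop the table region out of the rendered page and inspect it against the ground truth. Below is a minimal sketch, assuming relative coordinates and the 200 DPI rendering described earlier; the element index is a hypothetical placeholder:

```python
# Minimal verification sketch: render the page with pdf2image, scale the
# relative coordinates to pixels, and crop the table region for inspection.
import json

from pdf2image import convert_from_path

with open("reference.json") as f:
    reference = json.load(f)

page = convert_from_path("01030000000197.pdf", dpi=200)[0]
w, h = page.size

element = reference[0]  # hypothetical: the element reported with ID 3
xs = [pt["x"] * w for pt in element["coordinates"]]
ys = [pt["y"] * h for pt in element["coordinates"]]

# Crop the axis-aligned bounding box of the polygon and save it for review.
box = (int(min(xs)), int(min(ys)), int(max(xs)), int(max(ys)))
page.crop(box).save("table_crop.png")
```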
