Dataset Card: Document Visual Retrieval Test (internal)

Dataset Overview

This dataset evaluates visual retrievers by testing their ability to match a text query to the relevant image. Each of its three examples contains a text query and an associated image, a scanned page from the foundational "Attention is All You Need" paper. A retrieval model evaluated on this dataset should accurately link each query with its corresponding page.

Dataset Details

  • Number of Examples: 3
  • Format: parquet, with one text query and one page image per example
  • Image Type: Scanned pages from the "Attention is All You Need" paper
  • Purpose: Testing the retrieval accuracy of visual retrievers on academic paper pages
  • Usage: The dataset is suited to testing retrieval models, especially those focused on cross-modal retrieval where a text query must be matched to a specific visual page (a minimal loading sketch follows this list).
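
As a minimal loading sketch, the snippet below reads the dataset with the Hugging Face datasets library and inspects its two columns. The repository ID is a placeholder, since the dataset is internal, and the column names query and image follow the description above.

```python
# Minimal sketch: load the dataset and inspect its columns.
# "your-org/document-visual-retrieval-test" is a placeholder repository ID;
# substitute the actual (internal) path of this dataset.
from datasets import load_dataset

ds = load_dataset("your-org/document-visual-retrieval-test", split="train")  # assuming a single default split

print(ds)  # expected: 3 rows with a "query" string column and an "image" column
for example in ds:
    print(example["query"])       # the text query
    print(example["image"].size)  # (width, height) of the decoded page image
```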

Intended Use

This dataset is intended for use in assessing and benchmarking the performance of visual retrieval models. Specifically, a high-performing model should be able to:

  • Understand the textual context provided in the query.
  • Retrieve, from a set of candidate page images, the image that corresponds to that specific query (a minimal baseline sketch follows this list).
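
One possible baseline, sketched below, scores every query against every page image with an off-the-shelf CLIP model from the transformers library and ranks the pages per query. CLIP and the checkpoint name are illustrative assumptions, not part of this dataset; a document-specific retriever would typically be a stronger choice for scanned pages.

```python
# Hedged baseline sketch: rank page images for each query with an off-the-shelf CLIP model.
# CLIP is an illustrative choice, not part of this dataset.
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def rank_pages(queries, images):
    """Return, for each query, page indices sorted from most to least similar."""
    inputs = processor(text=queries, images=images, return_tensors="pt",
                       padding=True, truncation=True)
    with torch.no_grad():
        outputs = model(**inputs)
    # logits_per_text has shape (num_queries, num_images)
    return outputs.logits_per_text.argsort(dim=-1, descending=True)

# Usage: queries = [ex["query"] for ex in ds]; images = [ex["image"] for ex in ds]
# rankings = rank_pages(queries, images); rankings[i][0] is the top-ranked page for query i.
```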

Example Queries

The three queries reflect key sections of the "Attention is All You Need" paper and require the retriever to connect each query to the page image containing the relevant information:

  • How does the positional encoding work?
  • How does the scaled dot attention product work?
  • How are the encoders and decoders connected in the Transformer model architecture?

Performance Evaluation

To assess the performance of a visual retriever on this dataset, standard metrics such as nDCG@k (normalized discounted cumulative gain), Recall@k, and MRR (mean reciprocal rank) are recommended. The dataset is small and intended as a preliminary benchmark for checking whether a retriever can reliably match highly specific text queries to their associated visual representations.
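
Because each query in this dataset has exactly one relevant page, these metrics reduce to simple functions of the relevant page's rank. The sketch below computes Recall@k, MRR, and nDCG@k from ranked page indices given as plain Python lists (for example, the output of the rank_pages helper sketched above, converted to lists).

```python
# Minimal sketch of Recall@k, MRR, and nDCG@k for the single-relevant-item case,
# computed from ranked page indices per query.
import math

def recall_at_k(rankings, relevant, k=1):
    """Fraction of queries whose relevant page appears in the top k results."""
    hits = sum(1 for ranks, rel in zip(rankings, relevant) if rel in ranks[:k])
    return hits / len(rankings)

def mrr(rankings, relevant):
    """Mean reciprocal rank of the single relevant page per query."""
    total = 0.0
    for ranks, rel in zip(rankings, relevant):
        rank = list(ranks).index(rel) + 1  # 1-based position of the relevant page
        total += 1.0 / rank
    return total / len(rankings)

def ndcg_at_k(rankings, relevant, k=3):
    """nDCG@k with binary relevance and one relevant page per query (ideal DCG = 1)."""
    total = 0.0
    for ranks, rel in zip(rankings, relevant):
        top_k = list(ranks[:k])
        if rel in top_k:
            total += 1.0 / math.log2(top_k.index(rel) + 2)
        # queries whose relevant page is outside the top k contribute 0
    return total / len(rankings)

# Usage with 3 queries whose relevant pages are 0, 1, and 2 respectively:
# rankings = [[0, 2, 1], [1, 0, 2], [2, 1, 0]]
# recall_at_k(rankings, [0, 1, 2], k=1)  -> 1.0
```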

Baseline Performance

A basic text-to-image matching model should aim for a Recall@1 score of 100%, given the straightforward nature of the task and the small dataset size.

Ethical Considerations

This dataset uses publicly available content from an academic paper (the "Attention is All You Need" paper). Users should ensure appropriate use in line with fair-use guidelines for academic and research purposes. No private or sensitive information is contained in this dataset.
