
Dataset Description

CapMIT1003 is a dataset of captions and click-contingent image explorations collected during captioning tasks. It is based on the same stimuli as the well-known MIT1003 benchmark, for which eye-tracking data under free-viewing conditions is already available, offering a promising opportunity to study human attention under both tasks concurrently.

Usage

You can load CapMIT1003 as follows:

from datasets import load_dataset

capmit1003_dataset = load_dataset("azugarini/CapMIT1003", trust_remote_code=True)
print(capmit1003_dataset["train"][0])  # print the first example
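
The exact column names are defined by the dataset's loading script, so rather than hard-coding field names here, a minimal sketch for inspecting the schema (continuing from the snippet above) is:

train_split = capmit1003_dataset["train"]
print(train_split.column_names)  # list the available fields
print(train_split.num_rows)      # number of examples in the split

# Print the first few examples; the fields (e.g. caption text or click
# coordinates) are whatever the loading script provides, so we iterate
# over each example's keys instead of assuming a particular schema.
for example in train_split.select(range(3)):
    for key, value in example.items():
        print(f"{key}: {value}")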

Citation Information

If you use this dataset in your research or work, please cite the following paper:

@article{zanca2023contrastive,
  title={Contrastive Language-Image Pretrained Models are Zero-Shot Human Scanpath Predictors},
  author={Zanca, Dario and Zugarini, Andrea and Dietz, Simon and Altstidl, Thomas R and Ndjeuha, Mark A Turban and Schwinn, Leo and Eskofier, Bjoern},
  journal={arXiv preprint arXiv:2305.12380},
  year={2023}
}