---
license: mit
---
# Visual Haystacks Dataset Card

## Dataset details
**Dataset type:** Visual Haystacks (VHs) is a benchmark dataset specifically designed to evaluate the capability of Large Multimodal Models (LMMs) to handle long-context visual information. It can also be viewed as the first vision-centric Needle-In-A-Haystack (NIAH) benchmark dataset. Please also download COCO-2017's validation and test sets.
## Data Preparation and Benchmarking
- Download the VQA questions:
  ```shell
  huggingface-cli download --repo-type dataset tsunghanwu/visual_haystacks --local-dir dataset/VHs_qa
  ```
- Download the COCO 2017 dataset and organize it as follows, with the default root directory ./dataset/coco:
  ```
  dataset/
  ├── coco
  │   ├── annotations
  │   ├── test2017
  │   └── val2017
  └── VHs_qa
      ├── VHs_full
      │   ├── multi_needle
      │   └── single_needle
      └── VHs_small
          ├── multi_needle
          └── single_needle
  ```
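Before running the evaluation, it can help to verify that the directory layout matches the tree above. The snippet below is a minimal sketch (the `check_layout` helper and the `dataset` root path are illustrative, not part of the official tooling):

```python
import os

# Sub-directories expected under the dataset root, per the tree above.
EXPECTED_DIRS = [
    "coco/annotations",
    "coco/test2017",
    "coco/val2017",
    "VHs_qa/VHs_full/multi_needle",
    "VHs_qa/VHs_full/single_needle",
    "VHs_qa/VHs_small/multi_needle",
    "VHs_qa/VHs_small/single_needle",
]

def check_layout(root="dataset"):
    """Return the expected sub-directories that are missing under `root`."""
    return [d for d in EXPECTED_DIRS if not os.path.isdir(os.path.join(root, d))]

missing = check_layout()
if missing:
    print("Missing directories:", missing)
```

An empty result means the layout is ready for the benchmark scripts.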
- Follow the instructions in https://github.com/visual-haystacks/vhs_benchmark to run the evaluation.
- Please check out our project page for more information. You can also send questions or comments about the dataset to our GitHub repo.
## Intended use
**Primary intended uses:** The primary use of VHs is research on large multimodal models and chatbots.
**Primary intended users:** The primary intended users of the dataset are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.