---
license: mit
---
# Visual Haystacks Dataset Card

## Dataset details
**Dataset type:** Visual Haystacks (VHs) is a benchmark dataset specifically designed to evaluate the capability of Large Multimodal Models (LMMs) to handle long-context visual information. It can also be viewed as the first vision-centric Needle-In-A-Haystack (NIAH) benchmark dataset. Please also download the COCO-2017 training and validation sets.
## Data Preparation and Benchmarking
- Download the VQA questions (a Python-API alternative is sketched after this list):

  ```bash
  huggingface-cli download --repo-type dataset tsunghanwu/visual_haystacks --local-dir dataset/VHs_qa
  ```
- Download the COCO 2017 dataset and organize it as follows, with the default root directory `./dataset/coco` (an optional layout check is sketched after this list):

  ```
  dataset/
  ├── coco
  │   ├── annotations
  │   ├── test2017
  │   └── val2017
  └── VHs_qa
      ├── single_needle
      │   ├── VHs_large
      │   └── VHs_small
      ├── multi_needle
      ├── multi_needle_2
      └── multi_needle_3
  ```
- Follow the instructions in https://github.com/visual-haystacks/vhs_benchmark to run the evaluation (a quick sanity check of the downloaded files is sketched below).
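
If you prefer the Python API over the CLI, the same question-file download can be done with `huggingface_hub.snapshot_download`. This is only a minimal sketch of an equivalent call, assuming `huggingface_hub` is installed; the CLI command above remains the documented route.

```python
# Sketch: fetch the VHs question files via the Hugging Face Hub Python API
# (equivalent to the huggingface-cli command above).
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="tsunghanwu/visual_haystacks",
    repo_type="dataset",
    local_dir="dataset/VHs_qa",
)
```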
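Before running the benchmark, you may want to confirm that the directories match the tree above. The following is just a convenience sketch (not part of the official benchmark code); the directory names are taken directly from the layout shown earlier.

```python
# Sketch: verify the expected VHs/COCO directory layout under ./dataset.
from pathlib import Path

ROOT = Path("dataset")
EXPECTED = [
    "coco/annotations", "coco/test2017", "coco/val2017",
    "VHs_qa/single_needle/VHs_large", "VHs_qa/single_needle/VHs_small",
    "VHs_qa/multi_needle", "VHs_qa/multi_needle_2", "VHs_qa/multi_needle_3",
]
missing = [p for p in EXPECTED if not (ROOT / p).is_dir()]
if missing:
    print("Missing directories:", ", ".join(missing))
else:
    print("Directory layout looks good.")
```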
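For the evaluation itself, follow the vhs_benchmark repository. If you only want to peek at the downloaded question files first, a generic sketch like the one below works; the `.json` extension and the file schema are assumptions here, so check the repository for the authoritative format.

```python
# Sketch: list the downloaded question files and print the top-level structure of one,
# without assuming a particular schema.
import json
from pathlib import Path

files = sorted(Path("dataset/VHs_qa").rglob("*.json"))
print(f"Found {len(files)} JSON files under dataset/VHs_qa")
if files:
    with open(files[0]) as f:
        data = json.load(f)
    print(files[0], "->", type(data).__name__, "with", len(data), "top-level entries")
```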
Please check out our project page for more information. You can also send questions or comments about the dataset to our GitHub repo.
This is the updated VHs dataset, enhanced for greater diversity and balance. The original dataset can be found at tsunghanwu/visual_haystacks_v0.
## Intended use
**Primary intended uses:** The primary use of VHs is research on large multimodal models and chatbots.

**Primary intended users:** The primary intended users are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.