---
license: mit
---

Visual Haystacks Dataset Card

Dataset details

  1. Dataset type: Visual Haystacks (VHs) is a benchmark dataset specifically designed to evaluate the capability of Large Multimodal Models (LMMs) to handle long-context visual information. It can also be viewed as the first vision-centric Needle-In-A-Haystack (NIAH) benchmark dataset. Please also download the COCO-2017 training and validation sets.

  2. Data Preparation and Benchmarking

  • Download the VQA questions (a Python alternative is sketched after this list):
    huggingface-cli download --repo-type dataset tsunghanwu/visual_haystacks --local-dir dataset/VHs_qa
    
  • Download the COCO 2017 dataset and organize it as follows, with the default root directory ./dataset/coco (a quick layout check is sketched after this list):
    dataset/
    ├── coco
    │   ├── annotations
    │   ├── test2017
    │   └── val2017
    └── VHs_qa
        ├── single_needle
        │   ├── VHs_large
        │   └── VHs_small
        └── multi_needle
            ├── multi_needle_2
            └── multi_needle_3
    
  • Follow the instructions in https://github.com/visual-haystacks/vhs_benchmark to run the evaluation.
  3. Please check out our project page for more information. You can also send questions or comments about the dataset to our GitHub repo.

  4. This is the updated VHs dataset, enhanced for greater diversity and balance. The original dataset can be found at tsunghanwu/visual_haystacks_v0.

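As an alternative to the huggingface-cli command above, the same question files can be fetched with the `huggingface_hub` Python API. This is a minimal, unofficial sketch; the target directory simply mirrors the layout used above.

```python
# Minimal sketch (not part of the official instructions): download the VHs
# question files with the huggingface_hub Python API instead of the CLI.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="tsunghanwu/visual_haystacks",
    repo_type="dataset",
    local_dir="dataset/VHs_qa",  # same target directory as the CLI command above
)
```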
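Before running the benchmark, it can help to verify that the expected directory layout is in place. The sketch below only checks for the directories listed in the tree above; the `dataset/` root is an assumption based on the default layout and should be adjusted if you store the data elsewhere.

```python
# Minimal sketch: sanity-check the dataset layout described above.
# Paths are taken from the directory tree in this card; adjust ROOT if you
# keep the data elsewhere.
from pathlib import Path

ROOT = Path("dataset")
EXPECTED = [
    "coco/annotations",
    "coco/test2017",
    "coco/val2017",
    "VHs_qa/single_needle/VHs_large",
    "VHs_qa/single_needle/VHs_small",
    "VHs_qa/multi_needle/multi_needle_2",
    "VHs_qa/multi_needle/multi_needle_3",
]

missing = [p for p in EXPECTED if not (ROOT / p).is_dir()]
if missing:
    print("Missing directories:\n  " + "\n  ".join(missing))
else:
    print("Dataset layout looks good.")
```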
Intended use

Primary intended uses: The primary use of VHs is research on large multimodal models and chatbots.

Primary intended users: The primary intended users of this dataset are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.