---
size_categories:
  - n<1K
task_categories:
  - visual-question-answering
dataset_info:
  features:
    - name: images
      sequence: image
    - name: page_urls
      sequence: string
    - name: image_urls
      sequence: string
    - name: question
      dtype: string
    - name: answer
      dtype: string
  splits:
    - name: test
      num_bytes: 102782224
      num_examples: 55
  download_size: 29527995
  dataset_size: 102782224
configs:
  - config_name: default
    data_files:
      - split: test
        path: data/test-*
license: apache-2.0
language:
  - ja
---

# JA-Multi-Image-VQA

## Dataset Description

JA-Multi-Image-VQA is a dataset for evaluating question-answering capabilities over multiple image inputs. We carefully collected a diverse set of 39 images paired with 55 questions in total.

Some images depict Japanese culture and objects found in Japan. The Japanese questions and answers were written manually.

## Usage

```python
from datasets import load_dataset

dataset = load_dataset("SakanaAI/JA-Multi-Image-VQA", split="test")
```
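
Each example contains the fields declared in the metadata above: `images` (a sequence of images), `page_urls`, `image_urls`, `question`, and `answer`. Below is a minimal sketch of how the test split could be iterated for evaluation, assuming the `datasets` library decodes the `images` column into PIL images; the `my_vlm` call is a hypothetical placeholder for your own model.

```python
from datasets import load_dataset

dataset = load_dataset("SakanaAI/JA-Multi-Image-VQA", split="test")

# Inspect a single example: a list of images plus a Japanese question and answer.
example = dataset[0]
print(example["question"])       # Japanese question referring to the images
print(example["answer"])         # reference answer
print(len(example["images"]))    # number of images attached to this question

# Iterate over the whole split, e.g. inside a VLM evaluation loop.
for example in dataset:
    images = example["images"]   # list of PIL.Image objects
    question = example["question"]
    # prediction = my_vlm(images, question)  # hypothetical model call
```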

## Uses

The images in this dataset are sourced from Unsplash and are free to use under the Unsplash License. They may not be sold without significant modification and may not be used to replicate a similar or competing service.

All parts of this dataset, other than the images, are licensed under the Apache 2.0 License.

## Citation

```bibtex
@misc{Llama-3-EvoVLM-JP-v2,
  url    = {https://huggingface.co/SakanaAI/Llama-3-EvoVLM-JP-v2},
  title  = {Llama-3-EvoVLM-JP-v2},
  author = {Yuichi, Inoue and Takuya, Akiba and Shing, Makoto}
}
```