task_categories:
- question-answering
- visual-question-answering
language:
- en
tags:
- Multimodal Search
- Multimodal Long Context
size_categories:
- n<1K
configs:
- config_name: end2end
data_files:
- split: end2end
path: end2end.parquet
- config_name: rerank
data_files:
- split: rerank
path: rerank.parquet
- config_name: summarization
data_files:
- split: summarization
path: summarization.parquet
dataset_info:
- config_name: end2end
features:
- name: sample_id
dtype: string
- name: query
dtype: string
- name: query_image
dtype: image
- name: image_search_result
dtype: image
- name: area
dtype: string
- name: subfield
dtype: string
- name: timestamp
dtype: string
- name: gt_requery
dtype: string
- name: gt_answer
dtype: string
- name: alternative_gt_answers
sequence: string
splits:
- name: end2end
num_examples: 300
- config_name: rerank
features:
- name: sample_id
dtype: string
- name: query
dtype: string
- name: query_image
dtype: image
- name: image_search_result
dtype: image
- name: area
dtype: string
- name: subfield
dtype: string
- name: timestamp
dtype: string
- name: valid
sequence: int32
- name: not_sure
sequence: int32
- name: invalid
sequence: int32
- name: gt_answer
dtype: string
- name: website0_info
struct:
- name: title
dtype: string
- name: snippet
dtype: string
- name: url
dtype: string
- name: website1_info
struct:
- name: title
dtype: string
- name: snippet
dtype: string
- name: url
dtype: string
- name: website2_info
struct:
- name: title
dtype: string
- name: snippet
dtype: string
- name: url
dtype: string
- name: website3_info
struct:
- name: title
dtype: string
- name: snippet
dtype: string
- name: url
dtype: string
- name: website4_info
struct:
- name: title
dtype: string
- name: snippet
dtype: string
- name: url
dtype: string
- name: website5_info
struct:
- name: title
dtype: string
- name: snippet
dtype: string
- name: url
dtype: string
- name: website6_info
struct:
- name: title
dtype: string
- name: snippet
dtype: string
- name: url
dtype: string
- name: website7_info
struct:
- name: title
dtype: string
- name: snippet
dtype: string
- name: url
dtype: string
- name: website0_head_screenshot
dtype: image
- name: website1_head_screenshot
dtype: image
- name: website2_head_screenshot
dtype: image
- name: website3_head_screenshot
dtype: image
- name: website4_head_screenshot
dtype: image
- name: website5_head_screenshot
dtype: image
- name: website6_head_screenshot
dtype: image
- name: website7_head_screenshot
dtype: image
splits:
- name: rerank
num_examples: 300
- config_name: summarization
features:
- name: sample_id
dtype: string
- name: query
dtype: string
- name: query_image
dtype: image
- name: image_search_result
dtype: image
- name: area
dtype: string
- name: subfield
dtype: string
- name: timestamp
dtype: string
- name: website_title
dtype: string
- name: website_snippet
dtype: string
- name: website_url
dtype: string
- name: website_original_content
dtype: string
- name: website_retrieved_content
dtype: string
- name: website_fullpage_screenshot
dtype: image
- name: gt_requery
dtype: string
- name: gt_answer
dtype: string
- name: alternative_gt_answers
sequence: string
splits:
- name: summarization
num_examples: 300
MMSearch: Benchmarking the Potential of Large Models as Multi-modal Search Engines
Official repository for the paper "MMSearch: Benchmarking the Potential of Large Models as Multi-modal Search Engines".
For more details, please refer to the project page with dataset exploration and visualization tools: https://mmsearch.github.io/.
[Webpage] [Paper] [Huggingface Dataset] [Leaderboard] [Visualization]
News
- [2024.09.25] The evaluation code now supports directly using models implemented in VLMEvalKit!
- [2024.09.22] We release the evaluation code; you only need to add an inference API for your LMM!
- [2024.09.20] We release the arXiv paper and all MMSearch data samples in a Hugging Face dataset.
ToDo
- Coming soon: MMSearch-Engine (for new queries)
About MMSearch
The capabilities of Large Multi-modal Models (LMMs) in multimodal search remain insufficiently explored and evaluated. To fill this gap, we first design MMSearch-Engine, a carefully crafted pipeline that enables any LMM to function as a multimodal AI search engine.
The overview of MMSearch-Engine.
To further evaluate the potential of LMMs in the multimodal search domain, we introduce MMSearch, an all-around benchmark designed to assess multimodal search performance. The benchmark contains 300 manually collected instances spanning 14 subfields, with no overlap with current LMMs' training data, ensuring the correct answers can only be obtained through searching.
The overview of MMSearch.
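Each of the three task configs listed in the metadata above (`end2end`, `rerank`, `summarization`) ships as a single parquet split. A minimal sketch of assembling the split-to-file mapping, assuming the parquet files from this card have been downloaded locally and the `datasets` library is installed for the (commented) load call:

```python
# Map each config name to its parquet file, as declared in the card's YAML metadata.
CONFIGS = {
    "end2end": "end2end.parquet",
    "rerank": "rerank.parquet",
    "summarization": "summarization.parquet",
}

def data_files_for(config: str) -> dict:
    """Return the {split_name: parquet_path} mapping for one config."""
    if config not in CONFIGS:
        raise KeyError(f"unknown config: {config!r}")
    return {config: CONFIGS[config]}

# Example (requires `datasets` and the downloaded parquet files):
# from datasets import load_dataset
# rerank = load_dataset("parquet", data_files=data_files_for("rerank"), split="rerank")
```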
In addition, we propose a step-wise evaluation strategy to better understand LMMs' searching capability. The models are evaluated on three individual tasks (requery, rerank, and summarization) and one challenging end-to-end task covering the complete searching process. The final score is a weighted combination of the four task scores.
Outline of Evaluation Tasks, Inputs, and Outputs.
An example of LMM input, output, and ground truth for four evaluation tasks is shown here.
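The step-wise scoring described above can be sketched as a weighted average over the four per-task scores. The weights below are illustrative placeholders (equal by default), not the official weights defined in the paper:

```python
def mmsearch_final_score(requery: float, rerank: float,
                         summarization: float, end2end: float,
                         weights=(0.25, 0.25, 0.25, 0.25)) -> float:
    """Combine the four per-task scores (each in [0, 1]) into one final score.

    `weights` is a placeholder assumption; the official weighting is
    defined in the MMSearch paper.
    """
    if abs(sum(weights) - 1.0) > 1e-9:
        raise ValueError("weights must sum to 1")
    scores = (requery, rerank, summarization, end2end)
    return sum(w * s for w, s in zip(weights, scores))
```

For example, with equal weights, four perfect sub-scores of 1.0 combine to a final score of 1.0.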
Leaderboard
Contributing to the Leaderboard
The leaderboard is continuously updated; contributions of your excellent LMMs are welcome!
Citation
If you find MMSearch useful for your research and applications, please kindly cite using this BibTeX:
@article{jiang2024mmsearch,
title={MMSearch: Benchmarking the Potential of Large Models as Multi-modal Search Engines},
author={Jiang, Dongzhi and Zhang, Renrui and Guo, Ziyu and Wu, Yanmin and Lei, Jiayi and Qiu, Pengshuo and Lu, Pan and Chen, Zehui and Song, Guanglu and Gao, Peng and others},
journal={arXiv preprint arXiv:2409.12959},
year={2024}
}
Related Work
Explore our additional research on Vision-Language Large Models:
- [MathVerse] MathVerse: Does Your Multi-modal LLM Truly See the Diagrams in Visual Math Problems?
- [MathVista] MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts
- [LLaMA-Adapter] LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention
- [LLaMA-Adapter V2] LLaMA-Adapter V2: Parameter-Efficient Visual Instruction Model
- [ImageBind-LLM] ImageBind-LLM: Multi-modality Instruction Tuning
- [SPHINX] The Joint Mixing of Weights, Tasks, and Visual Embeddings for Multi-modal LLMs
- [SPHINX-X] Scaling Data and Parameters for a Family of Multi-modal Large Language Models
- [Point-Bind & Point-LLM] Multi-modality 3D Understanding, Generation, and Instruction Following
- [PerSAM] Personalize Segment Anything Model with One Shot
- [CoMat] CoMat: Aligning Text-to-Image Diffusion Model with Image-to-Text Concept Matching