---
license: apache-2.0
language:
- en
tags:
- rag
- long-context
- llm-search
- reasoning
- factuality
- retrieval
- question-answering
- iterative-search
task_categories:
- question-answering
- table-question-answering
pretty_name: FRAMES
size_categories:
- n<1K
---
# FRAMES: Factuality, Retrieval, And reasoning MEasurement Set
FRAMES is a comprehensive evaluation dataset designed to test the capabilities of Retrieval-Augmented Generation (RAG) systems across factuality, retrieval accuracy, and reasoning.
Our paper with details and experiments is available on arXiv: [https://arxiv.org/abs/2409.12941](https://arxiv.org/abs/2409.12941).
## Dataset Overview
- 824 challenging multi-hop questions requiring information from 2-15 Wikipedia articles
- Questions span diverse topics, including history, sports, science, animals, and health
- Each question is labeled with reasoning types: numerical, tabular, multiple constraints, temporal, and post-processing
- Gold answers and relevant Wikipedia articles provided for each question
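For a quick look at the schema, the dataset can be loaded with the Hugging Face `datasets` library. A minimal sketch, assuming the repo id is `google/frames-benchmark` with a single `test` split; the field names hinted at in the comments are illustrative, so check the data viewer for the exact column names:

```python
from datasets import load_dataset

# Repo id and split name are assumptions; verify against this card's repo.
ds = load_dataset("google/frames-benchmark", split="test")

print(len(ds))         # expected: 824 questions
example = ds[0]
print(example.keys())  # e.g. question prompt, gold answer, reasoning types, wiki links
```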
## Key Features
- Tests end-to-end RAG capabilities in a unified framework
- Requires integration of information from multiple sources
- Incorporates complex reasoning and temporal disambiguation
- Designed to be challenging for state-of-the-art language models
## Usage
This dataset can be used to:
- Evaluate RAG system performance (a minimal evaluation loop is sketched after this list)
- Benchmark language model factuality and reasoning
- Develop and test multi-hop retrieval strategies
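As a starting point for a naive-prompting evaluation, the loop below shows the general shape. This is only a sketch: `generate_answer` is a hypothetical placeholder for whatever model API you call, the `Prompt`/`Answer` field names are assumptions, and the substring check is a deliberately crude stand-in for a proper (e.g. model-based) grader:

```python
from datasets import load_dataset

def generate_answer(prompt: str) -> str:
    """Hypothetical placeholder; replace with a real model call."""
    return ""

ds = load_dataset("google/frames-benchmark", split="test")  # repo id assumed

correct = 0
for row in ds:
    pred = generate_answer(row["Prompt"])  # field names assumed
    # Substring match is a crude proxy for a faithful grading setup.
    correct += int(row["Answer"].lower() in pred.lower())

print(f"Accuracy: {correct / len(ds):.1%}")
```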
## Baseline Results
We provide baseline results using a state-of-the-art model, Gemini-Pro-1.5-0514:
- Naive prompting: 40.8% accuracy
- BM25 retrieval (4 docs): 47.4% accuracy (a retrieval sketch follows this list)
- Oracle retrieval: 72.9% accuracy
- Multi-step retrieval & reasoning: 66% accuracy
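To set up retrieval in the spirit of the BM25 baseline, one option is the `rank_bm25` package. A minimal sketch, assuming a stand-in list of article texts rather than the paper's actual Wikipedia index:

```python
from rank_bm25 import BM25Okapi  # pip install rank-bm25

# Stand-in corpus; the paper retrieves over Wikipedia articles.
corpus = [
    "First Wikipedia article text ...",
    "Second Wikipedia article text ...",
    "Third Wikipedia article text ...",
]
tokenized_corpus = [doc.lower().split() for doc in corpus]
bm25 = BM25Okapi(tokenized_corpus)

question = "Example multi-hop question?"
top_docs = bm25.get_top_n(question.lower().split(), corpus, n=4)  # 4 docs, as in the baseline
```

The retrieved documents would then be placed in the model's context before prompting, which is how the 47.4% BM25 baseline differs from naive prompting.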
## Citation
If you use this dataset in your research, please cite our paper:
```
@misc{krishna2024factfetchreasonunified,
  title={Fact, Fetch, and Reason: A Unified Evaluation of Retrieval-Augmented Generation},
  author={Satyapriya Krishna and Kalpesh Krishna and Anhad Mohananey and Steven Schwarcz and Adam Stambler and Shyam Upadhyay and Manaal Faruqui},
  year={2024},
  eprint={2409.12941},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2409.12941},
}
```
We hope FRAMES will be useful for advancing RAG systems and language model capabilities. For more details, please refer to our full paper.