---
configs:
- config_name: 0k
data_files:
- split: qa1
path: data/qa1/0k.json
- split: qa2
path: data/qa2/0k.json
- split: qa3
path: data/qa3/0k.json
- split: qa4
path: data/qa4/0k.json
- split: qa5
path: data/qa5/0k.json
- split: qa6
path: data/qa6/0k.json
- split: qa7
path: data/qa7/0k.json
- split: qa8
path: data/qa8/0k.json
- split: qa9
path: data/qa9/0k.json
- split: qa10
path: data/qa10/0k.json
- config_name: 1k
data_files:
- split: qa1
path: data/qa1/1k.json
- split: qa2
path: data/qa2/1k.json
- split: qa3
path: data/qa3/1k.json
- split: qa4
path: data/qa4/1k.json
- split: qa5
path: data/qa5/1k.json
- split: qa6
path: data/qa6/1k.json
- split: qa7
path: data/qa7/1k.json
- split: qa8
path: data/qa8/1k.json
- split: qa9
path: data/qa9/1k.json
- split: qa10
path: data/qa10/1k.json
- config_name: 2k
data_files:
- split: qa1
path: data/qa1/2k.json
- split: qa2
path: data/qa2/2k.json
- split: qa3
path: data/qa3/2k.json
- split: qa4
path: data/qa4/2k.json
- split: qa5
path: data/qa5/2k.json
- split: qa6
path: data/qa6/2k.json
- split: qa7
path: data/qa7/2k.json
- split: qa8
path: data/qa8/2k.json
- split: qa9
path: data/qa9/2k.json
- split: qa10
path: data/qa10/2k.json
- config_name: 4k
data_files:
- split: qa1
path: data/qa1/4k.json
- split: qa2
path: data/qa2/4k.json
- split: qa3
path: data/qa3/4k.json
- split: qa4
path: data/qa4/4k.json
- split: qa5
path: data/qa5/4k.json
- split: qa6
path: data/qa6/4k.json
- split: qa7
path: data/qa7/4k.json
- split: qa8
path: data/qa8/4k.json
- split: qa9
path: data/qa9/4k.json
- split: qa10
path: data/qa10/4k.json
- config_name: 8k
data_files:
- split: qa1
path: data/qa1/8k.json
- split: qa2
path: data/qa2/8k.json
- split: qa3
path: data/qa3/8k.json
- split: qa4
path: data/qa4/8k.json
- split: qa5
path: data/qa5/8k.json
- split: qa6
path: data/qa6/8k.json
- split: qa7
path: data/qa7/8k.json
- split: qa8
path: data/qa8/8k.json
- split: qa9
path: data/qa9/8k.json
- split: qa10
path: data/qa10/8k.json
- config_name: 16k
data_files:
- split: qa1
path: data/qa1/16k.json
- split: qa2
path: data/qa2/16k.json
- split: qa3
path: data/qa3/16k.json
- split: qa4
path: data/qa4/16k.json
- split: qa5
path: data/qa5/16k.json
- split: qa6
path: data/qa6/16k.json
- split: qa7
path: data/qa7/16k.json
- split: qa8
path: data/qa8/16k.json
- split: qa9
path: data/qa9/16k.json
- split: qa10
path: data/qa10/16k.json
- config_name: 32k
data_files:
- split: qa1
path: data/qa1/32k.json
- split: qa2
path: data/qa2/32k.json
- split: qa3
path: data/qa3/32k.json
- split: qa4
path: data/qa4/32k.json
- split: qa5
path: data/qa5/32k.json
- split: qa6
path: data/qa6/32k.json
- split: qa7
path: data/qa7/32k.json
- split: qa8
path: data/qa8/32k.json
- split: qa9
path: data/qa9/32k.json
- split: qa10
path: data/qa10/32k.json
---
# BABILong (5k train samples): a long-context needle-in-a-haystack benchmark for LLMs
The preprint is available on [arXiv](https://arxiv.org/abs/2402.10790).
## bAbI + Books = BABILong
**BABILong** is a novel generative benchmark for evaluating the performance of NLP models in
processing arbitrarily long documents with distributed facts.
It contains 7 configs, each corresponding to a sequence length in tokens: '0k', '1k', '2k', '4k', '8k', '16k' and '32k'. Each config has 10 splits, one per bAbI task: 'qa1' through 'qa10'.
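Each config can be loaded with the 🤗 `datasets` library by passing the length config as the name and the task as the split. A minimal sketch (the Hub repository id below is a placeholder, not necessarily this dataset's actual id):
```python
from datasets import load_dataset

# Load from the Hub: length config as the name, bAbI task as the split.
# NOTE: "namespace/babilong" is a placeholder -- substitute this
# dataset's actual Hub repository id.
qa1_4k = load_dataset("namespace/babilong", "4k", split="qa1")

# Alternatively, load a single file from a local clone of this repo;
# the paths match the config listed above.
qa1_4k_local = load_dataset("json", data_files="data/qa1/4k.json", split="train")
print(qa1_4k_local.column_names)  # inspect the sample schema
```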
Solving tasks with a long context requires the model to distinguish important information from large amounts of irrelevant detail. To simulate this setting we "hide" the sentences of the original task between sentences of irrelevant text. We use the [bAbI](https://huggingface.co/datasets/facebook/babi_qa) dataset [1] as facts and [PG19](https://huggingface.co/datasets/pg19) as background text. The resulting test samples can reach lengths of **millions of tokens**.
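As an illustration, here is a minimal sketch of the "hiding" step described above. It is not the authors' generation code, and the uniform random choice of insertion points is an assumption:
```python
import random

def hide_facts(facts, background):
    """Insert task facts at random positions among background
    sentences, preserving the facts' relative order (an illustrative
    simplification, not the official BABILong generator)."""
    positions = sorted(random.sample(range(len(background) + 1), len(facts)))
    mixed = list(background)
    # Insert from the rightmost position first so earlier
    # insertion points remain valid.
    for pos, fact in sorted(zip(positions, facts), reverse=True):
        mixed.insert(pos, fact)
    return " ".join(mixed)

facts = ["Mary travelled to the office.", "Mary went back to the garden."]
background = [f"Filler sentence number {i}." for i in range(8)]  # PG19 text in the real benchmark
print(hide_facts(facts, background))
```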
BABILong consists of 10 tasks designed to evaluate basic aspects of reasoning. The bAbI tasks are generated by simulating a set of characters and objects engaged in various movements and interactions with each other in multiple locations. Each interaction is represented by a fact, e.g. **"Mary travelled to the office"**, and the task is to answer a question using the facts from the current simulation, for instance, **"Where is Mary?"**. The bAbI tasks vary in the number of facts, question complexity, and the aspect of reasoning involved.
### First ten tasks of BABILong
| Task | Name                   | Facts per task | Supporting facts per task |
|------|------------------------|----------------|---------------------------|
| qa1  | single supporting fact | 2 - 10         | 1                         |
| qa2  | two supporting facts   | 2 - 68         | 2                         |
| qa3  | three supporting facts | 4 - 32         | 3                         |
| qa4  | two arg relations      | 2              | 1                         |
| qa5  | three arg relations    | 2 - 126        | 1                         |
| qa6  | yes-no questions       | 2 - 26         | 1                         |
| qa7  | counting               | 2 - 52         | 1 - 10                    |
| qa8  | lists-sets             | 2 - 50         | 1 - 8                     |
| qa9  | simple negation        | 2 - 10         | 1                         |
| qa10 | indefinite knowledge   | 2 - 10         | 1                         |
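To evaluate across the whole benchmark, one would typically sweep every length config and task split. A hedged sketch (the repository id is again a placeholder):
```python
from datasets import load_dataset

LENGTHS = ["0k", "1k", "2k", "4k", "8k", "16k", "32k"]  # configs above
TASKS = [f"qa{i}" for i in range(1, 11)]                # splits qa1..qa10

for length in LENGTHS:
    for task in TASKS:
        # "namespace/babilong" is a placeholder for this dataset's Hub id.
        ds = load_dataset("namespace/babilong", length, split=task)
        print(length, task, len(ds))  # e.g. report split sizes
```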
Join us in this exciting endeavor and let's push the boundaries of what's possible together!
## Citation
```
@misc{kuratov2024search,
title={In Search of Needles in a 10M Haystack: Recurrent Memory Finds What LLMs Miss},
author={Yuri Kuratov and Aydar Bulatov and Petr Anokhin and Dmitry Sorokin and Artyom Sorokin and Mikhail Burtsev},
year={2024},
eprint={2402.10790},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## References
[1] Weston, Jason, et al. "Towards AI-complete question answering: A set of prerequisite toy tasks." arXiv preprint [arXiv:1502.05698](https://arxiv.org/abs/1502.05698) (2015).