---
annotations_creators:
- found
language_creators:
- found
language:
- en
license:
- unknown
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- summarization
paperswithcode_id: multi-xscience
pretty_name: Multi-XScience
---
This is a copy of the Multi-XScience dataset, except that the input source documents of its test
split have been replaced by documents retrieved with a sparse retriever. The retrieval pipeline used:

- **query**: the `related_work` field of each example
- **corpus**: the union of all documents in the `train`, `validation` and `test` splits
- **retriever**: BM25 via PyTerrier with default settings
- **top-k strategy**: `"max"`, i.e. the number of documents retrieved, `k`, is set to the maximum number of documents seen across examples in this dataset, in this case `k==20`
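For readers unfamiliar with the scoring behind this pipeline, the sketch below implements plain Okapi BM25 scoring from scratch (standard parameters `k1=1.2`, `b=0.75`). It is an illustration of how BM25 ranks documents against a query, not the PyTerrier code actually used to build this dataset; the toy corpus and query are invented for the example.

```python
import math
from collections import Counter

def bm25_scores(query, corpus, k1=1.2, b=0.75):
    """Score each tokenized document in `corpus` against a tokenized `query`
    with Okapi BM25. Returns one float score per document."""
    N = len(corpus)
    avgdl = sum(len(doc) for doc in corpus) / N
    # document frequency of each term across the corpus
    df = Counter()
    for doc in corpus:
        df.update(set(doc))
    scores = []
    for doc in corpus:
        tf = Counter(doc)
        score = 0.0
        for term in query:
            if term not in tf:
                continue
            idf = math.log(1 + (N - df[term] + 0.5) / (df[term] + 0.5))
            num = tf[term] * (k1 + 1)
            den = tf[term] + k1 * (1 - b + b * len(doc) / avgdl)
            score += idf * num / den
        scores.append(score)
    return scores

# toy corpus and query, invented for illustration
corpus = [
    "graph neural networks for citation recommendation".split(),
    "bm25 is a sparse retrieval baseline".split(),
    "multi document summarization of scientific papers".split(),
]
query = "sparse retrieval with bm25".split()
scores = bm25_scores(query, corpus)
best = max(range(len(corpus)), key=scores.__getitem__)  # index of top-ranked doc
```

In the actual pipeline, each example's `related_work` text plays the role of `query` and the pooled `train`/`validation`/`test` documents play the role of `corpus`, with the top `k==20` documents kept per example.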
Retrieval results on the `test` set:
| Recall@100 | Rprec | Precision@k | Recall@k |
|---|---|---|---|
| 0.548 | 0.2272 | 0.055 | 0.4039 |
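As a reference for how the `@k` metrics in the table are typically defined, here is a minimal per-example sketch: Precision@k is the fraction of the `k` retrieved documents that are gold source documents, and Recall@k is the fraction of gold documents found among the top `k`. The doc IDs and counts below are hypothetical; the card does not include the evaluation script.

```python
def precision_recall_at_k(retrieved, relevant, k):
    """Precision@k and Recall@k for a single example.

    `retrieved` is a ranked list of doc IDs; `relevant` is the set of
    gold source-document IDs for the example."""
    top_k = retrieved[:k]
    hits = len(set(top_k) & set(relevant))
    precision = hits / k
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# hypothetical example: k == 20 retrieved docs, 3 gold documents,
# of which 2 appear in the retrieved list
retrieved = [f"d{i}" for i in range(20)]
relevant = {"d2", "d7", "d99"}
p, r = precision_recall_at_k(retrieved, relevant, k=20)
# hits = 2, so p = 2/20 = 0.1 and r = 2/3
```

The numbers reported in the table are the averages of these per-example values over the test split, with `k==20` as described above.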