---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- other
multilinguality:
- monolingual
pretty_name: Multi-News
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- summarization
task_ids:
- news-articles-summarization
paperswithcode_id: multi-news
train-eval-index:
- config: default
  task: summarization
  task_id: summarization
  splits:
    train_split: train
    eval_split: test
  col_mapping:
    document: text
    summary: target
  metrics:
  - type: rouge
    name: Rouge
---
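As a minimal usage sketch, the dataset can be loaded with the `datasets` library and exposes the `document` and `summary` columns referenced in the `col_mapping` above. The repository ID below is a placeholder, since this card does not state it; substitute the actual Hub path.

```python
from datasets import load_dataset

# Placeholder repository ID; replace with the actual Hub path of this dataset.
dataset = load_dataset("user/multi_news_dense_retrieval")

# Each example has a "document" field (the input source documents) and a
# "summary" field (the reference summary), per the col_mapping above.
example = dataset["test"][0]
print(example["document"][:500])
print(example["summary"][:500])
```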
This is a copy of the Multi-News dataset, except that the input source documents of its `test` split have been replaced with documents retrieved by a dense retriever. The retrieval pipeline (a rough code sketch follows this list) used:

- **query**: the `summary` field of each example
- **corpus**: the union of all documents in the `train`, `validation` and `test` splits
- **retriever**: `facebook/contriever-msmarco` via PyTerrier with default settings
- **top-k strategy**: `"oracle"`, i.e. the number of documents retrieved, `k`, is set to the original number of input documents for each example
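The card states that retrieval was run via PyTerrier with default settings; as a rough approximation of the same idea (not the exact setup used here), the query encoding and oracle top-k selection can be sketched directly with `transformers`. The corpus, summary and `k` below are illustrative placeholders.

```python
import torch
from transformers import AutoModel, AutoTokenizer

def mean_pool(last_hidden_state, attention_mask):
    # Contriever embeddings are mean-pooled token states, masked by attention.
    mask = attention_mask.unsqueeze(-1).float()
    return (last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)

tokenizer = AutoTokenizer.from_pretrained("facebook/contriever-msmarco")
model = AutoModel.from_pretrained("facebook/contriever-msmarco").eval()

@torch.no_grad()
def embed(texts):
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    return mean_pool(model(**batch).last_hidden_state, batch["attention_mask"])

# Illustrative corpus and example; in the actual pipeline the corpus is the
# union of all documents across the train, validation and test splits.
corpus = ["doc one ...", "doc two ...", "doc three ..."]
summary = "reference summary used as the query"
k = 2  # "oracle" k: the original number of input documents for this example

scores = embed([summary]) @ embed(corpus).T        # dot-product similarity
topk = torch.topk(scores.squeeze(0), k=k).indices  # indices of retrieved docs
retrieved = [corpus[i] for i in topk.tolist()]
```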
Retrieval results on the `train` set:

| Recall@100 | Rprec  | Precision@k | Recall@k |
|------------|--------|-------------|----------|
| 0.8661     | 0.6867 | 0.6867      | 0.6867   |
Retrieval results on the `validation` set:

| Recall@100 | Rprec  | Precision@k | Recall@k |
|------------|--------|-------------|----------|
| 0.8626     | 0.6859 | 0.6859      | 0.6859   |
Retrieval results on the `test` set:

| Recall@100 | Rprec  | Precision@k | Recall@k |
|------------|--------|-------------|----------|
| 0.8625     | 0.6927 | 0.6927      | 0.6927   |
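Under the oracle top-k strategy, `k` equals the number of relevant (original) input documents for each example, so Precision@k, Recall@k and Rprec reduce to the same quantity, which is consistent with those three columns matching in the tables above. A minimal per-example sketch of how such scores could be computed (the actual evaluation averages over all examples in a split):

```python
def oracle_k_scores(retrieved_ids, relevant_ids):
    """Per-example IR metrics when k is set to the number of relevant docs."""
    k = len(relevant_ids)  # oracle k = number of original input documents
    hits = len(set(retrieved_ids[:k]) & set(relevant_ids))
    precision_at_k = hits / k
    recall_at_k = hits / len(relevant_ids)  # equals precision_at_k since k == |relevant|
    r_precision = hits / len(relevant_ids)  # precision at rank |relevant|, same value here
    recall_at_100 = len(set(retrieved_ids[:100]) & set(relevant_ids)) / len(relevant_ids)
    return recall_at_100, r_precision, precision_at_k, recall_at_k

# Toy example: 3 relevant documents, 2 of which appear in the top-3 retrieved.
print(oracle_k_scores(["d1", "d7", "d3", "d9"], ["d1", "d2", "d3"]))
# -> (0.666..., 0.666..., 0.666..., 0.666...)
```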