---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- other
multilinguality:
- monolingual
pretty_name: WCEP-10
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- summarization
task_ids:
- news-articles-summarization
paperswithcode_id: wcep
train-eval-index:
- config: default
  task: summarization
  task_id: summarization
  splits:
    train_split: train
    eval_split: test
  col_mapping:
    document: text
    summary: target
  metrics:
  - type: rouge
    name: Rouge
---
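The `train-eval-index` entry above maps the `document` column to the model input and `summary` to the target. A minimal loading sketch with the 🤗 `datasets` library is shown below; the repo id is a placeholder, not the actual Hub path of this dataset.

```python
# Minimal sketch: loading this dataset with the Hugging Face `datasets` library.
# "<org>/<wcep-10-repo>" is a placeholder; replace it with the actual repo id.
from datasets import load_dataset

dataset = load_dataset("<org>/<wcep-10-repo>")

example = dataset["test"][0]
print(example.keys())      # expect at least the `document` and `summary` columns
print(example["summary"])  # the target summary (mapped to `target` above)
```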
This is a copy of the WCEP-10 dataset, except the input source documents of its test split have been replaced by documents retrieved with a sparse retriever. The retrieval pipeline (sketched in the code below) used:

- query: the `summary` field of each example
- corpus: the union of all documents in the `train`, `validation` and `test` splits
- retriever: BM25 via PyTerrier with default settings
- top-k strategy: `"max"`, i.e. the number of documents retrieved, `k`, is set to the maximum number of documents seen across examples in this dataset, in this case `k==10`
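The snippet below is a minimal sketch of this pipeline with PyTerrier. The index path, `docno` scheme, document iterator, and query cleaning step are illustrative assumptions, not details taken from the released dataset or its original build scripts.

```python
# Minimal sketch of the BM25 retrieval pipeline described above.
# `docs` is assumed to be an iterable of {"docno": ..., "text": ...} dicts
# covering the union of train/validation/test source documents.
import pyterrier as pt

if not pt.started():
    pt.init()

def build_index(docs, index_dir="./wcep10-bm25-index"):
    """Index the pooled source documents (path and docno length are assumptions)."""
    indexer = pt.IterDictIndexer(index_dir, meta={"docno": 32})
    return indexer.index(docs)

def retrieve(index_ref, summary, k=10):
    """Use an example's summary as the query and keep the top-k documents."""
    bm25 = pt.BatchRetrieve(index_ref, wmodel="BM25", num_results=k)
    # Terrier's query parser rejects some punctuation, so strip it first.
    query = "".join(ch for ch in summary if ch.isalnum() or ch.isspace())
    return bm25.search(query)  # DataFrame with docno, score and rank columns
```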
Retrieval results on the `train` set:

| Recall@100 | Rprec | Precision@k | Recall@k |
|---|---|---|---|
| 0.8753 | 0.6443 | 0.5919 | 0.6588 |
Retrieval results on the `validation` set:

| Recall@100 | Rprec | Precision@k | Recall@k |
|---|---|---|---|
| 0.8706 | 0.6280 | 0.5988 | 0.6346 |
Retrieval results on the `test` set:

| Recall@100 | Rprec | Precision@k | Recall@k |
|---|---|---|---|
| 0.8836 | 0.6658 | 0.6296 | 0.6746 |
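For reference, the sketch below shows how per-example Precision@k, Recall@k and R-precision can be computed from a ranked retrieval list and the set of gold source documents. It is a generic illustration of the reported metrics, not the exact evaluation script used; the tables above presumably report means of these per-example values.

```python
# Generic per-example retrieval metrics.
# `retrieved` is the ranked list of doc ids returned by BM25;
# `relevant` is the (non-empty) set of gold source documents for the example.
def precision_at_k(retrieved, relevant, k):
    top_k = retrieved[:k]
    return len(set(top_k) & relevant) / k

def recall_at_k(retrieved, relevant, k):
    top_k = retrieved[:k]
    return len(set(top_k) & relevant) / len(relevant)

def r_precision(retrieved, relevant):
    # Precision at R, where R is the number of relevant documents.
    return precision_at_k(retrieved, relevant, len(relevant))
```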