---
language:
- en
paperswithcode_id: comqa
pretty_name: ComQA
dataset_info:
  features:
  - name: cluster_id
    dtype: string
  - name: questions
    sequence: string
  - name: answers
    sequence: string
  splits:
  - name: train
    num_bytes: 696645
    num_examples: 3966
  - name: test
    num_bytes: 273384
    num_examples: 2243
  - name: validation
    num_bytes: 131945
    num_examples: 966
  download_size: 1671684
  dataset_size: 1101974
task_categories:
- question-answering
license: unknown
---
Dataset Card for "com_qa"
Table of Contents
- Dataset Description
- Dataset Structure
- Dataset Creation
- Considerations for Using the Data
- Additional Information
Dataset Description
- Homepage: http://qa.mpi-inf.mpg.de/comqa/
- Repository: More Information Needed
- Paper: https://doi.org/10.18653/v1/N19-1027 (preprint: https://arxiv.org/abs/1809.09528)
- Point of Contact: Rishiraj Saha Roy
- Size of downloaded dataset files: 1.67 MB
- Size of the generated dataset: 1.10 MB
- Total amount of disk used: 2.78 MB
Dataset Summary
ComQA is a dataset of 11,214 questions collected from WikiAnswers, a community question answering website. Collecting questions from such a site ensures that the information needs are those of actual users. Moreover, questions posed there often cannot be answered by commercial search engines or QA technology, making them more interesting for driving future research than questions collected from a search engine's query log. The dataset contains questions with various challenging phenomena, such as the need for temporal reasoning; comparison (e.g., comparatives, superlatives, ordinals); compositionality (multiple, possibly nested, subquestions with multiple entities); and unanswerable questions (e.g., "Who was the first human being on Mars?"). Through a large crowdsourcing effort, questions in ComQA are grouped into 4,834 paraphrase clusters that express the same information need. Each cluster is annotated with its answer(s). ComQA answers come in the form of Wikipedia entities wherever possible. Where answers are temporal or measurable quantities, they are normalized using TIMEX3 and the International System of Units (SI).
Supported Tasks and Leaderboards
Languages
Dataset Structure
Data Instances
default
- Size of downloaded dataset files: 1.67 MB
- Size of the generated dataset: 1.10 MB
- Total amount of disk used: 2.78 MB
An example of 'validation' looks as follows.
{
"answers": ["https://en.wikipedia.org/wiki/north_sea"],
"cluster_id": "cluster-922",
"questions": ["what sea separates the scandinavia peninsula from britain?", "which sea separates britain from scandinavia?"]
}
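The dataset can be loaded with the Hugging Face `datasets` library. A minimal sketch, assuming the hub ID `com_qa` taken from this card's title:

```python
from datasets import load_dataset

# Load all splits; the hub ID "com_qa" is assumed from this card's title.
ds = load_dataset("com_qa")

# Inspect one paraphrase cluster from the validation split.
example = ds["validation"][0]
print(example["cluster_id"])  # e.g. "cluster-922"
print(example["questions"])   # paraphrased questions in the cluster
print(example["answers"])     # shared answers (Wikipedia URLs where possible)
```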
Data Fields
The data fields are the same among all splits.
default
- cluster_id: a string feature.
- questions: a list of string features.
- answers: a list of string features.
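Because each example is a cluster of paraphrases that share the same answers, a common preprocessing step is to flatten clusters into individual question-answer pairs. A sketch of that (hypothetical) step:

```python
from datasets import load_dataset

ds = load_dataset("com_qa", split="train")

# Expand each paraphrase cluster into one record per question,
# duplicating the cluster's shared answers.
pairs = [
    {"cluster_id": ex["cluster_id"], "question": q, "answers": ex["answers"]}
    for ex in ds
    for q in ex["questions"]
]
print(len(ds), "clusters ->", len(pairs), "question-answer pairs")
```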
Data Splits
| name    | train | validation | test |
|---------|-------|------------|------|
| default | 3966  | 966        | 2243 |
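The split sizes above can be verified from the loaded `DatasetDict` (same assumed hub ID as in the earlier sketch):

```python
from datasets import load_dataset

ds = load_dataset("com_qa")
for name, split in ds.items():
    print(name, split.num_rows)
# Per this card: train 3966, validation 966, test 2243
```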
Dataset Creation
Curation Rationale
Source Data
Initial Data Collection and Normalization
Who are the source language producers?
Annotations
Annotation process
Who are the annotators?
Personal and Sensitive Information
Considerations for Using the Data
Social Impact of Dataset
Discussion of Biases
Other Known Limitations
Additional Information
Dataset Curators
Licensing Information
Citation Information
@inproceedings{abujabal-etal-2019-comqa,
    title = {{ComQA}: A Community-sourced Dataset for Complex Factoid Question Answering with Paraphrase Clusters},
    author = {Abujabal, Abdalghani and
      Saha Roy, Rishiraj and
      Yahya, Mohamed and
      Weikum, Gerhard},
    booktitle = {Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)},
    month = {jun},
    year = {2019},
    address = {Minneapolis, Minnesota},
    publisher = {Association for Computational Linguistics},
    url = {https://www.aclweb.org/anthology/N19-1027},
    doi = {10.18653/v1/N19-1027},
    pages = {307--317},
}
Contributions
Thanks to @lewtun, @thomwolf, @mariamabarham, @patrickvonplaten, @albertvillanova for adding this dataset.