---
dataset_info:
  features:
  - name: context
    dtype: string
  - name: questions
    dtype: string
  splits:
  - name: train
    num_bytes: 20544587
    num_examples: 18896
  - name: validation
    num_bytes: 2405721
    num_examples: 2067
  download_size: 12611933
  dataset_size: 22950308
annotations_creators:
- crowdsourced
language:
- en
language_creators:
- crowdsourced
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: Question Generation for T5 based on Squad V1.1
size_categories:
- 10K<n<100K
source_datasets:
- extended|squad
tags:
- questiongeneration
- question-generation
- text2text-generation
task_categories:
- text2text-generation
task_ids: []
---
# Dataset Card for "squad-v1.1-t5-question-generation"

## Dataset Description

- **Homepage:** [https://rajpurkar.github.io/SQuAD-explorer/](https://rajpurkar.github.io/SQuAD-explorer/)
- **Paper:** [SQuAD: 100,000+ Questions for Machine Comprehension of Text](https://arxiv.org/abs/1606.05250)
### Dataset Summary

This is a modified version of the Stanford Question Answering Dataset (SQuAD) adapted for question generation with All Questions in One Line (AQOL), as in [Transformer-based End-to-End Question Generation](https://arxiv.org/pdf/2005.01107v1.pdf), formatted specifically for the T5 family of models. Each context is prefixed with `generate questions: ` so that the task is unambiguous to a trained model.

Check out the generation notebook [here](https://nbviewer.org/urls/huggingface.co/datasets/derek-thomas/squad-v1.1-t5-question-generation/resolve/main/Squad_V1_Question_Generation.ipynb).
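To see the format in practice, here is a minimal sketch of loading the dataset and inspecting one training example; the repository identifier is taken from this card's title, and the field names from the schema above.

```python
from datasets import load_dataset

# Load both splits from the Hugging Face Hub.
ds = load_dataset("derek-thomas/squad-v1.1-t5-question-generation")

example = ds["train"][0]
print(example["context"][:80])    # begins with the task prefix "generate questions: "
print(example["questions"][:80])  # all questions in one line, joined by "{sep_token}"
```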
### Supported Tasks and Leaderboards

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages

The text in the dataset is in English (`en`).
## Dataset Structure

### Data Instances

#### plain_text

An example of 'train' looks as follows.
```
{
    "context": "generate questions: This is a test context.",
    "questions": "Is this a test? {sep_token} Is this another Test {sep_token}"
}
```
### Data Fields

The data fields are the same among all splits.

#### plain_text

- `context`: a `string` feature.
- `questions`: a `string` feature.
### Data Splits

| name       | train | validation |
|------------|------:|-----------:|
| plain_text | 18896 |       2067 |
### Citation Information

```
@article{2016arXiv160605250R,
       author = {{Rajpurkar}, Pranav and {Zhang}, Jian and {Lopyrev}, Konstantin and {Liang}, Percy},
        title = "{SQuAD: 100,000+ Questions for Machine Comprehension of Text}",
      journal = {arXiv e-prints},
         year = 2016,
          eid = {arXiv:1606.05250},
        pages = {arXiv:1606.05250},
archivePrefix = {arXiv},
       eprint = {1606.05250},
}
```
### Contributions

Thanks to [Derek Thomas](https://huggingface.co/derek-thomas) and [Thomas Simonini](https://huggingface.co/ThomasSimonini) for adding this dataset to the Hub.

Check out: [How to contribute more](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)