Commit 817fd99 by asahi417 (1 parent: 9f1815b)
.gitattributes CHANGED
@@ -52,3 +52,6 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.jpg filter=lfs diff=lfs merge=lfs -text
  *.jpeg filter=lfs diff=lfs merge=lfs -text
  *.webp filter=lfs diff=lfs merge=lfs -text
+ data/processed/test.jsonl filter=lfs diff=lfs merge=lfs -text
+ data/processed/train.jsonl filter=lfs diff=lfs merge=lfs -text
+ data/processed/validation.jsonl filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,61 @@
+ ---
+ license: cc-by-sa-4.0
+ pretty_name: TweetQA for question generation
+ language: en
+ multilinguality: monolingual
+ size_categories: 1K<n<10K
+ source_datasets: tweet_qa
+ task_categories:
+ - text-generation
+ task_ids:
+ - language-modeling
+ tags:
+ - question-generation
+ ---
+
+ # Dataset Card for "lmqg/qag_tweetqa"
+
+
+ ## Dataset Description
+ - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
+ - **Paper:** [TBA](TBA)
+ - **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
+
+ ### Dataset Summary
+ This is a question & answer generation dataset based on [tweet_qa](https://huggingface.co/datasets/tweet_qa). Since the test set of the original data is not publicly released, we randomly sampled test questions from the training set.
+
+ ### Supported Tasks and Leaderboards
+ * `question-answer-generation`: The dataset is intended to be used to train a model for question & answer generation. Success on this task is typically measured by achieving a high BLEU4/METEOR/ROUGE-L/BERTScore/MoverScore (see our paper for more details).
+
+ ### Languages
+ English (en)
+
+ ## Dataset Structure
+ An example of 'train' looks as follows.
+ ```
+ {
+     "paragraph": "I would hope that Phylicia Rashad would apologize now that @missjillscott has! You cannot discount 30 victims who come with similar stories.— JDWhitner (@JDWhitner) July 7, 2015",
+     "questions": [ "what should phylicia rashad do now?", "how many victims have come forward?" ],
+     "answers": [ "apologize", "30" ],
+     "questions_answers": "Q: what should phylicia rashad do now?, A: apologize Q: how many victims have come forward?, A: 30"
+ }
+ ```
+ The data fields are the same among all splits.
+ - `questions`: a `list` of `string` features.
+ - `answers`: a `list` of `string` features.
+ - `paragraph`: a `string` feature.
+ - `questions_answers`: a `string` feature.
+
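+ A minimal sketch of loading the data with the `datasets` library (the `lmqg/qag_tweetqa` dataset id is taken from this card's title):
+ ```
+ from datasets import load_dataset
+
+ # Download and cache all three splits of lmqg/qag_tweetqa.
+ dataset = load_dataset("lmqg/qag_tweetqa")
+
+ # Inspect one training example; the fields are described above.
+ sample = dataset["train"][0]
+ print(sample["paragraph"])
+ print(sample["questions"], sample["answers"])
+ ```
+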
+ ## Data Splits
+
+ |train|validation|test|
+ |----:|---------:|---:|
+ | 4536|       583| 583|
+
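+ A quick sanity check of the split sizes above, reusing `dataset` from the loading sketch earlier in this card:
+ ```
+ # Should print {'train': 4536, 'validation': 583, 'test': 583}.
+ print({split: dataset[split].num_rows for split in ["train", "validation", "test"]})
+ ```
+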
+ ## Citation Information
+
+ ```
+ TBA
+ ```
data/processed/test.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:73109803456855be24ad96b03d49154fcfec146c70caff671f30573a2b96a92e
+ size 609218
data/processed/train.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cf69758780984227312d576922a2a51fddc5c0d33fdb69783aee9566b2a7093b
+ size 4749947
data/processed/validation.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d86a4926a2d826492430aa74c43150d3b448e7ee7cd825ebff95c5e9babcb8e2
+ size 539321
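
The three data files above are Git LFS pointers: only the `version`/`oid`/`size` metadata is stored in Git, while the actual JSONL records live behind the `resolve/main` URLs used by the loader script below. As a minimal sketch (standard library only, no error handling), one record can be fetched and parsed directly:

```
import json
import urllib.request

# URL built from the _URL constant in qg_tweetqa.py below.
url = "https://huggingface.co/datasets/lmqg/qag_tweetqa/resolve/main/data/processed/train.jsonl"

# Stream the response and decode only the first JSONL record.
with urllib.request.urlopen(url) as response:
    record = json.loads(response.readline().decode("utf-8"))
print(record["paragraph"])
```
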
qg_tweetqa.py ADDED
@@ -0,0 +1,70 @@
+ import json
+ import datasets
+
+ logger = datasets.logging.get_logger(__name__)
+ _VERSION = "2.0.1"
+ _NAME = "qag_tweetqa"
+ _CITATION = """
+ TBA
+ """
+ _DESCRIPTION = """Question & answer generation dataset based on [TweetQA](https://huggingface.co/datasets/tweet_qa)."""
+ _URL = "https://huggingface.co/datasets/lmqg/qag_tweetqa/resolve/main/data/processed"
+ _URLS = {
+     'train': f'{_URL}/train.jsonl',
+     'test': f'{_URL}/test.jsonl',
+     'validation': f'{_URL}/validation.jsonl'
+ }
+
+
+ class QAGTweetQAConfig(datasets.BuilderConfig):
+     """BuilderConfig for qag_tweetqa."""
+
+     def __init__(self, **kwargs):
+         """BuilderConfig.
+
+         Args:
+             **kwargs: keyword arguments forwarded to super.
+         """
+         super(QAGTweetQAConfig, self).__init__(**kwargs)
+
+
+ class QAGTweetQA(datasets.GeneratorBasedBuilder):
+
+     BUILDER_CONFIGS = [
+         QAGTweetQAConfig(name=_NAME, version=datasets.Version(_VERSION), description=_DESCRIPTION),
+     ]
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             citation=_CITATION,
+             features=datasets.Features(
+                 {
+                     "answers": datasets.Sequence(datasets.Value("string")),
+                     "questions": datasets.Sequence(datasets.Value("string")),
+                     "paragraph": datasets.Value("string"),
+                     "paragraph_id": datasets.Value("string"),
+                     "questions_answers": datasets.Value("string")
+                 }
+             ),
+             supervised_keys=None,
+             homepage="https://github.com/asahi417/lm-question-generation"
+         )
+
+     def _split_generators(self, dl_manager):
+         downloaded_file = dl_manager.download_and_extract(_URLS)
+         return [
+             datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": downloaded_file["train"]}),
+             datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": downloaded_file["validation"]}),
+             datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": downloaded_file["test"]}),
+         ]
+
+     def _generate_examples(self, filepath):
+         _key = 0
+         logger.info("generating examples from = %s", filepath)
+         with open(filepath, encoding="utf-8") as f:
+             # Each non-empty line of the JSONL file is one example.
+             _list = f.read().split('\n')
+             if _list[-1] == '':
+                 _list = _list[:-1]
+             for i in _list:
+                 data = json.loads(i)
+                 yield _key, data
+                 _key += 1
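
For a local smoke test, `load_dataset` can also be pointed at the builder script itself. A minimal sketch, assuming a `datasets` version that still executes loading scripts (recent releases restrict or have removed this):

```
from datasets import load_dataset

# Run from the repository root; this executes the QAGTweetQA builder above.
dataset = load_dataset("./qg_tweetqa.py")
print(dataset)
```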