commit files to HF hub
- README.md +211 -0
- eval/metric.first.answer.paragraph.questions_answers.lmqg_qg_zhquad.default.json +1 -0
- eval/metric.first.answer.paragraph_answer.question.lmqg_qg_zhquad.default.json +1 -0
- eval/metric.first.answer.paragraph_sentence.answer.lmqg_qg_zhquad.default.json +1 -0
- eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_zhquad.default.json +1 -0
- eval/samples.test.hyp.paragraph.questions_answers.lmqg_qg_zhquad.default.txt +0 -0
- eval/samples.test.hyp.paragraph_answer.question.lmqg_qg_zhquad.default.txt +0 -0
- eval/samples.test.hyp.paragraph_sentence.answer.lmqg_qg_zhquad.default.txt +0 -0
- eval/samples.validation.hyp.paragraph.questions_answers.lmqg_qg_zhquad.default.txt +0 -0
- eval/samples.validation.hyp.paragraph_answer.question.lmqg_qg_zhquad.default.txt +0 -0
- eval/samples.validation.hyp.paragraph_sentence.answer.lmqg_qg_zhquad.default.txt +0 -0
README.md
ADDED
@@ -0,0 +1,211 @@
---
license: cc-by-4.0
metrics:
- bleu4
- meteor
- rouge-l
- bertscore
- moverscore
language: zh
datasets:
- lmqg/qg_zhquad
pipeline_tag: text2text-generation
tags:
- question generation
- answer extraction
widget:
- text: "generate question: 南安普敦的警察服务由汉普郡警察提供。南安普敦行动的主要基地是一座新的八层专用建筑,造价3000万英镑。该建筑位于南路,2011年启用,靠近<hl> 南安普敦中央 <hl>火车站。此前,南安普顿市中心的行动位于市民中心西翼,但由于设施老化,加上计划在旧警察局和地方法院建造一座新博物馆,因此必须搬迁。在Portswood、Banister Park、Hille和Shirley还有其他警察局,在南安普顿中央火车站还有一个英国交通警察局。"
  example_title: "Question Generation Example 1"
- text: "generate question: 芝加哥大学的<hl> 1960—61 <hl>集团理论年汇集了Daniel Gorenstein、John G. Thompson和Walter Feit等团体理论家,奠定了一个合作的基础,借助于其他众多数学家的输入,1982中对所有有限的简单群进行了分类。这个项目的规模超过了以往的数学研究,无论是证明的长度还是研究人员的数量。目前正在进行研究,以简化这一分类的证明。如今,群论仍然是一个非常活跃的数学分支,影响着许多其他领域"
  example_title: "Question Generation Example 2"
- text: "extract answers: 南安普敦的警察服务由汉普郡警察提供。 南安普敦行动的主要基地是一座新的八层专用建筑,造价3000万英镑。 <hl> 该建筑位于南路,2011年启用,靠近 南安普敦中央 火车站。 <hl> 此前,南安普顿市中心的行动位于市民中心西翼,但由于设施老化,加上计划在旧警察局和地方法院建造一座新博物馆,因此必须搬迁。 在Portswood、Banister Park、Hille和Shirley还有其他警察局,在南安普顿中央火车站还有一个英国交通警察局。"
  example_title: "Answer Extraction Example 1"
model-index:
- name: lmqg/mt5-small-zhquad-qg-ae
  results:
  - task:
      name: Text2text Generation
      type: text2text-generation
    dataset:
      name: lmqg/qg_zhquad
      type: default
      args: default
    metrics:
    - name: BLEU4 (Question Generation)
      type: bleu4_question_generation
      value: 13.98
    - name: ROUGE-L (Question Generation)
      type: rouge_l_question_generation
      value: 33.17
    - name: METEOR (Question Generation)
      type: meteor_question_generation
      value: 22.88
    - name: BERTScore (Question Generation)
      type: bertscore_question_generation
      value: 76.64
    - name: MoverScore (Question Generation)
      type: moverscore_question_generation
      value: 57.03
    - name: QAAlignedF1Score-BERTScore (Question & Answer Generation (with Gold Answer))
      type: qa_aligned_f1_score_bertscore_question_answer_generation_with_gold_answer
      value: 78.55
    - name: QAAlignedRecall-BERTScore (Question & Answer Generation (with Gold Answer))
      type: qa_aligned_recall_bertscore_question_answer_generation_with_gold_answer
      value: 82.09
    - name: QAAlignedPrecision-BERTScore (Question & Answer Generation (with Gold Answer))
      type: qa_aligned_precision_bertscore_question_answer_generation_with_gold_answer
      value: 75.41
    - name: QAAlignedF1Score-MoverScore (Question & Answer Generation (with Gold Answer))
      type: qa_aligned_f1_score_moverscore_question_answer_generation_with_gold_answer
      value: 53.47
    - name: QAAlignedRecall-MoverScore (Question & Answer Generation (with Gold Answer))
      type: qa_aligned_recall_moverscore_question_answer_generation_with_gold_answer
      value: 55.73
    - name: QAAlignedPrecision-MoverScore (Question & Answer Generation (with Gold Answer))
      type: qa_aligned_precision_moverscore_question_answer_generation_with_gold_answer
      value: 51.5
    - name: BLEU4 (Answer Extraction)
      type: bleu4_answer_extraction
      value: 81.9
    - name: ROUGE-L (Answer Extraction)
      type: rouge_l_answer_extraction
      value: 95.05
    - name: METEOR (Answer Extraction)
      type: meteor_answer_extraction
      value: 69.99
    - name: BERTScore (Answer Extraction)
      type: bertscore_answer_extraction
      value: 99.69
    - name: MoverScore (Answer Extraction)
      type: moverscore_answer_extraction
      value: 98.34
    - name: AnswerF1Score (Answer Extraction)
      type: answer_f1_score__answer_extraction
      value: 93.58
    - name: AnswerExactMatch (Answer Extraction)
      type: answer_exact_match_answer_extraction
      value: 93.5
---

# Model Card of `lmqg/mt5-small-zhquad-qg-ae`
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small), trained jointly for question generation and answer extraction on the [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) dataset (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).

### Overview
- **Language model:** [google/mt5-small](https://huggingface.co/google/mt5-small)
- **Language:** zh
- **Training data:** [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)

### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG

# initialize model
model = TransformersQG(language="zh", model="lmqg/mt5-small-zhquad-qg-ae")

# model prediction
question_answer_pairs = model.generate_qa("南安普敦的警察服务由汉普郡警察提供。南安普敦行动的主要基地是一座新的八层专用建筑,造价3000万英镑。该建筑位于南路,2011年启用,靠近南安普敦中央火车站。此前,南安普顿市中心的行动位于市民中心西翼,但由于设施老化,加上计划在旧警察局和地方法院建造一座新博物馆,因此必须搬迁。在Portswood、Banister Park、Hille和Shirley还有其他警察局,在南安普顿中央火车站还有一个英国交通警察局。")
```
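
`generate_qa` runs answer extraction and question generation in a single pass over the paragraph. A minimal sketch of consuming its output, assuming it returns a list of (question, answer) pairs (the usual `lmqg` return format):

```python
# question_answer_pairs comes from the call above; each element is
# assumed to be a (question, answer) tuple.
for question, answer in question_answer_pairs:
    print(f"Q: {question}")
    print(f"A: {answer}")
```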

- With `transformers`
```python
from transformers import pipeline

pipe = pipeline("text2text-generation", "lmqg/mt5-small-zhquad-qg-ae")

# question generation (the answer span is highlighted with <hl>)
question = pipe("generate question: 南安普敦的警察服务由汉普郡警察提供。南安普敦行动的主要基地是一座新的八层专用建筑,造价3000万英镑。该建筑位于南路,2011年启用,靠近<hl> 南安普敦中央 <hl>火车站。此前,南安普顿市中心的行动位于市民中心西翼,但由于设施老化,加上计划在旧警察局和地方法院建造一座新博物馆,因此必须搬迁。在Portswood、Banister Park、Hille和Shirley还有其他警察局,在南安普顿中央火车站还有一个英国交通警察局。")

# answer extraction (the target sentence is highlighted with <hl>)
answer = pipe("extract answers: 南安普敦的警察服务由汉普郡警察提供。 南安普敦行动的主要基地是一座新的八层专用建筑,造价3000万英镑。 <hl> 该建筑位于南路,2011年启用,靠近 南安普敦中央 火车站。 <hl> 此前,南安普顿市中心的行动位于市民中心西翼,但由于设施老化,加上计划在旧警察局和地方法院建造一座新博物馆,因此必须搬迁。 在Portswood、Banister Park、Hille和Shirley还有其他警察局,在南安普顿中央火车站还有一个英国交通警察局。")
```
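
Both prompt formats use `<hl>` tokens to mark the span of interest: the answer span for `generate question:` and the target sentence for `extract answers:`. A minimal sketch of building such prompts programmatically (the `highlight` helper is hypothetical, not part of the model card):

```python
def highlight(passage: str, span: str) -> str:
    """Wrap the first occurrence of `span` in the <hl> tokens the model expects."""
    assert span in passage, "span must occur verbatim in the passage"
    return passage.replace(span, f"<hl> {span} <hl>", 1)

paragraph = "该建筑位于南路,2011年启用,靠近南安普敦中央火车站。"
prompt = "generate question: " + highlight(paragraph, "南安普敦中央")
```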

## Evaluation


- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/mt5-small-zhquad-qg-ae/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_zhquad.default.json)

|            |   Score | Type    | Dataset                                                          |
|:-----------|--------:|:--------|:-----------------------------------------------------------------|
| BERTScore  |   76.64 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| Bleu_1     |   35.24 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| Bleu_2     |   24.56 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| Bleu_3     |   18.21 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| Bleu_4     |   13.98 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| METEOR     |   22.88 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| MoverScore |   57.03 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| ROUGE_L    |   33.17 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
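
Scores such as BLEU are stored in the raw metric files as 0–1 fractions (the tables here report them ×100). A minimal sketch of fetching one of those files programmatically, assuming the `huggingface_hub` client:

```python
import json

from huggingface_hub import hf_hub_download  # pip install huggingface_hub

# Download the question-generation metric file linked above from the model repo.
path = hf_hub_download(
    repo_id="lmqg/mt5-small-zhquad-qg-ae",
    filename="eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_zhquad.default.json",
)
with open(path) as f:
    scores = json.load(f)
print(scores["test"]["Bleu_4"])  # ~0.1398, i.e. the 13.98 reported above
```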

- ***Metric (Question & Answer Generation)***: [raw metric file](https://huggingface.co/lmqg/mt5-small-zhquad-qg-ae/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qg_zhquad.default.json)

|                                 |   Score | Type    | Dataset                                                          |
|:--------------------------------|--------:|:--------|:-----------------------------------------------------------------|
| QAAlignedF1Score (BERTScore)    |   78.55 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| QAAlignedF1Score (MoverScore)   |   53.47 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| QAAlignedPrecision (BERTScore)  |   75.41 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| QAAlignedPrecision (MoverScore) |   51.5  | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| QAAlignedRecall (BERTScore)     |   82.09 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| QAAlignedRecall (MoverScore)    |   55.73 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |

- ***Metric (Answer Extraction)***: [raw metric file](https://huggingface.co/lmqg/mt5-small-zhquad-qg-ae/raw/main/eval/metric.first.answer.paragraph_sentence.answer.lmqg_qg_zhquad.default.json)

|                  |   Score | Type    | Dataset                                                          |
|:-----------------|--------:|:--------|:-----------------------------------------------------------------|
| AnswerExactMatch |   93.5  | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| AnswerF1Score    |   93.58 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| BERTScore        |   99.69 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| Bleu_1           |   92    | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| Bleu_2           |   88.87 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| Bleu_3           |   85.52 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| Bleu_4           |   81.9  | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| METEOR           |   69.99 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| MoverScore       |   98.34 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
| ROUGE_L          |   95.05 | default | [lmqg/qg_zhquad](https://huggingface.co/datasets/lmqg/qg_zhquad) |
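
AnswerF1Score and AnswerExactMatch compare the extracted answers against the gold answers as whole strings rather than n-grams. A minimal sketch of how an exact-match rate of this kind is typically computed (illustrative only, not the `lmqg` implementation):

```python
def exact_match(predictions: list[str], references: list[str]) -> float:
    """Percentage of predictions that match their reference exactly after stripping."""
    assert len(predictions) == len(references)
    hits = sum(p.strip() == r.strip() for p, r in zip(predictions, references))
    return 100.0 * hits / len(references)

print(exact_match(["南安普敦中央"], ["南安普敦中央"]))  # 100.0
```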

## Training hyperparameters

The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_zhquad
- dataset_name: default
- input_types: ['paragraph_answer', 'paragraph_sentence']
- output_types: ['question', 'answer']
- prefix_types: ['qg', 'ae']
- model: google/mt5-small
- max_length: 512
- max_length_output: 32
- epoch: 13
- batch: 16
- lr: 0.0005
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 4
- label_smoothing: 0.15

(A batch of 16 with 4 gradient accumulation steps gives an effective batch size of 64.)

The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mt5-small-zhquad-qg-ae/raw/main/trainer_config.json).
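
A minimal sketch of loading that configuration programmatically, assuming its keys mirror the hyperparameter names listed above:

```python
import json

from huggingface_hub import hf_hub_download

# Fetch trainer_config.json from the model repo and inspect a few fields
# (key names are an assumption based on the list above).
cfg_path = hf_hub_download(
    repo_id="lmqg/mt5-small-zhquad-qg-ae",
    filename="trainer_config.json",
)
with open(cfg_path) as f:
    config = json.load(f)
print(config.get("lr"), config.get("batch"), config.get("epoch"))
```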

## Citation
```
@inproceedings{ushio-etal-2022-generative,
    title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
    author = "Ushio, Asahi  and
      Alva-Manchego, Fernando  and
      Camacho-Collados, Jose",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, U.A.E.",
    publisher = "Association for Computational Linguistics",
}
```
eval/metric.first.answer.paragraph.questions_answers.lmqg_qg_zhquad.default.json
ADDED
@@ -0,0 +1 @@
{"test": {"QAAlignedF1Score (BERTScore)": 0.7855418210382505, "QAAlignedRecall (BERTScore)": 0.8208603804685184, "QAAlignedPrecision (BERTScore)": 0.7540897454058011, "QAAlignedF1Score (MoverScore)": 0.5346857464613931, "QAAlignedRecall (MoverScore)": 0.5573226932786458, "QAAlignedPrecision (MoverScore)": 0.5149503046336772, "Bleu_1": 0.0041614176413362495, "Bleu_2": 0.0002733861561829292, "Bleu_3": 2.340360147027646e-09, "Bleu_4": 6.888798702454551e-12, "METEOR": 0.18429060924552587, "ROUGE_L": 0.00845788063698488, "BERTScore": 0.6414252227800591, "MoverScore": 0.5141970065432502}, "validation": {"QAAlignedF1Score (BERTScore)": 0.7789481114021237, "QAAlignedRecall (BERTScore)": 0.7934471714686923, "QAAlignedPrecision (BERTScore)": 0.7658980201047756, "QAAlignedF1Score (MoverScore)": 0.5278587637995821, "QAAlignedRecall (MoverScore)": 0.5360172284317156, "QAAlignedPrecision (MoverScore)": 0.5205962308310392, "Bleu_1": 0.02065945599546766, "Bleu_2": 0.002552676017681431, "Bleu_3": 1.901519885362209e-08, "Bleu_4": 5.2356373782118696e-11, "METEOR": 0.22093221593656595, "ROUGE_L": 0.03587313290341381, "BERTScore": 0.7149331729339831, "MoverScore": 0.5313016230989522}}
eval/metric.first.answer.paragraph_answer.question.lmqg_qg_zhquad.default.json
ADDED
@@ -0,0 +1 @@
{"validation": {"Bleu_1": 0.3124483107517401, "Bleu_2": 0.2022414212638196, "Bleu_3": 0.14075126599752, "Bleu_4": 0.10182666104520205}, "test": {"Bleu_1": 0.3501257943333252, "Bleu_2": 0.24440639411446236, "Bleu_3": 0.18147864275298717, "Bleu_4": 0.13947640671540798}}
eval/metric.first.answer.paragraph_sentence.answer.lmqg_qg_zhquad.default.json
ADDED
@@ -0,0 +1 @@
{"validation": {"Bleu_1": 0.8984976332578349, "Bleu_2": 0.863700476581162, "Bleu_3": 0.8282907154192872, "Bleu_4": 0.7920031456423521, "METEOR": 0.6806077153375327, "ROUGE_L": 0.9354048878527517, "BERTScore": 0.9926965533420552, "MoverScore": 0.9717035876046614, "AnswerF1Score": 91.1079696570226, "AnswerExactMatch": 90.95434677027683}, "test": {"Bleu_1": 0.919962455736127, "Bleu_2": 0.8887173195119843, "Bleu_3": 0.8551528572574789, "Bleu_4": 0.8190150316147392, "METEOR": 0.6998790171093476, "ROUGE_L": 0.9505111691582052, "BERTScore": 0.9968959816223559, "MoverScore": 0.9834331628286197, "AnswerF1Score": 93.57801984319713, "AnswerExactMatch": 93.50412821758135}}
eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_zhquad.default.json
ADDED
@@ -0,0 +1 @@
{"validation": {"Bleu_1": 0.33373569071875603, "Bleu_2": 0.2185563962082155, "Bleu_3": 0.1535138754584376, "Bleu_4": 0.11187321831967545, "METEOR": 0.21087463407744778, "ROUGE_L": 0.30035405539883947, "BERTScore": 0.7462091987138463, "MoverScore": 0.5583120385347042}, "test": {"Bleu_1": 0.35243536991633617, "Bleu_2": 0.2455785473109513, "Bleu_3": 0.1820979479554548, "Bleu_4": 0.13977943479979163, "METEOR": 0.2288054594770842, "ROUGE_L": 0.33168122312199494, "BERTScore": 0.7664240089175756, "MoverScore": 0.5703214083124712}}
eval/samples.test.hyp.paragraph.questions_answers.lmqg_qg_zhquad.default.txt
ADDED
The diff for this file is too large to render.

eval/samples.test.hyp.paragraph_answer.question.lmqg_qg_zhquad.default.txt
ADDED
The diff for this file is too large to render.

eval/samples.test.hyp.paragraph_sentence.answer.lmqg_qg_zhquad.default.txt
ADDED
The diff for this file is too large to render.

eval/samples.validation.hyp.paragraph.questions_answers.lmqg_qg_zhquad.default.txt
ADDED
The diff for this file is too large to render.

eval/samples.validation.hyp.paragraph_answer.question.lmqg_qg_zhquad.default.txt
ADDED
The diff for this file is too large to render.

eval/samples.validation.hyp.paragraph_sentence.answer.lmqg_qg_zhquad.default.txt
ADDED
The diff for this file is too large to render.