---
language:
- en
license: apache-2.0
tags:
- t5-large
- text2text-generation
- conversational question rewriting
datasets:
- CANARD
metrics:
- BLEU
widget:
- text: 'Rewrite the question according to the given context to make the dialog fluent
using anaphora and ellipsis.
question: What else happened during 1977-1981 other than Superstar Billy Graham''s
return?
context: Superstar Billy Graham
Return to WWWF (1977-1981)
Why did he return to the WWWF?
an agreement with promoter Vincent J. McMahon (Senior
What was his agreement with McMahon?
I don''t know.
How did people respond to his return?
I don''t know.'
- text: 'Rewrite the question according to the given context to make the dialog fluent
using anaphora and ellipsis.
question: why did Billy Graham personally sued Zahorian and the WWF?
context: Superstar Billy Graham
Disputes with the McMahons
what disputes did he have?
Graham personally sued Zahorian and the WWF,'
inference:
parameters:
max_length: 100
base_model: t5-large
model-index:
- name: t5-large-coqr-canard
results:
- task:
type: text2text-generation
name: conversational question rewriting
dataset:
name: CANARD
type: CANARD
split: test
metrics:
- type: BLEU
value: 77.8
name: BLEU
---
# t5-large-coqr-canard
This model is a fine-tuned version of [t5-large](https://huggingface.co/t5-large) on the [CANARD](https://sites.google.com/view/qanta/projects/canard) dataset.
It achieves the following results on the test set:
- Loss: 0.3064
- Bleu: 77.1979
- Generation Length: 9.576
## Model description
The CANARD dataset rewrites questions in dialogs so that they become context-independent (understandable without the conversation history).
This model is trained in the opposite direction: it rewrites context-independent questions into conversational questions, aiming to produce fluent dialog through anaphora and ellipsis.
Input:
```
Rewrite the question according to the given context to make the dialog fluent using anaphora and ellipsis.
question: How did people respond to Superstar Billy Graham's return?
context: Superstar Billy Graham
Return to WWWF (1977-1981)
Why did he return to the WWWF?
an agreement with promoter Vincent J. McMahon (Senior
What was his agreement with McMahon?
I don't know.
```
Target:
```
How did people respond to his return?
```
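For illustration, a minimal inference sketch using the Transformers `text2text-generation` pipeline with the prompt format shown above; the hub namespace is omitted here, so replace the model id with the full repository path of this model.
```python
from transformers import pipeline

# Load the rewriter; replace with the full hub id (namespace/t5-large-coqr-canard).
rewriter = pipeline("text2text-generation", model="t5-large-coqr-canard")

prompt = (
    "Rewrite the question according to the given context to make the dialog fluent "
    "using anaphora and ellipsis.\n"
    "question: How did people respond to Superstar Billy Graham's return?\n"
    "context: Superstar Billy Graham\n"
    "Return to WWWF (1977-1981)\n"
    "Why did he return to the WWWF?\n"
    "an agreement with promoter Vincent J. McMahon (Senior\n"
    "What was his agreement with McMahon?\n"
    "I don't know."
)

# Generate the conversational rewrite, e.g. "How did people respond to his return?"
print(rewriter(prompt, max_length=100)[0]["generated_text"])
```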
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 512
- total_eval_batch_size: 512
- optimizer: Adafactor
- lr_scheduler_type: linear
- num_epochs: 1.0
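The original training script is not included in this card; as a rough sketch only, an equivalent `Seq2SeqTrainingArguments` configuration mirroring the hyperparameters above might look like this (output directory name is a placeholder):
```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="t5-large-coqr-canard",   # placeholder path
    learning_rate=1e-3,
    per_device_train_batch_size=64,      # 8 GPUs -> total train batch size 512
    per_device_eval_batch_size=64,       # 8 GPUs -> total eval batch size 512
    seed=42,
    optim="adafactor",
    lr_scheduler_type="linear",
    num_train_epochs=1.0,
    predict_with_generate=True,          # needed to compute BLEU during evaluation
)
```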
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 62 | 0.2987 | 77.2361 | 9.4534 |
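The card does not state which BLEU implementation produced these scores; the snippet below is only an assumed example of scoring generated rewrites with the `evaluate` library's sacrebleu metric.
```python
import evaluate

bleu = evaluate.load("sacrebleu")
predictions = ["How did people respond to his return?"]          # model outputs
references = [["How did people respond to his return?"]]         # one reference list per prediction
print(bleu.compute(predictions=predictions, references=references)["score"])
```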
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.6.1
- Tokenizers 0.12.1