Dataset schema (field: type, value/length range):

table_id_paper: stringlengths (15 to 15)
caption: stringlengths (14 to 1.88k)
row_header_level: int32 (1 to 9)
row_headers: large_stringlengths (15 to 1.75k)
column_header_level: int32 (1 to 6)
column_headers: large_stringlengths (7 to 1.01k)
contents: large_stringlengths (18 to 2.36k)
metrics_loc: stringclasses (2 values)
metrics_type: large_stringlengths (5 to 532)
target_entity: large_stringlengths (2 to 330)
table_html_clean: large_stringlengths (274 to 7.88k)
table_name: stringclasses (9 values)
table_id: stringclasses (9 values)
paper_id: stringlengths (8 to 8)
page_no: int32 (1 to 13)
dir: stringclasses (8 values)
description: large_stringlengths (103 to 3.8k)
class_sentence: stringlengths (3 to 120)
sentences: large_stringlengths (110 to 3.92k)
header_mention: stringlengths (12 to 1.8k)
valid: int32 (0 to 1)
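
A minimal sketch of loading and inspecting records with the fields above, assuming this dump corresponds to a Hugging Face `datasets` repository; the repository id is a placeholder, and the assumption that list-valued fields are serialized as Python-style strings (as they appear in this dump) is ours.

```python
# Sketch: load the dataset and inspect one record.
# The repository id below is a placeholder, not the actual dataset name.
import ast
from datasets import load_dataset

ds = load_dataset("user/acl-table-descriptions", split="train")  # hypothetical id
print(ds.features)  # table_id_paper, caption, row_headers, ..., valid

record = ds[0]
print(record["table_id_paper"], "|", record["caption"][:60])

# Fields such as column_headers look like Python-repr lists in this dump,
# so ast.literal_eval (not json.loads) is used here to parse them.
column_headers = ast.literal_eval(record["column_headers"])
print(column_headers[0], "| valid:", record["valid"])
```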
P19-1359table_5
Results reported in the embedding scores, BLEU, diversity, and the quality of emotional expression.
2
[['Models', 'Seq2Seq'], ['Models', 'EmoEmb'], ['Models', 'ECM'], ['Models', 'EmoDS-MLE'], ['Models', 'EmoDS-EV'], ['Models', 'EmoDS-BS'], ['Models', 'EmoDS']]
2
[['Embedding', 'Average'], ['Embedding', 'Greedy'], ['Embedding', 'Extreme'], ['BLEU Score', 'BLEU'], ['Diversity', 'distinct-1'], ['Diversity', 'distinct-2'], ['Emotional Expression', 'emotion-a'], ['Emotional Expression', 'emotion-w']]
[['0.523', '0.376', '0.35', '1.5', '0.0038', '0.012', '0.335', '0.371'], ['0.524', '0.381', '0.355', '1.69', '0.0054', '0.0484', '0.72', '0.512'], ['0.624', '0.434', '0.409', '1.68', '0.009', '0.0735', '0.765', '0.58'], ['0.548', '0.367', '0.374', '1.6', '0.0053', '0.067', '0.721', '0.556'], ['0.571', '0.39', '0.384', '1.64', '0.0053', '0.0659', '0.746', '0.47'], ['0.614', '0.442', '0.409', '1.73', '0.0051', '0.0467', '0.773', '0.658'], ['0.634', '0.451', '0.435', '1.73', '0.0113', '0.0867', '0.81', '0.687']]
column
['Average', 'Greedy', 'Extreme', 'BLEU', 'distinct-1', 'distinct-2', 'emotion-a', 'emotion-w']
['EmoDS-MLE', 'EmoDS-EV', 'EmoDS-BS']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Embedding || Average</th> <th>Embedding || Greedy</th> <th>Embedding || Extreme</th> <th>BLEU Score || BLEU</th> <th>Diversity || distinct-1</th> <th>Diversity || distinct-2</th> <th>Emotional Expression || emotion-a</th> <th>Emotional Expression || emotion-w</th> </tr> </thead> <tbody> <tr> <td>Models || Seq2Seq</td> <td>0.523</td> <td>0.376</td> <td>0.35</td> <td>1.5</td> <td>0.0038</td> <td>0.012</td> <td>0.335</td> <td>0.371</td> </tr> <tr> <td>Models || EmoEmb</td> <td>0.524</td> <td>0.381</td> <td>0.355</td> <td>1.69</td> <td>0.0054</td> <td>0.0484</td> <td>0.72</td> <td>0.512</td> </tr> <tr> <td>Models || ECM</td> <td>0.624</td> <td>0.434</td> <td>0.409</td> <td>1.68</td> <td>0.009</td> <td>0.0735</td> <td>0.765</td> <td>0.58</td> </tr> <tr> <td>Models || EmoDS-MLE</td> <td>0.548</td> <td>0.367</td> <td>0.374</td> <td>1.6</td> <td>0.0053</td> <td>0.067</td> <td>0.721</td> <td>0.556</td> </tr> <tr> <td>Models || EmoDS-EV</td> <td>0.571</td> <td>0.39</td> <td>0.384</td> <td>1.64</td> <td>0.0053</td> <td>0.0659</td> <td>0.746</td> <td>0.47</td> </tr> <tr> <td>Models || EmoDS-BS</td> <td>0.614</td> <td>0.442</td> <td>0.409</td> <td>1.73</td> <td>0.0051</td> <td>0.0467</td> <td>0.773</td> <td>0.658</td> </tr> <tr> <td>Models || EmoDS</td> <td>0.634</td> <td>0.451</td> <td>0.435</td> <td>1.73</td> <td>0.0113</td> <td>0.0867</td> <td>0.81</td> <td>0.687</td> </tr> </tbody></table>
Table 5
table_5
P19-1359
6
acl2019
The bottom half of Table 5 shows the results of ablation tests. As we can see, after removing the emotion classification term (EmoDS-MLE), the performance decreased most significantly. Our interpretation is that without the emotion classification term, the model can only express the desired emotion explicitly in the generated responses and can not capture the emotional sequences not containing any emotional word. Applying an external emotion lexicon (EmoDS-EV) also brought performance decline, especially on emotion-w. This makes sense because an external emotion lexicon shares fewer words with the corpus, causing the generation process to focus on generic vocabulary and more commonplace responses to be generated. Additionally, the distinct-1/distinct-2 decreased most when using the original beam search (EmoDS-BS), indicating that the diverse decoding can promote diversity in response generation.
[1, 1, 2, 1, 2, 1]
['The bottom half of Table 5 shows the results of ablation tests.', 'As we can see, after removing the emotion classification term (EmoDS-MLE), the performance decreased most significantly.', 'Our interpretation is that without the emotion classification term, the model can only express the desired emotion explicitly in the generated responses and can not capture the emotional sequences not containing any emotional word.', 'Applying an external emotion lexicon (EmoDS-EV) also brought performance decline, especially on emotion-w.', 'This makes sense because an external emotion lexicon shares fewer words with the corpus, causing the generation process to focus on generic vocabulary and more commonplace responses to be generated.', 'Additionally, the distinct-1/distinct-2 decreased most when using the original beam search (EmoDS-BS), indicating that the diverse decoding can promote diversity in response generation.']
[None, ['EmoDS-MLE'], None, ['EmoDS-EV', 'emotion-w'], None, ['EmoDS-BS', 'distinct-1', 'distinct-2']]
1
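
The table_html_clean field in the record above flattens hierarchical column headers with a "Parent || Child" convention (e.g. "Embedding || Average"). Below is a minimal sketch of rebuilding a two-level pandas MultiIndex from that convention; it assumes pandas with an HTML parser (e.g. lxml) installed, and the helper name is ours.

```python
# Sketch: parse table_html_clean and split "Parent || Child" headers
# into a two-level pandas MultiIndex.
from io import StringIO
import pandas as pd

def parse_table_html(html: str) -> pd.DataFrame:
    df = pd.read_html(StringIO(html), index_col=0)[0]
    if all("||" in str(col) for col in df.columns):
        df.columns = pd.MultiIndex.from_tuples(
            [tuple(part.strip() for part in str(col).split("||")) for col in df.columns]
        )
    return df

# Usage on a record's HTML string (e.g. the table above):
# df = parse_table_html(record["table_html_clean"])
# df[("Emotional Expression", "emotion-w")]  # select one leaf column
```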
P19-1359table_6
The results of human evaluation. Cont. and Emot. denote content and emotion, respectively.
2
[['Models', 'Seq2Seq'], ['Models', 'EmoEmb'], ['Models', 'ECM'], ['Models', 'EmoDS']]
2
[['Joy', 'Cont.'], ['Joy', 'Emot.'], ['Contentment', 'Cont.'], ['Contentment', 'Emot.'], ['Disgust', 'Cont.'], ['Disgust', 'Emot.'], ['Anger', 'Cont.'], ['Anger', 'Emot.'], ['Sadness', 'Cont.'], ['Sadness', 'Emot.'], ['Overall', 'Cont.'], ['Overall', 'Emot.']]
[['1.35', '0.455', '1.445', '0.325', '1.18', '0.095', '1.15', '0.115', '1.09', '0.1', '1.243', '0.216'], ['1.285', '0.655', '1.32', '0.565', '1.015', '0.225', '1.16', '0.4', '0.995', '0.19', '1.155', '0.407'], ['1.395', '0.69', '1.4', '0.615', '1.13', '0.425', '1.19', '0.33', '1.195', '0.335', '1.262', '0.479'], ['1.265', '0.695', '1.26', '0.685', '1.37', '0.53', '1.185', '0.505', '1.265', '0.625', '1.269', '0.608']]
column
['Joy', 'Joy', 'Contentment', 'Contentment', 'Disgust', 'Disgust', 'Anger', 'Anger', 'Sadness', 'Sadness', 'Overall', 'Overall']
['EmoDS']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Joy || Cont.</th> <th>Joy || Emot.</th> <th>Contentment || Cont.</th> <th>Contentment || Emot.</th> <th>Disgust || Cont.</th> <th>Disgust || Emot.</th> <th>Anger || Cont.</th> <th>Anger || Emot.</th> <th>Sadness || Cont.</th> <th>Sadness || Emot.</th> <th>Overall || Cont.</th> <th>Overall || Emot.</th> </tr> </thead> <tbody> <tr> <td>Models || Seq2Seq</td> <td>1.35</td> <td>0.455</td> <td>1.445</td> <td>0.325</td> <td>1.18</td> <td>0.095</td> <td>1.15</td> <td>0.115</td> <td>1.09</td> <td>0.1</td> <td>1.243</td> <td>0.216</td> </tr> <tr> <td>Models || EmoEmb</td> <td>1.285</td> <td>0.655</td> <td>1.32</td> <td>0.565</td> <td>1.015</td> <td>0.225</td> <td>1.16</td> <td>0.4</td> <td>0.995</td> <td>0.19</td> <td>1.155</td> <td>0.407</td> </tr> <tr> <td>Models || ECM</td> <td>1.395</td> <td>0.69</td> <td>1.4</td> <td>0.615</td> <td>1.13</td> <td>0.425</td> <td>1.19</td> <td>0.33</td> <td>1.195</td> <td>0.335</td> <td>1.262</td> <td>0.479</td> </tr> <tr> <td>Models || EmoDS</td> <td>1.265</td> <td>0.695</td> <td>1.26</td> <td>0.685</td> <td>1.37</td> <td>0.53</td> <td>1.185</td> <td>0.505</td> <td>1.265</td> <td>0.625</td> <td>1.269</td> <td>0.608</td> </tr> </tbody></table>
Table 6
table_6
P19-1359
7
acl2019
It is shown in Table 6 that EmoDS achieved the highest performance in most cases (Sign Test, with p-value < 0.05). Specifically, for content coherence, there was no obvious difference among most models, but for emotional expression, the EmoDS yielded a significant performance boost. As we can see from Table 6, EmoDS performed well on all categories with an overall emotion score of 0.608, while EmoEmb and ECM performed poorly on categories with less training data, e.g., disgust, anger and sadness. Note that all emotion scores of Seq2Seq were the lowest, indicating that Seq2Seq is bad at emotional expression when generating responses. To sum up, EmoDS can generate meaningful responses with better emotional expression, due to the fact that EmoDS is capable of expressing the desired emotion either explicitly or implicitly.
[1, 1, 1, 1, 1]
['It is shown in Table 6 that EmoDS achieved the highest performance in most cases (Sign Test, with p-value < 0.05).', 'Specifically, for content coherence, there was no obvious difference among most models, but for emotional expression, the EmoDS yielded a significant performance boost.', 'As we can see from Table 6, EmoDS performed well on all categories with an overall emotion score of 0.608, while EmoEmb and ECM performed poorly on categories with less training data, e.g., disgust, anger and sadness.', 'Note that all emotion scores of Seq2Seq were the lowest, indicating that Seq2Seq is bad at emotional expression when generating responses.', 'To sum up, EmoDS can generate meaningful responses with better emotional expression, due to the fact that EmoDS is capable of expressing the desired emotion either explicitly or implicitly.']
[['EmoDS'], ['EmoDS'], ['EmoDS'], ['Seq2Seq'], ['EmoDS']]
1
P19-1367table_2
Performance with and without multi-level vocabularies, where “SV” represents a single vocabulary (from raw words), and “MVs” means multi-level vocabularies obtained from hierarchical clustering. “enc” and “dec” denote encoder and decoder, respectively, and the numbers after them represent how many passes. For example, “enc1-dec3” means an encoder along with three passes of decoders.
2
[['Models', 'enc3-dec1 (SV)'], ['Models', 'enc3-dec1 (MVs)'], ['Models', 'enc1-dec3 (SV)'], ['Models', 'enc1-dec3 (MVs)'], ['Models', 'enc3-dec3 (SV)'], ['Models', 'enc3-dec3 (MVs)']]
2
[['Twitter', 'BLEU'], ['Twitter', 'ROUGE'], ['Weibo', 'BLEU'], ['Weibo', 'ROUGE']]
[['6.27', '6.29', '6.61', '7.08'], ['7.16', '8.01', '9.15', '10.63'], ['7.43', '7.54', '9.92', '10.24'], ['6.75', '7.78', '12.01', '10.86'], ['7.44', '7.56', '9.95', '9.7'], ['8.58', '7.88', '12.51', '11.76']]
column
['BLEU', 'ROUGE', 'BLEU', 'ROUGE']
['enc3-dec1 (MVs)', 'enc1-dec3 (MVs)', 'enc3-dec3 (MVs)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Twitter || BLEU</th> <th>Twitter || ROUGE</th> <th>Weibo || BLEU</th> <th>Weibo || ROUGE</th> </tr> </thead> <tbody> <tr> <td>Models || enc3-dec1 (SV)</td> <td>6.27</td> <td>6.29</td> <td>6.61</td> <td>7.08</td> </tr> <tr> <td>Models || enc3-dec1 (MVs)</td> <td>7.16</td> <td>8.01</td> <td>9.15</td> <td>10.63</td> </tr> <tr> <td>Models || enc1-dec3 (SV)</td> <td>7.43</td> <td>7.54</td> <td>9.92</td> <td>10.24</td> </tr> <tr> <td>Models || enc1-dec3 (MVs)</td> <td>6.75</td> <td>7.78</td> <td>12.01</td> <td>10.86</td> </tr> <tr> <td>Models || enc3-dec3 (SV)</td> <td>7.44</td> <td>7.56</td> <td>9.95</td> <td>9.7</td> </tr> <tr> <td>Models || enc3-dec3 (MVs)</td> <td>8.58</td> <td>7.88</td> <td>12.51</td> <td>11.76</td> </tr> </tbody></table>
Table 2
table_2
P19-1367
7
acl2019
Comparison Results. Table 2 reports performance with and without multi-level vocabularies. We can observe that incorporating multi-level vocabularies improves performance on almost all of the metrics. For example, enc3-dec3 (MVs) improves relative performance by up to 25.73% in BLEU score compared with enc3-dec3 (SV) on the Weibo dataset. Only on the Twitter dataset is enc1-dec3 (MVs) slightly worse than “enc1-dec3 (SV)” in the BLEU score.
[2, 1, 1, 1, 1]
['Comparison Results.', 'Table 2 reports performance with and without multi-level vocabularies.', 'We can observe that incorporating multi-level vocabularies improves performance on almost all of the metrics.', 'For example, enc3-dec3 (MVs) improves relative performance by up to 25.73% in BLEU score compared with enc3-dec3 (SV) on the Weibo dataset.', 'Only on the Twitter dataset is enc1-dec3 (MVs) slightly worse than “enc1-dec3 (SV)” in the BLEU score.']
[None, ['enc3-dec1 (MVs)', 'enc1-dec3 (MVs)', 'enc3-dec3 (MVs)'], ['enc3-dec1 (MVs)', 'enc1-dec3 (MVs)', 'enc3-dec3 (MVs)', 'ROUGE', 'BLEU'], ['enc3-dec3 (MVs)', 'enc3-dec3 (SV)', 'Weibo', 'BLEU'], ['enc1-dec3 (MVs)', 'enc1-dec3 (SV)', 'Twitter', 'BLEU']]
1
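
The 25.73% relative gain quoted in the description above can be reproduced from the Weibo BLEU cells of the table (enc3-dec3 with and without multi-level vocabularies); a small sanity check, with the numbers taken directly from the contents field:

```python
# Sketch: verify the quoted relative BLEU improvement on Weibo.
mvs_bleu, sv_bleu = 12.51, 9.95  # enc3-dec3 (MVs) vs enc3-dec3 (SV), Weibo BLEU
relative_gain = (mvs_bleu - sv_bleu) / sv_bleu * 100
print(f"{relative_gain:.2f}%")  # -> 25.73%
```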
P19-1368table_2
On-device Results and Comparison on Multiple Datasets and Languages
2
[['Model', 'SGNN++ (our on-device)'], ['Model', 'SGNN(Ravi and Kozareva, 2018)(sota on-device)'], ['Model', 'RNN(Khanpour et al., 2016)'], ['Model', 'RNN+Attention(Ortega and Vu, 2017)'], ['Model', 'CNN(Lee and Dernoncourt, 2016)'], ['Model', 'GatedAtten.(Goo et al., 2018)'], ['Model', 'JointBiLSTM(Hakkani-Tur et al., 2016)'], ['Model', 'Atten.RNN(Liu and Lane, 2016)'], ['Model', 'ADAPT-Run1(Dzendzik et al., 2017)'], ['Model', 'Bingo-logistic-reg(Elfardy et al., 2017)'], ['Model', 'Baseline']]
1
[['MRDA'], ['SwDA'], ['ATIS'], ['CF-EN'], ['CF-JP'], ['CF-FR'], ['CF-SP']]
[['87.3', '88.43', '93.73', '65', '74.33', '70.93', '83.95'], ['86.7', '83.1', '-', '-', '-', '-', '-'], ['86.8', '80.1', '-', '-', '-', '-', '-'], ['84.3', '73.9', '-', '-', '-', '-', '-'], ['84.6', '73.1', '-', '-', '-', '-', '-'], ['-', '-', '93.6', '-', '-', '-', '-'], ['-', '-', '92.6', '-', '-', '-', ''], ['-', '-', '91.1', '-', '-', '-', '-'], ['-', '-', '-', '63.4', '67.67', '69.5', '83.61'], ['-', '-', '-', '55.8', '60.67', '59', '72.91'], ['74.6', '47.3', '72.22', '48.8', '56.67', '54.75', '77.26']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy']
None
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MRDA</th> <th>SwDA</th> <th>ATIS</th> <th>CF-EN</th> <th>CF-JP</th> <th>CF-FR</th> <th>CF-SP</th> </tr> </thead> <tbody> <tr> <td>Model || SGNN++ (our on-device)</td> <td>87.3</td> <td>88.43</td> <td>93.73</td> <td>65</td> <td>74.33</td> <td>70.93</td> <td>83.95</td> </tr> <tr> <td>Model || SGNN(Ravi and Kozareva, 2018)(sota on-device)</td> <td>86.7</td> <td>83.1</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>Model || RNN(Khanpour et al., 2016)</td> <td>86.8</td> <td>80.1</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>Model || RNN+Attention(Ortega and Vu, 2017)</td> <td>84.3</td> <td>73.9</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>Model || CNN(Lee and Dernoncourt, 2016)</td> <td>84.6</td> <td>73.1</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>Model || GatedAtten.(Goo et al., 2018)</td> <td>-</td> <td>-</td> <td>93.6</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>Model || JointBiLSTM(Hakkani-Tur et al., 2016)</td> <td>-</td> <td>-</td> <td>92.6</td> <td>-</td> <td>-</td> <td>-</td> <td></td> </tr> <tr> <td>Model || Atten.RNN(Liu and Lane, 2016)</td> <td>-</td> <td>-</td> <td>91.1</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>Model || ADAPT-Run1(Dzendzik et al., 2017)</td> <td>-</td> <td>-</td> <td>-</td> <td>63.4</td> <td>67.67</td> <td>69.5</td> <td>83.61</td> </tr> <tr> <td>Model || Bingo-logistic-reg(Elfardy et al., 2017)</td> <td>-</td> <td>-</td> <td>-</td> <td>55.8</td> <td>60.67</td> <td>59</td> <td>72.91</td> </tr> <tr> <td>Model || Baseline</td> <td>74.6</td> <td>47.3</td> <td>72.22</td> <td>48.8</td> <td>56.67</td> <td>54.75</td> <td>77.26</td> </tr> </tbody></table>
Table 2
table_2
P19-1368
7
acl2019
Taking these major differences into consideration, we still compare results against prior non-on-device state-of-the-art neural networks. As shown in Table 2, only (Khanpour et al., 2016; Ortega and Vu, 2017; Lee and Dernoncourt, 2016) have evaluated on more than one task, while the rest of the methods target a specific one. We denote with “-” models that do not have results for the task. SGNN++ is the only approach spanning across multiple NLP tasks and languages. On the Dialog Act MRDA and SWDA tasks, SGNN++ outperformed deep learning methods like CNN (Lee and Dernoncourt, 2016), RNN (Khanpour et al., 2016) and RNN with gated attention (Tran et al., 2017) and reached the best results of 87.3% and 88.43% accuracy. For Intent Prediction, SGNN++ also improved by 0.13%, 1.13% and 2.63% over the gated attention (Goo et al., 2018), the joint slot and intent biLSTM model (Hakkani-Tur et al., 2016) and the attention slot and intent RNN (Liu and Lane, 2016) on the ATIS task. This is very significant, given that (Goo et al., 2018; Hakkani-Tur et al., 2016; Liu and Lane, 2016) used a joint model to learn the slot entities and types, and used this information to better guide the intent prediction, while SGNN++ does not have any additional information about slots, entities and entity types. On Customer Feedback, SGNN++ reached better performance than Logistic regression models (Elfardy et al., 2017; Dzendzik et al., 2017). Overall, SGNN++ achieves impressive results given the small memory footprint and the fact that it did not rely on pre-trained word embeddings like (Hakkani-Tur et al., 2016; Liu and Lane, 2016) and used the same architecture and model parameters across all tasks and languages. We believe that the dimensionality-reduction techniques like locality sensitive context projections jointly coupled with deep, non-linear functions are effective at dynamically capturing low dimensional semantic text representations that are useful for text classification applications.
[0, 1, 2, 1, 1, 2, 1, 1, 2]
['Taking these major differences into consideration, we still compare results against prior non-on-device state-of-the-art neural networks.', 'As shown in Table 2, only (Khanpour et al., 2016; Ortega and Vu, 2017; Lee and Dernoncourt, 2016) have evaluated on more than one task, while the rest of the methods target a specific one. We denote with “-” models that do not have results for the task.', 'SGNN++ is the only approach spanning across multiple NLP tasks and languages.', 'On the Dialog Act MRDA and SWDA tasks, SGNN++ outperformed deep learning methods like CNN (Lee and Dernoncourt, 2016), RNN (Khanpour et al., 2016) and RNN with gated attention (Tran et al., 2017) and reached the best results of 87.3% and 88.43% accuracy.', 'For Intent Prediction, SGNN++ also improved by 0.13%, 1.13% and 2.63% over the gated attention (Goo et al., 2018), the joint slot and intent biLSTM model (Hakkani-Tur et al., 2016) and the attention slot and intent RNN (Liu and Lane, 2016) on the ATIS task.', 'This is very significant, given that (Goo et al., 2018; Hakkani-Tur et al., 2016; Liu and Lane, 2016) used a joint model to learn the slot entities and types, and used this information to better guide the intent prediction, while SGNN++ does not have any additional information about slots, entities and entity types.', 'On Customer Feedback, SGNN++ reached better performance than Logistic regression models (Elfardy et al., 2017; Dzendzik et al., 2017).', 'Overall, SGNN++ achieves impressive results given the small memory footprint and the fact that it did not rely on pre-trained word embeddings like (Hakkani-Tur et al., 2016; Liu and Lane, 2016) and used the same architecture and model parameters across all tasks and languages.', 'We believe that the dimensionality-reduction techniques like locality sensitive context projections jointly coupled with deep, non-linear functions are effective at dynamically capturing low dimensional semantic text representations that are useful for text classification applications.']
[None, None, ['SGNN++ (our on-device)'], ['SGNN++ (our on-device)', 'CNN(Lee and Dernoncourt, 2016)', 'RNN(Khanpour et al., 2016)', 'JointBiLSTM(Hakkani-Tur et al., 2016)', 'Atten.RNN(Liu and Lane, 2016)', 'RNN+Attention(Ortega and Vu, 2017)', 'ADAPT-Run1(Dzendzik et al., 2017)', 'Bingo-logistic-reg(Elfardy et al., 2017)'], ['SGNN++ (our on-device)'], ['SGNN++ (our on-device)'], ['SGNN++ (our on-device)'], ['SGNN++ (our on-device)'], None]
1
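
The ATIS improvements quoted in the description above (0.13, 1.13 and 2.63 points) correspond to the absolute accuracy gaps between SGNN++ and the three prior models in the table; a small check using the table values:

```python
# Sketch: verify the quoted ATIS accuracy gains of SGNN++ over prior models.
sgnn_pp_atis = 93.73
for name, acc in [("GatedAtten.", 93.6), ("JointBiLSTM", 92.6), ("Atten.RNN", 91.1)]:
    print(name, round(sgnn_pp_atis - acc, 2))  # -> 0.13, 1.13, 2.63
```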
P19-1372table_1
Automatic evaluation results of different models, where the best results are in bold. The G, A and E of Embedding represent the Greedy, Average and Extreme embedding-based metrics, respectively.
2
[['Method', 'S2S'], ['Method', 'S2S+DB'], ['Method', 'MMS'], ['Method', 'CVAE'], ['Method', 'CVAE+BOW'], ['Method', 'WAE'], ['Method', 'Ours-First'], ['Method', 'Ours-Disc'], ['Method', 'Ours-MBOW'], ['Method', 'Ours'], ['Method', 'Ours+GMP']]
2
[['Multi-BLEU', 'BLEU-1'], ['Multi-BLEU', 'BLEU-2'], ['EMBEDDING', 'G'], ['EMBEDDING', 'A'], ['EMBEDDING', 'E'], ['Intra-Dist', 'Dist-1'], ['Intra-Dist', 'Dist-2'], ['Inter-Dist', 'Dist-1'], ['Inter-Dist', 'Dist-2']]
[['21.49', '9.498', '0.567', '0.677', '0.415', '0.311', '0.447', '0.027', '0.127'], ['20.2', '9.445', '0.561', '0.682', '0.422', '0.324', '0.457', '0.028', '0.13'], ['21.4', '9.398', '0.569', '0.691', '0.427', '0.561', '0.697', '0.033', '0.158'], ['22.71', '8.923', '0.601', '0.73', '0.452', '0.628', '0.801', '0.035', '0.179'], ['23.12', '8.42', '0.605', '0.741', '0.456', '0.687', '0.873', '0.038', '0.194'], ['24.02', '9.281', '0.611', '0.754', '0.46', '0.734', '0.885', '0.044', '0.196'], ['23.68', '9.24', '0.619', '0.762', '0.471', '0.725', '0.891', '0.045', '0.199'], ['24.22', '9.101', '0.617', '0.754', '0.465', '0.67', '0.863', '0.036', '0.184'], ['23.88', '9.582', '0.622', '0.778', '0.477', '0.681', '0.877', '0.04', '0.19'], ['24.04', '9.362', '0.625', '0.771', '0.48', '0.699', '0.876', '0.042', '0.19'], ['24.2', '9.417', '0.618', '0.769', '0.482', '0.728', '0.889', '0.044', '0.198']]
column
['BLEU-1', 'BLEU-2', 'G', 'A', 'E', 'Dist-1', 'Dist-2', 'Dist-1', 'Dist-2']
['Ours-First', 'Ours-Disc', 'Ours-MBOW', 'Ours', 'Ours+GMP']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Multi-BLEU || BLEU-1</th> <th>Multi-BLEU || BLEU-2</th> <th>EMBEDDING || G</th> <th>EMBEDDING || A</th> <th>EMBEDDING || E</th> <th>Intra-Dist || Dist-1</th> <th>Intra-Dist || Dist-2</th> <th>Inter-Dist || Dist-1</th> <th>Inter-Dist || Dist-2</th> </tr> </thead> <tbody> <tr> <td>Method || S2S</td> <td>21.49</td> <td>9.498</td> <td>0.567</td> <td>0.677</td> <td>0.415</td> <td>0.311</td> <td>0.447</td> <td>0.027</td> <td>0.127</td> </tr> <tr> <td>Method || S2S+DB</td> <td>20.2</td> <td>9.445</td> <td>0.561</td> <td>0.682</td> <td>0.422</td> <td>0.324</td> <td>0.457</td> <td>0.028</td> <td>0.13</td> </tr> <tr> <td>Method || MMS</td> <td>21.4</td> <td>9.398</td> <td>0.569</td> <td>0.691</td> <td>0.427</td> <td>0.561</td> <td>0.697</td> <td>0.033</td> <td>0.158</td> </tr> <tr> <td>Method || CVAE</td> <td>22.71</td> <td>8.923</td> <td>0.601</td> <td>0.73</td> <td>0.452</td> <td>0.628</td> <td>0.801</td> <td>0.035</td> <td>0.179</td> </tr> <tr> <td>Method || CVAE+BOW</td> <td>23.12</td> <td>8.42</td> <td>0.605</td> <td>0.741</td> <td>0.456</td> <td>0.687</td> <td>0.873</td> <td>0.038</td> <td>0.194</td> </tr> <tr> <td>Method || WAE</td> <td>24.02</td> <td>9.281</td> <td>0.611</td> <td>0.754</td> <td>0.46</td> <td>0.734</td> <td>0.885</td> <td>0.044</td> <td>0.196</td> </tr> <tr> <td>Method || Ours-First</td> <td>23.68</td> <td>9.24</td> <td>0.619</td> <td>0.762</td> <td>0.471</td> <td>0.725</td> <td>0.891</td> <td>0.045</td> <td>0.199</td> </tr> <tr> <td>Method || Ours-Disc</td> <td>24.22</td> <td>9.101</td> <td>0.617</td> <td>0.754</td> <td>0.465</td> <td>0.67</td> <td>0.863</td> <td>0.036</td> <td>0.184</td> </tr> <tr> <td>Method || Ours-MBOW</td> <td>23.88</td> <td>9.582</td> <td>0.622</td> <td>0.778</td> <td>0.477</td> <td>0.681</td> <td>0.877</td> <td>0.04</td> <td>0.19</td> </tr> <tr> <td>Method || Ours</td> <td>24.04</td> <td>9.362</td> <td>0.625</td> <td>0.771</td> <td>0.48</td> <td>0.699</td> <td>0.876</td> <td>0.042</td> <td>0.19</td> </tr> <tr> <td>Method || Ours+GMP</td> <td>24.2</td> <td>9.417</td> <td>0.618</td> <td>0.769</td> <td>0.482</td> <td>0.728</td> <td>0.889</td> <td>0.044</td> <td>0.198</td> </tr> </tbody></table>
Table 1
table_1
P19-1372
6
acl2019
5.1 Comparison against Baselines. Table 1 shows our main experimental results, with baselines shown at the top and our models at the bottom. The results show that our model (Ours) outperforms competitive baselines on various evaluation metrics. The Seq2seq-based models (S2S, S2S-DB and MMS) tend to generate fluent utterances and can share some overlapping words with the references, as the high BLEU-2 scores show.
[2, 1, 1, 2]
['5.1 Comparison against Baselines.', 'Table 1 shows our main experimental results, with baselines shown at the top and our models at the bottom.', 'The results show that our model (Ours) outperforms competitive baselines on various evaluation metrics.', 'The Seq2seq-based models (S2S, S2S-DB and MMS) tend to generate fluent utterances and can share some overlapping words with the references, as the high BLEU-2 scores show.']
[None, ['Ours-First', 'Ours-Disc', 'Ours-MBOW', 'Ours', 'Ours+GMP'], ['Ours-First', 'Ours-Disc', 'Ours-MBOW', 'Ours', 'Ours+GMP'], ['S2S', 'MMS', 'BLEU-2']]
1
P19-1374table_4
Conversation results on the Ubuntu test set. Our new model is substantially better than prior work. Significance is not measured as we are unaware of methods for set structured data.
2
[['System', 'Previous'], ['System', 'Linear'], ['System', 'Feedforward'], ['System', 'x10 union'], ['System', 'x10 vote'], ['System', 'x10 intersect'], ['System', 'Lowe (2017)'], ['System', 'Elsner (2008)']]
1
[['VI'], ['1-1'], ['P'], ['R'], ['F']]
[['66.1', '27.6', '0', '0', '0'], ['88.9', '69.5', '19.3', '24.9', '21.8'], ['91.3', '75.6', '34.6', '38', '36.2'], ['86.2', '62.5', '40.4', '28.5', '33.4'], ['91.5', '76', '36.3', '39.7', '38'], ['69.3', '26.6', '67', '21.1', '32.1'], ['80.6', '53.7', '10.8', '7.6', '8.9'], ['82.1', '51.4', '12.1', '21.5', '15.5']]
column
['VI', '1-1', 'P', 'R', 'F']
['x10 vote']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>VI</th> <th>1-1</th> <th>P</th> <th>R</th> <th>F</th> </tr> </thead> <tbody> <tr> <td>System || Previous</td> <td>66.1</td> <td>27.6</td> <td>0</td> <td>0</td> <td>0</td> </tr> <tr> <td>System || Linear</td> <td>88.9</td> <td>69.5</td> <td>19.3</td> <td>24.9</td> <td>21.8</td> </tr> <tr> <td>System || Feedforward</td> <td>91.3</td> <td>75.6</td> <td>34.6</td> <td>38</td> <td>36.2</td> </tr> <tr> <td>System || x10 union</td> <td>86.2</td> <td>62.5</td> <td>40.4</td> <td>28.5</td> <td>33.4</td> </tr> <tr> <td>System || x10 vote</td> <td>91.5</td> <td>76</td> <td>36.3</td> <td>39.7</td> <td>38</td> </tr> <tr> <td>System || x10 intersect</td> <td>69.3</td> <td>26.6</td> <td>67</td> <td>21.1</td> <td>32.1</td> </tr> <tr> <td>System || Lowe (2017)</td> <td>80.6</td> <td>53.7</td> <td>10.8</td> <td>7.6</td> <td>8.9</td> </tr> <tr> <td>System || Elsner (2008)</td> <td>82.1</td> <td>51.4</td> <td>12.1</td> <td>21.5</td> <td>15.5</td> </tr> </tbody></table>
Table 4
table_4
P19-1374
6
acl2019
Conversations: Table 4 presents results on the metrics defined in Section 4.3. There are three regions of performance. First, the baseline has consistently low scores since it forms a single conversation containing all messages. Second, Elsner and Charniak (2008) and Lowe et al. (2017) perform similarly, with one doing better on VI and the other on 1-1, though Elsner and Charniak (2008) do consistently better across the exact conversation extraction metrics. Third, our methods do best, with x10 vote best in all cases except precision, where the intersect approach is much better.
[1, 1, 1, 1, 1]
['Conversations: Table 4 presents results on the metrics defined in Section 4.3.', 'There are three regions of performance.', 'First, the baseline has consistently low scores since it forms a single conversation containing all messages.', 'Second, Elsner and Charniak (2008) and Lowe et al. (2017) perform similarly, with one doing better on VI and the other on 1-1, though Elsner and Charniak (2008) do consistently better across the exact conversation extraction metrics.', 'Third, our methods do best, with x10 vote best in all cases except precision, where the intersect approach is much better.']
[None, None, None, ['VI', '1-1', 'Elsner (2008)', 'Lowe (2017)'], ['x10 vote']]
1
P19-1374table_5
Performance with different training conditions on the Ubuntu test set. For Graph-F, * indicates a significant difference at the 0.01 level compared to Standard. Results are averages over 10 runs, varying the data and random seeds. The standard deviation is shown in parentheses.
2
[['Training Condition', 'Standard'], ['Training Condition', 'No context'], ['Training Condition', '1k random msg'], ['Training Condition', '2x 500 msg samples']]
1
[['Graph-F'], ['Conv-F']]
[['72.3 (0.4)', '36.2 (1.7)'], ['72.3 (0.2)', '37.6 (1.6)'], ['63.0* (0.4)', '21 (2.3)'], ['61.4* (1.8)', '20.4 (3.2)']]
column
['accuracy', 'accuracy']
['Training Condition']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Graph-F</th> <th>Conv-F</th> </tr> </thead> <tbody> <tr> <td>Training Condition || Standard</td> <td>72.3 (0.4)</td> <td>36.2 (1.7)</td> </tr> <tr> <td>Training Condition || No context</td> <td>72.3 (0.2)</td> <td>37.6 (1.6)</td> </tr> <tr> <td>Training Condition || 1k random msg</td> <td>63.0* (0.4)</td> <td>21 (2.3)</td> </tr> <tr> <td>Training Condition || 2x 500 msg samples</td> <td>61.4* (1.8)</td> <td>20.4 (3.2)</td> </tr> </tbody></table>
Table 5
table_5
P19-1374
6
acl2019
Dataset Variations: Table 5 shows results for the feedforward model with several modifications to the training set, designed to test corpus design decisions. Removing context does not substantially impact results. Decreasing the data size to match Elsner and Charniak (2008)’s training set leads to worse results, both if the sentences are from diverse contexts (3rd row), and if they are from just two contexts (bottom row). We also see a substantial increase in the standard deviation when only two samples are used, indicating that performance is not robust when the data is not widely sampled.
[1, 1, 1, 1]
['Dataset Variations: Table 5 shows results for the feedforward model with several modifications to the training set, designed to test corpus design decisions.', 'Removing context does not substantially impact results.', 'Decreasing the data size to match Elsner and Charniak (2008)’s training set leads to worse results, both if the sentences are from diverse contexts (3rd row), and if they are from just two contexts (bottom row).', 'We also see a substantial increase in the standard deviation when only two samples are used, indicating that performance is not robust when the data is not widely sampled.']
[None, ['No context'], ['1k random msg', '2x 500 msg samples'], None]
1
P19-1389table_2
Results (%) on 10,000 test query segments on the Classification-for-Modeling task.
2
[['Method', 'CNN-encoder (separated)'], ['Method', 'RNN-encoder (separated)'], ['Method', 'CNN-encoder (joint)'], ['Method', 'RNN-encoder (joint)']]
2
[['level-1 sentence functions', 'Accuracy'], ['level-1 sentence functions', 'Macro-F1'], ['level-1 sentence functions', 'Micro-F1'], ['level-2 sentence functions', 'Accuracy'], ['level-2 sentence functions', 'Macro-F1'], ['level-2 sentence functions', 'Micro-F1']]
[['97.5', '87.6', '97.5', '86.2', '52', '86.2'], ['97.6', '90.9', '97.6', '87.2', '65.8', '87.1'], ['97.4', '87.3', '97.3', '86.5', '51.8', '86.4'], ['97.6', '91.2', '97.5', '87.6', '64.2', '87.6']]
column
['Accuracy', 'Macro-F1', 'Micro-F1', 'Accuracy', 'Macro-F1', 'Micro-F1']
['RNN-encoder (joint)', 'RNN-encoder (separated)', 'CNN-encoder (separated)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>level-1 sentence functions || Accuracy</th> <th>level-1 sentence functions || Macro-F1</th> <th>level-1 sentence functions || Micro-F1</th> <th>level-2 sentence functions || Accuracy</th> <th>level-2 sentence functions || Macro-F1</th> <th>level-2 sentence functions || Micro-F1</th> </tr> </thead> <tbody> <tr> <td>Method || CNN-encoder (separated)</td> <td>97.5</td> <td>87.6</td> <td>97.5</td> <td>86.2</td> <td>52</td> <td>86.2</td> </tr> <tr> <td>Method || RNN-encoder (separated)</td> <td>97.6</td> <td>90.9</td> <td>97.6</td> <td>87.2</td> <td>65.8</td> <td>87.1</td> </tr> <tr> <td>Method || CNN-encoder (joint)</td> <td>97.4</td> <td>87.3</td> <td>97.3</td> <td>86.5</td> <td>51.8</td> <td>86.4</td> </tr> <tr> <td>Method || RNN-encoder (joint)</td> <td>97.6</td> <td>91.2</td> <td>97.5</td> <td>87.6</td> <td>64.2</td> <td>87.6</td> </tr> </tbody></table>
Table 2
table_2
P19-1389
6
acl2019
We randomly sample 10,000 query and response segments respectively from the STCSeFun dataset for testing. Results on test queries are summarized in Table 2. As stated in Section 4.1, we train different models with query/response data only (denoted as separated), as well as query and response data jointly (denoted as joint) and try two sentence encoders: CNN-based and RNN-based. From the results, we can see that the RNN-based encoder is consistently better than the CNN-based encoder on test queries across all metrics.
[2, 1, 2, 1]
['We randomly sample 10,000 query and response segments respectively from the STCSeFun dataset for testing.', 'Results on test queries are summarized in Table 2.', 'As stated in Section 4.1, we train different models with query/response data only (denoted as separated), as well as query and response data jointly (denoted as joint) and try two sentence encoders: CNN-based and RNN-based.', 'From the results, we can see that the RNN-based encoder is consistently better than the CNN-based encoder on test queries across all metrics.']
[None, None, None, ['RNN-encoder (joint)', 'RNN-encoder (separated)', 'CNN-encoder (separated)']]
1
P19-1389table_4
Results(%) on 5,000 test queries on the Classification-for-Testing task.
2
[['Method', 'CNN-encoder (without query SeFun)'], ['Method', 'RNN-encoder (without query SeFun)'], ['Method', 'CNN-encoder (with query SeFun)'], ['Method', 'RNN-encoder (with query SeFun)']]
2
[['level-1', 'Accuracy'], ['level-1', 'Macro-F1'], ['level-1', 'Micro-F1'], ['level-2', 'Accuracy'], ['level-2', 'Macro-F1'], ['level-2', 'Micro-F1']]
[['81.2', '15.1', '81.1', '55.7', '23.5', '55.7'], ['77.9', '30.3', '77.9', '65.6', '25.8', '65.5'], ['81.2', '17.4', '81.1', '65.6', '21.1', '65.6'], ['81.3', '28.5', '81.5', '65.5', '25.7', '65.7']]
column
['Accuracy', 'Macro-F1', 'Micro-F1', 'Accuracy', 'Macro-F1', 'Micro-F1']
['CNN-encoder (with query SeFun)', 'RNN-encoder (with query SeFun)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>level-1 || Accuracy</th> <th>level-1 || Macro-F1</th> <th>level-1 || Micro-F1</th> <th>level-2 || Accuracy</th> <th>level-2 || Macro-F1</th> <th>level-2 || Micro-F1</th> </tr> </thead> <tbody> <tr> <td>Method || CNN-encoder (without query SeFun)</td> <td>81.2</td> <td>15.1</td> <td>81.1</td> <td>55.7</td> <td>23.5</td> <td>55.7</td> </tr> <tr> <td>Method || RNN-encoder (without query SeFun)</td> <td>77.9</td> <td>30.3</td> <td>77.9</td> <td>65.6</td> <td>25.8</td> <td>65.5</td> </tr> <tr> <td>Method || CNN-encoder (with query SeFun)</td> <td>81.2</td> <td>17.4</td> <td>81.1</td> <td>65.6</td> <td>21.1</td> <td>65.6</td> </tr> <tr> <td>Method || RNN-encoder (with query SeFun)</td> <td>81.3</td> <td>28.5</td> <td>81.5</td> <td>65.5</td> <td>25.7</td> <td>65.7</td> </tr> </tbody></table>
Table 4
table_4
P19-1389
7
acl2019
We utilize classifiers for this task to estimate the proper response sentence function given the query with/without the query sentence functions. We also implement the RNN-based and CNN-based encoders for the query representation for comparison. Table 4 shows the results on 5,000 test queries by comparing the predicted response sentence function with its annotated ground-truth response sentence function. We can observe that encoding query sentence functions is useful to improve the performance for both CNN-based and RNN-based encoders.
[2, 1, 1, 1]
['We utilize classifiers for this task to estimate the proper response sentence function given the query with/without the query sentence functions.', 'We also implement the RNN-based and CNN-based encoders for the query representation for comparison.', 'Table 4 shows the results on 5,000 test queries by comparing the predicted response sentence function with its annotated ground-truth response sentence function.', 'We can observe that encoding query sentence functions is useful to improve the performance for both CNN-based and RNN-based encoders.']
[None, None, None, ['CNN-encoder (with query SeFun)', 'RNN-encoder (with query SeFun)']]
1
P19-1402table_2
Performance on Named Entity Recognition and Part-of-Speech Tagging tasks. All methods are evaluated on test data containing OOV words. Results demonstrate that the proposed approach, HiCE + Morph + MAML, improves the downstream model by learning better representations for OOV words.
2
[['Methods', 'Word2vec'], ['Methods', 'FastText'], ['Methods', 'Additive'], ['Methods', 'nonce2vec'], ['Methods', 'à la carte'], ['Methods', 'HiCE w/o Morph'], ['Methods', 'HiCE + Morph'], ['Methods', 'HiCE + Morph + MAML']]
3
[['Named Entity Recognition', 'F1-score', 'Rare-NER'], ['Named Entity Recognition', 'F1-score', 'Bio-NER'], ['POS Tagging', 'Acc', 'Twitter POS']]
[['0.1862', '0.7205', '0.7649'], ['0.1981', '0.7241', '0.8116'], ['0.2021', '0.7034', '0.7576'], ['0.2096', '0.7289', '0.7734'], ['0.2153', '0.7423', '0.7883'], ['0.2394', '0.7486', '0.8194'], ['0.2375', '0.7522', '0.8227'], ['0.2419', '0.7636', '0.8286']]
column
['F1-score', 'F1-score', 'Acc']
['HiCE w/o Morph', 'HiCE + Morph', 'HiCE + Morph + MAML']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Named Entity Recognition (F1-score) || Rare-NER</th> <th>Named Entity Recognition (F1-score) || Bio-NER</th> <th>POS Tagging (Acc) || Twitter POS</th> </tr> </thead> <tbody> <tr> <td>Methods || Word2vec</td> <td>0.1862</td> <td>0.7205</td> <td>0.7649</td> </tr> <tr> <td>Methods || FastText</td> <td>0.1981</td> <td>0.7241</td> <td>0.8116</td> </tr> <tr> <td>Methods || Additive</td> <td>0.2021</td> <td>0.7034</td> <td>0.7576</td> </tr> <tr> <td>Methods || nonce2vec</td> <td>0.2096</td> <td>0.7289</td> <td>0.7734</td> </tr> <tr> <td>Methods || à la carte</td> <td>0.2153</td> <td>0.7423</td> <td>0.7883</td> </tr> <tr> <td>Methods || HiCE w/o Morph</td> <td>0.2394</td> <td>0.7486</td> <td>0.8194</td> </tr> <tr> <td>Methods || HiCE + Morph</td> <td>0.2375</td> <td>0.7522</td> <td>0.8227</td> </tr> <tr> <td>Methods || HiCE + Morph + MAML</td> <td>0.2419</td> <td>0.7636</td> <td>0.8286</td> </tr> </tbody></table>
Table 2
table_2
P19-1402
7
acl2019
Results. Table 2 illustrates the results evaluated on the downstream tasks. HiCE outperforms the baselines in all the settings. Compared to the best baseline à la carte, the relative improvements are 12.4%, 2.9% and 5.1% for Rare-NER, Bio-NER, and Twitter POS, respectively. As aforementioned, the ratio of OOV words in Rare-NER is high. As a result, all the systems perform worse on Rare-NER than Bio-NER, while HiCE achieves the largest improvement over all the other baselines. Besides, our baseline embedding is trained on Wikipedia corpus (WikiText-103), which is quite different from the bio-medical texts and social media domain. The experiment demonstrates that HiCE trained on DT is already able to leverage the general language knowledge which can be transferred through different domains, and adaptation with MAML can further reduce the domain gap and enhance the performance.
[2, 1, 1, 1, 2, 1, 2, 2]
['Results.', 'Table 2 illustrates the results evaluated on the downstream tasks.', 'HiCE outperforms the baselines in all the settings.', 'Compared to the best baseline à la carte, the relative improvements are 12.4%, 2.9% and 5.1% for Rare-NER, Bio-NER, and Twitter POS, respectively.', 'As aforementioned, the ratio of OOV words in Rare-NER is high.', 'As a result, all the systems perform worse on Rare-NER than Bio-NER, while HiCE achieves the largest improvement over all the other baselines.', 'Besides, our baseline embedding is trained on Wikipedia corpus (WikiText-103), which is quite different from the bio-medical texts and social media domain.', 'The experiment demonstrates that HiCE trained on DT is already able to leverage the general language knowledge which can be transferred through different domains, and adaptation with MAML can further reduce the domain gap and enhance the performance.']
[None, None, ['HiCE w/o Morph', 'HiCE + Morph', 'HiCE + Morph + MAML'], ['HiCE w/o Morph', 'HiCE + Morph', 'HiCE + Morph + MAML', 'Rare-NER', 'Twitter POS'], None, ['Rare-NER', 'HiCE w/o Morph', 'HiCE + Morph', 'HiCE + Morph + MAML'], None, ['HiCE w/o Morph', 'HiCE + Morph', 'HiCE + Morph + MAML']]
1
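
The relative improvements quoted above (12.4%, 2.9% and 5.1% over à la carte) can be recomputed from the table values; a small check with the numbers taken from the contents field:

```python
# Sketch: verify the quoted relative gains of HiCE + Morph + MAML over à la carte.
pairs = {"Rare-NER": (0.2419, 0.2153), "Bio-NER": (0.7636, 0.7423), "Twitter POS": (0.8286, 0.7883)}
for task, (hice, alacarte) in pairs.items():
    print(task, f"{(hice - alacarte) / alacarte * 100:.1f}%")  # -> 12.4%, 2.9%, 5.1%
```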
P19-1403table_5
Performance gains of two neural temporality adaptation models when they are initialized by diachronic word embeddings as compared to initialization with standard non-diachronic word embeddings. Subword refers to our proposed diachronic word embedding in this paper (Section 3). We report absolute percentage increases in weighted F1 score after applying diachronic word embeddings.
2
[['Data', 'Twitter'], ['Data', 'Economy'], ['Data', 'Yelp-rest'], ['Data', 'Yelp-hotel'], ['Data', 'Amazon'], ['Data', 'Dianping'], ['Data', 'Average'], ['Data', 'Median']]
2
[['RCNN', 'Incre'], ['RCNN', 'Linear'], ['RCNN', 'Procrustes'], ['RCNN', 'Subword'], ['NTAM', 'Incre'], ['NTAM', 'Linear'], ['NTAM', 'Procrustes'], ['NTAM', 'Subword']]
[['-0.7', '1.4', '-0.2', '-0.8', '1.4', '-0.3', '1.7', '3.5'], ['0.5', '0', '-0.7', '0.4', '-0.3', '-1', '-0.5', '0.3'], ['1.4', '0.1', '-1.9', '2.3', '1.9', '1.6', '1.4', '4.3'], ['-1.5', '-1.2', '-0.5', '-0.2', '-0.7', '-2', '-1.8', '0.8'], ['0.2', '0.2', '-2', '0.5', '-0.8', '-0.7', '-0.8', '2.1'], ['0.4', '1.6', '0.7', '1', '0.8', '1.8', '3.4', '4.2'], ['0.05', '0.35', '-0.47', '0.53', '0.38', '-0.1', '0.57', '2.53'], ['0.3', '0.15', '-0.6', '0.45', '0.25', '-0.5', '0.45', '2.8']]
column
['F1', 'F1', 'F1', 'F1', 'F1', 'F1', 'F1', 'F1']
['RCNN']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>RCNN || Incre</th> <th>RCNN || Linear</th> <th>RCNN || Procrustes</th> <th>RCNN || Subword</th> <th>NTAM || Incre</th> <th>NTAM || Linear</th> <th>NTAM || Procrustes</th> <th>NTAM || Subword</th> </tr> </thead> <tbody> <tr> <td>Data || Twitter</td> <td>-0.7</td> <td>1.4</td> <td>-0.2</td> <td>-0.8</td> <td>1.4</td> <td>-0.3</td> <td>1.7</td> <td>3.5</td> </tr> <tr> <td>Data || Economy</td> <td>0.5</td> <td>0</td> <td>-0.7</td> <td>0.4</td> <td>-0.3</td> <td>-1</td> <td>-0.5</td> <td>0.3</td> </tr> <tr> <td>Data || Yelp-rest</td> <td>1.4</td> <td>0.1</td> <td>-1.9</td> <td>2.3</td> <td>1.9</td> <td>1.6</td> <td>1.4</td> <td>4.3</td> </tr> <tr> <td>Data || Yelp-hotel</td> <td>-1.5</td> <td>-1.2</td> <td>-0.5</td> <td>-0.2</td> <td>-0.7</td> <td>-2</td> <td>-1.8</td> <td>0.8</td> </tr> <tr> <td>Data || Amazon</td> <td>0.2</td> <td>0.2</td> <td>-2</td> <td>0.5</td> <td>-0.8</td> <td>-0.7</td> <td>-0.8</td> <td>2.1</td> </tr> <tr> <td>Data || Dianping</td> <td>0.4</td> <td>1.6</td> <td>0.7</td> <td>1</td> <td>0.8</td> <td>1.8</td> <td>3.4</td> <td>4.2</td> </tr> <tr> <td>Data || Average</td> <td>0.05</td> <td>0.35</td> <td>-0.47</td> <td>0.53</td> <td>0.38</td> <td>-0.1</td> <td>0.57</td> <td>2.53</td> </tr> <tr> <td>Data || Median</td> <td>0.3</td> <td>0.15</td> <td>-0.6</td> <td>0.45</td> <td>0.25</td> <td>-0.5</td> <td>0.45</td> <td>2.8</td> </tr> </tbody></table>
Table 5
table_5
P19-1403
9
acl2019
Table 5 shows the absolute percentage improvement in classification performance when using each diachronic embedding compared to a classifier without diachronic embeddings. Overall, diachronic embeddings improve classification models. The diachronic embedding appears to be particularly important for NTAM, improving performance on all 6 datasets with an average increase in performance up to 2.53 points. The RCNN also benefits from diachronic embeddings, but to a lesser extent, with an improvement on 4 of the 6 datasets. Comparing the different methods for constructing diachronic embeddings, we find that our proposed subword method works the best on average for both classifiers. The incremental training method also provides improved performance for both classifiers, while the linear regression and Procrustes approaches have mixed results.
[1, 2, 2, 1, 1, 2]
['Table 5 shows the absolute percentage improvement in classification performance when using each diachronic embedding compared to a classifier without diachronic embeddings.', 'Overall, diachronic embeddings improve classification models.', 'The diachronic embedding appears to be particularly important for NTAM, improving performance on all 6 datasets with an average increase in performance up to 2.53 points.', 'The RCNN also benefits from diachronic embeddings, but to a lesser extent, with an improvement on 4 of the 6 datasets.', 'Comparing the different methods for constructing diachronic embeddings, we find that our proposed subword method works the best on average for both classifiers.', 'The incremental training method also provides improved performance for both classifiers, while the linear regression and Procrustes approaches have mixed results.']
[None, None, ['NTAM'], ['RCNN', 'Incre', 'Linear', 'Procrustes', 'Subword'], ['NTAM', 'Incre', 'Linear', 'Procrustes', 'Subword'], None]
1
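
The "average increase in performance up to 2.53 points" quoted for NTAM with the subword embedding matches the Average and Median rows of the table; a small check recomputing them from the NTAM Subword column:

```python
# Sketch: recompute the Average and Median rows for the NTAM || Subword column.
from statistics import mean, median

ntam_subword = [3.5, 0.3, 4.3, 0.8, 2.1, 4.2]  # Twitter .. Dianping
print(round(mean(ntam_subword), 2), median(ntam_subword))  # -> 2.53 2.8
```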
P19-1407table_1
Performance for non-scratchpad models are taken from He et al. (2018) except Stanford NMT (Luong and Manning, 2015). ∗: model is 2 layers.
2
[['Model', 'MIXER'], ['Model', 'AC + LL'], ['Model', 'NPMT'], ['Model', 'Stanford NMT'], ['Model', 'Transformer (6 layer)'], ['Model', 'Layer-Coord (14 layer)'], ['Model', 'Scratchpad (3 layer)']]
2
[['IWSLT14', 'De-En'], ['IWSLT15', 'Es-En'], ['IWSLT15', 'En-Vi']]
[['21.83', '-', '-'], ['28.53', '-', '-'], ['29.96', '-', '28.07'], ['-', '-', '26.1'], ['32.86', '38.57', '-'], ['35.07', '40.5', '-'], ['35.08', '40.92', '29.59']]
column
['BLEU', 'BLEU', 'BLEU']
['Scratchpad (3 layer)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>IWSLT14 || De-En</th> <th>IWSLT15 || Es-En</th> <th>IWSLT15 || En-Vi</th> </tr> </thead> <tbody> <tr> <td>Model || MIXER</td> <td>21.83</td> <td>-</td> <td>-</td> </tr> <tr> <td>Model || AC + LL</td> <td>28.53</td> <td>-</td> <td>-</td> </tr> <tr> <td>Model || NPMT</td> <td>29.96</td> <td>-</td> <td>28.07</td> </tr> <tr> <td>Model || Stanford NMT</td> <td>-</td> <td>-</td> <td>26.1</td> </tr> <tr> <td>Model || Transformer (6 layer)</td> <td>32.86</td> <td>38.57</td> <td>-</td> </tr> <tr> <td>Model || Layer-Coord (14 layer)</td> <td>35.07</td> <td>40.5</td> <td>-</td> </tr> <tr> <td>Model || Scratchpad (3 layer)</td> <td>35.08</td> <td>40.92</td> <td>29.59</td> </tr> </tbody></table>
Table 1
table_1
P19-1407
3
acl2019
4.1 Translation. We evaluate on the IWSLT 14 German to English and Spanish to English translation datasets (Cettolo et al., 2015) as well as the IWSLT 15 (Cettolo et al., 2015) English to Vietnamese translation dataset. For IWSLT14 (Cettolo et al., 2015), we compare to the models evaluated by He et al. (2018), which include a transformer (Vaswani et al., 2017) and RNN-based models (Bahdanau et al., 2014). For IWSLT15, we primarily compare to GNMT (Wu et al., 2016), which incorporates Coverage (Tu et al., 2016). Table 1 shows BLEU scores of our approach on 3 IWSLT translation tasks along with reported results from previous work. Our approach achieves state-of-the-art or comparable results on all datasets.
[2, 2, 2, 2, 1, 1]
['4.1 Translation.', 'We evaluate on the IWSLT 14 German to English and Spanish to English translation datasets (Cettolo et al., 2015) as well as the IWSLT 15 (Cettolo et al., 2015) English to Vietnamese translation dataset.', 'For IWSLT14 (Cettolo et al., 2015), we compare to the models evaluated by He et al. (2018), which include a transformer (Vaswani et al., 2017) and RNN-based models (Bahdanau et al., 2014).', 'For IWSLT15, we primarily compare to GNMT (Wu et al., 2016), which incorporates Coverage (Tu et al., 2016).', 'Table 1 shows BLEU scores of our approach on 3 IWSLT translation tasks along with reported results from previous work.', 'Our approach achieves state-of-the-art or comparable results on all datasets.']
[None, ['IWSLT14', 'IWSLT15'], ['IWSLT14'], ['IWSLT15'], None, ['Scratchpad (3 layer)']]
1
P19-1408table_5
CoNLL-2012 shared task systems evaluations based on maximum spans, MINA spans, and head words. The rankings based on the CoNLL scores are included in parentheses for maximum and MINA spans. Rankings which are different based on maximum vs. MINA spans are highlighted.
1
[['fernandes'], ['martschat'], ['bjorkelund'], ['chang'], ['chen'], ['chunyuang'], ['shou'], ['yuan'], ['xu'], ['uryupina'], ['songyang']]
2
[['CoNLL', 'max'], ['CoNLL', 'MINA'], ['CoNLL', 'head'], ['LEA', 'max'], ['LEA', 'MINA'], ['LEA', 'head']]
[['60.6 (1)', ' 62.2 (1)', ' 63.9', ' 53.3', ' 55.1', ' 57.0'], ['57.7 (2)', ' 59.7 (2)', ' 61.0', ' 50.0', ' 52.4', ' 53.9'], ['57.4 (3)', ' 58.9 (3)', ' 60.7', ' 50.0', ' 51.6', ' 53.6'], ['56.1 (4)', ' 58.0 (4)', ' 59.6', ' 48.5', ' 50.7', ' 52.5'], ['54.5 (5)', ' 56.5 (5)', ' 58.2', ' 46.2', ' 48.6', ' 50.4'], ['54.2 (6)', ' 56.1 (6)', ' 57.9', ' 45.8', ' 48.1', ' 50.2'], ['53.0 (7)', ' 54.8 (8)', ' 56.5', ' 44.0', ' 46.1', ' 48.1'], ['52.9 (8)', ' 54.9 (7)', ' 56.7', ' 44.8', ' 47.0', ' 48.9'], ['52.6 (9)', ' 53.9 (9)', ' 55.2', ' 46.8', ' 48.4', ' 50.0'], ['50.0 (10)', ' 51.0 (11)', ' 52.4', ' 41.2', ' 42.3', ' 43.7'], [' 49.4 (11)', ' 51.3 (10)', ' 52.9', ' 41.3', ' 43.5', ' 45.3']]
column
['CoNLL', 'CoNLL', 'CoNLL', 'LEA', 'LEA', 'LEA']
['MINA']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>CoNLL || max</th> <th>CoNLL || MINA</th> <th>CoNLL || head</th> <th>LEA || max</th> <th>LEA || MINA</th> <th>LEA || head</th> </tr> </thead> <tbody> <tr> <td>fernandes</td> <td>60.6 (1)</td> <td>62.2 (1)</td> <td>63.9</td> <td>53.3</td> <td>55.1</td> <td>57.0</td> </tr> <tr> <td>martschat</td> <td>57.7 (2)</td> <td>59.7 (2)</td> <td>61.0</td> <td>50.0</td> <td>52.4</td> <td>53.9</td> </tr> <tr> <td>bjorkelund</td> <td>57.4 (3)</td> <td>58.9 (3)</td> <td>60.7</td> <td>50.0</td> <td>51.6</td> <td>53.6</td> </tr> <tr> <td>chang</td> <td>56.1 (4)</td> <td>58.0 (4)</td> <td>59.6</td> <td>48.5</td> <td>50.7</td> <td>52.5</td> </tr> <tr> <td>chen</td> <td>54.5 (5)</td> <td>56.5 (5)</td> <td>58.2</td> <td>46.2</td> <td>48.6</td> <td>50.4</td> </tr> <tr> <td>chunyuang</td> <td>54.2 (6)</td> <td>56.1 (6)</td> <td>57.9</td> <td>45.8</td> <td>48.1</td> <td>50.2</td> </tr> <tr> <td>shou</td> <td>53.0 (7)</td> <td>54.8 (8)</td> <td>56.5</td> <td>44.0</td> <td>46.1</td> <td>48.1</td> </tr> <tr> <td>yuan</td> <td>52.9 (8)</td> <td>54.9 (7)</td> <td>56.7</td> <td>44.8</td> <td>47.0</td> <td>48.9</td> </tr> <tr> <td>xu</td> <td>52.6 (9)</td> <td>53.9 (9)</td> <td>55.2</td> <td>46.8</td> <td>48.4</td> <td>50.0</td> </tr> <tr> <td>uryupina</td> <td>50.0 (10)</td> <td>51.0 (11)</td> <td>52.4</td> <td>41.2</td> <td>42.3</td> <td>43.7</td> </tr> <tr> <td>songyang</td> <td>49.4 (11)</td> <td>51.3 (10)</td> <td>52.9</td> <td>41.3</td> <td>43.5</td> <td>45.3</td> </tr> </tbody></table>
Table 5
table_5
P19-1408
11
acl2019
A Appendix . Table 5 shows CoNLL scores and the LEA F1 values of the participating systems in the CoNLL2012 shared task (closed task with predicted syntax and mentions) based on both maximum and minimum span evaluations. Minimum spans are detected using both MINA and Collins’ head finding rules using gold parse trees. Based on the results of Tables 5 and 6: (1) the use of minimum spans reduces the gap between the performance on gold vs. system mentions by about two percent, (2) the use of minimum instead of maximum spans results in a different ordering for some of the coreference resolvers, and (3) when gold mentions are used, there are no boundary detection errors, and consequently the results using MINA are the same as those of using maximum spans. Due to recognizing the same head for distinct overlapping mentions, the scores using the head of gold mentions are not the same as using their maximum span, which in turn indicates MINA is suited better for detecting minimum spans compared to head words.
[2, 1, 2, 1, 1]
['A Appendix .', 'Table 5 shows CoNLL scores and the LEA F1 values of the participating systems in the CoNLL2012 shared task (closed task with predicted syntax and mentions) based on both maximum and minimum span evaluations.', 'Minimum spans are detected using both MINA and Collins’ head finding rules using gold parse trees.', 'Based on the results of Tables 5 and 6: (1) the use of minimum spans reduces the gap between the performance on gold vs. system mentions by about two percent, (2) the use of minimum instead of maximum spans results in a different ordering for some of the coreference resolvers, and (3) when gold mentions are used, there are no boundary detection errors, and consequently the results using MINA are the same as those of using maximum spans.', 'Due to recognizing the same head for distinct overlapping mentions, the scores using the head of gold mentions are not the same as using their maximum span, which in turn indicates MINA is suited better for detecting minimum spans compared to head words.']
[None, ['CoNLL', 'LEA'], ['MINA'], ['MINA', 'max'], ['MINA', 'head']]
1
P19-1409table_3
Combined within- and cross-document event coreference results on the ECB+ test set.
3
[['Model', 'Baselines', 'CLUSTER+LEMMA'], ['Model', 'Baselines', 'CV (Cybulska and Vossen 2015a)'], ['Model', 'Baselines', 'KCP (Kenyon-Dean et al. 2018)'], ['Model', 'Baselines', 'CLUSTER+KCP'], ['Model', 'Model', 'Variants DISJOINT'], ['Model', 'Model', 'Variants JOINT']]
2
[['MUC', 'R'], ['MUC', 'P'], ['MUC', 'F1'], ['B 3', 'R'], [' B 3', 'P'], [' B 3', 'F1'], [' CEAF-e', 'R'], [' CEAF-e', 'P'], [' CEAF-e', 'F1'], [' CoNLL', 'F1']]
[['76.5', '79.9', '78.1', '71.7', '85', '77.8', '75.5', '71.7', '73.6', '76.5'], ['71', '75', '73', '71', '78', '74', '-', '-', '64', '73'], ['67', '71', '69', '71', '67', '69', '71', '67', '69', '69'], ['68.4', '79.3', '73.4', '67.2', '87.2', '75.9', '77.4', '66.4', '71.5', '73.6'], ['75.5', '83.6', '79.4', '75.4', '86', '80.4', '80.3', '71.9', '75.9', '78.5'], ['77.6', '84.5', '80.9', '76.1', '85.1', '80.3', '81', '73.8', '77.3', '79.5']]
column
['R', 'P', 'F1', 'R', 'P', 'F1', 'R', 'P', 'F1', 'F1']
['Variants JOINT']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MUC || R</th> <th>MUC || P</th> <th>MUC || F1</th> <th>B 3 || R</th> <th>B 3 || P</th> <th>B 3 || F1</th> <th>CEAF-e || R</th> <th>CEAF-e || P</th> <th>CEAF-e || F1</th> <th>CoNLL || F1</th> </tr> </thead> <tbody> <tr> <td>Model || Baselines || CLUSTER+LEMMA</td> <td>76.5</td> <td>79.9</td> <td>78.1</td> <td>71.7</td> <td>85</td> <td>77.8</td> <td>75.5</td> <td>71.7</td> <td>73.6</td> <td>76.5</td> </tr> <tr> <td>Model || Baselines || CV (Cybulska and Vossen 2015a)</td> <td>71</td> <td>75</td> <td>73</td> <td>71</td> <td>78</td> <td>74</td> <td>-</td> <td>-</td> <td>64</td> <td>73</td> </tr> <tr> <td>Model || Baselines || KCP (Kenyon-Dean et al. 2018)</td> <td>67</td> <td>71</td> <td>69</td> <td>71</td> <td>67</td> <td>69</td> <td>71</td> <td>67</td> <td>69</td> <td>69</td> </tr> <tr> <td>Model || Baselines || CLUSTER+KCP</td> <td>68.4</td> <td>79.3</td> <td>73.4</td> <td>67.2</td> <td>87.2</td> <td>75.9</td> <td>77.4</td> <td>66.4</td> <td>71.5</td> <td>73.6</td> </tr> <tr> <td>Model || Model || Variants DISJOINT</td> <td>75.5</td> <td>83.6</td> <td>79.4</td> <td>75.4</td> <td>86</td> <td>80.4</td> <td>80.3</td> <td>71.9</td> <td>75.9</td> <td>78.5</td> </tr> <tr> <td>Model || Model || Variants JOINT</td> <td>77.6</td> <td>84.5</td> <td>80.9</td> <td>76.1</td> <td>85.1</td> <td>80.3</td> <td>81</td> <td>73.8</td> <td>77.3</td> <td>79.5</td> </tr> </tbody></table>
Table 3
table_3
P19-1409
7
acl2019
Table 3 presents the results on event coreference. Our joint model outperforms all the baselines with a gap of 10.5 CoNLL F1 points from the last published results (KCP), while surpassing our strong lemma baseline by 3 points.
[1, 1]
['Table 3 presents the results on event coreference.', 'Our joint model outperforms all the baselines with a gap of 10.5 CoNLL F1 points from the last published results (KCP), while surpassing our strong lemma baseline by 3 points.']
[None, ['Variants JOINT', ' CoNLL', 'F1', 'KCP (Kenyon-Dean et al. 2018)', 'CLUSTER+KCP', 'Baselines']]
1
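
The 10.5-point and 3-point gaps quoted above follow directly from the CoNLL F1 column of the table; a small check:

```python
# Sketch: verify the quoted CoNLL F1 gaps for the JOINT model.
joint, kcp, lemma_baseline = 79.5, 69.0, 76.5  # CoNLL F1 values from the table
print(round(joint - kcp, 1), round(joint - lemma_baseline, 1))  # -> 10.5 3.0
```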
P19-1411table_2
System performance for the multi-class classification settings (i.e., F1 for 4-way and Accuracy for PDTB-Lin and PDTB-Ji as in the prior work). Our model is significantly better than the others (p < 0.05).
2
[['System', '(Lin et al. 2009)'], ['System', '(Ji and Eisenstein 2015b)'], ['System', '(Qin et al. 2016)'], ['System', '(Liu and Li 2016b)'], ['System', '(Qin et al. 2017)'], ['System', '(Lan et al. 2017)'], ['System', '(Dai and Huang 2018)'], ['System', '(Lei et al. 2018)'], ['System', '(Guo et al. 2018)'], ['System', '(Bai and Zhao 2018)'], ['System', 'This work']]
2
[['4-way', 'F1'], ['PDTB-Lin', 'Accuracy'], ['PDTB-Ji', 'Accuracy']]
[['-', '40.2', '-0.1'], ['-', '-', '44.59'], ['-', '43.81', '45.04'], ['46.29', '-', '-'], ['-', '44.65', '46.23'], ['47.8', '-', '-'], ['51.84', '-', '-'], ['47.15', '-', '-'], ['47.59', '-', '-'], ['51.06', '45.73', '48.22'], ['53', '46.48', '49.95']]
column
['F1', 'Accuracy', 'Accuracy']
['This work']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>4-way || F1</th> <th>PDTB-Lin || Accuracy</th> <th>PDTB-Ji || Accuracy</th> </tr> </thead> <tbody> <tr> <td>System || (Lin et al. 2009)</td> <td>-</td> <td>40.2</td> <td>-0.1</td> </tr> <tr> <td>System || (Ji and Eisenstein 2015b)</td> <td>-</td> <td>-</td> <td>44.59</td> </tr> <tr> <td>System || (Qin et al. 2016)</td> <td>-</td> <td>43.81</td> <td>45.04</td> </tr> <tr> <td>System || (Liu and Li 2016b)</td> <td>46.29</td> <td>-</td> <td>-</td> </tr> <tr> <td>System || (Qin et al. 2017)</td> <td>-</td> <td>44.65</td> <td>46.23</td> </tr> <tr> <td>System || (Lan et al. 2017)</td> <td>47.8</td> <td>-</td> <td>-</td> </tr> <tr> <td>System || (Dai and Huang 2018)</td> <td>51.84</td> <td>-</td> <td>-</td> </tr> <tr> <td>System || (Lei et al. 2018)</td> <td>47.15</td> <td>-</td> <td>-</td> </tr> <tr> <td>System || (Guo et al. 2018)</td> <td>47.59</td> <td>-</td> <td>-</td> </tr> <tr> <td>System || (Bai and Zhao 2018)</td> <td>51.06</td> <td>45.73</td> <td>48.22</td> </tr> <tr> <td>System || This work</td> <td>53</td> <td>46.48</td> <td>49.95</td> </tr> </tbody></table>
Table 2
table_2
P19-1411
5
acl2019
4.2 Comparing to the State of the Art . This section compares our proposed model with the current state-of-the-art models for IDRR. In particular, Table 2 shows the performance of the models for the multi-class classification settings (i.e., 4-way and 11-way with PDTB-Lin and PDTB-Ji) on the corresponding test sets. The first observation from these tables is that the proposed model is significantly better than the model in (Bai and Zhao, 2018) over all the dataset settings (with p < 0.05) with a large performance gap. As the proposed model is developed on top of the model in (Bai and Zhao, 2018), this is a direct comparison and demonstrates the benefit of the embeddings for relations and connectives as well as the transfer learning mechanisms for IDRR in this work. Second, the proposed model achieves the state-of-the-art performance on the multi-class classification settings (i.e., Table 2) and two settings for binary classification (i.e., Comparison and Expansion). The performance gaps between the proposed method and the other methods on the multi-class classification datasets (i.e., Table 2) are large and clearly testify to the advantage of the proposed model for IDRR.
[2, 1, 1, 1, 2, 1, 1]
['4.2 Comparing to the State of the Art .', 'This section compares our proposed model with the current state-of-the-art models for IDRR.', 'In particular, Table 2 shows the performance of the models for the multi-class classification settings (i.e., 4-way and 11-way with PDTB-Lin and PDTB-Ji) on the corresponding test sets.', 'The first observation from these tables is that the proposed model is significantly better than the model in (Bai and Zhao, 2018) over all the dataset settings (with p < 0.05) with a large performance gap.', ' As the proposed model is developed on top of the model in (Bai and Zhao, 2018), this is a direct comparison and demonstrates the benefit of the embeddings for relations and connectives as well as the transfer learning mechanisms for IDRR in this work.', 'Second, the proposed model achieves the state-of-the-art performance on the multi-class classification settings (i.e., Table 2) and two settings for binary classification (i.e., Comparison and Expansion).', 'The performance gaps between the proposed method and the other methods on the multi-class classification datasets (i.e., Table 2) are large and clearly testify to the advantage of the proposed model for IDRR.']
[None, None, ['4-way', 'PDTB-Lin', 'PDTB-Ji'], ['This work', '(Bai and Zhao 2018)'], ['This work', '(Bai and Zhao 2018)'], ['This work'], ['This work']]
1
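The "large performance gap" over (Bai and Zhao, 2018) claimed in the description can be quantified from the table values in this record. A minimal Python sketch; the labels are shortened here for readability and are not dataset field names.

```python
# Gaps between "This work" and (Bai and Zhao 2018) on the three settings (P19-1411, Table 2).
this_work = {"4-way F1": 53.0, "PDTB-Lin Acc": 46.48, "PDTB-Ji Acc": 49.95}
bai_zhao  = {"4-way F1": 51.06, "PDTB-Lin Acc": 45.73, "PDTB-Ji Acc": 48.22}
for setting in this_work:
    print(setting, round(this_work[setting] - bai_zhao[setting], 2))
# 4-way F1 1.94, PDTB-Lin Acc 0.75, PDTB-Ji Acc 1.73
```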
P19-1411table_3
System performance with different combinations of L1, L2 and L3 (i.e., F1 for 4-way and Accuracy for PDTB-Lin and PDTB-Ji as in prior work). “None”: not using any term.
2
[['System', 'L1 + L2 + L3'], ['System', 'L1 + L2'], ['System', 'L1 + L3'], ['System', 'L2 + L3'], ['System', 'L1'], ['System', 'L2'], ['System', 'L3'], ['System', 'None']]
2
[['4-way', 'F1'], ['PDTB-Lin', 'Accuracy'], ['PDTB-Ji', 'Accuracy']]
[['53', '46.48', '49.95'], ['52.18', '46.08', '49.28'], ['52.31', '45.3', '49.57'], ['52.57', '44.91', '49.86'], ['51.11', '46.21', '49.09'], ['50.38', '45.56', '47.83'], ['52.52', '45.69', '49.09'], ['51.62', '45.82', '48.6']]
column
['F1', 'Accuracy', 'Accuracy']
['L1 + L2 + L3']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>4-way || F1</th> <th>PDTB-Lin || Accuracy</th> <th>PDTB-Ji || Accuracy</th> </tr> </thead> <tbody> <tr> <td>System || L1 + L2 + L3</td> <td>53</td> <td>46.48</td> <td>49.95</td> </tr> <tr> <td>System || L1 + L2</td> <td>52.18</td> <td>46.08</td> <td>49.28</td> </tr> <tr> <td>System || L1 + L3</td> <td>52.31</td> <td>45.3</td> <td>49.57</td> </tr> <tr> <td>System || L2 + L3</td> <td>52.57</td> <td>44.91</td> <td>49.86</td> </tr> <tr> <td>System || L1</td> <td>51.11</td> <td>46.21</td> <td>49.09</td> </tr> <tr> <td>System || L2</td> <td>50.38</td> <td>45.56</td> <td>47.83</td> </tr> <tr> <td>System || L3</td> <td>52.52</td> <td>45.69</td> <td>49.09</td> </tr> <tr> <td>System || None</td> <td>51.62</td> <td>45.82</td> <td>48.6</td> </tr> </tbody></table>
Table 3
table_3
P19-1411
5
acl2019
4.3 Ablation Study . The multi-task learning framework in this work involves three penalization terms (i.e., L1, L2 and L3 in Equations 2, 3 and 4). In order to illustrate the contribution of these terms, Table 3 presents the test set performance of the proposed model when different combinations of the terms are employed for the multi-class classification settings. The row with “None” in the table corresponds to the proposed model where none of the penalization terms (L1, L2 and L3) is used, reducing to the model in (Bai and Zhao, 2018) that is augmented with the connective and relation embeddings. As we can see from the table, the embeddings of connectives and relations can only slightly improve the performance of the model in (Bai and Zhao, 2018), necessitating the penalization terms L1, L2 and L3 to facilitate the knowledge transfer and further improve the performance. From the table, it is also clear that each penalization term is important for the proposed model as eliminating any of them would worsen the performance. Combining the three penalization terms results in the best performance for IDRR in this work.
[0, 2, 1, 2, 1, 1, 1]
['4.3 Ablation Study .', 'The multi-task learning framework in this work involves three penalization terms (i.e., L1, L2 and L3 in Equations 2, 3 and 4).', 'In order to illustrate the contribution of these terms, Table 3 presents the test set performance of the proposed model when different combinations of the terms are employed for the multi-class classification settings.', 'The row with “None” in the table corresponds to the proposed model where none of the penalization terms (L1, L2 and L3) is used, reducing to the model in (Bai and Zhao, 2018) that is augmented with the connective and relation embeddings.', ' As we can see from the table, the embeddings of connectives and relations can only slightly improve the performance of the model in (Bai and Zhao, 2018), necessitating the penalization terms L1, L2 and L3 to facilitate the knowledge transfer and further improve the performance.', 'From the table, it is also clear that each penalization term is important for the proposed model as eliminating any of them would worsen the performance.', ' Combining the three penalization terms results in the best performance for IDRR in this work.']
[None, ['L1', 'L2', 'L3'], None, ['L1', 'L2', 'L3'], ['L1 + L2 + L3'], ['L1', 'L2', 'L3', 'None'], ['L1 + L2 + L3']]
1
P19-1412table_3
Performance and number of items per feature. The scores in bold indicate the classes on which each model has the best performance (with respect to both metrics). † marks statistical significance of Pearson’s correlation (p < 0.05).
3
[['Feature', 'Embedding', 'Cond.'], ['Feature', 'Embedding', 'Modal'], ['Feature', 'Embedding', 'Negation'], ['Feature', 'Embedding', 'Question']]
2
[['r', 'Rule'], ['r', 'Hybr.'], ['MAE', 'Rule'], ['MAE', 'Hybr.']]
[['', '0.02', '2.08', '1.50'], ['-0.01', '0.21', '1.37', '1.08'], ['0.45', '0.22', '2.26', '2.40'], ['-0.22', '0.29', '2.35', '1.25']]
column
['r', 'r', 'MAE', 'MAE']
['Embedding']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>r || Rule</th> <th>r || Hybr.</th> <th>MAE || Rule</th> <th>MAE || Hybr.</th> </tr> </thead> <tbody> <tr> <td>Feature || Embedding || Cond.</td> <td></td> <td>0.02</td> <td>2.08</td> <td>1.50</td> </tr> <tr> <td>Feature || Embedding || Modal</td> <td>-0.01</td> <td>0.21</td> <td>1.37</td> <td>1.08</td> </tr> <tr> <td>Feature || Embedding || Negation</td> <td>0.45</td> <td>0.22</td> <td>2.26</td> <td>2.40</td> </tr> <tr> <td>Feature || Embedding || Question</td> <td>-0.22</td> <td>0.29</td> <td>2.35</td> <td>1.25</td> </tr> </tbody></table>
Table 3
table_3
P19-1412
4
acl2019
Focusing on the restricted set, we perform detailed error analysis of the outputs of the rule-based and hybrid biLSTM models, which achieved the best correlation. Table 3 shows performance for the following linguistic features. The rule-based model can only capture inferences involving negation (r = 0.45), while the hybrid model performs more consistently across negation, modal, and question (r ∼ 0.25). Both models cannot handle inferences with conditionals.
[2, 1, 1, 1]
['Focusing on the restricted set, we perform detailed error analysis of the outputs of the rule-based and hybrid biLSTM models, which achieved the best correlation.', 'Table 3 shows performance for the following linguistic features.', 'The rule-based model can only capture inferences involving negation (r = 0.45), while the hybrid model performs more consistently across negation, modal, and question (r ∼ 0.25).', 'Both models cannot handle inferences with conditionals.']
[None, None, ['r', 'Rule', 'Negation', 'Modal', 'Question'], ['r', 'Rule', 'Cond.']]
1
P19-1414table_3
Why-QA performances
1
[['Oh et al.(2013)'], ['Sharp et al.(2016)'], ['Tan et al.(2016)'], ['Oh et al.(2017)'], ['BASE'], ['BASE+AddTr'], ['BASE+CAns'], ['BASE+CEnc'], ['BASE+Enc'], ['BERT'], ['BERT+AddTr'], ['BERT+FOP'], ['BERT+FRV'], ['Ours (OP)'], ['Ours (RP)'], ['Ours (RV)'], ['Oracle']]
1
[['P@1'], ['MAP']]
[['41.8', '41.0'], ['33.2', '32.2'], ['34.0', '33.4'], ['47.6', '45.0'], ['51.4', '50.4'], ['52.0', '49.3'], ['51.8', '50.3'], ['52.4', '51.5'], ['52.2', '50.6'], ['51.2', '50.8'], ['51.8', '51.0'], ['53.4', '51.2'], ['53.2', '50.9'], ['54.8', '52.4'], ['53.4', '51.5'], ['54.6', '51.8'], ['60.4', '60.4']]
column
['P@1', 'MAP']
['Ours (OP)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P@1</th> <th>MAP</th> </tr> </thead> <tbody> <tr> <td>Oh et al.(2013)</td> <td>41.8</td> <td>41.0</td> </tr> <tr> <td>Sharp et al.(2016)</td> <td>33.2</td> <td>32.2</td> </tr> <tr> <td>Tan et al.(2016)</td> <td>34.0</td> <td>33.4</td> </tr> <tr> <td>Oh et al.(2017)</td> <td>47.6</td> <td>45.0</td> </tr> <tr> <td>BASE</td> <td>51.4</td> <td>50.4</td> </tr> <tr> <td>BASE+AddTr</td> <td>52.0</td> <td>49.3</td> </tr> <tr> <td>BASE+CAns</td> <td>51.8</td> <td>50.3</td> </tr> <tr> <td>BASE+CEnc</td> <td>52.4</td> <td>51.5</td> </tr> <tr> <td>BASE+Enc</td> <td>52.2</td> <td>50.6</td> </tr> <tr> <td>BERT</td> <td>51.2</td> <td>50.8</td> </tr> <tr> <td>BERT+AddTr</td> <td>51.8</td> <td>51.0</td> </tr> <tr> <td>BERT+FOP</td> <td>53.4</td> <td>51.2</td> </tr> <tr> <td>BERT+FRV</td> <td>53.2</td> <td>50.9</td> </tr> <tr> <td>Ours (OP)</td> <td>54.8</td> <td>52.4</td> </tr> <tr> <td>Ours (RP)</td> <td>53.4</td> <td>51.5</td> </tr> <tr> <td>Ours (RV)</td> <td>54.6</td> <td>51.8</td> </tr> <tr> <td>Oracle</td> <td>60.4</td> <td>60.4</td> </tr> </tbody></table>
Table 3
table_3
P19-1414
7
acl2019
4.4 Results . Table 3 shows the performances of all the methods in the Precision of the top answer (P@1) and the Mean Average Precision (MAP) (Oh et al., 2013). Note that the Oracle method indicates the performance of a fictional method that ranks the answer passages perfectly, i.e., it locates all the m correct answers to a question in the top-m ranks, based on the gold-standard labels. This performance is the upper bound of those of all the implementable methods. Our proposed method, Ours (OP), outperformed all the other methods. Our starting point, i.e., BASE, was already superior to the methods in the previous works. Compared with BASE and BASE+AddTr, neither of which used compact-answer representations or the fake-representation generator F, Ours (OP) gave 3.4% and 2.8% improvement in P@1, respectively. It also outperformed BASE+CAns and BASE+CEnc, which generated compact-answer representations in a way different from the proposed method, and BASE+Enc, which trained the fake-representation generator without adversarial learning. These performance differences were statistically significant (p < 0.01 by the McNemar’s test). Ours (OP) also outperformed all the BERT-based models, but an interesting point is that the fake-representation generator F boosted the performance of the BERT-based models (statistically significant with p < 0.01 by the McNemar’s test). These results suggest that AGR is effective in both our why-QA model and our BERT-based model.
[2, 1, 2, 2, 1, 1, 1, 1, 2, 1, 2]
['4.4 Results .', 'Table 3 shows the performances of all the methods in the Precision of the top answer (P@1) and the Mean Average Precision (MAP) (Oh et al., 2013).', 'Note that the Oracle method indicates the performance of a fictional method that ranks the answer passages perfectly, i.e., it locates all the m correct answers to a question in the top-m ranks, based on the gold-standard labels.', 'This performance is the upper bound of those of all the implementable methods.', 'Our proposed method, Ours (OP), outperformed all the other methods.', 'Our starting point, i.e., BASE, was already superior to the methods in the previous works.', 'Compared with BASE and BASE+AddTr, neither of which used compact-answer representations or the fake-representation generator F, Ours (OP) gave 3.4% and 2.8% improvement in P@1, respectively.', 'It also outperformed BASE+CAns and BASE+CEnc, which generated compact-answer representations in a way different from the proposed method, and BASE+Enc, which trained the fake-representation generator without adversarial learning.', 'These performance differences were statistically significant (p < 0.01 by the McNemar’s test).', 'Ours (OP) also outperformed all the BERT-based models, but an interesting point is that the fake-representation generator F boosted the performance of the BERT-based models (statistically significant with p < 0.01 by the McNemar’s test).', 'These results suggest that AGR is effective in both our why-QA model and our BERT-based model.']
[None, ['P@1', 'MAP'], ['Oracle'], None, ['Ours (OP)'], ['BASE'], ['Ours (OP)', 'BASE', 'BASE+AddTr', 'P@1'], ['Ours (OP)', 'BASE+CAns', 'BASE+CEnc', 'BASE+Enc'], None, ['Ours (OP)', 'BERT'], None]
1
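The 3.4% and 2.8% P@1 improvements of Ours (OP) over BASE and BASE+AddTr quoted in the description follow directly from the P@1 column of this record; a minimal Python sketch with values hand-copied from the table.

```python
# P@1 improvements of Ours (OP) over BASE and BASE+AddTr (P19-1414, Table 3).
p_at_1 = {"BASE": 51.4, "BASE+AddTr": 52.0, "Ours (OP)": 54.8}
print(round(p_at_1["Ours (OP)"] - p_at_1["BASE"], 1))        # 3.4
print(round(p_at_1["Ours (OP)"] - p_at_1["BASE+AddTr"], 1))  # 2.8
```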
P19-1415table_2
Experimental results of applying data augmentation to reading comprehension models on the SQuAD 2.0 dataset. “(cid:52)” indicates absolute improvement.
1
[['BNA'], ['BNA + UNANSQ'], ['DocQA'], ['DocQA + UNANSQ'], ['BERTBase'], ['BERTBase + UNANSQ'], ['BERT Large'], ['BERT Large+ UNANSQ']]
1
[['EM'], ['F1']]
[['59.7', '62.7'], ['61.0', '63.5'], ['61.9', '64.5'], ['62.4', '65.3'], ['74.3', '77.4'], ['76.4', '79.3'], ['78.2', '81.3'], ['80.0', '83.0']]
column
['EM', 'F1']
['BERTBase + UNANSQ', 'BERT Large+ UNANSQ']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>EM</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>BNA</td> <td>59.7</td> <td>62.7</td> </tr> <tr> <td>BNA + UNANSQ</td> <td>61.0</td> <td>63.5</td> </tr> <tr> <td>DocQA</td> <td>61.9</td> <td>64.5</td> </tr> <tr> <td>DocQA + UNANSQ</td> <td>62.4</td> <td>65.3</td> </tr> <tr> <td>BERTBase</td> <td>74.3</td> <td>77.4</td> </tr> <tr> <td>BERTBase + UNANSQ</td> <td>76.4</td> <td>79.3</td> </tr> <tr> <td>BERT Large</td> <td>78.2</td> <td>81.3</td> </tr> <tr> <td>BERT Large+ UNANSQ</td> <td>80.0</td> <td>83.0</td> </tr> </tbody></table>
Table 2
table_2
P19-1415
6
acl2019
Table 2 shows the exact match and F1 scores of multiple reading comprehension models with and without data augmentation. We can see that the generated unanswerable questions can improve both specifically designed reading comprehension models and strong BERT fine-tuning models, yielding 1.9 absolute F1 improvement with the BERT-base model and 1.7 absolute F1 improvement with the BERT-large model. Our submitted model obtains an EM score of 80.75 and an F1 score of 83.85 on the hidden test set.
[1, 1, 2]
['Table 2 shows the exact match and F1 scores of multiple reading comprehension models with and without data augmentation.', 'We can see that the generated unanswerable questions can improve both specifically designed reading comprehension models and strong BERT fine-tuning models, yielding 1.9 absolute F1 improvement with the BERT-base model and 1.7 absolute F1 improvement with the BERT-large model.', 'Our submitted model obtains an EM score of 80.75 and an F1 score of 83.85 on the hidden test set.']
[['EM', 'F1'], ['F1', 'BERTBase + UNANSQ', 'BERT Large+ UNANSQ'], None]
1
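The absolute F1 gains of 1.9 (BERT-base) and 1.7 (BERT-large) quoted in the description, together with the corresponding EM gains, can be recomputed from this record. A minimal Python sketch; the (EM, F1) tuple order is a convention chosen here.

```python
# EM/F1 gains from adding UNANSQ augmentation data (P19-1415, Table 2).
scores = {  # model -> (EM, F1)
    "BERTBase": (74.3, 77.4), "BERTBase + UNANSQ": (76.4, 79.3),
    "BERT Large": (78.2, 81.3), "BERT Large+ UNANSQ": (80.0, 83.0),
}
for base, aug in [("BERTBase", "BERTBase + UNANSQ"), ("BERT Large", "BERT Large+ UNANSQ")]:
    d_em = round(scores[aug][0] - scores[base][0], 1)
    d_f1 = round(scores[aug][1] - scores[base][1], 1)
    print(base, "EM +%.1f" % d_em, "F1 +%.1f" % d_f1)
# BERTBase EM +2.1 F1 +1.9
# BERT Large EM +1.8 F1 +1.7
```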
P19-1415table_3
Human evaluation results. Unanswerability (UNANS): 1 for unanswerable, 0 otherwise. Relatedness (RELA): 3 for relevant to both answerable question and paragraph, 2 for relevant to only one, 1 for irrelevant. Readability (READ): 3 for fluent, 2 for minor grammatical errors, 1 for incomprehensible.
1
[['TFIDF'], ['SEQ2SEQ'], ['PAIR2SEQ'], ['Human']]
1
[['UNANS'], ['RELA'], ['READ']]
[['0.96', ' 1.52', ' 2.98'], ['0.62', '2.88', '2.39'], ['0.65', '2.95', '2.61'], ['0.95', '2.96', '3']]
column
['UNANS', 'RELA', 'READ']
['PAIR2SEQ']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>UNANS</th> <th>RELA</th> <th>READ</th> </tr> </thead> <tbody> <tr> <td>TFIDF</td> <td>0.96</td> <td>1.52</td> <td>2.98</td> </tr> <tr> <td>SEQ2SEQ</td> <td>0.62</td> <td>2.88</td> <td>2.39</td> </tr> <tr> <td>PAIR2SEQ</td> <td>0.65</td> <td>2.95</td> <td>2.61</td> </tr> <tr> <td>Human</td> <td>0.95</td> <td>2.96</td> <td>3</td> </tr> </tbody></table>
Table 3
table_3
P19-1415
6
acl2019
Table 3 shows the human evaluation results of generated unanswerable questions. We compare with the baseline method TFIDF, which uses the input answerable question to retrieve similar questions towards other articles as outputs. The retrieved questions are mostly unanswerable and readable, but they are not quite relevant to the question answering pair. Notice that being relevant is demonstrated to be important for data augmentation in further experiments on machine reading comprehension. Here the pair-to-sequence model still outperforms the sequence-to-sequence model in terms of all three metrics. But the differences in human evaluation are not as notable as in the automatic metrics.
[1, 1, 1, 1, 1, 2]
['Table 3 shows the human evaluation results of generated unanswerable questions.', 'We compare with the baseline method TFIDF, which uses the input answerable question to retrieve similar questions towards other articles as outputs.', 'The retrieved questions are mostly unanswerable and readable, but they are not quite relevant to the question answering pair.', 'Notice that being relevant is demonstrated to be important for data augmentation in further experiments on machine reading comprehension.', 'Here the pair-to-sequence model still outperforms the sequence-to-sequence model in terms of all three metrics.', 'But the differences in human evaluation are not as notable as in the automatic metrics.']
[None, ['TFIDF'], ['UNANS', ' READ'], [' RELA'], ['PAIR2SEQ', 'SEQ2SEQ'], None]
1
P19-1425table_2
Comparison with baseline methods trained on different backbone models (second column). * indicates the method trained using an extra corpus.
3
[['Method', 'Vaswani et al. (2017)', 'Trans.-Base'], ['Method', 'Miyato et al. (2017)', 'Trans.-Base'], ['Method', 'Sennrich et al. (2016a)', 'Trans.-Base'], ['Method', 'Wang et al. (2018)', 'Trans.-Base'], ['Method', 'Cheng et al. (2018)', 'RNMT lex.'], ['Method', 'Cheng et al. (2018)', 'RNMT feat.'], ['Method', 'Cheng et al. (2018)', 'Trans.-Base feat.'], ['Method', 'Cheng et al. (2018)', 'Trans.-Base lex'], ['Method', 'Sennrich et al. (2016b)*', 'Trans.-Base'], ['Method', 'Ours', 'Trans.-Base'], ['Method', 'Ours + BackTranslation*', 'Trans.-Base']]
1
[['MT06'], ['MT02'], ['MT03'], ['MT04'], ['MT05'], ['MT08']]
[['44.59', '44.82', '43.68', '45.6', '44.57', '35.07'], ['45.11', '45.95', '44.68', '45.99', '45.32', '35.84'], ['44.96', '46.03', '44.81', '46.01', '45.69', '35.32'], ['45.47', '46.31', '45.3', '46.45', '45.62', '35.66'], ['43.57', '44.82', '42.95', '45.05', '43.45', '34.85'], ['44.44', '46.1', '44.07', '45.61', '44.06', '34.94'], ['45.37', '46.16', '44.41', '46.32', '45.3', '35.85'], ['45.78', '45.96', '45.51', '46.49', '45.73', '36.08'], ['46.39', '47.31', '47.1', '47.81', '45.69', '36.43'], ['46.95', '47.06', '46.48', '47.39', '46.58', '37.38'], ['47.74', '48.13', '47.83', '49.13', '49.04', '38.61']]
column
['BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU']
['Ours', 'Ours + BackTranslation*']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MT06</th> <th>MT02</th> <th>MT03</th> <th>MT04</th> <th>MT05</th> <th>MT08</th> </tr> </thead> <tbody> <tr> <td>Method || Vaswani et al. (2017) || Trans.-Base</td> <td>44.59</td> <td>44.82</td> <td>43.68</td> <td>45.6</td> <td>44.57</td> <td>35.07</td> </tr> <tr> <td>Method || Miyato et al. (2017) || Trans.-Base</td> <td>45.11</td> <td>45.95</td> <td>44.68</td> <td>45.99</td> <td>45.32</td> <td>35.84</td> </tr> <tr> <td>Method || Sennrich et al. (2016a) || Trans.-Base</td> <td>44.96</td> <td>46.03</td> <td>44.81</td> <td>46.01</td> <td>45.69</td> <td>35.32</td> </tr> <tr> <td>Method || Wang et al. (2018) || Trans.-Base</td> <td>45.47</td> <td>46.31</td> <td>45.3</td> <td>46.45</td> <td>45.62</td> <td>35.66</td> </tr> <tr> <td>Method || Cheng et al. (2018) || RNMT lex.</td> <td>43.57</td> <td>44.82</td> <td>42.95</td> <td>45.05</td> <td>43.45</td> <td>34.85</td> </tr> <tr> <td>Method || Cheng et al. (2018) || RNMT feat.</td> <td>44.44</td> <td>46.1</td> <td>44.07</td> <td>45.61</td> <td>44.06</td> <td>34.94</td> </tr> <tr> <td>Method || Cheng et al. (2018) || Trans.-Base feat.</td> <td>45.37</td> <td>46.16</td> <td>44.41</td> <td>46.32</td> <td>45.3</td> <td>35.85</td> </tr> <tr> <td>Method || Cheng et al. (2018) || Trans.-Base lex</td> <td>45.78</td> <td>45.96</td> <td>45.51</td> <td>46.49</td> <td>45.73</td> <td>36.08</td> </tr> <tr> <td>Method || Sennrich et al. (2016b)* || Trans.-Base</td> <td>46.39</td> <td>47.31</td> <td>47.1</td> <td>47.81</td> <td>45.69</td> <td>36.43</td> </tr> <tr> <td>Method || Ours || Trans.-Base</td> <td>46.95</td> <td>47.06</td> <td>46.48</td> <td>47.39</td> <td>46.58</td> <td>37.38</td> </tr> <tr> <td>Method || Ours + BackTranslation* || Trans.-Base</td> <td>47.74</td> <td>48.13</td> <td>47.83</td> <td>49.13</td> <td>49.04</td> <td>38.61</td> </tr> </tbody></table>
Table 2
table_2
P19-1425
6
acl2019
Table 2 shows the comparisons to the above five baseline methods. Among all methods trained without extra corpora, our approach achieves the best result across datasets. After incorporating the back-translated corpus, our method yields an additional gain of 1-3 points over (Sennrich et al., 2016b) trained on the same back-translated corpus. Since all methods are built on top of the same backbone, the result substantiates the efficacy of our method on the standard benchmarks that contain natural noise. Compared to (Miyato et al., 2017), we found that continuous gradient-based perturbations to word embeddings can be absorbed quickly, often resulting in a worse BLEU score than the proposed discrete perturbations by word replacement.
[1, 1, 1, 2, 2]
['Table 2 shows the comparisons to the above five baseline methods.', 'Among all methods trained without extra corpora, our approach achieves the best result across datasets.', 'After incorporating the back-translated corpus, our method yields an additional gain of 1-3 points over (Sennrich et al., 2016b) trained on the same back-translated corpus.', 'Since all methods are built on top of the same backbone, the result substantiates the efficacy of our method on the standard benchmarks that contain natural noise.', 'Compared to (Miyato et al., 2017), we found that continuous gradient-based perturbations to word embeddings can be absorbed quickly, often resulting in a worse BLEU score than the proposed discrete perturbations by word replacement.']
[None, ['Ours', 'Trans.-Base'], ['Ours + BackTranslation*', 'Trans.-Base'], None, None]
1
P19-1425table_3
Results on NIST Chinese-English translation.
3
[['Method', 'Vaswani et al. (2017)', 'Trans.-Base'], ['Method', 'Ours', ' Trans.-Base']]
1
[['MT06'], ['MT02'], ['MT03'], ['MT04'], ['MT05'], ['MT08']]
[['44.59', '44.82', '43.68', '45.60', '44.57', '35.07'], ['46.95', '47.06', '46.48', '47.39', '46.58', '37.38']]
column
['BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU']
['Ours']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MT06</th> <th>MT02</th> <th>MT03</th> <th>MT04</th> <th>MT05</th> <th>MT08</th> </tr> </thead> <tbody> <tr> <td>Method || Vaswani et al. (2017) || Trans.-Base</td> <td>44.59</td> <td>44.82</td> <td>43.68</td> <td>45.60</td> <td>44.57</td> <td>35.07</td> </tr> <tr> <td>Method || Ours || Trans.-Base</td> <td>46.95</td> <td>47.06</td> <td>46.48</td> <td>47.39</td> <td>46.58</td> <td>37.38</td> </tr> </tbody></table>
Table 3
table_3
P19-1425
6
acl2019
Table 3 shows the BLEU scores on the NIST Chinese-English translation task. We first compare our approach with the Transformer model (Vaswani et al., 2017) on which our model is built. As we see, the introduction of our method to the standard backbone model (Trans.-Base) leads to substantial improvements across the validation and test sets. Specifically, our approach achieves an average gain of 2.25 BLEU points and up to 2.8 BLEU points on NIST03.
[1, 1, 1, 1]
['Table 3 shows the BLEU scores on the NIST Chinese-English translation task.', 'We first compare our approach with the Transformer model (Vaswani et al., 2017) on which our model is built.', 'As we see, the introduction of our method to the standard backbone model (Trans.-Base) leads to substantial improvements across the validation and test sets.', 'Specifically, our approach achieves an average gain of 2.25 BLEU points and up to 2.8 BLEU points on NIST03.']
[None, ['Ours', 'Vaswani et al. (2017)'], ['Ours'], ['Ours']]
1
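The "average gain of 2.25 BLEU points and up to 2.8 BLEU points on NIST03 (MT03)" in the description can be verified from the two rows of this record; a minimal Python sketch with the values hand-copied from the contents above.

```python
# Per-set and average BLEU gains of Ours over Vaswani et al. (2017) (P19-1425, Table 3).
sets     = ["MT06", "MT02", "MT03", "MT04", "MT05", "MT08"]
baseline = [44.59, 44.82, 43.68, 45.60, 44.57, 35.07]  # Vaswani et al. (2017), Trans.-Base
ours     = [46.95, 47.06, 46.48, 47.39, 46.58, 37.38]  # Ours, Trans.-Base
deltas = [round(o - b, 2) for o, b in zip(ours, baseline)]
print(dict(zip(sets, deltas)))              # largest gain is 2.8, on MT03
print(round(sum(deltas) / len(deltas), 2))  # 2.25
```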
P19-1429table_2
Experiment results on ACE 2005. For a fair comparison, the results of baselines are adapted from their original papers.
2
[['Feature based Approaches', 'MaxEnt'], ['Feature based Approaches', 'Combined-PSL'], ['Representation Learning based Approaches', 'DMCNN'], ['Representation Learning based Approaches', 'Bi-RNN'], ['Representation Learning based Approaches', 'NC-CNN'], ['External Resource based Approaches', 'SA-ANN-Arg (+Arguments)'], ['External Resource based Approaches', 'GMLATT (+Multi-Lingual)'], ['External Resource based Approaches', 'GCN-ED (+Syntactic)'], ['External Resource based Approaches', 'HBTNGMA (+Document)'], ['Our Approach', 'ELMo'], ['Our Approach', '∆concat w2v'], ['Our Approach', '∆concat ELM o'], ['Our Approach', '∆w2v'], ['Our Approach', '∆ELM o']]
1
[['P'], [' R'], [' F1']]
[['74.5', '59.1', '65.9'], ['75.3', '64.4', '69.4'], ['75.6', '63.6', '69.1'], ['66', '73', '69.3'], [' -', ' -', '71.3'], ['78', '66.3', '71.7'], ['78.9', '66.9', '72.4'], ['77.9', '68.8', '73.1'], ['77.9', '69.1', '73.3'], ['75.6', '62.3', '68.3'], ['71.8', '70.8', '71.3'], ['73.7', '71.9', '72.8'], ['74', '70.5', '72.2'], ['76.3', '71.9', '74']]
column
['P', 'R', 'F1']
['Our Approach']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Feature based Approaches || MaxEnt</td> <td>74.5</td> <td>59.1</td> <td>65.9</td> </tr> <tr> <td>Feature based Approaches || Combined-PSL</td> <td>75.3</td> <td>64.4</td> <td>69.4</td> </tr> <tr> <td>Representation Learning based Approaches || DMCNN</td> <td>75.6</td> <td>63.6</td> <td>69.1</td> </tr> <tr> <td>Representation Learning based Approaches || Bi-RNN</td> <td>66</td> <td>73</td> <td>69.3</td> </tr> <tr> <td>Representation Learning based Approaches || NC-CNN</td> <td>-</td> <td>-</td> <td>71.3</td> </tr> <tr> <td>External Resource based Approaches || SA-ANN-Arg (+Arguments)</td> <td>78</td> <td>66.3</td> <td>71.7</td> </tr> <tr> <td>External Resource based Approaches || GMLATT (+Multi-Lingual)</td> <td>78.9</td> <td>66.9</td> <td>72.4</td> </tr> <tr> <td>External Resource based Approaches || GCN-ED (+Syntactic)</td> <td>77.9</td> <td>68.8</td> <td>73.1</td> </tr> <tr> <td>External Resource based Approaches || HBTNGMA (+Document)</td> <td>77.9</td> <td>69.1</td> <td>73.3</td> </tr> <tr> <td>Our Approach || ELMo</td> <td>75.6</td> <td>62.3</td> <td>68.3</td> </tr> <tr> <td>Our Approach || ∆concat w2v</td> <td>71.8</td> <td>70.8</td> <td>71.3</td> </tr> <tr> <td>Our Approach || ∆concat ELM o</td> <td>73.7</td> <td>71.9</td> <td>72.8</td> </tr> <tr> <td>Our Approach || ∆w2v</td> <td>74</td> <td>70.5</td> <td>72.2</td> </tr> <tr> <td>Our Approach || ∆ELM o</td> <td>76.3</td> <td>71.9</td> <td>74</td> </tr> </tbody></table>
Table 2
table_2
P19-1429
7
acl2019
4.2 Overall Performance . Table 2 shows the overall ACE 2005 results of all baselines and our approach. For our approach, we show the results of four settings: our approach using word embeddings as the word representation rw (∆w2v); our approach using ELMo as rw (∆ELMo); and the corresponding variants that simply concatenate [rd, rg, rw] as the instance representation (∆concat w2v and ∆concat ELMo). From Table 2, we can see that by distilling both discrimination and generalization knowledge, our method achieves state-of-the-art performance. Compared with the best feature-based system, ∆w2v and ∆ELMo gain 2.8 and 4.6 F1-score improvements. Compared to the representation learning based baselines, both ∆w2v and ∆ELMo outperform all of them. Notably, ∆ELMo outperforms all the baselines using external resources.
[2, 1, 1, 1, 1, 1, 1]
['4.2 Overall Performance .', 'Table 2 shows the overall ACE 2005 results of all baselines and our approach.', 'For our approach, we show the results of four settings: our approach using word embeddings as the word representation rw (∆w2v); our approach using ELMo as rw (∆ELMo); and the corresponding variants that simply concatenate [rd, rg, rw] as the instance representation (∆concat w2v and ∆concat ELMo).', 'From Table 2, we can see that by distilling both discrimination and generalization knowledge, our method achieves state-of-the-art performance.', 'Compared with the best feature-based system, ∆w2v and ∆ELMo gain 2.8 and 4.6 F1-score improvements.', 'Compared to the representation learning based baselines, both ∆w2v and ∆ELMo outperform all of them.', 'Notably, ∆ELMo outperforms all the baselines using external resources.']
[None, None, None, ['Our Approach'], ['Feature based Approaches', '∆w2v', '∆ELM o', ' F1'], ['Representation Learning based Approaches', '∆w2v', '∆ELM o'], ['∆ELM o', 'External Resource based Approaches']]
1
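The 2.8- and 4.6-point F1 gains over the best feature-based system quoted in the description correspond to Combined-PSL vs. ∆w2v and ∆ELMo in this record. A minimal Python sketch; the ASCII keys are shorthand for the ∆ row labels.

```python
# F1 gains of the two full settings over the best feature-based system (P19-1429, Table 2).
f1 = {"Combined-PSL": 69.4, "delta_w2v": 72.2, "delta_ELMo": 74.0}  # keys are shorthand labels
print(round(f1["delta_w2v"] - f1["Combined-PSL"], 1))   # 2.8
print(round(f1["delta_ELMo"] - f1["Combined-PSL"], 1))  # 4.6
```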
P19-1435table_1
The accuracy scores of predicting the label with unlexicalized features, leakage features, and advanced graph-based features and the relative improvements. Result with ∗ is from Bowman et al. (2015). Results with † are from Williams et al. (2018). Result with ‡ is from Wang et al. (2017). Result with (cid:5) is from Shen et al. (2018). Result with (cid:62) is from Baudiš et al. (2016). Other results are based on our implementations. “%” is omitted.
2
[['Method', 'Majority'], ['Method', 'Unlexicalized'], ['Method', 'LSTM'], ['Method', 'Leakage'], ['Method', 'Advanced'], ['Method', 'Leakage vs Majority'], ['Method', 'Advanced vs Majority']]
1
[[' SNLI'], [' MultiNLI Matched'], ['MultiNLI Mismatched'], [' QuoraQP'], [' MSRP'], ['SICK NLI'], ['SICK STS'], [' ByteDance']]
[['33.7', '35.6', '36.5', '50', '66.5', '56.7', '50.3', '68.59'], ['47.7', '44.9', '45.5', '68.2', '73.9', '70.1', '70.2', '75.23'], ['77.6', '66.9', '66.9', '82.58', '70.6', '71.3', '70.2', '86.45'], ['36.6', '32.1', '31.1', '79.63', '66.7', '56.7', '55.5', '78.24'], ['39.1', '32.7', '33.8', '80.47', '67.9', '57.5', '56.3', '85.73'], ['8.61', '-9.83', '-14.79', '59.26', '0.3', '0', '10.34', '14.07'], ['16.02', '-8.15', '-7.4', '60.94', '2.11', '1.41', '11.93', '24.99']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy']
['Leakage', 'Advanced', 'Majority']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SNLI</th> <th>MultiNLI Matched</th> <th>MultiNLI Mismatched</th> <th>QuoraQP</th> <th>MSRP</th> <th>SICK NLI</th> <th>SICK STS</th> <th>ByteDance</th> </tr> </thead> <tbody> <tr> <td>Method || Majority</td> <td>33.7</td> <td>35.6</td> <td>36.5</td> <td>50</td> <td>66.5</td> <td>56.7</td> <td>50.3</td> <td>68.59</td> </tr> <tr> <td>Method || Unlexicalized</td> <td>47.7</td> <td>44.9</td> <td>45.5</td> <td>68.2</td> <td>73.9</td> <td>70.1</td> <td>70.2</td> <td>75.23</td> </tr> <tr> <td>Method || LSTM</td> <td>77.6</td> <td>66.9</td> <td>66.9</td> <td>82.58</td> <td>70.6</td> <td>71.3</td> <td>70.2</td> <td>86.45</td> </tr> <tr> <td>Method || Leakage</td> <td>36.6</td> <td>32.1</td> <td>31.1</td> <td>79.63</td> <td>66.7</td> <td>56.7</td> <td>55.5</td> <td>78.24</td> </tr> <tr> <td>Method || Advanced</td> <td>39.1</td> <td>32.7</td> <td>33.8</td> <td>80.47</td> <td>67.9</td> <td>57.5</td> <td>56.3</td> <td>85.73</td> </tr> <tr> <td>Method || Leakage vs Majority</td> <td>8.61</td> <td>-9.83</td> <td>-14.79</td> <td>59.26</td> <td>0.3</td> <td>0</td> <td>10.34</td> <td>14.07</td> </tr> <tr> <td>Method || Advanced vs Majority</td> <td>16.02</td> <td>-8.15</td> <td>-7.4</td> <td>60.94</td> <td>2.11</td> <td>1.41</td> <td>11.93</td> <td>24.99</td> </tr> </tbody></table>
Table 1
table_1
P19-1435
3
acl2019
Predicting semantic relationships without using sentence contents seems impossible. However, we find that the graph-based features (Leakage and Advanced) make the problem feasible on a wide range of datasets. Specifically, on datasets like QuoraQP and ByteDance, the leakage features are even more effective than the unlexicalized features. One exception is that on MultiNLI, Majority outperforms Leakage and Advanced significantly. Another interesting finding is that on SNLI and ByteDance, advanced graph-based features improve a lot over the leakage features, while on QuoraQP, the difference is very small. Among all the tested datasets, only MSRP and SICK-NLI are almost neutral to the leakage features. Note that their sizes are relatively small, with fewer than 10k samples. Results in Table 1 raise concerns about the impact of selection bias on the models and evaluation results.
[2, 1, 1, 1, 1, 1, 2, 1]
['Predicting semantic relationships without using sentence contents seems impossible.', 'However, we find that the graph-based features (Leakage and Advanced) make the problem feasible on a wide range of datasets.', 'Specifically, on datasets like QuoraQP and ByteDance, the leakage features are even more effective than the unlexicalized features.', 'One exception is that on MultiNLI, Majority outperforms Leakage and Advanced significantly.', 'Another interesting finding is that on SNLI and ByteDance, advanced graph-based features improve a lot over the leakage features, while on QuoraQP, the difference is very small.', 'Among all the tested datasets, only MSRP and SICK-NLI are almost neutral to the leakage features.', 'Note that their sizes are relatively small, with fewer than 10k samples.', 'Results in Table 1 raise concerns about the impact of selection bias on the models and evaluation results.']
[None, ['Leakage', 'Advanced'], [' QuoraQP', ' ByteDance', 'Leakage', 'Unlexicalized'], [' MultiNLI Matched', 'MultiNLI Mismatched', 'Majority', 'Leakage', 'Advanced'], [' SNLI', ' ByteDance', ' QuoraQP', 'Advanced', 'Leakage'], [' MSRP', 'SICK NLI', 'Leakage'], None, None]
1
P19-1435table_4
Evaluation Results with the synthetic dataset, MSRP and SICKSTS dataset. We report the accuracy scores and “%” is omitted.
2
[['Method', 'Biased Model'], ['Method', 'Debiased Model']]
1
[['Synthetic'], ['MSRP'], ['SICK STS']]
[['89.46', '51.94', '64.95'], ['92.62', '56.77', '66.05']]
column
['accuracy', 'accuracy', 'accuracy']
['Debiased Model']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Synthetic</th> <th>MSRP</th> <th>SICK STS</th> </tr> </thead> <tbody> <tr> <td>Method || Biased Model</td> <td>89.46</td> <td>51.94</td> <td>64.95</td> </tr> <tr> <td>Method || Debiased Model</td> <td>92.62</td> <td>56.77</td> <td>66.05</td> </tr> </tbody></table>
Table 4
table_4
P19-1435
7
acl2019
Table 4 reports the results on the datasets that are not biased to the leakage pattern of QuoraQP. We find that the Debiased Model significantly outperforms the Biased Model on all three datasets. This indicates that the Debiased Model better captures the true semantic similarities of the input sentences. We further visualize the predictions on the synthetic dataset in Figure 6. As illustrated, the predictions are more neutral to the leakage feature. From the experimental results, we can see that the proposed leakage-neutral training method is effective, as the Debiased Model performs significantly better with Synthetic dataset, MSRP and SICK, showing a better generalization strength.
[1, 1, 1, 2, 2, 1]
['Table 4 reports the results on the datasets that are not biased to the leakage pattern of QuoraQP.', 'We find that the Debiased Model significantly outperforms the Biased Model on all three datasets.', 'This indicates that the Debiased Model better captures the true semantic similarities of the input sentences.', 'We further visualize the predictions on the synthetic dataset in Figure 6.', 'As illustrated, the predictions are more neutral to the leakage feature.', 'From the experimental results, we can see that the proposed leakage-neutral training method is effective, as the Debiased Model performs significantly better with Synthetic dataset, MSRP and SICK, showing a better generalization strength.']
[None, ['Debiased Model', 'Biased Model'], ['Debiased Model'], None, None, ['Debiased Model', 'Synthetic', 'MSRP', 'SICK STS']]
1
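The claim that the Debiased Model "significantly outperforms the Biased Model on all three datasets" can be made concrete with the per-dataset accuracy gains from this record; a minimal Python sketch.

```python
# Accuracy gains of the Debiased over the Biased model (P19-1435, Table 4).
biased   = {"Synthetic": 89.46, "MSRP": 51.94, "SICK STS": 64.95}
debiased = {"Synthetic": 92.62, "MSRP": 56.77, "SICK STS": 66.05}
for name in biased:
    print(name, round(debiased[name] - biased[name], 2))
# Synthetic 3.16, MSRP 4.83, SICK STS 1.1
```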
P19-1436table_1
Results on SQuAD v1.1. ‘W/s’ indicates number of words the model can process (read) per second on a CPU in a batch mode (multiple queries at a time). DrQA (Chen et al., 2017) and BERT (Devlin et al., 2019) are from SQuAD leaderboard, and LSTM+SA and LSTM+SA+ELMo are query-agnostic baselines from Seo et al. (2018).
3
[['Original', 'Model', 'DrQA'], ['Original', 'Model', 'BERT-Large'], ['Query-Agnostic', 'Model', 'LSTM+SA'], ['Query-Agnostic', 'Model', 'LSTM+SA+ELMo'], ['Query-Agnostic', 'Model', 'DENSPI (dense only)'], ['Query-Agnostic', 'Model', '+ Linear layer'], ['Query-Agnostic', 'Model', '+ Indep. encoders'], ['Query-Agnostic', 'Model', '- Coherency scalar']]
1
[['EM'], ['F1'], ['W/s']]
[['69.5', '78.8', '4.8K'], ['84.1', '90.9', '51'], ['49', '59.8', '-'], ['52.7', '62.7', '-'], ['73.6', '81.7', '28.7M'], ['66.9', '76.4', '-'], ['65.4', '75.1', '-'], ['71.5', '81.5', '-']]
column
['EM', 'F1', 'W/s']
['DENSPI (dense only)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>EM</th> <th>F1</th> <th>W/s</th> </tr> </thead> <tbody> <tr> <td>Original || Model || DrQA</td> <td>69.5</td> <td>78.8</td> <td>4.8K</td> </tr> <tr> <td>Original || Model || BERT-Large</td> <td>84.1</td> <td>90.9</td> <td>51</td> </tr> <tr> <td>Query-Agnostic || Model || LSTM+SA</td> <td>49</td> <td>59.8</td> <td>-</td> </tr> <tr> <td>Query-Agnostic || Model || LSTM+SA+ELMo</td> <td>52.7</td> <td>62.7</td> <td>-</td> </tr> <tr> <td>Query-Agnostic || Model || DENSPI (dense only)</td> <td>73.6</td> <td>81.7</td> <td>28.7M</td> </tr> <tr> <td>Query-Agnostic || Model || + Linear layer</td> <td>66.9</td> <td>76.4</td> <td>-</td> </tr> <tr> <td>Query-Agnostic || Model || + Indep. encoders</td> <td>65.4</td> <td>75.1</td> <td>-</td> </tr> <tr> <td>Query-Agnostic || Model || - Coherency scalar</td> <td>71.5</td> <td>81.5</td> <td>-</td> </tr> </tbody></table>
Table 1
table_1
P19-1436
7
acl2019
Results . Table 1 compares the performance of our system with different baselines in terms of efficiency and accuracy. We note the following observations from the result table. (1) DENSPI outperforms the query-agnostic baseline (Seo et al., 2018) by a large margin, 20.1% EM and 18.5% F1. This is largely credited towards the usage of the BERT encoder with an effective phrase embedding mechanism on top. (2) DENSPI outperforms DrQA by 3.3% EM. This signifies that phrase-indexed models can now outperform early (unconstrained) state-of-the-art models in SQuAD. (3) DENSPI is 9.2% below the current state of the art. The difference, which we call the decomposability gap, is now within 10% and future work will involve further closing the gap. (4) Query-agnostic models can process (read) words much faster than query-dependent representation models. In a controlled environment where all information is in memory and the documents are pre-indexed, DENSPI can process 28.7 million words per second, which is 6,000 times faster than DrQA and 563,000 times faster than BERT without any approximation.
[2, 1, 1, 1, 2, 1, 2, 1, 2, 2, 2]
['Results .', 'Table 1 compares the performance of our system with different baselines in terms of efficiency and accuracy.', 'We note the following observations from the result table.', '(1) DENSPI outperforms the query-agnostic baseline (Seo et al., 2018) by a large margin, 20.1% EM and 18.5% F1.', 'This is largely credited towards the usage of the BERT encoder with an effective phrase embedding mechanism on top.', '(2) DENSPI outperforms DrQA by 3.3% EM.', 'This signifies that phrase-indexed models can now outperform early (unconstrained) state-of-the-art models in SQuAD.', '(3) DENSPI is 9.2% below the current state of the art.', 'The difference, which we call the decomposability gap, is now within 10% and future work will involve further closing the gap.', '(4) Query-agnostic models can process (read) words much faster than query-dependent representation models.', 'In a controlled environment where all information is in memory and the documents are pre-indexed, DENSPI can process 28.7 million words per second, which is 6,000 times faster than DrQA and 563,000 times faster than BERT without any approximation.']
[None, None, None, ['DENSPI (dense only)', 'LSTM+SA+ELMo'], None, ['DENSPI (dense only)', 'DrQA'], None, ['DENSPI (dense only)', 'BERT-Large'], None, ['Query-Agnostic'], ['DENSPI (dense only)', 'DrQA']]
1
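The "6,000 times faster than DrQA and 563,000 times faster than BERT" figures in the description come from the W/s column of this record; a minimal Python sketch, with 4.8K and 28.7M expanded to plain numbers here.

```python
# Throughput ratios implied by the W/s column (P19-1436, Table 1).
words_per_sec = {"DrQA": 4.8e3, "BERT-Large": 51, "DENSPI (dense only)": 28.7e6}
print(round(words_per_sec["DENSPI (dense only)"] / words_per_sec["DrQA"]))        # 5979   (~6,000x)
print(round(words_per_sec["DENSPI (dense only)"] / words_per_sec["BERT-Large"]))  # 562745 (~563,000x)
```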
P19-1441table_2
GLUE test set results scored using the GLUE evaluation server. The number below each task denotes the number of training examples. The state-of-the-art results are in bold, and the results on par with or pass human performance are in bold. MT-DNN uses BERTLARGE to initialize its shared layers. All the results are obtained from https://gluebenchmark.com/leaderboard on February 25, 2019. Model references: 1:(Wang et al., 2018) ; 2:(Radford et al., 2018); 3: (Phang et al., 2018); 4:(Devlin et al., 2018).
2
[['Model', 'BiLSTM+ELMo+Attn 1'], ['Model', 'Singletask Pretrain Transformer 2'], ['Model', 'GPT on STILTs 3'], ['Model', 'BERT LARGE 4'], ['Model', 'MT-DNNno-fine-tune'], ['Model', 'MT-DNN'], ['Model', 'Human Performance']]
2
[[' CoLA', '8.5k'], [' SST-2', '67k'], [' MRPC', '3.7k'], ['STS-B', '7k'], ['QQP', '364k'], [' MNLI-m/mm', '393k'], [' QNLI', '108k'], [' RTE', '2.5k'], [' WNLI', '634'], [' AX', '-'], [' Score', '-']]
[['36', '90.4', ' 84.9/77.9', ' 75.1/73.3', ' 64.8/84.7', ' 76.4/76.1', ' -', '56.8', '65.1', '26.5', '70.5'], ['45.4', '91.3', ' 82.3/75.7', ' 82.0/80.0', ' 70.3/88.5', ' 82.1/81.4', ' -', '56', '53.4', '29.8', '72.8'], ['47.2', '93.1', ' 87.7/83.7', ' 85.3/84.8', ' 70.1/88.1', ' 80.8/80.6', ' -', '69.1', '65.1', '29.4', '76.9'], ['60.5', '94.9', ' 89.3/85.4', ' 87.6/86.5', ' 72.1/89.3', ' 86.7/85.9', '92.7', '70.1', '65.1', '39.6', '80.5'], ['58.9', '94.6', ' 90.1/86.4', ' 89.5/88.8', ' 72.7/89.6', ' 86.5/85.8', '93.1', '79.1', '65.1', '39.4', '81.7'], ['62.5', '95.6', ' 91.1/88.2', ' 89.5/88.8', ' 72.7/89.6', ' 86.7/86.0', '93.1', '81.4', '65.1', '40.3', '82.7'], ['66.4', '97.8', ' 86.3/80.8', ' 92.7/92.6', ' 59.5/80.4', ' 92.0/92.8', '91.2', '93.6', '95.9', ' -', '87.1']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy']
['MT-DNNno-fine-tune', 'MT-DNN']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>CoLA || 8.5k</th> <th>SST-2 || 67k</th> <th>MRPC || 3.7k</th> <th>STS-B || 7k</th> <th>QQP || 364k</th> <th>MNLI-m/mm || 393k</th> <th>QNLI || 108k</th> <th>RTE || 2.5k</th> <th>WNLI || 634</th> <th>AX || -</th> <th>Score || -</th> </tr> </thead> <tbody> <tr> <td>Model || BiLSTM+ELMo+Attn 1</td> <td>36</td> <td>90.4</td> <td>84.9/77.9</td> <td>75.1/73.3</td> <td>64.8/84.7</td> <td>76.4/76.1</td> <td>-</td> <td>56.8</td> <td>65.1</td> <td>26.5</td> <td>70.5</td> </tr> <tr> <td>Model || Singletask Pretrain Transformer 2</td> <td>45.4</td> <td>91.3</td> <td>82.3/75.7</td> <td>82.0/80.0</td> <td>70.3/88.5</td> <td>82.1/81.4</td> <td>-</td> <td>56</td> <td>53.4</td> <td>29.8</td> <td>72.8</td> </tr> <tr> <td>Model || GPT on STILTs 3</td> <td>47.2</td> <td>93.1</td> <td>87.7/83.7</td> <td>85.3/84.8</td> <td>70.1/88.1</td> <td>80.8/80.6</td> <td>-</td> <td>69.1</td> <td>65.1</td> <td>29.4</td> <td>76.9</td> </tr> <tr> <td>Model || BERT LARGE 4</td> <td>60.5</td> <td>94.9</td> <td>89.3/85.4</td> <td>87.6/86.5</td> <td>72.1/89.3</td> <td>86.7/85.9</td> <td>92.7</td> <td>70.1</td> <td>65.1</td> <td>39.6</td> <td>80.5</td> </tr> <tr> <td>Model || MT-DNNno-fine-tune</td> <td>58.9</td> <td>94.6</td> <td>90.1/86.4</td> <td>89.5/88.8</td> <td>72.7/89.6</td> <td>86.5/85.8</td> <td>93.1</td> <td>79.1</td> <td>65.1</td> <td>39.4</td> <td>81.7</td> </tr> <tr> <td>Model || MT-DNN</td> <td>62.5</td> <td>95.6</td> <td>91.1/88.2</td> <td>89.5/88.8</td> <td>72.7/89.6</td> <td>86.7/86.0</td> <td>93.1</td> <td>81.4</td> <td>65.1</td> <td>40.3</td> <td>82.7</td> </tr> <tr> <td>Model || Human Performance</td> <td>66.4</td> <td>97.8</td> <td>86.3/80.8</td> <td>92.7/92.6</td> <td>59.5/80.4</td> <td>92.0/92.8</td> <td>91.2</td> <td>93.6</td> <td>95.9</td> <td>-</td> <td>87.1</td> </tr> </tbody></table>
Table 2
table_2
P19-1441
6
acl2019
MT-DNNno-fine-tune. Since the MTL of MT-DNN uses all GLUE tasks, it is possible to directly apply MT-DNN to each GLUE task without fine-tuning. The results in Table 2 show that MT-DNNno-fine-tune still outperforms BERT LARGE consistently among all tasks but CoLA. Our analysis shows that CoLA is a challenging task with much smaller in-domain data than other tasks, and its task definition and dataset are unique among all GLUE tasks, making it difficult to benefit from the knowledge learned from other tasks. As a result, MTL tends to underfit the CoLA dataset. In such a case, fine-tuning is necessary to boost the performance. As shown in Table 2, the accuracy improves from 58.9% to 62.5% after fine-tuning, even though only a very small amount of in-domain data is available for adaptation. This, together with the fact that the fine-tuned MT-DNN significantly outperforms the fine-tuned BERT LARGE on CoLA (62.5% vs. 60.5%), reveals that the learned MT-DNN representation allows much more effective domain adaptation than the pre-trained BERT representation. We will revisit this topic with more experiments in Section 4.4.
[1, 1, 1, 1, 1, 1, 1, 1, 0]
['MT-DNNno-fine-tune.', 'Since the MTL of MT-DNN uses all GLUE tasks, it is possible to directly apply MT-DNN to each GLUE task without fine-tuning.', 'The results in Table 2 show that MT-DNNno-fine-tune still outperforms BERT LARGE consistently among all tasks but CoLA.', 'Our analysis shows that CoLA is a challenging task with much smaller in-domain data than other tasks, and its task definition and dataset are unique among all GLUE tasks, making it difficult to benefit from the knowledge learned from other tasks.', 'As a result, MTL tends to underfit the CoLA dataset.', 'In such a case, fine-tuning is necessary to boost the performance.', 'As shown in Table 2, the accuracy improves from 58.9% to 62.5% after fine-tuning, even though only a very small amount of in-domain data is available for adaptation.', 'This, together with the fact that the fine-tuned MT-DNN significantly outperforms the fine-tuned BERT LARGE on CoLA (62.5% vs. 60.5%), reveals that the learned MT-DNN representation allows much more effective domain adaptation than the pre-trained BERT representation.', 'We will revisit this topic with more experiments in Section 4.4.']
[['MT-DNNno-fine-tune'], ['MT-DNN'], ['MT-DNNno-fine-tune', 'BERT LARGE 4', ' CoLA'], [' CoLA'], [' CoLA', 'MT-DNN'], None, ['MT-DNNno-fine-tune', 'MT-DNN'], ['MT-DNN', 'BERT LARGE 4', ' CoLA'], None]
1
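The CoLA numbers discussed in the description (58.9% → 62.5% after fine-tuning, and 62.5% vs. 60.5% against BERT LARGE) can be read off this record; a minimal Python sketch with shorthand keys.

```python
# CoLA score gains discussed for MT-DNN (P19-1441, Table 2).
cola = {"BERT LARGE": 60.5, "MT-DNN no fine-tune": 58.9, "MT-DNN": 62.5}
print(round(cola["MT-DNN"] - cola["MT-DNN no fine-tune"], 1))  # 3.6 (gain from fine-tuning)
print(round(cola["MT-DNN"] - cola["BERT LARGE"], 1))           # 2.0 (vs. fine-tuned BERT LARGE)
```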
P19-1443table_8
Performance stratified by question difficulty on the development set. The performances of the two models decrease as questions are more difficult.
2
[['Goal Difficulty', 'Easy (483)'], ['Goal Difficulty', 'Medium (441)'], ['Goal Difficulty', 'Hard (145)'], ['Goal Difficulty', 'Extra hard (134)']]
1
[[' CD-Seq2Seq'], [' SyntaxSQL-con']]
[['35.1', '38.9'], ['7', '7.3'], ['2.8', '1.4'], ['0.8', '0.7']]
column
['accuracy', 'accuracy']
['Goal Difficulty']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>CD-Seq2Seq</th> <th>SyntaxSQL-con</th> </tr> </thead> <tbody> <tr> <td>Goal Difficulty || Easy (483)</td> <td>35.1</td> <td>38.9</td> </tr> <tr> <td>Goal Difficulty || Medium (441)</td> <td>7</td> <td>7.3</td> </tr> <tr> <td>Goal Difficulty || Hard (145)</td> <td>2.8</td> <td>1.4</td> </tr> <tr> <td>Goal Difficulty || Extra hard (134)</td> <td>0.8</td> <td>0.7</td> </tr> </tbody></table>
Table 8
table_8
P19-1443
9
acl2019
Performance stratified by SQL difficulty. We group individual questions in SParC into different difficulty levels based on the complexity of their corresponding SQL representations, using the criteria proposed in Yu et al. (2018c). As shown in Figure 3, the questions tend to get harder as the interaction proceeds: more questions with hard and extra hard difficulty appear in later turns. Table 8 shows the performance of the two models across each difficulty level. As we expect, the models perform better when the user request is easy. Both models fail on most hard and extra hard questions. Considering that the size and question types of SParC are very close to Spider, the relatively lower performances of SyntaxSQLNet on medium, hard and extra hard questions in Table 8 compared to its performances on Spider (17.6%, 16.3%, and 4.9% respectively) indicate that SParC introduces an additional challenge through context dependencies, which are absent from Spider.
[2, 2, 1, 1, 1, 2]
['Performance stratified by SQL difficulty. We group individual questions in SParC into different difficulty levels based on the complexity of their corresponding SQL representations, using the criteria proposed in Yu et al. (2018c).', 'As shown in Figure 3, the questions tend to get harder as the interaction proceeds: more questions with hard and extra hard difficulty appear in later turns.', 'Table 8 shows the performance of the two models across each difficulty level.', 'As we expect, the models perform better when the user request is easy.', 'Both models fail on most hard and extra hard questions.', 'Considering that the size and question types of SParC are very close to Spider, the relatively lower performances of SyntaxSQLNet on medium, hard and extra hard questions in Table 8 compared to its performances on Spider (17.6%, 16.3%, and 4.9% respectively) indicate that SParC introduces an additional challenge through context dependencies, which are absent from Spider.']
[None, None, [' CD-Seq2Seq', ' SyntaxSQL-con', 'Goal Difficulty'], ['Easy (483)'], ['Hard (145)', 'Extra hard (134)'], None]
1
P19-1446table_4
SEMBLEU and SMATCH scores for several recent models. † indicates previously reported result.
4
[['Data', 'LDC2015E86', ' Model', ' Lyu'], ['Data', 'LDC2015E86', ' Model', ' Guo'], ['Data', 'LDC2015E86', ' Model', ' Gros'], ['Data', 'LDC2015E86', ' Model', ' JAMR'], ['Data', 'LDC2015E86', ' Model', ' CAMR'], ['Data', 'LDC2016E25', ' Model', ' Lyu'], ['Data', 'LDC2016E25', ' Model', ' van Nood'], ['Data', 'LDC2017T10', ' Model', ' Guo'], ['Data', 'LDC2017T10', ' Model', ' Gros'], ['Data', 'LDC2017T10', ' Model', ' JAMR'], ['Data', ' LDC2017T10', ' Model', ' CAMR']]
1
[[' SEMBLEU'], [' SMATCH']]
[['52.7', ' 73.7†'], ['50.1', ' 68.7†'], ['50', ' 70.2†'], ['46.8', '67'], ['37.2', '62'], ['54.3', ' 74.4†'], ['49.2', ' 71.0†'], ['52', ' 69.8†'], ['50.7', ' 71.0†'], ['47', '66'], ['36.6', '61']]
column
['SEMBLEU', 'SMATCH']
[' SEMBLEU']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SEMBLEU</th> <th>SMATCH</th> </tr> </thead> <tbody> <tr> <td>Data || LDC2015E86 || Model || Lyu</td> <td>52.7</td> <td>73.7†</td> </tr> <tr> <td>Data || LDC2015E86 || Model || Guo</td> <td>50.1</td> <td>68.7†</td> </tr> <tr> <td>Data || LDC2015E86 || Model || Gros</td> <td>50</td> <td>70.2†</td> </tr> <tr> <td>Data || LDC2015E86 || Model || JAMR</td> <td>46.8</td> <td>67</td> </tr> <tr> <td>Data || LDC2015E86 || Model || CAMR</td> <td>37.2</td> <td>62</td> </tr> <tr> <td>Data || LDC2016E25 || Model || Lyu</td> <td>54.3</td> <td>74.4†</td> </tr> <tr> <td>Data || LDC2016E25 || Model || van Nood</td> <td>49.2</td> <td>71.0†</td> </tr> <tr> <td>Data || LDC2017T10 || Model || Guo</td> <td>52</td> <td>69.8†</td> </tr> <tr> <td>Data || LDC2017T10 || Model || Gros</td> <td>50.7</td> <td>71.0†</td> </tr> <tr> <td>Data || LDC2017T10 || Model || JAMR</td> <td>47</td> <td>66</td> </tr> <tr> <td>Data || LDC2017T10 || Model || CAMR</td> <td>36.6</td> <td>61</td> </tr> </tbody></table>
Table 4
table_4
P19-1446
5
acl2019
3.4 Evaluating with SEMBLEU . Table 4 shows the SEMBLEU and SMATCH scores of several recent models. In particular, we asked for the outputs of Lyu (Lyu and Titov, 2018), Gros (Groschwitz et al., 2018), van Nood (van Noord and Bos, 2017) and Guo (Guo and Lu, 2018) to evaluate on our SEMBLEU. For CAMR and JAMR, we obtain their outputs by running the released systems. SEMBLEU is mostly consistent with SMATCH, except for the order between Guo and Gros. This is probably because Guo has more high-order correspondences with the reference.
[2, 1, 1, 1, 1, 2]
['3.4 Evaluating with SEMBLEU .', 'Table 4 shows the SEMBLEU and SMATCH scores of several recent models.', 'In particular, we asked for the outputs of Lyu (Lyu and Titov, 2018), Gros (Groschwitz et al., 2018), van Nood (van Noord and Bos, 2017) and Guo (Guo and Lu, 2018) to evaluate on our SEMBLEU.', 'For CAMR and JAMR, we obtain their outputs by running the released systems.', 'SEMBLEU is mostly consistent with SMATCH, except for the order between Guo and Gros.', 'This is probably because Guo has more high-order correspondences with the reference.']
[[' SEMBLEU'], [' SEMBLEU', ' SMATCH'], [' Gros', ' Guo', ' SEMBLEU'], [' CAMR', ' JAMR'], [' SEMBLEU', ' SMATCH', ' Guo'], [' Guo']]
1
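The one ranking disagreement noted in the description (Guo vs. Gros) can be seen by sorting the two systems under each metric on LDC2017T10; a minimal Python sketch (the same flip also appears in this record's LDC2015E86 rows).

```python
# SEMBLEU vs. SMATCH ordering of Guo and Gros on LDC2017T10 (P19-1446, Table 4).
scores = {"Guo": {"SEMBLEU": 52.0, "SMATCH": 69.8},
          "Gros": {"SEMBLEU": 50.7, "SMATCH": 71.0}}
by_sembleu = sorted(scores, key=lambda m: scores[m]["SEMBLEU"], reverse=True)
by_smatch  = sorted(scores, key=lambda m: scores[m]["SMATCH"], reverse=True)
print(by_sembleu)  # ['Guo', 'Gros'] -- SEMBLEU ranks Guo higher
print(by_smatch)   # ['Gros', 'Guo'] -- SMATCH ranks Gros higher
```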
P19-1453table_1
Comparison between training on 1 million examples from a backtranslated English-English corpus (En-En) and the original bitext corpus (En-Cs) sampling 1 million and 2 million sentence pairs (the latter equalizes the amount of English text with the En-En setting). Performance is the average Pearson’s r over the 2012-2016 STS datasets.
2
[['Model', 'LSTM-SP (20k)'], ['Model', 'SP (20k)'], ['Model', 'WORD'], ['Model', 'TRIGRAM']]
1
[['En-En'], ['En-Cs (1M)'], ['En-Cs (2M)']]
[['66.7', '65.7', '66.6'], ['68.3', '68.6', '70'], ['66', '63.8', '65.9'], ['69.2', '68.6', '69.9']]
column
['r', 'r', 'r']
['En-Cs (1M)', 'En-Cs (2M)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>En-En</th> <th>En-Cs (1M)</th> <th>En-Cs (2M)</th> </tr> </thead> <tbody> <tr> <td>Model || LSTM-SP (20k)</td> <td>66.7</td> <td>65.7</td> <td>66.6</td> </tr> <tr> <td>Model || SP (20k)</td> <td>68.3</td> <td>68.6</td> <td>70</td> </tr> <tr> <td>Model || WORD</td> <td>66</td> <td>63.8</td> <td>65.9</td> </tr> <tr> <td>Model || TRIGRAM</td> <td>69.2</td> <td>68.6</td> <td>69.9</td> </tr> </tbody></table>
Table 1
table_1
P19-1453
3
acl2019
Results in Table 1 show two observations. First, models trained on En-En, in contrast to those trained on En-CS, have higher correlation for all encoders except SP. However, when the same number of English sentences is used, models trained on bitext have greater than or equal performance across all encoders. Second, SP has the best performance in the En-CS setting. It also has fewer parameters and is therefore faster to train than LSTM-SP and TRIGRAM. Further, it is much faster at encoding new sentences at test time.
[1, 1, 1, 1, 2, 2]
['Results in Table 1 show two observations.', 'First, models trained on En-En, in contrast to those trained on En-CS, have higher correlation for all encoders except SP.', 'However, when the same number of English sentences is used, models trained on bitext have greater than or equal performance across all encoders.', 'Second, SP has the best performance in the En-CS setting.', 'It also has fewer parameters and is therefore faster to train than LSTM-SP and TRIGRAM.', 'Further, it is much faster at encoding new sentences at test time.']
[None, ['En-En', 'En-Cs (1M)', 'En-Cs (2M)', 'Model', 'SP (20k)'], ['En-Cs (2M)', 'En-En'], ['LSTM-SP (20k)', 'SP (20k)', 'En-Cs (1M)', 'En-Cs (2M)'], ['LSTM-SP (20k)', 'SP (20k)', 'TRIGRAM'], None]
1
P19-1457table_2
Experimental results with constituent Tree-LSTMs.
2
[['Model', 'ConTree (Le and Zuidema, 2015)'], ['Model', 'ConTree (Tai et al., 2015)'], ['Model', 'ConTree (Zhu et al., 2015)'], ['Model', 'ConTree (Li et al., 2015)'], ['Model', 'ConTree (Our implementation)'], ['Model', 'ConTree + WG'], ['Model', 'ConTree + LVG4'], ['Model', 'ConTree + LVeG']]
1
[[' SST-5 Root'], [' SST-5 Phrase'], [' SST-2 Root'], [' SST-2 Phrase']]
[[' 49.9', ' -', ' 88.0', ' -'], [' 51.0', ' -', ' 88.0', ' -'], [' 50.1', ' -', ' -', ' -'], [' 50.4', ' 83.4', ' 86.7', ' -'], [' 51.5', ' 82.8', ' 89.4', ' 86.9'], [' 51.7', ' 83.0', ' 89.7', ' 88.9'], [' 52.2', ' 83.2', ' 89.8', ' 89.1'], [' 52.9', ' 83.4', ' 89.8', ' 89.5']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy']
['ConTree (Our implementation)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SST-5 Root</th> <th>SST-5 Phrase</th> <th>SST-2 Root</th> <th>SST-2 Phrase</th> </tr> </thead> <tbody> <tr> <td>Model || ConTree (Le and Zuidema, 2015)</td> <td>49.9</td> <td>-</td> <td>88.0</td> <td>-</td> </tr> <tr> <td>Model || ConTree (Tai et al., 2015)</td> <td>51.0</td> <td>-</td> <td>88.0</td> <td>-</td> </tr> <tr> <td>Model || ConTree (Zhu et al., 2015)</td> <td>50.1</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>Model || ConTree (Li et al., 2015)</td> <td>50.4</td> <td>83.4</td> <td>86.7</td> <td>-</td> </tr> <tr> <td>Model || ConTree (Our implementation)</td> <td>51.5</td> <td>82.8</td> <td>89.4</td> <td>86.9</td> </tr> <tr> <td>Model || ConTree + WG</td> <td>51.7</td> <td>83.0</td> <td>89.7</td> <td>88.9</td> </tr> <tr> <td>Model || ConTree + LVG4</td> <td>52.2</td> <td>83.2</td> <td>89.8</td> <td>89.1</td> </tr> <tr> <td>Model || ConTree + LVeG</td> <td>52.9</td> <td>83.4</td> <td>89.8</td> <td>89.5</td> </tr> </tbody></table>
Table 2
table_2
P19-1457
7
acl2019
We re-implement constituent Tree-LSTM (ConTree) of Tai et al. (2015) and obtain better results than their original implementation. We then integrate ConTree with Weighted Grammars (ConTree+WG), Latent Variable Grammars with a subtype number of 4 (ConTree+LVG4), and Latent Variable Grammars (ConTree+LVeG), respectively. Table 2 shows the experimental results for sentiment classification on both SST-5 and SST-2 at the sentence level (Root) and all nodes (Phrase).
[1, 2, 1]
['We re-implement constituent Tree-LSTM (ConTree) of Tai et al. (2015) and obtain better results than their original implementation.', 'We then integrate ConTree with Weighted Grammars (ConTree+WG), Latent Variable Grammars with a subtype number of 4 (ConTree+LVG4), and Latent Variable Grammars (ConTree+LVeG), respectively.', 'Table 2 shows the experimental results for sentiment classification on both SST-5 and SST-2 at the sentence level (Root) and all nodes (Phrase).']
[['ConTree (Our implementation)', 'ConTree (Tai et al., 2015)', 'ConTree (Le and Zuidema, 2015)', 'ConTree (Zhu et al., 2015)', 'ConTree (Li et al., 2015)'], ['ConTree + WG', 'ConTree + LVG4', 'ConTree + LVeG'], [' SST-5 Root', ' SST-5 Phrase', ' SST-2 Root', ' SST-2 Phrase']]
1
P19-1457table_3
Experimental results with ELMo. BCN(P) is the BCN implemented by Peters et al. (2018). BCN(O) is the BCN implemented by ourselves.
2
[['Model', 'BCN(P)'], ['Model', 'BCN(O)'], ['Model', 'BCN+WG'], ['Model', 'BCN+LVG4'], ['Model', 'BCN+LVeG']]
2
[[' SST-5', 'Root'], [' SST-5', ' Phrase'], [' SST-2', ' Root'], [' SST-2', ' Phrase']]
[['54.7', ' -', ' -', ' -'], ['54.6', '83.3', '91.4', '88.8'], ['55.1', '83.5', '91.5', '90.5'], ['55.5', '83.5', '91.7', '91.3'], ['56', '83.5', '92.1', '91.6']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy']
['BCN+WG', 'BCN+LVG4', 'BCN+LVeG']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SST-5 || Root</th> <th>SST-5 || Phrase</th> <th>SST-2 || Root</th> <th>SST-2 || Phrase</th> </tr> </thead> <tbody> <tr> <td>Model || BCN(P)</td> <td>54.7</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>Model || BCN(O)</td> <td>54.6</td> <td>83.3</td> <td>91.4</td> <td>88.8</td> </tr> <tr> <td>Model || BCN+WG</td> <td>55.1</td> <td>83.5</td> <td>91.5</td> <td>90.5</td> </tr> <tr> <td>Model || BCN+LVG4</td> <td>55.5</td> <td>83.5</td> <td>91.7</td> <td>91.3</td> </tr> <tr> <td>Model || BCN+LVeG</td> <td>56</td> <td>83.5</td> <td>92.1</td> <td>91.6</td> </tr> </tbody></table>
Table 3
table_3
P19-1457
7
acl2019
There has also been work using large-scale external datasets to improve the performance of sentiment classification. Peters et al. (2018) combined a bi-attentive classification network (BCN, McCann et al. (2017)) with a pretrained language model with character convolutions on a large-scale corpus (ELMo) and reported an accuracy of 54.7 on sentence-level SST-5. For a fair comparison, we also augment our model with ELMo. Table 3 shows that our methods beat the baseline on every task. BCN+WG slightly improves accuracies on all tasks by modeling sentiment composition explicitly. The clear improvement of BCN+LVG4 and BCN+LVeG shows that explicitly modeling sentiment composition with fine-grained sentiment subtypes is useful. In particular, BCN+LVeG improves the sentence-level classification accuracies by 1.4 points (fine-grained) and 0.7 points (binary) compared to BCN (our implementation), respectively. To our knowledge, we achieve the best results on the SST dataset.
[2, 2, 2, 1, 1, 1, 1, 1]
['There has also been work using large-scale external datasets to improve the performance of sentiment classification.', 'Peters et al. (2018) combined a bi-attentive classification network (BCN, McCann et al. (2017)) with a pretrained language model with character convolutions on a large-scale corpus (ELMo) and reported an accuracy of 54.7 on sentence-level SST-5.', 'For a fair comparison, we also augment our model with ELMo.', 'Table 3 shows that our methods beat the baseline on every task.', 'BCN+WG slightly improves accuracies on all tasks by modeling sentiment composition explicitly.', 'The clear improvement of BCN+LVG4 and BCN+LVeG shows that explicitly modeling sentiment composition with fine-grained sentiment subtypes is useful.', 'In particular, BCN+LVeG improves the sentence-level classification accuracies by 1.4 points (fine-grained) and 0.7 points (binary) compared to BCN (our implementation), respectively.', 'To our knowledge, we achieve the best results on the SST dataset.']
[None, ['BCN(P)', ' SST-5'], None, ['BCN+WG', 'BCN+LVG4', 'BCN+LVeG'], ['BCN+WG'], ['BCN+LVG4', 'BCN+LVeG'], ['BCN+LVeG', 'BCN(O)'], [' SST-5', ' SST-2']]
1
P19-1458table_6
Comparison of results using large and small corpora. The small corpus is uniformly sampled from the Japanese Wikipedia (100MB). The large corpus is the entire Japanese Wikipedia (2.9GB).
2
[['Model', 'BCN+ELMo'], ['Model', 'ULMFiT'], ['Model', 'ULMFiT Adapted'], ['Model', 'BERTBASE'], ['Model', 'BCN+ELMo [100MB]'], ['Model', 'ULMFiT Adapted [100MB]'], ['Model', 'BERTBASE [100MB]']]
1
[[' Yahoo Binary']]
[['10.24'], ['12.2'], ['8.52'], ['8.42'], ['10.32'], ['8.57'], ['14.26']]
column
['error']
['BCN+ELMo [100MB]', 'BERTBASE [100MB]']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Yahoo Binary</th> </tr> </thead> <tbody> <tr> <td>Model || BCN+ELMo</td> <td>10.24</td> </tr> <tr> <td>Model || ULMFiT</td> <td>12.2</td> </tr> <tr> <td>Model || ULMFiT Adapted</td> <td>8.52</td> </tr> <tr> <td>Model || BERTBASE</td> <td>8.42</td> </tr> <tr> <td>Model || BCN+ELMo [100MB]</td> <td>10.32</td> </tr> <tr> <td>Model || ULMFiT Adapted [100MB]</td> <td>8.57</td> </tr> <tr> <td>Model || BERTBASE [100MB]</td> <td>14.26</td> </tr> </tbody></table>
Table 6
table_6
P19-1458
4
acl2019
7.3 Size of Pre-Training Corpus. We also investigate whether the size of the source language model affects the sentiment analysis performance on the Yahoo dataset. This is especially important for low-resource languages that do not usually have large amounts of data available for training. We used the ja.text8 small text corpus (100MB) from the Japanese Wikipedia to compare with the whole Wikipedia (2.9GB) used in our previous experiments. Table 6 shows slightly lower performance for BCN+ELMo and ULMFiT, while BERT performed much worse. Thus, for effective sentiment analysis, a large corpus is required for pre-training BERT.
[2, 2, 2, 2, 1, 2]
['7.3 Size of Pre-Training Corpus.', ' We also investigate whether the size of the source language model affects the sentiment analysis performance on the Yahoo dataset.', 'This is especially important for low-resource languages that do not usually have large amounts of data available for training.', 'We used the ja.text8 small text corpus (100MB) from the Japanese Wikipedia to compare with the whole Wikipedia (2.9GB) used in our previous experiments.', 'Table 6 shows slightly lower performance for BCN+ELMo and ULMFiT, while BERT performed much worse.', 'Thus, for effective sentiment analysis, a large corpus is required for pre-training BERT.']
[None, None, None, None, ['BCN+ELMo', 'BCN+ELMo [100MB]', 'ULMFiT', 'ULMFiT Adapted [100MB]', 'BERTBASE', 'BERTBASE [100MB]'], None]
1
P19-1465table_3
Experimental results on Quora test set.
2
[['Model', 'BiMPM (Wang et al., 2017)'], ['Model', 'pt-DecAttn-word (Tomar et al., 2017)'], ['Model', 'pt-DecAttn-char (Tomar et al., 2017)'], ['Model', 'DIIN (Gong et al., 2018)'], ['Model', 'MwAN (Tan et al., 2018)'], ['Model', 'CSRAN (Tay et al., 2018a)'], ['Model', 'SAN (Liu et al., 2018)'], ['Model', 'RE2 (ours)']]
1
[['Acc.(%)']]
[['88.2'], ['87.5'], ['88.4'], ['89.1'], ['89.1'], ['89.2'], ['89.4'], [' 89.2±0.2']]
column
['Acc.(%)']
['RE2 (ours)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Acc.(%)</th> </tr> </thead> <tbody> <tr> <td>Model || BiMPM (Wang et al., 2017)</td> <td>88.2</td> </tr> <tr> <td>Model || pt-DecAttn-word (Tomar et al., 2017)</td> <td>87.5</td> </tr> <tr> <td>Model || pt-DecAttn-char (Tomar et al., 2017)</td> <td>88.4</td> </tr> <tr> <td>Model || DIIN (Gong et al., 2018)</td> <td>89.1</td> </tr> <tr> <td>Model || MwAN (Tan et al., 2018)</td> <td>89.1</td> </tr> <tr> <td>Model || CSRAN (Tay et al., 2018a)</td> <td>89.2</td> </tr> <tr> <td>Model || SAN (Liu et al., 2018)</td> <td>89.4</td> </tr> <tr> <td>Model || RE2 (ours)</td> <td>89.2±0.2</td> </tr> </tbody></table>
Table 3
table_3
P19-1465
5
acl2019
Results on Quora dataset are listed in Table 3. Since paraphrase identification is a symmetric task where two input sequences can be swapped with no effect to the label of the text pair, in hyperparameter tuning we validate between two symmetric versions of the prediction layer (Equation 6 and Equation 7) and use no additional data augmentation. The performance of RE2 is on par with the state-of-the-art on this dataset.
[1, 2, 1]
['Results on Quora dataset are listed in Table 3.', 'Since paraphrase identification is a symmetric task where two input sequences can be swapped with no effect to the label of the text pair, in hyperparameter tuning we validate between two symmetric versions of the prediction layer (Equation 6 and Equation 7) and use no additional data augmentation.', 'The performance of RE2 is on par with the state-of-the-art on this dataset.']
[None, None, ['RE2 (ours)']]
1
P19-1469table_1
Semi-supervised classification results on the SNLI dataset. (a) Zhao et al. (2018); (b) Shen et al. (2018a).
2
[['Model', 'LSTM(a)'], ['Model', 'CNN(b)'], ['Model', 'LSTM-AE(a)'], ['Model', 'LSTM-ADAE(a)'], ['Model', 'DeConv-AE(b)'], ['Model', 'LSTM-VAE(b)'], ['Model', 'DeConv-VAE(b)'], ['Model', 'LSTM-vMF-VAE (ours)'], ['Model', 'CS-LVM (ours)']]
1
[['28k'], ['59k'], ['120k']]
[['57.9', '62.5', '65.9'], ['58.7', '62.7', '65.6'], ['59.9', '64.6', '68.5'], ['62.5', '66.8', '70.9'], ['62.1', '65.5', '68.7'], ['64.7', '67.5', '71.1'], ['67.2', '69.3', '72.2'], ['65.6', '68.7', '71.1'], ['68.4', '73.5', '76.9']]
column
['accuracy', 'accuracy', 'accuracy']
['CS-LVM (ours)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>28k</th> <th>59k</th> <th>120k</th> </tr> </thead> <tbody> <tr> <td>Model || LSTM(a)</td> <td>57.9</td> <td>62.5</td> <td>65.9</td> </tr> <tr> <td>Model || CNN(b)</td> <td>58.7</td> <td>62.7</td> <td>65.6</td> </tr> <tr> <td>Model || LSTM-AE(a)</td> <td>59.9</td> <td>64.6</td> <td>68.5</td> </tr> <tr> <td>Model || LSTM-ADAE(a)</td> <td>62.5</td> <td>66.8</td> <td>70.9</td> </tr> <tr> <td>Model || DeConv-AE(b)</td> <td>62.1</td> <td>65.5</td> <td>68.7</td> </tr> <tr> <td>Model || LSTM-VAE(b)</td> <td>64.7</td> <td>67.5</td> <td>71.1</td> </tr> <tr> <td>Model || DeConv-VAE(b)</td> <td>67.2</td> <td>69.3</td> <td>72.2</td> </tr> <tr> <td>Model || LSTM-vMF-VAE (ours)</td> <td>65.6</td> <td>68.7</td> <td>71.1</td> </tr> <tr> <td>Model || CS-LVM (ours)</td> <td>68.4</td> <td>73.5</td> <td>76.9</td> </tr> </tbody></table>
Table 1
table_1
P19-1469
6
acl2019
Table 1 summarizes the results of the experiments. We can clearly see that the proposed CS-LVM architecture substantially outperforms other models based on auto-encoding. Also, the semantic constraints brought an additional boost in performance, achieving the new state of the art in semi-supervised classification of the SNLI dataset. When all training data are used as labeled data (≈550k), CS-LVM also improves performance by achieving an accuracy of 82.8%, compared to the supervised LSTM (81.5%), LSTM-AE (81.6%), LSTM-VAE (80.8%), and DeConv-VAE (80.9%).
[1, 1, 1, 1]
['Table 1 summarizes the results of the experiments.', 'We can clearly see that the proposed CS-LVM architecture substantially outperforms other models based on auto-encoding.', 'Also, the semantic constraints brought an additional boost in performance, achieving the new state of the art in semi-supervised classification of the SNLI dataset.', 'When all training data are used as labeled data (≈550k), CS-LVM also improves performance by achieving an accuracy of 82.8%, compared to the supervised LSTM (81.5%), LSTM-AE (81.6%), LSTM-VAE (80.8%), and DeConv-VAE (80.9%).']
[None, ['CS-LVM (ours)'], ['CS-LVM (ours)'], ['CS-LVM (ours)', 'LSTM(a)', 'LSTM-AE(a)', 'LSTM-VAE(b)', 'DeConv-VAE(b)']]
1
P19-1470table_1
Automatic evaluations of quality and novelty for generations of ATOMIC commonsense. No novelty scores are reported for the NearestNeighbor baseline because all retrieved sequences are in the training set.
2
[['Model', '9ENC9DEC (Sap et al., 2019)'], ['Model', 'NearestNeighbor (Sap et al., 2019)'], ['Model', 'Event2(IN)VOLUN (Sap et al., 2019)'], ['Model', 'Event2PERSONX/Y (Sap et al., 2019)'], ['Model', 'Event2PRE/POST (Sap et al., 2019)'], ['Model', 'COMET (- pretrain)'], ['Model', 'COMET']]
1
[['PPL5'], ['BLEU-2'], ['N/T sro6'], ['N/T o'], ['N/U o']]
[['-', '10.01', '100', '8.61', '40.77'], ['-', '6.61', '-', '-', '-'], ['-', '9.67', '100', '9.52', '45.06'], ['-', '9.24', '100', '8.22', '41.66'], ['-', '9.93', '100', '7.38', '41.99'], ['15.42', '13.88', '100', '7.25', '45.71'], ['11.14', '15.1', '100', '9.71', '51.2']]
column
['PPL5', 'BLEU-2', 'N/T sro6', 'N/T o', 'N/U o']
['COMET']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>PPL5</th> <th>BLEU-2</th> <th>N/T sro6</th> <th>N/T o</th> <th>N/U o</th> </tr> </thead> <tbody> <tr> <td>Model || 9ENC9DEC (Sap et al., 2019)</td> <td>-</td> <td>10.01</td> <td>100</td> <td>8.61</td> <td>40.77</td> </tr> <tr> <td>Model || NearestNeighbor (Sap et al., 2019)</td> <td>-</td> <td>6.61</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>Model || Event2(IN)VOLUN (Sap et al., 2019)</td> <td>-</td> <td>9.67</td> <td>100</td> <td>9.52</td> <td>45.06</td> </tr> <tr> <td>Model || Event2PERSONX/Y (Sap et al., 2019)</td> <td>-</td> <td>9.24</td> <td>100</td> <td>8.22</td> <td>41.66</td> </tr> <tr> <td>Model || Event2PRE/POST (Sap et al., 2019)</td> <td>-</td> <td>9.93</td> <td>100</td> <td>7.38</td> <td>41.99</td> </tr> <tr> <td>Model || COMET (- pretrain)</td> <td>15.42</td> <td>13.88</td> <td>100</td> <td>7.25</td> <td>45.71</td> </tr> <tr> <td>Model || COMET</td> <td>11.14</td> <td>15.1</td> <td>100</td> <td>9.71</td> <td>51.2</td> </tr> </tbody></table>
Table 1
table_1
P19-1470
5
acl2019
4.2 Results. The BLEU-2 results in Table 1 indicate that COMET exceeds the performance of all baselines, achieving a 51% relative improvement over the top performing model of Sap et al. (2019). More interesting, however, is the result of the human evaluation, where COMET reported a statistically significant relative Avg performance increase of 18% over the top baseline, Event2IN(VOLUN). This performance increase is consistent, as well, with an improvement being observed across every relation type. In addition to the quality improvements, Table 1 shows that COMET produces more novel tuple objects than the baselines, as well.
[2, 1, 1, 1, 1]
['4.2 Results.', 'The BLEU-2 results in Table 1 indicate that COMET exceeds the performance of all baselines, achieving a 51% relative improvement over the top performing model of Sap et al. (2019).', 'More interesting, however, is the result of the human evaluation, where COMET reported a statistically significant relative Avg performance increase of 18% over the top baseline,Event2IN(VOLUN).', 'This performance increase is consistent, as well, with an improvement being observed across every relation type.', 'In addition to the quality improvements, Table 1 shows that COMET produces more novel tuple objects than the baselines, as well.']
[None, ['COMET', 'BLEU-2'], ['COMET', 'Event2(IN)VOLUN (Sap et al., 2019)'], ['COMET', 'N/T sro6', 'N/T o', 'N/U o'], ['COMET']]
1
P19-1470table_4
Effect of amount of training data on automatic evaluation of commonsense generations
2
[['% train data', '1% train'], ['% train data', '10% train'], ['% train data', '50% train'], ['% train data', 'FULL (- pretrain)'], ['% train data', 'FULL train']]
1
[['PPL'], ['BLEU-2'], ['N/T o'], ['N/U o']]
[['23.81', '5.08', '7.24', '49.36'], ['13.74', '12.72', '9.54', '58.34'], ['11.82', '13.97', '9.32', '50.37'], ['15.18', '13.22', '7.14', '44.55'], ['11.13', '14.34', '9.51', '50.05']]
column
['PPL', 'BLEU-2', 'N/T o', 'N/U o']
['% train data']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>PPL</th> <th>BLEU-2</th> <th>N/T o</th> <th>N/U o</th> </tr> </thead> <tbody> <tr> <td>% train data || 1% train</td> <td>23.81</td> <td>5.08</td> <td>7.24</td> <td>49.36</td> </tr> <tr> <td>% train data || 10% train</td> <td>13.74</td> <td>12.72</td> <td>9.54</td> <td>58.34</td> </tr> <tr> <td>% train data || 50% train</td> <td>11.82</td> <td>13.97</td> <td>9.32</td> <td>50.37</td> </tr> <tr> <td>% train data || FULL (- pretrain)</td> <td>15.18</td> <td>13.22</td> <td>7.14</td> <td>44.55</td> </tr> <tr> <td>% train data || FULL train</td> <td>11.13</td> <td>14.34</td> <td>9.51</td> <td>50.05</td> </tr> </tbody></table>
Table 4
table_4
P19-1470
6
acl2019
Efficiency of learning from seed tuples. Because not all domains will have large available commonsense KBs on which to train, we explore how varying the amount of training data available for learning affects the quality and novelty of the knowledge that is produced. Our results in Table 4 indicate that even with only 10% of the available training data, the model is still able to produce generations that are coherent, adequate, and novel. Using only 1% of the training data clearly diminishes the quality of the produced generations, with significantly lower observed results across both quality and novelty metrics. Interestingly, we note that training the model without pretrained weights performs comparably to training with 10% of the seed tuples, quantifying the impact of using pre-trained language representations.
[2, 2, 1, 1, 1]
['Efficiency of learning from seed tuples .', 'Because not all domains will have large available commonsense KBs on which to train, we explore how varying the amount of training data available for learning affects the quality and novelty of the knowledge that is produced.', 'Our results in Table 4 indicate that even with only 10% of the available training data, the model is still able to produce generations that are coherent, adequate, and novel.', 'Using only 1% of the training data clearly diminishes the quality of the produced generations, with significantly lower observed results across both quality and novelty metrics.', 'Interestingly, we note that training the model without pretrained weights performs comparably to training with 10% of the seed tuples, quantifying the impact of using pre-trained language representations.']
[None, None, None, ['1% train'], ['10% train']]
1
P19-1470table_6
ConceptNet generation Results
2
[['Model', 'LSTM - s'], ['Model', 'CKBG (Saito et al., 2018)'], ['Model', 'COMET (- pretrain)'], ['Model', 'COMET - RELTOK'], ['Model', 'COMET']]
1
[['PPL'], ['Score'], ['N/T sro'], ['N/T o'], ['Human']]
[['-', '60.83', '86.25', '7.83', '63.86'], ['-', '57.17', '86.25', '8.67', '53.95'], ['8.05', '89.25', '36.17', '6', '83.49'], ['4.39', '95.17', '56.42', '2.62', '92.11'], ['4.32', '95.25', '59.25', '3.75', '91.69']]
column
['PPL', 'Score', 'N/T sro', 'N/T o', 'Human']
['COMET']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>PPL</th> <th>Score</th> <th>N/T sro</th> <th>N/T o</th> <th>Human</th> </tr> </thead> <tbody> <tr> <td>Model || LSTM - s</td> <td>-</td> <td>60.83</td> <td>86.25</td> <td>7.83</td> <td>63.86</td> </tr> <tr> <td>Model || CKBG (Saito et al., 2018)</td> <td>-</td> <td>57.17</td> <td>86.25</td> <td>8.67</td> <td>53.95</td> </tr> <tr> <td>Model || COMET (- pretrain)</td> <td>8.05</td> <td>89.25</td> <td>36.17</td> <td>6</td> <td>83.49</td> </tr> <tr> <td>Model || COMET - RELTOK</td> <td>4.39</td> <td>95.17</td> <td>56.42</td> <td>2.62</td> <td>92.11</td> </tr> <tr> <td>Model || COMET</td> <td>4.32</td> <td>95.25</td> <td>59.25</td> <td>3.75</td> <td>91.69</td> </tr> </tbody></table>
Table 6
table_6
P19-1470
7
acl2019
5.2 Results. Quality. Our results indicate that high-quality knowledge can be generated by the model: the low perplexity scores in Table 6 indicate high model confidence in its predictions, while the high classifier score (95.25%) indicates that the KB completion model of Li et al. (2016) scores the generated tuples as correct in most of the cases. While adversarial generations could be responsible for this high score, a human evaluation (following the same design as for ATOMIC) scores 91.7% of greedily decoded tuples as correct. Randomly selected examples provided in Table 7 also point to the quality of knowledge produced by the model.
[2, 2, 1, 1, 0]
['5.2 Results .', 'Quality .', 'Our results indicate that high-quality knowledge can be generated by the model: the low perplexity scores in Table 6 indicate high model confidence in its predictions, while the high classifier score (95.25%) indicates that the KB completion model of Li et al. (2016) scores the generated tuples as correct in most of the cases.', 'While adversarial generations could be responsible for this high score, a human evaluation (following the same design as for ATOMIC) scores 91.7% of greedily decoded tuples as correct.', 'Randomly selected examples provided in Table 7 also point to the quality of knowledge produced by the model.']
[None, None, ['COMET', 'Score'], ['COMET', 'Human'], None]
1
P19-1478table_1
Results on WSC273 and its subsets. The comparison between each language model and its WSCR-tuned model is given. For each column, the better result of the two is in bold. The best result in the column overall is underlined. Results for the LM ensemble and Knowledge Hunter are taken from Trichelair et al. (2018). All models consistently improve their accuracy when fine-tuned on the WSCR dataset.
1
[['BERT_WIKI'], ['BERT_WIKI_WSCR'], ['BERT'], ['BERT_WSCR'], ['BERT-base'], ['BERT-base_WSCR'], ['GPT'], ['GPT_WSCR'], ['BERT_WIKI_WSCR_no_pairs'], ['BERT_WIKI_WSCR_pairs'], ['LM ensemble'], ['Knowledge Hunter']]
1
[['WSC273'], ['non-assoc.'], ['assoc.'], ['unswitched'], ['switched'], ['consist.'], ['WNLI']]
[['0.619', '0.597', '0.757', '0.573', '0.603', '0.389', '0.712'], ['0.725', '0.72', '0.757', '0.732', '0.71', '0.55', '0.747'], ['0.619', '0.602', '0.73', '0.595', '0.573', '0.458', '0.658'], ['0.714', '0.699', '0.811', '0.695', '0.702', '0.55', '0.719'], ['0.564', '0.551', '0.649', '0.527', '0.565', '0.443', '0.63'], ['0.623', '0.606', '0.73', '0.611', '0.634', '0.443', '0.705'], ['0.553', '0.525', '0.73', '0.595', '0.519', '0.466', '–'], ['0.674', '0.653', '0.811', '0.664', '0.58', '0.641', '–'], ['0.663', '0.669', '0.622', '0.672', '0.641', '0.511', '–'], ['0.703', '0.695', '0.757', '0.718', '0.71', '0.565', '–'], ['0.637', '0.606', '0.838', '0.634', '0.534', '0.443', '–'], ['0.571', '0.583', '0.5', '0.588', '0.588', '0.901', '–']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy']
['BERT_WIKI_WSCR']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>WSC273</th> <th>non-assoc.</th> <th>assoc.</th> <th>unswitched</th> <th>switched</th> <th>consist.</th> <th>WNLI</th> </tr> </thead> <tbody> <tr> <td>BERT_WIKI</td> <td>0.619</td> <td>0.597</td> <td>0.757</td> <td>0.573</td> <td>0.603</td> <td>0.389</td> <td>0.712</td> </tr> <tr> <td>BERT_WIKI_WSCR</td> <td>0.725</td> <td>0.72</td> <td>0.757</td> <td>0.732</td> <td>0.71</td> <td>0.55</td> <td>0.747</td> </tr> <tr> <td>BERT</td> <td>0.619</td> <td>0.602</td> <td>0.73</td> <td>0.595</td> <td>0.573</td> <td>0.458</td> <td>0.658</td> </tr> <tr> <td>BERT_WSCR</td> <td>0.714</td> <td>0.699</td> <td>0.811</td> <td>0.695</td> <td>0.702</td> <td>0.55</td> <td>0.719</td> </tr> <tr> <td>BERT-base</td> <td>0.564</td> <td>0.551</td> <td>0.649</td> <td>0.527</td> <td>0.565</td> <td>0.443</td> <td>0.63</td> </tr> <tr> <td>BERT-base_WSCR</td> <td>0.623</td> <td>0.606</td> <td>0.73</td> <td>0.611</td> <td>0.634</td> <td>0.443</td> <td>0.705</td> </tr> <tr> <td>GPT</td> <td>0.553</td> <td>0.525</td> <td>0.73</td> <td>0.595</td> <td>0.519</td> <td>0.466</td> <td>–</td> </tr> <tr> <td>GPT_WSCR</td> <td>0.674</td> <td>0.653</td> <td>0.811</td> <td>0.664</td> <td>0.58</td> <td>0.641</td> <td>–</td> </tr> <tr> <td>BERT_WIKI_WSCR_no_pairs</td> <td>0.663</td> <td>0.669</td> <td>0.622</td> <td>0.672</td> <td>0.641</td> <td>0.511</td> <td>–</td> </tr> <tr> <td>BERT_WIKI_WSCR_pairs</td> <td>0.703</td> <td>0.695</td> <td>0.757</td> <td>0.718</td> <td>0.71</td> <td>0.565</td> <td>–</td> </tr> <tr> <td>LM ensemble</td> <td>0.637</td> <td>0.606</td> <td>0.838</td> <td>0.634</td> <td>0.534</td> <td>0.443</td> <td>–</td> </tr> <tr> <td>Knowledge Hunter</td> <td>0.571</td> <td>0.583</td> <td>0.5</td> <td>0.588</td> <td>0.588</td> <td>0.901</td> <td>–</td> </tr> </tbody></table>
Table 1
table_1
P19-1478
5
acl2019
We evaluate all models on WSC273 and the WNLI test dataset, as well as the various subsets of WSC273, as described in Section 2. The results are reported in Table 1 and will be discussed next. We note that models that are fine-tuned on the WSCR dataset consistently outperform their non-fine-tuned counterparts. The BERT_WIKI_WSCR model outperforms other language models on 5 out of 6 sets that they are compared on. In comparison to the LM ensemble by Trinh and Le (2018), the accuracy is more consistent between associative and non-associative subsets and less affected by the switched parties. However, it remains fairly inconsistent, which is a general property of LMs.
[1, 1, 1, 1, 1, 1]
['We evaluate all models on WSC273 and the WNLI test dataset, as well as the various subsets of WSC273, as described in Section 2.', 'The results are reported in Table 1 and will be discussed next.', 'We note that models that are fine-tuned on the WSCR dataset consistently outperform their non-fine-tuned counterparts.', 'The BERT_WIKI_WSCR model outperforms other language models on 5 out of 6 sets that they are compared on.', 'In comparison to the LM ensemble by Trinh and Le (2018), the accuracy is more consistent between associative and non-associative subsets and less affected by the switched parties.', 'However, it remains fairly inconsistent, which is a general property of LMs.']
[['WSC273', 'WNLI'], None, ['BERT_WIKI_WSCR', 'BERT_WSCR', 'BERT-base_WSCR', 'GPT_WSCR', 'BERT_WIKI_WSCR_no_pairs', 'BERT_WIKI_WSCR_pairs'], ['BERT_WIKI_WSCR'], ['LM ensemble', 'assoc.', 'non-assoc.', 'switched'], ['LM ensemble']]
1
P19-1479table_4
Comparison between our graph2seq model and baseline models for the topic of entertainment. T, C, B, K represent title, content, bag of words, and keywords, respectively. Total is the average of the other three metrics.
2
[['Models', 'seq2seq-T (Qin et al., 2018)'], ['Models', 'seq2seq-C (Qin et al., 2018)'], ['Models', 'seq2seq-TC (Qin et al., 2018)'], ['Models', 'self-attention-B (Chen et al., 2018)'], ['Models', 'self-attention-K (Chen et al., 2018)'], ['Models', 'hierarchical-attention (Yang et al., 2016)'], ['Models', 'graph2seq (proposed)']]
1
[['Coherence'], ['Informativeness'], ['Fluency'], ['Total']]
[['5.38', '3.7', '8.22', '5.77'], ['4.87', '3.72', '8.53', '5.71'], ['3.28', '4.02', '8.68', '5.33'], ['6.72', '5.05', '8.27', '6.68'], ['6.62', '4.73', '8.28', '6.54'], ['1.38', '2.97', '8.65', '4.33'], ['8.23', '5.27', '8.08', '7.19']]
column
['Coherence', 'Informativeness', 'Fluency', 'Total']
['graph2seq (proposed)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Coherence</th> <th>Informativeness</th> <th>Fluency</th> <th>Total</th> </tr> </thead> <tbody> <tr> <td>Models || seq2seq-T (Qin et al., 2018)</td> <td>5.38</td> <td>3.7</td> <td>8.22</td> <td>5.77</td> </tr> <tr> <td>Models || seq2seq-C (Qin et al., 2018)</td> <td>4.87</td> <td>3.72</td> <td>8.53</td> <td>5.71</td> </tr> <tr> <td>Models || seq2seq-TC (Qin et al., 2018)</td> <td>3.28</td> <td>4.02</td> <td>8.68</td> <td>5.33</td> </tr> <tr> <td>Models || self-attention-B (Chen et al., 2018)</td> <td>6.72</td> <td>5.05</td> <td>8.27</td> <td>6.68</td> </tr> <tr> <td>Models || self-attention-K (Chen et al., 2018)</td> <td>6.62</td> <td>4.73</td> <td>8.28</td> <td>6.54</td> </tr> <tr> <td>Models || hierarchical-attention (Yang et al., 2016)</td> <td>1.38</td> <td>2.97</td> <td>8.65</td> <td>4.33</td> </tr> <tr> <td>Models || graph2seq (proposed)</td> <td>8.23</td> <td>5.27</td> <td>8.08</td> <td>7.19</td> </tr> </tbody></table>
Table 4
table_4
P19-1479
7
acl2019
4.5 Results. In Table 4, we show the results of different baseline models and our graph2seq model for the topic of entertainment. From the results we can see that our proposed graph2seq model beats all the baselines in both coherence and informativeness. Our model receives much higher scores in coherence compared with all other baseline models.
[2, 1, 1, 1]
['4.5 Results.', 'In Table 4, we show the results of different baseline models and our graph2seq model for the topic of entertainment.', 'From the results we can see that our proposed graph2seq model beats all the baselines in both coherence and informativeness.', 'Our model receives much higher scores in coherence compared with all other baseline models.']
[None, ['graph2seq (proposed)'], ['graph2seq (proposed)', 'Coherence', 'Informativeness'], ['graph2seq (proposed)', 'Coherence']]
1
P19-1481table_2
BLEU, METEOR and ROUGE-L scores on the test set for Hindi and Chinese question generation. Best results for each metric (column) are highlighted in bold.
4
[['Language', 'Hindi', 'Model', 'Transformer'], ['Language', 'Hindi', 'Model', 'Transformer+pretraining'], ['Language', 'Hindi', 'Model', 'CLQG'], ['Language', 'Hindi', 'Model', 'CLQG+parallel'], ['Language', 'Chinese', 'Model', 'Transformer'], ['Language', 'Chinese', 'Model', 'Transformer+pretraining'], ['Language', 'Chinese', 'Model', 'CLQG'], ['Language', 'Chinese', 'Model', 'CLQG+parallel']]
1
[['BLEU-1'], ['BLEU-2'], ['BLEU-3'], ['BLEU-4'], ['METEOR'], ['ROUGE-L']]
[['28.414', '18.493', '12.356', '8.644', '23.803', '29.893'], ['41.059', '29.294', '21.403', '16.047', '28.159', '39.395'], ['41.034', '29.792', '22.038', '16.598', '27.581', '39.852'], ['42.281', '32.074', '25.182', '20.242', '29.143', '40.643'], ['25.52', '9.22', '5.14', '3.25', '7.64', '27.4'], ['30.38', '14.01', '8.37', '5.18', '10.46', '32.71'], ['30.69', '14.51', '8.82', '5.39', '10.44', '31.82'], ['30.3', '13.93', '8.43', '5.51', '10.26', '31.58']]
column
['BLEU-1', 'BLEU-2', 'BLEU-3', 'BLEU-4', 'METEOR', 'ROUGE-L']
['CLQG+parallel']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU-1</th> <th>BLEU-2</th> <th>BLEU-3</th> <th>BLEU-4</th> <th>METEOR</th> <th>ROUGE-L</th> </tr> </thead> <tbody> <tr> <td>Language || Hindi || Model || Transformer</td> <td>28.414</td> <td>18.493</td> <td>12.356</td> <td>8.644</td> <td>23.803</td> <td>29.893</td> </tr> <tr> <td>Language || Hindi || Model || Transformer+pretraining</td> <td>41.059</td> <td>29.294</td> <td>21.403</td> <td>16.047</td> <td>28.159</td> <td>39.395</td> </tr> <tr> <td>Language || Hindi || Model || CLQG</td> <td>41.034</td> <td>29.792</td> <td>22.038</td> <td>16.598</td> <td>27.581</td> <td>39.852</td> </tr> <tr> <td>Language || Hindi || Model || CLQG+parallel</td> <td>42.281</td> <td>32.074</td> <td>25.182</td> <td>20.242</td> <td>29.143</td> <td>40.643</td> </tr> <tr> <td>Language || Chinese || Model || Transformer</td> <td>25.52</td> <td>9.22</td> <td>5.14</td> <td>3.25</td> <td>7.64</td> <td>27.4</td> </tr> <tr> <td>Language || Chinese || Model || Transformer+pretraining</td> <td>30.38</td> <td>14.01</td> <td>8.37</td> <td>5.18</td> <td>10.46</td> <td>32.71</td> </tr> <tr> <td>Language || Chinese || Model || CLQG</td> <td>30.69</td> <td>14.51</td> <td>8.82</td> <td>5.39</td> <td>10.44</td> <td>31.82</td> </tr> <tr> <td>Language || Chinese || Model || CLQG+parallel</td> <td>30.3</td> <td>13.93</td> <td>8.43</td> <td>5.51</td> <td>10.26</td> <td>31.58</td> </tr> </tbody></table>
Table 2
table_2
P19-1481
6
acl2019
CLQG+parallel: The CLQG model undergoes further training using a parallel corpus (with primary language as source and secondary language as target). After unsupervised pretraining, the encoder and decoder weights are fine-tuned using the parallel corpus. This fine-tuning further refines the language models for both languages and helps enforce the shared latent space across both languages. We observe in Table 2 that CLQG+parallel outperforms all the other models for Hindi. For Chinese, parallel fine-tuning does not give significant improvements over CLQG; this could be attributed to the parallel corpus being smaller in size (when compared to Hindi) and domain-specific (i.e. the news domain).
[2, 2, 2, 1, 2]
['CLQG+parallel: The CLQG model undergoes further training using a parallel corpus (with primary language as source and secondary language as target).', 'After unsupervised pretraining, the encoder and decoder weights are fine-tuned using the parallel corpus.', 'This fine-tuning further refines the language models for both languages and helps enforce the shared latent space across both languages.', 'We observe in Table 2 that CLQG+parallel outperforms all the other models for Hindi.', 'For Chinese, parallel fine-tuning does not give significant improvements over CLQG; this could be attributed to the parallel corpus being smaller in size (when compared to Hindi) and domain-specific (i.e.the news domain).']
[['CLQG+parallel'], None, None, ['CLQG+parallel'], ['CLQG+parallel', 'CLQG']]
1
P19-1482table_4
Automatic evaluation results for classification accuracy and BLEU with human reference. Human denotes human references. Note that Acc for human references is relatively low; thus, we do not consider it a valid metric for comparison.
1
[['CrossAligned'], ['MultiDecoder'], ['StyleEmbedding'], ['TemplateBased'], ['DeleteOnly'], ['Del-Ret-Gen'], ['BackTranslate'], ['UnpairedRL'], ['UnsuperMT'], ['Human'], ['Point-Then-Operate']]
2
[['Yelp', 'Acc'], ['Yelp', 'BLEU'], ['Amazon', 'Acc'], ['Amazon', 'BLEU']]
[['74.7', '9.06', '75.1', '1.9'], ['50.6', '14.54', '69.9', '9.07'], ['8.4', '21.06', '38.2', '15.07'], ['81.2', '22.57', '64.3', '34.79'], ['86', '14.64', '47', '33'], ['88.6', '15.96', '51', '30.09'], ['94.6', '2.46', '76.7', '1.04'], ['57.5', '18.81', '56.3', '15.93'], ['97.8', '22.75', '72.4', '33.95'], ['74.7', '-', '43.2', '-'], ['91.5', '29.86', '40.2', '41.86']]
column
['Acc', 'BLEU', 'Acc', 'BLEU']
['Point-Then-Operate']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Yelp || Acc</th> <th>Yelp || BLEU</th> <th>Amazon || Acc</th> <th>Amazon || BLEU</th> </tr> </thead> <tbody> <tr> <td>CrossAligned</td> <td>74.7</td> <td>9.06</td> <td>75.1</td> <td>1.9</td> </tr> <tr> <td>MultiDecoder</td> <td>50.6</td> <td>14.54</td> <td>69.9</td> <td>9.07</td> </tr> <tr> <td>StyleEmbedding</td> <td>8.4</td> <td>21.06</td> <td>38.2</td> <td>15.07</td> </tr> <tr> <td>TemplateBased</td> <td>81.2</td> <td>22.57</td> <td>64.3</td> <td>34.79</td> </tr> <tr> <td>DeleteOnly</td> <td>86</td> <td>14.64</td> <td>47</td> <td>33</td> </tr> <tr> <td>Del-Ret-Gen</td> <td>88.6</td> <td>15.96</td> <td>51</td> <td>30.09</td> </tr> <tr> <td>BackTranslate</td> <td>94.6</td> <td>2.46</td> <td>76.7</td> <td>1.04</td> </tr> <tr> <td>UnpairedRL</td> <td>57.5</td> <td>18.81</td> <td>56.3</td> <td>15.93</td> </tr> <tr> <td>UnsuperMT</td> <td>97.8</td> <td>22.75</td> <td>72.4</td> <td>33.95</td> </tr> <tr> <td>Human</td> <td>74.7</td> <td>-</td> <td>43.2</td> <td>-</td> </tr> <tr> <td>Point-Then-Operate</td> <td>91.5</td> <td>29.86</td> <td>40.2</td> <td>41.86</td> </tr> </tbody></table>
Table 4
table_4
P19-1482
7
acl2019
5.4 Evaluation Results. Table 4 shows the results of automatic evaluation. It should be noted that the classification accuracy for human reference is relatively low (74.7% on Yelp and 43.2% on Amazon); thus, we do not consider it as a valid metric for comparison. For BLEU score, our method outperforms recent systems by a large margin, which shows that our outputs have higher overlap with reference sentences provided by humans.
[0, 1, 1, 1]
['5.4 Evaluation Results .', 'Table 4 shows the results of automatic evaluation.', 'It should be noted that the classification accuracy for human reference is relatively low (74.7% on Yelp and 43.2% on Amazon); thus, we do not consider it as a valid metric for comparison.', 'For BLEU score, our method outperforms recent systems by a large margin, which shows that our outputs have higher overlap with reference sentences provided by humans.']
[None, None, ['Acc', 'Human', 'Yelp', 'Amazon'], ['BLEU', 'Point-Then-Operate']]
1
P19-1487table_3
Test accuracy on CQA v1.0. The addition of CoS-E-open-ended during training dramatically improves performance. Replacing CoS-E during training with CAGE reasoning during both training and inference leads to an absolute gain of 10% over the previous state-of-the-art.
2
[['Method', 'RC (Talmor et al., 2019)'], ['Method', 'GPT (Talmor et al., 2019)'], ['Method', 'CoS-E-open-ended'], ['Method', 'CAGE-reasoning'], ['Method', 'Human (Talmor et al., 2019)']]
1
[['Accuracy (%)']]
[['47.7'], ['54.8'], ['60.2'], ['64.7'], ['95.3']]
column
['Accuracy (%)']
['CoS-E-open-ended', 'CAGE-reasoning']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy (%)</th> </tr> </thead> <tbody> <tr> <td>Method || RC (Talmor et al., 2019)</td> <td>47.7</td> </tr> <tr> <td>Method || GPT (Talmor et al., 2019)</td> <td>54.8</td> </tr> <tr> <td>Method || CoS-E-open-ended</td> <td>60.2</td> </tr> <tr> <td>Method || CAGE-reasoning</td> <td>64.7</td> </tr> <tr> <td>Method || Human (Talmor et al., 2019)</td> <td>95.3</td> </tr> </tbody></table>
Table 3
table_3
P19-1487
6
acl2019
Table 3 shows the results obtained on the CQA test split. We report our two best models that represent using human explanations (CoS-E-openended) for training only and using language model explanations (CAGE-reasoning) during both train and test. We compare our approaches to the best reported models for the CQA task (Talmor et al.,2019). We observe that using CoS-E-open-ended during training improves the state-of-the-art by approximately 6%. Talmor et al. (2019) experimented with using Google search of question + answer choice for each example in the dataset and collected 100 top snippets per answer choice to be used as context for their Reading Comprehension (RC) model. They found that providing such extra data does not improve accuracy. On the other hand, using CAGE-reasoning resulted in a gain of 10% accuracy over the previous state-of-the-art. This suggests that our CoS-E-open-ended and CAGEreasoning explanations provide far more useful information than what can be achieved through simple heuristics like using Google search to find relevant snippets.
[1, 1, 2, 1, 2, 2, 1, 1]
['Table 3 shows the results obtained on the CQA test split.', 'We report our two best models that represent using human explanations (CoS-E-openended) for training only and using language model explanations (CAGE-reasoning) during both train and test.', 'We compare our approaches to the best reported models for the CQA task (Talmor et al.,2019).', 'We observe that using CoS-E-open-ended during training improves the state-of-the-art by approximately 6%.', 'Talmor et al. (2019) experimented with using Google search of question + answer choice for each example in the dataset and collected 100 top snippets per answer choice to be used as context for their Reading Comprehension (RC) model.', 'They found that providing such extra data does not improve accuracy.', 'On the other hand, using CAGE-reasoning resulted in a gain of 10% accuracy over the previous state-of-the-art.', 'This suggests that our CoS-E-open-ended and CAGEreasoning explanations provide far more useful information than what can be achieved through simple heuristics like using Google search to find relevant snippets.']
[None, ['CoS-E-open-ended', 'CAGE-reasoning'], None, ['CoS-E-open-ended'], ['RC (Talmor et al., 2019)', 'GPT (Talmor et al., 2019)', 'Human (Talmor et al., 2019)'], ['RC (Talmor et al., 2019)', 'GPT (Talmor et al., 2019)', 'Human (Talmor et al., 2019)'], ['CAGE-reasoning'], ['CoS-E-open-ended', 'CAGE-reasoning']]
1
P19-1487table_4
Oracle results on CQA dev-random-split using different variants of CoS-E for both training and validation. * indicates CoS-E-open-ended used during both training and validation to contrast with CoS-E-openended used only during training in Table 2.
2
[['Method', 'CoS-E-selected w/o ques'], ['Method', 'CoS-E-limited-open-ended'], ['Method', 'CoS-E-selected'], ['Method', 'CoS-E-open-ended w/o ques'], ['Method', 'CoS-E-open-ended*']]
1
[['Accuracy (%)']]
[['53'], ['67.6'], ['70'], ['84.5'], ['89.8']]
column
['Accuracy (%)']
['CoS-E-open-ended*']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy (%)</th> </tr> </thead> <tbody> <tr> <td>Method || CoS-E-selected w/o ques</td> <td>53</td> </tr> <tr> <td>Method || CoS-E-limited-open-ended</td> <td>67.6</td> </tr> <tr> <td>Method || CoS-E-selected</td> <td>70</td> </tr> <tr> <td>Method || CoS-E-open-ended w/o ques</td> <td>84.5</td> </tr> <tr> <td>Method || CoS-E-open-ended*</td> <td>89.8</td> </tr> </tbody></table>
Table 4
table_4
P19-1487
6
acl2019
Table 4 also contains results that use only the explanation and exclude the original question from CQA, denoted by ‘w/o question’. These variants also use explanation during both train and validation. For these experiments we give the explanation in place of the question followed by the answer choices as input to the model. When the explanation consists of words humans selected as justification for the answer (CoS-E-selected), the model was able to obtain 53% in contrast to the 85% achieved by the open-ended human explanations (CoS-E-open-ended). Adding the question boosts performance for CoS-E-selected to 70%, again falling short of the almost 90% achieved by CoS-E-open-ended. We therefore conclude that our full, open-ended CoS-E explanations supply a significant source of information beyond simply directing the model towards the most useful information already in the question.
[1, 2, 2, 1, 1, 1]
['Table 4 also contains results that use only the explanation and exclude the original question from CQA denoted by ‘w/o question’.', 'These variants also use explanation during both train and validation.', 'For these experiments we give the explanation in place of the question followed by the answer choices as input to the model.', 'When the explanation consists of words humans selected as justification for the answer (CoS-E-selected), the model was able to obtain 53% in contrast to the 85% achieved by the open-ended human explanations (CoS-E-open-ended).', 'Adding the question boosts performance for CoS-E-selected to 70%, again falling short of almost 90% achieved by CoS-E-open-ended.', 'We conclude then that our full, open-ended CoS-E thus supply a significant source of information beyond simply directing the model towards the most useful information already in the question.']
[None, None, None, ['CoS-E-selected', 'CoS-E-open-ended*'], ['CoS-E-selected', 'CoS-E-open-ended*'], ['CoS-E-open-ended*']]
1
P19-1493table_6
M-BERT’s POS accuracy on the code-switched Hindi/English dataset from Bhat et al. (2018), on script-corrected and original (transliterated) tokens, and comparisons to existing work on code-switch POS.
2
[['Train on monolingual HI+EN', 'M-BERT'], ['Train on monolingual HI+EN', 'Ball and Garrette (2018)'], ['Train on code-switched HI/EN', 'M-BERT'], ['Train on code-switched HI/EN', 'Bhat et al. (2018)']]
1
[['Corrected'], ['Transliterated']]
[['86.59', '50.41'], ['-', '77.4'], ['90.56', '85.64'], ['-', '90.53']]
column
['accuracy', 'accuracy']
['Train on monolingual HI+EN', 'Train on code-switched HI/EN']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Corrected</th> <th>Transliterated</th> </tr> </thead> <tbody> <tr> <td>Train on monolingual HI+EN || M-BERT</td> <td>86.59</td> <td>50.41</td> </tr> <tr> <td>Train on monolingual HI+EN || Ball and Garrette (2018)</td> <td>-</td> <td>77.4</td> </tr> <tr> <td>Train on code-switched HI/EN || M-BERT</td> <td>90.56</td> <td>85.64</td> </tr> <tr> <td>Train on code-switched HI/EN || Bhat et al. (2018)</td> <td>-</td> <td>90.53</td> </tr> </tbody></table>
Table 6
table_6
P19-1493
4
acl2019
We test M-BERT on the CS Hindi/English UD corpus from Bhat et al. (2018), which provides texts in two formats: transliterated, where Hindi words are written in Latin script, and corrected, where annotators have converted them back to Devanagari script. Table 6 shows the results for models fine-tuned using a combination of monolingual Hindi and English, and using the CS training set (both fine-tuning on the script-corrected version of the corpus as well as the transliterated version).
[2, 1]
['We test M-BERT on the CS Hindi/English UD corpus from Bhat et al. (2018), which provides texts in two formats: transliterated, where Hindi words are written in Latin script, and corrected, where annotators have converted them back to Devanagari script.', 'Table 6 shows the results for models fine-tuned using a combination of monolingual Hindi and English, and using the CS training set (both fine-tuning on the script-corrected version of the corpus as well as the transliterated version).']
[['M-BERT', 'Train on code-switched HI/EN', 'Transliterated', 'Corrected'], ['Train on monolingual HI+EN', 'Train on code-switched HI/EN', 'Corrected', 'Transliterated']]
1
P19-1495table_7
Complaint prediction results using the original data set and distantly supervised data. All models are based on logistic regression with bag-of-words and Part-of-Speech tag features.
1
[['Most Frequent Class'], ['LR-All Features – Original Data'], ['Dist. Supervision + Pooling'], ['Dist. Supervision + EasyAdapt']]
2
[['Model', 'Acc'], ['Model', 'F1'], ['Model', 'AUC']]
[['64.2', '39.1', '0.5'], ['80.5', '78', '0.873'], ['77.2', '75.7', '0.853'], ['81.2', '79', '0.885']]
column
['Acc', 'F1', 'AUC']
['Dist. Supervision + EasyAdapt']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Model || Acc</th> <th>Model || F1</th> <th>Model || AUC</th> </tr> </thead> <tbody> <tr> <td>Most Frequent Class</td> <td>64.2</td> <td>39.1</td> <td>0.5</td> </tr> <tr> <td>LR-All Features – Original Data</td> <td>80.5</td> <td>78</td> <td>0.873</td> </tr> <tr> <td>Dist. Supervision + Pooling</td> <td>77.2</td> <td>75.7</td> <td>0.853</td> </tr> <tr> <td>Dist. Supervision + EasyAdapt</td> <td>81.2</td> <td>79</td> <td>0.885</td> </tr> </tbody></table>
Table 7
table_7
P19-1495
8
acl2019
Results presented in Table 7 show that the domain adaptation approach further boosts F1 by 1 point to 79 (t-test, p<0.5) and ROC AUC by 0.012. However, simply pooling the data actually hurts predictive performance leading to a drop of more than 2 points in F1.
[1, 1]
['Results presented in Table 7 show that the domain adaptation approach further boosts F1 by 1 point to 79 (t-test, p<0.5) and ROC AUC by 0.012.', 'However, simply pooling the data actually hurts predictive performance leading to a drop of more than 2 points in F1.']
[['Dist. Supervision + EasyAdapt', 'F1', 'AUC'], ['Dist. Supervision + Pooling', 'F1']]
1
P19-1503table_1
Experimental results of abstractive summarization on the Gigaword test set with the ROUGE metric. The top section is prefix baselines, the second section is recent unsupervised methods and ours, the third section is the state-of-the-art supervised method along with our implementation of a seq-to-seq model with attention, and the bottom section is our model’s oracle performance. The result for Wang and Lee (2018) is by author correspondence (scores differ because of evaluation setup). For another unsupervised work, Fevry and Phang (2018), we attempted to replicate it on our test set but were unable to obtain results better than the baselines.
2
[['Model', 'Lead-75C'], ['Model', 'Lead-8'], ['Model', 'Schumann (2018)'], ['Model', 'Wang and Lee (2018)'], ['Model', 'Contextual Match'], ['Model', 'Cao et al. (2018)'], ['Model', 'seq2seq'], ['Model', 'Contextual Oracle']]
1
[['R1'], ['R2'], ['RL']]
[['23.69', '7.93', '21.5'], ['21.3', '7.34', '19.94'], ['22.19', '4.56', '19.88'], ['27.09', '9.86', '24.97'], ['26.48', '10.05', '24.41'], ['37.04', '19.03', '34.46'], ['33.5', '15.85', '31.44'], ['37.03', '15.46', '33.23']]
column
['R1', 'R2', 'RL']
['Contextual Match', 'Contextual Oracle']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>R1</th> <th>R2</th> <th>RL</th> </tr> </thead> <tbody> <tr> <td>Model || Lead-75C</td> <td>23.69</td> <td>7.93</td> <td>21.5</td> </tr> <tr> <td>Model || Lead-8</td> <td>21.3</td> <td>7.34</td> <td>19.94</td> </tr> <tr> <td>Model || Schumann (2018)</td> <td>22.19</td> <td>4.56</td> <td>19.88</td> </tr> <tr> <td>Model || Wang and Lee (2018)</td> <td>27.09</td> <td>9.86</td> <td>24.97</td> </tr> <tr> <td>Model || Contextual Match</td> <td>26.48</td> <td>10.05</td> <td>24.41</td> </tr> <tr> <td>Model || Cao et al. (2018)</td> <td>37.04</td> <td>19.03</td> <td>34.46</td> </tr> <tr> <td>Model || seq2seq</td> <td>33.5</td> <td>15.85</td> <td>31.44</td> </tr> <tr> <td>Model || Contextual Oracle</td> <td>37.03</td> <td>15.46</td> <td>33.23</td> </tr> </tbody></table>
Table 1
table_1
P19-1503
4
acl2019
The automatic evaluation scores are presented in Table 1. For abstractive sentence summarization, we report the ROUGE F1 scores compared with baselines and previous unsupervised methods. Our method outperforms commonly used prefix baselines for this task, which take the first 75 characters or 8 words of the source as a summary. Our system achieves comparable results to Wang and Lee (2018), a system based on both GANs and reinforcement training. Note that the GAN-based system needs both source and target sentences for training (they are unpaired), whereas our method only needs the target domain sentences for a simple language model. In Table 1, we also list the scores of the state-of-the-art supervised model, an attention-based seq-to-seq model of our own implementation, as well as the oracle scores of our method obtained by choosing the best summary among all finished hypotheses from beam search.
[1, 1, 1, 1, 2, 1]
['The automatic evaluation scores are presented in Table 1.', 'For abstractive sentence summarization, we report the ROUGE F1 scores compared with baselines and previous unsupervised methods.', 'Our method outperforms commonly used prefix baselines for this task which take the first 75 characters or 8 words of the source as a summary.', 'Our system achieves comparable results to Wang and Lee (2018) a system based on both GANs and reinforcement training.', 'Note that the GAN-based system needs both source and target sentences for training (they are unpaired), whereas our method only needs the target domain sentences for a simple language model.', 'In Table 1, we also list scores of the state-of-the-art supervised model, an attention based seq-to-seq model of our own implementation, as well as the oracle scores of our method obtained by choosing the best summary among all finished hypothesis from beam search.']
[None, ['R1', 'R2', 'RL'], ['Contextual Match', 'Lead-75C', 'Lead-8'], ['Contextual Match', 'Wang and Lee (2018)'], None, ['seq2seq', 'Contextual Oracle']]
1
P19-1514table_3
Performance comparison between our model and three baselines on four frequent attributes. For the baselines, only the performance on AE-110K is reported since they do not scale up to a large set of attributes, while for our model the performances on both AE-110K and AE-650K are reported.
4
[['Attributes', 'Brand Name', 'Models', 'BiLSTM'], ['Attributes', 'Brand Name', 'Models', 'BiLSTM-CRF'], ['Attributes', 'Brand Name', 'Models', 'OpenTag'], ['Attributes', 'Brand Name', 'Models', 'Our model-110k'], ['Attributes', 'Brand Name', 'Models', 'Our model-650k'], ['Attributes', 'Material', 'Models', 'BiLSTM'], ['Attributes', 'Material', 'Models', 'BiLSTM-CRF'], ['Attributes', 'Material', 'Models', 'Opentag'], ['Attributes', 'Material', 'Models', 'Our model-110k'], ['Attributes', 'Material', 'Models', 'Ourmodel-650k'], ['Attributes', 'Color', 'Models', 'BiLSTM'], ['Attributes', 'Color', 'Models', 'BiLSTM-CRF'], ['Attributes', 'Color', 'Models', 'Opentag'], ['Attributes', 'Color', 'Models', 'Our model-110k'], ['Attributes', 'Color', 'Models', 'Our model-650k'], ['Attributes', 'Category', 'Models', 'BiLSTM'], ['Attributes', 'Category', 'Models', 'BiLSTM-CRF'], ['Attributes', 'Category', 'Models', 'Opentag'], ['Attributes', 'Category', 'Models', 'Our model-110k'], ['Attributes', 'Category', 'Models', 'Our model-650k']]
1
[['P (%)'], ['R (%)'], ['F1 (%)']]
[['95.08', '96.81', '95.94'], ['95.45', '97.17', '96.3'], ['95.18', '97.55', '96.35'], ['97.21', '96.68', '96.94'], ['96.94', '97.14', '97.04'], ['78.26', '78.54', '78.4'], ['77.15', '78.12', '77.63'], ['78.69', '78.62', '78.65'], ['82.76', '83.57', '83.16'], ['83.3', '82.94', '83.12'], ['68.08', '68', '68.04'], ['68.13', '67.46', '67.79'], ['71.19', '70.5', '70.84'], ['75.11', '72.61', '73.84'], ['77.55', '72.8', '75.1'], ['82.74', '78.4', '80.51'], ['81.57', '79.94', '80.75'], ['82.74', '80.63', '81.67'], ['84.11', '80.8', '82.42'], ['88.11', '81.79', '84.83']]
column
['P (%)', 'R (%)', 'F1 (%)']
['Our model-110k', 'Our model-650k']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P (%)</th> <th>R (%)</th> <th>F1 (%)</th> </tr> </thead> <tbody> <tr> <td>Attributes || Brand Name || Models || BiLSTM</td> <td>95.08</td> <td>96.81</td> <td>95.94</td> </tr> <tr> <td>Attributes || Brand Name || Models || BiLSTM-CRF</td> <td>95.45</td> <td>97.17</td> <td>96.3</td> </tr> <tr> <td>Attributes || Brand Name || Models || OpenTag</td> <td>95.18</td> <td>97.55</td> <td>96.35</td> </tr> <tr> <td>Attributes || Brand Name || Models || Our model-110k</td> <td>97.21</td> <td>96.68</td> <td>96.94</td> </tr> <tr> <td>Attributes || Brand Name || Models || Our model-650k</td> <td>96.94</td> <td>97.14</td> <td>97.04</td> </tr> <tr> <td>Attributes || Material || Models || BiLSTM</td> <td>78.26</td> <td>78.54</td> <td>78.4</td> </tr> <tr> <td>Attributes || Material || Models || BiLSTM-CRF</td> <td>77.15</td> <td>78.12</td> <td>77.63</td> </tr> <tr> <td>Attributes || Material || Models || Opentag</td> <td>78.69</td> <td>78.62</td> <td>78.65</td> </tr> <tr> <td>Attributes || Material || Models || Our model-110k</td> <td>82.76</td> <td>83.57</td> <td>83.16</td> </tr> <tr> <td>Attributes || Material || Models || Ourmodel-650k</td> <td>83.3</td> <td>82.94</td> <td>83.12</td> </tr> <tr> <td>Attributes || Color || Models || BiLSTM</td> <td>68.08</td> <td>68</td> <td>68.04</td> </tr> <tr> <td>Attributes || Color || Models || BiLSTM-CRF</td> <td>68.13</td> <td>67.46</td> <td>67.79</td> </tr> <tr> <td>Attributes || Color || Models || Opentag</td> <td>71.19</td> <td>70.5</td> <td>70.84</td> </tr> <tr> <td>Attributes || Color || Models || Our model-110k</td> <td>75.11</td> <td>72.61</td> <td>73.84</td> </tr> <tr> <td>Attributes || Color || Models || Our model-650k</td> <td>77.55</td> <td>72.8</td> <td>75.1</td> </tr> <tr> <td>Attributes || Category || Models || BiLSTM</td> <td>82.74</td> <td>78.4</td> <td>80.51</td> </tr> <tr> <td>Attributes || Category || Models || BiLSTM-CRF</td> <td>81.57</td> <td>79.94</td> <td>80.75</td> </tr> <tr> <td>Attributes || Category || Models || Opentag</td> <td>82.74</td> <td>80.63</td> <td>81.67</td> </tr> <tr> <td>Attributes || Category || Models || Our model-110k</td> <td>84.11</td> <td>80.8</td> <td>82.42</td> </tr> <tr> <td>Attributes || Category || Models || Our model-650k</td> <td>88.11</td> <td>81.79</td> <td>84.83</td> </tr> </tbody></table>
Table 3
table_3
P19-1514
6
acl2019
5.1 Results on Frequent Attributes . The first experiment is conducted on four frequent attributes (i.e., with sufficient data) on AE-110k and AE-650k datasets. Table 3 reports the comparison results of our two models (on AE-110k and AE-650k datasets) and three baselines. It is observed that our models are consistently ranked the best over all competing baselines. This indicates that our idea of regarding 'attribute' as 'query' successfully models the semantic information embedded in attribute which has been ignored by previous sequence tagging models. Besides, different from the self-attention mechanism only inside title adopted by OpenTag, our interacted similarity between attribute and title does attend to words which are more relevant to current extraction.
[2, 2, 1, 1, 2, 2]
['5.1 Results on Frequent Attributes .', 'The first experiment is conducted on four frequent attributes (i.e., with sufficient data) on AE-110k and AE-650k datasets.', 'Table 3 reports the comparison results of our two models (on AE-110k and AE-650k datasets) and three baselines.', 'It is observed that our models are consistently ranked the best over all competing baselines.', "This indicates that our idea of regarding 'attribute' as 'query' successfully models the semantic information embedded in attribute which has been ignored by previous sequence tagging models.", 'Besides, different from the self-attention mechanism only inside title adopted by OpenTag, our interacted similarity between attribute and title does attend to words which are more relevant to current extraction.']
[None, None, ['BiLSTM', 'BiLSTM-CRF', 'OpenTag', 'Our model-110k', 'Our model-650k'], ['Our model-110k', 'Our model-650k'], None, ['OpenTag', 'Our model-110k', 'Our model-650k']]
1
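The P19-1514 description above compares precision, recall and F1; since F1 is the harmonic mean of P and R, the reported columns can be cross-checked directly. Below is a minimal Python sketch (illustrative only, not part of the dataset; the values are copied from the Brand Name rows of the table above):

def f1(p, r):
    # Harmonic mean of precision and recall, as reported in the F1 (%) column.
    return 2 * p * r / (p + r)

# "Our model-110k", Brand Name row: P = 97.21, R = 96.68, reported F1 = 96.94.
assert round(f1(97.21, 96.68), 2) == 96.94
# "OpenTag", Brand Name row: P = 95.18, R = 97.55, reported F1 = 96.35.
assert round(f1(95.18, 97.55), 2) == 96.35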
P19-1516table_1
Result comparison of the proposed method with the state-of-art baseline methods. Here, ‘P’, ‘R’, ‘F1’ represents Precision, Recall and F1-Score. The results on CADEC and MEDLINE are on 10-fold cross validation; for the twitter dataset, we use the train and test sets as provided by the PSB 2016 shared task.
2
[['Models', 'ST-BLSTM'], ['Models', 'ST-CNN'], ['Models', 'CRNN (Huynh et al., 2016)'], ['Models', 'RCNN (Huynh et al., 2016)'], ['Models', 'MT-BLSTM (Chowdhury et al., 2018)'], ['Models', 'MT-Atten-BLSTM (Chowdhury et al., 2018)'], ['Models', 'Proposed Model']]
2
[['Twitter', 'P'], ['Twitter', 'R'], ['Twitter', 'F1'], ['CADEC', 'P'], ['CADEC', 'R'], ['CADEC', 'F1'], ['MEDLINE', 'P'], ['MEDLINE', 'R'], ['MEDLINE', 'F1']]
[['57.7', '56.8', '57.3', '52.9', '49.4', '51.1', '71.65', '72.19', '71.91'], ['63.8', '65.8', '67.1', '39.7', '42.7', '42', '66.88', '73.81', '70.17'], ['61.1', '62.4', '64.9', '49.5', '46.9', '48.2', '71', '77.3', '75.5'], ['57.6', '58.7', '63.6', '42.4', '44.9', '43.6', '73.5', '72', '74'], ['65.57', '61.02', '63.19', '60.5', '55.16', '57.62', '72.72', '75.49', '74'], ['62.26', '69.62', '65.73', '56.63', '60', '58.27', '75.08', '81.06', '77.95'], ['68.78', '70.81', '69.69', '64.33', '67.03', '65.58', '81.97', '82.61', '82.18']]
column
['P', 'R', 'F1', 'P', 'R', 'F1', 'P', 'R', 'F1']
['MT-BLSTM (Chowdhury et al., 2018)', 'MT-Atten-BLSTM (Chowdhury et al., 2018)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Twitter || P</th> <th>Twitter || R</th> <th>Twitter || F1</th> <th>CADEC || P</th> <th>CADEC || R</th> <th>CADEC || F1</th> <th>MEDLINE || P</th> <th>MEDLINE || R</th> <th>MEDLINE || F1</th> </tr> </thead> <tbody> <tr> <td>Models || ST-BLSTM</td> <td>57.7</td> <td>56.8</td> <td>57.3</td> <td>52.9</td> <td>49.4</td> <td>51.1</td> <td>71.65</td> <td>72.19</td> <td>71.91</td> </tr> <tr> <td>Models || ST-CNN</td> <td>63.8</td> <td>65.8</td> <td>67.1</td> <td>39.7</td> <td>42.7</td> <td>42</td> <td>66.88</td> <td>73.81</td> <td>70.17</td> </tr> <tr> <td>Models || CRNN (Huynh et al., 2016)</td> <td>61.1</td> <td>62.4</td> <td>64.9</td> <td>49.5</td> <td>46.9</td> <td>48.2</td> <td>71</td> <td>77.3</td> <td>75.5</td> </tr> <tr> <td>Models || RCNN (Huynh et al., 2016)</td> <td>57.6</td> <td>58.7</td> <td>63.6</td> <td>42.4</td> <td>44.9</td> <td>43.6</td> <td>73.5</td> <td>72</td> <td>74</td> </tr> <tr> <td>Models || MT-BLSTM (Chowdhury et al., 2018)</td> <td>65.57</td> <td>61.02</td> <td>63.19</td> <td>60.5</td> <td>55.16</td> <td>57.62</td> <td>72.72</td> <td>75.49</td> <td>74</td> </tr> <tr> <td>Models || MT-Atten-BLSTM (Chowdhury et al., 2018)</td> <td>62.26</td> <td>69.62</td> <td>65.73</td> <td>56.63</td> <td>60</td> <td>58.27</td> <td>75.08</td> <td>81.06</td> <td>77.95</td> </tr> <tr> <td>Models || Proposed Model</td> <td>68.78</td> <td>70.81</td> <td>69.69</td> <td>64.33</td> <td>67.03</td> <td>65.58</td> <td>81.97</td> <td>82.61</td> <td>82.18</td> </tr> </tbody></table>
Table 1
table_1
P19-1516
8
acl2019
The extensive results of our proposed model with comparisons to the state-of-the-art baseline techniques are reported in Table 1. Our proposed model outperforms the state-of-the-art baseline techniques by fair margins in terms of precision, recall and F1-Score for all the datasets. In our first experiment, we train two models (i.e. Single-Task BLSTM and Multi-Task BLSTM) to analyze the effect of the multi-task model (MT-BLSTM) over a single-task model (ST-BLSTM). On all three datasets, we can see from Table 1 that the multi-task framework with its sharing scheme helps boost the performance of the system. We observe performance improvements of 5.89, 6.52 and 2.09 F1-Score points on the Twitter, CADEC, and MEDLINE datasets, respectively. A similar improvement is also observed in terms of precision and recall.
[1, 1, 2, 1, 1, 2]
['The extensive results of our proposed model with comparisons to the state-of-the-art baseline techniques are reported in Table 1.', 'Our proposed model outperforms the state-of-the-art baseline techniques by fair margins in terms of precision, recall and F1-Score for all the datasets.', 'In our first experiment, we train two models (i.e. Single-Task BLSTM and Multi-Task BLSTM) to analyze the effect of the multi-task model (MT-BLSTM) over a single-task model (ST-BLSTM).', 'On all three datasets, we can see from Table 1 that the multi-task framework with its sharing scheme helps boost the performance of the system.', 'We observe performance improvements of 5.89, 6.52 and 2.09 F1-Score points on the Twitter, CADEC, and MEDLINE datasets, respectively.', 'A similar improvement is also observed in terms of precision and recall.']
[None, ['Proposed Model'], ['ST-BLSTM', 'MT-BLSTM (Chowdhury et al., 2018)', 'MT-Atten-BLSTM (Chowdhury et al., 2018)'], ['MT-BLSTM (Chowdhury et al., 2018)', 'MT-Atten-BLSTM (Chowdhury et al., 2018)'], ['MT-BLSTM (Chowdhury et al., 2018)', 'MT-Atten-BLSTM (Chowdhury et al., 2018)'], None]
1
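The F1-point improvements quoted in the P19-1516 description (5.89, 6.52 and 2.09) are the differences between the MT-BLSTM and ST-BLSTM rows of Table 1; a small sketch that reproduces them from the table values (illustrative only, not part of the dataset):

st_blstm_f1 = {"Twitter": 57.3, "CADEC": 51.1, "MEDLINE": 71.91}   # ST-BLSTM F1 row
mt_blstm_f1 = {"Twitter": 63.19, "CADEC": 57.62, "MEDLINE": 74.0}  # MT-BLSTM F1 row
deltas = {d: round(mt_blstm_f1[d] - st_blstm_f1[d], 2) for d in st_blstm_f1}
print(deltas)  # {'Twitter': 5.89, 'CADEC': 6.52, 'MEDLINE': 2.09}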
P19-1520table_2
Aspect and opinion term extraction performance of different approaches. F 1 score is reported. IHS RD, DLIREC, Elixa and WDEmb* use manually designed features. For different versions of RINANTE, “Shared” and “Double” means shared BiLSTM model and double BiLSTM model, respectively; “Alt” and “Pre” means the first and the second training method, respectively.
2
[['Approach', 'DP (Qiu et al. 2011)'], ['Approach', 'IHS RD (Chernyshevich 2014)'], ['Approach', 'DLIREC (Toh and Wang 2014)'], ['Approach', 'Elixa (Vicente et al. 2017)'], ['Approach', 'WDEmb (Yin et al. 2016)'], ['Approach', 'WDEmb* (Yin et al. 2016)'], ['Approach', 'RNCRF (Wang et al. 2016)'], ['Approach', 'CMLA (Wang et al. 2017)'], ['Approach', 'NCRF-AE (Zhang et al. 2017)'], ['Approach', 'HAST (Li et al. 2018)'], ['Approach', 'DE-CNN (Xu et al. 2018)'], ['Approach', 'Mined Rules'], ['Approach', 'RINANTE (No Rule)'], ['Approach', 'RINANTE-Shared-Alt'], ['Approach', 'RINANTE-Shared-Pre'], ['Approach', 'RINANTE-Double-Alt'], ['Approach', 'RINANTE-Double-Pre']]
2
[['SE14-R', 'Aspect'], ['SE14-R', 'Opinion'], ['SE14-L', 'Aspect'], ['SE14-L', 'Opinion'], ['SE15-R', 'Aspect'], ['SE15-R', 'Opinion']]
[['38.72', '65.94', '19.19', '55.29', '27.32', '46.31'], ['79.62', ' -', '74.55', ' -', ' -', ' -'], ['84.01', ' -', '73.78', ' -', ' -', ' -'], [' -', ' -', ' -', ' -', '70.04', ' -'], ['84.31', ' -', '74.68', ' -', '69.12', ' -'], ['84.97', ' -', '75.16', ' -', '69.73', ' -'], ['82.23', '83.93', '75.28', '77.03', '65.39', '63.75'], ['82.46', '84.67', '73.63', '79.16', '68.22', '70.5'], ['83.28', '85.23', '74.32', '75.44', '65.33', '70.16'], ['85.61', ' -', '79.52', ' -', '69.77', ' -'], ['85.2', ' -', '81.59', ' -', '68.28', ' -'], ['70.82', '79.6', '67.67', '76.1', '57.67', '64.29'], ['84.06', '84.59', '73.47', '75.41', '66.17', '68.16'], ['86.76', '86.05', '77.92', '79.2', '67.47', '71.41'], ['85.09', '85.63', '79.16', '79.03', '68.15', '70.44'], ['85.8', '86.34', '78.59', '78.94', '67.42', '70.53'], ['86.45', '85.67', '80.16', '81.96', '69.9', '72.09']]
column
['F1', 'F1', 'F1', 'F1', 'F1', 'F1']
['Mined Rules']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SE14-R || Aspect</th> <th>SE14-R || Opinion</th> <th>SE14-L || Aspect</th> <th>SE14-L || Opinion</th> <th>SE15-R || Aspect</th> <th>SE15-R || Opinion</th> </tr> </thead> <tbody> <tr> <td>Approach || DP (Qiu et al. 2011)</td> <td>38.72</td> <td>65.94</td> <td>19.19</td> <td>55.29</td> <td>27.32</td> <td>46.31</td> </tr> <tr> <td>Approach || IHS RD (Chernyshevich 2014)</td> <td>79.62</td> <td>-</td> <td>74.55</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>Approach || DLIREC (Toh and Wang 2014)</td> <td>84.01</td> <td>-</td> <td>73.78</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>Approach || Elixa (Vicente et al. 2017)</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>70.04</td> <td>-</td> </tr> <tr> <td>Approach || WDEmb (Yin et al. 2016)</td> <td>84.31</td> <td>-</td> <td>74.68</td> <td>-</td> <td>69.12</td> <td>-</td> </tr> <tr> <td>Approach || WDEmb* (Yin et al. 2016)</td> <td>84.97</td> <td>-</td> <td>75.16</td> <td>-</td> <td>69.73</td> <td>-</td> </tr> <tr> <td>Approach || RNCRF (Wang et al. 2016)</td> <td>82.23</td> <td>83.93</td> <td>75.28</td> <td>77.03</td> <td>65.39</td> <td>63.75</td> </tr> <tr> <td>Approach || CMLA (Wang et al. 2017)</td> <td>82.46</td> <td>84.67</td> <td>73.63</td> <td>79.16</td> <td>68.22</td> <td>70.5</td> </tr> <tr> <td>Approach || NCRF-AE (Zhang et al. 2017)</td> <td>83.28</td> <td>85.23</td> <td>74.32</td> <td>75.44</td> <td>65.33</td> <td>70.16</td> </tr> <tr> <td>Approach || HAST (Li et al. 2018)</td> <td>85.61</td> <td>-</td> <td>79.52</td> <td>-</td> <td>69.77</td> <td>-</td> </tr> <tr> <td>Approach || DE-CNN (Xu et al. 2018)</td> <td>85.2</td> <td>-</td> <td>81.59</td> <td>-</td> <td>68.28</td> <td>-</td> </tr> <tr> <td>Approach || Mined Rules</td> <td>70.82</td> <td>79.6</td> <td>67.67</td> <td>76.1</td> <td>57.67</td> <td>64.29</td> </tr> <tr> <td>Approach || RINANTE (No Rule)</td> <td>84.06</td> <td>84.59</td> <td>73.47</td> <td>75.41</td> <td>66.17</td> <td>68.16</td> </tr> <tr> <td>Approach || RINANTE-Shared-Alt</td> <td>86.76</td> <td>86.05</td> <td>77.92</td> <td>79.2</td> <td>67.47</td> <td>71.41</td> </tr> <tr> <td>Approach || RINANTE-Shared-Pre</td> <td>85.09</td> <td>85.63</td> <td>79.16</td> <td>79.03</td> <td>68.15</td> <td>70.44</td> </tr> <tr> <td>Approach || RINANTE-Double-Alt</td> <td>85.8</td> <td>86.34</td> <td>78.59</td> <td>78.94</td> <td>67.42</td> <td>70.53</td> </tr> <tr> <td>Approach || RINANTE-Double-Pre</td> <td>86.45</td> <td>85.67</td> <td>80.16</td> <td>81.96</td> <td>69.9</td> <td>72.09</td> </tr> </tbody></table>
Table 2
table_2
P19-1520
7
acl2019
The experimental results are shown in Table 2. From the results, we can see that the mined rules alone do not perform well. However, by learning from the data automatically labeled by these rules, all four versions of RINANTE achieve better performance than RINANTE (no rule). This verifies that we can indeed use the results of the mined rules to improve the performance of neural models. Moreover, the improvement over RINANTE (no rule) can be especially significant on SE14-L and SE15-R. We think this is because SE14-L is relatively more difficult and SE15-R has much less manually labeled training data. We can also see from Table 2 that the rules mined with our rule mining algorithm perform much better than Double Propagation. This is because our algorithm is able to mine hundreds of effective rules, while Double Propagation only has eight manually designed rules.
[1, 1, 1, 2, 1, 2, 1, 2]
['The experimental results are shown in Table 2.', 'From the results, we can see that the mined rules alone do not perform well.', 'However, by learning from the data automatically labeled by these rules, all four versions of RINANTE achieve better performance than RINANTE (no rule).', 'This verifies that we can indeed use the results of the mined rules to improve the performance of neural models.', 'Moreover, the improvement over RINANTE (no rule) can be especially significant on SE14-L and SE15-R.', 'We think this is because SE14-L is relatively more difficult and SE15-R has much less manually labeled training data.', 'We can also see from Table 2 that the rules mined with our rule mining algorithm perform much better than Double Propagation.', 'This is because our algorithm is able to mine hundreds of effective rules, while Double Propagation only has eight manually designed rules.']
[None, ['Mined Rules'], ['RINANTE-Shared-Alt', 'RINANTE-Shared-Pre', 'RINANTE-Double-Alt', 'RINANTE-Double-Pre', 'RINANTE (No Rule)'], None, ['RINANTE (No Rule)', 'SE14-L', 'SE15-R'], ['SE14-L', 'SE15-R'], ['Mined Rules', 'DP (Qiu et al. 2011)'], ['DP (Qiu et al. 2011)']]
1
P19-1524table_1
Results on CoNLL 2003 and OntoNotes 5.0
2
[['Model', 'Ma and Hovy (2016)'], ['Model', 'Lample et al. (2016)'], ['Model', 'Liu et al. (2018)'], ['Model', 'Devlin et al. (2018)'], ['Model', 'Chiu and Nichols (2016)'], ['Model', 'Ghaddar and Langlais ’18'], ['Model', 'Peters et al. (2018)'], ['Model', 'Clark et al. (2018)'], ['Model', 'Akbik et al. (2018)'], ['Model', 'HSCRF'], ['Model', 'HSCRF + concat'], ['Model', 'HSCRF + gazemb'], ['Model', 'HSCRF + softdict']]
2
[['F1-score', 'CoNLL'], ['F1-score', 'OntoNotes']]
[['91.21', ' -'], ['90.94', ' -'], ['91.24±0.12', ' -'], ['92.8', ' -'], ['91.62±0.33', ' 86.28±0.26'], ['91.73±0.10', ' 87.95±0.13'], ['92.22±0.10', ' 89.04±0.27'], ['92.6 ±0.1', ' 88.8±0.1'], ['93.09±0.12', '89.71'], ['92.54±0.11', ' 89.38±0.11'], ['92.52±0.09', ' 89.73±0.19'], ['92.63±0.08', ' 89.77±0.20'], ['92.75±0.18', ' 89.94±0.16']]
column
['F1-score', 'F1-score']
['HSCRF + softdict']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>F1-score || CoNLL</th> <th>F1-score || OntoNotes</th> </tr> </thead> <tbody> <tr> <td>Model || Ma and Hovy (2016)</td> <td>91.21</td> <td>-</td> </tr> <tr> <td>Model || Lample et al. (2016)</td> <td>90.94</td> <td>-</td> </tr> <tr> <td>Model || Liu et al. (2018)</td> <td>91.24±0.12</td> <td>-</td> </tr> <tr> <td>Model || Devlin et al. (2018)</td> <td>92.8</td> <td>-</td> </tr> <tr> <td>Model || Chiu and Nichols (2016)</td> <td>91.62±0.33</td> <td>86.28±0.26</td> </tr> <tr> <td>Model || Ghaddar and Langlais ’18</td> <td>91.73±0.10</td> <td>87.95±0.13</td> </tr> <tr> <td>Model || Peters et al. (2018)</td> <td>92.22±0.10</td> <td>89.04±0.27</td> </tr> <tr> <td>Model || Clark et al. (2018)</td> <td>92.6 ±0.1</td> <td>88.8±0.1</td> </tr> <tr> <td>Model || Akbik et al. (2018)</td> <td>93.09±0.12</td> <td>89.71</td> </tr> <tr> <td>Model || HSCRF</td> <td>92.54±0.11</td> <td>89.38±0.11</td> </tr> <tr> <td>Model || HSCRF + concat</td> <td>92.52±0.09</td> <td>89.73±0.19</td> </tr> <tr> <td>Model || HSCRF + gazemb</td> <td>92.63±0.08</td> <td>89.77±0.20</td> </tr> <tr> <td>Model || HSCRF + softdict</td> <td>92.75±0.18</td> <td>89.94±0.16</td> </tr> </tbody></table>
Table 1
table_1
P19-1524
4
acl2019
3.5 Results . Table 1 shows the results on the CoNLL 2003 dataset and OntoNotes 5.0 dataset respectively. HSCRFs using gazetteer-enhanced sub-tagger outperform the baselines, achieving comparable results with those of more complex or larger models on CoNLL 2003 and new state-of-the-art results on OntoNotes 5.0. We also attached some out-of-domain analysis in the Appendix.
[2, 1, 1, 2]
['3.5 Results .', 'Table 1 shows the results on the CoNLL 2003 dataset and OntoNotes 5.0 dataset respectively.', 'HSCRFs using gazetteer-enhanced sub-tagger outperform the baselines, achieving comparable results with those of more complex or larger models on CoNLL 2003 and new state-of-the-art results on OntoNotes 5.0.', 'We also attached some out-of-domain analysis in the Appendix.']
[None, ['CoNLL', 'OntoNotes'], ['HSCRF + softdict', 'CoNLL', 'OntoNotes'], None]
1
P19-1526table_6
Comparison of different sentence encoders in D-NDMV.
2
[['SENTENCE ENCODER', 'Bag-of-Tags Method'], ['SENTENCE ENCODER', 'Anchored Words Method'], ['SENTENCE ENCODER', 'LSTM'], ['SENTENCE ENCODER', 'Attention-Based LSTM'], ['SENTENCE ENCODER', 'Bi-LSTM']]
1
[[' DDA']]
[['74.1'], ['75.1'], ['75.9'], ['75.5'], ['74.2']]
column
['DDA']
['LSTM']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>DDA</th> </tr> </thead> <tbody> <tr> <td>SENTENCE ENCODER || Bag-of-Tags Method</td> <td>74.1</td> </tr> <tr> <td>SENTENCE ENCODER || Anchored Words Method</td> <td>75.1</td> </tr> <tr> <td>SENTENCE ENCODER || LSTM</td> <td>75.9</td> </tr> <tr> <td>SENTENCE ENCODER || Attention-Based LSTM</td> <td>75.5</td> </tr> <tr> <td>SENTENCE ENCODER || Bi-LSTM</td> <td>74.2</td> </tr> </tbody></table>
Table 6
table_6
P19-1526
8
acl2019
Besides LSTM, there are a few other methods of producing the sentence representation. Table 6 compares the experimental results of these methods. The bag-of-tags method simply computes the average of all the POS tag embeddings and has the lowest accuracy, showing that the word order is informative for sentence encoding in D-NDMV. The anchored words method replaces the POS tag embeddings used in the neural network of the neural DMV with the corresponding hidden vectors produced by an LSTM on top of the input sentence, which leads to better accuracy than bag-of-tags but is still worse than LSTM. Replacing LSTM with Bi-LSTM or attention-based LSTM also does not lead to better performance, probably because these models are more powerful and hence more likely to result in degeneration and overfitting.
[1, 1, 1, 1, 1]
['Besides LSTM, there are a few other methods of producing the sentence representation.', 'Table 6 compares the experimental results of these methods.', 'The bag-of-tags method simply computes the average of all the POS tag embeddings and has the lowest accuracy, showing that the word order is informative for sentence encoding in D-NDMV.', 'The anchored words method replaces the POS tag embeddings used in the neural network of the neural DMV with the corresponding hidden vectors produced by an LSTM on top of the input sentence, which leads to better accuracy than bag-of-tags but is still worse than LSTM.', 'Replacing LSTM with Bi-LSTM or attention-based LSTM also does not lead to better performance, probably because these models are more powerful and hence more likely to result in degeneration and overfitting.']
[['LSTM'], None, ['Bag-of-Tags Method'], ['LSTM', 'Bag-of-Tags Method'], ['LSTM']]
1
P19-1527table_1
Nested NER results (F1) for ACE-2004, ACE-2005, GENIA and CNEC 1.0 (Czech) corpora. Bold indicates the best result, italics results above SoTA and gray background indicates the main contribution. * uses different data split in ACE-2005. ** non-neural model
2
[['model', '(Finkel and Manning, 2009)**'], ['model', '(Lu and Roth, 2015)**'], ['model', '(Muis and Lu, 2017)**'], ['model', '(Katiyar and Cardie, 2018)'], ['model', '(Ju et al., 2018)*'], ['model', '(Wang and Lu, 2018)'], ['model', '(Straková et al., 2016)'], ['model', 'LSTM-CRF'], ['model', 'LSTM-CRF+ELMo'], ['model', 'LSTM-CRF+BERT'], ['model', 'LSTM-CRF+Flair'], ['model', 'LSTM-CRF+BERT+ELMo'], ['model', 'LSTM-CRF+BERT+Flair'], ['model', 'LSTM-CRF+ELMo+BERT+Flair'], ['model', 'seq2seq'], ['model', 'seq2seq+ELMo'], ['model', 'seq2seq+BERT'], ['model', 'seq2seq+Flair'], ['model', 'seq2seq+BERT+ELMo'], ['model', 'seq2seq+BERT+Flair'], ['model', 'seq2seq+ELMo+BERT+Flair']]
1
[['ACE-2004'], ['ACE-2005'], ['GENIA'], ['CNEC 1.0']]
[['-', '-', '70.3', '-'], ['62.8', '62.5', '70.3', '-'], ['64.5', '63.1', '70.8', '-'], ['72.7', '70.5', '73.6', '-'], ['-', '72.2', '74.7', '-'], ['75.1', '74.5', '75.1', '-'], ['-', '-', '-', '81.2'], ['72.26', '71.62', '76.23', '80.28'], ['78.72', '78.36', '75.94', '-'], ['81.48', '79.95', '77.8', '85.67'], ['77.65', '77.25', '76.65', '81.74'], ['80.07', '80.04', '76.29', '-'], ['81.22', '80.82', '77.91', '85.7'], ['80.19', '79.85', '76.56', '-'], ['77.08', '75.36', '76.44', '82.96'], ['81.94', '81.95', '77.33', '-'], ['84.33', '83.42', '78.2', '86.73'], ['81.38', '79.83', '76.63', '83.55'], ['84.32', '82.15', '77.77', '-'], ['84.4', '84.33', '78.31', '86.88'], ['84.07', '83.41', '78.01', '-']]
column
['F1', 'F1', 'F1', 'F1']
['LSTM-CRF']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>ACE-2004</th> <th>ACE-2005</th> <th>GENIA</th> <th>CNEC 1.0</th> </tr> </thead> <tbody> <tr> <td>model || (Finkel and Manning, 2009)**</td> <td>-</td> <td>-</td> <td>70.3</td> <td>-</td> </tr> <tr> <td>model || (Lu and Roth, 2015)**</td> <td>62.8</td> <td>62.5</td> <td>70.3</td> <td>-</td> </tr> <tr> <td>model || (Muis and Lu, 2017)**</td> <td>64.5</td> <td>63.1</td> <td>70.8</td> <td>-</td> </tr> <tr> <td>model || (Katiyar and Cardie, 2018)</td> <td>72.7</td> <td>70.5</td> <td>73.6</td> <td>-</td> </tr> <tr> <td>model || (Ju et al., 2018)*</td> <td>-</td> <td>72.2</td> <td>74.7</td> <td>-</td> </tr> <tr> <td>model || (Wang and Lu, 2018)</td> <td>75.1</td> <td>74.5</td> <td>75.1</td> <td>-</td> </tr> <tr> <td>model || (Straková et al., 2016)</td> <td>-</td> <td>-</td> <td>-</td> <td>81.2</td> </tr> <tr> <td>model || LSTM-CRF</td> <td>72.26</td> <td>71.62</td> <td>76.23</td> <td>80.28</td> </tr> <tr> <td>model || LSTM-CRF+ELMo</td> <td>78.72</td> <td>78.36</td> <td>75.94</td> <td>-</td> </tr> <tr> <td>model || LSTM-CRF+BERT</td> <td>81.48</td> <td>79.95</td> <td>77.8</td> <td>85.67</td> </tr> <tr> <td>model || LSTM-CRF+Flair</td> <td>77.65</td> <td>77.25</td> <td>76.65</td> <td>81.74</td> </tr> <tr> <td>model || LSTM-CRF+BERT+ELMo</td> <td>80.07</td> <td>80.04</td> <td>76.29</td> <td>-</td> </tr> <tr> <td>model || LSTM-CRF+BERT+Flair</td> <td>81.22</td> <td>80.82</td> <td>77.91</td> <td>85.7</td> </tr> <tr> <td>model || LSTM-CRF+ELMo+BERT+Flair</td> <td>80.19</td> <td>79.85</td> <td>76.56</td> <td>-</td> </tr> <tr> <td>model || seq2seq</td> <td>77.08</td> <td>75.36</td> <td>76.44</td> <td>82.96</td> </tr> <tr> <td>model || seq2seq+ELMo</td> <td>81.94</td> <td>81.95</td> <td>77.33</td> <td>-</td> </tr> <tr> <td>model || seq2seq+BERT</td> <td>84.33</td> <td>83.42</td> <td>78.2</td> <td>86.73</td> </tr> <tr> <td>model || seq2seq+Flair</td> <td>81.38</td> <td>79.83</td> <td>76.63</td> <td>83.55</td> </tr> <tr> <td>model || seq2seq+BERT+ELMo</td> <td>84.32</td> <td>82.15</td> <td>77.77</td> <td>-</td> </tr> <tr> <td>model || seq2seq+BERT+Flair</td> <td>84.4</td> <td>84.33</td> <td>78.31</td> <td>86.88</td> </tr> <tr> <td>model || seq2seq+ELMo+BERT+Flair</td> <td>84.07</td> <td>83.41</td> <td>78.01</td> <td>-</td> </tr> </tbody></table>
Table 1
table_1
P19-1527
4
acl2019
5 Results . Table 1 shows the F1 score for the nested NER. When comparing the results for the nested NER in the baseline models (without the contextual word embeddings) to the previous results in literature, we see that LSTM-CRF reaches comparable, but suboptimal results in three out of four nested NE corpora, while seq2seq clearly outperforms all the known methods by a wide margin. We hypothesize that seq2seq, although more complex (the system must predict multiple labels per token, including the special label), is more suitable for more complex corpora. The gain is most visible in ACE-2004 and ACE-2005, which contain extremely long named entities and the level of “nestedness” is greater than in the other nested corpora.
[2, 1, 1, 1, 1]
['5 Results .', 'Table 1 shows the F1 score for the nested NER.', 'When comparing the results for the nested NER in the baseline models (without the contextual word embeddings) to the previous results in literature, we see that LSTM-CRF reaches comparable, but suboptimal results in three out of four nested NE corpora, while seq2seq clearly outperforms all the known methods by a wide margin.', 'We hypothesize that seq2seq, although more complex (the system must predict multiple labels per token, including the special label), is more suitable for more complex corpora.', 'The gain is most visible in ACE-2004 and ACE-2005, which contain extremely long named entities and the level of “nestedness” is greater than in the other nested corpora.']
[None, None, ['LSTM-CRF', 'seq2seq'], ['seq2seq'], ['ACE-2004', 'ACE-2005']]
1
P19-1531table_2
Results on the PTB and SPMRL test sets.
3
[['English (PTB)', 'Model', 'S-S'], ['English (PTB)', 'Model', 'S-MTL'], ['English (PTB)', 'Model', 'D-MTL-AUX'], ['English (PTB)', 'Model', 'D-MTL'], ['Basque', 'Model', 'S-S'], ['Basque', 'Model', 'S-MTL'], ['Basque', 'Model', 'D-MTL-AUX'], ['Basque', 'Model', 'D-MTL'], ['French', 'Model', 'S-S'], ['French', 'Model', 'S-MTL'], ['French', 'Model', 'D-MTL-AUX'], ['French', 'Model', 'D-MTL'], ['German', 'Model', 'S-S'], ['German', 'Model', 'S-MTL'], ['German', 'Model', 'D-MTL-AUX'], ['German', 'Model', 'D-MTL'], ['Hebrew', 'Model', 'S-S'], ['Hebrew', 'Model', 'S-MTL'], ['Hebrew', 'Model', 'D-MTL-AUX'], ['Hebrew', 'Model', 'D-MTL'], ['Hungarian', 'Model', 'S-S'], ['Hungarian', 'Model', 'S-MTL'], ['Hungarian', 'Model', 'D-MTL-AUX'], ['Hungarian', 'Model', 'D-MTL'], ['Korean', 'Model', 'S-S'], ['Korean', 'Model', 'S-MTL'], ['Korean', 'Model', 'D - MTL - AUX'], ['Korean', 'Model', 'D-MTL'], ['Polish', 'Model', 'S-S'], ['Polish', 'Model', 'S-MTL'], ['Polish', 'Model', 'D-MTL-AUX'], ['Polish', 'Model', 'D-MTL'], ['Swedish', 'Model', 'S-S'], ['Swedish', 'Model', 'S-MTL'], ['Swedish', 'Model', 'D-MTL-AUX'], ['Swedish', 'Model', 'D-MTL'], ['average', 'Model', 'S-S'], ['average', 'Model', 'S-MTL'], ['average', 'Model', 'D-MTL-AUX'], ['average', 'Model', 'D-MTL']]
2
[['Dependency Parsing', 'UAS'], ['Dependency Parsing', 'LAS'], ['Constituency Parsing', 'F1']]
[['93.6', '91.74', '90.14'], ['93.84', '91.83', '90.32'], ['94.05', '92.01', '90.39'], ['93.96', '91.9', '89.81'], ['86.2', '81.7', '89.54'], ['87.42', '81.71', '90.86'], ['87.19', '81.73', '91.12'], ['87.09', '81.77', '90.76'], ['89.13', '85.03', '80.68'], ['89.54', '84.89', '81.34'], ['89.52', '84.97', '81.33'], ['89.45', '85.07', '81.19'], ['91.24', '88.76', '84.19'], ['91.54', '88.75', '84.46'], ['91.58', '88.8', '84.38'], ['91.45', '88.67', '84.28'], ['82.74', '75.08', '88.85'], ['83.42', '74.91', '91.91'], ['83.9', '75.89', '91.83'], ['82.6', '73.73', '91.1'], ['88.24', '84.54', '90.42'], ['88.69', '84.54', '90.76'], ['88.99', '84.95', '90.69'], ['88.89', '84.89', '90.93'], ['86.47', '84.12', '83.33'], ['86.78', '84.39', '83.51'], ['87', '84.6', '83.39'], ['86.64', '84.34', '83.08'], ['91.17', '85.64', '92.59'], ['91.58', '85.04', '93.17'], ['91.37', '85.2', '93.36'], ['92', '85.92', '93.52'], ['86.49', '80.6', '83.81'], ['87.22', '80.61', '86.23'], ['87.24', '80.34', '86.53'], ['87.15', '80.71', '86.44'], ['88.36', '84.13', '87.06'], ['88.89', '84.07', '88.06'], ['88.98', '84.28', '88.11'], ['88.8', '84.11', '87.9']]
column
['UAS', 'LAS', 'F1']
['D-MTL-AUX']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Dependency Parsing || UAS</th> <th>Dependency Parsing || LAS</th> <th>Constituency Parsing || F1</th> </tr> </thead> <tbody> <tr> <td>English (PTB) || Model || S-S</td> <td>93.6</td> <td>91.74</td> <td>90.14</td> </tr> <tr> <td>English (PTB) || Model || S-MTL</td> <td>93.84</td> <td>91.83</td> <td>90.32</td> </tr> <tr> <td>English (PTB) || Model || D-MTL-AUX</td> <td>94.05</td> <td>92.01</td> <td>90.39</td> </tr> <tr> <td>English (PTB) || Model || D-MTL</td> <td>93.96</td> <td>91.9</td> <td>89.81</td> </tr> <tr> <td>Basque || Model || S-S</td> <td>86.2</td> <td>81.7</td> <td>89.54</td> </tr> <tr> <td>Basque || Model || S-MTL</td> <td>87.42</td> <td>81.71</td> <td>90.86</td> </tr> <tr> <td>Basque || Model || D-MTL-AUX</td> <td>87.19</td> <td>81.73</td> <td>91.12</td> </tr> <tr> <td>Basque || Model || D-MTL</td> <td>87.09</td> <td>81.77</td> <td>90.76</td> </tr> <tr> <td>French || Model || S-S</td> <td>89.13</td> <td>85.03</td> <td>80.68</td> </tr> <tr> <td>French || Model || S-MTL</td> <td>89.54</td> <td>84.89</td> <td>81.34</td> </tr> <tr> <td>French || Model || D-MTL-AUX</td> <td>89.52</td> <td>84.97</td> <td>81.33</td> </tr> <tr> <td>French || Model || D-MTL</td> <td>89.45</td> <td>85.07</td> <td>81.19</td> </tr> <tr> <td>German || Model || S-S</td> <td>91.24</td> <td>88.76</td> <td>84.19</td> </tr> <tr> <td>German || Model || S-MTL</td> <td>91.54</td> <td>88.75</td> <td>84.46</td> </tr> <tr> <td>German || Model || D-MTL-AUX</td> <td>91.58</td> <td>88.8</td> <td>84.38</td> </tr> <tr> <td>German || Model || D-MTL</td> <td>91.45</td> <td>88.67</td> <td>84.28</td> </tr> <tr> <td>Hebrew || Model || S-S</td> <td>82.74</td> <td>75.08</td> <td>88.85</td> </tr> <tr> <td>Hebrew || Model || S-MTL</td> <td>83.42</td> <td>74.91</td> <td>91.91</td> </tr> <tr> <td>Hebrew || Model || D-MTL-AUX</td> <td>83.9</td> <td>75.89</td> <td>91.83</td> </tr> <tr> <td>Hebrew || Model || D-MTL</td> <td>82.6</td> <td>73.73</td> <td>91.1</td> </tr> <tr> <td>Hungarian || Model || S-S</td> <td>88.24</td> <td>84.54</td> <td>90.42</td> </tr> <tr> <td>Hungarian || Model || S-MTL</td> <td>88.69</td> <td>84.54</td> <td>90.76</td> </tr> <tr> <td>Hungarian || Model || D-MTL-AUX</td> <td>88.99</td> <td>84.95</td> <td>90.69</td> </tr> <tr> <td>Hungarian || Model || D-MTL</td> <td>88.89</td> <td>84.89</td> <td>90.93</td> </tr> <tr> <td>Korean || Model || S-S</td> <td>86.47</td> <td>84.12</td> <td>83.33</td> </tr> <tr> <td>Korean || Model || S-MTL</td> <td>86.78</td> <td>84.39</td> <td>83.51</td> </tr> <tr> <td>Korean || Model || D - MTL - AUX</td> <td>87</td> <td>84.6</td> <td>83.39</td> </tr> <tr> <td>Korean || Model || D-MTL</td> <td>86.64</td> <td>84.34</td> <td>83.08</td> </tr> <tr> <td>Polish || Model || S-S</td> <td>91.17</td> <td>85.64</td> <td>92.59</td> </tr> <tr> <td>Polish || Model || S-MTL</td> <td>91.58</td> <td>85.04</td> <td>93.17</td> </tr> <tr> <td>Polish || Model || D-MTL-AUX</td> <td>91.37</td> <td>85.2</td> <td>93.36</td> </tr> <tr> <td>Polish || Model || D-MTL</td> <td>92</td> <td>85.92</td> <td>93.52</td> </tr> <tr> <td>Swedish || Model || S-S</td> <td>86.49</td> <td>80.6</td> <td>83.81</td> </tr> <tr> <td>Swedish || Model || S-MTL</td> <td>87.22</td> <td>80.61</td> <td>86.23</td> </tr> <tr> <td>Swedish || Model || D-MTL-AUX</td> <td>87.24</td> <td>80.34</td> <td>86.53</td> </tr> <tr> <td>Swedish || Model || D-MTL</td> <td>87.15</td> <td>80.71</td> <td>86.44</td> </tr> <tr> <td>average || Model || S-S</td> 
<td>88.36</td> <td>84.13</td> <td>87.06</td> </tr> <tr> <td>average || Model || S-MTL</td> <td>88.89</td> <td>84.07</td> <td>88.06</td> </tr> <tr> <td>average || Model || D-MTL-AUX</td> <td>88.98</td> <td>84.28</td> <td>88.11</td> </tr> <tr> <td>average || Model || D-MTL</td> <td>88.8</td> <td>84.11</td> <td>87.9</td> </tr> </tbody></table>
Table 2
table_2
P19-1531
4
acl2019
4.2 Results . Table 2 compares single-paradigm models against their double-paradigm MTL versions. On average, MTL models with auxiliary losses achieve the best performance for both parsing abstractions. They gain 1.05 F1 points on average in comparison with the single model for constituency parsing, and 0.62 UAS and 0.15 LAS points for dependency parsing. In comparison to the single-paradigm MTL models, the average gain is smaller: 0.05 F1 points for constituency parsing, and 0.09 UAS and 0.21 LAS points for dependency parsing.
[2, 1, 1, 1, 1]
['4.2 Results .', 'Table 2 compares single-paradigm models against their double-paradigm MTL versions.', 'On average, MTL models with auxiliary losses achieve the best performance for both parsing abstractions.', 'They gain 1.05 F1 points on average in comparison with the single model for constituency parsing, and 0.62 UAS and 0.15 LAS points for dependency parsing.', 'In comparison to the single-paradigm MTL models, the average gain is smaller: 0.05 F1 points for constituency parsing, and 0.09 UAS and 0.21 LAS points for dependency parsing.']
[None, None, ['D-MTL-AUX'], ['D-MTL-AUX', 'F1', 'UAS', 'LAS'], ['S-MTL', 'F1', 'UAS', 'LAS']]
1
P19-1542table_1
Results of automatic and human evaluation: PAML vs Dialogue+Persona shows the our approach can achieve good consistency by using few dialogues instead of conditioning on the persona description, PAML vs Dialogue+Fine-tuning shows the effectiveness of meta-learning approach in personalizing dialogue model.
1
[['Human'], ['Dialogue+Persona'], ['Dialogue'], ['Dialogue+Fine-tuning'], ['PAML']]
2
[['Automatic', 'PPL'], ['Automatic', 'BLEU'], ['Automatic', 'C'], ['Human', 'Fluency'], ['Human', 'Consistency']]
[['-', '-', '0.33', '3.434', '0.234'], ['30.42', '1', '0.07', '3.053', '0.011'], ['36.75', '0.64', '-0.03', '-', '-'], ['32.96', '0.9', '0', '3.103', '0.038'], ['41.64', '0.74', '0.2', '3.185', '0.197']]
column
['PPL', 'BLEU', 'C', 'Fluency', 'Consistency']
['PAML']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Automatic || PPL</th> <th>Automatic || BLEU</th> <th>Automatic || C</th> <th>Human || Fluency</th> <th>Human || Consistency</th> </tr> </thead> <tbody> <tr> <td>Human</td> <td>-</td> <td>-</td> <td>0.33</td> <td>3.434</td> <td>0.234</td> </tr> <tr> <td>Dialogue+Persona</td> <td>30.42</td> <td>1</td> <td>0.07</td> <td>3.053</td> <td>0.011</td> </tr> <tr> <td>Dialogue</td> <td>36.75</td> <td>0.64</td> <td>-0.03</td> <td>-</td> <td>-</td> </tr> <tr> <td>Dialogue+Fine-tuning</td> <td>32.96</td> <td>0.9</td> <td>0</td> <td>3.103</td> <td>0.038</td> </tr> <tr> <td>PAML</td> <td>41.64</td> <td>0.74</td> <td>0.2</td> <td>3.185</td> <td>0.197</td> </tr> </tbody></table>
Table 1
table_1
P19-1542
3
acl2019
3.2 Results. Table 1 shows both automatic and human evaluation results. PAML achieves consistently better results in terms of dialogue consistency in both automatic and human evaluation. The latter also shows that all the experimental settings have comparable fluency scores, whereas perplexity and BLEU score are lower in PAML. This confirms that these measures are not correlated to human judgment (Liu et al., 2016). For completeness, we also show generated response examples from PAML and baseline models in the Appendix.
[0, 1, 1, 1, 1, 2]
['3.2 Results.', ' Table 1 shows both automatic and human evaluation results.', 'PAML achieves consistently better results in terms of dialogue consistency in both automatic and human evaluation.', 'The latter also shows that all the experimental settings have comparable fluency scores, whereas perplexity and BLEU score are lower in PAML.', 'This confirms that these measures are not correlated to human judgment (Liu et al., 2016).', 'For completeness, we also show generated response examples from PAML and baseline models in the Appendix.']
[None, None, ['PAML'], ['Fluency', 'PPL', 'BLEU', 'PAML'], ['Human'], ['PAML']]
1
P19-1543table_1
Comparison with baseline models.
2
[['Model', 'Pointer LSTM'], ['Model', 'Bi-DAF'], ['Model', 'R-Net'], ['Model', 'Utterance-based HA'], ['Model', 'Turn-based HA (Proposed)']]
1
[['EM Score'], ['F1 Score']]
[['77.85', '82.73'], ['87.24', '88.67'], ['88.93', '90.41'], ['88.59', '90.12'], ['91.07', '92.39']]
column
['EM Score', 'F1 Score']
['Turn-based HA (Proposed)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>EM Score</th> <th>F1 Score</th> </tr> </thead> <tbody> <tr> <td>Model || Pointer LSTM</td> <td>77.85</td> <td>82.73</td> </tr> <tr> <td>Model || Bi-DAF</td> <td>87.24</td> <td>88.67</td> </tr> <tr> <td>Model || R-Net</td> <td>88.93</td> <td>90.41</td> </tr> <tr> <td>Model || Utterance-based HA</td> <td>88.59</td> <td>90.12</td> </tr> <tr> <td>Model || Turn-based HA (Proposed)</td> <td>91.07</td> <td>92.39</td> </tr> </tbody></table>
Table 1
table_1
P19-1543
4
acl2019
We adopted Exact Match (EM) and F1 score in SQuAD as metrics (Rajpurkar et al., 2016). Results in Table 1 show that while the utterance-based HA network is on par with established baselines, the proposed turn-based HA model obtains more gains, achieving the best EM and F1 scores.
[2, 1]
['We adopted Exact Match (EM) and F1 score in SQuAD as metrics (Rajpurkar et al., 2016).', 'Results in Table 1 show that while the utterance-based HA network is on par with established baselines, the proposed turn-based HA model obtains more gains, achieving the best EM and F1 scores.']
[['EM Score', 'F1 Score'], ['Utterance-based HA', 'Turn-based HA (Proposed)', 'EM Score', 'F1 Score']]
1
P19-1557table_4
Competitive results on DBpedia and AG News reported in accuracy (%) without any hyper-parameter tuning.
1
[['Bi-BloSAN(Shen et al., 2018)'], ['LEAM(Wang et al., 2018a)'], ['This work']]
1
[['DBpedia(%)'], ['AG News (%)']]
[['98.77', '93.32'], ['99.02', '92.45'], ['98.9', '92.05']]
column
['accuracy', 'accuracy']
['This work']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>DBpedia(%)</th> <th>AG News (%)</th> </tr> </thead> <tbody> <tr> <td>Bi-BloSAN(Shen et al., 2018)</td> <td>98.77</td> <td>93.32</td> </tr> <tr> <td>LEAM(Wang et al., 2018a)</td> <td>99.02</td> <td>92.45</td> </tr> <tr> <td>This work</td> <td>98.9</td> <td>92.05</td> </tr> </tbody></table>
Table 4
table_4
P19-1557
5
acl2019
Table 3 shows that the system obtains superior results on the Hate Speech dataset and yields competitive results on the Kaggle data in comparison to some state-of-the-art baseline systems. Table 4 shows the results of our system on the DBpedia and AG News datasets. Using the same model without any tuning, we managed to obtain competitive results again compared to previous state-of-the-art systems.
[0, 1, 1]
['Table 3 shows that the system obtains superior results on the Hate Speech dataset and yields competitive results on the Kaggle data in comparison to some state-of-the-art baseline systems.', 'Table 4 shows the results of our system on the DBpedia and AG News datasets.', 'Using the same model without any tuning, we managed to obtain competitive results again compared to previous state-of-the-art systems.']
[None, ['DBpedia(%)', 'AG News (%)'], ['This work']]
1
P19-1564table_2
Comparison of MTN (Base) to state-of-the-art visual dialogue models on the test-std v1.0. The best measure is highlighted in bold.
2
[['Model', 'MTN (Base)'], ['Model', 'CorefNMN (Kottur et al., 2018)'], ['Model', 'MN (Das et al., 2017a)'], ['Model', 'HRE (Das et al., 2017a)'], ['Model', 'LF (Das et al., 2017a)']]
1
[['NDCG']]
[['55.33'], ['54.7'], ['47.5'], ['45.46'], ['45.31']]
column
['NDCG']
['MTN (Base)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>NDCG</th> </tr> </thead> <tbody> <tr> <td>Model || MTN (Base)</td> <td>55.33</td> </tr> <tr> <td>Model || CorefNMN (Kottur et al., 2018)</td> <td>54.7</td> </tr> <tr> <td>Model || MN (Das et al., 2017a)</td> <td>47.5</td> </tr> <tr> <td>Model || HRE (Das et al., 2017a)</td> <td>45.46</td> </tr> <tr> <td>Model || LF (Das et al., 2017a)</td> <td>45.31</td> </tr> </tbody></table>
Table 2
table_2
P19-1564
8
acl2019
We trained MTN with the Base parameters on the Visual Dialogue v1.0 training data and evaluated on the test-std v1.0 set. The image features are extracted by a pre-trained object detection model (refer to the appendix Section A.2 for data preprocessing). We evaluate our model with the Normalized Discounted Cumulative Gain (NDCG) score by submitting the predicted ranks of the response candidates to the evaluation server (as the ground truth for the test-std v1.0 split is not published). We keep all the training procedures unchanged from the video-grounded dialogue task. Table 2 shows that our proposed MTN is able to generalize to the visually grounded dialogue setting. It is interesting that our generative model outperforms other retrieval-based approaches in NDCG without any task-specific fine-tuning. There are other submissions with higher NDCG scores on the leaderboard, but the approaches of these submissions are not clearly detailed enough to compare with.
[2, 2, 2, 2, 1, 1, 0]
['We trained MTN with the Base parameters on the Visual Dialogue v1.0 training data and evaluated on the test-std v1.0 set.', 'The image features are extracted by a pre-trained object detection model (refer to the appendix Section A.2 for data preprocessing).', 'We evaluate our model with the Normalized Discounted Cumulative Gain (NDCG) score by submitting the predicted ranks of the response candidates to the evaluation server (as the ground truth for the test-std v1.0 split is not published).', 'We keep all the training procedures unchanged from the video-grounded dialogue task.', 'Table 2 shows that our proposed MTN is able to generalize to the visually grounded dialogue setting.', 'It is interesting that our generative model outperforms other retrieval-based approaches in NDCG without any task-specific fine-tuning.', 'There are other submissions with higher NDCG scores on the leaderboard, but the approaches of these submissions are not clearly detailed enough to compare with.']
[None, None, None, None, None, ['MTN (Base)'], ['CorefNMN (Kottur et al., 2018)', 'MN (Das et al., 2017a)', 'HRE (Das et al., 2017a)', 'LF (Das et al., 2017a)']]
1
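The P19-1564 record above reports only NDCG. As a reference, here is a minimal sketch of the standard NDCG formula (a generic illustration with made-up relevances; the Visual Dialog evaluation server's exact scoring code is not given in the record):

import math

def ndcg(relevances, k=None):
    # relevances: graded relevance of candidates, in the order ranked by the model.
    k = k or len(relevances)
    dcg = sum(rel / math.log2(rank + 2) for rank, rel in enumerate(relevances[:k]))
    ideal = sorted(relevances, reverse=True)[:k]
    idcg = sum(rel / math.log2(rank + 2) for rank, rel in enumerate(ideal))
    return dcg / idcg if idcg > 0 else 0.0

print(ndcg([0.0, 1.0, 0.5]))  # < 1.0 because the relevant candidates are ranked too low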
P19-1565table_3
Results of Turn-level Evaluation.
2
[['System', 'Retrieval'], ['System', 'Ours-Random'], ['System', 'Ours-PMI'], ['System', 'Ours-Neural'], ['System', 'Ours-Kernel']]
2
[['Keyword Prediction', 'Rw@1'], ['Keyword Prediction', 'Rw@3'], ['Keyword Prediction', 'Rw@5'], ['Keyword Prediction', 'P@1'], ['Keyword Prediction', 'Cor.'], ['Response Retrieval', 'R20@1'], ['Response Retrieval', 'R20@3'], ['Response Retrieval', 'R20@5'], ['Response Retrieval', 'MRR']]
[['-', '-', '-', '-', '-', '0.5196', '0.7636', '0.8622', '0.6661'], ['0.0005', '0.0015', '0.0025', '0.0009', '0.4995', '0.5187', '0.7619', '0.8631', '0.665'], ['0.0585', '0.1351', '0.1872', '0.0871', '0.7974', '0.5441', '0.7839', '0.8716', '0.6847'], ['0.0609', '0.1324', '0.1825', '0.1006', '0.8075', '0.5395', '0.7801', '0.8799', '0.6816'], ['0.0642', '0.1431', '0.1928', '0.1191', '0.8164', '0.5486', '0.7827', '0.8845', '0.6914']]
column
['Rw@1', 'Rw@3', 'Rw@5', 'P@1', 'Cor.', 'R20@1', 'R20@3', 'R20@5', 'MRR']
['Ours-Random', 'Ours-PMI', 'Ours-Neural', 'Ours-Kernel']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Keyword Prediction || Rw@1</th> <th>Keyword Prediction || Rw@3</th> <th>Keyword Prediction || Rw@5</th> <th>Keyword Prediction || P@1</th> <th>Keyword Prediction || Cor.</th> <th>Response Retrieval || R20@1</th> <th>Response Retrieval || R20@3</th> <th>Response Retrieval || R20@5</th> <th>Response Retrieval || MRR</th> </tr> </thead> <tbody> <tr> <td>System || Retrieval</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>0.5196</td> <td>0.7636</td> <td>0.8622</td> <td>0.6661</td> </tr> <tr> <td>System || Ours-Random</td> <td>0.0005</td> <td>0.0015</td> <td>0.0025</td> <td>0.0009</td> <td>0.4995</td> <td>0.5187</td> <td>0.7619</td> <td>0.8631</td> <td>0.665</td> </tr> <tr> <td>System || Ours-PMI</td> <td>0.0585</td> <td>0.1351</td> <td>0.1872</td> <td>0.0871</td> <td>0.7974</td> <td>0.5441</td> <td>0.7839</td> <td>0.8716</td> <td>0.6847</td> </tr> <tr> <td>System || Ours-Neural</td> <td>0.0609</td> <td>0.1324</td> <td>0.1825</td> <td>0.1006</td> <td>0.8075</td> <td>0.5395</td> <td>0.7801</td> <td>0.8799</td> <td>0.6816</td> </tr> <tr> <td>System || Ours-Kernel</td> <td>0.0642</td> <td>0.1431</td> <td>0.1928</td> <td>0.1191</td> <td>0.8164</td> <td>0.5486</td> <td>0.7827</td> <td>0.8845</td> <td>0.6914</td> </tr> </tbody></table>
Table 3
table_3
P19-1565
8
acl2019
Results . Table 3 shows the evaluation results. Our system with the Kernel transition module outperforms all other systems in terms of all metrics on both tasks, except for R20@3 where the system with PMI transition performs best. The Kernel approach can predict the next keywords more precisely. In the task of response selection, our systems that are augmented with predicted keywords significantly outperform the base Retrieval approach, showing predicted keywords are helpful for better retrieving responses by capturing coarse-grained information of the next utterances. Interestingly, the system with Random transition has a close performance to the base Retrieval model, indicating that the erroneous keywords can be ignored by the system after training.
[2, 1, 1, 1, 1, 1]
['Results .', 'Table 3 shows the evaluation results.', 'Our system with the Kernel transition module outperforms all other systems in terms of all metrics on both tasks, except for R20@3 where the system with PMI transition performs best.', 'The Kernel approach can predict the next keywords more precisely.', 'In the task of response selection, our systems that are augmented with predicted keywords significantly outperform the base Retrieval approach, showing predicted keywords are helpful for better retrieving responses by capturing coarse-grained information of the next utterances.', 'Interestingly, the system with Random transition has a close performance to the base Retrieval model, indicating that the erroneous keywords can be ignored by the system after training.']
[None, None, ['Ours-Kernel', 'Keyword Prediction', 'Response Retrieval', 'Ours-PMI', 'R20@3'], ['Ours-Kernel'], ['Response Retrieval', 'Ours-PMI', 'Retrieval'], ['Response Retrieval', 'Ours-Random', 'Retrieval']]
1
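The response-retrieval metrics in the P19-1565 record (R20@k and MRR) follow their standard definitions; a minimal sketch under the usual assumption that each test dialogue has one ground-truth response among 20 candidates (the ranks below are hypothetical):

def recall_at_k(ranks, k):
    # Fraction of examples whose ground-truth response is ranked within the top k.
    return sum(1 for r in ranks if r <= k) / len(ranks)

def mrr(ranks):
    # Mean reciprocal rank of the ground-truth response.
    return sum(1.0 / r for r in ranks) / len(ranks)

ranks = [1, 3, 2, 1, 7]       # hypothetical 1-based ranks for five test dialogues
print(recall_at_k(ranks, 1))  # 0.4, the analogue of R20@1
print(recall_at_k(ranks, 5))  # 0.8, the analogue of R20@5
print(mrr(ranks))             # approximately 0.5952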
P19-1569table_3
Comparison with other works on the test sets of Raganato et al. (2017a). All works used sense annotations from SemCor as supervision, although often different pretrained embeddings. † reproduced from Raganato et al. (2017a); * used as a development set; bold new state-of-the-art (SOTA); underlined previous SOTA.
2
[['Model', 'MFS† (Most Frequent Sense)'], ['Model', 'IMS† (2010)'], ['Model', 'IMS + embeddings† (2016)'], ['Model', 'context2vec k-NN† (2016)'], ['Model', 'word2vec k-NN (2016)'], ['Model', 'LSTM-LP (Label Prop.) (2016)'], ['Model', 'Seq2Seq (Task Modelling) (2017b)'], ['Model', 'BiLSTM (Task Modelling) (2017b)'], ['Model', 'ELMo k-NN (2018)'], ['Model', 'HCAN (Hier. Co-Attention) (2018a)'], ['Model', 'BiLSTM w/Vocab. Reduction (2018)'], ['Model', 'BERT k-NN'], ['Model', 'LMMS2348 (ELMo)'], ['Model', 'LMMS2348 (BERT)']]
1
[['Senseval2'], ['Senseval3'], ['SemEval2007'], ['SemEval2013'], ['SemEval2015'], ['ALL']]
[['65.6', '66', '54.5', '63.8', '67.1', '64.8'], ['70.90', '69.30', '61.30', '65.3', '69.5', '68.4'], ['72.2', '70.4', '62.6', '65.9', '71.5', '69.6'], ['71.80', '69.10', '61.30', '65.60', '71.9', '69'], ['67.80', '62.10', '58.50', '66.10', '66.7', '-'], ['73.80', '71.80', '63.50', '69.50', '72.6', '-'], ['70.10', '68.50', '63.1*', '66.50', '69.2', '68.6*'], ['72.00', '69.10', '64.8*', '66.90', '71.5', '69.9*'], ['71.50', '67.50', '57.10', '65.30', '69.9', '67.9'], ['72.80', '70.30', '-*', '68.50', '72.8', '-*'], ['72.60', '70.40', '61.50', '70.80', '71.3', '70.8'], ['76.30', '73.20', '66.20', '71.70', '74.1', '73.5'], ['68.10', '64.70', '53.80', '66.90', '66.2', ''], ['76.30', '75.60', '68.10', '75.10', '77', '75.4']]
column
['F1', 'F1', 'F1', 'F1', 'F1', 'F1']
['LMMS2348 (ELMo)', 'LMMS2348 (BERT)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Senseval2</th> <th>Senseval3</th> <th>SemEval2007</th> <th>SemEval2013</th> <th>SemEval2015</th> <th>ALL</th> </tr> </thead> <tbody> <tr> <td>Model || MFS† (Most Frequent Sense)</td> <td>65.6</td> <td>66</td> <td>54.5</td> <td>63.8</td> <td>67.1</td> <td>64.8</td> </tr> <tr> <td>Model || IMS† (2010)</td> <td>70.90</td> <td>69.30</td> <td>61.30</td> <td>65.3</td> <td>69.5</td> <td>68.4</td> </tr> <tr> <td>Model || IMS + embeddings† (2016)</td> <td>72.2</td> <td>70.4</td> <td>62.6</td> <td>65.9</td> <td>71.5</td> <td>69.6</td> </tr> <tr> <td>Model || context2vec k-NN† (2016)</td> <td>71.80</td> <td>69.10</td> <td>61.30</td> <td>65.60</td> <td>71.9</td> <td>69</td> </tr> <tr> <td>Model || word2vec k-NN (2016)</td> <td>67.80</td> <td>62.10</td> <td>58.50</td> <td>66.10</td> <td>66.7</td> <td>-</td> </tr> <tr> <td>Model || LSTM-LP (Label Prop.) (2016)</td> <td>73.80</td> <td>71.80</td> <td>63.50</td> <td>69.50</td> <td>72.6</td> <td>-</td> </tr> <tr> <td>Model || Seq2Seq (Task Modelling) (2017b)</td> <td>70.10</td> <td>68.50</td> <td>63.1*</td> <td>66.50</td> <td>69.2</td> <td>68.6*</td> </tr> <tr> <td>Model || BiLSTM (Task Modelling) (2017b)</td> <td>72.00</td> <td>69.10</td> <td>64.8*</td> <td>66.90</td> <td>71.5</td> <td>69.9*</td> </tr> <tr> <td>Model || ELMo k-NN (2018)</td> <td>71.50</td> <td>67.50</td> <td>57.10</td> <td>65.30</td> <td>69.9</td> <td>67.9</td> </tr> <tr> <td>Model || HCAN (Hier. Co-Attention) (2018a)</td> <td>72.80</td> <td>70.30</td> <td>-*</td> <td>68.50</td> <td>72.8</td> <td>-*</td> </tr> <tr> <td>Model || BiLSTM w/Vocab. Reduction (2018)</td> <td>72.60</td> <td>70.40</td> <td>61.50</td> <td>70.80</td> <td>71.3</td> <td>70.8</td> </tr> <tr> <td>Model || BERT k-NN</td> <td>76.30</td> <td>73.20</td> <td>66.20</td> <td>71.70</td> <td>74.1</td> <td>73.5</td> </tr> <tr> <td>Model || LMMS2348 (ELMo)</td> <td>68.10</td> <td>64.70</td> <td>53.80</td> <td>66.90</td> <td>66.2</td> <td></td> </tr> <tr> <td>Model || LMMS2348 (BERT)</td> <td>76.30</td> <td>75.60</td> <td>68.10</td> <td>75.10</td> <td>77</td> <td>75.4</td> </tr> </tbody></table>
Table 3
table_3
P19-1569
7
acl2019
5.1 All-Words Disambiguation . In Table 3 we show our results for all tasks of Raganato et al. (2017a)’s evaluation framework. We used the framework’s scoring scripts to avoid any discrepancies in the scoring methodology. Note that the k-NN referred in Table 3 always refers to the closest neighbor, and relies on MFS fallbacks. The first noteworthy result we obtained was that simply replicating Peters et al. (2018)’s method for WSD using BERT instead of ELMo, we were able to significantly, and consistently, surpass the performance of all previous works. When using our method (LMMS), performance still improves significantly over the previous impressive results (+1.9 F1 on ALL, +3.4 F1 on SemEval 2013). Interestingly, we found that our method using ELMo embeddings didn’t outperform ELMo k-NN with MFS fallback, suggesting that it’s necessary to achieve a minimum competence level of embeddings from sense annotations (and glosses) before the inferred sense embeddings become more useful than MFS.
[2, 1, 2, 2, 1, 1, 1]
['5.1 All-Words Disambiguation .', 'In Table 3 we show our results for all tasks of Raganato et al. (2017a)’s evaluation framework.', 'We used the framework’s scoring scripts to avoid any discrepancies in the scoring methodology.', 'Note that the k-NN referred in Table 3 always refers to the closest neighbor, and relies on MFS fallbacks.', 'The first noteworthy result we obtained was that simply replicating Peters et al. (2018)’s method for WSD using BERT instead of ELMo, we were able to significantly, and consistently, surpass the performance of all previous works.', 'When using our method (LMMS), performance still improves significantly over the previous impressive results (+1.9 F1 on ALL, +3.4 F1 on SemEval 2013).', 'Interestingly, we found that our method using ELMo embeddings didn’t outperform ELMo k-NN with MFS fallback, suggesting that it’s necessary to achieve a minimum competence level of embeddings from sense annotations (and glosses) before the inferred sense embeddings become more useful than MFS.']
[None, None, None, ['context2vec k-NN† (2016)', 'word2vec k-NN (2016)', 'ELMo k-NN (2018)', 'BERT k-NN'], ['LMMS2348 (BERT)'], ['LMMS2348 (BERT)', 'BERT k-NN', 'ALL', 'SemEval2013'], ['LMMS2348 (ELMo)', 'ELMo k-NN (2018)']]
1
P19-1570table_6
Comparison of W ordCtx2Sense with the state-of-the-art methods for Word Sense Induction on MakeSense-2016 and SemEval-2010 dataset. We report Fscore and V-measure scores multiplied by 100.
4
[['Method', '(Huang et al., 2012)', 'K', '-'], ['Method', '(Neelakantan et al., 2015) 300D.30K.key', 'K', '-'], ['Method', '(Neelakantan et al., 2015) 300D.6K.key', 'K', '-'], ['Method', '(Mu et al., 2017)', 'K', '2'], ['Method', '(Mu et al., 2017)', 'K', '5'], ['Method', '(Arora et al., 2018)', 'K', '2'], ['Method', '(Arora et al., 2018)', 'K', '5'], ['Method', 'WordCtx2Sense (? = 0.0)', 'K', '2'], ['Method', 'WordCtx2Sense (? = 0.0)', 'K', '5'], ['Method', 'WordCtx2Sense (? = 0.0)', 'K', '6'], ['Method', 'WordCtx2Sense (? = 10^?2)', 'K', '2'], ['Method', 'WordCtx2Sense (? = 10^?2)', 'K', '5'], ['Method', 'WordCtx2Sense (? = 10^?2)', 'K', '6']]
2
[['MakeSense-2016', 'F-scr'], ['MakeSense-2016', 'V-msr'], ['SemEval-2010', 'F-scr'], ['SemEval-2010', 'V-msr']]
[['47.4', '15.5', '38.05', '10.6'], ['54.49', '19.4', '47.26', '9'], ['57.91', '14.4', '48.43', '6.9'], ['64.66', '28.8', '57.14', '7.1'], ['58.25', '34.3', '44.07', '14.5'], ['-', '-', '58.55', '6.1'], ['-', '-', '46.38', '11.5'], ['63.71', '22.2', '59.38', '6.8'], ['59.75', '32.9', '46.47', '13.2'], ['59.13', '34.2', '44.04', '14.3'], ['65.27', '24.4', '59.15', '6.7'], ['62.88', '35', '47.34', '13.7'], ['61.43', '35.3', '44.7', '15']]
column
['F-scr', 'V-msr', 'F-scr', 'V-msr']
['WordCtx2Sense (? = 0.0)', 'WordCtx2Sense (? = 10^?2)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MakeSense-2016 || F-scr</th> <th>MakeSense-2016 || V-msr</th> <th>SemEval-2010 || F-scr</th> <th>SemEval-2010 || V-msr</th> </tr> </thead> <tbody> <tr> <td>Method || (Huang et al., 2012) || K || -</td> <td>47.4</td> <td>15.5</td> <td>38.05</td> <td>10.6</td> </tr> <tr> <td>Method || (Neelakantan et al., 2015) 300D.30K.key || K || -</td> <td>54.49</td> <td>19.4</td> <td>47.26</td> <td>9</td> </tr> <tr> <td>Method || (Neelakantan et al., 2015) 300D.6K.key || K || -</td> <td>57.91</td> <td>14.4</td> <td>48.43</td> <td>6.9</td> </tr> <tr> <td>Method || (Mu et al., 2017) || K || 2</td> <td>64.66</td> <td>28.8</td> <td>57.14</td> <td>7.1</td> </tr> <tr> <td>Method || (Mu et al., 2017) || K || 5</td> <td>58.25</td> <td>34.3</td> <td>44.07</td> <td>14.5</td> </tr> <tr> <td>Method || (Arora et al., 2018) || K || 2</td> <td>-</td> <td>-</td> <td>58.55</td> <td>6.1</td> </tr> <tr> <td>Method || (Arora et al., 2018) || K || 5</td> <td>-</td> <td>-</td> <td>46.38</td> <td>11.5</td> </tr> <tr> <td>Method || WordCtx2Sense (? = 0.0) || K || 2</td> <td>63.71</td> <td>22.2</td> <td>59.38</td> <td>6.8</td> </tr> <tr> <td>Method || WordCtx2Sense (? = 0.0) || K || 5</td> <td>59.75</td> <td>32.9</td> <td>46.47</td> <td>13.2</td> </tr> <tr> <td>Method || WordCtx2Sense (? = 0.0) || K || 6</td> <td>59.13</td> <td>34.2</td> <td>44.04</td> <td>14.3</td> </tr> <tr> <td>Method || WordCtx2Sense (? = 10^?2) || K || 2</td> <td>65.27</td> <td>24.4</td> <td>59.15</td> <td>6.7</td> </tr> <tr> <td>Method || WordCtx2Sense (? = 10^?2) || K || 5</td> <td>62.88</td> <td>35</td> <td>47.34</td> <td>13.7</td> </tr> <tr> <td>Method || WordCtx2Sense (? = 10^?2) || K || 6</td> <td>61.43</td> <td>35.3</td> <td>44.7</td> <td>15</td> </tr> </tbody></table>
Table 6
table_6
P19-1570
9
acl2019
Results . Table 6 shows the results of clustering on WSI SemEval-2010 dataset. WordCtx2Sense outperforms (Arora et al., 2018) and (Mu et al., 2017) on both F-score and V-measure scores by a considerable margin. We observe similar improvements on the MakeSense-2016 dataset.
[2, 1, 1, 1]
['Results .', 'Table 6 shows the results of clustering on WSI SemEval-2010 dataset.', 'WordCtx2Sense outperforms (Arora et al., 2018) and (Mu et al., 2017) on both F-score and V-measure scores by a considerable margin.', 'We observe similar improvements on the MakeSense-2016 dataset.']
[None, ['SemEval-2010'], ['WordCtx2Sense (? = 0.0)', 'WordCtx2Sense (? = 10^?2)', '(Arora et al., 2018)', '(Mu et al., 2017)'], ['MakeSense-2016']]
1
P19-1584table_2
Binary HIPAA F1 scores of our non-private (top) and private (bottom) de-identification approaches on the i2b2 2014 test set in comparison to non-private the state of the art. Our private approaches use N = 100 neighbors as a privacy criterion.
2
[['Model', 'Our non-private FastText'], ['Model', 'Our non-private GloVe'], ['Model', 'Our non-private GloVe + casing'], ['Model', 'Dernoncourt et al. (LSTM-CRF)'], ['Model', 'Liu et al. (ensemble + rules)']]
1
[['F1 (%)']]
[['97.67'], ['97.24'], ['97.62'], ['97.85'], ['98.27']]
column
['F1 (%)']
['Our non-private FastText', 'Our non-private GloVe', 'Our non-private GloVe + casing']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>F1 (%)</th> </tr> </thead> <tbody> <tr> <td>Model || Our non-private FastText</td> <td>97.67</td> </tr> <tr> <td>Model || Our non-private GloVe</td> <td>97.24</td> </tr> <tr> <td>Model || Our non-private GloVe + casing</td> <td>97.62</td> </tr> <tr> <td>Model || Dernoncourt et al. (LSTM-CRF)</td> <td>97.85</td> </tr> <tr> <td>Model || Liu et al. (ensemble + rules)</td> <td>98.27</td> </tr> </tbody></table>
Table 2
table_2
P19-1584
6
acl2019
7 Result. Table 2 shows de-identification performance results for the non-private de-identification classifier in comparison to the state of the art. The results are average values out of five experiment runs. When trained on the raw i2b2 2014 data, our models achieve F1 scores that are comparable to Dernoncourt et al. results. The casing feature improves GloVe by 0.4 percentage points.
[2, 1, 1, 1, 1]
['7 Result.', 'Table 2 shows de-identification performance results for the non-private de-identification classifier in comparison to the state of the art.', 'The results are average values out of five experiment runs.', 'When trained on the raw i2b2 2014 data, our models achieve F1 scores that are comparable to Dernoncourt et al. results.', 'The casing feature improves GloVe by 0.4 percentage points.']
[None, None, None, ['Our non-private FastText', 'Our non-private GloVe', 'Our non-private GloVe + casing', 'Dernoncourt et al. (LSTM-CRF)'], ['Our non-private GloVe + casing', 'Our non-private GloVe']]
1
P19-1595table_2
Comparison of test set results. *MT-DNNKD is distilled from a diverse ensemble of models.
2
[['Model', 'BERT-Base (Devlin et al., 2019)'], ['Model', 'BERT-Large (Devlin et al., 2019)'], ['Model', 'BERT on STILTs (Phang et al., 2018)'], ['Model', 'MT-DNN (Liu et al., 2019b)'], ['Model', 'Span-Extractive BERT on STILTs (Keskar et al., 2019)'], ['Model', 'Snorkel MeTaL ensemble (Hancock et al., 2019)'], ['Model', 'MT-DNNKD* (Liu et al., 2019a)'], ['Model', 'BERT-Large + BAM (ours)']]
1
[['GLUE score']]
[['78.5'], ['80.5'], ['82'], ['82.2'], ['82.3'], ['83.2'], ['83.7'], ['82.3']]
column
['GLUE score']
['BERT-Large + BAM (ours)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>GLUE score</th> </tr> </thead> <tbody> <tr> <td>Model || BERT-Base (Devlin et al., 2019)</td> <td>78.5</td> </tr> <tr> <td>Model || BERT-Large (Devlin et al., 2019)</td> <td>80.5</td> </tr> <tr> <td>Model || BERT on STILTs (Phang et al., 2018)</td> <td>82</td> </tr> <tr> <td>Model || MT-DNN (Liu et al., 2019b)</td> <td>82.2</td> </tr> <tr> <td>Model || Span-Extractive BERT on STILTs (Keskar et al., 2019)</td> <td>82.3</td> </tr> <tr> <td>Model || Snorkel MeTaL ensemble (Hancock et al., 2019)</td> <td>83.2</td> </tr> <tr> <td>Model || MT-DNNKD* (Liu et al., 2019a)</td> <td>83.7</td> </tr> <tr> <td>Model || BERT-Large + BAM (ours)</td> <td>82.3</td> </tr> </tbody></table>
Table 2
table_2
P19-1595
4
acl2019
We compare against recent work by submitting to the GLUE leaderboard. We use Single→Multi distillation. Following the procedure used by BERT, we train multiple models and submit the one with the highest average dev set score to the test set. BERT trained 10 models for each task (80 total);. we trained 20 multi-task models. Results are shown in Table 2. Our work outperforms or matches existing published results that do not rely on ensembling. However, due to the variance between trials discussed under “Reporting Results,” we think these test set numbers should be taken with a grain of salt, as they only show the performance of individual training runs. We believe significance testing over multiple trials would be needed to have a definitive comparison.
[2, 2, 2, 2, 2, 1, 1, 2, 2]
['We compare against recent work by submitting to the GLUE leaderboard.', 'We use Single→Multi distillation.', 'Following the procedure used by BERT, we train multiple models and submit the one with the highest average dev set score to the test set.', 'BERT trained 10 models for each task (80 total);.', 'we trained 20 multi-task models.', 'Results are shown in Table 2.', 'Our work outperforms or matches existing published results that do not rely on ensembling.', 'However, due to the variance between trials discussed under “Reporting Results,” we think these test set numbers should be taken with a grain of salt, as they only show the performance of individual training runs.', 'We believe significance testing over multiple trials would be needed to have a definitive comparison.']
[['GLUE score'], None, None, None, None, None, ['BERT-Large + BAM (ours)'], None, None]
1
P19-1599table_3
Test results with WPL at different positions.
1
[['VGVAE w/o WPL'], ['Dec. hidden state'], ['Enc. emb.'], ['Dec. emb.'], ['Enc. & Dec. emb.']]
1
[['BL'], ['R-1'], ['R-2'], ['R-L'], ['MET'], ['ST']]
[['3.5', '24.8', '7.3', '29.7', '12.6', '10.6'], ['3.6', '24.9', '7.3', '29.7', '12.6', '10.5'], ['3.9', '26.1', '7.8', '31', '12.9', '10.2'], ['4.1', '26.3', '8.1', '31.3', '13.1', '10.1'], ['4.5', '26.5', '8.2', '31.5', '13.3', '10']]
column
['BL', 'R-1', 'R-2', 'R-L', 'MET', 'ST']
['Dec. hidden state']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BL</th> <th>R-1</th> <th>R-2</th> <th>R-L</th> <th>MET</th> <th>ST</th> </tr> </thead> <tbody> <tr> <td>VGVAE w/o WPL</td> <td>3.5</td> <td>24.8</td> <td>7.3</td> <td>29.7</td> <td>12.6</td> <td>10.6</td> </tr> <tr> <td>Dec. hidden state</td> <td>3.6</td> <td>24.9</td> <td>7.3</td> <td>29.7</td> <td>12.6</td> <td>10.5</td> </tr> <tr> <td>Enc. emb.</td> <td>3.9</td> <td>26.1</td> <td>7.8</td> <td>31</td> <td>12.9</td> <td>10.2</td> </tr> <tr> <td>Dec. emb.</td> <td>4.1</td> <td>26.3</td> <td>8.1</td> <td>31.3</td> <td>13.1</td> <td>10.1</td> </tr> <tr> <td>Enc. &amp; Dec. emb.</td> <td>4.5</td> <td>26.5</td> <td>8.2</td> <td>31.5</td> <td>13.3</td> <td>10</td> </tr> </tbody></table>
Table 3
table_3
P19-1599
7
acl2019
Effect of Position of Word Position Loss. We also study the effect of the position of WPL by (1) using the decoder hidden state, (2) using the concatenation of word embeddings in the syntactic encoder and the syntactic variable, (3) using the concatenation of word embeddings in the decoder and the syntactic variable, or (4) adding it on both the encoder embeddings and decoder word embeddings. Table 3 shows that adding WPL on hidden states can help improve performance slightly but not as good as adding it on word embeddings. In practice, we also observe that the value of WPL tends to vanish when using WPL on hidden states, which is presumably caused by the fact that LSTMs have sequence information, making the optimization of WPL trivial. We also observe that adding WPL to both the encoder and decoder brings the largest improvement.
[0, 0, 1, 1, 1]
['Effect of Position of Word Position Loss.', 'We also study the effect of the position of WPL by (1) using the decoder hidden state, (2) using the concatenation of word embeddings in the syntactic encoder and the syntactic variable, (3) using the concatenation of word embeddings in the decoder and the syntactic variable, or (4) adding it on both the encoder embeddings and decoder word embeddings.', 'Table 3 shows that adding WPL on hidden states can help improve performance slightly but not as good as adding it on word embeddings.', 'In practice, we also observe that the value of WPL tends to vanish when using WPL on hidden states, which is presumably caused by the fact that LSTMs have sequence information, making the optimization of WPL trivial.', 'We also observe that adding WPL to both the encoder and decoder brings the largest improvement.']
[None, None, ['Dec. hidden state'], ['Dec. hidden state'], ['Dec. hidden state']]
1
P19-1599table_7
Test results when using a single code.
1
[['LC'], ['Single LC']]
1
[['BL'], ['R-1'], ['R-2'], ['R-L'], ['MET'], ['ST']]
[['13.6', '44.7', '21', '48.3', '24.8', '6.7'], ['12.9', '44.2', '20.3', '47.4', '24.1', '6.9']]
column
['BL', 'R-1', 'R-2', 'R-L', 'MET', 'ST']
['LC']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BL</th> <th>R-1</th> <th>R-2</th> <th>R-L</th> <th>MET</th> <th>ST</th> </tr> </thead> <tbody> <tr> <td>LC</td> <td>13.6</td> <td>44.7</td> <td>21</td> <td>48.3</td> <td>24.8</td> <td>6.7</td> </tr> <tr> <td>Single LC</td> <td>12.9</td> <td>44.2</td> <td>20.3</td> <td>47.4</td> <td>24.1</td> <td>6.9</td> </tr> </tbody></table>
Table 7
table_7
P19-1599
8
acl2019
We also compare the performance of LC by using a single latent code that has 50 classes. The results in Table 7 show that it is better to use smaller number of classes for each cluster instead of using a cluster with a large number of classes.
[2, 1]
['We also compare the performance of LC by using a single latent code that has 50 classes.', 'The results in Table 7 show that it is better to use smaller number of classes for each cluster instead of using a cluster with a large number of classes.']
[['Single LC'], ['LC', 'Single LC']]
1
P19-1602table_3
Performance of paraphrase generation. The larger↑ (or lower↓), the better. Some results are quoted from †Miao et al. (2019) and ‡Gupta et al. (2018).
2
[['Model', 'Origin Sentence'], ['Model', 'VAE-SVG-eq (supervised)'], ['Model', 'VAE (unsupervised)'], ['Model', 'CGMH'], ['Model', 'DSS-VAE']]
1
[['BLEU-ref'], ['BLEU-ori']]
[['30.49', '100'], ['22.9', '–'], ['9.25', '27.23'], ['18.85', '50.18'], ['20.54', '52.77']]
column
['BLEU-ref', 'BLEU-ori']
['DSS-VAE']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU-ref</th> <th>BLEU-ori</th> </tr> </thead> <tbody> <tr> <td>Model || Origin Sentence</td> <td>30.49</td> <td>100</td> </tr> <tr> <td>Model || VAE-SVG-eq (supervised)</td> <td>22.9</td> <td>–</td> </tr> <tr> <td>Model || VAE (unsupervised)</td> <td>9.25</td> <td>27.23</td> </tr> <tr> <td>Model || CGMH</td> <td>18.85</td> <td>50.18</td> </tr> <tr> <td>Model || DSS-VAE</td> <td>20.54</td> <td>52.77</td> </tr> </tbody></table>
Table 3
table_3
P19-1602
7
acl2019
Results Table 3 shows the performance of unsupervised paraphrase generation. In the first row of Table 3, simply copying the original sentences yields the highest BLEU-ref, but is meaningless as it has a BLEU-ori score of 100. We see that DSS-VAE outperforms the CGMH and the original VAE in BLEU-ref. Especially, DSS-VAE achieves a closer BLEU-ref compared with supervised paraphrase methods (Gupta et al., 2018). We admit that it is hard to present the trade-off by listing a single score for each model in the Table 3. We therefore have the scatter plot in Figure 4 to further compare these methods. As seen, the trade-off is pretty linear and less noisy compared with Figure 3. It is seen that the line of DSS-VAE is located to the upper-left of the competing methods. In other words, the plain VAE and CGMH are “inadmissible,” meaning that DSSVAE simultaneously outperforms them in both BLEU-ori and BLEU-ref, indicating that DSSVAE outperforms previous state-of-the-art methods in unsupervised paraphrase generation.
[1, 1, 1, 1, 2, 2, 2, 2, 1]
['Results Table 3 shows the performance of unsupervised paraphrase generation.', 'In the first row of Table 3, simply copying the original sentences yields the highest BLEU-ref, but is meaningless as it has a BLEU-ori score of 100.', 'We see that DSS-VAE outperforms the CGMH and the original VAE in BLEU-ref.', 'Especially, DSS-VAE achieves a closer BLEU-ref compared with supervised paraphrase methods (Gupta et al., 2018).', 'We admit that it is hard to present the trade-off by listing a single score for each model in the Table 3.', 'We therefore have the scatter plot in Figure 4 to further compare these methods.', 'As seen, the trade-off is pretty linear and less noisy compared with Figure 3.', 'It is seen that the line of DSS-VAE is located to the upper-left of the competing methods.', 'In other words, the plain VAE and CGMH are “inadmissible,” meaning that DSSVAE simultaneously outperforms them in both BLEU-ori and BLEU-ref, indicating that DSSVAE outperforms previous state-of-the-art methods in unsupervised paraphrase generation.']
[None, ['Origin Sentence', 'BLEU-ref', 'BLEU-ori'], ['DSS-VAE', 'CGMH', 'VAE (unsupervised)', 'BLEU-ref'], ['DSS-VAE', 'BLEU-ref'], None, None, None, ['DSS-VAE'], ['VAE (unsupervised)', 'CGMH', 'DSS-VAE']]
1
P19-1603table_2
Automatic evaluation of generation models.
2
[['Model', 'Seq2Seq + SentiMod'], ['Model', 'SIC-Seq2Seq + RB'], ['Model', 'SIC-Seq2Seq + RM'], ['Model', 'SIC-Seq2Seq + DA']]
1
[['BLEU-1'], ['BLEU-2'], ['I-O SentiCons']]
[['10.7', '3.2', '0.788'], ['19.3', '6.3', '0.879'], ['19.5', '6.2', '0.83'], ['19.8', '6.7', '0.794']]
column
['BLEU-1', 'BLEU-2', 'I-O SentiCons']
['SIC-Seq2Seq + RB', 'SIC-Seq2Seq + RM', 'SIC-Seq2Seq + DA']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU-1</th> <th>BLEU-2</th> <th>I-O SentiCons</th> </tr> </thead> <tbody> <tr> <td>Model || Seq2Seq + SentiMod</td> <td>10.7</td> <td>3.2</td> <td>0.788</td> </tr> <tr> <td>Model || SIC-Seq2Seq + RB</td> <td>19.3</td> <td>6.3</td> <td>0.879</td> </tr> <tr> <td>Model || SIC-Seq2Seq + RM</td> <td>19.5</td> <td>6.2</td> <td>0.83</td> </tr> <tr> <td>Model || SIC-Seq2Seq + DA</td> <td>19.8</td> <td>6.7</td> <td>0.794</td> </tr> </tbody></table>
Table 2
table_2
P19-1603
4
acl2019
The automatic results of four generation models are shown in Table 2. We have the following observations: (1) Three models based on our proposed framework do not have obvious performance difference in terms of BLEU. Meanwhile, all of them can largely outperform the Seq2Seq+SentiMod baseline which does not follow our framework. Thus it shows the effectiveness of the proposed framework. (2) HM SentiCons which measures the performance of sentiment analyzer is marginally consistent with the I-O SentiCons and Sentiment which measure the performance of sentimental generator. This accords with our expectations because the sentimental generator takes the sentiment intensity predicted by the sentiment analyzer as the input signal for controlling the sentiment of the output.
[1, 1, 1, 1, 1, 2]
['The automatic results of four generation models are shown in Table 2.', 'We have the following observations: (1) Three models based on our proposed framework do not have obvious performance difference in terms of BLEU.', 'Meanwhile, all of them can largely outperform the Seq2Seq+SentiMod baseline which does not follow our framework.', 'Thus it shows the effectiveness of the proposed framework.', '(2) HM SentiCons which measures the performance of sentiment analyzer is marginally consistent with the I-O SentiCons and Sentiment which measure the performance of sentimental generator.', 'This accords with our expectations because the sentimental generator takes the sentiment intensity predicted by the sentiment analyzer as the input signal for controlling the sentiment of the output.']
[None, ['SIC-Seq2Seq + RB', 'SIC-Seq2Seq + RM', 'SIC-Seq2Seq + DA', 'BLEU-2'], ['SIC-Seq2Seq + RB', 'SIC-Seq2Seq + RM', 'SIC-Seq2Seq + DA', 'Seq2Seq + SentiMod'], ['SIC-Seq2Seq + RB', 'SIC-Seq2Seq + RM', 'SIC-Seq2Seq + DA'], ['I-O SentiCons'], None]
1
P19-1603table_3
Human evaluation of generation models.
2
[['Model', 'Seq2Seq + SentiMod'], ['Model', 'SIC-Seq2Seq + RB'], ['Model', 'SIC-Seq2Seq + RM'], ['Model', 'SIC-Seq2Seq + DA']]
1
[['Coherency'], ['Fluency'], ['Sentiment']]
[['1.5', '2.5', '3.68'], ['2.65', '4.75', '4.09'], ['2.15', '4.6', '3.65'], ['2.2', '4.5', '3.71']]
column
['Coherency', 'Fluency', 'Sentiment']
['SIC-Seq2Seq + RB', 'SIC-Seq2Seq + RM', 'SIC-Seq2Seq + DA']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Coherency</th> <th>Fluency</th> <th>Sentiment</th> </tr> </thead> <tbody> <tr> <td>Model || Seq2Seq + SentiMod</td> <td>1.5</td> <td>2.5</td> <td>3.68</td> </tr> <tr> <td>Model || SIC-Seq2Seq + RB</td> <td>2.65</td> <td>4.75</td> <td>4.09</td> </tr> <tr> <td>Model || SIC-Seq2Seq + RM</td> <td>2.15</td> <td>4.6</td> <td>3.65</td> </tr> <tr> <td>Model || SIC-Seq2Seq + DA</td> <td>2.2</td> <td>4.5</td> <td>3.71</td> </tr> </tbody></table>
Table 3
table_3
P19-1603
4
acl2019
The automatic and human evaluation results of four generation models are shown in Table 2 and Table 3 respectively. We have the following observations: (1) Three models based on our proposed framework do not have obvious performance difference in terms of BLEU, Coherency, and Fluency. Meanwhile, all of them can largely outperform the Seq2Seq+SentiMod baseline which does not follow our framework. Thus it shows the effectiveness of the proposed framework. (2) HM SentiCons which measures the performance of sentiment analyzer is marginally consistent with the I-O SentiCons and Sentiment which measure the performance of sentimental generator. This accords with our expectations because the sentimental generator takes the sentiment intensity predicted by the sentiment analyzer as the input signal for controlling the sentiment of the output.
[1, 1, 1, 2, 2, 2]
['The automatic and human evaluation results of four generation models are shown in Table 2 and Table 3 respectively.', 'We have the following observations: (1) Three models based on our proposed framework do not have obvious performance difference in terms of BLEU, Coherency, and Fluency.', 'Meanwhile, all of them can largely outperform the Seq2Seq+SentiMod baseline which does not follow our framework.', 'Thus it shows the effectiveness of the proposed framework.', '(2) HM SentiCons which measures the performance of sentiment analyzer is marginally consistent with the I-O SentiCons and Sentiment which measure the performance of sentimental generator.', 'This accords with our expectations because the sentimental generator takes the sentiment intensity predicted by the sentiment analyzer as the input signal for controlling the sentiment of the output.']
[None, ['SIC-Seq2Seq + RB', 'SIC-Seq2Seq + RM', 'SIC-Seq2Seq + DA'], ['SIC-Seq2Seq + RB', 'SIC-Seq2Seq + RM', 'SIC-Seq2Seq + DA', 'Seq2Seq + SentiMod'], None, None, None]
1
P19-1607table_3
Comparison with previous models on text simplification in Newsela dataset and formality transfer in GYAFC dataset. Our models achieved the best BLEU scores across styles and domains.
1
[['Source'], ['Reference'], ['Dress-LS'], ['BiFT-Ens'], ['Ours (RNN)'], ['Ours (SAN)']]
2
[['Newsela', 'Add'], ['Newsela', 'Keep'], ['Newsela', 'Del'], ['Newsela', 'BLEU'], ['Newsela', 'SARI'], ['GYAFC-E&M', 'Add'], ['GYAFC-E&M', 'Keep'], ['GYAFC-E&M', 'Del'], ['GYAFC-E&M', 'BLEU'], ['GYAFC-F&R', 'Add'], ['GYAFC-F&R', 'Keep'], ['GYAFC-F&R', 'Del'], ['GYAFC-F&R', 'BLEU']]
[['0', '60.3', '0', '21.4', '2.8', '0', '85.4', '0', '49.1', '0', '85.8', '0', '51'], ['100', '100', '100', '100', '70.3', '57.2', '82.9', '61.2', '100', '56.5', '82.7', '60.6', '100'], ['2.4', '60.7', '44.9', '24.3', '26.6', '', '', '', '', '', '', '', ''], ['', '', '', '', '', '32.1', '90', '58.2', '71.4', '32.6', '90.6', '60.9', '74.5'], ['2.8', '61.1', '36.5', '24.7', '22.8', '33.5', '90', '59.9', '71.7', '34.3', '90.9', '63.1', '75.9'], ['2.5', '61.3', '38', '24.6', '23.3', '35.2', '90', '61.2', '72.1', '35.3', '91.1', '64', '77']]
column
['Add', 'Keep', 'Del', 'BLEU', 'SARI', 'Add', 'Keep', 'Del', 'BLEU', 'Add', 'Keep', 'Del', 'BLEU']
['Ours (RNN)', 'Ours (SAN)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Newsela || Add</th> <th>Newsela || Keep</th> <th>Newsela || Del</th> <th>Newsela || BLEU</th> <th>Newsela || SARI</th> <th>GYAFC-E&amp;M || Add</th> <th>GYAFC-E&amp;M || Keep</th> <th>GYAFC-E&amp;M || Del</th> <th>GYAFC-E&amp;M || BLEU</th> <th>GYAFC-F&amp;R || Add</th> <th>GYAFC-F&amp;R || Keep</th> <th>GYAFC-F&amp;R || Del</th> <th>GYAFC-F&amp;R || BLEU</th> </tr> </thead> <tbody> <tr> <td>Source</td> <td>0</td> <td>60.3</td> <td>0</td> <td>21.4</td> <td>2.8</td> <td>0</td> <td>85.4</td> <td>0</td> <td>49.1</td> <td>0</td> <td>85.8</td> <td>0</td> <td>51</td> </tr> <tr> <td>Reference</td> <td>100</td> <td>100</td> <td>100</td> <td>100</td> <td>70.3</td> <td>57.2</td> <td>82.9</td> <td>61.2</td> <td>100</td> <td>56.5</td> <td>82.7</td> <td>60.6</td> <td>100</td> </tr> <tr> <td>Dress-LS</td> <td>2.4</td> <td>60.7</td> <td>44.9</td> <td>24.3</td> <td>26.6</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>BiFT-Ens</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td>32.1</td> <td>90</td> <td>58.2</td> <td>71.4</td> <td>32.6</td> <td>90.6</td> <td>60.9</td> <td>74.5</td> </tr> <tr> <td>Ours (RNN)</td> <td>2.8</td> <td>61.1</td> <td>36.5</td> <td>24.7</td> <td>22.8</td> <td>33.5</td> <td>90</td> <td>59.9</td> <td>71.7</td> <td>34.3</td> <td>90.9</td> <td>63.1</td> <td>75.9</td> </tr> <tr> <td>Ours (SAN)</td> <td>2.5</td> <td>61.3</td> <td>38</td> <td>24.6</td> <td>23.3</td> <td>35.2</td> <td>90</td> <td>61.2</td> <td>72.1</td> <td>35.3</td> <td>91.1</td> <td>64</td> <td>77</td> </tr> </tbody></table>
Table 3
table_3
P19-1607
4
acl2019
Table 3 shows a comparison between our models and comparative models. Whereas Dress-LS has a higher SARI score because it directly optimizes SARI using reinforcement learning, our models achieved the best BLEU scores across styles and domains.
[1, 1]
['Table 3 shows a comparison between our models and comparative models.', 'Whereas Dress-LS has a higher SARI score because it directly optimizes SARI using reinforcement learning, our models achieved the best BLEU scores across styles and domains.']
[None, ['Dress-LS', 'SARI', 'Ours (RNN)', 'Ours (SAN)', 'BLEU']]
1
P19-1623table_2
Human evaluation results on the Chinese-toEnglish task. “Flu.” denotes fluency and “Ade.” denotes adequacy. Two human evaluators who can read both Chinese and English were asked to assess the fluency and adequacy of the translations. The scores of fluency and adequacy range from 1 to 5.
3
[['Method', 'Evaluator 1', 'MLE'], ['Method', 'Evaluator 1', 'MLE + CP'], ['Method', 'Evaluator 1', 'WordDropout'], ['Method', 'Evaluator 1', 'CLone'], ['Method', 'Evaluator 2', 'MLE'], ['Method', 'Evaluator 2', 'MLE + CP'], ['Method', 'Evaluator 2', 'WordDropout'], ['Method', 'Evaluator 2', 'CLone']]
1
[['Flu.'], ['Ade.']]
[['4.31', '4.25'], ['4.31', '4.31'], ['4.29', '4.25'], ['4.32', '4.58'], ['4.27', '4.22'], ['4.26', '4.25'], ['4.25', '4.23'], ['4.27', '4.53']]
column
['Flu.', 'Ade.']
['CLone']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Flu.</th> <th>Ade.</th> </tr> </thead> <tbody> <tr> <td>Method || Evaluator 1 || MLE</td> <td>4.31</td> <td>4.25</td> </tr> <tr> <td>Method || Evaluator 1 || MLE + CP</td> <td>4.31</td> <td>4.31</td> </tr> <tr> <td>Method || Evaluator 1 || WordDropout</td> <td>4.29</td> <td>4.25</td> </tr> <tr> <td>Method || Evaluator 1 || CLone</td> <td>4.32</td> <td>4.58</td> </tr> <tr> <td>Method || Evaluator 2 || MLE</td> <td>4.27</td> <td>4.22</td> </tr> <tr> <td>Method || Evaluator 2 || MLE + CP</td> <td>4.26</td> <td>4.25</td> </tr> <tr> <td>Method || Evaluator 2 || WordDropout</td> <td>4.25</td> <td>4.23</td> </tr> <tr> <td>Method || Evaluator 2 || CLone</td> <td>4.27</td> <td>4.53</td> </tr> </tbody></table>
Table 2
table_2
P19-1623
4
acl2019
Table 2 shows the results of human evaluation on the Chinese-to-English task. We asked two human evaluators who can read both Chinese and English to evaluate the fluency and adequacy of the translations generated by MLE, MLE + CP, MLE + data, and CLone. The scores of fluency and adequacy range from 1 to 5. The translations were shuffled randomly, and the name of each method was anonymous to human evaluators. We find that CLone significantly improves the adequacy over all baselines. This is because omitting important information in source sentences decreases the adequacy of translation. CLone is capable of alleviating this problem by assigning lower probabilities to translations with word omission errors.
[1, 1, 2, 2, 1, 2, 2]
['Table 2 shows the results of human evaluation on the Chinese-to-English task.', 'We asked two human evaluators who can read both Chinese and English to evaluate the fluency and adequacy of the translations generated by MLE, MLE + CP, MLE + data, and CLone.', 'The scores of fluency and adequacy range from 1 to 5.', 'The translations were shuffled randomly, and the name of each method was anonymous to human evaluators.', 'We find that CLone significantly improves the adequacy over all baselines.', 'This is because omitting important information in source sentences decreases the adequacy of translation.', 'CLone is capable of alleviating this problem by assigning lower probabilities to translations with word omission errors.']
[None, ['WordDropout', 'CLone'], ['Flu.', 'Ade.'], None, ['CLone', 'Ade.'], ['Ade.'], ['CLone']]
1
P19-1628table_2
Test set results on the NYT and CNNDailyMail datasets using ROUGE F1 (R-1 and R-2 are shorthands for unigram and bigram overlap, R-L is the longest common subsequence).
2
[['Method', 'ORACLE'], ['Method', 'REFRESH 4 (Narayan et al., 2018b)'], ['Method', 'POINTER-GENERATOR (See et al., 2017)'], ['Method', 'LEAD-3'], ['Method', 'DEGREE (tf-idf)'], ['Method', 'TEXTRANK (tf-idf)'], ['Method', 'TEXTRANK (skip-thought vectors)'], ['Method', 'TEXTRANK (BERT)'], ['Method', 'PACSUM (tf-idf)'], ['Method', 'PACSUM (skip-thought vectors)'], ['Method', 'PACSUM (BERT)']]
2
[['NYT', 'R-1'], ['NYT', 'R-2'], ['NYT', 'R-L'], ['CNN+DM', 'R-1'], ['CNN+DM', 'R-2'], ['CNN+DM', 'R-L']]
[['61.9', '41.7', '58.3', '54.7', '30.4', '50.8'], ['41.3', '22', '37.8', '41.3', '18.4', '37.5'], ['42.7', '22.1', '38', '39.5', '17.3', '36.4'], ['35.5', '17.2', '32', '40.5', '17.7', '36.7'], ['33.2', '13.1', '29', '33.0', '11.7', '29.5'], ['33.2', '13.1', '29', '33.2', '11.8', '29.6'], ['30.1', '9.6', '26.1', '31.4', '10.2', '28.2'], ['29.7', '9', '25.3', '30.8', '9.6', '27.4'], ['40.4', '20.6', '36.4', '39.2', '16.3', '35.3'], ['38.3', '18.8', '34.5', '38.6', '16.1', '34.9'], ['41.4', '21.7', '37.5', '40.7', '17.8', '36.9']]
column
['R-1', 'R-2', 'R-L', 'R-1', 'R-2', 'R-L']
['PACSUM (tf-idf)', 'DEGREE (tf-idf)', 'TEXTRANK (tf-idf)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>NYT || R-1</th> <th>NYT || R-2</th> <th>NYT || R-L</th> <th>CNN+DM || R-1</th> <th>CNN+DM || R-2</th> <th>CNN+DM || R-L</th> </tr> </thead> <tbody> <tr> <td>Method || ORACLE</td> <td>61.9</td> <td>41.7</td> <td>58.3</td> <td>54.7</td> <td>30.4</td> <td>50.8</td> </tr> <tr> <td>Method || REFRESH 4 (Narayan et al., 2018b)</td> <td>41.3</td> <td>22</td> <td>37.8</td> <td>41.3</td> <td>18.4</td> <td>37.5</td> </tr> <tr> <td>Method || POINTER-GENERATOR (See et al., 2017)</td> <td>42.7</td> <td>22.1</td> <td>38</td> <td>39.5</td> <td>17.3</td> <td>36.4</td> </tr> <tr> <td>Method || LEAD-3</td> <td>35.5</td> <td>17.2</td> <td>32</td> <td>40.5</td> <td>17.7</td> <td>36.7</td> </tr> <tr> <td>Method || DEGREE (tf-idf)</td> <td>33.2</td> <td>13.1</td> <td>29</td> <td>33.0</td> <td>11.7</td> <td>29.5</td> </tr> <tr> <td>Method || TEXTRANK (tf-idf)</td> <td>33.2</td> <td>13.1</td> <td>29</td> <td>33.2</td> <td>11.8</td> <td>29.6</td> </tr> <tr> <td>Method || TEXTRANK (skip-thought vectors)</td> <td>30.1</td> <td>9.6</td> <td>26.1</td> <td>31.4</td> <td>10.2</td> <td>28.2</td> </tr> <tr> <td>Method || TEXTRANK (BERT)</td> <td>29.7</td> <td>9</td> <td>25.3</td> <td>30.8</td> <td>9.6</td> <td>27.4</td> </tr> <tr> <td>Method || PACSUM (tf-idf)</td> <td>40.4</td> <td>20.6</td> <td>36.4</td> <td>39.2</td> <td>16.3</td> <td>35.3</td> </tr> <tr> <td>Method || PACSUM (skip-thought vectors)</td> <td>38.3</td> <td>18.8</td> <td>34.5</td> <td>38.6</td> <td>16.1</td> <td>34.9</td> </tr> <tr> <td>Method || PACSUM (BERT)</td> <td>41.4</td> <td>21.7</td> <td>37.5</td> <td>40.7</td> <td>17.8</td> <td>36.9</td> </tr> </tbody></table>
Table 2
table_2
P19-1628
6
acl2019
As can be seen in Table 2, DEGREE (tf-idf) is very close to TEXTRANK (tf-idf). . Due to space limitations, we only show comparisons between DEGREE and TEXTRANK with tf-idf, however, we observed similar trends across sentence representations. These results indicate that considering global structure does not make a difference when selecting salient sentences for NYT and CNN/Daily Mail, possibly due to the fact that news articles in these datasets are relatively short (see Table 1). The results in Table 2 further show that PACSUM substantially outperforms TEXTRANK across sentence representations, directly confirming our assumption that position information is beneficial for determining sentence centrality in news single-document summarization. In Figure 1 we further show how PACSUM's performance (ROUGE-1 F1) on the NYT validation set varies as λ1 ranges from -2 to 1 (λ2 = 1 and β = 0, 0.3, 0.6). The plot highlights that differentially weighting a connection's contribution (via relative position) has a huge impact on performance (ROUGE ranges from 0.30 to 0.40). In addition, the optimal λ1 is negative, suggesting that similarity with the previous content actually hurts centrality in this case.
[1, 1, 1, 1, 2, 2, 2]
['As can be seen in Table 2, DEGREE (tf-idf) is very close to TEXTRANK (tf-idf). .', 'Due to space limitations, we only show comparisons between DEGREE and TEXTRANK with tf-idf, however, we observed similar trends across sentence representations.', ' These results indicate that considering global structure does not make a difference when selecting salient sentences for NYT and CNN/Daily Mail, possibly due to the fact that news articles in these datasets are relatively short (see Table 1).', 'The results in Table 2 further show that PACSUM substantially outperforms TEXTRANK across sentence representations, directly confirming our assumption that position information is beneficial for determining sentence centrality in news single-document summarization.', "In Figure 1 we further show how PACSUM's performance (ROUGE-1 F1) on the NYT validation set varies as λ1 ranges from -2 to 1 (λ2 = 1 and β = 0, 0.3, 0.6).", "The plot highlights that differentially weighting a connection's contribution (via relative position) has a huge impact on performance (ROUGE ranges from 0.30 to 0.40).", 'In addition, the optimal λ1 is negative, suggesting that similarity with the previous content actually hurts centrality in this case.']
[['DEGREE (tf-idf)', 'TEXTRANK (tf-idf)'], ['DEGREE (tf-idf)', 'TEXTRANK (tf-idf)'], ['NYT', 'CNN+DM'], ['PACSUM (tf-idf)', 'TEXTRANK (tf-idf)'], None, None, None]
1
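Note: the record above reasons about weighting a sentence's connections differently depending on whether they point to earlier or later content (λ1 vs. λ2). The sketch below is a hedged illustration of that directional, position-weighted centrality idea only; it is not the paper's exact formulation (which also involves the threshold β), and the default values are made up for the example.

# Hedged sketch: edges to earlier sentences weighted by lambda1, edges to later sentences by lambda2.
def positional_centrality(sim, lambda1=-0.3, lambda2=1.0):
    # sim: square matrix (list of lists) of pairwise sentence similarities
    n = len(sim)
    scores = []
    for i in range(n):
        backward = sum(sim[i][j] for j in range(i))           # similarity to preceding sentences
        forward = sum(sim[i][j] for j in range(i + 1, n))     # similarity to following sentences
        scores.append(lambda1 * backward + lambda2 * forward)
    return scores
# A negative lambda1 penalises sentences that mostly repeat earlier content, matching the
# observation in the record above that the optimal lambda1 is negative.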
P19-1629table_3
Results (dev set) on document-level GMB benchmark.
2
[['DRTS parser', 'Shallow'], ['DRTS parser', 'Deep'], ['DRTS parser', 'DeepFeat'], ['DRTS parser', 'DeepCopy']]
1
[['par-F1'], ['exa-F1']]
[['66.63', '61.74'], ['71.01', '65.42'], ['71.44', '66.43'], ['75.89', '69.45']]
column
['par-F1', 'exa-F1']
['Deep', 'DeepFeat', 'DeepCopy']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>par-F1</th> <th>exa-F1</th> </tr> </thead> <tbody> <tr> <td>DRTS parser || Shallow</td> <td>66.63</td> <td>61.74</td> </tr> <tr> <td>DRTS parser || Deep</td> <td>71.01</td> <td>65.42</td> </tr> <tr> <td>DRTS parser || DeepFeat</td> <td>71.44</td> <td>66.43</td> </tr> <tr> <td>DRTS parser || DeepCopy</td> <td>75.89</td> <td>69.45</td> </tr> </tbody></table>
Table 3
table_3
P19-1629
8
acl2019
Parsing Documents . Table 3 presents various ablation studies for the document-level model on the development set. Deep sentence representations when combined with multi-attention bring improvements over shallow representations (+3.68 exact-F1). Using alignments as features and as a way of highlighting where to copy from yields further performance gains both in terms of exact and partial F1. The best performing variant is DeepCopy which combines supervised attention with copying. . Table 4 shows our results on the test set (see the Appendix for an example of model output); we compare the best performing DRTS parser (DeepCopy) against two baselines which rely on our sentence-level parser (DocSent and DocTree). The DRTS parser, which has a global view of the document, outperforms variants which construct document representations by aggregating individually parsed sentences.
[2, 1, 1, 1, 1, 0, 0]
['Parsing Documents .', 'Table 3 presents various ablation studies for the document-level model on the development set.', 'Deep sentence representations when combined with multi-attention bring improvements over shallow representations (+3.68 exact-F1).', ' Using alignments as features and as a way of highlighting where to copy from yields further performance gains both in terms of exact and partial F1.', 'The best performing variant is DeepCopy which combines supervised attention with copying. .', 'Table 4 shows our results on the test set (see the Appendix for an example of model output); we compare the best performing DRTS parser (DeepCopy) against two baselines which rely on our sentence-level parser (DocSent and DocTree).', ' The DRTS parser, which has a global view of the document, outperforms variants which construct document representations by aggregating individually parsed sentences.']
[None, None, ['exa-F1', 'Deep'], ['Deep', 'par-F1', 'exa-F1'], ['DeepCopy'], None, None]
1
P19-1629table_4
Results (test set) on document-level GMB benchmark.
2
[['Models', 'DocSent'], ['Models', 'DocTree'], ['Models', 'DeepCopy']]
1
[['par-F1'], ['exa-F1']]
[['57.1', '53.27'], ['62.83', '58.22'], ['70.83', '66.56']]
column
['par-F1', 'exa-F1']
['DeepCopy']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>par-F1</th> <th>exa-F1</th> </tr> </thead> <tbody> <tr> <td>Models || DocSent</td> <td>57.1</td> <td>53.27</td> </tr> <tr> <td>Models || DocTree</td> <td>62.83</td> <td>58.22</td> </tr> <tr> <td>Models || DeepCopy</td> <td>70.83</td> <td>66.56</td> </tr> </tbody></table>
Table 4
table_4
P19-1629
8
acl2019
Parsing Documents. Table 3 presents various ablation studies for the document-level model on the development set. Deep sentence representations when combined with multi-attention bring improvements over shallow representations (+3.68 exact-F1). Using alignments as features and as a way of highlighting where to copy from yields further performance gains both in terms of exact and partial F1. The best performing variant is DeepCopy which combines supervised attention with copying. Table 4 shows our results on the test set (see the Appendix for an example of model output); we compare the best performing DRTS parser (DeepCopy) against two baselines which rely on our sentence-level parser (DocSent and DocTree). The DRTS parser, which has a global view of the document, outperforms variants which construct document representations by aggregating individually parsed sentences.
[2, 0, 0, 0, 0, 1, 1]
['Parsing Documents.', 'Table 3 presents various ablation studies for the document-level model on the development set.', 'Deep sentence representations when combined with multi-attention bring improvements over shallow representations (+3.68 exact-F1).', 'Using alignments as features and as a way of highlighting where to copy from yields further performance gains both in terms of exact and partial F1.', 'The best performing variant is DeepCopy which combines supervised attention with copying.', 'Table 4 shows our results on the test set (see the Appendix for an example of model output); we compare the best performing DRTS parser (DeepCopy) against two baselines which rely on our sentence-level parser (DocSent and DocTree).', 'The DRTS parser, which has a global view of the document, outperforms variants which construct document representations by aggregating individually parsed sentences.']
[None, None, None, None, None, ['DeepCopy', 'DocSent', 'DocTree'], ['DeepCopy']]
1
P19-1631table_5
Performance statistics of all approaches on the Wikipedia dataset filtered on samples including identity terms. Numbers represent the mean of 5 runs. Maximum variance is .001.
2
[['Identity', 'Baseline'], ['Identity', 'Importance'], ['Identity', 'TOK Replace'], ['Identity', 'Our Method'], ['Identity', 'Finetuned']]
1
[['Acc'], ['F1'], ['AUC'], ['FP'], ['FN']]
[['0.931', '0.692', '0.91', '0.011', '0.057'], ['0.933', '0.704', '0.945', '0.012', '0.055'], ['0.91', '0.528', '0.882', '0.008', '0.081'], ['0.934', '0.697', '0.949', '0.008', '0.058'], ['0.928', '0.66', '0.94', '0.007', '0.064']]
column
['Acc', 'F1', 'AUC', 'FP', 'FN']
['Our Method']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Acc</th> <th>F1</th> <th>AUC</th> <th>FP</th> <th>FN</th> </tr> </thead> <tbody> <tr> <td>Identity || Baseline</td> <td>0.931</td> <td>0.692</td> <td>0.91</td> <td>0.011</td> <td>0.057</td> </tr> <tr> <td>Identity || Importance</td> <td>0.933</td> <td>0.704</td> <td>0.945</td> <td>0.012</td> <td>0.055</td> </tr> <tr> <td>Identity || TOK Replace</td> <td>0.91</td> <td>0.528</td> <td>0.882</td> <td>0.008</td> <td>0.081</td> </tr> <tr> <td>Identity || Our Method</td> <td>0.934</td> <td>0.697</td> <td>0.949</td> <td>0.008</td> <td>0.058</td> </tr> <tr> <td>Identity || Finetuned</td> <td>0.928</td> <td>0.66</td> <td>0.94</td> <td>0.007</td> <td>0.064</td> </tr> </tbody></table>
Table 5
table_5
P19-1631
5
acl2019
4.3.1 Evaluation on Original Data . We first verify that the prior loss term does not adversely affect overall classifier performance on the main task using general performance metrics such as accuracy and F-1. Results are shown in Table 4. Unlike previous approaches (Park et al., 2018; Dixon et al., 2018; Madras et al., 2018), our method does not degrade classifier performance (it even improves) in terms of all reported metrics. We also look at samples containing identity terms. Table 5 shows classifier performance metrics for such samples. The importance weighting approach slightly outperforms the baseline classifier. Replacing identity words with a special tokens, on the other hand, hurts the performance on the main task. One of the reasons might be that replacing all identity terms with a token potentially removes other useful information model can rely on. If we were to make an analogy between the token replacement method and hard ablation, then the same analogy can be made between our method and soft ablation. Hence, the information pertaining to identity terms is not completely lost for our method, but come at a cost.
[2, 0, 0, 0, 0, 1, 1, 1, 2, 2, 1]
['4.3.1 Evaluation on Original Data .', 'We first verify that the prior loss term does not adversely affect overall classifier performance on the main task using general performance metrics such as accuracy and F-1.', 'Results are shown in Table 4.', 'Unlike previous approaches (Park et al., 2018; Dixon et al., 2018; Madras et al., 2018), our method does not degrade classifier performance (it even improves) in terms of all reported metrics.', 'We also look at samples containing identity terms.', 'Table 5 shows classifier performance metrics for such samples.', 'The importance weighting approach slightly outperforms the baseline classifier.', 'Replacing identity words with a special tokens, on the other hand, hurts the performance on the main task.', 'One of the reasons might be that replacing all identity terms with a token potentially removes other useful information model can rely on.', 'If we were to make an analogy between the token replacement method and hard ablation, then the same analogy can be made between our method and soft ablation.', 'Hence, the information pertaining to identity terms is not completely lost for our method, but come at a cost.']
[None, None, None, None, None, None, ['Importance', 'Baseline'], ['TOK Replace'], None, None, ['Our Method']]
1
P19-1635table_7
Results for exaggerated numeral detection.
2
[['Distortion factor', '±10%'], ['Distortion factor', '±30%'], ['Distortion factor', '±50%'], ['Distortion factor', '±70%'], ['Distortion factor', '±90%']]
1
[['Micro-F1'], ['Macro-F1']]
[['58.54%', '57.87%'], ['56.94%', '56.11%'], ['57.69%', '56.85%'], ['70.92%', '70.85%'], ['76.91%', '76.94%']]
column
['micro-f1', 'macro-f1']
['Distortion factor']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Micro-F1</th> <th>Macro-F1</th> </tr> </thead> <tbody> <tr> <td>Distortion factor || ±10%</td> <td>58.54%</td> <td>57.87%</td> </tr> <tr> <td>Distortion factor || ±30%</td> <td>56.94%</td> <td>56.11%</td> </tr> <tr> <td>Distortion factor || ±50%</td> <td>57.69%</td> <td>56.85%</td> </tr> <tr> <td>Distortion factor || ±70%</td> <td>70.92%</td> <td>70.85%</td> </tr> <tr> <td>Distortion factor || ±90%</td> <td>76.91%</td> <td>76.94%</td> </tr> </tbody></table>
Table 7
table_7
P19-1635
5
acl2019
In this experiment, we release the boundary limitation, and test the numeracy for all real numbers. For instance, the altered results of 138 with 10% distortion factor are in the same magnitude, and that with 30% distortion factor, 96.6 and 179.4, are in different magnitude. Table 7 lists the experimental results. We find that the model obtained better performance for numerals distorted by more than 50%, with more confusion in the range below that. Furthermore, according to the micro and macro-averaged F1 scores, the performance is similar among the three different cases (i.e., overstated, understated, and correct).
[2, 2, 1, 1, 1]
['In this experiment, we release the boundary limitation, and test the numeracy for all real numbers.', 'For instance, the altered results of 138 with 10% distortion factor are in the same magnitude, and that with 30% distortion factor, 96.6 and 179.4, are in different magnitude.', 'Table 7 lists the experimental results.', 'We find that the model obtained better performance for numerals distorted by more than 50%, with more confusion in the range below that.', 'Furthermore, according to the micro and macro-averaged F1 scores, the performance is similar among the three different cases (i.e., overstated, understated, and correct).']
[None, ['±10%', '±30%'], None, ['±70%', '±90%'], ['Micro-F1', 'Macro-F1']]
1
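Note: the description in the record above works through the arithmetic of distorting the numeral 138 by ±10% versus ±30% and checking whether the altered value stays in the same magnitude. The following is a minimal sketch of that check; the helper names are hypothetical, not from the paper.

# Minimal sketch: distort a numeral by a factor and test whether the altered value keeps
# the same order of magnitude (number of integer digits) as the original.
import math

def distort(value, factor):
    return value * (1 - factor), value * (1 + factor)   # understated, overstated

def same_magnitude(a, b):
    return math.floor(math.log10(abs(a))) == math.floor(math.log10(abs(b)))

for factor in (0.10, 0.30):
    low, high = distort(138, factor)
    print(factor, round(low, 1), round(high, 1), same_magnitude(138, low), same_magnitude(138, high))
# 0.1 -> 124.2 and 151.8, both the same magnitude as 138 (True True);
# 0.3 -> 96.6 and 179.4, where 96.6 falls into a different magnitude (False True).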
P19-1643table_6
Results from baselines and our best multimodal method on validation and test data. ActionG indicates action representation using GloVe embedding, and ActionE indicates action representation using ELMo embedding. ContextS indicates sentence-level context, and ContextA indicates action-level context.
2
[['Input Feature', 'Action E + Inception'], ['Input Feature', 'Action E + Inception + C3D'], ['Input Feature', 'Action E + POS + Inception + C3D'], ['Input Feature', 'Action E + Context S + Inception + C3D'], ['Input Feature', 'Action E + Context A + Inception + C3D'], ['Input Feature', 'Action E + Concreteness + Inception + C3D'], ['Input Feature', 'Action E + POS + Context S + Concreteness + Inception + C3D']]
2
[['Metric', 'Accuracy'], ['Metric', 'Precision'], ['Metric', 'Recall'], ['Metric', 'F1']]
[['0.722', '0.765', '0.863', '0.811'], ['0.725', '0.769', '0.869', '0.814'], ['0.731', '0.763', '0.885', '0.82'], ['0.725', '0.77', '0.859', '0.812'], ['0.729', '0.757', '0.895', '0.82'], ['0.723', '0.768', '0.86', '0.811'], ['0.737', '0.758', '0.911', '0.827']]
column
['Accuracy', 'Precision', 'Recall', 'F1']
['Action E + POS + Context S + Concreteness + Inception + C3D']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Metric || Accuracy</th> <th>Metric || Precision</th> <th>Metric || Recall</th> <th>Metric || F1</th> </tr> </thead> <tbody> <tr> <td>Input Feature || Action E + Inception</td> <td>0.722</td> <td>0.765</td> <td>0.863</td> <td>0.811</td> </tr> <tr> <td>Input Feature || Action E + Inception + C3D</td> <td>0.725</td> <td>0.769</td> <td>0.869</td> <td>0.814</td> </tr> <tr> <td>Input Feature || Action E + POS + Inception + C3D</td> <td>0.731</td> <td>0.763</td> <td>0.885</td> <td>0.82</td> </tr> <tr> <td>Input Feature || Action E + Context S + Inception + C3D</td> <td>0.725</td> <td>0.77</td> <td>0.859</td> <td>0.812</td> </tr> <tr> <td>Input Feature || Action E + Context A + Inception + C3D</td> <td>0.729</td> <td>0.757</td> <td>0.895</td> <td>0.82</td> </tr> <tr> <td>Input Feature || Action E + Concreteness + Inception + C3D</td> <td>0.723</td> <td>0.768</td> <td>0.86</td> <td>0.811</td> </tr> <tr> <td>Input Feature || Action E + POS + Context S + Concreteness + Inception + C3D</td> <td>0.737</td> <td>0.758</td> <td>0.911</td> <td>0.827</td> </tr> </tbody></table>
Table 6
table_6
P19-1643
9
acl2019
Table 6 shows the results obtained using the multimodal model for different sets of input features. The model that uses all the input features available leads to the best results, improving significantly over the text-only and video-only methods.
[1, 1]
['Table 6 shows the results obtained using the multimodal model for different sets of input features.', 'The model that uses all the input features available leads to the best results, improving significantly over the text-only and video-only methods.']
[None, ['Action E + POS + Context S + Concreteness + Inception + C3D']]
1
P19-1645table_1
Summary of segmentation performance on phoneme version of the Brent Corpus (BR-phono).
2
[['Model', 'LSTM suprisal (Elman, 1990)'], ['Model', 'HMLSTM (Chung et al. 2017)'], ['Model', 'Unigram DP'], ['Model', 'Bigram HDP'], ['Model', 'SNLM (- memory, - length)'], ['Model', 'SNLM (+ memory, - length)'], ['Model', 'SNLM (- memory, + length)'], ['Model', 'SNLM (+ memory, + length)']]
2
[['Metric', 'P'], ['Metric', 'R'], ['Metric', 'F1']]
[['54.5', '55.5', '55'], ['8.1', '13.3', '10.1'], ['63.3', '50.4', '56.1'], ['53', '61.4', '56.9'], ['54.3', '34.9', '42.5'], ['52.4', '36.8', '43.3'], ['57.6', '43.4', '49.5'], ['81.3', '77.5', '79.3']]
column
['P', 'R', 'F1']
['SNLM (- memory, - length)', 'SNLM (- memory, + length)', 'SNLM (+ memory, - length)', 'SNLM (+ memory, + length)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Metric || P</th> <th>Metric || R</th> <th>Metric || F1</th> </tr> </thead> <tbody> <tr> <td>Model || LSTM suprisal (Elman, 1990)</td> <td>54.5</td> <td>55.5</td> <td>55</td> </tr> <tr> <td>Model || HMLSTM (Chung et al. 2017)</td> <td>8.1</td> <td>13.3</td> <td>10.1</td> </tr> <tr> <td>Model || Unigram DP</td> <td>63.3</td> <td>50.4</td> <td>56.1</td> </tr> <tr> <td>Model || Bigram HDP</td> <td>53</td> <td>61.4</td> <td>56.9</td> </tr> <tr> <td>Model || SNLM (- memory, - length)</td> <td>54.3</td> <td>34.9</td> <td>42.5</td> </tr> <tr> <td>Model || SNLM (+ memory, - length)</td> <td>52.4</td> <td>36.8</td> <td>43.3</td> </tr> <tr> <td>Model || SNLM (- memory, + length)</td> <td>57.6</td> <td>43.4</td> <td>49.5</td> </tr> <tr> <td>Model || SNLM (+ memory, + length)</td> <td>81.3</td> <td>77.5</td> <td>79.3</td> </tr> </tbody></table>
Table 1
table_1
P19-1645
6
acl2019
Table 1 summarizes the segmentation results on the widely used BR-phono corpus, comparing it to a variety of baselines. Unigram DP, Bigram HDP, LSTM suprisal and HMLSTM refer to the benchmark models explained in §6. The ablated versions of our model show that without the lexicon (-memory), without the expected length penalty (-length), and without either, our model fails to discover good segmentations.
[1, 1, 1]
['Table 1 summarizes the segmentation results on the widely used BR-phono corpus, comparing it to a variety of baselines.', 'Unigram DP, Bigram HDP, LSTM suprisal and HMLSTM refer to the benchmark models explained in §6.', 'The ablated versions of our model show that without the lexicon (-memory), without the expected length penalty (-length), and without either, our model fails to discover good segmentations.']
[None, ['Unigram DP', 'LSTM suprisal (Elman, 1990)', 'HMLSTM (Chung et al. 2017)'], ['SNLM (- memory, - length)', 'SNLM (- memory, + length)', 'SNLM (+ memory, + length)', 'SNLM (+ memory, - length)']]
1
P19-1645table_2
Summary of segmentation performance on other corpora.
4
[['Corpus', 'BR-text', 'Model', 'LSTM surprisal'], ['Corpus', 'BR-text', 'Model', 'Unigram DP'], ['Corpus', 'BR-text', 'Model', 'Bigram HDP'], ['Corpus', 'BR-text', 'Model', 'SNLM'], ['Corpus', 'PTB', 'Model', 'LSTM surprisal'], ['Corpus', 'PTB', 'Model', 'Unigram DP'], ['Corpus', 'PTB', 'Model', 'Bigram HDP'], ['Corpus', 'PTB', 'Model', 'SNLM'], ['Corpus', 'CTB', 'Model', 'LSTM surprisal'], ['Corpus', 'CTB', 'Model', 'Unigram DP'], ['Corpus', 'CTB', 'Model', 'Bigram HDP'], ['Corpus', 'CTB', 'Model', 'SNLM'], ['Corpus', 'PKU', 'Model', 'LSTM surprisal'], ['Corpus', 'PKU', 'Model', 'Unigram DP'], ['Corpus', 'PKU', 'Model', 'Bigram HDP'], ['Corpus', 'PKU', 'Model', 'SNLM']]
2
[['Metric', 'P'], ['Metric', 'R'], ['Metric', 'F1']]
[['36.4', '49', '41.7'], ['64.9', '55.7', '60'], ['52.5', '63.1', '57.3'], ['68.7', '78.9', '73.5'], ['27.3', '36.5', '31.2'], ['51', '49.1', '50'], ['34.8', '47.3', '40.1'], ['54.1', '60.1', '56.9'], ['41.6', '25.6', '31.7'], ['61.8', '49.6', '55'], ['67.3', '67.7', '67.5'], ['78.1', '81.5', '79.8'], ['38.1', '23', '28.7'], ['60.2', '48.2', '53.6'], ['66.8', '67.1', '66.9'], ['75', '71.2', '73.1']]
column
['P', 'R', 'F1']
['SNLM']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Metric || P</th> <th>Metric || R</th> <th>Metric || F1</th> </tr> </thead> <tbody> <tr> <td>Corpus || BR-text || Model || LSTM surprisal</td> <td>36.4</td> <td>49</td> <td>41.7</td> </tr> <tr> <td>Corpus || BR-text || Model || Unigram DP</td> <td>64.9</td> <td>55.7</td> <td>60</td> </tr> <tr> <td>Corpus || BR-text || Model || Bigram HDP</td> <td>52.5</td> <td>63.1</td> <td>57.3</td> </tr> <tr> <td>Corpus || BR-text || Model || SNLM</td> <td>68.7</td> <td>78.9</td> <td>73.5</td> </tr> <tr> <td>Corpus || PTB || Model || LSTM surprisal</td> <td>27.3</td> <td>36.5</td> <td>31.2</td> </tr> <tr> <td>Corpus || PTB || Model || Unigram DP</td> <td>51</td> <td>49.1</td> <td>50</td> </tr> <tr> <td>Corpus || PTB || Model || Bigram HDP</td> <td>34.8</td> <td>47.3</td> <td>40.1</td> </tr> <tr> <td>Corpus || PTB || Model || SNLM</td> <td>54.1</td> <td>60.1</td> <td>56.9</td> </tr> <tr> <td>Corpus || CTB || Model || LSTM surprisal</td> <td>41.6</td> <td>25.6</td> <td>31.7</td> </tr> <tr> <td>Corpus || CTB || Model || Unigram DP</td> <td>61.8</td> <td>49.6</td> <td>55</td> </tr> <tr> <td>Corpus || CTB || Model || Bigram HDP</td> <td>67.3</td> <td>67.7</td> <td>67.5</td> </tr> <tr> <td>Corpus || CTB || Model || SNLM</td> <td>78.1</td> <td>81.5</td> <td>79.8</td> </tr> <tr> <td>Corpus || PKU || Model || LSTM surprisal</td> <td>38.1</td> <td>23</td> <td>28.7</td> </tr> <tr> <td>Corpus || PKU || Model || Unigram DP</td> <td>60.2</td> <td>48.2</td> <td>53.6</td> </tr> <tr> <td>Corpus || PKU || Model || Bigram HDP</td> <td>66.8</td> <td>67.1</td> <td>66.9</td> </tr> <tr> <td>Corpus || PKU || Model || SNLM</td> <td>75</td> <td>71.2</td> <td>73.1</td> </tr> </tbody></table>
Table 2
table_2
P19-1645
7
acl2019
Table 2 summarizes results on the BR-text (orthographic Brent corpus) and Chinese corpora. As in the previous section, all the models were trained to maximize held-out likelihood. Here we observe a similar pattern, with the SNLM outperforming the baseline models, despite the tasks being quite different from each other and from the BR-phono task.
[1, 2, 1]
['Table 2 summarizes results on the BR-text (orthographic Brent corpus) and Chinese corpora.', 'As in the previous section, all the models were trained to maximize held-out likelihood.', 'Here we observe a similar pattern, with the SNLM outperforming the baseline models, despite the tasks being quite different from each other and from the BR-phono task.']
[None, None, ['SNLM']]
1
P19-1645table_4
Test language modeling performance (bpc).
2
[['Model', 'Unigram DP'], ['Model', 'Bigram HDP'], ['Model', 'LSTM'], ['Model', 'SNLM']]
2
[['Corpus', 'BR-text'], ['Corpus', 'BR-phono'], ['Corpus', 'PTB'], ['Corpus', 'CTB'], ['Corpus', 'PKU']]
[['2.33', '2.93', '2.25', '6.16', '6.88'], ['1.96', '2.55', '1.8', '5.4', '6.42'], ['2.03', '2.62', '1.65', '4.94', '6.2'], ['1.94', '2.54', '1.56', '4.84', '5.89']]
column
['bpc', 'bpc', 'bpc', 'bpc', 'bpc']
['SNLM']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Corpus || BR-text</th> <th>Corpus || BR-phono</th> <th>Corpus || PTB</th> <th>Corpus || CTB</th> <th>Corpus || PKU</th> </tr> </thead> <tbody> <tr> <td>Model || Unigram DP</td> <td>2.33</td> <td>2.93</td> <td>2.25</td> <td>6.16</td> <td>6.88</td> </tr> <tr> <td>Model || Bigram HDP</td> <td>1.96</td> <td>2.55</td> <td>1.8</td> <td>5.4</td> <td>6.42</td> </tr> <tr> <td>Model || LSTM</td> <td>2.03</td> <td>2.62</td> <td>1.65</td> <td>4.94</td> <td>6.2</td> </tr> <tr> <td>Model || SNLM</td> <td>1.94</td> <td>2.54</td> <td>1.56</td> <td>4.84</td> <td>5.89</td> </tr> </tbody></table>
Table 4
table_4
P19-1645
8
acl2019
Table 4 summarizes the results of the language modeling experiments. Again, we see that SNLM outperforms the Bayesian models and a character LSTM.
[1, 1]
['Table 4 summarizes the results of the language modeling experiments.', 'Again, we see that SNLM outperforms the Bayesian models and a character LSTM.']
[None, ['SNLM', 'LSTM']]
1
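Note: the caption above reports language modeling performance in bpc (bits per character), i.e. the average negative log2-probability the model assigns to each character of the test data. The snippet below is a generic sketch of that computation, not code from the paper.

# Generic sketch of bits-per-character: average negative log2 probability per character.
import math

def bits_per_character(char_log_probs):
    # char_log_probs: one natural-log probability per character of the test corpus
    probs = list(char_log_probs)
    return -sum(lp / math.log(2) for lp in probs) / len(probs)

print(bits_per_character([math.log(0.25)] * 10))  # a uniform 1-in-4 model scores exactly 2.0 bpc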
P19-1648table_1
Comparison of ReDAN with a discriminative decoder to state-of-the-art methods on VisDial v1.0 validation set. Higher score is better for NDCG, MRR and Recall@k, while lower score is better for mean rank. All these baselines are re-implemented with bottom-up features and incorporated with GloVe vectors for fair comparison.
2
[['Model', 'MN-G (Das et al., 2017a)'], ['Model', 'HCIAE-G (Lu et al., 2017)'], ['Model', 'CoAtt-G (Wu et al., 2018)'], ['Model', 'ReDAN-G (T=1)'], ['Model', 'ReDAN-G (T=2)'], ['Model', 'ReDAN-G (T=3)'], ['Model', 'Ensemble of 4']]
1
[['NDCG'], ['MRR'], ['R@1'], ['R@5'], ['R@10'], ['Mean']]
[['56.99', '47.83', '38.01', '57.49', '64.08', '18.76'], ['59.7', '49.07', '39.72', '58.23', '64.73', '18.43'], ['59.24', '49.64', '40.09', '59.37', '65.92', '17.86'], ['59.41', '49.6', '39.95', '59.32', '65.97', '17.79'], ['60.11', '49.96', '40.36', '59.72', '66.57', '17.53'], ['60.47', '50.02', '40.27', '59.93', '66.78', '17.4'], ['61.43', '50.41', '40.85', '60.08', '67.17', '17.38']]
column
['NDCG', 'MRR', 'R@1', 'R@5', 'R@10', 'Mean']
['ReDAN-G (T=1)', 'ReDAN-G (T=2)', 'ReDAN-G (T=3)', 'Ensemble of 4']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>NDCG</th> <th>MRR</th> <th>R@1</th> <th>R@5</th> <th>R@10</th> <th>Mean</th> </tr> </thead> <tbody> <tr> <td>Model || MN-G (Das et al., 2017a)</td> <td>56.99</td> <td>47.83</td> <td>38.01</td> <td>57.49</td> <td>64.08</td> <td>18.76</td> </tr> <tr> <td>Model || HCIAE-G (Lu et al., 2017)</td> <td>59.7</td> <td>49.07</td> <td>39.72</td> <td>58.23</td> <td>64.73</td> <td>18.43</td> </tr> <tr> <td>Model || CoAtt-G (Wu et al., 2018)</td> <td>59.24</td> <td>49.64</td> <td>40.09</td> <td>59.37</td> <td>65.92</td> <td>17.86</td> </tr> <tr> <td>Model || ReDAN-G (T=1)</td> <td>59.41</td> <td>49.6</td> <td>39.95</td> <td>59.32</td> <td>65.97</td> <td>17.79</td> </tr> <tr> <td>Model || ReDAN-G (T=2)</td> <td>60.11</td> <td>49.96</td> <td>40.36</td> <td>59.72</td> <td>66.57</td> <td>17.53</td> </tr> <tr> <td>Model || ReDAN-G (T=3)</td> <td>60.47</td> <td>50.02</td> <td>40.27</td> <td>59.93</td> <td>66.78</td> <td>17.4</td> </tr> <tr> <td>Model || Ensemble of 4</td> <td>61.43</td> <td>50.41</td> <td>40.85</td> <td>60.08</td> <td>67.17</td> <td>17.38</td> </tr> </tbody></table>
Table 1
table_1
P19-1648
7
acl2019
Results on VisDial val v1.0. Experimental results on val v1.0 are shown in Table 1. “-D” denotes that a discriminative decoder is used. With only one reasoning step, our ReDAN model already achieves better performance than CoAtt, which is the previous best-performing model. Using two or three reasoning steps further increases the performance. Further increasing the number of reasoning steps does not help, thus results are not shown. We also report results on an ensemble of 4 ReDAN-D models. Significant improvement was observed, boosting NDCG from 59.32 to 60.53, and MRR from 64.21 to 65.30.
[2, 1, 1, 1, 2, 1, 1]
['Results on VisDial val v1.0.', 'Experimental results on val v1.0 are shown in Table 1. “-D” denotes that a discriminative decoder is used.', 'With only one reasoning step, our ReDAN model already achieves better performance than CoAtt, which is the previous best-performing model.', 'Using two or three reasoning steps further increases the performance.', 'Further increasing the number of reasoning steps does not help, thus results are not shown.', 'We also report results on an ensemble of 4 ReDAN-D models.', 'Significant improvement was observed, boosting NDCG from 59.32 to 60.53, and MRR from 64.21 to 65.30.']
[None, None, ['ReDAN-G (T=1)', 'CoAtt-G (Wu et al., 2018)'], ['ReDAN-G (T=2)', 'ReDAN-G (T=3)'], None, ['Ensemble of 4'], ['Ensemble of 4', 'NDCG', 'MRR']]
1
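The record above uses ranking metrics (MRR, Recall@k, mean rank) over candidate answer lists. The sketch below shows, under the assumption that the rank of the ground-truth answer is known for every dialog round, how these three metrics are typically aggregated; NDCG is omitted because it additionally requires dense relevance annotations. Names are illustrative, and this is not the official VisDial evaluation code.

```python
# Illustrative aggregation of MRR, Recall@k, and mean rank from the rank of the
# ground-truth answer in each ranked candidate list (rank 1 = best).
# Not the official VisDial evaluation code.
def retrieval_metrics(gold_ranks, ks=(1, 5, 10)):
    n = len(gold_ranks)
    mrr = sum(1.0 / r for r in gold_ranks) / n
    recall_at_k = {k: sum(r <= k for r in gold_ranks) / n for k in ks}
    mean_rank = sum(gold_ranks) / n
    return mrr, recall_at_k, mean_rank

# Example with four rounds whose gold answers were ranked 1, 3, 7, and 52:
print(retrieval_metrics([1, 3, 7, 52]))
# (~0.374, {1: 0.25, 5: 0.5, 10: 0.75}, 15.75)
```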
P19-1650table_1
Baseline model results, using either image or entity labels (2nd column). The informativeness metric Covr we is low when additional input labels are not used, and high when they are.
4
[['Baseline', 'Labels-to-captions', ' Image|Label', 'N|Y'], ['Baseline', '(Anderson et al., 2018)', 'Image|Label', ' Y|N'], ['Baseline', '(Sharma et al., 2018)', 'Image|Label', ' Y|N'], ['Baseline', '(Lu et al., 2018) w/ T', ' Image|Label', 'Y|Y']]
1
[[' CIDEr'], [' Covr we'], [' Covp obj']]
[['62.08', '21.01', '6.19'], ['51.09', '7.3', '4.95'], ['62.35', '10.52', '6.74'], ['69.46', '36.8', '6.93']]
column
['CIDEr', 'Covr we', 'Covp obj']
[' Image|Label']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>CIDEr</th> <th>Covr we</th> <th>Covp obj</th> </tr> </thead> <tbody> <tr> <td>Baseline || Labels-to-captions || Image|Label || N|Y</td> <td>62.08</td> <td>21.01</td> <td>6.19</td> </tr> <tr> <td>Baseline || (Anderson et al., 2018) || Image|Label || Y|N</td> <td>51.09</td> <td>7.3</td> <td>4.95</td> </tr> <tr> <td>Baseline || (Sharma et al., 2018) || Image|Label || Y|N</td> <td>62.35</td> <td>10.52</td> <td>6.74</td> </tr> <tr> <td>Baseline || (Lu et al., 2018) w/ T || Image|Label || Y|Y</td> <td>69.46</td> <td>36.8</td> <td>6.93</td> </tr> </tbody></table>
Table 1
table_1
P19-1650
6
acl2019
Table 1 shows the performance of these baselines. We observe that the image-only models perform poorly on Covr we because they are unable to identify them from the image pixels alone. On the other hand, the labels-only baseline and the proposal of Lu et al. (2018) have high performance across all three metrics.
[1, 1, 1]
['Table 1 shows the performance of these baselines.', 'We observe that the image-only models perform poorly on Covr we because they are unable to identify them from the image pixels alone.', 'On the other hand, the labels-only baseline and the proposal of Lu et al. (2018) have high performance across all three metrics.']
[None, [' Image|Label', ' Y|N', ' Covr we'], [' Image|Label', 'N|Y', 'Y|Y']]
1
P19-1653table_3
Human ranking results: normalised rank (micro-averaged). Bold highlights best results.
2
[['lang', 'DE'], ['lang', 'FR']]
1
[['base+att'], ['del'], ['del+obj']]
[['0.35', '0.62', '0.59'], ['0.41', '0.6', '0.67']]
column
['rank', 'rank', 'rank']
[' del+obj']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>base+att</th> <th>del</th> <th>del+obj</th> </tr> </thead> <tbody> <tr> <td>lang || DE</td> <td>0.35</td> <td>0.62</td> <td>0.59</td> </tr> <tr> <td>lang || FR</td> <td>0.41</td> <td>0.6</td> <td>0.67</td> </tr> </tbody></table>
Table 3
table_3
P19-1653
8
acl2019
Table 3 shows the human evaluation results. They are consistent with the automatic evaluation results when it comes to the preference of humans towards the deliberation-based setups, but show a more positive outlook regarding the addition of visual information (del+obj over del) for French.
[1, 1]
['Table 3 shows the human evaluation results.', 'They are consistent with the automatic evaluation results when it comes to the preference of humans towards the deliberation-based setups, but show a more positive outlook regarding the addition of visual information (del+obj over del) for French.']
[None, [' del+obj', ' del', 'FR']]
1
P19-1657table_1
Main results for findings generation on the CX-CHR (upper) and IU-Xray (lower) datasets. BLEU-n denotes the BLEU score that uses up to n-grams.
4
[['Dataset', 'CX-CHR', ' Methods', 'CNN-RNN (Vinyals et al.,2015)'], ['Dataset', 'CX-CHR', ' Methods', 'LRCN (Donahue et al., 2015)'], ['Dataset', 'CX-CHR', ' Methods', 'AdaAtt (Lu et al., 2017)'], ['Dataset', 'CX-CHR', ' Methods', 'Att2in (Rennie et al., 2017)'], ['Dataset', 'CX-CHR', ' Methods', 'CoAtt (Jing et al., 2018)'], ['Dataset', 'CX-CHR', ' Methods', 'HGRG-Agent (Li et al., 2018)'], ['Dataset', 'CX-CHR', ' Methods', 'CMASW'], ['Dataset', 'CX-CHR', ' Methods', 'CMASNWAW'], ['Dataset', 'CX-CHR', ' Methods', 'CMAS-IL'], ['Dataset', 'CX-CHR', ' Methods', 'CMAS-RL'], ['Dataset', 'IU-Xray', ' Methods', 'CNN-RNN (Vinyals et al., 2015)'], ['Dataset', 'IU-Xray', ' Methods', 'LRCN (Donahue et al., 2015)'], ['Dataset', 'IU-Xray', ' Methods', 'AdaAtt (Lu et al., 2017)'], ['Dataset', 'IU-Xray', ' Methods', 'Att2in (Rennie et al., 2017)'], ['Dataset', 'IU-Xray', ' Methods', 'CoAtt (Jing et al., 2018)'], ['Dataset', 'IU-Xray', ' Methods', 'HGRG-Agent (Li et al., 2018)'], ['Dataset', 'IU-Xray', ' Methods', 'CMASW'], ['Dataset', 'IU-Xray', ' Methods', 'CMASNW AW'], ['Dataset', 'IU-Xray', ' Methods', 'CMAS-IL'], ['Dataset', 'IU-Xray', ' Methods', 'CMAS-RL']]
1
[['BLEU-1'], ['BLEU-2'], ['BLEU-3'], ['BLEU-4'], ['ROUGE'], ['CIDEr']]
[['0.59', '0.506', '0.45', '0.411', '0.577', '1.58'], ['0.593', '0.508', '0.452', '0.413', '0.577', '1.588'], ['0.588', '0.503', '0.446', '0.409', '0.575', '1.568'], ['0.587', '0.503', '0.446', '0.408', '0.576', '1.566'], ['0.651', '0.568', '0.521', '0.469', '0.602', '2.532'], ['0.673', '0.587', '0.53', '0.486', '0.612', '2.895'], ['0.659', '0.585', '0.534', '0.497', '0.627', '2.564'], ['0.657', '0.579', '0.522', '0.479', '0.585', '1.532'], ['0.663', '0.592', '0.543', '0.507', '0.628', '2.475'], ['0.693', '0.626', '0.58', '0.545', '0.661', '2.9'], ['0.216', '0.124', '0.087', '0.066', '0.306', '0.294'], ['0.223', '0.128', '0.089', '0.067', '0.305', '0.284'], ['0.22', '0.127', '0.089', '0.068', '0.308', '0.295'], ['0.224', '0.129', '0.089', '0.068', '0.308', '0.297'], ['0.455', '0.288', '0.205', '0.154', '0.369', '0.277'], ['0.438', '0.298', '0.208', '0.151', '0.322', '0.343'], ['0.44', '0.292', '0.204', '0.147', '0.365', '0.252'], ['0.451', '0.286', '0.199', '0.146', '0.366', '0.269'], ['0.454', '0.283', '0.195', '0.143', '0.353', '0.266'], ['0.464', '0.301', '0.21', '0.154', '0.362', '0.275']]
column
['BLEU-1', 'BLEU-2', 'BLEU-3', 'BLEU-4', 'ROUGE', 'CIDEr']
['CMASW', 'CMASNWAW', 'CMAS-IL', 'CMAS-RL', 'CMASNW AW']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU-1</th> <th>BLEU-2</th> <th>BLEU-3</th> <th>BLEU-4</th> <th>ROUGE</th> <th>CIDEr</th> </tr> </thead> <tbody> <tr> <td>Dataset || CX-CHR || Methods || CNN-RNN (Vinyals et al.,2015)</td> <td>0.59</td> <td>0.506</td> <td>0.45</td> <td>0.411</td> <td>0.577</td> <td>1.58</td> </tr> <tr> <td>Dataset || CX-CHR || Methods || LRCN (Donahue et al., 2015)</td> <td>0.593</td> <td>0.508</td> <td>0.452</td> <td>0.413</td> <td>0.577</td> <td>1.588</td> </tr> <tr> <td>Dataset || CX-CHR || Methods || AdaAtt (Lu et al., 2017)</td> <td>0.588</td> <td>0.503</td> <td>0.446</td> <td>0.409</td> <td>0.575</td> <td>1.568</td> </tr> <tr> <td>Dataset || CX-CHR || Methods || Att2in (Rennie et al., 2017)</td> <td>0.587</td> <td>0.503</td> <td>0.446</td> <td>0.408</td> <td>0.576</td> <td>1.566</td> </tr> <tr> <td>Dataset || CX-CHR || Methods || CoAtt (Jing et al., 2018)</td> <td>0.651</td> <td>0.568</td> <td>0.521</td> <td>0.469</td> <td>0.602</td> <td>2.532</td> </tr> <tr> <td>Dataset || CX-CHR || Methods || HGRG-Agent (Li et al., 2018)</td> <td>0.673</td> <td>0.587</td> <td>0.53</td> <td>0.486</td> <td>0.612</td> <td>2.895</td> </tr> <tr> <td>Dataset || CX-CHR || Methods || CMASW</td> <td>0.659</td> <td>0.585</td> <td>0.534</td> <td>0.497</td> <td>0.627</td> <td>2.564</td> </tr> <tr> <td>Dataset || CX-CHR || Methods || CMASNWAW</td> <td>0.657</td> <td>0.579</td> <td>0.522</td> <td>0.479</td> <td>0.585</td> <td>1.532</td> </tr> <tr> <td>Dataset || CX-CHR || Methods || CMAS-IL</td> <td>0.663</td> <td>0.592</td> <td>0.543</td> <td>0.507</td> <td>0.628</td> <td>2.475</td> </tr> <tr> <td>Dataset || CX-CHR || Methods || CMAS-RL</td> <td>0.693</td> <td>0.626</td> <td>0.58</td> <td>0.545</td> <td>0.661</td> <td>2.9</td> </tr> <tr> <td>Dataset || IU-Xray || Methods || CNN-RNN (Vinyals et al., 2015)</td> <td>0.216</td> <td>0.124</td> <td>0.087</td> <td>0.066</td> <td>0.306</td> <td>0.294</td> </tr> <tr> <td>Dataset || IU-Xray || Methods || LRCN (Donahue et al., 2015)</td> <td>0.223</td> <td>0.128</td> <td>0.089</td> <td>0.067</td> <td>0.305</td> <td>0.284</td> </tr> <tr> <td>Dataset || IU-Xray || Methods || AdaAtt (Lu et al., 2017)</td> <td>0.22</td> <td>0.127</td> <td>0.089</td> <td>0.068</td> <td>0.308</td> <td>0.295</td> </tr> <tr> <td>Dataset || IU-Xray || Methods || Att2in (Rennie et al., 2017)</td> <td>0.224</td> <td>0.129</td> <td>0.089</td> <td>0.068</td> <td>0.308</td> <td>0.297</td> </tr> <tr> <td>Dataset || IU-Xray || Methods || CoAtt (Jing et al., 2018)</td> <td>0.455</td> <td>0.288</td> <td>0.205</td> <td>0.154</td> <td>0.369</td> <td>0.277</td> </tr> <tr> <td>Dataset || IU-Xray || Methods || HGRG-Agent (Li et al., 2018)</td> <td>0.438</td> <td>0.298</td> <td>0.208</td> <td>0.151</td> <td>0.322</td> <td>0.343</td> </tr> <tr> <td>Dataset || IU-Xray || Methods || CMASW</td> <td>0.44</td> <td>0.292</td> <td>0.204</td> <td>0.147</td> <td>0.365</td> <td>0.252</td> </tr> <tr> <td>Dataset || IU-Xray || Methods || CMASNW AW</td> <td>0.451</td> <td>0.286</td> <td>0.199</td> <td>0.146</td> <td>0.366</td> <td>0.269</td> </tr> <tr> <td>Dataset || IU-Xray || Methods || CMAS-IL</td> <td>0.454</td> <td>0.283</td> <td>0.195</td> <td>0.143</td> <td>0.353</td> <td>0.266</td> </tr> <tr> <td>Dataset || IU-Xray || Methods || CMAS-RL</td> <td>0.464</td> <td>0.301</td> <td>0.21</td> <td>0.154</td> <td>0.362</td> <td>0.275</td> </tr> </tbody></table>
Table 1
table_1
P19-1657
7
acl2019
Ablation Study. CMASW has only one writer, which is trained on both normal and abnormal findings. Table 1 shows that CMASW can achieve competitive performances to the state-of-the-art methods. CMASNW, AW is a simple concatenation of two single agent models CMASNW and CMASAW, where CMASNW is trained only on normal findings and CMASAW is trained only on abnormal findings. At test time, the final paragraph of CMASNW, AW is simply a concatenation of normal and abnormal findings generated by CMASNW and CMASAW respectively. Surprisingly, CMASNW, AW performs worse than CMASW on the CX-CHR dataset. We believe the main reason is the missing communication protocol between the two agents, which could cause conflicts when they take actions independently. For example, for an image, NW might think “the heart size is normal”, while AW believes “the heart is enlarged”. Such conflict would negatively affect their joint performances. As evidently shown in Table 1, CMAS-IL achieves higher scores than CMASNW, AW, directly proving the importance of communication between agents and thus the importance of PL. Finally, it can be observed from Table 1 that CMAS-RL consistently outperforms CMAS-IL on all metrics, which demonstrates the effectiveness of reinforcement learning.
[2, 1, 1, 1, 1, 1, 2, 2, 2, 1, 1]
['Ablation Study.', 'CMASW has only one writer, which is trained on both normal and abnormal findings.', 'Table 1 shows that CMASW can achieve competitive performances to the state-of-the-art methods.', 'CMASNW, AW is a simple concatenation of two single agent models CMASNW and CMASAW, where CMASNW is trained only on normal findings and CMASAW is trained only on abnormal findings.', 'At test time, the final paragraph of CMASNW, AW is simply a concatenation of normal and abnormal findings generated by CMASNW and CMASAW respectively.', 'Surprisingly, CMASNW, AW performs worse than CMASW on the CX-CHR dataset.', 'We believe the main reason is the missing communication protocol between the two agents, which could cause conflicts when they take actions independently.', 'For example, for an image, NW might think “the heart size is normal”, while AW believes “the heart is enlarged”.', 'Such conflict would negatively affect their joint performances.', 'As evidently shown in Table 1, CMAS-IL achieves higher scores than CMASNW, AW, directly proving the importance of communication between agents and thus the importance of PL.', 'Finally, it can be observed from Table 1 that CMAS-RL consistently outperforms CMAS-IL on all metrics, which demonstrates the effectiveness of reinforcement learning.']
[None, ['CMASW'], ['CMASW'], ['CMASNWAW'], ['CMASNWAW'], ['CMASNWAW', 'CMASW', 'CX-CHR'], None, None, None, ['CMAS-IL', 'CMASNWAW'], ['CMAS-RL', 'CMAS-IL']]
1
P19-1658table_2
Human evaluation results. Five human judges on MTurk rate each story on the following six aspects, using a 5-point Likert scale (from Strongly Disagree to Strongly Agree): Focus, Structure and Coherence, Willing-to-Share (“I Would Share”), Written-by-a-Human (“This story sounds like it was written by a human.”), Visually-Grounded, and Detailed. We take the average of the five judgments as the final score for each story. LSTM(T) improves all aspects for stories by AREL, and improves “Focus” and “Human-like” aspects for stories by GLAC.
2
[['Edited', 'N/A'], ['Edited', 'TF (T)'], ['Edited', 'TF (T+I)'], ['Edited', 'LSTM (T)'], ['Edited', 'LSTM (T+I)'], ['Edited', 'Human']]
2
[['AREL', 'Focus'], ['AREL', 'Coherence'], ['AREL', 'Share'], ['AREL', 'Human'], ['AREL', 'Grounded'], ['AREL', 'Detailed'], [' GLAC', 'Focus'], [' GLAC', 'Coherence'], [' GLAC', 'Share'], [' GLAC', 'Human'], [' GLAC', 'Grounded'], [' GLAC', 'Detailed']]
[['3.487', '3.751', '3.763', '3.746', '3.602', '3.761', '3.878', '3.908', '3.93', '3.817', '3.864', '3.938'], ['3.433', '3.705', '3.641', '3.656', '3.619', '3.631', '3.717', '3.773', '3.863', '3.672', '3.765', '3.795'], ['3.542', '3.693', '3.676', '3.643', '3.548', '3.672', '3.734', '3.759', '3.786', '3.622', '3.758', '3.744'], ['3.551', '3.8', '3.771', '3.751', '3.631', '3.81', '3.894', '3.896', '3.864', '3.848', '3.751', '3.897'], ['3.497', '3.734', '3.746', '3.742', '3.573', '3.755', '3.815', '3.872', '3.847', '3.813', '3.75', '3.869'], ['3.592', '3.87', '3.856', '3.885', '3.779', '3.878', '4.003', '4.057', '4.072', '3.976', '3.994', '4.068']]
column
['Focus', 'Coherence', 'Share', 'Human', 'Grounded', 'Detailed', 'Focus', 'Coherence', 'Share', 'Human', 'Grounded', 'Detailed']
['LSTM (T)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>AREL || Focus</th> <th>AREL || Coherence</th> <th>AREL || Share</th> <th>AREL || Human</th> <th>AREL || Grounded</th> <th>AREL || Detailed</th> <th>GLAC || Focus</th> <th>GLAC || Coherence</th> <th>GLAC || Share</th> <th>GLAC || Human</th> <th>GLAC || Grounded</th> <th>GLAC || Detailed</th> </tr> </thead> <tbody> <tr> <td>Edited || N/A</td> <td>3.487</td> <td>3.751</td> <td>3.763</td> <td>3.746</td> <td>3.602</td> <td>3.761</td> <td>3.878</td> <td>3.908</td> <td>3.93</td> <td>3.817</td> <td>3.864</td> <td>3.938</td> </tr> <tr> <td>Edited || TF (T)</td> <td>3.433</td> <td>3.705</td> <td>3.641</td> <td>3.656</td> <td>3.619</td> <td>3.631</td> <td>3.717</td> <td>3.773</td> <td>3.863</td> <td>3.672</td> <td>3.765</td> <td>3.795</td> </tr> <tr> <td>Edited || TF (T+I)</td> <td>3.542</td> <td>3.693</td> <td>3.676</td> <td>3.643</td> <td>3.548</td> <td>3.672</td> <td>3.734</td> <td>3.759</td> <td>3.786</td> <td>3.622</td> <td>3.758</td> <td>3.744</td> </tr> <tr> <td>Edited || LSTM (T)</td> <td>3.551</td> <td>3.8</td> <td>3.771</td> <td>3.751</td> <td>3.631</td> <td>3.81</td> <td>3.894</td> <td>3.896</td> <td>3.864</td> <td>3.848</td> <td>3.751</td> <td>3.897</td> </tr> <tr> <td>Edited || LSTM (T+I)</td> <td>3.497</td> <td>3.734</td> <td>3.746</td> <td>3.742</td> <td>3.573</td> <td>3.755</td> <td>3.815</td> <td>3.872</td> <td>3.847</td> <td>3.813</td> <td>3.75</td> <td>3.869</td> </tr> <tr> <td>Edited || Human</td> <td>3.592</td> <td>3.87</td> <td>3.856</td> <td>3.885</td> <td>3.779</td> <td>3.878</td> <td>4.003</td> <td>4.057</td> <td>4.072</td> <td>3.976</td> <td>3.994</td> <td>4.068</td> </tr> </tbody></table>
Table 2
table_2
P19-1658
4
acl2019
Human Evaluation. Following the evaluation procedure of the first VIST Challenge (Mitchell et al., 2018), for each visual story, we recruit five human judges on MTurk to rate it on six aspects (at $0.1/HIT). We take the average of the five judgments as the final scores for the story. Table 2 shows the results. The LSTM using text-only input outperforms all other baselines. It improves all six aspects for stories by AREL, and improves “Focus” and “Human-like” aspects for stories by GLAC. These results demonstrate that a relatively small set of human edits can be used to boost the story quality of an existing large VIST model. Table 2 also suggests that the quality of a post-edited story is heavily decided by its pre-edited version. Even after editing by human editors, AREL’s stories still do not achieve the quality of pre-edited stories by GLAC. The inefficacy of image features and the Transformer model might be caused by the small size of VIST-Edit. It also requires further research to develop a post-editing model in a multimodal context.
[2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1]
['Human Evaluation.', 'Following the evaluation procedure of the first VIST Challenge (Mitchell et al., 2018), for each visual story, we recruit five human judges on MTurk to rate it on six aspects (at $0.1/HIT).', 'We take the average of the five judgments as the final scores for the story.', 'Table 2 shows the results.', 'The LSTM using text-only input outperforms all other baselines.', 'It improves all six aspects for stories by AREL, and improves “Focus” and “Human-like” aspects for stories by GLAC.', 'These results demonstrate that a relatively small set of human edits can be used to boost the story quality of an existing large VIST model.', 'Table 2 also suggests that the quality of a post-edited story is heavily decided by its pre-edited version.', 'Even after editing by human editors, AREL’s stories still do not achieve the quality of pre-edited stories by GLAC.', 'The inefficacy of image features and the Transformer model might be caused by the small size of VIST-Edit.', 'It also requires further research to develop a post-editing model in a multimodal context.']
[None, None, None, None, ['LSTM (T)'], ['AREL', ' GLAC', 'LSTM (T)'], None, None, ['Human', 'AREL', ' GLAC'], ['TF (T+I)'], None]
1