table_id_paper: string, length 15
caption: string, length 14-1.88k
row_header_level: int32, range 1-9
row_headers: large_string, length 15-1.75k
column_header_level: int32, range 1-6
column_headers: large_string, length 7-1.01k
contents: large_string, length 18-2.36k
metrics_loc: string, 2 classes
metrics_type: large_string, length 5-532
target_entity: large_string, length 2-330
table_html_clean: large_string, length 274-7.88k
table_name: string, 9 classes
table_id: string, 9 classes
paper_id: string, length 8
page_no: int32, range 1-13
dir: string, 8 classes
description: large_string, length 103-3.8k
class_sentence: string, length 3-120
sentences: large_string, length 110-3.92k
header_mention: string, length 12-1.8k
valid: int32, range 0-1
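A minimal sketch of how one record with the fields above might be turned back into a readable grid, assuming the records are available as Python dicts whose row_headers, column_headers, and contents fields hold Python-style list literals (as in the example records below); the helper name record_to_table is ours, not part of any released loader.

```python
import ast

def record_to_table(rec):
    """Rebuild a simple grid from one record's flattened header/content fields.

    Assumes row_headers, column_headers and contents are Python-style list
    literals, as in the example records below; field names follow the schema above.
    """
    row_headers = ast.literal_eval(rec["row_headers"])    # e.g. [['Method', 'K-means'], ...]
    col_headers = ast.literal_eval(rec["column_headers"])  # e.g. [['Average P@50']]
    contents = ast.literal_eval(rec["contents"])            # one list of cell strings per row

    header_row = [" || ".join(h) for h in col_headers]
    rows = [[" || ".join(rh)] + cells for rh, cells in zip(row_headers, contents)]
    return [[""] + header_row] + rows

# toy usage with values copied from the first example record
rec = {
    "row_headers": "[['Method', 'K-means'], ['Method', 'Random']]",
    "column_headers": "[['Average P@50']]",
    "contents": "[['0.96'], ['0.75']]",
}
for row in record_to_table(rec):
    print(row)
```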
P18-2015table_2
Performance of seed selection methods.
2
[['Method', 'K-means'], ['Method', 'HITS Graph1'], ['Method', 'HITS Graph2'], ['Method', 'HITS Graph3'], ['Method', 'HITS+K-means Graph1'], ['Method', 'HITS+K-means Graph2'], ['Method', 'HITS+K-means Graph3'], ['Method', 'LSA'], ['Method', 'NMF'], ['Method', 'Random']]
1
[['Average P@50']]
[['0.96'], ['0.90'], ['0.85'], ['0.90'], ['0.92'], ['0.85'], ['0.94'], ['0.90'], ['0.89'], ['0.75']]
column
['Average P@50']
['HITS Graph1', 'HITS Graph2', 'HITS Graph3', 'HITS+K-means Graph1', 'HITS+K-means Graph2', 'HITS+K-means Graph3']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Average P@50</th> </tr> </thead> <tbody> <tr> <td>Method || K-means</td> <td>0.96</td> </tr> <tr> <td>Method || HITS Graph1</td> <td>0.90</td> </tr> <tr> <td>Method || HITS Graph2</td> <td>0.85</td> </tr> <tr> <td>Method || HITS Graph3</td> <td>0.90</td> </tr> <tr> <td>Method || HITS+K-means Graph1</td> <td>0.92</td> </tr> <tr> <td>Method || HITS+K-means Graph2</td> <td>0.85</td> </tr> <tr> <td>Method || HITS+K-means Graph3</td> <td>0.94</td> </tr> <tr> <td>Method || LSA</td> <td>0.90</td> </tr> <tr> <td>Method || NMF</td> <td>0.89</td> </tr> <tr> <td>Method || Random</td> <td>0.75</td> </tr> </tbody></table>
Table 2
table_2
P18-2015
4
acl2018
The performances of the seed selection methods are presented in Table 2. For the HITS-based and HITS+K-means-based methods, we display the P@50 with three types of graph representation as shown in Section 4.2. We use random seed selection as the baseline for comparison. As Table 2 shows, the random method achieved a precision of 0.75. The relation extraction system that uses the random method has the worst average P@50 among all seed selection strategies. The HITS-based method's P@50 when using Graph1 and Graph3 is confirmed to be better than when using Graph2. This indicates that relying on reliable instances is better than reasoning over patterns (recall that for Graph2, we first choose the patterns, then select the instances associated with those patterns), as there is a possibility that a pattern can be ambiguous, and therefore, instances linked to that pattern can be incorrect. The K-means-based seed selection method provides the best average P@50 with a performance of 0.96. The HITS+K-means-based method performs better than using only the HITS strategy, while the LSA-based and NMF-based methods have comparable performance.
[1, 1, 1, 1, 2, 1, 2, 1, 1]
['The performances of the seed selection methods are presented in Table 2.', 'For the HITS-based and HITS+K-means-based methods, we display the P@50 with three types of graph representation as shown in Section 4.2.', 'We use random seed selection as the baseline for comparison.', 'As Table 2 shows, the random method achieved a precision of 0.75.', 'The relation extraction system that uses the random method has the worst average P@50 among all seed selection strategies.', 'The HITS-based method P@50 when using Graph1 and Graph3 are confirmed to be better than when using Graph2.', 'This indicates that relying on reliable instances is better than reasoning over patterns (recall that for the Graph2, we first choose the patterns, then select the instances associated with those patterns), as there is a possibility that a pattern can be ambiguous, and therefore, instances linked to that pattern can be incorrect.', 'The Kmeans-based seed selection method provides the best average P@50 with a performance of 0.96.', 'The HITS+K-means-based method performs better than using only the HITS strategy, while the LSA-based and NMF-based methods have a comparable performance.']
[None, ['HITS Graph1', 'HITS Graph2', 'HITS Graph3', 'HITS+K-means Graph1', 'HITS+K-means Graph2', 'HITS+K-means Graph3', 'Average P@50'], ['Random'], ['Random', 'Average P@50'], ['Random', 'Average P@50'], ['HITS Graph1', 'HITS Graph2', 'HITS Graph3'], ['HITS Graph1', 'HITS Graph2', 'HITS Graph3'], ['K-means', 'Average P@50'], ['HITS+K-means Graph1', 'HITS+K-means Graph2', 'HITS+K-means Graph3', 'HITS Graph1', 'HITS Graph2', 'HITS Graph3', 'LSA', 'NMF']]
1
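The record above scores seed selection methods by average P@50. As a reminder of the metric only, here is a generic precision-at-K sketch over toy data; it is not the paper's evaluation code.

```python
def precision_at_k(ranked_items, relevant, k=50):
    """Fraction of the top-k ranked items that are in the relevant set."""
    top_k = ranked_items[:k]
    return sum(1 for item in top_k if item in relevant) / k

# toy usage: 3 of the top 4 retrieved seeds are correct -> P@4 = 0.75
ranked = ["seed_a", "seed_b", "seed_c", "seed_d"]
gold = {"seed_a", "seed_b", "seed_d"}
print(precision_at_k(ranked, gold, k=4))
```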
P18-2016table_3
Ranking results of scoring functions.
2
[['f', 'f0'], ['f', 'f1'], ['f', 'f2'], ['f', 'f3'], ['f', 'f4']]
1
[['MAP'], ['P@50'], ['P@100'], ['P@200'], ['P@300']]
[['0.42', '0.40', '0.44', '0.42', '0.38'], ['0.58', '0.70', '0.60', '0.53', '0.44'], ['0.48', '0.56', '0.52', '0.49', '0.42'], ['0.59', '0.68', '0.63', '0.55', '0.44'], ['0.56', '0.40', '0.48', '0.50', '0.42']]
column
['MAP', 'P@50', 'P@100', 'P@200', 'P@300']
['f3']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MAP</th> <th>P@50</th> <th>P@100</th> <th>P@200</th> <th>P@300</th> </tr> </thead> <tbody> <tr> <td>f || f0</td> <td>0.42</td> <td>0.40</td> <td>0.44</td> <td>0.42</td> <td>0.38</td> </tr> <tr> <td>f || f1</td> <td>0.58</td> <td>0.70</td> <td>0.60</td> <td>0.53</td> <td>0.44</td> </tr> <tr> <td>f || f2</td> <td>0.48</td> <td>0.56</td> <td>0.52</td> <td>0.49</td> <td>0.42</td> </tr> <tr> <td>f || f3</td> <td>0.59</td> <td>0.68</td> <td>0.63</td> <td>0.55</td> <td>0.44</td> </tr> <tr> <td>f || f4</td> <td>0.56</td> <td>0.40</td> <td>0.48</td> <td>0.50</td> <td>0.42</td> </tr> </tbody></table>
Table 3
table_3
P18-2016
5
acl2018
Table 3 shows the ranking results using Mean Average Precision (MAP) and Precision at K as the metrics. Accumulative scores (f1 and f3) generally do better. Thus, we choose f = f3 with a MAP score of 0.59 as the scoring function.
[1, 1, 2]
['Table 3 shows the ranking results using Mean Average Precision (MAP) and Precision at K as the metrics.', 'Accumulative scores (f1 and f3) generally do better.', 'Thus, we choose f = f3 with a MAP score of 0.59 as the scoring function.']
[['MAP', 'P@50', 'P@100', 'P@200', 'P@300'], ['f1', 'f3'], ['f3']]
1
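The record above ranks scoring functions by MAP and P@K. A minimal sketch of mean average precision over binary relevance judgments, with toy queries rather than the paper's data:

```python
def average_precision(ranked, relevant):
    """Mean of the precision values at each rank where a relevant item appears."""
    hits, precisions = 0, []
    for rank, item in enumerate(ranked, start=1):
        if item in relevant:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(relevant) if relevant else 0.0

def mean_average_precision(queries):
    """queries: list of (ranked_list, relevant_set) pairs, one per query."""
    return sum(average_precision(r, g) for r, g in queries) / len(queries)

# toy usage with two queries
print(mean_average_precision([(["a", "x", "b"], {"a", "b"}),
                              (["y", "c"], {"c"})]))  # ~0.67
```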
P18-2020table_1
Evaluation on GermEval data, using the official metric (metric 1) of the GermEval 2014 task that combines inner and outer chunks.
4
[['Type', 'CRF', 'Model', 'StanfordNER'], ['Type', 'CRF', 'Model', 'GermaNER'], ['Type', 'RNN', 'Model', 'UKP'], ['Type', '-', 'Model', 'ExB'], ['Type', 'RNN', 'Model', 'BiLSTM-WikiEmb'], ['Type', 'RNN', 'Model', 'BiLSTM-EuroEmb']]
1
[['Pr'], ['R'], ['F1']]
[['80.02', '62.29', '70.05'], ['81.31', '68.00', '74.06'], ['79.54', '71.10', '75.09'], ['78.07', '74.75', '76.38'], ['81.95', '78.13', '79.99*'], ['75.50', '70.72', '73.03']]
column
['Pr', 'R', 'F1']
['BiLSTM-WikiEmb']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Pr</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Type || CRF || Model || StanfordNER</td> <td>80.02</td> <td>62.29</td> <td>70.05</td> </tr> <tr> <td>Type || CRF || Model || GermaNER</td> <td>81.31</td> <td>68.00</td> <td>74.06</td> </tr> <tr> <td>Type || RNN || Model || UKP</td> <td>79.54</td> <td>71.10</td> <td>75.09</td> </tr> <tr> <td>Type || - || Model || ExB</td> <td>78.07</td> <td>74.75</td> <td>76.38</td> </tr> <tr> <td>Type || RNN || Model || BiLSTM-WikiEmb</td> <td>81.95</td> <td>78.13</td> <td>79.99*</td> </tr> <tr> <td>Type || RNN || Model || BiLSTM-EuroEmb</td> <td>75.50</td> <td>70.72</td> <td>73.03</td> </tr> </tbody></table>
Table 1
table_1
P18-2020
3
acl2018
Table 1 shows results on GermEval using the official metric (metric 1) for the best performing systems. This measure considers both outer and inner span annotations. Within the challenge, the ExB (Hanig et al., 2015) ensemble classifier achieved the best result with an F1 score of 76.38, followed by the RNN-based method from UKP (Reimers et al., 2014) with 75.09. GermaNER achieves high precision, but cannot compete in terms of recall. Our BiLSTM with Wikipedia word embeddings scores highest (79.99) and outperforms the shared task winner ExB significantly, based on a bootstrap resampling test (Efron and Tibshirani, 1994). Using Europeana embeddings, the performance drops to an F1 score of 73.03, due to the difference in vocabulary.
[1, 2, 1, 1, 1, 1]
['Table 1 shows results on GermEval using the official metric (metric 1) for the best performing systems.', 'This measure considers both outer and inner span annotations.', 'Within the challenge, the ExB (Hanig et al., 2015) ensemble classifier achieved the best result with an F1 score of 76.38, followed by the RNN-based method from UKP (Reimers et al., 2014) with 75.09.', 'GermaNER achieves high precision, but cannot compete in terms of recall.', 'Our BiLSTM with Wikipedia word embeddings, scores highest (79.99) and outperforms the shared task winner ExB significantly, based on a bootstrap resampling test (Efron and Tibshirani, 1994).', 'Using Europeana embeddings, the performance drops to an F1 score of 73.03, due to the difference in vocabulary.']
[None, None, ['ExB', 'F1', 'RNN', 'UKP'], ['GermaNER', 'Pr', 'R'], ['BiLSTM-WikiEmb', 'ExB', 'F1'], ['BiLSTM-EuroEmb', 'F1']]
1
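The description above reports significance via a bootstrap resampling test (Efron and Tibshirani, 1994). Below is a generic paired-bootstrap sketch over per-item scores; the resample count, seed, and decision rule are illustrative assumptions, not the authors' script.

```python
import random

def paired_bootstrap_win_rate(scores_a, scores_b, n_resamples=1000, seed=0):
    """Fraction of bootstrap resamples of the test items in which system A's
    mean score exceeds system B's; values near 1.0 are read as a significant
    improvement of A over B.

    scores_a / scores_b: per-item scores (e.g., per-sentence F1), aligned by index.
    """
    rng = random.Random(seed)
    n = len(scores_a)
    wins = 0
    for _ in range(n_resamples):
        idx = [rng.randrange(n) for _ in range(n)]
        if sum(scores_a[i] for i in idx) > sum(scores_b[i] for i in idx):
            wins += 1
    return wins / n_resamples

# toy usage: system A is slightly better on most items
a = [0.8, 0.9, 0.7, 0.85, 0.75, 0.9]
b = [0.7, 0.85, 0.7, 0.8, 0.7, 0.85]
print(paired_bootstrap_win_rate(a, b))
```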
P18-2020table_5
Results for different test sets when using transfer learning. † marks results statistically significantly better than the ones reported in Table 4.
4
[['Train', 'CoNLL', 'Transfer', 'GermEval'], ['Train', 'CoNLL', 'Transfer', 'LFT'], ['Train', 'CoNLL', 'Transfer', 'ONB'], ['Train', 'GermEval', 'Transfer', 'CoNLL'], ['Train', 'GermEval', 'Transfer', 'LFT'], ['Train', 'GermEval', 'Transfer', 'ONB']]
2
[['BiLSTM-WikiEmb', 'CoNLL'], ['BiLSTM-WikiEmb', 'GermEval'], ['BiLSTM-WikiEmb', 'LFT'], ['BiLSTM-WikiEmb', 'ONB'], ['BiLSTM-EuroEmb', 'CoNLL'], ['BiLSTM-EuroEmb', 'GermEval'], ['BiLSTM-EuroEmb', 'LFT'], ['BiLSTM-EuroEmb', 'ONB']]
[['78.55', '82.93', '55.28', '64.93', '72.23', '75.78', '51.98', '61.74'], ['62.80', '58.89', '72.90', '67.96', '56.30', '51.25', '70.04', '65.65'], ['62.05', '57.19', '59.43', '76.17', '55.82', '49.14', '54.19', '73.68'], ['84.73†', '72.11', '54.21', '65.95', '78.41', '63.42', '52.02', '59.28'], ['67.77', '69.09', '74.33†', '70.57', '55.83', '57.71', '72.03', '70.36'], ['72.15', '73.18', '62.52', '76.06', '64.05', '64.20', '57.12', '78.56†']]
column
['F1', 'F1', 'F1', 'F1', 'F1', 'F1', 'F1', 'F1']
['BiLSTM-WikiEmb', 'BiLSTM-EuroEmb']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BiLSTM-WikiEmb || CoNLL</th> <th>BiLSTM-WikiEmb || GermEval</th> <th>BiLSTM-WikiEmb || LFT</th> <th>BiLSTM-WikiEmb || ONB</th> <th>BiLSTM-EuroEmb || CoNLL</th> <th>BiLSTM-EuroEmb || GermEval</th> <th>BiLSTM-EuroEmb || LFT</th> <th>BiLSTM-EuroEmb || ONB</th> </tr> </thead> <tbody> <tr> <td>Train || CoNLL || Transfer || GermEval</td> <td>78.55</td> <td>82.93</td> <td>55.28</td> <td>64.93</td> <td>72.23</td> <td>75.78</td> <td>51.98</td> <td>61.74</td> </tr> <tr> <td>Train || CoNLL || Transfer || LFT</td> <td>62.80</td> <td>58.89</td> <td>72.90</td> <td>67.96</td> <td>56.30</td> <td>51.25</td> <td>70.04</td> <td>65.65</td> </tr> <tr> <td>Train || CoNLL || Transfer || ONB</td> <td>62.05</td> <td>57.19</td> <td>59.43</td> <td>76.17</td> <td>55.82</td> <td>49.14</td> <td>54.19</td> <td>73.68</td> </tr> <tr> <td>Train || GermEval || Transfer || CoNLL</td> <td>84.73†</td> <td>72.11</td> <td>54.21</td> <td>65.95</td> <td>78.41</td> <td>63.42</td> <td>52.02</td> <td>59.28</td> </tr> <tr> <td>Train || GermEval || Transfer || LFT</td> <td>67.77</td> <td>69.09</td> <td>74.33†</td> <td>70.57</td> <td>55.83</td> <td>57.71</td> <td>72.03</td> <td>70.36</td> </tr> <tr> <td>Train || GermEval || Transfer || ONB</td> <td>72.15</td> <td>73.18</td> <td>62.52</td> <td>76.06</td> <td>64.05</td> <td>64.20</td> <td>57.12</td> <td>78.56†</td> </tr> </tbody></table>
Table 5
table_5
P18-2020
5
acl2018
The results in Table 5 show significant improvements for the CoNLL dataset but performance drops for GermEval. Combining contemporary sources with historic target corpora yields consistent benefits. Performance on LFT increases from 69.62 to 74.33 and on ONB from 73.31 to 78.56. Cross-domain classification scores are also improved consistently. The GermEval corpus is more appropriate as a source corpus, presumably because it is both larger and drawn from encyclopaedic text, more varied than newswire. We conclude that transfer learning is beneficial for BiLSTMs, especially when training data for the target domain is scarce.
[1, 2, 1, 2, 2, 2]
['The results in Table 5 show significant improvements for the CoNLL dataset but performance drops for GermEval.', 'Combining contemporary sources with historic target corpora yields to consistent benefits.', 'Performance on LFT increases from 69.62 to 74.33 and on ONB from 73.31 to 78.56.', 'Cross-domain classification scores are also improved consistently.', 'The GermEval corpus is more appropriate as a source corpus, presumably because it is both larger and drawn from encyclopaedic text, more varied than newswire.', 'We conclude that transfer learning is beneficial for BiLSTMs, especially when training data for the target domain is scarce.']
[['CoNLL', 'GermEval'], None, ['BiLSTM-WikiEmb', 'LFT', 'BiLSTM-EuroEmb', 'ONB'], None, ['GermEval'], ['BiLSTM-WikiEmb', 'BiLSTM-EuroEmb']]
1
P18-2021table_2
Results of 10×10-fold cross-validation.
2
[['Feature Set', '# Tokens'], ['Feature Set', '# Sentences'], ['Feature Set', 'All'], ['Feature Set', 'Significant'], ['Feature Set', 'Relevant']]
1
[['Precision'], ['Recall'], ['F1'], ['AUC']]
[['0.793', '0.996', '0.883', '0.610'], ['0.792', '0.999', '0.884', '0.584'], ['0.829', '0.926', '0.872', '0.849'], ['0.805', '0.953', '0.871', '0.805'], ['0.802', '0.963', '0.874', '0.819']]
column
['Precision', 'Recall', 'F1', 'AUC']
['All', 'Significant', 'Relevant']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Precision</th> <th>Recall</th> <th>F1</th> <th>AUC</th> </tr> </thead> <tbody> <tr> <td>Feature Set || # Tokens</td> <td>0.793</td> <td>0.996</td> <td>0.883</td> <td>0.610</td> </tr> <tr> <td>Feature Set || # Sentences</td> <td>0.792</td> <td>0.999</td> <td>0.884</td> <td>0.584</td> </tr> <tr> <td>Feature Set || All</td> <td>0.829</td> <td>0.926</td> <td>0.872</td> <td>0.849</td> </tr> <tr> <td>Feature Set || Significant</td> <td>0.805</td> <td>0.953</td> <td>0.871</td> <td>0.805</td> </tr> <tr> <td>Feature Set || Relevant</td> <td>0.802</td> <td>0.963</td> <td>0.874</td> <td>0.819</td> </tr> </tbody></table>
Table 2
table_2
P18-2021
4
acl2018
Table 2 shows the average precision, recall, F1-measure, and AUC. The classifiers trained on the linguistic features, while performing near the baselines on the first three measures, substantially outperform the baselines on AUC, with all three yielding values over 0.8. Given these results and the imbalanced nature of the dataset, it seems that the classifiers trained on the linguistic features are able to identify both classes of comments with high accuracy, while the baseline classifiers perform only marginally better than a majority class baseline.
[1, 1, 2]
['Table 2 shows the average precision, recall, F1-measure, and AUC.', 'The classifiers trained on the linguistic features, while performing near the baselines on the first three measures, substantially outperform the baselines on AUC, with all three yielding values over 0.8.', 'Given these results and the imbalanced nature of the dataset, it seems that the classifiers trained on the linguistic features are able to identify both classes of comments with high accuracy, while the baseline classifiers perform only marginally better than a majority class baseline.']
[['Precision', 'Recall', 'F1', 'AUC'], ['All', 'Significant', 'Relevant', 'Precision', 'Recall', 'F1', 'AUC'], ['All', 'Significant', 'Relevant', '# Tokens', '# Sentences']]
1
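The record above reports precision, recall, F1, and AUC from 10×10-fold cross-validation. For reference, a minimal sketch of how precision, recall, and F1 follow from confusion counts (toy numbers, unrelated to the paper's folds):

```python
def precision_recall_f1(tp, fp, fn):
    """Standard binary-classification metrics from confusion counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# toy counts: many positives recovered at the cost of some false positives
print(precision_recall_f1(tp=80, fp=20, fn=3))
```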
P18-2023table_4
Performance of word representations learned under different configurations. Baidubaike is used as the training corpus. The top 1 results are in bold.
2
[['SGNS', 'word'], ['SGNS', 'word+ngram'], ['SGNS', 'word+char'], ['PPMI', 'word'], ['PPMI', 'word+ngram'], ['PPMI', 'word+char']]
2
[['CA_translated', 'Cap.'], ['CA_translated', 'Sta.'], ['CA_translated', 'Fam.'], ['CA8', 'A'], ['CA8', 'AB'], ['CA8', 'Pre.'], ['CA8', 'Suf.'], ['CA8', 'Mor.'], ['CA8', 'Geo.'], ['CA8', 'His.'], ['CA8', 'Nat.'], ['CA8', 'Peo.'], ['CA8', 'Sem.']]
[['.706', '.966', '.603', '.117', '.162', '.181', '.389', '.222', '.414', '.345', '.236', '.223', '.327'], ['.715', '.977', '.640', '.143', '.184', '.197', '.429', '.250', '.449', '.308', '.276', '.310', '.368'], ['.676', '.966', '.548', '.358', '.540', '.326', '.612', '.455', '.468', '.226', '.296', '.305', '.368'], ['.925', '.920', '.548', '.103', '.139', '.138', '.464', '.226', '.627', '.501', '.300', '.515', '.522'], ['.943', '.960', '.658', '.102', '.129', '.168', '.456', '.230', '.680', '.535', '.371', '.626', '.586'], ['.913', '.886', '.614', '.106', '.190', '.173', '.505', '.260', '.638', '.502', '.288', '.515', '.524']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy']
['word+ngram', 'word+char']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>CA_translated || Cap.</th> <th>CA_translated || Sta.</th> <th>CA_translated || Fam.</th> <th>CA8 || A</th> <th>CA8 || AB</th> <th>CA8 || Pre.</th> <th>CA8 || Suf.</th> <th>CA8 || Mor.</th> <th>CA8 || Geo.</th> <th>CA8 || His.</th> <th>CA8 || Nat.</th> <th>CA8 || Peo.</th> <th>CA8 || Sem.</th> </tr> </thead> <tbody> <tr> <td>SGNS || word</td> <td>.706</td> <td>.966</td> <td>.603</td> <td>.117</td> <td>.162</td> <td>.181</td> <td>.389</td> <td>.222</td> <td>.414</td> <td>.345</td> <td>.236</td> <td>.223</td> <td>.327</td> </tr> <tr> <td>SGNS || word+ngram</td> <td>.715</td> <td>.977</td> <td>.640</td> <td>.143</td> <td>.184</td> <td>.197</td> <td>.429</td> <td>.250</td> <td>.449</td> <td>.308</td> <td>.276</td> <td>.310</td> <td>.368</td> </tr> <tr> <td>SGNS || word+char</td> <td>.676</td> <td>.966</td> <td>.548</td> <td>.358</td> <td>.540</td> <td>.326</td> <td>.612</td> <td>.455</td> <td>.468</td> <td>.226</td> <td>.296</td> <td>.305</td> <td>.368</td> </tr> <tr> <td>PPMI || word</td> <td>.925</td> <td>.920</td> <td>.548</td> <td>.103</td> <td>.139</td> <td>.138</td> <td>.464</td> <td>.226</td> <td>.627</td> <td>.501</td> <td>.300</td> <td>.515</td> <td>.522</td> </tr> <tr> <td>PPMI || word+ngram</td> <td>.943</td> <td>.960</td> <td>.658</td> <td>.102</td> <td>.129</td> <td>.168</td> <td>.456</td> <td>.230</td> <td>.680</td> <td>.535</td> <td>.371</td> <td>.626</td> <td>.586</td> </tr> <tr> <td>PPMI || word+char</td> <td>.913</td> <td>.886</td> <td>.614</td> <td>.106</td> <td>.190</td> <td>.173</td> <td>.505</td> <td>.260</td> <td>.638</td> <td>.502</td> <td>.288</td> <td>.515</td> <td>.524</td> </tr> </tbody></table>
Table 4
table_4
P18-2023
4
acl2018
Table 4 lists the performance of the word representations on the CA_translated and CA8 datasets under different configurations. We can observe that on the CA8 dataset, SGNS representations perform better in analogical reasoning over morphological relations, while PPMI representations show great advantages on semantic relations. However, Table 4 shows that there is only a slight increase on the CA_translated dataset with ngram features, and the accuracies in most cases decrease after integrating character features. In contrast, on the CA8 dataset, the introduction of ngram and character features brings significant and consistent improvements on almost all categories. Furthermore, character features are especially advantageous for reasoning over morphological relations. The SGNS model integrated with character features even doubles the accuracy on morphological questions.
[1, 1, 1, 1, 2, 1]
['Table 4 lists the performance of them on CA_translated and CA8 datasets under different configurations.', 'We can observe that on CA8 dataset, SGNS representations perform better in analogical reasoning of morphological relations and PPMI representations show great advantages in semantic relations.', 'However, Table 4 shows that there is only a slight increase on CA_translated dataset with ngram features, and the accuracies in most cases decrease after integrating character features.', 'In contrast, on CA8 dataset, the introduction of ngram and character features brings significant and consistent improvements on almost all the categories.', 'Furthermore, character features are especially advantageous for reasoning of morphological relations.', 'SGNS model integrating with character features even doubles the accuracy in morphological questions.']
[['CA_translated', 'CA8'], ['CA8', 'SGNS', 'Mor.', 'PPMI', 'Sem.'], ['CA_translated', 'word+ngram', 'word'], ['CA8', 'word+ngram', 'word+char'], ['word+char', 'Mor.'], ['SGNS', 'word+char', 'Mor.']]
1
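The record above (and the next one) evaluates word representations on analogy questions (CA_translated, CA8). Such questions are commonly answered with the 3CosAdd rule; the sketch below uses a toy embedding dict and is not the paper's evaluation code.

```python
import numpy as np

def solve_analogy(a, b, c, embeddings):
    """Answer 'a is to b as c is to ?' with the standard 3CosAdd rule:
    return the word (other than a, b, c) closest to vec(b) - vec(a) + vec(c)."""
    target = embeddings[b] - embeddings[a] + embeddings[c]
    target = target / np.linalg.norm(target)
    best, best_sim = None, -np.inf
    for word, vec in embeddings.items():
        if word in (a, b, c):
            continue
        sim = float(np.dot(vec / np.linalg.norm(vec), target))
        if sim > best_sim:
            best, best_sim = word, sim
    return best

# toy 2-d vectors, purely illustrative (not the embeddings trained in the paper)
emb = {w: np.array(v, dtype=float) for w, v in
       {"king": (1, 1), "man": (1, 0), "woman": (0, 0.2), "queen": (0.1, 1.2)}.items()}
print(solve_analogy("man", "king", "woman", emb))  # expected: 'queen'
```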
P18-2023table_5
Performance of word representations learned upon different training corpora by SGNS with context feature of word. The top 2 results are in bold.
1
[['Wikipedia 1.2G'], ['Baidubaike 4.3G'], ['People Daily 4.2G'], ['Sogou News 4.0G'], ['Zhihu QA 2.2G'], ['Combination 15.9G']]
2
[['CA_translated', 'Cap.'], ['CA_translated', 'Sta.'], ['CA_translated', 'Fam.'], ['CA8', 'A'], ['CA8', 'AB'], ['CA8', 'Pre.'], ['CA8', 'Suf.'], ['CA8', 'Mor.'], ['CA8', 'Geo.'], ['CA8', 'His.'], ['CA8', 'Nat.'], ['CA8', 'Peo.'], ['CA8', 'Sem.']]
[['.597', '.771', '.360', '.029', '.018', '.152', '.266', '.180', '.339', '.125', '.147', '.079', '.236'], ['.706', '.966', '.603', '.117', '.162', '.181', '.389', '.222', '.414', '.345', '.236', '.223', '.327'], ['.925', '.989', '.547', '.140', '.158', '.213', '.355', '.226', '.694', '.019', '.206', '.157', '.455'], ['.619', '.966', '.496', '.057', '.075', '.131', '.176', '.115', '.432', '.067', '.150', '.145', '.302'], ['.277', '.491', '.625', '.175', '.199', '.134', '.251', '.189', '.146', '.147', '.250', '.189', '.181'], ['.872', '.994', '.710', '.223', '.300', '.234', '.518', '.321', '.662', '.293', '.310', '.307', '.467']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy']
['Combination 15.9G']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>CA_translated || Cap.</th> <th>CA_translated || Sta.</th> <th>CA_translated || Fam.</th> <th>CA8 || A</th> <th>CA8 || AB</th> <th>CA8 || Pre.</th> <th>CA8 || Suf.</th> <th>CA8 || Mor.</th> <th>CA8 || Geo.</th> <th>CA8 || His.</th> <th>CA8 || Nat.</th> <th>CA8 || Peo.</th> <th>CA8 || Sem.</th> </tr> </thead> <tbody> <tr> <td>Wikipedia 1.2G</td> <td>.597</td> <td>.771</td> <td>.360</td> <td>.029</td> <td>.018</td> <td>.152</td> <td>.266</td> <td>.180</td> <td>.339</td> <td>.125</td> <td>.147</td> <td>.079</td> <td>.236</td> </tr> <tr> <td>Baidubaike 4.3G</td> <td>.706</td> <td>.966</td> <td>.603</td> <td>.117</td> <td>.162</td> <td>.181</td> <td>.389</td> <td>.222</td> <td>.414</td> <td>.345</td> <td>.236</td> <td>.223</td> <td>.327</td> </tr> <tr> <td>People Daily 4.2G</td> <td>.925</td> <td>.989</td> <td>.547</td> <td>.140</td> <td>.158</td> <td>.213</td> <td>.355</td> <td>.226</td> <td>.694</td> <td>.019</td> <td>.206</td> <td>.157</td> <td>.455</td> </tr> <tr> <td>Sogou News 4.0G</td> <td>.619</td> <td>.966</td> <td>.496</td> <td>.057</td> <td>.075</td> <td>.131</td> <td>.176</td> <td>.115</td> <td>.432</td> <td>.067</td> <td>.150</td> <td>.145</td> <td>.302</td> </tr> <tr> <td>Zhihu QA 2.2G</td> <td>.277</td> <td>.491</td> <td>.625</td> <td>.175</td> <td>.199</td> <td>.134</td> <td>.251</td> <td>.189</td> <td>.146</td> <td>.147</td> <td>.250</td> <td>.189</td> <td>.181</td> </tr> <tr> <td>Combination 15.9G</td> <td>.872</td> <td>.994</td> <td>.710</td> <td>.223</td> <td>.300</td> <td>.234</td> <td>.518</td> <td>.321</td> <td>.662</td> <td>.293</td> <td>.310</td> <td>.307</td> <td>.467</td> </tr> </tbody></table>
Table 5
table_5
P18-2023
5
acl2018
Table 5 shows that accuracies increase with the growth in corpus size, e.g., Baidubaike (an online Chinese encyclopedia) has a clear advantage over Wikipedia. Also, the domain of a corpus plays an important role in the experiments. We can observe that vectors trained on news data are beneficial to geography relations, especially on People’s Daily, which has a focus on political news. Another example is Zhihu QA, an online question-answering corpus which contains more informal data than the others. It is helpful for reduplication relations since many reduplication words appear frequently in spoken language. With the largest size and varied domains, the Combination corpus performs much better than the others in both morphological and semantic relations.
[1, 2, 1, 1, 2, 1]
['Table 5 shows that accuracies increase with the growth in corpus size, e.g. Baidubaike (an online Chinese encyclopedia) has a clear advantage over Wikipedia.', 'Also, the domain of a corpus plays an important role in the experiments.', 'We can observe that vectors trained on news data are beneficial to geography relations, especially on People’s Daily which has a focus on political news.', 'Another example is Zhihu QA, an online question-answering corpus which contains more informal data than others.', 'It is helpful to reduplication relations since many reduplication words appear frequently in spoken language.', 'With the largest size and varied domains, Combination corpus performs much better than others in both morphological and semantic relations.']
[['Baidubaike 4.3G', 'Wikipedia 1.2G'], ['Cap.', 'Sta.', 'Fam.', 'A', 'AB', 'Pre.', 'Suf.', 'Mor.', 'Geo.', 'His.', 'Nat.', 'Peo.', 'Sem.'], ['People Daily 4.2G', 'Geo.'], ['Zhihu QA 2.2G'], None, ['Combination 15.9G', 'Mor.', 'Sem.']]
1
P18-2051table_4
Single models on Ja-En. Previous evaluation result included for comparison.
4
[['Architecture', 'Seq2seq (8-model ensemble)', 'Representation', 'Best WAT17 result (Morishita et al. 2017)'], ['Architecture', 'Seq2seq', 'Representation', 'Plain BPE'], ['Architecture', 'Seq2seq', 'Representation', 'Linearized derivation'], ['Architecture', 'Transformer', 'Representation', 'Plain BPE'], ['Architecture', 'Transformer', 'Representation', 'Linearized tree'], ['Architecture', 'Transformer', 'Representation', 'Linearized derivation'], ['Architecture', 'Transformer', 'Representation', 'POS/BPE']]
1
[['Dev BLEU'], ['Test BLEU']]
[['-', '28.4'], ['21.6', '21.2'], ['21.9', '21.2'], ['28.0', '28.9'], ['28.2', '28.4'], ['28.5', '28.7'], ['28.5', '29.1']]
column
['Dev BLEU', 'Test BLEU']
['Transformer', 'Plain BPE']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Dev BLEU</th> <th>Test BLEU</th> </tr> </thead> <tbody> <tr> <td>Architecture || Seq2seq (8-model ensemble) || Representation || Best WAT17 result (Morishita et al. 2017)</td> <td>-</td> <td>28.4</td> </tr> <tr> <td>Architecture || Seq2seq || Representation || Plain BPE</td> <td>21.6</td> <td>21.2</td> </tr> <tr> <td>Architecture || Seq2seq || Representation || Linearized derivation</td> <td>21.9</td> <td>21.2</td> </tr> <tr> <td>Architecture || Transformer || Representation || Plain BPE</td> <td>28.0</td> <td>28.9</td> </tr> <tr> <td>Architecture || Transformer || Representation || Linearized tree</td> <td>28.2</td> <td>28.4</td> </tr> <tr> <td>Architecture || Transformer || Representation || Linearized derivation</td> <td>28.5</td> <td>28.7</td> </tr> <tr> <td>Architecture || Transformer || Representation || POS/BPE</td> <td>28.5</td> <td>29.1</td> </tr> </tbody></table>
Table 4
table_4
P18-2051
5
acl2018
Our plain BPE baseline (Table 4) outperforms the current best system on WAT Ja-En, an 8-model ensemble (Morishita et al., 2017). Our syntax models achieve similar results despite producing much longer sequences.
[1, 2]
['Our plain BPE baseline (Table 4) outperforms the current best system on WAT Ja-En, an 8-model ensemble (Morishita et al., 2017).', 'Our syntax models achieve similar results despite producing much longer sequences.']
[['Transformer', 'Plain BPE', 'Seq2seq (8-model ensemble)', 'Best WAT17 result (Morishita et al. 2017)'], None]
1
P18-2058table_2
Experiment results with gold predicates.
1
[['Ours'], ['Tan et al. (2018)'], ['He et al. (2017)'], ['Yang and Mitchell (2017)'], ['Zhou and Xu (2015)']]
1
[['WSJ'], ['Brown'], ['OntoNotes']]
[['83.9', '73.7', '82.1'], ['84.8', '74.1', '82.7'], ['83.1', '72.1', '81.7'], ['81.9', '72.0', '-'], ['82.8', '69.4', '81.1']]
column
['accuracy', 'accuracy', 'accuracy']
['Ours']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>WSJ</th> <th>Brown</th> <th>OntoNotes</th> </tr> </thead> <tbody> <tr> <td>Ours</td> <td>83.9</td> <td>73.7</td> <td>82.1</td> </tr> <tr> <td>Tan et al. (2018)</td> <td>84.8</td> <td>74.1</td> <td>82.7</td> </tr> <tr> <td>He et al. (2017)</td> <td>83.1</td> <td>72.1</td> <td>81.7</td> </tr> <tr> <td>Yang and Mitchell (2017)</td> <td>81.9</td> <td>72.0</td> <td>-</td> </tr> <tr> <td>Zhou and Xu (2015)</td> <td>82.8</td> <td>69.4</td> <td>81.1</td> </tr> </tbody></table>
Table 2
table_2
P18-2058
4
acl2018
To compare with additional previous systems, we also conduct experiments with gold predicates by constraining our predicate beam to be gold predicates only. As shown in Table 2, our model significantly outperforms He et al. (2017), but falls short of Tan et al. (2018).
[2, 1]
['To compare with additional previous systems, we also conduct experiments with gold predicates by constraining our predicate beam to be gold predicates only.', 'As shown in Table 2, our model significantly outperforms He et al. (2017), but falls short of Tan et al. (2018).']
[None, ['Ours', 'He et al. (2017)', 'Tan et al. (2018)']]
1
P19-1001table_2
Evaluation results on the E-commerce data. Numbers in bold mean that the improvement over the best performing baseline is statistically significant (t-test with p-value < 0.05).
2
[['Models', 'RNN (Lowe et al., 2015)'], ['Models', 'CNN (Lowe et al., 2015)'], ['Models', 'LSTM (Lowe et al., 2015)'], ['Models', 'BiLSTM (Kadlec et al., 2015)'], ['Models', 'DL2R (Yan et al., 2016)'], ['Models', 'MV-LSTM (Wan et al., 2016)'], ['Models', 'Match-LSTM (Wang and Jiang, 2016)'], ['Models', 'Multi-View (Zhou et al., 2016)'], ['Models', 'SMN (Wu et al., 2017)'], ['Models', 'DUA(Zhang et al., 2018b)'], ['Models', 'DAM (Zhou et al., 2018b)'], ['Models', 'IoI-global'], ['Models', 'IoI-local']]
2
[['Metrics', 'R10@1'], ['Metrics', 'R10@2'], ['Metrics', 'R10@5']]
[['0.325', '0.463', '0.775'], ['0.328', '0.515', '0.792'], ['0.365', '0.536', '0.828'], ['0.355', '0.525', '0.825'], ['0.399', '0.571', '0.842'], ['0.412', '0.591', '0.857'], ['0.41', '0.59', '0.858'], ['0.421', '0.601', '0.861'], ['0.453', '0.654', '0.886'], ['0.501', '0.7', '0.921'], ['0.526', '0.727', '0.933'], ['0.554', '0.747', '0.942'], ['0.563', '0.768', '0.95']]
column
['R10@1', 'R10@2', 'R10@5']
['IoI-global', 'IoI-local']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Metrics || R10@1</th> <th>Metrics || R10@2</th> <th>Metrics || R10@5</th> </tr> </thead> <tbody> <tr> <td>Models || RNN (Lowe et al., 2015)</td> <td>0.325</td> <td>0.463</td> <td>0.775</td> </tr> <tr> <td>Models || CNN (Lowe et al., 2015)</td> <td>0.328</td> <td>0.515</td> <td>0.792</td> </tr> <tr> <td>Models || LSTM (Lowe et al., 2015)</td> <td>0.365</td> <td>0.536</td> <td>0.828</td> </tr> <tr> <td>Models || BiLSTM (Kadlec et al., 2015)</td> <td>0.355</td> <td>0.525</td> <td>0.825</td> </tr> <tr> <td>Models || DL2R (Yan et al., 2016)</td> <td>0.399</td> <td>0.571</td> <td>0.842</td> </tr> <tr> <td>Models || MV-LSTM (Wan et al., 2016)</td> <td>0.412</td> <td>0.591</td> <td>0.857</td> </tr> <tr> <td>Models || Match-LSTM (Wang and Jiang, 2016)</td> <td>0.41</td> <td>0.59</td> <td>0.858</td> </tr> <tr> <td>Models || Multi-View (Zhou et al., 2016)</td> <td>0.421</td> <td>0.601</td> <td>0.861</td> </tr> <tr> <td>Models || SMN (Wu et al., 2017)</td> <td>0.453</td> <td>0.654</td> <td>0.886</td> </tr> <tr> <td>Models || DUA(Zhang et al., 2018b)</td> <td>0.501</td> <td>0.7</td> <td>0.921</td> </tr> <tr> <td>Models || DAM (Zhou et al., 2018b)</td> <td>0.526</td> <td>0.727</td> <td>0.933</td> </tr> <tr> <td>Models || IoI-global</td> <td>0.554</td> <td>0.747</td> <td>0.942</td> </tr> <tr> <td>Models || IoI-local</td> <td>0.563</td> <td>0.768</td> <td>0.95</td> </tr> </tbody></table>
Table 2
table_2
P19-1001
7
acl2019
6.4 Evaluation Results. Table 2 reports evaluation results on the three data sets, where IoI-global and IoI-local represent models learned with Objective (17) and Objective (18), respectively. We can see that both IoI-local and IoI-global outperform the best performing baseline, and improvements from IoI-local on all metrics and from IoI-global on a few metrics are statistically significant (t-test with p-value < 0.05). IoI-local is consistently better than IoI-global over all metrics on all three data sets, demonstrating that directly supervising each block in learning can lead to a more optimal deep structure than optimizing the final matching model.
[2, 1, 1, 1]
['6.4 Evaluation Results .', 'Table 2 report evaluation results on the three data sets where IoI-global and IoI-local represent models learned with Objective (17) and Objective (18) respectively.', 'We can see that both IoI-local and IoI-global outperform the best performing baseline, and improvements from IoI-local on all metrics and from IoI-global on a few metrics are statistically significant (t-test with p-value < 0.05).', 'IoI-local is consistently better than IoI-global over all metrics on all the three data sets, demonstrating that directly supervising each block in learning can lead to a more optimal deep structure than optimizing the final matching model.']
[None, ['IoI-global', 'IoI-local'], ['IoI-local', 'IoI-global'], ['IoI-local', 'IoI-global']]
1
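The record above uses R10@k, i.e., whether the single gold response is ranked within the top k of 10 candidates. A generic sketch with toy data (not the authors' implementation):

```python
def recall_n_at_k(ranked_candidates, gold_response, k):
    """1 if the single gold response appears in the top k of the n ranked candidates."""
    return int(gold_response in ranked_candidates[:k])

def mean_recall_n_at_k(examples, k):
    """examples: list of (ranked_candidate_list, gold_response) pairs."""
    return sum(recall_n_at_k(cands, gold, k) for cands, gold in examples) / len(examples)

# toy usage: the gold reply is ranked 2nd, so it counts for R@2 and R@5 but not R@1
example = (["reply_b", "reply_gold", "reply_c"], "reply_gold")
print([mean_recall_n_at_k([example], k) for k in (1, 2, 5)])  # [0.0, 1.0, 1.0]
```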
P19-1006table_1
Experiment Result on the Ubuntu Corpus.
2
[['Model', 'Baseline'], ['Model', 'DAM'], ['Model', 'DAM+Fine-tune'], ['Model', 'DME'], ['Model', 'DME-SMN'], ['Model', 'STM(Transform)'], ['Model', 'STM(GRU)'], ['Model', 'STM(Ensemble)'], ['Model', 'STM(BERT)']]
1
[['R100@1'], ['R100@10'], ['MRR']]
[['0.083', '0.359', '-'], ['0.347', '0.663', '0.356'], ['0.364', '0.664', '0.443'], ['0.383', '0.725', '0.498'], ['0.455', '0.761', '0.558'], ['0.49', '0.764', '0.588'], ['0.503', '0.783', '0.597'], ['0.521', '0.797', '0.616'], ['0.548', '0.827', '0.614']]
column
['R100@1', 'R100@10', 'MRR']
['STM(Transform)', 'STM(GRU)', 'STM(Ensemble)', 'STM(BERT)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>R100@1</th> <th>R100@10</th> <th>MRR</th> </tr> </thead> <tbody> <tr> <td>Model || Baseline</td> <td>0.083</td> <td>0.359</td> <td>-</td> </tr> <tr> <td>Model || DAM</td> <td>0.347</td> <td>0.663</td> <td>0.356</td> </tr> <tr> <td>Model || DAM+Fine-tune</td> <td>0.364</td> <td>0.664</td> <td>0.443</td> </tr> <tr> <td>Model || DME</td> <td>0.383</td> <td>0.725</td> <td>0.498</td> </tr> <tr> <td>Model || DME-SMN</td> <td>0.455</td> <td>0.761</td> <td>0.558</td> </tr> <tr> <td>Model || STM(Transform)</td> <td>0.49</td> <td>0.764</td> <td>0.588</td> </tr> <tr> <td>Model || STM(GRU)</td> <td>0.503</td> <td>0.783</td> <td>0.597</td> </tr> <tr> <td>Model || STM(Ensemble)</td> <td>0.521</td> <td>0.797</td> <td>0.616</td> </tr> <tr> <td>Model || STM(BERT)</td> <td>0.548</td> <td>0.827</td> <td>0.614</td> </tr> </tbody></table>
Table 1
table_1
P19-1006
4
acl2019
3.5 Ablation Study. As shown in Table 1, we conduct an ablation study on the test set of the Ubuntu Corpus, where we aim to examine the effect of each part of our proposed model. Firstly, we verify the effectiveness of the dual multi-turn encoder by comparing Baseline and DME in Table 1. Thanks to the dual multi-turn encoder, DME achieves 0.725 at R100@10, which is 0.366 better than the Baseline (Lowe et al., 2015b). Secondly, we study the ability of the representation module by testing LSTM, GRU and Transformer with the default hyperparameters in TensorFlow. We note that GRU is better for this task. After removing the spatio-temporal matching block, the performance degrades significantly. In order to verify the effectiveness of the STM block further, we design a DME-SMN which uses 2D convolution for extracting spatial attention information and employs GRU for modeling temporal information. The STM block makes a 10.54% improvement at R100@1. Next, we replace GRU with Transformer in STM. Supposing the data has at most m turns and n candidates, the time complexity of cross-attention (Zhou et al., 2018), O(mn), is much higher than that of the Dual-Encoder based model, O(m+n). Thus, cross-attention is an impractical operation when the candidate set is large. So we remove cross-attention operations in DAM and extend it with the Dual-Encoder architecture. The result in Table 1 shows that using self-attention only may not be enough for representation. As BERT (Devlin et al., 2018) has been shown to be a powerful feature extractor for various tasks, we employ BERT as a feature-based approach to generate ELMo-like pre-trained contextual representations (Peters et al., 2018b). It achieves the highest results and outperforms the other methods by a significant margin.
[2, 1, 1, 1, 2, 2, 2, 2, 1, 1, 2, 2, 2, 1, 2, 1]
['3.5 Ablation Study.', 'As it is shown in Table 1, we conduct an ablation study on the testset of the Ubuntu Corpus, where we aim to examine the effect of each part in our proposed model.', 'Firstly, we verify the effectiveness of dual multi-turn encoder by comparing Baseline and DME in Table 1.', 'Thanks to dual multi-turn encoder, DME achieves 0.725 at R100@10 which is 0.366 better than the Baseline (Lowe et al.,2015b).', 'Secondly, we study the ability of representation module by testing LSTM, GRU and Transformer with the default hyperparameter in Tensorflow.', 'We note that GRU is better for this task.', 'After removing spatio-temporal matching block, the performance degrades significantly.', 'In order to verify the effectiveness of STM block further, we design a DME-SMN which uses 2D convolution for extracting spatial attention information and employ GRU for modeling temporal information.', 'The STM block makes a 10.54% improvement at R100@1.', 'Next, we replace GRU with Transformer in STM.', 'Supposed the data has maximal m turns and n candidates, the time complexity of cross-attention (Zhou et al., 2018), O(mn), is much higher than that of the Dual-Encoder based model,O(m+n).', 'Thus, cross-attention is an impractical operation when the candidate set is large.', 'So we remove cross-attention operations in DAM and extend it with Dual-Encoder architecture.', 'The result in Table 1 shows that using self-attention only may not be enough for representation.', 'As BERT (Devlin et al., 2018) has been shown to be a powerful feature extractor for various tasks, we employ BERT as a feature-based approach to generate ELMo-like pre-trained contextual representations (Peters et al., 2018b).', 'It succeed the highest results and outperforms other methods by a significant margin.']
[None, None, ['Baseline', 'DME'], ['DME', 'Baseline', 'R100@10'], ['STM(Transform)', 'STM(GRU)'], ['STM(GRU)'], ['STM(GRU)'], ['STM(GRU)', 'DME-SMN'], ['STM(GRU)', 'R100@1'], ['STM(Transform)'], None, None, ['DAM'], None, ['STM(BERT)'], ['STM(BERT)']]
1
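The description above argues that cross-attention over every context/candidate pair costs O(mn) encoding passes while a dual encoder needs only O(m+n). A toy illustration of that count, under the simplifying assumption that each pass has unit cost:

```python
def encoding_passes(m_turns, n_candidates, cross_attention):
    """Rough count of sequence-encoding passes per dialogue context.

    Cross-attention re-encodes every (turn, candidate) pair, i.e. O(mn) passes;
    a dual encoder encodes each turn and each candidate once, i.e. O(m+n).
    """
    return m_turns * n_candidates if cross_attention else m_turns + n_candidates

print(encoding_passes(10, 100, cross_attention=True))   # 1000
print(encoding_passes(10, 100, cross_attention=False))  # 110
```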
P19-1013table_3
Results on the biomedical domain dataset (§5.3). P and R represent precision and recall, respectively. The scores of C&C and EasySRL fine-tuned on GENIA1000 are included for comparison (excerpted from Lewis et al. (2016)).
2
[['Method', 'C&C'], ['Method', 'EasySRL'], ['Method', 'depccg'], ['Method', '#NAME?'], ['Method', '#NAME?'], ['Method', '#NAME?']]
1
[['P'], ['R'], ['F1']]
[['77.8', '71.4', '74.5'], ['81.8', '82.6', '82.2'], ['83.11', '82.63', '82.87'], ['85.87', '85.34', '85.61'], ['85.45', '84.49', '84.97'], ['86.9', '86.14', '86.52']]
column
['P', 'R', 'F1']
['depccg']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Method || C&amp;C</td> <td>77.8</td> <td>71.4</td> <td>74.5</td> </tr> <tr> <td>Method || EasySRL</td> <td>81.8</td> <td>82.6</td> <td>82.2</td> </tr> <tr> <td>Method || depccg</td> <td>83.11</td> <td>82.63</td> <td>82.87</td> </tr> <tr> <td>Method || #NAME?</td> <td>85.87</td> <td>85.34</td> <td>85.61</td> </tr> <tr> <td>Method || #NAME?</td> <td>85.45</td> <td>84.49</td> <td>84.97</td> </tr> <tr> <td>Method || #NAME?</td> <td>86.9</td> <td>86.14</td> <td>86.52</td> </tr> </tbody></table>
Table 3
table_3
P19-1013
6
acl2019
Table 3 shows the results of the parsing experiment, where the scores of previous work (C&C (Clark and Curran, 2007) and EasySRL (Lewis et al., 2016)) are included for reference. The plain depccg already achieves higher scores than these methods, and improves further when combined with ELMo (an improvement of 2.73 points in terms of F1). Fine-tuning the parser on GENIA1000 yields mixed results, with slightly lower scores. This is presumably because the automatically annotated Head First dependencies are not accurate. Finally, by fine-tuning on the Genia CCGbank, we observe another improvement, resulting in the highest F1 score of 86.52.
[2, 1, 2, 1]
['Table 3 shows the results of the parsing experiment, where the scores of previous work (C&C (Clark and Curran, 2007) and EasySRL (Lewis et al., 2016)) are included for reference.', 'The plain depccg already achieves higher scores than these methods, and boosts when combined with ELMo (improvement of 2.73 points in terms of F1). Fine-tuning the parser on GENIA1000 results in a mixed result, with slightly lower scores.', 'This is presumably because the automatically annotated Head First dependencies are not accurate.', 'Finally, by fine-tuning on the Genia CCGbank, we observe another improvement, resulting in the highest 86.52 F1 score.']
[None, ['depccg'], None, None]
1
P19-1013table_4
Results on question sentences (§5.3). The baseline C&C, EasySRL, and depccg parsers are all retrained on Questions data.
2
[['Method', 'C&C'], ['Method', 'EasySRL'], ['Method', 'depccg'], ['Method', 'depccg+elmo'], ['Method', 'depccg+proposed']]
1
[['P'], ['R'], ['F1']]
[['-', '-', '86.8'], ['88.2', '87.9', '88'], ['90.42', '90.15', '90.29'], ['90.55', '89.86', '90.21'], ['90.27', '89.97', '90.12']]
column
['P', 'R', 'F1']
['depccg', 'depccg+elmo', 'depccg+proposed']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P</th> <th>R</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Method || C&amp;C</td> <td>-</td> <td>-</td> <td>86.8</td> </tr> <tr> <td>Method || EasySRL</td> <td>88.2</td> <td>87.9</td> <td>88</td> </tr> <tr> <td>Method || depccg</td> <td>90.42</td> <td>90.15</td> <td>90.29</td> </tr> <tr> <td>Method || depccg+elmo</td> <td>90.55</td> <td>89.86</td> <td>90.21</td> </tr> <tr> <td>Method || depccg+proposed</td> <td>90.27</td> <td>89.97</td> <td>90.12</td> </tr> </tbody></table>
Table 4
table_4
P19-1013
6
acl2019
Table 4 compares the performance of depccg fine-tuned on the QuestionBank, along with other baselines. Contrary to our expectation, the plain depccg retrained on Questions data performs the best, with neither ELMo nor the proposed method having any effect. We hypothesize that, since the evaluation set contains sentences with similar constructions, the contributions of the latter two methods are less observable on top of Questions data. Inspection of the output trees reveals that this is actually the case; the majority of differences among parser configurations are irrelevant to question constructions, suggesting that the models capture the syntax of questions in the data well.
[1, 1, 2, 2]
['Table 4 compares the performance of depccg fine-tuned on the QuestionBank, along with other baselines.', 'Contrary to our expectation, the plain depccg retrained on Questions data performs the best, with neither ELMo nor the proposed method taking any effect.', 'We hypothesize that, since the evaluation set contains sentences with similar constructions, the contributions of the latter two methods are less observable on top of Questions data.', "Inspection of the output trees reveals that this is actually the case; the majority of differences among parser's configurations are irrelevant to question constructions, suggesting that the models capture well the syntax of question in the data."]
[None, ['depccg', 'depccg+elmo', 'depccg+proposed'], None, None]
1
P19-1019table_1
Results of the proposed method in comparison to previous work (BLEU). Overall best results are in bold, the best ones in each group are underlined. ∗Detokenized BLEU equivalent to the official mteval-v13a.pl script. The rest use tokenized BLEU with multi-bleu.perl (or similar).
2
[['NMT', 'Artetxe et al. (2018c)'], ['NMT', 'Lample et al. (2018a)'], ['NMT', 'Yang et al. (2018)'], ['NMT', 'Lample et al. (2018b)'], ['SMT', 'Artetxe et al. (2018b)'], ['SMT', 'Lample et al. (2018b)'], ['SMT', 'Marie and Fujita (2018)'], ['SMT', 'Proposed system'], ['SMT', '+detok. SacreBLEU'], ['SMT + NMT', 'Lample et al. (2018b)'], ['SMT + NMT', 'Marie and Fujita (2018)'], ['SMT + NMT', 'Ren et al. (2019)'], ['SMT + NMT', 'Proposed system'], ['SMT + NMT', '+detok. SacreBLEU']]
2
[['language pair', 'fr-en'], ['language pair', 'en-fr'], ['language pair', 'de-en'], ['language pair', 'en-de'], ['language pair', 'de-en'], ['language pair', 'en-de']]
[['15.6', '15.1', '10.2', '6.6', '-', '-'], ['14.3', '15.1', '-', '-', '13.3', '9.6'], ['15.6', '17', '-', '-', '14.6', '10.9'], ['24.2', '25.1', '-', '-', '21', '17.2'], ['25.9', '26.2', '17.4', '14.1', '23.1', '18.2'], ['27.2', '28.1', '-', '-', '22.9', '17.9'], ['-', '-', '-', '-', '20.2', '15.5'], ['28.4', '30.1', '20.1', '15.8', '25.4', '19.7'], ['27.9', '27.8', '19.7', '14.7', '24.8', '19.4'], ['27.7', '27.6', '-', '-', '25.2', '20.2'], ['-', '-', '-', '-', '26.7', '20'], ['28.9', '29.5', '20.4', '17', '26.3', '21.7'], ['33.5', '36.2', '27', '22.5', '34.4', '26.9'], ['33.2', '33.6', '26.4', '21.2', '33.8', '26.4']]
column
['BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU']
['Proposed system']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>language pair || fr-en</th> <th>language pair || en-fr</th> <th>language pair || de-en</th> <th>language pair || en-de</th> <th>language pair || de-en</th> <th>language pair || en-de</th> </tr> </thead> <tbody> <tr> <td>NMT || Artetxe et al. (2018c)</td> <td>15.6</td> <td>15.1</td> <td>10.2</td> <td>6.6</td> <td>-</td> <td>-</td> </tr> <tr> <td>NMT || Lample et al. (2018a)</td> <td>14.3</td> <td>15.1</td> <td>-</td> <td>-</td> <td>13.3</td> <td>9.6</td> </tr> <tr> <td>NMT || Yang et al. (2018)</td> <td>15.6</td> <td>17</td> <td>-</td> <td>-</td> <td>14.6</td> <td>10.9</td> </tr> <tr> <td>NMT || Lample et al. (2018b)</td> <td>24.2</td> <td>25.1</td> <td>-</td> <td>-</td> <td>21</td> <td>17.2</td> </tr> <tr> <td>SMT || Artetxe et al. (2018b)</td> <td>25.9</td> <td>26.2</td> <td>17.4</td> <td>14.1</td> <td>23.1</td> <td>18.2</td> </tr> <tr> <td>SMT || Lample et al. (2018b)</td> <td>27.2</td> <td>28.1</td> <td>-</td> <td>-</td> <td>22.9</td> <td>17.9</td> </tr> <tr> <td>SMT || Marie and Fujita (2018)</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>20.2</td> <td>15.5</td> </tr> <tr> <td>SMT || Proposed system</td> <td>28.4</td> <td>30.1</td> <td>20.1</td> <td>15.8</td> <td>25.4</td> <td>19.7</td> </tr> <tr> <td>SMT || +detok. SacreBLEU</td> <td>27.9</td> <td>27.8</td> <td>19.7</td> <td>14.7</td> <td>24.8</td> <td>19.4</td> </tr> <tr> <td>SMT + NMT || Lample et al. (2018b)</td> <td>27.7</td> <td>27.6</td> <td>-</td> <td>-</td> <td>25.2</td> <td>20.2</td> </tr> <tr> <td>SMT + NMT || Marie and Fujita (2018)</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>26.7</td> <td>20</td> </tr> <tr> <td>SMT + NMT || Ren et al. (2019)</td> <td>28.9</td> <td>29.5</td> <td>20.4</td> <td>17</td> <td>26.3</td> <td>21.7</td> </tr> <tr> <td>SMT + NMT || Proposed system</td> <td>33.5</td> <td>36.2</td> <td>27</td> <td>22.5</td> <td>34.4</td> <td>26.9</td> </tr> <tr> <td>SMT + NMT || +detok. SacreBLEU</td> <td>33.2</td> <td>33.6</td> <td>26.4</td> <td>21.2</td> <td>33.8</td> <td>26.4</td> </tr> </tbody></table>
Table 1
table_1
P19-1019
6
acl2019
Table 1 reports the results of the proposed system in comparison to previous work. As can be seen, our full system obtains the best published results in all cases, outperforming the previous state-of-the-art by 5-7 BLEU points in all datasets and translation directions.
[1, 1]
['Table 1 reports the results of the proposed system in comparison to previous work.', 'As it can be seen, our full system obtains the best published results in all cases, outperforming the previous state-of-the-art by 5-7 BLEU points in all datasets and translation directions.']
[None, ['Proposed system']]
1
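The caption above distinguishes tokenized multi-bleu.perl scores from detokenized, mteval-v13a-equivalent scores. The sacrebleu package reproduces the latter style; below is a toy corpus-BLEU call on made-up strings, not the systems' outputs (assumes sacrebleu is installed).

```python
import sacrebleu  # pip install sacrebleu

# Toy hypotheses and references, purely illustrative.
hypotheses = ["the cat sat on the mat", "unsupervised translation can be competitive"]
# One reference stream, aligned with the hypotheses by index.
references = [["the cat sat on the mat", "unsupervised translation is competitive"]]

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(round(bleu.score, 2))  # detokenized, mteval-v13a-style corpus BLEU
```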
P19-1019table_3
Results of the proposed method in comparison to different supervised systems (BLEU). ∗Detokenized BLEU equivalent to the official mteval-v13a.pl script. The rest use tokenized BLEU with multi-bleu.perl (or similar). †Results in the original test set from WMT 2014, which slightly differs from the full test set used in all subsequent work. Our proposed system obtains 22.4 BLEU points (21.1 detokenized) in that same subset.
2
[['Unsupervised', 'Proposed system'], ['Unsupervised', '+detok SacreBLEU*'], ['Supervised', 'WMT best*'], ['Supervised', 'Vaswani et al. (2017)'], ['Supervised', 'Edunov et al. (2018)']]
2
[['WMT-14', 'fr-en'], ['WMT-14', 'en-fr'], ['WMT-14', 'de-en'], ['WMT-14', 'en-de']]
[['33.5', '36.2', '27', '22.5'], ['33.2', '33.6', '26.4', '21.2'], ['35', '35.8', '29', '20.6'], ['-', '41', '-', '28.4'], ['-', '45.6', '-', '35']]
column
['BLEU', 'BLEU', 'BLEU', 'BLEU']
['Proposed system']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>WMT-14 || fr-en</th> <th>WMT-14 || en-fr</th> <th>WMT-14 || de-en</th> <th>WMT-14 || en-de</th> </tr> </thead> <tbody> <tr> <td>Unsupervised || Proposed system</td> <td>33.5</td> <td>36.2</td> <td>27</td> <td>22.5</td> </tr> <tr> <td>Unsupervised || +detok SacreBLEU*</td> <td>33.2</td> <td>33.6</td> <td>26.4</td> <td>21.2</td> </tr> <tr> <td>Supervised || WMT best*</td> <td>35</td> <td>35.8</td> <td>29</td> <td>20.6</td> </tr> <tr> <td>Supervised || Vaswani et al. (2017)</td> <td>-</td> <td>41</td> <td>-</td> <td>28.4</td> </tr> <tr> <td>Supervised || Edunov et al. (2018)</td> <td>-</td> <td>45.6</td> <td>-</td> <td>35</td> </tr> </tbody></table>
Table 3
table_3
P19-1019
7
acl2019
So as to put our results into perspective, Table 3 reports the results of different supervised systems on the same WMT 2014 test set. More concretely, we include the best results from the shared task itself, which reflect the state-of-the-art in machine translation back in 2014; those of Vaswani et al. (2017), who introduced the now predominant transformer architecture; and those of Edunov et al. (2018), who apply back-translation at a large scale and, to the best of our knowledge, hold the current best results in the test set. As can be seen, our unsupervised system outperforms the WMT 2014 shared task winner in English-to-German, and is around 2 BLEU points behind it in the other translation directions. This shows that unsupervised machine translation is already competitive with the state-of-the-art in supervised machine translation in 2014.
[1, 2, 1, 1]
['So as to put our results into perspective, Table 3 reports the results of different supervised systems in the same WMT 2014 test set.', 'More concretely, we include the best results from the shared task itself, which reflect the state-of-the-art in machine translation back in 2014; those of Vaswani et al. (2017), who introduced the now predominant transformer architecture; and those of Edunov et al. (2018), who apply back-translation at a large scale and, to the best of our knowledge, hold the current best results in the test set.', 'As it can be seen, our unsupervised system outperforms the WMT 2014 shared task winner in English-to-German, and is around 2 BLEU points behind it in the other translation directions.', 'This shows that unsupervised machine translation is already competitive with the state-of-the-art in supervised machine translation in 2014.']
[None, ['Vaswani et al. (2017)', 'Edunov et al. (2018)'], ['Unsupervised', 'Proposed system', 'en-de'], ['Unsupervised', 'Supervised']]
1
P19-1021table_4
Korean→English results. Mean and standard deviation of three training runs reported.
2
[['system', '(Gu et al., 2018b) (supervised Transformer)'], ['system', 'phrase-based SMT'], ['system', 'NMT baseline (2)'], ['system', 'NMT optimized (8)']]
1
[['BLEU']]
[['5.97'], ['6.57 ± 0.17'], ['2.93 ± 0.34'], ['10.37 ± 0.29']]
column
['BLEU']
['NMT optimized (8)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU</th> </tr> </thead> <tbody> <tr> <td>system || (Gu et al., 2018b) (supervised Transformer)</td> <td>5.97</td> </tr> <tr> <td>system || phrase-based SMT</td> <td>6.57 ± 0.17</td> </tr> <tr> <td>system || NMT baseline (2)</td> <td>2.93 ± 0.34</td> </tr> <tr> <td>system || NMT optimized (8)</td> <td>10.37 ± 0.29</td> </tr> </tbody></table>
Table 4
table_4
P19-1021
5
acl2019
Table 4 shows results for Korean→English, using the same configurations (1, 2 and 8) as for German→English. Our results confirm that the techniques we apply are successful across datasets, and result in stronger systems than previously reported on this dataset, achieving 10.37 BLEU compared to the 5.97 BLEU reported by Gu et al. (2018b).
[1, 1]
['Table 4 shows results for Korean - English, using the same configurations (1, 2 and 8) as for German - English.', 'Our results confirm that the techniques we apply are successful across datasets, and result in stronger systems than previously reported on this dataset, achieving 10.37 BLEU as compared to 5.97 BLEU reported by Gu et al. (2018b).']
[None, ['NMT optimized (8)', 'BLEU', '(Gu et al., 2018b) (supervised Transformer)']]
1
P19-1023table_3
Experiments result.
3
[['Model', 'Existing Models', 'MinIE (+AIDA)'], ['Model', 'Existing Models', 'MinIE (+NeuralEL)'], ['Model', 'Existing Models', 'ClausIE (+AIDA)'], ['Model', 'Existing Models', 'ClausIE (+NeuralEL)'], ['Model', 'Existing Models', 'CNN (+AIDA)'], ['Model', 'Existing Models', 'CNN (+NeuralEL)'], ['Model', 'Encoder-Decoder Models', 'Single Attention'], ['Model', 'Encoder-Decoder Models', 'Single Attention (+pre-trained)'], ['Model', 'Encoder-Decoder Models', 'Single Attention (+beam)'], ['Model', 'Encoder-Decoder Models', 'Single Attention (+triple classifier)'], ['Model', 'Encoder-Decoder Models', 'Transformer'], ['Model', 'Encoder-Decoder Models', 'Transformer (+pre-trained)'], ['Model', 'Encoder-Decoder Models', 'Transformer (+beam)'], ['Model', 'Encoder-Decoder Models', 'Transformer (+triple classifier)'], ['Model', 'Proposed', 'N-gram Attention'], ['Model', 'Proposed', 'N-gram Attention (+pre-trained)'], ['Model', 'Proposed', 'N-gram Attention (+beam)'], ['Model', 'Proposed', 'N-gram Attention (+triple classifier)']]
2
[['WIKI', 'Precision'], ['WIKI', 'Recall'], ['WIKI', 'F1'], ['GEO', 'Precision'], ['GEO', 'Recall'], ['GEO', 'F1']]
[['0.3672', '0.4856', '0.4182', '0.3574', '0.3901', '0.373'], ['0.3511', '0.3967', '0.3725', '0.3644', '0.3811', '0.3726'], ['0.3617', '0.4728', '0.4099', '0.3531', '0.3951', '0.3729'], ['0.3445', '0.3786', '0.3607', '0.3563', '0.3791', '0.3673'], ['0.4035', '0.3503', '0.375', '0.3715', '0.3165', '0.3418'], ['0.3689', '0.3521', '0.3603', '0.3781', '0.3005', '0.3349'], ['0.4591', '0.3836', '0.418', '0.401', '0.3912', '0.396'], ['0.4725', '0.4053', '0.4363', '0.4314', '0.4311', '0.4312'], ['0.6056', '0.5231', '0.5613', '0.5869', '0.4851', '0.5312'], ['0.7378', '0.5013', '0.597', '0.6704', '0.5301', '0.5921'], ['0.4628', '0.3897', '0.4231', '0.4575', '0.462', '0.4597'], ['0.4748', '0.4091', '0.4395', '0.4841', '0.4831', '0.4836'], ['0.5829', '0.5025', '0.5397', '0.6181', '0.6161', '0.6171'], ['0.7307', '0.4866', '0.5842', '0.7124', '0.5761', '0.637'], ['0.7014', '0.6432', '0.671', '0.6029', '0.6033', '0.6031'], ['0.7157', '0.6634', '0.6886', '0.6581', '0.6631', '0.6606'], ['0.7424', '0.6845', '0.7123', '0.6816', '0.6861', '0.6838'], ['0.8471', '0.6762', '0.7521', '0.7705', '0.6771', '0.7208']]
column
['Precision', 'Recall', 'F1', 'Precision', 'Recall', 'F1']
['N-gram Attention (+beam)', 'N-gram Attention (+triple classifier)', 'N-gram Attention (+pre-trained)', 'N-gram Attention']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>WIKI || Precision</th> <th>WIKI || Recall</th> <th>WIKI || F1</th> <th>GEO || Precision</th> <th>GEO || Recall</th> <th>GEO || F1</th> </tr> </thead> <tbody> <tr> <td>Model || Existing Models || MinIE (+AIDA)</td> <td>0.3672</td> <td>0.4856</td> <td>0.4182</td> <td>0.3574</td> <td>0.3901</td> <td>0.373</td> </tr> <tr> <td>Model || Existing Models || MinIE (+NeuralEL)</td> <td>0.3511</td> <td>0.3967</td> <td>0.3725</td> <td>0.3644</td> <td>0.3811</td> <td>0.3726</td> </tr> <tr> <td>Model || Existing Models || ClausIE (+AIDA)</td> <td>0.3617</td> <td>0.4728</td> <td>0.4099</td> <td>0.3531</td> <td>0.3951</td> <td>0.3729</td> </tr> <tr> <td>Model || Existing Models || ClausIE (+NeuralEL)</td> <td>0.3445</td> <td>0.3786</td> <td>0.3607</td> <td>0.3563</td> <td>0.3791</td> <td>0.3673</td> </tr> <tr> <td>Model || Existing Models || CNN (+AIDA)</td> <td>0.4035</td> <td>0.3503</td> <td>0.375</td> <td>0.3715</td> <td>0.3165</td> <td>0.3418</td> </tr> <tr> <td>Model || Existing Models || CNN (+NeuralEL)</td> <td>0.3689</td> <td>0.3521</td> <td>0.3603</td> <td>0.3781</td> <td>0.3005</td> <td>0.3349</td> </tr> <tr> <td>Model || Encoder-Decoder Models || Single Attention</td> <td>0.4591</td> <td>0.3836</td> <td>0.418</td> <td>0.401</td> <td>0.3912</td> <td>0.396</td> </tr> <tr> <td>Model || Encoder-Decoder Models || Single Attention (+pre-trained)</td> <td>0.4725</td> <td>0.4053</td> <td>0.4363</td> <td>0.4314</td> <td>0.4311</td> <td>0.4312</td> </tr> <tr> <td>Model || Encoder-Decoder Models || Single Attention (+beam)</td> <td>0.6056</td> <td>0.5231</td> <td>0.5613</td> <td>0.5869</td> <td>0.4851</td> <td>0.5312</td> </tr> <tr> <td>Model || Encoder-Decoder Models || Single Attention (+triple classifier)</td> <td>0.7378</td> <td>0.5013</td> <td>0.597</td> <td>0.6704</td> <td>0.5301</td> <td>0.5921</td> </tr> <tr> <td>Model || Encoder-Decoder Models || Transformer</td> <td>0.4628</td> <td>0.3897</td> <td>0.4231</td> <td>0.4575</td> <td>0.462</td> <td>0.4597</td> </tr> <tr> <td>Model || Encoder-Decoder Models || Transformer (+pre-trained)</td> <td>0.4748</td> <td>0.4091</td> <td>0.4395</td> <td>0.4841</td> <td>0.4831</td> <td>0.4836</td> </tr> <tr> <td>Model || Encoder-Decoder Models || Transformer (+beam)</td> <td>0.5829</td> <td>0.5025</td> <td>0.5397</td> <td>0.6181</td> <td>0.6161</td> <td>0.6171</td> </tr> <tr> <td>Model || Encoder-Decoder Models || Transformer (+triple classifier)</td> <td>0.7307</td> <td>0.4866</td> <td>0.5842</td> <td>0.7124</td> <td>0.5761</td> <td>0.637</td> </tr> <tr> <td>Model || Proposed || N-gram Attention</td> <td>0.7014</td> <td>0.6432</td> <td>0.671</td> <td>0.6029</td> <td>0.6033</td> <td>0.6031</td> </tr> <tr> <td>Model || Proposed || N-gram Attention (+pre-trained)</td> <td>0.7157</td> <td>0.6634</td> <td>0.6886</td> <td>0.6581</td> <td>0.6631</td> <td>0.6606</td> </tr> <tr> <td>Model || Proposed || N-gram Attention (+beam)</td> <td>0.7424</td> <td>0.6845</td> <td>0.7123</td> <td>0.6816</td> <td>0.6861</td> <td>0.6838</td> </tr> <tr> <td>Model || Proposed || N-gram Attention (+triple classifier)</td> <td>0.8471</td> <td>0.6762</td> <td>0.7521</td> <td>0.7705</td> <td>0.6771</td> <td>0.7208</td> </tr> </tbody></table>
Table 3
table_3
P19-1023
8
acl2019
4.3 Result. Table 3 shows that the end-to-end models outperform the existing models. In particular, our proposed n-gram attention model achieves the best results in terms of precision, recall, and F1 score. Our proposed model outperforms the best existing model (MinIE) by 33.39% and 34.78% in terms of F1 score on the WIKI and GEO test datasets respectively. These results are expected since the existing models are affected by the error propagation of the NED. As expected, the combination of the existing models with AIDA achieves higher F1 scores than the combination with NeuralEL, as AIDA achieves a higher precision than NeuralEL. To further show the effect of error propagation, we set up an experiment without the canonicalization task (i.e., the objective is predicting a relationship between known entities). We remove the NED pre-processing step by allowing the CNN model to access the correct entities. Meanwhile, we provide the correct entities to the decoder of our proposed model. In this setup, our proposed model achieves 86.34% and 79.11%, while CNN achieves 81.92% and 75.82% in precision over the WIKI and GEO test datasets, respectively. Our proposed n-gram attention model outperforms the end-to-end models by 15.51% and 8.38% in terms of F1 score on the WIKI and GEO test datasets, respectively. The Transformer model also only yields similar performance to that of the Single Attention model, which is worse than ours. These results indicate that our model captures multi-word entity names (in both datasets, 82.9% of the entities have multi-word entity names) in the input sentence better than the other models. Table 3 also shows that the pre-trained embeddings improve the performance of the model in all measures. Moreover, the pre-trained embeddings help the model to converge faster. In our experiments, the models that use the pre-trained embeddings converge in 20 epochs on average, while the models that do not use the pre-trained embeddings converge in 30 - 40 epochs. Our triple classifier combined with the modified beam search boosts the performance of the model. The modified beam search provides a high recall by extracting the correct entities based on the surface form in the input sentence, while the triple classifier provides a high precision by filtering the invalid triples.
[0, 1, 1, 1, 2, 1, 2, 2, 2, 1, 1, 1, 1, 1, 2, 2, 1, 2]
['4.3 Result.', 'Table 3 shows that the end-to-end models outperform the existing models.', 'In particular, our proposed n-gram attention model achieves the best results in terms of precision, recall, and F1 score.', 'Our proposed model outperforms the best existing model (MinIE) by 33.39% and 34.78% in terms of F1 score on the WIKI and GEO test datasets respectively.', 'These results are expected since the existing models are affected by the error propagation of the NED.', 'As expected, the combination of the existing models with AIDA achieves higher F1 scores than the combination with NeuralEL, as AIDA achieves a higher precision than NeuralEL.', 'To further show the effect of error propagation, we set up an experiment without the canonicalization task (i.e., the objective is predicting a relationship between known entities).', 'We remove the NED pre-processing step by allowing the CNN model to access the correct entities.', 'Meanwhile, we provide the correct entities to the decoder of our proposed model.', 'In this setup, our proposed model achieves 86.34% and 79.11%, while CNN achieves 81.92% and 75.82% in precision over the WIKI and GEO test datasets, respectively.', 'Our proposed n-gram attention model outperforms the end-to-end models by 15.51% and 8.38% in terms of F1 score on the WIKI and GEO test datasets, respectively.', 'The Transformer model also only yields similar performance to that of the Single Attention model, which is worse than ours.', 'These results indicate that our model captures multi-word entity names (in both datasets, 82.9% of the entities have multi-word entity names) in the input sentence better than the other models.', 'Table 3 also shows that the pre-trained embeddings improve the performance of the model in all measures.', 'Moreover, the pre-trained embeddings help the model to converge faster.', 'In our experiments, the models that use the pre-trained embeddings converge in 20 epochs on average, while the models that do not use the pre-trained embeddings converge in 30 - 40 epochs.', 'Our triple classifier combined with the modified beam search boosts the performance of the model.', 'The modified beam search provides a high recall by extracting the correct entities based on the surface form in the input sentence, while the triple classifier provides a high precision by filtering the invalid triples.']
[None, ['Proposed'], ['Proposed', 'N-gram Attention', 'Precision', 'Recall', 'F1'], ['Proposed', 'N-gram Attention', 'MinIE (+AIDA)', 'MinIE (+NeuralEL)', 'F1', 'WIKI', 'GEO'], ['MinIE (+AIDA)', 'MinIE (+NeuralEL)'], ['MinIE (+AIDA)', 'F1', 'Precision', 'MinIE (+NeuralEL)'], None, ['CNN (+AIDA)', 'CNN (+NeuralEL)'], ['N-gram Attention'], ['N-gram Attention', 'CNN (+AIDA)', 'CNN (+NeuralEL)', 'Precision', 'WIKI', 'GEO'], ['N-gram Attention', 'F1', 'WIKI', 'GEO'], ['Transformer', 'Single Attention', 'N-gram Attention'], ['N-gram Attention'], ['N-gram Attention (+pre-trained)'], None, ['N-gram Attention (+pre-trained)'], ['N-gram Attention (+triple classifier)'], ['N-gram Attention (+beam)', 'Recall', 'N-gram Attention (+triple classifier)']]
1
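The percentage gains quoted in the description above can be re-derived from the Table 3 values if they are read as absolute F1-point differences; that reading is an assumption of this sketch, but it reproduces the quoted numbers exactly:

```python
# F1 scores (WIKI, GEO) copied from Table 3 of P19-1023.
f1 = {
    "MinIE (+AIDA)":                         (0.4182, 0.3730),
    "Single Attention (+triple classifier)": (0.5970, 0.5921),
    "Transformer (+triple classifier)":      (0.5842, 0.6370),
    "N-gram Attention (+triple classifier)": (0.7521, 0.7208),
}
ours = f1["N-gram Attention (+triple classifier)"]
best_existing = f1["MinIE (+AIDA)"]
# Strongest other end-to-end model per dataset (Single Attention on WIKI, Transformer on GEO).
best_enc_dec = tuple(max(vals) for vals in zip(f1["Single Attention (+triple classifier)"],
                                               f1["Transformer (+triple classifier)"]))
for name, base in [("vs. MinIE", best_existing), ("vs. best encoder-decoder", best_enc_dec)]:
    print(name, [round((o - b) * 100, 2) for o, b in zip(ours, base)])
# vs. MinIE: [33.39, 34.78]   vs. best encoder-decoder: [15.51, 8.38]
```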
P19-1029table_3
Evaluation of models at early stopping points. Results for three random seeds on IWSLT are averaged, reporting the standard deviation in the subscript. The translation of the dev set is obtained by greedy decoding (as during validation) and of the test set with beam search of width five. The costs are measured in character edits and clicks, as described in Section 4.
2
[['Model', 'Baseline'], ['Model', 'Full'], ['Model', 'Weak'], ['Model', 'Self'], ['Model', 'Reg4'], ['Model', 'Reg3'], ['Model', 'Reg2']]
2
[['IWSLT dev', 'BLEU'], ['IWSLT dev', 'Cost'], ['IWSLT test', 'BLEU'], ['IWSLT test', 'TER']]
[['28.28', '-', '24.84', '62.42'], ['28.93±0.02', ' 417k', '25.60±0.02', '61.86±0.03'], ['28.65±0.01', '32k', '25.10±0.09', '62.12±0.12'], ['28.58±0.02', '-', '25.33±0.06', '61.96±0.05'], ['28.57±0.04', '68k', '25.23±0.05', '62.02±0.12'], ['28.61±0.03', '18k', '25.23±0.09', '62.07±0.06'], ['28.66±0.06', '88k', '25.27±0.09', '61.91±0.06']]
column
['BLEU', 'cost', 'BLEU', 'TER']
['Model']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>IWSLT dev || BLEU</th> <th>IWSLT dev || Cost</th> <th>IWSLT test || BLEU</th> <th>IWSLT test || TER</th> </tr> </thead> <tbody> <tr> <td>Model || Baseline</td> <td>28.28</td> <td>-</td> <td>24.84</td> <td>62.42</td> </tr> <tr> <td>Model || Full</td> <td>28.93±0.02</td> <td>417k</td> <td>25.60±0.02</td> <td>61.86±0.03</td> </tr> <tr> <td>Model || Weak</td> <td>28.65±0.01</td> <td>32k</td> <td>25.10±0.09</td> <td>62.12±0.12</td> </tr> <tr> <td>Model || Self</td> <td>28.58±0.02</td> <td>-</td> <td>25.33±0.06</td> <td>61.96±0.05</td> </tr> <tr> <td>Model || Reg4</td> <td>28.57±0.04</td> <td>68k</td> <td>25.23±0.05</td> <td>62.02±0.12</td> </tr> <tr> <td>Model || Reg3</td> <td>28.61±0.03</td> <td>18k</td> <td>25.23±0.09</td> <td>62.07±0.06</td> </tr> <tr> <td>Model || Reg2</td> <td>28.66±0.06</td> <td>88k</td> <td>25.27±0.09</td> <td>61.91±0.06</td> </tr> </tbody></table>
Table 3
table_3
P19-1029
12
acl2019
A.3 Offline Evaluation on IWSLT. Table 3 reports the offline held-out set evaluations for the early stopping points selected on the dev set for all feedback modes. All models notably improve over the baseline; only using full feedback leads to the overall best model on IWSLT (+0.6 BLEU / -0.6 TER), but costs a massive amount of edits (417k characters). Self-regulating models still achieve improvements of 0.4-0.5 BLEU/TER with costs reduced up to a factor of 23. The reduction in cost is enabled by the use of cheaper feedback, here markings and self-supervision, which in isolation are successful as well. Self-supervision works surprisingly well, which makes it attractive for cheap but effective unsupervised domain adaptation. It has to be noted that both weak and self-supervision worked well only when targets were pre-computed with the baseline model and held fixed during training. We suspect that the strong reward signal (f_t = 1) for non-reference outputs otherwise leads to undesired local overfitting effects that a learner with online-generated targets cannot recover from.
[2, 1, 1, 1, 2, 1, 1, 2]
['A.3 Offline Evaluation on IWSLT.', 'Table 3 reports the offline held-out set evaluations for the early stopping points selected on the dev set for all feedback modes.', 'All models notably improve over the baseline; only using full feedback leads to the overall best model on IWSLT (+0.6 BLEU / -0.6 TER), but costs a massive amount of edits (417k characters).', 'Self-regulating models still achieve improvements of 0.4-0.5 BLEU/TER with costs reduced up to a factor of 23.', 'The reduction in cost is enabled by the use of cheaper feedback, here markings and self-supervision, which in isolation are successful as well.', 'Self-supervision works surprisingly well, which makes it attractive for cheap but effective unsupervised domain adaptation.', 'It has to be noted that both weak and self-supervision worked well only when targets were pre-computed with the baseline model and held fixed during training.', 'We suspect that the strong reward signal (f_t = 1) for non-reference outputs otherwise leads to undesired local overfitting effects that a learner with online-generated targets cannot recover from.']
[None, None, ['Full', 'Baseline', 'BLEU', 'TER', 'Cost'], ['Self', 'BLEU', 'TER'], None, ['Self'], ['Weak', 'Self', 'Baseline'], None]
1
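The "factor of 23" cost reduction mentioned in the description follows directly from the Cost column (417k character edits for Full vs. 18k for Reg3); a quick check, treating the costs as comparable character-edit counts (an assumption of this sketch):

```python
# Cost column of Table 3 (character edits), in thousands.
cost_k = {"Full": 417, "Weak": 32, "Reg4": 68, "Reg3": 18, "Reg2": 88}
cheapest_regulator = min(cost_k[m] for m in ("Reg4", "Reg3", "Reg2"))
print(f"Full / cheapest self-regulating model: {cost_k['Full'] / cheapest_regulator:.1f}x")
# -> 23.2x, i.e. "costs reduced up to a factor of 23"
```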
P19-1035table_1
Results of the difficulty prediction approaches. SVM (original) has been taken from Beinborn (2016)
2
[['Model', 'SVM (original)'], ['Model', 'SVM (reproduced)'], ['Model', 'MLP'], ['Model', 'BiLSTM']]
2
[['Original data', 'rho'], ['Original data', 'RMSE'], ['Original data', 'qw k'], ['New data', 'rho'], ['New data', 'RMSE'], ['New data', 'qw k']]
[['0.5', '0.23', '0.44', '–', '–', '–'], ['0.49', '0.24', '0.47', '0.5', '0.21', '0.39'], ['0.42', '0.25', '0.31', '0.41', '0.22', '0.25'], ['0.49', '0.24', '0.35', '0.39', '0.24', '0.27']]
column
['rho', 'RMSE', 'qw k', 'rho', 'RMSE', 'qw k']
['SVM (original)', 'SVM (reproduced)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Original data || rho</th> <th>Original data || RMSE</th> <th>Original data || qw k</th> <th>New data || rho</th> <th>New data || RMSE</th> <th>New data || qw k</th> </tr> </thead> <tbody> <tr> <td>Model || SVM (original)</td> <td>0.5</td> <td>0.23</td> <td>0.44</td> <td>–</td> <td>–</td> <td>–</td> </tr> <tr> <td>Model || SVM (reproduced)</td> <td>0.49</td> <td>0.24</td> <td>0.47</td> <td>0.5</td> <td>0.21</td> <td>0.39</td> </tr> <tr> <td>Model || MLP</td> <td>0.42</td> <td>0.25</td> <td>0.31</td> <td>0.41</td> <td>0.22</td> <td>0.25</td> </tr> <tr> <td>Model || BiLSTM</td> <td>0.49</td> <td>0.24</td> <td>0.35</td> <td>0.39</td> <td>0.24</td> <td>0.27</td> </tr> </tbody></table>
Table 1
table_1
P19-1035
4
acl2019
The right-hand side of Table 1 shows the performance of our SVM and the two neural methods. The results indicate that the SVM setup is well suited for the difficulty prediction task and that it successfully generalizes to new data.
[1, 1]
['The right-hand side of Table 1 shows the performance of our SVM and the two neural methods.', 'The results indicate that the SVM setup is well suited for the difficulty prediction task and that it successfully generalizes to new data.']
[['SVM (original)', 'SVM (reproduced)'], ['SVM (original)', 'SVM (reproduced)']]
1
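For readers unfamiliar with the metric abbreviations in this record's header, a minimal sketch of how such scores are typically computed: Spearman's rho, RMSE, and (assuming "qw k" denotes quadratically weighted kappa on discretized scores) Cohen's kappa with quadratic weights. The toy arrays below are placeholders, not the paper's data:

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import mean_squared_error, cohen_kappa_score

gold = np.array([0.10, 0.40, 0.35, 0.80, 0.60, 0.20])   # toy gold difficulty scores
pred = np.array([0.15, 0.50, 0.30, 0.70, 0.55, 0.30])   # toy model predictions

rho, _ = spearmanr(gold, pred)
rmse = float(np.sqrt(mean_squared_error(gold, pred)))
# Weighted kappa needs discrete categories, so continuous scores are binned first.
bins = np.linspace(0.0, 1.0, 5)
qwk = cohen_kappa_score(np.digitize(gold, bins), np.digitize(pred, bins), weights="quadratic")
print(f"rho={rho:.2f}  RMSE={rmse:.2f}  qwk={qwk:.2f}")
```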
P19-1036table_4
Performance of our Method on the Operational Risk Text Classification Task
2
[['Taxonomy level', 'Level 1'], ['Taxonomy level', 'Level 2'], ['Taxonomy level', 'Level 3']]
1
[['Precision'], ['Recall'], ['F1-Score']]
[['91.8', '89.37', '90.45'], ['86.08', '74.8', '78.1'], ['34.98', '19.88', '22.95']]
column
['Precision', 'Recall', 'F1-score']
['Taxonomy level', 'Level 1']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Precision</th> <th>Recall</th> <th>F1-Score</th> </tr> </thead> <tbody> <tr> <td>Taxonomy level || Level 1</td> <td>91.8</td> <td>89.37</td> <td>90.45</td> </tr> <tr> <td>Taxonomy level || Level 2</td> <td>86.08</td> <td>74.8</td> <td>78.1</td> </tr> <tr> <td>Taxonomy level || Level 3</td> <td>34.98</td> <td>19.88</td> <td>22.95</td> </tr> </tbody></table>
Table 4
table_4
P19-1036
8
acl2019
5.2 Result. For the purpose of the experiment, operational teams (not experts) were asked to provide manual tags for a sample of 989 operational incidents. Table 4 provides the classification results of our approach when compared to those manual annotations, considering all three levels of the taxonomy. In a second step of the evaluation, an expert was given the difficult task of reviewing every case where the computer and human annotations disagreed and determining which was ultimately correct. This exercise indicated that in 32 cases out of the 989 operational incidents under consideration for the Level 1 classification, the machine-generated categories were more relevant (hence correct) than those identified by the operational team.
[2, 2, 1, 2, 1]
['5.2 Result.', 'For the purpose of the experiment, operational teams (not experts) were asked to provide manual tags for a sample of 989 operational incidents.', 'Table 4 provides the classification results of our approach when compared to those manual annotations, considering all three levels of the taxonomy.', 'In a second step of the evaluation, an expert was given the difficult task of reviewing every case where the computer and human annotations disagreed and determining which was ultimately correct.', 'This exercise indicated that in 32 cases out of the 989 operational incidents under consideration for the Level 1 classification, the machine-generated categories were more relevant (hence correct) than those identified by the operational team.']
[None, None, ['Taxonomy level'], None, ['Level 1']]
1
P19-1041table_3
Manual evaluation on the Yelp dataset.
2
[['Model', 'Fu et al. (2018)'], ['Model', 'Shen et al. (2017)'], ['Model', 'Zhao et al. (2018)'], ['Model', 'Ours (DAE)'], ['Model', 'Ours (VAE)']]
1
[['TS'], ['CP'], ['LQ'], ['GM']]
[['1.67', '3.84', '3.66', '2.86'], ['3.63', '3.07', '3.08', '3.25'], ['3.55', '3.09', '3.77', '3.46'], ['3.67', '3.64', '4.19', '3.83'], ['4.32', '3.73', '4.48', '4.16']]
column
['TS', 'CP', 'LQ', 'GM']
['Ours (DAE)', 'Ours (VAE)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>TS</th> <th>CP</th> <th>LQ</th> <th>GM</th> </tr> </thead> <tbody> <tr> <td>Model || Fu et al. (2018)</td> <td>1.67</td> <td>3.84</td> <td>3.66</td> <td>2.86</td> </tr> <tr> <td>Model || Shen et al. (2017)</td> <td>3.63</td> <td>3.07</td> <td>3.08</td> <td>3.25</td> </tr> <tr> <td>Model || Zhao et al. (2018)</td> <td>3.55</td> <td>3.09</td> <td>3.77</td> <td>3.46</td> </tr> <tr> <td>Model || Ours (DAE)</td> <td>3.67</td> <td>3.64</td> <td>4.19</td> <td>3.83</td> </tr> <tr> <td>Model || Ours (VAE)</td> <td>4.32</td> <td>3.73</td> <td>4.48</td> <td>4.16</td> </tr> </tbody></table>
Table 3
table_3
P19-1041
8
acl2019
Table 3 presents the results of human evaluation on selected methods. Again, we see that the style embedding model (Fu et al., 2018) is ineffective as it has a very low transfer strength, and that our method outperforms other baselines in all aspects. The results are consistent with Table 2. This also implies that the automatic metrics we used are reasonable, and could be extrapolated to different models; it also shows consistent evidence of the effectiveness of our approach.
[1, 1, 2, 2]
['Table 3 presents the results of human evaluation on selected methods.', 'Again, we see that the style embedding model (Fu et al., 2018) is ineffective as it has a very low transfer strength, and that our method outperforms other baselines in all aspects.', 'The results are consistent with Table 2.', 'This also implies that the automatic metrics we used are reasonable, and could be extrapolated to different models; it also shows consistent evidence of the effectiveness of our approach.']
[None, ['Fu et al. (2018)', 'Ours (DAE)', 'Ours (VAE)'], None, None]
1
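The GM column above is consistent with the geometric mean of the three per-criterion scores (TS, CP, LQ). That reading is an assumption rather than something stated in this record, but it reproduces every value in the column:

```python
rows = {
    "Fu et al. (2018)":   (1.67, 3.84, 3.66),
    "Shen et al. (2017)": (3.63, 3.07, 3.08),
    "Zhao et al. (2018)": (3.55, 3.09, 3.77),
    "Ours (DAE)":         (3.67, 3.64, 4.19),
    "Ours (VAE)":         (4.32, 3.73, 4.48),
}
for name, (ts, cp, lq) in rows.items():
    print(f"{name}: {(ts * cp * lq) ** (1 / 3):.2f}")
# 2.86, 3.25, 3.46, 3.83, 4.16 -- matching the GM column of Table 3
```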
P19-1042table_4
Average single model results comparing different strategies to model cross-sentence context. ‘aux (+gate)’ is used in our CROSENT model.
1
[['BASELINE'], ['concat'], ['aux (no gate)'], ['aux (+gate)']]
2
[['Dev', 'F 0.5'], ['CoNLL-2013', 'P'], ['CoNLL-2013', 'R'], ['CoNLL-2013', 'F 0.5']]
[['33.21', '54.51', '15.16', '35.88'], ['33.41', '55.14', '15.28', '36.23'], ['32.99', '55.1', '14.83', '35.69'], ['35.68', '55.65', '16.93', '38.17']]
column
['F 0.5', 'P', 'R', 'F 0.5']
['BASELINE', 'concat', 'aux (no gate)', 'aux (+gate)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Dev || F 0.5</th> <th>CoNLL-2013 || P</th> <th>CoNLL-2013 || R</th> <th>CoNLL-2013 || F 0.5</th> </tr> </thead> <tbody> <tr> <td>BASELINE</td> <td>33.21</td> <td>54.51</td> <td>15.16</td> <td>35.88</td> </tr> <tr> <td>concat</td> <td>33.41</td> <td>55.14</td> <td>15.28</td> <td>36.23</td> </tr> <tr> <td>aux (no gate)</td> <td>32.99</td> <td>55.1</td> <td>14.83</td> <td>35.69</td> </tr> <tr> <td>aux (+gate)</td> <td>35.68</td> <td>55.65</td> <td>16.93</td> <td>38.17</td> </tr> </tbody></table>
Table 4
table_4
P19-1042
6
acl2019
5.1 Modeling Cross-Sentence Context. We investigate different mechanisms of integrating cross-sentence context. Table 4 shows the average single model results of our sentence-level BASELINE compared to two different strategies of integrating cross-sentence context. 'concat' refers to simply prepending the previous source sentences to the current source sentence. The context and the current source sentence are separated by a special token (<CONCAT>). This model does not have an auxiliary encoder. 'aux (no gate)' uses an auxiliary encoder similar to our CROSENT model except for gating. 'aux (+gate)' is our CROSENT model (Section 2.2), which employs the auxiliary encoder with the gating mechanism. The first two variants perform comparably to our sentence-level BASELINE and show no notable gains from using cross-sentence context. When the gating mechanism is added, results improve substantially. Using the gating mechanism is crucial in our CROSENT model, as it has the ability to selectively pass information through. This shows that properly modeling cross-sentence context is essential to improve overall performance.
[2, 1, 1, 2, 2, 2, 2, 2, 1, 1, 1, 1]
['5.1 Modeling Cross-Sentence Context.', 'We investigate different mechanisms of integrating cross-sentence context.', 'Table 4 shows the average single model results of our sentence-level BASELINE compared to two different strategies of integrating cross-sentence context.', "'concat' refers to simply prepending the previous source sentences to the current source sentence.", 'The context and the current source sentence are separated by a special token (<CONCAT>).', 'This model does not have an auxiliary encoder.', "'aux (no gate)' uses an auxiliary encoder similar to our CROSENT model except for gating.", "'aux (+gate)' is our CROSENT model (Section 2.2), which employs the auxiliary encoder with the gating mechanism.", 'The first two variants perform comparably to our sentence-level BASELINE and show no notable gains from using cross-sentence context.', 'When the gating mechanism is added, results improve substantially.', 'Using the gating mechanism is crucial in our CROSENT model, as it has the ability to selectively pass information through.', 'This shows that properly modeling cross-sentence context is essential to improve overall performance.']
[None, None, ['BASELINE'], ['concat'], None, None, ['aux (no gate)'], ['aux (+gate)'], ['BASELINE', 'aux (no gate)', 'concat'], ['BASELINE'], None, None]
1
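The description credits the gain of the 'aux (+gate)' row to a gating mechanism that decides how much cross-sentence context flows into the current-sentence representation. The exact formulation is defined in Section 2.2 of the paper; the sketch below only shows the generic gated-fusion form (PyTorch, all sizes and names illustrative):

```python
import torch
import torch.nn as nn

class GatedContextFusion(nn.Module):
    """Generic gate for mixing the current-sentence encoding with an auxiliary
    cross-sentence context encoding; not the paper's exact parameterization."""
    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, h_cur: torch.Tensor, h_ctx: torch.Tensor) -> torch.Tensor:
        # h_cur: current-sentence states; h_ctx: context states aligned to them
        # (e.g. via attention); both (batch, seq_len, dim) here for simplicity.
        g = torch.sigmoid(self.gate(torch.cat([h_cur, h_ctx], dim=-1)))
        return g * h_cur + (1.0 - g) * h_ctx  # the gate selectively lets context through

fusion = GatedContextFusion(dim=512)
out = fusion(torch.randn(2, 10, 512), torch.randn(2, 10, 512))
print(out.shape)  # torch.Size([2, 10, 512])
```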
P19-1046table_5
Comparison of Efficiency.
2
[['Methods', 'BC-LSTM'], ['Methods', 'TFN'], ['Methods', 'HFFN']]
1
[['FLOPs'], ['Number of Parameters']]
[['1322024', '1383902'], ['8491845', '4245986'], ['16665', '8301']]
column
['FLOPs', 'Number of Parameters']
['HFFN']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>FLOPs</th> <th>Number of Parameters</th> </tr> </thead> <tbody> <tr> <td>Methods || BC-LSTM</td> <td>1322024</td> <td>1383902</td> </tr> <tr> <td>Methods || TFN</td> <td>8491845</td> <td>4245986</td> </tr> <tr> <td>Methods || HFFN</td> <td>16665</td> <td>8301</td> </tr> </tbody></table>
Table 5
table_5
P19-1046
8
acl2019
Table 5 shows that in terms of the number of parameters, TFN is around 511 times larger than our HFFN, even under the situation where we adopt a more complex module after tensor fusion, demonstrating the high efficiency of HFFN. Note that if TFN adopts the original setting as stated in (Zadeh et al., 2017) where the FC layers have 128 units, it would have even more parameters than our version of TFN. Compared to BC-LSTM, HFFN has about 166 times fewer parameters, and the FLOPs of HFFN are over 79 times fewer than those of BC-LSTM. Moreover, BC-LSTM is over 6 times faster than TFN in time complexity measured by FLOPs, and its number of parameters is over 3 times smaller. These results demonstrate that the outer product applied in TFN results in heavy computational complexity and a substantial number of parameters compared with other methods such as BC-LSTM, while HFFN can avoid these two problems and is even more efficient than other approaches adopting low-complexity fusion methods.
[1, 2, 1, 1, 1]
['Table 5 shows that in terms of the number of parameters, TFN is around 511 times larger than our HFFN, even under the situation where we adopt a more complex module after tensor fusion, demonstrating the high efficiency of HFFN.', 'Note that if TFN adopts the original setting as stated in (Zadeh et al., 2017) where the FC layers have 128 units, it would have even more parameters than our version of TFN.', 'Compared to BC-LSTM, HFFN has about 166 times fewer parameters, and the FLOPs of HFFN are over 79 times fewer than those of BC-LSTM.', 'Moreover, BC-LSTM is over 6 times faster than TFN in time complexity measured by FLOPs, and its number of parameters is over 3 times smaller.', 'These results demonstrate that the outer product applied in TFN results in heavy computational complexity and a substantial number of parameters compared with other methods such as BC-LSTM, while HFFN can avoid these two problems and is even more efficient than other approaches adopting low-complexity fusion methods.']
[['TFN', 'HFFN'], ['TFN'], ['HFFN', 'FLOPs'], ['TFN', 'FLOPs'], ['TFN', 'BC-LSTM', 'HFFN']]
1
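The efficiency ratios quoted in the description ("around 511 times", "about 166 times", "over 79 times", "over 6 times", "over 3 times") can be recomputed from the table values; a small check:

```python
flops  = {"BC-LSTM": 1_322_024, "TFN": 8_491_845, "HFFN": 16_665}
params = {"BC-LSTM": 1_383_902, "TFN": 4_245_986, "HFFN": 8_301}

ratios = {
    "TFN / HFFN (params)":     params["TFN"] / params["HFFN"],      # ~511.5
    "BC-LSTM / HFFN (params)": params["BC-LSTM"] / params["HFFN"],  # ~166.7
    "BC-LSTM / HFFN (FLOPs)":  flops["BC-LSTM"] / flops["HFFN"],    # ~79.3
    "TFN / BC-LSTM (FLOPs)":   flops["TFN"] / flops["BC-LSTM"],     # ~6.4
    "TFN / BC-LSTM (params)":  params["TFN"] / params["BC-LSTM"],   # ~3.1
}
for name, r in ratios.items():
    print(f"{name}: {r:.1f}x")
```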
P19-1048table_4
F1-I scores of different model variants. Average results over 5 runs are reported.
2
[['Model variants', 'Vanilla model'], ['Model variants', '+Opinion transmission'], ['Model variants', '+Message passing-a (IMN -d)'], ['Model variants', '+DS'], ['Model variants', '+DD'], ['Model variants', '+Message passing-d (IMN)']]
1
[['D1'], ['D2'], ['D3']]
[['66.66', '55.63', '56.24'], ['66.98', '56.03', '56.65'], ['68.32', '57.66', '57.91'], ['68.48', '57.86', '58.03'], ['68.65', '57.5', '58.26'], ['69.54', '58.37', '59.18']]
column
['F1-I', 'F1-I', 'F1-I']
['+Message passing-d (IMN)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>D1</th> <th>D2</th> <th>D3</th> </tr> </thead> <tbody> <tr> <td>Model variants || Vanilla model</td> <td>66.66</td> <td>55.63</td> <td>56.24</td> </tr> <tr> <td>Model variants || +Opinion transmission</td> <td>66.98</td> <td>56.03</td> <td>56.65</td> </tr> <tr> <td>Model variants || +Message passing-a (IMN -d)</td> <td>68.32</td> <td>57.66</td> <td>57.91</td> </tr> <tr> <td>Model variants || +DS</td> <td>68.48</td> <td>57.86</td> <td>58.03</td> </tr> <tr> <td>Model variants || +DD</td> <td>68.65</td> <td>57.5</td> <td>58.26</td> </tr> <tr> <td>Model variants || +Message passing-d (IMN)</td> <td>69.54</td> <td>58.37</td> <td>59.18</td> </tr> </tbody></table>
Table 4
table_4
P19-1048
8
acl2019
Ablation study. To investigate the impact of different components, we start with a vanilla model which consists of f_θs, f_θae, and f_θas only, without any informative message passing, and add other components one at a time. Table 4 shows the results of different model variants. +Opinion transmission denotes the operation of providing additional information P_op to the self-attention layer as shown in Eq. (1). +Message passing-a denotes propagating the outputs from aspect-level tasks only at each message passing iteration. +DS and +DD denote adding DS and DD with parameter sharing only. +Message passing-d denotes involving the document-level information for message passing. We observe that +Message passing-a and +Message passing-d contribute to the performance gains the most, which demonstrates the effectiveness of the proposed message passing mechanism. We also observe that simply adding document-level tasks (+DS/DD) with parameter sharing only marginally improves the performance of IMN-d. This again indicates that domain-specific knowledge has already been captured by domain embeddings, while knowledge obtained from DD and DS via parameter sharing could be redundant in this case. However, +Message passing-d is still helpful with considerable performance gains, showing that aspect-level tasks can benefit from knowing predictions of the relevant document-level tasks.
[2, 2, 1, 2, 2, 2, 2, 1, 1, 2, 1]
['Ablation study.', 'To investigate the impact of different components, we start with a vanilla model which consists of f_θs, f_θae, and f_θas only, without any informative message passing, and add other components one at a time.', 'Table 4 shows the results of different model variants.', '+Opinion transmission denotes the operation of providing additional information P_op to the self-attention layer as shown in Eq. (1).', '+Message passing-a denotes propagating the outputs from aspect-level tasks only at each message passing iteration.', '+DS and +DD denote adding DS and DD with parameter sharing only.', '+Message passing-d denotes involving the document-level information for message passing.', 'We observe that +Message passing-a and +Message passing-d contribute to the performance gains the most, which demonstrates the effectiveness of the proposed message passing mechanism.', 'We also observe that simply adding document-level tasks (+DS/DD) with parameter sharing only marginally improves the performance of IMN-d.', 'This again indicates that domain-specific knowledge has already been captured by domain embeddings, while knowledge obtained from DD and DS via parameter sharing could be redundant in this case.', 'However, +Message passing-d is still helpful with considerable performance gains, showing that aspect-level tasks can benefit from knowing predictions of the relevant document-level tasks.']
[None, None, None, ['+Opinion transmission'], ['+Message passing-a (IMN -d)'], ['+DS', '+DD'], ['+Message passing-d (IMN)'], ['+Message passing-a (IMN -d)', '+Message passing-d (IMN)'], ['+DS', '+DD'], ['+DS', '+DD'], ['+Message passing-d (IMN)']]
1
P19-1048table_7
Model comparison in a setting without opinion term labels. Average results over 5 runs with random initialization are reported. ∗ indicates the proposed method is significantly better than the other baselines (p < 0.05) based on one-tailed unpaired t-test.
2
[['Methods', 'DECNN-ALSTM'], ['Methods', 'DECNN-dTrans'], ['Methods', 'PIPELINE'], ['Methods', 'MNN'], ['Methods', 'INABSA'], ['Methods', 'IMN -d'], ['Methods', 'IMN']]
2
[['D1', 'F1-a'], ['D1', 'acc-s'], ['D1', 'F1-s'], ['D1', 'F1-I'], ['D2', 'F1-a'], ['D2', 'acc-s'], ['D2', 'F1-s'], ['D2', 'F1-I'], ['D3', 'F1-a'], ['D3', 'acc-s'], ['D3', 'F1-s'], ['D3', 'F1-I']]
[['83.33', '77.63', '70.09', '64.32', '80.28', '69.98', '66.2', '55.92', '68.72', '79.22', '54.4', '54.22'], ['83.33', '79.45', '73.08', '66.15', '80.28', '71.51', '68.03', '57.28', '68.72', '82.09', '68.35', '56.08'], ['83.33', '79.39', '69.45', '65.96', '80.28', '72.12', '68.56', '57.29', '68.72', '81.85', '58.74', '56.04'], ['83.2', '77.57', '68.19', '64.26', '76.33', '70.62', '65.44', '53.77', '69.29', '80.86', '55.45', '55.93'], ['83.12', '79.06', '68.77', '65.94', '77.67', '71.72', '68.36', '55.95', '68.79', '80.96', '57.1', '55.45'], ['83.89', '80.69', '72.09', '67.27', '78.43', '72.49', '69.71', '57.13', '70.35', '81.86', '56.88', '57.86'], ['83.04', '83.05', '73.3', '68.71', '77.69', '75.12', '71.35', '58.04', '69.25', '84.53', '70.85', '58.18']]
column
['F1-a', 'acc-s', 'F1-s', 'F1-I', 'F1-a', 'acc-s', 'F1-s', 'F1-I', 'F1-a', 'acc-s', 'F1-s', 'F1-I']
['IMN -d', 'IMN']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>D1 || F1-a</th> <th>D1 || acc-s</th> <th>D1 || F1-s</th> <th>D1 || F1-I</th> <th>D2 || F1-a</th> <th>D2 || acc-s</th> <th>D2 || F1-s</th> <th>D2 || F1-I</th> <th>D3 || F1-a</th> <th>D3 || acc-s</th> <th>D3 || F1-s</th> <th>D3 || F1-I</th> </tr> </thead> <tbody> <tr> <td>Methods || DECNN-ALSTM</td> <td>83.33</td> <td>77.63</td> <td>70.09</td> <td>64.32</td> <td>80.28</td> <td>69.98</td> <td>66.2</td> <td>55.92</td> <td>68.72</td> <td>79.22</td> <td>54.4</td> <td>54.22</td> </tr> <tr> <td>Methods || DECNN-dTrans</td> <td>83.33</td> <td>79.45</td> <td>73.08</td> <td>66.15</td> <td>80.28</td> <td>71.51</td> <td>68.03</td> <td>57.28</td> <td>68.72</td> <td>82.09</td> <td>68.35</td> <td>56.08</td> </tr> <tr> <td>Methods || PIPELINE</td> <td>83.33</td> <td>79.39</td> <td>69.45</td> <td>65.96</td> <td>80.28</td> <td>72.12</td> <td>68.56</td> <td>57.29</td> <td>68.72</td> <td>81.85</td> <td>58.74</td> <td>56.04</td> </tr> <tr> <td>Methods || MNN</td> <td>83.2</td> <td>77.57</td> <td>68.19</td> <td>64.26</td> <td>76.33</td> <td>70.62</td> <td>65.44</td> <td>53.77</td> <td>69.29</td> <td>80.86</td> <td>55.45</td> <td>55.93</td> </tr> <tr> <td>Methods || INABSA</td> <td>83.12</td> <td>79.06</td> <td>68.77</td> <td>65.94</td> <td>77.67</td> <td>71.72</td> <td>68.36</td> <td>55.95</td> <td>68.79</td> <td>80.96</td> <td>57.1</td> <td>55.45</td> </tr> <tr> <td>Methods || IMN -d</td> <td>83.89</td> <td>80.69</td> <td>72.09</td> <td>67.27</td> <td>78.43</td> <td>72.49</td> <td>69.71</td> <td>57.13</td> <td>70.35</td> <td>81.86</td> <td>56.88</td> <td>57.86</td> </tr> <tr> <td>Methods || IMN</td> <td>83.04</td> <td>83.05</td> <td>73.3</td> <td>68.71</td> <td>77.69</td> <td>75.12</td> <td>71.35</td> <td>58.04</td> <td>69.25</td> <td>84.53</td> <td>70.85</td> <td>58.18</td> </tr> </tbody></table>
Table 7
table_7
P19-1048
12
acl2019
Both IMN -d and IMN still significantly outperform other baselines in most cases under this setting. In addition, when comparing the results in Table 7 and Table 3, we observe that IMN -d and IMN consistently yield better F1-I scores on all datasets in Table 3, when opinion term extraction is also considered. Consistent improvements are not observed in other baseline methods when trained with opinion term labels. These findings suggest that knowledge obtained from learning opinion term extraction is indeed beneficial; however, a carefully designed network structure is needed to utilize such information. IMN is designed to exploit task correlations by explicitly modeling interactions between tasks, and thus it better integrates knowledge obtained from training different tasks.
[2, 1, 2, 2, 2]
['Both IMN -d and IMN still significantly outperform other baselines in most cases under this setting.', 'In addition, when comparing the results in Table 7 and Table 3, we observe that IMN -d and IMN consistently yield better F1-I scores on all datasets in Table 3, when opinion term extraction is also considered.', 'Consistent improvements are not observed in other baseline methods when trained with opinion term labels.', 'These findings suggest that knowledge obtained from learning opinion term extraction is indeed beneficial; however, a carefully designed network structure is needed to utilize such information.', 'IMN is designed to exploit task correlations by explicitly modeling interactions between tasks, and thus it better integrates knowledge obtained from training different tasks.']
[['IMN -d', 'IMN'], ['F1-a', 'F1-s', 'F1-I'], None, None, ['IMN']]
1
P19-1056table_3
F1 score (%) comparison only for aspect term extraction.
2
[['Model', 'DE-CNN'], ['Model', 'DOER*'], ['Model', 'DOER']]
1
[['SL'], ['SR'], ['ST']]
[['81.26', '78.98', '63.23'], ['82.11', '79.98', '68.99'], ['82.61', '81.06', '71.35']]
column
['F1', 'F1', 'F1']
['DOER']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SL</th> <th>SR</th> <th>ST</th> </tr> </thead> <tbody> <tr> <td>Model || DE-CNN</td> <td>81.26</td> <td>78.98</td> <td>63.23</td> </tr> <tr> <td>Model || DOER*</td> <td>82.11</td> <td>79.98</td> <td>68.99</td> </tr> <tr> <td>Model || DOER</td> <td>82.61</td> <td>81.06</td> <td>71.35</td> </tr> </tbody></table>
Table 3
table_3
P19-1056
8
acl2019
Results on ATE. Table 3 shows the results of aspect term extraction only. DE-CNN is the current state-of-the-art model on ATE as mentioned above. Comparing with it, DOER achieves new state-of-the-art scores. DOER* denotes DOER without the ASC part. As the table shows, DOER achieves better performance than DOER*, which indicates that the interaction between ATE and ASC can yield better performance for ATE than only conducting a single task.
[2, 1, 2, 1, 2, 1]
['Results on ATE.', 'Table 3 shows the results of aspect term extraction only.', 'DE-CNN is the current state-of-the-art model on ATE as mentioned above.', 'Comparing with it, DOER achieves new state-of-the-art scores.', 'DOER* denotes DOER without the ASC part.', 'As the table shows, DOER achieves better performance than DOER*, which indicates that the interaction between ATE and ASC can yield better performance for ATE than only conducting a single task.']
[None, None, ['DE-CNN'], ['DOER'], ['DOER*'], ['DOER', 'DOER*']]
1
P19-1069table_5
Results of IMS trained on different corpora on the English all-words WSD tasks. † marks statistical significance between OneSeC and its competitors.
2
[['Dataset', 'Senseval-2'], ['Dataset', 'Senseval-3'], ['Dataset', 'SemEval-07'], ['Dataset', 'SemEval-13'], ['Dataset', 'SemEval-15'], ['Dataset', 'ALL']]
1
[['OneSeC'], ['TOM'], ['OMSTI'], ['SemCor'], ['MFS']]
[['73.2', '70.5', '74.1', '76.8', '72.1'], ['68.2', '67.4', '67.2', '73.8', '72'], ['63.5', '59.8', '62.3', '67.3', '65.4'], ['66.5', '65.5', '62.8', '65.5', '63'], ['70.8', '68.6', '63.1', '66.1', '66.3'], ['69', '67.3', '66.4', '70.4', '67.6']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy']
['OneSeC']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>OneSeC</th> <th>TOM</th> <th>OMSTI</th> <th>SemCor</th> <th>MFS</th> </tr> </thead> <tbody> <tr> <td>Dataset || Senseval-2</td> <td>73.2</td> <td>70.5</td> <td>74.1</td> <td>76.8</td> <td>72.1</td> </tr> <tr> <td>Dataset || Senseval-3</td> <td>68.2</td> <td>67.4</td> <td>67.2</td> <td>73.8</td> <td>72</td> </tr> <tr> <td>Dataset || SemEval-07</td> <td>63.5</td> <td>59.8</td> <td>62.3</td> <td>67.3</td> <td>65.4</td> </tr> <tr> <td>Dataset || SemEval-13</td> <td>66.5</td> <td>65.5</td> <td>62.8</td> <td>65.5</td> <td>63</td> </tr> <tr> <td>Dataset || SemEval-15</td> <td>70.8</td> <td>68.6</td> <td>63.1</td> <td>66.1</td> <td>66.3</td> </tr> <tr> <td>Dataset || ALL</td> <td>69</td> <td>67.3</td> <td>66.4</td> <td>70.4</td> <td>67.6</td> </tr> </tbody></table>
Table 5
table_5
P19-1069
6
acl2019
In Table 5 we compare the results of IMS when trained on different corpora. As one can see, OneSeC achieves the best results on ALL when compared to automatic and semi-automatic approaches, and ranks second only with respect to SemCor. Interestingly enough, OneSeC beats its manual competitor on SemEval-2013 by 1 point and on SemEval-2015 by 4.7 points, an impressive result considering that OneSeC does not involve any human intervention during the generation of the corpus. In Table 5 we also report the statistical significance between OneSeC and its competitors on the ALL dataset by juxtaposing a † symbol next to the score. In order to do this, we computed the McNemar’s chi-square test (McNemar, 1947) with significance level alpha = 0.01 between OneSeC and SemCor. It resulted in no statistical significance, meaning that IMS trained on OneSeC is in the same ballpark as when trained on SemCor. We note that the goal of this work was not to achieve state-of-the-art results on English WSD compared to manually-annotated corpora. However, performing competitively on standard benchmarks represents one step further towards getting rid of the limitation imposed by resources like SemCor. Moreover, our approach outperforms Train-O-Matic, our direct competitor, on all the datasets, with the highest increment of 3.7 points on SemEval-2007, while scoring almost 2 points higher than TOM overall.
[1, 1, 1, 1, 2, 2, 2, 2, 1]
['In Table 5 we compare the results of IMS when trained on different corpora.', 'As one can see, OneSeC achieves the best results on ALL when compared to automatic and semi-automatic approaches, and ranks second only with respect to SemCor.', 'Interestingly enough, OneSeC beats its manual competitor on SemEval-2013 by 1 point and on SemEval-2015 by 4.7 points, an impressive result considering that OneSeC does not involve any human intervention during the generation of the corpus.', 'In Table 5 we also report the statistical significance between OneSeC and its competitors on the ALL dataset by juxtaposing a †\xa0symbol next to the score.', 'In order to do this, we computed the McNemar’s chi-square test (McNemar, 1947) with significance level alpha = 0.01 between OneSeC and SemCor.', 'It resulted in no statistical significance, meaning that IMS trained on OneSeC is in the same ballpark as when trained on SemCor.', 'We note that the goal of this work was not to achieve state-of-the-art results on English WSD compared to manually-annotated corpora.', 'However, performing competitively on standard benchmarks represents one step further towards getting rid of the limitation imposed by resources like SemCor.', 'Moreover, our approach outperforms Train-O-Matic, our direct competitor, on all the datasets, with the highest increment of 3.7 points on SemEval-2007, while scoring almost 2 points higher than TOM overall.']
[None, ['OneSeC', 'ALL', 'SemCor'], ['OneSeC', 'SemEval-13', 'SemEval-15'], ['OneSeC', 'ALL'], ['OneSeC', 'SemCor'], ['OneSeC', 'SemCor'], None, ['SemCor'], ['TOM']]
1
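The description mentions McNemar's chi-square test at alpha = 0.01 for comparing OneSeC- and SemCor-trained IMS. A sketch of that kind of paired significance test using statsmodels; the per-instance correctness vectors below are invented placeholders, not the paper's system outputs:

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# Hypothetical per-instance correctness (1 = correct) of two WSD systems on the same items.
sys_a = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 1])  # e.g. IMS trained on OneSeC
sys_b = np.array([1, 0, 0, 1, 1, 1, 1, 0, 1, 1])  # e.g. IMS trained on SemCor

table = np.array([
    [np.sum((sys_a == 1) & (sys_b == 1)), np.sum((sys_a == 1) & (sys_b == 0))],
    [np.sum((sys_a == 0) & (sys_b == 1)), np.sum((sys_a == 0) & (sys_b == 0))],
])
result = mcnemar(table, exact=True)          # exact binomial test on the discordant pairs
print(result.pvalue, result.pvalue < 0.01)   # significant at alpha = 0.01?
```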
P19-1074table_7
Performance of joint relation and supporting evidence prediction in F1 measurement (%).
2
[['Method', 'Heuristic predictor'], ['Method', 'Neural predictor']]
1
[['Dev'], ['Test']]
[['36.21', '36.76'], ['44.07', '43.85']]
column
['F1', 'F1']
['Neural predictor', 'Heuristic predictor']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Dev</th> <th>Test</th> </tr> </thead> <tbody> <tr> <td>Method || Heuristic predictor</td> <td>36.21</td> <td>36.76</td> </tr> <tr> <td>Method || Neural predictor</td> <td>44.07</td> <td>43.85</td> </tr> </tbody></table>
Table 7
table_7
P19-1074
8
acl2019
Supporting Evidence Prediction. We propose a new task to predict the supporting evidence for relation instances. On the one hand, jointly predicting the evidence provides better explainability. On the other hand, identifying supporting evidence and reasoning relational facts from text are naturally dual tasks with potential mutual enhancement. We design two supporting evidence prediction methods: (1) Heuristic predictor. We implement a simple heuristic-based model that considers all sentences containing the head or tail entity as supporting evidence. (2) Neural predictor. We also design a neural supporting evidence predictor. Given an entity pair and a predicted relation, sentences are first transformed into input representations by the concatenation of word embeddings and position embeddings, and then fed into a BiLSTM encoder for contextual representations. Inspired by Yang et al. (2018), we concatenate the output of the BiLSTM at the first and last positions with a trainable relation embedding to obtain a sentence’s representation, which is used to predict whether the sentence is adopted as supporting evidence for the given relation instance. As Table 7 shows, the neural predictor significantly outperforms heuristic-based baseline in predicting supporting evidence, which indicates the potential of RE models in joint relation and supporting evidence prediction.
[2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1]
['Supporting Evidence Prediction.', 'We propose a new task to predict the supporting evidence for relation instances.', 'On the one hand, jointly predicting the evidence provides better explainability.', 'On the other hand, identifying supporting evidence and reasoning relational facts from text are naturally dual tasks with potential mutual enhancement.', 'We design two supporting evidence prediction methods: (1) Heuristic predictor.', 'We implement a simple heuristic-based model that considers all sentences containing the head or tail entity as supporting evidence.', '(2) Neural predictor.', 'We also design a neural supporting evidence predictor.', 'Given an entity pair and a predicted relation, sentences are first transformed into input representations by the concatenation of word embeddings and position embeddings, and then fed into a BiLSTM encoder for contextual representations.', 'Inspired by Yang et al. (2018), we concatenate the output of the BiLSTM at the first and last positions with a trainable relation embedding to obtain a sentence’s representation, which is used to predict whether the sentence is adopted as supporting evidence for the given relation instance.', 'As Table 7 shows, the neural predictor significantly outperforms heuristic-based baseline in predicting supporting evidence, which indicates the potential of RE models in joint relation and supporting evidence prediction.']
[None, None, None, None, ['Heuristic predictor'], None, ['Neural predictor'], None, None, None, ['Neural predictor', 'Heuristic predictor']]
1
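As a rough illustration of the neural predictor described above (word + position embeddings, a BiLSTM encoder, first and last hidden states concatenated with a trainable relation embedding, and a binary evidence score per sentence), here is a minimal PyTorch sketch; all sizes, names, and the scoring head are illustrative assumptions, not the authors' implementation:

```python
import torch
import torch.nn as nn

class EvidencePredictor(nn.Module):
    def __init__(self, vocab_size, n_relations, emb_dim=100, pos_dim=20,
                 hidden_dim=128, max_len=512):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb_dim)
        self.pos_emb = nn.Embedding(max_len, pos_dim)
        self.encoder = nn.LSTM(emb_dim + pos_dim, hidden_dim,
                               batch_first=True, bidirectional=True)
        self.rel_emb = nn.Embedding(n_relations, hidden_dim)
        # first + last BiLSTM states (2 * 2*hidden) concatenated with the relation embedding
        self.scorer = nn.Linear(4 * hidden_dim + hidden_dim, 1)

    def forward(self, tokens, positions, relation):
        x = torch.cat([self.word_emb(tokens), self.pos_emb(positions)], dim=-1)
        h, _ = self.encoder(x)                              # (batch, len, 2*hidden)
        sent_repr = torch.cat([h[:, 0], h[:, -1]], dim=-1)  # outputs at first and last positions
        feats = torch.cat([sent_repr, self.rel_emb(relation)], dim=-1)
        return torch.sigmoid(self.scorer(feats)).squeeze(-1)  # P(sentence is supporting evidence)

model = EvidencePredictor(vocab_size=1000, n_relations=97)   # sizes are placeholders
tokens = torch.randint(0, 1000, (2, 30))
positions = torch.arange(30).unsqueeze(0).expand(2, 30)
relation = torch.tensor([3, 5])
print(model(tokens, positions, relation).shape)              # torch.Size([2])
```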
P19-1079table_5
Results on benchmark dataset (ATIS and subsets of Snips).
2
[['Model', 'JOINT-SF-IC'], ['Model', 'PARALLEL[UNIV]'], ['Model', 'PARALLEL[UNIV+TASK]'], ['Model', 'PARALLEL[UNIV+GROUP+TASK]'], ['Model', 'SERIAL'], ['Model', 'SERIAL+HIGHWAY'], ['Model', 'SERIAL+HIGHWAY+SWAP']]
2
[['ATIS', 'Intent Acc.'], ['ATIS', 'Slot F1'], ['Snips-location', 'Intent Acc.'], ['Snips-location', 'Slot F1'], ['Snips-music', 'Intent Acc.'], ['Snips-music', 'Slot F1'], ['Snips-creative', 'Intent Acc.'], ['Snips-creative', 'Slot F1']]
[['96.1', '95.4', '99.7', '96.3', '100', '93.1', '100', '96.6'], ['96.4', '95.4', '99.7', '95.8', '100', '92.1', '100', '95.8'], ['96.2', '95.5', '99.7', '96', '100', '93.4', '100', '97.2'], ['96.9', '95.4', '99.7', '96.5', '99.5', '94.4', '100', '97.3'], ['97.2', '95.8', '100', '96.5', '100', '93.8', '100', '97.2'], ['96.9', '95.7', '100', '97.2', '99.5', '94.8', '100', '97.2'], ['97.5', '95.6', '99.7', '96', '100', '93.9', '100', '97.8']]
column
['Intent Acc.', 'Slot F1', 'Intent Acc.', 'Slot F1', 'Intent Acc.', 'Slot F1', 'Intent Acc.', 'Slot F1']
['SERIAL', 'SERIAL+HIGHWAY', 'SERIAL+HIGHWAY+SWAP']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>ATIS || Intent Acc.</th> <th>ATIS || Slot F1</th> <th>Snips-location || Intent Acc.</th> <th>Snips-location || Slot F1</th> <th>Snips-music || Intent Acc.</th> <th>Snips-music || Slot F1</th> <th>Snips-creative || Intent Acc.</th> <th>Snips-creative || Slot F1</th> </tr> </thead> <tbody> <tr> <td>Model || JOINT-SF-IC</td> <td>96.1</td> <td>95.4</td> <td>99.7</td> <td>96.3</td> <td>100</td> <td>93.1</td> <td>100</td> <td>96.6</td> </tr> <tr> <td>Model || PARALLEL[UNIV]</td> <td>96.4</td> <td>95.4</td> <td>99.7</td> <td>95.8</td> <td>100</td> <td>92.1</td> <td>100</td> <td>95.8</td> </tr> <tr> <td>Model || PARALLEL[UNIV+TASK]</td> <td>96.2</td> <td>95.5</td> <td>99.7</td> <td>96</td> <td>100</td> <td>93.4</td> <td>100</td> <td>97.2</td> </tr> <tr> <td>Model || PARALLEL[UNIV+GROUP+TASK]</td> <td>96.9</td> <td>95.4</td> <td>99.7</td> <td>96.5</td> <td>99.5</td> <td>94.4</td> <td>100</td> <td>97.3</td> </tr> <tr> <td>Model || SERIAL</td> <td>97.2</td> <td>95.8</td> <td>100</td> <td>96.5</td> <td>100</td> <td>93.8</td> <td>100</td> <td>97.2</td> </tr> <tr> <td>Model || SERIAL+HIGHWAY</td> <td>96.9</td> <td>95.7</td> <td>100</td> <td>97.2</td> <td>99.5</td> <td>94.8</td> <td>100</td> <td>97.2</td> </tr> <tr> <td>Model || SERIAL+HIGHWAY+SWAP</td> <td>97.5</td> <td>95.6</td> <td>99.7</td> <td>96</td> <td>100</td> <td>93.9</td> <td>100</td> <td>97.8</td> </tr> </tbody></table>
Table 5
table_5
P19-1079
7
acl2019
Table 5 shows results on ATIS and our split version of Snips. We now have four tasks: ATIS, Snips-location, Snips-music, and Snips-creative. JOINT-SF-IC is our baseline that treats these four tasks independently. All other models process the four tasks together in the MTL setup. For the models introduced in this paper, we define two task groups: ATIS and Snips-location as one group, and Snips-music and Snips-creative as another. Our models, which use these groups, generally outperform the other MTL models (PARALLEL[UNIV] and PARALLEL[UNIV+TASK]); especially the serial MTL architectures perform well.
[1, 1, 2, 2, 1, 1]
['Table 5 shows results on ATIS and our split version of Snips.', 'We now have four tasks: ATIS, Snips-location, Snips-music, and Snips-creative.', 'JOINT-SF-IC is our baseline that treats these four tasks independently.', 'All other models process the four tasks together in the MTL setup.', 'For the models introduced in this paper, we define two task groups: ATIS and Snips-location as one group, and Snips-music and Snips-creative as another.', 'Our models, which use these groups, generally outperform the other MTL models (PARALLEL[UNIV] and PARALLEL[UNIV+TASK]); especially the serial MTL architectures perform well.']
[['ATIS', 'Snips-location', 'Snips-music', 'Snips-creative'], ['ATIS', 'Snips-location', 'Snips-music', 'Snips-creative'], ['JOINT-SF-IC'], None, ['ATIS', 'Snips-location', 'Snips-music', 'Snips-creative'], ['SERIAL', 'SERIAL+HIGHWAY', 'SERIAL+HIGHWAY+SWAP', 'PARALLEL[UNIV]', 'PARALLEL[UNIV+TASK]']]
1
P19-1079table_6
Results on the Alexa dataset. Best results on mean intent accuracy and slot F1 values, and results that are not statistically different from the best model are marked in bold.
2
[['Model', 'JOINT-SF-IC'], ['Model', 'PARALLEL[UNIV]'], ['Model', 'PARALLEL[UNIV+TASK]'], ['Model', 'PARALLEL[UNIV+GROUP+TASK]'], ['Model', 'SERIAL'], ['Model', 'SERIAL+HIGHWAY'], ['Model', 'SERIAL+HIGHWAY+SWAP']]
2
[['Intent Acc.', 'Mean'], ['Intent Acc.', 'Median'], ['Slot F1', 'Mean'], ['Slot F1', 'Median']]
[['93.36', '95.9', '79.97', '85.23'], ['93.44', '95.5', '80.76', '86.18'], ['93.78', '96.35', '80.49', '85.81'], ['93.87', '96.31', '80.84', '86.21'], ['93.83', '96.24', '80.84', '86.14'], ['93.81', '96.28', '80.73', '85.71'], ['94.02', '96.42', '80.8', '86.44']]
column
['Intent Acc.', 'Intent Acc.', 'Slot F1', 'Slot F1']
['Model']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Intent Acc. || Mean</th> <th>Intent Acc. || Median</th> <th>Slot F1 || Mean</th> <th>Slot F1 || Median</th> </tr> </thead> <tbody> <tr> <td>Model || JOINT-SF-IC</td> <td>93.36</td> <td>95.9</td> <td>79.97</td> <td>85.23</td> </tr> <tr> <td>Model || PARALLEL[UNIV]</td> <td>93.44</td> <td>95.5</td> <td>80.76</td> <td>86.18</td> </tr> <tr> <td>Model || PARALLEL[UNIV+TASK]</td> <td>93.78</td> <td>96.35</td> <td>80.49</td> <td>85.81</td> </tr> <tr> <td>Model || PARALLEL[UNIV+GROUP+TASK]</td> <td>93.87</td> <td>96.31</td> <td>80.84</td> <td>86.21</td> </tr> <tr> <td>Model || SERIAL</td> <td>93.83</td> <td>96.24</td> <td>80.84</td> <td>86.14</td> </tr> <tr> <td>Model || SERIAL+HIGHWAY</td> <td>93.81</td> <td>96.28</td> <td>80.73</td> <td>85.71</td> </tr> <tr> <td>Model || SERIAL+HIGHWAY+SWAP</td> <td>94.02</td> <td>96.42</td> <td>80.8</td> <td>86.44</td> </tr> </tbody></table>
Table 6
table_6
P19-1079
7
acl2019
4.2 Alexa data. Table 6 shows the results of the single-domain model and the MTL models on the Alexa dataset. The trend is clearly visible in these results compared to the results on the benchmark data. As the Alexa data has more domains, there might not be many features that are common across all the domains. Capturing those features that are only common across a group became possible by incorporating task group encoders. SERIAL+HIGHWAY+SWAP yields the best mean intent accuracy. PARALLEL[UNIV+GROUP+TASK] and SERIAL+HIGHWAY show statistically indistinguishable results. For slot filling, all MTL architectures achieve competitive results on mean Slot F1.
[2, 1, 2, 2, 2, 1, 1, 1]
['4.2 Alexa data.', 'Table 6 shows the results of the single-domain model and the MTL models on the Alexa dataset.', 'The trend is clearly visible in these results compared to the results on the benchmark data.', 'As the Alexa data has more domains, there might not be many features that are common across all the domains.', 'Capturing those features that are only common across a group became possible by incorporating task group encoders.', 'SERIAL+HIGHWAY+SWAP yields the best mean intent accuracy.', 'PARALLEL[UNIV+GROUP+TASK] and SERIAL+HIGHWAY show statistically indistinguishable results.', 'For slot filling, all MTL architectures achieve competitive results on mean Slot F1.']
[None, None, None, None, None, ['SERIAL+HIGHWAY+SWAP'], ['PARALLEL[UNIV+GROUP+TASK]', 'SERIAL+HIGHWAY'], ['Mean', 'Slot F1']]
1
P19-1081table_3
Cross-domain (train/test on the different domain) response generation performance on the OpenDialKG dataset (metric: recall@k). E: entities, S: sentence, D: dialog contexts.
4
[['Input', 'E+S+D', 'Model', 'seq2seq (Sutskever et al.,2014)'], ['Input', 'E+S', 'Model', 'Tri-LSTM (Young et al.,2018)'], ['Input', 'E+S', 'Model', 'Ext-ED (Parthasarathi and Pineau,2018)'], ['Input', 'E', 'Model', 'DialKG Walker (ablation)'], ['Input', 'E+S', 'Model', 'DialKG Walker (ablation)'], ['Input', 'E+S+D', 'Model', 'DialKG Walker (proposed)']]
2
[['Movie→Book', 'r@1'], ['Movie→Book', 'r@3'], ['Movie→Book', 'r@5'], ['Movie→Book', 'r@10'], ['Movie→Book', 'r@25'], ['Movie→Music', 'r@1'], ['Movie→Music', 'r@3'], ['Movie→Music', 'r@5'], ['Movie→Music', 'r@10'], ['Movie→Music', 'r@25']]
[['2.9', '21.3', '35.1', '50.6', '64.2', '1.5', '12.1', '19.7', '34.9', '49.4'], ['2.3', '17.9', '29.7', '44.9', '61', '1.9', '8.7', '12.9', '25.8', '44.4'], ['2', '7.9', '11.2', '16.4', '22.4', '1.3', '2.6', '3.8', '4.1', '8.3'], ['8.2', '15.7', '22.8', '31.8', '48.9', '4.5', '16.7', '21.6', '25.8', '33'], ['12.6', '28.6', '38.6', '54.1', '65.6', '6', '15.9', '22.8', '33', '47.5'], ['13.5', '28.8', '39.5', '52.6', '64.8', '5.3', '13.3', '19.7', '28.8', '38']]
column
['r@1', 'r@3', 'r@5', 'r@10', 'r@25', 'r@1', 'r@3', 'r@5', 'r@10', 'r@25']
['DialKG Walker (proposed)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Movie→Book || r@1</th> <th>Movie→Book || r@3</th> <th>Movie→Book || r@5</th> <th>Movie→Book || r@10</th> <th>Movie→Book || r@25</th> <th>Movie→Music || r@1</th> <th>Movie→Music || r@3</th> <th>Movie→Music || r@5</th> <th>Movie→Music || r@10</th> <th>Movie→Music || r@25</th> </tr> </thead> <tbody> <tr> <td>Input || E+S+D || Model || seq2seq (Sutskever et al.,2014)</td> <td>2.9</td> <td>21.3</td> <td>35.1</td> <td>50.6</td> <td>64.2</td> <td>1.5</td> <td>12.1</td> <td>19.7</td> <td>34.9</td> <td>49.4</td> </tr> <tr> <td>Input || E+S || Model || Tri-LSTM (Young et al.,2018)</td> <td>2.3</td> <td>17.9</td> <td>29.7</td> <td>44.9</td> <td>61</td> <td>1.9</td> <td>8.7</td> <td>12.9</td> <td>25.8</td> <td>44.4</td> </tr> <tr> <td>Input || E+S || Model || Ext-ED (Parthasarathi and Pineau,2018)</td> <td>2</td> <td>7.9</td> <td>11.2</td> <td>16.4</td> <td>22.4</td> <td>1.3</td> <td>2.6</td> <td>3.8</td> <td>4.1</td> <td>8.3</td> </tr> <tr> <td>Input || E || Model || DialKG Walker (ablation)</td> <td>8.2</td> <td>15.7</td> <td>22.8</td> <td>31.8</td> <td>48.9</td> <td>4.5</td> <td>16.7</td> <td>21.6</td> <td>25.8</td> <td>33</td> </tr> <tr> <td>Input || E+S || Model || DialKG Walker (ablation)</td> <td>12.6</td> <td>28.6</td> <td>38.6</td> <td>54.1</td> <td>65.6</td> <td>6</td> <td>15.9</td> <td>22.8</td> <td>33</td> <td>47.5</td> </tr> <tr> <td>Input || E+S+D || Model || DialKG Walker (proposed)</td> <td>13.5</td> <td>28.8</td> <td>39.5</td> <td>52.6</td> <td>64.8</td> <td>5.3</td> <td>13.3</td> <td>19.7</td> <td>28.8</td> <td>38</td> </tr> </tbody></table>
Table 3
table_3
P19-1081
7
acl2019
Cross-domain evaluation: Table 3 demonstrates that the DialKG Walker model can generalize to multiple domains better than the baseline approaches (train: movie & test: book / train: movie & test: music). This result indicates that our method also allows for zeroshot pruning by relations based on their proximity in the KG embeddings space, thus effective in cross-domain cases as well. For example, relations 'scenario by' and 'author' are close neighbors in the KG embeddings space, thus allowing for zeroshot prediction in cross-domain tests, although their training examples usually appear in two separate domains: movie and book.
[1, 1, 2]
['Cross-domain evaluation: Table 3 demonstrates that the DialKG Walker model can generalize to multiple domains better than the baseline approaches (train: movie & test: book / train: movie & test: music).', 'This result indicates that our method also allows for zeroshot pruning by relations based on their proximity in the KG embeddings space, thus effective in cross-domain cases as well.', "For example, relations 'scenario by' and 'author' are close neighbors in the KG embeddings space, thus allowing for zeroshot prediction in cross-domain tests, although their training examples usually appear in two separate domains: movie and book."]
[['DialKG Walker (proposed)'], None, None]
1
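The record above evaluates ranked entity predictions with recall@k. As a point of reference, a minimal sketch of one common way to compute such a metric is given below; the function names and the data layout of ranked candidates and gold entities are illustrative assumptions, not taken from the paper, and some papers instead count a hit whenever any gold entity appears in the top k.

# Hypothetical sketch: recall@k over ranked predictions.
# Assumes `ranked` is a list of candidate entities sorted by model score
# and `gold` is the set of reference entities for one example.

def recall_at_k(ranked, gold, k):
    """Fraction of gold items that appear among the top-k candidates."""
    if not gold:
        return 0.0
    top_k = set(ranked[:k])
    return len(top_k & gold) / len(gold)

def average_recall_at_k(examples, k):
    """Mean recall@k over a list of (ranked, gold) pairs."""
    scores = [recall_at_k(r, g, k) for r, g in examples]
    return sum(scores) / len(scores) if scores else 0.0

# Toy usage:
examples = [(["e1", "e2", "e3"], {"e2"}), (["e4", "e5"], {"e9"})]
print(average_recall_at_k(examples, k=1))  # 0.0
print(average_recall_at_k(examples, k=3))  # 0.5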
P19-1085table_4
Sentence selection evaluation and average label accuracy of GEAR with different thresholds on dev set (%).
2
[['threshold', '0'], ['threshold', '10^-4'], ['threshold', '10^-3'], ['threshold', '10^-2'], ['threshold', '10^-1']]
1
[['OFEVER'], ['Precision'], ['Recall'], ['F1'], ['GEAR LA']]
[['91.1', '24.08', '86.72', '37.69', '74.84'], ['91.04', '30.88', '86.63', '45.53', '74.86'], ['90.86', '40.6', '86.36', '55.23', '74.91'], ['90.27', '53.12', '85.47', '65.52', '74.89'], ['87.7', '70.61', '81.64', '75.72', '74.81']]
column
['OFEVER', 'Precision', 'Recall', 'F1', 'GEAR LA']
['GEAR LA', 'threshold']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>OFEVER</th> <th>Precision</th> <th>Recall</th> <th>F1</th> <th>GEAR LA</th> </tr> </thead> <tbody> <tr> <td>threshold || 0</td> <td>91.1</td> <td>24.08</td> <td>86.72</td> <td>37.69</td> <td>74.84</td> </tr> <tr> <td>threshold || 10^-4</td> <td>91.04</td> <td>30.88</td> <td>86.63</td> <td>45.53</td> <td>74.86</td> </tr> <tr> <td>threshold || 10^-3</td> <td>90.86</td> <td>40.6</td> <td>86.36</td> <td>55.23</td> <td>74.91</td> </tr> <tr> <td>threshold || 10^-2</td> <td>90.27</td> <td>53.12</td> <td>85.47</td> <td>65.52</td> <td>74.89</td> </tr> <tr> <td>threshold || 10^-1</td> <td>87.7</td> <td>70.61</td> <td>81.64</td> <td>75.72</td> <td>74.81</td> </tr> </tbody></table>
Table 4
table_4
P19-1085
6
acl2019
The rightmost column of Table 4 shows the results of our GEAR frameworks with different sentence selection thresholds. We choose the model with threshold 10^-3, which has the highest label accuracy, as our final model. When the threshold increases from 0 to 10^-3, the label accuracy increases due to less noisy information. However, when the threshold increases from 10^-3 to 10^-1, the label accuracy decreases because informative evidence is filtered out, and the model can not obtain sufficient evidence to make the right inference.
[1, 1, 1, 1]
['The rightmost column of Table 4 shows the results of our GEAR frameworks with different sentence selection thresholds.', 'We choose the model with threshold 10^-3, which has the highest label accuracy, as our final model.', 'When the threshold increases from 0 to 10^-3, the label accuracy increases due to less noisy information.', 'However, when the threshold increases from 10^-3 to 10^-1, the label accuracy decreases because informative evidence is filtered out, and the model can not obtain sufficient evidence to make the right inference.']
[['GEAR LA'], ['threshold', '10^-3'], ['threshold', '0', '10^-3'], ['threshold', '10^-3', '10^-1']]
1
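The record above sweeps a sentence-selection score threshold and reports precision, recall, and F1 of the retained evidence. Below is a minimal, hedged sketch of such a sweep over scored candidate sentences; the variable names, threshold values, and data layout are illustrative assumptions rather than the paper's implementation.

# Hypothetical sketch: precision/recall/F1 of evidence selection at several thresholds.
# `scored` holds (score, is_gold_evidence) pairs for all candidate sentences.

def prf_at_threshold(scored, threshold):
    selected = [is_gold for score, is_gold in scored if score >= threshold]
    tp = sum(selected)
    fp = len(selected) - tp
    fn = sum(is_gold for _, is_gold in scored) - tp
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1

# Toy usage with made-up scores:
scored = [(0.9, True), (0.4, False), (0.05, True), (0.001, False)]
for t in (0.0, 1e-3, 1e-2, 1e-1):
    print(t, prf_at_threshold(scored, t))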
P19-1085table_7
Evaluations of the full pipeline. The results of our pipeline are chosen from the model which has the highest dev FEVER score (%).
2
[['Model', 'Athene'], ['Model', 'UCL MRG'], ['Model', 'UNC NLP'], ['Model', 'BERT Pair'], ['Model', 'BERT Concat'], ['Model', 'Our pipeline']]
2
[['Dev', 'LA'], ['Dev', 'FEVER'], ['Test', 'LA'], ['Test', 'FEVER']]
[['68.49', '64.74', '65.46', '61.58'], ['69.66', '65.41', '67.62', '62.52'], ['69.72', '66.49', '68.21', '64.21'], ['73.3', '68.9', '69.75', '65.18'], ['73.67', '68.89', '71.01', '65.64'], ['74.84', '70.69', '71.6', '67.1']]
column
['LA', 'FEVER', 'LA', 'FEVER']
['Our pipeline']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Dev || LA</th> <th>Dev || FEVER</th> <th>Test || LA</th> <th>Test || FEVER</th> </tr> </thead> <tbody> <tr> <td>Model || Athene</td> <td>68.49</td> <td>64.74</td> <td>65.46</td> <td>61.58</td> </tr> <tr> <td>Model || UCL MRG</td> <td>69.66</td> <td>65.41</td> <td>67.62</td> <td>62.52</td> </tr> <tr> <td>Model || UNC NLP</td> <td>69.72</td> <td>66.49</td> <td>68.21</td> <td>64.21</td> </tr> <tr> <td>Model || BERT Pair</td> <td>73.3</td> <td>68.9</td> <td>69.75</td> <td>65.18</td> </tr> <tr> <td>Model || BERT Concat</td> <td>73.67</td> <td>68.89</td> <td>71.01</td> <td>65.64</td> </tr> <tr> <td>Model || Our pipeline</td> <td>74.84</td> <td>70.69</td> <td>71.6</td> <td>67.1</td> </tr> </tbody></table>
Table 7
table_7
P19-1085
7
acl2019
Table 7 presents the evaluations of the full pipeline. We find that the BERT fine-tuning systems outperform other shared task models by nearly 1% in test FEVER score. Furthermore, our full pipeline outperforms the BERT-Concat baseline by 1.46% and achieves significant improvements.
[1, 1, 1]
['Table 7 presents the evaluations of the full pipeline.', 'We find that the BERT fine-tuning systems outperform other shared task models by nearly 1% in test FEVER score.', 'Furthermore, our full pipeline outperforms the BERT-Concat baseline by 1.46% and achieves significant improvements.']
[None, ['FEVER', 'BERT Pair', 'BERT Concat'], None]
1
P19-1087table_4
The comparison of Seq2Seq model performance using Transformer (Xformer) and LSTM encoders. Both encoders were pre-trained.
3
[['Encoder', 'Unweighted F1', 'Xformer'], ['Encoder', 'Unweighted F1', 'LSTM'], ['Encoder', 'Weighted F1', 'Xformer'], ['Encoder', 'Weighted F1', 'LSTM']]
1
[['Sx'], ['Sx + Status']]
[['0.67', '0.51'], ['0.70', '0.55'], ['0.76', '0.61'], ['0.79', '0.64']]
row
['Unweighted F1', 'Unweighted F1', 'Weighted F1', 'Weighted F1']
['LSTM']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Sx</th> <th>Sx + Status</th> </tr> </thead> <tbody> <tr> <td>Encoder || Unweighted F1 || Xformer</td> <td>0.67</td> <td>0.51</td> </tr> <tr> <td>Encoder || Unweighted F1 || LSTM</td> <td>0.70</td> <td>0.55</td> </tr> <tr> <td>Encoder || Weighted F1 || Xformer</td> <td>0.76</td> <td>0.61</td> </tr> <tr> <td>Encoder || Weighted F1 || LSTM</td> <td>0.79</td> <td>0.64</td> </tr> </tbody></table>
Table 4
table_4
P19-1087
6
acl2019
Next, the Transformer encoder was compared against the LSTM encoder, using pre-training in both cases. Based on the performance on the development set, the best encoder was chosen, which consists of two layers, each with a hidden dimension of 1024 and 16 attention heads. The results in Table 4 show that the LSTM-encoder outperforms the Transformer-encoder consistently in this task, when both are pre-trained. Therefore, for the rest of the experiments, we only report results using the LSTM-encoder.
[1, 2, 1, 1]
['Next, the Transformer encoder was compared against the LSTM encoder, using pre-training in both cases.', 'Based on the performance on the development set, the best encoder was chosen, which consists of two layers, each with a hidden dimension of 1024 and 16 attention heads.', 'The results in Table 4 show that the LSTM-encoder outperforms the Transformer-encoder consistently in this task, when both are pre-trained.', 'Therefore, for the rest of the experiments, we only report results using the LSTM-encoder.']
[['Xformer', 'LSTM'], None, ['LSTM', 'Xformer'], ['LSTM']]
1
P19-1088table_5
Overall prediction results and F-scores for counseling quality using linguistic feature sets
2
[['Feature set', 'Baseline'], ['Feature set', 'N-grams'], ['Feature set', 'Semantic'], ['Feature set', 'Metafeatures'], ['Feature set', 'Sentiment'], ['Feature set', 'Alignment'], ['Feature set', 'Topics'], ['Feature set', 'MITI Behav'], ['Feature set', 'All features']]
3
[['Counseling Quality', 'Acc.', '-'], ['Counseling Quality', 'F-score', ' Low'], ['Counseling Quality', 'F-score', ' High']]
[['59.85%', '', ''], ['87.26%', '0.849', '0.89'], ['80.31%', '0.763', '0.832'], ['72.59%', '0.297', '0.83'], ['74.52%', '0.298', '0.844'], ['72.59%', '0.64', '0.779'], ['81.08%', '0.768', '0.84'], ['79.54%', '0.787', '0.808'], ['88.03%', '0.857', '0.897']]
column
['Acc.', 'F-score', 'F-score']
['All features']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Counseling Quality || Acc. || -</th> <th>Counseling Quality || F-score || Low</th> <th>Counseling Quality || F-score || High</th> </tr> </thead> <tbody> <tr> <td>Feature set || Baseline</td> <td>59.85%</td> <td></td> <td></td> </tr> <tr> <td>Feature set || N-grams</td> <td>87.26%</td> <td>0.849</td> <td>0.89</td> </tr> <tr> <td>Feature set || Semantic</td> <td>80.31%</td> <td>0.763</td> <td>0.832</td> </tr> <tr> <td>Feature set || Metafeatures</td> <td>72.59%</td> <td>0.297</td> <td>0.83</td> </tr> <tr> <td>Feature set || Sentiment</td> <td>74.52%</td> <td>0.298</td> <td>0.844</td> </tr> <tr> <td>Feature set || Alignment</td> <td>72.59%</td> <td>0.64</td> <td>0.779</td> </tr> <tr> <td>Feature set || Topics</td> <td>81.08%</td> <td>0.768</td> <td>0.84</td> </tr> <tr> <td>Feature set || MITI Behav</td> <td>79.54%</td> <td>0.787</td> <td>0.808</td> </tr> <tr> <td>Feature set || All features</td> <td>88.03%</td> <td>0.857</td> <td>0.897</td> </tr> </tbody></table>
Table 5
table_5
P19-1088
8
acl2019
Table 5 shows the classification performance obtained when using one feature set at a time. We measure the performance of the classifiers in terms of accuracy and F-score, which provide overall and class-specific performance assessments. Compared to the majority baseline, all the feature sets demonstrate a clear improvement in the classification of counseling quality. Among all feature sets, N-grams attain the best performance, followed by discourse topics and the semantic feature sets. Furthermore, the combination of all the feature sets achieves the best accuracy values.
[1, 1, 1, 1, 1]
['Table 5 shows the classification performance obtained when using one feature set at a time.', 'We measure the performance of the classifiers in terms of accuracy and F-score, which provide overall and class-specific performance assessments.', 'Compared to the majority baseline, all the feature sets demonstrate a clear improvement in the classification of counseling quality.', 'Among all feature sets, N-grams attain the best performance, followed by discourse topics and the semantic feature sets.', 'Furthermore, the combination of all the feature sets achieves the best accuracy values.']
[None, ['Acc.', 'F-score'], ['Baseline'], ['N-grams'], ['All features', 'Acc.']]
1
P19-1109table_1
SEQ vs. CAMB system results on words only and on words and phrases
2
[['Words Only', 'NEWS'], ['Words Only', 'WIKINEWS'], ['Words Only', 'WIKIPEDIA'], ['Words+Phrases', 'NEWS'], ['Words+Phrases', 'WIKINEWS'], ['Words+Phrases', 'WIKIPEDIA']]
2
[['Macro F-Scores', 'CAMB'], ['Macro F-Scores', 'SEQ']]
[['0.8633', '0.8763 (+1.30)'], ['0.8317', '0.8540 (+2.23)'], ['0.7780', '0.8140 (+3.60)'], ['0.8736', '0.8763 (+0.27)'], ['0.8400', '0.8505 (+1.05)'], ['0.8115', '0.8158 (+0.43)']]
column
['Macro F-Scores', 'Macro F-Scores']
['SEQ', 'CAMB']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Macro F-Scores || CAMB</th> <th>Macro F-Scores || SEQ</th> </tr> </thead> <tbody> <tr> <td>Words Only || NEWS</td> <td>0.8633</td> <td>0.8763 (+1.30)</td> </tr> <tr> <td>Words Only || WIKINEWS</td> <td>0.8317</td> <td>0.8540 (+2.23)</td> </tr> <tr> <td>Words Only || WIKIPEDIA</td> <td>0.7780</td> <td>0.8140 (+3.60)</td> </tr> <tr> <td>Words+Phrases || NEWS</td> <td>0.8736</td> <td>0.8763 (+0.27)</td> </tr> <tr> <td>Words+Phrases || WIKINEWS</td> <td>0.8400</td> <td>0.8505 (+1.05)</td> </tr> <tr> <td>Words+Phrases || WIKIPEDIA</td> <td>0.8115</td> <td>0.8158 (+0.43)</td> </tr> </tbody></table>
Table 1
table_1
P19-1109
4
acl2019
The results presented in Table 1 show that the SEQ system outperforms the CAMB system on all three genres on the task of binary complex word identification. The largest performance increase for words is on the WIKIPEDIA test set (+3.60%). Table 1 also shows that on the combined set of words and phrases (words+phrases) the two systems achieve similar results: the SEQ model beats the CAMB model only marginally, with the largest difference of +1.05% on the WIKINEWS data. However, it is worth highlighting that the CAMB system does not perform any phrase classification per se and simply marks all phrases as complex. Using the dataset statistics, we estimate that the CAMB system achieves a precision of 0.64. The SEQ model outperforms the CAMB system, achieving a precision of 0.71.
[1, 1, 1, 2, 1, 1]
['The results presented in Table 1 show that the SEQ system outperforms the CAMB system on all three genres on the task of binary complex word identification.', 'The largest performance increase for words is on the WIKIPEDIA test set (+3.60%).', 'Table 1 also shows that on the combined set of words and phrases (words+phrases) the two systems achieve similar results: the SEQ model beats the CAMB model only marginally, with the largest difference of +1.05% on the WIKINEWS data.', 'However, it is worth highlighting that the CAMB system does not perform any phrase classification per se and simply marks all phrases as complex.', 'Using the dataset statistics, we estimate that the CAMB system achieves a precision of 0.64.', 'The SEQ model outperforms the CAMB system, achieving a precision of 0.71.']
[['SEQ', 'CAMB'], ['WIKIPEDIA'], ['Words+Phrases', 'SEQ', 'CAMB'], ['CAMB'], ['CAMB'], ['SEQ']]
1
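The record above reports macro-averaged F-scores for binary complex word identification. A small sketch of macro F1 over two classes is given below; it is illustrative only, and in practice a library call such as sklearn.metrics.f1_score(gold, pred, average='macro') computes the same quantity.

# Hypothetical sketch: macro-averaged F1 for a binary (complex / not-complex) labelling task.

def f1_for_class(gold, pred, cls):
    tp = sum(1 for g, p in zip(gold, pred) if g == cls and p == cls)
    fp = sum(1 for g, p in zip(gold, pred) if g != cls and p == cls)
    fn = sum(1 for g, p in zip(gold, pred) if g == cls and p != cls)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0

def macro_f1(gold, pred, classes=(0, 1)):
    # Average the per-class F1 scores with equal weight per class.
    return sum(f1_for_class(gold, pred, c) for c in classes) / len(classes)

# Toy usage with made-up labels:
gold = [1, 0, 1, 1, 0]
pred = [1, 0, 0, 1, 1]
print(round(macro_f1(gold, pred), 4))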
P19-1112table_3
Experimental results in SemEval setting
3
[['Label Distribution Learning Models', 'M1', 'DL-BiLSTM+GloVe'], ['Label Distribution Learning Models', 'M2', 'DL-BiLSTM+GloVe+Att'], ['Label Distribution Learning Models', 'M3', 'DL-BiLSTM+ELMo'], ['Label Distribution Learning Models', 'M4', 'DL-BiLSTM+ELMo+Att'], ['Single Label Learning Models', 'M5', 'SL-BiLSTM+GloVe'], ['Single Label Learning Models', 'M6', 'SL-BiLSTM+GloVe+Att'], ['Single Label Learning Models', 'M7', 'SL-BiLSTM+ELMo'], ['Single Label Learning Models', 'M8', 'SL-BiLSTM+ELMo+Att'], ['Single Label Learning Models', 'M9', 'CRF']]
3
[['Model/Eval', 'Match m', 'm=1'], ['Model/Eval', 'Match m', 'm=2'], ['Model/Eval', 'Match m', 'm=3'], ['Model/Eval', 'Match m', 'm=4']]
[['54.6', '69.2', '76.5', '81.9'], ['57.5', '69.7', '76.7', '80.7'], ['0.6', '71.7', '78.7', '84.1'], ['59.6', '72.7', '77.7', '84.6'], ['51.7', '66.7', '75.0', '81.1'], ['52.9', '66.5', '73.6', '0.8'], ['54.2', '69.0', '77.9', '83.0'], ['54.2', '70.7', '78.5', '82.8'], ['45.4', '66.0', '72.8', '80.2']]
column
['Match m', 'Match m', 'Match m', 'Match m']
['M3', 'M4']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Model/Eval || Match m || m=1</th> <th>Model/Eval || Match m || m=2</th> <th>Model/Eval || Match m || m=3</th> <th>Model/Eval || Match m || m=4</th> </tr> </thead> <tbody> <tr> <td>Label Distribution Learning Models || M1 || DL-BiLSTM+GloVe</td> <td>54.6</td> <td>69.2</td> <td>76.5</td> <td>81.9</td> </tr> <tr> <td>Label Distribution Learning Models || M2 || DL-BiLSTM+GloVe+Att</td> <td>57.5</td> <td>69.7</td> <td>76.7</td> <td>80.7</td> </tr> <tr> <td>Label Distribution Learning Models || M3 || DL-BiLSTM+ELMo</td> <td>0.6</td> <td>71.7</td> <td>78.7</td> <td>84.1</td> </tr> <tr> <td>Label Distribution Learning Models || M4 || DL-BiLSTM+ELMo+Att</td> <td>59.6</td> <td>72.7</td> <td>77.7</td> <td>84.6</td> </tr> <tr> <td>Single Label Learning Models || M5 || SL-BiLSTM+GloVe</td> <td>51.7</td> <td>66.7</td> <td>75.0</td> <td>81.1</td> </tr> <tr> <td>Single Label Learning Models || M6 || SL-BiLSTM+GloVe+Att</td> <td>52.9</td> <td>66.5</td> <td>73.6</td> <td>0.8</td> </tr> <tr> <td>Single Label Learning Models || M7 || SL-BiLSTM+ELMo</td> <td>54.2</td> <td>69.0</td> <td>77.9</td> <td>83.0</td> </tr> <tr> <td>Single Label Learning Models || M8 || SL-BiLSTM+ELMo+Att</td> <td>54.2</td> <td>70.7</td> <td>78.5</td> <td>82.8</td> </tr> <tr> <td>Single Label Learning Models || M9 || CRF</td> <td>45.4</td> <td>66.0</td> <td>72.8</td> <td>80.2</td> </tr> </tbody></table>
Table 3
table_3
P19-1112
5
acl2019
We are organizing a SemEval shared task on emphasis selection called "Task 10: Emphasis Selection for Written Text in Visual Media". In order to set out a comparable baseline for this shared task, in this section, we report results of our models according to the SemEval setting defined for the task. After the submission of this paper, we continued to improve the quality of the annotated data by cleaning the data and fixing the annotations of some noisy instances. The SemEval version of Spark dataset contains 1,200 instances with a different split: 70% training, 10% development and 20% test sets. We choose Matchm as the evaluation metric for this shared task as it provides a comprehensive evaluation compared to MAX, as one can choose the value of m. Furthermore, compared to TopK, the Matchm metric can better handle cases where multiple tokens have the same label distribution according to the annotators in the ground truth. Table 3 shows the results of all nine models under the SemEval setting, using the Matchm evaluation metric. Similar to the results we showed in Table 2, M3 and M4 both perform competitively and outperform the other models.
[2, 2, 2, 2, 2, 1, 1]
['We are organizing a SemEval shared task on emphasis selection called "Task 10: Emphasis Selection for Written Text in Visual Media".', 'In order to set out a comparable baseline for this shared task, in this section, we report results of our models according to the SemEval setting defined for the task.', 'After the submission of this paper, we continued to improve the quality of the annotated data by cleaning the data and fixing the annotations of some noisy instances.', 'The SemEval version of Spark dataset contains 1,200 instances with a different split: 70% training, 10% development and 20% test sets.', 'We choose Matchm as the evaluation metric for this shared task as it provides a comprehensive evaluation compared to MAX, as one can choose the value of m. Furthermore, compared to TopK, the Matchm metric can better handle cases where multiple tokens have the same label distribution according to the annotators in the ground truth.', 'Table 3 shows the results of all nine models under the SemEval setting, using the Matchm evaluation metric.', 'Similar to the results we showed in Table 2, M3 and M4 both perform competitively and outperform the other models.']
[None, None, None, None, None, None, ['M3', 'M4']]
1
P19-1119table_1
Results of UNMT
1
[['Baseline'], ['Baseline-fix']]
1
[['Fr-En'], ['En-Fr'], ['Ja-En'], ['En-Ja']]
[['24.5', '25.37', '14.09', '21.63'], ['24.22', '25.26', '13.88', '21.93']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy']
['Baseline-fix']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Fr-En</th> <th>En-Fr</th> <th>Ja-En</th> <th>En-Ja</th> </tr> </thead> <tbody> <tr> <td>Baseline</td> <td>24.5</td> <td>25.37</td> <td>14.09</td> <td>21.63</td> </tr> <tr> <td>Baseline-fix</td> <td>24.22</td> <td>25.26</td> <td>13.88</td> <td>21.93</td> </tr> </tbody></table>
Table 1
table_1
P19-1119
3
acl2019
3.3 Analysis. The empirical results in this section show that the quality of pre-trained UBWE is important to UNMT. However, the quality of UBWE decreases significantly during UNMT training. We hypothesize that maintaining the quality of UBWE may enhance the performance of UNMT. In this subsection, we analyze some possible solutions to this issue. Use fixed embedding?. As Figure 2 shows, the UBWE performance decreases significantly during the UNMT training process. Therefore, we try to fix the embedding of the encoder and decoder on the basis of the original baseline system (Baseline-fix). Table 1 shows that the performance of the Baseline-fix system is quite similar to that of the original baseline system. In other words, Baseline-fix prevents the degradation of UBWE accuracy; however, the fixed embedding also prevents UBWE from further improving UNMT training. Therefore, the fixed UBWE does not enhance the performance of UNMT.
[2, 2, 2, 2, 2, 2, 2, 1, 1, 1, 2]
['3.3 Analysis.', ' The empirical results in this section show that the quality of pre-trained UBWE is important to UNMT.', 'However, the quality of UBWE decreases significantly during UNMT training.', 'We hypothesize that maintaining the quality of UBWE may enhance the performance of UNMT.', 'In this subsection, we analyze some possible solutions to this issue.', 'Use fixed embedding?.', 'As Figure 2 shows, the UBWE performance decreases significantly during the UNMT training process.', 'Therefore, we try to fix the embedding of the encoder and decoder on the basis of the original baseline system (Baseline-fix).', 'Table 1 shows that the performance of the Baseline-fix system is quite similar to that of the original baseline system.', 'In other words, Baseline-fix prevents the degradation of UBWE accuracy; however, the fixed embedding also prevents UBWE from further improving UNMT training.', 'Therefore, the fixed UBWE does not enhance the performance of UNMT.']
[None, None, None, None, None, None, None, ['Baseline-fix'], ['Baseline-fix', 'Baseline'], ['Baseline-fix'], None]
1
P19-1120table_2
Translation results of different transfer learning setups.
1
[['Baseline'], ['Multilingual (Johnson et al. 2017)'], ['Transfer (Zoph et al. 2016)'], [' + Cross-lingual word embedding'], [' + Artificial noises'], [' + Synthetic data']]
2
[['BLEU (%)', 'eu-en'], ['BLEU (%)', 'sl-en'], ['BLEU (%)', 'be-en'], ['BLEU (%)', 'az-en'], ['BLEU (%)', 'tr-en']]
[['1.7', '10.1', '3.2', '3.1', '0.8'], ['5.1', '16.7', '4.2', '4.5', '8.7'], ['4.9', '19.2', '8.9', '5.3', '7.4'], ['7.4', '20.6', '12.2', '7.4', '9.4'], ['8.2', '21.3', '12.8', '8.1', '10.1'], ['9.7', '22.1', '14', '9', '11.3']]
column
['BLEU (%)', 'BLEU (%)', 'BLEU (%)', 'BLEU (%)', 'BLEU (%)']
[' + Cross-lingual word embedding', ' + Artificial noises', ' + Synthetic data']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU (%) || eu-en</th> <th>BLEU (%) || sl-en</th> <th>BLEU (%) || be-en</th> <th>BLEU (%) || az-en</th> <th>BLEU (%) || tr-en</th> </tr> </thead> <tbody> <tr> <td>Baseline</td> <td>1.7</td> <td>10.1</td> <td>3.2</td> <td>3.1</td> <td>0.8</td> </tr> <tr> <td>Multilingual (Johnson et al. 2017)</td> <td>5.1</td> <td>16.7</td> <td>4.2</td> <td>4.5</td> <td>8.7</td> </tr> <tr> <td>Transfer (Zoph et al. 2016)</td> <td>4.9</td> <td>19.2</td> <td>8.9</td> <td>5.3</td> <td>7.4</td> </tr> <tr> <td>+ Cross-lingual word embedding</td> <td>7.4</td> <td>20.6</td> <td>12.2</td> <td>7.4</td> <td>9.4</td> </tr> <tr> <td>+ Artificial noises</td> <td>8.2</td> <td>21.3</td> <td>12.8</td> <td>8.1</td> <td>10.1</td> </tr> <tr> <td>+ Synthetic data</td> <td>9.7</td> <td>22.1</td> <td>14</td> <td>9</td> <td>11.3</td> </tr> </tbody></table>
Table 2
table_2
P19-1120
6
acl2019
Table 2 presents the results. Plain transfer learning already gives a boost but is still far from a satisfying quality, especially for Basque→English and Azerbaijani→English. On top of that, each of our three techniques offers clear, incremental improvements in all child language pairs with a maximum of 5.1% BLEU in total.
[1, 1, 1]
['Table 2 presents the results.', 'Plain transfer learning already gives a boost but is still far from a satisfying quality, especially for Basque→English and Azerbaijani→English.', 'On top of that, each of our three techniques offers clear, incremental improvements in all child language pairs with a maximum of 5.1% BLEU in total.']
[None, ['Transfer (Zoph et al. 2016)', 'be-en', 'az-en'], [' + Cross-lingual word embedding', ' + Artificial noises', ' + Synthetic data', 'BLEU (%)']]
1
P19-1120table_5
Translation results with different sizes of the source vocabulary.
2
[['BPE merges', '10k'], ['BPE merges', '20k'], ['BPE merges', '50k'], ['BPE merges', '70k']]
2
[['BLEU (%)', 'sl-en'], ['BLEU (%)', 'be-en']]
[['21', '11.2'], ['20.6', '12.2'], ['20.2', '10.9'], ['20', '10.9']]
column
['BLEU (%)', 'BLEU (%)']
['BPE merges']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU (%) || sl-en</th> <th>BLEU (%) || be-en</th> </tr> </thead> <tbody> <tr> <td>BPE merges || 10k</td> <td>21</td> <td>11.2</td> </tr> <tr> <td>BPE merges || 20k</td> <td>20.6</td> <td>12.2</td> </tr> <tr> <td>BPE merges || 50k</td> <td>20.2</td> <td>10.9</td> </tr> <tr> <td>BPE merges || 70k</td> <td>20</td> <td>10.9</td> </tr> </tbody></table>
Table 5
table_5
P19-1120
7
acl2019
Table 5 estimates how large the vocabulary should be for the language-switching side in NMT transfer. We varied the number of BPE merges on the source side, fixing the target vocabulary to 50k merges. The best results are with 10k or 20k BPE merges, which shows that the source vocabulary should be reasonably small to maximize the transfer performance. Fewer BPE merges lead to more language-independent tokens; it is easier for the cross-lingual embedding to find the overlaps in the shared semantic space.
[1, 2, 1, 2]
['Table 5 estimates how large the vocabulary should be for the language-switching side in NMT transfer.', 'We varied the number of BPE merges on the source side, fixing the target vocabulary to 50k merges.', 'The best results are with 10k or 20k BPE merges, which shows that the source vocabulary should be reasonably small to maximize the transfer performance.', 'Fewer BPE merges lead to more language-independent tokens; it is easier for the cross-lingual embedding to find the overlaps in the shared semantic space.']
[None, ['50k'], ['10k', '20k'], None]
1
P19-1127table_1
Relation extraction manual evaluation results: Precision of top 1000 predictions.
2
[['Precision@N', 'PCNN+ATT'], ['Precision@N', 'PCNN+ATT+GloRE'], ['Precision@N', 'PCNN+ATT+GloRE+'], ['Precision@N', 'PCNN+ATT+GloRE++']]
1
[['100'], ['300'], ['500'], ['700'], ['900'], ['1000']]
[['97', '93.7', '92.8', '89.1', '85.2', '83.9'], ['97', '97.3', '94.6', '93.3', '90.1', '89.3'], ['98', '98.7', '96.6', '93.1', '89.9', '88.8'], ['98', '97.3', '96', '93.6', '91', '89.8']]
column
['Precision', 'Precision', 'Precision', 'Precision', 'Precision', 'Precision']
['PCNN+ATT+GloRE', 'PCNN+ATT+GloRE+', 'PCNN+ATT+GloRE++']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>100</th> <th>300</th> <th>500</th> <th>700</th> <th>900</th> <th>1000</th> </tr> </thead> <tbody> <tr> <td>Precision@N || PCNN+ATT</td> <td>97</td> <td>93.7</td> <td>92.8</td> <td>89.1</td> <td>85.2</td> <td>83.9</td> </tr> <tr> <td>Precision@N || PCNN+ATT+GloRE</td> <td>97</td> <td>97.3</td> <td>94.6</td> <td>93.3</td> <td>90.1</td> <td>89.3</td> </tr> <tr> <td>Precision@N || PCNN+ATT+GloRE+</td> <td>98</td> <td>98.7</td> <td>96.6</td> <td>93.1</td> <td>89.9</td> <td>88.8</td> </tr> <tr> <td>Precision@N || PCNN+ATT+GloRE++</td> <td>98</td> <td>97.3</td> <td>96</td> <td>93.6</td> <td>91</td> <td>89.8</td> </tr> </tbody></table>
Table 1
table_1
P19-1127
4
acl2019
Same as (Su et al., 2018), we use PCNN+ATT (Lin et al., 2016) as our base model. GloRE++ improves its best F1-score from 42.7% to 45.2%, slightly outperforming the previous state-of-the-art (GloRE, 44.7%). As shown in previous work (Su et al., 2018), on the NYT dataset, due to a significant amount of false negatives, the PR curve on the held-out set may not be an accurate measure of performance. Therefore, we mainly employ manual evaluation. We invite graduate students to check the top 1000 predictions of each method. They are presented with the entity pair, the prediction, and all the contextual sentences of the entity pair. Each prediction is examined by two students until reaching an agreement after discussion. Besides, the students are not aware of the source of the predictions. Table 1 shows the manual evaluation results. Both GloRE+ and GloRE++ get improvements over GloRE. GloRE++ obtains the best results for the top 700, 900 and 1000 predictions.
[2, 1, 2, 2, 2, 2, 2, 2, 1, 1, 1]
['Same as (Su et al., 2018), we use PCNN+ATT (Lin et al., 2016) as our base model.', 'GloRE++ improves its best F1-score from 42.7% to 45.2%, slightly outperforming the previous state-of-the-art (GloRE, 44.7%).', 'As shown in previous work (Su et al., 2018), on the NYT dataset, due to a significant amount of false negatives, the PR curve on the held-out set may not be an accurate measure of performance.', 'Therefore, we mainly employ manual evaluation.', 'We invite graduate students to check the top 1000 predictions of each method.', 'They are presented with the entity pair, the prediction, and all the contextual sentences of the entity pair.', 'Each prediction is examined by two students until reaching an agreement after discussion.', 'Besides, the students are not aware of the source of the predictions.', 'Table 1 shows the manual evaluation results.', 'Both GloRE+ and GloRE++ get improvements over GloRE.', 'GloRE++ obtains the best results for the top 700, 900 and 1000 predictions.']
[['PCNN+ATT'], ['PCNN+ATT+GloRE++', 'PCNN+ATT+GloRE'], None, None, ['1000'], None, None, None, None, ['PCNN+ATT+GloRE+', 'PCNN+ATT+GloRE++', 'PCNN+ATT+GloRE'], ['PCNN+ATT+GloRE++', '700', '900', '1000']]
1
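The record above reports Precision@N over the top-ranked predictions after manual checking. A minimal sketch of that computation is shown below, assuming predictions are already sorted by model confidence and each carries a manually assigned correct/incorrect label; the data layout is an illustrative assumption, not the paper's code.

# Hypothetical sketch: Precision@N over model predictions ranked by confidence,
# where each entry is 1 if the prediction was judged correct and 0 otherwise.

def precision_at_n(ranked_labels, n):
    top = ranked_labels[:n]
    return 100.0 * sum(top) / len(top) if top else 0.0

# Toy usage: labels already sorted by model confidence.
ranked_labels = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
for n in (5, 10):
    print(n, precision_at_n(ranked_labels, n))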
P19-1130table_1
Comparison between our model and the stateof-the-art models using ACE 2005 English corpus. F1scores higher than the state-of-the-art are in bold.
2
[['Model', 'SPTree'], ['Model', 'Walk-based'], ['Model', 'Baseline'], ['Model', 'Baseline+Tag'], ['Model', 'Baseline+MTL'], ['Model', 'Baseline+MTL+Tag']]
1
[['P%'], ['R%'], ['F1%']]
[['70.1', '61.2', '65.3'], ['69.7', '59.5', '64.2'], ['58.8', '57.3', '57.2'], ['61.3', '76.7', '67.4'], ['63.8', '56.1', '59.5'], ['66.5', '71.8', '68.9']]
column
['P%', 'R%', 'F1%']
['Baseline+MTL+Tag']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>P%</th> <th>R%</th> <th>F1%</th> </tr> </thead> <tbody> <tr> <td>Model || SPTree</td> <td>70.1</td> <td>61.2</td> <td>65.3</td> </tr> <tr> <td>Model || Walk-based</td> <td>69.7</td> <td>59.5</td> <td>64.2</td> </tr> <tr> <td>Model || Baseline</td> <td>58.8</td> <td>57.3</td> <td>57.2</td> </tr> <tr> <td>Model || Baseline+Tag</td> <td>61.3</td> <td>76.7</td> <td>67.4</td> </tr> <tr> <td>Model || Baseline+MTL</td> <td>63.8</td> <td>56.1</td> <td>59.5</td> </tr> <tr> <td>Model || Baseline+MTL+Tag</td> <td>66.5</td> <td>71.8</td> <td>68.9</td> </tr> </tbody></table>
Table 1
table_1
P19-1130
6
acl2019
Results. From Table 1, we can see both BIO tag embeddings and multi-task learning can improve the performance of the baseline model. Baseline+Tag can outperform the state-of-the-art models on both the Chinese and English corpora. Compared to the baseline model, BIO tag embeddings lead to an absolute increase of about 10% in F1-score, which indicates that BIO tag embeddings are very effective. Multi-task learning can yield further improvement in addition to BIO tag embeddings: Baseline+MTL+Tag achieves the highest F1-score on both corpora.
[2, 1, 1, 1, 1]
['Results.', 'From Table 1, we can see both BIO tag embeddings and multi-task learning can improve the performance of the baseline model.', 'Baseline+Tag can outperform the state-of-the-art models on both the Chinese and English corpora.', 'Compared to the baseline model, BIO tag embeddings lead to an absolute increase of about 10% in F1-score, which indicates that BIO tag embeddings are very effective.', 'Multi-task learning can yield further improvement in addition to BIO tag embeddings: Baseline+MTL+Tag achieves the highest F1-score on both corpora.']
[None, ['Baseline+MTL+Tag'], ['Baseline+Tag'], None, ['Baseline+MTL+Tag', 'F1%']]
1
P19-1136table_1
Results for both NYT and WebNLG datasets.
2
[['Method', 'NovelTagging'], ['Method', 'OneDecoder'], ['Method', 'MultiDecoder'], ['Method', 'GraphRel1p'], ['Method', 'GraphRel2p']]
2
[['NYT', 'Precision'], ['NYT', 'Recall'], ['NYT', 'F1'], ['WebNLG', 'Precision'], ['WebNLG', 'Recall'], ['WebNLG', 'F1']]
[['62.40%', '31.70%', '42.00%', '52.50%', '19.30%', '28.30%'], ['59.40%', '53.10%', '56.00%', '32.20%', '28.90%', '30.50%'], ['61.00%', '56.60%', '58.70%', '37.70%', '36.40%', '37.10%'], ['62.90%', '57.30%', '60.00%', '42.30%', '39.20%', '40.70%'], ['63.90%', '60.00%', '61.90%', '44.70%', '41.10%', '42.90%']]
column
['precision', 'recall', 'F1', 'precision', 'recall', 'F1']
['GraphRel1p', 'GraphRel2p']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>NYT || Precision</th> <th>NYT || Recall</th> <th>NYT || F1</th> <th>WebNLG || Precision</th> <th>WebNLG || Recall</th> <th>WebNLG || F1</th> </tr> </thead> <tbody> <tr> <td>Method || NovelTagging</td> <td>62.40%</td> <td>31.70%</td> <td>42.00%</td> <td>52.50%</td> <td>19.30%</td> <td>28.30%</td> </tr> <tr> <td>Method || OneDecoder</td> <td>59.40%</td> <td>53.10%</td> <td>56.00%</td> <td>32.20%</td> <td>28.90%</td> <td>30.50%</td> </tr> <tr> <td>Method || MultiDecoder</td> <td>61.00%</td> <td>56.60%</td> <td>58.70%</td> <td>37.70%</td> <td>36.40%</td> <td>37.10%</td> </tr> <tr> <td>Method || GraphRel1p</td> <td>62.90%</td> <td>57.30%</td> <td>60.00%</td> <td>42.30%</td> <td>39.20%</td> <td>40.70%</td> </tr> <tr> <td>Method || GraphRel2p</td> <td>63.90%</td> <td>60.00%</td> <td>61.90%</td> <td>44.70%</td> <td>41.10%</td> <td>42.90%</td> </tr> </tbody></table>
Table 1
table_1
P19-1136
6
acl2019
5.3 Quantitative Results. Table 1 presents the precision, recall, and F1 score of NovelTagging, MultiDecoder, and GraphRel for both the NYT and WebNLG datasets. OneDecoder, proposed in MultiDecoder's original paper, uses only a single decoder to extract relation triplets. GraphRel1p is the proposed method with only the 1st-phase, and GraphRel2p is the complete version, which predicts relations and entities after the 2nd-phase. For the NYT dataset, we see that GraphRel1p outperforms NovelTagging by 18.0%, OneDecoder by 4.0%, and MultiDecoder by 1.3% in terms of F1. As it acquires both sequential and regional dependency word features, GraphRel1p performs better on both precision and recall, resulting in a higher F1 score. With the relation-weighted GCN in the 2nd-phase, GraphRel2p, which considers the interaction between named entities and relations, further surpasses MultiDecoder by 3.2% and yields a 1.9% improvement in comparison with GraphRel1p.
[2, 1, 2, 2, 1, 1, 1]
['5.3 Quantitative Results.', 'Table 1 presents the precision, recall, and F1 score of NovelTagging, MultiDecoder, and GraphRel for both the NYT and WebNLG datasets.', "OneDecoder, proposed in MultiDecoder's original paper, uses only a single decoder to extract relation triplets.", 'GraphRel1p is the proposed method with only the 1st-phase, and GraphRel2p is the complete version, which predicts relations and entities after the 2nd-phase.', 'For the NYT dataset, we see that GraphRel1p outperforms NovelTagging by 18.0%, OneDecoder by 4.0%, and MultiDecoder by 1.3% in terms of F1.', 'As it acquires both sequential and regional dependency word features, GraphRel1p performs better on both precision and recall, resulting in a higher F1 score.', 'With the relation-weighted GCN in the 2nd-phase, GraphRel2p, which considers the interaction between named entities and relations, further surpasses MultiDecoder by 3.2% and yields a 1.9% improvement in comparison with GraphRel1p.']
[None, ['NovelTagging', 'MultiDecoder', 'GraphRel1p', 'GraphRel2p', 'Precision', 'Recall', 'F1'], ['OneDecoder', 'MultiDecoder'], ['GraphRel1p', 'GraphRel2p'], ['NYT', 'GraphRel1p', 'NovelTagging', 'OneDecoder', 'MultiDecoder'], ['GraphRel1p', 'Precision', 'Recall', 'F1'], ['GraphRel2p', 'MultiDecoder', 'GraphRel1p']]
1
P19-1137table_3
Total diagnostic results, where columns contain the precision, recall and accuracy of DS-generated labels evaluated on 200 human-annotated labels as well as the number of positive and negative patterns preserved after the pattern-refinement stage, and we underline some cases in which DS performs poorly.
1
[['R0'], ['R1'], ['R2'], ['R3'], ['R4'], ['R5'], ['R6'], ['R7'], ['R8'], ['R9'], ['R6u'], ['R7u'], ['R8u'], ['R9u']]
1
[['Prec.'], ['Recall'], ['Acc.'], ['#Pos.'], ['#Neg.']]
[['100', '81.8', '82', '20', '0'], ['93.9', '33.5', '36.2', '18', '0'], ['75.7', '88', '76.5', '9', '5'], ['100', '91.4', '92', '20', '0'], ['93.3', '72.4', '80.9', '10', '2'], ['93.8', '77.3', '86.5', '15', '0'], ['88.3', '76.9', '75.1', '14', '0'], ['91.9', '64.6', '64', '20', '0'], ['29.3', '30.4', '60', '4', '10'], ['66.7', '38.1', '74.4', '6', '11'], ['81.8', '90.7', '81', '7', '0'], ['93.5', '70.7', '68.3', '17', '1'], ['35', '70', '60', '4', '15'], ['87.5', '59.2', '67.7', '12', '5']]
column
['Prec.', 'Recall', 'Acc.', '#Pos.', '#Neg.']
None
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Prec.</th> <th>Recall</th> <th>Acc.</th> <th>#Pos.</th> <th>#Neg.</th> </tr> </thead> <tbody> <tr> <td>R0</td> <td>100</td> <td>81.8</td> <td>82</td> <td>20</td> <td>0</td> </tr> <tr> <td>R1</td> <td>93.9</td> <td>33.5</td> <td>36.2</td> <td>18</td> <td>0</td> </tr> <tr> <td>R2</td> <td>75.7</td> <td>88</td> <td>76.5</td> <td>9</td> <td>5</td> </tr> <tr> <td>R3</td> <td>100</td> <td>91.4</td> <td>92</td> <td>20</td> <td>0</td> </tr> <tr> <td>R4</td> <td>93.3</td> <td>72.4</td> <td>80.9</td> <td>10</td> <td>2</td> </tr> <tr> <td>R5</td> <td>93.8</td> <td>77.3</td> <td>86.5</td> <td>15</td> <td>0</td> </tr> <tr> <td>R6</td> <td>88.3</td> <td>76.9</td> <td>75.1</td> <td>14</td> <td>0</td> </tr> <tr> <td>R7</td> <td>91.9</td> <td>64.6</td> <td>64</td> <td>20</td> <td>0</td> </tr> <tr> <td>R8</td> <td>29.3</td> <td>30.4</td> <td>60</td> <td>4</td> <td>10</td> </tr> <tr> <td>R9</td> <td>66.7</td> <td>38.1</td> <td>74.4</td> <td>6</td> <td>11</td> </tr> <tr> <td>R6u</td> <td>81.8</td> <td>90.7</td> <td>81</td> <td>7</td> <td>0</td> </tr> <tr> <td>R7u</td> <td>93.5</td> <td>70.7</td> <td>68.3</td> <td>17</td> <td>1</td> </tr> <tr> <td>R8u</td> <td>35</td> <td>70</td> <td>60</td> <td>4</td> <td>15</td> </tr> <tr> <td>R9u</td> <td>87.5</td> <td>59.2</td> <td>67.7</td> <td>12</td> <td>5</td> </tr> </tbody></table>
Table 3
table_3
P19-1137
7
acl2019
4.3 Pattern-based Diagnostic Results. Besides improving the extraction performance, DIAG-NRE can interpret different noise effects caused by DS via refined patterns, as Table 3 shows. Next, we elaborate on these diagnostic results and the corresponding performance degradation of NRE models from two perspectives: false negatives (FN) and false positives (FP).
[1, 1, 1]
['4.3 Pattern-based Diagnostic Results.', 'Besides improving the extraction performance, DIAG-NRE can interpret different noise effects caused by DS via refined patterns, as Table 3 shows.', 'Next, we elaborate on these diagnostic results and the corresponding performance degradation of NRE models from two perspectives: false negatives (FN) and false positives (FP).']
[None, None, None]
1
P19-1140table_3
Overall performance.
1
[['MTransE'], ['JAPE'], ['AlignEA'], ['GCN-Align'], ['MuGNN w/o Asr'], ['MuGNN']]
3
[['Methods', 'DBPZH-EN', 'H@1'], ['Methods', 'DBPZH-EN', 'H@10'], ['Methods', 'DBPZH-EN', 'MRR'], ['Methods', 'DBPJA-EN', 'H@1'], ['Methods', 'DBPJA-EN', 'H@10'], ['Methods', 'DBPJA-EN', 'MRR'], ['Methods', 'DBPFR-EN', 'H@1'], ['Methods', 'DBPFR-EN', 'H@10'], ['Methods', 'DBPFR-EN', 'MRR'], ['Methods', 'DBP-WD', 'H@1'], ['Methods', 'DBP-WD', 'H@10'], ['Methods', 'DBP-WD', 'MRR'], ['Methods', 'DBP-YG', 'H@1'], ['Methods', 'DBP-YG', 'H@10'], ['Methods', 'DBP-YG', 'MRR']]
[['0.308', '0.614', '0.364', '0.279', '0.575', '0.349', '0.244', '0.556', '0.335', '0.281', '0.52', '0.363', '0.252', '0.493', '0.334'], ['0.412', '0.745', '0.49', '0.363', '0.685', '0.476', '0.324', '0.667', '0.43', '0.318', '0.589', '0.411', '0.236', '0.484', '0.32'], ['0.472', '0.792', '0.581', '0.448', '0.789', '0.563', '0.481', '0.824', '0.599', '0.566', '0.827', '0.655', '0.633', '0.848', '0.707'], ['0.413', '0.744', '0.549', '0.399', '0.745', '0.546', '0.373', '0.745', '0.532', '0.506', '0.772', '0.6', '0.597', '0.838', '0.682'], ['0.479', '0.833', '0.597', '0.487', '0.851', '0.604', '0.496', '0.869', '0.621', '0.59', '0.887', '0.693', '0.73', '0.934', '0.801'], ['0.494', '0.844', '0.611', '0.501', '0.857', '0.621', '0.495', '0.87', '0.621', '0.616', '0.897', '0.714', '0.741', '0.937', '0.81']]
column
['H@1', 'H@10', 'MRR', 'H@1', 'H@10', 'MRR', 'H@1', 'H@10', 'MRR', 'H@1', 'H@10', 'MRR', 'H@1', 'H@10', 'MRR']
['MuGNN']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Methods || DBPZH-EN || H@1</th> <th>Methods || DBPZH-EN || H@10</th> <th>Methods || DBPZH-EN || MRR</th> <th>Methods || DBPJA-EN || H@1</th> <th>Methods || DBPJA-EN || H@10</th> <th>Methods || DBPJA-EN || MRR</th> <th>Methods || DBPFR-EN || H@1</th> <th>Methods || DBPFR-EN || H@10</th> <th>Methods || DBPFR-EN || MRR</th> <th>Methods || DBP-WD || H@1</th> <th>Methods || DBP-WD || H@10</th> <th>Methods || DBP-WD || MRR</th> <th>Methods || DBP-YG || H@1</th> <th>Methods || DBP-YG || H@10</th> <th>Methods || DBP-YG || MRR</th> </tr> </thead> <tbody> <tr> <td>MTransE</td> <td>0.308</td> <td>0.614</td> <td>0.364</td> <td>0.279</td> <td>0.575</td> <td>0.349</td> <td>0.244</td> <td>0.556</td> <td>0.335</td> <td>0.281</td> <td>0.52</td> <td>0.363</td> <td>0.252</td> <td>0.493</td> <td>0.334</td> </tr> <tr> <td>JAPE</td> <td>0.412</td> <td>0.745</td> <td>0.49</td> <td>0.363</td> <td>0.685</td> <td>0.476</td> <td>0.324</td> <td>0.667</td> <td>0.43</td> <td>0.318</td> <td>0.589</td> <td>0.411</td> <td>0.236</td> <td>0.484</td> <td>0.32</td> </tr> <tr> <td>AlignEA</td> <td>0.472</td> <td>0.792</td> <td>0.581</td> <td>0.448</td> <td>0.789</td> <td>0.563</td> <td>0.481</td> <td>0.824</td> <td>0.599</td> <td>0.566</td> <td>0.827</td> <td>0.655</td> <td>0.633</td> <td>0.848</td> <td>0.707</td> </tr> <tr> <td>GCN-Align</td> <td>0.413</td> <td>0.744</td> <td>0.549</td> <td>0.399</td> <td>0.745</td> <td>0.546</td> <td>0.373</td> <td>0.745</td> <td>0.532</td> <td>0.506</td> <td>0.772</td> <td>0.6</td> <td>0.597</td> <td>0.838</td> <td>0.682</td> </tr> <tr> <td>MuGNN w/o Asr</td> <td>0.479</td> <td>0.833</td> <td>0.597</td> <td>0.487</td> <td>0.851</td> <td>0.604</td> <td>0.496</td> <td>0.869</td> <td>0.621</td> <td>0.59</td> <td>0.887</td> <td>0.693</td> <td>0.73</td> <td>0.934</td> <td>0.801</td> </tr> <tr> <td>MuGNN</td> <td>0.494</td> <td>0.844</td> <td>0.611</td> <td>0.501</td> <td>0.857</td> <td>0.621</td> <td>0.495</td> <td>0.87</td> <td>0.621</td> <td>0.616</td> <td>0.897</td> <td>0.714</td> <td>0.741</td> <td>0.937</td> <td>0.81</td> </tr> </tbody></table>
Table 3
table_3
P19-1140
7
acl2019
5.2 Overall Performance. Table 3 shows the results on DBP15K and DWY100K. In general, MuGNN significantly outperforms all baselines regarding all metrics, mainly because it reconciles the structural differences by two different schemes for KG completion and pruning, which are thus well modeled by the multi-channel GNN.
[2, 1, 1]
['5.2 Overall Performance.', 'Table 3 shows the results on DBP15K and DWY100K.', 'In general, MuGNN significantly outperforms all baselines regarding all metrics, mainly because it reconciles the structural differences by two different schemes for KG completion and pruning, which are thus well modeled by the multi-channel GNN.']
[None, None, ['MuGNN']]
1
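The record above scores entity alignment with Hits@k (H@k) and MRR. A short sketch of both metrics computed from the rank of the correct counterpart entity is given below; the rank list is an illustrative assumption about how the evaluation data is laid out, not the paper's evaluation code.

# Hypothetical sketch: Hits@k and MRR from the gold entity's rank in each candidate list.
# `ranks` holds the 1-based rank of the correct counterpart entity per test pair.

def hits_at_k(ranks, k):
    return sum(1 for r in ranks if r <= k) / len(ranks)

def mean_reciprocal_rank(ranks):
    return sum(1.0 / r for r in ranks) / len(ranks)

# Toy usage with made-up ranks:
ranks = [1, 3, 2, 15, 1]
print(hits_at_k(ranks, 1))                    # 0.4
print(hits_at_k(ranks, 10))                   # 0.8
print(round(mean_reciprocal_rank(ranks), 3))  # 0.58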
P19-1147table_3
Comparison of LSTM and BERT models under human evaluations against GS-EC attack. Readability is a relative quality score between models, and Human Accuracy is the percentage that human raters correctly identify the adversarial examples.
1
[['LSTM'], ['BERT']]
1
[['Readability'], ['Human Accuracy']]
[['0.6', '52.10%'], ['1', '68.80%']]
column
['Readability', 'Human Accuracy']
['BERT']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Readability</th> <th>Human Accuracy</th> </tr> </thead> <tbody> <tr> <td>LSTM</td> <td>0.6</td> <td>52.10%</td> </tr> <tr> <td>BERT</td> <td>1</td> <td>68.80%</td> </tr> </tbody></table>
Table 3
table_3
P19-1147
5
acl2019
The human accuracy metric is the percentage of human responses that matches the true label. Table 3 is a comparison of LSTM and BERT models using the GS-EC attack. It shows that the distance in embeddings space of BERT can better reflect semantic similarity and contribute to more natural adversarial examples. And, in Table 4, we compare using GS-GR and GS-EC method on BERT model. Again, we see that the GS-EC method, which restricts the distance between sentence embeddings of original and adversarial inputs, can produce superior adversaries.
[2, 1, 1, 0, 0]
['The human accuracy metric is the percentage of human responses that matches the true label.', 'Table 3 is a comparison of LSTM and BERT models using the GS-EC attack.', 'It shows that the distance in embeddings space of BERT can better reflect semantic similarity and contribute to more natural adversarial examples.', 'And, in Table 4, we compare using GS-GR and GS-EC method on BERT model.', 'Again, we see that the GS-EC method, which restricts the distance between sentence embeddings of original and adversarial inputs, can produce superior adversaries.']
[None, ['LSTM', 'BERT'], ['BERT'], None, None]
1
P19-1153table_2
SST-5 and SST-2 performance on all and root nodes respectively. Model results in the first section are from the Stanford Treebank paper (2013). GenSen and BERTBASE results are from (Subramanian et al., 2018) and (Devlin et al., 2018) respectively.
1
[['NB'], ['SVM'], ['BiNB'], ['VecAvg'], ['RNN'], ['MV-RNN'], ['RNTN'], ['RAE'], ['GenSen'], ['RAE + GenSen'], ['BERTBASE']]
1
[['SST-5(All)'], ['SST-2(Root)']]
[['67.2', '81.8'], ['64.3', '79.4'], ['71', '83.1'], ['73.3', '80.1'], ['79', '82.4'], ['78.7', '82.9'], ['80.7', '85.4'], ['81.07', '83'], ['-', '84.5'], ['-', '86.43'], ['-', '93.5']]
column
['accuracy', 'accuracy']
['BERTBASE']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>SST-5(All)</th> <th>SST-2(Root)</th> </tr> </thead> <tbody> <tr> <td>NB</td> <td>67.2</td> <td>81.8</td> </tr> <tr> <td>SVM</td> <td>64.3</td> <td>79.4</td> </tr> <tr> <td>BiNB</td> <td>71</td> <td>83.1</td> </tr> <tr> <td>VecAvg</td> <td>73.3</td> <td>80.1</td> </tr> <tr> <td>RNN</td> <td>79</td> <td>82.4</td> </tr> <tr> <td>MV-RNN</td> <td>78.7</td> <td>82.9</td> </tr> <tr> <td>RNTN</td> <td>80.7</td> <td>85.4</td> </tr> <tr> <td>RAE</td> <td>81.07</td> <td>83</td> </tr> <tr> <td>GenSen</td> <td>-</td> <td>84.5</td> </tr> <tr> <td>RAE + GenSen</td> <td>-</td> <td>86.43</td> </tr> <tr> <td>BERTBASE</td> <td>-</td> <td>93.5</td> </tr> </tbody></table>
Table 2
table_2
P19-1153
5
acl2019
We present in Table 2 results for fine-grained sentiment analysis on all nodes as well as comparison with recent state-of-the-art methods on binary sentiment classification of the root node. For the five class sentiment task, we compare our model with the original Sentiment Treebank results and beat all the models. In order to compare our approach with state-of-the-art methods, we also trained our model on the binary classification task with sole classification of the root node. Other presented models are GenSen (Subramanian et al., 2018) and BERTBASE (Devlin et al., 2018). Both these recent methods perform extremely well on multiple natural language processing tasks. We set the RAE embedding size demb to 1024. Larger embedding sizes did not improve the accuracy of our model for this task. In this setting, the RAE has 11M parameters, while the models we compare with, GenSen and BERTBASE, have respectively 100M and 110M parameters. Both our model and GenSen fail to beat the RNTN model for the SST-2 task. We see an improvement in accuracy when combining both methods' embeddings, surpassing every model in the SST paper, while being close to BERTBASE's performance.
[2, 2, 2, 2, 1, 2, 2, 2, 1, 1]
['We present in Table 2 results for fine-grained sentiment analysis on all nodes as well as comparison with recent state-of-the-art methods on binary sentiment classification of the root node.', 'For the five class sentiment task, we compare our model with the original Sentiment Treebank results and beat all the models.', 'In order to compare our approach with state-of-the-art methods, we also trained our model on the binary classification task with sole classification of the root node.', 'Other presented models are GenSen (Subramanian et al., 2018) and BERTBASE (Devlin et al., 2018).', 'Both these recent methods perform extremely well on multiple natural language processing tasks.', 'We set the RAE embedding size demb to 1024.', 'Larger embedding sizes did not improve the accuracy of our model for this task.', 'In this setting, the RAE has 11M parameters, while the models we compare with, GenSen and BERTBASE, have respectively 100M and 110M parameters.', 'Both our model and GenSen fail to beat the RNTN model for the SST-2 task.', "We see an improvement in accuracy when combining both methods' embeddings, surpassing every model in the SST paper, while being close to BERTBASE's performance."]
[None, None, None, None, ['GenSen', 'BERTBASE'], ['RAE'], None, ['RAE', 'GenSen', 'BERTBASE'], ['GenSen', 'RNTN', 'SST-2(Root)'], ['BERTBASE']]
1
P19-1173table_3
The results (FEATS) of the learning curve over the EGY training dataset, for the EGY dataset alone, multitask learning (MTL), and the adversarial training (ADV). We do not use morphological analyzers here, so the results are not comparable to Table 2.
2
[['EGY Train Size', '2K (1.5%)'], ['EGY Train Size', '8K (6%)'], ['EGY Train Size', '16K (12%)'], ['EGY Train Size', '33K (25%)'], ['EGY Train Size', '67K (50%)'], ['EGY Train Size', '134K (100%)']]
2
[['EGY', 'None'], ['MSA-EGY', 'MTL'], ['MSA-EGY', 'ADV']]
[['29.7', '61.9', '71.1'], ['62.5', '73.5', '78.3'], ['74.7', '78.1', '81.5'], ['80.7', '81.6', '83.5'], ['83.3', '82', '84'], ['84.5', '85.4', '85.6']]
column
['accuracy', 'accuracy', 'accuracy']
['MSA-EGY']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>EGY || None</th> <th>MSA-EGY || MTL</th> <th>MSA-EGY || ADV</th> </tr> </thead> <tbody> <tr> <td>EGY Train Size || 2K (1.5%)</td> <td>29.7</td> <td>61.9</td> <td>71.1</td> </tr> <tr> <td>EGY Train Size || 8K (6%)</td> <td>62.5</td> <td>73.5</td> <td>78.3</td> </tr> <tr> <td>EGY Train Size || 16K (12%)</td> <td>74.7</td> <td>78.1</td> <td>81.5</td> </tr> <tr> <td>EGY Train Size || 33K (25%)</td> <td>80.7</td> <td>81.6</td> <td>83.5</td> </tr> <tr> <td>EGY Train Size || 67K (50%)</td> <td>83.3</td> <td>82</td> <td>84</td> </tr> <tr> <td>EGY Train Size || 134K (100%)</td> <td>84.5</td> <td>85.4</td> <td>85.6</td> </tr> </tbody></table>
Table 3
table_3
P19-1173
8
acl2019
Table 3 shows the results. Multitask learning with MSA consistently outperforms the models that use EGY data only. The accuracy almost doubles in the 2K model. We also notice that the accuracy gap increases as the EGY training dataset size decreases, highlighting the importance of joint modeling with MSA in low-resource DA settings. The adversarial adaptation results in the learning curve further show a significant increase in accuracy with decreasing training data size, compared to the multitask learning results. The model seems to be facilitating more efficient knowledge transfer, especially for the lower-resource EGY experiments. We can also observe that for the extreme low-resource setting, we can double the accuracy through adversarial multitask learning, achieving about 58% relative error reduction.
[1, 1, 1, 1, 1, 1, 1]
['Table 3 shows the results.', 'Multitask learning with MSA consistently outperforms the models that use EGY data only.', 'The accuracy almost doubles in the 2K model.', 'We also notice that the accuracy gap increases as the EGY training dataset size decreases, highlighting the importance of joint modeling with MSA in low-resource DA settings.', 'The adversarial adaptation results in the learning curve further show a significant increase in accuracy with decreasing training data size, compared to the multitask learning results.', 'The model seems to be facilitating more efficient knowledge transfer, especially for the lower-resource EGY experiments.', 'We can also observe that for the extreme low-resource setting, we can double the accuracy through adversarial multitask learning, achieving about 58% relative error reduction.']
[None, ['MTL', 'MSA-EGY', 'EGY'], ['2K (1.5%)'], ['EGY Train Size', 'MSA-EGY'], ['ADV'], None, ['ADV']]
1
P19-1175table_6
Test results EN-NL (all sentences).
2
[['System', 'Baseline NMT'], ['System', 'Baseline SMT'], ['System', 'Baseline TM-SMT'], ['System', 'Google Translate'], ['System', 'Best NFR + NMT backoff'], ['System', 'Best NFR unified']]
1
[['BLEU'], ['TER'], ['MET.']]
[['51.45', '36.21', '69.83'], ['54.21', '35.99', '71.28'], ['55.72', '34.96', '72.25'], ['44.37', '41.51', '65.07'], ['58.91', '31.36', '74.12'], ['58.6', '31.57', '73.96']]
column
['bleu', 'ter', 'met.']
['Best NFR + NMT backoff']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU</th> <th>TER</th> <th>MET.</th> </tr> </thead> <tbody> <tr> <td>System || Baseline NMT</td> <td>51.45</td> <td>36.21</td> <td>69.83</td> </tr> <tr> <td>System || Baseline SMT</td> <td>54.21</td> <td>35.99</td> <td>71.28</td> </tr> <tr> <td>System || Baseline TM-SMT</td> <td>55.72</td> <td>34.96</td> <td>72.25</td> </tr> <tr> <td>System || Google Translate</td> <td>44.37</td> <td>41.51</td> <td>65.07</td> </tr> <tr> <td>System || Best NFR + NMT backoff</td> <td>58.91</td> <td>31.36</td> <td>74.12</td> </tr> <tr> <td>System || Best NFR unified</td> <td>58.6</td> <td>31.57</td> <td>73.96</td> </tr> </tbody></table>
Table 6
table_6
P19-1175
6
acl2019
5.3 Test set evaluation Table 6 contains the results for EN-NL for the entire test set (3207 sentences). The dedicated NFR + NMT backoff approach outperforms all baseline systems, scoring +3.19 BLEU, -3.6 TER and +1.87 METEOR points compared to the best baseline (TM-SMT). Compared to the NMT baseline, the difference is 7.46 BLEU points. The best unified NFR system (Unified F3) scores only slightly worse than the approach with a dedicated NFR system and NMT backoff. Both NFR systems score significantly higher than the best baseline in terms of BLEU (p < 0.001). We note that the baseline SMT outperforms the baseline NMT, which in turn obtains better scores than Google Translate on this data set.
[1, 1, 1, 1, 1, 1]
['5.3 Test set evaluation Table 6 contains the results for EN-NL for the entire test set (3207 sentences).', 'The dedicated NFR + NMT backoff approach outperforms all baseline systems, scoring +3.19 BLEU, -3.6 TER and +1.87 METEOR points compared to the best baseline (TM-SMT).', 'Compared to the NMT baseline, the difference is 7.46 BLEU points.', 'The best unified NFR system (Unified F3) scores only slightly worse than the approach with a dedicated NFR system and NMT backoff.', 'Both NFR systems score significantly higher than the best baseline in terms of BLEU (p < 0.001).', 'We note that the baseline SMT outperforms the baseline NMT, which in turn obtains better scores than Google Translate on this data set.']
[None, ['Best NFR + NMT backoff', 'BLEU', 'TER', 'MET.'], ['Baseline NMT', 'BLEU'], ['Best NFR unified', 'Best NFR + NMT backoff'], ['Best NFR + NMT backoff', 'Best NFR unified'], ['Baseline SMT', 'Baseline NMT']]
1
P19-1183table_4
Performance comparisons on the Person-sentence dataset (Yamaguchi et al., 2017).
2
[['Methods', 'Random'], ['Methods', 'Proposal upper bound'], ['Methods', 'DVSA+Avg'], ['Methods', 'DVSA+NetVLAD'], ['Methods', 'DVSA+LSTM'], ['Methods', 'GroundeR+Avg'], ['Methods', 'GroundeR+NetVLAD'], ['Methods', 'GroundeR+LSTM'], ['Methods', 'Ours w/o L div'], ['Methods', 'Ours']]
2
[['Accuracy', '0.4'], ['Accuracy', '0.5'], ['Accuracy', '0.6'], ['Accuracy', 'Average']]
[['15.1', '7.2', '3.5', '8.6'], ['89.8', '79.9', '64.1', '77.9'], ['39.8', '30.3', '19.7', '29.9'], ['34.1', '25', '18.3', '25.8'], ['42.7', '30.2', '20', '31'], ['45.5', '32.2', '21.7', '33.1'], ['22.1', '16.1', '8.6', '15.6'], ['39.9', '28.2', '17.7', '28.6'], ['57.9', '47.7', '35.6', '47.1'], ['62.5', '52', '38.4', '51']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy']
['Ours', 'Ours w/o L div', 'Proposal upper bound']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy || 0.4</th> <th>Accuracy || 0.5</th> <th>Accuracy || 0.6</th> <th>Accuracy || Average</th> </tr> </thead> <tbody> <tr> <td>Methods || Random</td> <td>15.1</td> <td>7.2</td> <td>3.5</td> <td>8.6</td> </tr> <tr> <td>Methods || Proposal upper bound</td> <td>89.8</td> <td>79.9</td> <td>64.1</td> <td>77.9</td> </tr> <tr> <td>Methods || DVSA+Avg</td> <td>39.8</td> <td>30.3</td> <td>19.7</td> <td>29.9</td> </tr> <tr> <td>Methods || DVSA+NetVLAD</td> <td>34.1</td> <td>25</td> <td>18.3</td> <td>25.8</td> </tr> <tr> <td>Methods || DVSA+LSTM</td> <td>42.7</td> <td>30.2</td> <td>20</td> <td>31</td> </tr> <tr> <td>Methods || GroundeR+Avg</td> <td>45.5</td> <td>32.2</td> <td>21.7</td> <td>33.1</td> </tr> <tr> <td>Methods || GroundeR+NetVLAD</td> <td>22.1</td> <td>16.1</td> <td>8.6</td> <td>15.6</td> </tr> <tr> <td>Methods || GroundeR+LSTM</td> <td>39.9</td> <td>28.2</td> <td>17.7</td> <td>28.6</td> </tr> <tr> <td>Methods || Ours w/o L div</td> <td>57.9</td> <td>47.7</td> <td>35.6</td> <td>47.1</td> </tr> <tr> <td>Methods || Ours</td> <td>62.5</td> <td>52</td> <td>38.4</td> <td>51</td> </tr> </tbody></table>
Table 4
table_4
P19-1183
8
acl2019
Table 4 shows the results. Similarly, the proposed attentive interactor model (without the diversity loss) outperforms all the baselines. Moreover, the diversity loss further improves the performance. Note that the improvement of our model on this dataset is more significant than that on the VID-sentence dataset. The reason might be that the upper bound performance of the Person-sentence is much higher than that of the VID-sentence (77.9 for Person-sentence versus 47.6 for VID-sentence on average). This also suggests that the created VID-sentence dataset is more challenging and more suitable as a benchmark dataset.
[1, 1, 1, 1, 1, 2]
['Table 4 shows the results.', 'Similarly, the proposed attentive interactor model (without the diversity loss) outperforms all the baselines.', 'Moreover, the diversity loss further improves the performance.', 'Note that the improvement of our model on this dataset is more significant than that on the VID-sentence dataset.', 'The reason might be that the upper bound performance of the Person-sentence is much higher than that of the VID-sentence (77.9 for Person-sentence versus 47.6 for VID-sentence on average).', 'This also suggests that the created VID-sentence dataset is more challenging and more suitable as a benchmark dataset.']
[None, ['Ours'], ['Ours w/o L div'], ['Ours'], ['Proposal upper bound'], None]
1
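A minimal sketch (ours, not part of the dataset) of how the nested header paths and the value grid in a record such as the P19-1183table_4 record above map onto the " || "-joined labels that appear in its cleaned HTML table. The variable names (row_paths, col_paths, cells) are illustrative only; the values are copied from that record.

row_paths = [["Methods", "Random"], ["Methods", "Proposal upper bound"]]
col_paths = [["Accuracy", "0.4"], ["Accuracy", "0.5"],
             ["Accuracy", "0.6"], ["Accuracy", "Average"]]
cells = [["15.1", "7.2", "3.5", "8.6"],
         ["89.8", "79.9", "64.1", "77.9"]]

def flatten(path):
    # ["Accuracy", "0.4"] -> "Accuracy || 0.4", as in the cleaned HTML headers
    return " || ".join(path)

for row_path, row_cells in zip(row_paths, cells):
    for col_path, value in zip(col_paths, row_cells):
        print(f"{flatten(row_path)} | {flatten(col_path)} = {value}")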
P19-1185table_1
Test results on GLUE tasks for various models: Baseline, ENAS, and CAS (continual architecture search). The CAS results maintain statistical equality across each step.
3
[['Models', 'PREVIOUS WORK', 'BiLSTM+ELMo (2018)'], ['Models', 'PREVIOUS WORK', 'BiLSTM+ELMo+Attn (2018)'], ['Models', 'BASELINES', 'Baseline (with ELMo)'], ['Models', 'BASELINES', 'ENAS (Architecture Search)'], ['Models', 'CAS RESULTS', 'CAS Step-1 (QNLI training)'], ['Models', 'CAS RESULTS', 'CAS Step-2 (RTE training)'], ['Models', 'CAS RESULTS', 'CAS Step-3 (WNLI training)']]
1
[['QNLI'], ['RTE'], ['WNLI']]
[['69.4', '50.1', '65.1'], ['61.1', '50.3', '65.1'], ['73.2', '52.3', '65.1'], ['74.5', '52.9', '65.1'], ['73.8', 'N/A', 'N/A'], ['73.6', '54.1', 'N/A'], ['73.3', '54.0', '64.4']]
column
['accuracy', 'accuracy', 'accuracy']
['BASELINES', 'ENAS (Architecture Search)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>QNLI</th> <th>RTE</th> <th>WNLI</th> </tr> </thead> <tbody> <tr> <td>Models || PREVIOUS WORK || BiLSTM+ELMo (2018)</td> <td>69.4</td> <td>50.1</td> <td>65.1</td> </tr> <tr> <td>Models || PREVIOUS WORK || BiLSTM+ELMo+Attn (2018)</td> <td>61.1</td> <td>50.3</td> <td>65.1</td> </tr> <tr> <td>Models || BASELINES || Baseline (with ELMo)</td> <td>73.2</td> <td>52.3</td> <td>65.1</td> </tr> <tr> <td>Models || BASELINES || ENAS (Architecture Search)</td> <td>74.5</td> <td>52.9</td> <td>65.1</td> </tr> <tr> <td>Models || CAS RESULTS || CAS Step-1 (QNLI training)</td> <td>73.8</td> <td>N/A</td> <td>N/A</td> </tr> <tr> <td>Models || CAS RESULTS || CAS Step-2 (RTE training)</td> <td>73.6</td> <td>54.1</td> <td>N/A</td> </tr> <tr> <td>Models || CAS RESULTS || CAS Step-3 (WNLI training)</td> <td>73.3</td> <td>54.0</td> <td>64.4</td> </tr> </tbody></table>
Table 1
table_1
P19-1185
7
acl2019
7.1 Continual Learning on GLUE Tasks Baseline Models: We use bidirectional LSTM-RNN encoders with max-pooling (Conneau et al., 2017) as our baseline. Further, we used the ELMo embeddings (Peters et al., 2018) as input to the encoders, where we allowed to train the weights on each layer of ELMo to get a final representation. Table 1 shows that our baseline models achieve strong results when compared with GLUE benchmark baselines (Wang et al., 2018). On top of these strong baselines, we add ENAS approach. ENAS Models: Next, Table 1 shows that our ENAS models (for all three tasks QNLI, RTE, WNLI) perform better than or equal to the non-architecture search based models. Note that we only replace the LSTM-RNN cell with our ENAS cell, rest of the model architecture in ENAS model is same as our baseline model. CAS Models: Next, we apply our continual architecture search (CAS) approach on QNLI, RTE, and WNLI, where we sequentially allow the model to learn QNLI, RTE, and WNLI (in the order of decreasing dataset size, following standard transfer setup practice) and the results are as shown in Table 1. We train on QNLI task, RTE task, and WNLI task in step-1, step-2, and step-3, respectively. We observe that even though we learn the models sequentially, we are able to maintain performance on the previously-learned QNLI task in step-2 (74.1 vs. 74.2 on validation set which is statistically equal, and 73.6 vs. 73.8 on test). Note that if we remove our sparsity and orthogonality conditions (Sec. 4), the step-2 QNLI performance drops from 74.1 to 69.1 on validation set, demonstrating the importance of our conditions for CAS (see next paragraph on ‘CAS Condition Ablation’ for more details). Next, we observe a similar pattern when we extend CAS to the WNLI dataset (see step-3 in Table 1), i.e., we are still able to maintain the performance on QNLI (as well as RTE now) from step-2 to step-3 (scores are statistically equal on validation set). Further, if we compare the performance of QNLI from step-1 to step-3, we see that they are also stat. equal on val set (73.9 vs. 74.2). This shows that our CAS method can maintain the performance of a task in a continual learning setting with several steps.
[2, 2, 1, 2, 1, 2, 1, 2, 1, 2, 2, 2, 2]
['7.1 Continual Learning on GLUE Tasks Baseline Models: We use bidirectional LSTM-RNN encoders with max-pooling (Conneau et al., 2017) as our baseline.', 'Further, we used the ELMo embeddings (Peters et al., 2018) as input to the encoders, where we allowed to train the weights on each layer of ELMo to get a final representation.', 'Table 1 shows that our baseline models achieve strong results when compared with GLUE benchmark baselines (Wang et al., 2018).', 'On top of these strong baselines, we add ENAS approach.', 'ENAS Models: Next, Table 1 shows that our ENAS models (for all three tasks QNLI, RTE, WNLI) perform better than or equal to the non-architecture search based models.', 'Note that we only replace the LSTM-RNN cell with our ENAS cell, rest of the model architecture in ENAS model is same as our baseline model.', 'CAS Models: Next, we apply our continual architecture search (CAS) approach on QNLI, RTE, and WNLI, where we sequentially allow the model to learn QNLI, RTE, and WNLI (in the order of decreasing dataset size, following standard transfer setup practice) and the results are as shown in Table 1.', 'We train on QNLI task, RTE task, and WNLI task in step-1, step-2, and step-3, respectively.', 'We observe that even though we learn the models sequentially, we are able to maintain performance on the previously-learned QNLI task in step-2 (74.1 vs. 74.2 on validation set which is statistically equal, and 73.6 vs. 73.8 on test).', 'Note that if we remove our sparsity and orthogonality conditions (Sec. 4), the step-2 QNLI performance drops from 74.1 to 69.1 on validation set, demonstrating the importance of our conditions for CAS (see next paragraph on ‘CAS Condition Ablation’ for more details).', 'Next, we observe a similar pattern when we extend CAS to the WNLI dataset (see step-3 in Table 1), i.e., we are still able to maintain the performance on QNLI (as well as RTE now) from step-2 to step-3 (scores are statistically equal on validation set).', 'Further, if we compare the performance of QNLI from step-1 to step-3, we see that they are also stat. equal on val set (73.9 vs. 74.2).', 'This shows that our CAS method can maintain the performance of a task in a continual learning setting with several steps.']
[None, None, ['Baseline (with ELMo)'], ['ENAS (Architecture Search)'], ['ENAS (Architecture Search)', 'Baseline (with ELMo)'], ['ENAS (Architecture Search)'], ['CAS Step-1 (QNLI training)', 'CAS Step-2 (RTE training)', 'CAS Step-3 (WNLI training)'], ['CAS Step-1 (QNLI training)', 'CAS Step-2 (RTE training)', 'CAS Step-3 (WNLI training)'], ['CAS Step-2 (RTE training)'], None, None, None, None]
1
P19-1186table_3
Accuracy [%] of CSDA w. Dirichlet trained with different configurations of F and Y. Y on its own is only a little worse than only F, showing that target labels y are more important for learning than the domain d. The Y configuration (fully domain-unsupervised training) still results in decent performance, boding well for application to very messy and heterogeneous datasets with no domain metadata.
1
[['F'], ['F+Y'], ['Y']]
1
[['B'], ['D'], ['E'], ['K'], ['Average']]
[['77.9', '80.6', '84.4', '86.5', '82.3'], ['80.0', '84.3', '86.2', '87.0', '84.4'], ['77.6', '81.5', '83.7', '85.2', '82.0']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy']
['F+Y']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>B</th> <th>D</th> <th>E</th> <th>K</th> <th>Average</th> </tr> </thead> <tbody> <tr> <td>F</td> <td>77.9</td> <td>80.6</td> <td>84.4</td> <td>86.5</td> <td>82.3</td> </tr> <tr> <td>F+Y</td> <td>80.0</td> <td>84.3</td> <td>86.2</td> <td>87.0</td> <td>84.4</td> </tr> <tr> <td>Y</td> <td>77.6</td> <td>81.5</td> <td>83.7</td> <td>85.2</td> <td>82.0</td> </tr> </tbody></table>
Table 3
table_3
P19-1186
7
acl2019
Next, we consider the impact of using different combinations of F and Y. Table 3 shows the performance of different configurations. Overall, F + Y gives excellent performance. Interestingly, Y on its own is only a little worse than only F, showing that target labels y are more important for learning than the domain d. The Y configuration (fully domain-unsupervised training) still results in decent performance, boding well for application to very messy and heterogeneous datasets with no domain metadata.
[1, 1, 1, 1, 2]
['Next, we consider the impact of using different combinations of F and Y.', 'Table 3 shows the performance of different configurations.', 'Overall, F + Y gives excellent performance.', 'Interestingly, Y on its own is only a little worse than only F, showing that target labels y are more important for learning than the domain d.', 'The Y configuration (fully domain-unsupervised training) still results in decent performance, boding well for application to very messy and heterogeneous datasets with no domain metadata.']
[['F+Y'], None, ['F+Y'], ['Y', 'F'], None]
1
P19-1193table_2
Results of human evaluation. The best performance is highlighted in bold and “*” indicates the best result achieved by baselines. We calculate the Pearson correlation to show the inter-annotator agreement.
2
[['Methods', 'SC-LSTM'], ['Methods', 'PNN'], ['Methods', 'MTA'], ['Methods', 'CVAE'], ['Methods', 'Plan&Write'], ['Methods', 'Proposal']]
1
[['Consistency'], ['Novelty'], ['Diversity'], ['Coherence']]
[['1.67', '2.04', '1.39', '1.16'], ['2.52', '1.96', '1.95', '2.84'], ['3.17', '2.56', '2.43', '3.28'], ['3.42*', '2.87*', '2.74*', '2.63'], ['3.27', '2.81', '2.56', '3.36*'], ['3.84', '3.24', '3.16', '3.61']]
column
['Consistency', 'Novelty', 'Diversity', 'Coherence']
['Proposal']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Consistency</th> <th>Novelty</th> <th>Diversity</th> <th>Coherence</th> </tr> </thead> <tbody> <tr> <td>Methods || SC-LSTM</td> <td>1.67</td> <td>2.04</td> <td>1.39</td> <td>1.16</td> </tr> <tr> <td>Methods || PNN</td> <td>2.52</td> <td>1.96</td> <td>1.95</td> <td>2.84</td> </tr> <tr> <td>Methods || MTA</td> <td>3.17</td> <td>2.56</td> <td>2.43</td> <td>3.28</td> </tr> <tr> <td>Methods || CVAE</td> <td>3.42*</td> <td>2.87*</td> <td>2.74*</td> <td>2.63</td> </tr> <tr> <td>Methods || Plan&amp;Write</td> <td>3.27</td> <td>2.81</td> <td>2.56</td> <td>3.36*</td> </tr> <tr> <td>Methods || Proposal</td> <td>3.84</td> <td>3.24</td> <td>3.16</td> <td>3.61</td> </tr> </tbody></table>
Table 2
table_2
P19-1193
6
acl2019
Table 2 presents the human evaluation results, from which we can draw similar conclusions. It is obvious that our approach can outperform the baselines by a large margin, especially in terms of diversity and topic-consistency. For example, the proposed model achieves improvements of 15.33% diversity score and 12.28% consistency score over the best baseline. The main reason for this increase in diversity is that we integrate commonsense knowledge into the generator through the memory mechanism. This external commonsense knowledge provides additional background information, making the generated essays more novel and diverse. In addition, the adversarial training is employed to increase the coverage of the output on the target topics, which further enhances the topic-consistency.
[1, 1, 1, 2, 2, 2]
['Table 2 presents the human evaluation results, from which we can draw similar conclusions.', 'It is obvious that our approach can outperform the baselines by a large margin, especially in terms of diversity and topic-consistency.', 'For example, the proposed model achieves improvements of 15.33% diversity score and 12.28% consistency score over the best baseline.', 'The main reason for this increase in diversity is that we integrate commonsense knowledge into the generator through the memory mechanism.', 'This external commonsense knowledge provides additional background information, making the generated essays more novel and diverse.', 'In addition, the adversarial training is employed to increase the coverage of the output on the target topics, which further enhances the topic-consistency.']
[None, ['Proposal', 'Diversity', 'Consistency'], ['Proposal', 'Diversity', 'Consistency'], None, None, None]
1
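A quick sanity check (ours, not from the source paper) of the relative improvements quoted in the description above, assuming they are computed as (new - old) / old against the Table 2 values: Diversity 3.16 (Proposal) vs. 2.74 (CVAE, best baseline) and Consistency 3.84 vs. 3.42.

def relative_gain(new, old):
    # relative improvement over the best baseline, in percent
    return (new - old) / old * 100.0

print(f"Diversity:   +{relative_gain(3.16, 2.74):.2f}%")   # ~15.33%, as quoted
print(f"Consistency: +{relative_gain(3.84, 3.42):.2f}%")   # ~12.28%, as quoted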
P19-1193table_3
Automatic evaluations of ablation study. “w/o Dynamic” means that we use static memory mechanism.
2
[['Methods', 'Full Model'], ['Methods', 'w/o Adversarial Training'], ['Methods', 'w/o Memory'], ['Methods', 'w/o Dynamic']]
1
[['BLEU'], ['Consistency'], ['Novelty'], ['Dist-1'], ['Dist-2']]
[['9.72', '39.42', '75.71', '5.19', '20.49'], ['7.74', '31.74', '74.13', '5.22', '20.43'], ['8.4', '33.95', '71.86', '4.16', '17.59'], ['8.46', '36.18', '73.62', '4.18', '18.49']]
column
['BLEU', 'Consistency', 'Novelty', 'Dist-1', 'Dist-2']
['w/o Adversarial Training', 'w/o Memory']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU</th> <th>Consistency</th> <th>Novelty</th> <th>Dist-1</th> <th>Dist-2</th> </tr> </thead> <tbody> <tr> <td>Methods || Full Model</td> <td>9.72</td> <td>39.42</td> <td>75.71</td> <td>5.19</td> <td>20.49</td> </tr> <tr> <td>Methods || w/o Adversarial Training</td> <td>7.74</td> <td>31.74</td> <td>74.13</td> <td>5.22</td> <td>20.43</td> </tr> <tr> <td>Methods || w/o Memory</td> <td>8.4</td> <td>33.95</td> <td>71.86</td> <td>4.16</td> <td>17.59</td> </tr> <tr> <td>Methods || w/o Dynamic</td> <td>8.46</td> <td>36.18</td> <td>73.62</td> <td>4.18</td> <td>18.49</td> </tr> </tbody></table>
Table 3
table_3
P19-1193
7
acl2019
Memory mechanism. We find that the memory mechanism can significantly improve the novelty and diversity. As is shown in Table 3, compared to the removal of the adversarial training, the model exhibits larger degradation in terms of novelty and diversity when the memory mechanism is removed. This shows that with the help of external commonsense knowledge, the source information can be enriched, leading to the outputs that are more novel and diverse.
[2, 2, 1, 1]
['Memory mechanism.', 'We find that the memory mechanism can significantly improve the novelty and diversity.', 'As is shown in Table 3, compared to the removal of the adversarial training, the model exhibits larger degradation in terms of novelty and diversity when the memory mechanism is removed.', 'This shows that with the help of external commonsense knowledge, the source information can be enriched, leading to the outputs that are more novel and diverse.']
[None, None, ['w/o Adversarial Training', 'w/o Memory', 'Novelty', 'Dist-1', 'Dist-2'], None]
1
P19-1195table_7
Results on ROTOWIRE (RW) and MLB development sets using relation generation (RG) count (#) and precision (P%), content selection (CS) precision (P%) and recall (R%), content ordering (CO) in normalized Damerau-Levenshtein distance (DLD%), and BLEU.
2
[['RW', 'TEMPL'], ['RW', 'WS-2017'], ['RW', 'ED+CC'], ['RW', 'NCP+CC'], ['RW', 'ENT']]
2
[['RG', '#'], ['RG', 'P%'], ['CS', 'P%'], ['CS', 'R%'], ['CO', 'DLD%'], ['-', 'BLEU']]
[['54.29', '99.92', '26.61', '59.16', '14.42', '8.51'], ['23.95', '75.1', '28.11', '35.86', '15.33', '14.57'], ['22.68', '79.4', '29.96', '34.11', '16', '14'], ['33.88', '87.51', '33.52', '51.21', '18.57', '16.19'], ['31.84', '91.97', '36.65', '48.18', '19.68', '15.97']]
column
['#', 'P%', 'P%', 'R%', 'DLD%', 'BLEU']
['ENT']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>RG || #</th> <th>RG || P%</th> <th>CS || P%</th> <th>CS || R%</th> <th>CO || DLD%</th> <th>- || BLEU</th> </tr> </thead> <tbody> <tr> <td>RW || TEMPL</td> <td>54.29</td> <td>99.92</td> <td>26.61</td> <td>59.16</td> <td>14.42</td> <td>8.51</td> </tr> <tr> <td>RW || WS-2017</td> <td>23.95</td> <td>75.1</td> <td>28.11</td> <td>35.86</td> <td>15.33</td> <td>14.57</td> </tr> <tr> <td>RW || ED+CC</td> <td>22.68</td> <td>79.4</td> <td>29.96</td> <td>34.11</td> <td>16</td> <td>14</td> </tr> <tr> <td>RW || NCP+CC</td> <td>33.88</td> <td>87.51</td> <td>33.52</td> <td>51.21</td> <td>18.57</td> <td>16.19</td> </tr> <tr> <td>RW || ENT</td> <td>31.84</td> <td>91.97</td> <td>36.65</td> <td>48.18</td> <td>19.68</td> <td>15.97</td> </tr> </tbody></table>
Table 7
table_7
P19-1195
12
acl2019
Results on the Development Set . Table 7 (top) shows results on the ROTOWIRE development set for our dynamic entity memory model (ENT), the best system of Wiseman et al. (2017) (WS-2017) which is an encoder-decoder model with conditional copy, the template generator (TEMPL), our implementation of encoder-decoder model with conditional copy (ED+CC), and NCP+CC (Puduppully et al., 2019). We see that ENT achieves scores comparable to NCP+CC, but performs better on the metrics of RG precision, CS precision, and CO. Table 7 (bottom) also presents our results on MLB. ENT achieves highest BLEU amongst all models and highest CS recall and RG count amongst neural models.
[2, 1, 1]
['Results on the Development Set .', 'Table 7 (top) shows results on the ROTOWIRE development set for our dynamic entity memory model (ENT), the best system of Wiseman et al. (2017) (WS-2017) which is an encoder-decoder model with conditional copy, the template generator (TEMPL), our implementation of encoder-decoder model with conditional copy (ED+CC), and NCP+CC (Puduppully et al., 2019).', 'We see that ENT achieves scores comparable to NCP+CC, but performs better on the metrics of RG precision, CS precision, and CO. Table 7 (bottom) also presents our results on MLB. ENT achieves highest BLEU amongst all models and highest CS recall and RG count amongst neural models.']
[None, ['TEMPL', 'WS-2017', 'ED+CC', 'NCP+CC', 'ENT'], ['ENT']]
1
P19-1197table_1
Results of our model and the baselines. Above is the performance of the key fact prediction component (F1: F1 score, P: precision, R: recall). Middle is the comparison between models under the Vanilla Seq2Seq framework. Below are the models implemented with the transformer framework.
2
[['Model', 'Vanilla Seq2Seq'], ['Model', 'Structure-S2S'], ['Model', 'PretrainedMT'], ['Model', 'SemiMT'], ['Model', 'PIVOT-Vanilla'], ['Model', 'Transformer'], ['Model', 'PretrainedMT'], ['Model', 'SemiMT'], ['Model', 'PIVOT-Trans']]
1
[['BLEU'], ['NIST'], ['ROUGE']]
[['2.14', '0.2809', '0.47'], ['3.27', '0.9612', '0.71'], ['4.35', '1.9937', '0.91'], ['6.76', '3.5017', '2.04'], ['20.09', '6.5130', '18.31'], ['5.48', '1.9873', '1.26'], ['6.43', '2.1019', '1.77'], ['9.71', '2.7019', '3.31'], ['27.34', '6.8763', '19.3']]
column
['BLEU', 'NIST', 'ROUGE']
['PIVOT-Trans', 'PIVOT-Vanilla']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BLEU</th> <th>NIST</th> <th>ROUGE</th> </tr> </thead> <tbody> <tr> <td>Model || Vanilla Seq2Seq</td> <td>2.14</td> <td>0.2809</td> <td>0.47</td> </tr> <tr> <td>Model || Structure-S2S</td> <td>3.27</td> <td>0.9612</td> <td>0.71</td> </tr> <tr> <td>Model || PretrainedMT</td> <td>4.35</td> <td>1.9937</td> <td>0.91</td> </tr> <tr> <td>Model || SemiMT</td> <td>6.76</td> <td>3.5017</td> <td>2.04</td> </tr> <tr> <td>Model || PIVOT-Vanilla</td> <td>20.09</td> <td>6.5130</td> <td>18.31</td> </tr> <tr> <td>Model || Transformer</td> <td>5.48</td> <td>1.9873</td> <td>1.26</td> </tr> <tr> <td>Model || PretrainedMT</td> <td>6.43</td> <td>2.1019</td> <td>1.77</td> </tr> <tr> <td>Model || SemiMT</td> <td>9.71</td> <td>2.7019</td> <td>3.31</td> </tr> <tr> <td>Model || PIVOT-Trans</td> <td>27.34</td> <td>6.8763</td> <td>19.3</td> </tr> </tbody></table>
Table 1
table_1
P19-1197
6
acl2019
3.5 Results. We compare our PIVOT model with the above baseline models. Table 1 summarizes the results of these models. It shows that our PIVOT model achieves 87.92% F1 score, 92.59% precision, and 83.70% recall at the stage of key fact prediction, which provides a good foundation for the stage of surface realization. Based on the selected key facts, our models achieve the scores of 20.09 BLEU, 6.5130 NIST, and 18.31 ROUGE under the vanilla Seq2Seq framework, and 27.34 BLEU, 6.8763 NIST, and 19.30 ROUGE under the Transformer framework, which significantly outperform all the baseline models in terms of all metrics. Furthermore, it shows that the implementation with the Transformer can obtain higher scores than that with the vanilla Seq2Seq.
[2, 0, 1, 0, 1, 1]
['3.5 Results.', 'We compare our PIVOT model with the above baseline models.', 'Table 1 summarizes the results of these models.', 'It shows that our PIVOT model achieves 87.92% F1 score, 92.59% precision, and 83.70% recall at the stage of key fact prediction, which provides a good foundation for the stage of surface realization.', 'Based on the selected key facts, our models achieve the scores of 20.09 BLEU, 6.5130 NIST, and 18.31 ROUGE under the vanilla Seq2Seq framework, and 27.34 BLEU, 6.8763 NIST, and 19.30 ROUGE under the Transformer framework, which significantly outperform all the baseline models in terms of all metrics.', 'Furthermore, it shows that the implementation with the Transformer can obtain higher scores than that with the vanilla Seq2Seq.']
[None, ['PIVOT-Vanilla', 'PIVOT-Trans'], None, None, ['PIVOT-Vanilla', 'BLEU', 'NIST', 'ROUGE', 'PIVOT-Trans'], ['PIVOT-Trans', 'PIVOT-Vanilla']]
1
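As a quick check (ours) that the key fact prediction numbers quoted in the description above are internally consistent, assuming the standard harmonic-mean definition F1 = 2PR / (P + R):

precision, recall = 0.9259, 0.8370  # 92.59% precision, 83.70% recall as reported
f1 = 2 * precision * recall / (precision + recall)
print(f"F1 = {f1:.4f}")  # ~0.8792, consistent with the reported 87.92% F1 score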
P19-1204table_2
ROUGE scores on the CL-SciSumm 2016 test benchmark. *: results from Yasunaga et al. (2019).
2
[['Model', 'TALKSUMM-HYBRID'], ['Model', 'TALKSUMM-ONLY'], ['Model', 'GCN HYBRID2*'], ['Model', 'GCN CITED TEXT SPANS*'], ['Model', 'ABSTRACT*']]
1
[['2-R'], ['2-F'], ['3-F'], ['SU4-F']]
[['35.05', '34.11', '27.19', '24.13'], ['22.77', '21.94', '15.94', '12.55'], ['32.44', '30.08', '23.43', '23.77'], ['25.16', '24.26', '18.79', '17.67'], ['29.52', '29.4', '23.16', '23.34']]
column
['2-R', '2-F', '3-F', 'SU4-F']
['TALKSUMM-HYBRID', 'TALKSUMM-ONLY']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>2-R</th> <th>2-F</th> <th>3-F</th> <th>SU4-F</th> </tr> </thead> <tbody> <tr> <td>Model || TALKSUMM-HYBRID</td> <td>35.05</td> <td>34.11</td> <td>27.19</td> <td>24.13</td> </tr> <tr> <td>Model || TALKSUMM-ONLY</td> <td>22.77</td> <td>21.94</td> <td>15.94</td> <td>12.55</td> </tr> <tr> <td>Model || GCN HYBRID2*</td> <td>32.44</td> <td>30.08</td> <td>23.43</td> <td>23.77</td> </tr> <tr> <td>Model || GCN CITED TEXT SPANS*</td> <td>25.16</td> <td>24.26</td> <td>18.79</td> <td>17.67</td> </tr> <tr> <td>Model || ABSTRACT*</td> <td>29.52</td> <td>29.4</td> <td>23.16</td> <td>23.34</td> </tr> </tbody></table>
Table 2
table_2
P19-1204
4
acl2019
Automatic Evaluation . Table 2 summarizes the results: both GCN CITED TEXT SPANS and TALKSUMM-ONLY models are not able to obtain better performance than ABSTRACT. However, for the Hybrid approach, where the abstract is augmented with sentences from the summaries emitted by the models, our TALKSUMM-HYBRID outperforms both GCN HYBRID 2 and ABSTRACT. Importantly, our model, trained on automatically generated summaries, performs on par with models trained over SCISUMMNET, in which training data was created manually.
[2, 1, 1, 1]
['Automatic Evaluation .', 'Table 2 summarizes the results: both GCN CITED TEXT SPANS and TALKSUMM-ONLY models are not able to obtain better performance than ABSTRACT.', 'However, for the Hybrid approach, where the abstract is augmented with sentences from the summaries emitted by the models, our TALKSUMM-HYBRID outperforms both GCN HYBRID 2 and ABSTRACT.', 'Importantly, our model, trained on automatically generated summaries, performs on par with models trained over SCISUMMNET, in which training data was created manually.']
[None, ['GCN CITED TEXT SPANS*', 'TALKSUMM-ONLY'], ['TALKSUMM-HYBRID', 'GCN HYBRID2*', 'ABSTRACT*'], ['TALKSUMM-HYBRID', 'TALKSUMM-ONLY']]
1
P19-1206table_2
ROUGE F1 score of the evaluation set (%). R-1, R-2 and R-L denote ROUGE-1, ROUGE-2, and ROUGE-L, respectively. The best performing model among unsupervised approaches is shown in boldface.
2
[['Unsupervised Approach', 'TextRank'], ['Unsupervised Approach', 'Opinosis'], ['Unsupervised Approach', 'MeanSum-single'], ['Unsupervised Approach', 'StrSum'], ['Unsupervised Approach', 'StrSum+DiscourseRank'], ['Supervised baselines', 'Seq-Seq'], ['Supervised baselines', 'Seq-Seq-att']]
4
[['Domain', 'Toys & Games', 'Metric', 'R-1'], ['Domain', 'Toys & Games', 'Metric', 'R-2'], ['Domain', 'Toys & Games', 'Metric', 'R-L'], ['Domain', 'Sports & Outdoors', 'Metric', 'R-1'], ['Domain', 'Sports & Outdoors', 'Metric', 'R-2'], ['Domain', 'Sports & Outdoors', 'Metric', 'R-L'], ['Domain', 'Movies & TV', 'Metric', 'R-1'], ['Domain', 'Movies & TV', 'Metric', 'R-2'], ['Domain', 'Movies & TV', 'Metric', 'R-L']]
[['8.63', '1.24', '7.26', '7.16', '0.89', '6.39', '8.27', '1.44', '7.35'], ['8.25', '1.51', '7.52', '7.04', '1.42', '6.45', '7.8', '1.2', '7.11'], ['8.12', '0.58', '7.3', '5.42', '0.47', '4.97', '6.96', '0.35', '6.08'], ['11.61', '1.56', '11.04', '9.15', '1.38', '8.79', '7.38', '1.03', '6.94'], ['11.87', '1.63', '11.4', '9.62', '1.58', '9.28', '8.15', '1.33', '7.62'], ['13.5', '2.1', '13.31', '10.69', '2.02', '10.61', '7.71', '2.18', '7.08'], ['16.28', '3.13', '16.13', '11.49', '2.39', '11.47', '9.05', '2.99', '8.46']]
column
['R-1', 'R-2', 'R-L', 'R-1', 'R-2', 'R-L', 'R-1', 'R-2', 'R-L']
['StrSum+DiscourseRank']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Domain || Toys &amp; Games || Metric || R-1</th> <th>Domain || Toys &amp; Games || Metric || R-2</th> <th>Domain || Toys &amp; Games || Metric || R-L</th> <th>Domain || Sports &amp; Outdoors || Metric || R-1</th> <th>Domain || Sports &amp; Outdoors || Metric || R-2</th> <th>Domain || Sports &amp; Outdoors || Metric || R-L</th> <th>Domain || Movies &amp; TV || Metric || R-1</th> <th>Domain || Movies &amp; TV || Metric || R-2</th> <th>Domain || Movies &amp; TV || Metric || R-L</th> </tr> </thead> <tbody> <tr> <td>Unsupervised Approach || TextRank</td> <td>8.63</td> <td>1.24</td> <td>7.26</td> <td>7.16</td> <td>0.89</td> <td>6.39</td> <td>8.27</td> <td>1.44</td> <td>7.35</td> </tr> <tr> <td>Unsupervised Approach || Opinosis</td> <td>8.25</td> <td>1.51</td> <td>7.52</td> <td>7.04</td> <td>1.42</td> <td>6.45</td> <td>7.8</td> <td>1.2</td> <td>7.11</td> </tr> <tr> <td>Unsupervised Approach || MeanSum-single</td> <td>8.12</td> <td>0.58</td> <td>7.3</td> <td>5.42</td> <td>0.47</td> <td>4.97</td> <td>6.96</td> <td>0.35</td> <td>6.08</td> </tr> <tr> <td>Unsupervised Approach || StrSum</td> <td>11.61</td> <td>1.56</td> <td>11.04</td> <td>9.15</td> <td>1.38</td> <td>8.79</td> <td>7.38</td> <td>1.03</td> <td>6.94</td> </tr> <tr> <td>Unsupervised Approach || StrSum+DiscourseRank</td> <td>11.87</td> <td>1.63</td> <td>11.4</td> <td>9.62</td> <td>1.58</td> <td>9.28</td> <td>8.15</td> <td>1.33</td> <td>7.62</td> </tr> <tr> <td>Supervised baselines || Seq-Seq</td> <td>13.5</td> <td>2.1</td> <td>13.31</td> <td>10.69</td> <td>2.02</td> <td>10.61</td> <td>7.71</td> <td>2.18</td> <td>7.08</td> </tr> <tr> <td>Supervised baselines || Seq-Seq-att</td> <td>16.28</td> <td>3.13</td> <td>16.13</td> <td>11.49</td> <td>2.39</td> <td>11.47</td> <td>9.05</td> <td>2.99</td> <td>8.46</td> </tr> </tbody></table>
Table 2
table_2
P19-1206
6
acl2019
4.4 Evaluation of Summary Generation . Table 2 shows the ROUGE scores of our models and the baselines for the evaluation sets. With regards to Toys & Games and Sports & Outdoors, our full model (StrSum + DiscourseRank) achieves the best ROUGE scores among the unsupervised approaches.
[0, 1, 1]
['4.4 Evaluation of Summary Generation .', 'Table 2 shows the ROUGE scores of our models and the baselines for the evaluation sets.', 'With regards to Toys & Games and Sports & Outdoors, our full model (StrSum + DiscourseRank) achieves the best ROUGE scores among the unsupervised approaches.']
[None, ['R-1', 'R-2', 'R-L'], ['StrSum+DiscourseRank', 'R-1', 'R-2', 'R-L']]
1
P19-1209table_2
Instance selection results; evaluated for primary, secondary, and all ground-truth sentences. Our BERTSingPairMix method achieves strong performance owing to its capability of building effective representations for both singletons and pairs.
3
[['System', 'CNN/Daily Mail', 'LEAD-Baseline'], ['System', 'CNN/Daily Mail', 'SumBasic (Vanderwende et al., 2007)'], ['System', 'CNN/Daily Mail', 'KL-Summ (Haghighi et al., 2009)'], ['System', 'CNN/Daily Mail', 'LexRank (Erkan and Radev, 2004)'], ['System', 'CNN/Daily Mail', 'VSM-SingOnly (This work)'], ['System', 'CNN/Daily Mail', 'VSM-SingPairMix (This work)'], ['System', 'CNN/Daily Mail', 'BERT-SingOnly (This work)'], ['System', 'CNN/Daily Mail', 'BERT-SingPairMix (This work)'], ['System', 'XSum', 'LEAD-Baseline'], ['System', 'XSum', 'SumBasic (Vanderwende et al., 2007)'], ['System', 'XSum', 'KL-Summ (Haghighi et al., 2009)'], ['System', 'XSum', 'LexRank (Erkan and Radev, 2004)'], ['System', 'XSum', 'VSM-SingOnly (This work)'], ['System', 'XSum', 'VSM-SingPairMix (This work)'], ['System', 'XSum', 'BERT-SingOnly (This work)'], ['System', 'XSum', 'BERT-SingPairMix (This work)'], ['System', 'DUC-04', 'LEAD-Baseline'], ['System', 'DUC-04', 'SumBasic (Vanderwende et al., 2007)'], ['System', 'DUC-04', 'KL-Summ (Haghighi et al., 2009)'], ['System', 'DUC-04', 'LexRank (Erkan and Radev, 2004)'], ['System', 'DUC-04', 'VSM-SingOnly (This work)'], ['System', 'DUC-04', 'VSM-SingPairMix (This work)'], ['System', 'DUC-04', 'BERT-SingOnly (This work)'], ['System', 'DUC-04', 'BERT-SingPairMix (This work)']]
2
[['Primary', 'P'], ['Primary', 'R'], ['Primary', 'F'], ['Secondary', 'P'], ['Secondary', 'R'], ['Secondary', 'F'], ['All', 'P'], ['All', 'R'], ['All', 'F']]
[['31.9', '38.4', '34.9', '10.7', '34.3', '16.3', '39.9', '37.3', '38.6'], ['15.2', '17.3', '16.2', '5.3', '15.8', '8', '19.6', '16.9', '18.1'], ['15.7', '17.9', '16.7', '5.4', '15.9', '8', '20', '17.4', '18.6'], ['22', '25.9', '23.8', '7.2', '21.4', '10.7', '27.5', '24.7', '26'], ['30.8', '36.9', '33.6', '9.8', '34.4', '15.2', '39.5', '35.7', '37.5'], ['27', '46.5', '34.2', '9', '42.1', '14.9', '34', '45.4', '38.9'], ['35.3', '41.9', '38.3', '9.8', '32.5', '15.1', '44', '38.6', '41.1'], ['33.6', '67.1', '44.8', '13.6', '70.2', '22.8', '44.7', '68', '53.9'], ['8.5', '9.4', '8.9', '5.3', '9.5', '6.8', '13.8', '9.4', '11.2'], ['8.7', '9.7', '9.2', '5', '8.9', '6.4', '13.7', '9.4', '11.1'], ['9.2', '10.2', '9.7', '5', '8.9', '6.4', '14.2', '9.7', '11.5'], ['9.7', '10.8', '10.2', '5.5', '9.8', '7', '15.2', '10.4', '12.4'], ['12.3', '14.1', '13.1', '3.8', '11', '5.6', '17.9', '12', '14.4'], ['10.1', '22.6', '13.9', '4.2', '17.4', '6.8', '14.3', '20.8', '17'], ['24.2', '26.1', '25.1', '6.6', '16.7', '9.5', '35.3', '20.8', '26.2'], ['33.2', '56', '41.7', '24.1', '65.5', '35.2', '57.3', '59.6', '58.5'], ['6', '4.8', '5.3', '2.8', '3.8', '3.2', '8.8', '4.4', '5.9'], ['4.2', '3.2', '3.6', '3', '3.8', '3.3', '7.2', '3.4', '4.6'], ['5.6', '4.5', '5', '2.8', '3.8', '3.2', '8', '4.2', '5.5'], ['8.5', '6.7', '7.5', '4.8', '6.5', '5.5', '12.1', '6.6', '8.6'], ['18', '14.7', '16.2', '3.6', '8.4', '5', '23.6', '11.8', '15.7'], ['3.8', '6.2', '4.7', '3.6', '11.4', '5.5', '7.4', '8', '7.7'], ['8.4', '6.5', '7.4', '2.8', '5.3', '3.7', '15.6', '6.6', '9.2'], ['4.8', '9.1', '6.3', '4.2', '14.2', '6.5', '9', '10.9', '9.9']]
column
['P', 'R', 'F', 'P', 'R', 'F', 'P', 'R', 'F']
['VSM-SingOnly (This work)', 'VSM-SingPairMix (This work)', 'BERT-SingOnly (This work)', 'BERT-SingPairMix (This work)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Primary || P</th> <th>Primary || R</th> <th>Primary || F</th> <th>Secondary || P</th> <th>Secondary || R</th> <th>Secondary || F</th> <th>All || P</th> <th>All || R</th> <th>All || F</th> </tr> </thead> <tbody> <tr> <td>System || CNN/Daily Mail || LEAD-Baseline</td> <td>31.9</td> <td>38.4</td> <td>34.9</td> <td>10.7</td> <td>34.3</td> <td>16.3</td> <td>39.9</td> <td>37.3</td> <td>38.6</td> </tr> <tr> <td>System || CNN/Daily Mail || SumBasic (Vanderwende et al., 2007)</td> <td>15.2</td> <td>17.3</td> <td>16.2</td> <td>5.3</td> <td>15.8</td> <td>8</td> <td>19.6</td> <td>16.9</td> <td>18.1</td> </tr> <tr> <td>System || CNN/Daily Mail || KL-Summ (Haghighi et al., 2009)</td> <td>15.7</td> <td>17.9</td> <td>16.7</td> <td>5.4</td> <td>15.9</td> <td>8</td> <td>20</td> <td>17.4</td> <td>18.6</td> </tr> <tr> <td>System || CNN/Daily Mail || LexRank (Erkan and Radev, 2004)</td> <td>22</td> <td>25.9</td> <td>23.8</td> <td>7.2</td> <td>21.4</td> <td>10.7</td> <td>27.5</td> <td>24.7</td> <td>26</td> </tr> <tr> <td>System || CNN/Daily Mail || VSM-SingOnly (This work)</td> <td>30.8</td> <td>36.9</td> <td>33.6</td> <td>9.8</td> <td>34.4</td> <td>15.2</td> <td>39.5</td> <td>35.7</td> <td>37.5</td> </tr> <tr> <td>System || CNN/Daily Mail || VSM-SingPairMix (This work)</td> <td>27</td> <td>46.5</td> <td>34.2</td> <td>9</td> <td>42.1</td> <td>14.9</td> <td>34</td> <td>45.4</td> <td>38.9</td> </tr> <tr> <td>System || CNN/Daily Mail || BERT-SingOnly (This work)</td> <td>35.3</td> <td>41.9</td> <td>38.3</td> <td>9.8</td> <td>32.5</td> <td>15.1</td> <td>44</td> <td>38.6</td> <td>41.1</td> </tr> <tr> <td>System || CNN/Daily Mail || BERT-SingPairMix (This work)</td> <td>33.6</td> <td>67.1</td> <td>44.8</td> <td>13.6</td> <td>70.2</td> <td>22.8</td> <td>44.7</td> <td>68</td> <td>53.9</td> </tr> <tr> <td>System || XSum || LEAD-Baseline</td> <td>8.5</td> <td>9.4</td> <td>8.9</td> <td>5.3</td> <td>9.5</td> <td>6.8</td> <td>13.8</td> <td>9.4</td> <td>11.2</td> </tr> <tr> <td>System || XSum || SumBasic (Vanderwende et al., 2007)</td> <td>8.7</td> <td>9.7</td> <td>9.2</td> <td>5</td> <td>8.9</td> <td>6.4</td> <td>13.7</td> <td>9.4</td> <td>11.1</td> </tr> <tr> <td>System || XSum || KL-Summ (Haghighi et al., 2009)</td> <td>9.2</td> <td>10.2</td> <td>9.7</td> <td>5</td> <td>8.9</td> <td>6.4</td> <td>14.2</td> <td>9.7</td> <td>11.5</td> </tr> <tr> <td>System || XSum || LexRank (Erkan and Radev, 2004)</td> <td>9.7</td> <td>10.8</td> <td>10.2</td> <td>5.5</td> <td>9.8</td> <td>7</td> <td>15.2</td> <td>10.4</td> <td>12.4</td> </tr> <tr> <td>System || XSum || VSM-SingOnly (This work)</td> <td>12.3</td> <td>14.1</td> <td>13.1</td> <td>3.8</td> <td>11</td> <td>5.6</td> <td>17.9</td> <td>12</td> <td>14.4</td> </tr> <tr> <td>System || XSum || VSM-SingPairMix (This work)</td> <td>10.1</td> <td>22.6</td> <td>13.9</td> <td>4.2</td> <td>17.4</td> <td>6.8</td> <td>14.3</td> <td>20.8</td> <td>17</td> </tr> <tr> <td>System || XSum || BERT-SingOnly (This work)</td> <td>24.2</td> <td>26.1</td> <td>25.1</td> <td>6.6</td> <td>16.7</td> <td>9.5</td> <td>35.3</td> <td>20.8</td> <td>26.2</td> </tr> <tr> <td>System || XSum || BERT-SingPairMix (This work)</td> <td>33.2</td> <td>56</td> <td>41.7</td> <td>24.1</td> <td>65.5</td> <td>35.2</td> <td>57.3</td> <td>59.6</td> <td>58.5</td> </tr> <tr> <td>System || DUC-04 || LEAD-Baseline</td> <td>6</td> <td>4.8</td> <td>5.3</td> <td>2.8</td> <td>3.8</td> <td>3.2</td> <td>8.8</td> <td>4.4</td> <td>5.9</td> </tr> <tr> 
<td>System || DUC-04 || SumBasic (Vanderwende et al., 2007)</td> <td>4.2</td> <td>3.2</td> <td>3.6</td> <td>3</td> <td>3.8</td> <td>3.3</td> <td>7.2</td> <td>3.4</td> <td>4.6</td> </tr> <tr> <td>System || DUC-04 || KL-Summ (Haghighi et al., 2009)</td> <td>5.6</td> <td>4.5</td> <td>5</td> <td>2.8</td> <td>3.8</td> <td>3.2</td> <td>8</td> <td>4.2</td> <td>5.5</td> </tr> <tr> <td>System || DUC-04 || LexRank (Erkan and Radev, 2004)</td> <td>8.5</td> <td>6.7</td> <td>7.5</td> <td>4.8</td> <td>6.5</td> <td>5.5</td> <td>12.1</td> <td>6.6</td> <td>8.6</td> </tr> <tr> <td>System || DUC-04 || VSM-SingOnly (This work)</td> <td>18</td> <td>14.7</td> <td>16.2</td> <td>3.6</td> <td>8.4</td> <td>5</td> <td>23.6</td> <td>11.8</td> <td>15.7</td> </tr> <tr> <td>System || DUC-04 || VSM-SingPairMix (This work)</td> <td>3.8</td> <td>6.2</td> <td>4.7</td> <td>3.6</td> <td>11.4</td> <td>5.5</td> <td>7.4</td> <td>8</td> <td>7.7</td> </tr> <tr> <td>System || DUC-04 || BERT-SingOnly (This work)</td> <td>8.4</td> <td>6.5</td> <td>7.4</td> <td>2.8</td> <td>5.3</td> <td>3.7</td> <td>15.6</td> <td>6.6</td> <td>9.2</td> </tr> <tr> <td>System || DUC-04 || BERT-SingPairMix (This work)</td> <td>4.8</td> <td>9.1</td> <td>6.3</td> <td>4.2</td> <td>14.2</td> <td>6.5</td> <td>9</td> <td>10.9</td> <td>9.9</td> </tr> </tbody></table>
Table 2
table_2
P19-1209
7
acl2019
Extraction Results . In Table 2 we present instance selection results for the CNN/DM, XSum, and DUC-04 datasets. Our method builds representations for instances using either BERT or VSM (§3.1). To ensure a thorough comparison, we experiment with selecting a mixed set of singletons and pairs (“SingPairMix”) as well as selecting singletons only (“SingOnly”). On the CNN/DM and XSum datasets, we observe that selecting a mixed set of singletons and pairs based on BERT representations (BERT+SingPairMix) demonstrates the most competitive results. It outperforms a number of strong baselines when evaluated on a full set of ground-truth sentences. The method also performs superiorly on identifying secondary sentences. For example, it increases recall scores for identifying secondary sentences from 33.8% to 69.8% (CNN/DM) and from 16.7% to 65.3% (XSum). Our method is able to achieve strong performance on instance selection owing to BERT’s capability of building effective representations for both singletons and pairs. It learns to identify salient source content based on token and position embeddings and it encodes sentential semantic compatibility using the pretraining task of predicting the next sentence; both are valuable additions to summary instance selection. Further, we observe that identifying summary-worthy singletons and pairs from multi-document inputs (DUC-04) appears to be more challenging than that of single-document inputs (XSum and CNN/DM). This distinction is not surprising given that for multi-document inputs, the system has a large and diverse search space where candidate singletons and pairs are gathered from a set of documents written by different authors. We find that the BERT model performs consistently on identifying secondary sentences, and VSM yields considerable performance gain on selecting primary sentences. Both BERT and VSM models are trained on the CNN/DM dataset and applied to DUC-04 as the latter data are only used for testing. Our findings suggest that the TF-IDF features of the VSM model are effective for multi-document inputs, as important topic words are usually repeated across documents and TF-IDF scores can reflect topical importance of words. This analysis further reveals that extending BERT to incorporate topical salience of words can be a valuable line of research for future work.
[2, 1, 1, 1, 1, 1, 1, 1, 2, 2, 1, 2, 1, 1, 2, 2]
['Extraction Results .', 'In Table 2 we present instance selection results for the CNN/DM, XSum, and DUC-04 datasets.', 'Our method builds representations for instances using either BERT or VSM (§3.1).', 'To ensure a thorough comparison, we experiment with selecting a mixed set of singletons and pairs (“SingPairMix”) as well as selecting singletons only (“SingOnly”).', 'On the CNN/DM and XSum datasets, we observe that selecting a mixed set of singletons and pairs based on BERT representations (BERT+SingPairMix) demonstrates the most competitive results.', 'It outperforms a number of strong baselines when evaluated on a full set of ground-truth sentences.', 'The method also performs superiorly on identifying secondary sentences.', 'For example, it increases recall scores for identifying secondary sentences from 33.8% to 69.8% (CNN/DM) and from 16.7% to 65.3% (XSum).', 'Our method is able to achieve strong performance on instance selection owing to BERT’s capability of building effective representations for both singletons and pairs.', 'It learns to identify salient source content based on token and position embeddings and it encodes sentential semantic compatibility using the pretraining task of predicting the next sentence; both are valuable additions to summary instance selection.', 'Further, we observe that identifying summary-worthy singletons and pairs from multi-document inputs (DUC-04) appears to be more challenging than that of single-document inputs (XSum and CNN/DM).', 'This distinction is not surprising given that for multi-document inputs, the system has a large and diverse search space where candidate singletons and pairs are gathered from a set of documents written by different authors.', 'We find that the BERT model performs consistently on identifying secondary sentences, and VSM yields considerable performance gain on selecting primary sentences.', 'Both BERT and VSM models are trained on the CNN/DM dataset and applied to DUC-04 as the latter data are only used for testing.', 'Our findings suggest that the TF-IDF features of the VSM model are effective for multi-document inputs, as important topic words are usually repeated across documents and TF-IDF scores can reflect topical importance of words.', 'This analysis further reveals that extending BERT to incorporate topical salience of words can be a valuable line of research for future work.']
[None, ['CNN/Daily Mail', 'XSum', 'DUC-04'], ['VSM-SingOnly (This work)', 'VSM-SingPairMix (This work)', 'BERT-SingOnly (This work)', 'BERT-SingPairMix (This work)'], ['VSM-SingOnly (This work)', 'VSM-SingPairMix (This work)', 'BERT-SingOnly (This work)', 'BERT-SingPairMix (This work)'], ['CNN/Daily Mail', 'XSum', 'BERT-SingPairMix (This work)'], ['BERT-SingPairMix (This work)', 'Primary'], ['BERT-SingPairMix (This work)', 'Secondary'], ['BERT-SingPairMix (This work)', 'XSum', 'R', 'Secondary'], ['BERT-SingPairMix (This work)'], ['BERT-SingPairMix (This work)'], ['DUC-04', 'XSum', 'CNN/Daily Mail'], None, ['VSM-SingOnly (This work)', 'VSM-SingPairMix (This work)', 'BERT-SingOnly (This work)', 'BERT-SingPairMix (This work)'], ['VSM-SingOnly (This work)', 'VSM-SingPairMix (This work)', 'BERT-SingOnly (This work)', 'BERT-SingPairMix (This work)', 'CNN/Daily Mail', 'DUC-04'], ['VSM-SingOnly (This work)', 'VSM-SingPairMix (This work)'], ['BERT-SingOnly (This work)', 'BERT-SingPairMix (This work)']]
1
P19-1212table_4
ROUGE scores on three large datasets. The best results for non-baseline systems are in bold. Except for SentRewriting on CNN/DM and NYT, for all abstractive models, we truncate input and summaries at 400 and 100.
2
[['Models', 'LEAD-3'], ['Models', 'ORACLEFRAG (Grusky et al. 2018)'], ['Models', 'ORACLEEXT'], ['Models', 'TEXTRANK (Mihalcea and Tarau 2004)'], ['Models', 'LEXRANK (Erkan and Radev 2004)'], ['Models', 'SUMBASIC (Nenkova and Vanderwende 2005)'], ['Models', 'RNN-EXT RL (Chen and Bansal 2018)'], ['Models', 'SEQ2SEQ (Sutskever et al. 2014)'], ['Models', 'POINTGEN (See et al. 2017)'], ['Models', 'POINTGEN+COV (See et al. 2017)'], ['Models', 'SENTREWRITING (Chen and Bansal 2018)']]
2
[['CNN/DM', 'R-1'], ['CNN/DM', 'R-2'], ['CNN/DM', 'R-L'], ['NYT', 'R-1'], ['NYT', 'R-2'], ['NYT', 'R-L'], ['BIGPATENT', 'R-1'], ['BIGPATENT', 'R-2'], ['BIGPATENT', 'R-L']]
[['40.23', '17.52', '36.34', '32.93', '17.69', '29.58', '31.27', '8.75', '26.18'], ['93.36', '83.19', '93.36', '88.15', '74.74', '88.15', '91.85', '78.66', '91.85'], ['49.35', '27.96', '46.24', '42.62', '26.39', '39.5', '43.56', '16.91', '36.52'], ['37.72', '15.59', '33.81', '28.57', '14.29', '23.79', '35.99', '11.14', '29.6'], ['33.96', '11.79', '30.17', '27.32', '11.93', '23.75', '35.57', '10.47', '29.03'], ['31.72', '9.6', '28.58', '23.16', '7.18', '20.06', '27.44', '7.08', '23.66'], ['41.47', '18.72', '37.76', '39.15', '22.6', '34.99', '34.63', '10.62', '29.43'], ['31.1', '11.54', '28.56', '41.57', '26.89', '38.17', '28.74', '7.87', '24.66'], ['36.15', '15.11', '33.22', '43.49', '28.7', '39.66', '30.59', '10.01', '25.65'], ['39.23', '17.09', '36.03', '45.13', '30.13', '39.67', '33.14', '11.63', '28.55'], ['40.04', '17.61', '37.59', '44.77', '29.1', '41.55', '37.12', '11.87', '32.45']]
column
['R-1', 'R-2', 'R-L', 'R-1', 'R-2', 'R-L', 'R-1', 'R-2', 'R-L']
['SENTREWRITING (Chen and Bansal 2018)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>CNN/DM || R-1</th> <th>CNN/DM || R-2</th> <th>CNN/DM || R-L</th> <th>NYT || R-1.1</th> <th>NYT || R-2</th> <th>NYT || R-L</th> <th>BIGPATENT || R-1</th> <th>BIGPATENT || R-2</th> <th>BIGPATENT || R-L</th> </tr> </thead> <tbody> <tr> <td>Models || LEAD-3</td> <td>40.23</td> <td>17.52</td> <td>36.34</td> <td>32.93</td> <td>17.69</td> <td>29.58</td> <td>31.27</td> <td>8.75</td> <td>26.18</td> </tr> <tr> <td>Models || ORACLEFRAG (Grusky et al. 2018)</td> <td>93.36</td> <td>83.19</td> <td>93.36</td> <td>88.15</td> <td>74.74</td> <td>88.15</td> <td>91.85</td> <td>78.66</td> <td>91.85</td> </tr> <tr> <td>Models || ORACLEEXT</td> <td>49.35</td> <td>27.96</td> <td>46.24</td> <td>42.62</td> <td>26.39</td> <td>39.5</td> <td>43.56</td> <td>16.91</td> <td>36.52</td> </tr> <tr> <td>Models || TEXTRANK (Mihalcea and Tarau 2004)</td> <td>37.72</td> <td>15.59</td> <td>33.81</td> <td>28.57</td> <td>14.29</td> <td>23.79</td> <td>35.99</td> <td>11.14</td> <td>29.6</td> </tr> <tr> <td>Models || LEXRANK (Erkan and Radev 2004)</td> <td>33.96</td> <td>11.79</td> <td>30.17</td> <td>27.32</td> <td>11.93</td> <td>23.75</td> <td>35.57</td> <td>10.47</td> <td>29.03</td> </tr> <tr> <td>Models || SUMBASIC (Nenkova and Vanderwende 2005)</td> <td>31.72</td> <td>9.6</td> <td>28.58</td> <td>23.16</td> <td>7.18</td> <td>20.06</td> <td>27.44</td> <td>7.08</td> <td>23.66</td> </tr> <tr> <td>Models || RNN-EXT RL (Chen and Bansal 2018)</td> <td>41.47</td> <td>18.72</td> <td>37.76</td> <td>39.15</td> <td>22.6</td> <td>34.99</td> <td>34.63</td> <td>10.62</td> <td>29.43</td> </tr> <tr> <td>Models || SEQ2SEQ (Sutskever et al. 2014)</td> <td>31.1</td> <td>11.54</td> <td>28.56</td> <td>41.57</td> <td>26.89</td> <td>38.17</td> <td>28.74</td> <td>7.87</td> <td>24.66</td> </tr> <tr> <td>Models || POINTGEN (See et al. 2017)</td> <td>36.15</td> <td>15.11</td> <td>33.22</td> <td>43.49</td> <td>28.7</td> <td>39.66</td> <td>30.59</td> <td>10.01</td> <td>25.65</td> </tr> <tr> <td>Models || POINTGEN+COV (See et al. 2017)</td> <td>39.23</td> <td>17.09</td> <td>36.03</td> <td>45.13</td> <td>30.13</td> <td>39.67</td> <td>33.14</td> <td>11.63</td> <td>28.55</td> </tr> <tr> <td>Models || SENTREWRITING (Chen and Bansal 2018)</td> <td>40.04</td> <td>17.61</td> <td>37.59</td> <td>44.77</td> <td>29.1</td> <td>41.55</td> <td>37.12</td> <td>11.87</td> <td>32.45</td> </tr> </tbody></table>
Table 4
table_4
P19-1212
5
acl2019
Table 4 reports F1 scores of ROUGE-1, 2, and L (Lin and Hovy, 2003) for all models. For BIGPATENT, almost all models outperform the LEAD-3 baseline due to the more uniform distribution of salient content in BIGPATENT’s input articles. Among extractive models, TEXTRANK and LEXRANK outperform RNN-EXT RL which was trained on only the first 400 words of the input, again suggesting the need for neural models to efficiently handle longer input. Finally, SENTREWRITING, a reinforcement learning model with ROUGE as reward, achieves the best performance on BIGPATENT.
[1, 1, 1, 1]
['Table 4 reports F1 scores of ROUGE-1, 2, and L (Lin and Hovy, 2003) for all models.', 'For BIGPATENT, almost all models outperform the LEAD-3 baseline due to the more uniform distribution of salient content in BIGPATENT’s input articles.', 'Among extractive models, TEXTRANK and LEXRANK outperform RNN-EXT RL which was trained on only the first 400 words of the input, again suggesting the need for neural models to efficiently handle longer input.', 'Finally, SENTREWRITING, a reinforcement learning model with ROUGE as reward, achieves the best performance on BIGPATENT.']
[['R-1', 'R-2', 'R-L'], ['BIGPATENT', 'LEAD-3'], ['TEXTRANK (Mihalcea and Tarau 2004)', 'LEXRANK (Erkan and Radev 2004)', 'RNN-EXT RL (Chen and Bansal 2018)'], ['SENTREWRITING (Chen and Bansal 2018)', 'BIGPATENT']]
1
P19-1216table_2
Results for the Giga-MSC dataset.
2
[['Model', 'Ground truth'], ['Model', '#1 WG (Filippova, 10)'], ['Model', '#2 KWG (Boudin+, 13)'], ['Model', '#3 Hard Para.'], ['Model', '#4 Seq2seq with attention'], ['Model', '#5 Our rewriter (RWT)']]
1
[['METEOR'], ['NN-1'], ['NN-2'], ['NN-3'], ['NN-4'], ['Comp. rate']]
[['-', '8.6', '28', '40', '49.1', '0.5'], ['0.29', '0', '0', '2.8', '6.8', '0.34'], ['0.36', '0', '0', '1.1', '3.1', '0.52'], ['0.35', '10.1', '19.7', '29.1', '38', '0.51'], ['0.33', '12.7', '24', '34.7', '44.4', '0.49'], ['0.36', '9', '17.4', '25.7', '33.8', '0.5']]
column
['METEOR', 'NN-1', 'NN-2', 'NN-3', 'NN-4', 'Comp. rate']
['#5 Our rewriter (RWT)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>METEOR</th> <th>NN-1</th> <th>NN-2</th> <th>NN-3</th> <th>NN-4</th> <th>Comp. rate</th> </tr> </thead> <tbody> <tr> <td>Model || Ground truth</td> <td>-</td> <td>8.6</td> <td>28</td> <td>40</td> <td>49.1</td> <td>0.5</td> </tr> <tr> <td>Model || #1 WG (Filippova, 10)</td> <td>0.29</td> <td>0</td> <td>0</td> <td>2.8</td> <td>6.8</td> <td>0.34</td> </tr> <tr> <td>Model || #2 KWG (Boudin+, 13)</td> <td>0.36</td> <td>0</td> <td>0</td> <td>1.1</td> <td>3.1</td> <td>0.52</td> </tr> <tr> <td>Model || #3 Hard Para.</td> <td>0.35</td> <td>10.1</td> <td>19.7</td> <td>29.1</td> <td>38</td> <td>0.51</td> </tr> <tr> <td>Model || #4 Seq2seq with attention</td> <td>0.33</td> <td>12.7</td> <td>24</td> <td>34.7</td> <td>44.4</td> <td>0.49</td> </tr> <tr> <td>Model || #5 Our rewriter (RWT)</td> <td>0.36</td> <td>9</td> <td>17.4</td> <td>25.7</td> <td>33.8</td> <td>0.5</td> </tr> </tbody></table>
Table 2
table_2
P19-1216
4
acl2019
5 Results and Analysis . METEOR metric (n-gram overlap with synonyms) was used for automatic evaluation. The novel n-gram rate (i.e., NN-1, NN-2, NN-3, and NN-4) was also computed to investigate the number of novel words that could be introduced by the models. Table 2 and Table 3 present the results and below are our observations: (i) keyphrase word graph approach (#2) is a strong baseline according to the METEOR metric. In comparison, the proposed rewriter (#5) yields comparable result on the METEOR metric for the Giga-MSC dataset but lower result for the Cornell dataset. We speculate that it may be due to the difference in the ground-truth compression. 8.6% of novel unigrams exist in the ground-truth compression of the Giga-MSC dataset, while only 5.2% of novel unigrams exist in that of the Cornell dataset, (ii) Hard Para. (#3), Seq2seq (#4), and our rewriter (#5) significantly increase the number of novel n-grams, and the proposed rewriter (#5) seemed to be a better trade-off between the information coverage (measured by METEOR) and the introduction of novel n-grams across all methods, (iii) on comparing with Seq2seq (#4) and our rewriter (#5), we found that adding pseudo data helps to decrease the novel words rate and increase the METEOR score on both datasets.
[2, 1, 1, 1, 1, 1, 1]
['5 Results and Analysis .', 'METEOR metric (n-gram overlap with synonyms) was used for automatic evaluation.', 'The novel n-gram rate (i.e., NN-1, NN-2, NN-3, and NN-4) was also computed to investigate the number of novel words that could be introduced by the models.', 'Table 2 and Table 3 present the results and below are our observations: (i) keyphrase word graph approach (#2) is a strong baseline according to the METEOR metric.', 'In comparison, the proposed rewriter (#5) yields comparable result on the METEOR metric for the Giga-MSC dataset but lower result for the Cornell dataset.', 'We speculate that it may be due to the difference in the ground-truth compression.', '8.6% of novel unigrams exist in the ground-truth compression of the Giga-MSC dataset, while only 5.2% of novel unigrams exist in that of the Cornell dataset, (ii) Hard Para. (#3), Seq2seq (#4), and our rewriter (#5) significantly increase the number of novel n-grams, and the proposed rewriter (#5) seemed to be a better trade-off between the information coverage (measured by METEOR) and the introduction of novel n-grams across all methods, (iii) on comparing with Seq2seq (#4) and our rewriter (#5), we found that adding pseudo data helps to decrease the novel words rate and increase the METEOR score on both datasets.']
[None, ['METEOR'], ['NN-1', 'NN-2', 'NN-3', 'NN-4'], ['METEOR', '#2 KWG (Boudin+, 13)'], ['METEOR', '#5 Our rewriter (RWT)'], None, ['#3 Hard Para.', '#4 Seq2seq with attention', '#5 Our rewriter (RWT)', 'METEOR']]
1
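An illustrative sketch (ours) of one common way to compute a novel n-gram rate of the kind reported above (NN-1 through NN-4): the percentage of n-grams in the generated compression that do not occur in the source sentences. The exact definition used in P19-1216 may differ.

def ngrams(tokens, n):
    # list of all n-grams in a token sequence
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def novel_ngram_rate(source_tokens, output_tokens, n):
    output_grams = ngrams(output_tokens, n)
    if not output_grams:
        return 0.0
    source_grams = set(ngrams(source_tokens, n))
    novel = sum(1 for g in output_grams if g not in source_grams)
    return 100.0 * novel / len(output_grams)

source = "the quick brown fox jumps over the lazy dog".split()
output = "the quick fox jumps over the dog".split()
print(novel_ngram_rate(source, output, 2))  # 2 of 6 output bigrams are unseen -> ~33.3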
P19-1216table_4
Human evaluation for informativeness and grammaticality. † stands for significantly better than KWG with 0.95 confidence.
2
[['Method', 'KWG'], ['Method', 'RWT']]
1
[['Informativeness'], ['Grammaticality']]
[['1.06', '1.19'], ['1.02', '1.40†']]
column
['Informativeness', 'Grammaticality']
['RWT']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Informativeness</th> <th>Grammaticality</th> </tr> </thead> <tbody> <tr> <td>Method || KWG</td> <td>1.06</td> <td>1.19</td> </tr> <tr> <td>Method || RWT</td> <td>1.02</td> <td>1.40†</td> </tr> </tbody></table>
Table 4
table_4
P19-1216
4
acl2019
Human Evaluation . As the METEOR metric cannot measure the grammaticality of compression, we asked two human raters to assess 50 compressed sentences out of the Giga-MSC test dataset in terms of informativeness and grammaticality. We used a 0-2 point scale (2 pts: excellent; 1 pt: good; 0 pts: poor), similar to previous work (we recommend readers to refer to Appendix 2 for the 0-2 scale point evaluation details). Table 4 shows the average ratings for informativeness and readability. From that, we found that our rewriter (RWT) significantly improved the grammaticality of compression in comparison with the keyphrase word graph approach, implying that the pseudo data may contribute to the language modeling of the decoder, thereby improving the grammaticality.
[2, 2, 2, 1, 1]
['Human Evaluation .', 'As the METEOR metric cannot measure the grammaticality of compression, we asked two human raters to assess 50 compressed sentences out of the Giga-MSC test dataset in terms of informativeness and grammaticality.', 'We used a 0-2 point scale (2 pts: excellent; 1 pt: good; 0 pts: poor), similar to previous work (we recommend readers to refer to Appendix 2 for the 0-2 scale point evaluation details).', 'Table 4 shows the average ratings for informativeness and readability.', 'From that, we found that our rewriter (RWT) significantly improved the grammaticality of compression in comparison with the keyphrase word graph approach, implying that the pseudo data may contribute to the language modeling of the decoder, thereby improving the grammaticality.']
[None, None, None, ['Informativeness', 'Grammaticality'], ['RWT', 'Grammaticality']]
1
P19-1220table_2
Performance of our and competing models on the MS MARCO V2 leaderboard (4 March 2019). aSeo et al. (2017); bYan et al. (2019); cShao (unpublished), a variant of Tan et al. (2018); dLi (unpublished), a model using Devlin et al. (2018) and See et al. (2017); eQian (unpublished); f Wu et al. (2018). Whether the competing models are ensemble models or not is unreported.
2
[['Model', 'BiDAFa'], ['Model', 'Deep Cascade Qab'], ['Model', 'S-Net+CES2Sc'], ['Model', 'BERT+Multi-PGNetd'], ['Model', 'Selector+CCGe'], ['Model', 'VNETf'], ['Model', 'Masque (NLG single)'], ['Model', 'Masque (NLG ensemble)'], ['Model', 'Masque (Q&A single)'], ['Model', 'Masque (Q&A ensemble)'], ['Model', 'Human Performance']]
2
[['NLG', 'R-L'], ['NLG', 'B-1'], ['Q&A', 'R-L'], ['Q&A', 'B-1']]
[['16.91', '9.3', '23.96', '10.64'], ['35.14', '37.35', '52.01', '54.64'], ['45.04', '40.62', '44.96', '46.36'], ['47.37', '45.09', '48.14', '52.03'], ['47.39', '45.26', '50.63', '52.03'], ['48.37', '46.75', '51.63', '54.37'], ['49.19', '49.63', '48.42', '48.68'], ['49.61', '50.13', '48.92', '48.75'], ['25.66', '36.62', '50.93', '42.37'], ['28.53', '39.87', '52.2', '43.77'], ['63.21', '53.03', '53.87', '48.5']]
column
['R-L', 'B-1', 'R-L', 'B-1']
['Masque (NLG ensemble)', 'Masque (Q&A ensemble)', 'Masque (NLG single)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>NLG || R-L</th> <th>NLG || B-1</th> <th>Q&amp;A || R-L</th> <th>Q&amp;A || B-1</th> </tr> </thead> <tbody> <tr> <td>Model || BiDAFa</td> <td>16.91</td> <td>9.3</td> <td>23.96</td> <td>10.64</td> </tr> <tr> <td>Model || Deep Cascade Qab</td> <td>35.14</td> <td>37.35</td> <td>52.01</td> <td>54.64</td> </tr> <tr> <td>Model || S-Net+CES2Sc</td> <td>45.04</td> <td>40.62</td> <td>44.96</td> <td>46.36</td> </tr> <tr> <td>Model || BERT+Multi-PGNetd</td> <td>47.37</td> <td>45.09</td> <td>48.14</td> <td>52.03</td> </tr> <tr> <td>Model || Selector+CCGe</td> <td>47.39</td> <td>45.26</td> <td>50.63</td> <td>52.03</td> </tr> <tr> <td>Model || VNETf</td> <td>48.37</td> <td>46.75</td> <td>51.63</td> <td>54.37</td> </tr> <tr> <td>Model || Masque (NLG single)</td> <td>49.19</td> <td>49.63</td> <td>48.42</td> <td>48.68</td> </tr> <tr> <td>Model || Masque (NLG ensemble)</td> <td>49.61</td> <td>50.13</td> <td>48.92</td> <td>48.75</td> </tr> <tr> <td>Model || Masque (Q&amp;A single)</td> <td>25.66</td> <td>36.62</td> <td>50.93</td> <td>42.37</td> </tr> <tr> <td>Model || Masque (Q&amp;A ensemble)</td> <td>28.53</td> <td>39.87</td> <td>52.2</td> <td>43.77</td> </tr> <tr> <td>Model || Human Performance</td> <td>63.21</td> <td>53.03</td> <td>53.87</td> <td>48.5</td> </tr> </tbody></table>
Table 2
table_2
P19-1220
6
acl2019
4.2 Results . Does our model achieve state-of-the-art on the two tasks with different styles?. Table 2 shows the performance of our model and competing models on the leaderboard. Our ensemble model of six training runs, where each model was trained with the two answer styles, achieved state-of-the-art performance on both tasks in terms of ROUGE-L. In particular, for the NLG task, our single model outperformed competing models in terms of both ROUGE-L and BLEU-1.
[2, 2, 1, 1, 1]
['4.2 Results .', 'Does our model achieve state-of-the-art on the two tasks with different styles?.', 'Table 2 shows the performance of our model and competing models on the leaderboard.', 'Our ensemble model of six training runs, where each model was trained with the two answer styles, achieved state-of-the-art performance on both tasks in terms of ROUGE-L.', 'In particular, for the NLG task, our single model outperformed competing models in terms of both ROUGE-L and BLEU-1.']
[None, None, ['Masque (NLG ensemble)', 'Masque (Q&A ensemble)', 'Masque (NLG single)', 'Masque (Q&A single)'], ['Masque (NLG ensemble)', 'Masque (Q&A ensemble)'], ['Masque (NLG single)']]
1
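R-L in the record above abbreviates ROUGE-L, which scores a generated answer against a reference by the length of their longest common token subsequence. The sketch below computes a sentence-level ROUGE-L F-measure under plain whitespace tokenization; it is a simplification, since the official MS MARCO evaluation adds its own normalization, multi-reference handling, and a recall-weighted F-measure, and the example strings are invented.

def lcs_length(a, b):
    # Dynamic-programming length of the longest common subsequence of two token lists.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def rouge_l_f1(candidate, reference):
    cand, ref = candidate.lower().split(), reference.lower().split()
    lcs = lcs_length(cand, ref)
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(cand), lcs / len(ref)
    return 2 * precision * recall / (precision + recall)

print(rouge_l_f1("the cat sat on the mat", "the cat lay on the mat"))  # 0.833...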
P19-1220table_5
Performance of our and competing models on the NarrativeQA test set. aSeo et al. (2017); bTay et al. (2018); cBauer et al. (2018); dIndurthi et al. (2018); eHu et al. (2018). f Results on the NarrativeQA validation set.
2
[['Model', 'BiDAFa'], ['Model', 'DECAPROP'], ['Model', 'MHPGM+NOIC'], ['Model', 'ConZNet'], ['Model', 'eRMR+A2D'], ['Model', 'Masque (NQA)'], ['Model', 'w/o multi-style learning'], ['Model', 'Masque (NLG)'], ['Model', 'fMasque (NQA; valid.)']]
1
[['B-1'], ['B-4'], ['M'], ['R-L']]
[['33.72', '15.53', '15.38', '36.3'], ['42', '23.42', '23.42', '40.07'], ['43.63', '21.07', '19.03', '44.16'], ['42.76', '22.49', '19.24', '46.67'], ['50.4', '26.5', 'N/A', '53.3'], ['54.11', '30.43', '26.13', '59.87'], ['48.7', '20.98', '21.95', '54.74'], ['39.14', '18.11', '24.62', '50.09'], ['52.78', '28.72', '25.38', '58.94']]
column
['B-1', 'B-4', 'M', 'R-L']
['Masque (NQA)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>B-1</th> <th>B-4</th> <th>M</th> <th>R-L</th> </tr> </thead> <tbody> <tr> <td>Model || BiDAFa</td> <td>33.72</td> <td>15.53</td> <td>15.38</td> <td>36.3</td> </tr> <tr> <td>Model || DECAPROP</td> <td>42</td> <td>23.42</td> <td>23.42</td> <td>40.07</td> </tr> <tr> <td>Model || MHPGM+NOIC</td> <td>43.63</td> <td>21.07</td> <td>19.03</td> <td>44.16</td> </tr> <tr> <td>Model || ConZNet</td> <td>42.76</td> <td>22.49</td> <td>19.24</td> <td>46.67</td> </tr> <tr> <td>Model || eRMR+A2D</td> <td>50.4</td> <td>26.5</td> <td>N/A</td> <td>53.3</td> </tr> <tr> <td>Model || Masque (NQA)</td> <td>54.11</td> <td>30.43</td> <td>26.13</td> <td>59.87</td> </tr> <tr> <td>Model || w/o multi-style learning</td> <td>48.7</td> <td>20.98</td> <td>21.95</td> <td>54.74</td> </tr> <tr> <td>Model || Masque (NLG)</td> <td>39.14</td> <td>18.11</td> <td>24.62</td> <td>50.09</td> </tr> <tr> <td>Model || fMasque (NQA; valid.)</td> <td>52.78</td> <td>28.72</td> <td>25.38</td> <td>58.94</td> </tr> </tbody></table>
Table 5
table_5
P19-1220
8
acl2019
5.2 Results . Does our model achieve state-of-the-art performance?. Table 5 shows that our single model, trained with two styles and controlled with the NQA style, pushed forward the state-of-the-art by a significant margin. The evaluation scores of the model controlled with the NLG style were low because the two styles are different. Also, our model without multi-style learning (trained with only the NQA style) outperformed the baselines in terms of ROUGE-L. This indicates that our model architecture itself is powerful for natural language understanding in RC.
[2, 2, 1, 1, 1, 1]
['5.2 Results .', 'Does our model achieve state-of-the-art performance?.', 'Table 5 shows that our single model, trained with two styles and controlled with the NQA style, pushed forward the state-of-the-art by a significant margin.', 'The evaluation scores of the model controlled with the NLG style were low because the two styles are different.', 'Also, our model without multi-style learning (trained with only the NQA style) outperformed the baselines in terms of ROUGE-L.', 'This indicates that our model architecture itself is powerful for natural language understanding in RC.']
[None, None, ['Masque (NQA)'], ['Masque (NLG)'], ['Masque (NQA)', 'R-L'], ['Masque (NQA)', 'Masque (NLG)']]
1
P19-1221table_4
Results on the SQuAD-document dev set.
2
[['Model', 'S-Norm (Clark and Gardner, 2018)'], ['Model', 'RE3QABASE'], ['Model', 'RE3QALARGE']]
1
[['EM'], ['F1']]
[['64.08', '72.37'], ['77.9', '84.81'], ['80.71', '87.2']]
column
['EM', 'F1']
['RE3QALARGE']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>EM</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>Model || S-Norm (Clark and Gardner, 2018)</td> <td>64.08</td> <td>72.37</td> </tr> <tr> <td>Model || RE3QABASE</td> <td>77.9</td> <td>84.81</td> </tr> <tr> <td>Model || RE3QALARGE</td> <td>80.71</td> <td>87.2</td> </tr> </tbody></table>
Table 4
table_4
P19-1221
7
acl2019
We also report the performance on document-level SQuAD in Table 4 to assess our approach in single-document setting. We find our approach adapts well: the best model achieves 87.2 F1. Note that the BERTLARGE model has obtained 90.9 F1 on the original SQuAD dataset (single-paragraph setting), which is only 3.7% ahead of us.
[1, 1, 2]
['We also report the performance on document-level SQuAD in Table 4 to assess our approach in single-document setting.', 'We find our approach adapts well: the best model achieves 87.2 F1.', 'Note that the BERTLARGE model has obtained 90.9 F1 on the original SQuAD dataset (single-paragraph setting), which is only 3.7% ahead of us.']
[None, ['F1', 'RE3QALARGE'], ['F1']]
1
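EM and F1 in the SQuAD-style records are answer-string metrics: exact match after normalization, and a bag-of-tokens F1 between the predicted and gold answers. A small sketch of the usual normalization and scoring follows; the official evaluator additionally takes the maximum over several gold answers, which is omitted, and the example answers are made up.

import re
import string
from collections import Counter

def normalize(text):
    # Lowercase, strip punctuation and articles, collapse whitespace (SQuAD-style).
    text = "".join(ch for ch in text.lower() if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, gold):
    return float(normalize(prediction) == normalize(gold))

def token_f1(prediction, gold):
    pred_tokens, gold_tokens = normalize(prediction).split(), normalize(gold).split()
    overlap = sum((Counter(pred_tokens) & Counter(gold_tokens)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred_tokens), overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("the Eiffel Tower", "Eiffel Tower"), token_f1("the Eiffel Tower", "Eiffel Tower"))
# 1.0 1.0 (the leading article is removed by normalization)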
P19-1225table_5
Performance of our model and the baseline in evidence extraction on the development set in the distractor setting. The correlation is the Kendall tau correlation of the number of predicted evidence sentences and that of gold evidence.
1
[['baseline'], ['QFE']]
1
[['Precision'], ['Recall'], ['Correlation']]
[['79', '82.4', '0.259'], ['88.4', '83.2', '0.375']]
column
['Precision', 'Recall', 'Correlation']
['QFE']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Precision</th> <th>Recall</th> <th>Correlation</th> </tr> </thead> <tbody> <tr> <td>baseline</td> <td>79</td> <td>82.4</td> <td>0.259</td> </tr> <tr> <td>QFE</td> <td>88.4</td> <td>83.2</td> <td>0.375</td> </tr> </tbody></table>
Table 5
table_5
P19-1225
6
acl2019
What are the characteristics of our evidence extraction? . Table 5 shows the evidence extraction performance in the distractor setting. Our model improves both precision and recall, and the improvement in precision is larger. Figure 4 reveals the reason for the high EM and precision scores; QFE rarely extracts too much evidence. That is, it predicts the number of evidence sentences more accurately than the baseline. Table 5 also shows the correlation of our model about the number of evidence sentences is higher than that of the baseline. We consider that the sequential extraction and the adaptive termination help to prevent overextraction. In contrast, the baseline evaluates each sentence independently, so the baseline often extracts too much evidence.
[2, 1, 1, 0, 0, 1, 2, 2]
['What are the characteristics of our evidence extraction? .', 'Table 5 shows the evidence extraction performance in the distractor setting.', 'Our model improves both precision and recall, and the improvement in precision is larger.', 'Figure 4 reveals the reason for the high EM and precision scores; QFE rarely extracts too much evidence.', 'That is, it predicts the number of evidence sentences more accurately than the baseline.', 'Table 5 also shows the correlation of our model about the number of evidence sentences is higher than that of the baseline.', 'We consider that the sequential extraction and the adaptive termination help to prevent overextraction.', 'In contrast, the baseline evaluates each sentence independently, so the baseline often extracts too much evidence.']
[None, None, ['Precision', 'Recall', 'QFE', 'baseline'], None, None, ['Correlation', 'QFE', 'baseline'], None, ['baseline']]
1
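The Correlation column in the record above is defined in the caption as the Kendall tau between the number of predicted evidence sentences per question and the gold count. The sketch below is a plain pair-counting tau (tau-a) over two such count lists; with many tied counts a tie-corrected variant such as scipy.stats.kendalltau would normally be preferred, and the example counts are invented.

from itertools import combinations

def kendall_tau_a(xs, ys):
    # Concordant minus discordant pairs over all pairs; ties are simply not counted.
    assert len(xs) == len(ys)
    concordant = discordant = 0
    for (x1, y1), (x2, y2) in combinations(list(zip(xs, ys)), 2):
        sign = (x1 - x2) * (y1 - y2)
        concordant += sign > 0
        discordant += sign < 0
    return (concordant - discordant) / (len(xs) * (len(xs) - 1) / 2)

predicted_counts = [2, 3, 2, 4, 1]  # hypothetical number of extracted evidence sentences
gold_counts = [2, 2, 3, 4, 1]       # hypothetical gold counts for the same questions
print(kendall_tau_a(predicted_counts, gold_counts))  # 0.6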
P19-1227table_6
Overall results on the XQA dataset.
2
[['Languages', 'English'], ['Languages', 'Chinese'], ['Languages', 'French'], ['Languages', 'German'], ['Languages', 'Polish'], ['Languages', 'Portuguese'], ['Languages', 'Russian'], ['Languages', 'Tamil'], ['Languages', 'Ukrainian']]
3
[['Translate-Test', 'DocQA', 'EM'], ['Translate-Test', 'DocQA', 'F1'], ['Translate-Test', 'BERT', 'EM'], ['Translate-Test', 'BERT', 'F1'], ['Translate-Train', 'DocQA', 'EM'], ['Translate-Train', 'DocQA', 'F1'], ['Translate-Train', 'BERT', 'EM'], ['Translate-Train', 'BERT', 'F1'], ['Zero-shot', 'Multilingual BERT', 'EM'], ['Zero-shot', 'Multilingual BERT', 'F1']]
[['32.32', '38.29', '33.72', '40.51', '32.32', '38.29', '33.72', '40.51', '30.85', '38.11'], ['7.17', '17.2', '9.81', '23.05', '7.45', '18.73', '18.93', '31.5', '25.88', '39.53'], ['11.19', '18.97', '15.42', '26.13', '-', '-', '-', '-', '23.34', '31.08'], ['12.98', '19.15', '16.84', '23.65', '11.23', '15.08', '19.06', '24.33', '21.42', '26.87'], ['9.73', '16.51', '13.62', '22.18', '-', '-', '-', '-', '16.27', '21.87'], ['10.03', '15.86', '13.75', '21.27', '-', '-', '-', '-', '18.97', '23.95'], ['5.01', '9.62', '7.34', '13.61', '-', '-', '-', '-', '10.38', '13.44'], ['2.2', '6.41', '4.58', '10.15', '-', '-', '-', '-', '10.07', '14.25'], ['7.94', '14.07', '10.53', '17.72', '-', '-', '-', '-', '15.12', '20.82']]
column
['EM', 'F1', 'EM', 'F1', 'EM', 'F1', 'EM', 'F1', 'EM', 'F1']
['Multilingual BERT']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Translate-Test || DocQA || EM</th> <th>Translate-Test || DocQA || F1</th> <th>Translate-Test || BERT || EM</th> <th>Translate-Test || BERT || F1</th> <th>Translate-Train || DocQA || EM</th> <th>Translate-Train || DocQA || F1</th> <th>Translate-Train || BERT || EM</th> <th>Translate-Train || BERT || F1</th> <th>Zero-shot || Multilingual BERT || EM</th> <th>Zero-shot || Multilingual BERT || F1</th> </tr> </thead> <tbody> <tr> <td>Languages || English</td> <td>32.32</td> <td>38.29</td> <td>33.72</td> <td>40.51</td> <td>32.32</td> <td>38.29</td> <td>33.72</td> <td>40.51</td> <td>30.85</td> <td>38.11</td> </tr> <tr> <td>Languages || Chinese</td> <td>7.17</td> <td>17.2</td> <td>9.81</td> <td>23.05</td> <td>7.45</td> <td>18.73</td> <td>18.93</td> <td>31.5</td> <td>25.88</td> <td>39.53</td> </tr> <tr> <td>Languages || French</td> <td>11.19</td> <td>18.97</td> <td>15.42</td> <td>26.13</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>23.34</td> <td>31.08</td> </tr> <tr> <td>Languages || German</td> <td>12.98</td> <td>19.15</td> <td>16.84</td> <td>23.65</td> <td>11.23</td> <td>15.08</td> <td>19.06</td> <td>24.33</td> <td>21.42</td> <td>26.87</td> </tr> <tr> <td>Languages || Polish</td> <td>9.73</td> <td>16.51</td> <td>13.62</td> <td>22.18</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>16.27</td> <td>21.87</td> </tr> <tr> <td>Languages || Portuguese</td> <td>10.03</td> <td>15.86</td> <td>13.75</td> <td>21.27</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>18.97</td> <td>23.95</td> </tr> <tr> <td>Languages || Russian</td> <td>5.01</td> <td>9.62</td> <td>7.34</td> <td>13.61</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>10.38</td> <td>13.44</td> </tr> <tr> <td>Languages || Tamil</td> <td>2.2</td> <td>6.41</td> <td>4.58</td> <td>10.15</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>10.07</td> <td>14.25</td> </tr> <tr> <td>Languages || Ukrainian</td> <td>7.94</td> <td>14.07</td> <td>10.53</td> <td>17.72</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> <td>15.12</td> <td>20.82</td> </tr> </tbody></table>
Table 6
table_6
P19-1227
6
acl2019
5.3 Overall Results. Table 6 shows the overall results for different methods in different languages. There is a large gap between the performance of English and that of other target languages, which implies that the task of cross-lingual OpenQA is difficult. In the English test set, the performance of the multilingual BERT model is worse than that of the monolingual BERT model. In almost all target languages, however, the multilingual model achieves the best result, manifesting its ability in capturing answers for questions across various languages. When we compare DocumentQA to BERT, although they have similar performance in English, BERT consistently outperforms DocumentQA by a large margin in all target languages in both translate-test and translate-train settings. We conjecture that it is because the BERT model, which has been pretrained on large-scale unlabeled text data, has better generalization power, and could better handle the different distributions between the original English training data and the machine translated test data. Translate-train methods outperform translate-test methods in all cases except for DocumentQA in German. This may be due to the fact that DocumentQA uses space-tokenized words as basic units. In German, there is no space between compound words, resulting in countless possible combinations. Therefore, many of the words in translate-train German data do not have pretrained word vectors. On the contrary, using WordPiece tokenizer, BERT is not influenced by this.
[2, 1, 1, 1, 1, 1, 2, 1, 2, 2, 2, 2]
['5.3 Overall Results.', 'Table 6 shows the overall results for different methods in different languages.', 'There is a large gap between the performance of English and that of other target languages, which implies that the task of cross-lingual OpenQA is difficult.', 'In the English test set, the performance of the multilingual BERT model is worse than that of the monolingual BERT model.', 'In almost all target languages, however, the multilingual model achieves the best result, manifesting its ability in capturing answers for questions across various languages.', 'When we compare DocumentQA to BERT, although they have similar performance in English, BERT consistently outperforms DocumentQA by a large margin in all target languages in both translate-test and translate-train settings.', 'We conjecture that it is because the BERT model, which has been pretrained on large-scale unlabeled text data, has better generalization power, and could better handle the different distributions between the original English training data and the machine translated test data.', 'Translate-train methods outperform translate-test methods in all cases except for DocumentQA in German.', 'This may be due to the fact that DocumentQA uses space-tokenized words as basic units.', 'In German, there is no space between compound words, resulting in countless possible combinations.', 'Therefore, many of the words in translate-train German data do not have pretrained word vectors.', 'On the contrary, using WordPiece tokenizer, BERT is not influenced by this.']
[None, ['Languages'], ['English'], ['English', 'Multilingual BERT', 'BERT'], ['Languages', 'Multilingual BERT'], ['DocQA', 'BERT', 'English', 'Translate-Train', 'Translate-Test'], None, ['Translate-Train', 'Translate-Test', 'DocQA', 'German'], ['DocQA'], ['German'], ['Translate-Train', 'German'], ['BERT']]
1
P19-1227table_9
Performance with respect to language distance and percentage of “easy” questions.
2
[['Languages', 'German'], ['Languages', 'Chinese'], ['Languages', 'Portuguese'], ['Languages', 'French'], ['Languages', 'Polish'], ['Languages', 'Ukrainian'], ['Languages', 'Russian'], ['Languages', 'Tamil']]
1
[['Genetic dist.'], ['Pct. of easy'], ['EM']]
[['30.8', '19.09', '36.67'], ['82.4', '33.24', '35.93'], ['59.8', '29.03', '33.68'], ['48.7', '23.37', '31.21'], ['66.9', '17.7', '31.17'], ['60.3', '21.18', '24.26'], ['60.3', '18.56', '21.11'], ['96.5', '17.63', '16.95']]
column
['Genetic dist.', 'Pct. of easy', 'EM']
['Languages']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Genetic dist.</th> <th>Pct. of easy</th> <th>EM</th> </tr> </thead> <tbody> <tr> <td>Languages || German</td> <td>30.8</td> <td>19.09</td> <td>36.67</td> </tr> <tr> <td>Languages || Chinese</td> <td>82.4</td> <td>33.24</td> <td>35.93</td> </tr> <tr> <td>Languages || Portuguese</td> <td>59.8</td> <td>29.03</td> <td>33.68</td> </tr> <tr> <td>Languages || French</td> <td>48.7</td> <td>23.37</td> <td>31.21</td> </tr> <tr> <td>Languages || Polish</td> <td>66.9</td> <td>17.7</td> <td>31.17</td> </tr> <tr> <td>Languages || Ukrainian</td> <td>60.3</td> <td>21.18</td> <td>24.26</td> </tr> <tr> <td>Languages || Russian</td> <td>60.3</td> <td>18.56</td> <td>21.11</td> </tr> <tr> <td>Languages || Tamil</td> <td>96.5</td> <td>17.63</td> <td>16.95</td> </tr> </tbody></table>
Table 9
table_9
P19-1227
7
acl2019
The results in Table 9 verify our assumption. The performance of different languages generally decreases as the genetic distance grows. The exceptions are Chinese and Portuguese since the percentages of "easy" questions in them are significantly higher than those in other languages. For languages that have similar genetic distances with English (i.e. Russian, Ukrainian, and Portuguese), the performance increases as the percentage of "easy" questions grows.
[1, 1, 1, 1]
['The results in Table 9 verify our assumption.', 'The performance of different languages generally decreases as the genetic distance grows.', 'The exceptions are Chinese and Portuguese since the percentages of "easy" questions in them are significantly higher than those in other languages.', 'For languages that have similar genetic distances with English (i.e. Russian, Ukrainian, and Portuguese), the performance increases as the percentage of "easy" questions grows.']
[None, ['Pct. of easy', 'EM', 'German', 'French', 'Polish', 'Ukrainian', 'Russian', 'Tamil', 'Genetic dist.'], ['Chinese', 'Portuguese', 'Pct. of easy'], ['Genetic dist.', 'Russian', 'Ukrainian', 'Portuguese', 'Pct. of easy']]
1
P19-1229table_3
Performance on dev data of models trained on a single-domain training data.
2
[['Trained on', 'BC'], ['Trained on', 'PB'], ['Trained on', 'ZX']]
2
[['BC', 'UAS'], ['BC', 'LAS'], ['PB', 'UAS'], ['PB', 'LAS'], ['ZX', 'UAS'], ['ZX', 'LAS']]
[['82.77', '77.66', '68.73', '61.93', '69.34', '61.32'], ['62.1', '55.2', '75.85', '70.12', '51.5', '41.92'], ['56.15', '48.34', '52.56', '43.76', '69.54', '63.65']]
column
['UAS', 'LAS', 'UAS', 'LAS', 'UAS', 'LAS']
['PB', 'BC', 'ZX']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BC || UAS</th> <th>BC || LAS</th> <th>PB || UAS</th> <th>PB || LAS</th> <th>ZX || UAS</th> <th>ZX || LAS</th> </tr> </thead> <tbody> <tr> <td>Trained on || BC</td> <td>82.77</td> <td>77.66</td> <td>68.73</td> <td>61.93</td> <td>69.34</td> <td>61.32</td> </tr> <tr> <td>Trained on || PB</td> <td>62.1</td> <td>55.2</td> <td>75.85</td> <td>70.12</td> <td>51.5</td> <td>41.92</td> </tr> <tr> <td>Trained on || ZX</td> <td>56.15</td> <td>48.34</td> <td>52.56</td> <td>43.76</td> <td>69.54</td> <td>63.65</td> </tr> </tbody></table>
Table 3
table_3
P19-1229
5
acl2019
4.1 Single-domain Training Results . Table 3 presents parsing accuracy on the dev data when training each parser on a single-domain training data. We can see that although PB-train is much smaller than BC-train, the PB-trained parser outperforms the BC-trained parser by about 8% on PB-dev, indicating the usefulness and importance of target-domain labeled data especially when two domains are very dissimilar. However, the gap between the ZX-trained parser and the BC-trained is only about 2% in LAS, which we believe has a two-fold reason. First, the size of ZX-train is even smaller, and is only less than one third of that of PB-train. Second, the BC corpus are from the People Daily newspaper and probably contains novel articles, which are more similar to ZX. Overall, it is clear and reasonable that the parser achieves best performance on a given domain when the training data is from the same domain.
[2, 1, 1, 1, 1, 1, 1]
['4.1 Single-domain Training Results .', 'Table 3 presents parsing accuracy on the dev data when training each parser on a single-domain training data.', 'We can see that although PB-train is much smaller than BC-train, the PB-trained parser outperforms the BC-trained parser by about 8% on PB-dev, indicating the usefulness and importance of target-domain labeled data especially when two domains are very dissimilar.', 'However, the gap between the ZX-trained parser and the BC-trained is only about 2% in LAS, which we believe has a two-fold reason.', 'First, the size of ZX-train is even smaller, and is only less than one third of that of PB-train.', 'Second, the BC corpus are from the People Daily newspaper and probably contains novel articles, which are more similar to ZX.', 'Overall, it is clear and reasonable that the parser achieves best performance on a given domain when the training data is from the same domain.']
[None, None, ['PB', 'BC'], ['ZX', 'BC'], ['ZX'], ['BC'], None]
1
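UAS and LAS in the dependency-parsing records are token-level scores: the fraction of tokens whose predicted head is correct (UAS) and whose head and dependency label are both correct (LAS). A minimal sketch over flat (head, label) lists follows; conventions such as excluding punctuation are ignored, and the toy trees are invented.

def uas_las(gold, predicted):
    # gold / predicted: one (head_index, dependency_label) pair per token.
    assert len(gold) == len(predicted)
    head_hits = sum(g[0] == p[0] for g, p in zip(gold, predicted))
    labeled_hits = sum(g == p for g, p in zip(gold, predicted))
    return head_hits / len(gold), labeled_hits / len(gold)

gold = [(2, "nsubj"), (0, "root"), (2, "obj")]        # toy 3-token sentence
predicted = [(2, "nsubj"), (0, "root"), (2, "iobj")]  # correct heads, one wrong label
print(uas_las(gold, predicted))  # (1.0, 0.666...)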
P19-1229table_5
Final results on the test data.
2
[['Trained on single-domain data', 'BC-train'], ['Trained on single-domain data', 'PB-train'], ['Trained on single-domain data', 'ZX-train'], ['Trained on source- and target-domain data', 'MTL'], ['Trained on source- and target-domain data', 'CONCAT'], ['Trained on source- and target-domain data', 'DOEMB'], ['Trained on source- and target-domain data', '+ ELMo'], ['Trained on source- and target-domain data', '+ Fine-tuning']]
2
[['PB', 'UAS'], ['PB', 'LAS'], ['ZX', 'UAS'], ['ZX', 'LAS']]
[['67.55', '61.01', '68.44', '59.55'], ['74.52', '69.02', '51.62', '40.36'], ['52.24', '42.76', '68.14', '61.71'], ['75.39', '69.69', '72.11', '65.66'], ['77.49', '72.16', '76.8', '70.85'], ['78.24', '72.81', '77.96', '72.04'], ['77.62', '72.35', '78.5', '72.49'], ['82.05', '77.16', '80.44', '75.11']]
column
['UAS', 'LAS', 'UAS', 'LAS']
['DOEMB', '+ ELMo', '+ Fine-tuning']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>PB || UAS</th> <th>PB || LAS</th> <th>ZX || UAS</th> <th>ZX || LAS</th> </tr> </thead> <tbody> <tr> <td>Trained on single-domain data || BC-train</td> <td>67.55</td> <td>61.01</td> <td>68.44</td> <td>59.55</td> </tr> <tr> <td>Trained on single-domain data || PB-train</td> <td>74.52</td> <td>69.02</td> <td>51.62</td> <td>40.36</td> </tr> <tr> <td>Trained on single-domain data || ZX-train</td> <td>52.24</td> <td>42.76</td> <td>68.14</td> <td>61.71</td> </tr> <tr> <td>Trained on source- and target-domain data || MTL</td> <td>75.39</td> <td>69.69</td> <td>72.11</td> <td>65.66</td> </tr> <tr> <td>Trained on source- and target-domain data || CONCAT</td> <td>77.49</td> <td>72.16</td> <td>76.8</td> <td>70.85</td> </tr> <tr> <td>Trained on source- and target-domain data || DOEMB</td> <td>78.24</td> <td>72.81</td> <td>77.96</td> <td>72.04</td> </tr> <tr> <td>Trained on source- and target-domain data || + ELMo</td> <td>77.62</td> <td>72.35</td> <td>78.5</td> <td>72.49</td> </tr> <tr> <td>Trained on source- and target-domain data || + Fine-tuning</td> <td>82.05</td> <td>77.16</td> <td>80.44</td> <td>75.11</td> </tr> </tbody></table>
Table 5
table_5
P19-1229
7
acl2019
4.4 Final Results . Table 5 shows the final results on the test data, which are consistent with the previous observations. First, when constrained on single-domain training data, using the target-domain data is the most effective. Second, using source-domain data as extra training data is helpful, and the DOEMB method performs the best. Third, it is extremely useful and efficient to first train ELMo on very large-scale general-purpose unlabeled data and then fine-tune it on relatively small-scale target-domain unlabeled data.
[2, 1, 1, 1, 1]
['4.4 Final Results .', 'Table 5 shows the final results on the test data, which are consistent with the previous observations.', 'First, when constrained on single-domain training data, using the target-domain data is the most effective.', 'Second, using source-domain data as extra training data is helpful, and the DOEMB method performs the best.', 'Third, it is extremely useful and efficient to first train ELMo on very large-scale general-purpose unlabeled data and then fine-tune it on relatively small-scale target-domain unlabeled data.']
[None, None, ['Trained on single-domain data'], ['Trained on source- and target-domain data', 'DOEMB'], ['+ ELMo', '+ Fine-tuning']]
1
P19-1231table_3
Model performance by F1 on the testing set of each dataset. The first group of models are all fully-supervised, which use manual fine-grained annotations, while the second group of models use only named entity dictionaries to perform the NER task.
2
[['CoNLL (en)', 'PER'], ['CoNLL (en)', 'LOC'], ['CoNLL (en)', 'ORG'], ['CoNLL (en)', 'MISC'], ['CoNLL (en)', 'Overall'], ['CoNLL (sp)', 'PER'], ['CoNLL (sp)', 'LOC'], ['CoNLL (sp)', 'ORG'], ['CoNLL (sp)', 'Overall'], ['MUC', 'PER'], ['MUC', 'LOC'], ['MUC', 'ORG'], ['MUC', 'Overall'], ['Twitter', 'PER'], ['Twitter', 'LOC'], ['Twitter', 'ORG'], ['Twitter', 'Overall']]
1
[['MEMM'], ['CRF'], ['BiLSTM'], ['BiLSTM+CRF'], ['Matching'], ['uPU'], ['buPU'], ['bnPU'], ['AdaPU']]
[['91.61', '93.12', '94.21', '95.71', '6.7', '74.22', '85.01', '87.2F11', '90.17'], ['89.72', '91.15', '91.76', '93.02', '67.16', '69.88', '81.27', '83.37', '85.62'], ['80.6', '81.91', '83.21', '88.45', '46.65', '73.64', '74.72', '75.29', '76.03'], ['77.45', '79.35', '76', '79.86', '53.98', '68.9', '68.9', '66.88', '69.3'], ['86.13', '87.94', '88.3', '90.01', '44.9', '72.32', '79.2', '80.74', '82.94'], ['86.18', '86.77', '88.93', '90.41', '32.4', '82.28', '83.76', '84.3', '85.1'], ['78.48', '80.3', '75.43', '80.55', '28.53', '70.44', '72.55', '73.68', '75.23'], ['79.23', '80.83', '79.27', '83.26', '55.76', '69.82', '71.22', '69.82', '72.28'], ['81.14', '82.63', '80.28', '84.74', '42.23', '73.84', '74.5', '74.43', '75.85'], ['86.32', '87.5', '85.71', '84.55', '27.84', '77.98', '84.94', '84.21', '85.26'], ['81.7', '83.83', '79.48', '83.43', '62.82', '64.56', '72.62', '75.61', '77.35'], ['68.48', '72.33', '66.17', '67.66', '51.6', '45.3', '58.39', '58.75', '60.15'], ['74.66', '76.47', '73.12', '75.08', '50.12', '63.87', '69.89', '70.06', '71.6'], ['73.85', '80.86', '80.61', '80.77', '41.33', '67.3', '72.72', '72.68', '74.66'], ['69.35', '75.39', '73.52', '72.56', '49.74', '59.28', '61.41', '63.44', '65.18'], ['41.81', '47.77', '41.39', '41.33', '32.38', '31.51', '36.78', '35.77', '36.62'], ['61.48', '67.15', '65.6', '65.32', '37.9', '53.63', '57.16', '57.54', '59.36']]
column
['F1', 'F1', 'F1', 'F1', 'F1', 'F1', 'F1', 'F1', 'F1']
['PER', 'LOC', 'ORG', 'Overall']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MEMM</th> <th>CRF</th> <th>BiLSTM</th> <th>BiLSTM+CRF</th> <th>Matching</th> <th>uPU</th> <th>buPU</th> <th>bnPU</th> <th>AdaPU</th> </tr> </thead> <tbody> <tr> <td>CoNLL (en) || PER</td> <td>91.61</td> <td>93.12</td> <td>94.21</td> <td>95.71</td> <td>6.7</td> <td>74.22</td> <td>85.01</td> <td>87.2F11</td> <td>90.17</td> </tr> <tr> <td>CoNLL (en) || LOC</td> <td>89.72</td> <td>91.15</td> <td>91.76</td> <td>93.02</td> <td>67.16</td> <td>69.88</td> <td>81.27</td> <td>83.37</td> <td>85.62</td> </tr> <tr> <td>CoNLL (en) || ORG</td> <td>80.6</td> <td>81.91</td> <td>83.21</td> <td>88.45</td> <td>46.65</td> <td>73.64</td> <td>74.72</td> <td>75.29</td> <td>76.03</td> </tr> <tr> <td>CoNLL (en) || MISC</td> <td>77.45</td> <td>79.35</td> <td>76</td> <td>79.86</td> <td>53.98</td> <td>68.9</td> <td>68.9</td> <td>66.88</td> <td>69.3</td> </tr> <tr> <td>CoNLL (en) || Overall</td> <td>86.13</td> <td>87.94</td> <td>88.3</td> <td>90.01</td> <td>44.9</td> <td>72.32</td> <td>79.2</td> <td>80.74</td> <td>82.94</td> </tr> <tr> <td>CoNLL (sp) || PER</td> <td>86.18</td> <td>86.77</td> <td>88.93</td> <td>90.41</td> <td>32.4</td> <td>82.28</td> <td>83.76</td> <td>84.3</td> <td>85.1</td> </tr> <tr> <td>CoNLL (sp) || LOC</td> <td>78.48</td> <td>80.3</td> <td>75.43</td> <td>80.55</td> <td>28.53</td> <td>70.44</td> <td>72.55</td> <td>73.68</td> <td>75.23</td> </tr> <tr> <td>CoNLL (sp) || ORG</td> <td>79.23</td> <td>80.83</td> <td>79.27</td> <td>83.26</td> <td>55.76</td> <td>69.82</td> <td>71.22</td> <td>69.82</td> <td>72.28</td> </tr> <tr> <td>CoNLL (sp) || Overall</td> <td>81.14</td> <td>82.63</td> <td>80.28</td> <td>84.74</td> <td>42.23</td> <td>73.84</td> <td>74.5</td> <td>74.43</td> <td>75.85</td> </tr> <tr> <td>MUC || PER</td> <td>86.32</td> <td>87.5</td> <td>85.71</td> <td>84.55</td> <td>27.84</td> <td>77.98</td> <td>84.94</td> <td>84.21</td> <td>85.26</td> </tr> <tr> <td>MUC || LOC</td> <td>81.7</td> <td>83.83</td> <td>79.48</td> <td>83.43</td> <td>62.82</td> <td>64.56</td> <td>72.62</td> <td>75.61</td> <td>77.35</td> </tr> <tr> <td>MUC || ORG</td> <td>68.48</td> <td>72.33</td> <td>66.17</td> <td>67.66</td> <td>51.6</td> <td>45.3</td> <td>58.39</td> <td>58.75</td> <td>60.15</td> </tr> <tr> <td>MUC || Overall</td> <td>74.66</td> <td>76.47</td> <td>73.12</td> <td>75.08</td> <td>50.12</td> <td>63.87</td> <td>69.89</td> <td>70.06</td> <td>71.6</td> </tr> <tr> <td>Twitter || PER</td> <td>73.85</td> <td>80.86</td> <td>80.61</td> <td>80.77</td> <td>41.33</td> <td>67.3</td> <td>72.72</td> <td>72.68</td> <td>74.66</td> </tr> <tr> <td>Twitter || LOC</td> <td>69.35</td> <td>75.39</td> <td>73.52</td> <td>72.56</td> <td>49.74</td> <td>59.28</td> <td>61.41</td> <td>63.44</td> <td>65.18</td> </tr> <tr> <td>Twitter || ORG</td> <td>41.81</td> <td>47.77</td> <td>41.39</td> <td>41.33</td> <td>32.38</td> <td>31.51</td> <td>36.78</td> <td>35.77</td> <td>36.62</td> </tr> <tr> <td>Twitter || Overall</td> <td>61.48</td> <td>67.15</td> <td>65.6</td> <td>65.32</td> <td>37.9</td> <td>53.63</td> <td>57.16</td> <td>57.54</td> <td>59.36</td> </tr> </tbody></table>
Table 3
table_3
P19-1231
7
acl2019
General Performance. Table 3 shows model performance by entity type and the overall performance on the four tested datasets. From the table, we can observe: 1) The performance of the Matching model is quite poor compared to other models. We found out that it mainly resulted from low recall values.
[2, 1, 1, 2]
['General Performance.', 'Table 3 shows model performance by entity type and the overall performance on the four tested datasets.', 'From the table, we can observe: 1) The performance of the Matching model is quite poor compared to other models.', 'We found out that it mainly resulted from low recall values.']
[None, ['CoNLL (en)', 'CoNLL (sp)', 'MUC', 'Twitter'], ['Matching'], None]
1
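The NER records report entity-level precision, recall, and F1: a predicted mention counts only if its span and type both match a gold mention exactly. A minimal sketch over sets of (start, end, type) tuples follows; CoNLL-style scoring over BIO tags reduces to this once spans are decoded, and the toy spans are invented.

def span_prf(gold_spans, predicted_spans):
    # Each span is a (start, end, entity_type) tuple; only exact matches count.
    gold, predicted = set(gold_spans), set(predicted_spans)
    tp = len(gold & predicted)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

gold = [(0, 2, "PER"), (5, 6, "LOC"), (9, 11, "ORG")]
predicted = [(0, 2, "PER"), (5, 6, "ORG"), (9, 11, "ORG")]  # one type error
print(span_prf(gold, predicted))  # (0.666..., 0.666..., 0.666...)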
P19-1236table_3
F1-scores on 13PC and 13CG. † indicates that the FINAL results are statistically significant compared to all transfer baselines and ablation baselines with p < 0.01 by t-test.
1
[['Crichton et al. (2017)'], ['STM-TARGET'], ['MULTITASK(NER+LM)'], ['MULTITASK(NER)'], ['FINETUNE'], ['STM+ELMO'], ['CO-LM'], ['CO-NER'], ['MIX-DATA'], ['FINAL']]
2
[['Datasets', '13PC'], ['Datasets', '13CG']]
[['81.92', '78.9'], ['82.59', '76.55'], ['81.33', '75.27'], ['83.09', '77.73'], ['82.55', '76.73'], ['82.76', '78.24'], ['84.43', '78.6'], ['83.87', '78.43'], ['83.88', '78.7'], ['85.54', '79.86']]
column
['F1-scores', 'F1-scores']
['FINAL']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Datasets || 13PC</th> <th>Datasets || 13CG</th> </tr> </thead> <tbody> <tr> <td>Crichton et al. (2017)</td> <td>81.92</td> <td>78.9</td> </tr> <tr> <td>STM-TARGET</td> <td>82.59</td> <td>76.55</td> </tr> <tr> <td>MULTITASK(NER+LM)</td> <td>81.33</td> <td>75.27</td> </tr> <tr> <td>MULTITASK(NER)</td> <td>83.09</td> <td>77.73</td> </tr> <tr> <td>FINETUNE</td> <td>82.55</td> <td>76.73</td> </tr> <tr> <td>STM+ELMO</td> <td>82.76</td> <td>78.24</td> </tr> <tr> <td>CO-LM</td> <td>84.43</td> <td>78.6</td> </tr> <tr> <td>CO-NER</td> <td>83.87</td> <td>78.43</td> </tr> <tr> <td>MIX-DATA</td> <td>83.88</td> <td>78.7</td> </tr> <tr> <td>FINAL</td> <td>85.54</td> <td>79.86</td> </tr> </tbody></table>
Table 3
table_3
P19-1236
6
acl2019
Our method outperforms all baselines significantly, which shows the importance of using rich data. A contrast between our method and MIX-DATA shows the effectiveness of using two different language models across domains. Even though MIX-DATA uses more data for training language models on both the source and target domains, it cannot learn a domain contrast since both sides use the same mixed data. In contrast, our model gives significantly better results by gleaning such contrast. (3) Comparison with current state-of-the-art. Finally, Table 3 also shows a comparison with a state-of-the-art method on the 13PC and 13CG datasets (Crichton et al., 2017), which leverages POS tagging for multi-task learning by using co-training method. Our model outperforms their results, giving the best results in the literature. Discussion. When the number of target-domain NER sentences is 0, the transfer learning setting is unsupervised domain adaptation. As the number of target domain NER sentences increases, they will intuitively play an increasingly important role for target NER. Figure 6 compares the F1-scores of the baseline STM-TARGET and our multi-task model with varying numbers of target-domain NER training data under 100 training epochs. In the nearly unsupervised setting, our method gives the largest improvement of 20.5% F1-scores. As the number of training data increases, the gap between the two methods becomes smaller. But our method still gives a 3.3% F1 score gain when the number of training sentences reach 3,000, show-.
[1, 1, 1, 1, 2, 1, 2, 2, 2, 2, 2, 2, 2, 2]
['Our method outperforms all baselines significantly, which shows the importance of using rich data.', 'A contrast between our method and MIX-DATA shows the effectiveness of using two different language models across domains.', 'Even though MIX-DATA uses more data for training language models on both the source and target domains, it cannot learn a domain contrast since both sides use the same mixed data.', 'In contrast, our model gives significantly better results by gleaning such contrast.', '(3) Comparison with current state-of-the-art.', 'Finally, Table 3 also shows a comparison with a state-of-the-art method on the 13PC and 13CG datasets (Crichton et al., 2017), which leverages POS tagging for multi-task learning by using co-training method.', 'Our model outperforms their results, giving the best results in the literature.', 'Discussion.', 'When the number of target-domain NER sentences is 0, the transfer learning setting is unsupervised domain adaptation.', 'As the number of target domain NER sentences increases, they will intuitively play an increasingly important role for target NER.', 'Figure 6 compares the F1-scores of the baseline STM-TARGET and our multi-task model with varying numbers of target-domain NER training data under 100 training epochs.', 'In the nearly unsupervised setting, our method gives the largest improvement of 20.5% F1-scores.', 'As the number of training data increases, the gap between the two methods becomes smaller.', 'But our method still gives a 3.3% F1 score gain when the number of training sentences reach 3,000, show-.']
[['FINAL'], ['FINAL', 'MIX-DATA'], ['MIX-DATA'], ['FINAL'], None, ['13PC', '13CG', 'MULTITASK(NER+LM)', 'MULTITASK(NER)'], ['FINAL'], None, None, None, None, None, None, None]
1
P19-1240table_6
Comparison results of our ablation models on three datasets (SE: StackExchange) — separate train: our model with pre-trained latent topics; w/o topic-attn: decoder attention without topics (Eq. 7); w/o topic-state: decoder hidden states without topics (Eq. 5). We report F1@1 for Twitter and Weibo, F1@3 for StackExchange. Best results are in bold.
1
[['SEQ2SEQ-COPY'], ['Our model (separate train)'], ['Our model (w/o topic-attn)'], ['Our model (w/o topic-state)'], ['Our full model']]
1
[['Twitter'], ['Weibo'], ['SE']]
[['36.6', '32.01', '31.53'], ['36.75', '32.75', '31.78'], ['37.24', '32.42', '32.34'], ['37.44', '33.48', '31.98'], ['38.49', '34.99', '33.41']]
column
['F1', 'F1', 'F1']
['Our full model']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Twitter</th> <th>Weibo</th> <th>SE</th> </tr> </thead> <tbody> <tr> <td>SEQ2SEQ-COPY</td> <td>36.6</td> <td>32.01</td> <td>31.53</td> </tr> <tr> <td>Our model (separate train)</td> <td>36.75</td> <td>32.75</td> <td>31.78</td> </tr> <tr> <td>Our model (w/o topic-attn)</td> <td>37.24</td> <td>32.42</td> <td>32.34</td> </tr> <tr> <td>Our model (w/o topic-state)</td> <td>37.44</td> <td>33.48</td> <td>31.98</td> </tr> <tr> <td>Our full model</td> <td>38.49</td> <td>34.99</td> <td>33.41</td> </tr> </tbody></table>
Table 6
table_6
P19-1240
8
acl2019
5.3 Further Discussions . Ablation Study. We compare the results of our full model and its four ablated variants to analyze the relative contributions of topics on different components. The results in Table 6 indicate the competitive effect of topics on decoder attention and that on hidden states, but combining them both help our full model achieve the best performance. We also observe that pre-trained topics only bring a small boost, indicated by the close scores yielded by our model (separate train) and SEQ2SEQ-COPY. This suggests that the joint training is crucial to better absorb latent topics.
[2, 2, 2, 1, 1, 2]
['5.3 Further Discussions .', 'Ablation Study.', 'We compare the results of our full model and its four ablated variants to analyze the relative contributions of topics on different components.', 'The results in Table 6 indicate the competitive effect of topics on decoder attention and that on hidden states, but combining them both help our full model achieve the best performance.', 'We also observe that pre-trained topics only bring a small boost, indicated by the close scores yielded by our model (separate train) and SEQ2SEQ-COPY.', 'This suggests that the joint training is crucial to better absorb latent topics.']
[None, None, None, ['Our full model'], ['Our model (separate train)', 'SEQ2SEQ-COPY'], None]
1
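F1@1 and F1@3 in the keyphrase records score only the top-k predictions: precision over the k kept phrases, recall over all gold phrases, and their harmonic mean. The sketch below uses exact string matching; published setups typically compare stemmed phrases, so this is only an approximation, and the example phrases are made up.

def f1_at_k(predicted, gold, k):
    # Keep only the k highest-ranked predictions; require exact phrase matches.
    top_k = predicted[:k]
    hits = sum(phrase in set(gold) for phrase in top_k)
    precision = hits / len(top_k) if top_k else 0.0
    recall = hits / len(gold) if gold else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

predicted = ["deep learning", "keyphrase generation", "topic model"]  # ranked predictions
gold = ["keyphrase generation", "neural topic model"]
print(f1_at_k(predicted, gold, 1), f1_at_k(predicted, gold, 3))  # 0.0 0.4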
P19-1241table_3
Performance Comparisons on the SHR Dataset
1
[['DAD Model'], ['SDA Model'], ['Word-CNN'], ['LSTM'], ['RNN'], ['CL-CNN'], ['fastText-BOT'], ['HATT'], ['Bi-LSTM'], ['RCNN'], ['CNN-LSTM'], ['Attentional Bi-LSTM'], ['A-CNN-LSTM'], ['openAI-Transformer'], ['SMLM']]
1
[['Accuracy'], ['Precision'], ['Recall'], ['F1']]
[['0.91', '0.9', '0.91', '0.9'], ['0.9', '0.87', '0.9', '0.88'], ['0.92', '0.68', '0.95', '0.79'], ['0.92', '0.7', '0.98', '0.81'], ['0.93', '0.86', '0.95', '0.9'], ['0.92', '0.7', '0.91', '0.79'], ['0.87', '0.7', '0.8', '0.74'], ['0.93', '0.93', '0.95', '0.93'], ['0.93', '0.86', '0.98', '0.91'], ['0.9', '0.86', '0.9', '0.87'], ['0.94', '0.93', '0.94', '0.94'], ['0.93', '0.9', '0.98', '0.93'], ['0.94', '0.92', '0.98', '0.94'], ['0.95', '0.94', '0.96', '0.94'], ['0.96', '0.95', '0.97', '0.96']]
column
['Accuracy', 'Precision', 'Recall', 'F1']
['SMLM']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy</th> <th>Precision</th> <th>Recall</th> <th>F1</th> </tr> </thead> <tbody> <tr> <td>DAD Model</td> <td>0.91</td> <td>0.9</td> <td>0.91</td> <td>0.9</td> </tr> <tr> <td>SDA Model</td> <td>0.9</td> <td>0.87</td> <td>0.9</td> <td>0.88</td> </tr> <tr> <td>Word-CNN</td> <td>0.92</td> <td>0.68</td> <td>0.95</td> <td>0.79</td> </tr> <tr> <td>LSTM</td> <td>0.92</td> <td>0.7</td> <td>0.98</td> <td>0.81</td> </tr> <tr> <td>RNN</td> <td>0.93</td> <td>0.86</td> <td>0.95</td> <td>0.9</td> </tr> <tr> <td>CL-CNN</td> <td>0.92</td> <td>0.7</td> <td>0.91</td> <td>0.79</td> </tr> <tr> <td>fastText-BOT</td> <td>0.87</td> <td>0.7</td> <td>0.8</td> <td>0.74</td> </tr> <tr> <td>HATT</td> <td>0.93</td> <td>0.93</td> <td>0.95</td> <td>0.93</td> </tr> <tr> <td>Bi-LSTM</td> <td>0.93</td> <td>0.86</td> <td>0.98</td> <td>0.91</td> </tr> <tr> <td>RCNN</td> <td>0.9</td> <td>0.86</td> <td>0.9</td> <td>0.87</td> </tr> <tr> <td>CNN-LSTM</td> <td>0.94</td> <td>0.93</td> <td>0.94</td> <td>0.94</td> </tr> <tr> <td>Attentional Bi-LSTM</td> <td>0.93</td> <td>0.9</td> <td>0.98</td> <td>0.93</td> </tr> <tr> <td>A-CNN-LSTM</td> <td>0.94</td> <td>0.92</td> <td>0.98</td> <td>0.94</td> </tr> <tr> <td>openAI-Transformer</td> <td>0.95</td> <td>0.94</td> <td>0.96</td> <td>0.94</td> </tr> <tr> <td>SMLM</td> <td>0.96</td> <td>0.95</td> <td>0.97</td> <td>0.96</td> </tr> </tbody></table>
Table 3
table_3
P19-1241
6
acl2019
6.1 Performance . Table 3 describes the performance of the baseline classifiers as well as the deep learning models based on four evaluation metrics. The Social Media Language Model outperforms all baseline models, including RNNs, LSTMs, CNNs, and the linear DAD and SDA models. The A-CNN-LSTM and the Hierarchical Attention Model has a high recall due to its ability to better capture long term dependencies. The attention mechanism allows the model to retain some important hidden information when the sentences are quite long.
[2, 2, 1, 1, 2]
['6.1 Performance .', 'Table 3 describes the performance of the baseline classifiers as well as the deep learning models based on four evaluation metrics.', 'The Social Media Language Model outperforms all baseline models, including RNNs, LSTMs, CNNs, and the linear DAD and SDA models.', 'The A-CNN-LSTM and the Hierarchical Attention Model has a high recall due to its ability to better capture long term dependencies.', 'The attention mechanism allows the model to retain some important hidden information when the sentences are quite long.']
[None, None, ['SMLM', 'RNN', 'LSTM', 'DAD Model', 'SDA Model'], ['A-CNN-LSTM', 'HATT', 'Recall'], None]
1
P19-1244table_5
Results of different claim verification models on FEVER dataset (Dev set). The columns correspond to the predicted label accuracy, the evidence precision, recall, F1 score, and the FEVER score.
1
[['Fever-base'], ['NSMN'], ['HAN-nli'], ['HAN-nli*'], ['HAN*']]
1
[['Acc.'], ['Prec.'], ['Rec.'], ['F1'], ['FEVER']]
[['0.521', '-', '-', '-', '0.326'], ['0.697', '0.286', '0.87', '0.431', '0.665'], ['0.642', '0.34', '0.484', '0.4', '0.464'], ['0.72', '0.447', '0.536', '0.488', '0.571'], ['0.475', '0.356', '0.471', '0.406', '0.365']]
column
['Acc.', 'Prec.', 'Rec.', 'F1', 'FEVER']
['HAN-nli', 'HAN-nli*', 'HAN*']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Acc.</th> <th>Prec.</th> <th>Rec.</th> <th>F1</th> <th>FEVER</th> </tr> </thead> <tbody> <tr> <td>Fever-base</td> <td>0.521</td> <td>-</td> <td>-</td> <td>-</td> <td>0.326</td> </tr> <tr> <td>NSMN</td> <td>0.697</td> <td>0.286</td> <td>0.87</td> <td>0.431</td> <td>0.665</td> </tr> <tr> <td>HAN-nli</td> <td>0.642</td> <td>0.34</td> <td>0.484</td> <td>0.4</td> <td>0.464</td> </tr> <tr> <td>HAN-nli*</td> <td>0.72</td> <td>0.447</td> <td>0.536</td> <td>0.488</td> <td>0.571</td> </tr> <tr> <td>HAN*</td> <td>0.475</td> <td>0.356</td> <td>0.471</td> <td>0.406</td> <td>0.365</td> </tr> </tbody></table>
Table 5
table_5
P19-1244
8
acl2019
Table 5 shows that HAN-nli* is much better than the two baselines in terms of label accuracy and evidence F1 score. There are two reasons: 1) apart from the retrieval module, our model optimizes all the parameters end-to-end, while the two pipeline systems may result in error propagation; and 2) our evidence embedding method considers more complex facets such as topical coherence and semantic entailment, while NSMN just focuses on similarity matching between the claim and each sentence. HAN-nli seem already a decent model given its much better performance than Fever-base. This confirms the advantage of our evidence embedding method on the FEVER task.
[1, 2, 1, 2]
['Table 5 shows that HAN-nli* is much better than the two baselines in terms of label accuracy and evidence F1 score.', 'There are two reasons: 1) apart from the retrieval module, our model optimizes all the parameters end-to-end, while the two pipeline systems may result in error propagation; and 2) our evidence embedding method considers more complex facets such as topical coherence and semantic entailment, while NSMN just focuses on similarity matching between the claim and each sentence.', 'HAN-nli seem already a decent model given its much better performance than Fever-base.', 'This confirms the advantage of our evidence embedding method on the FEVER task.']
[['HAN-nli*', 'Fever-base', 'NSMN', 'Acc.', 'F1'], None, ['HAN-nli', 'Fever-base'], ['FEVER']]
1
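The FEVER column in the record above is the FEVER score, which credits a claim only when the predicted label is correct and, for supported or refuted claims, the predicted evidence covers at least one complete gold evidence group. A small sketch of that per-claim check follows; the argument names and example values are illustrative, and the official scorer's cap on the number of predicted evidence sentences is omitted.

def fever_point(predicted_label, gold_label, predicted_evidence, gold_evidence_groups):
    # Evidence items are (page, sentence_id) pairs; groups are alternative complete gold sets.
    if predicted_label != gold_label:
        return 0
    if gold_label == "NOT ENOUGH INFO":
        return 1  # no evidence requirement for NEI claims
    predicted = set(predicted_evidence)
    return int(any(set(group) <= predicted for group in gold_evidence_groups))

print(fever_point(
    "SUPPORTS", "SUPPORTS",
    [("Page_A", 0), ("Page_B", 3)],
    [[("Page_A", 0)], [("Page_C", 2), ("Page_C", 4)]],
))  # 1: the first gold evidence group is fully covered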
P19-1249table_5
Accuracy of (top) the state of the art gender prediction approaches on their respective datasets and transfer performance to celebrities, and (bottom) our baseline deep learning approach, with and without retraining on the PAN datasets.
1
[['alvarezcamona15 (2015)'], ['nissim16 (2016)'], ['nissim17 (2017)'], ['danehsvar18 (2018)'], ['CNN (Celeb)'], ['CNN (Celeb + PAN15)'], ['CNN (Celeb + PAN16)'], ['CNN (Celeb + PAN17)'], ['CNN (Celeb + PAN18)']]
2
[['Model', 'PAN15'], ['Model', 'PAN16'], ['Model', 'PAN17'], ['Model', 'PAN18'], ['Model', 'Celeb']]
[['0.859', '-', '-', '-', '0.723'], ['-', '0.641', '-', '-', '0.74'], ['-', '-', '0.823', '-', '0.855'], ['-', '-', '-', '0.822', '0.817'], ['0.747', '0.59', '0.747', '0.756', '0.861'], ['0.793', '-', '-', '-', '-'], ['-', '0.69', '-', '-', '-'], ['-', '-', '0.768', '-', '-'], ['-', '-', '-', '0.759', '-']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy']
['alvarezcamona15 (2015)', 'nissim16 (2016)', 'PAN15', 'PAN16']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Model || PAN15</th> <th>Model || PAN16</th> <th>Model || PAN17</th> <th>Model || PAN18</th> <th>Model || Celeb</th> </tr> </thead> <tbody> <tr> <td>alvarezcamona15 (2015)</td> <td>0.859</td> <td>-</td> <td>-</td> <td>-</td> <td>0.723</td> </tr> <tr> <td>nissim16 (2016)</td> <td>-</td> <td>0.641</td> <td>-</td> <td>-</td> <td>0.74</td> </tr> <tr> <td>nissim17 (2017)</td> <td>-</td> <td>-</td> <td>0.823</td> <td>-</td> <td>0.855</td> </tr> <tr> <td>danehsvar18 (2018)</td> <td>-</td> <td>-</td> <td>-</td> <td>0.822</td> <td>0.817</td> </tr> <tr> <td>CNN (Celeb)</td> <td>0.747</td> <td>0.59</td> <td>0.747</td> <td>0.756</td> <td>0.861</td> </tr> <tr> <td>CNN (Celeb + PAN15)</td> <td>0.793</td> <td>-</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>CNN (Celeb + PAN16)</td> <td>-</td> <td>0.69</td> <td>-</td> <td>-</td> <td>-</td> </tr> <tr> <td>CNN (Celeb + PAN17)</td> <td>-</td> <td>-</td> <td>0.768</td> <td>-</td> <td>-</td> </tr> <tr> <td>CNN (Celeb + PAN18)</td> <td>-</td> <td>-</td> <td>-</td> <td>0.759</td> <td>-</td> </tr> </tbody></table>
Table 5
table_5
P19-1249
5
acl2019
Table 5 shows all models' transfer performance between populations on gender. In general, all models generalize well to the respectively unseen datasets but perform best on the data they have been specifically trained for. The largest difference can be observed on the sub-1,000 author dataset PAN15, where the model of Álvarez-Carmona et al. (2015) suffers a significant performance loss, and PAN16, where the model of Busger op Vollenbroek et al. (2016) performs notably better on the celebrity data. This was a surprise to us that may be explained by the longer samples of writing per profile in our corpus. This hypothesis is also supported by the large increase in accuracy of the baseline model after retraining for two epochs with the PAN15 and PAN16 training datasets, respectively. The occupation model achieved a 0.7111 accuracy.
[1, 1, 1, 2, 1, 2]
["Table 5 shows all models' transfer performance between populations on gender.", 'In general, all models generalize well to the respectively unseen datasets but perform best on the data they have been specifically trained for.', 'The largest difference can be observed on the sub-1,000 author dataset PAN15, where the model of Álvarez-Carmona et al. (2015) suffers a significant performance loss, and PAN16, where the model of Busger op Vollenbroek et al. (2016) performs notably better on the celebrity data.', 'This was a surprise to us that may be explained by the longer samples of writing per profile in our corpus.', 'This hypothesis is also supported by the large increase in accuracy of the baseline model after retraining for two epochs with the PAN15 and PAN16 training datasets, respectively.', 'The occupation model achieved a 0.7111 accuracy.']
[None, None, ['PAN15', 'PAN16', 'nissim16 (2016)'], None, ['PAN15', 'PAN16'], None]
1
P19-1251table_3
The overall classification performance of the baseline methods and our approach. All of the improvements of our approach (ours) over PAQI are significant with a paired t-test at a 99% significance level.
4
[['Dataset', 'ID', 'Method', 'BOW'], ['Dataset', 'ID', 'Method', 'PAQI'], ['Dataset', 'ID', 'Method', 'Ours'], ['Dataset', 'IN', 'Method', 'BOW'], ['Dataset', 'IN', 'Method', 'PAQI'], ['Dataset', 'IN', 'Method', 'Ours'], ['Dataset', 'IL', 'Method', 'BOW'], ['Dataset', 'IL', 'Method', 'PAQI'], ['Dataset', 'IL', 'Method', 'Ours'], ['Dataset', 'OH', 'Method', 'BOW'], ['Dataset', 'OH', 'Method', 'PAQI'], ['Dataset', 'OH', 'Method', 'Ours'], ['Dataset', 'CA', 'Method', 'BOW'], ['Dataset', 'CA', 'Method', 'PAQI'], ['Dataset', 'CA', 'Method', 'Ours']]
2
[['MicroAverage', 'Prec.'], ['MicroAverage', 'Rec.'], ['MicroAverage', 'F1'], ['MacroAverage', 'Prec'], ['MacroAverage', 'Rec.'], ['MacroAverage', 'F1']]
[['0.807', '0.829', '0.809', '0.687', '0.619', '0.631'], ['0.816', '0.728', '0.757', '0.611', '0.677', '0.617'], ['0.863', '0.811', '0.828', '0.691', '0.776', '0.714'], ['0.792', '0.786', '0.786', '0.508', '0.508', '0.501'], ['0.847', '0.682', '0.737', '0.567', '0.649', '0.548'], ['0.855', '0.849', '0.852', '0.64', '0.652', '0.645'], ['0.775', '0.802', '0.791', '0.506', '0.499', '0.484'], ['0.834', '0.686', '0.737', '0.58', '0.666', '0.566'], ['0.844', '0.847', '0.845', '0.646', '0.638', '0.64'], ['0.744', '0.78', '0.76', '0.515', '0.512', '0.51'], ['0.8', '0.683', '0.724', '0.569', '0.622', '0.562'], ['0.813', '0.813', '0.815', '0.629', '0.627', '0.627'], ['0.647', '0.683', '0.66', '0.495', '0.488', '0.485'], ['0.826', '0.725', '0.745', '0.7', '0.772', '0.694'], ['0.83', '0.786', '0.798', '0.728', '0.786', '0.742']]
column
['Prec.', 'Rec.', 'F1', 'Prec', 'Rec.', 'F1']
['Ours']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>MicroAverage || Prec.</th> <th>MicroAverage || Rec.</th> <th>MicroAverage || F1</th> <th>MacroAverage || Prec</th> <th>MacroAverage || Rec.</th> <th>MacroAverage || F1</th> </tr> </thead> <tbody> <tr> <td>Dataset || ID || Method || BOW</td> <td>0.807</td> <td>0.829</td> <td>0.809</td> <td>0.687</td> <td>0.619</td> <td>0.631</td> </tr> <tr> <td>Dataset || ID || Method || PAQI</td> <td>0.816</td> <td>0.728</td> <td>0.757</td> <td>0.611</td> <td>0.677</td> <td>0.617</td> </tr> <tr> <td>Dataset || ID || Method || Ours</td> <td>0.863</td> <td>0.811</td> <td>0.828</td> <td>0.691</td> <td>0.776</td> <td>0.714</td> </tr> <tr> <td>Dataset || IN || Method || BOW</td> <td>0.792</td> <td>0.786</td> <td>0.786</td> <td>0.508</td> <td>0.508</td> <td>0.501</td> </tr> <tr> <td>Dataset || IN || Method || PAQI</td> <td>0.847</td> <td>0.682</td> <td>0.737</td> <td>0.567</td> <td>0.649</td> <td>0.548</td> </tr> <tr> <td>Dataset || IN || Method || Ours</td> <td>0.855</td> <td>0.849</td> <td>0.852</td> <td>0.64</td> <td>0.652</td> <td>0.645</td> </tr> <tr> <td>Dataset || IL || Method || BOW</td> <td>0.775</td> <td>0.802</td> <td>0.791</td> <td>0.506</td> <td>0.499</td> <td>0.484</td> </tr> <tr> <td>Dataset || IL || Method || PAQI</td> <td>0.834</td> <td>0.686</td> <td>0.737</td> <td>0.58</td> <td>0.666</td> <td>0.566</td> </tr> <tr> <td>Dataset || IL || Method || Ours</td> <td>0.844</td> <td>0.847</td> <td>0.845</td> <td>0.646</td> <td>0.638</td> <td>0.64</td> </tr> <tr> <td>Dataset || OH || Method || BOW</td> <td>0.744</td> <td>0.78</td> <td>0.76</td> <td>0.515</td> <td>0.512</td> <td>0.51</td> </tr> <tr> <td>Dataset || OH || Method || PAQI</td> <td>0.8</td> <td>0.683</td> <td>0.724</td> <td>0.569</td> <td>0.622</td> <td>0.562</td> </tr> <tr> <td>Dataset || OH || Method || Ours</td> <td>0.813</td> <td>0.813</td> <td>0.815</td> <td>0.629</td> <td>0.627</td> <td>0.627</td> </tr> <tr> <td>Dataset || CA || Method || BOW</td> <td>0.647</td> <td>0.683</td> <td>0.66</td> <td>0.495</td> <td>0.488</td> <td>0.485</td> </tr> <tr> <td>Dataset || CA || Method || PAQI</td> <td>0.826</td> <td>0.725</td> <td>0.745</td> <td>0.7</td> <td>0.772</td> <td>0.694</td> </tr> <tr> <td>Dataset || CA || Method || Ours</td> <td>0.83</td> <td>0.786</td> <td>0.798</td> <td>0.728</td> <td>0.786</td> <td>0.742</td> </tr> </tbody></table>
Table 3
table_3
P19-1251
5
acl2019
3.2 Experimental Results. For evaluation, micro and macro-F1 scores are selected as the evaluation metrics. Table 3 demonstrates the performance of the three methods. Micro-F1 scores are generally better than macro-F1 scores because the trivial cases like the class of good air quality are the majority of datasets with higher weights in micro-F1 scores. PAQI is better than BOW although BOW uses the knowledge of social media. It is because BOW features involve all irrelevant words so that the actual essential knowledge cannot be recognized. Our approach significantly outperforms all baseline methods in almost all metrics. More precisely, our approach improves the air quality prediction over PAQI from 6.92% to 17.71% in macro-F1 scores. The results demonstrate that social media and NLP can benefit air quality prediction.
[2, 2, 1, 1, 1, 2, 1, 1, 2]
['3.2 Experimental Results.', 'For evaluation, micro and macro-F1 scores are selected as the evaluation metrics.', 'Table 3 demonstrates the performance of the three methods.', 'Micro-F1 scores are generally better than macro-F1 scores because the trivial cases like the class of good air quality are the majority of datasets with higher weights in micro-F1 scores.', 'PAQI is better than BOW although BOW uses the knowledge of social media.', 'It is because BOW features involve all irrelevant words so that the actual essential knowledge cannot be recognized.', 'Our approach significantly outperforms all baseline methods in almost all metrics.', 'More precisely, our approach improves the air quality prediction over PAQI from 6.92% to 17.71% in macro-F1 scores.', 'The results demonstrate that social media and NLP can benefit air quality prediction.']
[None, None, None, ['MicroAverage', 'F1', 'MacroAverage'], ['PAQI', 'BOW'], ['BOW'], ['Ours'], ['PAQI', 'MacroAverage', 'F1'], None]
1
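The micro/macro contrast discussed in the record above can be made concrete from per-class counts: micro-averaging pools true positives, false positives, and false negatives across classes, so a dominant class (such as good air quality here) lifts the score, while macro-averaging takes an unweighted mean of per-class F1. A minimal sketch with invented counts for a skewed three-class problem follows.

def f1_from_counts(tp, fp, fn):
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Invented (tp, fp, fn) counts: one dominant class and two rare classes.
counts = {"good": (90, 5, 5), "moderate": (6, 6, 4), "unhealthy": (2, 3, 5)}

macro_f1 = sum(f1_from_counts(*c) for c in counts.values()) / len(counts)
tp = sum(c[0] for c in counts.values())
fp = sum(c[1] for c in counts.values())
fn = sum(c[2] for c in counts.values())
micro_f1 = f1_from_counts(tp, fp, fn)
print(round(micro_f1, 3), round(macro_f1, 3))  # 0.875 0.609: the frequent class lifts micro-F1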
P19-1254table_3
Accuracy at choosing the correct reference string for a mention, discriminating against 10, 50 and 100 random distractors. We break out results for the first mention of an entity (requiring novelty to produce an appropriate name in the context) and subsequent references (typically pronouns, nominal references, or shorter forms of names). We compare the effect of sub-word modelling and providing longer contexts.
2
[['Model', 'Word-Based'], ['Model', 'BPE'], ['Model', 'Character-level'], ['Model', 'No story'], ['Model', 'Left story context'], ['Model', 'Full story']]
2
[['First Mentions', 'Rank 10'], ['First Mentions', 'Rank 50'], ['First Mentions', 'Rank 100'], ['Subsequent Mentions', 'Rank 10'], ['Subsequent Mentions', 'Rank 50'], ['Subsequent Mentions', 'Rank 100']]
[['42.3', '25.4', '17.2', '48.1', '38.4', '28.8'], ['48.1', '20.3', '25.5', '52.5', '50.7', '48.8'], ['64.2', '51', '35.6', '66.1', '55', '51.2'], ['50.3', '40', '26.7', '54.7', '51.3', '30.4'], ['59.1', '49.6', '33.3', '62.9', '53.2', '49.4'], ['64.2', '51', '35.6', '66.1', '55', '51.2']]
column
['Accuracy', 'Accuracy', 'Accuracy', 'Accuracy', 'Accuracy', 'Accuracy']
['Full story']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>First Mentions || Rank 10</th> <th>First Mentions || Rank 50</th> <th>First Mentions || Rank 100</th> <th>Subsequent Mentions || Rank 10</th> <th>Subsequent Mentions || Rank 50</th> <th>Subsequent Mentions || Rank 100</th> </tr> </thead> <tbody> <tr> <td>Model || Word-Based</td> <td>42.3</td> <td>25.4</td> <td>17.2</td> <td>48.1</td> <td>38.4</td> <td>28.8</td> </tr> <tr> <td>Model || BPE</td> <td>48.1</td> <td>20.3</td> <td>25.5</td> <td>52.5</td> <td>50.7</td> <td>48.8</td> </tr> <tr> <td>Model || Character-level</td> <td>64.2</td> <td>51</td> <td>35.6</td> <td>66.1</td> <td>55</td> <td>51.2</td> </tr> <tr> <td>Model || No story</td> <td>50.3</td> <td>40</td> <td>26.7</td> <td>54.7</td> <td>51.3</td> <td>30.4</td> </tr> <tr> <td>Model || Left story context</td> <td>59.1</td> <td>49.6</td> <td>33.3</td> <td>62.9</td> <td>53.2</td> <td>49.4</td> </tr> <tr> <td>Model || Full story</td> <td>64.2</td> <td>51</td> <td>35.6</td> <td>66.1</td> <td>55</td> <td>51.2</td> </tr> </tbody></table>
Table 3
table_3
P19-1254
8
acl2019
We compare three models' ability to fill in entities (using coreference-based entity anonymization): a model that does not receive the story, a model that uses only leftward context (as in Clark et al. (2018)), and a model with access to the full story. We show in Table 3 that having access to the full story provides the best performance. Having no access to any of the story decreases ranking accuracy, even though the model still receives the local context window of the entity as input. The left story context model performs better, but looking at the complete story provides additional gains. We note that full-story context can only be provided in a multi-stage generation approach.
[2, 1, 1, 1, 2]
['We compare three models\' ability to fill in entities (using coreference-based entity anonymization): a model that does not receive the story, a model that uses only leftward context (as in Clark et al. (2018)), and a model with access to the full story.', 'We show in Table 3 that having access to the full story provides the best performance.', 'Having no access to any of the story decreases ranking accuracy, even though the model still receives the local context window of the entity as input.', 'The left story context model performs better, but looking at the complete story provides additional gains.', 'We note that full-story context can only be provided in a multi-stage generation approach.']
[None, ['Full story'], ['No story'], ['Left story context'], ['Full story']]
1
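The caption and description above report accuracy at ranking the correct reference string against 10, 50, or 100 random distractors. A minimal Python sketch of that evaluation protocol; the `score` function, the candidate pool, and the tie-handling rule are assumptions for illustration, not details taken from the paper:

```python
import random

def rank_accuracy(examples, score, num_distractors, candidate_pool, seed=0):
    """Fraction of mentions where the gold reference string outscores
    `num_distractors` randomly sampled distractor strings (ties count as errors).

    `examples` is a list of (context, gold_reference) pairs and `score(context, ref)`
    is any model-specific scoring function -- both are placeholders here."""
    rng = random.Random(seed)
    correct = 0
    for context, gold in examples:
        distractors = rng.sample([c for c in candidate_pool if c != gold], num_distractors)
        gold_score = score(context, gold)
        if all(gold_score > score(context, d) for d in distractors):
            correct += 1
    return correct / len(examples)
```

Running the same loop with 10, 50, and 100 distractors yields the three accuracy columns of the table; accuracy naturally drops as the distractor set grows.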
P19-1256table_2
Automatic evaluation results of different NLU models on both training and test sets
1
[['Original data'], ['NLU refined data'], ['w/o self-training']]
1
[['Train Err (%)'], ['Test Err (%)']]
[['35.5', '37.59'], ['16.31', '14.26'], ['25.14', '22.69']]
column
['Train Err (%)', 'Test Err (%)']
['NLU refined data']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Train Err (%)</th> <th>Test Err (%)</th> </tr> </thead> <tbody> <tr> <td>Original data</td> <td>35.5</td> <td>37.59</td> </tr> <tr> <td>NLU refined data</td> <td>16.31</td> <td>14.26</td> </tr> <tr> <td>w/o self-training</td> <td>25.14</td> <td>22.69</td> </tr> </tbody></table>
Table 2
table_2
P19-1256
4
acl2019
3.2 Main Results . NLU Results. One challenge in the E2E dataset is the need to account for the noise in the corpus as some of the MR-text pairs are not semantically equivalent due to the data collection process (Dusek et al., 2018). We examine the performance of the NLU module by comparing noise reduction of the reconstructed MR-text pairs with the original ones in both training and test sets. Table 2 shows the automatic results. Applying our NLU model with iterative data refinement, the error rates of refined MR-text pairs yield a 23.33% absolute error reduction on the test set.
[2, 2, 2, 2, 1, 1]
['3.2 Main Results .', 'NLU Results.', 'One challenge in the E2E dataset is the need to account for the noise in the corpus as some of the MR-text pairs are not semantically equivalent due to the data collection process (Dusek et al., 2018).', 'We examine the performance of the NLU module by comparing noise reduction of the reconstructed MR-text pairs with the original ones in both training and test sets.', 'Table 2 shows the automatic results.', 'Applying our NLU model with iterative data refinement, the error rates of refined MR-text pairs yield a 23.33% absolute error reduction on the test set.']
[None, None, None, None, None, ['NLU refined data', 'Original data', 'Test Err (%)']]
1
P19-1256table_3
Human evaluation results for NLU on test set (inter-annotator agreement: Fleiss’ kappa = 0.855)
1
[['Original data'], ['NLU refined data'], ['w/o self-training']]
1
[['E (%)'], ['M (%)'], ['A (%)'], ['C (%)']]
[['71.93', '0', '24.13', '3.95'], ['88.62', '5.45', '2.48', '3.47'], ['73.53', '13.23', '8.33', '4.91']]
column
['E (%)', 'M (%)', 'A (%)', 'C (%)']
['NLU refined data']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>E (%)</th> <th>M (%)</th> <th>A (%)</th> <th>C (%)</th> </tr> </thead> <tbody> <tr> <td>Original data</td> <td>71.93</td> <td>0</td> <td>24.13</td> <td>3.95</td> </tr> <tr> <td>NLU refined data</td> <td>88.62</td> <td>5.45</td> <td>2.48</td> <td>3.47</td> </tr> <tr> <td>w/o self-training</td> <td>73.53</td> <td>13.23</td> <td>8.33</td> <td>4.91</td> </tr> </tbody></table>
Table 3
table_3
P19-1256
4
acl2019
Human evaluation in Table 3 shows that our proposed method achieves a 16.69% improvement on information equivalence between MR-text pairs. These results confirm the effectiveness of our method in reducing the unaligned data noise, and the large improvement (i.e., 15.09%) on exact match when applying the self-training algorithm suggests the importance of iterative data refinement.
[1, 1]
['Human evaluation in Table 3 shows that our proposed method achieves a 16.69% improvement on information equivalence between MR-text pairs.', 'These results confirm the effectiveness of our method in reducing the unaligned data noise, and the large improvement (i.e., 15.09%) on exact match when applying the self-training algorithm suggests the importance of iterative data refinement.']
[['NLU refined data', 'Original data'], ['NLU refined data', 'w/o self-training', 'Original data']]
1
P19-1257table_2
Quality evaluation results of the testing set. Flue., Rele. and Info. denotes fluency, relevance, and informativeness, respectively.
2
[['Evaluation', 'Score'], ['Evaluation', 'Pearson']]
1
[['Flue.'], ['Rele.'], ['Info.'], ['Overall']]
[['9.2', '6.7', '6.4', '7.6'], ['0.74', '0.76', '0.66', '0.68']]
column
['Flue.', 'Rele.', 'Info.', 'Overall']
['Overall']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Flue.</th> <th>Rele.</th> <th>Info.</th> <th>Overall</th> </tr> </thead> <tbody> <tr> <td>Evaluation || Score</td> <td>9.2</td> <td>6.7</td> <td>6.4</td> <td>7.6</td> </tr> <tr> <td>Evaluation || Pearson</td> <td>0.74</td> <td>0.76</td> <td>0.66</td> <td>0.68</td> </tr> </tbody></table>
Table 2
table_2
P19-1257
2
acl2019
Data Analysis . High-quality testing set is necessary for faithful automatic evaluation. Therefore, we randomly selected 200 samples from the testing set for quality evaluation. Three annotators with linguistic background are required to score comments and readers can refer to Section 4.3 for the evaluation details. Table 2 shows the evaluation results. The average score for overall quality is 7.6, showing that the testing set is satisfactory.
[2, 2, 2, 2, 1, 1]
['Data Analysis .', 'High-quality testing set is necessary for faithful automatic evaluation.', 'Therefore, we randomly selected 200 samples from the testing set for quality evaluation.', 'Three annotators with linguistic background are required to score comments and readers can refer to Section 4.3 for the evaluation details.', 'Table 2 shows the evaluation results.', 'The average score for overall quality is 7.6, showing that the testing set is satisfactory.']
[None, None, None, None, None, ['Overall']]
1
P19-1266table_4
Results summary over the 510 comparisons of Reimers and Gurevych (2017a).
1
[['Case A'], ['Case B'], ['Case C']]
1
[['% of comparisons'], ['Avg. e'], ['e std']]
[['0.98%', '0', '0'], ['48.04%', '0.072', '0.108'], ['50.98%', '0.202', '0.143']]
column
['% of comparisons', 'Avg. e', 'e std']
['Case A']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>% of comparisons</th> <th>Avg. e</th> <th>e std</th> </tr> </thead> <tbody> <tr> <td>Case A</td> <td>0.98%</td> <td>0</td> <td>0</td> </tr> <tr> <td>Case B</td> <td>48.04%</td> <td>0.072</td> <td>0.108</td> </tr> <tr> <td>Case C</td> <td>50.98%</td> <td>0.202</td> <td>0.143</td> </tr> </tbody></table>
Table 4
table_4
P19-1266
8
acl2019
Results: Summary. We now turn to a summary of our analysis across the 510 comparisons of Reimers and Gurevych (2017a). Table 4 presents the percentage of comparisons that fall into each category, along with the average and std of the e value of ASO for each case (all ASO results are significant with p <= 0.01). Figure 1 presents the histogram of these e values in each case. The number of comparisons that fall into case A is only 0.98%, indicating that it is rare that a decision about stochastic dominance of one algorithm can be reached when comparing DNNs. We consider this a strong indication that the Mann Whitney U test is not suitable for DNN comparison as it has very little statistical power (criterion (b)).
[2, 2, 1, 2, 1, 2]
['Results: Summary.', 'We now turn to a summary of our analysis across the 510 comparisons of Reimers and Gurevych (2017a).', 'Table 4 presents the percentage of comparisons that fall into each category, along with the average and std of the e value of ASO for each case (all ASO results are significant with p <= 0.01).', 'Figure 1 presents the histogram of these e values in each case.', 'The number of comparisons that fall into case A is only 0.98%, indicating that it is rare that a decision about stochastic dominance of one algorithm can be reached when comparing DNNs.', 'We consider this a strong indication that the Mann Whitney U test is not suitable for DNN comparison as it has very little statistical power (criterion (b)).']
[None, None, ['Avg. e'], None, ['% of comparisons', 'Case A'], None]
1
P19-1276table_4
Overall performance of schema matching.
2
[['Method', 'Nguyen et al. (2015)'], ['Method', 'Clustering'], ['Method', 'ODEE-F'], ['Method', 'ODEE-FE'], ['Method', 'ODEE-FER']]
2
[['Schema Matching (%)', 'P'], ['Schema Matching (%)', 'R'], ['Schema Matching (%)', 'F1']]
[['41.5', '53.4', '46.7'], ['41.2', '50.6', '45.4'], ['41.7', '53.2', '46.8'], ['42.4', '56.1', '48.3'], ['43.4', '58.3', '49.8']]
column
['P', 'R', 'F1']
['ODEE-F', 'ODEE-FE', 'ODEE-FER']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Schema Matching (%) || P</th> <th>Schema Matching (%) || R</th> <th>Schema Matching (%) || F1</th> </tr> </thead> <tbody> <tr> <td>Method || Nguyen et al. (2015)</td> <td>41.5</td> <td>53.4</td> <td>46.7</td> </tr> <tr> <td>Method || Clustering</td> <td>41.2</td> <td>50.6</td> <td>45.4</td> </tr> <tr> <td>Method || ODEE-F</td> <td>41.7</td> <td>53.2</td> <td>46.8</td> </tr> <tr> <td>Method || ODEE-FE</td> <td>42.4</td> <td>56.1</td> <td>48.3</td> </tr> <tr> <td>Method || ODEE-FER</td> <td>43.4</td> <td>58.3</td> <td>49.8</td> </tr> </tbody></table>
Table 4
table_4
P19-1276
7
acl2019
Schema Matching. Table 4 shows the overall performance of schema matching on GNBusiness-Test. From the table, we can see that ODEE-FER achieves the best F1 scores among all the methods. By comparing Nguyen et al. (2015) and ODEE-F (p = 0.01), we can see that using continuous contextual features gives better performance than discrete features. This demonstrates the advantages of continuous contextual features for alleviating the sparsity of discrete features in texts. We can also see from the result of Clustering that using only the contextual features is not sufficient for ODEE, while combining with our neural latent variable model in ODEE-F can achieve strong results (p = 6×10^−6). This shows that the neural latent variable model can better explain the observed data. These results demonstrate the effectiveness of our method in incorporating contextual features, latent event types and redundancy information. Among ODEE models, ODEE-FE gives a 2% gain in F1 score against ODEE-F, which shows that the latent event type modeling is beneficial and the slot distribution relies on the latent event type. Additionally, there is a 1% gain in F1 score by comparing ODEE-FER and ODEE-FE (p = 2×10^−6), which confirms that leveraging redundancy is also beneficial in exploring which slot an entity should be assigned.
[2, 1, 1, 1, 2, 1, 2, 2, 1, 1]
['Schema Matching.', 'Table 4 shows the overall performance of schema matching on GNBusiness-Test.', 'From the table, we can see that ODEE-FER achieves the best F1 scores among all the methods.', 'By comparing Nguyen et al. (2015) and ODEE-F (p = 0.01), we can see that using continuous contextual features gives better performance than discrete features.', 'This demonstrates the advantages of continuous contextual features for alleviating the sparsity of discrete features in texts.', 'We can also see from the result of Clustering that using only the contextual features is not sufficient for ODEE, while combining with our neural latent variable model in ODEE-F can achieve strong results (p = 6×10^−6).', 'This shows that the neural latent variable model can better explain the observed data.', 'These results demonstrate the effectiveness of our method in incorporating contextual features, latent event types and redundancy information.', 'Among ODEE models, ODEE-FE gives a 2% gain in F1 score against ODEE-F, which shows that the latent event type modeling is beneficial and the slot distribution relies on the latent event type.', 'Additionally, there is a 1% gain in F1 score by comparing ODEE-FER and ODEE-FE (p = 2×10^−6), which confirms that leveraging redundancy is also beneficial in exploring which slot an entity should be assigned.']
[None, None, ['ODEE-FER', 'F1'], ['Nguyen et al. (2015)', 'ODEE-F'], None, ['ODEE-F', 'ODEE-FE', 'ODEE-FER'], None, None, ['ODEE-F', 'F1'], ['F1', 'ODEE-FER', 'ODEE-FE']]
1
P19-1284table_3
SNLI results (accuracy).
2
[['Model', ' LSTM (Bowman et al. 2016)'], ['Model', ' DA (Parikh et al. 2016)'], ['Model', ' DA (reimplementation)'], ['Model', ' DA with HardKuma attention']]
1
[[' Dev'], [' Test']]
[[' –', '80.6'], [' –', '86.3'], ['86.9', '86.5'], ['86', '85.5']]
column
['accuracy', 'accuracy']
[' DA with HardKuma attention']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Dev</th> <th>Test</th> </tr> </thead> <tbody> <tr> <td>Model || LSTM (Bowman et al. 2016)</td> <td>–</td> <td>80.6</td> </tr> <tr> <td>Model || DA (Parikh et al. 2016)</td> <td>–</td> <td>86.3</td> </tr> <tr> <td>Model || DA (reimplementation)</td> <td>86.9</td> <td>86.5</td> </tr> <tr> <td>Model || DA with HardKuma attention</td> <td>86</td> <td>85.5</td> </tr> </tbody></table>
Table 3
table_3
P19-1284
7
acl2019
Results. With a target rate of 10%, the HardKuma model achieved 8.5% non-zero attention. Table 3 shows that, even with so many zeros in the attention matrices, it only does about 1% worse compared to the DA baseline. Figure 6 shows an example of HardKuma attention, with additional examples in Appendix B. We leave further explorations with HardKuma attention for future work.
[2, 1, 1, 2, 2]
['Results.', 'With a target rate of 10%, the HardKuma model achieved 8.5% non-zero attention.', 'Table 3 shows that, even with so many zeros in the attention matrices, it only does about 1% worse compared to the DA baseline.', 'Figure 6 shows an example of HardKuma attention, with additional examples in Appendix B.', 'We leave further explorations with HardKuma attention for future work.']
[None, [' DA with HardKuma attention'], [' DA with HardKuma attention', ' DA (reimplementation)'], None, None]
1
P19-1296table_1
Translation results for Chinese-English and English-German translation task. “†”: indicates statistically better than Transformer(Base/Big) (ρ < 0.01).
3
[['Model', 'Existing NMT Systems', 'EDR (Tu et al., 2017)'], ['Model', 'Existing NMT Systems', '(Kuang et al., 2018)'], ['Model', 'Our NMT Systems', 'Transformer(Base)'], ['Model', 'Our NMT Systems', '+lossmse'], ['Model', 'Our NMT Systems', '+lossmse + enhanced'], ['Model', 'Our NMT Systems', 'Transformer(Big)'], ['Model', 'Our NMT Systems', '+lossmse'], ['Model', 'Our NMT Systems', '+lossmse + enhanced']]
2
[['NIST', '3'], ['NIST', '4'], ['NIST', '5'], ['NIST', '6'], ['NIST', 'Avg'], ['WMT', '14']]
[['N/A', 'N/A', '33.73', '34.15', 'N/A', 'N/A'], ['38.02', '40.83', 'N/A', 'N/A', 'N/A', 'N/A'], ['45.57', '46.4', '46.11', '44.92', '45.75', '27.28'], ['46.71†', '47.23†', '47.12†', '45.78†', '46.71', '28.11†'], ['46.94†', '47.52†', '47.43†', '46.04†', '46.98', '28.38†'], ['46.73', '47.36', '47.15', '46.82', '47.01', '28.36'], ['47.43†', '47.96', '47.78', '47.39', '47.74', '28.71'], ['47.68†', '48.13†', '47.96†', '47.56†', '47.83', '28.92†']]
column
['BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU', 'BLEU']
['Our NMT Systems']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>NIST || 3</th> <th>NIST || 4</th> <th>NIST || 5</th> <th>NIST || 6</th> <th>NIST || Avg</th> <th>WMT || 14</th> </tr> </thead> <tbody> <tr> <td>Model || Existing NMT Systems || EDR (Tu et al., 2017)</td> <td>N/A</td> <td>N/A</td> <td>33.73</td> <td>34.15</td> <td>N/A</td> <td>N/A</td> </tr> <tr> <td>Model || Existing NMT Systems || (Kuang et al., 2018)</td> <td>38.02</td> <td>40.83</td> <td>N/A</td> <td>N/A</td> <td>N/A</td> <td>N/A</td> </tr> <tr> <td>Model || Our NMT Systems || Transformer(Base)</td> <td>45.57</td> <td>46.4</td> <td>46.11</td> <td>44.92</td> <td>45.75</td> <td>27.28</td> </tr> <tr> <td>Model || Our NMT Systems || +lossmse</td> <td>46.71†</td> <td>47.23†</td> <td>47.12†</td> <td>45.78†</td> <td>46.71</td> <td>28.11†</td> </tr> <tr> <td>Model || Our NMT Systems || +lossmse + enhanced</td> <td>46.94†</td> <td>47.52†</td> <td>47.43†</td> <td>46.04†</td> <td>46.98</td> <td>28.38†</td> </tr> <tr> <td>Model || Our NMT Systems || Transformer(Big)</td> <td>46.73</td> <td>47.36</td> <td>47.15</td> <td>46.82</td> <td>47.01</td> <td>28.36</td> </tr> <tr> <td>Model || Our NMT Systems || +lossmse</td> <td>47.43†</td> <td>47.96</td> <td>47.78</td> <td>47.39</td> <td>47.74</td> <td>28.71</td> </tr> <tr> <td>Model || Our NMT Systems || +lossmse + enhanced</td> <td>47.68†</td> <td>48.13†</td> <td>47.96†</td> <td>47.56†</td> <td>47.83</td> <td>28.92†</td> </tr> </tbody></table>
Table 1
table_1
P19-1296
4
acl2019
4.2 Performance . Table 1 shows the performances measured in terms of BLEU score. On the ZH-EN task, Transformer(Base) outperforms existing systems EDR (Tu et al., 2017) and DB (Kuang et al., 2018) by 11.5 and 6.5 BLEU points. With respect to BLEU scores, all the proposed models consistently outperform Transformer(Base) by 0.96 and 1.23 BLEU points. The big models (Row 7-8) also achieve similar improvement by 0.73 and 0.82 BLEU points on a larger parameters model. These findings suggest a sentence-level agreement between source-side and target-side is helpful for NMT. Further, using it to enhance the source representation is an effective way to improve the translation. In addition, the proposed methods gain similar improvements on the EN-DE task.
[2, 1, 1, 1, 1, 2, 2, 1]
['4.2 Performance .', 'Table 1 shows the performances measured in terms of BLEU score.', 'On the ZH-EN task, Transformer(Base) outperforms existing systems EDR (Tu et al., 2017) and DB (Kuang et al., 2018) by 11.5 and 6.5 BLEU points.', 'With respect to BLEU scores, all the proposed models consistently outperform Transformer(Base) by 0.96 and 1.23 BLEU points.', 'The big models (Row 7-8) also achieve similar improvement by 0.73 and 0.82 BLEU points on a larger parameters model.', 'These findings suggest a sentence-level agreement between source-side and target-side is helpful for NMT.', 'Further, using it to enhance the source representation is an effective way to improve the translation.', 'In addition, the proposed methods gain similar improvements on the EN-DE task.']
[None, None, ['EDR (Tu et al., 2017)', '(Kuang et al., 2018)'], ['Our NMT Systems', 'Transformer(Base)'], ['+lossmse', '+lossmse + enhanced'], None, None, None]
1
P19-1305table_3
Comparison on the cross-lingual ASSUM performances.
2
[['System', 'Transformerbpe'], ['System', 'Pipeline-TS'], ['System', 'Pipeline-ST'], ['System', 'Pseudo-Summary (Ayana et al., 2018)'], ['System', 'Pivot-based (Cheng et al., 2017)'], ['System', 'Pseudo-Chinese'], ['System', 'Teaching Generation'], ['System', 'Teaching Attention'], ['System', 'Teaching Generation+Attention']]
2
[['Gigaword', 'ROUGE-1'], ['Gigaword', 'ROUGE-2'], ['Gigaword', 'ROUGE-L'], ['DUC2004', 'ROUGE-1'], ['DUC2004', 'ROUGE-2'], ['DUC2004', 'ROUGE-L']]
[['38.1', '19.1', '35.2', '31.2', '10.7', '27.1'], ['25.8', '9.7', '23.6', '23.7', '6.8', '20.9'], ['22', '7', '20.9', '20.9', '5.3', '18.3'], ['21.5', '6.6', '19.6', '19.3', '4.3', '17'], ['26.7', '10.2', '24.3', '24', '7', '21.3'], ['27.9', '10.9', '25.6', '24.4', '6.6', '21.4'], ['29.6', '12.1', '27.3', '25.6', '7.9', '22.7'], ['28.1', '11.4', '26', '24.3', '7.4', '21.7'], ['30.1', '12.2', '27.7', '26', '8', '23.1']]
column
['ROUGE-1', 'ROUGE-2', 'ROUGE-L', 'ROUGE-1', 'ROUGE-2', 'ROUGE-L']
['Teaching Generation', 'Teaching Attention', 'Teaching Generation+Attention']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Gigaword || ROUGE-1</th> <th>Gigaword || ROUGE-2</th> <th>Gigaword || ROUGE-L</th> <th>DUC2004 || ROUGE-1</th> <th>DUC2004 || ROUGE-2</th> <th>DUC2004 || ROUGE-L</th> </tr> </thead> <tbody> <tr> <td>System || Transformerbpe</td> <td>38.1</td> <td>19.1</td> <td>35.2</td> <td>31.2</td> <td>10.7</td> <td>27.1</td> </tr> <tr> <td>System || Pipeline-TS</td> <td>25.8</td> <td>9.7</td> <td>23.6</td> <td>23.7</td> <td>6.8</td> <td>20.9</td> </tr> <tr> <td>System || Pipeline-ST</td> <td>22</td> <td>7</td> <td>20.9</td> <td>20.9</td> <td>5.3</td> <td>18.3</td> </tr> <tr> <td>System || Pseudo-Summary (Ayana et al., 2018)</td> <td>21.5</td> <td>6.6</td> <td>19.6</td> <td>19.3</td> <td>4.3</td> <td>17</td> </tr> <tr> <td>System || Pivot-based (Cheng et al., 2017)</td> <td>26.7</td> <td>10.2</td> <td>24.3</td> <td>24</td> <td>7</td> <td>21.3</td> </tr> <tr> <td>System || Pseudo-Chinese</td> <td>27.9</td> <td>10.9</td> <td>25.6</td> <td>24.4</td> <td>6.6</td> <td>21.4</td> </tr> <tr> <td>System || Teaching Generation</td> <td>29.6</td> <td>12.1</td> <td>27.3</td> <td>25.6</td> <td>7.9</td> <td>22.7</td> </tr> <tr> <td>System || Teaching Attention</td> <td>28.1</td> <td>11.4</td> <td>26</td> <td>24.3</td> <td>7.4</td> <td>21.7</td> </tr> <tr> <td>System || Teaching Generation+Attention</td> <td>30.1</td> <td>12.2</td> <td>27.7</td> <td>26</td> <td>8</td> <td>23.1</td> </tr> </tbody></table>
Table 3
table_3
P19-1305
7
acl2019
Our Systems VS. the Baselines . The bottom part of Table 3 lists the performances of our methods. It manifests that both teaching summary word generation and teaching attention weights are able to improve the performance over the baselines. When the summary word generation and attention weights are taught simultaneously (denoted by Teaching Generation+Attention), the performance is further improved, surpassing the best baseline by more than two points on Gigaword evaluation set and more than one point on DUC2004.
[2, 1, 1, 1]
['Our Systems VS. the Baselines .', 'The bottom part of Table 3 lists the performances of our methods.', 'It manifests that both teaching summary word generation and teaching attention weights are able to improve the performance over the baselines.', 'When the summary word generation and attention weights are taught simultaneously (denoted by Teaching Generation+Attention), the performance is further improved, surpassing the best baseline by more than two points on Gigaword evaluation set and more than one point on DUC2004.']
[None, ['Teaching Generation', 'Teaching Attention', 'Teaching Generation+Attention'], ['Teaching Generation', 'Teaching Attention'], ['Teaching Generation+Attention', 'DUC2004']]
1
P19-1306table_2
Test set result on English to Swahili and English to Tagalog. We report the TREC ad-hoc retrieval evaluation metrics (MAP, P@20, NDCG@20) and the Actual Query Weighted Value (AQWV).
2
[['Query Translation and Document Translation with Indri', 'Dictionary-Based Query Translation (DBQT)'], ['Query Translation and Document Translation with Indri', 'Probabilistic Structured Query (PSQ)'], ['Query Translation and Document Translation with Indri', 'Statistical MT (SMT)'], ['Query Translation and Document Translation with Indri', 'Neural MT (NMT)'], ['Deep Relevance Ranking', 'PACRR'], ['Deep Relevance Ranking', 'PACRR-DRMM'], ['Deep Relevance Ranking', 'POSIT-DRMM'], ['Deep Relevance Ranking with Extra Features in Section 4', 'PACRR'], ['Deep Relevance Ranking with Extra Features in Section 5', 'PACRR-DRMM'], ['Deep Relevance Ranking with Extra Features in Section 6', 'POSIT-DRMM'], ['Ours with Extra Features in Section 4: In-Language Training', 'Bilingual PACRR'], ['Ours with Extra Features in Section 4: In-Language Training', 'Bilingual PACRR-DRMM'], ['Ours with Extra Features in Section 4: In-Language Training', 'Bilingual POSIT-DRMM'], ['Ours with Extra Features in Section 4: In-Language Training', 'Bilingual POSIT-DRMM (3-model ensemble)']]
2
[['EN->SW', 'MAP'], ['EN->SW', 'P@20'], ['EN->SW', 'NDCG@20'], ['EN->SW', 'AQWV'], ['EN->TL', 'MAP'], ['EN->TL', 'P@20'], ['EN->TL', 'NDCG@20'], ['EN->TL', 'AQWV']]
[['20.93', '4.86', '28.65', '6.5', '20.01', '5.42', '27.01', '5.93'], ['27.16', '5.81', '36.03', '12.56', '35.2', '8.18', '44.04', '19.81'], ['26.3', '5.28', '34.6', '13.77', '37.31', '8.77', '46.77', '21.9'], ['26.54', '5.26', '34.83', '15.7', '33.83', '8.2', '43.17', '18.56'], ['24.69', '5.24', '32.85', '11.73', '32.53', '8.42', '41.75', '17.48'], ['22.15', '5.14', '30.28', '8.5', '32.59', '8.6', '42.17', '16.59'], ['23.91', '6.04', '33.83', '12.06', '25.16', '8.15', '34.8', '9.28'], ['27.03', '5.34', '35.36', '14.18', '41.43', '8.98', '49.96', '27.46'], ['25.46', '5.5', '34.15', '12.18', '35.61', '8.69', '45.34', '22.7'], ['26.1', '5.26', '34.27', '14.11', '39.35', '9.24', '48.41', '25.01'], ['29.64', '5.75', '38.27', '17.87', '43.02', '9.63', '52.27', '29.12'], ['26.15', '5.84', '35.54', '12.92', '38.29', '9.21', '47.6', '22.94'], ['30.13', '6.28', '39.68', '18.69', '43.67', '9.73', '52.8', '29.12'], ['31.6', '6.37', '41.25', '20.19', '45.35', '9.84', '54.26', '31']]
column
['MAP', 'P@20', 'NDCG@20', 'AQWV', 'MAP', 'P@20', 'NDCG@20', 'AQWV']
['Probabilistic Structured Query (PSQ)', 'POSIT-DRMM']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>EN-&gt;SW || MAP</th> <th>EN-&gt;SW || P@20</th> <th>EN-&gt;SW || NDCG@20</th> <th>EN-&gt;SW || AQWV</th> <th>EN-&gt;TL || MAP</th> <th>EN-&gt;TL || P@20</th> <th>EN-&gt;TL || NDCG@20</th> <th>EN-&gt;TL || AQWV</th> </tr> </thead> <tbody> <tr> <td>Query Translation and Document Translation with Indri || Dictionary-Based Query Translation (DBQT)</td> <td>20.93</td> <td>4.86</td> <td>28.65</td> <td>6.5</td> <td>20.01</td> <td>5.42</td> <td>27.01</td> <td>5.93</td> </tr> <tr> <td>Query Translation and Document Translation with Indri || Probabilistic Structured Query (PSQ)</td> <td>27.16</td> <td>5.81</td> <td>36.03</td> <td>12.56</td> <td>35.2</td> <td>8.18</td> <td>44.04</td> <td>19.81</td> </tr> <tr> <td>Query Translation and Document Translation with Indri || Statistical MT (SMT)</td> <td>26.3</td> <td>5.28</td> <td>34.6</td> <td>13.77</td> <td>37.31</td> <td>8.77</td> <td>46.77</td> <td>21.9</td> </tr> <tr> <td>Query Translation and Document Translation with Indri || Neural MT (NMT)</td> <td>26.54</td> <td>5.26</td> <td>34.83</td> <td>15.7</td> <td>33.83</td> <td>8.2</td> <td>43.17</td> <td>18.56</td> </tr> <tr> <td>Deep Relevance Ranking || PACRR</td> <td>24.69</td> <td>5.24</td> <td>32.85</td> <td>11.73</td> <td>32.53</td> <td>8.42</td> <td>41.75</td> <td>17.48</td> </tr> <tr> <td>Deep Relevance Ranking || PACRR-DRMM</td> <td>22.15</td> <td>5.14</td> <td>30.28</td> <td>8.5</td> <td>32.59</td> <td>8.6</td> <td>42.17</td> <td>16.59</td> </tr> <tr> <td>Deep Relevance Ranking || POSIT-DRMM</td> <td>23.91</td> <td>6.04</td> <td>33.83</td> <td>12.06</td> <td>25.16</td> <td>8.15</td> <td>34.8</td> <td>9.28</td> </tr> <tr> <td>Deep Relevance Ranking with Extra Features in Section 4 || PACRR</td> <td>27.03</td> <td>5.34</td> <td>35.36</td> <td>14.18</td> <td>41.43</td> <td>8.98</td> <td>49.96</td> <td>27.46</td> </tr> <tr> <td>Deep Relevance Ranking with Extra Features in Section 5 || PACRR-DRMM</td> <td>25.46</td> <td>5.5</td> <td>34.15</td> <td>12.18</td> <td>35.61</td> <td>8.69</td> <td>45.34</td> <td>22.7</td> </tr> <tr> <td>Deep Relevance Ranking with Extra Features in Section 6 || POSIT-DRMM</td> <td>26.1</td> <td>5.26</td> <td>34.27</td> <td>14.11</td> <td>39.35</td> <td>9.24</td> <td>48.41</td> <td>25.01</td> </tr> <tr> <td>Ours with Extra Features in Section 4: In-Language Training || Bilingual PACRR</td> <td>29.64</td> <td>5.75</td> <td>38.27</td> <td>17.87</td> <td>43.02</td> <td>9.63</td> <td>52.27</td> <td>29.12</td> </tr> <tr> <td>Ours with Extra Features in Section 4: In-Language Training || Bilingual PACRR-DRMM</td> <td>26.15</td> <td>5.84</td> <td>35.54</td> <td>12.92</td> <td>38.29</td> <td>9.21</td> <td>47.6</td> <td>22.94</td> </tr> <tr> <td>Ours with Extra Features in Section 4: In-Language Training || Bilingual POSIT-DRMM</td> <td>30.13</td> <td>6.28</td> <td>39.68</td> <td>18.69</td> <td>43.67</td> <td>9.73</td> <td>52.8</td> <td>29.12</td> </tr> <tr> <td>Ours with Extra Features in Section 4: In-Language Training || Bilingual POSIT-DRMM (3-model ensemble)</td> <td>31.6</td> <td>6.37</td> <td>41.25</td> <td>20.19</td> <td>45.35</td> <td>9.84</td> <td>54.26</td> <td>31</td> </tr> </tbody></table>
Table 2
table_2
P19-1306
4
acl2019
Table 2 shows the result on EN->SW and EN->TL where we train and test on the same language pair. Performance of Baselines. For query translation, PSQ is better than DBQT because PSQ uses a weighted alternative to translate query terms and does not limit to the fixed translation from the dictionary as in DBQT. For document translation, we find that both SMT and NMT have a similar performance which is close to PSQ. The effectiveness of different approaches depends on the language pair (PSQ for EN->SW and SMT for EN->TL), which is a similar finding with McCarley (1999) and Franz et al. (1999). In our experiments with deep relevance ranking models, we all use SMT and PSQ because they have strong performances in both language pairs and it is fair to compare. Effect of Extra Features and Bilingual Representation. While deep relevance ranking can achieve decent performance, the extra features are critical to achieve better results. Because the extra features include the Indri score, the deep neural model essentially learns to rerank the document by effectively using a small number of training examples. Furthermore, our models with bilingual representations achieve better results in both language pairs, giving additional 1-3 MAP improvements over their counterparts. To compare language pairs, EN->TL has larger improvements over EN->SW. This is because EN->TL has better query translation, document translation, and query likelihood retrieval results from the baselines, and thus it enjoys more benefits from our model. We also found POSIT-DRMM works better than the other two, suggesting term-gating is useful especially when the query translation can provide more alternatives. We then perform ensembling of POSIT-DRMM to further improve the results.
[1, 2, 1, 1, 2, 1, 2, 2, 2, 2, 1, 1, 2]
['Table 2 shows the result on EN->SW and EN->TL where we train and test on the same language pair.', 'Performance of Baselines.', 'For query translation, PSQ is better than DBQT because PSQ uses a weighted alternative to translate query terms and does not limit to the fixed translation from the dictionary as in DBQT.', 'For document translation, we find that both SMT and NMT have a similar performance which is close to PSQ.', 'The effectiveness of different approaches depends on the language pair (PSQ for EN->SW and SMT for EN->TL), which is a similar finding with McCarley (1999) and Franz et al. (1999).', 'In our experiments with deep relevance ranking models, we all use SMT and PSQ because they have strong performances in both language pairs and it is fair to compare.', 'Effect of Extra Features and Bilingual Representation.', 'While deep relevance ranking can achieve decent performance, the extra features are critical to achieve better results.', 'Because the extra features include the Indri score, the deep neural model essentially learns to rerank the document by effectively using a small number of training examples.', 'Furthermore, our models with bilingual representations achieve better results in both language pairs, giving additional 1-3 MAP improvements over their counterparts.', 'To compare language pairs, EN->TL has larger improvements over EN->SW. This is because EN->TL has better query translation, document translation, and query likelihood retrieval results from the baselines, and thus it enjoys more benefits from our model.', 'We also found POSIT-DRMM works better than the other two, suggesting term-gating is useful especially when the query translation can provide more alternatives.', 'We then perform ensembling of POSIT-DRMM to further improve the results.']
[['EN->SW', 'EN->TL'], None, ['Probabilistic Structured Query (PSQ)', 'Dictionary-Based Query Translation (DBQT)'], ['Statistical MT (SMT)', 'Neural MT (NMT)', 'Probabilistic Structured Query (PSQ)'], ['Probabilistic Structured Query (PSQ)', 'EN->SW', 'EN->TL'], ['Probabilistic Structured Query (PSQ)', 'Statistical MT (SMT)'], None, None, None, ['Probabilistic Structured Query (PSQ)'], ['EN->SW', 'EN->TL'], ['POSIT-DRMM'], ['POSIT-DRMM']]
1
P19-1308table_2
The accuracy of different methods in various language pairs. Bold indicates the best supervised and unsupervised results, respectively. “-” means that the model fails to converge and hence the result is omitted.
3
[['Methods', 'Supervised', 'Mikolov et al. (2013a)'], ['Methods', 'Supervised', 'Xing et al. (2015)'], ['Methods', 'Supervised', 'Shigeto et al. (2015)'], ['Methods', 'Supervised', 'Artetxe et al. (2016)'], ['Methods', 'Supervised', 'Artetxe et al. (2017)'], ['Methods', 'Unsupervised', 'Zhang et al. (2017a)'], ['Methods', 'Unsupervised', 'Zhang et al. (2017b)'], ['Methods', 'Unsupervised', 'Lample et al. (2018)'], ['Methods', 'Unsupervised', 'Xu et al. (2018)'], ['Methods', 'Unsupervised', 'Artetxe et al. (2018a)'], ['Methods', 'Unsupervised', 'Ours']]
1
[['DE-EN'], ['EN-DE'], ['ES-EN'], ['EN-ES'], ['FR-EN'], ['EN-FR'], ['IT-EN'], ['EN-IT']]
[['61.93', '73.07', '74', '80.73', '71.33', '82.2', '68.93', '77.6'], ['67.73', '69.53', '77.2', '78.6', '76.33', '78.67', '72', '73.33'], ['71.07', '63.73', '81.07', '74.53', '79.93', '73.13', '76.47', '68.13'], ['69.13', '72.13', '78.27', '80.07', '77.73', '79.2', '73.6', '74.47'], ['68.07', '69.2', '75.6', '78.2', '74.47', '77.67', '70.53', '71.67'], ['40.13', '41.27', '58.8', '60.93', '-', '57.6', '43.6', '44.53'], ['-', '55.2', '70.87', '71.4', '-', '-', '64.87', '65.27'], ['69.73', '71.33', '79.07', '78.8', '77.87', '78.13', '74.47', '75.33'], ['67', '69.33', '77.8', '79.53', '75.47', '77.93', '72.6', '73.47'], ['72.27', '73.6', '81.6', '80.67', '80.2', '80.4', '76.33', '77.13'], ['73.13', '74.47', '82.13', '81.87', '81.53', '81.27', '77.6', '78.33']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy']
['Ours']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>DE-EN</th> <th>EN-DE</th> <th>ES-EN</th> <th>EN-ES</th> <th>FR-EN</th> <th>EN-FR</th> <th>IT-EN</th> <th>EN-IT</th> </tr> </thead> <tbody> <tr> <td>Methods || Supervised || Mikolov et al. (2013a)</td> <td>61.93</td> <td>73.07</td> <td>74</td> <td>80.73</td> <td>71.33</td> <td>82.2</td> <td>68.93</td> <td>77.6</td> </tr> <tr> <td>Methods || Supervised || Xing et al. (2015)</td> <td>67.73</td> <td>69.53</td> <td>77.2</td> <td>78.6</td> <td>76.33</td> <td>78.67</td> <td>72</td> <td>73.33</td> </tr> <tr> <td>Methods || Supervised || Shigeto et al. (2015)</td> <td>71.07</td> <td>63.73</td> <td>81.07</td> <td>74.53</td> <td>79.93</td> <td>73.13</td> <td>76.47</td> <td>68.13</td> </tr> <tr> <td>Methods || Supervised || Artetxe et al. (2016)</td> <td>69.13</td> <td>72.13</td> <td>78.27</td> <td>80.07</td> <td>77.73</td> <td>79.2</td> <td>73.6</td> <td>74.47</td> </tr> <tr> <td>Methods || Supervised || Artetxe et al. (2017)</td> <td>68.07</td> <td>69.2</td> <td>75.6</td> <td>78.2</td> <td>74.47</td> <td>77.67</td> <td>70.53</td> <td>71.67</td> </tr> <tr> <td>Methods || Unsupervised || Zhang et al. (2017a)</td> <td>40.13</td> <td>41.27</td> <td>58.8</td> <td>60.93</td> <td>-</td> <td>57.6</td> <td>43.6</td> <td>44.53</td> </tr> <tr> <td>Methods || Unsupervised || Zhang et al. (2017b)</td> <td>-</td> <td>55.2</td> <td>70.87</td> <td>71.4</td> <td>-</td> <td>-</td> <td>64.87</td> <td>65.27</td> </tr> <tr> <td>Methods || Unsupervised || Lample et al. (2018)</td> <td>69.73</td> <td>71.33</td> <td>79.07</td> <td>78.8</td> <td>77.87</td> <td>78.13</td> <td>74.47</td> <td>75.33</td> </tr> <tr> <td>Methods || Unsupervised || Xu et al. (2018)</td> <td>67</td> <td>69.33</td> <td>77.8</td> <td>79.53</td> <td>75.47</td> <td>77.93</td> <td>72.6</td> <td>73.47</td> </tr> <tr> <td>Methods || Unsupervised || Artetxe et al. (2018a)</td> <td>72.27</td> <td>73.6</td> <td>81.6</td> <td>80.67</td> <td>80.2</td> <td>80.4</td> <td>76.33</td> <td>77.13</td> </tr> <tr> <td>Methods || Unsupervised || Ours</td> <td>73.13</td> <td>74.47</td> <td>82.13</td> <td>81.87</td> <td>81.53</td> <td>81.27</td> <td>77.6</td> <td>78.33</td> </tr> </tbody></table>
Table 2
table_2
P19-1308
4
acl2019
3.2 Experimental Results . Table 2 presents the results of different systems, showing that our proposed model achieves the best performance on all test language pairs under unsupervised settings. In addition, our approach is able to achieve completely comparable or even better performance than supervised systems. This illustrates that the quality of word alignment can be improved by introducing grammar information from the pre-trained denoising language model. Our denoising evaluator encourages the model to retrieve the correct translation with appropriate morphological forms by assessing the fluency of sentences obtained by word-to-word translation. This alleviates the adverse effect of morphological variation.
[2, 1, 1, 2, 2, 2]
['3.2 Experimental Results .', 'Table 2 presents the results of different systems, showing that our proposed model achieves the best performance on all test language pairs under unsupervised settings.', 'In addition, our approach is able to achieve completely comparable or even better performance than supervised systems.', 'This illustrates that the quality of word alignment can be improved by introducing grammar information from the pre-trained denoising language model.', 'Our denoising evaluator encourages the model to retrieve the correct translation with appropriate morphological forms by assessing the fluency of sentences obtained by word-to-word translation.', 'This alleviates the adverse effect of morphological variation.']
[None, ['Ours', 'Unsupervised'], ['Ours', 'Supervised'], None, None, None]
1
P19-1309table_2
BUCC results (precision, recall and F1) on the training set, used to optimize the filtering threshold.
4
[['Func.', 'Abs. (cos)', 'Retrieval', 'Forward'], ['Func.', 'Abs. (cos)', 'Retrieval', 'Backward'], ['Func.', 'Abs. (cos)', 'Retrieval', 'Intersection'], ['Func.', 'Abs. (cos)', 'Retrieval', 'Max. score'], ['Func.', 'Dist.', 'Retrieval', 'Forward'], ['Func.', 'Dist.', 'Retrieval', 'Backward'], ['Func.', 'Dist.', 'Retrieval', 'Intersection'], ['Func.', 'Dist.', 'Retrieval', 'Max. score'], ['Func.', 'Ratio', 'Retrieval', 'Forward'], ['Func.', 'Ratio', 'Retrieval', 'Backward'], ['Func.', 'Ratio', 'Retrieval', 'Intersection'], ['Func.', 'Ratio', 'Retrieval', 'Max. score']]
2
[['EN-DE', 'P'], ['EN-DE', 'R'], ['EN-DE', 'F1'], ['EN-FR', 'P'], ['EN-FR', 'R'], ['EN-FR', 'F1']]
[['78.9', '75.1', '77', '82.1', '74.2', '77.9'], ['79', '73.1', '75.9', '77.2', '72.2', '74.7'], ['84.9', '80.8', '82.8', '83.6', '78.3', '80.9'], ['83.1', '77.2', '80.1', '80.9', '77.5', '79.2'], ['94.8', '94.1', '94.4', '91.1', '91.8', '91.4'], ['94.8', '94.1', '94.4', '91.5', '91.4', '91.4'], ['94.9', '94.1', '94.5', '91.2', '91.8', '91.5'], ['94.9', '94.1', '94.5', '91.2', '91.8', '91.5'], ['95.2', '94.4', '94.8', '92.4', '91.3', '91.8'], ['95.2', '94.4', '94.8', '92.3', '91.3', '91.8'], ['95.3', '94.4', '94.8', '92.4', '91.3', '91.9'], ['95.3', '94.4', '94.8', '92.4', '91.3', '91.9']]
column
['P', 'R', 'F1', 'P', 'R', 'F1']
['Intersection', 'Max. score']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>EN-DE || P</th> <th>EN-DE || R</th> <th>EN-DE || F1</th> <th>EN-FR || P</th> <th>EN-FR || R</th> <th>EN-FR || F1</th> </tr> </thead> <tbody> <tr> <td>Func. || Abs. (cos) || Retrieval || Forward</td> <td>78.9</td> <td>75.1</td> <td>77</td> <td>82.1</td> <td>74.2</td> <td>77.9</td> </tr> <tr> <td>Func. || Abs. (cos) || Retrieval || Backward</td> <td>79</td> <td>73.1</td> <td>75.9</td> <td>77.2</td> <td>72.2</td> <td>74.7</td> </tr> <tr> <td>Func. || Abs. (cos) || Retrieval || Intersection</td> <td>84.9</td> <td>80.8</td> <td>82.8</td> <td>83.6</td> <td>78.3</td> <td>80.9</td> </tr> <tr> <td>Func. || Abs. (cos) || Retrieval || Max. score</td> <td>83.1</td> <td>77.2</td> <td>80.1</td> <td>80.9</td> <td>77.5</td> <td>79.2</td> </tr> <tr> <td>Func. || Dist. || Retrieval || Forward</td> <td>94.8</td> <td>94.1</td> <td>94.4</td> <td>91.1</td> <td>91.8</td> <td>91.4</td> </tr> <tr> <td>Func. || Dist. || Retrieval || Backward</td> <td>94.8</td> <td>94.1</td> <td>94.4</td> <td>91.5</td> <td>91.4</td> <td>91.4</td> </tr> <tr> <td>Func. || Dist. || Retrieval || Intersection</td> <td>94.9</td> <td>94.1</td> <td>94.5</td> <td>91.2</td> <td>91.8</td> <td>91.5</td> </tr> <tr> <td>Func. || Dist. || Retrieval || Max. score</td> <td>94.9</td> <td>94.1</td> <td>94.5</td> <td>91.2</td> <td>91.8</td> <td>91.5</td> </tr> <tr> <td>Func. || Ratio || Retrieval || Forward</td> <td>95.2</td> <td>94.4</td> <td>94.8</td> <td>92.4</td> <td>91.3</td> <td>91.8</td> </tr> <tr> <td>Func. || Ratio || Retrieval || Backward</td> <td>95.2</td> <td>94.4</td> <td>94.8</td> <td>92.3</td> <td>91.3</td> <td>91.8</td> </tr> <tr> <td>Func. || Ratio || Retrieval || Intersection</td> <td>95.3</td> <td>94.4</td> <td>94.8</td> <td>92.4</td> <td>91.3</td> <td>91.9</td> </tr> <tr> <td>Func. || Ratio || Retrieval || Max. score</td> <td>95.3</td> <td>94.4</td> <td>94.8</td> <td>92.4</td> <td>91.3</td> <td>91.9</td> </tr> </tbody></table>
Table 2
table_2
P19-1309
4
acl2019
4.1 BUCC mining task. The shared task of the workshop on Building and Using Comparable Corpora (BUCC) is a well-established evaluation framework for bitext mining (Zweigenbaum et al., 2017, 2018). The task is to mine for parallel sentences between English and four foreign languages: German, French, Russian and Chinese. There are 150K to 1.2M sentences for each language, split into a sample, training and test set. About 2-3% of the sentences are parallel. Table 2 reports precision, recall and F1 scores on the training set. Our results show that multilingual sentence embeddings already achieve competitive performance using standard forward retrieval over cosine similarity, which is in line with Schwenk (2018). Both of our bidirectional retrieval strategies achieve substantial improvements over this baseline while still relying on cosine similarity, with intersection giving the best results. Moreover, our proposed margin-based scoring brings large improvements when using either the distance or the ratio functions, outperforming cosine similarity by more than 10 points in all cases. The best results are achieved by ratio, which outperforms distance by 0.3-0.5 points. Interestingly, the retrieval strategy has a very small effect in both cases, suggesting that the proposed scoring is more robust than cosine.
[2, 2, 2, 2, 2, 1, 1, 1, 1, 1, 2]
['4.1 BUCC mining task.', 'The shared task of the workshop on Building and Using Comparable Corpora (BUCC) is a well-established evaluation framework for bitext mining (Zweigenbaum et al., 2017, 2018).', 'The task is to mine for parallel sentences between English and four foreign languages: German, French, Russian and Chinese.', 'There are 150K to 1.2M sentences for each language, split into a sample, training and test set.', 'About 2-3% of the sentences are parallel.', 'Table 2 reports precision, recall and F1 scores on the training set.', 'Our results show that multilingual sentence embeddings already achieve competitive performance using standard forward retrieval over cosine similarity, which is in line with Schwenk (2018).', 'Both of our bidirectional retrieval strategies achieve substantial improvements over this baseline while still relying on cosine similarity, with intersection giving the best results.', 'Moreover, our proposed margin-based scoring brings large improvements when using either the distance or the ratio functions, outperforming cosine similarity by more than 10 points in all cases.', 'The best results are achieved by ratio, which outperforms distance by 0.3-0.5 points.', 'Interestingly, the retrieval strategy has a very small effect in both cases, suggesting that the proposed scoring is more robust than cosine.']
[None, None, None, None, None, ['P', 'R', 'F1'], None, ['Retrieval', 'Abs. (cos)', 'Intersection'], ['Ratio', 'Dist.', 'Max. score', 'Abs. (cos)'], ['Ratio', 'Dist.'], None]
1
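The description above contrasts plain cosine retrieval with margin-based scoring (distance and ratio margin functions) under forward, backward, and intersection retrieval. A minimal NumPy sketch of that family of scoring functions; the neighbourhood size k and the assumption that embeddings are already L2-normalised are illustrative choices, not the paper's exact settings:

```python
import numpy as np

def margin_scores(x, y, k=4, margin="ratio"):
    """Margin-based scores between two sets of L2-normalised sentence embeddings.

    x: (n, d) source embeddings, y: (m, d) target embeddings.
    The cosine similarity of a candidate pair is compared against the average
    similarity of each side's k nearest neighbours; "ratio" divides by it,
    "distance" subtracts it, anything else falls back to plain cosine.
    (k=4 here is an illustrative choice, not the paper's setting.)"""
    sim = x @ y.T                                        # cosine, since rows are unit-norm
    knn_x = np.sort(sim, axis=1)[:, -k:].mean(axis=1)    # avg sim of each x_i to its k-NN in y
    knn_y = np.sort(sim, axis=0)[-k:, :].mean(axis=0)    # avg sim of each y_j to its k-NN in x
    penalty = (knn_x[:, None] + knn_y[None, :]) / 2
    if margin == "ratio":
        return sim / penalty
    if margin == "distance":
        return sim - penalty
    return sim                                           # absolute = raw cosine

def mine_pairs(x, y, **kw):
    """Forward retrieval picks the best target per source; intersection keeps mutual best pairs."""
    s = margin_scores(x, y, **kw)
    fwd = s.argmax(axis=1)    # best y for each x
    bwd = s.argmax(axis=0)    # best x for each y
    return [(i, j) for i, j in enumerate(fwd) if bwd[j] == i]
```

Forward retrieval keeps the highest-scoring target for each source sentence, while the intersection strategy keeps only mutual best matches; a filtering threshold on the score can then be tuned on the training set, as the caption of the record above describes.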
P19-1317table_3
Name normalization accuracy on disease (Di) and chemical (Ch) datasets. The last row group includes the results of supervised models that utilize training annotations in each specific dataset. XM denotes the use of ‘exact match’ rule to assign the corresponding concept to a mention if the mention is found in the training data. † indicates the results reported by Wright et al. (2019).
2
[['Models', 'Jaccard'], ['Models', 'SG W'], ['Models', 'SG W + WMD'], ['Models', 'SG S'], ['Models', 'SG S.C'], ['Models', 'BNE + SG W'], ['Models', 'BNE + SG S.C'], ['Models', 'Wieting et al. (2015)'], ['Models', 'DSouza and Ng (2015)'], ['Models', 'Leaman and Lu (2016)'], ['Models', 'Wright et al. (2019)'], ['Models', 'BNE + SG W + XM'], ['Models', 'BNE + SG S.C + XM']]
2
[['NCBI', '(Di)'], ['BC5CDR', '(Di)'], ['BC5CDR', '(Ch)']]
[['0.843', '0.772', '0.935'], ['0.8', '0.725', '0.771'], ['0.779', '0.731', '0.919'], ['0.815', '0.79', '0.929'], ['0.838', '0.811', '0.929'], ['0.854', '0.829', '0.93'], ['0.857', '0.829', '0.934'], ['0.822', '0.813', '0.93'], ['0.847', '0.841', '-'], ['0.877*', '0.889*', '0.941'], ['0.878*', '0.880*', '-'], ['0.873', '0.905', '0.954'], ['0.877', '0.906', '0.958']]
column
['accuracy', 'accuracy', 'accuracy']
['BNE + SG W', 'BNE + SG S.C', 'BNE + SG W + XM', 'BNE + SG S.C + XM']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>NCBI || (Di)</th> <th>BC5CDR || (Di)</th> <th>BC5CDR || (Ch)</th> </tr> </thead> <tbody> <tr> <td>Models || Jaccard</td> <td>0.843</td> <td>0.772</td> <td>0.935</td> </tr> <tr> <td>Models || SG W</td> <td>0.8</td> <td>0.725</td> <td>0.771</td> </tr> <tr> <td>Models || SG W + WMD</td> <td>0.779</td> <td>0.731</td> <td>0.919</td> </tr> <tr> <td>Models || SG S</td> <td>0.815</td> <td>0.79</td> <td>0.929</td> </tr> <tr> <td>Models || SG S.C</td> <td>0.838</td> <td>0.811</td> <td>0.929</td> </tr> <tr> <td>Models || BNE + SG W</td> <td>0.854</td> <td>0.829</td> <td>0.93</td> </tr> <tr> <td>Models || BNE + SG S.C</td> <td>0.857</td> <td>0.829</td> <td>0.934</td> </tr> <tr> <td>Models || Wieting et al. (2015)</td> <td>0.822</td> <td>0.813</td> <td>0.93</td> </tr> <tr> <td>Models || DSouza and Ng (2015)</td> <td>0.847</td> <td>0.841</td> <td>-</td> </tr> <tr> <td>Models || Leaman and Lu (2016)</td> <td>0.877*</td> <td>0.889*</td> <td>0.941</td> </tr> <tr> <td>Models || Wright et al. (2019)</td> <td>0.878*</td> <td>0.880*</td> <td>-</td> </tr> <tr> <td>Models || BNE + SG W + XM</td> <td>0.873</td> <td>0.905</td> <td>0.954</td> </tr> <tr> <td>Models || BNE + SG S.C + XM</td> <td>0.877</td> <td>0.906</td> <td>0.958</td> </tr> </tbody></table>
Table 3
table_3
P19-1317
8
acl2019
Different from the lexical (Jaccard) and semantic matching (WMD and SGW) baselines, BNE obtains high scores in accuracy metric (see Table 3). The result indicates that BNE has encoded both lexical and semantic information of names into their embeddings. Table 3 also includes performances of other state-of-the-art baselines in biomedical name normalization, such as sieve-based (D'Souza and Ng, 2015), supervised semantic indexing (Leaman and Lu, 2016), and coherence-based neural network Wright et al. (2019) approaches. Note that all these baselines require human annotated labels, and the models are specifically tuned for each dataset. On the other hand, BNE utilizes only the existing synonym sets in UMLS for training. When the dataset-specific annotations are utilized, even the simple exact matching rule can boost the performance of our model to surpass other baselines (see the last two rows in Table 3).
[1, 2, 1, 2, 2, 1]
['Different from the lexical (Jaccard) and semantic matching (WMD and SGW) baselines, BNE obtains high scores in accuracy metric (see Table 3).', 'The result indicates that BNE has encoded both lexical and semantic information of names into their embeddings.', "Table 3 also includes performances of other state-of-the-art baselines in biomedical name normalization, such as sieve-based (D'Souza and Ng, 2015), supervised semantic indexing (Leaman and Lu, 2016), and coherence-based neural network Wright et al. (2019) approaches.", 'Note that all these baselines require human annotated labels, and the models are specifically tuned for each dataset.', 'On the other hand, BNE utilizes only the existing synonym sets in UMLS for training.', 'When the dataset-specific annotations are utilized, even the simple exact matching rule can boost the performance of our model to surpass other baselines (see the last two rows in Table 3).']
[['BNE + SG W', 'BNE + SG S.C', 'BNE + SG W + XM', 'BNE + SG S.C + XM'], None, ['DSouza and Ng (2015)', 'Leaman and Lu (2016)', 'Wright et al. (2019)'], None, ['BNE + SG W', 'BNE + SG S.C', 'BNE + SG W + XM', 'BNE + SG S.C + XM'], ['BNE + SG W + XM', 'BNE + SG S.C + XM']]
1
P19-1318table_1
Accuracy and macro-averaged F-Measure, precision and recall on BLESS and DiffVec. Models marked with † use external resources. The results with * indicate that WordNet was used for both the development of the model and the construction of the dataset. All models concatenate their encoded representations with the baseline vector difference of standard FastText word embeddings.
4
[['Encoding', 'Mult+Avg', 'RWE', '(This paper)'], ['Encoding', 'Mult+Avg', 'Pair2Vec', '(Joshi et al., 2019)'], ['Encoding', 'Mult+Avg', 'FastText', '(Bojanowski et al., 2017)'], ['Encoding', 'Mult+Avg', 'Retrofitting†', '(Faruqui et al., 2015)'], ['Encoding', 'Mult+Avg', 'Attract-Repel†', '(Mrkšić et al., 2017)'], ['Encoding', 'Mult+Conc', 'Pair2Vec', '(Joshi et al., 2019)'], ['Encoding', 'Mult+Conc', 'FastText', '(Bojanowski et al., 2017)'], ['Encoding', 'Diff (only)', 'FastText', '(Bojanowski et al., 2017)']]
2
[['DiffVec', 'Acc.'], ['DiffVec', 'F1'], ['DiffVec', 'Prec.'], ['DiffVec', 'Rec.'], ['BLESS', 'Acc.'], ['BLESS', 'F1'], ['BLESS', 'Prec.'], ['BLESS', 'Rec.']]
[['85.3', '64.2', '65.1', '64.5', '94.3', '92.8', '93.0', '92.6'], ['85', '64.0', '65.0', '64.5', '91.2', '89.3', '88.9', '89.7'], ['84.2', '61.4', '62.6', '61.9', '92.8', '90.4', '90.7', '90.2'], ['86.1*', '64.6*', '66.6*', '64.5*', '90.6', '88.3', '88.1', '88.6'], ['86.0*', '64.6*', '66.0*', '65.2*', '91.2', '89.0', '88.8', '89.3'], ['84.8', '64.1', '65.7', '64.4', '90.9', '88.8', '88.6', '89.1'], ['84.3', '61.3', '62.4', '61.8', '92.9', '90.6', '90.8', '90.4'], ['81.9', '57.3', '59.3', '57.8', '88.5', '85.4', '85.7', '85.4']]
column
['Acc.', 'F1', 'Prec.', 'Rec.', 'Acc.', 'F1', 'Prec.', 'Rec.']
['RWE']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>DiffVec || Acc.</th> <th>DiffVec || F1</th> <th>DiffVec || Prec.</th> <th>DiffVec || Rec.</th> <th>BLESS || Acc.</th> <th>BLESS || F1</th> <th>BLESS || Prec.</th> <th>BLESS || Rec.</th> </tr> </thead> <tbody> <tr> <td>Encoding || Mult+Avg || RWE || (This paper)</td> <td>85.3</td> <td>64.2</td> <td>65.1</td> <td>64.5</td> <td>94.3</td> <td>92.8</td> <td>93.0</td> <td>92.6</td> </tr> <tr> <td>Encoding || Mult+Avg || Pair2Vec || (Joshi et al., 2019)</td> <td>85</td> <td>64.0</td> <td>65.0</td> <td>64.5</td> <td>91.2</td> <td>89.3</td> <td>88.9</td> <td>89.7</td> </tr> <tr> <td>Encoding || Mult+Avg || FastText || (Bojanowski et al., 2017)</td> <td>84.2</td> <td>61.4</td> <td>62.6</td> <td>61.9</td> <td>92.8</td> <td>90.4</td> <td>90.7</td> <td>90.2</td> </tr> <tr> <td>Encoding || Mult+Avg || Retrofitting† || (Faruqui et al., 2015)</td> <td>86.1*</td> <td>64.6*</td> <td>66.6*</td> <td>64.5*</td> <td>90.6</td> <td>88.3</td> <td>88.1</td> <td>88.6</td> </tr> <tr> <td>Encoding || Mult+Avg || Attract-Repel† || (Mrkšić et al., 2017)</td> <td>86.0*</td> <td>64.6*</td> <td>66.0*</td> <td>65.2*</td> <td>91.2</td> <td>89.0</td> <td>88.8</td> <td>89.3</td> </tr> <tr> <td>Encoding || Mult+Conc || Pair2Vec || (Joshi et al., 2019)</td> <td>84.8</td> <td>64.1</td> <td>65.7</td> <td>64.4</td> <td>90.9</td> <td>88.8</td> <td>88.6</td> <td>89.1</td> </tr> <tr> <td>Encoding || Mult+Conc || FastText || (Bojanowski et al., 2017)</td> <td>84.3</td> <td>61.3</td> <td>62.4</td> <td>61.8</td> <td>92.9</td> <td>90.6</td> <td>90.8</td> <td>90.4</td> </tr> <tr> <td>Encoding || Diff (only) || FastText || (Bojanowski et al., 2017)</td> <td>81.9</td> <td>57.3</td> <td>59.3</td> <td>57.8</td> <td>88.5</td> <td>85.4</td> <td>85.7</td> <td>85.4</td> </tr> </tbody></table>
Table 1
table_1
P19-1318
6
acl2019
Results . Table 1 shows the results of our relational word vectors, the standard FastText embeddings and other baselines on the two relation classification datasets (i.e. BLESS and DiffVec). Our model consistently outperforms the FastText embeddings baseline and comparison systems, with the only exception being the precision score for DiffVec. Despite being completely unsupervised, it is also surprising that our model manages to outperform the knowledge-enhanced embeddings of Retrofitting and Attract-Repel in the BLESS dataset. For DiffVec, let us recall that both these approaches have the unfair advantage of having had WordNet as source knowledge base, used both to construct the test set and to enhance the word embeddings. In general, the improvement of RWE over standard word embeddings suggests that our vectors capture relations in a way that is compatible to standard word vectors (which will be further discussed in Section 6.2).
[2, 1, 1, 1, 2, 1]
['Results .', 'Table 1 shows the results of our relational word vectors, the standard FastText embeddings and other baselines on the two relation classification datasets (i.e. BLESS and DiffVec).', 'Our model consistently outperforms the FastText embeddings baseline and comparison systems, with the only exception being the precision score for DiffVec.', 'Despite being completely unsupervised, it is also surprising that our model manages to outperform the knowledge-enhanced embeddings of Retrofitting and Attract-Repel in the BLESS dataset.', ' For DiffVec, let us recall that both these approaches have the unfair advantage of having had WordNet as source knowledge base, used both to construct the test set and to enhance the word embeddings.', 'In general, the improvement of RWE over standard word embeddings suggests that our vectors capture relations in a way that is compatible to standard word vectors (which will be further discussed in Section 6.2).']
[None, ['FastText', 'BLESS', 'DiffVec'], ['FastText', 'DiffVec'], ['BLESS', 'Retrofitting†', 'Attract-Repel†'], ['DiffVec'], ['RWE']]
1
P19-1318table_2
Results on the McRae feature norms dataset (Macro F-Score) and QVEC (correlation score). Models marked with † use external resources. The results with * indicate that WordNet was used for both the development of the model and the construction of the dataset.
2
[['Model', 'RWE'], ['Model', 'Pair2Vec'], ['Model', 'Retrofitting'], ['Model', 'Attract-Repel'], ['Model', 'FastText']]
2
[['McRae Feature Norms', 'Overall'], ['McRae Feature Norms', 'metal'], ['McRae Feature Norms', 'is_small'], ['McRae Feature Norms', 'is_large'], ['McRae Feature Norms', 'animal'], ['McRae Feature Norms', 'is_edible'], ['McRae Feature Norms', 'wood'], ['McRae Feature Norms', 'is_round'], ['McRae Feature Norms', 'is_long'], ['QVEC', '-']]
[['55.2', '73.6', '46.7', '45.9', '89.2', '61.5', '38.5', '39', '46.8', '55.4'], ['55', '71.9', '49.2', '43.3', '88.9', '68.3', '37.7', '35', '45.5', '52.7'], ['50.6', '72.3', '44', '39.1', '90.6', '75.7', '15.4', '22.9', '44.4', '56.8*'], ['50.4', '73.2', '44.4', '33.3', '88.9', '71.8', '31.1', '24.2', '35.9', '55.9*'], ['54.6', '72.7', '48.4', '45.2', '87.5', '63.2', '33.3', '39', '47.8', '54.6']]
column
['F-Score', 'F-Score', 'F-Score', 'F-Score', 'F-Score', 'F-Score', 'F-Score', 'F-Score', 'F-Score', 'correlation']
['RWE']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>McRae Feature Norms || Overall</th> <th>McRae Feature Norms || metal</th> <th>McRae Feature Norms || is_small</th> <th>McRae Feature Norms || is_large</th> <th>McRae Feature Norms || animal</th> <th>McRae Feature Norms || is_edible</th> <th>McRae Feature Norms || wood</th> <th>McRae Feature Norms || is_round</th> <th>McRae Feature Norms || is_long</th> <th>QVEC || -</th> </tr> </thead> <tbody> <tr> <td>Model || RWE</td> <td>55.2</td> <td>73.6</td> <td>46.7</td> <td>45.9</td> <td>89.2</td> <td>61.5</td> <td>38.5</td> <td>39</td> <td>46.8</td> <td>55.4</td> </tr> <tr> <td>Model || Pair2Vec</td> <td>55</td> <td>71.9</td> <td>49.2</td> <td>43.3</td> <td>88.9</td> <td>68.3</td> <td>37.7</td> <td>35</td> <td>45.5</td> <td>52.7</td> </tr> <tr> <td>Model || Retrofitting</td> <td>50.6</td> <td>72.3</td> <td>44</td> <td>39.1</td> <td>90.6</td> <td>75.7</td> <td>15.4</td> <td>22.9</td> <td>44.4</td> <td>56.8*</td> </tr> <tr> <td>Model || Attract-Repel</td> <td>50.4</td> <td>73.2</td> <td>44.4</td> <td>33.3</td> <td>88.9</td> <td>71.8</td> <td>31.1</td> <td>24.2</td> <td>35.9</td> <td>55.9*</td> </tr> <tr> <td>Model || FastText</td> <td>54.6</td> <td>72.7</td> <td>48.4</td> <td>45.2</td> <td>87.5</td> <td>63.2</td> <td>33.3</td> <td>39</td> <td>47.8</td> <td>54.6</td> </tr> </tbody></table>
Table 2
table_2
P19-1318
7
acl2019
Results. Table 2 shows the results on the McRae Feature Norms dataset and QVEC. In the case of the McRae Feature Norms dataset, our relational word embeddings achieve the best overall results, although there is some variation for the individual features. These results suggest that attributional information is encoded well in our relational word embeddings. Interestingly, our results also suggest that Retrofitting and Attract-Repel, which use pairs of related words during training, may be too naive to capture the complex relationships proposed in these benchmarks. In fact, they perform considerably worse than the baseline FastText model. On the other hand, Pair2Vec, which we recall is the most similar to our model, yields slightly better results than the FastText baseline, but still worse than our relational word embedding model. This is especially remarkable considering its much lower computational cost.
[2, 1, 1, 1, 1, 1, 1, 1]
['Results.', 'Table 2 shows the results on the McRae Feature Norms dataset and QVEC.', 'In the case of the McRae Feature Norms dataset, our relational word embeddings achieve the best overall results, although there is some variation for the individual features.', 'These results suggest that attributional information is encoded well in our relational word embeddings.', 'Interestingly, our results also suggest that Retrofitting and Attract-Repel, which use pairs of related words during training, may be too naive to capture the complex relationships proposed in these benchmarks.', 'In fact, they perform considerably worse than the baseline FastText model.', 'On the other hand, Pair2Vec, which we recall is the most similar to our model, yields slightly better results than the FastText baseline, but still worse than our relational word embedding model.', 'This is especially remarkable considering its much lower computational cost.']
[None, ['McRae Feature Norms', 'QVEC'], ['McRae Feature Norms', 'RWE'], ['RWE'], ['Retrofitting', 'Attract-Repel'], ['Retrofitting', 'Attract-Repel'], ['Pair2Vec', 'FastText', 'RWE'], ['Pair2Vec']]
1
P19-1321table_1
Word analogy accuracy results on different datasets.
2
[['Models', 'GloVe'], ['Models', 'SG'], ['Models', 'CBOW'], ['Models', 'WeMAP'], ['Models', 'CvMF'], ['Models', 'CvMF(NIG)']]
1
[['Gsem'], ['GSyn'], ['MSR'], ['IM'], ['DM'], ['ES'], ['LS']]
[['78.85', '62.81', '53.04', '55.21', '14.82', '10.56', '0.881'], ['71.58', '60.50', '51.71', '55.45', '13.48', '08.78', '0.671'], ['64.81', '47.39', '45.33', '50.58', '10.11', '07.02', '0.764'], ['83.52', '63.08', '55.08', '56.03', '14.95', '10.62', '0.903'], ['63.22', '67.41', '63.21', '65.94', '17.46', '9.380', '1.100'], ['64.14', '67.55', '63.55', '65.95', '17.49', '9.410', '1.210']]
column
['accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy', 'accuracy']
['CvMF(NIG)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Gsem</th> <th>GSyn</th> <th>MSR</th> <th>IM</th> <th>DM</th> <th>ES</th> <th>LS</th> </tr> </thead> <tbody> <tr> <td>Models || GloVe</td> <td>78.85</td> <td>62.81</td> <td>53.04</td> <td>55.21</td> <td>14.82</td> <td>10.56</td> <td>0.881</td> </tr> <tr> <td>Models || SG</td> <td>71.58</td> <td>60.50</td> <td>51.71</td> <td>55.45</td> <td>13.48</td> <td>08.78</td> <td>0.671</td> </tr> <tr> <td>Models || CBOW</td> <td>64.81</td> <td>47.39</td> <td>45.33</td> <td>50.58</td> <td>10.11</td> <td>07.02</td> <td>0.764</td> </tr> <tr> <td>Models || WeMAP</td> <td>83.52</td> <td>63.08</td> <td>55.08</td> <td>56.03</td> <td>14.95</td> <td>10.62</td> <td>0.903</td> </tr> <tr> <td>Models || CvMF</td> <td>63.22</td> <td>67.41</td> <td>63.21</td> <td>65.94</td> <td>17.46</td> <td>9.380</td> <td>1.100</td> </tr> <tr> <td>Models || CvMF(NIG)</td> <td>64.14</td> <td>67.55</td> <td>63.55</td> <td>65.95</td> <td>17.49</td> <td>9.410</td> <td>1.210</td> </tr> </tbody></table>
Table 1
table_1
P19-1321
6
acl2019
Table 1 shows word analogy results for three datasets. First, we show results for the Google analogy dataset (Mikolov et al., 2013a), which is available from the GloVe project and covers a mix of semantic and syntactic relations. These results are shown separately in Table 1 as Gsem and Gsyn respectively. Second, we consider the Microsoft syntactic word analogy dataset, which only covers syntactic relations and is referred to as MSR. Finally, we show results for the BATS analogy dataset, which covers four categories of relations: inflectional morphology (IM), derivational morphology (DM), encyclopedic semantics (ES) and lexicographic semantics (LS). The results in Table 1 clearly show that our model behaves substantially differently from the baselines: for the syntactic/morphological relationships (Gsyn, MSR, IM, DM), our model outperforms the baselines in a very substantial way. On the other hand, for the remaining, semantically oriented categories, the performance is less strong, with particularly weak results for Gsem. For ES and LS, it needs to be emphasized that the results are weak for all models, which is partially due to a relatively high number of out-of-vocabulary words. In Figure 1 we show the impact of the number of mixture components K on the performance for Gsem and Gsyn (for the NIG variant). This shows that the under-performance on Gsem is not due to the choice of K. Among other things, we can also see that a relatively high number of mixture components is needed to achieve the best results.
[1, 2, 1, 1, 1, 1, 1, 1, 2, 2]
['Table 1 shows word analogy results for three datasets.', 'First, we show results for the Google analogy dataset (Mikolov et al., 2013a), which is available from the GloVe project and covers a mix of semantic and syntactic relations.', 'These results are shown separately in Table 1 as Gsem and Gsyn respectively.', 'Second, we consider the Microsoft syntactic word analogy dataset, which only covers syntactic relations and is referred to as MSR.', 'Finally, we show results for the BATS analogy dataset, which covers four categories of relations: inflectional morphology (IM), derivational morphology (DM), encyclopedic semantics (ES) and lexicographic semantics (LS).', 'The results in Table 1 clearly show that our model behaves substantially differently from the baselines: for the syntactic/morphological relationships (Gsyn, MSR, IM, DM), our model outperforms the baselines in a very substantial way.', 'On the other hand, for the remaining, semantically oriented categories, the performance is less strong, with particularly weak results for Gsem.', 'For ES and LS, it needs to be emphasized that the results are weak for all models, which is partially due to a relatively high number of out-of-vocabulary words.', 'In Figure 1 we show the impact of the number of mixture components K on the performance for Gsem and Gsyn (for the NIG variant).', 'This shows that the under-performance on Gsem is not due to the choice of K. Among other things, we can also see that a relatively high number of mixture components is needed to achieve the best results.']
[None, None, ['Gsem', 'GSyn'], ['MSR'], ['IM', 'DM', 'ES', 'LS'], ['CvMF(NIG)', 'GSyn', 'MSR', 'IM', 'DM'], ['CvMF(NIG)', 'Gsem'], ['ES'], None, None]
1
P19-1321table_7
Document classification results (F1).
2
[['Models', 'TF-IDF'], ['Models', 'LDA'], ['Models', 'HDP'], ['Models', 'movMF'], ['Models', 'GLDA'], ['Models', 'sHDP'], ['Models', 'GloVe'], ['Models', 'WeMAP'], ['Models', 'SG'], ['Models', 'CBOW'], ['Models', 'CvMF'], ['Models', 'CvMF(NIG)']]
1
[['20NG'], ['OHS'], ['TechTC'], ['Reu']]
[['0.852', '0.632', '0.306', '0.319'], ['0.859', '0.629', '0.305', '0.323'], ['0.862', '0.627', '0.304', '0.339'], ['0.809', '0.610', '0.302', '0.336'], ['0.862', '0.629', '0.305', '0.352'], ['0.863', '0.631', '0.304', '0.353'], ['0.852', '0.629', '0.301', '0.315'], ['0.855', '0.630', '0.306', '0.345'], ['0.853', '0.631', '0.304', '0.341'], ['0.823', '0.629', '0.297', '0.339'], ['0.871', '0.633', '0.305', '0.362'], ['0.871', '0.633', '0.305', '0.363']]
column
['F1', 'F1', 'F1', 'F1']
['CvMF(NIG)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>20NG</th> <th>OHS</th> <th>TechTC</th> <th>Reu</th> </tr> </thead> <tbody> <tr> <td>Models || TF-IDF</td> <td>0.852</td> <td>0.632</td> <td>0.306</td> <td>0.319</td> </tr> <tr> <td>Models || LDA</td> <td>0.859</td> <td>0.629</td> <td>0.305</td> <td>0.323</td> </tr> <tr> <td>Models || HDP</td> <td>0.862</td> <td>0.627</td> <td>0.304</td> <td>0.339</td> </tr> <tr> <td>Models || movMF</td> <td>0.809</td> <td>0.610</td> <td>0.302</td> <td>0.336</td> </tr> <tr> <td>Models || GLDA</td> <td>0.862</td> <td>0.629</td> <td>0.305</td> <td>0.352</td> </tr> <tr> <td>Models || sHDP</td> <td>0.863</td> <td>0.631</td> <td>0.304</td> <td>0.353</td> </tr> <tr> <td>Models || GloVe</td> <td>0.852</td> <td>0.629</td> <td>0.301</td> <td>0.315</td> </tr> <tr> <td>Models || WeMAP</td> <td>0.855</td> <td>0.630</td> <td>0.306</td> <td>0.345</td> </tr> <tr> <td>Models || SG</td> <td>0.853</td> <td>0.631</td> <td>0.304</td> <td>0.341</td> </tr> <tr> <td>Models || CBOW</td> <td>0.823</td> <td>0.629</td> <td>0.297</td> <td>0.339</td> </tr> <tr> <td>Models || CvMF</td> <td>0.871</td> <td>0.633</td> <td>0.305</td> <td>0.362</td> </tr> <tr> <td>Models || CvMF(NIG)</td> <td>0.871</td> <td>0.633</td> <td>0.305</td> <td>0.363</td> </tr> </tbody></table>
Table 7
table_7
P19-1321
6
acl2019
Table 7 summarizes our document classification results. It can be seen that our model outperforms all baselines, except for the TechTC dataset, where the results are very close. Among the baselines, sHDP achieves the best performance. Interestingly, this model also uses von Mises-Fisher mixtures, but relies on a pre-trained word embedding.
[1, 1, 1, 2]
['Table 7 summarizes our document classification results.', 'It can be seen that our model outperforms all baselines, except for the TechTC dataset, where the results are very close.', 'Among the baselines, sHDP achieves the best performance.', 'Interestingly, this model also uses von Mises-Fisher mixtures, but relies on a pre-trained word embedding.']
[None, ['CvMF(NIG)', 'TechTC'], ['sHDP'], ['sHDP']]
1
P19-1328table_2
Ablation study results of our approach. BERT (Keep/Mask) are the baselines that use BERT, unmasking/masking the target word, to propose candidates and rank them by the proposal scores. Remember that our approach is a linear combination of proposal score sp and validation score sv, as in Eq (3). In the baselines “w/o sp”, we alternatively use BERT (Keep), BERT (Mask) or WordNet to propose candidates.
2
[['LS07', 'our approach'], ['LS07', ' - w/o sp (Keep)'], ['LS07', ' - w/o sp (Mask)'], ['LS07', ' - w/o sp (WordNet)'], ['LS07', ' - w/o sv'], ['LS07', 'BERT (Keep)'], ['LS07', 'BERT (Mask)'], ['LS14', 'our approach'], ['LS14', ' - w/o sp (Keep)'], ['LS14', ' - w/o sp (Mask)'], ['LS14', ' - w/o sp (WordNet)'], ['LS14', ' - w/o sv'], ['LS14', 'BERT (Keep)'], ['LS14', 'BERT (Mask)']]
2
[['Method', 'best'], ['Method', 'best-m'], ['Method', 'oot'], ['Method', 'oot-m'], ['Method', 'P@1']]
[['20.3', '34.2', '55.4', '68.4', '51.1'], ['18.9', '32.6', '51.7', '63.5', '48.6'], ['16.2', '27.5', '46.4', '57.9', '43.3'], ['15.9', '27.1', '45.9', '57.1', '42.8'], ['12.1', '20.2', '40.8', '56.9', '13.1'], ['9.2', '16.3', '37.3', '52.2', '9.2'], ['8.6', '14.2', '33.2', '48.9', '5.7'], ['14.5', '33.9', '45.9', '69.9', '56.3'], ['13.7', '31.4', '41.3', '63.5', '53.1'], ['11.3', '26.7', '36.2', '59.1', '47.1'], ['11', '26.3', '35.9', '58.7', '46.3'], ['9.1', '19.7', '33.5', '56.9', '14.3'], ['8.3', '17.2', '31.1', '54.4', '11.2'], ['7.6', '15.4', '38.5', '51.3', '7.6']]
column
['best', 'best-m', 'oot', 'oot-m', 'P@1']
['our approach']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Method || best</th> <th>Method || best-m</th> <th>Method || oot</th> <th>Method || oot-m</th> <th>Method || P@1</th> </tr> </thead> <tbody> <tr> <td>LS07 || our approach</td> <td>20.3</td> <td>34.2</td> <td>55.4</td> <td>68.4</td> <td>51.1</td> </tr> <tr> <td>LS07 || - w/o sp (Keep)</td> <td>18.9</td> <td>32.6</td> <td>51.7</td> <td>63.5</td> <td>48.6</td> </tr> <tr> <td>LS07 || - w/o sp (Mask)</td> <td>16.2</td> <td>27.5</td> <td>46.4</td> <td>57.9</td> <td>43.3</td> </tr> <tr> <td>LS07 || - w/o sp (WordNet)</td> <td>15.9</td> <td>27.1</td> <td>45.9</td> <td>57.1</td> <td>42.8</td> </tr> <tr> <td>LS07 || - w/o sv</td> <td>12.1</td> <td>20.2</td> <td>40.8</td> <td>56.9</td> <td>13.1</td> </tr> <tr> <td>LS07 || BERT (Keep)</td> <td>9.2</td> <td>16.3</td> <td>37.3</td> <td>52.2</td> <td>9.2</td> </tr> <tr> <td>LS07 || BERT (Mask)</td> <td>8.6</td> <td>14.2</td> <td>33.2</td> <td>48.9</td> <td>5.7</td> </tr> <tr> <td>LS14 || our approach</td> <td>14.5</td> <td>33.9</td> <td>45.9</td> <td>69.9</td> <td>56.3</td> </tr> <tr> <td>LS14 || - w/o sp (Keep)</td> <td>13.7</td> <td>31.4</td> <td>41.3</td> <td>63.5</td> <td>53.1</td> </tr> <tr> <td>LS14 || - w/o sp (Mask)</td> <td>11.3</td> <td>26.7</td> <td>36.2</td> <td>59.1</td> <td>47.1</td> </tr> <tr> <td>LS14 || - w/o sp (WordNet)</td> <td>11</td> <td>26.3</td> <td>35.9</td> <td>58.7</td> <td>46.3</td> </tr> <tr> <td>LS14 || - w/o sv</td> <td>9.1</td> <td>19.7</td> <td>33.5</td> <td>56.9</td> <td>14.3</td> </tr> <tr> <td>LS14 || BERT (Keep)</td> <td>8.3</td> <td>17.2</td> <td>31.1</td> <td>54.4</td> <td>11.2</td> </tr> <tr> <td>LS14 || BERT (Mask)</td> <td>7.6</td> <td>15.4</td> <td>38.5</td> <td>51.3</td> <td>7.6</td> </tr> </tbody></table>
Table 2
table_2
P19-1328
4
acl2019
To understand the improvement, we conduct an ablation test and show the results in Table 2. According to Table 2, we observe that the original BERT cannot perform as well as the previous state-of-the-art approaches on its own. When we further add our candidate validation method in Section 2.2 to validate the candidates, its performance is significantly improved. Furthermore, it is clear that our substitute candidate proposal method is much better than WordNet for candidate proposal when we compare our approach to the -w/o sp (WordNet) baseline, where candidates are obtained by WordNet and validated by our validation approach.
[1, 1, 1, 1]
['To understand the improvement, we conduct an ablation test and show the results in Table 2.', 'According to Table 2, we observe that the original BERT cannot perform as well as the previous state-of-the-art approaches on its own.', 'When we further add our candidate validation method in Section 2.2 to validate the candidates, its performance is significantly improved.', 'Furthermore, it is clear that our substitute candidate proposal method is much better than WordNet for candidate proposal when we compare our approach to the -w/o sp (WordNet) baseline, where candidates are obtained by WordNet and validated by our validation approach.']
[None, ['BERT (Keep)', 'BERT (Mask)'], ['our approach'], ['our approach', ' - w/o sp (WordNet)']]
1
P19-1335table_5
Performance of the Full-Transformer (UWB) model evaluated on seen and unseen entities from the training and validation worlds.
2
[['Evaluation', 'Training worlds seen'], ['Evaluation', 'Training worlds unseen'], ['Evaluation', 'Validation worlds unseen']]
1
[['Accuracy']]
[['87.74'], ['82.96'], ['76']]
column
['Accuracy']
['Evaluation']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>Accuracy</th> </tr> </thead> <tbody> <tr> <td>Evaluation || Training worlds seen</td> <td>87.74</td> </tr> <tr> <td>Evaluation || Training worlds unseen</td> <td>82.96</td> </tr> <tr> <td>Evaluation || Validation worlds unseen</td> <td>76</td> </tr> </tbody></table>
Table 5
table_5
P19-1335
7
acl2019
To analyze the impact of unseen entities and domain shift in zero-shot entity linking, we evaluate performance on a more standard in-domain entity linking setting by making predictions on held-out mentions from the training worlds. Table 5 compares entity linking performance for different entity splits. Seen entities from the training worlds are unsurprisingly the easiest to link to. For unseen entities from the training worlds, we observe a 5-point drop in performance.
[2, 1, 1, 1]
['To analyze the impact of unseen entities and domain shift in zero-shot entity linking, we evaluate performance on a more standard in-domain entity linking setting by making predictions on held-out mentions from the training worlds.', 'Table 5 compares entity linking performance for different entity splits.', 'Seen entities from the training worlds are unsurprisingly the easiest to link to.', 'For unseen entities from the training worlds, we observe a 5-point drop in performance.']
[None, None, ['Training worlds seen'], ['Training worlds unseen']]
1
P19-1338table_1
Parsing performance with and without punctuation. Mean F indicates mean parsing F-score against the Stanford Parser (early stopping by F-score). Self-/RB-agreement indicates self-agreement and agreement with the right-branching baseline across multiple runs. † indicates a statistical difference from the corresponding PRPN baseline with p < 0.01, paired one-tailed bootstrap test.
2
[['Model', 'Left-Branching'], ['Model', 'Right-Branching'], ['Model', 'Balanced-Tree'], ['Model', 'ST-Gumbel'], ['Model', 'PRPN'], ['Model', 'Imitation (SbS only)'], ['Model', 'Imitation (SbS + refine)']]
2
[['w/o Punctuation', 'Mean F'], ['w/o Punctuation', 'Self-agreement'], ['w/o Punctuation', 'RB-agreement'], ['w/ Punctuation', 'Mean F'], ['w/ Punctuation', 'Self-agreement'], ['w/ Punctuation', 'RB-agreement']]
[['20.7', '-', '-', '18.9', '-', '-'], ['58.5', '-', '-', '18.5', '-', '-'], ['39.5', '-', '-', '22', '-', '-'], ['36.4', '57', '33.8', '21.9', '56.8', '38.1'], ['46', '48.9', '51.2', '51.6', '65', '27.4'], ['45.9', '49.5', '62.2', '52', '70.8', '20.6'], ['53.3', '58.2', '64.9', '53.7', '67.4', '21.1']]
column
['Mean F', 'Self-agreement', 'RB-agreement', 'Mean F', 'Self-agreement', 'RB-agreement']
['Imitation (SbS only)', 'Imitation (SbS + refine)']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>w/o Punctuation || Mean F</th> <th>w/o Punctuation || Self-agreement</th> <th>w/o Punctuation || RB-agreement</th> <th>w/ Punctuation || Mean F</th> <th>w/ Punctuation || Self-agreement</th> <th>w/ Punctuation || RB-agreement</th> </tr> </thead> <tbody> <tr> <td>Model || Left-Branching</td> <td>20.7</td> <td>-</td> <td>-</td> <td>18.9</td> <td>-</td> <td>-</td> </tr> <tr> <td>Model || Right-Branching</td> <td>58.5</td> <td>-</td> <td>-</td> <td>18.5</td> <td>-</td> <td>-</td> </tr> <tr> <td>Model || Balanced-Tree</td> <td>39.5</td> <td>-</td> <td>-</td> <td>22</td> <td>-</td> <td>-</td> </tr> <tr> <td>Model || ST-Gumbel</td> <td>36.4</td> <td>57</td> <td>33.8</td> <td>21.9</td> <td>56.8</td> <td>38.1</td> </tr> <tr> <td>Model || PRPN</td> <td>46</td> <td>48.9</td> <td>51.2</td> <td>51.6</td> <td>65</td> <td>27.4</td> </tr> <tr> <td>Model || Imitation (SbS only)</td> <td>45.9</td> <td>49.5</td> <td>62.2</td> <td>52</td> <td>70.8</td> <td>20.6</td> </tr> <tr> <td>Model || Imitation (SbS + refine)</td> <td>53.3</td> <td>58.2</td> <td>64.9</td> <td>53.7</td> <td>67.4</td> <td>21.1</td> </tr> </tbody></table>
Table 1
table_1
P19-1338
5
acl2019
Table 1 shows the parsing F-scores against the Stanford Parser. The ST-Gumbel Tree-LSTM model and the PRPN were run five times with different initializations, each known as a trajectory. For imitation learning, given a PRPN trajectory, we perform SbS training once and then policy refinement for five runs. Left-/right-branching and balanced trees are also included as baselines. We break down the performance of latent tree induction across constituent types in the setting where punctuation is kept. We see that, among the six most common ones, our imitation approach outperforms the PRPN on four types. However, we also notice that for the most frequent type (NP), our approach is worse than the PRPN. This shows that the strengths of the two approaches complement each other, and in future work ensemble methods could be employed to combine them.
[1, 2, 2, 2, 2, 1, 2, 2]
['Table 1 shows the parsing F-scores against the Stanford Parser.', 'The ST-Gumbel Tree-LSTM model and the PRPN were run five times with different initializations, each known as a trajectory.', 'For imitation learning, given a PRPN trajectory, we perform SbS training once and then policy refinement for five runs.', 'Left-/right-branching and balanced trees are also included as baselines.', 'We break down the performance of latent tree induction across constituent types in the setting where punctuation is kept.', 'We see that, among the six most common ones, our imitation approach outperforms the PRPN on four types.', 'However, we also notice that for the most frequent type (NP), our approach is worse than the PRPN.', 'This shows that the strengths of the two approaches complement each other, and in future work ensemble methods could be employed to combine them.']
[None, ['ST-Gumbel', 'PRPN'], ['Imitation (SbS only)'], ['Left-Branching', 'Right-Branching', 'Balanced-Tree'], None, ['Imitation (SbS only)', 'Imitation (SbS + refine)', 'PRPN'], ['Imitation (SbS only)', 'Imitation (SbS + refine)', 'PRPN'], ['Imitation (SbS only)', 'Imitation (SbS + refine)', 'PRPN']]
1
P19-1341table_6
Correlation (τ) of generic DA lexicons with gold standard lexicons. ORTH results are from Rothe et al. (2016). The other columns use REG (§3.4). Training words for lexicon induction are from Rothe et al. (2016) (GEN) and from PBC+ ZS lexicons.
1
[['CZ web'], ['DE web'], ['ES web'], ['FR web'], ['EN Tw.'], ['EN Ne.'], ['JA Wiki']]
2
[['ORTH', 'GEN'], ['REG', 'GEN'], ['REG', 'PBC+/T'], ['REG', 'PBC+/NT']]
[['0.58', '0.576', '0.529', '0.524'], ['0.654', '0.654', '0.634', '0.634'], ['0.563', '0.568', '0.524', '0.514'], ['0.544', '0.54', '0.514', '0.474'], ['0.654', '0.629', '0.583', '0.583'], ['0.622', '0.582', '0.562', '0.557'], ['-', '0.628', '0.571', '0.558']]
column
['correlation', 'correlation', 'correlation', 'correlation']
['REG']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>ORTH || GEN</th> <th>REG || GEN</th> <th>REG || PBC+/T</th> <th>REG || PBC+/NT</th> </tr> </thead> <tbody> <tr> <td>CZ web</td> <td>0.58</td> <td>0.576</td> <td>0.529</td> <td>0.524</td> </tr> <tr> <td>DE web</td> <td>0.654</td> <td>0.654</td> <td>0.634</td> <td>0.634</td> </tr> <tr> <td>ES web</td> <td>0.563</td> <td>0.568</td> <td>0.524</td> <td>0.514</td> </tr> <tr> <td>FR web</td> <td>0.544</td> <td>0.54</td> <td>0.514</td> <td>0.474</td> </tr> <tr> <td>EN Tw.</td> <td>0.654</td> <td>0.629</td> <td>0.583</td> <td>0.583</td> </tr> <tr> <td>EN Ne.</td> <td>0.622</td> <td>0.582</td> <td>0.562</td> <td>0.557</td> </tr> <tr> <td>JA Wiki</td> <td>-</td> <td>0.628</td> <td>0.571</td> <td>0.558</td> </tr> </tbody></table>
Table 6
table_6
P19-1341
7
acl2019
Columns (i) and (ii) of Table 6 show that REG (§3.4) delivers results comparable to Densifier (ORTH) when using the same set of generic training words (GEN) in lexicon induction. However, our method is more efficient, as there is no need to compute the expensive SVD after every batch update.
[1, 1]
['Columns (i) and (ii) of Table 6 show that REG (§3.4) delivers results comparable to Densifier (ORTH) when using the same set of generic training words (GEN) in lexicon induction.', 'However, our method is more efficient, as there is no need to compute the expensive SVD after every batch update.']
[['GEN', 'ORTH', 'REG'], ['REG', 'GEN']]
1
P19-1342table_6
Sentence-level phrase accuracy (SPAcc) and phrase error deviation (PEDev) comparison on SST-5 between bi-tree-LSTM and TCM.
2
[['Metrics', 'SPAcc alpha=1'], ['Metrics', 'SPAcc alpha=0.9'], ['Metrics', 'SPAcc alpha=0.8'], ['Metrics', 'PEDev-mean'], ['Metrics', 'PEDev-median']]
1
[['BTL'], ['TCM'], ['Diff.']]
[['3.2', '3.7', '0.5'], ['20', '21.2', '1.2'], ['70.7', '71.4', '0.7'], ['36.4', '35.7', '-0.7'], ['37.6', '37', '-0.6']]
row
['SPAcc alpha=1', 'SPAcc alpha=0.9', 'SPAcc alpha=0.8', 'PEDev-mean', 'PEDev-median']
['TCM']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>BTL</th> <th>TCM</th> <th>Diff.</th> </tr> </thead> <tbody> <tr> <td>Metrics || SPAcc alpha=1</td> <td>3.2</td> <td>3.7</td> <td>0.5</td> </tr> <tr> <td>Metrics || SPAcc alpha=0.9</td> <td>20</td> <td>21.2</td> <td>1.2</td> </tr> <tr> <td>Metrics || SPAcc alpha=0.8</td> <td>70.7</td> <td>71.4</td> <td>0.7</td> </tr> <tr> <td>Metrics || PEDev-mean</td> <td>36.4</td> <td>35.7</td> <td>-0.7</td> </tr> <tr> <td>Metrics || PEDev-median</td> <td>37.6</td> <td>37</td> <td>-0.6</td> </tr> </tbody></table>
Table 6
table_6
P19-1342
8
acl2019
Table 6 shows the sentence-level phrase accuracy (SPAcc) and phrase error deviation (PEDev) comparison on SST-5 between bi-tree-LSTM and TCM. TCM outperforms bi-tree-LSTM on all the metrics, which demonstrates that TCM gives more consistent predictions of sentiments over different phrases in a tree, compared to top-down communication. This shows the benefit of rich node communication.
[1, 1, 1]
['Table 6 shows the sentence-level phrase accuracy (SPAcc) and phrase error deviation (PEDev) comparison on SST-5 between bi-tree-LSTM and TCM.', 'TCM outperforms bi-tree-LSTM on all the metrics, which demonstrates that TCM gives more consistent predictions of sentiments over different phrases in a tree, compared to top-down communication.', 'This shows the benefit of rich node communication.']
[['BTL', 'TCM'], ['BTL', 'TCM'], None]
1
P19-1346table_3
Comparison of oracles, baselines, retrieval, extractive, and abstractive models on the full proposed answers.
2
[['Model', 'Support Document'], ['Model', 'Nearest Neighbor'], ['Model', 'Extractive (TFIDF)'], ['Model', 'Extractive (BidAF)'], ['Model', 'Oracle support doc'], ['Model', 'Oracle web sources'], ['Model', 'LM Q + A'], ['Model', 'LM Q + D + A'], ['Model', 'Seq2Seq Q to A'], ['Model', 'Seq2Seq Q + D to A'], ['Model', 'Seq2Seq Multi-task']]
1
[['PPL'], ['ROUGE-1'], ['ROUGE-2'], ['ROUGE-L']]
[['-', '16.8', '2.3', '10.2'], ['-', '16.7', '2.3', '12.5'], ['-', '20.6', '2.9', '17'], ['-', '23.5', '3.1', '17.5'], ['-', '27.4', '2.8', '19.9'], ['-', '54.8', '8.6', '40.3'], ['42.2', '27.8', '4.7', '23.1'], ['33.9', '26.4', '4', '20.5'], ['52.9', '28.3', '5.1', '22.7'], ['55.1', '28.3', '5.1', '22.8'], ['32.7', '28.9', '5.4', '23.1']]
column
['PPL', 'ROUGE-1', 'ROUGE-2', 'ROUGE-L']
['Seq2Seq Q to A', 'Seq2Seq Q + D to A']
<table border='1' class='dataframe'> <thead> <tr style='text-align: right;'> <th></th> <th>PPL</th> <th>ROUGE-1</th> <th>ROUGE-2</th> <th>ROUGE-L</th> </tr> </thead> <tbody> <tr> <td>Model || Support Document</td> <td>-</td> <td>16.8</td> <td>2.3</td> <td>10.2</td> </tr> <tr> <td>Model || Nearest Neighbor</td> <td>-</td> <td>16.7</td> <td>2.3</td> <td>12.5</td> </tr> <tr> <td>Model || Extractive (TFIDF)</td> <td>-</td> <td>20.6</td> <td>2.9</td> <td>17</td> </tr> <tr> <td>Model || Extractive (BidAF)</td> <td>-</td> <td>23.5</td> <td>3.1</td> <td>17.5</td> </tr> <tr> <td>Model || Oracle support doc</td> <td>-</td> <td>27.4</td> <td>2.8</td> <td>19.9</td> </tr> <tr> <td>Model || Oracle web sources</td> <td>-</td> <td>54.8</td> <td>8.6</td> <td>40.3</td> </tr> <tr> <td>Model || LM Q + A</td> <td>42.2</td> <td>27.8</td> <td>4.7</td> <td>23.1</td> </tr> <tr> <td>Model || LM Q + D + A</td> <td>33.9</td> <td>26.4</td> <td>4</td> <td>20.5</td> </tr> <tr> <td>Model || Seq2Seq Q to A</td> <td>52.9</td> <td>28.3</td> <td>5.1</td> <td>22.7</td> </tr> <tr> <td>Model || Seq2Seq Q + D to A</td> <td>55.1</td> <td>28.3</td> <td>5.1</td> <td>22.8</td> </tr> <tr> <td>Model || Seq2Seq Multi-task</td> <td>32.7</td> <td>28.9</td> <td>5.4</td> <td>23.1</td> </tr> </tbody></table>
Table 3
table_3
P19-1346
6
acl2019
6.1 Overview of Model Performance. Full answer ROUGE. Table 3 shows that the nearest neighbor baseline performs similarly to simply returning the support document, which indicates that memorizing answers from the training set is insufficient. For extractive models, the oracle provides an approximate upper bound of 27.4 ROUGE-1. The BidAF model is the strongest (23.5), better than using TFIDF between the question and the support document to select sentences. However, these approaches are limited by the support document, as an oracle computed on the full web sources achieves 54.8. Abstractive methods achieve higher ROUGE, likely because they can adapt to the domain shift between the web sources and the ELI5 subreddit. In general, Seq2Seq models perform better than language models, and the various Seq2Seq settings do not show large ROUGE differences. Figure 3 shows an example of generation for the language model and the best Seq2Seq and extractive settings (see Appendix F for additional random examples).
[2, 2, 1, 1, 1, 1, 1, 1, 0]
['6.1 Overview of Model Performance.', 'Full answer ROUGE.', 'Table 3 shows that the nearest neighbor baseline performs similarly to simply returning the support document, which indicates that memorizing answers from the training set is insufficient.', 'For extractive models, the oracle provides an approximate upper bound of 27.4 ROUGE-1.', 'The BidAF model is the strongest (23.5), better than using TFIDF between the question and the support document to select sentences.', 'However, these approaches are limited by the support document, as an oracle computed on the full web sources achieves 54.8.', 'Abstractive methods achieve higher ROUGE, likely because they can adapt to the domain shift between the web sources and the ELI5 subreddit.', 'In general, Seq2Seq models perform better than language models, and the various Seq2Seq settings do not show large ROUGE differences.', 'Figure 3 shows an example of generation for the language model and the best Seq2Seq and extractive settings (see Appendix F for additional random examples).']
[None, None, ['Nearest Neighbor'], ['Oracle support doc', 'ROUGE-1'], ['Extractive (BidAF)', 'Extractive (TFIDF)'], ['Oracle web sources'], None, ['Seq2Seq Q to A', 'Seq2Seq Q + D to A'], None]
1