Dataset columns (with string-length ranges): paper (stringlengths 0–839), paper_id (stringlengths 1–12), table_caption (stringlengths 3–2.35k), table_column_names (large_stringlengths 13–1.76k), table_content_values (large_stringlengths 2–11.9k), text (large_stringlengths 69–2.82k).
When Choosing Plausible Alternatives, Clever Hans can be Clever
1911.00225v1
Table 5: Results of fine-tuned models on Balanced COPA. Easy: instances with superficial cues, Hard: instances without superficial cues.
['Model', 'Training data', 'Overall', 'Easy', 'Hard']
[['BERT-large-FT', 'B-COPA', '74.5 (± 0.7)', '74.7 (± 0.4)', '[BOLD] 74.4 (± 0.9)'], ['BERT-large-FT', 'B-COPA (50%)', '74.3 (± 2.2)', '76.8 (± 1.9)', '72.8 (± 3.1)'], ['BERT-large-FT', 'COPA', '[BOLD] 76.5 (± 2.7)', '[BOLD] 83.9 (± 4.4)', '71.9 (± 2.5)'], ['RoBERTa-large-FT', 'B-COPA', '[BOLD] 89.0 (± 0.3)', '88.9 (± 2.1)', '[BOLD] 89.0 (± 0.8)'], ['RoBERTa-large-FT', 'B-COPA (50%)', '86.1 (± 2.2)', '87.4 (± 1.1)', '85.4 (± 2.9)'], ['RoBERTa-large-FT', 'COPA', '87.7 (± 0.9)', '[BOLD] 91.6 (± 1.1)', '85.3 (± 2.0)']]
The results are shown in Table 5. The smaller performance gap between Easy and Hard subsets indicates that training on B-COPA encourages BERT and RoBERTa to rely less on superficial cues. Moreover, training on B-COPA improves performance on the Hard subset, both when training with all 1000 instances in B-COPA and when matching the training size of the original COPA (500 instances, B-COPA 50%). Note that training on B-COPA 50% exposes the model to lexically less diverse training instances than the original COPA due to the high overlap between mirrored alternatives. [CONTINUE] These results show that once superficial cues [CONTINUE] are removed, the models are able to learn the task to a high degree.
When Choosing Plausible Alternatives, Clever Hans can be Clever
1911.00225v1
Table 1: Reported results on COPA. With the exception of Wang et al. (2019), BERT-large and RoBERTa-large yield substantial improvements over prior approaches. See §2 for model details. * indicates our replication experiments.
['Model', 'Accuracy']
[['BigramPMI Goodwin et al. (2012)', '63.4'], ['PMI Gordon et al. (2011)', '65.4'], ['PMI+Connectives Luo et al. (2016)', '70.2'], ['PMI+Con.+Phrase Sasaki et al. (2017)', '71.4'], ['BERT-large Wang et al. (2019)', '70.5'], ['BERT-large Sap et al. (2019)', '75.0'], ['BERT-large Li et al. (2019)', '75.4'], ['RoBERTa-large (finetuned)', '90.6'], ['BERT-large (finetuned)*', '76.5 ± 2.7'], ['RoBERTa-large (finetuned)*', '87.7 ± 0.9']]
Recent studies show that BERT and RoBERTa achieve considerable improvements on COPA (see Table 1).
When Choosing Plausible Alternatives, Clever Hans can be Clever
1911.00225v1
Table 2: Applicability (App.), Productivity (Prod.) and Coverage (Cov.) of the various words in the alternatives of the COPA dev set.
['Cue', 'App.', 'Prod.', 'Cov.']
[['in', '47', '55.3', '9.40'], ['was', '55', '61.8', '11.0'], ['to', '82', '40.2', '16.4'], ['the', '85', '38.8', '17.0'], ['a', '106', '57.5', '21.2']]
Table 2 shows the five tokens with highest coverage. For example, a is the token with the highest coverage and appears in either the correct or the wrong alternative in 21.2% of COPA training instances. Its productivity of 57.5% expresses that it appears in correct alternatives 7.5% more often than expected by random chance. This suggests that a model could rely on such unbalanced distributions of tokens to predict answers based only on the alternatives, without understanding the task.
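To make the cue statistics above concrete, here is a minimal Python sketch of how applicability, productivity, and coverage can be computed, assuming each instance is given as a pair of token lists (correct alternative, wrong alternative); the data layout and function name are hypothetical, not the paper's code.

from collections import Counter

def cue_statistics(instances):
    """instances: iterable of (correct_tokens, wrong_tokens) token lists (hypothetical layout)."""
    applicable = Counter()   # instances in which the token appears in exactly one alternative
    in_correct = Counter()   # ... and that alternative is the correct one
    for correct_tokens, wrong_tokens in instances:
        correct, wrong = set(correct_tokens), set(wrong_tokens)
        for token in correct ^ wrong:            # tokens appearing in exactly one alternative
            applicable[token] += 1
            if token in correct:
                in_correct[token] += 1
    n = len(instances)
    return {t: {"applicability": app,
                "productivity": in_correct[t] / app,   # e.g. ~0.575 for "a"
                "coverage": app / n}                   # e.g. ~0.212 for "a"
            for t, app in applicable.items()}

# Toy usage with a single instance.
print(cue_statistics([(["she", "went", "home"], ["he", "went", "away"])]))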
When Choosing Plausible Alternatives, Clever Hans can be Clever
1911.00225v1
Table 3: Results of human performance evaluation of the original COPA and Balanced COPA.
['Dataset', 'Accuracy', 'Fleiss’ kappa [ITALIC] κ']
[['Original COPA', '100.0', '0.973'], ['Balanced COPA', '97.0', '0.798']]
The human evaluation shows that our mirrored instances are comparable in difficulty to the original ones (see Table 3).
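Table 3 reports Fleiss' kappa for inter-annotator agreement. A standard computation of this statistic is sketched below with numpy, assuming a per-item matrix of category counts (the input layout is an assumption, not taken from the paper).

import numpy as np

def fleiss_kappa(counts):
    """counts: (n_items, n_categories) array; counts[i, j] = number of annotators
    who assigned item i to category j. Every item must have the same number of ratings."""
    counts = np.asarray(counts, dtype=float)
    n_items, _ = counts.shape
    n_raters = counts[0].sum()
    p_j = counts.sum(axis=0) / (n_items * n_raters)          # category proportions
    P_i = (np.square(counts).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    P_bar, P_e = P_i.mean(), np.square(p_j).sum()
    return (P_bar - P_e) / (1 - P_e)

# Toy usage: 4 items, 5 raters, 2 categories (e.g. which of the two alternatives is more plausible).
print(fleiss_kappa([[5, 0], [0, 5], [4, 1], [1, 4]]))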
When Choosing Plausible Alternatives, Clever Hans can be Clever
1911.00225v1
Table 4: Model performance on the COPA test set (Overall), on Easy instances with superficial cues, and on Hard instances without superficial cues. p-values according to Approximate Randomization Tests Noreen (1989), with ∗ indicating a significant difference between performance on Easy and Hard p<5%. Methods are pointwise mutual information (PMI), word frequency provided by the wordfreq package Speer et al. (2018), pretrained language model (LM), and next-sentence prediction (NSP).
['Model', 'Method', 'Training Data', 'Overall', 'Easy', 'Hard', 'p-value (%)']
[['goodwin-etal-2012-utdhlt', 'PMI', 'unsupervised', '61.8', '64.7', '60.0', '19.8'], ['gordon_commonsense_2011-1', 'PMI', 'unsupervised', '65.4', '65.8', '65.2', '83.5'], ['sasaki-etal-2017-handling', 'PMI', 'unsupervised', '71.4', '75.3', '69.0', '4.8∗'], ['Word frequency', 'wordfreq', 'COPA', '53.5', '57.4', '51.3', '9.8'], ['BERT-large-FT', 'LM, NSP', 'COPA', '76.5 (± 2.7)', '83.9 (± 4.4)', '71.9 (± 2.5)', '0.0∗'], ['RoBERTa-large-FT', 'LM', 'COPA', '87.7 (± 0.9)', '91.6 (± 1.1)', '85.3 (± 2.0)', '0.0∗']]
We then compare BERT and RoBERTa with previous models on the Easy and Hard subsets. As Table 4 shows, previous models perform similarly on both subsets, with the exception of Sasaki et al. (2017). Overall, both BERT (76.5%) and [CONTINUE] RoBERTa (87.7%) considerably outperform the best previous model (71.4%). However, BERT's improvements over previous work can be almost entirely attributed to high accuracy on the Easy subset: on this subset, finetuned BERT-large improves by 8.6 percentage points over the model by Sasaki et al. (2017) (83.9% vs. 75.3%), but on the Hard subset, the improvement is only 2.9 percentage points (71.9% vs. 69.0%). This indicates that BERT relies on superficial cues. The difference between accuracy on Easy and Hard is less pronounced for RoBERTa, but still suggests some reliance on superficial cues.
When Choosing Plausible Alternatives, Clever Hans can be Clever
1911.00225v1
Table 6: Results of non-fine-tuned models on Balanced COPA. Easy: instances with superficial cues, Hard: instances without superficial cues.
['Model', 'Training data', 'Overall', 'Easy', 'Hard']
[['BERT-large', 'B-COPA', '70.5 (± 2.5)', '72.6 (± 2.3)', '[BOLD] 69.1 (± 2.7)'], ['BERT-large', 'B-COPA (50%)', '69.9 (± 1.9)', '71.2 (± 1.3)', '69.0 (± 3.5)'], ['BERT-large', 'COPA', '[BOLD] 71.7 (± 0.5)', '[BOLD] 80.5 (± 0.4)', '66.3 (± 0.8)'], ['RoBERTa-large', 'B-COPA', '[BOLD] 76.7 (± 0.8)', '73.3 (± 1.5)', '[BOLD] 78.8 (± 2.0)'], ['RoBERTa-large', 'B-COPA (50%)', '72.4 (± 2.0)', '72.1 (± 1.7)', '72.6 (± 2.1)'], ['RoBERTa-large', 'COPA', '76.4 (± 0.7)', '[BOLD] 79.6 (± 1.0)', '74.4 (± 1.1)'], ['BERT-base-NSP', 'None', '[BOLD] 66.4', '66.2', '[BOLD] 66.7'], ['BERT-large-NSP', 'None', '65.0', '[BOLD] 66.9', '62.1']]
The relatively high accuracies of BERT-large, RoBERTa-large and BERT-*-NSP show that these pretrained models are already well-equipped to perform this task "out-of-the-box".
When Choosing Plausible Alternatives, Clever Hans can be Clever
1911.00225v1
Table 7: Sensitivity of BERT-large to superficial cues identified in §2 (unit: 10−2). Cues with top-5 reduction are shown. SCOPA and SB_COPA indicate the mean contributions of BERT-large trained on COPA and BERT-large trained on B-COPA, respectively.
['Cue', '[ITALIC] SCOPA', '[ITALIC] SB_COPA', 'Diff.', 'Prod.']
[['woman', '7.98', '4.84', '-3.14', '0.25'], ['mother', '5.16', '3.95', '-1.21', '0.75'], ['went', '6.00', '5.15', '-0.85', '0.73'], ['down', '5.52', '4.93', '-0.58', '0.71'], ['into', '4.07', '3.51', '-0.56', '0.40']]
We observe that BERT trained on Balanced COPA is less sensitive to a few highly productive superficial cues than BERT trained on the original COPA. Note the decrease in sensitivity for cues with productivity between 0.7 and 0.9; these cues are shown in Table 7. However, for cues with lower productivity the picture is less clear, and in the case of RoBERTa there are no noticeable trends in the change of sensitivity.
Using Structured Representation and Data: A Hybrid Model for Negation and Sentiment in Customer Service Conversations
1906.04706v1
Table 8: Sentiment classification evaluation, using different classifiers on the test set.
['Classifier', 'Positive Sentiment Precision', 'Positive Sentiment Recall', 'Positive Sentiment Fscore']
[['SVM-w/o neg.', '0.57', '0.72', '0.64'], ['SVM-Punct. neg.', '0.58', '0.70', '0.63'], ['SVM-our-neg.', '0.58', '0.73', '0.65'], ['CNN', '0.63', '0.83', '0.72'], ['CNN-LSTM', '0.71', '0.72', '0.72'], ['CNN-LSTM-Our-neg-Ant', '[BOLD] 0.78', '[BOLD] 0.77', '[BOLD] 0.78'], ['[EMPTY]', 'Negative Sentiment', 'Negative Sentiment', 'Negative Sentiment'], ['[EMPTY]', 'Precision', 'Recall', 'Fscore'], ['SVM-w/o neg.', '0.78', '0.86', '0.82'], ['SVM-Punct. neg.', '0.78', '0.87', '0.83'], ['SVM-Our neg.', '0.80', '0.87', '0.83'], ['CNN', '0.88', '0.72', '0.79'], ['CNN-LSTM.', '0.83', '0.83', '0.83'], ['CNN-LSTM-our-neg-Ant', '[BOLD] 0.87', '[BOLD] 0.87', '[BOLD] 0.87'], ['[EMPTY]', 'Train', '[EMPTY]', 'Test'], ['Positive tweets', '5121', '[EMPTY]', '1320'], ['Negative tweets', '9094', '[EMPTY]', '2244']]
show that the antonym-based learned representations are more useful for the sentiment task than prefixing with NOT_. The proposed CNN-LSTM-Our-neg-Ant improves upon the simple CNN-LSTM-w/o neg. baseline, with F1 scores improving from 0.72 to 0.78 for positive sentiment and from 0.83 to 0.87 for negative sentiment.
Using Structured Representation and Data: A Hybrid Model for Negation and Sentiment in Customer Service Conversations
1906.04706v1
Table 7: Negation classifier performance for scope detection with gold cues and scope.
['[EMPTY]', '[BOLD] Punctuation', '[BOLD] BiLSTM', '[BOLD] Proposed']
[['In-scope (F)', '0.66', '0.88', '0.85'], ['Out-scope (F)', '0.87', '0.97', '0.97'], ['PCS', '0.52', '0.72', '0.72']]
The results in Table 7 show that the method is comparable to the state-of-the-art BiLSTM model from Fancellu et al. (2016) on gold negation cues for scope prediction. [CONTINUE] report the F-score for both in-scope and out-of-scope tokens.
Using Structured Representation and Data: A Hybrid Model for Negation and Sentiment in Customer Service Conversations
1906.04706v1
Table 3: Cue and token distribution in the conversational negation corpus.
['Total negation cues', '2921']
[['True negation cues', '2674'], ['False negation cues', '247'], ['Average scope length', '2.9'], ['Average sentence length', '13.6'], ['Average tweet length', '22.3']]
Corpus statistics are shown in Table 3. The average number of tokens per tweet is 22.3, per sentence is 13.6 and average scope length is 2.9.
Using Structured Representation and Data: A Hybrid Model for Negation and Sentiment in Customer Service Conversations
1906.04706v1
Table 4: Cue classification on the test set.
['[EMPTY]', '[BOLD] F-Score [BOLD] Baseline', '[BOLD] F-Score [BOLD] Proposed', '[BOLD] Support']
[['False cues', '0.61', '0.68', '47'], ['Actual cues', '0.97', '0.98', '557']]
The proposed method improves the F-score for false negation cues from a 0.61 baseline to 0.68 on a test set containing 47 false and 557 actual negation cues.
Task-Oriented Dialog Systems that Consider Multiple Appropriate Responses under the Same Context
1911.10484v2
Table 5: Human evaluation results. Models with data augmentation are noted as (+). App denotes the average appropriateness score.
['Model', 'Diversity', 'App', 'Good%', 'OK%', 'Invalid%']
[['DAMD', '3.12', '2.50', '56.5%', '[BOLD] 37.4%', '6.1%'], ['DAMD (+)', '[BOLD] 3.65', '[BOLD] 2.53', '[BOLD] 63.0%', '27.1%', '9.9%'], ['HDSA (+)', '2.14', '2.47', '57.5%', '32.5%', '[BOLD] 10.0%']]
The results are shown in Table 5. [CONTINUE] We report the average value of diversity and appropriateness, and the percentage of responses scored for each appropriateness level. [CONTINUE] With data augmentation, our model obtains a significant improvement in diversity score and achieves the best average appropriateness score as well. [CONTINUE] However, the slightly increased invalid response percentage [CONTINUE] We also observe our DAMD model outperforms HDSA in both diversity and appropriateness scores.
Task-Oriented Dialog Systems that Consider Multiple Appropriate Responses under the Same Context
1911.10484v2
Table 1: Multi-action evaluation results. The “w” and “w/o” column denote with and without data augmentation respectively, and the better score between them is in bold. We report the average performance over 5 runs.
['Model & Decoding Scheme', 'Act # w/o', 'Act # w/', 'Slot # w/o', 'Slot # w/']
[['Single-Action Baselines', 'Single-Action Baselines', 'Single-Action Baselines', 'Single-Action Baselines', 'Single-Action Baselines'], ['DAMD + greedy', '[BOLD] 1.00', '[BOLD] 1.00', '1.95', '[BOLD] 2.51'], ['HDSA + fixed threshold', '[BOLD] 1.00', '[BOLD] 1.00', '2.07', '[BOLD] 2.40'], ['5-Action Generation', '5-Action Generation', '5-Action Generation', '5-Action Generation', '5-Action Generation'], ['DAMD + beam search', '2.67', '[BOLD] 2.87', '3.36', '[BOLD] 4.39'], ['DAMD + diverse beam search', '2.68', '[BOLD] 2.88', '3.41', '[BOLD] 4.50'], ['DAMD + top-k sampling', '3.08', '[BOLD] 3.43', '3.61', '[BOLD] 4.91'], ['DAMD + top-p sampling', '3.08', '[BOLD] 3.40', '3.79', '[BOLD] 5.20'], ['HDSA + sampled threshold', '1.32', '[BOLD] 1.50', '3.08', '[BOLD] 3.31'], ['10-Action Generation', '10-Action Generation', '10-Action Generation', '10-Action Generation', '10-Action Generation'], ['DAMD + beam search', '3.06', '[BOLD] 3.39', '4.06', '[BOLD] 5.29'], ['DAMD + diverse beam search', '3.05', '[BOLD] 3.39', '4.05', '[BOLD] 5.31'], ['DAMD + top-k sampling', '3.59', '[BOLD] 4.12', '4.21', '[BOLD] 5.77'], ['DAMD + top-p sampling', '3.53', '[BOLD] 4.02', '4.41', '[BOLD] 6.17'], ['HDSA + sampled threshold', '1.54', '[BOLD] 1.83', '3.42', '[BOLD] 3.92']]
The results are shown in Table 1. [CONTINUE] After applying our data augmentation, both the action and slot diversity are improved consistently, [CONTINUE] HDSA has worse performance and benefits less from data augmentation compared to our proposed domain-aware multi-decoder network,
Task-Oriented Dialog Systems that Consider Multiple Appropriate Responses under the Same Context
1911.10484v2
Table 2: Comparison of response generation results on MultiWOZ. The oracle/generated denotes either using ground truth or generated results. The results are grouped according to whether and how system action is modeled.
['Model', 'Belief State Type', 'System Action Type', 'System Action Form', 'Inform (%)', 'Success (%)', 'BLEU', 'Combined Score']
[['1. Seq2Seq + Attention ', 'oracle', '-', '-', '71.3', '61.0', '[BOLD] 18.9', '85.1'], ['2. Seq2Seq + Copy', 'oracle', '-', '-', '86.2', '[BOLD] 72.0', '15.7', '94.8'], ['3. MD-Sequicity', 'oracle', '-', '-', '[BOLD] 86.6', '71.6', '16.8', '[BOLD] 95.9'], ['4. SFN + RL (Mehri et al. mehri2019structured)', 'oracle', 'generated', 'one-hot', '82.7', '72.1', '16.3', '93.7'], ['5. HDSA ', 'oracle', 'generated', 'graph', '82.9', '68.9', '[BOLD] 23.6', '99.5'], ['6. DAMD', 'oracle', 'generated', 'span', '[BOLD] 89.5', '75.8', '18.3', '100.9'], ['7. DAMD + multi-action data augmentation', 'oracle', 'generated', 'span', '89.2', '[BOLD] 77.9', '18.6', '[BOLD] 102.2'], ['8. SFN + RL (Mehri et al. mehri2019structured)', 'oracle', 'oracle', 'one-hot', '-', '-', '29.0', '106.0'], ['9. HDSA ', 'oracle', 'oracle', 'graph', '87.9', '78.0', '[BOLD] 30.4', '113.4'], ['10. DAMD + multi-action data augmentation', 'oracle', 'oracle', 'span', '[BOLD] 95.4', '[BOLD] 87.2', '27.3', '[BOLD] 118.5'], ['11. SFN + RL (Mehri et al. mehri2019structured)', 'generated', 'generated', 'one-hot', '73.8', '58.6', '[BOLD] 16.9', '83.0'], ['12. DAMD + multi-action data augmentation', 'generated', 'generated', 'span', '[BOLD] 76.3', '[BOLD] 60.4', '16.6', '[BOLD] 85.0']]
Results are shown in Table 2. [CONTINUE] The first group shows that after applying our domain-adaptive delexicalization and domain-aware belief span modeling, the task completion ability of seq2seq models becomes better. [CONTINUE] The relatively lower BLEU score [CONTINUE] Our DAMD model significantly outperforms other models with different system action forms in terms of inform and success rates, [CONTINUE] While we find applying our data augmentation achieves a limited improvement on combined score (6 vs 7), [CONTINUE] Moreover, if a model has access to the ground truth system action, the model further improves its task performance.
Improving Generalization by Incorporating Coverage in Natural Language Inference
1909.08940v1
Table 3: Impact of using coverage for improving generalization across the datasets of similar tasks. Both models are trained on the SQuAD training data.
['[EMPTY]', 'in-domain SQuAD EM', 'in-domain SQuAD F1', 'out-of-domain QA-SRL EM', 'out-of-domain QA-SRL F1']
[['MQAN', '31.76', '75.37', '<bold>10.99</bold>', '50.10'], ['+coverage', '<bold>32.67</bold>', '<bold>76.83</bold>', '10.63', '<bold>50.89</bold>'], ['BIDAF (ELMO)', '70.43', '79.76', '28.35', '49.98'], ['+coverage', '<bold>71.07</bold>', '<bold>80.15</bold>', '<bold>30.58</bold>', '<bold>52.43</bold>']]
Table 3 shows the impact of coverage for improving generalization across these two datasets that belong to the two similar tasks of reading comprehension and QA-SRL. [CONTINUE] The models are evaluated using Exact Match (EM) and F1 measures. [CONTINUE] As the results show, incorporating coverage improves the model's performance in the in-domain evaluation as well as the out-of-domain evaluation on QA-SRL.
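For reference, the two evaluation measures mentioned above can be sketched as follows; this is a generic Exact Match and token-level F1 implementation under simple lowercasing, not the papers' exact normalization.

from collections import Counter

def exact_match(prediction, gold):
    return float(prediction.strip().lower() == gold.strip().lower())

def token_f1(prediction, gold):
    pred_tokens, gold_tokens = prediction.lower().split(), gold.lower().split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

# Toy usage.
print(exact_match("the cat", "the cat"), token_f1("the black cat", "the cat"))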
Improving Generalization by Incorporating Coverage in Natural Language Inference
1909.08940v1
Table 2: Impact of using coverage for improving generalization across different datasets of the same task (NLI). All models are trained on MultiNLI.
['[EMPTY]', 'in-domain MultiNLI', 'out-of-domain SNLI', 'out-of-domain Glockner', 'out-of-domain SICK']
[['MQAN', '72.30', '60.91', '41.82', '53.95'], ['+ coverage', '<bold>73.84</bold>', '<bold>65.38</bold>', '<bold>78.69</bold>', '<bold>54.55</bold>'], ['ESIM (ELMO)', '80.04', '68.70', '60.21', '51.37'], ['+ coverage', '<bold>80.38</bold>', '<bold>70.05</bold>', '<bold>67.47</bold>', '<bold>52.65</bold>']]
Table 2 shows the performance for both systems for in-domain (the MultiNLI development set) as well as out-of-domain evaluations on SNLI, Glockner, and SICK datasets. [CONTINUE] The results show that coverage information considerably improves the generalization of both examined models across various NLI datasets. The resulting cross-dataset improvements on the SNLI and Glockner datasets are larger than those on the SICK dataset.
Guided Dialog Policy Learning: Reward Estimation for Multi-Domain Task-Oriented Dialog
1908.10719v1
Table 4: KL-divergence between different dialog policies and the human dialogs, KL(π_turns || p_turns), where π_turns denotes the discrete distribution over the number of dialog turns of simulated sessions between the policy π and the agenda-based user simulator, and p_turns the corresponding distribution for the real human-human dialogs.
['GP-MBCM', 'ACER', 'PPO', 'ALDM', 'GDPL']
[['1.666', '0.775', '0.639', '1.069', '[BOLD] 0.238']]
Table 4 shows that GDPL has the smallest KL-divergence to the human on the number of dialog turns over the baselines, which implies that GDPL behaves more like the human.
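A minimal sketch of the turn-count KL-divergence defined in the Table 4 caption, assuming the raw per-session turn counts for the policy and for human-human dialogs are available as lists; the inputs, binning, and smoothing are assumptions, not the paper's code.

import numpy as np

def turn_kl(policy_turns, human_turns, max_turns=40, eps=1e-6):
    """KL(pi_turns || p_turns) between two empirical distributions over dialog-turn counts."""
    bins = np.arange(1, max_turns + 2)
    pi, _ = np.histogram(policy_turns, bins=bins)
    p, _ = np.histogram(human_turns, bins=bins)
    pi = (pi + eps) / (pi + eps).sum()       # smooth and normalise
    p = (p + eps) / (p + eps).sum()
    return float(np.sum(pi * np.log(pi / p)))

# Toy usage with made-up turn counts.
print(turn_kl([6, 8, 8, 10, 12], [6, 7, 8, 8, 9]))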
Guided Dialog Policy Learning: Reward Estimation for Multi-Domain Task-Oriented Dialog
1908.10719v1
Table 3: Performance of different dialog agents on the multi-domain dialog corpus by interacting with the agenda-based user simulator. All the results except “dialog turns” are shown in percentage terms. Real human-human performance computed from the test set (i.e. the last row) serves as the upper bounds.
['Method', 'Agenda Turns', 'Agenda Inform', 'Agenda Match', 'Agenda Success']
[['GP-MBCM', '2.99', '19.04', '44.29', '28.9'], ['ACER', '10.49', '77.98', '62.83', '50.8'], ['PPO', '9.83', '83.34', '69.09', '59.1'], ['ALDM', '12.47', '81.20', '62.60', '61.2'], ['GDPL-sess', '[BOLD] 7.49', '88.39', '77.56', '76.4'], ['GDPL-discr', '7.86', '93.21', '80.43', '80.5'], ['GDPL', '7.64', '[BOLD] 94.97', '[BOLD] 83.90', '[BOLD] 86.5'], ['[ITALIC] Human', '[ITALIC] 7.37', '[ITALIC] 66.89', '[ITALIC] 95.29', '[ITALIC] 75.0']]
The performance of each approach that interacts with the agenda-based user simulator is shown in [CONTINUE] Table 3. GDPL achieves extremely high performance in task success on account of the substantial improvement in inform F1 and match rate over the baselines. [CONTINUE] Surprisingly, GDPL even outperforms humans in completing the task, and its average dialog turns are close to those of humans, though GDPL is inferior in terms of match rate. [CONTINUE] ACER and PPO obtain high performance in inform F1 and match rate as well. [CONTINUE] Though ALDM obtains a lower inform F1 and match rate than PPO, it gets a slight improvement [CONTINUE] on task success [CONTINUE] The ablation test is also presented in Table 3. [CONTINUE] It is apparent that GDPL performs better than GDPL-sess on task success and is comparable regarding the dialog turns, [CONTINUE] GDPL also outperforms GDPL-discr
Guided Dialog Policy Learning: Reward Estimation for Multi-Domain Task-Oriented Dialog
1908.10719v1
Table 5: Performance of different agents on the neural user simulator.
['Method', 'VHUS Turns', 'VHUS Inform', 'VHUS Match', 'VHUS Success']
[['ACER', '22.35', '55.13', '33.08', '18.6'], ['PPO', '[BOLD] 19.23', '[BOLD] 56.31', '33.08', '18.3'], ['ALDM', '26.90', '54.37', '24.15', '16.4'], ['GDPL', '22.43', '52.58', '[BOLD] 36.21', '[BOLD] 19.7']]
The performance of the agents that interact with VHUS is presented in Table 5. [CONTINUE] All the methods cause a significant drop in performance when interacting with VHUS. ALDM even gets worse performance than ACER and PPO. In comparison, GDPL is still comparable with ACER and PPO, obtains a better match rate, and even achieves higher task success.
Guided Dialog Policy Learning: Reward Estimation for Multi-Domain Task-Oriented Dialog
1908.10719v1
Table 6: The count of human preference on dialog session pairs that GDPL wins (W), draws with (D) or loses to (L) other methods based on different criteria. One method wins the other if the majority prefer the former one.
['VS.', 'Efficiency W', 'Efficiency D', 'Efficiency L', 'Quality W', 'Quality D', 'Quality L', 'Success W', 'Success D', 'Success L']
[['ACER', '55', '25', '20', '44', '32', '24', '52', '30', '18'], ['PPO', '74', '13', '13', '56', '26', '18', '59', '31', '10'], ['ALDM', '69', '19', '12', '49', '25', '26', '61', '24', '15']]
Table 6 presents the results of human evaluation. GDPL outperforms three baselines significantly in all aspects (sign test, p-value < 0.01) except for the quality compared with ACER. Among all the baselines, GDPL obtains the most preference against PPO.
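The significance claim above comes from a sign test over paired preferences. Below is a minimal exact two-sided sign test applied to the win/loss counts (draws excluded); aggregating the counts this way is an assumption about how the test was run, not the paper's script.

from math import comb

def sign_test_p(wins, losses):
    """Two-sided exact sign test on preference counts (draws excluded): p-value under the null
    hypothesis that each non-draw comparison is a fair coin flip."""
    n, k = wins + losses, max(wins, losses)
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# GDPL vs PPO on the Success criterion in Table 6: 59 wins, 10 losses (31 draws excluded).
print(sign_test_p(59, 10))   # far below 0.01, consistent with the reported significance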
Guided Dialog Policy Learning: Reward Estimation for Multi-Domain Task-Oriented Dialog
1908.10719v1
Table 7: Return distribution of GDPL on each metric. The first row counts the dialog sessions that get the full score of the corresponding metric, and the results of the rest sessions are included in the second row.
['Type', 'Inform Mean', 'Inform Num', 'Match Mean', 'Match Num', 'Success Mean', 'Success Num']
[['Full', '8.413', '903', '10.59', '450', '11.18', '865'], ['Other', '-99.95', '76', '-48.15', '99', '-71.62', '135']]
Table 7 provides a quantitative evaluation on the learned rewards by showing the distribution of the return R = Σ_t γ^t r_t according to each metric. [CONTINUE] It can be observed that the learned reward function has good interpretability in that the reward is positive when the dialog gets a full score on each metric, and negative otherwise.
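For clarity, the return used above is the standard discounted sum of per-turn rewards; a one-function sketch (the gamma value and the toy rewards are illustrative, not the paper's settings):

def discounted_return(rewards, gamma=0.99):
    """R = sum_t gamma^t * r_t for one dialog session."""
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

# Toy usage: small per-turn rewards plus a terminal reward.
print(discounted_return([-1, -1, -1, 20], gamma=0.99))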
Imparting Interpretability to Word Embeddings while Preserving Semantic Structure
1807.07279v3
TABLE VII: Precision scores for the Analogy Test
['Methods', '# dims', 'Analg. (sem)', 'Analg. (syn)', 'Total']
[['GloVe', '300', '78.94', '64.12', '70.99'], ['Word2Vec', '300', '81.03', '66.11', '73.03'], ['OIWE-IPG', '300', '19.99', '23.44', '21.84'], ['SOV', '3000', '64.09', '46.26', '54.53'], ['SPINE', '1000', '17.07', '8.68', '12.57'], ['Word2Sense', '2250', '12.94', '19.44', '5.84'], ['Proposed', '300', '79.96', '63.52', '71.15']]
We present precision scores for the word analogy tests in Table VII. It can be seen that the alternative approaches that aim to improve interpretability have poor performance on the word analogy tests. However, our proposed method has comparable performance with the original GloVe embeddings. Our proposed method outperforms GloVe on the semantic analogy test set and in overall results, while GloVe performs slightly better on the syntactic test set.
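The analogy test is typically answered by the 3CosAdd rule: for a question a : b :: c : ?, pick the vocabulary word closest to b − a + c while excluding the three query words. A minimal numpy sketch under that assumption (not the paper's evaluation script), expecting unit-normalised vectors and all question words in the vocabulary:

import numpy as np

def analogy_accuracy(emb, questions):
    """emb: dict word -> unit-norm vector; questions: list of (a, b, c, d) tuples."""
    words = list(emb)
    matrix = np.stack([emb[w] for w in words])           # (V, dim), rows unit-norm
    correct = 0
    for a, b, c, d in questions:
        target = emb[b] - emb[a] + emb[c]
        scores = matrix @ (target / np.linalg.norm(target))
        for w in (a, b, c):                              # exclude the query words
            scores[words.index(w)] = -np.inf
        if words[int(np.argmax(scores))] == d:
            correct += 1
    return correct / len(questions)

# Toy usage with 3-d toy vectors (unit-normalised).
toy = {w: v / np.linalg.norm(v) for w, v in {
    "man": np.array([1.0, 0.1, 0.0]), "woman": np.array([1.0, 0.9, 0.0]),
    "king": np.array([0.2, 0.1, 1.0]), "queen": np.array([0.2, 0.9, 1.0])}.items()}
print(analogy_accuracy(toy, [("man", "woman", "king", "queen")]))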
Imparting Interpretability to Word Embeddings while Preserving Semantic Structure
1807.07279v3
TABLE V: Word Intrusion Test Results: Correct Answers out of 300 Questions
['[EMPTY]', 'GloVe', 'Imparted']
[['Participants 1 to 5', '80/88/82/78/97', '212/170/207/229/242'], ['Mean/Std', '85/6.9', '212/24.4']]
We apply the test to five participants. Results tabulated in Table V show that our proposed method significantly improves interpretability, increasing the average true-answer percentage from ∼28% for the baseline to ∼71% for our method.
Imparting Interpretability to Word Embeddings while Preserving Semantic Structure
1807.07279v3
TABLE VI: Correlations for Word Similarity Tests
['Dataset (EN-)', 'GloVe', 'Word2Vec', 'OIWE-IPG', 'SOV', 'SPINE', 'Word2Sense', 'Proposed']
[['WS-353-ALL', '0.612', '0.7156', '0.634', '0.622', '0.173', '0.690', '0.657'], ['SIMLEX-999', '0.359', '0.3939', '0.295', '0.355', '0.090', '0.380', '0.381'], ['VERB-143', '0.326', '0.4430', '0.255', '0.271', '0.293', '0.271', '0.348'], ['SimVerb-3500', '0.193', '0.2856', '0.184', '0.197', '0.035', '0.234', '0.245'], ['WS-353-REL', '0.578', '0.6457', '0.595', '0.578', '0.134', '0.695', '0.619'], ['RW-STANF.', '0.378', '0.4858', '0.316', '0.373', '0.122', '0.390', '0.382'], ['YP-130', '0.524', '0.5211', '0.353', '0.482', '0.169', '0.420', '0.589'], ['MEN-TR-3k', '0.710', '0.7528', '0.684', '0.696', '0.298', '0.769', '0.725'], ['RG-65', '0.768', '0.8051', '0.736', '0.732', '0.338', '0.761', '0.774'], ['MTurk-771', '0.650', '0.6712', '0.593', '0.623', '0.199', '0.665', '0.671'], ['WS-353-SIM', '0.682', '0.7883', '0.713', '0.702', '0.220', '0.720', '0.720'], ['MC-30', '0.749', '0.8112', '0.799', '0.726', '0.330', '0.735', '0.776'], ['MTurk-287', '0.649', '0.6645', '0.591', '0.631', '0.295', '0.674', '0.634'], ['Average', '0.552', '0.6141', '0.519', '0.538', '0.207', '0.570', '0.579']]
The correlation scores for 13 different similarity test sets and their averages are reported in Table VI. We observe that, far from showing a reduction in performance, the obtained scores indicate an almost uniform improvement in the correlation values for the proposed algorithm, which outperforms all the alternatives except the Word2Vec baseline on average. Although Word2Sense performed slightly better on some of the test sets, it should be noted that it is trained on a significantly larger corpus. Categories from Roget's thesaurus are groupings of words that are similar in some sense which the original embedding algorithm may fail to capture. These test results signify that the semantic information injected into the algorithm by the additional cost term is significant enough to result in a measurable improvement. It should also be noted that the scores obtained by SPINE are unacceptably low on almost all tests, indicating that it has achieved its interpretability performance at the cost of losing its semantic functions.
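The similarity benchmarks in Table VI score each word pair by the cosine similarity of its embeddings and report a rank correlation against the human ratings. A small sketch using scipy's Spearman correlation; the out-of-vocabulary handling is an assumption.

import numpy as np
from scipy.stats import spearmanr

def similarity_correlation(emb, pairs_with_gold):
    """emb: dict word -> vector; pairs_with_gold: list of (w1, w2, human_score).
    Returns the Spearman correlation between cosine similarities and human ratings."""
    cosines, gold = [], []
    for w1, w2, score in pairs_with_gold:
        if w1 in emb and w2 in emb:                       # skip out-of-vocabulary pairs
            v1, v2 = emb[w1], emb[w2]
            cosines.append(float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))))
            gold.append(score)
    rho, _ = spearmanr(cosines, gold)
    return rho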
Imparting Interpretability to Word Embeddings while Preserving Semantic Structure
1807.07279v3
TABLE VIII: Precision scores for the Semantic Analogy Test
['Questions Subset', '# of Questions Seen', 'GloVe', 'Word2Vec', 'Proposed']
[['All', '8783', '78.94', '81.03', '79.96'], ['At least one concept word', '1635', '67.58', '70.89', '67.89'], ['All concept words', '110', '77.27', '89.09', '83.64']]
To investigate the effect of the additional cost term on the performance improvement in the semantic analogy test, we [CONTINUE] present Table VIII. In particular, we present results for the cases where i) all questions in the dataset are considered, ii) only the questions that contain at least one concept word are considered, and iii) only the questions that consist entirely of concept words are considered. [CONTINUE] We observe that for all three scenarios, our proposed algorithm results in an improvement in the precision scores. However, the greatest performance increase is seen for the last scenario, which underscores the extent to which the semantic features captured by embeddings can be improved with a reasonable selection of the lexical resource from which the concept word groups were derived.
Imparting Interpretability to Word Embeddings while Preserving Semantic Structure
1807.07279v3
TABLE IX: Accuracies (%) for Sentiment Classification Task
['GloVe', 'Word2Vec', 'OIWE-IPG', 'SOV', 'SPINE', 'Word2Sense', 'Proposed']
[['77.34', '77.91', '74.27', '78.43', '74.13', '81.21', '78.26']]
Classification accuracies are presented in Table IX. The proposed method outperforms the original embeddings and performs on par with SOV. Pretrained Word2Sense embeddings outperform our method; however, they have the advantage of being trained on a larger corpus. This result, along with the intrinsic evaluations, shows that the proposed imparting method can significantly improve interpretability without a drop in performance.
Revisiting Joint Modeling of Cross-documentEntity and Event Coreference Resolution
1906.01753v1
Table 2: Combined within- and cross-document entity coreference results on the ECB+ test set.
['[BOLD] Model', 'MUC R', 'MUC P', 'MUC F1', 'B3 R', 'B3 P', 'B3 F1', 'CEAF-e R', 'CEAF-e P', 'CEAF-e F1', 'CoNLL F1']
[['Cluster+Lemma', '71.3', '83', '76.7', '53.4', '84.9', '65.6', '70.1', '52.5', '60', '67.4'], ['Disjoint', '76.7', '80.8', '78.7', '63.2', '78.2', '69.9', '65.3', '58.3', '61.6', '70'], ['Joint', '78.6', '80.9', '79.7', '65.5', '76.4', '70.5', '65.4', '61.3', '63.3', '[BOLD] 71.2']]
Table 2 presents the performance of our method with respect to entity coreference. Our joint model improves upon the strong lemma baseline by 3.8 points in CoNLL F1 score.
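The CoNLL F1 reported in the last column is the unweighted average of the MUC, B3, and CEAF-e F1 scores; a quick check against the Joint row above:

def conll_f1(muc_f1, b3_f1, ceafe_f1):
    """CoNLL F1 = unweighted mean of the MUC, B3 and CEAF-e F1 scores."""
    return (muc_f1 + b3_f1 + ceafe_f1) / 3

print(round(conll_f1(79.7, 70.5, 63.3), 1))   # 71.2, matching the Joint row above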
Revisiting Joint Modeling of Cross-documentEntity and Event Coreference Resolution
1906.01753v1
Table 3: Combined within- and cross-document event coreference results on the ECB+ test set.
['[BOLD] Model', 'MUC R', 'MUC P', 'MUC F1', 'B3 R', 'B3 P', 'B3 F1', 'CEAF-e R', 'CEAF-e P', 'CEAF-e F1', 'CoNLL F1']
[['[BOLD] Baselines', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['Cluster+Lemma', '76.5', '79.9', '78.1', '71.7', '85', '77.8', '75.5', '71.7', '73.6', '76.5'], ['CV Cybulska and Vossen (2015a)', '71', '75', '73', '71', '78', '74', '-', '-', '64', '73'], ['KCP Kenyon-Dean et al. (2018)', '67', '71', '69', '71', '67', '69', '71', '67', '69', '69'], ['Cluster+KCP', '68.4', '79.3', '73.4', '67.2', '87.2', '75.9', '77.4', '66.4', '71.5', '73.6'], ['[BOLD] Model Variants', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['Disjoint', '75.5', '83.6', '79.4', '75.4', '86', '80.4', '80.3', '71.9', '75.9', '78.5'], ['Joint', '77.6', '84.5', '80.9', '76.1', '85.1', '80.3', '81', '73.8', '77.3', '[BOLD] 79.5']]
Table 3 presents the results on event coreference. Our joint model outperforms all the baselines with a gap of 10.5 CoNLL F1 points from the last published results (KCP), while surpassing our strong lemma baseline by 3 points. [CONTINUE] The results reconfirm that the lemma baseline, when combined with effective topic clustering, is a strong baseline for CD event coreference resolution on the ECB+ corpus (Upadhyay et al., 2016). [CONTINUE] The results of CLUSTER+KCP again indicate that pre-clustering of documents into topics is beneficial, improving upon the KCP performance by 4.6 points, though still performing substantially worse than our joint model. [CONTINUE] To test the contribution of joint modeling, we compare our joint model to its disjoint variant. We observe that the joint model performs better on both event and entity coreference. The performance gap is modest but significant under bootstrapping and permutation tests (p < 0.001).
Attention-Based Capsule Networks with Dynamic Routing for Relation Extraction
1812.11321v1
Table 1: Precisions on the NYT dataset.
['Recall', '0.1', '0.2', '0.3', '0.4', 'AUC']
[['PCNN+ATT', '0.698', '0.606', '0.518', '0.446', '0.323'], ['Rank+ExATT', '0.789', '0.726', '0.620', '0.514', '0.395'], ['Our Model', '0.788', '[BOLD] 0.743', '[BOLD] 0.654', '[BOLD] 0.546', '[BOLD] 0.397']]
We also show the precision numbers for some particular recalls as well as the AUC in Table 1, where our model generally leads to better precision.
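Precision at fixed recall points and the PR-AUC can be computed from scored predictions as in the sketch below (using scikit-learn); taking the best precision achievable at recall ≥ the target level is an interpolation choice assumed here, not necessarily the papers' exact protocol.

import numpy as np
from sklearn.metrics import precision_recall_curve, auc

def precision_at_recall(y_true, scores, recall_points=(0.1, 0.2, 0.3, 0.4)):
    """Precision at fixed recall levels plus PR-AUC, from binary labels and confidence scores."""
    precision, recall, _ = precision_recall_curve(y_true, scores)
    pr_auc = auc(recall, precision)
    at_points = {r: float(precision[recall >= r].max()) for r in recall_points}
    return at_points, pr_auc

# Toy usage with made-up labels and scores.
print(precision_at_recall([1, 0, 1, 1, 0, 0, 1, 0],
                          [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2]))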
Attention-Based Capsule Networks with Dynamic Routing for Relation Extraction
1812.11321v1
Table 2: Precisions on the Wikidata dataset.
['Recall', '0.1', '0.2', '0.3', 'AUC']
[['Rank+ExATT', '0.584', '0.535', '0.487', '0.392'], ['PCNN+ATT (m)', '0.365', '0.317', '0.213', '0.204'], ['PCNN+ATT (1)', '0.665', '0.517', '0.413', '0.396'], ['Our Model', '0.650', '0.519', '0.422', '[BOLD] 0.405']]
We show the precision numbers for some particular recalls as well as the AUC in Table 2, where PCNN+ATT (1) refers to training sentences with two entities and one relation label, and PCNN+ATT (m) refers to training sentences with four entities and two relation labels. We observe that our model exhibits the best performance.
Attention-Based Capsule Networks with Dynamic Routing for Relation Extraction
1812.11321v1
Table 3: Ablation study of capsule net and word-level attention on Wikidata dataset.
['Recall', '0.1', '0.2', '0.3', 'AUC']
[['-Word-ATT', '0.648', '0.515', '0.395', '0.389'], ['-Capsule', '0.635', '0.507', '0.413', '0.386'], ['Our Model', '0.650', '0.519', '0.422', '0.405']]
The experimental results on the Wikidata dataset are summarized in Table 3. The "-Word-ATT" row refers to the results without word-level attention. According to the table, the drop in precision demonstrates that the word-level attention is quite useful.
Attention-Based Capsule Networks with Dynamic Routing for Relation Extraction
1812.11321v1
Table 4: Precisions on the Wikidata dataset with different choice of d.
['Recall', '0.1', '0.2', '0.3', 'AUC', 'Time']
[['[ITALIC] d=1', '0.602', '0.487', '0.403', '0.367', '4h'], ['[ITALIC] d=32', '0.645', '0.501', '0.393', '0.370', '-'], ['[ITALIC] d=16', '0.655', '0.518', '0.413', '0.413', '20h'], ['[ITALIC] d=8', '0.650', '0.519', '0.422', '0.405', '8h']]
As Table 4 shows, the training time increases with the growth of d.
Attention-Based Capsule Networks with Dynamic Routing for Relation Extraction
1812.11321v1
Table 5: Precisions on the Wikidata dataset with different number of dynamic routing iterations.
['Recall', '0.1', '0.2', '0.3', 'AUC']
[['Iteration=1', '0.531', '0.455', '0.353', '0.201'], ['Iteration=2', '0.592', '0.498', '0.385', '0.375'], ['Iteration=3', '0.650', '0.519', '0.422', '0.405'], ['Iteration=4', '0.601', '0.505', '0.422', '0.385'], ['Iteration=5', '0.575', '0.495', '0.394', '0.376']]
Table 5 shows the comparison of 1-5 routing iterations. We find that performance is best when the number of iterations is set to 3.
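For context on what the iteration count controls, below is a generic numpy sketch of routing-by-agreement in the style of Sabour et al. (2017); it is not the paper's attention-based variant, and the shapes and names are illustrative only.

import numpy as np

def squash(s, axis=-1, eps=1e-8):
    norm_sq = np.sum(s * s, axis=axis, keepdims=True)
    return (norm_sq / (1.0 + norm_sq)) * s / np.sqrt(norm_sq + eps)

def dynamic_routing(u_hat, iterations=3):
    """u_hat: (num_in, num_out, dim) prediction vectors from lower- to higher-level capsules.
    Returns the (num_out, dim) output capsules after `iterations` rounds of routing."""
    num_in, num_out, _ = u_hat.shape
    b = np.zeros((num_in, num_out))                            # routing logits
    for _ in range(iterations):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)   # coupling coefficients
        s = np.einsum("ij,ijd->jd", c, u_hat)                  # weighted sum per output capsule
        v = squash(s)                                          # (num_out, dim)
        b = b + np.einsum("ijd,jd->ij", u_hat, v)              # agreement update
    return v

# Toy usage: 6 input capsules routed to 4 output capsules of dimension 8.
print(dynamic_routing(np.random.randn(6, 4, 8), iterations=3).shape)   # (4, 8)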
Better Rewards Yield Better Summaries: Learning to Summarise Without References
1909.01214v1
Table 3: Full-length ROUGE F-scores of some recent RL-based (upper) and supervised (middle) extractive summarisation systems, as well as our system with learned rewards (bottom). R-1/2/L stands for ROUGE-1/2/L. Our system maximises the learned reward instead of ROUGE, hence receives lower ROUGE scores.
['System', 'Reward', 'R-1', 'R-2', 'R-L']
[['Kryscinski et al. (2018)', 'R-L', '40.2', '17.4', '37.5'], ['Narayan et al. (2018b)', 'R-1,2,L', '40.0', '18.2', '36.6'], ['Chen and Bansal (2018)', 'R-L', '41.5', '18.7', '37.8'], ['Dong et al. (2018)', 'R-1,2,L', '41.5', '18.7', '37.6'], ['Zhang et al. (2018)', '[EMPTY]', '41.1', '18.8', '37.5'], ['Zhou et al. (2018)', '[EMPTY]', '41.6', '19.0', '38.0'], ['Kedzie et al. (2018)', '[EMPTY]', '39.1', '17.9', '35.9'], ['(ours) NeuralTD', 'Learned', '39.6', '18.1', '36.5']]
Table 3 presents the ROUGE scores of our system (NeuralTD+LearnedRewards) and multiple state-of-the-art systems. The summaries generated by our system receive decent ROUGE scores, but lower than most of the recent systems, because our learned reward is optimised towards high correlation with human judgement instead of ROUGE metrics.
Better Rewards Yield Better Summaries: Learning to Summarise Without References
1909.01214v1
Table 1: Quality of reward metrics. G-Pre and G-Rec are the precision and recall rate of the “good” summaries identified by the metrics, resp. All metrics here require reference summaries. We perform stemming and stop-word removal as preprocessing, as they help increase the correlation. For InferSent, the embeddings of the reference/system summaries are obtained by averaging the embeddings of the sentences therein.
['Metric', '[ITALIC] ρ', '[ITALIC] r', 'G-Pre', 'G-Rec']
[['ROUGE-1', '.290', '.304', '.392', '.428'], ['ROUGE-2', '.259', '.278', '.408', '.444'], ['ROUGE-L', '.274', '.297', '.390', '.426'], ['ROUGE-SU4', '.282', '.279', '.404', '.440'], ['BLEU-1', '.256', '.281', '.409', '.448'], ['BLEU-2', '.301', '.312', '.411', '.446'], ['BLEU-3', '.317', '.312', '.409', '.444'], ['BLEU-4', '.311', '.307', '.409', '.446'], ['BLEU-5', '.308', '.303', '.420', '.459'], ['METEOR', '.305', '.285', '.409', '.444'], ['InferSent-Cosine', '[BOLD] .329', '[BOLD] .339', '.417', '.460'], ['BERT-Cosine', '.312', '.335', '[BOLD] .440', '[BOLD] .484']]
From Table 1, we find that all metrics we consider have low correlation with the human judgement. More importantly, their G-Pre and G-Rec scores are all below .50, which means that more than half of the good summaries identified by the metrics are actually not good, and more than 50% of the genuinely good summaries are not identified by the metrics.
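A sketch of how G-Pre and G-Rec can be computed, assuming "good" is defined by thresholding the metric score and the human rating respectively; the exact way the paper identifies good summaries may differ, so the thresholding scheme is an assumption.

def good_precision_recall(metric_scores, human_scores, metric_thresh, human_thresh):
    """G-Pre / G-Rec sketch: a summary is 'good' under the metric (resp. humans) if its
    score reaches the corresponding threshold."""
    metric_good = {i for i, s in enumerate(metric_scores) if s >= metric_thresh}
    human_good = {i for i, s in enumerate(human_scores) if s >= human_thresh}
    if not metric_good or not human_good:
        return 0.0, 0.0
    overlap = len(metric_good & human_good)
    return overlap / len(metric_good), overlap / len(human_good)   # (G-Pre, G-Rec)

# Toy usage with three summaries.
print(good_precision_recall([0.3, 0.8, 0.6], [2, 9, 8], 0.5, 7))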
Better Rewards Yield Better Summaries: Learning to Summarise Without References
1909.01214v1
Table 2: Summary-level correlation of learned reward functions. All results are averaged over 5-fold cross validations. Unlike the metrics in Table 1, all rewards in this table do not require reference summaries.
['Model', 'Encoder', '[ITALIC] Reg. loss (Eq. ( 1 )) [ITALIC] ρ', '[ITALIC] Reg. loss (Eq. ( 1 )) [ITALIC] r', '[ITALIC] Reg. loss (Eq. ( 1 )) G-Pre', '[ITALIC] Reg. loss (Eq. ( 1 )) G-Rec', '[ITALIC] Pref. loss (Eq. ( 3 )) [ITALIC] ρ', '[ITALIC] Pref. loss (Eq. ( 3 )) [ITALIC] r', '[ITALIC] Pref. loss (Eq. ( 3 )) G-Pre', '[ITALIC] Pref. loss (Eq. ( 3 )) G-Rec']
[['MLP', 'CNN-RNN', '.311', '.340', '.486', '.532', '.318', '.335', '.481', '.524'], ['MLP', 'PMeans-RNN', '.313', '.331', '.489', '.536', '.354', '.375', '.502', '.556'], ['MLP', 'BERT', '[BOLD] .487', '[BOLD] .526', '[BOLD] .544', '[BOLD] .597', '[BOLD] .505', '[BOLD] .531', '[BOLD] .556', '[BOLD] .608'], ['SimRed', 'CNN', '.340', '.392', '.470', '.515', '.396', '.443', '.499', '.549'], ['SimRed', 'PMeans', '.354', '.393', '.493', '.541', '.370', '.374', '.507', '.551'], ['SimRed', 'BERT', '.266', '.296', '.458', '.495', '.325', '.338', '.485', '.533'], ['Peyrard and Gurevych ( 2018 )', 'Peyrard and Gurevych ( 2018 )', '.177', '.189', '.271', '.306', '.175', '.186', '.268', '.174']]
Table 2 shows the quality of different reward learning models. As a baseline, we also consider the feature-rich reward learning method proposed by Peyrard and Gurevych (2018) (see §2). MLP with BERT as encoder has the best overall performance. Specifically, BERT+MLP+Pref significantly outperforms (p < 0.05) all the other models that do not use BERT+MLP,
Better Rewards Yield Better Summaries: Learning to Summarise Without References
1909.01214v1
Table 4: Human evaluation on extractive summaries. Our system receives significantly higher human ratings on average. “Best%”: in how many percentage of documents a system receives the highest human rating.
['[EMPTY]', 'Ours', 'Refresh', 'ExtAbsRL']
[['Avg. Human Rating', '[BOLD] 2.52', '2.27', '1.66'], ['Best%', '[BOLD] 70.0', '33.3', '6.7']]
Table 4 presents the human evaluation results. Summaries generated by NeuralTD receive significantly higher human evaluation scores than those by Refresh (p = 0.0088, two-tailed t-test) and ExtAbsRL (p ≪ 0.01). Also, the average human rating for Refresh is significantly higher (p ≪ 0.01) than for ExtAbsRL,
Better Rewards Yield Better Summaries: Learning to Summarise Without References
1909.01214v1
Table 5: Performance of ExtAbsRL with different reward functions, measured in terms of ROUGE (center) and human judgements (right). Using our learned reward yields significantly (p=0.0057) higher average human rating. “Pref%”: in how many percentage of documents a system receives the higher human rating.
['Reward', 'R-1', 'R-2', 'R-L', 'Human', 'Pref%']
[['R-L (original)', '40.9', '17.8', '38.5', '1.75', '15'], ['Learned (ours)', '39.2', '17.4', '37.5', '[BOLD] 2.20', '[BOLD] 75']]
Table 5 compares the ROUGE scores of using different rewards to train the extractor in ExtAbsRL (the abstractor is pre-trained and is applied to rephrase the extracted sentences). Again, when ROUGE is used as the reward, the generated summaries have higher ROUGE scores. [CONTINUE] It is clear from Table 5 that using the learned reward helps the RL-based system generate summaries with significantly higher human ratings.
Building a Production Model for Retrieval-Based Chatbots
1906.03209v2
Table 9: An ablation study showing the effect of different model architectures and training regimes on performance on the proprietary help desk dataset.
['[BOLD] Model', '[BOLD] Parameters', '[BOLD] Validation [email protected]', '[BOLD] Test [email protected]']
[['Base', '8.0M', '[BOLD] 0.871', '0.816'], ['4L SRU → 2L LSTM', '7.3M', '0.864', '[BOLD] 0.829'], ['4L SRU → 2L SRU', '7.8M', '0.856', '[BOLD] 0.829'], ['Flat → hierarchical', '12.4M', '0.825', '0.559'], ['Cross entropy → hinge loss', '8.0M', '0.765', '0.693'], ['6.6M → 1M examples', '8.0M', '0.835', '0.694'], ['6.6M → 100K examples', '8.0M', '0.565', '0.417'], ['200 → 100 negatives', '8.0M', '0.864', '0.647'], ['200 → 10 negatives', '8.0M', '0.720', '0.412']]
The results are shown in Table 9. [CONTINUE] As Table 9 shows, the training set size and the number of negative responses for each positive response are the most important factors in model performance. The model performs significantly worse when trained with hinge loss instead of cross-entropy loss, indicating the importance of the loss function. [CONTINUE] We observed no advantage to using a hierarchical encoder. [CONTINUE] Finally, we see that a 2-layer LSTM performs similarly to either a 4-layer or a 2-layer SRU with a comparable number of parameters. [CONTINUE] Table 9 shows the results of an ablation study we performed to identify the most important components of our model architecture and training regime.
Building a Production Model for Retrieval-Based Chatbots
1906.03209v2
Table 8: Inference time (milliseconds) of our model to encode a context using an SRU or an LSTM encoder on a single CPU core. The last row shows the extra time needed to compare the response encoding to 10,000 cached candidate response encodings in order to find the best response.
['[BOLD] Encoder', '[BOLD] Layer', '[BOLD] Params', '[BOLD] Time']
[['SRU', '2', '3.7M', '14.7'], ['SRU', '4', '8.0M', '21.9'], ['LSTM', '2', '7.3M', '90.9'], ['LSTM', '4', '15.9M', '174.8'], ['+rank response', '-', '-', '0.9']]
The model makes use of a fast recurrent network implementation (Lei et al., 2018) and multiheaded attention (Lin et al., 2017) and achieves over a 4.1x inference speedup over traditional encoders such as LSTM (Hochreiter and Schmidhuber, 1997). [CONTINUE] SRU also exhibits a significant speedup in inference time compared to LSTM (by a factor of 4.1x in our experiments), [CONTINUE] As seen in Table 8, an SRU encoder is over 4x faster than an LSTM encoder with a similar number of parameters, making it more suitable for production use. [CONTINUE] Table 8 also highlights the scalability of using a dual encoder architecture. [CONTINUE] Retrieving the best candidate once the context is encoded takes a negligible amount of time compared to the time to encode the context. [CONTINUE] Since the SRU is more than 4x faster at inference time with the same level of performance,
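The last row of Table 8 reflects the main advantage of a dual encoder: candidate responses are encoded offline, so serving a query reduces to one context encoding plus a single matrix-vector product over the cached response encodings. A hedged numpy sketch, where encode_context and the cached matrix are placeholders rather than the system's actual components:

import numpy as np

def rank_responses(context, encode_context, response_matrix, top_k=5):
    """Return the indices of the top_k candidate responses for a context.
    response_matrix: (num_candidates, dim) cached, L2-normalised response encodings;
    only the context is encoded online."""
    h = encode_context(context)
    h = h / np.linalg.norm(h)
    scores = response_matrix @ h                 # one matrix-vector product over all candidates
    return np.argsort(-scores)[:top_k]

# Toy usage with a random stand-in "encoder" and 10,000 cached candidates of dimension 300.
rng = np.random.default_rng(0)
cands = rng.normal(size=(10_000, 300))
cands /= np.linalg.norm(cands, axis=1, keepdims=True)
print(rank_responses("how do I reset my password?", lambda _: rng.normal(size=300), cands))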
Building a Production Model for Retrieval-Based Chatbots
1906.03209v2
Table 3: AUC and AUC@p of our model on the proprietary help desk dataset.
['[BOLD] Metric', '[BOLD] Validation', '[BOLD] Test']
[['AUC', '0.991', '0.977'], ['[email protected]', '0.925', '0.885'], ['[email protected]', '0.871', '0.816'], ['[email protected]', '0.677', '0.630']]
The performance of our model according to these AUC metrics can be seen in Table 3. The high AUC indicates that our model can easily distinguish between the true response and negative responses. Furthermore, the AUC@p numbers show that the model has a relatively high true positive rate even under the difficult requirement of a low false positive rate.
Building a Production Model for Retrieval-Based Chatbots
1906.03209v2
Table 4: Recall@k from n response candidates for different values of n using random whitelists. Each random whitelist includes the correct response along with n−1 randomly selected responses.
['[BOLD] Candidates', '[BOLD] R@1', '[BOLD] R@3', '[BOLD] R@5', '[BOLD] R@10']
[['10', '0.892', '0.979', '0.987', '1'], ['100', '0.686', '0.842', '0.894', '0.948'], ['1,000', '0.449', '0.611', '0.677', '0.760'], ['10,000', '0.234', '0.360', '0.421', '0.505']]
Table 4 shows Rn@k on the test set for different values of n and k when using a random whitelist. [CONTINUE] Table 4 shows that recall drops significantly as n grows, meaning that the R10@k evaluation performed by prior work may significantly overstate model performance in a production setting.
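The Rn@k evaluation described above can be sketched as follows, with score(context, response) standing in for the trained matching model; the function names and sampling details are assumptions, not the paper's code.

import random

def recall_at_k(score, examples, candidate_pool, n=100, k=10, seed=0):
    """Rn@k sketch: for each (context, true_response), score the true response against n-1
    randomly sampled distractors and count how often it lands in the top k.
    Assumes candidate_pool contains at least n distinct responses."""
    rng = random.Random(seed)
    hits = 0
    for context, true_response in examples:
        distractors = rng.sample([r for r in candidate_pool if r != true_response], n - 1)
        ranked = sorted(distractors + [true_response],
                        key=lambda r: score(context, r), reverse=True)
        hits += true_response in ranked[:k]
    return hits / len(examples)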
Building a Production Model for Retrieval-Based Chatbots
1906.03209v2
Table 5: Recall@k for random, frequency, and clustering whitelists of different sizes. The “+” indicates that the true response is added to the whitelist.
['[BOLD] Whitelist', '[BOLD] R@1', '[BOLD] R@3', '[BOLD] R@5', '[BOLD] R@10', '[BOLD] BLEU']
[['Random 10K+', '0.252', '0.400', '0.472', '0.560', '37.71'], ['Frequency 10K+', '0.257', '0.389', '0.455', '0.544', '41.34'], ['Clustering 10K+', '0.230', '0.376', '0.447', '0.541', '37.59'], ['Random 1K+', '0.496', '0.663', '0.728', '0.805', '59.28'], ['Frequency 1K+', '0.513', '0.666', '0.726', '0.794', '67.05'], ['Clustering 1K+', '0.481', '0.667', '0.745', '0.835', '61.88'], ['Frequency 10K', '0.136', '0.261', '0.327', '0.420', '30.46'], ['Clustering 10K', '0.164', '0.292', '0.360', '0.457', '31.47'], ['Frequency 1K', '0.273', '0.465', '0.550', '0.658', '47.13'], ['Clustering 1K', '0.331', '0.542', '0.650', '0.782', '49.26']]
The results in Table 5 show that the three types of whitelists perform comparably to each other when the true response is added. However, in the more realistic second case, when recall is only computed on examples with a response already in the whitelist, performance on the frequency and clustering whitelists drops significantly. [CONTINUE] The BLEU scores computed with the frequency and clustering whitelists are slightly higher than those computed with random whitelists. [CONTINUE] The recall results in Table 5 seem to indicate that the clustering-based whitelists are strictly superior to the frequency-based whitelists in the realistic case when we only consider responses that are already contained in the whitelist, but this analysis fails to account for the frequency with which this is the case.
Building a Production Model for Retrieval-Based Chatbots
1906.03209v2
Table 6: Recall@1 versus coverage for frequency and clustering whitelists.
['[BOLD] Whitelist', '[BOLD] R@1', '[BOLD] Coverage']
[['Frequency 10K', '0.136', '45.04%'], ['Clustering 10K', '0.164', '38.38%'], ['Frequency 1K', '0.273', '33.38%'], ['Clustering 1K', '0.331', '23.28%']]
Table 6 shows R@1 and coverage for the frequency and clustering whitelists. While the clustering whitelists have higher recall, the frequency whitelists have higher coverage.
Building a Production Model for Retrieval-Based Chatbots
1906.03209v2
Table 7: Results of the human evaluation of the responses produced by our model. A response is acceptable if it is either good or great. Note: Numbers may not add up to 100% due to rounding.
['[BOLD] Whitelist', '[BOLD] Great', '[BOLD] Good', '[BOLD] Bad', '[BOLD] Accept']
[['Freq. 1K', '54%', '26%', '20%', '80%'], ['Cluster. 1K', '55%', '21%', '23%', '77%'], ['Freq. 10K', '56%', '24%', '21%', '80%'], ['Cluster. 10K', '57%', '23%', '20%', '80%'], ['Real response', '60%', '24%', '16%', '84%']]
The results of the human evaluation are in Table 7. Our proposed system works well, selecting acceptable (i.e. good or great) responses about 80% of the time and selecting great responses more than 50% of the time. Interestingly, the size and type of whitelist seem to have little effect on performance, indicating that all the whitelists contain responses appropriate to a variety of conversational contexts.
Mind the GAP: A Balanced Corpus of Gendered Ambiguous Pronouns
1810.05201v1
Table 6: Performance of our baselines on the development set. Parallelism+URL tests the page-context setting; all others test the snippet-context setting. Bold indicates best performance in each setting.
['[EMPTY]', 'M', 'F', 'B', 'O']
[['Random', '43.6', '39.3', '[ITALIC] 0.90', '41.5'], ['Token Distance', '50.1', '42.4', '[ITALIC] 0.85', '46.4'], ['Topical Entity', '51.5', '43.7', '[ITALIC] 0.85', '47.7'], ['Syntactic Distance', '63.0', '56.2', '[ITALIC] 0.89', '59.7'], ['Parallelism', '[BOLD] 67.1', '[BOLD] 63.1', '[ITALIC] [BOLD] 0.94', '[BOLD] 65.2'], ['Parallelism+URL', '[BOLD] 71.1', '[BOLD] 66.9', '[ITALIC] [BOLD] 0.94', '[BOLD] 69.0'], ['Transformer-Single', '58.6', '51.2', '[ITALIC] 0.87', '55.0'], ['Transformer-Multi', '59.3', '52.9', '[ITALIC] 0.89', '56.2']]
Both cues yield strong baselines comparable to the strongest OntoNotes-trained systems (cf. Table 4). In fact, Lee et al. (2017) and PARALLELISM produce remarkably similar output: of the 2000 example pairs in the development set, the two have completely opposing predictions (i.e. Name A vs. Name B) on only 325 examples. [CONTINUE] Further, the cues are markedly gender-neutral, improving the Bias metric by 9% in the standard task formulation and to parity in the gold-two-mention case. [CONTINUE] The heuristic gives a performance gain of 2% overall compared to PARALLELISM. [CONTINUE] TRANSFORMER-MULTI is stronger than TRANSFORMER-SINGLE [CONTINUE] .2% overall improvement over TRANSFORMER-SINGLE for the gold-two-mention task.
Mind the GAP: A Balanced Corpus of Gendered Ambiguous Pronouns
1810.05201v1
Table 4: Performance of off-the-shelf resolvers on the GAP development set, split by Masculine and Feminine (Bias shows F/M), and Overall. Bold indicates best performance.
['[EMPTY]', 'M', 'F', 'B', 'O']
[['Lee et al. (2013)', '55.4', '45.5', '[ITALIC] 0.82', '50.5'], ['Clark and Manning', '58.5', '51.3', '[ITALIC] 0.88', '55.0'], ['Wiseman et al.', '[BOLD] 68.4', '59.9', '[ITALIC] 0.88', '64.2'], ['Lee et al. (2017)', '67.2', '[BOLD] 62.2', '[ITALIC] [BOLD] 0.92', '[BOLD] 64.7']]
We note particularly the large difference in performance between genders, [CONTINUE] Both cues yield strong baselines comparable to the strongest OntoNotes-trained systems (cf. Table 4). In fact, Lee et al. (2017) and PARALLELISM produce remarkably similar output: of the 2000 example pairs in the development set, the two have completely opposing predictions (i.e. Name A vs. Name B) on only 325 examples.
Mind the GAP: A Balanced Corpus of Gendered Ambiguous Pronouns
1810.05201v1
Table 7: Performance of our baselines on the development set in the gold-two-mention task (access to the two candidate name spans). Parallelism+URL tests the page-context setting; all others test the snippet-context setting. Bold indicates best performance in each setting.
['[EMPTY]', 'M', 'F', 'B', 'O']
[['Random', '47.5', '50.5', '[ITALIC] 1.06', '49.0'], ['Token Distance', '50.6', '47.5', '[ITALIC] 0.94', '49.1'], ['Topical Entity', '50.2', '47.3', '[ITALIC] 0.94', '48.8'], ['Syntactic Distance', '66.7', '66.7', '[ITALIC] [BOLD] 1.00', '66.7'], ['Parallelism', '[BOLD] 69.3', '[BOLD] 69.2', '[ITALIC] [BOLD] 1.00', '[BOLD] 69.2'], ['Parallelism+URL', '[BOLD] 74.2', '[BOLD] 71.6', '[ITALIC] [BOLD] 0.96', '[BOLD] 72.9'], ['Transformer-Single', '59.6', '56.6', '[ITALIC] 0.95', '58.1'], ['Transformer-Multi', '62.9', '61.7', '[ITALIC] 0.98', '62.3']]
RANDOM is indeed closer here to the expected 50% and the other baselines are closer to gender parity. [CONTINUE] TOKEN DISTANCE and TOPICAL ENTITY are only weak improvements over RANDOM, [CONTINUE] Further, the cues are markedly gender-neutral, improving the Bias metric by 9% in the standard task formulation and to parity in the gold-two-mention case. [CONTINUE] .2% overall improvement over TRANSFORMER-SINGLE for the gold-two-mention task.
Mind the GAP: A Balanced Corpus of Gendered Ambiguous Pronouns
1810.05201v1
Table 8: Coreference signal of a Transformer model on the validation dataset, by encoder attention layer and head.
['Head \ Layer', 'L0', 'L1', 'L2', 'L3', 'L4', 'L5']
[['H0', '46.9', '47.4', '45.8', '46.2', '45.8', '45.7'], ['H1', '45.3', '46.5', '46.4', '46.2', '49.4', '46.3'], ['H2', '45.8', '46.7', '46.3', '46.5', '45.7', '45.9'], ['H3', '46.0', '46.3', '46.8', '46.0', '46.6', '48.0'], ['H4', '45.7', '46.3', '46.5', '47.8', '45.1', '47.0'], ['H5', '47.0', '46.5', '46.5', '45.6', '46.2', '52.9'], ['H6', '46.7', '45.4', '46.4', '45.3', '46.9', '47.0'], ['H7', '43.8', '46.6', '46.4', '55.0', '46.4', '46.2']]
Consistent with the observations by Vaswani et al. (2017), we observe that the coreference signal is localized on specific heads and that these heads are in the deep layers of the network (e.g. L3H7).
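One way to obtain a per-head "coreference signal" like the scores in Table 8 is to check, for each (layer, head), how often the pronoun attends more to the correct candidate name than to the incorrect one. The sketch below is a hedged illustration of that probing idea, not the paper's implementation; the tensor shape and example format are assumptions.

```python
# Hedged sketch of head-level probing (not the paper's code). `attn` is
# assumed to have shape (layers, heads, seq_len, seq_len), where
# attn[l, h, i, j] is the attention from token i to token j in one example.
import numpy as np

def head_coref_scores(examples):
    """examples: iterable of (attn, pron_idx, true_span, false_span), where
    the spans are (start, end) token ranges of the correct / incorrect name."""
    correct, total = None, 0
    for attn, pron_idx, true_span, false_span in examples:
        if correct is None:
            correct = np.zeros(attn.shape[:2])  # (layers, heads)
        a_true = attn[:, :, pron_idx, true_span[0]:true_span[1]].sum(-1)
        a_false = attn[:, :, pron_idx, false_span[0]:false_span[1]].sum(-1)
        correct += (a_true > a_false)
        total += 1
    return correct / total  # fraction of examples each head resolves correctly
```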
Mind the GAP: A Balanced Corpus of Gendered Ambiguous Pronouns
1810.05201v1
Table 9: Comparison of the predictions of the Parallelism and Transformer-Single heuristics over the GAP development dataset.
['[EMPTY]', '[EMPTY]', 'Parallelism Correct', 'Parallelism Incorrect']
[['Transf.', 'Correct', '48.7%', '13.4%'], ['Transf.', 'Incorrect', '21.6%', '16.3%']]
we find that the instances of coreference that TRANSFORMER-SINGLE can handle are substantially
Effective Attention Modeling for Neural Relation Extraction
1912.03832v1
Table 3: Performance comparison of our model with different values of m on the two datasets.
['[ITALIC] m', 'NYT10 Prec.', 'NYT10 Rec.', 'NYT10 F1', 'NYT11 Prec.', 'NYT11 Rec.', 'NYT11 F1']
[['1', '0.541', '0.595', '[BOLD] 0.566', '0.495', '0.621', '0.551'], ['2', '0.521', '0.597', '0.556', '0.482', '0.656', '0.555'], ['3', '0.490', '0.617', '0.547', '0.509', '0.633', '0.564'], ['4', '0.449', '0.623', '0.522', '0.507', '0.652', '[BOLD] 0.571'], ['5', '0.467', '0.609', '0.529', '0.488', '0.677', '0.567']]
We investigate the effects of the multi-factor count (m) in our final model on the test datasets in Table 3. We observe that for the NYT10 dataset, m = {1, 2, 3} gives good performance with m = 1 achieving the highest F1 score. On the NYT11 dataset, m = 4 gives the best performance. These experiments show that the number of factors giving the best performance may vary depending on the underlying data distribution.
Effective Attention Modeling for Neural Relation Extraction
1912.03832v1
Table 2: Performance comparison of different models on the two datasets. * denotes a statistically significant improvement over the previous best state-of-the-art model with p<0.01 under the bootstrap paired t-test. † denotes the previous best state-of-the-art model.
['Model', 'NYT10 Prec.', 'NYT10 Rec.', 'NYT10 F1', 'NYT11 Prec.', 'NYT11 Rec.', 'NYT11 F1']
[['CNN zeng2014relation', '0.413', '0.591', '0.486', '0.444', '0.625', '0.519'], ['PCNN zeng2015distant', '0.380', '[BOLD] 0.642', '0.477', '0.446', '0.679', '0.538†'], ['EA huang2016attention', '0.443', '0.638', '0.523†', '0.419', '0.677', '0.517'], ['BGWA jat2018attention', '0.364', '0.632', '0.462', '0.417', '[BOLD] 0.692', '0.521'], ['BiLSTM-CNN', '0.490', '0.507', '0.498', '0.473', '0.606', '0.531'], ['Our model', '[BOLD] 0.541', '0.595', '[BOLD] 0.566*', '[BOLD] 0.507', '0.652', '[BOLD] 0.571*']]
We present the results of our final model on the relation extraction task on the two datasets in Table 2. Our model outperforms the previous state-of-the-art models on both datasets in terms of F1 score. On the NYT10 dataset, it achieves 4.3% higher F1 score compared to the previous best state-of-the-art model EA. Similarly, it achieves 3.3% higher F1 score compared to the previous best state-of-the-art model PCNN on the NYT11 dataset. Our model improves the precision scores on both datasets with good recall scores.
Effective Attention Modeling for Neural Relation Extraction
1912.03832v1
Table 4: Effectiveness of model components (m=4) on the NYT11 dataset.
['[EMPTY]', 'Prec.', 'Rec.', 'F1']
[['(A1) BiLSTM-CNN', '0.473', '0.606', '0.531'], ['(A2) Standard attention', '0.466', '0.638', '0.539'], ['(A3) Window size ( [ITALIC] ws)=5', '0.507', '0.652', '[BOLD] 0.571'], ['(A4) Window size ( [ITALIC] ws)=10', '0.510', '0.640', '0.568'], ['(A5) Softmax', '0.490', '0.658', '0.562'], ['(A6) Max-pool', '0.492', '0.600', '0.541']]
We include the ablation results on the NYT11 dataset in Table 4. When we add multi-factor attention to the baseline BiLSTM-CNN model without the dependency distance-based weight factor in the attention mechanism, we get a 0.8% F1 score improvement (A2−A1). Adding the dependency weight factor with a window size of 5 improves [CONTINUE] the F1 score by 3.2% (A3−A2). Increasing the window size to 10 reduces the F1 score marginally (A3−A4). Replacing the attention normalizing function with the softmax operation also reduces the F1 score marginally (A3−A5). In our model, we concatenate the features extracted by each attention layer. Rather than concatenating them, we can apply a max-pooling operation across the multiple attention scores to compute the final attention scores. These max-pooled attention scores are used to obtain the weighted average vector of Bi-LSTM hidden vectors. This affects the model performance negatively, and the F1 score of the model decreases by 3.0% (A3−A6).
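The contrast in the last ablation (A3 vs. A6) is between concatenating one attention-weighted vector per factor and max-pooling the factor scores into a single attention distribution. The sketch below illustrates the two combination strategies; it is not the authors' code, and it uses a plain softmax normalizer for simplicity (the paper's own normalizing function differs, per ablation A5).

```python
# Hedged sketch of combining m attention factors: concatenation (A3-style)
# vs. max-pooling the scores (A6-style). `scores` has shape (m, seq_len);
# `hidden` has shape (seq_len, d) and holds the BiLSTM hidden vectors.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def concat_factors(scores, hidden):
    # One weighted average per factor, concatenated -> vector of size m * d.
    attn = softmax(scores, axis=-1)      # (m, seq_len)
    ctx = attn @ hidden                  # (m, d)
    return ctx.reshape(-1)

def maxpool_factors(scores, hidden):
    # Max-pool scores across factors, then a single weighted average -> size d.
    pooled = scores.max(axis=0)          # (seq_len,)
    attn = softmax(pooled)               # (seq_len,)
    return attn @ hidden                 # (d,)
```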
Zero-Shot Grounding of Objects from Natural Language Queries
1908.07129v1
Table 3: Category-wise performance with the default split of Flickr30k Entities.
['Method', 'Overall', 'people', 'clothing', 'bodyparts', 'animals', 'vehicles', 'instruments', 'scene', 'other']
[['QRC - VGG(det)', '60.21', '75.08', '55.9', '20.27', '73.36', '68.95', '45.68', '65.27', '38.8'], ['CITE - VGG(det)', '61.89', '[BOLD] 75.95', '58.50', '30.78', '[BOLD] 77.03', '[BOLD] 79.25', '48.15', '58.78', '43.24'], ['ZSGNet - VGG (cls)', '60.12', '72.52', '60.57', '38.51', '63.61', '64.47', '49.59', '64.66', '41.09'], ['ZSGNet - Res50 (cls)', '[BOLD] 63.39', '73.87', '[BOLD] 66.18', '[BOLD] 45.27', '73.79', '71.38', '[BOLD] 58.54', '[BOLD] 66.49', '[BOLD] 45.53']]
For Flickr30k we also note entity-wise accuracy in Table 3 and compare against [7, 34]. [CONTINUE] As these models use object detectors pretrained on Pascal-VOC, they have somewhat higher performance on classes that are common to both Flickr30k and Pascal-VOC ("animals", "people" and "vehicles"). However, on classes like "clothing" and "bodyparts" our model ZSGNet shows much better performance, likely because both "clothing" and "bodyparts" are present along with the "people" category and so the other methods choose the "people" category.
Zero-Shot Grounding of Objects from Natural Language Queries
1908.07129v1
Table 2: Comparison of our model with other state of the art methods. We denote those networks which use classification weights from ImageNet [41] using “cls” and those networks which use detection weights from Pascal VOC [12] using “det”. The reported numbers are all Accuracy@IoU=0.5 or equivalently Recall@1. Models marked with “*” fine-tune their detection network on the entities in the Flickr30k.
['Method', 'Net', 'Flickr30k', 'ReferIt']
[['SCRC ', 'VGG', '27.8', '17.9'], ['GroundeR (cls) ', 'VGG', '42.43', '24.18'], ['GroundeR (det) ', 'VGG', '48.38', '28.5'], ['MCB (det) ', 'VGG', '48.7', '28.9'], ['Li (cls) ', 'VGG', '-', '40'], ['QRC* (det) ', 'VGG', '60.21', '44.1'], ['CITE* (cls) ', 'VGG', '61.89', '34.13'], ['QRG* (det)', 'VGG', '60.1', '-'], ['[BOLD] ZSGNet (cls)', '[BOLD] VGG', '[BOLD] 60.12', '[BOLD] 53.31'], ['[BOLD] ZSGNet (cls)', '[BOLD] Res50', '[BOLD] 63.39', '[BOLD] 58.63']]
Table 2 compares ZSGNet with prior works on Flickr30k Entities and ReferIt. We use "det" and "cls" to denote models using Pascal VOC detection weights and ImageNet [10, 41] classification weights. Networks marked [CONTINUE] with "*" fine-tune their object detector pretrained on Pascal VOC on the fixed entities of Flickr30k. [CONTINUE] However, such information is not available in the ReferIt dataset, which explains the ∼9% increase in performance of ZSGNet over other methods.
Zero-Shot Grounding of Objects from Natural Language Queries
1908.07129v1
Table 4: Accuracy across various unseen splits. For Flickr-Split-0,1 we use Accuracy with IoU threshold of 0.5. Since Visual Genome annotations are noisy we additionally report Accuracy with IoU threshold of 0.3. The second row denotes the IoU threshold at which the Accuracy is calculated. “B” and “UB” denote the balanced and unbalanced sets.
['Method', 'Net', 'Flickr- Split-0', 'Flickr- Split-1', 'VG-2B 0.3', 'VG-2B 0.5', 'VG-2UB 0.3', 'VG-2UB 0.5', 'VG-3B 0.3', 'VG-3B 0.5', 'VG-3UB 0.3', 'VG-3UB 0.5']
[['QRG', 'VGG', '35.62', '24.42', '13.17', '7.64', '12.39', '7.15', '14.21', '8.35', '13.03', '7.52'], ['ZSGNet', 'VGG', '39.32', '29.35', '17.09', '11.02', '16.48', '10.55', '17.63', '11.42', '17.35', '10.97'], ['ZSGNet', 'Res50', '[BOLD] 43.02', '[BOLD] 31.23', '[BOLD] 19.95', '[BOLD] 12.90', '[BOLD] 19.12', '[BOLD] 12.37', '[BOLD] 20.77', '[BOLD] 13.77', '[BOLD] 19.72', '[BOLD] 12.82']]
Table 4 shows the performance of our ZSGNet model compared to QRG on the four unseen splits [CONTINUE] Across all splits, ZSGNet shows 4−8% higher performance than QRG even though the latter has seen more data [CONTINUE] we observe that the accuracies obtained on Flickr-Split-0,1 are higher than on the VG splits, likely due to the larger variation of the referred objects in Visual Genome. [CONTINUE] Finally, the accuracy remains the same across the balanced and unbalanced sets, indicating the model performs well across all clusters as our training set is balanced.
Zero-Shot Grounding of Objects from Natural Language Queries
1908.07129v1
Table 6: Ablation study: BM=Base Model, softmax means we classify only one candidate box as foreground, BCE = Binary Cross Entropy means we classify each candidate box as the foreground or background, FL = Focal Loss, Img-Resize: use images of dimension 600×600
['Model', 'Accuracy on RefClef']
[['BM + Softmax', '48.54'], ['BM + BCE', '55.20'], ['BM + FL', '57.13'], ['BM + FL + Img-Resize', '[BOLD] 61.75']]
We show the performance of our model with different loss functions using the base model of ZSGNet on the validation set of ReferIt in Table 6. [CONTINUE] Note that using the softmax loss by itself places us higher than the previous methods. [CONTINUE] Further, using Binary Cross Entropy loss and Focal loss gives a significant (7%) performance boost, which is expected in a single-shot framework. [CONTINUE] Finally, image resizing gives another 4% increase.
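The ablation in Table 6 contrasts per-anchor binary cross-entropy with focal loss for classifying candidate boxes as foreground or background. Below is a hedged numpy sketch of the two losses; the focal-loss form follows Lin et al. (2017), and the alpha/gamma values are illustrative defaults, not necessarily ZSGNet's settings.

```python
# Hedged sketch of the per-anchor losses compared in Table 6; not the ZSGNet
# code. `p` are predicted foreground probabilities, `y` the 0/1 anchor labels.
import numpy as np

def bce_loss(p, y, eps=1e-7):
    p = np.clip(p, eps, 1 - eps)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def focal_loss(p, y, alpha=0.25, gamma=2.0, eps=1e-7):
    # Focal loss (Lin et al., 2017): down-weights easy, well-classified anchors.
    p = np.clip(p, eps, 1 - eps)
    p_t = np.where(y == 1, p, 1 - p)            # probability of the true class
    alpha_t = np.where(y == 1, alpha, 1 - alpha)
    return -alpha_t * (1 - p_t) ** gamma * np.log(p_t)
```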
Domain Adaptive Inference for Neural Machine Translation
1906.00408v1
Table 4: Test BLEU for en-de adaptive training, with sequential adaptation to a third task. EWC-tuned models give the best performance on each domain.
['[EMPTY]', '[BOLD] Training scheme', '[BOLD] News', '[BOLD] TED', '[BOLD] IT']
[['1', 'News', '37.8', '25.3', '35.3'], ['2', 'TED', '23.7', '24.1', '14.4'], ['3', 'IT', '1.6', '1.8', '39.6'], ['4', 'News and TED', '38.2', '25.5', '35.4'], ['5', '1 then TED, No-reg', '30.6', '[BOLD] 27.0', '22.1'], ['6', '1 then TED, L2', '37.9', '26.7', '31.8'], ['7', '1 then TED, EWC', '[BOLD] 38.3', '[BOLD] 27.0', '33.1'], ['8', '5 then IT, No-reg', '8.0', '6.9', '56.3'], ['9', '6 then IT, L2', '32.3', '22.6', '56.9'], ['10', '7 then IT, EWC', '35.8', '24.6', '[BOLD] 57.0']]
In the en-de News/TED task (Table 4), all fine-tuning schemes give similar improvements on TED. However, EWC outperforms no-reg and L2 on News, not only reducing forgetting but giving 0.5 BLEU improvement over the baseline News model. [CONTINUE] The IT task is very small: training on IT data alone results in over-fitting, with a 17 BLEU improvement under fine-tuning. [CONTINUE] fine-tuning rapidly forgets previous tasks. EWC reduces forgetting on two previous tasks while further improving on the target domain.
Domain Adaptive Inference for Neural Machine Translation
1906.00408v1
Table 3: Test BLEU for es-en adaptive training. EWC reduces forgetting compared to other fine-tuning methods, while offering the greatest improvement on the new domain.
['[EMPTY]', '[BOLD] Training scheme', '[BOLD] Health', '[BOLD] Bio']
[['1', 'Health', '[BOLD] 35.9', '33.1'], ['2', 'Bio', '29.6', '36.1'], ['3', 'Health and Bio', '35.8', '37.2'], ['4', '1 then Bio, No-reg', '30.3', '36.6'], ['5', '1 then Bio, L2', '35.1', '37.3'], ['6', '1 then Bio, EWC', '35.2', '[BOLD] 37.8']]
For es-en, the Health and Bio tasks overlap, but catastrophic forgetting still occurs under no-reg (Table 3). Regularization reduces forgetting and allows further improvements on Bio over no-reg fine-tuning. We find EWC outperforms the L2 approach
Domain Adaptive Inference for Neural Machine Translation
1906.00408v1
Table 5: Test BLEU for 2-model es-en and 3-model en-de unadapted model ensembling, compared to oracle unadapted model chosen if test domain is known. Uniform ensembling generally underperforms the oracle, while BI+IS outperforms the oracle.
['[BOLD] Decoder configuration', '[BOLD] es-en [BOLD] Health', '[BOLD] es-en [BOLD] Bio', '[BOLD] en-de [BOLD] News', '[BOLD] en-de [BOLD] TED', '[BOLD] en-de [BOLD] IT']
[['Oracle model', '35.9', '36.1', '37.8', '24.1', '39.6'], ['Uniform', '33.1', '36.4', '21.9', '18.4', '38.9'], ['Identity-BI', '35.0', '36.6', '32.7', '25.3', '42.6'], ['BI', '35.9', '36.5', '38.0', '26.1', '[BOLD] 44.7'], ['IS', '[BOLD] 36.0', '36.8', '37.5', '25.6', '43.3'], ['BI + IS', '[BOLD] 36.0', '[BOLD] 36.9', '[BOLD] 38.4', '[BOLD] 26.4', '[BOLD] 44.7']]
Table 5 shows improvements on data without domain labelling using our adaptive decoding schemes with unadapted models trained only on one domain [CONTINUE] Uniform ensembling under-performs all oracle models except es-en Bio, especially on general domains. Identity-BI strongly improves over uniform ensembling, and BI with λ as in Eq. 10 improves further for all but es-en Bio. BI and IS both individually outperform the oracle for all but IS-News, [CONTINUE] With adaptive decoding, we do not need to assume whether a uniform ensemble or a single model might perform better for some potentially unknown domain. We highlight this in Table 7 by reporting results with the ensembles of Tables 5 and 6 over concatenated test sets, to mimic the realistic scenario of unlabelled test data. We additionally include the uniform no-reg ensembling
Domain Adaptive Inference for Neural Machine Translation
1906.00408v1
Table 6: Test BLEU for 2-model es-en and 3-model en-de model ensembling for models adapted with EWC, compared to oracle model last trained on each domain, chosen if test domain is known. BI+IS outperforms uniform ensembling and in some cases outperforms the oracle.
['[BOLD] Decoder configuration', '[BOLD] es-en [BOLD] Health', '[BOLD] es-en [BOLD] Bio', '[BOLD] en-de [BOLD] News', '[BOLD] en-de [BOLD] TED', '[BOLD] en-de [BOLD] IT']
[['Oracle model', '35.9', '37.8', '37.8', '27.0', '57.0'], ['Uniform', '36.0', '36.4', '[BOLD] 38.9', '26.0', '43.5'], ['BI + IS', '[BOLD] 36.2', '[BOLD] 38.0', '38.7', '[BOLD] 26.1', '[BOLD] 56.4']]
In Table 6 we apply the best adaptive decoding scheme, BI+IS, to models fine-tuned with EWC. [CONTINUE] EWC models perform well over multiple domains, so the improvement over uniform ensembling is less striking than for unadapted models. Nevertheless adaptive decoding improves over both uniform ensembling and the oracle model in most cases.
Domain Adaptive Inference for Neural Machine Translation
1906.00408v1
Table 7: Total BLEU for test data concatenated across domains. Results from 2-model es-en and 3-model en-de ensembles, compared to oracle model chosen if test domain is known. No-reg uniform corresponds to the approach of Freitag and Al-Onaizan (2016). BI+IS performs similarly to strong oracles with no test domain labeling.
['[BOLD] Language pair', '[BOLD] Model type', '[BOLD] Oracle model', '[BOLD] Decoder configuration [BOLD] Uniform', '[BOLD] Decoder configuration [BOLD] BI + IS']
[['es-en', 'Unadapted', '36.4', '34.7', '36.6'], ['es-en', 'No-reg', '36.6', '34.8', '-'], ['es-en', 'EWC', '37.0', '36.3', '[BOLD] 37.2'], ['en-de', 'Unadapted', '36.4', '26.8', '38.8'], ['en-de', 'No-reg', '41.7', '31.8', '-'], ['en-de', 'EWC', '42.1', '38.6', '[BOLD] 42.0']]
Uniform no-reg ensembling outperforms unadapted uniform ensembling, since fine-tuning gives better in-domain performance. [CONTINUE] BI+IS decoding with single-domain trained models achieves gains over both the naive uniform approach and over oracle single-domain models. BI+IS with EWC-adapted models gives a 0.9 / 3.4 BLEU gain over the strong uniform EWC ensemble, and a 2.4 / 10.2 overall BLEU gain over the approach described in Freitag and Al-Onaizan (2016).
Filling Conversation Ellipsis for Better Social Dialog Understanding
1911.10776v1
Table 8: Semantic role labeling results. Hybrid-EL-CMP1 represents rule-based model and Hybrid-EL-CMP2 represents probability-based model.
['[BOLD] Model', '[BOLD] Prec.(%)', '[BOLD] Rec.(%)', '[BOLD] F1(%)']
[['EL', '96.02', '81.89', '88.39'], ['CMP', '86.39', '[BOLD] 88.64', '87.50'], ['Hybrid-EL-CMP1', '[BOLD] 97.42', '84.70', '90.62'], ['Hybrid-EL-CMP2', '95.82', '86.42', '[BOLD] 90.87']]
Our results in Table 8 show that when only using original utterances with ellipsis, precision is relatively high while recall is low.
Filling Conversation Ellipsis for Better Social Dialog Understanding
1911.10776v1
Table 6: Dialog act prediction performance using different selection methods.
['[BOLD] Selection Method', '[BOLD] Prec.(%)', '[BOLD] Rec.(%)', '[BOLD] F1(%)']
[['Max Logits', '80.19', '80.50', '79.85'], ['Add Logits', '81.30', '81.28', '80.85'], ['Add Logits+Expert', '[BOLD] 81.30', '[BOLD] 81.41', '[BOLD] 80.90'], ['Concat Hidden', '80.24', '80.04', '79.65'], ['Max Hidden', '80.30', '80.04', '79.63'], ['Add Hidden', '80.82', '80.28', '80.08']]
We can see from Table 6 that empirically adding logits from two models after classifiers performs the best.
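Table 6 compares simple ways of fusing the outputs of the two dialog-act classifiers (one run on the original utterance, one on the ellipsis-completed utterance), with element-wise addition of logits working best. The snippet below is a hedged sketch of the "Max Logits" and "Add Logits" variants only; the "+Expert" rule and the hidden-state fusions are omitted because their details are not given here, and all names are illustrative.

```python
# Hedged sketch of two logit-fusion variants from Table 6 (not the authors'
# code). `logits_orig` / `logits_comp` are dialog-act logits from the original
# and the ellipsis-completed utterance, each of shape (num_classes,).
import numpy as np

def fuse_max(logits_orig, logits_comp):
    # "Max Logits": element-wise maximum after the classifiers, then argmax.
    return int(np.maximum(logits_orig, logits_comp).argmax())

def fuse_add(logits_orig, logits_comp):
    # "Add Logits": element-wise sum after the classifiers, then argmax.
    return int((logits_orig + logits_comp).argmax())
```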
Towards Scalable and Reliable Capsule Networksfor Challenging NLP Applications
1906.02829v1
Table 6: Experimental results on TREC QA dataset.
['Method', 'MAP', 'MRR']
[['CNN + LR (unigram)', '54.70', '63.29'], ['CNN + LR (bigram)', '56.93', '66.13'], ['CNN', '66.91', '68.80'], ['CNTN', '65.80', '69.78'], ['LSTM (1 layer)', '62.04', '66.85'], ['LSTM', '59.75', '65.33'], ['MV-LSTM', '64.88', '68.24'], ['NTN-LSTM', '63.40', '67.72'], ['HD-LSTM', '67.44', '<bold>75.11</bold>'], ['Capsule-Zhao', '73.63', '70.12'], ['NLP-Capsule', '<bold>77.73</bold>', '74.16']]
In Table 6, the best performance on the MAP metric is achieved by our approach, which verifies the effectiveness of our model. We also observe that our approach exceeds traditional neural models like CNN, LSTM and NTN-LSTM by a noticeable margin.
Towards Scalable and Reliable Capsule Networksfor Challenging NLP Applications
1906.02829v1
Table 2: Comparisons of our NLP-Cap approach and baselines on two text classification benchmarks, where ’-’ denotes methods that failed to scale due to memory issues.
['<bold>Datasets</bold>', '<bold>Metrics</bold>', '<bold>FastXML</bold>', '<bold>PD-Sparse</bold>', '<bold>FastText</bold>', '<bold>Bow-CNN</bold>', '<bold>CNN-Kim</bold>', '<bold>XML-CNN</bold>', '<bold>Cap-Zhao</bold>', '<bold>NLP-Cap</bold>', '<bold>Impv</bold>']
[['RCV1', 'PREC@1', '94.62', '95.16', '95.40', '96.40', '93.54', '96.86', '96.63', '<bold>97.05</bold>', '+0.20%'], ['RCV1', 'PREC@3', '78.40', '79.46', '79.96', '81.17', '76.15', '81.11', '81.02', '<bold>81.27</bold>', '+0.20%'], ['RCV1', 'PREC@5', '54.82', '55.61', '55.64', '<bold>56.74</bold>', '52.94', '56.07', '56.12', '56.33', '-0.72%'], ['[EMPTY]', 'NDCG@1', '94.62', '95.16', '95.40', '96.40', '93.54', '96.88', '96.63', '<bold>97.05</bold>', '+0.20%'], ['[EMPTY]', 'NDCG@3', '89.21', '90.29', '90.95', '92.04', '87.26', '92.22', '92.31', '<bold>92.47</bold>', '+0.17%'], ['[EMPTY]', 'NDCG@5', '90.27', '91.29', '91.68', '92.89', '88.20', '92.63', '92.75', '<bold>93.11</bold>', '+0.52%'], ['EUR-Lex', 'PREC@1', '68.12', '72.10', '71.51', '64.99', '68.35', '75.65', '-', '<bold>80.20</bold>', '+6.01%'], ['EUR-Lex', 'PREC@3', '57.93', '57.74', '60.37', '51.68', '54.45', '61.81', '-', '<bold>65.48</bold>', '+5.93%'], ['EUR-Lex', 'PREC@5', '48.97', '47.48', '50.41', '42.32', '44.07', '50.90', '-', '<bold>52.83</bold>', '+3.79%'], ['[EMPTY]', 'NDCG@1', '68.12', '72.10', '71.51', '64.99', '68.35', '75.65', '-', '<bold>80.20</bold>', '+6.01%'], ['[EMPTY]', 'NDCG@3', '60.66', '61.33', '63.32', '55.03', '59.81', '66.71', '-', '<bold>71.11</bold>', '+6.59%'], ['[EMPTY]', 'NDCG@5', '56.42', '55.93', '58.56', '49.92', '57.99', '64.45', '-', '<bold>68.80</bold>', '+6.75%']]
In Table 2, we can see a noticeable margin brought by our capsule-based approach over the strong baselines on EUR-Lex, and competitive results on RCV1. These results appear to indicate that our approach has superior generalization ability on datasets with fewer training examples, i.e., RCV1 has 729.67 examples per label while EUR-Lex has 15.59 examples.
Suggestion Mining from Online Reviews using ULMFiT
1904.09076v1
Table 1: Dataset Distribution for Sub Task A - Task 9: Suggestion Mining from Online Reviews.
['[BOLD] Label', '[BOLD] Train', '[BOLD] Trial']
[['[BOLD] Suggestion', '2085', '296'], ['[BOLD] Non Suggestion', '6415', '296']]
As evident from Table 1, there is a significant imbalance in the distribution of training instances that are suggestions and non-suggestions. [CONTINUE] For Sub Task A, the organizers shared a training and a validation dataset whose label distribution (suggestion or non-suggestion) is presented in Table 1.
Suggestion Mining from Online Reviews using ULMFiT
1904.09076v1
Table 3: Performance of different models on the provided train and test dataset for Sub Task A.
['[BOLD] Model', '[BOLD] F1 (train)', '[BOLD] F1 (test)']
[['[BOLD] Multinomial Naive Bayes (using Count Vectorizer)', '0.641', '0.517'], ['[BOLD] Logistic Regression (using Count Vectorizer)', '0.679', '0.572'], ['[BOLD] SVM (Linear Kernel) (using TfIdf Vectorizer)', '0.695', '0.576'], ['[BOLD] LSTM (128 LSTM Units)', '0.731', '0.591'], ['[BOLD] Provided Baseline', '0.720', '0.267'], ['[BOLD] ULMFit*', '0.861', '0.701']]
Table 3 shows the performances of all the models that we trained on the provided training dataset. [CONTINUE] The ULMFiT model achieved the best results with an F1-score of 0.861 on the training dataset and an F1-score of 0.701 on the test dataset.
Suggestion Mining from Online Reviews using ULMFiT
1904.09076v1
Table 4: Best performing models for SemEval Task 9: Sub Task A.
['[BOLD] Ranking', '[BOLD] Team Name', '[BOLD] Performance (F1)']
[['[BOLD] 1', 'OleNet', '0.7812'], ['[BOLD] 2', 'ThisIsCompetition', '0.7778'], ['[BOLD] 3', 'm_y', '0.7761'], ['[BOLD] 4', 'yimmon', '0.7629'], ['[BOLD] 5', 'NTUA-ISLab', '0.7488'], ['[BOLD] 10', '[BOLD] MIDAS (our team)', '[BOLD] 0.7011*']]
Table 4 shows the performance of the top 5 models for Sub Task A of SemEval 2019 Task 9. Our team ranked 10th out of 34 participants.
Unpaired Speech Enhancement by Acoustic and Adversarial Supervision for Speech Recognition
1811.02182v1
TABLE III: WERs (%) of obtained using different training data of CHiME-4
['Method', 'Training Data', 'Test WER (%) simulated', 'Test WER (%) real']
[['AAS ( [ITALIC] wAC=1, [ITALIC] wAD=105)', 'simulated', '26.1', '25.2'], ['AAS ( [ITALIC] wAC=1, [ITALIC] wAD=105)', 'real', '37.3', '35.2'], ['AAS ( [ITALIC] wAC=1, [ITALIC] wAD=105)', 'simulated + real', '25.9', '24.7'], ['FSEGAN', 'simulated', '29.1', '29.6']]
Table III shows the WERs on the simulated and real test sets when AAS is trained with different training data. With the simulated dataset as the training data, FSEGAN (29.6%) does not generalize well compared to AAS (25.2%) in terms of WER. With the real dataset as the training data, AAS shows severe overfitting since the size of training data is small. When AAS is trained with simulated and real datasets, it achieves the best result (24.7%) on the real test set.
Unpaired Speech Enhancement by Acoustic and Adversarial Supervision for Speech Recognition
1811.02182v1
TABLE I: WERs (%) and DCE of different speech enhancement methods on Librispeech + DEMAND test set
['Method', 'WER (%)', 'DCE']
[['No enhancement', '17.3', '0.828'], ['Wiener filter', '19.5', '0.722'], ['Minimizing DCE', '15.8', '[BOLD] 0.269'], ['FSEGAN', '14.9', '0.291'], ['AAS ( [ITALIC] wAC=1, [ITALIC] wAD=0)', '15.6', '0.330'], ['AAS ( [ITALIC] wAC=1, [ITALIC] wAD=105)', '[BOLD] 14.4', '0.303'], ['Clean speech', '5.7', '0.0']]
Tables I [CONTINUE] and II show the WER and DCE (normalized by the number of frames) on the test sets of Librispeech + DEMAND and CHiME-4. The Wiener filtering method shows lower DCE but higher WER than no enhancement. We conjecture that the Wiener filter removes some fraction of the noise, but the remaining speech is distorted as well. The adversarial supervision (i.e., wAC = 0, wAD > 0) consistently shows very high WER (i.e., > 90%), because the enhanced sample tends to have less correlation with the noisy speech, as shown in Fig. 3. [CONTINUE] In Librispeech + DEMAND, acoustic supervision (15.6%) and multi-task learning (14.4%) achieve a lower WER than minimizing DCE (15.8%) and FSEGAN (14.9%).
Unpaired Speech Enhancement by Acoustic and Adversarial Supervision for Speech Recognition
1811.02182v1
TABLE II: WERs (%) and DCE of different speech enhancement methods on CHiME4-simulated test set
['Method', 'WER (%)', 'DCE']
[['No enhancement', '38.4', '0.958'], ['Wiener filter', '41.0', '0.775'], ['Minimizing DCE', '31.1', '[BOLD] 0.392'], ['FSEGAN', '29.1', '0.421'], ['AAS ( [ITALIC] wAC=1, [ITALIC] wAD=0)', '27.7', '0.476'], ['AAS ( [ITALIC] wAC=1, [ITALIC] wAD=105)', '[BOLD] 26.1', '0.462'], ['Clean speech', '9.3', '0.0']]
Tables I [CONTINUE] and II show the WER and DCE (normalized by the number of frames) on the test sets of Librispeech + DEMAND and CHiME-4. The Wiener filtering method shows lower DCE but higher WER than no enhancement. We conjecture that the Wiener filter removes some fraction of the noise, but the remaining speech is distorted as well. The adversarial supervision (i.e., wAC = 0, wAD > 0) consistently shows very high WER (i.e., > 90%), because the enhanced sample tends to have less correlation with the noisy speech, as shown in Fig. 3. [CONTINUE] The same tendency is observed in CHiME-4 (i.e., acoustic supervision (27.7%) and multi-task learning (26.1%) show lower WER than minimizing DCE (31.1%) and FSEGAN (29.1%)).
Low-supervision urgency detection and transfer in short crisis messages
1907.06745v1
TABLE IV: Results investigating RQ1 on the Nepal and Kerala datasets. (b) Kerala
['System', 'Accuracy', 'Precision', 'Recall', 'F-Measure']
[['Local', '56.25%', '37.17%', '55.71%', '44.33%'], ['Manual', '65.00%', '47.82%', '[BOLD] 55.77%', '50.63%'], ['Wiki', '63.25%', '42.07%', '46.67%', '44.00%'], ['Local-Manual', '64.50%', '46.90%', '51.86%', '48.47%'], ['Wiki-Manual', '62.25%', '43.56%', '52.63%', '46.93%'], ['Wiki-Manual', '[BOLD] 68.75%∗∗∗', '51.04%', '54.29%', '[BOLD] 52.20%∗∗'], ['[ITALIC] Our Approach', '68.50%', '[BOLD] 51.39%∗∗∗', '52.76%', '51.62%']]
Concerning transfer learning experiments (RQ2), we note that the source-domain embedding model can improve the performance of the target model, and upsampling has a generally positive effect (Tables V-VIII). As expected, transfer learning [CONTINUE] Table VII, a result not found to be significant even at the 90% level). Our approach shows a slight improvement over the upsampling baseline on two of the four scenarios (Tables V and VII) by 2-2.7% on the F-Measure metric, which shows the diminishing returns from mixing source and target labeled training data. Further improving performance by high margins
Low-supervision urgency detection and transfer in short crisis messages
1907.06745v1
TABLE II: Details on datasets used for experiments.
['Dataset', 'Unlabeled / Labeled Messages', 'Urgent / Non-urgent Messages', 'Unique Tokens', 'Avg. Tokens / Message', 'Time Range']
[['Nepal', '6,063/400', '201/199', '1,641', '14', '04/05/2015-05/06/2015'], ['Macedonia', '0/205', '92/113', '129', '18', '09/18/2018-09/21/2018'], ['Kerala', '92,046/400', '125/275', '19,393', '15', '08/17/2018-08/22/2018']]
For evaluating the approaches laid out in Section IV, we consider three real-world datasets described in Table II. [CONTINUE] Originally, all the raw messages for the datasets described in Table II were unlabeled, in that their urgency status was [CONTINUE] unknown. Since the Macedonia dataset only contains 205 messages, and is a small but information-dense dataset, we labeled all messages in Macedonia as urgent or non-urgent (hence, there are no unlabeled messages in Macedonia per Table II). For the two other Twitter-based datasets, we used [CONTINUE] terms of urgent and non-urgent messages. Table II shows that Nepal is roughly balanced, while Kerala is imbalanced. We
Low-supervision urgency detection and transfer in short crisis messages
1907.06745v1
TABLE IV: Results investigating RQ1 on the Nepal and Kerala datasets. (a) Nepal
['System', 'Accuracy', 'Precision', 'Recall', 'F-Measure']
[['Local', '63.97%', '64.27%', '64.50%', '63.93%'], ['Manual', '64.25%', '[BOLD] 70.84%∗∗', '48.50%', '57.11%'], ['Wiki', '67.25%', '66.51%', '69.50%', '67.76%'], ['Local-Manual', '65.75%', '67.96%', '59.50%', '62.96%'], ['Wiki-Local', '67.40%', '65.54%', '68.50%', '66.80%'], ['Wiki-Manual', '67.75%', '70.38%', '63.00%', '65.79%'], ['[ITALIC] Our Approach', '[BOLD] 69.25%∗∗∗', '68.76%', '[BOLD] 70.50%∗∗', '[BOLD] 69.44%∗∗∗']]
Table IV illustrates the results for RQ1 on the Nepal and Kerala datasets. The results illustrate the viability of urgency detection in low-supervision settings (with our approach yielding 69.44% F-Measure on Nepal, at 99% significance compared to the Local baseline), with different feature sets contributing differently to the four metrics. While the local embedding model can reduce precision, for example, it can help the system to improve accuracy and recall. Similarly, manual features reduce recall, but help the system to improve accuracy and precision (sometimes considerably).
Low-supervision urgency detection and transfer in short crisis messages
1907.06745v1
TABLE V: Results investigating RQ2 using the Nepal dataset as source and Macedonia dataset as target.
['System', 'Accuracy', 'Precision', 'Recall', 'F-Measure']
[['Local', '58.76%', '52.96%', '59.19%', '54.95%'], ['Transform', '58.62%', '51.40%', '[BOLD] 60.32%∗', '55.34%'], ['Upsample', '59.38%', '52.35%', '57.58%', '54.76%'], ['[ITALIC] Our Approach', '[BOLD] 61.79%∗', '[BOLD] 55.08%', '59.19%', '[BOLD] 56.90%']]
Concerning transfer learning experiments (RQ2), we note that the source-domain embedding model can improve the performance of the target model, and upsampling has a generally positive effect (Tables V-VIII).
Low-supervision urgency detection and transfer in short crisis messages
1907.06745v1
TABLE VI: Results investigating RQ2 using the Kerala dataset as source and Macedonia dataset as target.
['System', 'Accuracy', 'Precision', 'Recall', 'F-Measure']
[['Local', '58.76%', '52.96%', '59.19%', '54.95%'], ['Transform', '62.07%', '55.45%', '64.52%', '59.09%'], ['Upsample', '[BOLD] 64.90%∗∗∗', '[BOLD] 57.98%∗', '[BOLD] 65.48%∗∗∗', '[BOLD] 61.30%∗∗∗'], ['[ITALIC] Our Approach', '62.90%', '56.28%', '62.42%', '58.91%']]
uncertain in low-supervision settings. Concerning transfer learning experiments (RQ2), we note that the source-domain embedding model can improve the performance of the target model, and upsampling has a generally positive effect (Tables V-VIII). As expected, transfer learning [CONTINUE] supervision urgency detection on a single dataset (RQ1). Note that at least one of the transfer learning methods always bests the Local baseline on all metrics (except precision in Table VII, a result not found to be significant even at the [CONTINUE] Table VII, a result not found to be significant even at the 90% level). Our approach shows a slight improvement over the upsampling baseline on two of the four scenarios (Tables V and VII) by 2-2.7% on the F-Measure metric, which shows the diminishing returns from mixing source and target labeled training data. Further improving performance by high margins
Low-supervision urgency detection and transfer in short crisis messages
1907.06745v1
TABLE VII: Results investigating RQ2 using the Nepal dataset as source and Kerala dataset as target.
['System', 'Accuracy', 'Precision', 'Recall', 'F-Measure']
[['Local', '58.65%', '[BOLD] 42.40%', '47.47%', '36.88%'], ['Transform', '53.74%', '32.89%', '[BOLD] 57.47%∗', '41.42%'], ['Upsample', '53.88%', '31.71%', '56.32%', '40.32%'], ['[ITALIC] Our Approach', '[BOLD] 58.79%', '35.26%', '55.89%', '[BOLD] 43.03%∗']]
uncertain in low-supervision settings. Concerning transfer learning experiments (RQ2), we note that the source-domain embedding model can improve the performance of the target model, and upsampling has a generally positive effect (Tables V-VIII). As expected, transfer learning [CONTINUE] The best F-Measure achieved on Nepal in Table IV was more than 69%, but when using Kerala as source, only 62.5% F-Measure could be achieved (Table VIII).
One-to-X analogical reasoning on word embeddings: a case for diachronic armed conflict prediction from news texts
1907.12674v1
Table 4: Average synchronic performance
['[EMPTY]', '[BOLD] Algorithm', '[BOLD] Precision', '[BOLD] Recall', '[BOLD] F1']
[['Giga', 'Baseline', '0.28', '0.74', '0.41'], ['Giga', 'Threshold', '0.60', '0.69', '[BOLD] 0.63'], ['NOW', 'Baseline', '0.39', '0.88', '0.53'], ['NOW', 'Threshold', '0.50', '0.77', '[BOLD] 0.60']]
As a sanity check, we also evaluated it synchronically, that is when Tn and rn are tested on the locations from the same year (including peaceful ones). In this easier setup, we observed exactly the same trends (Table 4).
One-to-X analogical reasoning on word embeddings: a case for diachronic armed conflict prediction from news texts
1907.12674v1
Table 2: Average recall of diachronic analogy inference
['[BOLD] Dataset', '[BOLD] @1', '[BOLD] @5', '[BOLD] @10']
[['Gigaword', '0.356', '0.555', '0.610'], ['NOW', '0.442', '0.557', '0.578']]
A replication experiment: In Table 2 we replicate the experiments from (Kutuzov et al., 2017) on both sets. It follows their evaluation scheme, where only the presence of the correct armed group name in the k nearest neighbours of ˆi mattered, and only conflict areas were present in the yearly test sets. Essentially, it measures the recall@k, without penalizing the models for yielding incorrect answers along with the correct ones, and never asking questions having no correct answer at all (e.g., peaceful locations). The performance is very similar on both sets,
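Under this scheme a prediction counts as correct whenever the gold armed-group name appears anywhere in the top-k neighbour list. The snippet below is a minimal sketch of that recall@k computation; the variable names are mine, not the paper's.

```python
# Minimal sketch of the recall@k evaluation described above (illustrative
# names, not the paper's code): a prediction is correct if the gold answer
# appears anywhere among the k nearest neighbours of the predicted vector.
def recall_at_k(neighbour_lists, gold_answers, k):
    hits = sum(1 for neighbours, gold in zip(neighbour_lists, gold_answers)
               if gold in neighbours[:k])
    return hits / len(gold_answers)
```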
One-to-X analogical reasoning on word embeddings: a case for diachronic armed conflict prediction from news texts
1907.12674v1
Table 3: Average diachronic performance
['[EMPTY]', '[BOLD] Algorithm', '[BOLD] Precision', '[BOLD] Recall', '[BOLD] F1']
[['Giga', 'Baseline', '0.19', '0.51', '0.28'], ['Giga', 'Threshold', '0.46', '0.41', '[BOLD] 0.41'], ['NOW', 'Baseline', '0.26', '0.53', '0.34'], ['NOW', 'Threshold', '0.42', '0.41', '[BOLD] 0.41']]
Table 3 shows the diachronic performance of our system in the setup when the matrix Tn and the threshold rn are applied to the year n + 1. For both the Gigaword and NOW datasets (and the corresponding embeddings), using the cosine-based threshold decreases recall and increases precision (differences are statistically significant with t-test, p < 0.05). At the same time, the integral [CONTINUE] metric of F1 consistently improves (p < 0.01). Thus, the thresholding reduces prediction noise in the one-to-X analogy task without sacrificing too many correct answers. In our particular case, this helps to more precisely detect events of armed conflict termination (where no insurgents should be predicted for a location), not only their start.
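The prediction step described here applies the year-n projection matrix Tn and cosine threshold rn to a location vector from year n+1, returning only candidates that clear the threshold (an empty set then corresponds to a "peaceful" prediction). The code below is a hedged sketch of that step only; how Tn and rn are estimated is outside the snippet, and all names are illustrative.

```python
# Hedged sketch of the diachronic one-to-X prediction step (not the authors'
# code): project the location vector with T_n, then keep candidates whose
# cosine similarity to the projected vector is at least r_n.
import numpy as np

def predict_insurgents(T_n, r_n, loc_vec, cand_vocab, cand_vecs):
    """cand_vocab: list of candidate group names; cand_vecs: matching rows."""
    i_hat = T_n @ loc_vec
    i_hat = i_hat / np.linalg.norm(i_hat)
    sims = cand_vecs @ i_hat / np.linalg.norm(cand_vecs, axis=1)
    # An empty result means the location is predicted to be peaceful.
    return [cand_vocab[j] for j in np.argsort(-sims) if sims[j] >= r_n]
```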
Scalable and Accurate Dialogue State Tracking via Hierarchical Sequence Generation
1909.00754v2
Table 4: The ablation study on the WoZ2.0 dataset with the joint goal accuracy on the test set. For “- Hierachical-Attn”, we remove the residual connections between the attention modules in the CMR decoders and all the attention memory access are based on the output from the LSTM. For “- MLP”, we further replace the MLP with a single linear layer with the non-linear activation.
['[BOLD] Model', '[BOLD] Joint Acc.']
[['COMER', '88.64%'], ['- Hierachical-Attn', '86.69%'], ['- MLP', '83.24%']]
Table 4: The ablation study on the WoZ2.0 dataset with the joint goal accuracy on the test set. [CONTINUE] The effectiveness of our hierarchical attention design is proved by an accuracy drop of 1.95% after removing residual connections and the hierarchical stack of our attention modules.
Scalable and Accurate Dialogue State Tracking via Hierarchical Sequence Generation
1909.00754v2
Table 3: The joint goal accuracy of the DST models on the WoZ2.0 test set and the MultiWoZ test set. We also include the Inference Time Complexity (ITC) for each model as a metric for scalability. The baseline accuracy for the WoZ2.0 dataset is the Delexicalisation-Based (DB) Model Mrksic et al. (2017), while the baseline for the MultiWoZ dataset is taken from the official website of MultiWoZ Budzianowski et al. (2018).
['[BOLD] DST Models', '[BOLD] Joint Acc. WoZ 2.0', '[BOLD] Joint Acc. MultiWoZ', '[BOLD] ITC']
[['Baselines Mrksic et al. ( 2017 )', '70.8%', '25.83%', '[ITALIC] O( [ITALIC] mn)'], ['NBT-CNN Mrksic et al. ( 2017 )', '84.2%', '-', '[ITALIC] O( [ITALIC] mn)'], ['StateNet_PSI Ren et al. ( 2018 )', '[BOLD] 88.9%', '-', '[ITALIC] O( [ITALIC] n)'], ['GLAD Nouri and Hosseini-Asl ( 2018 )', '88.5%', '35.58%', '[ITALIC] O( [ITALIC] mn)'], ['HyST (ensemble) Goel et al. ( 2019 )', '-', '44.22%', '[ITALIC] O( [ITALIC] n)'], ['DSTRead (ensemble) Gao et al. ( 2019 )', '-', '42.12%', '[ITALIC] O( [ITALIC] n)'], ['TRADE Wu et al. ( 2019 )', '-', '48.62%', '[ITALIC] O( [ITALIC] n)'], ['COMER', '88.6%', '[BOLD] 48.79%', '[ITALIC] O(1)']]
Table 3: The joint goal accuracy of the DST models on the WoZ2.0 test set and the MultiWoZ test set. We also include the Inference Time Complexity (ITC) for each model as a metric for scalability [CONTINUE] Table 3 compares our model with the previous state-of-the-art on both the WoZ2.0 test set and the MultiWoZ test set. For the WoZ2.0 dataset, we maintain performance at the level of the state-of-the-art, with a marginal drop of 0.3% compared with previous work. Considering the fact that WoZ2.0 is a relatively small dataset, this small difference does not represent a significant performance drop. On the multi-domain dataset, MultiWoZ, our model achieves a joint goal accuracy of 48.79%, which marginally outperforms the previous state-of-the-art.
Scalable and Accurate Dialogue State Tracking via Hierarchical Sequence Generation
1909.00754v2
Table 5: The ablation study on the MultiWoZ dataset with the joint domain accuracy (JD Acc.), joint domain-slot accuracy (JDS Acc.) and joint goal accuracy (JG Acc.) on the test set. For “- moveDrop”, we move the dropout layer to be in front of the final linear layer before the Softmax. For “- postprocess”, we further fix the decoder embedding layer and remove the post-processing during model evaluation. For “- ShareParam”, we further remove the parameter sharing mechanism on the encoders and the attention modules. For “- Order”, we further arrange the order of the slots according to its global frequencies in the training set instead of the local frequencies given the domain it belongs to. For “- Nested”, we do not generate domain sequences but generate combined slot sequences which combines the domain and the slot together. For “- BlockGrad”, we further remove the gradient blocking mechanism in the CMR decoder.
['[BOLD] Model', '[BOLD] JD Acc.', '[BOLD] JDS Acc.', '[BOLD] JG Acc.']
[['COMER', '95.52%', '55.81%', '48.79%'], ['- moveDrop', '95.34%', '55.08%', '47.19%'], ['- postprocess', '95.53%', '54.74%', '45.72%'], ['- ShareParam', '94.96%', '54.40%', '44.38%'], ['- Order', '95.55%', '55.06%', '42.84%'], ['- Nested', '-', '49.58%', '40.57%'], ['- BlockGrad', '-', '49.36%', '39.15%']]
Table 5: The ablation study on the MultiWoZ dataset with the joint domain accuracy (JD Acc.), joint domain-slot accuracy (JDS Acc.) and joint goal accuracy (JG Acc.) on the test set. [CONTINUE] From Table 5, we can further calculate that given the correct slot prediction, COMER has an 87.42% chance of making the correct value prediction. While COMER has done a great job on domain prediction (95.52%) and value prediction (87.42%), the accuracy of the slot prediction given the correct domain is only 58.43%. [CONTINUE] We can also see that the JDS Acc. has an absolute boost of 5.48% when we switch from the combined slot representation to the nested tuple representation.
Deriving Machine Attention from Human Rationales
1808.09367v1
Table 4: Accuracy of transferring between domains. Models with † use labeled data from source domains and unlabeled data from the target domain. Models with ‡ use human rationales on the target task.
['Source', 'Target', 'Svm', 'Ra-Svm‡', 'Ra-Cnn‡', 'Trans†', 'Ra-Trans‡†', 'Ours‡†', 'Oracle†']
[['Beer look + Beer aroma + Beer palate', 'Hotel location', '78.65', '79.09', '79.28', '80.42', '82.10', '[BOLD] 84.52', '85.43'], ['Beer look + Beer aroma + Beer palate', 'Hotel cleanliness', '86.44', '86.68', '89.01', '86.95', '87.15', '[BOLD] 90.66', '92.09'], ['Beer look + Beer aroma + Beer palate', 'Hotel service', '85.34', '86.61', '87.91', '87.37', '86.40', '[BOLD] 89.93', '92.42']]
Table 4 presents the results of domain transfer using 200 training examples. We use the three aspects of the beer review data together as our source tasks while use the three aspects of hotel review data as the target. Our model (OURS) shows marked performance improvement. The error reduction over the best baseline is 15.08% on average.
Deriving Machine Attention from Human Rationales
1808.09367v1
Table 3: Accuracy of transferring between aspects. Models with † use labeled data from source aspects. Models with ‡ use human rationales on the target aspect.
['Source', 'Target', 'Svm', 'Ra-Svm‡', 'Ra-Cnn‡', 'Trans†', 'Ra-Trans‡†', 'Ours‡†', 'Oracle†']
[['Beer aroma+palate', 'Beer look', '74.41', '74.83', '74.94', '72.75', '76.41', '[BOLD] 79.53', '80.29'], ['Beer look+palate', 'Beer aroma', '68.57', '69.23', '67.55', '69.92', '76.45', '[BOLD] 77.94', '78.11'], ['Beer look+aroma', 'Beer palate', '63.88', '67.82', '65.72', '74.66', '73.40', '[BOLD] 75.24', '75.50']]
Table 3 summarizes the results of aspect transfer on the beer review dataset. Our model (OURS) obtains substantial gains in accuracy over the baselines across all three target aspects. It closely matches the performance of ORACLE with only 0.40% absolute difference. [CONTINUE] Specifically, all rationale-augmented methods (RA-SVM, RA-TRANS and OURS) outperform their rationale-free counterparts on average. This confirms the value of human rationales in the low-resource settings. We observe that the transfer baseline that directly uses rationale as augmented supervision (RA-TRANS) underperforms ORACLE by a large margin. This validates our hypothesis that human rationales and attention are different.
Deriving Machine Attention from Human Rationales
1808.09367v1
Table 5: Ablation study on domain transfer from beer to hotel.
['Model', 'Hotel location', 'Hotel cleanliness', 'Hotel service']
[['Ours', '[BOLD] 84.52', '[BOLD] 90.66', '[BOLD] 89.93'], ['w/o L [ITALIC] wd', '82.36', '89.79', '89.61'], ['w/o L [ITALIC] lm', '82.47', '90.05', '89.75']]
Table 5 presents the results of an ablation study of our model in the setting of domain transfer. As this table indicates, both the language modeling objective and the Wasserstein [CONTINUE] distance contribute similarly to the task, with the Wasserstein distance having a bigger impact.
Keyphrase Generation for Scientific Articles using GANs
1909.12229v1
Table 1: Extractive and Abstractive Keyphrase Metrics
['Model', 'Score', 'Inspec', 'Krapivin', 'NUS', 'KP20k']
[['Catseq(Ex)', 'F1@5', '0.2350', '0.2680', '0.3330', '0.2840'], ['[EMPTY]', 'F1@M', '0.2864', '0.3610', '0.3982', '0.3661'], ['catSeq-RL(Ex.)', 'F1@5', '[BOLD] 0.2501', '[BOLD] 0.2870', '[BOLD] 0.3750', '[BOLD] 0.3100'], ['[EMPTY]', 'F1@M', '[BOLD] 0.3000', '0.3630', '[BOLD] 0.4330', '[BOLD] 0.3830'], ['GAN(Ex.)', 'F1@5', '0.2481', '0.2862', '0.3681', '0.3002'], ['[EMPTY]', 'F1@M', '0.2970', '[BOLD] 0.3700', '0.4300', '0.3810'], ['catSeq(Abs.)', 'F1@5', '0.0045', '0.0168', '0.0126', '0.0200'], ['[EMPTY]', 'F1@M', '0.0085', '0.0320', '0.0170', '0.0360'], ['catSeq-RL(Abs.)', 'F1@5', '0.0090', '[BOLD] 0.0262', '0.0190', '0.0240'], ['[EMPTY]', 'F1@M', '0.0017', '[BOLD] 0.0460', '0.0310', '0.0440'], ['GAN(Abs.)', 'F1@5', '[BOLD] 0.0100', '0.0240', '[BOLD] 0.0193', '[BOLD] 0.0250'], ['[EMPTY]', 'F1@M', '[BOLD] 0.0190', '0.0440', '[BOLD] 0.0340', '[BOLD] 0.0450']]
We compare our proposed approach against two baseline models, catSeq (Yuan et al. 2018) and the RL-based catSeq model (Chan et al. 2019), in terms of F1 scores as explained in (Yuan et al. 2018). The results, summarized in Table 1, are broken down in terms of performance on extractive and abstractive keyphrases. For extractive keyphrases, our proposed model performs better than the pre-trained catSeq model on all datasets but is slightly worse than catSeq-RL, except on Krapivin where it obtains the best F1@M of 0.37. On the other hand, for abstractive keyphrases, our model performs better than the other two baselines on three of the four datasets, suggesting that GAN models are more effective in the generation of keyphrases.
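F1@5 and F1@M here follow Yuan et al. (2018): F1 of the top-5 predicted keyphrases and of all M phrases the model emits, against the gold set. The snippet below is a hedged sketch of that metric; real evaluations typically stem and deduplicate phrases first, which is omitted here.

```python
# Hedged sketch of F1@5 / F1@M for keyphrase evaluation (after Yuan et al.,
# 2018). Stemming and deduplication, used in the actual evaluation scripts,
# are omitted for brevity.
def f1_at_k(predicted, gold, k=None):
    preds = predicted if k is None else predicted[:k]   # k=None -> F1@M
    gold_set = set(gold)
    if not preds or not gold_set:
        return 0.0
    correct = sum(1 for p in preds if p in gold_set)
    prec = correct / len(preds)
    rec = correct / len(gold_set)
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0
```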
Keyphrase Generation for Scientific Articles using GANs
1909.12229v1
Table 2: α-nDCG@5 metrics
['Model', 'Inspec', 'Krapivin', 'NUS', 'KP20k']
[['Catseq', '0.87803', '0.781', '0.82118', '0.804'], ['Catseq-RL', '0.8602', '[BOLD] 0.786', '0.83', '0.809'], ['GAN', '[BOLD] 0.891', '0.771', '[BOLD] 0.853', '[BOLD] 0.85']]
We also evaluated the models in terms of α-nDCG@5 (Clarke et al. 2008). The results are summarized in Table 2. Our model obtains the best performance on three out of the four datasets. The difference is most prevalent in KP20k, the largest of the four datasets, where our GAN model (at 0.85) is nearly 5% better than both the other baseline models.
Assessing Gender Bias in Machine Translation – A Case Study with Google Translate
1809.02208v4
Table 7: Percentage of female, male and neutral gender pronouns obtained for each of the merged occupation category, averaged over all occupations in said category and tested languages detailed in Table
['Category', 'Female (%)', 'Male (%)', 'Neutral (%)']
[['Service', '10.5', '59.548', '16.476'], ['STEM', '4.219', '71.624', '11.181'], ['Farming / Fishing / Forestry', '12.179', '62.179', '14.744'], ['Corporate', '9.167', '66.042', '14.861'], ['Healthcare', '23.305', '49.576', '15.537'], ['Legal', '11.905', '72.619', '10.714'], ['Arts / Entertainment', '10.36', '67.342', '11.486'], ['Education', '23.485', '53.03', '9.091'], ['Production', '14.331', '51.199', '18.245'], ['Construction / Extraction', '8.578', '61.887', '17.525'], ['Total', '11.76', '58.93', '15.939']]
To simplify our dataset, we have decided to focus our work on job positions – which, we believe, are an interesting window into the nature of gender bias – and were able to obtain a comprehensive list of professional occupations from the Bureau of Labor Statistics' detailed occupations table, from the United States Department of Labor. The values inside, however, had to be expanded since each line contained multiple occupations and sometimes very specific ones. Fortunately, this table also provided the percentage of women's participation in the jobs shown, for those that had more than 50 thousand workers. We filtered some of these because they were too generic ("Computer occupations, all other", and others) or because they had gender-specific words for the profession ("host/hostess", "waiter/waitress"). [CONTINUE] and Table 7 summarizes it even further by coalescing occupation categories into broader groups to ease interpretation.
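The filtering step described above drops overly generic BLS entries and occupations whose names are inherently gendered. The toy sketch below illustrates that kind of filter; the keyword lists are illustrative stand-ins, not the authors' actual criteria.

```python
# Toy sketch of the occupation filtering described above; the keyword lists
# are illustrative, not the authors' actual filter.
GENERIC_MARKERS = ("all other",)
GENDERED_TERMS = ("waiter", "waitress", "host", "hostess")

def keep_occupation(name: str) -> bool:
    lowered = name.lower()
    if any(marker in lowered for marker in GENERIC_MARKERS):
        return False            # e.g. "Computer occupations, all other"
    if any(term in lowered for term in GENDERED_TERMS):
        return False            # gender-specific job titles
    return True

occupations = ["Computer occupations, all other", "Waiter/waitress", "Engineer"]
print([o for o in occupations if keep_occupation(o)])   # ['Engineer']
```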
Assessing Gender Bias in Machine Translation – A Case Study with Google Translate
1809.02208v4
Table 6: Percentage of female, male and neutral gender pronouns obtained for each BLS occupation category, averaged over all occupations in said category and tested languages detailed in Table
['Category', 'Female (%)', 'Male (%)', 'Neutral (%)']
[['Office and administrative support', '11.015', '58.812', '16.954'], ['Architecture and engineering', '2.299', '72.701', '10.92'], ['Farming, fishing, and forestry', '12.179', '62.179', '14.744'], ['Management', '11.232', '66.667', '12.681'], ['Community and social service', '20.238', '62.5', '10.119'], ['Healthcare support', '25.0', '43.75', '17.188'], ['Sales and related', '8.929', '62.202', '16.964'], ['Installation, maintenance, and repair', '5.22', '58.333', '17.125'], ['Transportation and material moving', '8.81', '62.976', '17.5'], ['Legal', '11.905', '72.619', '10.714'], ['Business and financial operations', '7.065', '67.935', '15.58'], ['Life, physical, and social science', '5.882', '73.284', '10.049'], ['Arts, design, entertainment, sports, and media', '10.36', '67.342', '11.486'], ['Education, training, and library', '23.485', '53.03', '9.091'], ['Building and grounds cleaning and maintenance', '12.5', '68.333', '11.667'], ['Personal care and service', '18.939', '49.747', '18.434'], ['Healthcare practitioners and technical', '22.674', '51.744', '15.116'], ['Production', '14.331', '51.199', '18.245'], ['Computer and mathematical', '4.167', '66.146', '14.062'], ['Construction and extraction', '8.578', '61.887', '17.525'], ['Protective service', '8.631', '65.179', '12.5'], ['Food preparation and serving related', '21.078', '58.333', '17.647'], ['Total', '11.76', '58.93', '15.939']]
What we have found is that Google Translate does indeed translate sentences with male pronouns with greater probability than it does with either female or gender-neutral pronouns, in general. Furthermore, this bias is seemingly aggravated for fields suggested to be troubled by male stereotypes, such as life and physical sciences, architecture, engineering, computer science and mathematics. Table 6 summarizes these data,
Racial Bias in Hate Speech and Abusive Language Detection Datasets
1905.12516v1
Table 4: Experiment 2, t= “b*tch”
['Dataset', 'Class', 'ˆ [ITALIC] piblack', 'ˆ [ITALIC] piwhite', '[ITALIC] t', '[ITALIC] p', 'ˆ [ITALIC] piblackˆ [ITALIC] piwhite']
[['[ITALIC] Waseem and Hovy', 'Racism', '0.010', '0.010', '-0.632', '[EMPTY]', '0.978'], ['[EMPTY]', 'Sexism', '0.963', '0.944', '20.064', '***', '1.020'], ['[ITALIC] Waseem', 'Racism', '0.011', '0.011', '-1.254', '[EMPTY]', '0.955'], ['[EMPTY]', 'Sexism', '0.349', '0.290', '28.803', '***', '1.203'], ['[EMPTY]', 'Racism and sexism', '0.012', '0.012', '-0.162', '[EMPTY]', '0.995'], ['[ITALIC] Davidson et al.', 'Hate', '0.017', '0.015', '4.698', '***', '1.152'], ['[EMPTY]', 'Offensive', '0.988', '0.991', '-6.289', '***', '0.997'], ['[ITALIC] Golbeck et al.', 'Harassment', '0.099', '0.091', '6.273', '***', '1.091'], ['[ITALIC] Founta et al.', 'Hate', '0.074', '0.027', '46.054', '***', '2.728'], ['[EMPTY]', 'Abusive', '0.925', '0.968', '-41.396', '***', '0.956'], ['[EMPTY]', 'Spam', '0.010', '0.010', '0.000', '[EMPTY]', '1.000']]
The results for the second variation of Experiment 2, where we conditioned on the word "b*tch", are shown in Table 4. [CONTINUE] We see similar results for Waseem and Hovy (2016) and Waseem (2016). In both cases the classifiers trained upon their data are still more likely to flag black-aligned tweets as sexism. The Waseem and Hovy (2016) classifier is particularly sensitive to the word "b*tch", with 96% of black-aligned and 94% of white-aligned [CONTINUE] tweets predicted to belong to this class. For Davidson et al. (2017) almost all of these tweets are classified as offensive, however those in the black-aligned corpus are 1.15 times as frequently classified as hate speech. We see a very similar result for Golbeck et al. (2017) compared to the previous experiment, with black-aligned tweets flagged as harassment at 1.1 times the rate of those in the white-aligned corpus. Finally, for the Founta et al. (2018) classifier we see a substantial racial disparity, with black-aligned tweets classified as hate speech at 2.7 times the rate of white-aligned ones, a higher rate than in Experiment 1.
Racial Bias in Hate Speech and Abusive Language Detection Datasets
1905.12516v1
Table 1: Classifier performance
['Dataset', 'Class', 'Precision', 'Recall', 'F1']
[['[ITALIC] W. & H.', 'Racism', '0.73', '0.79', '0.76'], ['[EMPTY]', 'Sexism', '0.69', '0.73', '0.71'], ['[EMPTY]', 'Neither', '0.88', '0.85', '0.86'], ['[ITALIC] W.', 'Racism', '0.56', '0.77', '0.65'], ['[EMPTY]', 'Sexism', '0.62', '0.73', '0.67'], ['[EMPTY]', 'R. & S.', '0.56', '0.62', '0.59'], ['[EMPTY]', 'Neither', '0.95', '0.92', '0.94'], ['[ITALIC] D. et al.', 'Hate', '0.32', '0.53', '0.4'], ['[EMPTY]', 'Offensive', '0.96', '0.88', '0.92'], ['[EMPTY]', 'Neither', '0.81', '0.95', '0.87'], ['[ITALIC] G. et al.', 'Harass.', '0.41', '0.19', '0.26'], ['[EMPTY]', 'Non.', '0.75', '0.9', '0.82'], ['[ITALIC] F. et al.', 'Hate', '0.33', '0.42', '0.37'], ['[EMPTY]', 'Abusive', '0.87', '0.88', '0.88'], ['[EMPTY]', 'Spam', '0.5', '0.7', '0.58'], ['[EMPTY]', 'Neither', '0.88', '0.77', '0.82']]
The performance of these models on the 20% held-out validation data is reported in Table 1. [CONTINUE] Overall we see varying performance across the classifiers, with some performing much [CONTINUE] better out-of-sample than others. In particular, we see that hate speech and harassment are particularly difficult to detect. Since we are primarily interested in within-classifier, between-corpora performance, any variation between classifiers should not impact our results.
Racial Bias in Hate Speech and Abusive Language Detection Datasets
1905.12516v1
Table 2: Experiment 1
['Dataset', 'Class', 'ˆ [ITALIC] piblack', 'ˆ [ITALIC] piwhite', '[ITALIC] t', '[ITALIC] p', 'ˆ [ITALIC] piblackˆ [ITALIC] piwhite']
[['[ITALIC] Waseem and Hovy', 'Racism', '0.001', '0.003', '-20.818', '***', '0.505'], ['[EMPTY]', 'Sexism', '0.083', '0.048', '101.636', '***', '1.724'], ['[ITALIC] Waseem', 'Racism', '0.001', '0.001', '0.035', '[EMPTY]', '1.001'], ['[EMPTY]', 'Sexism', '0.023', '0.012', '64.418', '***', '1.993'], ['[EMPTY]', 'Racism and sexism', '0.002', '0.001', '4.047', '***', '1.120'], ['[ITALIC] Davidson et al.', 'Hate', '0.049', '0.019', '120.986', '***', '2.573'], ['[EMPTY]', 'Offensive', '0.173', '0.065', '243.285', '***', '2.653'], ['[ITALIC] Golbeck et al.', 'Harassment', '0.032', '0.023', '39.483', '***', '1.396'], ['[ITALIC] Founta et al.', 'Hate', '0.111', '0.061', '122.707', '***', '1.812'], ['[EMPTY]', 'Abusive', '0.178', '0.080', '211.319', '***', '2.239'], ['[EMPTY]', 'Spam', '0.028', '0.015', '63.131', '***', '1.854']]
The results of Experiment 1 are shown in Table 2. [CONTINUE] We observe substantial racial disparities in the performance of all classifiers. In all but one of the comparisons, there are statistically significant (p < 0.001) differences and in all but one of these we see that tweets in the black-aligned corpus are assigned negative labels more frequently than those by whites. The only case where black-aligned tweets are classified into a negative class less frequently than white-aligned tweets is the racism class in the Waseem and Hovy (2016) classifier. Note, however, the extremely low rate at which tweets are predicted to belong to this class for both groups. On the other hand, this classifier is 1.7 times more likely to classify tweets in the black-aligned corpus as sexist. For Waseem (2016) we see that there is no significant difference in the estimated rates at which tweets are classified as racist across groups, although the rates remain low. Tweets in the black-aligned corpus are classified as containing sexism almost twice as frequently and 1.1 times as frequently classified as containing racism and sexism compared to those in the white-aligned corpus. Moving on to Davidson et al. (2017), we find large disparities, with around 5% of tweets in the black-aligned corpus classified as hate speech compared to 2% of those in the white-aligned set. Similarly, 17% of black-aligned tweets are predicted to contain offensive language compared to 6.5% of white-aligned tweets. The classifier trained on the Golbeck et al. (2017) dataset predicts black-aligned tweets to be harassment 1.4 times as frequently as white-aligned tweets. The Founta et al. (2018) classifier labels around 11% of tweets in the black-aligned corpus as hate speech and almost 18% as abusive, compared to 6% and 8% of white-aligned tweets respectively. It also classifies black-aligned tweets as spam 1.8 times as frequently. The results of Experiment 2 are consistent with the previous results, although there are some notable differences. In most cases the racial disparities persist, although they are generally smaller in magnitude and in some cases the direction even changes.
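Each row in Table 2 reports the estimated rate at which tweets from the black- and white-aligned corpora receive a label, the ratio of those rates, and a significance test. The sketch below is a hedged illustration of that comparison using a two-proportion z-test; the paper reports t statistics, which for samples of this size are essentially equivalent, but this is an approximation rather than the authors' exact procedure.

```python
# Hedged sketch of the between-corpus comparison: label rates, their ratio,
# and a two-proportion z-test (an approximation of the reported t-test).
import math

def compare_rates(n_pos_black, n_black, n_pos_white, n_white):
    p_b = n_pos_black / n_black
    p_w = n_pos_white / n_white
    p_pool = (n_pos_black + n_pos_white) / (n_black + n_white)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_black + 1 / n_white))
    z = (p_b - p_w) / se if se else float("nan")
    return {"p_black": p_b, "p_white": p_w, "ratio": p_b / p_w, "z": z}
```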
Racial Bias in Hate Speech and Abusive Language Detection Datasets
1905.12516v1
Table 3: Experiment 2, t= “n*gga”
['Dataset', 'Class', 'ˆ [ITALIC] piblack', 'ˆ [ITALIC] piwhite', '[ITALIC] t', '[ITALIC] p', 'ˆ [ITALIC] piblackˆ [ITALIC] piwhite']
[['[ITALIC] Waseem and Hovy', 'Racism', '0.010', '0.011', '-1.462', '[EMPTY]', '0.960'], ['[EMPTY]', 'Sexism', '0.147', '0.100', '31.932', '***', '1.479'], ['[ITALIC] Waseem', 'Racism', '0.010', '0.010', '0.565', '[EMPTY]', '1.027'], ['[EMPTY]', 'Sexism', '0.040', '0.026', '18.569', '***', '1.554'], ['[EMPTY]', 'Racism and sexism', '0.011', '0.010', '0.835', '[EMPTY]', '1.026'], ['[ITALIC] Davidson et al.', 'Hate', '0.578', '0.645', '-31.248', '***', '0.896'], ['[EMPTY]', 'Offensive', '0.418', '0.347', '32.895', '***', '1.202'], ['[ITALIC] Golbeck et al.', 'Harassment', '0.085', '0.078', '5.984', '***', '1.096'], ['[ITALIC] Founta et al.', 'Hate', '0.912', '0.930', '-15.037', '***', '0.980'], ['[EMPTY]', 'Abusive', '0.086', '0.067', '16.131', '***', '1.296'], ['[EMPTY]', 'Spam', '0.010', '0.010', '-1.593', '[EMPTY]', '1.000']]
Table 3 shows that for tweets containing the word "n*gga", classifiers trained on Waseem and Hovy (2016) and Waseem (2016) both predict black-aligned tweets to be instances of sexism approximately 1.5 times as often as white-aligned tweets. The classifier trained on the Davidson et al. (2017) data is significantly less likely to classify black-aligned tweets as hate speech, although it is more likely to classify them as offensive. The classifier trained on Golbeck et al. (2017) classifies tweets from both groups as harassment at a higher rate than in the previous experiment, although the disparity is narrower. For the Founta et al. (2018) classifier we see that black-aligned tweets are slightly less frequently considered to be hate speech but are much more frequently classified as abusive.
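Experiment 2 performs the same comparison restricted to tweets containing a given term t. A short sketch of that conditioning step, under the same hypothetical data layout as above:

```python
import numpy as np

def rates_given_term(tweets_black, preds_black, tweets_white, preds_white, term):
    """Restrict the rate comparison to tweets containing `term`.

    tweets_*: lists of tweet strings; preds_*: matching 0/1 arrays for the
    class of interest. Significance testing as in the Experiment 1 sketch.
    """
    mask_b = np.array([term in t.lower() for t in tweets_black])
    mask_w = np.array([term in t.lower() for t in tweets_white])
    p_b = np.asarray(preds_black)[mask_b].mean()
    p_w = np.asarray(preds_white)[mask_w].mean()
    return p_b, p_w, p_b / p_w
```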
Sparse and Structured Visual Attention
2002.05556v1
Table 1: Automatic evaluation of caption generation on MSCOCO and Flickr30k.
['[EMPTY]', 'MSCOCO spice', 'MSCOCO cider', 'MSCOCO rouge [ITALIC] L', 'MSCOCO bleu4', 'MSCOCO meteor', 'MSCOCO rep↓', 'Flickr30k spice', 'Flickr30k cider', 'Flickr30k rouge [ITALIC] L', 'Flickr30k bleu4', 'Flickr30k meteor', 'Flickr30k rep↓']
[['softmax', '18.4', '0.967', '52.9', '29.9', '24.9', '3.76', '13.5', '0.443', '44.2', '19.9', '19.1', '6.09'], ['sparsemax', '[BOLD] 18.9', '[BOLD] 0.990', '[BOLD] 53.5', '[BOLD] 31.5', '[BOLD] 25.3', '3.69', '[BOLD] 13.7', '[BOLD] 0.444', '[BOLD] 44.3', '[BOLD] 20.7', '[BOLD] 19.3', '5.84'], ['TVmax', '18.5', '0.974', '53.1', '29.9', '25.1', '[BOLD] 3.17', '13.3', '0.438', '44.2', '20.5', '19.0', '[BOLD] 3.97']]
As can be seen in Table 1, sparsemax and TVMAX achieve better results overall when compared with softmax, indicating that the use of selective attention leads to better captions. [CONTINUE] Moreover, for TVMAX, the automatic metric results are slightly worse than for sparsemax but still superior to softmax on MSCOCO and similar on Flickr30k. [CONTINUE] Selective attention mechanisms like sparsemax and especially TVMAX reduce repetition, as measured by the REP metric reported in Table 1.
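The selective attention referred to here is the sparsemax transformation (Martins and Astudillo, 2016), which projects scores onto the probability simplex and can assign exactly zero weight to some positions. Below is a minimal NumPy sketch of the generic algorithm, not the authors' implementation; TVmax additionally encourages the selected positions to form contiguous image regions and is not sketched here.

```python
import numpy as np

def sparsemax(z):
    """Euclidean projection of the score vector z onto the probability simplex.

    Unlike softmax, the output can contain exact zeros, so the attention
    effectively selects a subset of the input positions.
    """
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]
    cumsum = np.cumsum(z_sorted)
    k = np.arange(1, len(z) + 1)
    # Support size: largest k with 1 + k * z_(k) > sum of the k largest scores.
    support = 1 + k * z_sorted > cumsum
    k_star = k[support][-1]
    tau = (cumsum[support][-1] - 1) / k_star   # threshold subtracted from the scores
    return np.maximum(z - tau, 0.0)

print(sparsemax([1.0, 0.8, 0.1, -1.0]))   # -> [0.6, 0.4, 0.0, 0.0]
```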
Sparse and Structured Visual Attention
2002.05556v1
Table 2: Human evaluation results on MSCOCO.
['[EMPTY]', 'caption', 'attention relevance']
[['softmax', '3.50', '3.38'], ['sparsemax', '3.71', '3.89'], ['TVmax', '[BOLD] 3.87', '[BOLD] 4.10']]
Despite performing slightly worse than sparsemax under automatic metrics, TVMAX outperforms sparsemax and softmax in the caption human evaluation and the attention relevance human evaluation, reported in Table 2. The superior score on attention relevance shows that TVMAX is better at selecting the relevant features and its output is more interpretable. Additionally, the better caption evaluation results demonstrate that the ability to select compact regions induces the generation of better captions. [CONTINUE] With these scores, we computed the mean of the caption evaluation scores and the mean of the attention relevance evaluation scores. The results are reported in Table 2.
Sparse and Structured Visual Attention
2002.05556v1
Table 3: Automatic evaluation of VQA on VQA-2.0. Sparse-TVmax and soft-TVmax correspond to using sparsemax or softmax on the image self-attention and TVmax on the output attention. Other models use softmax or sparsemax on self-attention and output attention.
['[EMPTY]', 'Att. to image', 'Att. to bounding boxes', 'Test-Dev Yes/No', 'Test-Dev Number', 'Test-Dev Other', 'Test-Dev Overall', 'Test-Standard Yes/No', 'Test-Standard Number', 'Test-Standard Other', 'Test-Standard Overall']
[['softmax', '✓', '[EMPTY]', '83.08', '42.65', '55.74', '65.52', '83.55', '42.68', '56.01', '65.97'], ['sparsemax', '✓', '[EMPTY]', '83.08', '43.19', '55.79', '65.60', '83.33', '42.99', '56.06', '65.94'], ['soft-TVmax', '✓', '[EMPTY]', '83.13', '43.53', '56.01', '65.76', '83.63', '43.24', '56.10', '66.11'], ['sparse-TVmax', '✓', '[EMPTY]', '83.10', '43.30', '56.14', '65.79', '83.66', '43.18', '56.21', '66.17'], ['softmax', '[EMPTY]', '✓', '85.14', '49.59', '58.72', '68.57', '85.56', '49.54', '59.11', '69.04'], ['sparsemax', '[EMPTY]', '✓', '[BOLD] 85.40', '[BOLD] 50.87', '58.67', '68.79', '[BOLD] 85.80', '50.18', '59.08', '69.19'], ['softmax', '✓', '✓', '85.33', '50.49', '58.88', '68.82', '85.58', '50.42', '59.18', '69.17'], ['sparse-TVmax', '✓', '✓', '85.35', '50.52', '[BOLD] 59.15', '[BOLD] 68.96', '85.72', '[BOLD] 50.66', '[BOLD] 59.22', '[BOLD] 69.28']]
As can be seen in the results presented in Table 3, the models using TVMAX in the output attention layer outperform the models using softmax and sparsemax. Moreover, the results are slightly superior when the sparsemax transformation is used in the self-attention layers of the decoder. It can also be observed that, when using sparsemax both in the self-attention layers of the decoder and in the output attention mechanism, the accuracy is higher than when using softmax. Thus, having sparse attention mechanisms in the self-attention layers is beneficial, but the biggest improvement is obtained when using TVMAX in the output attention. This corroborates our intuition that selecting the features of contiguous regions of the image leads to a better understanding of the image, and, consequently, better accuracy. Additionally, when using bounding box features, sparsemax outperforms softmax, showing that selecting only the bounding boxes of the relevant objects leads to a better answering capability. We can also see that combining ResNet features with bounding box features improves results. Moreover, the model using TVMAX in the final attention layer achieves the highest accuracy, showing that features obtained using the TVMAX transformation are a better complement to bounding box features.
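A hedged sketch of how the two attention inputs in Table 3 might be combined: one attention distribution over grid (ResNet) features and another over bounding-box features, with the two context vectors concatenated before the answer classifier. All names and the bilinear scoring form are assumptions for illustration; the paper's actual architecture may differ, and its grid-attention branch replaces the softmax below with sparsemax or TVmax.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fused_context(question_vec, grid_feats, box_feats, W_grid, W_box):
    """Attend separately to grid features and to bounding-box features,
    then concatenate the two context vectors.

    question_vec : (d,) encoded question
    grid_feats   : (n_grid, d) spatial image features
    box_feats    : (n_box, d)  object-detector features
    W_grid, W_box: (d, d) hypothetical scoring weights
    """
    grid_scores = grid_feats @ W_grid @ question_vec   # (n_grid,)
    box_scores = box_feats @ W_box @ question_vec      # (n_box,)
    ctx_grid = softmax(grid_scores) @ grid_feats        # (d,)
    ctx_box = softmax(box_scores) @ box_feats           # (d,)
    return np.concatenate([ctx_grid, ctx_box])          # fed to the answer classifier
```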
