paper: string (lengths 0 to 839)
paper_id: string (lengths 1 to 12)
table_caption: string (lengths 3 to 2.35k)
table_column_names: large_string (lengths 13 to 1.76k)
table_content_values: large_string (lengths 2 to 11.9k)
text: large_string (lengths 69 to 2.82k)
A context sensitive real-time Spell Checker with language adaptability
1910.11242v1
TABLE IV: Synthetic Data Performance on three error generation algorithm
['[BOLD] Language', '[BOLD] Random Character [BOLD] P@1', '[BOLD] Random Character [BOLD] P@10', '[BOLD] Characters Swap [BOLD] P@1', '[BOLD] Characters Swap [BOLD] P@10', '[BOLD] Character Bigrams [BOLD] P@1', '[BOLD] Character Bigrams [BOLD] P@10']
[['Bengali', '91.243', '99.493', '82.580', '99.170', '93.694', '99.865'], ['Czech', '94.035', '99.264', '91.560', '99.154', '97.795', '99.909'], ['Danish', '84.605', '98.435', '71.805', '97.160', '90.103', '99.444'], ['Dutch', '85.332', '98.448', '72.800', '96.675', '91.159', '99.305'], ['English', '97.260', '99.897', '93.220', '99.700', '98.050', '99.884'], ['Finnish', '97.735', '99.855', '94.510', '99.685', '98.681', '99.972'], ['French', '84.332', '98.483', '72.570', '97.215', '91.165', '99.412'], ['German', '86.870', '98.882', '73.920', '97.550', '91.448', '99.509'], ['Greek', '82.549', '97.800', '71.925', '96.910', '90.291', '99.386'], ['Hebrew', '94.180', '99.672', '88.491', '99.201', '95.414', '99.706'], ['Hindi', '81.610', '97.638', '67.730', '96.200', '86.274', '99.169'], ['Indonesian', '94.735', '99.838', '89.035', '99.560', '96.745', '99.910'], ['Italian', '88.865', '99.142', '78.765', '98.270', '93.400', '99.775'], ['Marathi', '92.392', '99.493', '85.145', '99.025', '95.449', '99.905'], ['Polish', '94.918', '99.743', '90.280', '99.705', '97.454', '99.954'], ['Portuguese', '86.422', '98.903', '71.735', '97.685', '90.787', '99.562'], ['Romanian', '94.925', '99.575', '90.805', '99.245', '97.119', '99.845'], ['Russian', '93.285', '99.502', '89.000', '99.240', '97.196', '99.942'], ['Spanish', '84.535', '98.210', '71.345', '96.645', '90.395', '99.246'], ['Swedish', '87.195', '98.865', '76.940', '97.645', '92.828', '99.656'], ['Tamil', '98.118', '99.990', '96.920', '99.990', '99.284', '99.999'], ['Telugu', '97.323', '99.990', '93.935', '99.985', '97.897', '99.998'], ['Thai', '97.989', '99.755', '97.238', '99.448', '98.859', '99.986'], ['Turkish', '97.045', '99.880', '93.195', '99.815', '98.257', '99.972']]
Table IV presents the system's performance for each error generation algorithm. We include only P@1 and P@10 to show the trend across all languages. "Random Character" and "Character Bigrams" include data for edit distances 1 and 2, whereas "Characters Swap" includes data for edit distance 2.
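The three corruption strategies are only named at this level of detail; the sketch below illustrates, under our own assumptions (the alphabet, the choice of edit operations, and the hypothetical bigram-confusion table are not taken from the paper), how such synthetic errors could be produced.

```python
import random

ALPHABET = "abcdefghijklmnopqrstuvwxyz"  # assumed; each language would use its own character set

def random_character(word, n_edits=1):
    # Replace, insert, or delete a random character n_edits times (edit distance 1 or 2).
    for _ in range(n_edits):
        i = random.randrange(len(word))
        op = random.choice(["replace", "insert", "delete"])
        if op == "replace":
            word = word[:i] + random.choice(ALPHABET) + word[i + 1:]
        elif op == "insert":
            word = word[:i] + random.choice(ALPHABET) + word[i:]
        elif len(word) > 1:
            word = word[:i] + word[i + 1:]
    return word

def characters_swap(word):
    # Swap two adjacent characters (an edit-distance-2 error under plain Levenshtein distance).
    if len(word) < 2:
        return word
    i = random.randrange(len(word) - 1)
    return word[:i] + word[i + 1] + word[i] + word[i + 2:]

def character_bigram(word, bigram_confusions):
    # Replace a character bigram with a plausible confusion; `bigram_confusions` is a
    # hypothetical stand-in (dict of bigram -> list of replacement bigrams) for corpus statistics.
    for i in range(len(word) - 1):
        bg = word[i:i + 2]
        if bg in bigram_confusions:
            return word[:i] + random.choice(bigram_confusions[bg]) + word[i + 2:]
    return word
```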
A context sensitive real-time Spell Checker with language adaptability
1910.11242v1
TABLE I: Average Time taken by suggestion generation algorithms (Edit Distance = 2) (in millisecond)
['[BOLD] Token', '[BOLD] Trie', '[BOLD] DAWGs', '[BOLD] SDA']
[['3', '170.50', '180.98', '112.31'], ['4', '175.04', '178.78', '52.97'], ['5', '220.44', '225.10', '25.44'], ['6', '254.57', '259.54', '7.44'], ['7', '287.19', '291.99', '4.59'], ['8', '315.78', '321.58', '2.58'], ['9', '351.19', '356.76', '1.91'], ['10', '379.99', '386.04', '1.26'], ['11', '412.02', '419.55', '1.18'], ['12', '436.54', '443.85', '1.06'], ['13', '473.45', '480.26', '1.16'], ['14', '508.08', '515.04', '0.97'], ['15', '548.04', '553.49', '0.66'], ['16', '580.44', '584.99', '0.37']]
We considered four approaches: the Trie data structure, the Burkhard-Keller Tree (BK Tree), Directed Acyclic Word Graphs (DAWGs), and the Symmetric Delete algorithm (SDA). In Table I, we report the performance of these algorithms for edit distance 2; results for BK Trees are omitted because their lookup times were on the order of a couple of seconds.
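SDA's speed advantage in Table I comes from precomputing deletion variants of every dictionary word, so that at query time only deletions of the (short) misspelling need to be generated. Below is a minimal sketch of that idea, not the paper's implementation; in practice the returned candidates are further filtered with an exact edit-distance check.

```python
from collections import defaultdict
from itertools import combinations

def deletes(word, max_ed):
    # All strings obtainable from `word` by deleting up to `max_ed` characters.
    out = {word}
    for d in range(1, min(max_ed, len(word)) + 1):
        for idxs in combinations(range(len(word)), d):
            out.add("".join(c for i, c in enumerate(word) if i not in idxs))
    return out

def build_index(dictionary, max_ed=2):
    # Precompute: map every deletion variant back to the dictionary words it came from.
    index = defaultdict(set)
    for w in dictionary:
        for v in deletes(w, max_ed):
            index[v].add(w)
    return index

def candidates(query, index, max_ed=2):
    # At query time only the deletions of the misspelling are generated;
    # candidates would then be verified with an edit-distance computation.
    cands = set()
    for v in deletes(query, max_ed):
        cands |= index.get(v, set())
    return cands

# usage sketch
index = build_index(["hello", "help", "hell"], max_ed=2)
print(candidates("helo", index))  # {'hello', 'help', 'hell'}
```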
A context sensitive real-time Spell Checker with language adaptability
1910.11242v1
TABLE II: Synthetic Data Performance results
['[BOLD] Language', '[BOLD] # Test', '[BOLD] P@1', '[BOLD] P@3', '[BOLD] P@5', '[BOLD] P@10', '[BOLD] MRR']
[['[BOLD] Language', '[BOLD] Samples', '[BOLD] P@1', '[BOLD] P@3', '[BOLD] P@5', '[BOLD] P@10', '[BOLD] MRR'], ['Bengali', '140000', '91.30', '97.83', '98.94', '99.65', '94.68'], ['Czech', '94205', '95.84', '98.72', '99.26', '99.62', '97.37'], ['Danish', '140000', '85.84', '95.19', '97.28', '98.83', '90.85'], ['Dutch', '140000', '86.83', '95.01', '97.04', '98.68', '91.32'], ['English', '140000', '97.08', '99.39', '99.67', '99.86', '98.27'], ['Finnish', '140000', '97.77', '99.58', '99.79', '99.90', '98.69'], ['French', '140000', '86.52', '95.66', '97.52', '98.83', '91.38'], ['German', '140000', '87.58', '96.16', '97.86', '99.05', '92.10'], ['Greek', '30022', '84.95', '94.99', '96.88', '98.44', '90.27'], ['Hebrew', '132596', '94.00', '98.26', '99.05', '99.62', '96.24'], ['Hindi', '140000', '82.19', '93.71', '96.28', '98.30', '88.40'], ['Indonesian', '140000', '95.01', '98.98', '99.50', '99.84', '97.04'], ['Italian', '140000', '89.93', '97.31', '98.54', '99.38', '93.76'], ['Marathi', '140000', '93.01', '98.16', '99.06', '99.66', '95.69'], ['Polish', '140000', '95.65', '99.17', '99.62', '99.86', '97.44'], ['Portuguese', '140000', '86.73', '96.29', '97.94', '99.10', '91.74'], ['Romanian', '140000', '95.52', '98.79', '99.32', '99.68', '97.22'], ['Russian', '140000', '94.85', '98.74', '99.33', '99.71', '96.86'], ['Spanish', '140000', '85.91', '95.35', '97.18', '98.57', '90.92'], ['Swedish', '140000', '88.86', '96.40', '98.00', '99.14', '92.87'], ['Tamil', '140000', '98.05', '99.70', '99.88', '99.98', '98.88'], ['Telugu', '140000', '97.11', '99.68', '99.92', '99.99', '98.38'], ['Thai', '12403', '98.73', '99.71', '99.78', '99.85', '99.22'], ['Turkish', '140000', '97.13', '99.51', '99.78', '99.92', '98.33']]
The best performance for each language is reported in Table II. We present Precision@k for k ∈ {1, 3, 5, 10} and the mean reciprocal rank (MRR). The system performs well on the synthetic dataset, with a minimum of 80% P@1 and 98% P@10.
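For reference, P@k here is the fraction of test misspellings whose correct word appears among the top-k suggestions, and MRR averages the reciprocal rank of the correct word; a minimal sketch of both metrics:

```python
def precision_at_k(results, gold, k):
    # results: list of ranked suggestion lists; gold: list of correct words.
    hits = sum(1 for sugg, g in zip(results, gold) if g in sugg[:k])
    return 100.0 * hits / len(gold)

def mean_reciprocal_rank(results, gold):
    rr = 0.0
    for sugg, g in zip(results, gold):
        if g in sugg:
            rr += 1.0 / (sugg.index(g) + 1)  # rank is 1-based
    return 100.0 * rr / len(gold)
```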
A context sensitive real-time Spell Checker with language adaptability
1910.11242v1
TABLE III: Synthetic Data Time Performance results
['[BOLD] Language', '[BOLD] Detection [BOLD] Time ( [ITALIC] μs)', '[BOLD] Suggestion Time [BOLD] ED=1 (ms)', '[BOLD] Suggestion Time [BOLD] ED=2 (ms)', '[BOLD] Ranking [BOLD] Time (ms)']
[['Bengali', '7.20', '0.48', '14.85', '1.14'], ['Czech', '7.81', '0.75', '26.67', '2.34'], ['Danish', '7.28', '0.67', '23.70', '1.96'], ['Dutch', '10.80', '0.81', '30.44', '2.40'], ['English', '7.27', '0.79', '39.36', '2.35'], ['Finnish', '8.53', '0.46', '15.55', '1.05'], ['French', '7.19', '0.82', '32.02', '2.69'], ['German', '8.65', '0.85', '41.18', '2.63'], ['Greek', '7.63', '0.86', '25.40', '1.87'], ['Hebrew', '22.35', '1.01', '49.91', '2.18'], ['Hindi', '8.50', '0.60', '18.51', '1.72'], ['Indonesian', '12.00', '0.49', '20.75', '1.22'], ['Italian', '6.92', '0.72', '29.02', '2.17'], ['Marathi', '7.16', '0.43', '10.68', '0.97'], ['Polish', '6.44', '0.64', '24.15', '1.74'], ['Portuguese', '7.14', '0.66', '28.92', '2.20'], ['Romanian', '10.26', '0.63', '18.83', '1.79'], ['Russian', '6.79', '0.68', '22.56', '1.72'], ['Spanish', '7.19', '0.75', '31.00', '2.41'], ['Swedish', '7.76', '0.83', '32.17', '2.57'], ['Tamil', '11.34', '0.23', '4.83', '0.31'], ['Telugu', '6.31', '0.29', '7.50', '0.54'], ['Thai', '11.60', '0.66', '18.75', '1.33'], ['Turkish', '7.40', '0.49', '17.42', '1.23']]
The system is able to perform each sub-step in real time; the average time taken for each sub-step is reported in Table III. All sentences used for this analysis had exactly one error according to our system. Detection time is the average weighted over the number of tokens in the query sentence, suggestion time is weighted over the character length of the misspelling, and ranking time is weighted over the number of suggestions generated.
A context sensitive real-time Spell Checker with language adaptability
1910.11242v1
TABLE VI: Public dataset comparison results
['[EMPTY]', '[BOLD] P@1', '[BOLD] P@3', '[BOLD] P@5', '[BOLD] P@10']
[['Aspell', '60.82', '80.81', '87.26', '91.35'], ['Hunspell', '61.34', '77.86', '83.47', '87.04'], ['[ITALIC] Ours', '68.99', '83.43', '87.03', '90.16']]
A comparison with the most popular spell checkers for English (GNU Aspell and Hunspell) on this data is presented in Table VI. Since these tools work only at the word-error level, we used only unigram probabilities for ranking. Our system outperforms both.
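Because the comparison is restricted to word-level ranking, candidates are ordered by corpus unigram probability alone. A minimal sketch of such a ranker is given below; the add-one smoothing is an assumption for illustration, not necessarily the paper's choice.

```python
def rank_by_unigram(candidates, unigram_counts, total_tokens):
    # Rank correction candidates by corpus unigram probability, highest first.
    # Add-one smoothing for unseen candidates is an assumption for illustration.
    vocab_size = len(unigram_counts)

    def prob(w):
        return (unigram_counts.get(w, 0) + 1) / (total_tokens + vocab_size)

    return sorted(candidates, key=prob, reverse=True)
```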
A context sensitive real-time Spell Checker with language adaptability
1910.11242v1
TABLE VII: False Positive Experiment Results
['[BOLD] Language', '[BOLD] # Sentences', '[BOLD] # Total Words', '[BOLD] # Detected', '[BOLD] %']
[['Bengali', '663748', '457140', '443650', '97.05'], ['Czech', '6128', '36846', '36072', '97.90'], ['Danish', '16198', '102883', '101798', '98.95'], ['Dutch', '55125', '1048256', '1004274', '95.80'], ['English', '239555', '4981604', '4907733', '98.52'], ['Finnish', '3757', '43457', '39989', '92.02'], ['French', '164916', '3244367', '3187587', '98.25'], ['German', '71025', '1283239', '1250232', '97.43'], ['Greek', '1586', '43035', '42086', '97.79'], ['Hebrew', '95813', '505335', '494481', '97.85'], ['Hindi', '5089', '37617', '37183', '98.85'], ['Indonesian', '100248', '84347', '82809', '98.18'], ['Italian', '36026', '718774', '703514', '97.88'], ['Marathi', '17007', '84286', '79866', '94.76'], ['Polish', '3283', '34226', '32780', '95.78'], ['Portuguese', '1453', '25568', '25455', '99.56'], ['Romanian', '4786', '34862', '34091', '97.79'], ['Russian', '27252', '384262', '372979', '97.06'], ['Spanish', '108017', '2057481', '2028951', '98.61'], ['Swedish', '3209', '66191', '64649', '97.67'], ['Tamil', '40165', '21044', '19526', '92.79'], ['Telugu', '30466', '17710', '17108', '96.60'], ['Thai', '16032', '67507', '49744', '73.69'], ['Turkish', '163910', '794098', '775776', '97.69']]
As shown in Table VII, most of the words for each language were detected as known, but a small percentage were still flagged as errors (false positives).
Automatically Identifying Complaints in Social Media
1906.03890v1
Table 9: Performance of models trained with tweets from one domain and tested on other domains. All results are reported in ROC AUC. The All line displays results on training on all categories except the category in testing.
['[BOLD] Test', 'F&B', 'A', 'R', 'Ca', 'Se', 'So', 'T', 'E', 'O']
[['[BOLD] Train', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['Food & Bev.', '–', '58.1', '52.5', '66.4', '59.7', '58.9', '54.1', '61.4', '53.7'], ['Apparel', '63.9', '–', '74.4', '65.1', '70.8', '71.2', '68.5', '76.9', '85.6'], ['Retail', '58.8', '74.4', '–', '70.1', '72.6', '69.9', '68.7', '69.6', '82.7'], ['Cars', '68.7', '61.1', '65.1', '–', '58.8', '67.', '59.3', '62.9', '68.2'], ['Services', '65.', '74.2', '75.8', '74.', '–', '68.8', '74.2', '77.9', '77.9'], ['Software', '62.', '74.2', '68.', '67.9', '72.8', '–', '72.8', '72.1', '80.6'], ['Transport', '59.3', '71.7', '72.4', '67.', '74.6', '75.', '–', '72.6', '81.7'], ['Electronics', '61.6', '75.2', '71.', '68.', '75.', '69.9', '68.2', '–', '78.7'], ['Other', '56.1', '71.3', '72.4', '70.2', '73.5', '67.2', '68.5', '71.', '–'], ['All', '70.3', '77.7', '79.5', '82.0', '79.6', '80.1', '76.8', '81.7', '88.2']]
Finally, Table 9 presents the results of models trained on tweets from one domain and tested on all tweets from the other domains, along with additional models trained on tweets from all domains except the one the model is tested on. When using all the data available from the other domains, predictive performance is relatively consistent across domains, with two exceptions: 'Food & Beverage' consistently shows lower performance, while 'Other' achieves higher performance.
Automatically Identifying Complaints in Social Media
1906.03890v1
Table 3: Number of tweets annotated as complaints across the nine domains.
['[BOLD] Category', '[BOLD] Complaints', '[BOLD] Not Complaints']
[['Food & Beverage', '95', '35'], ['Apparel', '141', '117'], ['Retail', '124', '75'], ['Cars', '67', '25'], ['Services', '207', '130'], ['Software & Online Services', '189', '103'], ['Transport', '139', '109'], ['Electronics', '174', '112'], ['Other', '96', '33'], ['Total', '1232', '739']]
In total, 1,232 tweets (62.4%) are complaints and 739 (37.6%) are not. The statistics for each category are in Table 3.
Automatically Identifying Complaints in Social Media
1906.03890v1
Table 4: Features associated with complaint and non-complaint tweets, sorted by Pearson correlation (r) computed between the normalized frequency of each feature and the complaint label across all tweets. All correlations are significant at p < .01.
['[BOLD] Complaints [BOLD] Feature', '[BOLD] Complaints [ITALIC] r', '[BOLD] Not Complaints [BOLD] Feature', '[BOLD] Not Complaints [ITALIC] r']
[['[BOLD] Unigrams', '[BOLD] Unigrams', '[BOLD] Unigrams', '[BOLD] Unigrams'], ['not', '.154', '[URL]', '.150'], ['my', '.131', '!', '.082'], ['working', '.124', 'he', '.069'], ['still', '.123', 'thank', '.067'], ['on', '.119', ',', '.064'], ['can’t', '.113', 'love', '.064'], ['service', '.112', 'lol', '.061'], ['customer', '.109', 'you', '.060'], ['why', '.108', 'great', '.058'], ['website', '.107', 'win', '.058'], ['no', '.104', '’', '.058'], ['?', '.098', 'she', '.054'], ['fix', '.093', ':', '.053'], ['won’t', '.092', 'that', '.053'], ['been', '.090', 'more', '.052'], ['issue', '.089', 'it', '.052'], ['days', '.088', 'would', '.051'], ['error', '.087', 'him', '.047'], ['is', '.084', 'life', '.046'], ['charged', '.083', 'good', '.046'], ['[BOLD] POS (Unigrams and Bigrams)', '[BOLD] POS (Unigrams and Bigrams)', '[BOLD] POS (Unigrams and Bigrams)', '[BOLD] POS (Unigrams and Bigrams)'], ['VBN', '.141', 'UH', '.104'], ['$', '.118', 'NNP', '.098'], ['VBZ', '.114', 'PRP', '.076'], ['NN_VBZ', '.114', 'HT', '.076'], ['PRP$', '.107', 'PRP_.', '.076'], ['PRP$_NN', '.105', 'PRP_RB', '.067'], ['VBG', '.093', 'NNP_NNP', '.062'], ['CD', '.092', 'VBP_PRP', '.054'], ['WRB_VBZ', '.084', 'JJ', '.053'], ['VBZ_VBN', '.084', 'DT_JJ', '.051']]
Top unigrams and part-of-speech features specific to complaints and non-complaints are presented in Table 4. [CONTINUE] All correlations shown in these tables are statistically significant at p < .01, with Simes correction for multiple comparisons. [CONTINUE] Negations are uncovered through unigrams (not, no, won't) [CONTINUE] Several unigrams (error, issue, working, fix) [CONTINUE] However, words regularly describing negative sentiment or emotions are not among the most distinctive features for complaints. [CONTINUE] On the other hand, the presence of terms that show positive sentiment or emotions (good, great, win, POSEMO, AFFECT, ASSENT) is among the most distinctive features for a tweet not being labeled as a complaint. [CONTINUE] In addition, other words and clusters expressing positive states such as gratitude (thank, great, love) or laughter (lol) are also distinctive for tweets that are not complaints. [CONTINUE] Across unigrams, part-of-speech patterns and word clusters, we see a distinctive pattern emerging around pronoun usage. [CONTINUE] Complaints use more possessive pronouns, indicating that the user is describing personal experiences. [CONTINUE] A distinctive part-of-speech pattern common in complaints is possessive pronouns followed by nouns (PRP$ NN), which refer to items or services possessed by the complainer (e.g., my account, my order). [CONTINUE] Question marks are distinctive of complaints, as many complaints are formulated as questions to the responsible party (e.g., why is this not working?, when will [CONTINUE] get my response?). [CONTINUE] Mentions of time are specific to complaints (been, still, on, days, Temporal References cluster). [CONTINUE] In addition, the presence of verbs in past participle (VBN) is the most distinctive part-of-speech pattern of complaints. These are used to describe actions completed in the past (e.g., i've bought, have come) in order to provide context for the complaint. [CONTINUE] Several part-of-speech patterns distinctive of complaints involve present verbs in third person singular (VBZ). [CONTINUE] Verbs in gerund or present participle are used as a complaint strategy to describe things that just happened to a user (e.g., got an email saying my service will be terminated).
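The univariate analysis behind Tables 4 and 5 correlates each feature's normalized per-tweet frequency with the binary complaint label. A minimal sketch restricted to unigrams is shown below; the preprocessing and the Simes correction step are simplified here and should not be read as the authors' exact procedure.

```python
from collections import Counter
import numpy as np
from scipy.stats import pearsonr

def unigram_correlations(tweets, labels, alpha=0.01):
    # tweets: list of token lists; labels: 1 = complaint, 0 = not complaint.
    vocab = sorted({tok for tw in tweets for tok in tw})
    y = np.array(labels, dtype=float)
    scores = {}
    for term in vocab:
        # normalized frequency of the term within each tweet
        x = np.array([Counter(tw)[term] / len(tw) for tw in tweets])
        if x.std() == 0:
            continue  # constant feature, correlation undefined
        r, p = pearsonr(x, y)
        if p < alpha:  # the paper additionally applies a Simes correction
            scores[term] = r
    # most complaint-associated features first
    return sorted(scores.items(), key=lambda kv: -kv[1])
```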
Automatically Identifying Complaints in Social Media
1906.03890v1
Table 5: Group text features associated with tweets that are complaints and not complaints. Features are sorted by Pearson correlation (r) between each feature's normalized frequency and the outcome. We restrict to only the top six categories for each feature type. All correlations are significant at p < .01.
['[BOLD] Complaints [BOLD] Label', '[BOLD] Complaints [BOLD] Words', '[BOLD] Complaints [ITALIC] r', '[BOLD] Not Complaints [BOLD] Label', '[BOLD] Not Complaints [BOLD] Words', '[BOLD] Not Complaints [ITALIC] r']
[['[BOLD] LIWC Features', '[BOLD] LIWC Features', '[BOLD] LIWC Features', '[BOLD] LIWC Features', '[BOLD] LIWC Features', '[BOLD] LIWC Features'], ['NEGATE', 'not, no, can’t, don’t, never, nothing, doesn’t, won’t', '.271', 'POSEMO', 'thanks, love, thank, good, great, support, lol, win', '.185'], ['RELATIV', 'in, on, when, at, out, still, now, up, back, new', '.225', 'AFFECT', 'thanks, love, thank, good, great, support, lol', '.111'], ['FUNCTION', 'the, i, to, a, my, and, you, for, is, in', '.204', 'SHEHE', 'he, his, she, her, him, he’s, himself', '.105'], ['TIME', 'when, still, now, back, new, never, after, then, waiting', '.186', 'MALE', 'he, his, man, him, sir, he’s, son', '.086'], ['DIFFER', 'not, but, if, or, can’t, really, than, other, haven’t', '.169', 'FEMALE', 'she, her, girl, mom, ma, lady, mother, female, mrs', '.084'], ['COGPROC', 'not, but, how, if, all, why, or, any, need', '.132', 'ASSENT', 'yes, ok, awesome, okay, yeah, cool, absolutely, agree', '.080'], ['[BOLD] Word2Vec Clusters', '[BOLD] Word2Vec Clusters', '[BOLD] Word2Vec Clusters', '[BOLD] Word2Vec Clusters', '[BOLD] Word2Vec Clusters', '[BOLD] Word2Vec Clusters'], ['Cust. Service', 'service, customer, contact, job, staff, assist, agent', '.136', 'Gratitude', 'thanks, thank, good, great, support, everyone, huge, proud', '.089'], ['Order', 'order, store, buy, free, delivery, available, package', '.128', 'Family', 'old, friend, family, mom, wife, husband, younger', '.063'], ['Issues', 'delayed, closed, between, outage, delay, road, accident', '.122', 'Voting', 'favorite, part, stars, model, vote, models, represent', '.060'], ['Time Ref.', 'been, yet, haven’t, long, happened, yesterday, took', '.122', 'Contests', 'Christmas, gift, receive, entered, giveaway, enter, cards', '.058'], ['Tech Parts', 'battery, laptop, screen, warranty, desktop, printer', '.100', 'Pets', 'dogs, cat, dog, pet, shepherd, fluffy, treats', '.054'], ['Access', 'use, using, error, password, access, automatically, reset', '.098', 'Christian', 'god, shall, heaven, spirit, lord, belongs, soul, believers', '.053']]
The top features for the LIWC categories and Word2Vec topics are presented in Table 5. [CONTINUE] the top LIWC category (NEGATE). [CONTINUE] a cluster (Issues) contains words referring to issues or errors. [CONTINUE] Complaints tend to not contain personal pronouns (he, she, it, him, you, SHEHE, MALE, FEMALE), as the focus when expressing the complaint is on the self and the party the complaint is addressed to, not on other third parties. [CONTINUE] Complaints are not usually accompanied by exclamation marks. [CONTINUE] General topics typical of complaint tweets include requiring assistance or customer support. Several groups of words are much more likely to appear in a complaint, although not used to express complaints per se: about orders or deliveries (in the retail domain), about access (in complaints to service providers) and about parts of tech products (in tech).
Automatically Identifying Complaints in Social Media
1906.03890v1
Table 6: Complaint prediction results using logistic regression (with different types of linguistic features), neural network approaches and the most frequent class baseline. Best results are in bold.
['[BOLD] Model', '[BOLD] Acc', '[BOLD] F1', '[BOLD] AUC']
[['Most Frequent Class', '64.2', '39.1', '0.500'], ['Logistic Regression', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['Sentiment – MPQA', '64.2', '39.1', '0.499'], ['Sentiment – NRC', '63.9', '42.2', '0.599'], ['Sentiment – V&B', '68.9', '60.0', '0.696'], ['Sentiment – VADER', '66.0', '54.2', '0.654'], ['Sentiment – Stanford', '68.0', '55.6', '0.696'], ['Complaint Specific (all)', '65.7', '55.2', '0.634'], ['Request', '64.2', '39.1', '0.583'], ['Intensifiers', '64.5', '47.3', '0.639'], ['Downgraders', '65.4', '49.8', '0.615'], ['Temporal References', '64.2', '43.7', '0.535'], ['Pronoun Types', '64.1', '39.1', '0.545'], ['POS Bigrams', '72.2', '66.8', '0.756'], ['LIWC', '71.6', '65.8', '0.784'], ['Word2Vec Clusters', '67.7', '58.3', '0.738'], ['Bag-of-Words', '79.8', '77.5', '0.866'], ['All Features', '[BOLD] 80.5', '[BOLD] 78.0', '[BOLD] 0.873'], ['Neural Networks', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['MLP', '78.3', '76.2', '0.845'], ['LSTM', '80.2', '77.0', '0.864']]
Results are presented in Table 6. Most sentiment analysis models show accuracy above chance in predicting complaints. The best results are obtained by the Volkova & Bachrach model (Sentiment – V&B), which achieves 60 F1. However, models trained using linguistic features on the training data obtain significantly higher predictive accuracy. Complaint-specific features are predictive of complaints, but to a smaller extent than sentiment, reaching an overall 55.2 F1. From this group of features, the most predictive groups are intensifiers and downgraders. Syntactic part-of-speech features alone obtain higher performance than any sentiment or complaint feature group, showing that the syntactic patterns discussed in the previous section hold high predictive accuracy for the task. The topical features such as the LIWC dictionaries (which combine syntactic and semantic information) and Word2Vec topics perform in the same range as the part-of-speech tags. However, the best predictive performance is obtained using bag-of-words features, reaching an F1 of up to 77.5 and AUC of 0.866. Further, combining all features boosts predictive accuracy to 78 F1 and 0.873 AUC. We notice that neural network approaches are comparable, but do not outperform [CONTINUE] the best performing feature-based model, likely in part due to the training data size.
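Since the strongest feature-based configuration is logistic regression over bag-of-words features, a minimal scikit-learn sketch is given below; the hyperparameters and the cross-validation protocol are assumptions for illustration rather than the authors' exact setup.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

def bow_logreg_f1(texts, labels):
    # texts: raw tweets; labels: 1 = complaint, 0 = not complaint.
    model = make_pipeline(
        CountVectorizer(lowercase=True),      # bag-of-words features
        LogisticRegression(max_iter=1000),    # default regularization (an assumption)
    )
    # 10-fold cross-validated macro F1; the protocol is assumed for illustration.
    return cross_val_score(model, texts, labels, cv=10, scoring="f1_macro").mean()
```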
Automatically Identifying Complaints in Social Media
1906.03890v1
Table 7: Complaint prediction results using the original data set and distantly supervised data. All models are based on logistic regression with bag-of-word and Part-of-Speech tag features.
['[BOLD] Model', '[BOLD] Acc', '[BOLD] F1', '[BOLD] AUC']
[['Most Frequent Class', '64.2', '39.1', '0.500'], ['LR-All Features – Original Data', '80.5', '78.0', '0.873'], ['Dist. Supervision + Pooling', '77.2', '75.7', '0.853'], ['Dist. Supervision + EasyAdapt', '[BOLD] 81.2', '[BOLD] 79.0', '[BOLD] 0.885']]
Results presented in Table 7 show that the domain adaptation approach further boosts F1 by 1 point to 79 (t-test, p<0.05) and ROC AUC by 0.012. [CONTINUE] However, simply pooling the data actually hurts predictive performance, leading to a drop of more than 2 points in F1.
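EasyAdapt (Daumé III's feature-augmentation method) represents each instance with a shared copy of its features plus a domain-specific copy, letting the classifier separate domain-general from domain-specific signal. A minimal sketch over dictionary-valued features follows; the domain names are illustrative, not the paper's.

```python
def easyadapt_augment(features, domain):
    """Duplicate features into a shared copy and a domain-specific copy.

    features: dict mapping feature name -> value for one instance.
    domain:   e.g. "annotated" or "distant" (illustrative names).
    """
    augmented = {}
    for name, value in features.items():
        augmented["shared:" + name] = value      # weight shared across domains
        augmented[domain + ":" + name] = value   # weight specific to this domain
    return augmented

# Vectorizing such augmented dicts (e.g. with sklearn's DictVectorizer) yields the
# enlarged feature space on which the same logistic regression is trained.
```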
Automatically Identifying Complaints in Social Media
1906.03890v1
Table 8: Performance of models in Macro F1 on tweets from each domain.
['[BOLD] Domain', '[BOLD] In-Domain', '[BOLD] Pooling', '[BOLD] EasyAdapt']
[['Food & Beverage', '63.9', '60.9', '[BOLD] 83.1'], ['Apparel', '[BOLD] 76.2', '71.1', '72.5'], ['Retail', '58.8', '[BOLD] 79.7', '[BOLD] 79.7'], ['Cars', '41.5', '77.8', '[BOLD] 80.9'], ['Services', '65.2', '75.9', '[BOLD] 76.7'], ['Software', '61.3', '73.4', '[BOLD] 78.7'], ['Transport', '56.4', '[BOLD] 73.4', '69.8'], ['Electronics', '66.2', '73.0', '[BOLD] 76.2'], ['Other', '42.4', '[BOLD] 82.8', '[BOLD] 82.8']]
Table 8 shows the model performance in macro-averaged F1 using the best performing feature set. Results show that, in all but one case, adding out-of-domain data helps predictive performance. The apparel domain is qualitatively very different from the others, as a large number of complaints are about returns or the company not stocking items, leading to different features being important for prediction. Domain adaptation is beneficial for the majority of domains, lowering performance compared to data pooling on only a single domain. This highlights the differences in expressing complaints across domains. Overall, predictive performance is high across all domains, with the exception of transport.
Localization of Fake News Detection via Multitask Transfer Learning
1910.09295v3
Table 4: Consolidated experiment results. The first section shows finetuning results for base transfer learning methods and the baseline siamese network. The second section shows results for ULMFiT without Language Model Finetuning. The last section shows finetuning results for transformer methods augmented with multitasking heads. BERT and GPT-2 were finetuned for three epochs in all cases and ULMFiT was finetuned for 5 during classifier finetuning.
['Model', 'Val. Accuracy', 'Loss', 'Val. Loss', 'Pretraining Time', 'Finetuning Time']
[['Siamese Networks', '77.42%', '0.5601', '0.5329', '[EMPTY]', '4m per epoch'], ['BERT', '87.47%', '0.4655', '0.4419', '66 hours', '2m per epoch'], ['GPT-2', '90.99%', '0.2172', '0.1826', '78 hours', '4m per epoch'], ['ULMFiT', '91.59%', '0.3750', '0.1972', '11 hours', '2m per epoch'], ['ULMFiT (no LM Finetuning)', '78.11%', '0.5512', '0.5409', '11 hours', '2m per epoch'], ['BERT + Multitasking', '91.20%', '0.3155', '0.3023', '66 hours', '4m per epoch'], ['GPT-2 + Multitasking', '96.28%', '0.2609', '0.2197', '78 hours', '5m per epoch']]
BERT achieved a final accuracy of 91.20%, now marginally comparable to ULMFiT's full performance. GPT-2, on the other hand, finetuned to a final accuracy of 96.28%, a full 4.69% improvement over the performance of ULMFiT. [CONTINUE] Results for this experiment are outlined in Table 4.
Localization of Fake News Detection via Multitask Transfer Learning
1910.09295v3
Table 5: An ablation study on the effects of pretraining for multitasking-based and standard GPT-2 finetuning. Results show that pretraining greatly accounts for almost half of performance on both finetuning techniques. “Acc. Inc.” refers to the boost in performance contributed by the pretraining step. “% of Perf.” refers to the percentage of the total performance that the pretraining step contributes.
['Finetuning', 'Pretrained?', 'Accuracy', 'Val. Loss', 'Acc. Inc.', '% of Perf.']
[['Multitasking', 'No', '53.61%', '0.7217', '-', '-'], ['[EMPTY]', 'Yes', '96.28%', '0.2197', '+42.67%', '44.32%'], ['Standard', 'No', '51.02%', '0.7024', '-', '-'], ['[EMPTY]', 'Yes', '90.99%', '0.1826', '+39.97%', '43.93%']]
In Table 5, it can be seen that generative pretraining via language modeling does account for a considerable amount of performance, constituting 44.32% of the overall performance (a boost of 42.67% in accuracy) in the multitasking setup, and constituting 43.93% of the overall performance (a boost of 39.97%) in the standard finetuning setup.
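The derived columns of Table 5 follow directly from the reported accuracies; for the multitasking rows, for example:

```latex
\text{Acc. Inc.} = 96.28\% - 53.61\% = +42.67\%,
\qquad
\text{\% of Perf.} = \frac{42.67}{96.28} \times 100\% \approx 44.32\%
```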
Localization of Fake News Detection via Multitask Transfer Learning
1910.09295v3
Table 6: An ablation study on the effect of multiple heads in the attention mechanisms. The results show that increasing the number of heads improves performance, though this plateaus at 10 attention heads. All ablations use the multitask-based finetuning method. “Effect” refers to the increase or decrease of accuracy as the heads are removed. Note that 10 heads is the default used throughout the study.
['# of Heads', 'Accuracy', 'Val. Loss', 'Effect']
[['1', '89.44%', '0.2811', '-6.84%'], ['2', '91.20%', '0.2692', '-5.08%'], ['4', '93.85%', '0.2481', '-2.43%'], ['8', '96.02%', '0.2257', '-0.26%'], ['10', '96.28%', '0.2197', '[EMPTY]'], ['16', '96.32%', '0.2190', '+0.04']]
As shown in Table 6, reducing the number of attention heads severely decreases multitasking performance. Using only one attention head, thereby attending to only one context position at a time, degrades performance below even that of the standard finetuning scheme with 10 heads. This shows that using more attention heads, and thereby attending to multiple contexts at once, is important for reaching state-of-the-art results.
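The "Effect" column of Table 6 is simply the accuracy difference relative to the 10-head default; for a single head:

```latex
\text{Effect} = 89.44\% - 96.28\% = -6.84\%
```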
Hierarchical Graph Network for Multi-hop Question Answering
1911.03631v2
Table 6: Error analysis of HGN model. For ‘Multi-hop’ errors, the model jumps to the wrong film (“Tommy (1975 film)”) instead of the correct one (“Quadrophenia (film)”) from the starting entity “rock opera 5:15”. The supporting fact for the ‘MRC’ example is “Childe Byron is a 1977 play by Romulus Linney about the strained relationship between the poet, Lord Byron, and his daughter, Ada Lovelace”.
['Category', 'Question', 'Answer', 'Prediction', 'Pct (%)']
[['Annotation', 'Were the films Tonka and 101 Dalmatians released in the same decade?', '1958 Walt Disney Western adventure film', 'No', '9'], ['Multiple Answers', 'Michael J. Hunter replaced the lawyer who became the administrator of which agency?', 'EPA', 'Environmental Protection Agency', '24'], ['Discrete Reasoning', 'Between two bands, Mastodon and Hole, which one has more members?', 'Mastodon', 'Hole', '15'], ['Commonsense & External Knowledge', 'What is the name of second extended play by the artists of the mini-abum Code#01?', 'Code#02 Pretty Pretty', 'Code#01 Bad Girl', '16'], ['Multi-hop', 'Who directed the film based on the rock opera 5:15 appeared in?', 'Franc Roddam', 'Ken Russell', '16'], ['MRC', 'How was Ada Lovelace, the first computer programmer, related to Lord Byron in Childe Byron?', 'his daughter', 'strained relationship', '20']]
Table 6 shows the ablation study results on paragraph selection loss Lpara and entity prediction loss Lentity. [CONTINUE] As shown in the table, using paragraph selection and entity prediction loss can further improve the joint F1 by 0.31 points,
Hierarchical Graph Network for Multi-hop Question Answering
1911.03631v2
Table 1: Results on the test set of HotpotQA in the Distractor setting. HGN achieves state-of-the-art results at the time of submission (Dec. 1, 2019). (†) indicates unpublished work. RoBERTa-large is used for context encoding.
['Model', 'Ans EM', 'Ans F1', 'Sup EM', 'Sup F1', 'Joint EM', 'Joint F1']
[['DecompRC Min et al. ( 2019b )', '55.20', '69.63', '-', '-', '-', '-'], ['ChainEx Chen et al. ( 2019 )', '61.20', '74.11', '-', '-', '-', '-'], ['Baseline Model Yang et al. ( 2018 )', '45.60', '59.02', '20.32', '64.49', '10.83', '40.16'], ['QFE Nishida et al. ( 2019 )', '53.86', '68.06', '57.75', '84.49', '34.63', '59.61'], ['DFGN Xiao et al. ( 2019 )', '56.31', '69.69', '51.50', '81.62', '33.62', '59.82'], ['LQR-Net Grail et al. ( 2020 )', '60.20', '73.78', '56.21', '84.09', '36.56', '63.68'], ['P-BERT†', '61.18', '74.16', '51.38', '82.76', '35.42', '63.79'], ['TAP2†', '64.99', '78.59', '55.47', '85.57', '39.77', '69.12'], ['EPS+BERT†', '65.79', '79.05', '58.50', '86.26', '42.47', '70.48'], ['SAE-large Tu et al. ( 2020 )', '66.92', '79.62', '61.53', '86.86', '45.36', '71.45'], ['C2F ReaderShao et al. ( 2020 )', '67.98', '81.24', '60.81', '87.63', '44.67', '72.73'], ['HGN (ours)', '[BOLD] 69.22', '[BOLD] 82.19', '[BOLD] 62.76', '[BOLD] 88.47', '[BOLD] 47.11', '[BOLD] 74.21']]
Table 1 and Table 2 summarize our results on the hidden test set of HotpotQA in the Distractor and Fullwiki setting, respectively. The proposed HGN outperforms both published and unpublished work on every metric by a significant margin. For example, HGN achieves a Joint EM/F1 score of 43.57/71.03 and 35.63/59.86 on the Distractor and Fullwiki setting, respectively, with an absolute improvement of 2.36/0.38 and 6.45/4.55 points over the previous state of the art.
Hierarchical Graph Network for Multi-hop Question Answering
1911.03631v2
Table 2: Results on the test set of HotpotQA in the Fullwiki setting. HGN achieves close to state-of-the-art results at the time of submission (Dec. 1, 2019). (†) indicates unpublished work. RoBERTa-large is used for context encoding, and SemanticRetrievalMRS is used for retrieval. Leaderboard: https://hotpotqa.github.io/.
['Model', 'Ans EM', 'Ans F1', 'Sup EM', 'Sup F1', 'Joint EM', 'Joint F1']
[['TPReasoner Xiong et al. ( 2019 )', '36.04', '47.43', '-', '-', '-', '-'], ['Baseline Model Yang et al. ( 2018 )', '23.95', '32.89', '3.86', '37.71', '1.85', '16.15'], ['QFE Nishida et al. ( 2019 )', '28.66', '38.06', '14.20', '44.35', '8.69', '23.10'], ['MUPPET Feldman and El-Yaniv ( 2019 )', '30.61', '40.26', '16.65', '47.33', '10.85', '27.01'], ['Cognitive Graph Ding et al. ( 2019 )', '37.12', '48.87', '22.82', '57.69', '12.42', '34.92'], ['PR-BERT†', '43.33', '53.79', '21.90', '59.63', '14.50', '39.11'], ['Golden Retriever Qi et al. ( 2019 )', '37.92', '48.58', '30.69', '64.24', '18.04', '39.13'], ['Entity-centric BERT Godbole et al. ( 2019 )', '41.82', '53.09', '26.26', '57.29', '17.01', '39.18'], ['SemanticRetrievalMRS Yixin Nie ( 2019 )', '45.32', '57.34', '38.67', '70.83', '25.14', '47.60'], ['Transformer-XH Zhao et al. ( 2020 )', '48.95', '60.75', '41.66', '70.01', '27.13', '49.57'], ['MIR+EPS+BERT†', '52.86', '64.79', '42.75', '72.00', '31.19', '54.75'], ['Graph Recur. Retriever Asai et al. ( 2020 )', '[BOLD] 60.04', '[BOLD] 72.96', '49.08', '76.41', '35.35', '[BOLD] 61.18'], ['HGN (ours)', '57.85', '69.93', '[BOLD] 51.01', '[BOLD] 76.82', '[BOLD] 37.17', '60.74']]
Table 1 and Table 2 summarize our results on the hidden test set of HotpotQA in the Distractor and Fullwiki setting, respectively. The proposed HGN outperforms both published and unpublished work on every metric by a significant margin. For example, HGN achieves a Joint EM/F1 score of 43.57/71.03 and 35.63/59.86 on the Distractor and Fullwiki setting, respectively, with an absolute improvement of 2.36/0.38 and 6.45/4.55 points over the previous state of the art.
Hierarchical Graph Network for Multi-hop Question Answering
1911.03631v2
Table 3: Ablation study on the effectiveness of the hierarchical graph on the dev set in the Distractor setting. RoBERTa-large is used for context encoding.
['Model', 'Ans F1', 'Sup F1', 'Joint F1']
[['w/o Graph', '80.58', '85.83', '71.02'], ['PS Graph', '81.68', '88.44', '73.83'], ['PSE Graph', '82.10', '88.40', '74.13'], ['Hier. Graph', '[BOLD] 82.22', '[BOLD] 88.58', '[BOLD] 74.37']]
Table 3 shows the performance of paragraph selection on the dev set of HotpotQA. In DFGN, paragraphs are selected based on a threshold to maintain high recall (98.27%), leading to a low precision (60.28%). Compared to both threshold-based and pure TopN -based paragraph selection, our two-step paragraph selection process is more accurate, achieving 94.53% precision and 94.53% recall.
Hierarchical Graph Network for Multi-hop Question Answering
1911.03631v2
Table 5: Results with different pre-trained language models on the dev set in the Distractor setting. (†) is unpublished work with results on the test set, using BERT whole word masking (wwm).
['Model', 'Ans F1', 'Sup F1', 'Joint F1']
[['DFGN (BERT-base)', '69.38', '82.23', '59.89'], ['EPS (BERT-wwm)†', '79.05', '86.26', '70.48'], ['SAE (RoBERTa)', '80.75', '87.38', '72.75'], ['HGN (BERT-base)', '74.76', '86.61', '66.90'], ['HGN (BERT-wwm)', '80.51', '88.14', '72.77'], ['HGN (RoBERTa)', '[BOLD] 82.22', '[BOLD] 88.58', '[BOLD] 74.37']]
As shown in Table 5, the use of PS Graph improves the joint F1 score over the plain RoBERTa model by 1.59 points. By further adding entity nodes, the Joint F1 increases by 0.18 points.
Hierarchical Graph Network for Multi-hop Question Answering
1911.03631v2
Table 7: Results of HGN for different reasoning types.
['Question', 'Ans F1', 'Sup F1', 'Joint F1', 'Pct (%)']
[['comp-yn', '93.45', '94.22', '88.50', '6.19'], ['comp-span', '79.06', '91.72', '74.17', '13.90'], ['bridge', '81.90', '87.60', '73.31', '79.91']]
Results in Table 7 show that our HGN variants outperform DFGN and EPS, indicating that the performance gain comes from a better model design.
Semantic Neural Machine Translation using AMR
1902.07282v1
Table 4: BLEU scores of Dual2seq on the little prince data, when gold or automatic AMRs are available.
['AMR Anno.', 'BLEU']
[['Automatic', '16.8'], ['Gold', '[BOLD] *17.5*']]
Table 4 shows the BLEU scores of our Dual2seq model taking gold or automatic AMRs as inputs. [CONTINUE] The improvement from automatic AMR to gold AMR (+0.7 BLEU) is significant, which shows that the translation quality of our model can be further improved with an increase of AMR parsing accuracy.
Semantic Neural Machine Translation using AMR
1902.07282v1
Table 3: Test performance. NC-v11 represents training only with the NC-v11 data, while Full means using the full training data. * represents significant Koehn (2004) result (p<0.01) over Seq2seq. ↓ indicates the lower the better.
['System', 'NC-v11 BLEU', 'NC-v11 TER↓', 'NC-v11 Meteor', 'Full BLEU', 'Full TER↓', 'Full Meteor']
[['OpenNMT-tf', '15.1', '0.6902', '0.3040', '24.3', '0.5567', '0.4225'], ['Transformer-tf', '17.1', '0.6647', '0.3578', '25.1', '0.5537', '0.4344'], ['Seq2seq', '16.0', '0.6695', '0.3379', '23.7', '0.5590', '0.4258'], ['Dual2seq-LinAMR', '17.3', '0.6530', '0.3612', '24.0', '0.5643', '0.4246'], ['Dual2seq-SRL', '17.2', '0.6591', '0.3644', '23.8', '0.5626', '0.4223'], ['Dual2seq-Dep', '17.8', '0.6516', '0.3673', '25.0', '0.5538', '0.4328'], ['Dual2seq', '[BOLD] *19.2*', '[BOLD] 0.6305', '[BOLD] 0.3840', '[BOLD] *25.5*', '[BOLD] 0.5480', '[BOLD] 0.4376']]
Table 3 shows the test BLEU, TER and Meteor scores of all systems trained on the small-scale News Commentary v11 subset or the large-scale full set. Dual2seq is consistently better than the other systems under all three metrics, [CONTINUE] Dual2seq is better than both OpenNMT-tf and Transformer-tf. [CONTINUE] Dual2seq is significantly better than Seq2seq in both settings, [CONTINUE] In particular, the improvement is much larger under the small-scale setting (+3.2 BLEU) than under the large-scale setting (+1.7 BLEU). [CONTINUE] When trained on the NC-v11 subset, the gap between Seq2seq and Dual2seq under Meteor (around 5 points) is greater than that under BLEU (around 3 points). [CONTINUE] As shown in the second group of Table 3, we further compare our model with other methods of leveraging syntactic or semantic information. Dual2seq-LinAMR shows much worse performance than our model and only slightly outperforms the Seq2seq baseline. [CONTINUE] Encoding dependency trees instead of AMRs, Dual2seq-Dep shows a larger performance gap with our model (17.8 vs 19.2) on small-scale training data than on large-scale training data (25.0 vs 25.5). [CONTINUE] Dual2seq-SRL is less effective than our model,
Recent Advances in Natural Language Inference:A Survey of Benchmarks, Resources, and Approaches
1904.01172v3
Table 2: Comparison of exact-match accuracy achieved on selected benchmarks by a random or majority-choice baseline, various neural contextual embedding models, and humans. ELMo refers to the highest-performing listed approach using ELMo embeddings. Best system performance on each benchmark in bold. Information extracted from leaderboards (linked to in the first column) at time of writing (October 2019), and original papers for benchmarks introduced in Section 2.
['[BOLD] Benchmark', '[BOLD] Simple Baseline ', '[BOLD] ELMo', '[BOLD] GPT', '[BOLD] BERT', '[BOLD] MT-DNN', '[BOLD] XLNet', '[BOLD] RoBERTa', '[BOLD] ALBERT', '[BOLD] Human']
[['[BOLD] CLOTH', '25.0', '70.7', '–', '[BOLD] 86.0', '–', '–', '–', '–', '85.9'], ['[BOLD] Cosmos QA', '–', '–', '54.5', '67.1', '–', '–', '–', '–', '94.0'], ['[BOLD] DREAM', '33.4', '59.5', '55.5', '66.8', '–', '[BOLD] 72.0', '–', '–', '95.5'], ['[BOLD] GLUE', '–', '70.0', '–', '80.5', '87.6', '88.4', '88.5', '[BOLD] 89.4', '87.1'], ['[BOLD] HellaSWAG', '25.0', '33.3', '41.7', '47.3', '–', '–', '[BOLD] 85.2', '[EMPTY]', '95.6'], ['[BOLD] MC-TACO', '17.4', '26.4', '–', '42.7', '–', '–', '[BOLD] 43.6', '–', '75.8'], ['[BOLD] RACE', '24.9', '–', '59.0', '72.0', '–', '81.8', '83.2', '[BOLD] 89.4', '94.5'], ['[BOLD] SciTail', '60.3', '–', '88.3', '–', '94.1', '–', '–', '–', '–'], ['[BOLD] SQuAD 1.1', '1.3', '81.0', '–', '87.4', '–', '[BOLD] 89.9', '–', '–', '82.3'], ['[BOLD] SQuAD 2.0', '48.9', '63.4', '–', '80.8', '–', '86.3', '86.8', '[BOLD] 89.7', '86.9'], ['[BOLD] SuperGLUE', '47.1', '–', '–', '69.0', '–', '–', '[BOLD] 84.6', '–', '89.8'], ['[BOLD] SWAG', '25.0', '59.1', '78.0', '86.3', '87.1', '–', '[BOLD] 89.9', '–', '88.0']]
The most representative models are ELMo, GPT, BERT and its variants, and XLNet. Next, we give a brief overview of these models and summarize their performance on the selected benchmark tasks. Table 2 quantitatively compares the performance of these models on various benchmarks. [CONTINUE] smaller tweaks to various aspects of the models have resulted in hundreds of entries on leaderboards (e.g., those linked to in Section 4.3.3 and Table 2), leading only to marginal improvements.
Entity, Relation, and Event Extraction with Contextualized Span Representations
1909.03546v2
Table 2: F1 scores on NER.
['[EMPTY]', 'ACE05', 'SciERC', 'GENIA', 'WLPC']
[['BERT + LSTM', '85.8', '69.9', '78.4', '[BOLD] 78.9'], ['+RelProp', '85.7', '70.5', '-', '78.7'], ['+CorefProp', '86.3', '[BOLD] 72.0', '78.3', '-'], ['BERT Finetune', '87.3', '70.5', '78.3', '78.5'], ['+RelProp', '86.7', '71.1', '-', '78.8'], ['+CorefProp', '[BOLD] 87.5', '71.1', '[BOLD] 79.5', '-']]
Table 2 shows that Coreference propagation (CorefProp) improves named entity recognition performance across all three domains. The largest gains are on the computer science research abstracts of SciERC,
Entity, Relation, and Event Extraction with Contextualized Span Representations
1909.03546v2
Table 1: DyGIE++ achieves state-of-the-art results. Test set F1 scores of best model, on all tasks and datasets. We define the following notations for events: Trig: Trigger, Arg: argument, ID: Identification, C: Classification. * indicates the use of a 4-model ensemble for trigger detection. See Appendix E for details. The results of the single model are reported in Table 2 (c). We ran significance tests on a subset of results in Appendix D. All were statistically significant except Arg-C and Arg-ID on ACE05-Event.
['Dataset', 'Task', 'SOTA', 'Ours', 'Δ%']
[['ACE05', 'Entity', '88.4', '[BOLD] 88.6', '1.7'], ['ACE05', 'Relation', '63.2', '[BOLD] 63.4', '0.5'], ['ACE05-Event*', 'Entity', '87.1', '[BOLD] 90.7', '27.9'], ['ACE05-Event*', 'Trig-ID', '73.9', '[BOLD] 76.5', '9.6'], ['ACE05-Event*', 'Trig-C', '72.0', '[BOLD] 73.6', '5.7'], ['ACE05-Event*', 'Arg-ID', '[BOLD] 57.2', '55.4', '-4.2'], ['ACE05-Event*', 'Arg-C', '52.4', '[BOLD] 52.5', '0.2'], ['SciERC', 'Entity', '65.2', '[BOLD] 67.5', '6.6'], ['SciERC', 'Relation', '41.6', '[BOLD] 48.4', '11.6'], ['GENIA', 'Entity', '76.2', '[BOLD] 77.9', '7.1'], ['WLPC', 'Entity', '79.5', '[BOLD] 79.7', '1.0'], ['WLPC', 'Relation', '64.1', '[BOLD] 65.9', '5.0']]
Table 1 shows test set F1 on the entity, relation and event extraction tasks. Our framework establishes a new state-of-the-art on all three high-level tasks, and on all subtasks except event argument identification. Relative error reductions range from 0.2 - 27.9% over previous state of the art models.
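The Δ% column appears to correspond to the relative error reduction over the prior state of the art; for ACE05 entity extraction, for example:

```latex
\Delta\% = \frac{\text{Ours} - \text{SOTA}}{100 - \text{SOTA}} \times 100
         = \frac{88.6 - 88.4}{100 - 88.4} \times 100 \approx 1.7\%
```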
Entity, Relation, and Event Extraction with Contextualized Span Representations
1909.03546v2
Table 3: F1 scores on Relation.
['[EMPTY]', 'ACE05', 'SciERC', 'WLPC']
[['BERT + LSTM', '60.6', '40.3', '65.1'], ['+RelProp', '61.9', '41.1', '65.3'], ['+CorefProp', '59.7', '42.6', '-'], ['BERT FineTune', '[BOLD] 62.1', '44.3', '65.4'], ['+RelProp', '62.0', '43.0', '[BOLD] 65.5'], ['+CorefProp', '60.0', '[BOLD] 45.3', '-']]
CorefProp also improves relation extraction on SciERC. [CONTINUE] Relation propagation (RelProp) improves relation extraction performance over pretrained BERT, but does not improve fine-tuned BERT.
Entity, Relation, and Event Extraction with Contextualized Span Representations
1909.03546v2
Table 7: In-domain pre-training: SciBERT vs. BERT
['[EMPTY]', 'SciERC Entity', 'SciERC Relation', 'GENIA Entity']
[['Best BERT', '69.8', '41.9', '78.4'], ['Best SciBERT', '[BOLD] 72.0', '[BOLD] 45.3', '[BOLD] 79.5']]
Table 7 compares the results of BERT and SciBERT with the best-performing model configurations. SciBERT significantly boosts performance for scientific datasets including SciERC and GENIA.
Entity, Relation, and Event Extraction with Contextualized Span Representations
1909.03546v2
Table 6: Effect of BERT cross-sentence context. F1 score of relation F1 on ACE05 dev set and entity, arg, trigger extraction F1 on ACE05-E test set, as a function of the BERT context window size.
['Task', 'Variation', '1', '3']
[['Relation', 'BERT+LSTM', '59.3', '[BOLD] 60.6'], ['Relation', 'BERT Finetune', '62.0', '[BOLD] 62.1'], ['Entity', 'BERT+LSTM', '90.0', '[BOLD] 90.5'], ['Entity', 'BERT Finetune', '88.8', '[BOLD] 89.7'], ['Trigger', 'BERT+LSTM', '[BOLD] 69.4', '68.9'], ['Trigger', 'BERT Finetune', '68.3', '[BOLD] 69.7'], ['Arg Class', 'BERT+LSTM', '48.6', '[BOLD] 51.4'], ['Arg Class', 'BERT Finetune', '[BOLD] 50.0', '48.8']]
Table 6 shows that both variations of our BERT model benefit from wider context windows. Our model achieves the best performance with a 3-sentence window across all relation and event extraction tasks.
Aligning Vector-spaces with Noisy Supervised Lexicons
1903.10238v1
Table 1: Bilingual Experiment P@1. Numbers are based on 10 runs of each method. The En→De, En→Fi and En→Es improvements are significant at p<0.05 according to ANOVA on the different runs.
['Method', 'En→It best', 'En→It avg', 'En→It iters', 'En→De best', 'En→De avg', 'En→De iters', 'En→Fi best', 'En→Fi avg', 'En→Fi iters', 'En→Es best', 'En→Es avg', 'En→Es iters']
[['Artetxe et\xa0al., 2018b', '[BOLD] 48.53', '48.13', '573', '48.47', '48.19', '773', '33.50', '32.63', '988', '37.60', '37.33', '808'], ['Noise-aware Alignment', '[BOLD] 48.53', '[BOLD] 48.20', '471', '[BOLD] 49.67', '[BOLD] 48.89', '568', '[BOLD] 33.98', '[BOLD] 33.68', '502', '[BOLD] 38.40', '[BOLD] 37.79', '551']]
In Table 1 we report the best and average precision@1 scores and the average number of iterations over 10 runs, for the different translation directions. Our model improves the results on the translation tasks. In most setups, our average case is better than the former best case. In addition, the noise-aware model is more stable and therefore requires fewer iterations to converge. The accuracy improvements are small but consistent, and we consider them a lower bound on the actual improvements, as the current test set comes from the same distribution as the training set and also contains similarly noisy pairs.
Predicting Discourse Structure using Distant Supervision from Sentiment
1910.14176v1
Table 3: Discourse structure prediction results; tested on RST-DTtest and Instr-DTtest. Subscripts in inter-domain evaluation sub-table indicate the training set. Best performance in the category is bold. Consistently best model for inter-domain discourse structure prediction is underlined
['Approach', 'RST-DTtest', 'Instr-DTtest']
[['Right Branching', '54.64', '58.47'], ['Left Branching', '53.73', '48.15'], ['Hier. Right Branch.', '[BOLD] 70.82', '[BOLD] 67.86'], ['Hier. Left Branch.', '70.58', '63.49'], ['[BOLD] Intra-Domain Evaluation', '[BOLD] Intra-Domain Evaluation', '[BOLD] Intra-Domain Evaluation'], ['HILDAHernault et al. ( 2010 )', '83.00', '—'], ['DPLPJi and Eisenstein ( 2014 )', '82.08', '—'], ['CODRAJoty et al. ( 2015 )', '83.84', '[BOLD] 82.88'], ['Two-StageWang et al. ( 2017 )', '[BOLD] 86.00', '77.28'], ['[BOLD] Inter-Domain Evaluation', '[BOLD] Inter-Domain Evaluation', '[BOLD] Inter-Domain Evaluation'], ['Two-StageRST-DT', '×', '73.65'], ['Two-StageInstr-DT', '74.48', '×'], ['Two-StageOurs(avg)', '76.42', '[BOLD] 74.22'], ['Two-StageOurs(max)', '[BOLD] 77.24', '73.12'], ['Human Morey et al. ( 2017 )', '88.30', '—']]
We further analyze our findings with respect to baselines and existing discourse parsers. The first set of results in Table 3 shows that the hierarchical right/left-branching baselines dominate the completely right/left-branching ones. However, their performance is still significantly worse than any discourse parser (intra- and inter-domain). [CONTINUE] The second set of results shows the performance of existing discourse parsers when trained and tested on the same dataset (intra-domain). [CONTINUE] The Two-Stage approach by Wang et al. (2017) achieves the best performance, with 86% on structure prediction using the RST-DT dataset. On the Instructional dataset, the CODRA discourse parser by Joty et al. (2015) achieves the highest score with 82.88%. [CONTINUE] The third set in the table shows the key results from Phase 1 on inter-domain performance. [CONTINUE] While the avg attention-aggregation function achieves the most consistent performance on both evaluation corpora, the max function should not be dismissed, as it performs better on the larger RST-DT dataset, which is arguably more related to the Yelp'13 corpus than the sentiment-neutral Instr-DT.
Dynamic Knowledge Routing Network For Target-Guided Open-Domain Conversation
2002.01196v2
Table 4: Results of Self-Play Evaluation.
['System', 'TGPC Succ. (%)', 'TGPC #Turns', 'CWC Succ. (%)', 'CWC #Turns']
[['Retrieval\xa0', '7.16', '4.17', '0', '-'], ['Retrieval-Stgy\xa0', '47.80', '6.7', '44.6', '7.42'], ['PMI\xa0', '35.36', '6.38', '47.4', '5.29'], ['Neural\xa0', '54.76', '4.73', '47.6', '5.16'], ['Kernel\xa0', '62.56', '4.65', '53.2', '4.08'], ['DKRN (ours)', '[BOLD] 89.0', '5.02', '[BOLD] 84.4', '4.20']]
Figure 2: A conversation example between human (H) and agents (A) with the same target and starting utterance in the CWC dataset. Keywords predicted by the agents or mentioned by the human are highlighted in bold; the target achieved at the end of a conversation is underlined. In the self-play simulation, the system judges whether the current conversation context contains the end target. We set the maximum number of turns to 8 to prevent an endless conversation that cannot reach the target. We use the success rate of reaching the targets (Succ.) and the average number of turns taken to achieve the target (#Turns) as our evaluation criteria. Table 4 shows the results of 500 simulations for each of the comparison systems. Although the average number of turns of our approach is slightly more than Kernel, our system obtains the highest success rate, significantly improving over the other approaches.
Dynamic Knowledge Routing Network For Target-Guided Open-Domain Conversation
2002.01196v2
Table 3: Results of Turn-level Evaluation.
['Dataset', 'System', 'Keyword Prediction [ITALIC] Rw@1', 'Keyword Prediction [ITALIC] Rw@3', 'Keyword Prediction [ITALIC] Rw@5', 'Keyword Prediction P@1', 'Response Retrieval [ITALIC] R20@1', 'Response Retrieval [ITALIC] R20@3', 'Response Retrieval [ITALIC] R20@5', 'Response Retrieval MRR']
[['TGPC', 'Retrieval\xa0', '-', '-', '-', '-', '0.5063', '0.7615', '0.8676', '0.6589'], ['TGPC', 'PMI\xa0', '0.0585', '0.1351', '0.1872', '0.0871', '0.5441', '0.7839', '0.8716', '0.6847'], ['TGPC', 'Neural\xa0', '0.0708', '0.1438', '0.1820', '0.1321', '0.5311', '0.7905', '0.8800', '0.6822'], ['TGPC', 'Kernel\xa0', '0.0632', '0.1377', '0.1798', '0.1172', '0.5386', '0.8012', '0.8924', '0.6877'], ['TGPC', 'DKRN (ours)', '[BOLD] 0.0909', '[BOLD] 0.1903', '[BOLD] 0.2477', '[BOLD] 0.1685', '[BOLD] 0.5729', '[BOLD] 0.8132', '[BOLD] 0.8966', '[BOLD] 0.7110'], ['CWC', 'Retrieval\xa0', '-', '-', '-', '-', '0.5785', '0.8101', '0.8999', '0.7141'], ['CWC', 'PMI\xa0', '0.0555', '0.1001', '0.1212', '0.0969', '0.5945', '0.8185', '0.9054', '0.7257'], ['CWC', 'Neural\xa0', '0.0654', '0.1194', '0.1450', '0.1141', '0.6044', '0.8233', '0.9085', '0.7326'], ['CWC', 'Kernel\xa0', '0.0592', '0.1113', '0.1337', '0.1011', '0.6017', '0.8234', '0.9087', '0.7320'], ['CWC', 'DKRN (ours)', '[BOLD] 0.0680', '[BOLD] 0.1254', '[BOLD] 0.1548', '[BOLD] 0.1185', '[BOLD] 0.6324', '[BOLD] 0.8416', '[BOLD] 0.9183', '[BOLD] 0.7533']]
Table 3 shows the turn-level evaluation results. Our approach, DKRN, outperforms all state-of-the-art methods on all metrics for both tasks on both datasets.
Dynamic Knowledge Routing Network For Target-Guided Open-Domain Conversation
2002.01196v2
Table 5: Results of the Human Rating on CWC.
['System', 'Succ. (%)', 'Smoothness']
[['Retrieval-Stgy\xa0', '54.0', '2.48'], ['PMI\xa0', '46.0', '2.56'], ['Neural\xa0', '36.0', '2.50'], ['Kernel\xa0', '58.0', '2.48'], ['DKRN (ours)', '[BOLD] 88.0', '[BOLD] 3.22']]
Table 5 shows the evaluation results. Our DKRN agent outperforms all other agents by a large margin.
Dynamic Knowledge Routing Network For Target-Guided Open-Domain Conversation
2002.01196v2
Table 6: Results of the Human Rating on CWC.
['[EMPTY]', 'Ours Better(%)', 'No Prefer(%)', 'Ours Worse(%)']
[['Retrieval-Stgy\xa0', '[BOLD] 62', '22', '16'], ['PMI\xa0', '[BOLD] 54', '32', '14'], ['Neural\xa0', '[BOLD] 60', '22', '18'], ['Kernel\xa0', '[BOLD] 62', '26', '12']]
Table 6 shows the results of the second study. Our agent outperforms the comparison agents by a large margin.
Modulated Self-attention Convolutional Network for VQA
1910.03343v2
Table 1: Experiments run on a ResNet-34. Numbers following S (stages) and B (blocks) indicate where SA (self-attention) modules are put. Parameters count concerns only SA and are in millions (M).
['[BOLD] ResNet-34', '[BOLD] Eval set %', '[BOLD] #param']
[['Baseline (No SA)Anderson et al. ( 2018 )', '55.00', '0M'], ['SA (S: 1,2,3 - B: 1)', '55.11', '} 0.107M'], ['SA (S: 1,2,3 - B: 2)', '55.17', '} 0.107M'], ['[BOLD] SA (S: 1,2,3 - B: 3)', '[BOLD] 55.27', '} 0.107M']]
We propose to modulate, by a linguistic input, a CNN augmented with self-attention. We show encouraging relative improvements for future research in this direction. [CONTINUE] We empirically found that self-attention was the most efficient in the 3rd stage. [CONTINUE] We notice small improvements relative to the baseline, showing that self-attention alone does improve the VQA task. [CONTINUE] We showed that it is possible to improve the feature extraction procedure for the VQA task by adding self-attention modules in the different ResNet blocks.
Modulated Self-attention Convolutional Network for VQA
1910.03343v2
Table 1: Experiments run on a ResNet-34. Numbers following S (stages) and B (blocks) indicate where SA (self-attention) modules are put. Parameters count concerns only SA and are in millions (M).
['[BOLD] ResNet-34', '[BOLD] Eval set %', '[BOLD] #param']
[['SA (S: 3 - B: 1)', '55.25', '} 0.082M'], ['[BOLD] SA (S: 3 - B: 3)', '[BOLD] 55.42', '} 0.082M'], ['SA (S: 3 - B: 4)', '55.33', '} 0.082M'], ['SA (S: 3 - B: 6)', '55.31', '} 0.082M'], ['SA (S: 3 - B: 1,3,5)', '55.45', '} 0.245M'], ['[BOLD] SA (S: 3 - B: 2,4,6)', '[BOLD] 55.56', '} 0.245M']]
However, we managed to show improvements with the β modulation on a ResNet-152. Though the improvement is slim, it is encouraging to continue research into visual modulation.
Toward Extractive Summarization of Online Forum Discussions via Hierarchical Attention Networks
1805.10390v2
Table 1: Results of thread summarization. ‘HAN’ models are our proposed approaches adapted from the hierarchical attention networks [Yang et al.2016]. The models can be pretrained using unlabeled threads from TripAdvisor (‘T’) and Ubuntuforum (‘U’). r indicates a redundancy removal step is applied. We report the variance of F-scores across all threads (‘±’). A redundancy removal step improves recall scores (shown in gray) of the HAN models and boosts performance.
['[BOLD] System', '[BOLD] ROUGE-1 [BOLD] R (%)', '[BOLD] ROUGE-1 [BOLD] P (%)', '[BOLD] ROUGE-1 [BOLD] F (%)', '[BOLD] ROUGE-2 [BOLD] R (%)', '[BOLD] ROUGE-2 [BOLD] P (%)', '[BOLD] ROUGE-2 [BOLD] F (%)', '[BOLD] Sentence-Level [BOLD] R (%)', '[BOLD] Sentence-Level [BOLD] P (%)', '[BOLD] Sentence-Level [BOLD] F (%)']
[['[BOLD] ILP', '24.5', '41.1', '29.3±0.5', '7.9', '15.0', '9.9±0.5', '13.6', '22.6', '15.6±0.4'], ['[BOLD] Sum-Basic', '28.4', '44.4', '33.1±0.5', '8.5', '15.6', '10.4±0.4', '14.7', '22.9', '16.7±0.5'], ['[BOLD] KL-Sum', '39.5', '34.6', '35.5±0.5', '13.0', '12.7', '12.3±0.5', '15.2', '21.1', '16.3±0.5'], ['[BOLD] LexRank', '42.1', '39.5', '38.7±0.5', '14.7', '15.3', '14.2±0.5', '14.3', '21.5', '16.0±0.5'], ['[BOLD] MEAD', '45.5', '36.5', '38.5± 0.5', '17.9', '14.9', '15.4±0.5', '27.8', '29.2', '26.8±0.5'], ['[BOLD] SVM', '19.0', '48.8', '24.7±0.8', '7.5', '21.1', '10.0±0.5', '32.7', '34.3', '31.4±0.4'], ['[BOLD] LogReg', '26.9', '34.5', '28.7±0.6', '6.4', '9.9', '7.3±0.4', '12.2', '14.9', '12.7±0.5'], ['[BOLD] LogReg [ITALIC] r', '28.0', '34.8', '29.4±0.6', '6.9', '10.4', '7.8±0.4', '12.1', '14.5', '12.5±0.5'], ['[BOLD] HAN', '31.0', '42.8', '33.7±0.7', '11.2', '17.8', '12.7±0.5', '26.9', '34.1', '32.4±0.5'], ['[BOLD] HAN+pretrainT', '32.2', '42.4', '34.4±0.7', '11.5', '17.5', '12.9±0.5', '29.6', '35.8', '32.2±0.5'], ['[BOLD] HAN+pretrainU', '32.1', '42.1', '33.8±0.7', '11.6', '17.6', '12.9±0.5', '30.1', '35.6', '32.3±0.5'], ['[BOLD] HAN [ITALIC] r', '38.1', '40.5', '[BOLD] 37.8±0.5', '14.0', '17.1', '[BOLD] 14.7±0.5', '32.5', '34.4', '[BOLD] 33.4±0.5'], ['[BOLD] HAN+pretrainT [ITALIC] r', '37.9', '40.4', '[BOLD] 37.6±0.5', '13.5', '16.8', '[BOLD] 14.4±0.5', '32.5', '34.4', '[BOLD] 33.4±0.5'], ['[BOLD] HAN+pretrainU [ITALIC] r', '37.9', '40.4', '[BOLD] 37.6±0.5', '13.6', '16.9', '[BOLD] 14.4±0.5', '33.9', '33.8', '[BOLD] 33.8±0.5']]
The experimental results of all models are shown in Table 1. [CONTINUE] First, HAN models appear to be more appealing than SVM and LogReg because there is less variation in program implementation, hence less effort is required to reproduce the results. HAN models outperform both LogReg and SVM using the current set of features. They yield higher precision scores than traditional models. [CONTINUE] With respect to ROUGE scores, the HAN models outperform all supervised and unsupervised baselines except MEAD. MEAD has been shown to perform well in previous studies (Luo et al. 2016) and it appears to handle redundancy removal exceptionally well. The HAN models outperform MEAD in terms of sentence prediction. [CONTINUE] Pretraining the HAN models, although intuitively promising, yields results only comparable to those without pretraining. We suspect that there are not enough data to pretrain the models and that the thread classification task used to pretrain the HAN models may not be sophisticated enough to learn effective thread vectors. [CONTINUE] We observe that the redundancy removal step is crucial for the HAN models to achieve outstanding results. It helps improve the recall scores of both ROUGE and sentence prediction. When redundancy removal was applied to LogReg, it produced only marginal improvement. This suggests that future work may need to consider principled ways of redundancy removal.
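The excerpt does not spell out how the redundancy removal step r works; one common approach (assumed here only for illustration) is to greedily take the highest-scoring sentences while skipping any sentence that is too similar to one already selected.

```python
# Hypothetical redundancy-removal sketch: greedy selection with a cosine
# similarity cutoff. Sentence embeddings and scores are random placeholders.
import numpy as np

def select_non_redundant(sent_vecs, scores, k, sim_threshold=0.7):
    """sent_vecs: (n, d) sentence embeddings; scores: (n,) model scores."""
    vecs = sent_vecs / np.linalg.norm(sent_vecs, axis=1, keepdims=True)
    chosen = []
    for idx in np.argsort(-scores):          # best-scoring sentences first
        if len(chosen) == k:
            break
        if all(vecs[idx] @ vecs[j] < sim_threshold for j in chosen):
            chosen.append(int(idx))
    return chosen

rng = np.random.default_rng(0)
print(select_non_redundant(rng.normal(size=(20, 64)), rng.random(20), k=3))
```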
Towards Universal Dialogue State Tracking
1810.09587v1
Table 1: Joint goal accuracy on DSTC2 and WOZ 2.0 test set vs. various approaches as reported in the literature.
['[BOLD] DST Models', '[BOLD] Joint Acc. DSTC2', '[BOLD] Joint Acc. WOZ 2.0']
[['Delexicalisation-Based (DB) Model Mrkšić et\xa0al. ( 2017 )', '69.1', '70.8'], ['DB Model + Semantic Dictionary Mrkšić et\xa0al. ( 2017 )', '72.9', '83.7'], ['Scalable Multi-domain DST Rastogi et\xa0al. ( 2017 )', '70.3', '-'], ['MemN2N Perez and Liu ( 2017 )', '74.0', '-'], ['PtrNet Xu and Hu ( 2018 )', '72.1', '-'], ['Neural Belief Tracker: NBT-DNN Mrkšić et\xa0al. ( 2017 )', '72.6', '84.4'], ['Neural Belief Tracker: NBT-CNN Mrkšić et\xa0al. ( 2017 )', '73.4', '84.2'], ['Belief Tracking: Bi-LSTM Ramadan et\xa0al. ( 2018 )', '-', '85.1'], ['Belief Tracking: CNN Ramadan et\xa0al. ( 2018 )', '-', '85.5'], ['GLAD Zhong et\xa0al. ( 2018 )', '74.5', '88.1'], ['StateNet', '74.1', '87.8'], ['StateNet_PS', '74.5', '88.2'], ['[BOLD] StateNet_PSI', '[BOLD] 75.5', '[BOLD] 88.9']]
The results in Table 1 show the effectiveness of parameter sharing and initialization. StateNet PS outperforms StateNet, and StateNet PSI performs best among all 3 models. [CONTINUE] Besides, StateNet PSI beats all the models reported in the previous literature, whether they use delexicalisation (Henderson et al., 2014b,c; Rastogi et al., 2017) or not (Mrkšić et al., 2017; Perez and Liu, 2017; Xu and Hu, 2018; Ramadan et al., 2018; Zhong et al., 2018).
Towards Universal Dialogue State Tracking
1810.09587v1
Table 2: Joint goal accuracy on DSTC2 and WOZ 2.0 of StateNet_PSI using different pre-trained models based on different single slot.
['[BOLD] Initialization', '[BOLD] Joint Acc. DSTC2', '[BOLD] Joint Acc. WOZ 2.0']
[['[ITALIC] food', '[BOLD] 75.5', '[BOLD] 88.9'], ['[ITALIC] pricerange', '73.6', '88.2'], ['[ITALIC] area', '73.5', '87.8']]
We also test StateNet PSI with different pre-trained models, as shown in Table 2. The fact that the food initialization has the best performance verifies our selection of the slot with the worst performance for pre-training.
Aligning Multilingual Word Embeddings for Cross-Modal Retrieval Task
1910.03291v1
Table 5: Textual similarity scores (asymmetric, Multi30k).
['[EMPTY]', 'EN → DE R@1', 'EN → DE R@5', 'EN → DE R@10', 'DE → EN R@1', 'DE → EN R@5', 'DE → EN R@10']
[['FME', '51.4', '76.4', '84.5', '46.9', '71.2', '79.1'], ['AME', '[BOLD] 51.7', '[BOLD] 76.7', '[BOLD] 85.1', '[BOLD] 49.1', '[BOLD] 72.6', '[BOLD] 80.5']]
In Table 5, we show the performance on the Multi30k dataset in asymmetric mode. AME outperforms the FME model, confirming the importance of word embeddings adaptation.
Aligning Multilingual Word Embeddings for Cross-Modal Retrieval Task
1910.03291v1
Table 1: Image-caption ranking results for English (Multi30k)
['[EMPTY]', 'Image to Text R@1', 'Image to Text R@5', 'Image to Text R@10', 'Image to Text Mr', 'Text to Image R@1', 'Text to Image R@5', 'Text to Image R@10', 'Text to Image Mr', 'Alignment']
[['[BOLD] symmetric', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['Parallel\xa0gella:17', '31.7', '62.4', '74.1', '3', '24.7', '53.9', '65.7', '5', '-'], ['UVS\xa0kiros:15', '23.0', '50.7', '62.9', '5', '16.8', '42.0', '56.5', '8', '-'], ['EmbeddingNet\xa0wang:18', '40.7', '69.7', '79.2', '-', '29.2', '59.6', '71.7', '-', '-'], ['sm-LSTM\xa0huang:17', '42.5', '71.9', '81.5', '2', '30.2', '60.4', '72.3', '3', '-'], ['VSE++\xa0faghri:18', '[BOLD] 43.7', '71.9', '82.1', '2', '32.3', '60.9', '72.1', '3', '-'], ['Mono', '41.4', '74.2', '84.2', '2', '32.1', '63.0', '73.9', '3', '-'], ['FME', '39.2', '71.1', '82.1', '2', '29.7', '62.5', '74.1', '3', '76.81%'], ['AME', '43.5', '[BOLD] 77.2', '[BOLD] 85.3', '[BOLD] 2', '[BOLD] 34.0', '[BOLD] 64.2', '[BOLD] 75.4', '[BOLD] 3', '66.91%'], ['[BOLD] asymmetric', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['Pivot\xa0gella:17', '33.8', '62.8', '75.2', '3', '26.2', '56.4', '68.4', '4', '-'], ['Parallel\xa0gella:17', '31.5', '61.4', '74.7', '3', '27.1', '56.2', '66.9', '4', '-'], ['Mono', '47.7', '77.1', '86.9', '2', '35.8', '66.6', '76.8', '3', '-'], ['FME', '44.9', '76.9', '86.4', '2', '34.2', '66.1', '77.1', '3', '76.81%'], ['AME', '[BOLD] 50.5', '[BOLD] 79.7', '[BOLD] 88.4', '[BOLD] 1', '[BOLD] 38.0', '[BOLD] 68.5', '[BOLD] 78.4', '[BOLD] 2', '73.10%']]
We show the results for English and German captions. For English captions, we see a 21.28% improvement on average compared to Kiros et al. (2014). There is a 1.8% boost on average compared to Mono due to more training data and a multilingual text encoder. AME performs better than the FME model in both symmetric and asymmetric modes, which shows the advantage of finetuning word embeddings during training. We have a 25.26% boost on average compared to Kiros et al. (2014) in asymmetric mode.
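For reference, the R@K and Mr columns in these tables are standard retrieval metrics; a minimal sketch of how they can be computed from a similarity matrix is given below. The score matrix is a random placeholder, and the i-th caption is assumed to be the match for the i-th image.

```python
# Sketch of Recall@K and median rank (Mr) for cross-modal retrieval, computed
# from a score matrix where scores[i, j] is the similarity between image i and
# caption j, and caption i is the ground-truth match of image i.
import numpy as np

def retrieval_metrics(scores, ks=(1, 5, 10)):
    order = np.argsort(-scores, axis=1)                      # best caption first
    ranks = np.array([np.where(order[i] == i)[0][0] for i in range(len(scores))])
    recalls = {k: 100.0 * np.mean(ranks < k) for k in ks}
    return recalls, np.median(ranks) + 1                     # 1-based median rank

scores = np.random.default_rng(0).normal(size=(100, 100))    # toy similarity matrix
print(retrieval_metrics(scores))
```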
Aligning Multilingual Word Embeddings for Cross-Modal Retrieval Task
1910.03291v1
Table 2: Image-caption ranking results for German (Multi30k)
['[EMPTY]', 'Image to Text R@1', 'Image to Text R@5', 'Image to Text R@10', 'Image to Text Mr', 'Text to Image R@1', 'Text to Image R@5', 'Text to Image R@10', 'Text to Image Mr', 'Alignment']
[['[BOLD] symmetric', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['Parallel\xa0gella:17', '28.2', '57.7', '71.3', '4', '20.9', '46.9', '59.3', '6', '-'], ['Mono', '34.2', '67.5', '79.6', '3', '26.5', '54.7', '66.2', '4', '-'], ['FME', '36.8', '69.4', '80.8', '2', '26.6', '56.2', '68.5', '4', '76.81%'], ['AME', '[BOLD] 39.6', '[BOLD] 72.7', '[BOLD] 82.7', '[BOLD] 2', '[BOLD] 28.9', '[BOLD] 58.0', '[BOLD] 68.7', '[BOLD] 4', '66.91%'], ['[BOLD] asymmetric', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['Pivot\xa0gella:17', '28.2', '61.9', '73.4', '3', '22.5', '49.3', '61.7', '6', '-'], ['Parallel\xa0gella:17', '30.2', '60.4', '72.8', '3', '21.8', '50.5', '62.3', '5', '-'], ['Mono', '[BOLD] 42.0', '72.5', '83.0', '2', '29.6', '58.4', '69.6', '4', '-'], ['FME', '40.5', '73.3', '83.4', '2', '29.6', '59.2', '[BOLD] 72.1', '3', '76.81%'], ['AME', '40.5', '[BOLD] 74.3', '[BOLD] 83.4', '[BOLD] 2', '[BOLD] 31.0', '[BOLD] 60.5', '70.6', '[BOLD] 3', '73.10%']]
For German descriptions, the results are 11.05% better on average compared to Gella et al. (2017) in symmetric mode. AME also achieves competitive or better results than the FME model for German descriptions.
Aligning Multilingual Word Embeddings for Cross-Modal Retrieval Task
1910.03291v1
Table 3: Image-caption ranking results for English (MS-COCO)
['[EMPTY]', 'Image to Text R@1', 'Image to Text R@5', 'Image to Text R@10', 'Image to Text Mr', 'Text to Image R@1', 'Text to Image R@5', 'Text to Image R@10', 'Text to Image Mr', 'Alignment']
[['[BOLD] symmetric', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['UVS\xa0kiros:15', '43.4', '75.7', '85.8', '2', '31.0', '66.7', '79.9', '3', '-'], ['EmbeddingNet\xa0wang:18', '50.4', '79.3', '89.4', '-', '39.8', '75.3', '86.6', '-', '-'], ['sm-LSTM\xa0huang:17', '53.2', '83.1', '91.5', '1', '40.7', '75.8', '87.4', '2', '-'], ['VSE++\xa0faghri:18', '[BOLD] 58.3', '[BOLD] 86.1', '93.3', '1', '[BOLD] 43.6', '77.6', '87.8', '2', '-'], ['Mono', '51.8', '84.8', '93.5', '1', '40.0', '77.3', '89.4', '2', '-'], ['FME', '42.2', '76.6', '91.1', '2', '31.2', '69.2', '83.7', '3', '92.70%'], ['AME', '54.6', '85', '[BOLD] 94.3', '[BOLD] 1', '42.1', '[BOLD] 78.7', '[BOLD] 90.3', '[BOLD] 2', '82.54%'], ['[BOLD] asymmetric', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['Mono', '53.2', '87.0', '94.7', '1', '42.3', '78.9', '90', '2', '-'], ['FME', '48.3', '83.6', '93.6', '2', '37.2', '75.4', '88.4', '2', '92.70%'], ['AME', '[BOLD] 58.8', '[BOLD] 88.6', '[BOLD] 96.2', '[BOLD] 1', '[BOLD] 46.2', '[BOLD] 82.5', '[BOLD] 91.9', '[BOLD] 2', '84.99%']]
We achieve a 10.42% improvement on average compared to Kiros et al. (2014) in the symmetric mode. We show that adapting the word embeddings for the task at hand boosts the general performance, since the AME model significantly outperforms the FME model in both languages.
Aligning Multilingual Word Embeddings for Cross-Modal Retrieval Task
1910.03291v1
Table 4: Image-caption ranking results for Japanese (MS-COCO)
['[EMPTY]', 'Image to Text R@1', 'Image to Text R@5', 'Image to Text R@10', 'Image to Text Mr', 'Text to Image R@1', 'Text to Image R@5', 'Text to Image R@10', 'Text to Image Mr', 'Alignment']
[['[BOLD] symmetric', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['Mono', '42.7', '77.7', '88.5', '2', '33.1', '69.8', '84.3', '3', '-'], ['FME', '40.7', '77.7', '88.3', '2', '30.0', '68.9', '83.1', '3', '92.70%'], ['AME', '[BOLD] 50.2', '[BOLD] 85.6', '[BOLD] 93.1', '[BOLD] 1', '[BOLD] 40.2', '[BOLD] 76.7', '[BOLD] 87.8', '[BOLD] 2', '82.54%'], ['[BOLD] asymmetric', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['Mono', '49.9', '83.4', '93.7', '2', '39.7', '76.5', '88.3', '[BOLD] 2', '-'], ['FME', '48.8', '81.9', '91.9', '2', '37.0', '74.8', '87.0', '[BOLD] 2', '92.70%'], ['AME', '[BOLD] 55.5', '[BOLD] 87.9', '[BOLD] 95.2', '[BOLD] 1', '[BOLD] 44.9', '[BOLD] 80.7', '[BOLD] 89.3', '[BOLD] 2', '84.99%']]
For the Japanese captions, AME achieves 6.25% and 3.66% better results on average compared to the monolingual model in symmetric and asymmetric modes, respectively.
How does Grammatical Gender Affect Noun Representations in Gender-Marking Languages?
1910.14161v1
Table 4: Averages of similarities of pairs with same vs. different gender in Italian and German compared to English. The last row is the difference between the averages of the two sets. “Reduction” stands for gap reduction when removing gender signals from the context.
['[EMPTY]', 'Italian Original', 'Italian Debiased', 'Italian English', 'Italian Reduction', 'German Original', 'German Debiased', 'German English', 'German Reduction']
[['Same Gender', '0.442', '0.434', '0.424', '–', '0.491', '0.478', '0.446', '–'], ['Different Gender', '0.385', '0.421', '0.415', '–', '0.415', '0.435', '0.403', '–'], ['difference', '0.057', '0.013', '0.009', '[BOLD] 91.67%', '0.076', '0.043', '0.043', '[BOLD] 100%']]
Table 4 shows the results for Italian and German, compared to English, both for the original and the debiased embeddings (for each language we show the results of the best performing debiased embeddings). [CONTINUE] As expected, in both languages, the difference between the average of the two sets with the debiased embeddings is much lower. In Italian, we get a reduction of 91.67% of the gap with respect to English. In German, we get a reduction of 100%.
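A minimal sketch of the measurement behind Table 4, under the assumption that it averages cosine similarities within same-gender and different-gender word pairs and compares the gap; the embeddings and pair lists below are toy placeholders, not the paper's data.

```python
# Illustrative computation of the same- vs. different-gender similarity gap.
import numpy as np

def avg_cos(pairs, emb):
    sims = []
    for w1, w2 in pairs:
        v1, v2 = emb[w1], emb[w2]
        sims.append(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)))
    return float(np.mean(sims))

rng = np.random.default_rng(0)
emb = {w: rng.normal(size=50) for w in ["luna", "sole", "casa", "libro"]}
same_gender = [("casa", "luna")]                    # both feminine (toy example)
diff_gender = [("sole", "casa"), ("libro", "luna")] # mixed gender (toy example)
gap = avg_cos(same_gender, emb) - avg_cos(diff_gender, emb)
print(f"gender gap: {gap:.3f}")
```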
How does Grammatical Gender Affect Noun Representations in Gender-Marking Languages?
1910.14161v1
Table 2: Averages of rankings of the words in same-gender pairs vs. different-gender pairs for Italian and German, along with their differences. Og stands for the original embeddings, Db for the debiased embeddings, and En for English. Each row presents the averages of pairs with the respective scores in SimLex-999 (0–4, 4–7, 7–10).
['[EMPTY]', 'Italian Same-gender', 'Italian Diff-Gender', 'Italian difference', 'German Same-gender', 'German Diff-Gender', 'German difference']
[['7–10', 'Og: 4884', 'Og: 12947', 'Og: 8063', 'Og: 5925', 'Og: 33604', 'Og: 27679'], ['7–10', 'Db: 5523', 'Db: 7312', 'Db: 1789', 'Db: 7653', 'Db: 26071', 'Db: 18418'], ['7–10', 'En: 6978', 'En: 2467', 'En: -4511', 'En: 4517', 'En: 8666', 'En: 4149'], ['4–7', 'Og: 10954', 'Og: 15838', 'Og: 4884', 'Og: 19271', 'Og: 27256', 'Og: 7985'], ['4–7', 'Db: 12037', 'Db: 12564', 'Db: 527', 'Db: 24845', 'Db: 22970', 'Db: -1875'], ['4–7', 'En: 15891', 'En: 17782', 'En: 1891', 'En: 13282', 'En: 17649', 'En: 4367'], ['0–4', 'Og: 23314', 'Og: 35783', 'Og: 12469', 'Og: 50983', 'Og: 85263', 'Og: 34280'], ['0–4', 'Db: 26386', 'Db: 28067', 'Db: 1681', 'Db: 60603', 'Db: 79081', 'Db: 18478'], ['0–4', 'En: 57278', 'En: 53053', 'En: -4225', 'En: 41509', 'En: 62929', 'En: 21420']]
Table 2 shows the results for Italian and German, compared to English. As expected, the average ranking of same-gender pairs is significantly lower than that of different-gender pairs, both for German and Italian, while the difference between the sets in English is much smaller. [CONTINUE] Table 2 shows the results for Italian and German, both for the original and the debiased embeddings. [CONTINUE] As we expect, the difference between the average ranking of the two sets drops significantly for both languages.
How does Grammatical Gender Affect Noun Representations in Gender-Marking Languages?
1910.14161v1
Table 6: Results on SimLex-999 and WordSim-353, in Italian and German, before and after debiasing.
['[EMPTY]', 'Italian Orig', 'Italian Debias', 'German Orig', 'German Debias']
[['SimLex', '0.280', '[BOLD] 0.288', '0.343', '[BOLD] 0.356'], ['WordSim', '0.548', '[BOLD] 0.577', '0.547', '[BOLD] 0.553']]
Table 6 shows the results for Italian and German for both datasets, compared to the original embeddings. In both cases, the new embeddings perform better than the original ones.
How does Grammatical Gender Affect Noun Representations in Gender-Marking Languages?
1910.14161v1
Table 7: Cross-lingual embedding alignment in Italian and in German, before and after debiasing.
['[EMPTY]', 'Italian → En', 'Italian En →', 'German → En', 'German En →']
[['Orig', '58.73', '59.68', '47.58', '50.48'], ['Debias', '[BOLD] 60.03', '[BOLD] 60.96', '[BOLD] 47.89', '[BOLD] 51.76']]
The results reported in Table 7 show that precision on BDI indeed increases as a result of the reduced effect of grammatical gender on the embeddings for German and Italian, i.e. that the embedding spaces can be aligned better with the debiased embeddings.
Towards Quantifying the Distance between Opinions
2001.09879v1
Table 6: Performance comparison of the distance measures on all 18 datasets. The semantic distance in opinion distance (OD) measure is computed via cosine distance over either Word2vec (OD-w2v with semantic distance threshold 0.6) or Doc2vec (OD-d2v with distance threshold 0.3) embeddings. Sil. refers to Silhouette Coefficient. The second best result is italicized and underlined. The ARI and Silhouette coefficients scores of both OD methods (OD-d2v and OD-w2v) are statistically significant (paired t-test) with respect to baselines at significance level 0.005.
['Topic Name', 'Size', 'TF-IDF ARI', 'WMD ARI', 'Sent2vec ARI', 'Doc2vec ARI', 'BERT ARI', '[ITALIC] OD-w2v ARI', '[ITALIC] OD-d2v ARI', 'TF-IDF [ITALIC] Sil.', 'WMD [ITALIC] Sil.', 'Sent2vec [ITALIC] Sil.', 'Doc2vec [ITALIC] Sil.', 'BERT [ITALIC] Sil.', '[ITALIC] OD-w2v [ITALIC] Sil.', '[ITALIC] OD-d2v [ITALIC] Sil.']
[['Affirmative Action', '81', '-0.07', '-0.02', '0.03', '-0.01', '-0.02', '[BOLD] 0.14', '[ITALIC] 0.02', '0.01', '0.01', '-0.01', '-0.02', '-0.04', '[BOLD] 0.06', '[ITALIC] 0.01'], ['Atheism', '116', '[BOLD] 0.19', '0.07', '0.00', '0.03', '-0.01', '0.11', '[ITALIC] 0.16', '0.02', '0.01', '0.02', '0.01', '0.01', '[ITALIC] 0.05', '[BOLD] 0.07'], ['Austerity Measures', '20', '[ITALIC] 0.04', '[ITALIC] 0.04', '-0.01', '-0.05', '0.04', '[BOLD] 0.21', '-0.01', '0.06', '0.07', '0.05', '-0.03', '0.10', '[BOLD] 0.19', '0.1'], ['Democratization', '76', '0.02', '-0.01', '0.00', '[ITALIC] 0.09', '-0.01', '[BOLD] 0.11', '0.07', '0.01', '0.01', '0.02', '0.02', '0.03', '[BOLD] 0.16', '[ITALIC] 0.11'], ['Education Voucher Scheme', '30', '[BOLD] 0.25', '0.12', '0.08', '-0.02', '0.04', '0.13', '[ITALIC] 0.19', '0.01', '0.01', '0.01', '-0.01', '0.02', '[ITALIC] 0.38', '[BOLD] 0.40'], ['Gambling', '60', '-0.06', '-0.01', '-0.02', '0.04', '0.09', '[ITALIC] 0.35', '[BOLD] 0.39', '0.01', '0.02', '0.03', '0.01', '0.09', '[BOLD] 0.30', '[ITALIC] 0.22'], ['Housing', '30', '0.01', '-0.01', '-0.01', '-0.02', '0.08', '[BOLD] 0.27', '0.01', '0.02', '0.03', '0.03', '0.01', '0.11', '[BOLD] 0.13', '[ITALIC] 0.13'], ['Hydroelectric Dams', '110', '[BOLD] 0.47', '[ITALIC] 0.45', '[ITALIC] 0.45', '-0.01', '0.38', '0.35', '0.14', '0.04', '0.08', '0.12', '0.01', '0.19', '[BOLD] 0.26', '[ITALIC] 0.09'], ['Intellectual Property', '66', '0.01', '0.01', '0.00', '0.03', '0.03', '[ITALIC] 0.05', '[BOLD] 0.14', '0.01', '[ITALIC] 0.04', '0.03', '0.01', '0.03', '[ITALIC] 0.04', '[BOLD] 0.12'], ['Keystone pipeline', '18', '0.01', '0.01', '0.00', '-0.13', '[BOLD] 0.07', '-0.01', '[BOLD] 0.07', '-0.01', '-0.03', '-0.03', '-0.07', '0.03', '[BOLD] 0.05', '[ITALIC] 0.02'], ['Monarchy', '61', '-0.04', '0.01', '0.00', '0.03', '-0.02', '[BOLD] 0.15', '[BOLD] 0.15', '0.01', '0.02', '0.02', '0.01', '0.01', '[BOLD] 0.11', '[ITALIC] 0.09'], ['National Service', '33', '0.14', '-0.03', '-0.01', '0.02', '0.01', '[ITALIC] 0.31', '[BOLD] 0.39', '0.02', '0.04', '0.02', '0.01', '0.02', '[BOLD] 0.25', '[BOLD] 0.25'], ['One-child policy China', '67', '-0.05', '0.01', '[BOLD] 0.11', '-0.02', '0.02', '[BOLD] 0.11', '0.01', '0.01', '0.02', '[ITALIC] 0.04', '-0.01', '0.03', '[BOLD] 0.07', '-0.02'], ['Open-source Software', '48', '-0.02', '-0.01', '[ITALIC] 0.05', '0.01', '0.12', '[BOLD] 0.09', '-0.02', '0.01', '-0.01', '0.00', '-0.02', '0.03', '[BOLD] 0.18', '0.01'], ['Pornography', '52', '-0.02', '0.01', '0.01', '-0.02', '-0.01', '[BOLD] 0.41', '[BOLD] 0.41', '0.01', '0.01', '0.02', '-0.01', '0.03', '[BOLD] 0.47', '[ITALIC] 0.41'], ['Seanad Abolition', '25', '0.23', '0.09', '-0.01', '-0.01', '0.03', '[ITALIC] 0.32', '[BOLD] 0.54', '0.02', '0.01', '-0.01', '-0.03', '-0.04', '[ITALIC] 0.15', '[BOLD] 0.31'], ['Trades Unions', '19', '[ITALIC] 0.44', '[ITALIC] 0.44', '[BOLD] 0.60', '-0.05', '0.44', '[ITALIC] 0.44', '0.29', '0.1', '0.17', '0.21', '0.01', '0.26', '[BOLD] 0.48', '[ITALIC] 0.32'], ['Video Games', '72', '-0.01', '0.01', '0.12', '0.01', '0.08', '[ITALIC] 0.40', '[BOLD] 0.56', '0.01', '0.01', '0.06', '0.01', '0.05', '[ITALIC] 0.32', '[BOLD] 0.42'], ['Average', '54.67', '0.09', '0.07', '0.08', '0.01', '0.08', '[BOLD] 0.22', '[ITALIC] 0.20', '0.02', '0.03', '0.04', '-0.01', '0.05', '[BOLD] 0.20', '[ITALIC] 0.17']]
The semantic threshold for OD-d2v is set at 0.3, while for OD-w2v it is set at 0.6. [CONTINUE] We evaluate our distance measures in the unsupervised setting, specifically evaluating the clustering quality using the Adjusted Rand Index (ARI) and Silhouette coefficient. We benchmark against the following baselines: WMD (which relies on word2vec embeddings), Doc2vec and TF-IDF. The results are shown in Table 6. The ARI and Silhouette coefficient scores of both OD methods (OD-d2v and OD-w2v) are statistically significant (paired t-test) with respect to the baselines at significance level 0.005. [CONTINUE] Opinion distance methods generally outperform the competition on both ARI and Silhouette coefficient. [CONTINUE] This is reflected in the average ARI and average Silhouette coefficients of the baseline distance measures. [CONTINUE] In the exceptional case of the "Hydroelectric Dams" dataset, the opinion distance OD performs particularly badly compared to TF-IDF.
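The clustering evaluation described here maps directly onto standard scikit-learn metrics; the sketch below uses random placeholder vectors and labels rather than the paper's opinion-distance matrices, so it only illustrates the protocol.

```python
# Illustrative sketch of the clustering evaluation protocol (ARI + Silhouette).
# Feature vectors, labels, and cluster count are placeholders.
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import adjusted_rand_score, silhouette_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 50))           # stand-in opinion representations
y_true = rng.integers(0, 3, size=60)    # stand-in stance labels

pred = AgglomerativeClustering(n_clusters=3).fit_predict(X)
print("ARI:", adjusted_rand_score(y_true, pred))
print("Silhouette:", silhouette_score(X, pred))
```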
Towards Quantifying the Distance between Opinions
2001.09879v1
Table 3: ARI and Silhouette coefficient scores.
['Methods', 'Seanad Abolition ARI', 'Seanad Abolition [ITALIC] Sil', 'Video Games ARI', 'Video Games [ITALIC] Sil', 'Pornography ARI', 'Pornography [ITALIC] Sil']
[['TF-IDF', '0.23', '0.02', '-0.01', '0.01', '-0.02', '0.01'], ['WMD', '0.09', '0.01', '0.01', '0.01', '-0.02', '0.01'], ['Sent2vec', '-0.01', '-0.01', '0.11', '0.06', '0.01', '0.02'], ['Doc2vec', '-0.01', '-0.03', '-0.01', '0.01', '0.02', '-0.01'], ['BERT', '0.03', '-0.04', '0.08', '0.05', '-0.01', '0.03'], ['OD-parse', '0.01', '-0.04', '-0.01', '0.02', '0.07', '0.05'], ['OD', '[BOLD] 0.54', '[BOLD] 0.31', '[BOLD] 0.56', '[BOLD] 0.42', '[BOLD] 0.41', '[BOLD] 0.41']]
We see that OD significantly outperforms the baseline methods and the OD-parse variant. [CONTINUE] OD achieves high ARI and Sil scores. [CONTINUE] From the above table, we observe that the text-similarity based baselines, such as TF-IDF, WMD and Doc2vec, achieve ARI and Silhouette coefficient scores close to zero on the "Video Games" and "Pornography" datasets (barely providing a performance improvement over random clustering, i.e., a zero ARI score). [CONTINUE] A notable exception is the "Seanad Abolition" dataset, where TF-IDF performs relatively better than WMD, Sent2vec and Doc2vec.
Towards Quantifying the Distance between Opinions
2001.09879v1
Table 4: The quality of opinion distance when leveraged as a feature for multi-class classification. Each entry in + X feature should be treated independently. The second best result is italicized and underlined.
['Baselines', 'Seanad Abolition', 'Video Games', 'Pornography']
[['Unigrams', '0.54', '0.66', '0.63'], ['Bigrams', '0.54', '0.64', '0.56'], ['LSA', '0.68', '0.57', '0.57'], ['Sentiment', '0.35', '0.60', '0.69'], ['Bigrams', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['+ Sentiment', '0.43', '0.58', '0.66'], ['TF-IDF', '0.50', '0.65', '0.57'], ['WMD', '0.40', '0.73', '0.57'], ['Sent2vec', '0.39', '0.79', '0.70'], ['Doc2vec', '0.27', '0.51', '0.56'], ['BERT', '0.46', '0.84', '0.68'], ['Unigrams', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['+ Bigrams', '0.40', '0.64', '0.78'], ['+ Sentiment', '0.24', '0.54', '0.54'], ['+ LSA', '0.73', '0.51', '0.58'], ['+ TF-IDF', '0.42', '0.65', '0.56'], ['+ WMD', '0.48', '0.73', '0.53'], ['+ Sent2vec', '0.56', '0.59', '0.66'], ['+ Doc2vec', '0.31', '0.56', '0.47'], ['OD-parse', '0.50', '0.58', '0.53'], ['OD', '0.71', '[BOLD] 0.88', '0.88'], ['OD', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['+ Unigrams', '0.83', '[ITALIC] 0.86', '[ITALIC] 0.88'], ['+ Bigrams', '[BOLD] 0.87', '0.85', '[ITALIC] 0.88'], ['+ Sentiment', '0.64', '[ITALIC] 0.86', '0.86'], ['+ LSA', '[ITALIC] 0.84', '0.82', '[BOLD] 0.90'], ['+ WMD', '0.75', '0.82', '0.86']]
For completeness, here we also compare against unigram- and n-gram-based classifiers. [CONTINUE] The classification performance of the baselines is reported in Table 4. [CONTINUE] SVM with only OD features outperforms many baselines: We see that on the "Video Games" and "Pornography" datasets, the classification performance of SVM with only OD is significantly better than SVM with any other combination of features excluding OD. For the "Seanad Abolition" dataset, there is one exception: SVM with unigrams and LSA features performs slightly better than OD. [CONTINUE] SVM with OD and baseline features further improves classification performance: We see that SVM with OD and bigrams achieves the best multi-class classification performance on the "Seanad Abolition" dataset. On the "Pornography" dataset, we observe SVM with OD + LSA to improve classification performance by nearly 2%.
Towards Quantifying the Distance between Opinions
2001.09879v1
Table 5: We compare the quality of variants of Opinion Distance measures on opinion clustering task with ARI.
['[EMPTY]', 'Difference Function', 'Seanad Abolition', 'Video Games', 'Pornography']
[['OD-parse', 'Absolute', '0.01', '-0.01', '0.07'], ['OD-parse', 'JS div.', '0.01', '-0.01', '-0.01'], ['OD-parse', 'EMD', '0.07', '0.01', '-0.01'], ['OD', 'Absolute', '[BOLD] 0.54', '[BOLD] 0.56', '[BOLD] 0.41'], ['OD', 'JS div.', '0.07', '-0.01', '-0.02'], ['OD', 'EMD', '0.26', '-0.01', '0.01'], ['OD (no polarity shifters)', 'Absolute', '0.23', '0.08', '0.04'], ['OD (no polarity shifters)', 'JS div.', '0.09', '-0.01', '-0.02'], ['OD (no polarity shifters)', 'EMD', '0.10', '0.01', '-0.01']]
The results of the different variants are shown in Table 5. [CONTINUE] OD significantly outperforms OD-parse: We observe that compared to OD-parse, OD is much more accurate. On the three datasets, OD achieves ARI scores of 0.54, 0.56 and 0.41, respectively, compared to the scores of 0.01, -0.01 and 0.07 by OD-parse. [CONTINUE] Table 5 shows that OD is significantly better than Jensen-Shannon divergence on all three datasets. [CONTINUE] Sentiment polarity shifters have a high impact on the clustering performance of opinion distance: We find that not utilizing the sentiment polarity shifters, especially in the case of the "Video Games" and "Pornography" datasets, hurts the Opinion Representation phase and thereby leads to incorrect computation of opinion distance. This is evident from the significant drop in ARI score from OD to OD (no polarity shifters), since the only change between those variants is the sentiment polarity shifters.
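The three difference functions compared in Table 5 have standard NumPy/SciPy counterparts; the snippet below illustrates them on toy sentiment distributions (the exact form of the "Absolute" variant is an assumption, not taken from the paper).

```python
# Toy comparison of the Absolute, Jensen-Shannon, and EMD difference functions.
import numpy as np
from scipy.spatial.distance import jensenshannon
from scipy.stats import wasserstein_distance

p = np.array([0.7, 0.2, 0.1])   # stand-in sentiment mass on (pos, neu, neg)
q = np.array([0.2, 0.3, 0.5])

absolute = np.abs(p.mean() - q.mean())                    # assumed form of the Absolute variant
js = jensenshannon(p, q) ** 2                             # squared distance = JS divergence
emd = wasserstein_distance([0, 1, 2], [0, 1, 2], p, q)    # 1-D EMD over ordered bins
print(absolute, js, emd)
```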
Turing at SemEval-2017 Task 8: Sequential Approach to Rumour Stance Classification with Branch-LSTM
1704.07221v1
Table 3: Results on the development and testing sets. Accuracy and F1 scores: macro-averaged and per class (S: supporting, D: denying, Q: querying, C: commenting).
['[EMPTY]', '[BOLD] Accuracy', '[BOLD] Macro F', '[BOLD] S', '[BOLD] D', '[BOLD] Q', '[BOLD] C']
[['Development', '0.782', '0.561', '0.621', '0.000', '0.762', '0.860'], ['Testing', '[BOLD] 0.784', '0.434', '0.403', '0.000', '0.462', '0.873']]
The performance of our model on the testing and development sets is shown in Table 3. [CONTINUE] The difference in accuracy between the testing and development sets is minimal; however, we see a significant difference in Macro-F score due to the different class balance of these sets. [CONTINUE] The branch-LSTM model predicts commenting, the majority class, well; however, it is unable to pick out any denying, the most challenging under-represented class.
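The gap between accuracy and macro-F1 noted here is a direct consequence of class imbalance; the toy example below (synthetic labels, not the SemEval data) reproduces the effect for a classifier that never predicts the rare denying class.

```python
# Synthetic illustration of accuracy vs. macro-F1 under class imbalance.
from sklearn.metrics import accuracy_score, f1_score

y_true = ["C"] * 80 + ["D"] * 7 + ["Q"] * 8 + ["S"] * 5
y_pred = ["C"] * 87 + ["Q"] * 8 + ["S"] * 5   # never predicts the rare class "D"
print("accuracy:", accuracy_score(y_true, y_pred))
print("macro F1:", f1_score(y_true, y_pred, average="macro"))
```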
Turing at SemEval-2017 Task 8: Sequential Approach to Rumour Stance Classification with Branch-LSTM
1704.07221v1
Table 5: Confusion matrix for testing set predictions
['[BOLD] LabelPrediction', '[BOLD] C', '[BOLD] D', '[BOLD] Q', '[BOLD] S']
[['[BOLD] Commenting', '760', '0', '12', '6'], ['[BOLD] Denying', '68', '0', '1', '2'], ['[BOLD] Querying', '69', '0', '36', '1'], ['[BOLD] Supporting', '67', '0', '1', '26']]
Most denying instances get misclassified as commenting (see Table 5).
Language Independent Sequence Labelling for Opinion Target Extraction
1901.09755v1
Table 1: ABSA SemEval 2014-2016 datasets for the restaurant domain. B-target indicates the number of opinion targets in each set; I-target refers to the number of multiword targets.
['Language', 'ABSA', 'No. of Tokens and Opinion Targets Train', 'No. of Tokens and Opinion Targets Train', 'No. of Tokens and Opinion Targets Train', 'No. of Tokens and Opinion Targets Test', 'No. of Tokens and Opinion Targets Test', 'No. of Tokens and Opinion Targets Test']
[['[EMPTY]', '[EMPTY]', 'Token', 'B-target', 'I-target', 'Token', 'B-target', 'I-target'], ['en', '2014', '47028', '3687', '1457', '12606', '1134', '524'], ['en', '2015', '18488', '1199', '538', '10412', '542', '264'], ['en', '2016', '28900', '1743', '797', '9952', '612', '274'], ['es', '2016', '35847', '1858', '742', '13179', '713', '173'], ['fr', '2016', '26777', '1641', '443', '11646', '650', '239'], ['nl', '2016', '24788', '1231', '331', '7606', '373', '81'], ['ru', '2016', '51509', '3078', '953', '16999', '952', '372'], ['tr', '2016', '12406', '1374', '516', '1316', '145', '61']]
Table 1 shows the ABSA datasets from the restaurants domain for English, Spanish, French, Dutch, Russian and Turkish. From left to right each row displays the number of tokens, number of targets and the number of multiword targets for each training and test set.
Language Independent Sequence Labelling for Opinion Target Extraction
1901.09755v1
Table 3: ABSA SemEval 2014-2016 English results. BY: Brown Yelp 1000 classes; CYF100-CYR200: Clark Yelp Food 100 classes and Clark Yelp Reviews 200 classes; W2VW400: Word2vec Wikipedia 400 classes; ALL: BY+CYF100-CYR200+W2VW400.
['Features', '2014 P', '2014 R', '2014 F1', '2015 P', '2015 R', '2015 F1', '2016 P', '2016 R', '2016 F1']
[['Local (L)', '81.84', '74.69', '78.10', '[BOLD] 76.82', '54.43', '63.71', '74.41', '61.76', '67.50'], ['L + BY', '77.84', '84.57', '81.07', '71.73', '63.65', '67.45', '[BOLD] 74.49', '71.08', '72.74'], ['L + CYF100-CYR200', '[BOLD] 82.91', '84.30', '83.60', '73.25', '61.62', '66.93', '74.12', '72.06', '73.07'], ['L + W2VW400', '76.82', '82.10', '79.37', '74.42', '59.04', '65.84', '73.04', '65.52', '69.08'], ['L + [BOLD] ALL', '81.15', '[BOLD] 87.30', '[BOLD] 84.11', '72.90', '[BOLD] 69.00', '[BOLD] 70.90', '73.33', '[BOLD] 73.69', '[BOLD] 73.51']]
Table 3 provides detailed results on the Opinion Target Extraction (OTE) task for English. We show in bold our best model (ALL), chosen via 5-fold CV on the training data. Moreover, we also show the results of the best models using only one type of clustering feature, namely the best Brown, Clark and Word2vec models, respectively.
Language Independent Sequence Labelling for Opinion Target Extraction
1901.09755v1
Table 6: ABSA SemEval 2016: Comparison of multilingual results in terms of F1 scores.
['Language', 'System', 'F1']
[['es', 'GTI', '68.51'], ['es', 'L + [BOLD] CW600 + W2VW300', '[BOLD] 69.92'], ['es', 'Baseline', '51.91'], ['fr', 'IIT-T', '66.67'], ['fr', 'L + [BOLD] CW100', '[BOLD] 69.50'], ['fr', 'Baseline', '45.45'], ['nl', 'IIT-T', '56.99'], ['nl', 'L + [BOLD] W2VW400', '[BOLD] 66.39'], ['nl', 'Baseline', '50.64'], ['ru', 'Danii.', '33.47'], ['ru', 'L + [BOLD] CW500', '[BOLD] 65.53'], ['ru', 'Baseline', '49.31'], ['tr', 'L + [BOLD] BW', '[BOLD] 60.22'], ['tr', 'Baseline', '41.86']]
Table 6 shows that our system outperforms the best previous approaches across the five languages.
Language Independent Sequence Labelling for Opinion Target Extraction
1901.09755v1
Table 7: False Positives and Negatives for every ABSA 2014-2016 setting.
['Error type', '2014 en', '2015 en', '2016 en', '2016 es', '2016 fr', '2016 nl', '2016 ru', '2016 tr']
[['FP', '[BOLD] 230', '151', '[BOLD] 189', '165', '194', '117', '[BOLD] 390', '62'], ['FN', '143', '[BOLD] 169', '163', '[BOLD] 248', '[BOLD] 202', '[BOLD] 132', '312', '[BOLD] 65']]
Most errors in our system are caused by false negatives (FN), as can be seen in Table 7.
Evaluation of Greek Word Embeddings
1904.04032v3
Table 8: Summary for 3CosMul and top-1 nearest vectors.
['Category Semantic', 'Category no oov words', 'gr_def 60.60%', 'gr_neg10 62.50%', 'cc.el.300 [BOLD] 70.90%', 'wiki.el 37.50%', 'gr_cbow_def 29.80%', 'gr_d300_nosub 62.50%', 'gr_w2v_sg_n5 54.60%']
[['[EMPTY]', 'with oov words', '54.90%', '57.00%', '[BOLD] 65.50%', '35.00%', '27.10%', '56.60%', '49.50%'], ['Syntactic', 'no oov words', '67.90%', '62.90%', '[BOLD] 69.60%', '50.70%', '63.80%', '56.50%', '55.40%'], ['[EMPTY]', 'with oov words', '[BOLD] 55.70%', '51.30%', '50.20%', '33.40%', '52.30%', '46.40%', '45.50%'], ['Overall', 'no oov words', '65.19%', '62.66%', '[BOLD] 70.12%', '45.01%', '51.18%', '58.73%', '55.10%'], ['[EMPTY]', 'with oov words', '55.46%', '53.30%', '[BOLD] 55.54%', '33.94%', '43.53%', '49.96%', '46.87%']]
We noticed that the sub-category in which most models had the worst performance was the currency_country category. [CONTINUE] Sub-categories such as adjectives_antonyms and performer_action had the highest percentage of out-of-vocabulary terms, so we observed lower performance in these categories for all models.
Evaluation of Greek Word Embeddings
1904.04032v3
Table 1: The Greek word analogy test set.
['Relation', '#pairs', '#tuples']
[['Semantic: (13650 tuples)', 'Semantic: (13650 tuples)', 'Semantic: (13650 tuples)'], ['common_capital_country', '42', '1722'], ['all_capital_country', '78', '6006'], ['eu_city_country', '50', '2366'], ['city_in_region', '40', '1536'], ['currency_country', '24', '552'], ['man_woman_family', '18', '306'], ['profession_placeof_work', '16', '240'], ['performer_action', '24', '552'], ['politician_country', '20', '370'], ['Syntactic: (25524 tuples)', 'Syntactic: (25524 tuples)', 'Syntactic: (25524 tuples)'], ['man_woman_job', '26', '650'], ['adjective_adverb', '28', '756'], ['opposite', '35', '1190'], ['comparative', '36', '1260'], ['superlative', '25', '600'], ['present_participle_active', '48', '2256'], ['present_participle_passive', '44', '1892'], ['nationality_adjective_man', '56', '3080'], ['nationality_adjective_woman', '42', '1722'], ['past_tense', '34', '1122'], ['plural_nouns', '72', '5112'], ['plural_verbs', '37', '1332'], ['adjectives_antonyms', '50', '2450'], ['verbs_antonyms', '20', '380'], ['verbs_i_you', '42', '1722']]
Our Greek analogy test set contains 39,174 questions divided into semantic and syntactic analogy questions. [CONTINUE] Semantic questions are divided into 9 categories and include 13,650 questions in total. [CONTINUE] Syntactic questions are divided into 15 categories, which are mostly language specific. They include 25,524 questions [CONTINUE] We show the full Greek word analogy dataset in Table 1.
Evaluation of Greek Word Embeddings
1904.04032v3
Table 3: Summary for 3CosAdd and top-1 nearest vectors.
['Category Semantic', 'Category no oov words', 'gr_def 58.42%', 'gr_neg10 59.33%', 'cc.el.300 [BOLD] 68.80%', 'wiki.el 27.20%', 'gr_cbow_def 31.76%', 'gr_d300_nosub 60.79%', 'gr_w2v_sg_n5 52.70%']
[['[EMPTY]', 'with oov words', '52.97%', '55.33%', '[BOLD] 64.34%', '25.73%', '28.80%', '55.11%', '47.82%'], ['Syntactic', 'no oov words', '65.73%', '61.02%', '[BOLD] 69.35%', '40.90%', '64.02%', '53.69%', '52.60%'], ['[EMPTY]', 'with oov words', '[BOLD] 53.95%', '48.69%', '49.43%', '28.42%', '52.54%', '44.06%', '43.13%'], ['Overall', 'no oov words', '63.02%', '59.96%', '[BOLD] 68.97%', '36.45%', '52.04%', '56.30%', '52.66%'], ['[EMPTY]', 'with oov words', '53.60%', '51.00%', '[BOLD] 54.60%', '27.50%', '44.30%', '47.90%', '44.80%']]
We compared these models on the word analogy task. Due to space limitations, we show summarized results only for 3CosAdd in Table 3 and move the rest to the supplementary material. Considering the two aggregated categories of syntactic and semantic word analogies and both the 3CosAdd and 3CosMul metrics, model cc.el.300 outperformed all the other models, apart from the Syntactic category when we included the out-of-vocabulary (oov) terms, where the model gr_def had the best performance. Model cc.el.300 was the only one that was trained with CBOW and position-weights. Model wiki.el, trained only on Wikipedia, was the worst in almost every category (and sub-category). The five models that were trained on the large-scale web content (Outsios et al., 2018) had a lower percentage of oov terms in comparison with the other two. In some cases where oov terms were considered, they outperformed model cc.el.300 or had a better ranking in terms of accuracy rate in most categories or sub-categories. Of the two basic categories, syntactic and semantic, model gr_cbow_def was the only one that performed much worse in the semantic category than in the syntactic one. All the other models did not have large differences in performance between the semantic and syntactic categories.
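For reference, 3CosAdd and 3CosMul are the standard analogy answer-selection rules of Levy and Goldberg (2014); a minimal NumPy sketch over a toy embedding matrix (not the Greek vectors evaluated here) is shown below.

```python
# Minimal sketch of the 3CosAdd / 3CosMul analogy rules (a : a* :: b : ?),
# using a toy embedding matrix instead of the evaluated Greek vectors.
import numpy as np

def cos(M, v):
    M = M / np.linalg.norm(M, axis=1, keepdims=True)
    v = v / np.linalg.norm(v)
    return M @ v

def answer(M, a, a_star, b, exclude, mul=False, eps=1e-3):
    if mul:
        # 3CosMul: cos(b*, b) * cos(b*, a*) / (cos(b*, a) + eps), cosines shifted to [0, 1]
        s = ((cos(M, M[b]) + 1) / 2) * ((cos(M, M[a_star]) + 1) / 2) / ((cos(M, M[a]) + 1) / 2 + eps)
    else:
        # 3CosAdd: cos(b*, b - a + a*)
        s = cos(M, M[b] - M[a] + M[a_star])
    s[list(exclude)] = -np.inf    # never return the query words themselves
    return int(np.argmax(s))

M = np.random.default_rng(0).normal(size=(1000, 300))   # toy vocabulary of 1000 words
print(answer(M, a=1, a_star=2, b=3, exclude={1, 2, 3}))
print(answer(M, a=1, a_star=2, b=3, exclude={1, 2, 3}, mul=True))
```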
Evaluation of Greek Word Embeddings
1904.04032v3
Table 7: Summary for 3CosMul and top-5 nearest vectors.
['Category Semantic', 'Category no oov words', 'gr_def 83.72%', 'gr_neg10 84.38%', 'cc.el.300 [BOLD] 88.50%', 'wiki.el 65.85%', 'gr_cbow_def 52.05%', 'gr_d300_nosub 83.26%', 'gr_w2v_sg_n5 80.00%']
[['[EMPTY]', 'with oov words', '75.90%', '76.50%', '[BOLD] 81.70%', '61.40%', '47.20%', '75.50%', '72.50%'], ['Syntactic', 'no oov words', '83.86%', '80.42%', '[BOLD] 85.07%', '72.56%', '76.22%', '75.97%', '74.55%'], ['[EMPTY]', 'with oov words', '[BOLD] 68.80%', '66.00%', '61.40%', '47.80%', '62.60%', '62.30%', '61.20%'], ['Overall', 'no oov words', '83.80%', '81.90%', '[BOLD] 86.50%', '69.70%', '67.20%', '78.70%', '76.60%'], ['[EMPTY]', 'with oov words', '[BOLD] 71.29%', '69.66%', '68.48%', '52.53%', '57.20%', '66.93%', '65.14%']]
We noticed that the sub-category in which most models had the worst performance was the currency_country category. [CONTINUE] Sub-categories such as adjectives_antonyms and performer_action had the highest percentage of out-of-vocabulary terms, so we observed lower performance in these categories for all models.
Evaluation of Greek Word Embeddings
1904.04032v3
Table 4: Word similarity.
['Model', 'Pearson', 'p-value', 'Pairs (unknown)']
[['gr_def', '[BOLD] 0.6042', '3.1E-35', '2.3%'], ['gr_neg10', '0.5973', '2.9E-34', '2.3%'], ['cc.el.300', '0.5311', '1.7E-25', '4.9%'], ['wiki.el', '0.5812', '2.2E-31', '4.5%'], ['gr_cbow_def', '0.5232', '2.7E-25', '2.3%'], ['gr_d300_nosub', '0.5889', '3.8E-33', '2.3%'], ['gr_w2v_sg_n5', '0.5879', '4.4E-33', '2.3%']]
According to Pearson correlation, the gr_def model had the highest correlation with human ratings of similarity.
Using Linguistic Features to Improve the Generalization Capability of Neural Coreference Resolvers
1708.00160v2
Table 7: In-domain and out-of-domain evaluations for the pt and wb genres of the CoNLL test set. The highest scores are boldfaced.
['[EMPTY]', '[EMPTY]', 'in-domain CoNLL', 'in-domain LEA', 'out-of-domain CoNLL', 'out-of-domain LEA']
[['[EMPTY]', '[EMPTY]', 'pt (Bible)', 'pt (Bible)', 'pt (Bible)', 'pt (Bible)'], ['deep-coref', 'ranking', '75.61', '71.00', '66.06', '57.58'], ['deep-coref', '+EPM', '76.08', '71.13', '<bold>68.14</bold>', '<bold>60.74</bold>'], ['e2e-coref', 'single', '77.80', '73.73', '65.22', '58.26'], ['e2e-coref', 'ensemble', '<bold>78.88</bold>', '<bold>74.88</bold>', '65.45', '59.71'], ['[EMPTY]', '[EMPTY]', 'wb (weblog)', 'wb (weblog)', 'wb (weblog)', 'wb (weblog)'], ['deep-coref', 'ranking', '61.46', '53.75', '57.17', '48.74'], ['deep-coref', '+EPM', '61.97', '53.93', '<bold>61.52</bold>', '<bold>53.78</bold>'], ['e2e-coref', 'single', '62.02', '53.09', '60.69', '52.69'], ['e2e-coref', 'ensemble', '<bold>64.76</bold>', '<bold>57.54</bold>', '60.99', '52.99'], ['[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]']]
"+EPM" generalizes best, and in out-ofdomain evaluations, it considerably outperforms the ensemble model of e2e-coref,
Using Linguistic Features to Improve the Generalization Capability of Neural Coreference Resolvers
1708.00160v2
Table 5: Impact of different EPM feature groups on the CoNLL development set.
['[EMPTY]', 'MUC', '<italic>B</italic>3', 'CEAF<italic>e</italic>', 'CoNLL', 'LEA']
[['+EPM', '74.92', '65.03', '60.88', '66.95', '61.34'], ['-pairwise', '74.37', '64.55', '60.46', '66.46', '60.71'], ['-type', '74.71', '64.87', '61.00', '66.86', '61.07'], ['-dep', '74.57', '64.79', '60.65', '66.67', '61.01'], ['-NER', '74.61', '65.05', '60.93', '66.86', '61.27'], ['-POS', '74.74', '65.04', '60.88', '66.89', '61.30'], ['+pairwise', '74.25', '64.33', '60.02', '66.20', '60.57']]
The POS and named entity tags have the least significant effect, and the pairwise features the most. [CONTINUE] The results of "-pairwise" compared to "+pairwise" show that pairwise feature-values have a significant impact, but only when they are considered in combination with other EPM feature-values.
Using Linguistic Features to Improve the Generalization Capability of Neural Coreference Resolvers
1708.00160v2
Table 6: Out-of-domain evaluation on the WikiCoref dataset. The highest F1 scores are boldfaced.
['[EMPTY]', '[EMPTY]', 'MUC R', 'MUC P', 'MUC F1', '<italic>B</italic>3 R', '<italic>B</italic>3 P', '<italic>B</italic>3 F1', 'CEAF<italic>e</italic> R', 'CEAF<italic>e</italic> P', 'CEAF<italic>e</italic> F1', 'CoNLL', 'LEA R', 'LEA P', 'LEA F1']
[['deep-coref', 'ranking', '57.72', '69.57', '63.10', '41.42', '58.30', '48.43', '42.20', '53.50', '47.18', '52.90', '37.57', '54.27', '44.40'], ['deep-coref', 'reinforce', '62.12', '58.98', '60.51', '46.98', '45.79', '46.38', '44.28', '46.35', '45.29', '50.73', '42.28', '41.70', '41.98'], ['deep-coref', 'top-pairs', '56.31', '71.74', '63.09', '39.78', '61.85', '48.42', '40.80', '52.85', '46.05', '52.52', '35.87', '57.58', '44.21'], ['deep-coref', '+EPM', '58.23', '74.05', '<bold>65.20</bold>', '43.33', '63.90', '51.64', '43.44', '56.33', '<bold>49.05</bold>', '<bold>55.30</bold>', '39.70', '59.81', '<bold>47.72</bold>'], ['e2e', 'single', '60.14', '64.46', '62.22', '45.20', '51.75', '48.25', '38.18', '43.50', '40.67', '50.38', '40.70', '47.56', '43.86'], ['e2e', 'ensemble', '59.58', '71.60', '65.04', '44.64', '60.91', '51.52', '40.38', '49.17', '44.35', '53.63', '40.73', '56.97', '47.50'], ['[EMPTY]', 'G&L', '66.06', '62.93', '64.46', '57.73', '48.58', '<bold>52.76</bold>', '46.76', '49.54', '48.11', '55.11', '-', '-', '-']]
Incorporating EPM feature-values improves the performance by about three points. [CONTINUE] It achieves on-par performance with that of "G&L".
Using Linguistic Features to Improve the Generalization Capability of Neural Coreference Resolvers
1708.00160v2
Table 1: Impact of linguistic features on deep-coref models on the CoNLL development set.
['[EMPTY]', 'MUC', '<italic>B</italic>3', 'CEAF<italic>e</italic>', 'CoNLL', 'LEA']
[['ranking', '74.31', '64.23', '59.73', '66.09', '60.47'], ['+linguistic', '74.35', '63.96', '60.19', '66.17', '60.20'], ['top-pairs', '73.95', '63.98', '59.52', '65.82', '60.07'], ['+linguistic', '74.32', '64.45', '60.19', '66.32', '60.62']]
We observe that incorporating all the linguistic features bridges the gap between the performance of "top-pairs" and "ranking". [CONTINUE] However, it does not improve significantly over "ranking".
Using Linguistic Features to Improve the Generalization Capability of Neural Coreference Resolvers
1708.00160v2
Table 2: Out-of-domain evaluation of deep-coref models on the WikiCoref dataset.
['[EMPTY]', 'MUC', '<italic>B</italic>3', 'CEAF<italic>e</italic>', 'CoNLL', 'LEA']
[['ranking', '63.10', '48.43', '47.18', '52.90', '44.40'], ['top-pairs', '63.09', '48.42', '46.05', '52.52', '44.21'], ['+linguistic', '63.99', '49.63', '46.60', '53.40', '45.66']]
We observe that the impact on generalization is also not notable, i.e. the CoNLL score improves only by 0.5pp over "ranking".
Using Linguistic Features to Improve the Generalization Capability of Neural Coreference Resolvers
1708.00160v2
Table 4: Comparisons on the CoNLL test set. The F1 gains that are statistically significant: (1) “+EPM” compared to “top-pairs”, “ranking” and “JIM”, (2) “+EPM” compared to “reinforce” based on MUC, B3 and LEA, (3) “single” compared to “+EPM” based on MUC and B3, and (4) “ensemble” compared to other systems. Significance is measured based on the approximate randomization test (p<0.05) Noreen (1989).
['[EMPTY]', '[EMPTY]', 'MUC R', 'MUC P', 'MUC F1', '<italic>B</italic>3 R', '<italic>B</italic>3 P', '<italic>B</italic>3 F1', 'CEAF<italic>e</italic> R', 'CEAF<italic>e</italic> P', 'CEAF<italic>e</italic> F1', 'CoNLL', 'LEA R', 'LEA P', 'LEA F1']
[['deep-coref', 'ranking', '70.43', '79.57', '74.72', '58.08', '69.26', '63.18', '54.43', '64.17', '58.90', '65.60', '54.55', '65.68', '59.60'], ['deep-coref', 'reinforce', '69.84', '79.79', '74.48', '57.41', '70.96', '63.47', '55.63', '63.83', '59.45', '65.80', '53.78', '67.23', '59.76'], ['deep-coref', 'top-pairs', '69.41', '79.90', '74.29', '57.01', '70.80', '63.16', '54.43', '63.74', '58.72', '65.39', '53.31', '67.09', '59.41'], ['deep-coref', '+EPM', '71.16', '79.35', '75.03', '59.28', '69.70', '64.07', '56.52', '64.02', '60.04', '66.38', '55.63', '66.11', '60.42'], ['deep-coref', '+JIM', '69.89', '80.45', '74.80', '57.08', '71.58', '63.51', '55.36', '64.20', '59.45', '65.93', '53.46', '67.97', '59.85'], ['e2e', 'single', '74.02', '77.82', '75.88', '62.58', '67.45', '64.92', '59.16', '62.96', '61.00', '67.27', '58.90', '63.79', '61.25'], ['e2e', 'ensemble', '73.73', '80.95', '77.17', '61.83', '72.10', '66.57', '60.11', '65.62', '62.74', '68.83', '58.48', '68.81', '63.23']]
The performance of the "+EPM" model compared to recent state-of-the-art coreference models on the CoNLL test set is presented in Table 4. [CONTINUE] EPM feature-values result in significantly better performance than those of JIM while the number of EPM feature-values is considerably less than JIM.
From Text to Lexicon: Bridging the Gap betweenWord Embeddings and Lexical Resources
7
Table 4: Lexicon member coverage (%)
['target', 'VN', 'WN-V', 'WN-N']
[['type', '81', '66', '47'], ['x+POS', '54', '39', '43'], ['lemma', '88', '76', '53'], ['x+POS', '79', '63', '50'], ['shared', '54', '39', '41']]
We first analyze the coverage of the VSMs in question with respect to the lexica at hand; see Table 4. For brevity we only report coverage on w2 contexts. [CONTINUE] lemmatization allows more targets to exceed the SGNS frequency threshold, which results in consistently better coverage. POS-disambiguation, in turn, fragments the vocabulary and consistently reduces the coverage, with the effect being less pronounced for lemmatized targets. WN-N shows low coverage, containing many low-frequency members.
From Text to Lexicon: Bridging the Gap betweenWord Embeddings and Lexical Resources
7
Table 1: Benchmark performance, Spearman’s ρ. SGNS results with * taken from [morphfit]. Best results per column (benchmark) annotated for our setup only.
['Context: w2', 'Context: w2 SimLex', 'Context: w2 SimLex', 'Context: w2 SimLex', 'Context: w2 SimLex', 'Context: w2 SimVerb']
[['target', 'N', 'V', 'A', 'all', 'V'], ['type', '.334', '<bold>.336</bold>', '<bold>.518</bold>', '.348', '.307'], ['x + POS', '.342', '.323', '.513', '.350', '.279'], ['lemma', '<bold>.362</bold>', '.333', '.497', '<bold>.351</bold>', '.400'], ['x + POS', '.354', '<bold>.336</bold>', '.504', '.345', '<bold>.406</bold>'], ['* type', '-', '-', '-', '.339', '.277'], ['* type MFit-A', '-', '-', '-', '.385', '-'], ['* type MFit-AR', '-', '-', '-', '.439', '.381'], ['Context: dep-W', 'Context: dep-W', 'Context: dep-W', 'Context: dep-W', 'Context: dep-W', 'Context: dep-W'], ['type', '.366', '.365', '.489', '.362', '.314'], ['x + POS', '.364', '.351', '.482', '.359', '.287'], ['lemma', '<bold>.391</bold>', '.380', '<bold>.522</bold>', '<bold>.379</bold>', '.401'], ['x + POS', '.384', '<bold>.388</bold>', '.480', '.366', '<bold>.431</bold>'], ['* type', '-', '-', '-', '.376', '.313'], ['* type MFit-AR', '-', '-', '-', '.434', '.418']]
Table 1 summarizes the performance of the VSMs in question on similarity benchmarks. [CONTINUE] Lemmatized targets generally perform better, with the boost being more pronounced on SimVerb. [CONTINUE] Adding POS information benefits the SimVerb and SimLex verb performance, [CONTINUE] the type.POS targets show a considerable performance drop on SimVerb and SimLex verbs. [CONTINUE] Using dep-W contexts proves beneficial for both datasets [CONTINUE] We provide the Morph-Fitting scores (Vulić et al., 2017b) as a state-of-the-art reference; [CONTINUE] This approach uses word type-based VSMs specialized via Morph-Fitting (MFit), which can be seen as an alternative to lemmatization. Morph-Fitting consists of two stages: the Attract (A) stage brings word forms of the same word closer in the VSM, while the Repel (R) stage sets the derivational antonyms further apart. Lemma grouping is similar to the Attract stage. [CONTINUE] as the comparison of MFit-A and -AR shows, a major part of the Morph-Fitting performance gain on SimLex comes from the derivational Repel stage.
From Text to Lexicon: Bridging the Gap betweenWord Embeddings and Lexical Resources
7
Table 5: WCS performance, shared vocabulary, k=1. Best results across VSMs in bold.
['[EMPTY]', 'WN-N P', 'WN-N R', 'WN-N F', 'WN-V P', 'WN-V R', 'WN-V F', 'VN P', 'VN R', 'VN F']
[['Context: w2', 'Context: w2', 'Context: w2', 'Context: w2', 'Context: w2', 'Context: w2', 'Context: w2', 'Context: w2', 'Context: w2', 'Context: w2'], ['type', '.700', '.654', '.676', '.535', '.474', '.503', '.327', '.309', '.318'], ['x+POS', '.699', '.651', '.674', '.544', '.472', '.505', '.339', '.312', '.325'], ['lemma', '.706', '.660', '.682', '.576', '.520', '.547', '.384', '.360', '.371'], ['x+POS', '<bold>.710</bold>', '<bold>.662</bold>', '<bold>.685</bold>', '<bold>.589</bold>', '<bold>.529</bold>', '<bold>.557</bold>', '<bold>.410</bold>', '<bold>.389</bold>', '<bold>.399</bold>'], ['Context: dep', 'Context: dep', 'Context: dep', 'Context: dep', 'Context: dep', 'Context: dep', 'Context: dep', 'Context: dep', 'Context: dep', 'Context: dep'], ['type', '.712', '.661', '.686', '.545', '.457', '.497', '.324', '.296', '.310'], ['x+POS', '.715', '.659', '.686', '.560', '.464', '.508', '.349', '.320', '.334'], ['lemma', '<bold>.725</bold>', '<bold>.668</bold>', '<bold>.696</bold>', '.591', '.512', '.548', '.408', '.371', '.388'], ['x+POS', '.722', '.666', '.693', '<bold>.609</bold>', '<bold>.527</bold>', '<bold>.565</bold>', '<bold>.412</bold>', '<bold>.381</bold>', '<bold>.396</bold>']]
Table 5 provides exact scores for reference. [CONTINUE] Note that the shared vocabulary setup puts the type and type.POS VSMs at an advantage since it eliminates the effect of low coverage. Still, lemma-based targets significantly (p ≤ .005) outperform type-based targets in terms of F-measure in all cases. For window-based w2 contexts, POS disambiguation yields significantly better F scores on lemmatized targets for VN (p ≤ .005) with borderline significance for WN-N and WN-V (p ≈ .05). When dependency-based dep contexts are used, the effect of POS disambiguation is only statistically significant on type targets for VN (p ≤ .005) and on lemma-based targets for WN-V (p ≤ .005). [CONTINUE] Lemma-based targets without POS disambiguation perform best on WN-N when dependency-based contexts are used; however, the difference to lemmatized and disambiguated targets is not statistically significant (p > .1).
A bag-of-concepts model improves relation extraction in a narrow knowledge domain with limited data
1904.10743v1
Table 1: Performance of supervised learning models with different features.
['Feature', 'LR P', 'LR R', 'LR F1', 'SVM P', 'SVM R', 'SVM F1', 'ANN P', 'ANN R', 'ANN F1']
[['+BoW', '0.93', '0.91', '0.92', '0.94', '0.92', '0.93', '0.91', '0.91', '0.91'], ['+BoC (Wiki-PubMed-PMC)', '0.94', '0.92', '[BOLD] 0.93', '0.94', '0.92', '[BOLD] 0.93', '0.91', '0.91', '[BOLD] 0.91'], ['+BoC (GloVe)', '0.93', '0.92', '0.92', '0.94', '0.92', '0.93', '0.91', '0.91', '0.91'], ['+ASM', '0.90', '0.85', '0.88', '0.90', '0.86', '0.88', '0.89', '0.89', '0.89'], ['+Sentence Embeddings(SEs)', '0.89', '0.89', '0.89', '0.90', '0.86', '0.88', '0.88', '0.88', '0.88'], ['+BoC(Wiki-PubMed-PMC)+SEs', '0.92', '0.92', '0.92', '0.94', '0.92', '0.93', '0.91', '0.91', '0.91']]
Word embeddings derived from Wiki-PubMed-PMC outperform GloVe-based embeddings (Table 1). The models using BoC outperform models using BoW as well as ASM features. [CONTINUE] Wikipedia-PubMed-PMC embeddings (Moen and Ananiadou, 2013) outperform GloVe (Mikolov et al., 2013a) in the extraction of most relation types (Table 1). [CONTINUE] The combination of BoC and sentence embeddings outperforms sentence embeddings alone, but does not exceed the upper bound of the BoC feature, again demonstrating the competitiveness of the BoC feature. [CONTINUE] Lin-SVM outperforms other classifiers in extracting most relations. The feed-forward ANN displays significant over-fitting across all relation types, as the performance decreases when increasing the training epochs.
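The excerpt does not detail how the BoC features are built; one common bag-of-concepts recipe (assumed here, not necessarily the paper's exact pipeline) clusters pre-trained word embeddings into "concepts" and represents each text as a normalized histogram over concept IDs.

```python
# Hypothetical bag-of-concepts construction: k-means over word embeddings,
# then a concept histogram per text. Vocabulary and embeddings are toy stand-ins.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
vocab = ["chemo", "radiation", "scan", "ct", "followup", "relapse"]
emb = {w: rng.normal(size=100) for w in vocab}     # stand-in embeddings

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(np.array([emb[w] for w in vocab]))
concept_of = dict(zip(vocab, kmeans.labels_))

def boc_vector(tokens, n_concepts=3):
    vec = np.zeros(n_concepts)
    for t in tokens:
        if t in concept_of:
            vec[concept_of[t]] += 1
    return vec / max(vec.sum(), 1)

print(boc_vector("ct scan showed no relapse after chemo".split()))
```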
A bag-of-concepts model improves relation extraction in a narrow knowledge domain with limited data
1904.10743v1
Table 2: F1 score results per relation type of the best performing models.
['Relation type', 'Count', 'Intra-sentential co-occ. [ITALIC] ρ=0', 'Intra-sentential co-occ. [ITALIC] ρ=5', 'Intra-sentential co-occ. [ITALIC] ρ=10', 'BoC(Wiki-PubMed-PMC) LR', 'BoC(Wiki-PubMed-PMC) SVM', 'BoC(Wiki-PubMed-PMC) ANN']
[['TherapyTiming(TP,TD)', '428', '[BOLD] 0.84', '0.59', '0.47', '0.78', '0.81', '0.78'], ['NextReview(Followup,TP)', '164', '[BOLD] 0.90', '0.83', '0.63', '0.86', '0.88', '0.84'], ['Toxicity(TP,CF/TR)', '163', '[BOLD] 0.91', '0.77', '0.55', '0.85', '0.86', '0.86'], ['TestTiming(TN,TD/TP)', '184', '0.90', '0.81', '0.42', '0.96', '[BOLD] 0.97', '0.95'], ['TestFinding(TN,TR)', '136', '0.76', '0.60', '0.44', '[BOLD] 0.82', '0.79', '0.78'], ['Threat(O,CF/TR)', '32', '0.85', '0.69', '0.54', '[BOLD] 0.95', '[BOLD] 0.95', '0.92'], ['Intervention(TP,YR)', '5', '[BOLD] 0.88', '0.65', '0.47', '-', '-', '-'], ['EffectOf(Com,CF)', '3', '[BOLD] 0.92', '0.62', '0.23', '-', '-', '-'], ['Severity(CF,CS)', '75', '[BOLD] 0.61', '0.53', '0.47', '0.52', '0.55', '0.51'], ['RecurLink(YR,YR/CF)', '7', '[BOLD] 1.0', '[BOLD] 1.0', '0.64', '-', '-', '-'], ['RecurInfer(NR/YR,TR)', '51', '0.97', '0.69', '0.43', '[BOLD] 0.99', '[BOLD] 0.99', '0.98'], ['GetOpinion(Referral,CF/other)', '4', '[BOLD] 0.75', '[BOLD] 0.75', '0.5', '-', '-', '-'], ['Context(Dis,DisCont)', '40', '[BOLD] 0.70', '0.63', '0.53', '0.60', '0.41', '0.57'], ['TestToAssess(TN,CF/TR)', '36', '0.76', '0.66', '0.36', '[BOLD] 0.92', '[BOLD] 0.92', '0.91'], ['TimeStamp(TD,TP)', '221', '[BOLD] 0.88', '0.83', '0.50', '0.86', '0.85', '0.83'], ['TimeLink(TP,TP)', '20', '[BOLD] 0.92', '0.85', '0.45', '0.91', '[BOLD] 0.92', '0.90'], ['Overall', '1569', '0.90', '0.73', '0.45', '0.92', '[BOLD] 0.93', '0.91']]
The intra-sentential co-occurrence baseline outperforms the variants that allow boundary expansion. [CONTINUE] As the results of applying the co-occurrence baseline (ρ = 0) show (Table 2), the semantic relations in this data are strongly concentrated within a sentence boundary, especially for the relation RecurLink, with an F1 of 1.0. The machine learning approaches based on BoC lexical features effectively compensate for the baseline's deficiency in cross-sentence relation extraction.
Multi-News: a Large-Scale Multi-Document Summarization Dataset and Abstractive Hierarchical Model
1906.01749v3
Table 3: Comparison of our Multi-News dataset to other MDS datasets as well as an SDS dataset used as training data for MDS (CNNDM). Training, validation and testing size splits (article(s) to summary) are provided when applicable. Statistics for multi-document inputs are calculated on the concatenation of all input sources.
['[BOLD] Dataset', '[BOLD] # pairs', '[BOLD] # words (doc)', '[BOLD] # sents (docs)', '[BOLD] # words (summary)', '[BOLD] # sents (summary)', '[BOLD] vocab size']
[['Multi-News', '44,972/5,622/5,622', '2,103.49', '82.73', '263.66', '9.97', '666,515'], ['DUC03+04', '320', '4,636.24', '173.15', '109.58', '2.88', '19,734'], ['TAC 2011', '176', '4,695.70', '188.43', '99.70', '1.00', '24,672'], ['CNNDM', '287,227/13,368/11,490', '810.57', '39.78', '56.20', '3.68', '717,951']]
Table 3 compares Multi-News to other news datasets used in the experiments below. We choose to compare Multi-News with DUC data from 2003 and 2004 and TAC 2011 data, which are typically used in multi-document settings. Additionally, we compare to the single-document CNNDM dataset, as it has recently been used in work that adapts SDS to MDS (Lebanoff et al., 2018). The number of examples in our Multi-News dataset is two orders of magnitude larger than in previous MDS news data. The total number of words in the concatenated inputs is smaller than in the other MDS datasets, as those consist of 10 input documents, but larger than in the SDS dataset, as expected. Our summaries are notably longer than in other works, about 260 words on average.
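As a minimal illustration of how the per-dataset statistics above can be computed, the sketch below averages word and sentence counts over the concatenation of all input sources per example. Whitespace tokenisation and period-based sentence splitting are naive stand-ins for the actual preprocessing, and the function name is ours.

def dataset_stats(examples):
    # examples: list of (list_of_source_docs, summary) pairs.
    vocab, doc_words, doc_sents, sum_words, sum_sents = set(), [], [], [], []
    for sources, summary in examples:
        concat = " ".join(sources)  # statistics are taken over the concatenated inputs
        doc_words.append(len(concat.split()))
        doc_sents.append(concat.count(". ") + 1)
        sum_words.append(len(summary.split()))
        sum_sents.append(summary.count(". ") + 1)
        vocab.update(concat.lower().split())
        vocab.update(summary.lower().split())
    n = len(examples)
    return {
        "# words (doc)": sum(doc_words) / n,
        "# sents (docs)": sum(doc_sents) / n,
        "# words (summary)": sum(sum_words) / n,
        "# sents (summary)": sum(sum_sents) / n,
        "vocab size": len(vocab),
    }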
Multi-News: a Large-Scale Multi-Document Summarization Dataset and Abstractive Hierarchical Model
1906.01749v3
Table 4: Percentage of n-grams in summaries which do not appear in the input documents, a measure of abstractiveness, in the relevant datasets.
['[BOLD] % novel n-grams', '[BOLD] Multi-News', '[BOLD] DUC03+04', '[BOLD] TAC11', '[BOLD] CNNDM']
[['uni-grams', '17.76', '27.74', '16.65', '19.50'], ['bi-grams', '57.10', '72.87', '61.18', '56.88'], ['tri-grams', '75.71', '90.61', '83.34', '74.41'], ['4-grams', '82.30', '96.18', '92.04', '82.83']]
We report the percentage of n-grams in the gold summaries which do not appear in the input documents as a measure of how abstractive our summaries are (Table 4). As the table shows, the smaller MDS datasets tend to be more abstractive, but Multi-News is comparable in abstractiveness to the SDS dataset. Grusky et al. (2018) additionally define three measures of the extractive nature of a dataset, which we use here for comparison.
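The abstractiveness measure itself is straightforward to compute; a minimal sketch is given below. It counts the unique n-grams of a summary that never occur in the concatenated inputs; whether repeated n-grams are counted once or per occurrence is an implementation detail not specified here.

def ngrams(tokens, n):
    # Set of n-grams (as tuples) in a token sequence.
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def novel_ngram_pct(summary, sources, n):
    # Percentage of summary n-grams that do not appear in the input documents.
    sum_ngrams = ngrams(summary.lower().split(), n)
    src_ngrams = ngrams(sources.lower().split(), n)
    if not sum_ngrams:
        return 0.0
    return 100.0 * len(sum_ngrams - src_ngrams) / len(sum_ngrams)

# e.g. novel_ngram_pct(gold_summary, " ".join(input_docs), n=2) for the bi-gram row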
Multi-News: a Large-Scale Multi-Document Summarization Dataset and Abstractive Hierarchical Model
1906.01749v3
Table 6: ROUGE scores for models trained and tested on the Multi-News dataset.
['[BOLD] Method', '[BOLD] R-1', '[BOLD] R-2', '[BOLD] R-SU']
[['First-1', '26.83', '7.25', '6.46'], ['First-2', '35.99', '10.17', '12.06'], ['First-3', '39.41', '11.77', '14.51'], ['LexRank Erkan and Radev (2004)', '38.27', '12.70', '13.20'], ['TextRank Mihalcea and Tarau (2004)', '38.44', '13.10', '13.50'], ['MMR Carbonell and Goldstein (1998)', '38.77', '11.98', '12.91'], ['PG-Original Lebanoff et al. (2018)', '41.85', '12.91', '16.46'], ['PG-MMR Lebanoff et al. (2018)', '40.55', '12.36', '15.87'], ['PG-BRNN Gehrmann et al. (2018)', '42.80', '14.19', '16.75'], ['CopyTransformer Gehrmann et al. (2018)', '[BOLD] 43.57', '14.03', '17.37'], ['Hi-MAP (Our Model)', '43.47', '[BOLD] 14.89', '[BOLD] 17.41']]
Our model outperforms PG-MMR when trained and tested on the Multi-News dataset. The CopyTransformer performs best in terms of R-1, while Hi-MAP outperforms it on R-2 and R-SU. Also, we notice a drop in performance between PG-Original and PG-MMR (which takes the pre-trained PG-Original and applies MMR on top of the model).
RC-QED: Evaluating Natural Language Derivationsin Multi-Hop Reading Comprehension
1910.04601v1
Table 5: Performance breakdown of the PRKGC+NS model. Derivation Precision denotes ROUGE-L F1 of generated NLDs.
['# gold NLD steps', 'Answer Prec.', 'Derivation Prec.']
[['1', '79.2', '38.4'], ['2', '64.4', '48.6'], ['3', '62.3', '41.3']]
As shown in Table 5, as the number of required derivation steps increases, the PRKGC+NS model increasingly struggles to predict answer entities and to generate correct NLDs. [CONTINUE] This indicates that the challenge of RC-QEDE lies in how to extract relevant information from supporting documents and synthesize these multiple facts to derive an answer.
RC-QED: Evaluating Natural Language Derivationsin Multi-Hop Reading Comprehension
1910.04601v1
Table 2: Ratings of annotated NLDs by human judges.
['# steps', 'Reachability', 'Derivability Step 1', 'Derivability Step 2', 'Derivability Step 3']
[['1', '3.0', '3.8', '-', '-'], ['2', '2.8', '3.8', '3.7', '-'], ['3', '2.3', '3.9', '3.8', '3.8']]
The evaluation results shown in Table 2 indicate that the annotated NLDs are of high quality (Reachability) and that each NLD is properly derived from the supporting documents (Derivability). [CONTINUE] On the other hand, we found that the quality of 3-step NLDs is lower than that of the others. [CONTINUE] Crowdworkers found that 45.3% of 294 (out of 900) 3-step NLDs have missing steps to derive a statement.
RC-QED: Evaluating Natural Language Derivationsin Multi-Hop Reading Comprehension
1910.04601v1
Table 4: Performance of RC-QEDE of our baseline models (see Section 2.1 for further details of each evaluation metric). “NS” indicates the use of annotated NLDs as supervision (i.e. using Ld during training).
['Model', 'Answerability Macro P/R/F', '# Answerable', 'Answer Prec.', 'Derivation Prec. RG-L (P/R/F)', 'Derivation Prec. BL-4']
[['Shortest Path', '54.8/55.5/53.2', '976', '3.6', '56.7/38.5/41.5', '31.3'], ['PRKGC', '52.6/51.5/50.7', '1,021', '45.2', '40.7/60.7/44.7', '30.9'], ['PRKGC+NS', '53.6/54.1/52.1', '980', '45.4', '42.2/61.6/46.1', '33.4']]
As shown in Table 4, the PRKGC models learned to reason over more than simple shortest paths. [CONTINUE] Yet, the PRKGC model does not give considerably good results, which indicates the non-triviality of RC-QEDE. [CONTINUE] Although the PRKGC model does not receive supervision from human-generated NLDs, the paths with the maximum score match human-generated NLDs to some extent. [CONTINUE] Supervising path attentions (the PRKGC+NS model) is indeed effective for improving the human interpretability of the generated NLDs. [CONTINUE] It also improves the generalization ability of question answering.
RC-QED: Evaluating Natural Language Derivationsin Multi-Hop Reading Comprehension
1910.04601v1
Table 7: Accuracy of our baseline models and previous work on the WikiHop (Welbl et al., 2017) development set. Note that our baseline models are explainable, whereas the others are not. “NS” indicates the use of annotated NLDs as supervision. Accuracies of existing models are taken from the respective papers.
['Model', 'Accuracy']
[['PRKGC (our work)', '51.4'], ['PRKGC+NS (our work)', '[BOLD] 52.7'], ['BiDAF Welbl2017a', '42.1'], ['CorefGRU Dhingra2018NeuralCoreference', '56.0'], ['MHPGM+NOIC Bauer2018CommonsenseTasks', '58.2'], ['EntityGCN DeCao2018QuestionNetworks', '65.3'], ['CFC Zhong2019Coarse-GrainAnswering', '66.4']]
As shown in Table 7, the PRKGC models achieve a comparable performance to other sophisticated neural models.
Team EP at TAC 2018: Automating data extraction in systematic reviews of environmental agents
1901.02081v1
Table 3: The estimated impact of various design choices on the final result. The entries are sorted by the out-of-fold scores from CV. The SUBMISSION row uses the score from the ep_1 run for the single model and from ep_2 for the ensemble performance.
['ID', '5-fold CV', 'Δ', 'Single model', 'Δ', 'Ensemble', 'Δ']
[['LSTM-800', '70.56', '0.66', '67.54', '0.78', '67.65', '0.30'], ['LSTM-400', '70.50', '0.60', '[BOLD] 67.59', '0.83', '[BOLD] 68.00', '0.65'], ['IN-TITLE', '70.11', '0.21', '[EMPTY]', '[EMPTY]', '67.52', '0.17'], ['[BOLD] SUBMISSION', '69.90', '–', '66.76', '–', '67.35', '–'], ['NO-HIGHWAY', '69.72', '−0.18', '66.42', '−0.34', '66.64', '−0.71'], ['NO-OVERLAPS', '69.46', '−0.44', '65.07', '−1.69', '66.47', '−0.88'], ['LSTM-400-DROPOUT', '69.45', '−0.45', '65.53', '−1.23', '67.28', '−0.07'], ['NO-TRANSLATIONS', '69.42', '−0.48', '65.92', '−0.84', '67.23', '−0.12'], ['NO-ELMO-FINETUNING', '67.71', '−2.19', '65.16', '−1.60', '65.42', '−1.93']]
The results are presented in Table 3. [CONTINUE] Perhaps the most striking thing about the ablation results is that the 'traditional' LSTM layout outperformed the 'alternating' one we chose for our submission. [CONTINUE] Apart from the flipped results of the LSTM-800 and the LSTM-400, small differences in CV score are sometimes associated with large discrepancies in test set performance. This is mostly due to the small size of the data set (low [CONTINUE] Some ablated models that perform poorly in the single-model scenario (e.g. NO-OVERLAPS, LSTM-400-DROPOUT) are able to regain a lot of accuracy when ensembled. [CONTINUE] Also, our data augmentation technique (NO-TRANSLATIONS) seems to have a far smaller impact on the final score than we expected.
Team EP at TAC 2018: Automating data extraction in systematic reviews of environmental agents
1901.02081v1
Table 1: The scores of our three submitted runs for similarity threshold 50%.
['Run ID', 'Official score', 'Score with correction']
[['ep_1', '60.29', '66.76'], ['ep_2', '[BOLD] 60.90', '[BOLD] 67.35'], ['ep_3', '60.61', '67.07']]
The system's official score was 60.9% (micro-F1). [CONTINUE] Therefore, we report both the official score (from our second submission) and the result of re-scoring our second submission after replacing these 10 files with the ones from our first submission. The results are presented in Tables 1 and 2.
Team EP at TAC 2018: Automating data extraction in systematic reviews of environmental agents
1901.02081v1
Table 2: Detailed results of our best run (after correcting the submission format), along with numbers of mentions in the training set.
['Mention class', 'No. examples', 'F1 (5-CV)', 'F1 (Test)']
[['Total', '15265', '69.90', '67.35'], ['Endpoint', '4411', '66.89', '61.47'], ['TestArticle', '1922', '63.29', '64.19'], ['Species', '1624', '95.33', '95.95'], ['GroupName', '963', '67.08', '62.40'], ['EndpointUnitOfMeasure', '706', '42.27', '40.41'], ['TimeEndpointAssessed', '672', '57.27', '55.51'], ['Dose', '659', '78.47', '75.85'], ['Sex', '612', '96.27', '98.36'], ['TimeUnits', '608', '68.03', '61.26'], ['DoseRoute', '572', '69.24', '69.80'], ['DoseUnits', '493', '77.50', '72.33'], ['Vehicle', '440', '63.03', '67.15'], ['GroupSize', '387', '77.79', '75.74'], ['Strain', '375', '78.56', '76.00'], ['DoseDuration', '216', '59.78', '56.80'], ['DoseDurationUnits', '204', '57.83', '56.60'], ['TimeAtDose', '117', '34.29', '35.68'], ['DoseFrequency', '96', '41.56', '59.78'], ['TimeAtFirstDose', '47', '3.92', '0.00'], ['SampleSize', '45', '43.84', '50.00'], ['CellLine', '39', '50.00', '50.77'], ['TestArticlePurity', '28', '34.04', '60.00'], ['TimeAtLastDose', '23', '0.00', '0.00'], ['TestArticleVerification', '6', '0.00', '0.00']]
Therefore, we report both the official score (from our second submission) and the result of re-scoring our second submission after replacing these 10 files with the ones from our first submission. The results are presented in Tables 1 and 2.
Reward Learning for Efficient Reinforcement Learning in Extractive Document Summarisation
1907.12894v1
Table 3: Results of non-RL (top), cross-input (DeepTD) and input-specific (REAPER) RL approaches (middle) compared with RELIS.
['[EMPTY]', 'DUC’01 [ITALIC] R1', 'DUC’01 [ITALIC] R2', 'DUC’02 [ITALIC] R1', 'DUC’02 [ITALIC] R2', 'DUC’04 [ITALIC] R1', 'DUC’04 [ITALIC] R2']
[['ICSI', '33.31', '7.33', '35.04', '8.51', '37.31', '9.36'], ['PriorSum', '35.98', '7.89', '36.63', '8.97', '38.91', '10.07'], ['TCSum', '[BOLD] 36.45', '7.66', '36.90', '8.61', '38.27', '9.66'], ['TCSum−', '33.45', '6.07', '34.02', '7.39', '35.66', '8.66'], ['SRSum', '36.04', '8.44', '[BOLD] 38.93', '[BOLD] 10.29', '39.29', '10.70'], ['DeepTD', '28.74', '5.95', '31.63', '7.09', '33.57', '7.96'], ['REAPER', '32.43', '6.84', '35.03', '8.11', '37.22', '8.64'], ['RELIS', '34.73', '[BOLD] 8.66', '37.11', '9.12', '[BOLD] 39.34', '[BOLD] 10.73']]
In Table 3, we compare RELIS with non-RL-based and RL-based summarisation systems. For non-RL-based systems, we report ICSI [Gillick and Favre, 2009] maximising [CONTINUE] the bigram overlap of summary and input using integer linear programming, PriorSum [Cao et al., 2015] learning sentence quality with CNNs, TCSum [Cao et al., 2017] employing text classification of the input documents, the variant of TCSum without the text classification pre-training (TCSum−) and SRSum [Ren et al., 2018], which learns sentence relations with both word- and sentence-level attentive neural networks to estimate salience. [CONTINUE] RELIS significantly outperforms the other RL-based systems. Note that RELIS and REAPER use the identical RL algorithm for input-specific policy learning; hence the improvement of RELIS is due to the higher quality of the L2R-learnt reward ^σUx. RELIS outperforms DeepTD because training cross-input policies requires much more data than is available in the DUC datasets. At the same time, RELIS performs on par with the neural-based TCSum and SRSum, while it requires significantly less data and time to train, as shown next.
Reward Learning for Efficient Reinforcement Learning in Extractive Document Summarisation
1907.12894v1
Table 2: The correlation of the approximated and the ground-truth ranking. ^σUx has a significantly higher correlation than all other approaches.
['[EMPTY]', 'DUC’01 [ITALIC] ρ', 'DUC’01 ndcg', 'DUC’02 [ITALIC] ρ', 'DUC’02 ndcg', 'DUC’04 [ITALIC] ρ', 'DUC’04 ndcg']
[['ASRL', '.176', '.555', '.131', '.537', '.145', '.558'], ['REAPER', '.316', '.638', '.301', '.639', '.372', '.701'], ['JS', '.549', '.736', '.525', '.700', '.570', '.763'], ['Our [ITALIC] ^σUx', '[BOLD] .601', '[BOLD] .764', '[BOLD] .560', '[BOLD] .727', '[BOLD] .617', '[BOLD] .802']]
Table 2 compares the quality of our ^σUx with other widely used rewards for input-specific RL (see Section 4). ^σUx has significantly higher correlation with the ground-truth ranking than all other approaches, confirming that our proposed L2R method yields a superior reward oracle.
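Both ranking-quality measures in Table 2 can be computed from two score vectors over the same candidate summaries; the sketch below is one way to do so, using SciPy for Spearman's ρ and a plain NDCG implementation. The toy scores and the exact gain definition are assumptions; the paper's evaluation setup may differ in detail.

import numpy as np
from scipy.stats import spearmanr

def ndcg(true_scores, approx_scores):
    # Rank candidates by the approximated scores, accumulate the true scores
    # with logarithmic discounts, and normalise by the ideal ordering.
    order = np.argsort(approx_scores)[::-1]
    gains = np.asarray(true_scores, dtype=float)[order]
    ideal = np.sort(np.asarray(true_scores, dtype=float))[::-1]
    discounts = 1.0 / np.log2(np.arange(2, len(gains) + 2))
    return float((gains * discounts).sum() / (ideal * discounts).sum())

true_scores = [0.9, 0.4, 0.7, 0.1]    # e.g. ROUGE of candidate summaries
approx_scores = [0.8, 0.5, 0.6, 0.2]  # e.g. scores assigned by the learnt reward
rho, _ = spearmanr(true_scores, approx_scores)
print(round(rho, 3), round(ndcg(true_scores, approx_scores), 3))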
UKP TU-DA at GermEval 2017:Deep Learning for Aspect Based Sentiment Detection
3
Table 6: Task B results with polarity features
['[EMPTY]', 'Micro F1']
[['Baseline', '0.709'], ['W2V ([ITALIC] d=50)', '0.748'], ['W2V ([ITALIC] d=500)', '0.756'], ['S2V', '0.748'], ['S2V + W2V ([ITALIC] d=50)', '0.755'], ['S2V + K + W2V([ITALIC] d=50)', '0.751'], ['SIF (DE)', '0.748'], ['SIF (DE-EN)', '[BOLD] 0.757']]
We furthermore trained models with additional polarity features for Task B, as mentioned before. Adding the polarity features improved the results for all models except those using SIF embeddings (Table 6).
UKP TU-DA at GermEval 2017:Deep Learning for Aspect Based Sentiment Detection
3
Table 4: Task A results
['[EMPTY]', 'Micro F1']
[['Baseline', '0.882'], ['W2V ([ITALIC] d=50)', '0.883'], ['W2V ([ITALIC] d=500)', '[BOLD] 0.897'], ['S2V', '0.885'], ['S2V + W2V ([ITALIC] d=50)', '0.891'], ['S2V + K + W2V([ITALIC] d=50)', '0.890'], ['SIF (DE)', '0.895'], ['SIF (DE-EN)', '0.892']]
The results of the models that perform better than the baseline are reported in Table 4. As can be seen, all models only slightly outperform the baseline in Task A.
UKP TU-DA at GermEval 2017:Deep Learning for Aspect Based Sentiment Detection
3
Table 5: Task B results
['[EMPTY]', 'Micro F1']
[['Baseline', '0.709'], ['W2V ([ITALIC] d=50)', '0.736'], ['W2V ([ITALIC] d=500)', '0.753'], ['S2V', '0.748'], ['S2V + W2V ([ITALIC] d=50)', '0.744'], ['S2V + K + W2V([ITALIC] d=50)', '0.749'], ['SIF (DE)', '0.759'], ['SIF (DE-EN)', '[BOLD] 0.765']]
For Task B, all models trained on the stacked learner beat the baseline substantially even when using only plain averaged word embeddings.
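Since the strongest Task B results come from SIF embeddings, the sketch below shows the usual SIF construction in the spirit of Arora et al. (2017): word vectors are averaged with weights a/(a+p(w)) and the projection onto the first principal component is removed. The word probabilities, vectors, and the parameter a are placeholders rather than the system's actual configuration.

import numpy as np

def sif_embeddings(sentences, word_vectors, word_probs, a=1e-3):
    # Weighted average of word vectors, down-weighting frequent words.
    dim = len(next(iter(word_vectors.values())))
    emb = np.zeros((len(sentences), dim))
    for i, sent in enumerate(sentences):
        tokens = [t for t in sent.lower().split() if t in word_vectors]
        if tokens:
            weights = [a / (a + word_probs.get(t, 1e-5)) for t in tokens]
            emb[i] = np.average([word_vectors[t] for t in tokens], axis=0, weights=weights)
    # Remove the common component: project out the first right singular vector.
    u = np.linalg.svd(emb, full_matrices=False)[2][0]
    return emb - emb @ np.outer(u, u)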
IIIDYT at IEST 2018: Implicit Emotion Classification With Deep Contextualized Word Representations
1808.08672v2
Table 2: Ablation study results.
['[BOLD] Variation', '[BOLD] Accuracy (%)', '[BOLD] Δ%']
[['Submitted', '[BOLD] 69.23', '-'], ['No emoji', '68.36', '- 0.87'], ['No ELMo', '65.52', '- 3.71'], ['Concat Pooling', '68.47', '- 0.76'], ['LSTM hidden=4096', '69.10', '- 0.13'], ['LSTM hidden=1024', '68.93', '- 0.30'], ['LSTM hidden=512', '68.43', '- 0.80'], ['POS emb dim=100', '68.99', '- 0.24'], ['POS emb dim=75', '68.61', '- 0.62'], ['POS emb dim=50', '69.33', '+ 0.10'], ['POS emb dim=25', '69.21', '- 0.02'], ['SGD optim lr=1', '64.33', '- 4.90'], ['SGD optim lr=0.1', '66.11', '- 3.12'], ['SGD optim lr=0.01', '60.72', '- 8.51'], ['SGD optim lr=0.001', '30.49', '- 38.74']]
We performed an ablation study on a single model that obtained 69.23% accuracy on the validation set. Results are summarized in Table 2. [CONTINUE] We can observe that the architectural choice with the greatest impact on our model was the ELMo layer, providing a 3.71% boost in performance compared to using GloVe pre-trained word embeddings. [CONTINUE] We can further see that emoji also contributed significantly to the model's performance. [CONTINUE] Additionally, we tried using the concatenation of the max-pooled, average-pooled and last hidden states of the BiLSTM as the sentence representation, following Howard and Ruder (2018), but found that this impacted performance negatively. [CONTINUE] Using a greater BiLSTM hidden size did not help the model. [CONTINUE] We found that using 50-dimensional part-of-speech embeddings slightly improved results. [CONTINUE] Regarding optimization strategies, we also tried SGD with different learning rates and a stepwise learning rate schedule as described by Conneau et al. (2018), but we found that doing this did not improve performance.
IIIDYT at IEST 2018: Implicit Emotion Classification With Deep Contextualized Word Representations
1808.08672v2
Table 3: Classification Report (Test Set).
['[EMPTY]', '[BOLD] Precision', '[BOLD] Recall', '[BOLD] F1-score']
[['anger', '0.643', '0.601', '0.621'], ['disgust', '0.703', '0.661', '0.682'], ['fear', '0.742', '0.721', '0.732'], ['joy', '0.762', '0.805', '0.783'], ['sad', '0.685', '0.661', '0.673'], ['surprise', '0.627', '0.705', '0.663'], ['Average', '0.695', '0.695', '0.694']]
Table 3 shows the [CONTINUE] corresponding classification report. [CONTINUE] In general, we confirm what Klinger et al. (2018) report: anger was the most difficult class to predict, followed by surprise, whereas joy, fear, and disgust are the better performing ones.
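Per-class precision, recall, and F1 of this kind can be reproduced directly from the model's predictions; a minimal sketch with scikit-learn and toy labels is shown below. How the "Average" row of the table is aggregated (macro vs. weighted) is not spelled out here.

from sklearn.metrics import classification_report

y_true = ["joy", "anger", "fear", "joy", "sad", "surprise"]  # toy gold labels
y_pred = ["joy", "anger", "joy", "joy", "sad", "fear"]       # toy predictions
print(classification_report(y_true, y_pred, digits=3, zero_division=0))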
IIIDYT at IEST 2018: Implicit Emotion Classification With Deep Contextualized Word Representations
1808.08672v2
Table 4: Number of tweets in the test set with and without emoji and hashtags. The number in parentheses is the proportion of tweets classified correctly.
['[EMPTY]', '[BOLD] Present', '[BOLD] Not Present']
[['Emoji', '4805 (76.6%)', '23952 (68.0%)'], ['Hashtags', '2122 (70.5%)', '26635 (69.4%)']]
Table 4 shows the overall effect of hashtags and emoji on classification performance. [CONTINUE] Tweets containing emoji seem to be easier for the model to classify than those without. [CONTINUE] Hashtags also have a [CONTINUE] positive effect on classification performance; however, it is less pronounced.
IIIDYT at IEST 2018: Implicit Emotion Classification With Deep Contextualized Word Representations
1808.08672v2
Table 5: Fine grained performance on tweets containing emoji, and the effect of removing them.
['[BOLD] Emoji alias', '[BOLD] N', '[BOLD] emoji #', '[BOLD] emoji %', '[BOLD] no-emoji #', '[BOLD] no-emoji %', '[BOLD] Δ%']
[['mask', '163', '154', '94.48', '134', '82.21', '- 12.27'], ['two_hearts', '87', '81', '93.10', '77', '88.51', '- 4.59'], ['heart_eyes', '122', '109', '89.34', '103', '84.43', '- 4.91'], ['heart', '267', '237', '88.76', '235', '88.01', '- 0.75'], ['rage', '92', '78', '84.78', '66', '71.74', '- 13.04'], ['cry', '116', '97', '83.62', '83', '71.55', '- 12.07'], ['sob', '490', '363', '74.08', '345', '70.41', '- 3.67'], ['unamused', '167', '121', '72.46', '116', '69.46', '- 3.00'], ['weary', '204', '140', '68.63', '139', '68.14', '- 0.49'], ['joy', '978', '649', '66.36', '629', '64.31', '- 2.05'], ['sweat_smile', '111', '73', '65.77', '75', '67.57', '1.80'], ['confused', '77', '46', '59.74', '48', '62.34', '2.60']]
Table 5 shows the effect specific emoji have on classification performance. [CONTINUE] It is clear that some emoji strongly contribute to improving prediction quality. [CONTINUE] The most interesting ones are mask, rage, and cry, which significantly increase accuracy. [CONTINUE] Further, contrary to intuition, the sob emoji contributes less than cry, despite representing a stronger emotion. [CONTINUE] Finally, not all emoji are beneficial for this task. [CONTINUE] When sweat_smile and confused were removed, accuracy increased.
Solving Hard Coreference Problems
1907.05524v1
Table 7: Performance results on the Winograd and WinoCoref datasets. All three of our systems are trained on WinoCoref, and we evaluate the predictions on both datasets. Our systems improve over the baselines by more than 20% on Winograd and more than 15% on WinoCoref.
['Dataset', 'Metric', 'Illinois', 'IlliCons', 'rahman2012resolving', 'KnowFeat', 'KnowCons', 'KnowComb']
[['[ITALIC] Winograd', 'Precision', '51.48', '53.26', '73.05', '71.81', '74.93', '[BOLD] 76.41'], ['[ITALIC] WinoCoref', 'AntePre', '68.37', '74.32', '—–', '88.48', '88.95', '[BOLD] 89.32']]
Performance results on the Winograd and WinoCoref datasets are shown in Table 7. The best performing system is KnowComb. It improves by over 20% over a state-of-the-art general coreference system on Winograd and also outperforms Rahman and Ng (2012) by a margin of 3.3%. On the WinoCoref dataset, it improves by 15%. These results show a significant performance improvement from using Predicate Schemas knowledge on hard coreference problems. Note that the system developed in Rahman and Ng (2012) cannot be used on the WinoCoref dataset. The results also show that, when the knowledge quality is high, it is better to compile knowledge into constraints than to add it as features.
Solving Hard Coreference Problems
1907.05524v1
Table 8: Performance results on the ACE and OntoNotes datasets. Our system achieves the same level of performance as a state-of-the-art general coreference system.
['System', 'MUC', 'BCUB', 'CEAFe', 'AVG']
[['ACE', 'ACE', 'ACE', 'ACE', 'ACE'], ['IlliCons', '[BOLD] 78.17', '81.64', '[BOLD] 78.45', '[BOLD] 79.42'], ['KnowComb', '77.51', '[BOLD] 81.97', '77.44', '78.97'], ['OntoNotes', 'OntoNotes', 'OntoNotes', 'OntoNotes', 'OntoNotes'], ['IlliCons', '84.10', '[BOLD] 78.30', '[BOLD] 68.74', '[BOLD] 77.05'], ['KnowComb', '[BOLD] 84.33', '78.02', '67.95', '76.76']]
Performance results on standard ACE and OntoNotes datasets are shown in Table 8. Our KnowComb system achieves the same level of performance as does the state-of-art general coreference system we base it on. As hard coreference problems are rare in standard coreference datasets, we do not have significant performance improvement. However, these results show that our additional Predicate Schemas do not harm the predictions for regular mentions.
Solving Hard Coreference Problems
1907.05524v1
Table 9: Distribution of instances in Winograd dataset of each category. Cat1/Cat2 is the subset of instances that require Type 1/Type 2 schema knowledge, respectively. All other instances are put into Cat3. Cat1 and Cat2 instances can be covered by our proposed Predicate Schemas.
['Category', 'Cat1', 'Cat2', 'Cat3']
[['Size', '317', '1060', '509'], ['Portion', '16.8%', '56.2%', '27.0%']]
Detailed Analysis To study the coverage of our Predicate Schemas knowledge, we label the instances in Winograd (which also applies to WinoCoref) with the type of Predicate Schemas knowledge required. The distribution of the instances is shown in Table 9. Our proposed Predicate Schemas cover 73% of the instances.
Solving Hard Coreference Problems
1907.05524v1
Table 10: Ablation Study of Knowledge Schemas on WinoCoref. The first line reports the performance of KnowComb with only Type 1 schema knowledge tested on all data, while the third line reports the performance of the same model tested on Cat1 data. The second line reports the performance of the KnowComb system with only Type 2 schema knowledge on all data, while the fourth line reports the performance of the same model tested on Cat2 data.
['Schema', 'AntePre(Test)', 'AntePre(Train)']
[['Type 1', '76.67', '86.79'], ['Type 2', '79.55', '88.86'], ['Type 1 (Cat1)', '90.26', '93.64'], ['Type 2 (Cat2)', '83.38', '92.49']]
We also provide an ablation study on the WinoCoref dataset in Table 10. These results use the best performing KnowComb system. They show that both Type 1 and Type 2 schema knowledge have higher precision on Category 1 and Category 2 data instances, respectively, compared to the full data. Type 1 and Type 2 knowledge have similar performance on the full data, but the results show that it is harder to solve instances in Category 2 than those in Category 1. Also, the performance drop between Cat1/Cat2 and the full data indicates that there is a need to design more complicated knowledge schemas and to refine the knowledge acquisition for further performance improvement.
Dirichlet uncertainty wrappers for actionable algorithm accuracy accountability and auditability
1912.12628v1
Table 1: Accuracy obtained by training a standalone classifier, applying the API, and applying the proposed wrapper for each domain.
['[EMPTY]', '[BOLD] BB source acc.', '[BOLD] BB target acc.', '[BOLD] Non-reject. acc. (10/20/30%)', '[BOLD] Class. quality (10/20/30%)', '[BOLD] Reject. quality (10/20/30%)']
[['[BOLD] Apply Yelp BB to SST-2', '89.18±0.08%', '77.13±0.52%', '82.43±0.22% 88.19±0.50% 93.60±0.16%', '80.40±0.39% 83.11±0.80% 83.05±0.23%', '6.03±0.45 6.04±0.51 4.97±0.07'], ['[BOLD] Apply SST-2 BB to Yelp', '83.306±0.18%', '82.106±0.88%', '87,98±0.18% 92.13±0.38% 94.19±0.33%', '85.49±0.88% 84.53±0.38% 78.99±0.46%', '8.30±1.63 5.72±0.27 3.73±0.10'], ['[BOLD] Apply Electronics BB to Music', '86.39±0.22%', '90.38±0.13%', '95.04±0.43% 96.45±0.35% 97.26±0.31%', '90.67±0.88% 83.93±0.67% 75.77±0.54%', '10.7±1.65 4.82±0.35 3.25±0.14'], ['[BOLD] Apply Music BB to Electronics', '93.10±0.02%', '79.85±0.0%', '83.26±0.41% 87.06±0.55% 90.50±0.29%', '79.97±0.74% 79.93±0.87% 76.81±0.41%', '4.1±0.55 3.80±0.35 3.32±0.09']]
Table 1 shows the numerical results obtained during the experiments for the four combinations tested. [CONTINUE] In general terms, the results displayed in Table 1 show that the rejection method can reduce the error of the output predictions when applying a pre-trained black-box classification system to a new domain.
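The non-rejected accuracy columns can be reproduced from per-instance uncertainty estimates by discarding the most uncertain fraction of the test set and scoring the rest; a minimal sketch is below. The uncertainty values would come from the Dirichlet wrapper, which is not re-implemented here, and the table's classification- and rejection-quality metrics are not reproduced.

import numpy as np

def non_rejected_accuracy(y_true, y_pred, uncertainty, reject_rate):
    # Reject the reject_rate fraction of points with the highest uncertainty
    # and compute accuracy on the remaining, most confident predictions.
    n_reject = int(round(reject_rate * len(y_true)))
    keep = np.argsort(uncertainty)[: len(y_true) - n_reject]
    return float(np.mean(np.asarray(y_true)[keep] == np.asarray(y_pred)[keep]))

# e.g. non_rejected_accuracy(y_true, y_pred, unc, 0.10) for the 10% rejection column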