paper | paper_id | table_caption | table_column_names | table_content_values | text |
---|---|---|---|---|---|
Leveraging Deep Graph-Based Text Representation for Sentiment Polarity Applications | 1902.10247 | Table 3: Hyperparameters of the CNN algorithms | ['Parameter', 'Value'] | [['Sequence length', '2633'], ['Embedding dimensions', '20'], ['Filter size', '(3, 4)'], ['Number of filters', '150'], ['Dropout probability', '0.25'], ['Hidden dimensions', '150']] | A convolutional neural network is employed in the experiments as a baseline for the proposed approach. This network typically includes two operations that can be thought of as feature extractors: convolution and pooling. During training, a CNN applies a sequence of these operations to the data, and the output of this sequence is typically fed into a fully connected layer, which is in principle the same as a traditional multi-layer perceptron. |
Leveraging Deep Graph-Based Text Representation for Sentiment Polarity Applications | 1902.10247 | Table 4: Experimental results on given datasets | ['Method', 'Negative class (%) precision', 'Negative class (%) recall', 'Negative class (%) F1', 'Positive class (%) precision', 'Positive class (%) recall', 'Positive class (%) F1', 'Overall (%) accuracy', 'Overall (%) F1'] | [['[BOLD] HCR', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['Proposed method(CNN+Graph)', '89.11', '88.60', '81.31', '85.17', '84.32', '84.20', '85.71', '82.12'], ['SVM(linear)', '80.21', '91.40', '85.01', '67.12', '45.23', '54.24', '76.01', '76.74'], ['SVM(RBF)', '77.87', '99.46', '87.35', '95.65', '29.73', '45.36', '79.45', '45.36'], ['NB(tf-idf)', '74.04', '88.00', '80.42', '58.00', '34.94', '43.61', '70.93', '43.60'], ['Kim(CNN+w2v)', '75.39', '78.69', '77.71', '40.91', '36.49', '38.52', '66.53', '65.94'], ['RNTN(socher2013recursive)', '88.64', '85.71', '87.15', '68.29', '73.68', '70.89', '82.17', '70.88'], ['[BOLD] Stanford', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['Proposed method(CNN+Graph)', '86.38', '90.37', '91.29', '77.46', '56.45', '65.52', '83.71', '78.72'], ['SVM(linear)', '79.21', '100.0', '88.40', '00.00', '00.00', '00.00', '79.20', '70.04'], ['SVM(RBF)', '63.64', '85.37', '72.92', '64.71', '35.48', '45.83', '63.88', '45.83'], ['NB(tf-idf)', '61.29', '54.29', '57.58', '60.98', '67.57', '64.10', '61.11', '64.10'], ['Kim(CNN+w2v)', '79.96', '99.59', '88.70', '22.22', '0.56', '0.95', '79.72', '71.10'], ['RNTN(socher2013recursive)', '64.29', '61.36', '62.79', '71.33', '73.82', '72.55', '68.04', '72.54'], ['[BOLD] Michigan', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['Proposed method(CNN+Graph)', '98.89', '98.75', '98.41', '98.82', '98.14', '98.26', '98.41', '98.73'], ['SVM(linear)', '99.51', '91.51', '97.50', '98.56', '98.14', '99.62', '98.73', '98.72'], ['SVM(RBF)', '76.02', '73.67', '74.83', '66.40', '69.13', '67.74', '71.72', '67.73'], ['NB(tf-idf)', '76.92', '74.07', '75.47', '84.78', '86.67', '85.71', '81.94', '85.71'], ['Kim(CNN+w2v)', '95.64', '93.43', '94.58', '95.12', '96.73', '95.46', '95.31', '95.34'], ['RNTN(socher2013recursive)', '93.19', '95.61', '94.38', '96.57', '94.65', '95.60', '95.06', '95.59'], ['[BOLD] SemEval', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['Proposed method(CNN+Graph)', '90.80', '80.35', '84.81', '87.32', '92.24', '90.76', '87.69', '87.78'], ['SVM(linear)', '77.91', '61.97', '69.06', '85.74', '92.89', '89.17', '83.95', '83.36'], ['SVM(RBF)', '24.21', '30.67', '27.06', '72.63', '65.71', '69.00', '56.49', '69.00'], ['NB(tf-idf)', '28.57', '23.53', '25.81', '77.59', '81.82', '79.65', '68.05', '79.64'], ['Kim(CNN+w2v)', '57.87', '42.26', '46.97', '78.85', '85.13', '81.87', '72.50', '71.98'], ['RNTN(socher2013recursive)', '55.56', '45.45', '50.00', '77.78', '84.00', '80.77', '72.22', '80.76'], ['[BOLD] IMDB', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['Proposed method(CNN+Graph)', '87.42', '90.85', '88.31', '86.25', '86.80', '86.60', '86.07', '87.27'], ['SVM(linear)', '77.37', '76.01', '76.69', '75.70', '77.07', '76.38', '76.53', '76.54'], ['SVM(RBF)', '65.85', '58.70', '62.07', '67.80', '74.07', '70.80', '67.00', '70.79'], ['NB(tf-idf)', '74.72', '73.41', '74.06', '73.84', '75.14', '74.49', '74.27', '74.48'], ['Kim(CNN+w2v)', '81.84', '82.35', '81.29', '82.31', '82.32', '81.01', '79.97', '81.11'], ['RNTN(socher2013recursive)', '80.98', '80.21', '80.59', '80.38', '81.14', '80.76', '80.67', '80.75']] | We compare the performance of the proposed method with support vector machines and a convolutional neural network for short sentences using pre-trained Google word embeddings (kim2014convolutional). It is important to note how well an algorithm performs on the different classes of a dataset. For example, SVM performs poorly on the positive samples of the Stanford dataset, probably because of the sample size, and the model is therefore biased toward the negative class. In contrast, the F1 scores of the proposed method on both the positive and negative classes show how effectively the algorithm extracts features from each class without becoming biased toward either. |
Leveraging Deep Graph-Based Text Representation for Sentiment Polarity Applications | 1902.10247 | Table 5: Comparison of graph-based learning vs. word2vec | ['Method', 'Negative class (%) precision', 'Negative class (%) recall', 'Negative class (%) F1', 'Positive class (%) precision', 'Positive class (%) recall', 'Positive class (%) F1', 'Overall (%) accuracy', 'Overall (%) F1'] | [['[BOLD] IMDB', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]', '[EMPTY]'], ['Graph', '87.42', '90.85', '88.31', '86.25', '86.80', '86.60', '86.07', '87.27'], ['w2v', '74.34', '73.37', '75.20', '71.41', '70.82', '71.32', '70.14', '72.71']] | To show the superiority of the graph representation procedure over word2vec, we extracted word embeddings on the IMDB dataset only, in order to demonstrate the effect of the graph representation on text documents. The results show that graphs extract features from text more effectively even when the corpus size is limited. It is worth mentioning that the word graphs are built only from the available corpus and do not depend on any external features. |
KSU KDD: Word Sense Induction by Clustering in Topic Space | 1302.7056 | Table 1: Effect of varying the number of topics K on performance | ['K', '10', '50', '200', '400', '500'] | [['V-measure', '5.1', '5.8', '7.2', '8.4', '8.1'], ['F-score', '8.6', '32.0', '53.9', '63.9', '64.2']] | Before running our main experiments, we wanted to see how the number of topics K used in the topic model affects the performance of our system. We found that the V-measure and F-score increase with K: as more dimensions are added to the topic space, the different senses unfold in this K-dimensional space. The trend levels off around K=400, likely a sign of the limited vocabulary of the training data. This value of K is used in all other experiments. |
KSU KDD: Word Sense Induction by Clustering in Topic Space | 1302.7056 | Table 2: V-measure and F-score on SemEval-1 | ['[EMPTY]', 'All', 'Verbs', 'Nouns'] | [['V-measure', '8.4', '8.0', '8.7'], ['F-score', '63.9', '56.8', '69.0']] | Next, we evaluated the performance of our system on the SemEval-1 WSI task data. Since no training data was provided for this task, we used an un-annotated version of the test instances to create the LDA topic model. For each target word (verb or noun), we trained the topic model on its given test instances. We then used the generated model's inferencer to find the topic distribution of each instance. These distributions are clustered in the topic space using the K-means algorithm, with cosine similarity as the distance measure between the distributions. |
Learning End-to-End Goal-Oriented Dialog with Multiple Answers | 1808.09996 | Table 3: Ablation study of our proposed model on permuted-bAbI dialog task. Results (accuracy %) are given in the standard setup, without match-type features. | ['[BOLD] Model', '[BOLD] Per-turn', '[BOLD] Per-dialog'] | [['Mask-memN2N', '93.4', '32'], ['Mask-memN2N (w/o entropy)', '92.1', '24.6'], ['Mask-memN2N (w/o L2 mask pre-training)', '85.8', '2.2'], ['Mask-memN2N (Reinforcement learning phase only)', '16.0', '0']] | Here, we study the different parts of our model to better understand how each influences overall performance. We show results for Mask-memN2N in various settings: (a) without entropy, (b) without mask pre-training, and (c) with the reinforcement learning phase only. With the RL phase alone, it is very hard for the system to learn everything by trial and error, especially because the action space is so large. Preceding it with the SL phase and L2 mask pre-training puts the system and its parameters at a good starting point from which the RL phase can improve performance. Note that it would not be valid to evaluate the SL phase on the test set, as the SL phase requires the actual answers to create the mask. |
Hand-crafted Attention is All You Need? A Study of Attention on Self-supervised Audio Transformer | 2006.05174 | Table 2: Performance of all attentions | ['[BOLD] Attention', '[BOLD] Speaker [BOLD] Utterance', '[BOLD] Speaker [BOLD] Frame', '[BOLD] Phoneme [BOLD] 1-hidden', '[BOLD] Phoneme [BOLD] 2-hidden'] | [['Baseline (Mel)', '0.0060', '0.0033', '0.5246', '0.5768'], ['Baseline (QK)', '0.9926', '0.9824', '0.6460', '0.6887'], ['Baseline (Q)', '0.9898', '0.9622', '0.5893', '0.6345'], ['Sparse (Strided)', '0.9786', '0.9039', '0.6048', '0.6450'], ['Sparse (Fixed)', '0.9597', '0.7960', '0.6069', '0.6846'], ['Sign-ALSH', '0.9716', '0.8237', '0.5863', '0.6393'], ['XBOX', '0.9639', '0.7994', '0.5860', '0.6262'], ['XBOX (QNF)', '0.9667', '0.7958', '0.5819', '0.6241'], ['Simple LSH', '0.9628', '0.7370', '0.5771', '0.6189'], ['Simple ALSH', '0.9678', '0.7999', '0.5783', '0.6214'], ['SYN. (Dense)', '0.9660', '0.9027', '0.6180', '0.6287'], ['SYN. (Dense+Mfn:mh)', '0.9509', '0.9135', '0.6073', '0.6471'], ['SYN. (Random)', '0.9803', '0.8868', '0.5820', '0.6237'], ['SYN. (Ours)', '0.9842', '0.9855', '0.6157', '0.6492']] | Baseline (QK) and Baseline (Q) (shared-QK attention) remarkably outperform Baseline (Mel), which shows the importance of pre-training. LSH/ALSH algorithms have a negative influence on most downstream tasks, showing that restricting the attention with LSH/ALSH is not effective enough. For utterance-level speaker classification, the average pooling layer in the downstream model acts like a global attention mechanism, which compensates for the effects of LSH/ALSH. |
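The per-class F1 scores discussed for Tables 4 and 5 above are the harmonic mean of precision and recall. A minimal sketch (illustrative only, not code from any of the papers) showing how a class-imbalanced classifier can pair a high majority-class F1 with a zero minority-class F1 — the bias pattern described for SVM(linear) on the Stanford dataset:

```python
def f1(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall (values in percent)."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# SVM(linear) on Stanford (Table 4): perfect recall on the majority
# (negative) class, nothing on the minority (positive) class.
neg_f1 = f1(79.21, 100.0)  # ~88.40, matching the reported negative-class F1
pos_f1 = f1(0.0, 0.0)      # 0.00 -- the model never predicts positive
```

A high overall accuracy (79.20% here) can thus coexist with a completely failed minority class, which is why the per-class columns matter.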
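The SemEval-1 WSI pipeline described above (KSU KDD rows) clusters per-instance topic distributions with K-means under cosine similarity. Below is a self-contained sketch of that clustering step only: the synthetic distributions and the deterministic farthest-point initialization are assumptions for illustration, not the authors' implementation, and in the paper's setting the rows of `X` would be LDA topic distributions inferred for each test instance of a target word.

```python
import numpy as np

def cosine_kmeans(X, k, n_iter=20):
    """K-means on rows of X under cosine similarity (spherical K-means)."""
    # Work with unit vectors so cosine similarity is a plain dot product.
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    # Deterministic farthest-point initialization: start from row 0, then
    # repeatedly add the row least similar to the centers chosen so far.
    idx = [0]
    for _ in range(k - 1):
        sims = np.max(Xn @ Xn[idx].T, axis=1)
        idx.append(int(np.argmin(sims)))
    centers = Xn[idx].copy()
    for _ in range(n_iter):
        # Assign each point to its most similar center, then re-normalize
        # each center to the mean direction of its members.
        labels = np.argmax(Xn @ centers.T, axis=1)
        for j in range(k):
            members = Xn[labels == j]
            if len(members):
                c = members.mean(axis=0)
                centers[j] = c / np.linalg.norm(c)
    return labels

# Two well-separated groups of synthetic "topic distributions".
X = np.vstack([np.tile([0.90, 0.05, 0.05], (5, 1)),
               np.tile([0.05, 0.05, 0.90], (5, 1))])
labels = cosine_kmeans(X, k=2)
```

Each resulting cluster corresponds to one induced sense of the target word.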