question_id (string, 40 chars) | question (string, 4–171 chars) | answer (sequence) | evidence (sequence) |
---|---|---|---|
60cb756d382b3594d9e1f4a5e2366db407e378ae | Apart from using desired properties, do they evaluate their LAN approach in some other way? | [
"No"
] | [
[]
] |
352a1bf734b2d7f0618e9e2b0dbed4a3f1787160 | Do they evaluate existing methods in terms of desired properties? | [
"Yes"
] | [
[]
] |
045dbdbda5d96a672e5c69442e30dbf21917a1ee | How does the model differ from Generative Adversarial Networks? | [
"Unanswerable"
] | [
[]
] |
c20b012ad31da46642c553ce462bc0aad56912db | What is the dataset used to train the model? | [
" movie sentence polarity dataset from BIBREF19, laptop and restaurant datasets collected from SemEval-201, we collected 2,000 reviews for each domain from the same review source"
] | [
[
"Clean-labeled Datasets. We use three clean labeled datasets. The first one is the movie sentence polarity dataset from BIBREF19. The other two datasets are laptop and restaurant datasets collected from SemEval-2016 . The former consists of laptop review sentences and the latter consists of restaurant review sentences. The original datasets (i.e., Laptop and Restaurant) were annotated with aspect polarity in each sentence. We used all sentences with only one polarity (positive or negative) for their aspects. That is, we only used sentences with aspects having the same sentiment label in each sentence. Thus, the sentiment of each aspect gives the ground-truth as the sentiments of all aspects are the same.",
"Noisy-labeled Training Datasets. For the above three domains (movie, laptop, and restaurant), we collected 2,000 reviews for each domain from the same review source. We extracted sentences from each review and assigned review's label to its sentences. Like previous work, we treat 4 or 5 stars as positive and 1 or 2 stars as negative. The data is noisy because a positive (negative) review can contain negative (positive) sentences, and there are also neutral sentences. This gives us three noisy-labeled training datasets. We still use the same test sets as those for the clean-labeled datasets. Summary statistics of all the datasets are shown in Table TABREF9."
]
] |
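The noisy-labeling scheme quoted in the evidence above (reviews with 4 or 5 stars are treated as positive, 1 or 2 stars as negative, and the review label is propagated to every sentence) can be sketched in a few lines. This is an illustrative sketch, not code from the paper: the data layout and the function name are assumptions.

```python
def weak_label_sentences(reviews):
    """Assign each review's star-derived label to all of its sentences.

    `reviews` is assumed to be a list of (star_rating, [sentence, ...]) pairs;
    this layout and the function name are illustrative, not from the paper.
    """
    labeled = []
    for stars, sentences in reviews:
        if stars >= 4:          # 4 or 5 stars -> positive
            label = "positive"
        elif stars <= 2:        # 1 or 2 stars -> negative
            label = "negative"
        else:                   # 3-star reviews carry no clear polarity
            continue
        # Every sentence inherits the review label, which introduces noise:
        # a positive review may still contain negative or neutral sentences.
        labeled.extend((sentence, label) for sentence in sentences)
    return labeled
```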
13e87f6d68f7217fd14f4f9a008a65dd2a0ba91c | What is the performance of the model? | [
"Experiment 1: ACC around 0.5 with 50% noise rate in worst case - clearly higher than baselines for all noise rates\nExperiment 2: ACC on real noisy datasets: 0.7 on Movie, 0.79 on Laptop, 0.86 on Restaurant (clearly higher than baselines in almost all cases)"
] | [
[
"The test accuracy curves with the noise rates [0, $0.1$, $0.2$, $0.3$, $0.4$, $0.5$] are shown in Figure FIGREF13. From the figure, we can see that the test accuracy drops from around 0.8 to 0.5 when the noise rate increases from 0 to 0.5, but our NetAb outperforms CNN. The results clearly show that the performance of the CNN drops quite a lot with the noise rate increasing.",
"The comparison results are shown in Table TABREF12. From the results, we can make the following observations. (1) Our NetAb model achieves the best ACC and F1 on all datasets except for F1 of negative class on Laptop. The results demonstrate the superiority of NetAb. (2) NetAb outperforms the baselines designed for learning with noisy labels. These baselines are inferior to ours as they were tailored for image classification. Note that we found no existing method to deal with noisy labels for SSC. Training Details. We use the publicly available pre-trained embedding GloVe.840B BIBREF48 to initialize the word vectors and the embedding dimension is 300."
]
] |
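The training detail quoted above (word vectors initialized from the public GloVe.840B embeddings, dimension 300) corresponds to a standard initialization routine. The sketch below shows how such loading is typically done and is an assumption: the file path, vocabulary layout, and random fallback are placeholders, not details from the paper.

```python
import numpy as np

def load_glove_embeddings(vocab, path="glove.840B.300d.txt", dim=300):
    """Build a |vocab| x dim embedding matrix from GloVe vectors.

    `vocab` maps word -> row index; words missing from the file keep a
    small random initialization. The path is a placeholder for the
    publicly released GloVe.840B vectors."""
    matrix = np.random.uniform(-0.25, 0.25, (len(vocab), dim)).astype(np.float32)
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            word, values = parts[0], parts[1:]
            if word in vocab and len(values) == dim:  # skip malformed lines
                matrix[vocab[word]] = np.asarray(values, dtype=np.float32)
    return matrix
```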
89b9a2389166b992c42ca19939d750d88c5fa79b | Is the model evaluated against a CNN baseline? | [
"Yes"
] | [
[
"Baselines. We use one strong non-DNN baseline, NBSVM (with unigrams or bigrams features) BIBREF23 and six DNN baselines. The first DNN baseline is CNN BIBREF25, which does not handle noisy labels. The other five were designed to handle noisy labels.",
"The comparison results are shown in Table TABREF12. From the results, we can make the following observations. (1) Our NetAb model achieves the best ACC and F1 on all datasets except for F1 of negative class on Laptop. The results demonstrate the superiority of NetAb. (2) NetAb outperforms the baselines designed for learning with noisy labels. These baselines are inferior to ours as they were tailored for image classification. Note that we found no existing method to deal with noisy labels for SSC. Training Details. We use the publicly available pre-trained embedding GloVe.840B BIBREF48 to initialize the word vectors and the embedding dimension is 300."
]
] |
dccc3b182861fd19ccce5bd00ce9c3f40451ed6e | Does the model proposed beat the baseline models for all the values of the masking parameter tested? | [
"No"
] | [
[]
] |
98ba7a7aae388b1a77dd6cab890977251d906359 | Has STES been previously used in the literature to evaluate similar tasks? | [
"No"
] | [
[
"We propose a pipeline called SMERTI (pronounced `smarty') for STE. Combining entity replacement (ER), similarity masking (SM), and text infilling (TI), SMERTI can modify the semantic content of text. We define a metric called the Semantic Text Exchange Score (STES) that evaluates the overall ability of a model to perform STE, and an adjustable parameter masking (replacement) rate threshold (MRT/RRT) that can be used to control the amount of semantic change."
]
] |
3da9a861dfa25ed486cff0ef657d398fdebf8a93 | What are the baseline models mentioned in the paper? | [
"Noun WordNet Semantic Text Exchange Model (NWN-STEM), General WordNet Semantic Text Exchange Model (GWN-STEM), Word2Vec Semantic Text Exchange Model (W2V-STEM)"
] | [
[
"We evaluate on three datasets: Yelp and Amazon reviews BIBREF1, and Kaggle news headlines BIBREF2. We implement three baseline models for comparison: Noun WordNet Semantic Text Exchange Model (NWN-STEM), General WordNet Semantic Text Exchange Model (GWN-STEM), and Word2Vec Semantic Text Exchange Model (W2V-STEM)."
]
] |
8c0a0747a970f6ea607ff9b18cfeb738502d9a95 | What was the performance of both approaches on their dataset? | [
"ERR of 19.05 with i-vectors and 15.52 with x-vectors"
] | [
[]
] |
529dabe7b4a8a01b20ee099701834b60fb0c43b0 | What kind of settings do the utterances come from? | [
"entertainment, interview, singing, play, movie, vlog, live broadcast, speech, drama, recitation and advertisement"
] | [
[
"CN-Celeb specially focuses on Chinese celebrities, and contains more than $130,000$ utterances from $1,000$ persons.",
"CN-Celeb covers more genres of speech. We intentionally collected data from 11 genres, including entertainment, interview, singing, play, movie, vlog, live broadcast, speech, drama, recitation and advertisement. The speech of a particular speaker may be in more than 5 genres. As a comparison, most of the utterances in VoxCeleb were extracted from interview videos. The diversity in genres makes our database more representative for the true scenarios in unconstrained conditions, but also more challenging."
]
] |
a2be2bd84e5ae85de2ab9968147b3d49c84dfb7f | What genres are covered? | [
"genre, entertainment, interview, singing, play, movie, vlog, live broadcast, speech, drama, recitation and advertisement"
] | [
[]
] |
5699996a7a2bb62c68c1e62e730cabf1e3186eef | Do they experiment with cross-genre setups? | [
"No"
] | [
[]
] |
944d5dbe0cfc64bf41ea36c11b1d378c408d40b8 | Which of the two speech recognition models works better overall on CN-Celeb? | [
"x-vector"
] | [
[]
] |
327e6c6609fbd4c6ae76284ca639951f03eb4a4c | By how much is performance on CN-Celeb inferior to performance on VoxCeleb? | [
"For i-vector system, performances are 11.75% inferior to voxceleb. For x-vector system, performances are 10.74% inferior to voxceleb"
] | [
[]
] |
df8cc1f395486a12db98df805248eb37c087458b | On what datasets is the new model evaluated on? | [
"SST (Stanford Sentiment Treebank), Subj (Subjectivity dataset), MPQA Opinion Corpus, RT is another movie review sentiment dataset, TREC is a dataset for classification of the six question types"
] | [
[
"SST BIBREF25 SST (Stanford Sentiment Treebank) is a dataset for sentiment classification on movie reviews, which are annotated with five labels (SST5: very positive, positive, neutral, negative, or very negative) or two labels (SST2: positive or negative).",
"Subj BIBREF26 Subj (Subjectivity dataset) is annotated with whether a sentence is subjective or objective.",
"MPQA BIBREF27 MPQA Opinion Corpus is an opinion polarity detection dataset of short phrases rather than sentences, which contains news articles from a wide variety of news sources manually annotated for opinions and other private states (i.e., beliefs, emotions, sentiments, speculations, etc.).",
"RT BIBREF28 RT is another movie review sentiment dataset contains a collection of short review excerpts from Rotten Tomatoes collected by Bo Pang and Lillian Lee.",
"TREC BIBREF29 TREC is a dataset for classification of the six question types (whether the question is about person, location, numeric information, etc.)."
]
] |
6e97c06f998f09256be752fa75c24ba853b0db24 | How do the authors measure performance? | [
"Accuracy across six datasets"
] | [
[]
] |
de2d33760dc05f9d28e9dabc13bab2b3264cadb7 | Does the new objective perform better than the original objective bert is trained on? | [
"Yes"
] | [
[
"Table 2 lists the accuracies of the all methods on two classifier architectures. The results show that, for various datasets on different classifier architectures, our conditional BERT contextual augmentation improves the model performances most. BERT can also augments sentences to some extent, but not as much as conditional BERT does. For we masked words randomly, the masked words may be label-sensitive or label-insensitive. If label-insensitive words are masked, words predicted through BERT may not be compatible with original labels. The improvement over all benchmark datasets also shows that conditional BERT is a general augmentation method for multi-labels sentence classification tasks."
]
] |
63bb39fd098786a510147f8ebc02408de350cb7c | Are other pretrained language models also evaluated for contextual augmentation? | [
"No"
] | [
[]
] |
6333845facb22f862ffc684293eccc03002a4830 | Do the authors report performance of conditional bert on tasks without data augmentation? | [
"Yes"
] | [
[
"Table 2 lists the accuracies of the all methods on two classifier architectures. The results show that, for various datasets on different classifier architectures, our conditional BERT contextual augmentation improves the model performances most. BERT can also augments sentences to some extent, but not as much as conditional BERT does. For we masked words randomly, the masked words may be label-sensitive or label-insensitive. If label-insensitive words are masked, words predicted through BERT may not be compatible with original labels. The improvement over all benchmark datasets also shows that conditional BERT is a general augmentation method for multi-labels sentence classification tasks."
]
] |
a12a08099e8193ff2833f79ecf70acf132eda646 | Do they cover data augmentation papers? | [
"No"
] | [
[]
] |
999b20dc14cb3d389d9e3ba5466bc3869d2d6190 | What is the latest paper covered by this survey? | [
"Kim et al. (2019)"
] | [
[]
] |
ca4b66ffa4581f9491442dcec78ca556253c8146 | Do they survey visual question generation work? | [
"Yes"
] | [
[
"Visual Question Generation (VQG) is another emerging topic which aims to ask questions given an image. We categorize VQG into grounded- and open-ended VQG by the level of cognition. Grounded VQG generates visually grounded questions, i.e., all relevant information for the answer can be found in the input image BIBREF74 . A key purpose of grounded VQG is to support the dataset construction for VQA. To ensure the questions are grounded, existing systems rely on image captions to varying degrees. BIBREF75 and BIBREF76 simply convert image captions into questions using rule-based methods with textual patterns. BIBREF74 proposed a neural model that can generate questions with diverse types for a single image, using separate networks to construct dense image captions and to select question types.",
"In contrast to grounded QG, humans ask higher cognitive level questions about what can be inferred rather than what can be seen from an image. Motivated by this, BIBREF10 proposed open-ended VQG that aims to generate natural and engaging questions about an image. These are deep questions that require high cognition such as analyzing and creation. With significant progress in deep generative models, marked by variational auto-encoders (VAEs) and GANs, such models are also used in open-ended VQG to bring “creativity” into generated questions BIBREF77 , BIBREF78 , showing promising results. This also brings hope to address deep QG from text, as applied in NLG: e.g., SeqGAN BIBREF79 and LeakGAN BIBREF80 ."
]
] |
b3ff166bd480048e099d09ba4a96e2e32b42422b | Do they survey multilingual aspects? | [
"No"
] | [
[]
] |
3703433d434f1913307ceb6a8cfb9a07842667dd | What learning paradigms do they cover in this survey? | [
"Considering \"What\" and \"How\" separately versus jointly optimizing for both."
] | [
[
"Past research took a reductionist approach, separately considering these two problems of “what” and “how” via content selection and question construction. Given a sentence or a paragraph as input, content selection selects a particular salient topic worthwhile to ask about and determines the question type (What, When, Who, etc.). Approaches either take a syntactic BIBREF11 , BIBREF12 , BIBREF13 or semantic BIBREF14 , BIBREF3 , BIBREF15 , BIBREF16 tack, both starting by applying syntactic or semantic parsing, respectively, to obtain intermediate symbolic representations. Question construction then converts intermediate representations to a natural language question, taking either a tranformation- or template-based approach. The former BIBREF17 , BIBREF18 , BIBREF13 rearranges the surface form of the input sentence to produce the question; the latter BIBREF19 , BIBREF20 , BIBREF21 generates questions from pre-defined question templates. Unfortunately, such QG architectures are limiting, as their representation is confined to the variety of intermediate representations, transformation rules or templates.",
"In contrast, neural models motivate an end-to-end architectures. Deep learned frameworks contrast with the reductionist approach, admitting approaches that jointly optimize for both the “what” and “how” in an unified framework. The majority of current NQG models follow the sequence-to-sequence (Seq2Seq) framework that use a unified representation and joint learning of content selection (via the encoder) and question construction (via the decoder). In this framework, traditional parsing-based content selection has been replaced by more flexible approaches such as attention BIBREF22 and copying mechanism BIBREF23 . Question construction has become completely data-driven, requiring far less labor compared to transformation rules, enabling better language flexibility compared to question templates."
]
] |
f7c34b128f8919e658ba4d5f1f3fc604fb7ff793 | What are all the input modalities considered in prior work in question generation? | [
"Textual inputs, knowledge bases, and images."
] | [
[
"Question generation is an NLG task for which the input has a wealth of possibilities depending on applications. While a host of input modalities have been considered in other NLG tasks, such as text summarization BIBREF24 , image captioning BIBREF25 and table-to-text generation BIBREF26 , traditional QG mainly focused on textual inputs, especially declarative sentences, explained by the original application domains of question answering and education, which also typically featured textual inputs.",
"Recently, with the growth of various QA applications such as Knowledge Base Question Answering (KBQA) BIBREF27 and Visual Question Answering (VQA) BIBREF28 , NQG research has also widened the spectrum of sources to include knowledge bases BIBREF29 and images BIBREF10 . This trend is also spurred by the remarkable success of neural models in feature representation, especially on image features BIBREF30 and knowledge representations BIBREF31 . We discuss adapting NQG models to other input modalities in Section \"Wider Input Modalities\" ."
]
] |
d42031893fd4ba5721c7d37e1acb1c8d229ffc21 | Do they survey non-neural methods for question generation? | [
"No"
] | [
[]
] |
a999761aa976458bbc7b4f330764796446d030ff | What is their model? | [
"cross-lingual NE recognition"
] | [
[
"Most annotated corpus based NE recognition tasks can benefit a great deal from a known NE dictionary, as NEs are those words which carry common sense knowledge quite differ from the rest ones in any language vocabulary. This work will focus on the NE recognition from plain text instead of corpus based NE recognition. For a purpose of learning from limited annotated linguistic resources, our preliminary discovery shows that it is possible to build a geometric space projection between embedding spaces to help cross-lingual NE recognition. Our study contains two main steps: First, we explore the NE distribution in monolingual case. Next, we learn a hypersphere mapping between embedding spaces of languages with minimal supervision."
]
] |
f229069bcb05c2e811e4786c89b0208af90d9a25 | Do they evaluate on NER data sets? | [
"Yes"
] | [
[
"To evaluate the influence of our hypersphere feature for off-the-shelf NER systems, we perform the NE recognition on two standard NER benchmark datasets, CoNLL2003 and ONTONOTES 5.0. Our results in Table 6 and Table 7 demonstrate the power of hypersphere features, which contribute to nearly all of the three types of entities as shown in Table 6, except for a slight drop in the PER type of BIBREF22 on a strong baseline. HS features stably enhance all strong state-of-the-art baselines, BIBREF22 , BIBREF21 and BIBREF23 by 0.33/0.72/0.23 $F_1$ point and 0.13/0.3/0.1 $F_1$ point on both benchmark datasets, CoNLL-2003 and ONTONOTES 5.0. We show that our HS feature is also comparable with previous much more complicated LS feature, and our model surpasses their baseline (without LS feature) by 0.58/0.78 $F_1$ point with only HS features. We establish a new state-of-the-art $F_1$ score of 89.75 on ONTONOTES 5.0, while matching state-of-the-art performance with a $F_1$ score of 92.95 on CoNLL-2003 dataset."
]
] |
6b55b558ed581759425ede5d3a6fcdf44b8082ac | What previously proposed methods is this method compared against? | [
"Naive Bayes, SVM, Maximum Entropy classifiers"
] | [
[
"The baseline model for our experiments is explained in the paper by Alec Go [1]. The model uses the Naive Bayes, SVM, and the Maximum Entropy classifiers for their experiment. Their feature vector is either composed of Unigrams, Bigrams, Unigrams + Bigrams, or Unigrams + POS tags."
]
] |
3e3f5254b729beb657310a5561950085fa690e83 | How is effective word score calculated? | [
"We define the Effective Word Score of score x as\n\nEFWS(x) = N(+x) - N(-x),\n\nwhere N(x) is the number of words in the tweet with polarity score x."
] | [
[
"We define the Effective Word Score of score x as",
"EFWS(x) = N(+x) - N(-x),",
"where N(x) is the number of words in the tweet with polarity score x."
]
] |
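The Effective Word Score defined in the row above translates directly into a short helper. Only the formula EFWS(x) = N(+x) - N(-x) comes from the answer itself; the tokenization and the per-word polarity scores in the example are hypothetical.

```python
def efws(word_scores, x):
    """Effective Word Score at level x: EFWS(x) = N(+x) - N(-x),
    where N(s) counts words in the tweet with polarity score s.
    `word_scores` is an assumed list of per-word integer polarity scores."""
    n_pos = sum(1 for s in word_scores if s == x)
    n_neg = sum(1 for s in word_scores if s == -x)
    return n_pos - n_neg

# Example with hypothetical scores for the words of one tweet:
print(efws([3, -3, 3, 1, 0, -1], 3))  # -> 1
```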
5bb96b255dab3e47a8a68b1ffd7142d0e21ebe2a | How is tweet subjectivity measured? | [
"Unanswerable"
] | [
[]
] |
129c03acb0963ede3915415953317556a55f34ee | Why is supporting fact supervision necessary for DMN? | [
"First, the GRU only allows sentences to have context from sentences before them, but not after them. This prevents information propagation from future sentences. Second, the supporting sentences may be too far away from each other on a word level to allow for these distant sentences to interact through the word level GRU."
] | [
[
"We speculate that there are two main reasons for this performance disparity, all exacerbated by the removal of supporting facts. First, the GRU only allows sentences to have context from sentences before them, but not after them. This prevents information propagation from future sentences. Second, the supporting sentences may be too far away from each other on a word level to allow for these distant sentences to interact through the word level GRU."
]
] |
58b3b630a31fcb9bffb510390e1ec30efe87bfbf | What does supporting fact supervision mean? | [
" the facts that are relevant for answering a particular question) are labeled during training."
] | [
[
"We analyze the DMN components, specifically the input module and memory module, to improve question answering. We propose a new input module which uses a two level encoder with a sentence reader and input fusion layer to allow for information flow between sentences. For the memory, we propose a modification to gated recurrent units (GRU) BIBREF7 . The new GRU formulation incorporates attention gates that are computed using global knowledge over the facts. Unlike before, the new DMN+ model does not require that supporting facts (i.e. the facts that are relevant for answering a particular question) are labeled during training. The model learns to select the important facts from a larger set."
]
] |
141dab98d19a070f1ce7e7dc384001d49125d545 | What changes they did on input module? | [
"For the DMN+, we propose replacing this single GRU with two different components. The first component is a sentence reader, The second component is the input fusion layer"
] | [
[
"For the DMN+, we propose replacing this single GRU with two different components. The first component is a sentence reader, responsible only for encoding the words into a sentence embedding. The second component is the input fusion layer, allowing for interactions between sentences. This resembles the hierarchical neural auto-encoder architecture of BIBREF9 and allows content interaction between sentences. We adopt the bi-directional GRU for this input fusion layer because it allows information from both past and future sentences to be used. As gradients do not need to propagate through the words between sentences, the fusion layer also allows for distant supporting sentences to have a more direct interaction."
]
] |
afdad4c9bdebf88630262f1a9a86ac494f06c4c1 | What improvements they did for DMN? | [
"the new DMN+ model does not require that supporting facts (i.e. the facts that are relevant for answering a particular question) are labeled during training., In addition, we introduce a new input module to represent images."
] | [
[
"We analyze the DMN components, specifically the input module and memory module, to improve question answering. We propose a new input module which uses a two level encoder with a sentence reader and input fusion layer to allow for information flow between sentences. For the memory, we propose a modification to gated recurrent units (GRU) BIBREF7 . The new GRU formulation incorporates attention gates that are computed using global knowledge over the facts. Unlike before, the new DMN+ model does not require that supporting facts (i.e. the facts that are relevant for answering a particular question) are labeled during training. The model learns to select the important facts from a larger set.",
"In addition, we introduce a new input module to represent images. This module is compatible with the rest of the DMN architecture and its output is fed into the memory module. We show that the changes in the memory module that improved textual question answering also improve visual question answering. Both tasks are illustrated in Fig. 1 ."
]
] |
bfd4fc82ffdc5b2b32c37f4222e878106421ce2a | How does the model circumvent the lack of supporting facts during training? | [
"the input fusion layer to allow interactions between input facts and a novel attention based GRU that allows for logical reasoning over ordered inputs. "
] | [
[
"We have proposed new modules for the DMN framework to achieve strong results without supervision of supporting facts. These improvements include the input fusion layer to allow interactions between input facts and a novel attention based GRU that allows for logical reasoning over ordered inputs. Our resulting model obtains state of the art results on both the VQA dataset and the bAbI-10k text question-answering dataset, proving the framework can be generalized across input domains."
]
] |
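The evidence above mentions an attention-based GRU for reasoning over ordered facts. As a point of reference, the sketch below shows one common formulation in which a scalar attention gate g replaces the usual update gate; the exact gating and weight shapes used by the paper may differ, so treat this as an assumption (biases are omitted for brevity).

```python
import numpy as np

def attention_gru_step(h_prev, fact, g, W, U, Wh, Uh):
    """One step of an attention-gated GRU over ordered facts.

    g is a scalar attention gate in [0, 1], assumed to be computed
    elsewhere from global question/memory information; weight shapes
    are assumptions for illustration."""
    r = 1.0 / (1.0 + np.exp(-(W @ fact + U @ h_prev)))   # reset gate
    h_tilde = np.tanh(Wh @ fact + Uh @ (r * h_prev))     # candidate state
    return g * h_tilde + (1.0 - g) * h_prev              # gate replaces the update gate
```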
1ce26783f0ff38925bfc07bbbb65d206e52c2d21 | Does the DMN+ model establish state-of-the-art ? | [
"Yes"
] | [
[
"We have proposed new modules for the DMN framework to achieve strong results without supervision of supporting facts. These improvements include the input fusion layer to allow interactions between input facts and a novel attention based GRU that allows for logical reasoning over ordered inputs. Our resulting model obtains state of the art results on both the VQA dataset and the bAbI-10k text question-answering dataset, proving the framework can be generalized across input domains."
]
] |
9213159f874b3bdd9b4de956a88c703aac988411 | Is this style generator compared to some baseline? | [
"Yes"
] | [
[
"We compare the above model to a similar model, where rather than explicitly represent $K$ features as input, we have $K$ features in the form of a genre embedding, i.e. we learn a genre specific embedding for each of the gothic, scifi, and philosophy genres, as studied in BIBREF8 and BIBREF7. To generate in a specific style, we simply set the appropriate embedding. We use genre embeddings of size 850 which is equivalent to the total size of the $K$ feature embeddings in the StyleEQ model."
]
] |
5f4e6ce4a811c4b3ab07335d89db2fd2a8d8d8b2 | How they perform manual evaluation, what is criteria? | [
"accuracy"
] | [
[
"Each annotator annotated 90 reference sentences (i.e. from the training corpus) with which style they thought the sentence was from. The accuracy on this baseline task for annotators A1, A2, and A3 was 80%, 88%, and 80% respectively, giving us an upper expected bound on the human evaluation."
]
] |
a234bcbf2e41429422adda37d9e926b49ef66150 | What metrics are used for automatic evaluation? | [
"classification accuracy, BLEU scores, model perplexities of the reconstruction"
] | [
[
"In tab:blueperpl we report BLEU scores for the reconstruction of test set sentences from their content and feature representations, as well as the model perplexities of the reconstruction. For both models, we use beam decoding with a beam size of eight. Beam candidates are ranked according to their length normalized log-likelihood. On these automatic measures we see that StyleEQ is better able to reconstruct the original sentences. In some sense this evaluation is mostly a sanity check, as the feature controls contain more locally specific information than the genre embeddings, which say very little about how many specific function words one should expect to see in the output.",
"In table:fasttext-results we see the results. Note that for both models, the all and top classification accuracy tends to be quite similar, though for the Baseline they are often almost exactly the same when the Baseline has little to no diversity in the outputs."
]
] |
c383fa9170ae00a4a24a8e39358c38395c5f034b | How they know what are content words? | [
" words found in the control word lists are then removed, The remaining words, which represent the content"
] | [
[
"fig:sentenceinput illustrates the process. Controls are calculated heuristically. All words found in the control word lists are then removed from the reference sentence. The remaining words, which represent the content, are used as input into the model, along with their POS tags and lemmas.",
"In this way we encourage models to construct a sentence using content and style independently. This will allow us to vary the stylistic controls while keeping the content constant, and successfully perform style transfer. When generating a new sentence, the controls correspond to the counts of the corresponding syntactic features that we expect to be realized in the output."
]
] |
83251fd4a641cea8b180b49027e74920bca2699a | How they model style as a suite of low-level linguistic controls, such as frequency of pronouns, prepositions, and subordinate clause constructions? | [
"style of a sentence is represented as a vector of counts of closed word classes (like personal pronouns) as well as counts of syntactic features like the number of SBAR non-terminals in its constituency parse, since clause structure has been shown to be indicative of style"
] | [
[
"Given that non-content words are distinctive enough for a classifier to determine style, we propose a suite of low-level linguistic feature counts (henceforth, controls) as our formal, content-blind definition of style. The style of a sentence is represented as a vector of counts of closed word classes (like personal pronouns) as well as counts of syntactic features like the number of SBAR non-terminals in its constituency parse, since clause structure has been shown to be indicative of style BIBREF20. Controls are extracted heuristically, and almost all rely on counts of pre-defined word lists. For constituency parses we use the Stanford Parser BIBREF21. table:controlexamples lists all the controls along with examples."
]
] |
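The control features described above (counts of closed word classes plus syntactic counts such as SBAR non-terminals) can be illustrated with a minimal extractor. This is a sketch under assumptions: the pronoun list is a small illustrative subset, and the bracketed parse string stands in for the Stanford Parser output the paper actually uses.

```python
PERSONAL_PRONOUNS = {"i", "you", "he", "she", "it", "we", "they",
                     "me", "him", "her", "us", "them"}  # illustrative subset

def style_controls(tokens, parse_string):
    """Count closed-class words and SBAR non-terminals as style controls.

    `parse_string` is assumed to be a bracketed constituency parse,
    e.g. "(S (NP (PRP I)) (VP ... (SBAR ...)))"."""
    return {
        "personal_pronouns": sum(t.lower() in PERSONAL_PRONOUNS for t in tokens),
        "sbar_clauses": parse_string.count("(SBAR"),
    }
```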
5d70c32137e82943526911ebdf78694899b3c28a | Do they report results only on English data? | [
"Unanswerable"
] | [
[]
] |
97dac7092cf8082a6238aaa35f4b185343b914af | What insights into the relationship between demographics and mental health are provided? | [
"either likely depressed-user population is younger, or depressed youngsters are more likely to disclose their age, more women than men were given a diagnosis of depression"
] | [
[
"Age Enabled Ground-truth Dataset: We extract user's age by applying regular expression patterns to profile descriptions (such as \"17 years old, self-harm, anxiety, depression\") BIBREF41 . We compile \"age prefixes\" and \"age suffixes\", and use three age-extraction rules: 1. I am X years old 2. Born in X 3. X years old, where X is a \"date\" or age (e.g., 1994). We selected a subset of 1061 users among INLINEFORM0 as gold standard dataset INLINEFORM1 who disclose their age. From these 1061 users, 822 belong to depressed class and 239 belong to control class. From 3981 depressed users, 20.6% disclose their age in contrast with only 4% (239/4789) among control group. So self-disclosure of age is more prevalent among vulnerable users. Figure FIGREF18 depicts the age distribution in INLINEFORM2 . The general trend, consistent with the results in BIBREF42 , BIBREF49 , is biased toward young people. Indeed, according to Pew, 47% of Twitter users are younger than 30 years old BIBREF50 . Similar data collection procedure with comparable distribution have been used in many prior efforts BIBREF51 , BIBREF49 , BIBREF42 . We discuss our approach to mitigate the impact of the bias in Section 4.1. The median age is 17 for depressed class versus 19 for control class suggesting either likely depressed-user population is younger, or depressed youngsters are more likely to disclose their age for connecting to their peers (social homophily.) BIBREF51",
"Gender Enabled Ground-truth Dataset: We selected a subset of 1464 users INLINEFORM0 from INLINEFORM1 who disclose their gender in their profile description. From 1464 users 64% belonged to the depressed group, and the rest (36%) to the control group. 23% of the likely depressed users disclose their gender which is considerably higher (12%) than that for the control class. Once again, gender disclosure varies among the two gender groups. For statistical significance, we performed chi-square test (null hypothesis: gender and depression are two independent variables). Figure FIGREF19 illustrates gender association with each of the two classes. Blue circles (positive residuals, see Figure FIGREF19 -A,D) show positive association among corresponding row and column variables while red circles (negative residuals, see Figure FIGREF19 -B,C) imply a repulsion. Our findings are consistent with the medical literature BIBREF10 as according to BIBREF52 more women than men were given a diagnosis of depression. In particular, the female-to-male ratio is 2.1 and 1.9 for Major Depressive Disorder and Dysthymic Disorder respectively. Our findings from Twitter data indicate there is a strong association (Chi-square: 32.75, p-value:1.04e-08) between being female and showing depressive behavior on Twitter."
]
] |
195611926760d1ceec00bd043dfdc8eba2df5ad1 | What model is used to achieve 5% improvement on F1 for identifying depressed individuals on Twitter? | [
"Random Forest classifier"
] | [
[
"We use the above findings for predicting depressive behavior. Our model exploits early fusion BIBREF32 technique in feature space and requires modeling each user INLINEFORM0 in INLINEFORM1 as vector concatenation of individual modality features. As opposed to computationally expensive late fusion scheme where each modality requires a separate supervised modeling, this model reduces the learning effort and shows promising results BIBREF75 . To develop a generalizable model that avoids overfitting, we perform feature selection using statistical tests and all relevant ensemble learning models. It adds randomness to the data by creating shuffled copies of all features (shadow feature), and then trains Random Forest classifier on the extended data. Iteratively, it checks whether the actual feature has a higher Z-score than its shadow feature (See Algorithm SECREF6 and Figure FIGREF45 ) BIBREF76 ."
]
] |
445e792ce7e699e960e2cb4fe217aeacdd88d392 | How do this framework facilitate demographic inference from social media? | [
"Demographic information is predicted using weighted lexicon of terms."
] | [
[
"We employ BIBREF73 's weighted lexicon of terms that uses the dataset of 75,394 Facebook users who shared their status, age and gender. The predictive power of this lexica was evaluated on Twitter, blog, and Facebook, showing promising results BIBREF73 . Utilizing these two weighted lexicon of terms, we are predicting the demographic information (age or gender) of INLINEFORM0 (denoted by INLINEFORM1 ) using following equation: INLINEFORM2",
"where INLINEFORM0 is the lexicon weight of the term, and INLINEFORM1 represents the frequency of the term in the user generated INLINEFORM2 , and INLINEFORM3 measures total word count in INLINEFORM4 . As our data is biased toward young people, we report age prediction performance for each age group separately (Table TABREF42 ). Moreover, to measure the average accuracy of this model, we build a balanced dataset (keeping all the users above 23 -416 users), and then randomly sampling the same number of users from the age ranges (11,19] and (19,23]. The average accuracy of this model is 0.63 for depressed users and 0.64 for control class. Table TABREF44 illustrates the performance of gender prediction for each class. The average accuracy is 0.82 on INLINEFORM5 ground-truth dataset."
]
] |
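The equation referenced in the evidence above was lost to extraction (the INLINEFORM placeholders), but the description matches the standard weighted-lexicon predictor: sum each term's lexicon weight times its relative frequency in the user's text, plus an optional intercept. The sketch below follows that standard form and is an assumption, not the paper's exact equation.

```python
from collections import Counter

def lexicon_score(tokens, lexicon_weights, intercept=0.0):
    """Weighted-lexicon prediction of a demographic attribute.

    `lexicon_weights` maps terms to real-valued weights (e.g., an age or
    gender lexicon); the score is the intercept plus the sum of weight
    times relative term frequency over the user's tokens."""
    counts = Counter(tokens)
    total = max(len(tokens), 1)
    return intercept + sum(w * counts[t] / total
                           for t, w in lexicon_weights.items() if t in counts)
```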
a3b1520e3da29d64af2b6e22ff15d330026d0b36 | What types of features are used from each data type? | [
"facial presence, Facial Expression, General Image Features, textual content, analytical thinking, clout, authenticity, emotional tone, Sixltr, informal language markers, 1st person singular pronouns"
] | [
[
"For capturing facial presence, we rely on BIBREF56 's approach that uses multilevel convolutional coarse-to-fine network cascade to tackle facial landmark localization. We identify facial presentation, emotion from facial expression, and demographic features from profile/posted images . Table TABREF21 illustrates facial presentation differences in both profile and posted images (media) for depressed and control users in INLINEFORM0 . With control class showing significantly higher in both profile and media (8%, 9% respectively) compared to that for the depressed class. In contrast with age and gender disclosure, vulnerable users are less likely to disclose their facial identity, possibly due to lack of confidence or fear of stigma.",
"Facial Expression:",
"Following BIBREF8 's approach, we adopt Ekman's model of six emotions: anger, disgust, fear, joy, sadness and surprise, and use the Face++ API to automatically capture them from the shared images. Positive emotions are joy and surprise, and negative emotions are anger, disgust, fear, and sadness. In general, for each user u in INLINEFORM0 , we process profile/shared images for both the depressed and the control groups with at least one face from the shared images (Table TABREF23 ). For the photos that contain multiple faces, we measure the average emotion.",
"General Image Features:",
"The importance of interpretable computational aesthetic features for studying users' online behavior has been highlighted by several efforts BIBREF55 , BIBREF8 , BIBREF57 . Color, as a pillar of the human vision system, has a strong association with conceptual ideas like emotion BIBREF58 , BIBREF59 . We measured the normalized red, green, blue and the mean of original colors, and brightness and contrast relative to variations of luminance. We represent images in Hue-Saturation-Value color space that seems intuitive for humans, and measure mean and variance for saturation and hue. Saturation is defined as the difference in the intensities of the different light wavelengths that compose the color. Although hue is not interpretable, high saturation indicates vividness and chromatic purity which are more appealing to the human eye BIBREF8 . Colorfulness is measured as a difference against gray background BIBREF60 . Naturalness is a measure of the degree of correspondence between images and the human perception of reality BIBREF60 . In color reproduction, naturalness is measured from the mental recollection of the colors of familiar objects. Additionally, there is a tendency among vulnerable users to share sentimental quotes bearing negative emotions. We performed optical character recognition (OCR) with python-tesseract to extract text and their sentiment score. As illustrated in Table TABREF26 , vulnerable users tend to use less colorful (higher grayscale) profile as well as shared images to convey their negative feelings, and share images that are less natural (Figure FIGREF15 ). With respect to the aesthetic quality of images (saturation, brightness, and hue), depressed users use images that are less appealing to the human eye. We employ independent t-test, while adopting Bonferroni Correction as a conservative approach to adjust the confidence intervals. Overall, we have 223 features, and choose Bonferroni-corrected INLINEFORM0 level of INLINEFORM1 (*** INLINEFORM2 , ** INLINEFORM3 ).",
"Qualitative Language Analysis: The recent LIWC version summarizes textual content in terms of language variables such as analytical thinking, clout, authenticity, and emotional tone. It also measures other linguistic dimensions such as descriptors categories (e.g., percent of target words gleaned by dictionary, or longer than six letters - Sixltr) and informal language markers (e.g., swear words, netspeak), and other linguistic aspects (e.g., 1st person singular pronouns.)"
]
] |
2cf8825639164a842c3172af039ff079a8448592 | How is the data annotated? | [
"The data are self-reported by Twitter users and then verified by two human experts."
] | [
[
"Self-disclosure clues have been extensively utilized for creating ground-truth data for numerous social media analytic studies e.g., for predicting demographics BIBREF36 , BIBREF41 , and user's depressive behavior BIBREF46 , BIBREF47 , BIBREF48 . For instance, vulnerable individuals may employ depressive-indicative terms in their Twitter profile descriptions. Others may share their age and gender, e.g., \"16 years old suicidal girl\"(see Figure FIGREF15 ). We employ a huge dataset of 45,000 self-reported depressed users introduced in BIBREF46 where a lexicon of depression symptoms consisting of 1500 depression-indicative terms was created with the help of psychologist clinician and employed for collecting self-declared depressed individual's profiles. A subset of 8,770 users (24 million time-stamped tweets) containing 3981 depressed and 4789 control users (that do not show any depressive behavior) were verified by two human judges BIBREF46 . This dataset INLINEFORM0 contains the metadata values of each user such as profile descriptions, followers_count, created_at, and profile_image_url."
]
] |
36b25021464a9574bf449e52ae50810c4ac7b642 | Where does the information on individual-level demographics come from? | [
"From Twitter profile descriptions of the users."
] | [
[
"Age Enabled Ground-truth Dataset: We extract user's age by applying regular expression patterns to profile descriptions (such as \"17 years old, self-harm, anxiety, depression\") BIBREF41 . We compile \"age prefixes\" and \"age suffixes\", and use three age-extraction rules: 1. I am X years old 2. Born in X 3. X years old, where X is a \"date\" or age (e.g., 1994). We selected a subset of 1061 users among INLINEFORM0 as gold standard dataset INLINEFORM1 who disclose their age. From these 1061 users, 822 belong to depressed class and 239 belong to control class. From 3981 depressed users, 20.6% disclose their age in contrast with only 4% (239/4789) among control group. So self-disclosure of age is more prevalent among vulnerable users. Figure FIGREF18 depicts the age distribution in INLINEFORM2 . The general trend, consistent with the results in BIBREF42 , BIBREF49 , is biased toward young people. Indeed, according to Pew, 47% of Twitter users are younger than 30 years old BIBREF50 . Similar data collection procedure with comparable distribution have been used in many prior efforts BIBREF51 , BIBREF49 , BIBREF42 . We discuss our approach to mitigate the impact of the bias in Section 4.1. The median age is 17 for depressed class versus 19 for control class suggesting either likely depressed-user population is younger, or depressed youngsters are more likely to disclose their age for connecting to their peers (social homophily.) BIBREF51",
"Gender Enabled Ground-truth Dataset: We selected a subset of 1464 users INLINEFORM0 from INLINEFORM1 who disclose their gender in their profile description. From 1464 users 64% belonged to the depressed group, and the rest (36%) to the control group. 23% of the likely depressed users disclose their gender which is considerably higher (12%) than that for the control class. Once again, gender disclosure varies among the two gender groups. For statistical significance, we performed chi-square test (null hypothesis: gender and depression are two independent variables). Figure FIGREF19 illustrates gender association with each of the two classes. Blue circles (positive residuals, see Figure FIGREF19 -A,D) show positive association among corresponding row and column variables while red circles (negative residuals, see Figure FIGREF19 -B,C) imply a repulsion. Our findings are consistent with the medical literature BIBREF10 as according to BIBREF52 more women than men were given a diagnosis of depression. In particular, the female-to-male ratio is 2.1 and 1.9 for Major Depressive Disorder and Dysthymic Disorder respectively. Our findings from Twitter data indicate there is a strong association (Chi-square: 32.75, p-value:1.04e-08) between being female and showing depressive behavior on Twitter."
]
] |
98515bd97e4fae6bfce2d164659cd75e87a9fc89 | What is the source of the user interaction data? | [
"Sociability from ego-network on Twitter"
] | [
[
"The recent advancements in deep neural networks, specifically for image analysis task, can lead to determining demographic features such as age and gender BIBREF13 . We show that by determining and integrating heterogeneous set of features from different modalities – aesthetic features from posted images (colorfulness, hue variance, sharpness, brightness, blurriness, naturalness), choice of profile picture (for gender, age, and facial expression), the screen name, the language features from both textual content and profile's description (n-gram, emotion, sentiment), and finally sociability from ego-network, and user engagement – we can reliably detect likely depressed individuals in a data set of 8,770 human-annotated Twitter users."
]
] |
53bf6238baa29a10f4ff91656c470609c16320e1 | What is the source of the textual data? | [
"Users' tweets"
] | [
[
"The recent advancements in deep neural networks, specifically for image analysis task, can lead to determining demographic features such as age and gender BIBREF13 . We show that by determining and integrating heterogeneous set of features from different modalities – aesthetic features from posted images (colorfulness, hue variance, sharpness, brightness, blurriness, naturalness), choice of profile picture (for gender, age, and facial expression), the screen name, the language features from both textual content and profile's description (n-gram, emotion, sentiment), and finally sociability from ego-network, and user engagement – we can reliably detect likely depressed individuals in a data set of 8,770 human-annotated Twitter users."
]
] |
b27f7993b1fe7804c5660d1a33655e424cea8d10 | What is the source of the visual data? | [
"Profile pictures from the Twitter users' profiles."
] | [
[
"The recent advancements in deep neural networks, specifically for image analysis task, can lead to determining demographic features such as age and gender BIBREF13 . We show that by determining and integrating heterogeneous set of features from different modalities – aesthetic features from posted images (colorfulness, hue variance, sharpness, brightness, blurriness, naturalness), choice of profile picture (for gender, age, and facial expression), the screen name, the language features from both textual content and profile's description (n-gram, emotion, sentiment), and finally sociability from ego-network, and user engagement – we can reliably detect likely depressed individuals in a data set of 8,770 human-annotated Twitter users."
]
] |
e21a8581cc858483a31c6133e53dd0cfda76ae4c | Is there an online demo of their system? | [
"No"
] | [
[]
] |
9f6e877e3bde771595e8aee10c2656a0e7b9aeb2 | Do they perform manual evaluation? | [
"Yes"
] | [
[
"Table 2 lists some example definitions generated with different models. For each word-sememes pair, the generated three definitions are ordered according to the order: Baseline, AAM and SAAM. For AAM and SAAM, we use the model that incorporates sememes. These examples show that with sememes, the model can generate more accurate and concrete definitions. For example, for the word “旅馆” (hotel), the baseline model fails to generate definition containing the token “旅行者”(tourists). However, by incoporating sememes' information, especially the sememe “旅游” (tour), AAM and SAAM successfully generate “旅行者”(tourists). Manual inspection of others examples also supports our claim."
]
] |
a3783e42c2bf616c8a07bd3b3d503886660e4344 | Do they compare against Noraset et al. 2017? | [
"Yes"
] | [
[
"The baseline model BIBREF3 is implemented with a recurrent neural network based encoder-decoder framework. Without utilizing the information of sememes, it learns a probabilistic mapping $P(y | x)$ from the word $x$ to be defined to a definition $y = [y_1, \\dots , y_T ]$ , in which $y_t$ is the $t$ -th word of definition $y$ ."
]
] |
0d0959dba3f7c15ee4f5cdee51682656c4abbd8f | What is a sememe? | [
"Sememes are minimum semantic units of word meanings, and the meaning of each word sense is typically composed of several sememes"
] | [
[
"In this work, we introduce a new dataset for the Chinese definition modeling task that we call Chinese Definition Modeling Corpus cdm(CDM). CDM consists of 104,517 entries, where each entry contains a word, the sememes of a specific word sense, and the definition in Chinese of the same word sense. Sememes are minimum semantic units of word meanings, and the meaning of each word sense is typically composed of several sememes, as is illustrated in Figure 1 . For a given word sense, CDM annotates the sememes according to HowNet BIBREF5 , and the definition according to Chinese Concept Dictionary (CCD) BIBREF6 . Since sememes have been widely used in improving word representation learning BIBREF7 and word similarity computation BIBREF8 , we argue that sememes can benefit the task of definition modeling."
]
] |
589be705a5cc73a23f30decba23ce58ec39d313b | What data did they use? | [
"the Dutch section of the OSCAR corpus"
] | [
[
"We pre-trained our model on the Dutch section of the OSCAR corpus, a large multilingual corpus which was obtained by language classification in the Common Crawl corpus BIBREF16. This Dutch corpus has 6.6 billion words, totalling 39 GB of text. It contains 126,064,722 lines of text, where each line can contain multiple sentences. Subsequent lines are however not related to each other, due to the shuffled nature of the OSCAR data set. For comparison, the French RoBERTa-based language model CamemBERT BIBREF7 has been trained on the French portion of OSCAR, which consists of 138 GB of scraped text."
]
] |
6e962f1f23061f738f651177346b38fd440ff480 | What is the state of the art? | [
"BERTje BIBREF8, an ULMFiT model (Universal Language Model Fine-tuning for Text Classification model) BIBREF19., mBERT"
] | [
[
"We replicated the high-level sentiment analysis task used to evaluate BERTje BIBREF8 to be able to compare our methods. This task uses a dataset called Dutch Book Reviews Dataset (DBRD), in which book reviews scraped from hebban.nl are labeled as positive or negative BIBREF19. Although the dataset contains 118,516 reviews, only 22,252 of these reviews are actually labeled as positive or negative. The DBRD dataset is already split in a balanced 10% test and 90% train split, allowing us to easily compare to other models trained for solving this task. This dataset was released in a paper analysing the performance of an ULMFiT model (Universal Language Model Fine-tuning for Text Classification model) BIBREF19."
]
] |
594a6bf37eab64a16c6a05c365acc100e38fcff1 | What language tasks did they experiment on? | [
"sentiment analysis, the disambiguation of demonstrative pronouns,"
] | [
[
"We evaluated RobBERT in several different settings on multiple downstream tasks. First, we compare its performance with other BERT-models and state-of-the-art systems in sentiment analysis, to show its performance for classification tasks. Second, we compare its performance in a recent Dutch language task, namely the disambiguation of demonstrative pronouns, which allows us to additionally compare the zero-shot performance of our and other BERT models, i.e. using only the pre-trained model without any fine-tuning."
]
] |
d79d897f94e666d5a6fcda3b0c7e807c8fad109e | What result from experiments suggest that natural language based agents are more robust? | [
"Average reward across 5 seeds show that NLP representations are robust to changes in the environment as well task-nuisances"
] | [
[
"Results of the DQN-based agent are presented in fig: scenario comparison. Each plot depicts the average reward (across 5 seeds) of all representations methods. It can be seen that the NLP representation outperforms the other methods. This is contrary to the fact that it contains the same information as the semantic segmentation maps. More interestingly, comparing the vision-based and feature-based representations render inconsistent conclusions with respect to their relative performance. NLP representations remain robust to changes in the environment as well as task-nuisances in the state. As depicted in fig: nuisance scenarios, inflating the state space with task-nuisances impairs the performance of all representations. There, a large amount of unnecessary objects were spawned in the level, increasing the state's description length to over 250 words, whilst retaining the same amount of useful information. Nevertheless, the NLP representation outperformed the vision and feature based representations, with high robustness to the applied noise."
]
] |
599d9ca21bbe2dbe95b08cf44dfc7537bde06f98 | How better is performance of natural language based agents in experiments? | [
"Unanswerable"
] | [
[]
] |
827464c79f33e69959de619958ade2df6f65fdee | How much faster natural language agents converge in performed experiments? | [
"Unanswerable"
] | [
[]
] |
8e857e44e4233193c7b2d538e520d37be3ae1552 | What experiments authors perform? | [
"a basic scenario, a health gathering scenario, a scenario in which the agent must take cover from fireballs, a scenario in which the agent must defend itself from charging enemies, and a super scenario, where a mixture of the above scenarios"
] | [
[
"We tested the natural language representation against the visual-based and feature representations on several tasks, with varying difficulty. In these tasks, the agent could navigate, shoot, and collect items such as weapons and medipacks. Often, enemies of different types attacked the agent, and a positive reward was given when an enemy was killed. Occasionally, the agent also suffered from health degeneration. The tasks included a basic scenario, a health gathering scenario, a scenario in which the agent must take cover from fireballs, a scenario in which the agent must defend itself from charging enemies, and a super scenario, where a mixture of the above scenarios was designed to challenge the agent."
]
] |
084fb7c80a24b341093d4bf968120e3aff56f693 | How is state to learn and complete tasks represented via natural language? | [
" represent the state using natural language"
] | [
[
"The term representation is used differently in different contexts. For the purpose of this paper we define a semantic representation of a state as one that reflects its meaning as it is understood by an expert. The semantic representation of a state should thus be paired with a reliable and computationally efficient method for extracting information from it. Previous success in RL has mainly focused on representing the state in its raw form (e.g., visual input in Atari-based games BIBREF2). This approach stems from the belief that neural networks (specifically convolutional networks) can extract meaningful features from complex inputs. In this work, we challenge current representation techniques and suggest to represent the state using natural language, similar to the way we, as humans, summarize and transfer information efficiently from one to the other BIBREF5."
]
] |
babe72f0491e65beff0e5889380e8e32d7a81f78 | How does the model compare with the MMR baseline? | [
" Moreover, our TL-TranSum method also outperforms other approaches such as MaxCover ( $5\\%$ ) and MRMR ( $7\\%$ )"
] | [
[
"Various classes of NP-hard problems involving a submodular and non-decreasing function can be solved approximately by polynomial time algorithms with provable approximation factors. Algorithms \"Detection of hypergraph transversals for text summarization\" and \"Detection of hypergraph transversals for text summarization\" are our core methods for the detection of approximations of maximal budgeted hypergraph transversals and minimal soft hypergraph transversals, respectively. In each case, a transversal is found and the summary is formed by extracting and aggregating the associated sentences. Algorithm \"Detection of hypergraph transversals for text summarization\" is based on an adaptation of an algorithm presented in BIBREF30 for the maximization of submodular functions under a Knaspack constraint. It is our primary transversal-based summarization model, and we refer to it as the method of Transversal Summarization with Target Length (TL-TranSum algorithm). Algorithm \"Detection of hypergraph transversals for text summarization\" is an application of the algorithm presented in BIBREF20 for solving the submodular set covering problem. We refer to it as Transversal Summarization with Target Coverage (TC-TranSum algorithm). Both algorithms produce transversals by iteratively appending the node inducing the largest increase in the total weight of the covered hyperedges relative to the node weight. While long sentences are expected to cover more themes and induce a larger increase in the total weight of covered hyperedges, the division by the node weights (i.e. the sentence lengths) balances this tendency and allows the inclusion of short sentences as well. In contrast, the methods of sentence selection based on a maximal relevance and a minimal redundancy such as, for instance, the maximal marginal relevance approach of BIBREF31 , tend to favor the selection of long sentences only. The main difference between algorithms \"Detection of hypergraph transversals for text summarization\" and \"Detection of hypergraph transversals for text summarization\" is the stopping criterion: in algorithm \"Detection of hypergraph transversals for text summarization\" , the approximate minimal soft transversal is obtained whenever the targeted hyperedge coverage is reached while algorithm \"Detection of hypergraph transversals for text summarization\" appends a given sentence to the approximate maximal budgeted transversal only if its addition does not make the summary length exceed the target length $L$ ."
]
] |
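The greedy rule quoted above (repeatedly add the sentence with the largest increase in covered hyperedge weight per unit of sentence length, skipping sentences that would exceed the target length) can be illustrated with a small sketch. This is not the authors' code: the hyperedge weights, sentence lengths, and budget are hypothetical inputs, and the extra steps the cited algorithm uses to obtain its approximation guarantee are omitted.

```python
def greedy_budgeted_transversal(hyperedges, edge_weight, sentence_length, budget):
    """Greedy approximation of a maximal budgeted hypergraph transversal.

    hyperedges      : dict hyperedge_id -> set of sentence ids it contains
    edge_weight     : dict hyperedge_id -> positive theme weight
    sentence_length : dict sentence_id -> length in words
    budget          : target summary length L
    """
    summary, covered, total_length = [], set(), 0
    candidates = set(sentence_length)

    def gain(s):
        # weight of hyperedges newly covered by s, per unit of sentence length
        new_weight = sum(edge_weight[e] for e, nodes in hyperedges.items()
                         if e not in covered and s in nodes)
        return new_weight / sentence_length[s]

    while candidates:
        best = max(candidates, key=gain)
        candidates.discard(best)
        if gain(best) == 0:
            break  # no remaining sentence covers any new hyperedge
        if total_length + sentence_length[best] <= budget:
            summary.append(best)
            total_length += sentence_length[best]
            covered |= {e for e, nodes in hyperedges.items() if best in nodes}
    return summary

# Hypothetical toy example: three sentences, three themes, budget of 20 words.
hyperedges = {"theme1": {1, 2}, "theme2": {2, 3}, "theme3": {3}}
print(greedy_budgeted_transversal(hyperedges,
                                  {"theme1": 2.0, "theme2": 1.5, "theme3": 1.0},
                                  {1: 10, 2: 12, 3: 8}, budget=20))  # [3, 1]
```

The division by sentence length in `gain` is what keeps short sentences competitive with long ones, which is the contrast the evidence draws with maximal-marginal-relevance style selection.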
31ee92e521be110b6a5a8d08cc9e6f90a3a97aae | Does the paper discuss previous models which have been applied to the same task? | [
"Yes"
] | [
[
"An emerging body of work in natural language processing and computational social science has investigated how NLP systems can detect moral sentiment in online text. For example, moral rhetoric in social media and political discourse BIBREF19, BIBREF20, BIBREF21, the relation between moralization in social media and violent protests BIBREF22, and bias toward refugees in talk radio shows BIBREF23 have been some of the topics explored in this line of inquiry. In contrast to this line of research, the development of a formal framework for moral sentiment change is still under-explored, with no existing systematic and formal treatment of this topic BIBREF16."
]
] |
737397f66751624bcf4ef891a10b29cfc46b0520 | Which datasets are used in the paper? | [
"Google N-grams\nCOHA\nMoral Foundations Dictionary (MFD)\n"
] | [
[
"To ground moral sentiment in text, we leverage the Moral Foundations Dictionary BIBREF27. The MFD is a psycholinguistic resource that associates each MFT category with a set of seed words, which are words that provide evidence for the corresponding moral category in text. We use the MFD for moral polarity classification by dividing seed words into positive and negative sets, and for fine-grained categorization by splitting them into the 10 MFT categories.",
"To implement the first tier of our framework and detect moral relevance, we complement our morally relevant seed words with a corresponding set of seed words approximating moral irrelevance based on the notion of valence, i.e., the degree of pleasantness or unpleasantness of a stimulus. We refer to the emotional valence ratings collected by BIBREF28 for approximately 14,000 English words, and choose the words with most neutral valence rating that do not occur in the MFD as our set of morally irrelevant seed words, for an equal total number of morally relevant and morally irrelevant words.",
"We divide historical time into decade-long bins, and use two sets of embeddings provided by BIBREF30, each trained on a different historical corpus of English:",
"Google N-grams BIBREF31: a corpus of $8.5 \\times 10^{11}$ tokens collected from the English literature (Google Books, all-genres) spanning the period 1800–1999.",
"COHA BIBREF32: a smaller corpus of $4.1 \\times 10^8$ tokens from works selected so as to be genre-balanced and representative of American English in the period 1810–2009."
]
] |
87cb19e453cf7e248f24b5f7d1ff9f02d87fc261 | How does the parameter-free model work? | [
"A Centroid model summarizes each set of seed words by its expected vector in embedding space, and classifies concepts into the class of closest expected embedding in Euclidean distance following a softmax rule;, A Naïve Bayes model considers both mean and variance, under the assumption of independence among embedding dimensions, by fitting a normal distribution with mean vector and diagonal covariance matrix to the set of seed words of each class;"
] | [
[
"Table TABREF2 specifies the formulation of each model. Note that we adopt a parsimonious design principle in our modelling: both Centroid and Naïve Bayes are parameter-free models, $k$NN only depends on the choice of $k$, and KDE uses a single bandwidth parameter $h$.",
"A Centroid model summarizes each set of seed words by its expected vector in embedding space, and classifies concepts into the class of closest expected embedding in Euclidean distance following a softmax rule;",
"A Naïve Bayes model considers both mean and variance, under the assumption of independence among embedding dimensions, by fitting a normal distribution with mean vector and diagonal covariance matrix to the set of seed words of each class;"
]
] |
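The two parameter-free formulations quoted above can be sketched directly in NumPy. The per-class matrices of seed-word embeddings are hypothetical inputs, and reading the centroid rule as a softmax over negative Euclidean distances with no temperature is an assumption about the exact softmax used.

```python
import numpy as np

def centroid_predict(query_vec, class_seed_embeddings):
    """Softmax over negative Euclidean distances to each class centroid."""
    # class_seed_embeddings: dict class_name -> (n_seeds, dim) array (hypothetical input)
    names = list(class_seed_embeddings)
    dists = np.array([np.linalg.norm(query_vec - class_seed_embeddings[c].mean(axis=0))
                      for c in names])
    scores = np.exp(-dists)
    return dict(zip(names, scores / scores.sum()))

def gaussian_nb_predict(query_vec, class_seed_embeddings, eps=1e-6):
    """Diagonal-covariance Gaussian fit to each class's seed-word embeddings."""
    names, log_liks = list(class_seed_embeddings), []
    for c in names:
        X = class_seed_embeddings[c]
        mu, var = X.mean(axis=0), X.var(axis=0) + eps
        # sum of per-dimension log-densities (independence assumption across dimensions)
        ll = -0.5 * np.sum(np.log(2 * np.pi * var) + (query_vec - mu) ** 2 / var)
        log_liks.append(ll)
    log_liks = np.array(log_liks)
    probs = np.exp(log_liks - log_liks.max())
    return dict(zip(names, probs / probs.sum()))
```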
5fb6a21d10adf4e81482bb5c1ec1787dc9de260d | How do they quantify moral relevance? | [
"By complementing morally relevant seed words with a set of morally irrelevant seed words based on the notion of valence"
] | [
[
"To implement the first tier of our framework and detect moral relevance, we complement our morally relevant seed words with a corresponding set of seed words approximating moral irrelevance based on the notion of valence, i.e., the degree of pleasantness or unpleasantness of a stimulus. We refer to the emotional valence ratings collected by BIBREF28 for approximately 14,000 English words, and choose the words with most neutral valence rating that do not occur in the MFD as our set of morally irrelevant seed words, for an equal total number of morally relevant and morally irrelevant words."
]
] |
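The selection rule described in the evidence (take the words with the most neutral valence rating that do not occur in the MFD, in equal number to the moral seeds) reduces to a filter and a sort. A minimal sketch, assuming valence scores on a 1 to 9 scale with 5 as the neutral midpoint; both the scale and the input format are assumptions.

```python
def neutral_seed_words(valence_ratings, mfd_words, n_seeds, midpoint=5.0):
    """Pick the n_seeds words closest to neutral valence that are not in the MFD.

    valence_ratings: dict word -> valence score (1-9 scale and midpoint assumed)
    mfd_words      : set of morally relevant seed words to exclude
    """
    candidates = [(abs(v - midpoint), w) for w, v in valence_ratings.items()
                  if w not in mfd_words]
    candidates.sort()
    return [w for _, w in candidates[:n_seeds]]
```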
542a87f856cb2c934072bacaa495f3c2645f93be | Which fine-grained moral dimension examples do they showcase? | [
"Care / Harm, Fairness / Cheating, Loyalty / Betrayal, Authority / Subversion, and Sanctity / Degradation"
] | [
[
"We draw from research in social psychology to inform our methodology, most prominently Moral Foundations Theory BIBREF26. MFT seeks to explain the structure and variation of human morality across cultures, and proposes five moral foundations: Care / Harm, Fairness / Cheating, Loyalty / Betrayal, Authority / Subversion, and Sanctity / Degradation. Each foundation is summarized by a positive and a negative pole, resulting in ten fine-grained moral categories."
]
] |
4fcc668eb3a042f60c4ce2e7d008e7923b25b4fc | Which dataset sources to they use to demonstrate moral sentiment through history? | [
"Unanswerable"
] | [
[]
] |
c180f44667505ec03214d44f4970c0db487a8bae | How well did the system do? | [
"the neural approach is generally preferred by a greater percentage of participants than the rules or random, human-made game outperforms them all"
] | [
[
"We conducted two sets of human participant evaluations by recruiting participants over Amazon Mechanical Turk. The first evaluation tests the knowledge graph construction phase, in which we measure perceived coherence and genre or theme resemblance of graphs extracted by different models. The second study compares full games—including description generation and game assembly, which can't easily be isolated from graph construction—generated by different methods. This study looks at how interesting the games were to the players in addition to overall coherence and genre resemblance. Both studies are performed across two genres: mystery and fairy-tales. This is done in part to test the relative effectiveness of our approach across different genres with varying thematic commonsense knowledge. The dataset used was compiled via story summaries that were scraped from Wikipedia via a recursive crawling bot. The bot searched pages for both for plot sections as well as links to other potential stories. From the process, 695 fairy-tales and 536 mystery stories were compiled from two categories: novels and short stories. We note that the mysteries did not often contain many fantasy elements, i.e. they consisted of mysteries set in our world such as Sherlock Holmes, while the fairy-tales were much more removed from reality. Details regarding how each of the studies were conducted and the corresponding setup are presented below.",
"Each participant was was asked to play the neural game and then another one from one of the three additional models within a genre. The completion criteria for each game is collect half the total score possible in the game, i.e. explore half of all possible rooms and examine half of all possible entities. This provided the participant with multiple possible methods of finishing a particular game. On completion, the participant was asked to rank the two games according to overall perceived coherence, interestingness, and adherence to the genre. We additionally provided a required initial tutorial game which demonstrated all of these mechanics. The order in which participants played the games was also randomized as in the graph evaluation to remove potential correlations. We had 75 participants in total, 39 for mystery and 36 for fairy-tales. As each player played the neural model created game and one from each of the other approaches—this gave us 13 on average for the other approaches in the mystery genre and 12 for fairy-tales.",
"In the mystery genre, the neural approach is generally preferred by a greater percentage of participants than the rules or random. The human-made game outperforms them all. A significant exception to is that participants thought that the rules-based game was more interesting than the neural game. The trends in the fairy-tale genre are in general similar with a few notable deviations. The first deviation is that the rules-based and random approaches perform significantly worse than neural in this genre. We see also that the neural game is as coherent as the human-made game."
]
] |
76d62e414a345fe955dc2d99562ef5772130bc7e | How is the information extracted? | [
"neural question-answering technique to extract relations from a story text, OpenIE5, a commonly used rule-based information extraction technique"
] | [
[
"The first phase is to extract a knowledge graph from the story that depicts locations, characters, objects, and the relations between these entities. We present two techniques. The first uses neural question-answering technique to extract relations from a story text. The second, provided as a baseline, uses OpenIE5, a commonly used rule-based information extraction technique. For the sake of simplicity, we considered primarily the location-location and location-character/object relations, represented by the “next to” and “has” edges respectively in Figure FIGREF4.",
"While many neural models already exist that perform similar tasks such as named entity extraction and part of speech tagging, they often come at the cost of large amounts of specialized labeled data suited for that task. We instead propose a new method that leverages models trained for context-grounded question-answering tasks to do entity extraction with no task dependent data or fine-tuning necessary. Our method, dubbed AskBERT, leverages the Question-Answering (QA) model ALBERT BIBREF15. AskBERT consists of two main steps as shown in Figure FIGREF7: vertex extraction and graph construction.",
"The first step is to extract the set of entities—graph vertices—from the story. We are looking to extract information specifically regarding characters, locations, and objects. This is done by using asking the QA model questions such as “Who is a character in the story?”. BIBREF16 have shown that the phrasing of questions given to a QA model is important and this forms the basis of how we formulate our questions—questions are asked so that they are more likely to return a single answer, e.g. asking “Where is a location in the story?” as opposed to “Where are the locations in the story?”. In particular, we notice that pronoun choice can be crucial; “Where is a location in the story?” yielded more consistent extraction than “What is a location in the story?”. ALBERT QA is trained to also output a special <$no$-$answer$> token when it cannot find an answer to the question within the story. Our method makes use of this by iteratively asking QA model a question and masking out the most likely answer outputted on the previous step. This process continues until the <$no$-$answer$> token becomes the most likely answer.",
"The next step is graph construction. Typical interactive fiction worlds are usually structured as trees, i.e. no cycles except between locations. Using this fact, we use an approach that builds a graph from the vertex set by one relation—or edge—at a time. Once again using the entire story plot as context, we query the ALBERT-QA model picking a random starting location $x$ from the set of vertices previously extracted.and asking the questions “What location can I visit from $x$?” and “Who/What is in $x$?”. The methodology for phrasing these questions follows that described for the vertex extraction. The answer given by the QA model is matched to the vertex set by picking the vertex $u$ that contains the best word-token overlap with the answer. Relations between vertices are added by computing a relation probability on the basis of the output probabilities of the answer given by the QA model. The probability that vertices $x,u$ are related:",
"We compared our proposed AskBERT method with a non-neural, rule-based approach. This approach is based on the information extracted by OpenIE5, followed by some post-processing such as named-entity recognition and part-of-speech tagging. OpenIE5 combines several cutting-edge ideas from several existing papers BIBREF17, BIBREF18, BIBREF19 to create a powerful information extraction tools. For a given sentence, OpenIE5 generates multiple triples in the format of $\\langle entity, relation, entity\\rangle $ as concise representations of the sentence, each with a confidence score. These triples are also occasionally annotated with location information indicating that a triple happened in a location.",
"As in the neural AskBERT model, we attempt to extract information regarding locations, characters, and objects. The entire story plot is passed into the OpenIE5 and we receive a set of triples. The location annotations on the triples are used to create a set of locations. We mark which sentences in the story contain these locations. POS tagging based on marking noun-phrases is then used in conjunction with NER to further filter the set of triples—identifying the set of characters and objects in the story.",
"The graph is constructed by linking the set of triples on the basis of the location they belong to. While some sentences contain very explicit location information for OpenIE5 to mark it out in the triples, most of them do not. We therefore make the assumption that the location remains the same for all triples extracted in between sentences where locations are explicitly mentioned. For example, if there exists $location A$ in the 1st sentence and $location B$ in the 5th sentence of the story, all the events described in sentences 1-4 are considered to take place in $location A$. The entities mentioned in these events are connected to $location A$ in the graph."
]
] |
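The iterative ask-and-mask loop described for AskBERT can be approximated with the Hugging Face question-answering pipeline. This is a rough sketch, not the authors' code: the pipeline's default SQuAD model stands in for ALBERT, a confidence threshold stands in for the <no-answer> token check, and removing the answer span from the context is only one possible masking strategy.

```python
from transformers import pipeline

# The paper uses an ALBERT QA model; the pipeline's default SQuAD model is
# used here purely for illustration.
qa = pipeline("question-answering")

def extract_entities(story, question, max_entities=20, min_score=0.1):
    """Ask the same question repeatedly, masking out each extracted answer."""
    context, entities = story, []
    for _ in range(max_entities):
        result = qa(question=question, context=context)
        answer = result["answer"].strip()
        # Confidence-threshold stop is a stand-in for the <no-answer> check.
        if result["score"] < min_score or not answer:
            break
        entities.append(answer)
        # Crude masking: remove the answer span so the next pass surfaces a new entity.
        context = context[:result["start"]] + context[result["end"]:]
    return entities

# Made-up story text for illustration only.
story_text = ("Alice lived in a small cottage by the dark forest. "
              "One morning she walked to the old mill to meet the miller.")
print(extract_entities(story_text, "Who is a character in the story?"))
print(extract_entities(story_text, "Where is a location in the story?"))
```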
6b9310b577c6232e3614a1612cbbbb17067b3886 | What are some guidelines in writing input vernacular so model can generate | [
" if a vernacular paragraph contains more poetic images used in classical literature, its generated poem usually achieves higher score, poems generated from descriptive paragraphs achieve higher scores than from logical or philosophical paragraphs"
] | [
[
"1) In classical Chinese poems, poetic images UTF8gbsn(意象) were widely used to express emotions and to build artistic conception. A certain poetic image usually has some fixed implications. For example, autumn is usually used to imply sadness and loneliness. However, with the change of time, poetic images and their implications have also changed. According to our observation, if a vernacular paragraph contains more poetic images used in classical literature, its generated poem usually achieves higher score. As illustrated in Table TABREF12, both paragraph 2 and 3 are generated from pop song lyrics, paragraph 2 uses many poetic images from classical literature (e.g. pear flowers, makeup), while paragraph 3 uses modern poetic images (e.g. sparrows on the utility pole). Obviously, compared with poem 2, sentences in poem 3 seems more confusing, as the poetic images in modern times may not fit well into the language model of classical poems.",
"2) We also observed that poems generated from descriptive paragraphs achieve higher scores than from logical or philosophical paragraphs. For example, in Table TABREF12, both paragraph 4 (more descriptive) and paragraph 5 (more philosophical) were selected from famous modern prose. However, compared with poem 4, poem 5 seems semantically more confusing. We offer two explanations to the above phenomenon: i. Limited by the 28-character restriction, it is hard for quatrain poems to cover complex logical or philosophical explanation. ii. As vernacular paragraphs are more detailed and lengthy, some information in a vernacular paragraph may be lost when it is summarized into a classical poem. While losing some information may not change the general meaning of a descriptive paragraph, it could make a big difference in a logical or philosophical paragraph."
]
] |
d484a71e23d128f146182dccc30001df35cdf93f | How much is proposed model better in perplexity and BLEU score than typical UMT models? | [
"Perplexity of the best model is 65.58 compared to best baseline 105.79.\nBleu of the best model is 6.57 compared to best baseline 5.50."
] | [
[
"As illustrated in Table TABREF12 (ID 1). Given the vernacular translation of each gold poem in test set, we generate five poems using our models. Intuitively, the more the generated poem resembles the gold poem, the better the model is. We report mean perplexity and BLEU scores in Table TABREF19 (Where +Anti OT refers to adding the reinforcement loss to mitigate over-fitting and +Anti UT refers to adding phrase segmentation-based padding to mitigate under-translation), human evaluation results in Table TABREF20."
]
] |
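The two metrics reported here can be reproduced with standard tooling: perplexity is the exponential of the average per-token negative log-likelihood, and BLEU is available in NLTK. This is a generic sketch, not the paper's evaluation code; the character-level tokenisation and the toy poem strings below are assumptions.

```python
import math
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

def perplexity(total_neg_log_likelihood, n_tokens):
    """Perplexity = exp(average negative log-likelihood per token)."""
    return math.exp(total_neg_log_likelihood / n_tokens)

# BLEU between a generated poem and a gold poem, character-level tokens here
# (an assumption; the paper does not specify its tokenisation). Strings are toy examples.
references = [[list("床前明月光疑是地上霜")]]   # one reference token list per hypothesis
hypotheses = [list("床前看月光月是地上霜")]
bleu = corpus_bleu(references, hypotheses,
                   smoothing_function=SmoothingFunction().method1)
print(round(bleu, 4))
```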
5787ac3e80840fe4cf7bfae7e8983fa6644d6220 | What dataset is used for training? | [
"We collected a corpus of poems and a corpus of vernacular literature from online resources"
] | [
[
"Training and Validation Sets We collected a corpus of poems and a corpus of vernacular literature from online resources. The poem corpus contains 163K quatrain poems from Tang Poems and Song Poems, the vernacular literature corpus contains 337K short paragraphs from 281 famous books, the corpus covers various literary forms including prose, fiction and essay. Note that our poem corpus and a vernacular corpus are not aligned. We further split the two corpora into a training set and a validation set."
]
] |
ee31c8a94e07b3207ca28caef3fbaf9a38d94964 | What were the evaluation metrics? | [
"BLEU, Micro Entity F1, quality of the responses according to correctness, fluency, and humanlikeness on a scale from 1 to 5"
] | [
[
"Follow the prior works BIBREF6, BIBREF7, BIBREF9, we adopt the BLEU and the Micro Entity F1 to evaluate our model performance. The experimental results are illustrated in Table TABREF30.",
"We provide human evaluation on our framework and the compared models. These responses are based on distinct dialogue history. We hire several human experts and ask them to judge the quality of the responses according to correctness, fluency, and humanlikeness on a scale from 1 to 5. In each judgment, the expert is presented with the dialogue history, an output of a system with the name anonymized, and the gold response."
]
] |
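Micro Entity F1 pools true positives, false positives, and false negatives over all responses before computing F1. A minimal sketch, assuming the gold and predicted KB entities of each response are already given as sets; it mirrors the usual definition rather than the authors' exact scorer.

```python
def micro_entity_f1(gold_entity_sets, pred_entity_sets):
    """Micro-averaged F1 over KB entities appearing in the responses."""
    tp = fp = fn = 0
    for gold, pred in zip(gold_entity_sets, pred_entity_sets):
        tp += len(gold & pred)
        fp += len(pred - gold)
        fn += len(gold - pred)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Toy example with made-up entity names.
gold = [{"pizza_hut", "5_miles"}, {"rainy"}]
pred = [{"pizza_hut"}, {"rainy", "monday"}]
print(round(micro_entity_f1(gold, pred), 3))  # ~0.667
```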
66d743b735ba75589486e6af073e955b6bb9d2a4 | What were the baseline systems? | [
"Attn seq2seq, Ptr-UNK, KV Net, Mem2Seq, DSR"
] | [
[
"We compare our model with several baselines including:",
"Attn seq2seq BIBREF22: A model with simple attention over the input context at each time step during decoding.",
"Ptr-UNK BIBREF23: Ptr-UNK is the model which augments a sequence-to-sequence architecture with attention-based copy mechanism over the encoder context.",
"KV Net BIBREF6: The model adopted and argumented decoder which decodes over the concatenation of vocabulary and KB entities, which allows the model to generate entities.",
"Mem2Seq BIBREF7: Mem2Seq is the model that takes dialogue history and KB entities as input and uses a pointer gate to control either generating a vocabulary word or selecting an input as the output.",
"DSR BIBREF9: DSR leveraged dialogue state representation to retrieve the KB implicitly and applied copying mechanism to retrieve entities from knowledge base while decoding."
]
] |
b9f852256113ef468d60e95912800fab604966f6 | Which dialog datasets did they experiment with? | [
"Camrest, InCar Assistant"
] | [
[
"Since dialogue dataset is not typically annotated with the retrieval results, training the KB-retriever is non-trivial. To make the training feasible, we propose two methods: 1) we use a set of heuristics to derive the training data and train the retriever in a distant supervised fashion; 2) we use Gumbel-Softmax BIBREF14 as an approximation of the non-differentiable selecting process and train the retriever along with the Seq2Seq dialogue generation model. Experiments on two publicly available datasets (Camrest BIBREF11 and InCar Assistant BIBREF6) confirm the effectiveness of the KB-retriever. Both the retrievers trained with distant-supervision and Gumbel-Softmax technique outperform the compared systems in the automatic and human evaluations. Analysis empirically verifies our assumption that more than 80% responses in the dataset can be supported by a single KB row and better retrieval results lead to better task-oriented dialogue generation performance."
]
] |
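Gumbel-Softmax, mentioned above as the device that makes row selection differentiable, is available directly in PyTorch. A minimal sketch with hypothetical retriever logits and KB row encodings; it shows the mechanism only, not the paper's full retriever or Seq2Seq model.

```python
import torch
import torch.nn.functional as F

# Retriever scores over KB rows for one dialogue (hypothetical logits).
row_logits = torch.randn(1, 8, requires_grad=True)      # batch of 1, 8 KB rows

# Differentiable "selection": a near-one-hot sample that still lets gradients
# flow back into the retriever, so it can be trained jointly with the generator.
row_weights = F.gumbel_softmax(row_logits, tau=0.5, hard=True)   # shape (1, 8)

kb_row_embeddings = torch.randn(8, 128)                  # hypothetical row encodings
selected_row = row_weights @ kb_row_embeddings           # (1, 128), fed to the decoder
```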
88f8ab2a417eae497338514142ac12c3cec20876 | What KB is used? | [
"Unanswerable"
] | [
[]
] |
05e3b831e4c02bbd64a6e35f6c52f0922a41539a | At which interval do they extract video and audio frames? | [
"Unanswerable"
] | [
[]
] |
bd74452f8ea0d1d82bbd6911fbacea1bf6e08cab | Do they use pretrained word vectors for dialogue context embedding? | [
"Yes"
] | [
[]
] |
6472f9d0a385be81e0970be91795b1b97aa5a9cf | Do they train a different training method except from scheduled sampling? | [
"Answer with content missing: (list missing) \nScheduled sampling: In our experiments, we found that models trained with scheduled sampling performed better (about 0.004 BLEU-4 on validation set) than the ones trained using teacher-forcing for the AVSD dataset. Hence, we use scheduled sampling for all the results we report in this paper.\n\nYes."
] | [
[
"Since the official test set has not been released publicly, results reported on the official test set have been provided by the challenge organizers. For the prototype test set and for the ablation study presented in Table TABREF24 , we use the same code for evaluation metrics as used by BIBREF11 for fairness and comparability. We attribute the significant performance gain of our model over the baseline to a combination of several factors as described below:"
]
] |
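Scheduled sampling, referenced in the answer, mixes gold tokens and the model's own predictions when forming decoder inputs during training. A minimal sketch, assuming a linear decay of the teacher-forcing probability; the paper does not specify its schedule.

```python
import random

def scheduled_sampling_inputs(gold_tokens, predicted_tokens, epoch, total_epochs):
    """Mix gold and model-predicted previous tokens for the decoder.

    With probability p (decaying linearly from 1 to 0 over training; the
    schedule is an assumption) the decoder sees the gold previous token
    (teacher forcing); otherwise it sees its own previous prediction.
    """
    p_teacher_forcing = max(0.0, 1.0 - epoch / total_epochs)
    return [g if random.random() < p_teacher_forcing else m
            for g, m in zip(gold_tokens, predicted_tokens)]
```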
2173809eb117570d289cefada6971e946b902bd6 | Is the web interface publicly accessible? | [
"Unanswerable"
] | [
[]
] |
293e9a0b30670f4f0a380e574a416665a8c55bc2 | Is the Android application publicly available? | [
"Unanswerable"
] | [
[]
] |
17de58c17580350c9da9c2f3612784b432154d11 | What classifier is used for emergency categorization? | [
"multi-class Naive Bayes"
] | [
[
"We employ a multi-class Naive Bayes classifier as the second stage classification mechanism, for categorizing tweets appropriately, depending on the type of emergencies they indicate. This multi-class classifier is trained on data manually labeled with classes. We tokenize the training data using “NgramTokenizer” and then, apply a filter to create word vectors of strings before training. We use “trigrams” as features to build a model which, later, classifies tweets into appropriate categories, in real time. We then perform cross validation using standard techniques to calculate the results, which are shown under the label “Stage 2”, in table TABREF20 ."
]
] |
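The n-gram-feature multinomial Naive Bayes setup described above (Weka's NgramTokenizer feeding a Naive Bayes classifier) can be mirrored in scikit-learn. The tweets and labels below are made up, and the sketch uses unigram-to-trigram counts so the toy example does not produce an empty feature vector; the paper reports trigram features.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Made-up training examples; the paper's data comes from manually labelled tweets.
tweets = [
    "huge fire broke out in the warehouse near the station",
    "earthquake tremors felt across the city this morning",
    "car crash on the highway drunk driver fled the scene",
]
labels = ["fire", "earthquake", "accident"]

# N-gram counts feeding a multinomial Naive Bayes classifier, mirroring the
# NgramTokenizer + Naive Bayes setup described above.
model = make_pipeline(CountVectorizer(ngram_range=(1, 3)), MultinomialNB())
model.fit(tweets, labels)
print(model.predict(["massive fire near the old warehouse"]))  # ['fire']
```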
ff27d6e6eb77e55b4d39d343870118d1a6debd5e | What classifier is used for emergency detection? | [
"SVM"
] | [
[]
] |
29772ba04886bee2d26b7320e1c6d9b156078891 | Do the tweets come from any individual? | [
"Yes"
] | [
[
"We collect data by using the Twitter API for saved data, available for public use. For our experiments we collect 3200 tweets filtered by keywords like “fire”, “earthquake”, “theft”, “robbery”, “drunk driving”, “drunk driving accident” etc. Later, we manually label tweets with <emergency>and <non-emergency>labels for classification as stage one. Our dataset contains 1313 tweet with positive label <emergency>and 1887 tweets with a negative label <non-emergency>. We create another dataset with the positively labeled tweets and provide them with category labels like “fire”, “accident”, “earthquake” etc."
]
] |
94dc437463f7a7d68b8f6b57f1e3606eacc4490a | How many categories are there? | [
"Unanswerable"
] | [
[]
] |
9d9d84822a9c42eb0257feb7c18086d390dae3be | What was the baseline? | [
"Unanswerable"
] | [
[]
] |
d27e3a099954e917b6491e81b2e907478d7f1233 | Are the tweets specific to a region? | [
"No"
] | [
[]
] |
c0a11ba0f6bbb4c69b5a0d4ae9d18e86a4a8f354 | Do they release MED? | [
"Yes"
] | [
[
"To tackle this issue, we present a new evaluation dataset that covers a wide range of monotonicity reasoning that was created by crowdsourcing and collected from linguistics publications (Section \"Dataset\" ). Compared with manual or automatic construction, we can collect naturally-occurring examples by crowdsourcing and well-designed ones from linguistics publications. To enable the evaluation of skills required for monotonicity reasoning, we annotate each example in our dataset with linguistic tags associated with monotonicity reasoning."
]
] |
dfc393ba10ec4af5a17e5957fcbafdffdb1a6443 | What NLI models do they analyze? | [
"BiMPM, ESIM, Decomposable Attention Model, KIM, BERT"
] | [
[
"To test the difficulty of our dataset, we checked the majority class label and the accuracies of five state-of-the-art NLI models adopting different approaches: BiMPM (Bilateral Multi-Perspective Matching Model; BIBREF31 , BIBREF31 ), ESIM (Enhanced Sequential Inference Model; BIBREF32 , BIBREF32 ), Decomposable Attention Model BIBREF33 , KIM (Knowledge-based Inference Model; BIBREF34 , BIBREF34 ), and BERT (Bidirectional Encoder Representations from Transformers model; BIBREF35 , BIBREF35 ). Regarding BERT, we checked the performance of a model pretrained on Wikipedia and BookCorpus for language modeling and trained with SNLI and MultiNLI. For other models, we checked the performance trained with SNLI. In agreement with our dataset, we regarded the prediction label contradiction as non-entailment."
]
] |
311a7fa62721e82265f4e0689b4adc05f6b74215 | How do they define upward and downward reasoning? | [
"Upward reasoning is defined as going from one specific concept to a more general one. Downward reasoning is defined as the opposite, going from a general concept to one that is more specific."
] | [
[
"A context is upward entailing (shown by [... $\\leavevmode {\\color {red!80!black}\\uparrow }$ ]) that allows an inference from ( \"Introduction\" ) to ( \"Introduction\" ), where French dinner is replaced by a more general concept dinner. On the other hand, a downward entailing context (shown by [... $\\leavevmode {\\color {blue!80!black}\\downarrow }$ ]) allows an inference from ( \"Introduction\" ) to ( \"Introduction\" ), where workers is replaced by a more specific concept new workers. Interestingly, the direction of monotonicity can be reversed again by embedding yet another downward entailing context (e.g., not in ( \"Introduction\" )), as witness the fact that ( \"Introduction\" ) entails ( \"Introduction\" ). To properly handle both directions of monotonicity, NLI models must detect monotonicity operators (e.g., all, not) and their arguments from the syntactic structure.",
"All [ workers $\\leavevmode {\\color {blue!80!black}\\downarrow }$ ] [joined for a French dinner $\\leavevmode {\\color {red!80!black}\\uparrow }$ ] All workers joined for a dinner All new workers joined for a French dinner Not all [new workers $\\leavevmode {\\color {red!80!black}\\uparrow }$ ] joined for a dinner Not all workers joined for a dinner"
]
] |
82bcacad668351c0f81bd841becb2dbf115f000e | What is monotonicity reasoning? | [
"a type of reasoning based on word replacement, requires the ability to capture the interaction between lexical and syntactic structures"
] | [
[
"Concerning logical inferences, monotonicity reasoning BIBREF6 , BIBREF7 , which is a type of reasoning based on word replacement, requires the ability to capture the interaction between lexical and syntactic structures. Consider examples in ( \"Introduction\" ) and ( \"Introduction\" )."
]
] |
5937ebbf04f62d41b48cbc6b5c38fc309e5c2328 | What other relations were found in the datasets? | [
"Quotation (⌃q) dialogue acts, on the other hand, are mostly used with `Anger' and `Frustration', Action Directive (ad) dialogue act utterances, which are usually orders, frequently occur with `Anger' or `Frustration' although many with `Happy' emotion in case of the MELD dataset, Acknowledgements (b) are mostly with positive or neutral, Appreciation (ba) and Rhetorical (bh) backchannels often occur with a greater number in `Surprise', `Joy' and/or with `Excited' (in case of IEMOCAP), Questions (qh, qw, qy and qy⌃d) are mostly asked with emotions `Surprise', `Excited', `Frustration' or `Disgust' (in case of MELD) and many are neutral, No-answers (nn) are mostly `Sad' or `Frustrated' as compared to yes-answers (ny)., Forward-functions such as Apology (fa) are mostly with `Sadness' whereas Thanking (ft) and Conventional-closing or -opening (fc or fp) are usually with `Joy' or `Excited'"
] | [
[
"We can see emotional dialogue act co-occurrences with respect to emotion labels in Figure FIGREF12 for both datasets. There are sets of three bars per dialogue act in the figure, the first and second bar represent emotion labels of IEMOCAP (IE) and MELD (ME), and the third bar is for MELD sentiment (MS) labels. MELD emotion and sentiment statistics are interesting as they are strongly correlated to each other. The bars contain the normalized number of utterances for emotion labels with respect to the total number of utterances for that particular dialogue act category. The statements without-opinion (sd) and with-opinion (sv) contain utterances with almost all emotions. Many neutral utterances are spanning over all the dialogue acts.",
"Quotation (⌃q) dialogue acts, on the other hand, are mostly used with `Anger' and `Frustration' (in case of IEMOCAP), however, some utterances with `Joy' or `Sadness' as well (see examples in Table TABREF21). Action Directive (ad) dialogue act utterances, which are usually orders, frequently occur with `Anger' or `Frustration' although many with `Happy' emotion in case of the MELD dataset. Acknowledgements (b) are mostly with positive or neutral, however, Appreciation (ba) and Rhetorical (bh) backchannels often occur with a greater number in `Surprise', `Joy' and/or with `Excited' (in case of IEMOCAP). Questions (qh, qw, qy and qy⌃d) are mostly asked with emotions `Surprise', `Excited', `Frustration' or `Disgust' (in case of MELD) and many are neutral. No-answers (nn) are mostly `Sad' or `Frustrated' as compared to yes-answers (ny). Forward-functions such as Apology (fa) are mostly with `Sadness' whereas Thanking (ft) and Conventional-closing or -opening (fc or fp) are usually with `Joy' or `Excited'."
]
] |
dcd6f18922ac5c00c22cef33c53ff5ae08b42298 | How does the ensemble annotator extract the final label? | [
"First preference is given to the labels that are perfectly matching in all the neural annotators., In case two out of three context models are correct, then it is being checked if that label is also produced by at least one of the non-context models., When we see that none of the context models is producing the same results, then we rank the labels with their respective confidence values produced as a probability distribution using the $softmax$ function. The labels are sorted in descending order according to confidence values. Then we check if the first three (case when one context model and both non-context models produce the same label) or at least two labels are matching, then we allow to pick that one. , Finally, when none the above conditions are fulfilled, we leave out the label with an unknown category."
] | [
[
"First preference is given to the labels that are perfectly matching in all the neural annotators. In Table TABREF11, we can see that both datasets have about 40% of exactly matching labels over all models (AM). Then priority is given to the context-based models to check if the label in all context models is matching perfectly. In case two out of three context models are correct, then it is being checked if that label is also produced by at least one of the non-context models. Then, we allow labels to rely on these at least two context models. As a result, about 47% of the labels are taken based on the context models (CM). When we see that none of the context models is producing the same results, then we rank the labels with their respective confidence values produced as a probability distribution using the $softmax$ function. The labels are sorted in descending order according to confidence values. Then we check if the first three (case when one context model and both non-context models produce the same label) or at least two labels are matching, then we allow to pick that one. There are about 3% in IEMOCAP and 5% in MELD (BM).",
"Finally, when none the above conditions are fulfilled, we leave out the label with an unknown category. This unknown category of the dialogue act is labeled with `xx' in the final annotations, and they are about 7% in IEMOCAP and 11% in MELD (NM). The statistics of the EDAs is reported in Table TABREF13 for both datasets. Total utterances in MELD includes training, validation and test datasets."
]
] |
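The precedence rules quoted above translate naturally into a small decision function. A sketch with our own function and variable names, not the authors' code; the confidence-ranking fallback in step 3 is a simplified reading of the described procedure.

```python
from collections import Counter

def ensemble_label(context_preds, noncontext_preds, confidences):
    """Combine dialogue-act predictions from context and non-context annotators.

    context_preds / noncontext_preds: lists of predicted labels
    confidences: dict label -> best softmax confidence seen for that label
    Returns the ensemble label, or 'xx' when no agreement criterion is met.
    """
    all_preds = context_preds + noncontext_preds
    # 1. perfect agreement across all annotators
    if len(set(all_preds)) == 1:
        return all_preds[0]
    # 2. context models agree, or two of three agree with non-context backing
    ctx_majority, ctx_count = Counter(context_preds).most_common(1)[0]
    if ctx_count == len(context_preds):
        return ctx_majority
    if ctx_count >= 2 and ctx_majority in noncontext_preds:
        return ctx_majority
    # 3. confidence-ranking fallback: keep a top-ranked label produced by
    #    at least two annotators (simplified reading of the described step)
    ranked = sorted(confidences, key=confidences.get, reverse=True)
    for label in ranked[:3]:
        if all_preds.count(label) >= 2:
            return label
    # 4. no agreement: unknown category
    return "xx"

# Toy example with SwDA-style tags.
print(ensemble_label(["sd", "sd", "sv"], ["sd", "qy"],
                     {"sd": 0.71, "sv": 0.55, "qy": 0.62}))  # 'sd'
```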
2965c86467d12b79abc16e1457d848cb6ca88973 | How were dialogue act labels defined? | [
"Dialogue Act Markup in Several Layers (DAMSL) tag set"
] | [
[
"There have been many taxonomies for dialogue acts: speech acts BIBREF14 refer to the utterance, not only to present information but to the action at is performed. Speech acts were later modified into five classes (Assertive, Directive, Commissive, Expressive, Declarative) BIBREF15. There are many such standard taxonomies and schemes to annotate conversational data, and most of them follow the discourse compositionality. These schemes have proven their importance for discourse or conversational analysis BIBREF16. During the increased development of dialogue systems and discourse analysis, the standard taxonomy was introduced in recent decades, called Dialogue Act Markup in Several Layers (DAMSL) tag set. According to DAMSL, each DA has a forward-looking function (such as Statement, Info-request, Thanking) and a backwards-looking function (such as Accept, Reject, Answer) BIBREF17."
]
] |
b99948ac4810a7fe3477f6591b8cf211d6398e67 | How many models were used? | [
"five"
] | [
[
"In this work, we apply an automated neural ensemble annotation process for dialogue act labeling. Several neural models are trained with the Switchboard Dialogue Act (SwDA) Corpus BIBREF9, BIBREF10 and used for inferring dialogue acts on the emotion datasets. We ensemble five model output labels by checking majority occurrences (most of the model labels are the same) and ranking confidence values of the models. We have annotated two potential multi-modal conversation datasets for emotion recognition: IEMOCAP (Interactive Emotional dyadic MOtion CAPture database) BIBREF6 and MELD (Multimodal EmotionLines Dataset) BIBREF8. Figure FIGREF2, shows an example of dialogue acts with emotion and sentiment labels from the MELD dataset. We confirmed the reliability of annotations with inter-annotator metrics. We analysed the co-occurrences of the dialogue act and emotion labels and discovered a key relationship between them; certain dialogue acts of the utterances show significant and useful association with respective emotional states. For example, Accept/Agree dialogue act often occurs with the Joy emotion while Reject with Anger, Acknowledgements with Surprise, Thanking with Joy, and Apology with Sadness, etc. The detailed analysis of the emotional dialogue acts (EDAs) and annotated datasets are being made available at the SECURE EU Project website.",
"We adopted the neural architectures based on Bothe et al. bothe2018discourse where two variants are: non-context model (classifying at utterance level) and context model (recognizing the dialogue act of the current utterance given a few preceding utterances). From conversational analysis using dialogue acts in Bothe et al. bothe2018interspeech, we learned that the preceding two utterances contribute significantly to recognizing the dialogue act of the current utterance. Hence, we adapt this setting for the context model and create a pool of annotators using recurrent neural networks (RNNs). RNNs can model the contextual information in the sequence of words of an utterance and in the sequence of utterances of a dialogue. Each word in an utterance is represented with a word embedding vector of dimension 1024. We use the word embedding vectors from pre-trained ELMo (Embeddings from Language Models) embeddings BIBREF22. We have a pool of five neural annotators as shown in Figure FIGREF6. Our online tool called Discourse-Wizard is available to practice automated dialogue act labeling. In this tool we use the same neural architectures but model-trained embeddings (while, in this work we use pre-trained ELMo embeddings, as they are better performant but computationally and size-wise expensive to be hosted in the online tool). The annotators are:"
]
] |
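A context annotator of the kind described (dialogue act of the current utterance conditioned on the two preceding utterances, with 1024-dimensional ELMo word embeddings) might look like the PyTorch sketch below. The hidden sizes, the GRU pooling of word vectors into utterance vectors, and the tag-set size are assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class ContextDAAnnotator(nn.Module):
    """Predict the dialogue act of the current utterance given two preceding ones."""
    def __init__(self, emb_dim=1024, hidden=128, n_classes=42):
        # hidden size and n_classes (SwDA-style tag set) are assumptions
        super().__init__()
        self.utt_rnn = nn.GRU(emb_dim, hidden, batch_first=True)  # words -> utterance vector
        self.ctx_rnn = nn.GRU(hidden, hidden, batch_first=True)   # 3 utterances -> context vector
        self.out = nn.Linear(hidden, n_classes)

    def forward(self, utterances):
        # utterances: list of 3 tensors (prev2, prev1, current), each (1, n_words, emb_dim)
        utt_vecs = [self.utt_rnn(u)[1][-1] for u in utterances]   # each (1, hidden)
        ctx_in = torch.stack(utt_vecs, dim=1)                     # (1, 3, hidden)
        _, h = self.ctx_rnn(ctx_in)
        return torch.softmax(self.out(h[-1]), dim=-1)             # dialogue-act distribution
```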