| id (stringlengths: 40) | pid (stringlengths: 42) | input (stringlengths: 8.37k–169k) | output (stringlengths: 1–1.63k) |
|---|---|---|---|
61272b1d0338ed7708cf9ed9c63060a6a53e97a2 | 61272b1d0338ed7708cf9ed9c63060a6a53e97a2_0 | Q: What was their performance on the dataset?
Text: Introduction
In recent years, social media, forums, blogs and other forms of online communication tools have radically affected everyday life, especially how people express their opinions and comments. The extraction of useful information (such as people's opinions about a company's brand) from the huge amount of unstructured data is vital for most companies and organizations BIBREF0. Product reviews are important for business owners, who can make business decisions accordingly by automatically classifying users' opinions towards products and services. The application of sentiment analysis is not limited to product or movie reviews; it can be applied to different fields such as news, politics and sport. For example, in online political debates, sentiment analysis can be used to identify people's opinions on a certain election candidate or political party BIBREF1 BIBREF2 BIBREF3. In this context, sentiment analysis has been widely applied to different languages using traditional and advanced machine learning techniques. However, limited research has been conducted to develop models for the Persian language.
Sentiment analysis is a method to automatically process large amounts of data and classify text into positive or negative sentiments BIBREF4 BIBREF5. Sentiment analysis can be performed at two levels: the document level or the sentence level. At the document level it is used to classify the sentiment expressed in the whole document (positive or negative), whereas at the sentence level it is used to identify the sentiments expressed only in the sentence under analysis BIBREF6 BIBREF7.
In the literature, deep learning based automated feature extraction has been shown to outperform state-of-the-art manual feature engineering based classifiers such as Support Vector Machine (SVM), Naive Bayes (NB) or Multilayer Perceptron (MLP). One of the important techniques in deep learning is the autoencoder, which generally reduces the number of feature dimensions under consideration. The aim of dimensionality reduction is to obtain a set of principal variables that improves the performance of the approach. Similarly, CNNs have proven to be very effective in sentiment analysis. However, little work has been carried out to exploit deep learning based feature representation for Persian sentiment analysis BIBREF8 BIBREF9. In this paper, we present two deep learning models (deep autoencoders and CNNs) for Persian sentiment analysis. The obtained deep learning results are compared with an MLP.
The rest of the paper is organized as follows: Section 2 presents related work. Section 3 presents methodology and experimental results. Finally, section 4 concludes this paper.
Related Works
In the literature, extensive research has been carried out to build novel sentiment analysis models using both shallow and deep learning algorithms. For example, the authors in BIBREF10 proposed a novel deep learning approach for polarity detection in product reviews. The authors addressed two major limitations of stacked denoising autoencoders: the high computational cost and the lack of scalability to high-dimensional features. Their experimental results showed the effectiveness of the proposed autoencoders, achieving accuracy of up to 87%. Zhai et al. BIBREF11 proposed a five-layer autoencoder for learning task-specific representations of textual data. The autoencoder was generalised using a loss function, and a discriminative loss function was derived from label information. The experimental results showed that the model outperformed bag of words, denoising autoencoders and other traditional methods, achieving an accuracy rate of up to 85%. Sun et al. BIBREF12 proposed a novel method to extract contextual information from text using a convolutional autoencoder architecture. The experimental results showed that the proposed model outperformed traditional SVM and Naive Bayes models, reporting accuracies of 83.1%, 63.9% and 67.8%, respectively.
Su et al. BIBREF13 proposed a neural generative autoencoder for learning bilingual word embeddings. The experimental results showed the effectiveness of their approach on English-Chinese, English-German, English-French and English-Spanish (75.36% accuracy). Kim et al. BIBREF14 proposed a method to capture the non-linear structure of data using a CNN classifier. The experimental results showed the effectiveness of the method on a multi-domain dataset (movie reviews and product reviews). However, the disadvantage is that only SVM and Naive Bayes classifiers were used to evaluate the performance of the method, and deep learning classifiers were not exploited. Zhang et al. BIBREF15 proposed an approach using deep learning classifiers to detect polarity in Japanese movie reviews. The approach used a denoising autoencoder and was adapted to other domains such as product reviews. The advantage of the approach is that it does not depend on any particular language and could be applied to various languages using different datasets. AP et al. BIBREF16 proposed a CNN based model for cross-language learning of vectorial word representations that are coherent between two languages. The method was evaluated using English and German movie review datasets. The experimental results showed that the CNN (83.45% accuracy) outperformed the SVM (65.25% accuracy).
Zhou et al. BIBREF17 proposed an autoencoder architecture comprising an LSTM encoder and decoder in order to capture features in the text and reduce the dimensionality of the data. The LSTM encoder used an interactive scheme to go through the sequence of sentences and the LSTM decoder reconstructed the sentence vectors. The model was evaluated using different datasets such as book reviews, DVD reviews and music reviews, achieving accuracies of up to 81.05%, 81.06% and 79.40%, respectively. Mesnil et al. BIBREF18 proposed an ensemble classification approach to detect polarity in movie reviews. The authors combined several machine learning algorithms such as SVM, Naive Bayes and RNN to achieve better results, where autoencoders were used to reduce the dimensionality of the features. The experimental results showed that the combination of unigram, bigram and trigram features (91.87% accuracy) outperformed unigram (91.56% accuracy) and bigram (88.61% accuracy) features.
Scheible et al. BIBREF19 trained a semi-supervised recursive autoencoder to detect polarity in a movie reviews dataset consisting of 5000 positive and 5000 negative sentiments. The experimental results demonstrated that the proposed approach successfully detected polarity in the movie reviews dataset (83.13% accuracy) and outperformed a standard SVM (68.36% accuracy) model. Dai et al. BIBREF20 developed an autoencoder to detect polarity in text using a deep learning classifier. The LSTM was trained on the IMDB movie reviews dataset. The experimental results showed that their proposed approach outperformed SVM. Table 1 summarises some of these autoencoder-based approaches.
Methodology and Experimental Results
The novel dataset used in this work was collected manually and includes Persian movie reviews from 2014 to 2016. A subset of the dataset was used to train the neural networks (60% training set) and the rest of the data (40%) was used to test and validate the performance of the trained networks (testing set 30%, validation set 10%). There are two types of labels in the dataset: positive or negative. The reviews were manually annotated by three native Persian speakers aged between 30 and 50 years old.
After data collection, the corpus was pre-processed using tokenisation, normalisation and stemming techniques. Tokenisation is the process of converting sentences into single words or tokens. For example, "The movie is great" is changed to "The", "movie", "is", "great" BIBREF21. Some words contain numbers or noisy spellings; for example, "great" may be written as "gr8", or "good" as "gooood". Normalisation is used to convert such words into their normal forms BIBREF22. Stemming is the process of converting words into their root; for example, "going" is changed to "go" BIBREF23. Finally, words were converted into vectors: fastText, a library for text classification and representation, was used to map each word to a 300-dimensional vector BIBREF24 BIBREF25 BIBREF9.
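The following is a minimal sketch of this pre-processing and embedding pipeline, assuming the fasttext Python bindings and a hypothetical pre-trained 300-dimensional Persian model file (cc.fa.300.bin); the regex-based normalisation rules are illustrative only, and stemming is omitted for brevity:

```python
import re
import fasttext  # fastText Python bindings

# Illustrative normalisation rules; the paper does not list its exact rules.
NORMALISATION_RULES = [(r"^gr8$", "great"), (r"^go+d$", "good")]

def tokenise(sentence):
    """Split a sentence into word tokens (whitespace tokenisation for brevity)."""
    return sentence.lower().split()

def normalise(token):
    """Map noisy spellings (e.g. 'gr8', 'gooood') to their normal forms."""
    for pattern, replacement in NORMALISATION_RULES:
        token = re.sub(pattern, replacement, token)
    return token

# Hypothetical path to a pre-trained 300-dimensional Persian fastText model.
model = fasttext.load_model("cc.fa.300.bin")

def sentence_to_vectors(sentence):
    """Tokenise, normalise and embed a sentence as 300-dimensional word vectors."""
    tokens = [normalise(t) for t in tokenise(sentence)]
    return [model.get_word_vector(t) for t in tokens]

vectors = sentence_to_vectors("The movie is great")
print(len(vectors), vectors[0].shape)  # 4 (300,)
```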
For classification, an MLP, autoencoders and CNNs have been used. Fig. 1 depicts the modelled MLP architecture. The MLP classifier was trained for 100 iterations BIBREF26. Fig. 2 depicts the modelled autoencoder architecture. An autoencoder is a feed-forward deep neural network trained with unsupervised learning and used for dimensionality reduction. The autoencoder consists of input, output and hidden layers: it compresses the input into a latent space and then reconstructs the output from it BIBREF27 BIBREF28 BIBREF29. The exploited autoencoder model, depicted in Fig. 2, consists of one input layer, three hidden layers (1500, 512 and 1500 units) and an output layer. The convolutional neural network contains three types of layers (input, hidden and output); the hidden part consists of convolutional layers, pooling layers, fully connected layers and a normalisation layer. INLINEFORM0 denotes the hidden neuron j which, with bias INLINEFORM1, is computed as a weighted sum over the continuous visible nodes v: DISPLAYFORM0
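As an illustration, a minimal PyTorch sketch of the autoencoder described above (hidden layers of 1500, 512 and 1500 units); the input dimensionality (a single 300-dimensional review vector) and the ReLU activations are assumptions not stated in the text:

```python
import torch
import torch.nn as nn

class ReviewAutoencoder(nn.Module):
    """Autoencoder with hidden layers of size 1500, 512 and 1500."""

    def __init__(self, input_dim=300):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 1500), nn.ReLU(),
            nn.Linear(1500, 512), nn.ReLU(),          # latent representation
        )
        self.decoder = nn.Sequential(
            nn.Linear(512, 1500), nn.ReLU(),
            nn.Linear(1500, input_dim),               # reconstruct the input
        )

    def forward(self, x):
        z = self.encoder(x)                           # compressed latent code (dim 512)
        return self.decoder(z), z


model = ReviewAutoencoder()
x = torch.randn(8, 300)                               # a batch of 8 review vectors
reconstruction, latent = model(x)
loss = nn.functional.mse_loss(reconstruction, x)      # reconstruction objective
```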
The modelled CNN architecture is depicted in Fig. 3 BIBREF29 BIBREF28. For CNN modelling, each utterance was represented as a concatenation vector of its constituent words. The network has a total of 11 layers: 4 convolutional layers, 4 max pooling layers and 3 fully connected layers. The convolutional layers have filters of size 2 with 15 feature maps, and each convolutional layer is followed by a max pooling layer with window size 2. The last max pooling layer is followed by fully connected layers of size 5000, 500 and 4. A softmax activation is used for the final layer.
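A minimal PyTorch sketch of that CNN, assuming each utterance is fed as a sequence of 300-dimensional word vectors in channels-first layout and that ReLU activations are used between layers (both assumptions); the input size of the first fully connected layer is inferred lazily:

```python
import torch
import torch.nn as nn

class SentimentCNN(nn.Module):
    """4 conv layers (kernel size 2, 15 feature maps), each followed by max
    pooling with window 2, then fully connected layers of size 5000, 500 and 4
    with a final softmax, as described in the text."""

    def __init__(self, embedding_dim=300):
        super().__init__()
        blocks = []
        in_channels = embedding_dim
        for _ in range(4):
            blocks += [nn.Conv1d(in_channels, 15, kernel_size=2),
                       nn.ReLU(),
                       nn.MaxPool1d(kernel_size=2)]
            in_channels = 15
        self.features = nn.Sequential(*blocks)
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(5000), nn.ReLU(),   # input size inferred at first call
            nn.Linear(5000, 500), nn.ReLU(),
            nn.Linear(500, 4),
        )

    def forward(self, x):                     # x: (batch, 300, sequence_length)
        logits = self.classifier(self.features(x))
        return torch.softmax(logits, dim=-1)


model = SentimentCNN()
probs = model(torch.randn(8, 300, 100))       # batch of 8 utterances, 100 tokens each
```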
To evaluate the performance of the proposed approach, precision (1), recall (2), F-measure (3) and prediction accuracy (4) have been used as performance metrics. The experimental results are shown in Table 1, where it can be seen that the autoencoders outperformed the MLP, and the CNN outperformed the autoencoders with the highest achieved accuracy of 82.6%.

$$\text{Precision} = \frac{TP}{TP + FP} \quad (1) \qquad \text{Recall} = \frac{TP}{TP + FN} \quad (2)$$

$$\text{F-measure} = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}} \quad (3) \qquad \text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \quad (4)$$
where TP denotes true positives, TN true negatives, FP false positives and FN false negatives.
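For reference, a small sketch computing the four metrics directly from the confusion-matrix counts (the example counts are arbitrary):

```python
def classification_metrics(tp, tn, fp, fn):
    """Precision, recall, F-measure and accuracy from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return precision, recall, f_measure, accuracy

print(classification_metrics(tp=70, tn=60, fp=15, fn=12))
```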
Conclusion
Sentiment analysis has been used extensively for a wide range of real-world applications, ranging from product reviews and survey feedback to business intelligence and operational improvements. However, the majority of research efforts have been devoted to the English language only, even though information of great importance is also available in other languages. In this work, we focus on developing sentiment analysis models for the Persian language, specifically for Persian movie reviews. Two deep learning models (deep autoencoders and deep CNNs) are developed and compared with the state-of-the-art shallow MLP based machine learning model. Simulation results revealed that our proposed CNN model outperformed both the autoencoders and the MLP. In future, we intend to exploit more advanced deep learning models such as Long Short-Term Memory (LSTM) and LSTM-CNNs to further evaluate the performance on our developed novel Persian dataset.
Acknowledgment
Amir Hussain and Ahsan Adeel were supported by the UK Engineering and Physical Sciences Research Council (EPSRC) grant No.EP/M026981/1. | accuracy of 82.6% |
53b02095ba7625d85721692fce578654f66bbdf0 | 53b02095ba7625d85721692fce578654f66bbdf0_0 | Q: How large is the dataset?
| Unanswerable |
0cd0755ac458c3bafbc70e4268c1e37b87b9721b | 0cd0755ac458c3bafbc70e4268c1e37b87b9721b_0 | Q: Did the authors use crowdsourcing platforms?
Text:
We introduce “Talk The Walk”, the first large-scale dialogue dataset grounded in action and perception. The task involves two agents (a “guide” and a “tourist”) that communicate via natural language in order to achieve a common goal: having the tourist navigate to a given target location. The task and dataset, which are described in detail, are challenging and their full solution is an open problem that we pose to the community. We (i) focus on the task of tourist localization and develop the novel Masked Attention for Spatial Convolutions (MASC) mechanism that allows for grounding tourist utterances into the guide's map, (ii) show it yields significant improvements for both emergent and natural language communication, and (iii) use this method to establish non-trivial baselines on the full task.
Introduction
As artificial intelligence plays an ever more prominent role in everyday human lives, it becomes increasingly important to enable machines to communicate via natural language—not only with humans, but also with each other. Learning algorithms for natural language understanding, such as in machine translation and reading comprehension, have progressed at an unprecedented rate in recent years, but still rely on static, large-scale, text-only datasets that lack crucial aspects of how humans understand and produce natural language. Namely, humans develop language capabilities by being embodied in an environment which they can perceive, manipulate and move around in; and by interacting with other humans. Hence, we argue that we should incorporate all three fundamental aspects of human language acquisition—perception, action and interactive communication—and develop a task and dataset to that effect.
We introduce the Talk the Walk dataset, where the aim is for two agents, a “guide” and a “tourist”, to interact with each other via natural language in order to achieve a common goal: having the tourist navigate towards the correct location. The guide has access to a map and knows the target location, but does not know where the tourist is; the tourist has a 360-degree view of the world, but knows neither the target location on the map nor the way to it. The agents need to work together through communication in order to successfully solve the task. An example of the task is given in Figure FIGREF3 .
Grounded language learning has (re-)gained traction in the AI community, and much attention is currently devoted to virtual embodiment—the development of multi-agent communication tasks in virtual environments—which has been argued to be a viable strategy for acquiring natural language semantics BIBREF0 . Various related tasks have recently been introduced, but in each case with some limitations. Although visually grounded dialogue tasks BIBREF1 , BIBREF2 comprise perceptual grounding and multi-agent interaction, their agents are passive observers and do not act in the environment. By contrast, instruction-following tasks, such as VLN BIBREF3 , involve action and perception but lack natural language interaction with other agents. Furthermore, some of these works use simulated environments BIBREF4 and/or templated language BIBREF5 , which arguably oversimplifies real perception or natural language, respectively. See Table TABREF15 for a comparison.
Talk The Walk is the first task to bring all three aspects together: perception for the tourist observing the world, action for the tourist to navigate through the environment, and interactive dialogue for the tourist and guide to work towards their common goal. To collect grounded dialogues, we constructed a virtual 2D grid environment by manually capturing 360-views of several neighborhoods in New York City (NYC). As the main focus of our task is on interactive dialogue, we limit the difficulty of the control problem by having the tourist navigating a 2D grid via discrete actions (turning left, turning right and moving forward). Our street view environment was integrated into ParlAI BIBREF6 and used to collect a large-scale dataset on Mechanical Turk involving human perception, action and communication.
We argue that for artificial agents to solve this challenging problem, some fundamental architecture designs are missing, and our hope is that this task motivates their innovation. To that end, we focus on the task of localization and develop the novel Masked Attention for Spatial Convolutions (MASC) mechanism. To model the interaction between language and action, this architecture repeatedly conditions the spatial dimensions of a convolution on the communicated message sequence.
This work makes the following contributions: 1) We present the first large-scale dialogue dataset grounded in action and perception; 2) we introduce the MASC architecture for localization and show it yields improvements for both emergent and natural language; 3) using localization models, we establish initial baselines on the full task; 4) we show that our best model exceeds human performance under the assumption of “perfect perception” and with a learned emergent communication protocol, and sets a non-trivial baseline with natural language.
Talk The Walk
We create a perceptual environment by manually capturing several neighborhoods of New York City (NYC) with a 360 camera. Most parts of the city are grid-like and uniform, which makes it well-suited for obtaining a 2D grid. For Talk The Walk, we capture parts of Hell's Kitchen, East Village, the Financial District, Williamsburg and the Upper East Side—see Figure FIGREF66 in Appendix SECREF14 for their respective locations within NYC. For each neighborhood, we choose an approximately 5x5 grid and capture a 360 view on all four corners of each intersection, leading to a grid-size of roughly 10x10 per neighborhood.
The tourist's location is given as a tuple INLINEFORM0 , where INLINEFORM1 are the coordinates and INLINEFORM2 signifies the orientation (north, east, south or west). The tourist can take three actions: turn left, turn right and go forward. For moving forward, we add INLINEFORM3 , INLINEFORM4 , INLINEFORM5 , INLINEFORM6 to the INLINEFORM7 coordinates for the respective orientations. Upon a turning action, the orientation is updated by INLINEFORM8 where INLINEFORM9 for left and INLINEFORM10 for right. If the tourist moves outside the grid, we issue a warning that they cannot go in that direction and do not update the location. Moreover, tourists are shown different types of transitions: a short transition for actions that bring the tourist to a different corner of the same intersection; and a longer transition for actions that bring them to a new intersection.
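A minimal sketch of this state update is given below. The exact coordinate deltas are obscured by the extraction placeholders, so the conventional values north=(0, 1), east=(1, 0), south=(0, -1), west=(-1, 0) are an assumption:

```python
# Orientations indexed clockwise; the (dx, dy) deltas are an assumption
# recovered from the conventional map layout.
ORIENTATIONS = ["north", "east", "south", "west"]
FORWARD_DELTA = {"north": (0, 1), "east": (1, 0), "south": (0, -1), "west": (-1, 0)}

def step(state, action, grid_size=10):
    """Apply one tourist action to an (x, y, orientation) state."""
    x, y, o = state
    if action == "forward":
        dx, dy = FORWARD_DELTA[o]
        nx, ny = x + dx, y + dy
        if 0 <= nx < grid_size and 0 <= ny < grid_size:
            return (nx, ny, o)
        return (x, y, o)            # outside the grid: warn and keep the location
    # Turning: the orientation index moves -1 for left, +1 for right (mod 4).
    delta = -1 if action == "left" else 1
    return (x, y, ORIENTATIONS[(ORIENTATIONS.index(o) + delta) % 4])

print(step((3, 4, "north"), "forward"))   # (3, 5, 'north')
print(step((3, 4, "north"), "left"))      # (3, 4, 'west')
```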
The guide observes a map that corresponds to the tourist's environment. We exploit the fact that urban areas like NYC are full of local businesses, and overlay the map with these landmarks as localization points for our task. Specifically, we manually annotate each corner of the intersection with a set of landmarks INLINEFORM0 , each coming from one of the following categories:
Bar, Playfield, Bank, Hotel, Shop, Subway, Coffee Shop, Restaurant, Theater.
The right-side of Figure FIGREF3 illustrates how the map is presented. Note that within-intersection transitions have a smaller grid distance than transitions to new intersections. To ensure that the localization task is not too easy, we do not include street names in the overhead map and keep the landmark categories coarse. That is, the dialogue is driven by uncertainty in the tourist's current location and the properties of the target location: if the exact location and orientation of the tourist were known, it would suffice to communicate a sequence of actions.
Task
For the Talk The Walk task, we randomly choose one of the five neighborhoods, and subsample a 4x4 grid (one block with four complete intersections) from the entire grid. We specify the boundaries of the grid by the top-left and bottom-right corners INLINEFORM0 . Next, we construct the overhead map of the environment, i.e. INLINEFORM1 with INLINEFORM2 and INLINEFORM3 . We subsequently sample a start location and orientation INLINEFORM4 and a target location INLINEFORM5 at random.
The shared goal of the two agents is to navigate the tourist to the target location INLINEFORM0 , which is only known to the guide. The tourist perceives a “street view” planar projection INLINEFORM1 of the 360 image at location INLINEFORM2 and can simultaneously chat with the guide and navigate through the environment. The guide's role consists of reading the tourist description of the environment, building a “mental map” of their current position and providing instructions for navigating towards the target location. Whenever the guide believes that the tourist has reached the target location, they instruct the system to evaluate the tourist's location. The task ends when the evaluation is successful—i.e., when INLINEFORM3 —or otherwise continues until a total of three failed attempts. The additional attempts are meant to ease the task for humans, as we found that they otherwise often fail at the task but still end up close to the target location, e.g., at the wrong corner of the correct intersection.
Data Collection
We crowd-sourced the collection of the dataset on Amazon Mechanical Turk (MTurk). We use the MTurk interface of ParlAI BIBREF6 to render 360 images via WebGL and dynamically display neighborhood maps with an HTML5 canvas. Detailed task instructions, which were also given to our workers before they started their task, are shown in Appendix SECREF15 . We paired Turkers at random and let them alternate between the tourist and guide role across different HITs.
Dataset Statistics
The Talk The Walk dataset consists of over 10k successful dialogues—see Table FIGREF66 in the appendix for the dataset statistics split by neighborhood. Turkers successfully completed INLINEFORM0 of all finished tasks (we use this statistic as the human success rate). More than six hundred participants successfully completed at least one Talk The Walk HIT. Although the Visual Dialog BIBREF2 and GuessWhat BIBREF1 datasets are larger, the collected Talk The Walk dialogs are significantly longer. On average, Turkers needed more than 62 acts (i.e utterances and actions) before they successfully completed the task, whereas Visual Dialog requires 20 acts. The majority of acts comprise the tourist's actions, with on average more than 44 actions per dialogue. The guide produces roughly 9 utterances per dialogue, slightly more than the tourist's 8 utterances. Turkers use diverse discourse, with a vocabulary size of more than 10K (calculated over all successful dialogues). An example from the dataset is shown in Appendix SECREF14 . The dataset is available at https://github.com/facebookresearch/talkthewalk.
Experiments
We investigate the difficulty of the proposed task by establishing initial baselines. The final Talk The Walk task is challenging and encompasses several important sub-tasks, ranging from landmark recognition to tourist localization and natural language instruction-giving. Arguably the most important sub-task is localization: without such capabilities the guide can not tell whether the tourist reached the target location. In this work, we establish a minimal baseline for Talk The Walk by utilizing agents trained for localization. Specifically, we let trained tourist models undertake random walks, using the following protocol: at each step, the tourist communicates its observations and actions to the guide, who predicts the tourist's location. If the guide predicts that the tourist is at target, we evaluate its location. If successful, the task ends, otherwise we continue until there have been three wrong evaluations. The protocol is given as pseudo-code in Appendix SECREF12 .
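A sketch of this evaluation protocol is given below (the full pseudo-code is in Appendix SECREF12). The `env`, `tourist` and `guide` objects and their methods are illustrative stand-ins for the environment and the trained models, not the released implementation:

```python
def localization_baseline(env, tourist, guide, target, max_failures=3):
    """Random-walk navigation driven purely by the guide's localization model."""
    failures = 0
    while failures < max_failures:
        # The tourist takes a random step and communicates observations/actions.
        observations, actions = env.random_step()
        message = tourist.communicate(observations, actions)
        # The guide localizes the tourist from the message.
        predicted_location = guide.localize(message)
        if predicted_location == target:
            # The guide believes the tourist is at the target: evaluate.
            if env.tourist_location() == target:
                return True             # task solved
            failures += 1               # wrong evaluation
    return False
```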
Tourist Localization
The designed navigation protocol relies on a trained localization model that predicts the tourist's location from a communicated message. Before we formalize this localization sub-task in Section UID21 , we further introduce two simplifying assumptions—perfect perception and orientation-agnosticism—so as to overcome some of the difficulties we encountered in preliminary experiments.
Perfect Perception: Early experiments revealed that perceptual grounding of landmarks is difficult: we set up a landmark classification problem, on which models with extracted CNN BIBREF7 or text recognition features BIBREF8 barely outperform a random baseline—see Appendix SECREF13 for full details. This finding implies that localization models from image input are limited by their ability to recognize landmarks, and, as a result, would not generalize to unseen environments. To ensure that perception is not the limiting factor when investigating the landmark-grounding and action-grounding capabilities of localization models, we assume “perfect perception”: in lieu of the 360 image view, the tourist is given the landmarks at its current location. More formally, each state observation INLINEFORM0 now equals the set of landmarks at the INLINEFORM1 -location, i.e. INLINEFORM2. If the INLINEFORM3 -location does not have any visible landmarks, we return a single “empty corner” symbol. We stress that our findings—including a novel architecture for grounding actions into an overhead map, see Section UID28—should carry over to settings without the perfect perception assumption.
Orientation-agnosticism: We opt to ignore the tourist's orientation, which simplifies the set of actions to [Left, Right, Up, Down], corresponding to adding [(-1, 0), (1, 0), (0, 1), (0, -1)] to the current INLINEFORM0 coordinates, respectively. Note that actions are now coupled to an orientation on the map—e.g. up is equal to going north—and this implicitly assumes that the tourist has access to a compass. This also affects perception, since the tourist now has access to views from all orientations: in conjunction with “perfect perception”, this implies that only landmarks at the current corner are given, whereas landmarks from different corners (e.g. across the street) are not visible.
Even with these simplifications, the localization-based baseline comes with its own set of challenges. As we show in Section SECREF34 , the task requires communication about a short (random) path—i.e., not only a sequence of observations but also actions—in order to achieve high localization accuracy. This means that the guide needs to decode observations from multiple time steps, as well as understand their 2D spatial arrangement as communicated via the sequence of actions. Thus, in order to get to a good understanding of the task, we thoroughly examine whether the agents can learn a communication protocol that simultaneously grounds observations and actions into the guide's map. In doing so, we thoroughly study the role of the communication channel in the localization task, by investigating increasingly constrained forms of communication: from differentiable continuous vectors to emergent discrete symbols to the full complexity of natural language.
The full navigation baseline hinges on a localization model from random trajectories. While we can sample random actions in the emergent communication setup, this is not possible for the natural language setup because the messages are coupled to the trajectories of the human annotators. This leads to slightly different problem setups, as described below.
Emergent language: A tourist, starting from a random location, takes INLINEFORM0 random actions INLINEFORM1 to reach target location INLINEFORM2. Every location in the environment has a corresponding set of landmarks INLINEFORM3 for each of the INLINEFORM4 coordinates. As the tourist navigates, the agent perceives INLINEFORM5 state-observations INLINEFORM6 where each observation INLINEFORM7 consists of a set of INLINEFORM8 landmark symbols INLINEFORM9. Given the observations INLINEFORM10 and actions INLINEFORM11, the tourist generates a message INLINEFORM12 which is communicated to the other agent. The objective of the guide is to predict the location INLINEFORM13 from the tourist's message INLINEFORM14.
Natural language: In contrast to our emergent communication experiments, we do not take random actions but instead extract actions, observations, and messages from the dataset. Specifically, we consider each tourist utterance (i.e. at any point in the dialogue), obtain the current tourist location as target location INLINEFORM0, the utterance itself as message INLINEFORM1, and the sequence of observations and actions that took place between the current and previous tourist utterance as INLINEFORM2 and INLINEFORM3, respectively. Similar to the emergent language setting, the guide's objective is to predict the target location INLINEFORM4 from the tourist message INLINEFORM5. We conduct experiments with INLINEFORM6 taken from the dataset and with INLINEFORM7 generated from the extracted observations INLINEFORM8 and actions INLINEFORM9.
Model
This section outlines the tourist and guide architectures. We first describe how the tourist produces messages for the various communication channels across which the messages are sent. We subsequently describe how these messages are processed by the guide, and introduce the novel Masked Attention for Spatial Convolutions (MASC) mechanism that allows for grounding into the 2D overhead map in order to predict the tourist's location.
The Tourist
For each of the communication channels, we outline the procedure for generating a message INLINEFORM0 . Given a set of state observations INLINEFORM1 , we represent each observation by summing the INLINEFORM2 -dimensional embeddings of the observed landmarks, i.e. for INLINEFORM3 , INLINEFORM4 , where INLINEFORM5 is the landmark embedding lookup table. In addition, we embed action INLINEFORM6 into a INLINEFORM7 -dimensional embedding INLINEFORM8 via a look-up table INLINEFORM9 . We experiment with three types of communication channel.
Continuous vectors: The tourist has access to observations of several time steps, whose order is important for accurate localization. Because summing embeddings is order-invariant, we introduce a sum over positionally-gated embeddings, which, conditioned on time step INLINEFORM0, pushes embedding information into the appropriate dimensions. More specifically, we generate an observation message INLINEFORM1, where INLINEFORM2 is a learned gating vector for time step INLINEFORM3. In a similar fashion, we produce action message INLINEFORM4 and send the concatenated vectors INLINEFORM5 as message to the guide. We can interpret continuous vector communication as a single, monolithic model because its architecture is end-to-end differentiable, enabling gradient-based optimization for training.
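A minimal sketch of the positionally-gated sum; the sigmoid squashing of the gates, the maximum number of steps and the embedding size are assumptions where the extraction placeholders hide the exact form:

```python
import torch
import torch.nn as nn

class PositionallyGatedSum(nn.Module):
    """Sum of per-time-step embeddings, gated by a learned vector per position,
    so that the order of observations is not lost when summing."""

    def __init__(self, max_steps, dim):
        super().__init__()
        # One learnable gate vector per time step.
        self.gates = nn.Parameter(torch.randn(max_steps, dim))

    def forward(self, embeddings):            # embeddings: (T, dim)
        T = embeddings.size(0)
        return (torch.sigmoid(self.gates[:T]) * embeddings).sum(dim=0)


gate_sum = PositionallyGatedSum(max_steps=4, dim=256)
obs_message = gate_sum(torch.randn(3, 256))   # message for T=3 observation embeddings
```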
Discrete symbols: Like the continuous vector communication model, with discrete communication the tourist also uses separate channels for observations and actions, as well as a sum over positionally gated embeddings to generate observation embedding INLINEFORM0. We pass this embedding through a sigmoid and generate a message INLINEFORM1 by sampling from the resulting Bernoulli distributions:
INLINEFORM0
The action message INLINEFORM0 is produced in the same way, and we obtain the final tourist message INLINEFORM1 through concatenating the messages.
The communication channel's sampling operation renders the model non-differentiable, so we use policy gradients BIBREF9 , BIBREF10 to train the parameters INLINEFORM0 of the tourist model. That is, we estimate the gradient by INLINEFORM1
where the reward function INLINEFORM0 is the negative guide's loss (see Section SECREF25 ) and INLINEFORM1 a state-value baseline to reduce variance. We use a linear transformation over the concatenated embeddings as baseline prediction, i.e. INLINEFORM2 , and train it with a mean squared error loss.
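A sketch of one such training step for the discrete channel is given below. The `reward_fn` (returning one reward per example, standing in for the negative guide loss) and `baseline_layer` (a linear state-value head) are illustrative stand-ins, not the released implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def discrete_message_step(obs_embedding, reward_fn, baseline_layer):
    """REINFORCE-style step: sample a binary message from Bernoulli distributions
    and combine the policy-gradient surrogate loss with the baseline's MSE loss."""
    probs = torch.sigmoid(obs_embedding)                   # Bernoulli parameters
    dist = torch.distributions.Bernoulli(probs=probs)
    message = dist.sample()                                # non-differentiable
    reward = reward_fn(message)                            # shape: (batch,)
    baseline = baseline_layer(obs_embedding).squeeze(-1)   # state-value estimate
    advantage = reward - baseline.detach()
    policy_loss = -(dist.log_prob(message).sum(-1) * advantage).mean()
    baseline_loss = F.mse_loss(baseline, reward.detach())
    return policy_loss + baseline_loss, message


# Usage with dummy components (illustration only).
baseline_head = nn.Linear(256, 1)
loss, msg = discrete_message_step(
    torch.randn(4, 256),
    reward_fn=lambda m: torch.randn(4),                    # stand-in for -guide loss
    baseline_layer=baseline_head,
)
```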
Natural Language: Because observations and actions are of variable length, we use an LSTM encoder over the sequence of observation embeddings INLINEFORM0, and extract its last hidden state INLINEFORM1. We use a separate LSTM encoder for action embeddings INLINEFORM2, and concatenate both INLINEFORM3 and INLINEFORM4 to the input of the LSTM decoder at each time step: DISPLAYFORM0
where INLINEFORM0 is a look-up table taking input tokens INLINEFORM1. We train with teacher-forcing, i.e. we optimize the cross-entropy loss: INLINEFORM2. At test time, we explore the following decoding strategies: greedy, sampling and beam search. We also fine-tune a trained tourist model (starting from a pre-trained model) with policy gradients in order to minimize the guide's prediction loss.
The Guide
Given a tourist message INLINEFORM0 describing their observations and actions, the objective of the guide is to predict the tourist's location on the map. First, we outline the procedure for extracting observation embedding INLINEFORM1 and action embeddings INLINEFORM2 from the message INLINEFORM3 for each of the types of communication. Next, we discuss the MASC mechanism that takes the observations and actions in order to ground them on the guide's map in order to predict the tourist's location.
Continuous: For the continuous communication model, we assign the observation message to the observation embedding, i.e. INLINEFORM0. To extract the action embedding for time step INLINEFORM1, we apply a linear layer to the action message, i.e. INLINEFORM2.
Discrete: For discrete communication, we obtain the observation embedding INLINEFORM0 by applying a linear layer to the observation message, i.e. INLINEFORM1. Similar to the continuous communication model, we use a linear layer over action message INLINEFORM2 to obtain action embedding INLINEFORM3 for time step INLINEFORM4.
Natural Language: The message INLINEFORM0 contains information about observations and actions, so we use a recurrent neural network with an attention mechanism to extract the relevant observation and action embeddings. Specifically, we encode the message INLINEFORM1, consisting of INLINEFORM2 tokens INLINEFORM3 taken from vocabulary INLINEFORM4, with a bidirectional LSTM: DISPLAYFORM0
where INLINEFORM0 is the word embedding look-up table. We obtain observation embedding INLINEFORM1 through an attention mechanism over the hidden states INLINEFORM2 : DISPLAYFORM0
where INLINEFORM0 is a learned control embedding, which is updated through a linear transformation of the previous control and observation embedding: INLINEFORM1. We use the same mechanism to extract the action embedding INLINEFORM2 from the hidden states. For the observation embedding, we obtain the final representation by summing positionally gated embeddings, i.e., INLINEFORM3.
We represent the guide's map as INLINEFORM0 , where in this case INLINEFORM1 , where each INLINEFORM2 -dimensional INLINEFORM3 location embedding INLINEFORM4 is computed as the sum of the guide's landmark embeddings for that location.
Motivation: While the guide's map representation contains only local landmark information, the tourist communicates a trajectory of the map (i.e. actions and observations from multiple locations), implying that directly comparing the tourist's message with the individual landmark embeddings is probably suboptimal. Instead, we want to aggregate landmark information from surrounding locations by imputing trajectories over the map to predict locations. We propose a mechanism for translating landmark embeddings according to state transitions (left, right, up, down), which can be expressed as a 2D convolution over the map embeddings. For simplicity, let us assume that the map embedding INLINEFORM0 is 1-dimensional, then a left action can be realized through application of the following INLINEFORM1 kernel: INLINEFORM2 which effectively shifts all values of INLINEFORM3 one position to the left. We propose to learn such state-transitions from the tourist message through a differentiable attention-mask over the spatial dimensions of a 3x3 convolution.
MASC: We linearly project each predicted action embedding INLINEFORM0 to a 9-dimensional vector INLINEFORM1, normalize it by a softmax and subsequently reshape the vector into a 3x3 mask INLINEFORM2: DISPLAYFORM0
We learn a 3x3 convolutional kernel INLINEFORM0 , with INLINEFORM1 features, and apply the mask INLINEFORM2 to the spatial dimensions of the convolution by first broadcasting its values along the feature dimensions, i.e. INLINEFORM3 , and subsequently taking the Hadamard product: INLINEFORM4 . For each action step INLINEFORM5 , we then apply a 2D convolution with masked weight INLINEFORM6 to obtain a new map embedding INLINEFORM7 , where we zero-pad the input to maintain identical spatial dimensions.
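A minimal PyTorch sketch of this mechanism; parameterising the kernel as a depthwise (per-feature) 3x3 convolution is an assumption about how the per-feature kernel is realised, and the embedding size is arbitrary:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MASC(nn.Module):
    """Masked Attention for Spatial Convolutions: a 3x3 convolution whose
    spatial footprint is gated by a mask predicted from the action embedding,
    so that a communicated action (e.g. 'left') becomes a learned map shift."""

    def __init__(self, emb_dim):
        super().__init__()
        self.mask_proj = nn.Linear(emb_dim, 9)            # action -> 3x3 mask
        # One 3x3 kernel per feature dimension (depthwise; an assumption).
        self.kernel = nn.Parameter(torch.randn(emb_dim, 1, 3, 3) * 0.1)
        self.emb_dim = emb_dim

    def forward(self, map_emb, action_emb):
        # map_emb: (1, emb_dim, H, W); action_emb: (emb_dim,)
        mask = torch.softmax(self.mask_proj(action_emb), dim=-1).view(1, 1, 3, 3)
        masked_kernel = self.kernel * mask                # broadcast over features
        # Zero padding keeps the spatial dimensions of the map unchanged.
        return F.conv2d(map_emb, masked_kernel, padding=1, groups=self.emb_dim)


masc = MASC(emb_dim=64)
map_emb = torch.randn(1, 64, 4, 4)                        # 4x4 map of embeddings
shifted = masc(map_emb, torch.randn(64))                  # (1, 64, 4, 4)
```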
Prediction model: We repeat the MASC operation INLINEFORM0 times (i.e. once for each action), and then aggregate the map embeddings by a sum over positionally-gated embeddings: INLINEFORM1. We score locations by taking the dot-product of the observation embedding INLINEFORM2, which contains information about the sequence of landmarks observed by the tourist, and the map. We compute a distribution over the locations of the map INLINEFORM3 by taking a softmax over the computed scores: DISPLAYFORM0
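Continuing the MASC sketch above, a sketch of the guide's location prediction; the gate tensors and their shape are illustrative stand-ins for the learned positional gates:

```python
import torch

def predict_location(map_emb, obs_message, action_embs, masc, gates):
    """Apply MASC once per communicated action, aggregate the map embeddings
    with positional gates, score every location against the observation
    message and return a softmax distribution over locations."""
    maps = [map_emb]                                  # map_emb: (1, C, H, W)
    for action_emb in action_embs:
        maps.append(masc(maps[-1], action_emb))       # one MASC step per action
    # Positionally-gated aggregation over the map embeddings.
    aggregated = sum(torch.sigmoid(g) * m for g, m in zip(gates, maps))
    # Dot product between the observation message (C,) and each location.
    scores = torch.einsum("bchw,c->bhw", aggregated, obs_message)
    return torch.softmax(scores.flatten(1), dim=-1)   # distribution over H*W locations


location_probs = predict_location(
    map_emb=torch.randn(1, 64, 4, 4),
    obs_message=torch.randn(64),
    action_embs=[torch.randn(64), torch.randn(64)],
    masc=MASC(emb_dim=64),                            # module from the sketch above
    gates=[torch.randn(64, 1, 1) for _ in range(3)],  # one gate per map embedding
)
```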
Predicting T: While emergent communication models use a fixed-length trajectory INLINEFORM0, natural language messages may differ in the number of communicated observations and actions. Hence, we predict INLINEFORM1 from the communicated message. Specifically, we use a softmax regression layer over the last hidden state INLINEFORM2 of the RNN, and subsequently sample INLINEFORM3 from the resulting multinomial distribution: DISPLAYFORM0
We jointly train the INLINEFORM0 -prediction model via REINFORCE, with the guide's loss as reward function and a mean-reward baseline.
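A small sketch of this step predictor; the maximum number of steps is an assumption, and the sampled log-probability is what the REINFORCE update described above would use:

```python
import torch
import torch.nn as nn

class StepPredictor(nn.Module):
    """Softmax regression over the RNN's last hidden state that samples the
    number of communicated actions T from the resulting multinomial."""

    def __init__(self, hidden_dim, max_steps=4):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, max_steps + 1)   # T in {0, ..., max_steps}

    def forward(self, last_hidden):                        # (batch, hidden_dim)
        dist = torch.distributions.Categorical(logits=self.proj(last_hidden))
        T = dist.sample()                                  # trained with REINFORCE
        return T, dist.log_prob(T)


predictor = StepPredictor(hidden_dim=256)
T, log_p = predictor(torch.randn(2, 256))                  # e.g. tensor([1, 3])
```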
Comparisons
To better analyze the performance of the models incorporating MASC, we compare against a no-MASC baseline in our experiments, as well as a prediction upper bound.
No MASC: We compare the proposed MASC model with a model that does not include this mechanism. Whereas MASC predicts a convolution mask from the tourist message, the “No MASC” model uses INLINEFORM0, the ordinary convolutional kernel, to convolve the map embedding INLINEFORM1 to obtain INLINEFORM2. We also share the weights of this convolution at each time step.
Prediction upper-bound: Because we have access to the class-conditional likelihood INLINEFORM0, we are able to compute the Bayes error rate (or irreducible error). No model (no matter how expressive) with any amount of data can ever obtain better localization accuracy, as there are multiple locations consistent with the observations and actions.
Results and Discussion
In this section, we describe the findings of various experiments. First, we analyze how much information needs to be communicated for accurate localization in the Talk The Walk environment, and find that a short random path (including actions) is necessary. Next, for emergent language, we show that the MASC architecture can achieve very high localization accuracy, significantly outperforming the baseline that does not include this mechanism. We then turn our attention to the natural language experiments, and find that localization from human utterances is much harder, reaching an accuracy level that is below communicating a single landmark observation. We show that generated utterances from a conditional language model leads to significantly better localization performance, by successfully grounding the utterance on a single landmark observation (but not yet on multiple observations and actions). Finally, we show performance of the localization baseline on the full task, which can be used for future comparisons to this work.
Analysis of Localization Task
Task is not too easy: The upper bound on localization performance in Table TABREF32 suggests that communicating a single landmark observation is not sufficient for accurate localization of the tourist (INLINEFORM0 35% accuracy). This is an important result for the full navigation task because the need for two-way communication disappears if localization is too easy; if the guide knows the exact location of the tourist it suffices to communicate a list of instructions, which is then executed by the tourist. The uncertainty in the tourist's location is what drives the dialogue between the two agents.
Importance of actions: We observe that the upper bound for communicating only observations plateaus around 57% (even for INLINEFORM0 actions), whereas it exceeds 90% when we also take actions into account. This implies that, at least for random walks, it is essential to communicate a trajectory, including both observations and actions, in order to achieve high localization accuracy.
Emergent Language Localization
We first report the results for tourist localization with emergent language in Table TABREF32 .
MASC improves performance: The MASC architecture significantly improves performance compared to models that do not include this mechanism. For instance, for INLINEFORM0 action, MASC already achieves 56.09% on the test set and this further increases to 69.85% for INLINEFORM1. On the other hand, no-MASC models hit a plateau at 43%. In Appendix SECREF11, we analyze learned MASC values, and show that communicated actions are often mapped to corresponding state-transitions.
Continuous vs discrete: We observe similar performance for continuous and discrete emergent communication models, implying that a discrete communication channel is not a limiting factor for localization performance.
Natural Language Localization
We report the results of tourist localization with natural language in Table TABREF36 . We compare accuracy of the guide model (with MASC) trained on utterances from (i) humans, (ii) a supervised model with various decoding strategies, and (iii) a policy gradient model optimized with respect to the loss of a frozen, pre-trained guide model on human utterances.
Human utterances: Compared to emergent language, localization from human utterances is much harder, achieving only INLINEFORM0 on the test set. Here, we report localization from a single utterance, but in Appendix SECREF45 we show that including up to five dialogue utterances only improves performance to INLINEFORM1. We also show that MASC outperforms no-MASC models for natural language communication.
Generated utterances: We also investigate generated tourist utterances from conditional language models. Interestingly, we observe that the supervised model (with greedy and beam-search decoding) as well as the policy gradient model leads to an improvement of more than 10 accuracy points over the human utterances. However, their level of accuracy is slightly below the baseline of communicating a single observation, indicating that these models only learn to ground utterances in a single landmark observation.
Better grounding of generated utterances: We analyze natural language samples in Table TABREF38, and confirm that, unlike human utterances, the generated utterances are talking about the observed landmarks. This observation explains why the generated utterances obtain higher localization accuracy. The current language models are most successful when conditioned on a single landmark observation; we show in Appendix UID43 that performance quickly deteriorates when the model is conditioned on more observations, suggesting that it cannot produce natural language utterances about multiple time steps.
Localization-based Baseline
Table TABREF36 shows results for the best localization models on the full task, evaluated via the random walk protocol defined in Algorithm SECREF12 .
Comparison with human annotators: Interestingly, our best localization model (continuous communication, with MASC, and INLINEFORM0) achieves 88.33% on the test set and thus exceeds human performance of 76.74% on the full task. While emergent models appear to be stronger localizers, humans might cope with their localization uncertainty through other mechanisms (e.g. better guidance, a bias towards taking particular paths, etc.). The simplifying assumption of perfect perception also helps.
Number of actions: Unsurprisingly, humans take fewer steps (roughly 15) than our best random walk model (roughly 34). Our human annotators likely used some form of guidance to navigate faster to the target.
Conclusion
We introduced the Talk The Walk task and dataset, which consists of crowd-sourced dialogues in which two human annotators collaborate to navigate to target locations in the virtual streets of NYC. For the important localization sub-task, we proposed MASC—a novel grounding mechanism to learn state-transition from the tourist's message—and showed that it improves localization performance for emergent and natural language. We use the localization model to provide baseline numbers on the Talk The Walk task, in order to facilitate future research.
Related Work
The Talk the Walk task and dataset facilitate future research on various important subfields of artificial intelligence, including grounded language learning, goal-oriented dialogue research and situated navigation. Here, we describe related previous work in these areas.
Related tasks. There has been a long line of work involving related tasks. Early work on task-oriented dialogue dates back to the early 90s with the introduction of the Map Task BIBREF11 and Maze Game BIBREF25 corpora. Recent efforts have led to larger-scale goal-oriented dialogue datasets, for instance to aid research on visually-grounded dialogue BIBREF2 , BIBREF1 , knowledge-base-grounded discourse BIBREF29 or negotiation tasks BIBREF36 . At the same time, there has been a big push to develop environments for embodied AI, many of which involve agents following natural language instructions with respect to an environment BIBREF13 , BIBREF50 , BIBREF5 , BIBREF39 , BIBREF19 , BIBREF18 , following up on early work in this area BIBREF38 , BIBREF20 . An early example of navigation using neural networks is BIBREF28 , who propose an online learning approach for robot navigation. Recently, there has been increased interest in using end-to-end trainable neural networks for learning to navigate indoor scenes BIBREF27 , BIBREF26 or large cities BIBREF17 , BIBREF40 , but, unlike our work, without multi-agent communication. The task of localization (without multi-agent communication) has also recently been studied BIBREF18 , BIBREF48 .
Grounded language learning. Grounded language learning is motivated by the observation that humans learn language embodied (grounded) in sensorimotor experience of the physical world BIBREF15 , BIBREF45 . On the one hand, work in multi-modal semantics has shown that grounding can lead to practical improvements on various natural language understanding tasks BIBREF14 , BIBREF31 . In robotics, researchers dissatisfied with purely symbolic accounts of meaning attempted to build robotic systems with the aim of grounding meaning in physical experience of the world BIBREF44 , BIBREF46 . Recently, grounding has also been applied to the learning of sentence representations BIBREF32 , image captioning BIBREF37 , BIBREF49 , visual question answering BIBREF12 , BIBREF22 , visual reasoning BIBREF30 , BIBREF42 , and grounded machine translation BIBREF43 , BIBREF23 . Grounding also plays a crucial role in the emergent research of multi-agent communication, where agents communicate (in natural language or otherwise) in order to solve a task with respect to their shared environment BIBREF35 , BIBREF21 , BIBREF41 , BIBREF24 , BIBREF36 , BIBREF47 , BIBREF34 .
Implementation Details
For the emergent communication models, we use an embedding size INLINEFORM0 . The natural language experiments use 128-dimensional word embeddings and a bidirectional RNN with 256 units. In all experiments, we train the guide with a cross entropy loss using the ADAM optimizer with default hyper-parameters BIBREF33 . We perform early stopping on the validation accuracy, and report the corresponding train, valid and test accuracy. We optimize the localization models with continuous, discrete and natural language communication channels for 200, 200, and 25 epochs, respectively. To facilitate further research on Talk The Walk, we make our code base for reproducing experiments publicly available at https://github.com/facebookresearch/talkthewalk.
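As a rough illustration of this training setup, the following PyTorch sketch wires together the cross-entropy objective, the ADAM optimizer with default hyper-parameters, and early stopping on validation accuracy; the guide model, data loaders and tensor shapes are placeholders rather than the released implementation.

```python
# Minimal sketch of the guide training loop described above (PyTorch).
# `guide` maps a communicated message to logits over map locations.
import copy
import torch
import torch.nn as nn

def train_guide(guide, train_loader, valid_loader, epochs=200):
    optimizer = torch.optim.Adam(guide.parameters())   # default hyper-parameters
    criterion = nn.CrossEntropyLoss()                  # loss over map locations
    best_acc, best_state = 0.0, copy.deepcopy(guide.state_dict())

    for epoch in range(epochs):
        guide.train()
        for message, target_loc in train_loader:
            optimizer.zero_grad()
            logits = guide(message)                    # (batch, num_locations)
            loss = criterion(logits, target_loc)
            loss.backward()
            optimizer.step()

        # Early stopping on validation accuracy.
        guide.eval()
        correct = total = 0
        with torch.no_grad():
            for message, target_loc in valid_loader:
                pred = guide(message).argmax(dim=-1)
                correct += (pred == target_loc).sum().item()
                total += target_loc.numel()
        acc = correct / max(total, 1)
        if acc > best_acc:
            best_acc, best_state = acc, copy.deepcopy(guide.state_dict())

    guide.load_state_dict(best_state)
    return guide, best_acc
```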
Additional Natural Language Experiments
First, we investigate the sensitivity of tourist generation models to the trajectory length, finding that the model conditioned on a single observation (i.e. INLINEFORM0 ) achieves best performance. In the next subsection, we further analyze localization models from human utterances by investigating MASC and no-MASC models with increasing dialogue context.
Tourist Generation Models
After training the supervised tourist model (conditioned on observations and action from human expert trajectories), there are two ways to train an accompanying guide model. We can optimize a location prediction model on either (i) extracted human trajectories (as in the localization setup from human utterances) or (ii) on all random paths of length INLINEFORM0 (as in the full task evaluation). Here, we investigate the impact of (1) using either human or random trajectories for training the guide model, and (2) the effect of varying the path length INLINEFORM1 during the full-task evaluation. For random trajectories, guide training uses the same path length INLINEFORM2 as is used during evaluation. We use a pre-trained tourist model with greedy decoding for generating the tourist utterances. Table TABREF40 summarizes the results.
Human vs random trajectories. We only observe small improvements for training on random trajectories. Human trajectories are thus diverse enough to generalize to random trajectories.
Effect of path length. There is a strong negative correlation between task success and the conditioned trajectory length. We observe that the full task performance quickly deteriorates for both human and random trajectories. This suggests that the tourist generation model cannot produce natural language utterances that describe multiple observations and actions. Although it is possible that the guide model cannot process such utterances, this is not very likely because the MASC architecture handles such messages successfully for emergent communication.
We report localization performance of tourist utterances generated by beam search decoding of varying beam size in Table TABREF40 . We find that performance decreases from 29.05% to 20.87% accuracy on the test set when we increase the beam-size from one to eight.
Localization from Human Utterances
We conduct an ablation study for MASC on natural language with varying dialogue context. Specifically, we compare localization accuracy of MASC and no-MASC models trained on the last [1, 3, 5] utterances of the dialogue (including guide utterances). We report these results in Table TABREF41 . In all cases, MASC outperforms the no-MASC models by several accuracy points. We also observe that mean predicted INLINEFORM0 (over the test set) increases from 1 to 2 when more dialogue context is included.
Visualizing MASC predictions
Figure FIGREF46 shows the MASC values for a learned model with emergent discrete communication and INLINEFORM0 actions. Specifically, we look at the predicted MASC values for different action sequences taken by the tourist. We observe that the first action is always mapped to the correct state-transition, but that the second and third MASC values do not always correspond to the right state-transitions.
Evaluation on Full Setup
We provide pseudo-code for evaluation of localization models on the full task in Algorithm SECREF12 , as well as results for all emergent communication models in Table TABREF55 .
Algorithm SECREF12: Performance evaluation of the location prediction model on the full Talk The Walk setup.
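The following Python sketch illustrates this random-walk evaluation protocol as described in the Experiments section: the tourist keeps taking random actions and communicating its recent observations and actions, the guide predicts a location after every message, and the episode ends after a successful evaluation or three wrong ones. The `env`, `tourist` and `guide` objects are assumed interfaces, not the released API.

```python
# Sketch of the random-walk evaluation protocol for the full task.
import random

def evaluate_full_task(env, tourist, guide, target, T, max_failures=3, max_steps=200):
    failures, steps = 0, 0
    observations, actions = [env.observe()], []
    while failures < max_failures and steps < max_steps:
        # The tourist communicates its last T observations and actions to the guide.
        message = tourist.communicate(observations[-(T + 1):], actions[-T:] if T else [])
        predicted = guide.localize(message)
        if predicted == target:
            if env.location() == target:      # evaluate the tourist's true location
                return True                   # task solved
            failures += 1                     # wrong evaluation
        # Otherwise take a new random action and continue the walk.
        action = random.choice(env.valid_actions())
        env.step(action)
        actions.append(action)
        observations.append(env.observe())
        steps += 1
    return False
```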
Landmark Classification
While the guide has access to the landmark labels, the tourist needs to recognize these landmarks from raw perceptual information. In this section, we study landmark classification as a supervised learning problem to investigate the difficulty of perceptual grounding in Talk The Walk.
The Talk The Walk dataset contains a total of 307 different landmarks divided among nine classes, see Figure FIGREF62 for how they are distributed. The class distribution is fairly imbalanced, with shops and restaurants as the most frequent landmarks and relatively few play fields and theaters. We treat landmark recognition as a multi-label classification problem as there can be multiple landmarks on a corner.
For the task of landmark classification, we extract the relevant views of the 360 image from which a landmark is visible. Because landmarks are labeled to be on a specific corner of an intersection, we assume that they are visible from one of the orientations facing away from the intersection. For example, for a landmark on the northwest corner of an intersection, we extract views from both the north and west direction. The orientation-specific views are obtained by a planar projection of the full 360-image with a small field of view (60 degrees) to limit distortions. To cover the full field of view, we extract two images per orientation, with their horizontal focus point 30 degrees apart. Hence, we obtain eight images per 360 image with corresponding orientation INLINEFORM0 .
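To make the extraction geometry concrete, the sketch below computes which orientations and horizontal headings are used for a landmark on a given corner; the heading offsets of plus/minus 15 degrees (so that the two focus points are 30 degrees apart) and the compass-to-degree mapping are assumptions, and the actual planar projection of the 360 image is omitted.

```python
# Sketch of the per-landmark view headings (degrees), following the description above.
ORIENTATION_HEADINGS = {"N": 0, "E": 90, "S": 180, "W": 270}

# A landmark on a given corner is assumed visible from the two orientations
# facing away from the intersection, e.g. "NW" -> north and west views.
CORNER_TO_ORIENTATIONS = {"NE": ("N", "E"), "NW": ("N", "W"),
                          "SE": ("S", "E"), "SW": ("S", "W")}

def landmark_view_headings(corner, offset=15):
    """Return the four (orientation, heading) crops used for a landmark on `corner`:
    two crops per relevant orientation, with focus points 30 degrees apart."""
    headings = []
    for orientation in CORNER_TO_ORIENTATIONS[corner]:
        center = ORIENTATION_HEADINGS[orientation]
        headings.append((orientation, (center - offset) % 360))
        headings.append((orientation, (center + offset) % 360))
    return headings

# Example: landmark_view_headings("NW") -> [("N", 345), ("N", 15), ("W", 255), ("W", 285)]
```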
We run two pre-trained feature extractors over the extracted images: a ResNet BIBREF7 for visual features and a text recognition model BIBREF8 for textual features.
For the text recognition model, we use a learned look-up table INLINEFORM0 to embed the extracted text features INLINEFORM1 , and fuse all embeddings of four images through a bag of embeddings, i.e., INLINEFORM2 . We use a linear layer followed by a sigmoid to predict the probability for each class, i.e. INLINEFORM3 . We also experiment with replacing the look-up embeddings with pre-trained FastText embeddings BIBREF16 . For the ResNet model, we use a bag of embeddings over the four ResNet features, i.e. INLINEFORM4 , before we pass it through a linear layer to predict the class probabilities: INLINEFORM5 . We also conduct experiments where we first apply PCA to the extracted ResNet and FastText features before we feed them to the model.
To account for class imbalance, we train all described models with a binary cross entropy loss weighted by the inverted class frequency. We create a 80-20 class-conditional split of the dataset into a training and validation set. We train for 100 epochs and perform early stopping on the validation loss.
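A minimal sketch of the ResNet-based variant of this classifier is given below: a bag of embeddings over the four view features, a linear layer producing per-class logits, and a binary cross-entropy loss weighted by inverted class frequency (realized here via `pos_weight`, which is one way to implement the weighting). The feature dimensionality and other details are assumptions.

```python
# Sketch of the ResNet-feature landmark classifier with class-frequency weighting.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LandmarkClassifier(nn.Module):
    def __init__(self, feat_dim=2048, num_classes=9):
        super().__init__()
        self.out = nn.Linear(feat_dim, num_classes)

    def forward(self, view_feats):           # (batch, 4, feat_dim) ResNet features
        bag = view_feats.sum(dim=1)          # bag of embeddings over the four views
        return self.out(bag)                 # logits; sigmoid is folded into the loss

def weighted_bce_loss(logits, labels, class_freq):
    # labels: (batch, num_classes) multi-label targets;
    # class_freq: (num_classes,) empirical class frequencies.
    pos_weight = (1.0 / class_freq).to(logits.device)
    return F.binary_cross_entropy_with_logits(logits, labels.float(), pos_weight=pos_weight)
```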
The F1 scores for the described methods are reported in Table TABREF65 . We compare to an “all positive” baseline that always predicts that the landmark class is visible and observe that all presented models struggle to outperform this baseline. Although 256-dimensional ResNet features achieve slightly better precision on the validation set, they result in much worse recall and a lower F1 score. Our results indicate that perceptual grounding is a difficult task, which easily merits a paper in its own right, and so we leave further improvements (e.g. better text recognizers) for future work.
Dataset Details
Dataset split. We split the full dataset by assigning entire 4x4 grids (independent of the target location) to the train, valid or test set. Specifically, we design the split such that, in the valid set, at least one intersection (out of four) is not part of the train set. For the test set, all four intersections are novel. See our source code, available at URL ANONYMIZED, for more details on how this split is realized.
Example
Tourist: ACTION:TURNRIGHT ACTION:TURNRIGHT
Guide: Hello, what are you near?
Tourist: ACTION:TURNLEFT ACTION:TURNLEFT ACTION:TURNLEFT
Tourist: Hello, in front of me is a Brooks Brothers
Tourist: ACTION:TURNLEFT ACTION:FORWARD ACTION:TURNLEFT ACTION:TURNLEFT
Guide: Is that a shop or restaurant?
Tourist: ACTION:TURNLEFT
Tourist: It is a clothing shop.
Tourist: ACTION:TURNLEFT
Guide: You need to go to the intersection in the northwest corner of the map
Tourist: ACTION:TURNLEFT
Tourist: There appears to be a bank behind me.
Tourist: ACTION:TURNLEFT ACTION:TURNLEFT ACTION:TURNRIGHT ACTION:TURNRIGHT
Guide: Ok, turn left then go straight up that road
Tourist: ACTION:TURNLEFT ACTION:TURNLEFT ACTION:TURNLEFT ACTION:FORWARD ACTION:TURNRIGHT
ACTION:FORWARD ACTION:FORWARD ACTION:TURNLEFT ACTION:TURNLEFT ACTION:TURNLEFT
Guide: There should be shops on two of the corners but you
need to go to the corner without a shop.
Tourist: ACTION:FORWARD ACTION:FORWARD ACTION:FORWARD ACTION:TURNLEFT ACTION:TURNLEFT
Guide: let me know when you get there.
Tourist: on my left is Radio city Music hall
Tourist: ACTION:TURNLEFT ACTION:FORWARD ACTION:TURNLEFT ACTION:TURNRIGHT ACTION:TURNRIGHT
Tourist: I can't go straight any further.
Guide: ok. turn so that the theater is on your right.
Guide: then go straight
Tourist: That would be going back the way I came
Guide: yeah. I was looking at the wrong bank
Tourist: I'll notify when I am back at the brooks brothers, and the bank.
Tourist: ACTION:TURNRIGHT
Guide: make a right when the bank is on your left
Tourist: ACTION:FORWARD ACTION:FORWARD ACTION:TURNRIGHT
Tourist: Making the right at the bank.
Tourist: ACTION:FORWARD ACTION:FORWARD
Tourist: I can't go that way.
Tourist: ACTION:TURNLEFT
Tourist: Bank is ahead of me on the right
Tourist: ACTION:FORWARD ACTION:FORWARD ACTION:TURNLEFT
Guide: turn around on that intersection
Tourist: I can only go to the left or back the way I just came.
Tourist: ACTION:TURNLEFT
Guide: you're in the right place. do you see shops on the corners?
Guide: If you're on the corner with the bank, cross the street
Tourist: I'm back where I started by the shop and the bank.
Tourist: ACTION:TURNRIGHT
Guide: on the same side of the street?
Tourist: crossing the street now
Tourist: ACTION:FORWARD ACTION:FORWARD ACTION:TURNLEFT
Tourist: there is an I love new york shop across the street on the left from me now
Tourist: ACTION:TURNRIGHT ACTION:FORWARD
Guide: ok. I'll see if it's right.
Guide: EVALUATE_LOCATION
Guide: It's not right.
Tourist: What should I be on the look for?
Tourist: ACTION:TURNRIGHT ACTION:TURNRIGHT ACTION:TURNRIGHT
Guide: There should be shops on two corners but you need to be on one of the corners
without the shop.
Guide: Try the other corner.
Tourist: this intersection has 2 shop corners and a bank corner
Guide: yes. that's what I see on the map.
Tourist: should I go to the bank corner? or one of the shop corners?
or the blank corner (perhaps a hotel)
Tourist: ACTION:TURNLEFT ACTION:TURNLEFT ACTION:TURNRIGHT ACTION:TURNRIGHT
Guide: Go to the one near the hotel. The map says the hotel is a little
further down but it might be a little off.
Tourist: It's a big hotel it's possible.
Tourist: ACTION:FORWARD ACTION:TURNLEFT ACTION:FORWARD ACTION:TURNRIGHT
Tourist: I'm on the hotel corner
Guide: EVALUATE_LOCATION | Yes |
Q: Did the authors use crowdsourcing platforms?
Text:
We introduce “Talk The Walk”, the first large-scale dialogue dataset grounded in action and perception. The task involves two agents (a “guide” and a “tourist”) that communicate via natural language in order to achieve a common goal: having the tourist navigate to a given target location. The task and dataset, which are described in detail, are challenging and their full solution is an open problem that we pose to the community. We (i) focus on the task of tourist localization and develop the novel Masked Attention for Spatial Convolutions (MASC) mechanism that allows for grounding tourist utterances into the guide's map, (ii) show it yields significant improvements for both emergent and natural language communication, and (iii) using this method, we establish non-trivial baselines on the full task.
Introduction
As artificial intelligence plays an ever more prominent role in everyday human lives, it becomes increasingly important to enable machines to communicate via natural language—not only with humans, but also with each other. Learning algorithms for natural language understanding, such as in machine translation and reading comprehension, have progressed at an unprecedented rate in recent years, but still rely on static, large-scale, text-only datasets that lack crucial aspects of how humans understand and produce natural language. Namely, humans develop language capabilities by being embodied in an environment which they can perceive, manipulate and move around in; and by interacting with other humans. Hence, we argue that we should incorporate all three fundamental aspects of human language acquisition—perception, action and interactive communication—and develop a task and dataset to that effect.
We introduce the Talk the Walk dataset, where the aim is for two agents, a “guide” and a “tourist”, to interact with each other via natural language in order to achieve a common goal: having the tourist navigate towards the correct location. The guide has access to a map and knows the target location, but does not know where the tourist is; the tourist has a 360-degree view of the world, but knows neither the target location on the map nor the way to it. The agents need to work together through communication in order to successfully solve the task. An example of the task is given in Figure FIGREF3 .
Grounded language learning has (re-)gained traction in the AI community, and much attention is currently devoted to virtual embodiment—the development of multi-agent communication tasks in virtual environments—which has been argued to be a viable strategy for acquiring natural language semantics BIBREF0 . Various related tasks have recently been introduced, but in each case with some limitations. Although visually grounded dialogue tasks BIBREF1 , BIBREF2 comprise perceptual grounding and multi-agent interaction, their agents are passive observers and do not act in the environment. By contrast, instruction-following tasks, such as VLN BIBREF3 , involve action and perception but lack natural language interaction with other agents. Furthermore, some of these works use simulated environments BIBREF4 and/or templated language BIBREF5 , which arguably oversimplifies real perception or natural language, respectively. See Table TABREF15 for a comparison.
Talk The Walk is the first task to bring all three aspects together: perception for the tourist observing the world, action for the tourist to navigate through the environment, and interactive dialogue for the tourist and guide to work towards their common goal. To collect grounded dialogues, we constructed a virtual 2D grid environment by manually capturing 360-views of several neighborhoods in New York City (NYC). As the main focus of our task is on interactive dialogue, we limit the difficulty of the control problem by having the tourist navigating a 2D grid via discrete actions (turning left, turning right and moving forward). Our street view environment was integrated into ParlAI BIBREF6 and used to collect a large-scale dataset on Mechanical Turk involving human perception, action and communication.
We argue that for artificial agents to solve this challenging problem, some fundamental architecture designs are missing, and our hope is that this task motivates their innovation. To that end, we focus on the task of localization and develop the novel Masked Attention for Spatial Convolutions (MASC) mechanism. To model the interaction between language and action, this architecture repeatedly conditions the spatial dimensions of a convolution on the communicated message sequence.
This work makes the following contributions: 1) We present the first large-scale dialogue dataset grounded in action and perception; 2) We introduce the MASC architecture for localization and show it yields improvements for both emergent and natural language; 3) Using localization models, we establish initial baselines on the full task; 4) We show that our best model exceeds human performance under the assumption of “perfect perception” and with a learned emergent communication protocol, and sets a non-trivial baseline with natural language.
Talk The Walk
We create a perceptual environment by manually capturing several neighborhoods of New York City (NYC) with a 360 camera. Most parts of the city are grid-like and uniform, which makes it well-suited for obtaining a 2D grid. For Talk The Walk, we capture parts of Hell's Kitchen, East Village, the Financial District, Williamsburg and the Upper East Side—see Figure FIGREF66 in Appendix SECREF14 for their respective locations within NYC. For each neighborhood, we choose an approximately 5x5 grid and capture a 360 view on all four corners of each intersection, leading to a grid-size of roughly 10x10 per neighborhood.
The tourist's location is given as a tuple INLINEFORM0 , where INLINEFORM1 are the coordinates and INLINEFORM2 signifies the orientation (north, east, south or west). The tourist can take three actions: turn left, turn right and go forward. For moving forward, we add INLINEFORM3 , INLINEFORM4 , INLINEFORM5 , INLINEFORM6 to the INLINEFORM7 coordinates for the respective orientations. Upon a turning action, the orientation is updated by INLINEFORM8 where INLINEFORM9 for left and INLINEFORM10 for right. If the tourist moves outside the grid, we issue a warning that they cannot go in that direction and do not update the location. Moreover, tourists are shown different types of transitions: a short transition for actions that bring the tourist to a different corner of the same intersection; and a longer transition for actions that bring them to a new intersection.
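A minimal sketch of these dynamics is given below; the orientation encoding, the choice of which axis corresponds to north, and the wording of the warning are assumptions made for illustration.

```python
# Sketch of the tourist's movement dynamics: (x, y, o) with o in {0: N, 1: E, 2: S, 3: W}.
FORWARD_DELTA = {0: (0, 1), 1: (1, 0), 2: (0, -1), 3: (-1, 0)}  # assumed axis convention

def step(x, y, o, action, grid_w, grid_h):
    """Apply 'left', 'right' or 'forward'; stay in place (with a warning) at the grid edge."""
    if action == "left":
        return x, y, (o - 1) % 4
    if action == "right":
        return x, y, (o + 1) % 4
    dx, dy = FORWARD_DELTA[o]
    nx, ny = x + dx, y + dy
    if 0 <= nx < grid_w and 0 <= ny < grid_h:
        return nx, ny, o
    print("You cannot go in that direction.")   # location is not updated
    return x, y, o
```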
The guide observes a map that corresponds to the tourist's environment. We exploit the fact that urban areas like NYC are full of local businesses, and overlay the map with these landmarks as localization points for our task. Specifically, we manually annotate each corner of the intersection with a set of landmarks INLINEFORM0 , each coming from one of the following categories:
Bar, Playfield, Bank, Hotel, Shop, Subway, Coffee Shop, Restaurant, Theater.
The right-side of Figure FIGREF3 illustrates how the map is presented. Note that within-intersection transitions have a smaller grid distance than transitions to new intersections. To ensure that the localization task is not too easy, we do not include street names in the overhead map and keep the landmark categories coarse. That is, the dialogue is driven by uncertainty in the tourist's current location and the properties of the target location: if the exact location and orientation of the tourist were known, it would suffice to communicate a sequence of actions.
Task
For the Talk The Walk task, we randomly choose one of the five neighborhoods, and subsample a 4x4 grid (one block with four complete intersections) from the entire grid. We specify the boundaries of the grid by the top-left and bottom-right corners INLINEFORM0 . Next, we construct the overhead map of the environment, i.e. INLINEFORM1 with INLINEFORM2 and INLINEFORM3 . We subsequently sample a start location and orientation INLINEFORM4 and a target location INLINEFORM5 at random.
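The sketch below illustrates how such an episode could be sampled; the `neighborhoods` objects and their attributes are hypothetical stand-ins for the dataset's grid representation.

```python
# Sketch of sampling a single Talk The Walk episode (4x4 block, random start and target).
import random

def sample_episode(neighborhoods, block_size=4):
    hood = random.choice(neighborhoods)                      # one of five neighborhoods
    # Choose the top-left corner of a 4x4 block inside the ~10x10 neighborhood grid.
    x0 = random.randint(0, hood.width - block_size)
    y0 = random.randint(0, hood.height - block_size)
    corners = [(x, y) for x in range(x0, x0 + block_size)
                      for y in range(y0, y0 + block_size)]
    overhead_map = {c: hood.landmarks(c) for c in corners}   # guide's map with landmarks
    start = (*random.choice(corners), random.randrange(4))   # (x, y, orientation)
    target = random.choice(corners)                          # known only to the guide
    return hood, overhead_map, start, target
```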
The shared goal of the two agents is to navigate the tourist to the target location INLINEFORM0 , which is only known to the guide. The tourist perceives a “street view” planar projection INLINEFORM1 of the 360 image at location INLINEFORM2 and can simultaneously chat with the guide and navigate through the environment. The guide's role consists of reading the tourist description of the environment, building a “mental map” of their current position and providing instructions for navigating towards the target location. Whenever the guide believes that the tourist has reached the target location, they instruct the system to evaluate the tourist's location. The task ends when the evaluation is successful—i.e., when INLINEFORM3 —or otherwise continues until a total of three failed attempts. The additional attempts are meant to ease the task for humans, as we found that they otherwise often fail at the task but still end up close to the target location, e.g., at the wrong corner of the correct intersection.
Data Collection
We crowd-sourced the collection of the dataset on Amazon Mechanical Turk (MTurk). We use the MTurk interface of ParlAI BIBREF6 to render 360 images via WebGL and dynamically display neighborhood maps with an HTML5 canvas. Detailed task instructions, which were also given to our workers before they started their task, are shown in Appendix SECREF15 . We paired Turkers at random and let them alternate between the tourist and guide role across different HITs.
Dataset Statistics
The Talk The Walk dataset consists of over 10k successful dialogues—see Table FIGREF66 in the appendix for the dataset statistics split by neighborhood. Turkers successfully completed INLINEFORM0 of all finished tasks (we use this statistic as the human success rate). More than six hundred participants successfully completed at least one Talk The Walk HIT. Although the Visual Dialog BIBREF2 and GuessWhat BIBREF1 datasets are larger, the collected Talk The Walk dialogues are significantly longer. On average, Turkers needed more than 62 acts (i.e. utterances and actions) before they successfully completed the task, whereas Visual Dialog requires 20 acts. The majority of acts comprise the tourist's actions, with on average more than 44 actions per dialogue. The guide produces roughly 9 utterances per dialogue, slightly more than the tourist's 8 utterances. Turkers use diverse discourse, with a vocabulary size of more than 10K (calculated over all successful dialogues). An example from the dataset is shown in Appendix SECREF14 . The dataset is available at https://github.com/facebookresearch/talkthewalk.
Experiments
We investigate the difficulty of the proposed task by establishing initial baselines. The final Talk The Walk task is challenging and encompasses several important sub-tasks, ranging from landmark recognition to tourist localization and natural language instruction-giving. Arguably the most important sub-task is localization: without such capabilities the guide can not tell whether the tourist reached the target location. In this work, we establish a minimal baseline for Talk The Walk by utilizing agents trained for localization. Specifically, we let trained tourist models undertake random walks, using the following protocol: at each step, the tourist communicates its observations and actions to the guide, who predicts the tourist's location. If the guide predicts that the tourist is at target, we evaluate its location. If successful, the task ends, otherwise we continue until there have been three wrong evaluations. The protocol is given as pseudo-code in Appendix SECREF12 .
Tourist Localization
The designed navigation protocol relies on a trained localization model that predicts the tourist's location from a communicated message. Before we formalize this localization sub-task in Section UID21 , we further introduce two simplifying assumptions—perfect perception and orientation-agnosticism—so as to overcome some of the difficulties we encountered in preliminary experiments.
Perfect Perception. Early experiments revealed that perceptual grounding of landmarks is difficult: we set up a landmark classification problem, on which models with extracted CNN BIBREF7 or text recognition features BIBREF8 barely outperform a random baseline—see Appendix SECREF13 for full details. This finding implies that localization models from image input are limited by their ability to recognize landmarks, and, as a result, would not generalize to unseen environments. To ensure that perception is not the limiting factor when investigating the landmark-grounding and action-grounding capabilities of localization models, we assume “perfect perception”: in lieu of the 360 image view, the tourist is given the landmarks at its current location. More formally, each state observation INLINEFORM0 now equals the set of landmarks at the INLINEFORM1 -location, i.e. INLINEFORM2 . If the INLINEFORM3 -location does not have any visible landmarks, we return a single “empty corner” symbol. We stress that our findings—including a novel architecture for grounding actions into an overhead map, see Section UID28 —should carry over to settings without the perfect perception assumption.
Orientation-agnosticism. We opt to ignore the tourist's orientation, which simplifies the set of actions to [Left, Right, Up, Down], corresponding to adding [(-1, 0), (1, 0), (0, 1), (0, -1)] to the current INLINEFORM0 coordinates, respectively. Note that actions are now coupled to an orientation on the map—e.g. up is equal to going north—and this implicitly assumes that the tourist has access to a compass. This also affects perception, since the tourist now has access to views from all orientations: in conjunction with “perfect perception”, this implies that only landmarks at the current corner are given, whereas landmarks from different corners (e.g. across the street) are not visible.
Even with these simplifications, the localization-based baseline comes with its own set of challenges. As we show in Section SECREF34 , the task requires communication about a short (random) path—i.e., not only a sequence of observations but also actions—in order to achieve high localization accuracy. This means that the guide needs to decode observations from multiple time steps, as well as understand their 2D spatial arrangement as communicated via the sequence of actions. Thus, in order to get to a good understanding of the task, we thoroughly examine whether the agents can learn a communication protocol that simultaneously grounds observations and actions into the guide's map. In doing so, we thoroughly study the role of the communication channel in the localization task, by investigating increasingly constrained forms of communication: from differentiable continuous vectors to emergent discrete symbols to the full complexity of natural language.
The full navigation baseline hinges on a localization model from random trajectories. While we can sample random actions in the emergent communication setup, this is not possible for the natural language setup because the messages are coupled to the trajectories of the human annotators. This leads to slightly different problem setups, as described below.
Emergent language. A tourist, starting from a random location, takes INLINEFORM0 random actions INLINEFORM1 to reach target location INLINEFORM2 . Every location in the environment has a corresponding set of landmarks INLINEFORM3 for each of the INLINEFORM4 coordinates. As the tourist navigates, the agent perceives INLINEFORM5 state-observations INLINEFORM6 where each observation INLINEFORM7 consists of a set of INLINEFORM8 landmark symbols INLINEFORM9 . Given the observations INLINEFORM10 and actions INLINEFORM11 , the tourist generates a message INLINEFORM12 which is communicated to the other agent. The objective of the guide is to predict the location INLINEFORM13 from the tourist's message INLINEFORM14 .
Natural language. In contrast to our emergent communication experiments, we do not take random actions but instead extract actions, observations, and messages from the dataset. Specifically, we consider each tourist utterance (i.e. at any point in the dialogue), obtain the current tourist location as target location INLINEFORM0 , the utterance itself as message INLINEFORM1 , and the sequence of observations and actions that took place between the current and previous tourist utterance as INLINEFORM2 and INLINEFORM3 , respectively. Similar to the emergent language setting, the guide's objective is to predict the target location INLINEFORM4 from the tourist message INLINEFORM5 . We conduct experiments with INLINEFORM6 taken from the dataset and with INLINEFORM7 generated from the extracted observations INLINEFORM8 and actions INLINEFORM9 .
Model
This section outlines the tourist and guide architectures. We first describe how the tourist produces messages for the various communication channels across which the messages are sent. We subsequently describe how these messages are processed by the guide, and introduce the novel Masked Attention for Spatial Convolutions (MASC) mechanism that allows for grounding into the 2D overhead map in order to predict the tourist's location.
The Tourist
For each of the communication channels, we outline the procedure for generating a message INLINEFORM0 . Given a set of state observations INLINEFORM1 , we represent each observation by summing the INLINEFORM2 -dimensional embeddings of the observed landmarks, i.e. for INLINEFORM3 , INLINEFORM4 , where INLINEFORM5 is the landmark embedding lookup table. In addition, we embed action INLINEFORM6 into a INLINEFORM7 -dimensional embedding INLINEFORM8 via a look-up table INLINEFORM9 . We experiment with three types of communication channel.
Continuous vectors. The tourist has access to observations of several time steps, whose order is important for accurate localization. Because summing embeddings is order-invariant, we introduce a sum over positionally-gated embeddings, which, conditioned on time step INLINEFORM0 , pushes embedding information into the appropriate dimensions. More specifically, we generate an observation message INLINEFORM1 , where INLINEFORM2 is a learned gating vector for time step INLINEFORM3 . In a similar fashion, we produce action message INLINEFORM4 and send the concatenated vectors INLINEFORM5 as message to the guide. We can interpret continuous vector communication as a single, monolithic model because its architecture is end-to-end differentiable, enabling gradient-based optimization for training.
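A sketch of the observation half of this channel is given below; the embedding size, the maximum number of time steps, and the use of a sigmoid on the gates are assumptions (the text only specifies a learned gating vector per time step).

```python
# Sketch of the continuous observation message: a sum over positionally-gated
# landmark embeddings, so that the order of the T observations is preserved.
import torch
import torch.nn as nn

class ContinuousObservationMessage(nn.Module):
    def __init__(self, num_landmarks, emb_dim=128, max_steps=4):
        super().__init__()
        self.emb = nn.Embedding(num_landmarks, emb_dim)             # landmark look-up table
        self.gates = nn.Parameter(torch.randn(max_steps, emb_dim))  # one gate per time step

    def forward(self, landmark_ids):
        # landmark_ids: list of T tensors, each holding the landmark ids observed at step t
        message = 0
        for t, ids in enumerate(landmark_ids):
            o_t = self.emb(ids).sum(dim=0)                          # sum of landmark embeddings
            message = message + torch.sigmoid(self.gates[t]) * o_t  # positional gating
        return message
```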
Discrete symbols. Like the continuous vector communication model, with discrete communication the tourist also uses separate channels for observations and actions, as well as a sum over positionally gated embeddings to generate observation embedding INLINEFORM0 . We pass this embedding through a sigmoid and generate a message INLINEFORM1 by sampling from the resulting Bernoulli distributions:
INLINEFORM0
The action message INLINEFORM0 is produced in the same way, and we obtain the final tourist message INLINEFORM1 through concatenating the messages.
The communication channel's sampling operation renders the model non-differentiable, so we use policy gradients BIBREF9 , BIBREF10 to train the parameters INLINEFORM0 of the tourist model. That is, we estimate the gradient by INLINEFORM1
where the reward function INLINEFORM0 is the negative guide's loss (see Section SECREF25 ) and INLINEFORM1 a state-value baseline to reduce variance. We use a linear transformation over the concatenated embeddings as baseline prediction, i.e. INLINEFORM2 , and train it with a mean squared error loss.
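The following sketch illustrates the discrete channel and its policy-gradient update: binary symbols sampled from Bernoulli distributions, the negative guide loss used as reward, and a linear state-value baseline trained with a mean squared error. The message width and other dimensions are assumptions.

```python
# Sketch of the discrete message channel with a REINFORCE update and linear baseline.
import torch
import torch.nn as nn

class DiscreteChannel(nn.Module):
    def __init__(self, emb_dim=128, msg_dim=64):
        super().__init__()
        self.proj = nn.Linear(emb_dim, msg_dim)
        self.baseline = nn.Linear(emb_dim, 1)          # state-value baseline

    def forward(self, obs_embedding):
        probs = torch.sigmoid(self.proj(obs_embedding))
        dist = torch.distributions.Bernoulli(probs=probs)
        message = dist.sample()                        # binary message symbols
        log_prob = dist.log_prob(message).sum(-1)      # log p(message | observations)
        value = self.baseline(obs_embedding).squeeze(-1)
        return message, log_prob, value

def tourist_loss(log_prob, value, guide_loss):
    reward = -guide_loss.detach()                      # reward = negative guide loss
    policy_loss = -(reward - value.detach()) * log_prob
    baseline_loss = (value - reward).pow(2)            # mean squared error for the baseline
    return (policy_loss + baseline_loss).mean()
```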
Natural Language. Because observations and actions are of variable length, we use an LSTM encoder over the sequence of observation embeddings INLINEFORM0 , and extract its last hidden state INLINEFORM1 . We use a separate LSTM encoder for action embeddings INLINEFORM2 , and concatenate both INLINEFORM3 and INLINEFORM4 to the input of the LSTM decoder at each time step: DISPLAYFORM0
where INLINEFORM0 a look-up table, taking input tokens INLINEFORM1 . We train with teacher-forcing, i.e. we optimize the cross-entropy loss: INLINEFORM2 . At test time, we explore the following decoding strategies: greedy, sampling and a beam-search. We also fine-tune a trained tourist model (starting from a pre-trained model) with policy gradients in order to minimize the guide's prediction loss.
The Guide
Given a tourist message INLINEFORM0 describing their observations and actions, the objective of the guide is to predict the tourist's location on the map. First, we outline the procedure for extracting observation embedding INLINEFORM1 and action embeddings INLINEFORM2 from the message INLINEFORM3 for each of the types of communication. Next, we discuss the MASC mechanism that takes the observations and actions in order to ground them on the guide's map in order to predict the tourist's location.
Continuous. For the continuous communication model, we assign the observation message to the observation embedding, i.e. INLINEFORM0 . To extract the action embedding for time step INLINEFORM1 , we apply a linear layer to the action message, i.e. INLINEFORM2 .
Discrete. For discrete communication, we obtain observation INLINEFORM0 by applying a linear layer to the observation message, i.e. INLINEFORM1 . Similar to the continuous communication model, we use a linear layer over action message INLINEFORM2 to obtain action embedding INLINEFORM3 for time step INLINEFORM4 .
Natural Language. The message INLINEFORM0 contains information about observations and actions, so we use a recurrent neural network with an attention mechanism to extract the relevant observation and action embeddings. Specifically, we encode the message INLINEFORM1 , consisting of INLINEFORM2 tokens INLINEFORM3 taken from vocabulary INLINEFORM4 , with a bidirectional LSTM: DISPLAYFORM0
where INLINEFORM0 is the word embedding look-up table. We obtain observation embedding INLINEFORM1 through an attention mechanism over the hidden states INLINEFORM2 : DISPLAYFORM0
where INLINEFORM0 is a learned control embedding which is updated through a linear transformation of the previous control and observation embedding: INLINEFORM1 . We use the same mechanism to extract the action embedding INLINEFORM2 from the hidden states. For the observation embedding, we obtain the final representation by summing positionally gated embeddings, i.e., INLINEFORM3 .
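The following sketch shows one way to realize this attention step; in the paper an observation and an action embedding are each extracted with this mechanism, whereas the sketch simply runs a fixed number of attention steps, and all dimensions are assumptions.

```python
# Sketch of the guide-side attention over a bidirectional encoding of the tourist utterance.
import torch
import torch.nn as nn
import torch.nn.functional as F

class UtteranceAttention(nn.Module):
    def __init__(self, vocab_size, word_dim=128, hidden=256, steps=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, word_dim)
        self.encoder = nn.LSTM(word_dim, hidden, bidirectional=True, batch_first=True)
        self.control_update = nn.Linear(4 * hidden, 2 * hidden)
        self.steps = steps

    def forward(self, tokens, control):
        # tokens: (batch, seq_len); control: (batch, 2*hidden) learned control embedding
        h, _ = self.encoder(self.emb(tokens))                        # (batch, seq_len, 2*hidden)
        extracted = []
        for _ in range(self.steps):
            scores = torch.bmm(h, control.unsqueeze(2)).squeeze(2)   # dot-product attention
            alpha = F.softmax(scores, dim=1)
            attended = torch.bmm(alpha.unsqueeze(1), h).squeeze(1)   # attended embedding
            control = self.control_update(torch.cat([control, attended], dim=1))
            extracted.append(attended)
        return extracted    # e.g. one observation and one action embedding
```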
We represent the guide's map as INLINEFORM0 , where in this case INLINEFORM1 , where each INLINEFORM2 -dimensional INLINEFORM3 location embedding INLINEFORM4 is computed as the sum of the guide's landmark embeddings for that location.
Motivation. While the guide's map representation contains only local landmark information, the tourist communicates a trajectory of the map (i.e. actions and observations from multiple locations), implying that directly comparing the tourist's message with the individual landmark embeddings is probably suboptimal. Instead, we want to aggregate landmark information from surrounding locations by imputing trajectories over the map to predict locations. We propose a mechanism for translating landmark embeddings according to state transitions (left, right, up, down), which can be expressed as a 2D convolution over the map embeddings. For simplicity, let us assume that the map embedding INLINEFORM0 is 1-dimensional, then a left action can be realized through application of the following INLINEFORM1 kernel: INLINEFORM2 which effectively shifts all values of INLINEFORM3 one position to the left. We propose to learn such state-transitions from the tourist message through a differentiable attention-mask over the spatial dimensions of a 3x3 convolution.
MASC. We linearly project each predicted action embedding INLINEFORM0 to a 9-dimensional vector INLINEFORM1 , normalize it by a softmax and subsequently reshape the vector into a 3x3 mask INLINEFORM2 : DISPLAYFORM0
We learn a 3x3 convolutional kernel INLINEFORM0 , with INLINEFORM1 features, and apply the mask INLINEFORM2 to the spatial dimensions of the convolution by first broadcasting its values along the feature dimensions, i.e. INLINEFORM3 , and subsequently taking the Hadamard product: INLINEFORM4 . For each action step INLINEFORM5 , we then apply a 2D convolution with masked weight INLINEFORM6 to obtain a new map embedding INLINEFORM7 , where we zero-pad the input to maintain identical spatial dimensions.
Prediction model. We repeat the MASC operation INLINEFORM0 times (i.e. once for each action), and then aggregate the map embeddings by a sum over positionally-gated embeddings: INLINEFORM1 . We score locations by taking the dot-product of the observation embedding INLINEFORM2 , which contains information about the sequence of landmarks observed by the tourist, and the map. We compute a distribution over the locations of the map INLINEFORM3 by taking a softmax over the computed scores: DISPLAYFORM0
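Putting the two previous paragraphs together, the sketch below applies a predicted 3x3 mask to a shared convolutional kernel once per action and then scores locations by a dot product with the observation embedding; for brevity it omits the positionally-gated aggregation of intermediate map embeddings, and the shapes are assumptions.

```python
# Sketch of MASC (masked attention for spatial convolutions) and location scoring.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MASC(nn.Module):
    def __init__(self, emb_dim=128):
        super().__init__()
        self.mask_proj = nn.Linear(emb_dim, 9)                      # action -> 3x3 mask
        self.kernel = nn.Parameter(torch.randn(emb_dim, emb_dim, 3, 3) * 0.01)

    def forward(self, map_emb, action_emb):
        # map_emb: (1, emb_dim, H, W); action_emb: (emb_dim,)
        mask = F.softmax(self.mask_proj(action_emb), dim=-1).view(1, 1, 3, 3)
        masked_kernel = self.kernel * mask                          # broadcast over features
        return F.conv2d(map_emb, masked_kernel, padding=1)          # keep spatial dimensions

def localize(map_emb, obs_emb, action_embs, masc):
    # Apply MASC once per communicated action, then score every location on the map.
    for a in action_embs:
        map_emb = masc(map_emb, a)
    scores = torch.einsum("bchw,c->bhw", map_emb, obs_emb)          # dot product per location
    return F.softmax(scores.flatten(1), dim=1)                      # distribution over locations
```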
Predicting T. While emergent communication models use a fixed-length trajectory INLINEFORM0 , natural language messages may differ in the number of communicated observations and actions. Hence, we predict INLINEFORM1 from the communicated message. Specifically, we use a softmax regression layer over the last hidden state INLINEFORM2 of the RNN, and subsequently sample INLINEFORM3 from the resulting multinomial distribution: DISPLAYFORM0
We jointly train the INLINEFORM0 -prediction model via REINFORCE, with the guide's loss as reward function and a mean-reward baseline.
Comparisons
To better analyze the performance of the models incorporating MASC, we compare against a no-MASC baseline in our experiments, as well as a prediction upper bound.
No MASC. We compare the proposed MASC model with a model that does not include this mechanism. Whereas MASC predicts a convolution mask from the tourist message, the “No MASC” model uses INLINEFORM0 , the ordinary convolutional kernel, to convolve the map embedding INLINEFORM1 to obtain INLINEFORM2 . We also share the weights of this convolution at each time step.
Prediction upper-bound. Because we have access to the class-conditional likelihood INLINEFORM0 , we are able to compute the Bayes error rate (or irreducible error). No model (no matter how expressive) with any amount of data can ever obtain better localization accuracy, as there are multiple locations consistent with the observations and actions.
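For intuition, the sketch below shows one way such an upper bound could be estimated under a uniform prior over locations and deterministic observations: the best possible guide can only pick uniformly among the locations consistent with the communicated trajectory. The `env` interface and its methods are hypothetical.

```python
# Sketch of estimating the localization upper bound by counting consistent locations.
def estimate_upper_bound(env, num_samples=10000, T=2):
    hits = 0.0
    for _ in range(num_samples):
        start = env.random_location()
        trajectory = env.random_walk(start, T)        # observations and actions of length T
        consistent = [loc for loc in env.locations()
                      if env.replay(loc, trajectory.actions) == trajectory.observations]
        hits += 1.0 / len(consistent)                 # best possible chance of being right
    return hits / num_samples                         # expected Bayes-optimal accuracy
```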
Results and Discussion
In this section, we describe the findings of various experiments. First, we analyze how much information needs to be communicated for accurate localization in the Talk The Walk environment, and find that a short random path (including actions) is necessary. Next, for emergent language, we show that the MASC architecture can achieve very high localization accuracy, significantly outperforming the baseline that does not include this mechanism. We then turn our attention to the natural language experiments, and find that localization from human utterances is much harder, reaching an accuracy level that is below communicating a single landmark observation. We show that generated utterances from a conditional language model lead to significantly better localization performance, by successfully grounding the utterance on a single landmark observation (but not yet on multiple observations and actions). Finally, we show performance of the localization baseline on the full task, which can be used for future comparisons to this work.
Analysis of Localization Task
Task is not too easy. The upper-bound on localization performance in Table TABREF32 suggests that communicating a single landmark observation is not sufficient for accurate localization of the tourist ( INLINEFORM0 35% accuracy). This is an important result for the full navigation task because the need for two-way communication disappears if localization is too easy; if the guide knows the exact location of the tourist it suffices to communicate a list of instructions, which is then executed by the tourist. The uncertainty in the tourist's location is what drives the dialogue between the two agents.
Importance of actions. We observe that the upper bound for only communicating observations plateaus around 57% (even for INLINEFORM0 actions), whereas it exceeds 90% when we also take actions into account. This implies that, at least for random walks, it is essential to communicate a trajectory, including observations and actions, in order to achieve high localization accuracy.
Emergent Language Localization
We first report the results for tourist localization with emergent language in Table TABREF32 .
MASC improves performance. The MASC architecture significantly improves performance compared to models that do not include this mechanism. For instance, for INLINEFORM0 action, MASC already achieves 56.09% on the test set and this further increases to 69.85% for INLINEFORM1 . On the other hand, no-MASC models hit a plateau at 43%. In Appendix SECREF11 , we analyze learned MASC values, and show that communicated actions are often mapped to corresponding state-transitions.
Continuous vs discrete. We observe similar performance for continuous and discrete emergent communication models, implying that a discrete communication channel is not a limiting factor for localization performance.
Natural Language Localization
We report the results of tourist localization with natural language in Table TABREF36 . We compare accuracy of the guide model (with MASC) trained on utterances from (i) humans, (ii) a supervised model with various decoding strategies, and (iii) a policy gradient model optimized with respect to the loss of a frozen, pre-trained guide model on human utterances.
Human utterances. Compared to emergent language, localization from human utterances is much harder, achieving only INLINEFORM0 on the test set. Here, we report localization from a single utterance, but in Appendix SECREF45 we show that including up to five dialogue utterances only improves performance to INLINEFORM1 . We also show that MASC outperforms no-MASC models for natural language communication.
Generated utterances. We also investigate generated tourist utterances from conditional language models. Interestingly, we observe that the supervised model (with greedy and beam-search decoding) as well as the policy gradient model leads to an improvement of more than 10 accuracy points over the human utterances. However, their level of accuracy is slightly below the baseline of communicating a single observation, indicating that these models only learn to ground utterances in a single landmark observation.
Better grounding of generated utterances. We analyze natural language samples in Table TABREF38 , and confirm that, unlike human utterances, the generated utterances are talking about the observed landmarks. This observation explains why the generated utterances obtain higher localization accuracy. The current language models are most successful when conditioned on a single landmark observation; we show in Appendix UID43 that performance quickly deteriorates when the model is conditioned on more observations, suggesting that it cannot produce natural language utterances about multiple time steps.
Localization-based Baseline
Table TABREF36 shows results for the best localization models on the full task, evaluated via the random walk protocol defined in Algorithm SECREF12 .
Comparison with human annotators. Interestingly, our best localization model (continuous communication, with MASC, and INLINEFORM0 ) achieves 88.33% on the test set and thus exceeds human performance of 76.74% on the full task. While emergent models appear to be stronger localizers, humans might cope with their localization uncertainty through other mechanisms (e.g. better guidance, bias towards taking particular paths, etc.). The simplifying assumption of perfect perception also helps.
Number of actions. Unsurprisingly, humans take fewer steps (roughly 15) than our best random walk model (roughly 34). Our human annotators likely used some form of guidance to navigate faster to the target.
Conclusion
We introduced the Talk The Walk task and dataset, which consists of crowd-sourced dialogues in which two human annotators collaborate to navigate to target locations in the virtual streets of NYC. For the important localization sub-task, we proposed MASC—a novel grounding mechanism to learn state-transition from the tourist's message—and showed that it improves localization performance for emergent and natural language. We use the localization model to provide baseline numbers on the Talk The Walk task, in order to facilitate future research.
Related Work
The Talk the Walk task and dataset facilitate future research on various important subfields of artificial intelligence, including grounded language learning, goal-oriented dialogue research and situated navigation. Here, we describe related previous work in these areas.
Related tasks. There has been a long line of work involving related tasks. Early work on task-oriented dialogue dates back to the early 90s with the introduction of the Map Task BIBREF11 and Maze Game BIBREF25 corpora. Recent efforts have led to larger-scale goal-oriented dialogue datasets, for instance to aid research on visually-grounded dialogue BIBREF2 , BIBREF1 , knowledge-base-grounded discourse BIBREF29 or negotiation tasks BIBREF36 . At the same time, there has been a big push to develop environments for embodied AI, many of which involve agents following natural language instructions with respect to an environment BIBREF13 , BIBREF50 , BIBREF5 , BIBREF39 , BIBREF19 , BIBREF18 , following up on early work in this area BIBREF38 , BIBREF20 . An early example of navigation using neural networks is BIBREF28 , who propose an online learning approach for robot navigation. Recently, there has been increased interest in using end-to-end trainable neural networks for learning to navigate indoor scenes BIBREF27 , BIBREF26 or large cities BIBREF17 , BIBREF40 , but, unlike our work, without multi-agent communication. The task of localization (without multi-agent communication) has also recently been studied BIBREF18 , BIBREF48 .
Grounded language learning. Grounded language learning is motivated by the observation that humans learn language embodied (grounded) in sensorimotor experience of the physical world BIBREF15 , BIBREF45 . On the one hand, work in multi-modal semantics has shown that grounding can lead to practical improvements on various natural language understanding tasks BIBREF14 , BIBREF31 . In robotics, researchers dissatisfied with purely symbolic accounts of meaning attempted to build robotic systems with the aim of grounding meaning in physical experience of the world BIBREF44 , BIBREF46 . Recently, grounding has also been applied to the learning of sentence representations BIBREF32 , image captioning BIBREF37 , BIBREF49 , visual question answering BIBREF12 , BIBREF22 , visual reasoning BIBREF30 , BIBREF42 , and grounded machine translation BIBREF43 , BIBREF23 . Grounding also plays a crucial role in the emergent research of multi-agent communication, where agents communicate (in natural language or otherwise) in order to solve a task with respect to their shared environment BIBREF35 , BIBREF21 , BIBREF41 , BIBREF24 , BIBREF36 , BIBREF47 , BIBREF34 .
Implementation Details
For the emergent communication models, we use an embedding size INLINEFORM0 . The natural language experiments use 128-dimensional word embeddings and a bidirectional RNN with 256 units. In all experiments, we train the guide with a cross entropy loss using the ADAM optimizer with default hyper-parameters BIBREF33 . We perform early stopping on the validation accuracy, and report the corresponding train, valid and test accuracy. We optimize the localization models with continuous, discrete and natural language communication channels for 200, 200, and 25 epochs, respectively. To facilitate further research on Talk The Walk, we make our code base for reproducing experiments publicly available at https://github.com/facebookresearch/talkthewalk.
Additional Natural Language Experiments
First, we investigate the sensitivity of tourist generation models to the trajectory length, finding that the model conditioned on a single observation (i.e. INLINEFORM0 ) achieves best performance. In the next subsection, we further analyze localization models from human utterances by investigating MASC and no-MASC models with increasing dialogue context.
Tourist Generation Models
After training the supervised tourist model (conditioned on observations and action from human expert trajectories), there are two ways to train an accompanying guide model. We can optimize a location prediction model on either (i) extracted human trajectories (as in the localization setup from human utterances) or (ii) on all random paths of length INLINEFORM0 (as in the full task evaluation). Here, we investigate the impact of (1) using either human or random trajectories for training the guide model, and (2) the effect of varying the path length INLINEFORM1 during the full-task evaluation. For random trajectories, guide training uses the same path length INLINEFORM2 as is used during evaluation. We use a pre-trained tourist model with greedy decoding for generating the tourist utterances. Table TABREF40 summarizes the results.
Human vs random trajectories. We only observe small improvements for training on random trajectories. Human trajectories are thus diverse enough to generalize to random trajectories.
Effect of path length. There is a strong negative correlation between task success and the conditioned trajectory length. We observe that the full task performance quickly deteriorates for both human and random trajectories. This suggests that the tourist generation model cannot produce natural language utterances that describe multiple observations and actions. Although it is possible that the guide model cannot process such utterances, this is not very likely because the MASC architecture handles such messages successfully for emergent communication.
We report localization performance of tourist utterances generated by beam search decoding of varying beam size in Table TABREF40 . We find that performance decreases from 29.05% to 20.87% accuracy on the test set when we increase the beam-size from one to eight.
Localization from Human Utterances
We conduct an ablation study for MASC on natural language with varying dialogue context. Specifically, we compare localization accuracy of MASC and no-MASC models trained on the last [1, 3, 5] utterances of the dialogue (including guide utterances). We report these results in Table TABREF41 . In all cases, MASC outperforms the no-MASC models by several accuracy points. We also observe that mean predicted INLINEFORM0 (over the test set) increases from 1 to 2 when more dialogue context is included.
Visualizing MASC predictions
Figure FIGREF46 shows the MASC values for a learned model with emergent discrete communication and INLINEFORM0 actions. Specifically, we look at the predicted MASC values for different action sequences taken by the tourist. We observe that the first action is always mapped to the correct state-transition, but that the second and third MASC values do not always correspond to the right state-transitions.
Evaluation on Full Setup
We provide pseudo-code for evaluation of localization models on the full task in Algorithm SECREF12 , as well as results for all emergent communication models in Table TABREF55 .
Algorithm SECREF12: Performance evaluation of the location prediction model on the full Talk The Walk setup.
Landmark Classification
While the guide has access to the landmark labels, the tourist needs to recognize these landmarks from raw perceptual information. In this section, we study landmark classification as a supervised learning problem to investigate the difficulty of perceptual grounding in Talk The Walk.
The Talk The Walk dataset contains a total of 307 different landmarks divided among nine classes, see Figure FIGREF62 for how they are distributed. The class distribution is fairly imbalanced, with shops and restaurants as the most frequent landmarks and relatively few play fields and theaters. We treat landmark recognition as a multi-label classification problem as there can be multiple landmarks on a corner.
For the task of landmark classification, we extract the relevant views of the 360 image from which a landmark is visible. Because landmarks are labeled to be on a specific corner of an intersection, we assume that they are visible from one of the orientations facing away from the intersection. For example, for a landmark on the northwest corner of an intersection, we extract views from both the north and west direction. The orientation-specific views are obtained by a planar projection of the full 360-image with a small field of view (60 degrees) to limit distortions. To cover the full field of view, we extract two images per orientation, with their horizontal focus point 30 degrees apart. Hence, we obtain eight images per 360 image with corresponding orientation INLINEFORM0 .
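As a concrete illustration of this view-extraction step, the snippet below lists the horizontal headings of the planar projections extracted for a landmark corner. The compass convention (north = 0 degrees, increasing clockwise) and the placement of the two focus points at plus and minus 15 degrees around each orientation are assumptions made for the example, not details taken from the released code.

```python
# The paper extracts eight 60-degree views per 360 image (two per orientation, with
# horizontal focus points 30 degrees apart); for a landmark corner only the two
# outward-facing orientations are used, giving four views per landmark.
ORIENTATION_HEADING = {"N": 0, "E": 90, "S": 180, "W": 270}   # assumed compass convention

CORNER_ORIENTATIONS = {"NW": ("N", "W"), "NE": ("N", "E"),
                       "SW": ("S", "W"), "SE": ("S", "E")}

def view_headings(corner: str) -> list:
    """Headings of the extracted views for a landmark on the given corner."""
    headings = []
    for orientation in CORNER_ORIENTATIONS[corner]:
        base = ORIENTATION_HEADING[orientation]
        headings += [(base - 15) % 360, (base + 15) % 360]    # two views, 30 degrees apart
    return headings

print(view_headings("NW"))   # [345, 15, 255, 285]
```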
We run the following pre-trained feature extractors over the extracted images: a ResNet model BIBREF7 for image features, and a text recognition model BIBREF8 for extracted text features.
For the text recognition model, we use a learned look-up table INLINEFORM0 to embed the extracted text features INLINEFORM1 , and fuse all embeddings of four images through a bag of embeddings, i.e., INLINEFORM2 . We use a linear layer followed by a sigmoid to predict the probability for each class, i.e. INLINEFORM3 . We also experiment with replacing the look-up embeddings with pre-trained FastText embeddings BIBREF16 . For the ResNet model, we use a bag of embeddings over the four ResNet features, i.e. INLINEFORM4 , before we pass it through a linear layer to predict the class probabilities: INLINEFORM5 . We also conduct experiments where we first apply PCA to the extracted ResNet and FastText features before we feed them to the model.
To account for class imbalance, we train all described models with a binary cross entropy loss weighted by the inverted class frequency. We create an 80-20 class-conditional split of the dataset into a training and validation set. We train for 100 epochs and perform early stopping on the validation loss.
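A minimal PyTorch sketch of this setup is given below: a bag of embeddings over the four per-view feature vectors, a linear output layer whose logits are passed through a sigmoid, and a binary cross-entropy loss weighted by inverse class frequency. The feature dimension and class counts are illustrative placeholders, and the pos_weight formulation is only one way to realize the frequency weighting.

```python
import torch
import torch.nn as nn

class BagOfEmbeddingsClassifier(nn.Module):
    """Sum ("bag") the per-view feature embeddings, then predict one probability
    per landmark class with a linear layer (sigmoid is applied inside the loss)."""
    def __init__(self, feature_dim: int = 256, num_classes: int = 9):
        super().__init__()
        self.linear = nn.Linear(feature_dim, num_classes)

    def forward(self, view_features: torch.Tensor) -> torch.Tensor:
        # view_features: (batch, num_views, feature_dim), e.g. four views per corner
        return self.linear(view_features.sum(dim=1))   # raw logits

# One way to realize inverse-class-frequency weighting (counts are illustrative).
class_counts = torch.tensor([120., 15., 40., 30., 90., 25., 35., 100., 10.])
pos_weight = class_counts.sum() / class_counts
criterion = nn.BCEWithLogitsLoss(pos_weight=pos_weight)

model = BagOfEmbeddingsClassifier()
features = torch.randn(8, 4, 256)                 # batch of 8 corners, 4 views each
labels = torch.randint(0, 2, (8, 9)).float()      # multi-label targets
loss = criterion(model(features), labels)
```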
The F1 scores for the described methods are reported in Table TABREF65 . We compare to an “all positive” baseline that always predicts that the landmark class is visible and observe that all presented models struggle to outperform this baseline. Although the 256-dimensional ResNet features achieve slightly better precision on the validation set, they result in much worse recall and a lower F1 score. Our results indicate that perceptual grounding is a difficult task, which easily merits a paper in its own right, and so we leave further improvements (e.g. better text recognizers) for future work.
Dataset Details
Dataset split. We split the full dataset by assigning entire 4x4 grids (independent of the target location) to the train, valid or test set. Specifically, we design the split such that, in the valid set, at least one intersection (out of four) is not part of the train set. For the test set, all four intersections are novel. See our source code, available at URL ANONYMIZED, for more details on how this split is realized.
Example
Tourist: ACTION:TURNRIGHT ACTION:TURNRIGHT
Guide: Hello, what are you near?
Tourist: ACTION:TURNLEFT ACTION:TURNLEFT ACTION:TURNLEFT
Tourist: Hello, in front of me is a Brooks Brothers
Tourist: ACTION:TURNLEFT ACTION:FORWARD ACTION:TURNLEFT ACTION:TURNLEFT
Guide: Is that a shop or restaurant?
Tourist: ACTION:TURNLEFT
Tourist: It is a clothing shop.
Tourist: ACTION:TURNLEFT
Guide: You need to go to the intersection in the northwest corner of the map
Tourist: ACTION:TURNLEFT
Tourist: There appears to be a bank behind me.
Tourist: ACTION:TURNLEFT ACTION:TURNLEFT ACTION:TURNRIGHT ACTION:TURNRIGHT
Guide: Ok, turn left then go straight up that road
Tourist: ACTION:TURNLEFT ACTION:TURNLEFT ACTION:TURNLEFT ACTION:FORWARD ACTION:TURNRIGHT
ACTION:FORWARD ACTION:FORWARD ACTION:TURNLEFT ACTION:TURNLEFT ACTION:TURNLEFT
Guide: There should be shops on two of the corners but you
need to go to the corner without a shop.
Tourist: ACTION:FORWARD ACTION:FORWARD ACTION:FORWARD ACTION:TURNLEFT ACTION:TURNLEFT
Guide: let me know when you get there.
Tourist: on my left is Radio city Music hall
Tourist: ACTION:TURNLEFT ACTION:FORWARD ACTION:TURNLEFT ACTION:TURNRIGHT ACTION:TURNRIGHT
Tourist: I can't go straight any further.
Guide: ok. turn so that the theater is on your right.
Guide: then go straight
Tourist: That would be going back the way I came
Guide: yeah. I was looking at the wrong bank
Tourist: I'll notify when I am back at the brooks brothers, and the bank.
Tourist: ACTION:TURNRIGHT
Guide: make a right when the bank is on your left
Tourist: ACTION:FORWARD ACTION:FORWARD ACTION:TURNRIGHT
Tourist: Making the right at the bank.
Tourist: ACTION:FORWARD ACTION:FORWARD
Tourist: I can't go that way.
Tourist: ACTION:TURNLEFT
Tourist: Bank is ahead of me on the right
Tourist: ACTION:FORWARD ACTION:FORWARD ACTION:TURNLEFT
Guide: turn around on that intersection
Tourist: I can only go to the left or back the way I just came.
Tourist: ACTION:TURNLEFT
Guide: you're in the right place. do you see shops on the corners?
Guide: If you're on the corner with the bank, cross the street
Tourist: I'm back where I started by the shop and the bank.
Tourist: ACTION:TURNRIGHT
Guide: on the same side of the street?
Tourist: crossing the street now
Tourist: ACTION:FORWARD ACTION:FORWARD ACTION:TURNLEFT
Tourist: there is an I love new york shop across the street on the left from me now
Tourist: ACTION:TURNRIGHT ACTION:FORWARD
Guide: ok. I'll see if it's right.
Guide: EVALUATE_LOCATION
Guide: It's not right.
Tourist: What should I be on the look for?
Tourist: ACTION:TURNRIGHT ACTION:TURNRIGHT ACTION:TURNRIGHT
Guide: There should be shops on two corners but you need to be on one of the corners
without the shop.
Guide: Try the other corner.
Tourist: this intersection has 2 shop corners and a bank corner
Guide: yes. that's what I see on the map.
Tourist: should I go to the bank corner? or one of the shop corners?
or the blank corner (perhaps a hotel)
Tourist: ACTION:TURNLEFT ACTION:TURNLEFT ACTION:TURNRIGHT ACTION:TURNRIGHT
Guide: Go to the one near the hotel. The map says the hotel is a little
further down but it might be a little off.
Tourist: It's a big hotel it's possible.
Tourist: ACTION:FORWARD ACTION:TURNLEFT ACTION:FORWARD ACTION:TURNRIGHT
Tourist: I'm on the hotel corner
Guide: EVALUATE_LOCATION | Yes |
c1ce652085ef9a7f02cb5c363ce2b8757adbe213 | c1ce652085ef9a7f02cb5c363ce2b8757adbe213_0 | Q: How was the dataset collected?
Text:
We introduce “Talk The Walk”, the first large-scale dialogue dataset grounded in action and perception. The task involves two agents (a “guide” and a “tourist”) that communicate via natural language in order to achieve a common goal: having the tourist navigate to a given target location. The task and dataset, which are described in detail, are challenging and their full solution is an open problem that we pose to the community. We (i) focus on the task of tourist localization and develop the novel Masked Attention for Spatial Convolutions (MASC) mechanism that allows for grounding tourist utterances into the guide's map, (ii) show it yields significant improvements for both emergent and natural language communication, and (iii) using this method, we establish non-trivial baselines on the full task.
Introduction
As artificial intelligence plays an ever more prominent role in everyday human lives, it becomes increasingly important to enable machines to communicate via natural language—not only with humans, but also with each other. Learning algorithms for natural language understanding, such as in machine translation and reading comprehension, have progressed at an unprecedented rate in recent years, but still rely on static, large-scale, text-only datasets that lack crucial aspects of how humans understand and produce natural language. Namely, humans develop language capabilities by being embodied in an environment which they can perceive, manipulate and move around in; and by interacting with other humans. Hence, we argue that we should incorporate all three fundamental aspects of human language acquisition—perception, action and interactive communication—and develop a task and dataset to that effect.
We introduce the Talk the Walk dataset, where the aim is for two agents, a “guide” and a “tourist”, to interact with each other via natural language in order to achieve a common goal: having the tourist navigate towards the correct location. The guide has access to a map and knows the target location, but does not know where the tourist is; the tourist has a 360-degree view of the world, but knows neither the target location on the map nor the way to it. The agents need to work together through communication in order to successfully solve the task. An example of the task is given in Figure FIGREF3 .
Grounded language learning has (re-)gained traction in the AI community, and much attention is currently devoted to virtual embodiment—the development of multi-agent communication tasks in virtual environments—which has been argued to be a viable strategy for acquiring natural language semantics BIBREF0 . Various related tasks have recently been introduced, but in each case with some limitations. Although visually grounded dialogue tasks BIBREF1 , BIBREF2 comprise perceptual grounding and multi-agent interaction, their agents are passive observers and do not act in the environment. By contrast, instruction-following tasks, such as VLN BIBREF3 , involve action and perception but lack natural language interaction with other agents. Furthermore, some of these works use simulated environments BIBREF4 and/or templated language BIBREF5 , which arguably oversimplifies real perception or natural language, respectively. See Table TABREF15 for a comparison.
Talk The Walk is the first task to bring all three aspects together: perception for the tourist observing the world, action for the tourist to navigate through the environment, and interactive dialogue for the tourist and guide to work towards their common goal. To collect grounded dialogues, we constructed a virtual 2D grid environment by manually capturing 360-views of several neighborhoods in New York City (NYC). As the main focus of our task is on interactive dialogue, we limit the difficulty of the control problem by having the tourist navigating a 2D grid via discrete actions (turning left, turning right and moving forward). Our street view environment was integrated into ParlAI BIBREF6 and used to collect a large-scale dataset on Mechanical Turk involving human perception, action and communication.
We argue that for artificial agents to solve this challenging problem, some fundamental architecture designs are missing, and our hope is that this task motivates their innovation. To that end, we focus on the task of localization and develop the novel Masked Attention for Spatial Convolutions (MASC) mechanism. To model the interaction between language and action, this architecture repeatedly conditions the spatial dimensions of a convolution on the communicated message sequence.
This work makes the following contributions: 1) We present the first large-scale dialogue dataset grounded in action and perception; 2) We introduce the MASC architecture for localization and show it yields improvements for both emergent and natural language; 3) Using localization models, we establish initial baselines on the full task; 4) We show that our best model exceeds human performance under the assumption of “perfect perception” and with a learned emergent communication protocol, and sets a non-trivial baseline with natural language.
Talk The Walk
We create a perceptual environment by manually capturing several neighborhoods of New York City (NYC) with a 360 camera. Most parts of the city are grid-like and uniform, which makes it well-suited for obtaining a 2D grid. For Talk The Walk, we capture parts of Hell's Kitchen, East Village, the Financial District, Williamsburg and the Upper East Side—see Figure FIGREF66 in Appendix SECREF14 for their respective locations within NYC. For each neighborhood, we choose an approximately 5x5 grid and capture a 360 view on all four corners of each intersection, leading to a grid-size of roughly 10x10 per neighborhood.
The tourist's location is given as a tuple INLINEFORM0 , where INLINEFORM1 are the coordinates and INLINEFORM2 signifies the orientation (north, east, south or west). The tourist can take three actions: turn left, turn right and go forward. For moving forward, we add INLINEFORM3 , INLINEFORM4 , INLINEFORM5 , INLINEFORM6 to the INLINEFORM7 coordinates for the respective orientations. Upon a turning action, the orientation is updated by INLINEFORM8 where INLINEFORM9 for left and INLINEFORM10 for right. If the tourist moves outside the grid, we issue a warning that they cannot go in that direction and do not update the location. Moreover, tourists are shown different types of transitions: a short transition for actions that bring the tourist to a different corner of the same intersection; and a longer transition for actions that bring them to a new intersection.
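For concreteness, a small sketch of this transition function is shown below. The coordinate convention (x increasing eastwards, y increasing northwards) and the grid bounds are assumptions made for illustration rather than details of the released environment.

```python
# Sketch of the tourist's state transition on the 2D grid (coordinate convention assumed).
ORIENTATIONS = ["N", "E", "S", "W"]
FORWARD_DELTA = {"N": (0, 1), "E": (1, 0), "S": (0, -1), "W": (-1, 0)}

def step(x, y, orientation, action, grid_size=10):
    """Apply 'left', 'right' or 'forward'; moves outside the grid leave the location unchanged."""
    if action == "left":
        orientation = ORIENTATIONS[(ORIENTATIONS.index(orientation) - 1) % 4]
    elif action == "right":
        orientation = ORIENTATIONS[(ORIENTATIONS.index(orientation) + 1) % 4]
    elif action == "forward":
        dx, dy = FORWARD_DELTA[orientation]
        if 0 <= x + dx < grid_size and 0 <= y + dy < grid_size:
            x, y = x + dx, y + dy
        # otherwise: warn the tourist and keep the location as-is
    return x, y, orientation

print(step(0, 0, "N", "forward"))  # (0, 1, 'N')
```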
The guide observes a map that corresponds to the tourist's environment. We exploit the fact that urban areas like NYC are full of local businesses, and overlay the map with these landmarks as localization points for our task. Specifically, we manually annotate each corner of the intersection with a set of landmarks INLINEFORM0 , each coming from one of the following categories:
Bar, Playfield, Bank, Hotel, Shop, Subway, Coffee Shop, Restaurant, Theater
The right-side of Figure FIGREF3 illustrates how the map is presented. Note that within-intersection transitions have a smaller grid distance than transitions to new intersections. To ensure that the localization task is not too easy, we do not include street names in the overhead map and keep the landmark categories coarse. That is, the dialogue is driven by uncertainty in the tourist's current location and the properties of the target location: if the exact location and orientation of the tourist were known, it would suffice to communicate a sequence of actions.
Task
For the Talk The Walk task, we randomly choose one of the five neighborhoods, and subsample a 4x4 grid (one block with four complete intersections) from the entire grid. We specify the boundaries of the grid by the top-left and bottom-right corners INLINEFORM0 . Next, we construct the overhead map of the environment, i.e. INLINEFORM1 with INLINEFORM2 and INLINEFORM3 . We subsequently sample a start location and orientation INLINEFORM4 and a target location INLINEFORM5 at random.
The shared goal of the two agents is to navigate the tourist to the target location INLINEFORM0 , which is only known to the guide. The tourist perceives a “street view” planar projection INLINEFORM1 of the 360 image at location INLINEFORM2 and can simultaneously chat with the guide and navigate through the environment. The guide's role consists of reading the tourist's descriptions of the environment, building a “mental map” of their current position and providing instructions for navigating towards the target location. Whenever the guide believes that the tourist has reached the target location, they instruct the system to evaluate the tourist's location. The task ends when the evaluation is successful—i.e., when INLINEFORM3 —or otherwise continues until a total of three failed attempts.
Data Collection
We crowd-sourced the collection of the dataset on Amazon Mechanical Turk (MTurk). We use the MTurk interface of ParlAI BIBREF6 to render 360 images via WebGL and dynamically display neighborhood maps with an HTML5 canvas. Detailed task instructions, which were also given to our workers before they started their task, are shown in Appendix SECREF15 . We paired Turkers at random and let them alternate between the tourist and guide role across different HITs.
Dataset Statistics
The Talk The Walk dataset consists of over 10k successful dialogues—see Table FIGREF66 in the appendix for the dataset statistics split by neighborhood. Turkers successfully completed INLINEFORM0 of all finished tasks (we use this statistic as the human success rate). More than six hundred participants successfully completed at least one Talk The Walk HIT. Although the Visual Dialog BIBREF2 and GuessWhat BIBREF1 datasets are larger, the collected Talk The Walk dialogs are significantly longer. On average, Turkers needed more than 62 acts (i.e utterances and actions) before they successfully completed the task, whereas Visual Dialog requires 20 acts. The majority of acts comprise the tourist's actions, with on average more than 44 actions per dialogue. The guide produces roughly 9 utterances per dialogue, slightly more than the tourist's 8 utterances. Turkers use diverse discourse, with a vocabulary size of more than 10K (calculated over all successful dialogues). An example from the dataset is shown in Appendix SECREF14 . The dataset is available at https://github.com/facebookresearch/talkthewalk.
Experiments
We investigate the difficulty of the proposed task by establishing initial baselines. The final Talk The Walk task is challenging and encompasses several important sub-tasks, ranging from landmark recognition to tourist localization and natural language instruction-giving. Arguably the most important sub-task is localization: without such capabilities the guide can not tell whether the tourist reached the target location. In this work, we establish a minimal baseline for Talk The Walk by utilizing agents trained for localization. Specifically, we let trained tourist models undertake random walks, using the following protocol: at each step, the tourist communicates its observations and actions to the guide, who predicts the tourist's location. If the guide predicts that the tourist is at target, we evaluate its location. If successful, the task ends, otherwise we continue until there have been three wrong evaluations. The protocol is given as pseudo-code in Appendix SECREF12 .
Tourist Localization
The designed navigation protocol relies on a trained localization model that predicts the tourist's location from a communicated message. Before we formalize this localization sub-task in Section UID21 , we further introduce two simplifying assumptions—perfect perception and orientation-agnosticism—so as to overcome some of the difficulties we encountered in preliminary experiments.
Perfect Perception. Early experiments revealed that perceptual grounding of landmarks is difficult: we set up a landmark classification problem, on which models with extracted CNN BIBREF7 or text recognition features BIBREF8 barely outperform a random baseline—see Appendix SECREF13 for full details. This finding implies that localization models from image input are limited by their ability to recognize landmarks, and, as a result, would not generalize to unseen environments. To ensure that perception is not the limiting factor when investigating the landmark-grounding and action-grounding capabilities of localization models, we assume “perfect perception”: in lieu of the 360 image view, the tourist is given the landmarks at its current location. More formally, each state observation INLINEFORM0 now equals the set of landmarks at the INLINEFORM1 -location, i.e. INLINEFORM2 . If the INLINEFORM3 -location does not have any visible landmarks, we return a single “empty corner” symbol. We stress that our findings—including a novel architecture for grounding actions into an overhead map, see Section UID28 —should carry over to settings without the perfect perception assumption.
Orientation-agnosticism. We opt to ignore the tourist's orientation, which simplifies the set of actions to [Left, Right, Up, Down], corresponding to adding [(-1, 0), (1, 0), (0, 1), (0, -1)] to the current INLINEFORM0 coordinates, respectively. Note that actions are now coupled to an orientation on the map—e.g. up is equal to going north—and this implicitly assumes that the tourist has access to a compass. This also affects perception, since the tourist now has access to views from all orientations: in conjunction with “perfect perception”, this means that only landmarks at the current corner are given, whereas landmarks from different corners (e.g. across the street) are not visible.
Even with these simplifications, the localization-based baseline comes with its own set of challenges. As we show in Section SECREF34 , the task requires communication about a short (random) path—i.e., not only a sequence of observations but also actions—in order to achieve high localization accuracy. This means that the guide needs to decode observations from multiple time steps, as well as understand their 2D spatial arrangement as communicated via the sequence of actions. Thus, in order to get to a good understanding of the task, we thoroughly examine whether the agents can learn a communication protocol that simultaneously grounds observations and actions into the guide's map. In doing so, we thoroughly study the role of the communication channel in the localization task, by investigating increasingly constrained forms of communication: from differentiable continuous vectors to emergent discrete symbols to the full complexity of natural language.
The full navigation baseline hinges on a localization model from random trajectories. While we can sample random actions in the emergent communication setup, this is not possible for the natural language setup because the messages are coupled to the trajectories of the human annotators. This leads to slightly different problem setups, as described below.
Emergent language. A tourist, starting from a random location, takes INLINEFORM0 random actions INLINEFORM1 to reach target location INLINEFORM2 . Every location in the environment has a corresponding set of landmarks INLINEFORM3 for each of the INLINEFORM4 coordinates. As the tourist navigates, the agent perceives INLINEFORM5 state-observations INLINEFORM6 where each observation INLINEFORM7 consists of a set of INLINEFORM8 landmark symbols INLINEFORM9 . Given the observations INLINEFORM10 and actions INLINEFORM11 , the tourist generates a message INLINEFORM12 which is communicated to the other agent. The objective of the guide is to predict the location INLINEFORM13 from the tourist's message INLINEFORM14 .
Natural language. In contrast to our emergent communication experiments, we do not take random actions but instead extract actions, observations, and messages from the dataset. Specifically, we consider each tourist utterance (i.e. at any point in the dialogue), obtain the current tourist location as target location INLINEFORM0 , the utterance itself as message INLINEFORM1 , and the sequence of observations and actions that took place between the current and previous tourist utterance as INLINEFORM2 and INLINEFORM3 , respectively. Similar to the emergent language setting, the guide's objective is to predict the target location INLINEFORM4 from the tourist message INLINEFORM5 . We conduct experiments with INLINEFORM6 taken from the dataset and with INLINEFORM7 generated from the extracted observations INLINEFORM8 and actions INLINEFORM9 .
Model
This section outlines the tourist and guide architectures. We first describe how the tourist produces messages for the various communication channels across which the messages are sent. We subsequently describe how these messages are processed by the guide, and introduce the novel Masked Attention for Spatial Convolutions (MASC) mechanism that allows for grounding into the 2D overhead map in order to predict the tourist's location.
The Tourist
For each of the communication channels, we outline the procedure for generating a message INLINEFORM0 . Given a set of state observations INLINEFORM1 , we represent each observation by summing the INLINEFORM2 -dimensional embeddings of the observed landmarks, i.e. for INLINEFORM3 , INLINEFORM4 , where INLINEFORM5 is the landmark embedding lookup table. In addition, we embed action INLINEFORM6 into a INLINEFORM7 -dimensional embedding INLINEFORM8 via a look-up table INLINEFORM9 . We experiment with three types of communication channel.
Continuous vectors. The tourist has access to observations of several time steps, whose order is important for accurate localization. Because summing embeddings is order-invariant, we introduce a sum over positionally-gated embeddings, which, conditioned on time step INLINEFORM0 , pushes embedding information into the appropriate dimensions. More specifically, we generate an observation message INLINEFORM1 , where INLINEFORM2 is a learned gating vector for time step INLINEFORM3 . In a similar fashion, we produce action message INLINEFORM4 and send the concatenated vectors INLINEFORM5 as message to the guide. We can interpret continuous vector communication as a single, monolithic model because its architecture is end-to-end differentiable, enabling gradient-based optimization for training.
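The sketch below shows one way to implement such a sum over positionally-gated embeddings in PyTorch; the sigmoid parameterization of the gates and the embedding size are assumptions made for the example, as the paper does not spell out these details here.

```python
import torch
import torch.nn as nn

class PositionallyGatedSum(nn.Module):
    """Order-aware fusion: each time step t has a learned gate g_t that is multiplied
    element-wise with the embedding at that step before summing."""
    def __init__(self, max_steps: int, dim: int):
        super().__init__()
        self.gates = nn.Parameter(torch.randn(max_steps, dim))   # one gate vector per step

    def forward(self, embeddings: torch.Tensor) -> torch.Tensor:
        # embeddings: (T, dim) with T <= max_steps
        gates = torch.sigmoid(self.gates[: embeddings.size(0)])  # gating nonlinearity assumed
        return (gates * embeddings).sum(dim=0)                   # fused (dim,) message

fuse = PositionallyGatedSum(max_steps=4, dim=256)
message_obs = fuse(torch.randn(3, 256))   # three observation embeddings -> one vector
```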
Discrete symbols. Like the continuous vector communication model, with discrete communication the tourist also uses separate channels for observations and actions, as well as a sum over positionally gated embeddings to generate observation embedding INLINEFORM0 . We pass this embedding through a sigmoid and generate a message INLINEFORM1 by sampling from the resulting Bernoulli distributions:
INLINEFORM0
The action message INLINEFORM0 is produced in the same way, and we obtain the final tourist message INLINEFORM1 through concatenating the messages.
The communication channel's sampling operation renders the model non-differentiable, so we use policy gradients BIBREF9 , BIBREF10 to train the parameters INLINEFORM0 of the tourist model. That is, we estimate the gradient by INLINEFORM1
where the reward function INLINEFORM0 is the negative of the guide's loss (see Section SECREF25 ) and INLINEFORM1 is a state-value baseline to reduce variance. We use a linear transformation over the concatenated embeddings as baseline prediction, i.e. INLINEFORM2 , and train it with a mean squared error loss.
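A compact PyTorch sketch of the discrete channel and its REINFORCE update is shown below. For brevity it uses a single fused embedding instead of separate observation and action channels, and the message width, reward values and batch shapes are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torch.distributions import Bernoulli

class DiscreteMessenger(nn.Module):
    """Sample a binary message from sigmoid logits; a linear state-value baseline
    is used to reduce the variance of the policy-gradient estimate."""
    def __init__(self, dim: int, msg_bits: int):
        super().__init__()
        self.to_logits = nn.Linear(dim, msg_bits)
        self.baseline = nn.Linear(dim, 1)

    def forward(self, embedding: torch.Tensor):
        dist = Bernoulli(logits=self.to_logits(embedding))
        message = dist.sample()                          # non-differentiable sample
        log_prob = dist.log_prob(message).sum(-1)
        value = self.baseline(embedding).squeeze(-1)
        return message, log_prob, value

messenger = DiscreteMessenger(dim=256, msg_bits=32)
embedding = torch.randn(8, 256)                          # batch of fused embeddings
message, log_prob, value = messenger(embedding)
reward = -torch.rand(8)                                  # placeholder for the negative guide loss
policy_loss = -((reward - value.detach()) * log_prob).mean()
baseline_loss = ((value - reward) ** 2).mean()           # mean squared error baseline
(policy_loss + baseline_loss).backward()
```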
Natural Language. Because observations and actions are of variable length, we use an LSTM encoder over the sequence of observation embeddings INLINEFORM0 , and extract its last hidden state INLINEFORM1 . We use a separate LSTM encoder for action embeddings INLINEFORM2 , and concatenate both INLINEFORM3 and INLINEFORM4 to the input of the LSTM decoder at each time step: DISPLAYFORM0
where INLINEFORM0 is a look-up table, taking input tokens INLINEFORM1 . We train with teacher-forcing, i.e. we optimize the cross-entropy loss: INLINEFORM2 . At test time, we explore the following decoding strategies: greedy, sampling and beam search. We also fine-tune a trained tourist model (starting from a pre-trained model) with policy gradients in order to minimize the guide's prediction loss.
The Guide
Given a tourist message INLINEFORM0 describing their observations and actions, the objective of the guide is to predict the tourist's location on the map. First, we outline the procedure for extracting observation embedding INLINEFORM1 and action embeddings INLINEFORM2 from the message INLINEFORM3 for each of the types of communication. Next, we discuss the MASC mechanism that takes the observations and actions in order to ground them on the guide's map in order to predict the tourist's location.
Continuous. For the continuous communication model, we assign the observation message to the observation embedding, i.e. INLINEFORM0 . To extract the action embedding for time step INLINEFORM1 , we apply a linear layer to the action message, i.e. INLINEFORM2 .
Discrete. For discrete communication, we obtain observation embedding INLINEFORM0 by applying a linear layer to the observation message, i.e. INLINEFORM1 . Similar to the continuous communication model, we use a linear layer over action message INLINEFORM2 to obtain action embedding INLINEFORM3 for time step INLINEFORM4 .
Natural Language. The message INLINEFORM0 contains information about observations and actions, so we use a recurrent neural network with an attention mechanism to extract the relevant observation and action embeddings. Specifically, we encode the message INLINEFORM1 , consisting of INLINEFORM2 tokens INLINEFORM3 taken from vocabulary INLINEFORM4 , with a bidirectional LSTM: DISPLAYFORM0
where INLINEFORM0 is the word embedding look-up table. We obtain observation embedding INLINEFORM1 through an attention mechanism over the hidden states INLINEFORM2 : DISPLAYFORM0
where INLINEFORM0 is a learned control embedding which is updated through a linear transformation of the previous control and observation embedding: INLINEFORM1 . We use the same mechanism to extract the action embedding INLINEFORM2 from the hidden states. For the observation embedding, we obtain the final representation by summing positionally gated embeddings, i.e., INLINEFORM3 .
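The attention step can be sketched as follows; the dot-product scoring function and the initialization of the control embedding are assumptions made for the example, since the displayed equations are not recoverable from the extracted text.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ControlledAttention(nn.Module):
    """Attend over the BiLSTM hidden states with a learned control embedding, then
    update the control from the previous control and the attended vector."""
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.control = nn.Parameter(torch.randn(hidden_dim))
        self.update = nn.Linear(2 * hidden_dim, hidden_dim)

    def forward(self, hidden_states: torch.Tensor):
        # hidden_states: (num_tokens, hidden_dim)
        weights = F.softmax(hidden_states @ self.control, dim=0)   # dot-product scoring (assumed)
        attended = (weights.unsqueeze(-1) * hidden_states).sum(dim=0)
        new_control = self.update(torch.cat([self.control, attended]))
        return attended, new_control

attention = ControlledAttention(hidden_dim=256)
obs_embedding, control = attention(torch.randn(12, 256))   # 12 message tokens
```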
We represent the guide's map as INLINEFORM0 , where in this case INLINEFORM1 , where each INLINEFORM2 -dimensional INLINEFORM3 location embedding INLINEFORM4 is computed as the sum of the guide's landmark embeddings for that location.
Motivation. While the guide's map representation contains only local landmark information, the tourist communicates a trajectory of the map (i.e. actions and observations from multiple locations), implying that directly comparing the tourist's message with the individual landmark embeddings is probably suboptimal. Instead, we want to aggregate landmark information from surrounding locations by imputing trajectories over the map to predict locations. We propose a mechanism for translating landmark embeddings according to state transitions (left, right, up, down), which can be expressed as a 2D convolution over the map embeddings. For simplicity, let us assume that the map embedding INLINEFORM0 is 1-dimensional, then a left action can be realized through application of the following INLINEFORM1 kernel: INLINEFORM2 which effectively shifts all values of INLINEFORM3 one position to the left. We propose to learn such state-transitions from the tourist message through a differentiable attention-mask over the spatial dimensions of a 3x3 convolution.
MASC. We linearly project each predicted action embedding INLINEFORM0 to a 9-dimensional vector INLINEFORM1 , normalize it by a softmax and subsequently reshape the vector into a 3x3 mask INLINEFORM2 : DISPLAYFORM0
We learn a 3x3 convolutional kernel INLINEFORM0 , with INLINEFORM1 features, and apply the mask INLINEFORM2 to the spatial dimensions of the convolution by first broadcasting its values along the feature dimensions, i.e. INLINEFORM3 , and subsequently taking the Hadamard product: INLINEFORM4 . For each action step INLINEFORM5 , we then apply a 2D convolution with masked weight INLINEFORM6 to obtain a new map embedding INLINEFORM7 , where we zero-pad the input to maintain identical spatial dimensions.
Prediction model. We repeat the MASC operation INLINEFORM0 times (i.e. once for each action), and then aggregate the map embeddings by a sum over positionally-gated embeddings: INLINEFORM1 . We score locations by taking the dot-product of the observation embedding INLINEFORM2 , which contains information about the sequence of observed landmarks by the tourist, and the map. We compute a distribution over the locations of the map INLINEFORM3 by taking a softmax over the computed scores: DISPLAYFORM0
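The sketch below illustrates the MASC computation and the final location scoring in PyTorch. For brevity it scores only the final map embedding, omitting the positionally-gated aggregation over time steps, and all dimensions are illustrative placeholders rather than the settings used in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MASC(nn.Module):
    """Masked Attention for Spatial Convolutions: a 3x3 mask predicted from the
    action embedding modulates the spatial weights of a learned 3x3 kernel."""
    def __init__(self, channels: int, action_dim: int):
        super().__init__()
        self.to_mask = nn.Linear(action_dim, 9)
        self.kernel = nn.Parameter(torch.randn(channels, channels, 3, 3) * 0.01)

    def forward(self, map_emb: torch.Tensor, action_emb: torch.Tensor) -> torch.Tensor:
        # map_emb: (1, channels, H, W); action_emb: (action_dim,)
        mask = F.softmax(self.to_mask(action_emb), dim=-1).view(1, 1, 3, 3)
        masked_kernel = self.kernel * mask                   # broadcast over feature dims
        return F.conv2d(map_emb, masked_kernel, padding=1)   # zero-padding keeps H and W

channels, action_dim, H, W = 16, 32, 4, 4
masc = MASC(channels, action_dim)
map_emb = torch.randn(1, channels, H, W)        # guide's landmark map embedding
action_embs = torch.randn(3, action_dim)        # T = 3 communicated actions
for t in range(3):                              # apply MASC once per action
    map_emb = masc(map_emb, action_embs[t])

# Score locations by a dot product between the observation embedding and each map cell.
obs_emb = torch.randn(channels)
scores = torch.einsum("c,bchw->bhw", obs_emb, map_emb).flatten()
location_distribution = F.softmax(scores, dim=0)   # distribution over the H*W locations
```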
Predicting T. While emergent communication models use a fixed-length trajectory INLINEFORM0 , natural language messages may differ in the number of communicated observations and actions. Hence, we predict INLINEFORM1 from the communicated message. Specifically, we use a softmax regression layer over the last hidden state INLINEFORM2 of the RNN, and subsequently sample INLINEFORM3 from the resulting multinomial distribution: DISPLAYFORM0
We jointly train the INLINEFORM0 -prediction model via REINFORCE, with the guide's loss as reward function and a mean-reward baseline.
Comparisons
To better analyze the performance of the models incorporating MASC, we compare against a no-MASC baseline in our experiments, as well as a prediction upper bound.
No MASC. We compare the proposed MASC model with a model that does not include this mechanism. Whereas MASC predicts a convolution mask from the tourist message, the “No MASC” model uses INLINEFORM0 , the ordinary convolutional kernel to convolve the map embedding INLINEFORM1 to obtain INLINEFORM2 . We also share the weights of this convolution at each time step.
Prediction upper-bound. Because we have access to the class-conditional likelihood INLINEFORM0 , we are able to compute the Bayes error rate (or irreducible error). No model (no matter how expressive) with any amount of data can ever obtain better localization accuracy as there are multiple locations consistent with the observations and actions.
Results and Discussion
In this section, we describe the findings of various experiments. First, we analyze how much information needs to be communicated for accurate localization in the Talk The Walk environment, and find that a short random path (including actions) is necessary. Next, for emergent language, we show that the MASC architecture can achieve very high localization accuracy, significantly outperforming the baseline that does not include this mechanism. We then turn our attention to the natural language experiments, and find that localization from human utterances is much harder, reaching an accuracy level that is below communicating a single landmark observation. We show that generated utterances from a conditional language model leads to significantly better localization performance, by successfully grounding the utterance on a single landmark observation (but not yet on multiple observations and actions). Finally, we show performance of the localization baseline on the full task, which can be used for future comparisons to this work.
Analysis of Localization Task
Task is not too easy. The upper-bound on localization performance in Table TABREF32 suggests that communicating a single landmark observation is not sufficient for accurate localization of the tourist ( INLINEFORM0 35% accuracy). This is an important result for the full navigation task because the need for two-way communication disappears if localization is too easy; if the guide knows the exact location of the tourist it suffices to communicate a list of instructions, which is then executed by the tourist. The uncertainty in the tourist's location is what drives the dialogue between the two agents.
Importance of actions. We observe that the upper-bound for only communicating observations plateaus around 57% (even for INLINEFORM0 actions), whereas it exceeds 90% when we also take actions into account. This implies that, at least for random walks, it is essential to communicate a trajectory, including observations and actions, in order to achieve high localization accuracy.
Emergent Language Localization
We first report the results for tourist localization with emergent language in Table TABREF32 .
MASC improves performance. The MASC architecture significantly improves performance compared to models that do not include this mechanism. For instance, for INLINEFORM0 action, MASC already achieves 56.09 % on the test set and this further increases to 69.85% for INLINEFORM1 . On the other hand, no-MASC models hit a plateau at 43%. In Appendix SECREF11 , we analyze learned MASC values, and show that communicated actions are often mapped to corresponding state-transitions.
Continuous vs discrete. We observe similar performance for continuous and discrete emergent communication models, implying that a discrete communication channel is not a limiting factor for localization performance.
Natural Language Localization
We report the results of tourist localization with natural language in Table TABREF36 . We compare accuracy of the guide model (with MASC) trained on utterances from (i) humans, (ii) a supervised model with various decoding strategies, and (iii) a policy gradient model optimized with respect to the loss of a frozen, pre-trained guide model on human utterances.
Human utterances. Compared to emergent language, localization from human utterances is much harder, achieving only INLINEFORM0 on the test set. Here, we report localization from a single utterance, but in Appendix SECREF45 we show that including up to five dialogue utterances only improves performance to INLINEFORM1 . We also show that MASC outperforms no-MASC models for natural language communication.
Generated utterances. We also investigate generated tourist utterances from conditional language models. Interestingly, we observe that the supervised model (with greedy and beam-search decoding) as well as the policy gradient model leads to an improvement of more than 10 accuracy points over the human utterances. However, their level of accuracy is slightly below the baseline of communicating a single observation, indicating that these models only learn to ground utterances in a single landmark observation.
Better grounding of generated utterances. We analyze natural language samples in Table TABREF38 , and confirm that, unlike human utterances, the generated utterances are talking about the observed landmarks. This observation explains why the generated utterances obtain higher localization accuracy. The current language models are most successful when conditioned on a single landmark observation; we show in Appendix UID43 that performance quickly deteriorates when the model is conditioned on more observations, suggesting that it can not produce natural language utterances about multiple time steps.
Localization-based Baseline
Table TABREF36 shows results for the best localization models on the full task, evaluated via the random walk protocol defined in Algorithm SECREF12 .
Comparison with human annotators. Interestingly, our best localization model (continuous communication, with MASC, and INLINEFORM0 ) achieves 88.33% on the test set and thus exceeds human performance of 76.74% on the full task. While emergent models appear to be stronger localizers, humans might cope with their localization uncertainty through other mechanisms (e.g. better guidance, bias towards taking particular paths, etc). The simplifying assumption of perfect perception also helps.
Number of actions. Unsurprisingly, humans take fewer steps (roughly 15) than our best random walk model (roughly 34). Our human annotators likely used some form of guidance to navigate faster to the target.
Conclusion
We introduced the Talk The Walk task and dataset, which consists of crowd-sourced dialogues in which two human annotators collaborate to navigate to target locations in the virtual streets of NYC. For the important localization sub-task, we proposed MASC—a novel grounding mechanism to learn state-transition from the tourist's message—and showed that it improves localization performance for emergent and natural language. We use the localization model to provide baseline numbers on the Talk The Walk task, in order to facilitate future research.
Related Work
The Talk the Walk task and dataset facilitate future research on various important subfields of artificial intelligence, including grounded language learning, goal-oriented dialogue research and situated navigation. Here, we describe related previous work in these areas.
Related tasks. There has been a long line of work involving related tasks. Early work on task-oriented dialogue dates back to the early 90s with the introduction of the Map Task BIBREF11 and Maze Game BIBREF25 corpora. Recent efforts have led to larger-scale goal-oriented dialogue datasets, for instance to aid research on visually-grounded dialogue BIBREF2 , BIBREF1 , knowledge-base-grounded discourse BIBREF29 or negotiation tasks BIBREF36 . At the same time, there has been a big push to develop environments for embodied AI, many of which involve agents following natural language instructions with respect to an environment BIBREF13 , BIBREF50 , BIBREF5 , BIBREF39 , BIBREF19 , BIBREF18 , following up on early work in this area BIBREF38 , BIBREF20 . An early example of navigation using neural networks is BIBREF28 , who propose an online learning approach for robot navigation. Recently, there has been increased interest in using end-to-end trainable neural networks for learning to navigate indoor scenes BIBREF27 , BIBREF26 or large cities BIBREF17 , BIBREF40 , but, unlike our work, without multi-agent communication. Also the task of localization (without multi-agent communication) has recently been studied BIBREF18 , BIBREF48 .
Grounded language learning. Grounded language learning is motivated by the observation that humans learn language embodied (grounded) in sensorimotor experience of the physical world BIBREF15 , BIBREF45 . On the one hand, work in multi-modal semantics has shown that grounding can lead to practical improvements on various natural language understanding tasks BIBREF14 , BIBREF31 . In robotics, researchers dissatisfied with purely symbolic accounts of meaning attempted to build robotic systems with the aim of grounding meaning in physical experience of the world BIBREF44 , BIBREF46 . Recently, grounding has also been applied to the learning of sentence representations BIBREF32 , image captioning BIBREF37 , BIBREF49 , visual question answering BIBREF12 , BIBREF22 , visual reasoning BIBREF30 , BIBREF42 , and grounded machine translation BIBREF43 , BIBREF23 . Grounding also plays a crucial role in the emergent research of multi-agent communication, where agents communicate (in natural language or otherwise) in order to solve a task with respect to their shared environment BIBREF35 , BIBREF21 , BIBREF41 , BIBREF24 , BIBREF36 , BIBREF47 , BIBREF34 .
Implementation Details
For the emergent communication models, we use an embedding size INLINEFORM0 . The natural language experiments use 128-dimensional word embeddings and a bidirectional RNN with 256 units. In all experiments, we train the guide with a cross entropy loss using the ADAM optimizer with default hyper-parameters BIBREF33 . We perform early stopping on the validation accuracy, and report the corresponding train, valid and test accuracy. We optimize the localization models with continuous, discrete and natural language communication channels for 200, 200, and 25 epochs, respectively. To facilitate further research on Talk The Walk, we make our code base for reproducing experiments publicly available at https://github.com/facebookresearch/talkthewalk.
| crowd-sourced the collection of the dataset on Amazon Mechanical Turk (MTurk) |
96be67b1729c3a91ddf0ec7d6a80f2aa75e30a30 | 96be67b1729c3a91ddf0ec7d6a80f2aa75e30a30_0 | Q: What language do the agents talk in?
Text: 0pt0.03.03 *
0pt0.030.03 *
0pt0.030.03
We introduce “Talk The Walk”, the first large-scale dialogue dataset grounded in action and perception. The task involves two agents (a “guide” and a “tourist”) that communicate via natural language in order to achieve a common goal: having the tourist navigate to a given target location. The task and dataset, which are described in detail, are challenging and their full solution is an open problem that we pose to the community. We (i) focus on the task of tourist localization and develop the novel Masked Attention for Spatial Convolutions (MASC) mechanism that allows for grounding tourist utterances into the guide's map, (ii) show it yields significant improvements for both emergent and natural language communication, and (iii) using this method, we establish non-trivial baselines on the full task.
Introduction
As artificial intelligence plays an ever more prominent role in everyday human lives, it becomes increasingly important to enable machines to communicate via natural language—not only with humans, but also with each other. Learning algorithms for natural language understanding, such as in machine translation and reading comprehension, have progressed at an unprecedented rate in recent years, but still rely on static, large-scale, text-only datasets that lack crucial aspects of how humans understand and produce natural language. Namely, humans develop language capabilities by being embodied in an environment which they can perceive, manipulate and move around in; and by interacting with other humans. Hence, we argue that we should incorporate all three fundamental aspects of human language acquisition—perception, action and interactive communication—and develop a task and dataset to that effect.
We introduce the Talk the Walk dataset, where the aim is for two agents, a “guide” and a “tourist”, to interact with each other via natural language in order to achieve a common goal: having the tourist navigate towards the correct location. The guide has access to a map and knows the target location, but does not know where the tourist is; the tourist has a 360-degree view of the world, but knows neither the target location on the map nor the way to it. The agents need to work together through communication in order to successfully solve the task. An example of the task is given in Figure FIGREF3 .
Grounded language learning has (re-)gained traction in the AI community, and much attention is currently devoted to virtual embodiment—the development of multi-agent communication tasks in virtual environments—which has been argued to be a viable strategy for acquiring natural language semantics BIBREF0 . Various related tasks have recently been introduced, but in each case with some limitations. Although visually grounded dialogue tasks BIBREF1 , BIBREF2 comprise perceptual grounding and multi-agent interaction, their agents are passive observers and do not act in the environment. By contrast, instruction-following tasks, such as VLN BIBREF3 , involve action and perception but lack natural language interaction with other agents. Furthermore, some of these works use simulated environments BIBREF4 and/or templated language BIBREF5 , which arguably oversimplifies real perception or natural language, respectively. See Table TABREF15 for a comparison.
Talk The Walk is the first task to bring all three aspects together: perception for the tourist observing the world, action for the tourist to navigate through the environment, and interactive dialogue for the tourist and guide to work towards their common goal. To collect grounded dialogues, we constructed a virtual 2D grid environment by manually capturing 360-views of several neighborhoods in New York City (NYC). As the main focus of our task is on interactive dialogue, we limit the difficulty of the control problem by having the tourist navigating a 2D grid via discrete actions (turning left, turning right and moving forward). Our street view environment was integrated into ParlAI BIBREF6 and used to collect a large-scale dataset on Mechanical Turk involving human perception, action and communication.
We argue that for artificial agents to solve this challenging problem, some fundamental architecture designs are missing, and our hope is that this task motivates their innovation. To that end, we focus on the task of localization and develop the novel Masked Attention for Spatial Convolutions (MASC) mechanism. To model the interaction between language and action, this architecture repeatedly conditions the spatial dimensions of a convolution on the communicated message sequence.
This work makes the following contributions: 1) We present the first large-scale dialogue dataset grounded in action and perception; 2) We introduce the MASC architecture for localization and show it yields improvements for both emergent and natural language; 3) Using localization models, we establish initial baselines on the full task; 4) We show that our best model exceeds human performance under the assumption of “perfect perception” and with a learned emergent communication protocol, and sets a non-trivial baseline with natural language.
Talk The Walk
We create a perceptual environment by manually capturing several neighborhoods of New York City (NYC) with a 360 camera. Most parts of the city are grid-like and uniform, which makes it well-suited for obtaining a 2D grid. For Talk The Walk, we capture parts of Hell's Kitchen, East Village, the Financial District, Williamsburg and the Upper East Side—see Figure FIGREF66 in Appendix SECREF14 for their respective locations within NYC. For each neighborhood, we choose an approximately 5x5 grid and capture a 360 view on all four corners of each intersection, leading to a grid-size of roughly 10x10 per neighborhood.
The tourist's location is given as a tuple INLINEFORM0 , where INLINEFORM1 are the coordinates and INLINEFORM2 signifies the orientation (north, east, south or west). The tourist can take three actions: turn left, turn right and go forward. For moving forward, we add INLINEFORM3 , INLINEFORM4 , INLINEFORM5 , INLINEFORM6 to the INLINEFORM7 coordinates for the respective orientations. Upon a turning action, the orientation is updated by INLINEFORM8 where INLINEFORM9 for left and INLINEFORM10 for right. If the tourist moves outside the grid, we issue a warning that they cannot go in that direction and do not update the location. Moreover, tourists are shown different types of transitions: a short transition for actions that bring the tourist to a different corner of the same intersection; and a longer transition for actions that bring them to a new intersection.
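To make these transition dynamics concrete, the following minimal Python sketch implements the grid mechanics described above. The function and variable names are ours, and the orientation-to-offset mapping and grid bounds are illustrative assumptions rather than the released implementation.

from dataclasses import dataclass

# Orientations indexed 0..3 = north, east, south, west (assumed ordering).
FORWARD_DELTAS = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # (dx, dy) added on "forward"

@dataclass
class TouristState:
    x: int
    y: int
    o: int  # orientation index

def step(state: TouristState, action: str, grid_w: int = 10, grid_h: int = 10) -> TouristState:
    """Apply one discrete action; moving off the grid leaves the location unchanged."""
    if action == "turn_left":
        return TouristState(state.x, state.y, (state.o - 1) % 4)
    if action == "turn_right":
        return TouristState(state.x, state.y, (state.o + 1) % 4)
    if action == "forward":
        dx, dy = FORWARD_DELTAS[state.o]
        nx, ny = state.x + dx, state.y + dy
        if 0 <= nx < grid_w and 0 <= ny < grid_h:
            return TouristState(nx, ny, state.o)
        return state  # a warning is issued instead of updating the location
    raise ValueError(f"unknown action: {action}")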
The guide observes a map that corresponds to the tourist's environment. We exploit the fact that urban areas like NYC are full of local businesses, and overlay the map with these landmarks as localization points for our task. Specifically, we manually annotate each corner of the intersection with a set of landmarks INLINEFORM0 , each coming from one of the following categories:
Bar Playfield Bank Hotel Shop Subway Coffee Shop Restaurant Theater
The right-side of Figure FIGREF3 illustrates how the map is presented. Note that within-intersection transitions have a smaller grid distance than transitions to new intersections. To ensure that the localization task is not too easy, we do not include street names in the overhead map and keep the landmark categories coarse. That is, the dialogue is driven by uncertainty in the tourist's current location and the properties of the target location: if the exact location and orientation of the tourist were known, it would suffice to communicate a sequence of actions.
Task
For the Talk The Walk task, we randomly choose one of the five neighborhoods, and subsample a 4x4 grid (one block with four complete intersections) from the entire grid. We specify the boundaries of the grid by the top-left and bottom-right corners INLINEFORM0 . Next, we construct the overhead map of the environment, i.e. INLINEFORM1 with INLINEFORM2 and INLINEFORM3 . We subsequently sample a start location and orientation INLINEFORM4 and a target location INLINEFORM5 at random.
The shared goal of the two agents is to navigate the tourist to the target location INLINEFORM0 , which is only known to the guide. The tourist perceives a “street view” planar projection INLINEFORM1 of the 360 image at location INLINEFORM2 and can simultaneously chat with the guide and navigate through the environment. The guide's role consists of reading the tourist description of the environment, building a “mental map” of their current position and providing instructions for navigating towards the target location. Whenever the guide believes that the tourist has reached the target location, they instruct the system to evaluate the tourist's location. The task ends when the evaluation is successful—i.e., when INLINEFORM3 —or otherwise continues until a total of three failed attempts. The additional attempts are meant to ease the task for humans, as we found that they otherwise often fail at the task but still end up close to the target location, e.g., at the wrong corner of the correct intersection.
Data Collection
We crowd-sourced the collection of the dataset on Amazon Mechanical Turk (MTurk). We use the MTurk interface of ParlAI BIBREF6 to render 360 images via WebGL and dynamically display neighborhood maps with an HTML5 canvas. Detailed task instructions, which were also given to our workers before they started their task, are shown in Appendix SECREF15 . We paired Turkers at random and let them alternate between the tourist and guide role across different HITs.
Dataset Statistics
The Talk The Walk dataset consists of over 10k successful dialogues—see Table FIGREF66 in the appendix for the dataset statistics split by neighborhood. Turkers successfully completed INLINEFORM0 of all finished tasks (we use this statistic as the human success rate). More than six hundred participants successfully completed at least one Talk The Walk HIT. Although the Visual Dialog BIBREF2 and GuessWhat BIBREF1 datasets are larger, the collected Talk The Walk dialogs are significantly longer. On average, Turkers needed more than 62 acts (i.e utterances and actions) before they successfully completed the task, whereas Visual Dialog requires 20 acts. The majority of acts comprise the tourist's actions, with on average more than 44 actions per dialogue. The guide produces roughly 9 utterances per dialogue, slightly more than the tourist's 8 utterances. Turkers use diverse discourse, with a vocabulary size of more than 10K (calculated over all successful dialogues). An example from the dataset is shown in Appendix SECREF14 . The dataset is available at https://github.com/facebookresearch/talkthewalk.
Experiments
We investigate the difficulty of the proposed task by establishing initial baselines. The final Talk The Walk task is challenging and encompasses several important sub-tasks, ranging from landmark recognition to tourist localization and natural language instruction-giving. Arguably the most important sub-task is localization: without such capabilities the guide can not tell whether the tourist reached the target location. In this work, we establish a minimal baseline for Talk The Walk by utilizing agents trained for localization. Specifically, we let trained tourist models undertake random walks, using the following protocol: at each step, the tourist communicates its observations and actions to the guide, who predicts the tourist's location. If the guide predicts that the tourist is at target, we evaluate its location. If successful, the task ends, otherwise we continue until there have been three wrong evaluations. The protocol is given as pseudo-code in Appendix SECREF12 .
Tourist Localization
The designed navigation protocol relies on a trained localization model that predicts the tourist's location from a communicated message. Before we formalize this localization sub-task in Section UID21 , we further introduce two simplifying assumptions—perfect perception and orientation-agnosticism—so as to overcome some of the difficulties we encountered in preliminary experiments.
Perfect Perception. Early experiments revealed that perceptual grounding of landmarks is difficult: we set up a landmark classification problem, on which models with extracted CNN BIBREF7 or text recognition features BIBREF8 barely outperform a random baseline—see Appendix SECREF13 for full details. This finding implies that localization models from image input are limited by their ability to recognize landmarks, and, as a result, would not generalize to unseen environments. To ensure that perception is not the limiting factor when investigating the landmark-grounding and action-grounding capabilities of localization models, we assume “perfect perception”: in lieu of the 360 image view, the tourist is given the landmarks at its current location. More formally, each state observation INLINEFORM0 now equals the set of landmarks at the INLINEFORM1 -location, i.e. INLINEFORM2 . If the INLINEFORM3 -location does not have any visible landmarks, we return a single “empty corner” symbol. We stress that our findings—including a novel architecture for grounding actions into an overhead map, see Section UID28 —should carry over to settings without the perfect perception assumption.
Orientation-agnosticism. We opt to ignore the tourist's orientation, which simplifies the set of actions to [Left, Right, Up, Down], corresponding to adding [(-1, 0), (1, 0), (0, 1), (0, -1)] to the current INLINEFORM0 coordinates, respectively. Note that actions are now coupled to an orientation on the map—e.g. up is equal to going north—and this implicitly assumes that the tourist has access to a compass. This also affects perception, since the tourist now has access to views from all orientations; in conjunction with “perfect perception”, this means that only landmarks at the current corner are given, whereas landmarks at other corners (e.g. across the street) are not visible.
Even with these simplifications, the localization-based baseline comes with its own set of challenges. As we show in Section SECREF34 , the task requires communication about a short (random) path—i.e., not only a sequence of observations but also actions—in order to achieve high localization accuracy. This means that the guide needs to decode observations from multiple time steps, as well as understand their 2D spatial arrangement as communicated via the sequence of actions. Thus, in order to get to a good understanding of the task, we thoroughly examine whether the agents can learn a communication protocol that simultaneously grounds observations and actions into the guide's map. In doing so, we thoroughly study the role of the communication channel in the localization task, by investigating increasingly constrained forms of communication: from differentiable continuous vectors to emergent discrete symbols to the full complexity of natural language.
The full navigation baseline hinges on a localization model from random trajectories. While we can sample random actions in the emergent communication setup, this is not possible for the natural language setup because the messages are coupled to the trajectories of the human annotators. This leads to slightly different problem setups, as described below.
Emergent language. A tourist, starting from a random location, takes INLINEFORM0 random actions INLINEFORM1 to reach target location INLINEFORM2 . Every location in the environment has a corresponding set of landmarks INLINEFORM3 for each of the INLINEFORM4 coordinates. As the tourist navigates, the agent perceives INLINEFORM5 state-observations INLINEFORM6 where each observation INLINEFORM7 consists of a set of INLINEFORM8 landmark symbols INLINEFORM9 . Given the observations INLINEFORM10 and actions INLINEFORM11 , the tourist generates a message INLINEFORM12 which is communicated to the other agent. The objective of the guide is to predict the location INLINEFORM13 from the tourist's message INLINEFORM14 .
Natural language. In contrast to our emergent communication experiments, we do not take random actions but instead extract actions, observations, and messages from the dataset. Specifically, we consider each tourist utterance (i.e. at any point in the dialogue), obtain the current tourist location as target location INLINEFORM0 , the utterance itself as message INLINEFORM1 , and the sequence of observations and actions that took place between the current and previous tourist utterance as INLINEFORM2 and INLINEFORM3 , respectively. Similar to the emergent language setting, the guide's objective is to predict the target location INLINEFORM4 from the tourist message INLINEFORM5 . We conduct experiments with INLINEFORM6 taken from the dataset and with INLINEFORM7 generated from the extracted observations INLINEFORM8 and actions INLINEFORM9 .
Model
This section outlines the tourist and guide architectures. We first describe how the tourist produces messages for the various communication channels across which the messages are sent. We subsequently describe how these messages are processed by the guide, and introduce the novel Masked Attention for Spatial Convolutions (MASC) mechanism that allows for grounding into the 2D overhead map in order to predict the tourist's location.
The Tourist
For each of the communication channels, we outline the procedure for generating a message INLINEFORM0 . Given a set of state observations INLINEFORM1 , we represent each observation by summing the INLINEFORM2 -dimensional embeddings of the observed landmarks, i.e. for INLINEFORM3 , INLINEFORM4 , where INLINEFORM5 is the landmark embedding lookup table. In addition, we embed action INLINEFORM6 into a INLINEFORM7 -dimensional embedding INLINEFORM8 via a look-up table INLINEFORM9 . We experiment with three types of communication channel.
Continuous vectors. The tourist has access to observations of several time steps, whose order is important for accurate localization. Because summing embeddings is order-invariant, we introduce a sum over positionally-gated embeddings, which, conditioned on time step INLINEFORM0 , pushes embedding information into the appropriate dimensions. More specifically, we generate an observation message INLINEFORM1 , where INLINEFORM2 is a learned gating vector for time step INLINEFORM3 . In a similar fashion, we produce action message INLINEFORM4 and send the concatenated vectors INLINEFORM5 as message to the guide. We can interpret continuous vector communication as a single, monolithic model because its architecture is end-to-end differentiable, enabling gradient-based optimization for training.
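The positionally-gated sum can be sketched in a few lines of PyTorch-style Python; the class and parameter names below are illustrative rather than taken from the released code.

import torch
import torch.nn as nn

class PositionallyGatedSum(nn.Module):
    """Sketch of the order-aware pooling described above: each time step t has a
    learned gating vector g_t, and the message is the sum over g_t * e_t, so that
    information from different steps is pushed into different dimensions."""
    def __init__(self, max_steps: int, dim: int):
        super().__init__()
        self.gates = nn.Parameter(torch.randn(max_steps, dim))

    def forward(self, step_embeddings: torch.Tensor) -> torch.Tensor:
        # step_embeddings: (T, dim), one summed landmark/action embedding per step
        T = step_embeddings.size(0)
        return (self.gates[:T] * step_embeddings).sum(dim=0)

# The observation and action messages are produced by two such modules and
# concatenated before being sent to the guide.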
Discrete symbols. Like the continuous vector communication model, with discrete communication the tourist also uses separate channels for observations and actions, as well as a sum over positionally gated embeddings to generate observation embedding INLINEFORM0 . We pass this embedding through a sigmoid and generate a message INLINEFORM1 by sampling from the resulting Bernoulli distributions:
INLINEFORM0
The action message INLINEFORM0 is produced in the same way, and we obtain the final tourist message INLINEFORM1 through concatenating the messages.
The communication channel's sampling operation makes the model non-differentiable, so we use policy gradients BIBREF9 , BIBREF10 to train the parameters INLINEFORM0 of the tourist model. That is, we estimate the gradient by INLINEFORM1
where the reward function INLINEFORM0 is the negative guide's loss (see Section SECREF25 ) and INLINEFORM1 is a state-value baseline to reduce variance. We use a linear transformation over the concatenated embeddings as baseline prediction, i.e. INLINEFORM2 , and train it with a mean squared error loss.
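A minimal sketch of the discrete channel and its policy-gradient objective is given below, assuming the reward is the negative guide loss and the baseline is a learned scalar per example; the function names and numerical constants are ours.

import torch
import torch.nn.functional as F

def sample_discrete_message(embedding: torch.Tensor):
    """Pass an (observation or action) embedding through a sigmoid and sample a
    binary message from the resulting Bernoulli distributions. Also returns the
    log-probability of the sampled message, needed for the policy gradient."""
    probs = torch.sigmoid(embedding)
    message = torch.bernoulli(probs).detach()
    log_prob = (message * torch.log(probs + 1e-8)
                + (1.0 - message) * torch.log(1.0 - probs + 1e-8)).sum(dim=-1)
    return message, log_prob

def tourist_policy_loss(log_prob, baseline, guide_loss):
    """REINFORCE objective with a learned state-value baseline: the reward is the
    negative guide loss (per example), and the baseline is regressed onto the
    reward with a mean squared error term."""
    reward = -guide_loss.detach()
    advantage = reward - baseline.detach()
    return -(advantage * log_prob).mean() + F.mse_loss(baseline, reward)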
Natural Language. Because observations and actions are of variable length, we use an LSTM encoder over the sequence of observation embeddings INLINEFORM0 , and extract its last hidden state INLINEFORM1 . We use a separate LSTM encoder for action embeddings INLINEFORM2 , and concatenate both INLINEFORM3 and INLINEFORM4 to the input of the LSTM decoder at each time step: DISPLAYFORM0
where INLINEFORM0 is a look-up table, taking input tokens INLINEFORM1 . We train with teacher-forcing, i.e. we optimize the cross-entropy loss: INLINEFORM2 . At test time, we explore the following decoding strategies: greedy, sampling and beam search. We also fine-tune a trained tourist model (starting from a pre-trained model) with policy gradients in order to minimize the guide's prediction loss.
The Guide
Given a tourist message INLINEFORM0 describing their observations and actions, the objective of the guide is to predict the tourist's location on the map. First, we outline the procedure for extracting observation embedding INLINEFORM1 and action embeddings INLINEFORM2 from the message INLINEFORM3 for each of the types of communication. Next, we discuss the MASC mechanism that takes the observations and actions in order to ground them on the guide's map in order to predict the tourist's location.
Continuous. For the continuous communication model, we assign the observation message to the observation embedding, i.e. INLINEFORM0 . To extract the action embedding for time step INLINEFORM1 , we apply a linear layer to the action message, i.e. INLINEFORM2 .
Discrete. For discrete communication, we obtain the observation embedding INLINEFORM0 by applying a linear layer to the observation message, i.e. INLINEFORM1 . Similar to the continuous communication model, we use a linear layer over action message INLINEFORM2 to obtain action embedding INLINEFORM3 for time step INLINEFORM4 .
Natural Language. The message INLINEFORM0 contains information about observations and actions, so we use a recurrent neural network with an attention mechanism to extract the relevant observation and action embeddings. Specifically, we encode the message INLINEFORM1 , consisting of INLINEFORM2 tokens INLINEFORM3 taken from vocabulary INLINEFORM4 , with a bidirectional LSTM: DISPLAYFORM0
where INLINEFORM0 is the word embedding look-up table. We obtain observation embedding INLINEFORM1 through an attention mechanism over the hidden states INLINEFORM2 : DISPLAYFORM0
where INLINEFORM0 is a learned control embedding that is updated through a linear transformation of the previous control and observation embedding: INLINEFORM1 . We use the same mechanism to extract the action embedding INLINEFORM2 from the hidden states. For the observation embedding, we obtain the final representation by summing positionally gated embeddings, i.e., INLINEFORM3 .
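The attention step can be sketched as follows; the exact parameterization of the control update is our reading of the description above, not the released implementation.

import torch
import torch.nn as nn

class MessageAttention(nn.Module):
    """Sketch of pooling the bidirectional-LSTM states of the tourist message into
    an observation (or action) embedding via attention with a control embedding."""
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.update = nn.Linear(2 * hidden_dim, hidden_dim)  # control update W[c; e]

    def forward(self, hidden_states: torch.Tensor, control: torch.Tensor):
        # hidden_states: (K, hidden_dim); control: (hidden_dim,)
        attn = torch.softmax(hidden_states @ control, dim=0)          # (K,)
        pooled = (attn.unsqueeze(-1) * hidden_states).sum(dim=0)      # (hidden_dim,)
        new_control = self.update(torch.cat([control, pooled], dim=-1))
        return pooled, new_control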
We represent the guide's map as INLINEFORM0 , where in this case INLINEFORM1 , where each INLINEFORM2 -dimensional INLINEFORM3 location embedding INLINEFORM4 is computed as the sum of the guide's landmark embeddings for that location.
Motivation. While the guide's map representation contains only local landmark information, the tourist communicates a trajectory of the map (i.e. actions and observations from multiple locations), implying that directly comparing the tourist's message with the individual landmark embeddings is probably suboptimal. Instead, we want to aggregate landmark information from surrounding locations by imputing trajectories over the map to predict locations. We propose a mechanism for translating landmark embeddings according to state transitions (left, right, up, down), which can be expressed as a 2D convolution over the map embeddings. For simplicity, let us assume that the map embedding INLINEFORM0 is 1-dimensional, then a left action can be realized through application of the following INLINEFORM1 kernel: INLINEFORM2 which effectively shifts all values of INLINEFORM3 one position to the left. We propose to learn such state-transitions from the tourist message through a differentiable attention-mask over the spatial dimensions of a 3x3 convolution.
MASC. We linearly project each predicted action embedding INLINEFORM0 to a 9-dimensional vector INLINEFORM1 , normalize it by a softmax and subsequently reshape the vector into a 3x3 mask INLINEFORM2 : DISPLAYFORM0
We learn a 3x3 convolutional kernel INLINEFORM0 , with INLINEFORM1 features, and apply the mask INLINEFORM2 to the spatial dimensions of the convolution by first broadcasting its values along the feature dimensions, i.e. INLINEFORM3 , and subsequently taking the Hadamard product: INLINEFORM4 . For each action step INLINEFORM5 , we then apply a 2D convolution with masked weight INLINEFORM6 to obtain a new map embedding INLINEFORM7 , where we zero-pad the input to maintain identical spatial dimensions.
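To make the masked convolution concrete, here is a minimal PyTorch-style sketch of a single MASC step; tensor shapes, initialization and naming are illustrative assumptions rather than the released implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MASC(nn.Module):
    """Sketch of Masked Attention for Spatial Convolutions: an action embedding is
    projected to 9 logits, softmax-normalized into a 3x3 spatial mask, and used to
    gate the spatial dimensions of a learned 3x3 convolution over the map."""
    def __init__(self, emb_dim: int, action_dim: int):
        super().__init__()
        self.to_mask = nn.Linear(action_dim, 9)
        self.kernel = nn.Parameter(torch.randn(emb_dim, emb_dim, 3, 3) * 0.02)

    def forward(self, map_emb: torch.Tensor, action_emb: torch.Tensor) -> torch.Tensor:
        # map_emb: (1, emb_dim, H, W); action_emb: (action_dim,)
        mask = F.softmax(self.to_mask(action_emb), dim=-1).view(1, 1, 3, 3)
        masked_kernel = self.kernel * mask  # broadcast the mask over feature dims
        # Zero padding keeps the output map the same spatial size as the input.
        return F.conv2d(map_emb, masked_kernel, padding=1)

Applying one such step per communicated action, and then scoring locations against the observation embedding, yields the prediction model described next.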
Prediction model. We repeat the MASC operation INLINEFORM0 times (i.e. once for each action), and then aggregate the map embeddings by a sum over positionally-gated embeddings: INLINEFORM1 . We score locations by taking the dot-product of the observation embedding INLINEFORM2 , which contains information about the sequence of landmarks observed by the tourist, and the map. We compute a distribution over the locations of the map INLINEFORM3 by taking a softmax over the computed scores: DISPLAYFORM0
Predicting T. While emergent communication models use a fixed-length trajectory INLINEFORM0 , natural language messages may differ in the number of communicated observations and actions. Hence, we predict INLINEFORM1 from the communicated message. Specifically, we use a softmax regression layer over the last hidden state INLINEFORM2 of the RNN, and subsequently sample INLINEFORM3 from the resulting multinomial distribution: DISPLAYFORM0
We jointly train the INLINEFORM0 -prediction model via REINFORCE, with the guide's loss as reward function and a mean-reward baseline.
Comparisons
To better analyze the performance of the models incorporating MASC, we compare against a no-MASC baseline in our experiments, as well as a prediction upper bound.
No MASC. We compare the proposed MASC model with a model that does not include this mechanism. Whereas MASC predicts a convolution mask from the tourist message, the “No MASC” model uses INLINEFORM0 , the ordinary convolutional kernel, to convolve the map embedding INLINEFORM1 to obtain INLINEFORM2 . We also share the weights of this convolution at each time step.
Prediction upper-bound. Because we have access to the class-conditional likelihood INLINEFORM0 , we are able to compute the Bayes error rate (or irreducible error). No model (no matter how expressive) with any amount of data can ever obtain better localization accuracy as there are multiple locations consistent with the observations and actions.
Results and Discussion
In this section, we describe the findings of various experiments. First, we analyze how much information needs to be communicated for accurate localization in the Talk The Walk environment, and find that a short random path (including actions) is necessary. Next, for emergent language, we show that the MASC architecture can achieve very high localization accuracy, significantly outperforming the baseline that does not include this mechanism. We then turn our attention to the natural language experiments, and find that localization from human utterances is much harder, reaching an accuracy level that is below communicating a single landmark observation. We show that generated utterances from a conditional language model leads to significantly better localization performance, by successfully grounding the utterance on a single landmark observation (but not yet on multiple observations and actions). Finally, we show performance of the localization baseline on the full task, which can be used for future comparisons to this work.
Analysis of Localization Task
Task is not too easy. The upper bound on localization performance in Table TABREF32 suggests that communicating a single landmark observation is not sufficient for accurate localization of the tourist ( INLINEFORM0 35% accuracy). This is an important result for the full navigation task because the need for two-way communication disappears if localization is too easy; if the guide knows the exact location of the tourist it suffices to communicate a list of instructions, which is then executed by the tourist. The uncertainty in the tourist's location is what drives the dialogue between the two agents.
Importance of actions. We observe that the upper bound for only communicating observations plateaus around 57% (even for INLINEFORM0 actions), whereas it exceeds 90% when we also take actions into account. This implies that, at least for random walks, it is essential to communicate a trajectory, including observations and actions, in order to achieve high localization accuracy.
Emergent Language Localization
We first report the results for tourist localization with emergent language in Table TABREF32 .
MASC improves performance. The MASC architecture significantly improves performance compared to models that do not include this mechanism. For instance, for INLINEFORM0 action, MASC already achieves 56.09% on the test set and this further increases to 69.85% for INLINEFORM1 . On the other hand, no-MASC models hit a plateau at 43%. In Appendix SECREF11 , we analyze learned MASC values, and show that communicated actions are often mapped to corresponding state-transitions.
Continuous vs discrete. We observe similar performance for continuous and discrete emergent communication models, implying that a discrete communication channel is not a limiting factor for localization performance.
Natural Language Localization
We report the results of tourist localization with natural language in Table TABREF36 . We compare accuracy of the guide model (with MASC) trained on utterances from (i) humans, (ii) a supervised model with various decoding strategies, and (iii) a policy gradient model optimized with respect to the loss of a frozen, pre-trained guide model on human utterances.
Human utterances. Compared to emergent language, localization from human utterances is much harder, achieving only INLINEFORM0 on the test set. Here, we report localization from a single utterance, but in Appendix SECREF45 we show that including up to five dialogue utterances only improves performance to INLINEFORM1 . We also show that MASC outperforms no-MASC models for natural language communication.
Generated utterances. We also investigate generated tourist utterances from conditional language models. Interestingly, we observe that both the supervised model (with greedy and beam-search decoding) and the policy gradient model lead to an improvement of more than 10 accuracy points over the human utterances. However, their level of accuracy is slightly below the baseline of communicating a single observation, indicating that these models only learn to ground utterances in a single landmark observation.
Better grounding of generated utterances. We analyze natural language samples in Table TABREF38 , and confirm that, unlike human utterances, the generated utterances are talking about the observed landmarks. This observation explains why the generated utterances obtain higher localization accuracy. The current language models are most successful when conditioned on a single landmark observation; we show in Appendix UID43 that performance quickly deteriorates when the model is conditioned on more observations, suggesting that it cannot produce natural language utterances about multiple time steps.
Localization-based Baseline
Table TABREF36 shows results for the best localization models on the full task, evaluated via the random walk protocol defined in Algorithm SECREF12 .
Comparison with human annotators. Interestingly, our best localization model (continuous communication, with MASC, and INLINEFORM0 ) achieves 88.33% on the test set and thus exceeds human performance of 76.74% on the full task. While emergent models appear to be stronger localizers, humans might cope with their localization uncertainty through other mechanisms (e.g. better guidance, bias towards taking particular paths, etc). The simplifying assumption of perfect perception also helps.
Number of actions. Unsurprisingly, humans take fewer steps (roughly 15) than our best random walk model (roughly 34). Our human annotators likely used some form of guidance to navigate faster to the target.
Conclusion
We introduced the Talk The Walk task and dataset, which consists of crowd-sourced dialogues in which two human annotators collaborate to navigate to target locations in the virtual streets of NYC. For the important localization sub-task, we proposed MASC—a novel grounding mechanism to learn state-transition from the tourist's message—and showed that it improves localization performance for emergent and natural language. We use the localization model to provide baseline numbers on the Talk The Walk task, in order to facilitate future research.
Related Work
The Talk the Walk task and dataset facilitate future research on various important subfields of artificial intelligence, including grounded language learning, goal-oriented dialogue research and situated navigation. Here, we describe related previous work in these areas.
Related tasks. There has been a long line of work involving related tasks. Early work on task-oriented dialogue dates back to the early 90s with the introduction of the Map Task BIBREF11 and Maze Game BIBREF25 corpora. Recent efforts have led to larger-scale goal-oriented dialogue datasets, for instance to aid research on visually-grounded dialogue BIBREF2 , BIBREF1 , knowledge-base-grounded discourse BIBREF29 or negotiation tasks BIBREF36 . At the same time, there has been a big push to develop environments for embodied AI, many of which involve agents following natural language instructions with respect to an environment BIBREF13 , BIBREF50 , BIBREF5 , BIBREF39 , BIBREF19 , BIBREF18 , following up on early work in this area BIBREF38 , BIBREF20 . An early example of navigation using neural networks is BIBREF28 , who propose an online learning approach for robot navigation. Recently, there has been increased interest in using end-to-end trainable neural networks for learning to navigate indoor scenes BIBREF27 , BIBREF26 or large cities BIBREF17 , BIBREF40 , but, unlike our work, without multi-agent communication. Also the task of localization (without multi-agent communication) has recently been studied BIBREF18 , BIBREF48 .
Grounded language learning. Grounded language learning is motivated by the observation that humans learn language embodied (grounded) in sensorimotor experience of the physical world BIBREF15 , BIBREF45 . On the one hand, work in multi-modal semantics has shown that grounding can lead to practical improvements on various natural language understanding tasks BIBREF14 , BIBREF31 . In robotics, researchers dissatisfied with purely symbolic accounts of meaning attempted to build robotic systems with the aim of grounding meaning in physical experience of the world BIBREF44 , BIBREF46 . Recently, grounding has also been applied to the learning of sentence representations BIBREF32 , image captioning BIBREF37 , BIBREF49 , visual question answering BIBREF12 , BIBREF22 , visual reasoning BIBREF30 , BIBREF42 , and grounded machine translation BIBREF43 , BIBREF23 . Grounding also plays a crucial role in the emergent research of multi-agent communication, where agents communicate (in natural language or otherwise) in order to solve a task, with respect to their shared environment BIBREF35 , BIBREF21 , BIBREF41 , BIBREF24 , BIBREF36 , BIBREF47 , BIBREF34 .
Implementation Details
For the emergent communication models, we use an embedding size INLINEFORM0 . The natural language experiments use 128-dimensional word embeddings and a bidirectional RNN with 256 units. In all experiments, we train the guide with a cross entropy loss using the ADAM optimizer with default hyper-parameters BIBREF33 . We perform early stopping on the validation accuracy, and report the corresponding train, valid and test accuracy. We optimize the localization models with continuous, discrete and natural language communication channels for 200, 200, and 25 epochs, respectively. To facilitate further research on Talk The Walk, we make our code base for reproducing experiments publicly available at https://github.com/facebookresearch/talkthewalk.
Additional Natural Language Experiments
First, we investigate the sensitivity of tourist generation models to the trajectory length, finding that the model conditioned on a single observation (i.e. INLINEFORM0 ) achieves best performance. In the next subsection, we further analyze localization models from human utterances by investigating MASC and no-MASC models with increasing dialogue context.
Tourist Generation Models
After training the supervised tourist model (conditioned on observations and action from human expert trajectories), there are two ways to train an accompanying guide model. We can optimize a location prediction model on either (i) extracted human trajectories (as in the localization setup from human utterances) or (ii) on all random paths of length INLINEFORM0 (as in the full task evaluation). Here, we investigate the impact of (1) using either human or random trajectories for training the guide model, and (2) the effect of varying the path length INLINEFORM1 during the full-task evaluation. For random trajectories, guide training uses the same path length INLINEFORM2 as is used during evaluation. We use a pre-trained tourist model with greedy decoding for generating the tourist utterances. Table TABREF40 summarizes the results.
Human vs random trajectories. We only observe small improvements for training on random trajectories. Human trajectories are thus diverse enough to generalize to random trajectories.
Effect of path length. There is a strong negative correlation between task success and the conditioned trajectory length. We observe that the full task performance quickly deteriorates for both human and random trajectories. This suggests that the tourist generation model cannot produce natural language utterances that describe multiple observations and actions. Although it is possible that the guide model cannot process such utterances, this is not very likely because the MASC architecture handles such messages successfully for emergent communication.
We report localization performance of tourist utterances generated by beam search decoding of varying beam size in Table TABREF40 . We find that performance decreases from 29.05% to 20.87% accuracy on the test set when we increase the beam-size from one to eight.
Localization from Human Utterances
We conduct an ablation study for MASC on natural language with varying dialogue context. Specifically, we compare localization accuracy of MASC and no-MASC models trained on the last [1, 3, 5] utterances of the dialogue (including guide utterances). We report these results in Table TABREF41 . In all cases, MASC outperforms the no-MASC models by several accuracy points. We also observe that mean predicted INLINEFORM0 (over the test set) increases from 1 to 2 when more dialogue context is included.
Visualizing MASC predictions
Figure FIGREF46 shows the MASC values for a learned model with emergent discrete communications and INLINEFORM0 actions. Specifically, we look at the predicted MASC values for different action sequences taken by the tourist. We observe that the first action is always mapped to the correct state-transition, but that the second and third MASC values do not always correspond to right state-transitions.
Evaluation on Full Setup
We provide pseudo-code for evaluation of localization models on the full task in Algorithm SECREF12 , as well as results for all emergent communication models in Table TABREF55 .
Algorithm: performance evaluation of the location prediction model on the full Talk The Walk setup.
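Since the algorithm listing itself did not survive extraction, the protocol from the Experiments section can be sketched as follows; the env, tourist and guide interfaces are hypothetical names introduced for illustration.

import random

def evaluate_full_task(env, tourist, guide, target, max_wrong_evals=3):
    """Sketch of the random-walk evaluation protocol: the tourist communicates its
    recent observations and actions, the guide predicts a location, and an
    evaluation is triggered whenever the prediction equals the target."""
    location = env.random_start()
    observations, actions, steps, wrong_evals = [env.observe(location)], [], 0, 0
    while wrong_evals < max_wrong_evals:
        message = tourist.communicate(observations, actions)
        predicted = guide.localize(message)
        if predicted == target:      # guide believes the tourist has arrived
            if location == target:
                return True, steps   # successful evaluation ends the episode
            wrong_evals += 1         # failed evaluation; the walk continues
        action = random.choice(["left", "right", "up", "down"])
        location = env.step(location, action)
        observations.append(env.observe(location))
        actions.append(action)
        steps += 1
    return False, steps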
Landmark Classification
While the guide has access to the landmark labels, the tourist needs to recognize these landmarks from raw perceptual information. In this section, we study landmark classification as a supervised learning problem to investigate the difficulty of perceptual grounding in Talk The Walk.
The Talk The Walk dataset contains a total of 307 different landmarks divided among nine classes, see Figure FIGREF62 for how they are distributed. The class distribution is fairly imbalanced, with shops and restaurants as the most frequent landmarks and relatively few play fields and theaters. We treat landmark recognition as a multi-label classification problem as there can be multiple landmarks on a corner.
For the task of landmark classification, we extract the relevant views of the 360 image from which a landmark is visible. Because landmarks are labeled to be on a specific corner of an intersection, we assume that they are visible from one of the orientations facing away from the intersection. For example, for a landmark on the northwest corner of an intersection, we extract views from both the north and west direction. The orientation-specific views are obtained by a planar projection of the full 360-image with a small field of view (60 degrees) to limit distortions. To cover the full field of view, we extract two images per orientation, with their horizontal focus point 30 degrees apart. Hence, we obtain eight images per 360 image with corresponding orientation INLINEFORM0 .
We run the following pre-trained feature extractors over the extracted images:
For the text recognition model, we use a learned look-up table INLINEFORM0 to embed the extracted text features INLINEFORM1 , and fuse all embeddings of four images through a bag of embeddings, i.e., INLINEFORM2 . We use a linear layer followed by a sigmoid to predict the probability for each class, i.e. INLINEFORM3 . We also experiment with replacing the look-up embeddings with pre-trained FastText embeddings BIBREF16 . For the ResNet model, we use a bag of embeddings over the four ResNet features, i.e. INLINEFORM4 , before we pass it through a linear layer to predict the class probabilities: INLINEFORM5 . We also conduct experiments where we first apply PCA to the extracted ResNet and FastText features before we feed them to the model.
To account for class imbalance, we train all described models with a binary cross entropy loss weighted by the inverted class frequency. We create a 80-20 class-conditional split of the dataset into a training and validation set. We train for 100 epochs and perform early stopping on the validation loss.
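A compact sketch of the landmark classifier is given below; the bag-of-embeddings model follows the description above, while the exact form of the frequency-based loss weighting is our assumption.

import torch
import torch.nn as nn

class LandmarkClassifier(nn.Module):
    """Sketch of the multi-label landmark classifier: a bag (sum) of pre-extracted
    view features (e.g. ResNet or FastText embeddings) followed by a linear layer.
    The sigmoid is folded into the BCE-with-logits loss below."""
    def __init__(self, feat_dim: int, num_classes: int = 9):
        super().__init__()
        self.out = nn.Linear(feat_dim, num_classes)

    def forward(self, view_feats: torch.Tensor) -> torch.Tensor:
        # view_feats: (num_views, feat_dim); returns per-class logits
        return self.out(view_feats.sum(dim=0))

def make_weighted_bce(class_counts: torch.Tensor) -> nn.Module:
    """Binary cross entropy weighted by inverted class frequency (assumed scheme)."""
    inv_freq = class_counts.sum() / (class_counts.float() + 1e-8)
    return nn.BCEWithLogitsLoss(weight=inv_freq / inv_freq.sum())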
The F1 scores for the described methods are shown in Table TABREF65 . We compare to an “all positive” baseline that always predicts that the landmark class is visible and observe that all presented models struggle to outperform this baseline. Although 256-dimensional ResNet features achieve slightly better precision on the validation set, they result in much worse recall and a lower F1 score. Our results indicate that perceptual grounding is a difficult task, which easily merits a paper in its own right, and so we leave further improvements (e.g. better text recognizers) for future work.
Dataset Details
Dataset split. We split the full dataset by assigning entire 4x4 grids (independent of the target location) to the train, valid or test set. Specifically, we design the split such that the valid set contains at least one intersection (out of four) that is not part of the train set. For the test set, all four intersections are novel. See our source code, available at URL ANONYMIZED, for more details on how this split is realized.
Example
Tourist: ACTION:TURNRIGHT ACTION:TURNRIGHT
Guide: Hello, what are you near?
Tourist: ACTION:TURNLEFT ACTION:TURNLEFT ACTION:TURNLEFT
Tourist: Hello, in front of me is a Brooks Brothers
Tourist: ACTION:TURNLEFT ACTION:FORWARD ACTION:TURNLEFT ACTION:TURNLEFT
Guide: Is that a shop or restaurant?
Tourist: ACTION:TURNLEFT
Tourist: It is a clothing shop.
Tourist: ACTION:TURNLEFT
Guide: You need to go to the intersection in the northwest corner of the map
Tourist: ACTION:TURNLEFT
Tourist: There appears to be a bank behind me.
Tourist: ACTION:TURNLEFT ACTION:TURNLEFT ACTION:TURNRIGHT ACTION:TURNRIGHT
Guide: Ok, turn left then go straight up that road
Tourist: ACTION:TURNLEFT ACTION:TURNLEFT ACTION:TURNLEFT ACTION:FORWARD ACTION:TURNRIGHT
ACTION:FORWARD ACTION:FORWARD ACTION:TURNLEFT ACTION:TURNLEFT ACTION:TURNLEFT
Guide: There should be shops on two of the corners but you
need to go to the corner without a shop.
Tourist: ACTION:FORWARD ACTION:FORWARD ACTION:FORWARD ACTION:TURNLEFT ACTION:TURNLEFT
Guide: let me know when you get there.
Tourist: on my left is Radio city Music hall
Tourist: ACTION:TURNLEFT ACTION:FORWARD ACTION:TURNLEFT ACTION:TURNRIGHT ACTION:TURNRIGHT
Tourist: I can't go straight any further.
Guide: ok. turn so that the theater is on your right.
Guide: then go straight
Tourist: That would be going back the way I came
Guide: yeah. I was looking at the wrong bank
Tourist: I'll notify when I am back at the brooks brothers, and the bank.
Tourist: ACTION:TURNRIGHT
Guide: make a right when the bank is on your left
Tourist: ACTION:FORWARD ACTION:FORWARD ACTION:TURNRIGHT
Tourist: Making the right at the bank.
Tourist: ACTION:FORWARD ACTION:FORWARD
Tourist: I can't go that way.
Tourist: ACTION:TURNLEFT
Tourist: Bank is ahead of me on the right
Tourist: ACTION:FORWARD ACTION:FORWARD ACTION:TURNLEFT
Guide: turn around on that intersection
Tourist: I can only go to the left or back the way I just came.
Tourist: ACTION:TURNLEFT
Guide: you're in the right place. do you see shops on the corners?
Guide: If you're on the corner with the bank, cross the street
Tourist: I'm back where I started by the shop and the bank.
Tourist: ACTION:TURNRIGHT
Guide: on the same side of the street?
Tourist: crossing the street now
Tourist: ACTION:FORWARD ACTION:FORWARD ACTION:TURNLEFT
Tourist: there is an I love new york shop across the street on the left from me now
Tourist: ACTION:TURNRIGHT ACTION:FORWARD
Guide: ok. I'll see if it's right.
Guide: EVALUATE_LOCATION
Guide: It's not right.
Tourist: What should I be on the look for?
Tourist: ACTION:TURNRIGHT ACTION:TURNRIGHT ACTION:TURNRIGHT
Guide: There should be shops on two corners but you need to be on one of the corners
without the shop.
Guide: Try the other corner.
Tourist: this intersection has 2 shop corners and a bank corner
Guide: yes. that's what I see on the map.
Tourist: should I go to the bank corner? or one of the shop corners?
or the blank corner (perhaps a hotel)
Tourist: ACTION:TURNLEFT ACTION:TURNLEFT ACTION:TURNRIGHT ACTION:TURNRIGHT
Guide: Go to the one near the hotel. The map says the hotel is a little
further down but it might be a little off.
Tourist: It's a big hotel it's possible.
Tourist: ACTION:FORWARD ACTION:TURNLEFT ACTION:FORWARD ACTION:TURNRIGHT
Tourist: I'm on the hotel corner
Guide: EVALUATE_LOCATION | English |
b85ab5f862221fac819cf2fef239bcb08b9cafc6 | b85ab5f862221fac819cf2fef239bcb08b9cafc6_0 | Q: What evaluation metrics did the authors look at?
Text:
We introduce “Talk The Walk”, the first large-scale dialogue dataset grounded in action and perception. The task involves two agents (a “guide” and a “tourist”) that communicate via natural language in order to achieve a common goal: having the tourist navigate to a given target location. The task and dataset, which are described in detail, are challenging and their full solution is an open problem that we pose to the community. We (i) focus on the task of tourist localization and develop the novel Masked Attention for Spatial Convolutions (MASC) mechanism that allows for grounding tourist utterances into the guide's map, (ii) show it yields significant improvements for both emergent and natural language communication, and (iii) using this method, we establish non-trivial baselines on the full task.
Introduction
As artificial intelligence plays an ever more prominent role in everyday human lives, it becomes increasingly important to enable machines to communicate via natural language—not only with humans, but also with each other. Learning algorithms for natural language understanding, such as in machine translation and reading comprehension, have progressed at an unprecedented rate in recent years, but still rely on static, large-scale, text-only datasets that lack crucial aspects of how humans understand and produce natural language. Namely, humans develop language capabilities by being embodied in an environment which they can perceive, manipulate and move around in; and by interacting with other humans. Hence, we argue that we should incorporate all three fundamental aspects of human language acquisition—perception, action and interactive communication—and develop a task and dataset to that effect.
We introduce the Talk the Walk dataset, where the aim is for two agents, a “guide” and a “tourist”, to interact with each other via natural language in order to achieve a common goal: having the tourist navigate towards the correct location. The guide has access to a map and knows the target location, but does not know where the tourist is; the tourist has a 360-degree view of the world, but knows neither the target location on the map nor the way to it. The agents need to work together through communication in order to successfully solve the task. An example of the task is given in Figure FIGREF3 .
Grounded language learning has (re-)gained traction in the AI community, and much attention is currently devoted to virtual embodiment—the development of multi-agent communication tasks in virtual environments—which has been argued to be a viable strategy for acquiring natural language semantics BIBREF0 . Various related tasks have recently been introduced, but in each case with some limitations. Although visually grounded dialogue tasks BIBREF1 , BIBREF2 comprise perceptual grounding and multi-agent interaction, their agents are passive observers and do not act in the environment. By contrast, instruction-following tasks, such as VLN BIBREF3 , involve action and perception but lack natural language interaction with other agents. Furthermore, some of these works use simulated environments BIBREF4 and/or templated language BIBREF5 , which arguably oversimplifies real perception or natural language, respectively. See Table TABREF15 for a comparison.
Talk The Walk is the first task to bring all three aspects together: perception for the tourist observing the world, action for the tourist to navigate through the environment, and interactive dialogue for the tourist and guide to work towards their common goal. To collect grounded dialogues, we constructed a virtual 2D grid environment by manually capturing 360-views of several neighborhoods in New York City (NYC). As the main focus of our task is on interactive dialogue, we limit the difficulty of the control problem by having the tourist navigating a 2D grid via discrete actions (turning left, turning right and moving forward). Our street view environment was integrated into ParlAI BIBREF6 and used to collect a large-scale dataset on Mechanical Turk involving human perception, action and communication.
We argue that for artificial agents to solve this challenging problem, some fundamental architecture designs are missing, and our hope is that this task motivates their innovation. To that end, we focus on the task of localization and develop the novel Masked Attention for Spatial Convolutions (MASC) mechanism. To model the interaction between language and action, this architecture repeatedly conditions the spatial dimensions of a convolution on the communicated message sequence.
This work makes the following contributions: 1) We present the first large-scale dialogue dataset grounded in action and perception; 2) We introduce the MASC architecture for localization and show it yields improvements for both emergent and natural language; 3) Using localization models, we establish initial baselines on the full task; 4) We show that our best model exceeds human performance under the assumption of “perfect perception” and with a learned emergent communication protocol, and sets a non-trivial baseline with natural language.
Talk The Walk
We create a perceptual environment by manually capturing several neighborhoods of New York City (NYC) with a 360 camera. Most parts of the city are grid-like and uniform, which makes it well-suited for obtaining a 2D grid. For Talk The Walk, we capture parts of Hell's Kitchen, East Village, the Financial District, Williamsburg and the Upper East Side—see Figure FIGREF66 in Appendix SECREF14 for their respective locations within NYC. For each neighborhood, we choose an approximately 5x5 grid and capture a 360 view on all four corners of each intersection, leading to a grid-size of roughly 10x10 per neighborhood.
The tourist's location is given as a tuple INLINEFORM0 , where INLINEFORM1 are the coordinates and INLINEFORM2 signifies the orientation (north, east, south or west). The tourist can take three actions: turn left, turn right and go forward. For moving forward, we add INLINEFORM3 , INLINEFORM4 , INLINEFORM5 , INLINEFORM6 to the INLINEFORM7 coordinates for the respective orientations. Upon a turning action, the orientation is updated by INLINEFORM8 where INLINEFORM9 for left and INLINEFORM10 for right. If the tourist moves outside the grid, we issue a warning that they cannot go in that direction and do not update the location. Moreover, tourists are shown different types of transitions: a short transition for actions that bring the tourist to a different corner of the same intersection; and a longer transition for actions that bring them to a new intersection.
The guide observes a map that corresponds to the tourist's environment. We exploit the fact that urban areas like NYC are full of local businesses, and overlay the map with these landmarks as localization points for our task. Specifically, we manually annotate each corner of the intersection with a set of landmarks INLINEFORM0 , each coming from one of the following categories:
Bar Playfield Bank Hotel Shop Subway Coffee Shop Restaurant Theater
The right-side of Figure FIGREF3 illustrates how the map is presented. Note that within-intersection transitions have a smaller grid distance than transitions to new intersections. To ensure that the localization task is not too easy, we do not include street names in the overhead map and keep the landmark categories coarse. That is, the dialogue is driven by uncertainty in the tourist's current location and the properties of the target location: if the exact location and orientation of the tourist were known, it would suffice to communicate a sequence of actions.
Task
For the Talk The Walk task, we randomly choose one of the five neighborhoods, and subsample a 4x4 grid (one block with four complete intersections) from the entire grid. We specify the boundaries of the grid by the top-left and bottom-right corners INLINEFORM0 . Next, we construct the overhead map of the environment, i.e. INLINEFORM1 with INLINEFORM2 and INLINEFORM3 . We subsequently sample a start location and orientation INLINEFORM4 and a target location INLINEFORM5 at random.
The shared goal of the two agents is to navigate the tourist to the target location INLINEFORM0 , which is only known to the guide. The tourist perceives a “street view” planar projection INLINEFORM1 of the 360 image at location INLINEFORM2 and can simultaneously chat with the guide and navigate through the environment. The guide's role consists of reading the tourist description of the environment, building a “mental map” of their current position and providing instructions for navigating towards the target location. Whenever the guide believes that the tourist has reached the target location, they instruct the system to evaluate the tourist's location. The task ends when the evaluation is successful—i.e., when INLINEFORM3 —or otherwise continues until a total of three failed attempts. The additional attempts are meant to ease the task for humans, as we found that they otherwise often fail at the task but still end up close to the target location, e.g., at the wrong corner of the correct intersection.
Data Collection
We crowd-sourced the collection of the dataset on Amazon Mechanical Turk (MTurk). We use the MTurk interface of ParlAI BIBREF6 to render 360 images via WebGL and dynamically display neighborhood maps with an HTML5 canvas. Detailed task instructions, which were also given to our workers before they started their task, are shown in Appendix SECREF15 . We paired Turkers at random and let them alternate between the tourist and guide role across different HITs.
Dataset Statistics
The Talk The Walk dataset consists of over 10k successful dialogues—see Table FIGREF66 in the appendix for the dataset statistics split by neighborhood. Turkers successfully completed INLINEFORM0 of all finished tasks (we use this statistic as the human success rate). More than six hundred participants successfully completed at least one Talk The Walk HIT. Although the Visual Dialog BIBREF2 and GuessWhat BIBREF1 datasets are larger, the collected Talk The Walk dialogs are significantly longer. On average, Turkers needed more than 62 acts (i.e utterances and actions) before they successfully completed the task, whereas Visual Dialog requires 20 acts. The majority of acts comprise the tourist's actions, with on average more than 44 actions per dialogue. The guide produces roughly 9 utterances per dialogue, slightly more than the tourist's 8 utterances. Turkers use diverse discourse, with a vocabulary size of more than 10K (calculated over all successful dialogues). An example from the dataset is shown in Appendix SECREF14 . The dataset is available at https://github.com/facebookresearch/talkthewalk.
Experiments
We investigate the difficulty of the proposed task by establishing initial baselines. The final Talk The Walk task is challenging and encompasses several important sub-tasks, ranging from landmark recognition to tourist localization and natural language instruction-giving. Arguably the most important sub-task is localization: without such capabilities the guide can not tell whether the tourist reached the target location. In this work, we establish a minimal baseline for Talk The Walk by utilizing agents trained for localization. Specifically, we let trained tourist models undertake random walks, using the following protocol: at each step, the tourist communicates its observations and actions to the guide, who predicts the tourist's location. If the guide predicts that the tourist is at target, we evaluate its location. If successful, the task ends, otherwise we continue until there have been three wrong evaluations. The protocol is given as pseudo-code in Appendix SECREF12 .
Tourist Localization
The designed navigation protocol relies on a trained localization model that predicts the tourist's location from a communicated message. Before we formalize this localization sub-task in Section UID21 , we further introduce two simplifying assumptions—perfect perception and orientation-agnosticism—so as to overcome some of the difficulties we encountered in preliminary experiments.
paragraph4 0.1ex plus0.1ex minus.1ex-1em Perfect Perception Early experiments revealed that perceptual grounding of landmarks is difficult: we set up a landmark classification problem, on which models with extracted CNN BIBREF7 or text recognition features BIBREF8 barely outperform a random baseline—see Appendix SECREF13 for full details. This finding implies that localization models from image input are limited by their ability to recognize landmarks, and, as a result, would not generalize to unseen environments. To ensure that perception is not the limiting factor when investigating the landmark-grounding and action-grounding capabilities of localization models, we assume “perfect perception”: in lieu of the 360 image view, the tourist is given the landmarks at its current location. More formally, each state observation INLINEFORM0 now equals the set of landmarks at the INLINEFORM1 -location, i.e. INLINEFORM2 . If the INLINEFORM3 -location does not have any visible landmarks, we return a single “empty corner” symbol. We stress that our findings—including a novel architecture for grounding actions into an overhead map, see Section UID28 —should carry over to settings without the perfect perception assumption.
Orientation-agnosticism We opt to ignore the tourist's orientation, which simplifies the set of actions to [Left, Right, Up, Down], corresponding to adding [(-1, 0), (1, 0), (0, 1), (0, -1)] to the current INLINEFORM0 coordinates, respectively. Note that actions are now coupled to an orientation on the map—e.g. up is equal to going north—and this implicitly assumes that the tourist has access to a compass. This also affects perception, since the tourist now effectively has access to views from all orientations: in conjunction with “perfect perception”, this implies that only the landmarks at the current corner are given, whereas landmarks from different corners (e.g. across the street) are not visible.
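As an illustration of the simplified transition function, a minimal sketch is given below; the behaviour of staying in place at the grid boundary and the exact coordinate convention are assumptions made for illustration.

```python
# Sketch of the orientation-agnostic transition used for the localization sub-task.
# Staying in place when a move would leave the sampled 4x4 grid is an assumption.
ACTION_OFFSETS = {
    "Left":  (-1, 0),
    "Right": (1, 0),
    "Up":    (0, 1),    # "Up" is tied to north on the overhead map (compass assumption)
    "Down":  (0, -1),
}

def step(x, y, action, grid_w=4, grid_h=4):
    dx, dy = ACTION_OFFSETS[action]
    nx, ny = x + dx, y + dy
    if 0 <= nx < grid_w and 0 <= ny < grid_h:
        return nx, ny
    return x, y                                   # blocked at the grid boundary

assert step(0, 0, "Right") == (1, 0)
assert step(0, 0, "Left") == (0, 0)               # cannot leave the grid
```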
Even with these simplifications, the localization-based baseline comes with its own set of challenges. As we show in Section SECREF34 , the task requires communication about a short (random) path—i.e., not only a sequence of observations but also actions—in order to achieve high localization accuracy. This means that the guide needs to decode observations from multiple time steps, as well as understand their 2D spatial arrangement as communicated via the sequence of actions. Thus, in order to get to a good understanding of the task, we thoroughly examine whether the agents can learn a communication protocol that simultaneously grounds observations and actions into the guide's map. In doing so, we thoroughly study the role of the communication channel in the localization task, by investigating increasingly constrained forms of communication: from differentiable continuous vectors to emergent discrete symbols to the full complexity of natural language.
The full navigation baseline hinges on a localization model from random trajectories. While we can sample random actions in the emergent communication setup, this is not possible for the natural language setup because the messages are coupled to the trajectories of the human annotators. This leads to slightly different problem setups, as described below.
paragraph4 0.1ex plus0.1ex minus.1ex-1em Emergent language A tourist, starting from a random location, takes INLINEFORM0 random actions INLINEFORM1 to reach target location INLINEFORM2 . Every location in the environment has a corresponding set of landmarks INLINEFORM3 for each of the INLINEFORM4 coordinates. As the tourist navigates, the agent perceives INLINEFORM5 state-observations INLINEFORM6 where each observation INLINEFORM7 consists of a set of INLINEFORM8 landmark symbols INLINEFORM9 . Given the observations INLINEFORM10 and actions INLINEFORM11 , the tourist generates a message INLINEFORM12 which is communicated to the other agent. The objective of the guide is to predict the location INLINEFORM13 from the tourist's message INLINEFORM14 .
Natural language In contrast to our emergent communication experiments, we do not take random actions but instead extract actions, observations, and messages from the dataset. Specifically, we consider each tourist utterance (i.e. at any point in the dialogue), obtain the current tourist location as target location INLINEFORM0 , the utterance itself as message INLINEFORM1 , and the sequence of observations and actions that took place between the current and previous tourist utterance as INLINEFORM2 and INLINEFORM3 , respectively. Similar to the emergent language setting, the guide's objective is to predict the target location INLINEFORM4 from the tourist message INLINEFORM5 . We conduct experiments with INLINEFORM6 taken from the dataset and with INLINEFORM7 generated from the extracted observations INLINEFORM8 and actions INLINEFORM9 .
Model
This section outlines the tourist and guide architectures. We first describe how the tourist produces messages for the various communication channels across which the messages are sent. We subsequently describe how these messages are processed by the guide, and introduce the novel Masked Attention for Spatial Convolutions (MASC) mechanism that allows for grounding into the 2D overhead map in order to predict the tourist's location.
The Tourist
For each of the communication channels, we outline the procedure for generating a message INLINEFORM0 . Given a set of state observations INLINEFORM1 , we represent each observation by summing the INLINEFORM2 -dimensional embeddings of the observed landmarks, i.e. for INLINEFORM3 , INLINEFORM4 , where INLINEFORM5 is the landmark embedding lookup table. In addition, we embed action INLINEFORM6 into a INLINEFORM7 -dimensional embedding INLINEFORM8 via a look-up table INLINEFORM9 . We experiment with three types of communication channel.
paragraph4 0.1ex plus0.1ex minus.1ex-1em Continuous vectors The tourist has access to observations of several time steps, whose order is important for accurate localization. Because summing embeddings is order-invariant, we introduce a sum over positionally-gated embeddings, which, conditioned on time step INLINEFORM0 , pushes embedding information into the appropriate dimensions. More specifically, we generate an observation message INLINEFORM1 , where INLINEFORM2 is a learned gating vector for time step INLINEFORM3 . In a similar fashion, we produce action message INLINEFORM4 and send the concatenated vectors INLINEFORM5 as message to the guide. We can interpret continuous vector communication as a single, monolithic model because its architecture is end-to-end differentiable, enabling gradient-based optimization for training.
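A minimal sketch of the positionally-gated sum is shown below; whether the gates are squashed with a sigmoid, and the dimensions used, are assumptions rather than details taken from the text.

```python
import torch
import torch.nn as nn

class PositionallyGatedSum(nn.Module):
    """Sum of per-step embeddings, gated by a learned vector per time step (sketch)."""
    def __init__(self, max_steps, dim):
        super().__init__()
        self.gates = nn.Parameter(torch.zeros(max_steps, dim))   # one gate vector per step

    def forward(self, embeddings):                # embeddings: (T, dim), T <= max_steps
        gates = torch.sigmoid(self.gates[:embeddings.size(0)])
        return (gates * embeddings).sum(dim=0)    # (dim,)

# Continuous message: concatenation of the gated observation and action summaries.
dim, T = 32, 3
gate_obs, gate_act = PositionallyGatedSum(5, dim), PositionallyGatedSum(5, dim)
message = torch.cat([gate_obs(torch.randn(T, dim)), gate_act(torch.randn(T, dim))])
print(message.shape)                              # torch.Size([64])
```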
Discrete symbols As in the continuous vector communication model, with discrete communication the tourist uses separate channels for observations and actions, as well as a sum over positionally gated embeddings to generate the observation embedding INLINEFORM0 . We pass this embedding through a sigmoid and generate a message INLINEFORM1 by sampling from the resulting Bernoulli distributions:
INLINEFORM0
The action message INLINEFORM0 is produced in the same way, and we obtain the final tourist message INLINEFORM1 through concatenating the messages.
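The sampling step can be sketched as follows; the message size is an illustrative choice.

```python
import torch

def discrete_message(embedding):
    """Sample a binary message from element-wise Bernoulli distributions (sketch)."""
    probs = torch.sigmoid(embedding)          # Bernoulli parameters
    symbols = torch.bernoulli(probs)          # 0/1 symbols; sampling is non-differentiable
    return symbols, probs                     # probs are reused for the policy-gradient update

obs_msg, obs_probs = discrete_message(torch.randn(16))
act_msg, act_probs = discrete_message(torch.randn(16))
tourist_message = torch.cat([obs_msg, act_msg])
```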
The sampling operation of the communication channel renders the model non-differentiable, so we use policy gradients BIBREF9 , BIBREF10 to train the parameters INLINEFORM0 of the tourist model. That is, we estimate the gradient by INLINEFORM1
where the reward function INLINEFORM0 is the negative guide's loss (see Section SECREF25 ) and INLINEFORM1 a state-value baseline to reduce variance. We use a linear transformation over the concatenated embeddings as baseline prediction, i.e. INLINEFORM2 , and train it with a mean squared error loss.
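A hedged sketch of the corresponding REINFORCE update for a single example is given below; the function and variable names are ours, and the baseline is assumed to be a scalar state-value prediction per example.

```python
import torch

def reinforce_terms(probs, symbols, reward, baseline):
    """Surrogate losses for one example (sketch): REINFORCE with a state-value baseline.

    probs:    Bernoulli parameters that generated `symbols`
    reward:   negative guide loss for this example (scalar tensor)
    baseline: scalar state-value prediction of the linear baseline head
    """
    log_prob = torch.distributions.Bernoulli(probs=probs).log_prob(symbols).sum()
    advantage = (reward - baseline).detach()              # no gradients through the advantage
    policy_loss = -advantage * log_prob                   # REINFORCE term
    baseline_loss = (reward.detach() - baseline).pow(2)   # mean squared error for the baseline
    return policy_loss, baseline_loss

probs = torch.sigmoid(torch.randn(16))
symbols = torch.bernoulli(probs)
p_loss, b_loss = reinforce_terms(probs, symbols, torch.tensor(0.4), torch.tensor(0.1))
```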
Natural Language Because observations and actions are of variable length, we use an LSTM encoder over the sequence of observation embeddings INLINEFORM0 , and extract its last hidden state INLINEFORM1 . We use a separate LSTM encoder for action embeddings INLINEFORM2 , and concatenate both INLINEFORM3 and INLINEFORM4 to the input of the LSTM decoder at each time step: DISPLAYFORM0
where INLINEFORM0 is a look-up table over input tokens INLINEFORM1 . We train with teacher-forcing, i.e. we optimize the cross-entropy loss: INLINEFORM2 . At test time, we explore the following decoding strategies: greedy decoding, sampling and beam search. We also fine-tune a trained tourist model (starting from a pre-trained model) with policy gradients in order to minimize the guide's prediction loss.
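The encoder-decoder wiring can be sketched as follows; all layer sizes here are placeholders and the class is a simplification, not the authors' implementation.

```python
import torch
import torch.nn as nn

class TouristLanguageModel(nn.Module):
    """Sketch of the tourist generation model: separate LSTM encoders for observations
    and actions, whose last hidden states are fed to the word decoder at every step."""
    def __init__(self, vocab_size, dim=128, hidden=256):
        super().__init__()
        self.obs_enc = nn.LSTM(dim, hidden, batch_first=True)
        self.act_enc = nn.LSTM(dim, hidden, batch_first=True)
        self.embed = nn.Embedding(vocab_size, dim)
        self.decoder = nn.LSTM(dim + 2 * hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, obs_emb, act_emb, tokens):
        # obs_emb: (B, T_o, dim), act_emb: (B, T_a, dim), tokens: (B, T_w)
        _, (h_obs, _) = self.obs_enc(obs_emb)
        _, (h_act, _) = self.act_enc(act_emb)
        context = torch.cat([h_obs[-1], h_act[-1]], dim=-1)            # (B, 2 * hidden)
        words = self.embed(tokens)
        context = context.unsqueeze(1).expand(-1, words.size(1), -1)   # repeat per step
        dec_out, _ = self.decoder(torch.cat([words, context], dim=-1))
        return self.out(dec_out)                                       # next-word logits

model = TouristLanguageModel(vocab_size=1000)
logits = model(torch.randn(2, 3, 128), torch.randn(2, 3, 128), torch.randint(0, 1000, (2, 7)))
# Teacher forcing: cross-entropy between `logits` and the shifted target tokens.
```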
The Guide
Given a tourist message INLINEFORM0 describing their observations and actions, the objective of the guide is to predict the tourist's location on the map. First, we outline the procedure for extracting observation embedding INLINEFORM1 and action embeddings INLINEFORM2 from the message INLINEFORM3 for each of the types of communication. Next, we discuss the MASC mechanism that takes the observations and actions in order to ground them on the guide's map in order to predict the tourist's location.
Continuous For the continuous communication model, we assign the observation message to the observation embedding, i.e. INLINEFORM0 . To extract the action embedding for time step INLINEFORM1 , we apply a linear layer to the action message, i.e. INLINEFORM2 .
Discrete For discrete communication, we obtain the observation embedding INLINEFORM0 by applying a linear layer to the observation message, i.e. INLINEFORM1 . Similar to the continuous communication model, we use a linear layer over the action message INLINEFORM2 to obtain the action embedding INLINEFORM3 for time step INLINEFORM4 .
Natural Language The message INLINEFORM0 contains information about observations and actions, so we use a recurrent neural network with an attention mechanism to extract the relevant observation and action embeddings. Specifically, we encode the message INLINEFORM1 , consisting of INLINEFORM2 tokens INLINEFORM3 taken from vocabulary INLINEFORM4 , with a bidirectional LSTM: DISPLAYFORM0
where INLINEFORM0 is the word embedding look-up table. We obtain observation embedding INLINEFORM1 through an attention mechanism over the hidden states INLINEFORM2 : DISPLAYFORM0
where INLINEFORM0 is a learned control embedding that is updated through a linear transformation of the previous control and observation embedding: INLINEFORM1 . We use the same mechanism to extract the action embedding INLINEFORM2 from the hidden states. For the observation embedding, we obtain the final representation by summing positionally gated embeddings, i.e., INLINEFORM3 .
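A compact sketch of this read-out is given below; the precise control-update rule and the omission of the positional gating are simplifying assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MessageEncoder(nn.Module):
    """Guide-side encoder sketch: a bidirectional LSTM plus an attention read-out that
    is steered by a learned control embedding (the update rule is a simplification)."""
    def __init__(self, vocab_size, dim=128, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.rnn = nn.LSTM(dim, hidden, batch_first=True, bidirectional=True)
        self.control0 = nn.Parameter(torch.randn(2 * hidden))
        self.control_update = nn.Linear(4 * hidden, 2 * hidden)

    def attend(self, states, control):
        # states: (B, T, 2H), control: (B, 2H) -> weighted sum of hidden states: (B, 2H)
        scores = torch.bmm(states, control.unsqueeze(-1)).squeeze(-1)
        weights = F.softmax(scores, dim=-1)
        return torch.bmm(weights.unsqueeze(1), states).squeeze(1)

    def forward(self, tokens):
        states, _ = self.rnn(self.embed(tokens))
        control = self.control0.expand(tokens.size(0), -1)
        obs_emb = self.attend(states, control)                          # observation read-out
        control = self.control_update(torch.cat([control, obs_emb], dim=-1))
        act_emb = self.attend(states, control)                          # action read-out
        return obs_emb, act_emb

obs_emb, act_emb = MessageEncoder(vocab_size=1000)(torch.randint(0, 1000, (2, 12)))
```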
We represent the guide's map as INLINEFORM0 , where in this case INLINEFORM1 , where each INLINEFORM2 -dimensional INLINEFORM3 location embedding INLINEFORM4 is computed as the sum of the guide's landmark embeddings for that location.
paragraph4 0.1ex plus0.1ex minus.1ex-1em Motivation While the guide's map representation contains only local landmark information, the tourist communicates a trajectory of the map (i.e. actions and observations from multiple locations), implying that directly comparing the tourist's message with the individual landmark embeddings is probably suboptimal. Instead, we want to aggregate landmark information from surrounding locations by imputing trajectories over the map to predict locations. We propose a mechanism for translating landmark embeddings according to state transitions (left, right, up, down), which can be expressed as a 2D convolution over the map embeddings. For simplicity, let us assume that the map embedding INLINEFORM0 is 1-dimensional, then a left action can be realized through application of the following INLINEFORM1 kernel: INLINEFORM2 which effectively shifts all values of INLINEFORM3 one position to the left. We propose to learn such state-transitions from the tourist message through a differentiable attention-mask over the spatial dimensions of a 3x3 convolution.
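The left-shift example can be verified numerically. The snippet below uses PyTorch's cross-correlation convention, so which exact kernel realizes a given action depends on the coordinate convention; it is meant only to illustrate that a one-hot 3x3 kernel implements a state transition.

```python
import torch
import torch.nn.functional as F

# A 3x3 kernel whose single non-zero weight sits to the right of the centre: under
# PyTorch's cross-correlation convention, out[i, j] = in[i, j + 1], i.e. every value
# of the map moves one column to the left (zeros enter on the right through padding).
left_kernel = torch.tensor([[0., 0., 0.],
                            [0., 0., 1.],
                            [0., 0., 0.]]).view(1, 1, 3, 3)

grid = torch.arange(16, dtype=torch.float32).view(1, 1, 4, 4)   # toy 1-channel 4x4 map
shifted = F.conv2d(grid, left_kernel, padding=1)

print(grid[0, 0])
print(shifted[0, 0])
```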
MASC We linearly project each predicted action embedding INLINEFORM0 to a 9-dimensional vector INLINEFORM1 , normalize it by a softmax and subsequently reshape the vector into a 3x3 mask INLINEFORM2 : DISPLAYFORM0
We learn a 3x3 convolutional kernel INLINEFORM0 , with INLINEFORM1 features, and apply the mask INLINEFORM2 to the spatial dimensions of the convolution by first broadcasting its values along the feature dimensions, i.e. INLINEFORM3 , and subsequently taking the Hadamard product: INLINEFORM4 . For each action step INLINEFORM5 , we then apply a 2D convolution with masked weight INLINEFORM6 to obtain a new map embedding INLINEFORM7 , where we zero-pad the input to maintain identical spatial dimensions.
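A sketch of the MASC operation is shown below; the channel sizes and the per-example loop are illustrative choices, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MASC(nn.Module):
    """Sketch of Masked Attention for Spatial Convolutions: an action embedding is
    projected to a softmax-normalized 3x3 mask that modulates a learned 3x3 kernel."""
    def __init__(self, act_dim, map_channels):
        super().__init__()
        self.to_mask = nn.Linear(act_dim, 9)
        self.kernel = nn.Parameter(0.01 * torch.randn(map_channels, map_channels, 3, 3))

    def forward(self, map_emb, act_emb):
        # map_emb: (B, C, H, W), act_emb: (B, act_dim)
        mask = F.softmax(self.to_mask(act_emb), dim=-1).view(-1, 1, 1, 3, 3)
        masked_kernel = self.kernel.unsqueeze(0) * mask        # broadcast over both channel dims
        outputs = [F.conv2d(map_emb[b:b + 1], masked_kernel[b], padding=1)
                   for b in range(map_emb.size(0))]            # one masked kernel per example
        return torch.cat(outputs, dim=0)                       # (B, C, H, W), same spatial size

masc = MASC(act_dim=32, map_channels=16)
new_map = masc(torch.randn(2, 16, 4, 4), torch.randn(2, 32))
```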
Prediction model We repeat the MASC operation INLINEFORM0 times (i.e. once for each action), and then aggregate the map embeddings by a sum over positionally-gated embeddings: INLINEFORM1 . We score locations by taking the dot-product of the observation embedding INLINEFORM2 , which contains information about the sequence of landmarks observed by the tourist, and the map. We compute a distribution over the locations of the map INLINEFORM3 by taking a softmax over the computed scores: DISPLAYFORM0
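The scoring step then reduces to a dot product per location followed by a softmax; a minimal sketch, with assumed tensor shapes, is:

```python
import torch
import torch.nn.functional as F

def predict_location(map_emb, obs_emb):
    """Dot-product scoring of every map location against the observation embedding (sketch).

    map_emb: (C, H, W) aggregated map embedding after the MASC steps
    obs_emb: (C,) observation embedding extracted from the tourist message
    """
    scores = torch.einsum("chw,c->hw", map_emb, obs_emb)
    return F.softmax(scores.flatten(), dim=0).view_as(scores)   # distribution over locations

probs = predict_location(torch.randn(16, 4, 4), torch.randn(16))
print(probs.sum())         # 1.0; the argmax of `probs` gives the predicted tourist location
```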
Predicting T While emergent communication models use a fixed-length trajectory INLINEFORM0 , natural language messages may differ in the number of communicated observations and actions. Hence, we predict INLINEFORM1 from the communicated message. Specifically, we use a softmax regression layer over the last hidden state INLINEFORM2 of the RNN, and subsequently sample INLINEFORM3 from the resulting multinomial distribution: DISPLAYFORM0
We jointly train the INLINEFORM0 -prediction model via REINFORCE, with the guide's loss as reward function and a mean-reward baseline.
Comparisons
To better analyze the performance of the models incorporating MASC, we compare against a no-MASC baseline in our experiments, as well as a prediction upper bound.
No MASC We compare the proposed MASC model with a model that does not include this mechanism. Whereas MASC predicts a convolution mask from the tourist message, the “No MASC” model uses INLINEFORM0 , the ordinary convolutional kernel, to convolve the map embedding INLINEFORM1 to obtain INLINEFORM2 . We also share the weights of this convolution at each time step.
Prediction upper-bound Because we have access to the class-conditional likelihood INLINEFORM0 , we are able to compute the Bayes error rate (or irreducible error). No model (no matter how expressive) with any amount of data can ever obtain better localization accuracy, as there are multiple locations consistent with the observations and actions.
Results and Discussion
In this section, we describe the findings of various experiments. First, we analyze how much information needs to be communicated for accurate localization in the Talk The Walk environment, and find that a short random path (including actions) is necessary. Next, for emergent language, we show that the MASC architecture can achieve very high localization accuracy, significantly outperforming the baseline that does not include this mechanism. We then turn our attention to the natural language experiments, and find that localization from human utterances is much harder, reaching an accuracy level that is below communicating a single landmark observation. We show that generated utterances from a conditional language model leads to significantly better localization performance, by successfully grounding the utterance on a single landmark observation (but not yet on multiple observations and actions). Finally, we show performance of the localization baseline on the full task, which can be used for future comparisons to this work.
Analysis of Localization Task
Task is not too easy The upper-bound on localization performance in Table TABREF32 suggests that communicating a single landmark observation is not sufficient for accurate localization of the tourist ( INLINEFORM0 35% accuracy). This is an important result for the full navigation task because the need for two-way communication disappears if localization is too easy; if the guide knows the exact location of the tourist it suffices to communicate a list of instructions, which is then executed by the tourist. The uncertainty in the tourist's location is what drives the dialogue between the two agents.
Importance of actions We observe that the upper-bound for communicating only observations plateaus around 57% (even for INLINEFORM0 actions), whereas it exceeds 90% when we also take actions into account. This implies that, at least for random walks, it is essential to communicate a trajectory, including both observations and actions, in order to achieve high localization accuracy.
Emergent Language Localization
We first report the results for tourist localization with emergent language in Table TABREF32 .
MASC improves performance The MASC architecture significantly improves performance compared to models that do not include this mechanism. For instance, for INLINEFORM0 action, MASC already achieves 56.09% on the test set and this further increases to 69.85% for INLINEFORM1 . On the other hand, no-MASC models hit a plateau at 43%. In Appendix SECREF11 , we analyze learned MASC values, and show that communicated actions are often mapped to corresponding state-transitions.
Continuous vs discrete We observe similar performance for continuous and discrete emergent communication models, implying that a discrete communication channel is not a limiting factor for localization performance.
Natural Language Localization
We report the results of tourist localization with natural language in Table TABREF36 . We compare accuracy of the guide model (with MASC) trained on utterances from (i) humans, (ii) a supervised model with various decoding strategies, and (iii) a policy gradient model optimized with respect to the loss of a frozen, pre-trained guide model on human utterances.
Human utterances Compared to emergent language, localization from human utterances is much harder, achieving only INLINEFORM0 on the test set. Here, we report localization from a single utterance, but in Appendix SECREF45 we show that including up to five dialogue utterances only improves performance to INLINEFORM1 . We also show that MASC outperforms no-MASC models for natural language communication.
Generated utterances We also investigate generated tourist utterances from conditional language models. Interestingly, we observe that the supervised model (with greedy and beam-search decoding) as well as the policy gradient model lead to an improvement of more than 10 accuracy points over the human utterances. However, their level of accuracy is slightly below the baseline of communicating a single observation, indicating that these models only learn to ground utterances in a single landmark observation.
Better grounding of generated utterances We analyze natural language samples in Table TABREF38 , and confirm that, unlike human utterances, the generated utterances are talking about the observed landmarks. This observation explains why the generated utterances obtain higher localization accuracy. The current language models are most successful when conditioned on a single landmark observation; we show in Appendix UID43 that performance quickly deteriorates when the model is conditioned on more observations, suggesting that it cannot produce natural language utterances about multiple time steps.
Localization-based Baseline
Table TABREF36 shows results for the best localization models on the full task, evaluated via the random walk protocol defined in Algorithm SECREF12 .
Comparison with human annotators Interestingly, our best localization model (continuous communication, with MASC, and INLINEFORM0 ) achieves 88.33% on the test set and thus exceeds human performance of 76.74% on the full task. While emergent models appear to be stronger localizers, humans might cope with their localization uncertainty through other mechanisms (e.g. better guidance, bias towards taking particular paths, etc.). The simplifying assumption of perfect perception also helps.
Number of actions Unsurprisingly, humans take fewer steps (roughly 15) than our best random walk model (roughly 34). Our human annotators likely used some form of guidance to navigate faster to the target.
Conclusion
We introduced the Talk The Walk task and dataset, which consists of crowd-sourced dialogues in which two human annotators collaborate to navigate to target locations in the virtual streets of NYC. For the important localization sub-task, we proposed MASC—a novel grounding mechanism to learn state-transition from the tourist's message—and showed that it improves localization performance for emergent and natural language. We use the localization model to provide baseline numbers on the Talk The Walk task, in order to facilitate future research.
Related Work
The Talk the Walk task and dataset facilitate future research on various important subfields of artificial intelligence, including grounded language learning, goal-oriented dialogue research and situated navigation. Here, we describe related previous work in these areas.
paragraph4 0.1ex plus0.1ex minus.1ex-1em Related tasks There has been a long line of work involving related tasks. Early work on task-oriented dialogue dates back to the early 90s with the introduction of the Map Task BIBREF11 and Maze Game BIBREF25 corpora. Recent efforts have led to larger-scale goal-oriented dialogue datasets, for instance to aid research on visually-grounded dialogue BIBREF2 , BIBREF1 , knowledge-base-grounded discourse BIBREF29 or negotiation tasks BIBREF36 . At the same time, there has been a big push to develop environments for embodied AI, many of which involve agents following natural language instructions with respect to an environment BIBREF13 , BIBREF50 , BIBREF5 , BIBREF39 , BIBREF19 , BIBREF18 , following-up on early work in this area BIBREF38 , BIBREF20 . An early example of navigation using neural networks is BIBREF28 , who propose an online learning approach for robot navigation. Recently, there has been increased interest in using end-to-end trainable neural networks for learning to navigate indoor scenes BIBREF27 , BIBREF26 or large cities BIBREF17 , BIBREF40 , but, unlike our work, without multi-agent communication. Also the task of localization (without multi-agent communication) has recently been studied BIBREF18 , BIBREF48 .
paragraph4 0.1ex plus0.1ex minus.1ex-1em Grounded language learning Grounded language learning is motivated by the observation that humans learn language embodied (grounded) in sensorimotor experience of the physical world BIBREF15 , BIBREF45 . On the one hand, work in multi-modal semantics has shown that grounding can lead to practical improvements on various natural language understanding tasks BIBREF14 , BIBREF31 . In robotics, researchers dissatisfied with purely symbolic accounts of meaning attempted to build robotic systems with the aim of grounding meaning in physical experience of the world BIBREF44 , BIBREF46 . Recently, grounding has also been applied to the learning of sentence representations BIBREF32 , image captioning BIBREF37 , BIBREF49 , visual question answering BIBREF12 , BIBREF22 , visual reasoning BIBREF30 , BIBREF42 , and grounded machine translation BIBREF43 , BIBREF23 . Grounding also plays a crucial role in the emergent research of multi-agent communication, where, agents communicate (in natural language or otherwise) in order to solve a task, with respect to their shared environment BIBREF35 , BIBREF21 , BIBREF41 , BIBREF24 , BIBREF36 , BIBREF47 , BIBREF34 .
Implementation Details
For the emergent communication models, we use an embedding size INLINEFORM0 . The natural language experiments use 128-dimensional word embeddings and a bidirectional RNN with 256 units. In all experiments, we train the guide with a cross entropy loss using the ADAM optimizer with default hyper-parameters BIBREF33 . We perform early stopping on the validation accuracy, and report the corresponding train, valid and test accuracy. We optimize the localization models with continuous, discrete and natural language communication channels for 200, 200, and 25 epochs, respectively. To facilitate further research on Talk The Walk, we make our code base for reproducing experiments publicly available at https://github.com/facebookresearch/talkthewalk.
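As a rough illustration of this training setup (not the released code base), a guide training loop with ADAM and accuracy-based early stopping could look as follows; the data loader interface and the patience value are assumptions.

```python
import torch

def evaluate(model, loader):
    """Localization accuracy on a held-out split (sketch; loader yields (messages, locations))."""
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for messages, locations in loader:
            correct += (model(messages).argmax(dim=-1) == locations).sum().item()
            total += locations.numel()
    return correct / max(total, 1)

def train_guide(model, train_loader, valid_loader, epochs, patience=5):
    optimizer = torch.optim.Adam(model.parameters())      # ADAM with default hyper-parameters
    criterion = torch.nn.CrossEntropyLoss()
    best_acc, best_state, bad_epochs = 0.0, None, 0
    for _ in range(epochs):
        model.train()
        for messages, locations in train_loader:
            optimizer.zero_grad()
            criterion(model(messages), locations).backward()
            optimizer.step()
        acc = evaluate(model, valid_loader)
        if acc > best_acc:                                 # early stopping on validation accuracy
            best_acc, best_state, bad_epochs = acc, model.state_dict(), 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:
                break
    if best_state is not None:
        model.load_state_dict(best_state)
    return model, best_acc
```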
Additional Natural Language Experiments
First, we investigate the sensitivity of tourist generation models to the trajectory length, finding that the model conditioned on a single observation (i.e. INLINEFORM0 ) achieves best performance. In the next subsection, we further analyze localization models from human utterances by investigating MASC and no-MASC models with increasing dialogue context.
Tourist Generation Models
After training the supervised tourist model (conditioned on observations and action from human expert trajectories), there are two ways to train an accompanying guide model. We can optimize a location prediction model on either (i) extracted human trajectories (as in the localization setup from human utterances) or (ii) on all random paths of length INLINEFORM0 (as in the full task evaluation). Here, we investigate the impact of (1) using either human or random trajectories for training the guide model, and (2) the effect of varying the path length INLINEFORM1 during the full-task evaluation. For random trajectories, guide training uses the same path length INLINEFORM2 as is used during evaluation. We use a pre-trained tourist model with greedy decoding for generating the tourist utterances. Table TABREF40 summarizes the results.
Human vs random trajectories We only observe small improvements for training on random trajectories. Human trajectories are thus diverse enough to generalize to random trajectories.
Effect of path length There is a strong negative correlation between task success and the conditioned trajectory length. We observe that the full task performance quickly deteriorates for both human and random trajectories. This suggests that the tourist generation model cannot produce natural language utterances that describe multiple observations and actions. Although it is possible that the guide model cannot process such utterances, this is not very likely because the MASC architecture handles such messages successfully for emergent communication.
We report localization performance of tourist utterances generated by beam search decoding of varying beam size in Table TABREF40 . We find that performance decreases from 29.05% to 20.87% accuracy on the test set when we increase the beam-size from one to eight.
Localization from Human Utterances
We conduct an ablation study for MASC on natural language with varying dialogue context. Specifically, we compare localization accuracy of MASC and no-MASC models trained on the last [1, 3, 5] utterances of the dialogue (including guide utterances). We report these results in Table TABREF41 . In all cases, MASC outperforms the no-MASC models by several accuracy points. We also observe that mean predicted INLINEFORM0 (over the test set) increases from 1 to 2 when more dialogue context is included.
Visualizing MASC predictions
Figure FIGREF46 shows the MASC values for a learned model with emergent discrete communications and INLINEFORM0 actions. Specifically, we look at the predicted MASC values for different action sequences taken by the tourist. We observe that the first action is always mapped to the correct state-transition, but that the second and third MASC values do not always correspond to right state-transitions.
Evaluation on Full Setup
We provide pseudo-code for evaluation of localization models on the full task in Algorithm SECREF12 , as well as results for all emergent communication models in Table TABREF55 .
Algorithm (Appendix SECREF12 ): performance evaluation of the location prediction model on the full Talk The Walk setup. At each step of a random walk the tourist communicates its observations and actions, the guide predicts the tourist's location and, whenever the prediction equals the target, an evaluation is triggered; the episode ends after a successful evaluation or after three failed ones. A Python sketch of this protocol follows.
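The sketch below is a hedged reconstruction of the protocol described in the Experiments section; the `env`, `tourist` and `guide` interfaces are hypothetical stand-ins, not objects from the released implementation.

```python
import random

def run_episode(tourist, guide, env, target, max_failures=3, max_steps=100):
    """Hedged reconstruction of the random-walk evaluation protocol; `env`, `tourist`
    and `guide` are hypothetical interfaces, not objects from the released code."""
    location = env.random_start()
    observations, actions = [env.observe(location)], []
    failures = 0
    for _ in range(max_steps):
        action = random.choice(env.actions)                # the tourist walks randomly
        location = env.step(location, action)
        actions.append(action)
        observations.append(env.observe(location))
        message = tourist.communicate(observations, actions)
        if guide.localize(message) == target:              # guide believes tourist is at target
            if location == target:
                return True                                # successful evaluation
            failures += 1
            if failures >= max_failures:                   # three wrong evaluations end the episode
                return False
    return False
```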
Landmark Classification
While the guide has access to the landmark labels, the tourist needs to recognize these landmarks from raw perceptual information. In this section, we study landmark classification as a supervised learning problem to investigate the difficulty of perceptual grounding in Talk The Walk.
The Talk The Walk dataset contains a total of 307 different landmarks divided among nine classes, see Figure FIGREF62 for how they are distributed. The class distribution is fairly imbalanced, with shops and restaurants as the most frequent landmarks and relatively few play fields and theaters. We treat landmark recognition as a multi-label classification problem as there can be multiple landmarks on a corner.
For the task of landmark classification, we extract the relevant views of the 360 image from which a landmark is visible. Because landmarks are labeled to be on a specific corner of an intersection, we assume that they are visible from one of the orientations facing away from the intersection. For example, for a landmark on the northwest corner of an intersection, we extract views from both the north and west direction. The orientation-specific views are obtained by a planar projection of the full 360-image with a small field of view (60 degrees) to limit distortions. To cover the full field of view, we extract two images per orientation, with their horizontal focus point 30 degrees apart. Hence, we obtain eight images per 360 image with corresponding orientation INLINEFORM0 .
We run the following pre-trained feature extractors over the extracted images: a ResNet model BIBREF7 for visual features and a text recognition model BIBREF8 .
For the text recognition model, we use a learned look-up table INLINEFORM0 to embed the extracted text features INLINEFORM1 , and fuse all embeddings of four images through a bag of embeddings, i.e., INLINEFORM2 . We use a linear layer followed by a sigmoid to predict the probability for each class, i.e. INLINEFORM3 . We also experiment with replacing the look-up embeddings with pre-trained FastText embeddings BIBREF16 . For the ResNet model, we use a bag of embeddings over the four ResNet features, i.e. INLINEFORM4 , before we pass it through a linear layer to predict the class probabilities: INLINEFORM5 . We also conduct experiments where we first apply PCA to the extracted ResNet and FastText features before we feed them to the model.
To account for class imbalance, we train all described models with a binary cross entropy loss weighted by the inverted class frequency. We create an 80-20 class-conditional split of the dataset into a training and validation set. We train for 100 epochs and perform early stopping on the validation loss.
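A minimal sketch of such a classifier and its weighted loss is given below; the feature dimension, the number of views and the class counts are illustrative assumptions, and the inverse-frequency weighting is one plausible reading of the description.

```python
import torch
import torch.nn as nn

class LandmarkClassifier(nn.Module):
    """Bag-of-embeddings landmark classifier sketch; the feature dimension and the
    number of views per landmark are assumptions for illustration."""
    def __init__(self, feat_dim=2048, num_classes=9):
        super().__init__()
        self.linear = nn.Linear(feat_dim, num_classes)

    def forward(self, view_features):
        # view_features: (B, num_views, feat_dim), e.g. ResNet features of the extracted views
        return self.linear(view_features.sum(dim=1))       # bag of embeddings, then logits

# Multi-label loss weighted by (approximate) inverted class frequency; counts are made up.
class_counts = torch.tensor([120., 30., 45., 25., 80., 20., 35., 95., 10.])
criterion = nn.BCEWithLogitsLoss(pos_weight=class_counts.sum() / class_counts)

model = LandmarkClassifier()
loss = criterion(model(torch.randn(4, 4, 2048)), torch.randint(0, 2, (4, 9)).float())
```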
The F1 scores for the described methods are reported in Table TABREF65 . We compare to an “all positive” baseline that always predicts that the landmark class is visible and observe that all presented models struggle to outperform this baseline. Although 256-dimensional ResNet features achieve slightly better precision on the validation set, they result in much worse recall and a lower F1 score. Our results indicate that perceptual grounding is a difficult task, which easily merits a paper in its own right, and so we leave further improvements (e.g. better text recognizers) for future work.
Dataset Details
Dataset split We split the full dataset by assigning entire 4x4 grids (independent of the target location) to the train, valid or test set. Specifically, we design the split such that the valid set contains at least one intersection (out of four) that is not part of the train set. For the test set, all four intersections are novel. See our source code, available at URL ANONYMIZED, for more details on how this split is realized.
Example
Tourist: ACTION:TURNRIGHT ACTION:TURNRIGHT
Guide: Hello, what are you near?
Tourist: ACTION:TURNLEFT ACTION:TURNLEFT ACTION:TURNLEFT
Tourist: Hello, in front of me is a Brooks Brothers
Tourist: ACTION:TURNLEFT ACTION:FORWARD ACTION:TURNLEFT ACTION:TURNLEFT
Guide: Is that a shop or restaurant?
Tourist: ACTION:TURNLEFT
Tourist: It is a clothing shop.
Tourist: ACTION:TURNLEFT
Guide: You need to go to the intersection in the northwest corner of the map
Tourist: ACTION:TURNLEFT
Tourist: There appears to be a bank behind me.
Tourist: ACTION:TURNLEFT ACTION:TURNLEFT ACTION:TURNRIGHT ACTION:TURNRIGHT
Guide: Ok, turn left then go straight up that road
Tourist: ACTION:TURNLEFT ACTION:TURNLEFT ACTION:TURNLEFT ACTION:FORWARD ACTION:TURNRIGHT
ACTION:FORWARD ACTION:FORWARD ACTION:TURNLEFT ACTION:TURNLEFT ACTION:TURNLEFT
Guide: There should be shops on two of the corners but you
need to go to the corner without a shop.
Tourist: ACTION:FORWARD ACTION:FORWARD ACTION:FORWARD ACTION:TURNLEFT ACTION:TURNLEFT
Guide: let me know when you get there.
Tourist: on my left is Radio city Music hall
Tourist: ACTION:TURNLEFT ACTION:FORWARD ACTION:TURNLEFT ACTION:TURNRIGHT ACTION:TURNRIGHT
Tourist: I can't go straight any further.
Guide: ok. turn so that the theater is on your right.
Guide: then go straight
Tourist: That would be going back the way I came
Guide: yeah. I was looking at the wrong bank
Tourist: I'll notify when I am back at the brooks brothers, and the bank.
Tourist: ACTION:TURNRIGHT
Guide: make a right when the bank is on your left
Tourist: ACTION:FORWARD ACTION:FORWARD ACTION:TURNRIGHT
Tourist: Making the right at the bank.
Tourist: ACTION:FORWARD ACTION:FORWARD
Tourist: I can't go that way.
Tourist: ACTION:TURNLEFT
Tourist: Bank is ahead of me on the right
Tourist: ACTION:FORWARD ACTION:FORWARD ACTION:TURNLEFT
Guide: turn around on that intersection
Tourist: I can only go to the left or back the way I just came.
Tourist: ACTION:TURNLEFT
Guide: you're in the right place. do you see shops on the corners?
Guide: If you're on the corner with the bank, cross the street
Tourist: I'm back where I started by the shop and the bank.
Tourist: ACTION:TURNRIGHT
Guide: on the same side of the street?
Tourist: crossing the street now
Tourist: ACTION:FORWARD ACTION:FORWARD ACTION:TURNLEFT
Tourist: there is an I love new york shop across the street on the left from me now
Tourist: ACTION:TURNRIGHT ACTION:FORWARD
Guide: ok. I'll see if it's right.
Guide: EVALUATE_LOCATION
Guide: It's not right.
Tourist: What should I be on the look for?
Tourist: ACTION:TURNRIGHT ACTION:TURNRIGHT ACTION:TURNRIGHT
Guide: There should be shops on two corners but you need to be on one of the corners
without the shop.
Guide: Try the other corner.
Tourist: this intersection has 2 shop corners and a bank corner
Guide: yes. that's what I see on the map.
Tourist: should I go to the bank corner? or one of the shop corners?
or the blank corner (perhaps a hotel)
Tourist: ACTION:TURNLEFT ACTION:TURNLEFT ACTION:TURNRIGHT ACTION:TURNRIGHT
Guide: Go to the one near the hotel. The map says the hotel is a little
further down but it might be a little off.
Tourist: It's a big hotel it's possible.
Tourist: ACTION:FORWARD ACTION:TURNLEFT ACTION:FORWARD ACTION:TURNRIGHT
Tourist: I'm on the hotel corner
Guide: EVALUATE_LOCATION | localization accuracy |
7e34501255b89d64b9598b409d73f96489aafe45 | 7e34501255b89d64b9598b409d73f96489aafe45_0 | Q: What data did they use?
Text:
We introduce “Talk The Walk”, the first large-scale dialogue dataset grounded in action and perception. The task involves two agents (a “guide” and a “tourist”) that communicate via natural language in order to achieve a common goal: having the tourist navigate to a given target location. The task and dataset, which are described in detail, are challenging and their full solution is an open problem that we pose to the community. We (i) focus on the task of tourist localization and develop the novel Masked Attention for Spatial Convolutions (MASC) mechanism that allows for grounding tourist utterances into the guide's map, (ii) show it yields significant improvements for both emergent and natural language communication, and (iii) using this method, we establish non-trivial baselines on the full task.
Introduction
As artificial intelligence plays an ever more prominent role in everyday human lives, it becomes increasingly important to enable machines to communicate via natural language—not only with humans, but also with each other. Learning algorithms for natural language understanding, such as in machine translation and reading comprehension, have progressed at an unprecedented rate in recent years, but still rely on static, large-scale, text-only datasets that lack crucial aspects of how humans understand and produce natural language. Namely, humans develop language capabilities by being embodied in an environment which they can perceive, manipulate and move around in; and by interacting with other humans. Hence, we argue that we should incorporate all three fundamental aspects of human language acquisition—perception, action and interactive communication—and develop a task and dataset to that effect.
We introduce the Talk the Walk dataset, where the aim is for two agents, a “guide” and a “tourist”, to interact with each other via natural language in order to achieve a common goal: having the tourist navigate towards the correct location. The guide has access to a map and knows the target location, but does not know where the tourist is; the tourist has a 360-degree view of the world, but knows neither the target location on the map nor the way to it. The agents need to work together through communication in order to successfully solve the task. An example of the task is given in Figure FIGREF3 .
Grounded language learning has (re-)gained traction in the AI community, and much attention is currently devoted to virtual embodiment—the development of multi-agent communication tasks in virtual environments—which has been argued to be a viable strategy for acquiring natural language semantics BIBREF0 . Various related tasks have recently been introduced, but in each case with some limitations. Although visually grounded dialogue tasks BIBREF1 , BIBREF2 comprise perceptual grounding and multi-agent interaction, their agents are passive observers and do not act in the environment. By contrast, instruction-following tasks, such as VNL BIBREF3 , involve action and perception but lack natural language interaction with other agents. Furthermore, some of these works use simulated environments BIBREF4 and/or templated language BIBREF5 , which arguably oversimplifies real perception or natural language, respectively. See Table TABREF15 for a comparison.
Talk The Walk is the first task to bring all three aspects together: perception for the tourist observing the world, action for the tourist to navigate through the environment, and interactive dialogue for the tourist and guide to work towards their common goal. To collect grounded dialogues, we constructed a virtual 2D grid environment by manually capturing 360-views of several neighborhoods in New York City (NYC). As the main focus of our task is on interactive dialogue, we limit the difficulty of the control problem by having the tourist navigating a 2D grid via discrete actions (turning left, turning right and moving forward). Our street view environment was integrated into ParlAI BIBREF6 and used to collect a large-scale dataset on Mechanical Turk involving human perception, action and communication.
We argue that for artificial agents to solve this challenging problem, some fundamental architecture designs are missing, and our hope is that this task motivates their innovation. To that end, we focus on the task of localization and develop the novel Masked Attention for Spatial Convolutions (MASC) mechanism. To model the interaction between language and action, this architecture repeatedly conditions the spatial dimensions of a convolution on the communicated message sequence.
This work makes the following contributions: 1) We present the first large-scale dialogue dataset grounded in action and perception; 2) We introduce the MASC architecture for localization and show it yields improvements for both emergent and natural language; 3) Using localization models, we establish initial baselines on the full task; 4) We show that our best model exceeds human performance under the assumption of “perfect perception” and with a learned emergent communication protocol, and sets a non-trivial baseline with natural language.
Talk The Walk
We create a perceptual environment by manually capturing several neighborhoods of New York City (NYC) with a 360 camera. Most parts of the city are grid-like and uniform, which makes it well-suited for obtaining a 2D grid. For Talk The Walk, we capture parts of Hell's Kitchen, East Village, the Financial District, Williamsburg and the Upper East Side—see Figure FIGREF66 in Appendix SECREF14 for their respective locations within NYC. For each neighborhood, we choose an approximately 5x5 grid and capture a 360 view on all four corners of each intersection, leading to a grid-size of roughly 10x10 per neighborhood.
The tourist's location is given as a tuple INLINEFORM0 , where INLINEFORM1 are the coordinates and INLINEFORM2 signifies the orientation (north, east, south or west). The tourist can take three actions: turn left, turn right and go forward. For moving forward, we add INLINEFORM3 , INLINEFORM4 , INLINEFORM5 , INLINEFORM6 to the INLINEFORM7 coordinates for the respective orientations. Upon a turning action, the orientation is updated by INLINEFORM8 where INLINEFORM9 for left and INLINEFORM10 for right. If the tourist moves outside the grid, we issue a warning that they cannot go in that direction and do not update the location. Moreover, tourists are shown different types of transitions: a short transition for actions that bring the tourist to a different corner of the same intersection; and a longer transition for actions that bring them to a new intersection.
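A minimal sketch of this transition function is given below; the compass convention for the forward offsets and the turn ordering are illustrative assumptions.

```python
# Sketch of the tourist transition with orientation; the mapping from orientations to
# coordinate offsets is an assumption made for illustration.
ORIENTATIONS = ["N", "E", "S", "W"]                       # turning right cycles N -> E -> S -> W
FORWARD_OFFSETS = {"N": (0, 1), "E": (1, 0), "S": (0, -1), "W": (-1, 0)}

def act(x, y, orientation, action, width=10, height=10):
    idx = ORIENTATIONS.index(orientation)
    if action == "turn_left":
        return x, y, ORIENTATIONS[(idx - 1) % 4]
    if action == "turn_right":
        return x, y, ORIENTATIONS[(idx + 1) % 4]
    dx, dy = FORWARD_OFFSETS[orientation]                 # action == "go_forward"
    nx, ny = x + dx, y + dy
    if 0 <= nx < width and 0 <= ny < height:
        return nx, ny, orientation
    print("You cannot go in that direction.")             # warning; the location is not updated
    return x, y, orientation

print(act(0, 0, "N", "go_forward"))                       # (0, 1, 'N')
print(act(0, 0, "S", "go_forward"))                       # blocked at the grid boundary
```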
The guide observes a map that corresponds to the tourist's environment. We exploit the fact that urban areas like NYC are full of local businesses, and overlay the map with these landmarks as localization points for our task. Specifically, we manually annotate each corner of the intersection with a set of landmarks INLINEFORM0 , each coming from one of the following categories:
Bar, Playfield, Bank, Hotel, Shop, Subway, Coffee Shop, Restaurant, Theater
The right-side of Figure FIGREF3 illustrates how the map is presented. Note that within-intersection transitions have a smaller grid distance than transitions to new intersections. To ensure that the localization task is not too easy, we do not include street names in the overhead map and keep the landmark categories coarse. That is, the dialogue is driven by uncertainty in the tourist's current location and the properties of the target location: if the exact location and orientation of the tourist were known, it would suffice to communicate a sequence of actions.
Task
For the Talk The Walk task, we randomly choose one of the five neighborhoods, and subsample a 4x4 grid (one block with four complete intersections) from the entire grid. We specify the boundaries of the grid by the top-left and bottom-right corners INLINEFORM0 . Next, we construct the overhead map of the environment, i.e. INLINEFORM1 with INLINEFORM2 and INLINEFORM3 . We subsequently sample a start location and orientation INLINEFORM4 and a target location INLINEFORM5 at random.
The shared goal of the two agents is to navigate the tourist to the target location INLINEFORM0 , which is only known to the guide. The tourist perceives a “street view” planar projection INLINEFORM1 of the 360 image at location INLINEFORM2 and can simultaneously chat with the guide and navigate through the environment. The guide's role consists of reading the tourist description of the environment, building a “mental map” of their current position and providing instructions for navigating towards the target location. Whenever the guide believes that the tourist has reached the target location, they instruct the system to evaluate the tourist's location. The task ends when the evaluation is successful—i.e., when INLINEFORM3 —or otherwise continues until a total of three failed attempts. The additional attempts are meant to ease the task for humans, as we found that they otherwise often fail at the task but still end up close to the target location, e.g., at the wrong corner of the correct intersection.
Data Collection
We crowd-sourced the collection of the dataset on Amazon Mechanical Turk (MTurk). We use the MTurk interface of ParlAI BIBREF6 to render 360 images via WebGL and dynamically display neighborhood maps with an HTML5 canvas. Detailed task instructions, which were also given to our workers before they started their task, are shown in Appendix SECREF15 . We paired Turkers at random and let them alternate between the tourist and guide role across different HITs.
Dataset Statistics
The Talk The Walk dataset consists of over 10k successful dialogues—see Table FIGREF66 in the appendix for the dataset statistics split by neighborhood. Turkers successfully completed INLINEFORM0 of all finished tasks (we use this statistic as the human success rate). More than six hundred participants successfully completed at least one Talk The Walk HIT. Although the Visual Dialog BIBREF2 and GuessWhat BIBREF1 datasets are larger, the collected Talk The Walk dialogs are significantly longer. On average, Turkers needed more than 62 acts (i.e utterances and actions) before they successfully completed the task, whereas Visual Dialog requires 20 acts. The majority of acts comprise the tourist's actions, with on average more than 44 actions per dialogue. The guide produces roughly 9 utterances per dialogue, slightly more than the tourist's 8 utterances. Turkers use diverse discourse, with a vocabulary size of more than 10K (calculated over all successful dialogues). An example from the dataset is shown in Appendix SECREF14 . The dataset is available at https://github.com/facebookresearch/talkthewalk.
Experiments
We investigate the difficulty of the proposed task by establishing initial baselines. The final Talk The Walk task is challenging and encompasses several important sub-tasks, ranging from landmark recognition to tourist localization and natural language instruction-giving. Arguably the most important sub-task is localization: without such capabilities the guide can not tell whether the tourist reached the target location. In this work, we establish a minimal baseline for Talk The Walk by utilizing agents trained for localization. Specifically, we let trained tourist models undertake random walks, using the following protocol: at each step, the tourist communicates its observations and actions to the guide, who predicts the tourist's location. If the guide predicts that the tourist is at target, we evaluate its location. If successful, the task ends, otherwise we continue until there have been three wrong evaluations. The protocol is given as pseudo-code in Appendix SECREF12 .
Tourist Localization
The designed navigation protocol relies on a trained localization model that predicts the tourist's location from a communicated message. Before we formalize this localization sub-task in Section UID21 , we further introduce two simplifying assumptions—perfect perception and orientation-agnosticism—so as to overcome some of the difficulties we encountered in preliminary experiments.
paragraph4 0.1ex plus0.1ex minus.1ex-1em Perfect Perception Early experiments revealed that perceptual grounding of landmarks is difficult: we set up a landmark classification problem, on which models with extracted CNN BIBREF7 or text recognition features BIBREF8 barely outperform a random baseline—see Appendix SECREF13 for full details. This finding implies that localization models from image input are limited by their ability to recognize landmarks, and, as a result, would not generalize to unseen environments. To ensure that perception is not the limiting factor when investigating the landmark-grounding and action-grounding capabilities of localization models, we assume “perfect perception”: in lieu of the 360 image view, the tourist is given the landmarks at its current location. More formally, each state observation INLINEFORM0 now equals the set of landmarks at the INLINEFORM1 -location, i.e. INLINEFORM2 . If the INLINEFORM3 -location does not have any visible landmarks, we return a single “empty corner” symbol. We stress that our findings—including a novel architecture for grounding actions into an overhead map, see Section UID28 —should carry over to settings without the perfect perception assumption.
paragraph4 0.1ex plus0.1ex minus.1ex-1em Orientation-agnosticism We opt to ignore the tourist's orientation, which simplifies the set of actions to [Left, Right, Up, Down], corresponding to adding [(-1, 0), (1, 0), (0, 1), (0, -1)] to the current INLINEFORM0 coordinates, respectively. Note that actions are now coupled to an orientation on the map—e.g. up is equal to going north—and this implicitly assumes that the tourist has access to a compass. This also affects perception, since the tourist now has access to views from all orientations: in conjunction with “perfect perception”, implying that only landmarks at the current corner are given, whereas landmarks from different corners (e.g. across the street) are not visible.
Even with these simplifications, the localization-based baseline comes with its own set of challenges. As we show in Section SECREF34 , the task requires communication about a short (random) path—i.e., not only a sequence of observations but also actions—in order to achieve high localization accuracy. This means that the guide needs to decode observations from multiple time steps, as well as understand their 2D spatial arrangement as communicated via the sequence of actions. Thus, in order to get to a good understanding of the task, we thoroughly examine whether the agents can learn a communication protocol that simultaneously grounds observations and actions into the guide's map. In doing so, we thoroughly study the role of the communication channel in the localization task, by investigating increasingly constrained forms of communication: from differentiable continuous vectors to emergent discrete symbols to the full complexity of natural language.
The full navigation baseline hinges on a localization model from random trajectories. While we can sample random actions in the emergent communication setup, this is not possible for the natural language setup because the messages are coupled to the trajectories of the human annotators. This leads to slightly different problem setups, as described below.
paragraph4 0.1ex plus0.1ex minus.1ex-1em Emergent language A tourist, starting from a random location, takes INLINEFORM0 random actions INLINEFORM1 to reach target location INLINEFORM2 . Every location in the environment has a corresponding set of landmarks INLINEFORM3 for each of the INLINEFORM4 coordinates. As the tourist navigates, the agent perceives INLINEFORM5 state-observations INLINEFORM6 where each observation INLINEFORM7 consists of a set of INLINEFORM8 landmark symbols INLINEFORM9 . Given the observations INLINEFORM10 and actions INLINEFORM11 , the tourist generates a message INLINEFORM12 which is communicated to the other agent. The objective of the guide is to predict the location INLINEFORM13 from the tourist's message INLINEFORM14 .
Natural language In contrast to our emergent communication experiments, we do not take random actions but instead extract actions, observations, and messages from the dataset. Specifically, we consider each tourist utterance (i.e. at any point in the dialogue), obtain the current tourist location as target location INLINEFORM0 , the utterance itself as message INLINEFORM1 , and the sequence of observations and actions that took place between the current and previous tourist utterance as INLINEFORM2 and INLINEFORM3 , respectively. Similar to the emergent language setting, the guide's objective is to predict the target location INLINEFORM4 from the tourist message INLINEFORM5 . We conduct experiments with INLINEFORM6 taken from the dataset and with INLINEFORM7 generated from the extracted observations INLINEFORM8 and actions INLINEFORM9 .
Model
This section outlines the tourist and guide architectures. We first describe how the tourist produces messages for the various communication channels across which the messages are sent. We subsequently describe how these messages are processed by the guide, and introduce the novel Masked Attention for Spatial Convolutions (MASC) mechanism that allows for grounding into the 2D overhead map in order to predict the tourist's location.
The Tourist
For each of the communication channels, we outline the procedure for generating a message INLINEFORM0 . Given a set of state observations INLINEFORM1 , we represent each observation by summing the INLINEFORM2 -dimensional embeddings of the observed landmarks, i.e. for INLINEFORM3 , INLINEFORM4 , where INLINEFORM5 is the landmark embedding lookup table. In addition, we embed action INLINEFORM6 into a INLINEFORM7 -dimensional embedding INLINEFORM8 via a look-up table INLINEFORM9 . We experiment with three types of communication channel.
paragraph4 0.1ex plus0.1ex minus.1ex-1em Continuous vectors The tourist has access to observations of several time steps, whose order is important for accurate localization. Because summing embeddings is order-invariant, we introduce a sum over positionally-gated embeddings, which, conditioned on time step INLINEFORM0 , pushes embedding information into the appropriate dimensions. More specifically, we generate an observation message INLINEFORM1 , where INLINEFORM2 is a learned gating vector for time step INLINEFORM3 . In a similar fashion, we produce action message INLINEFORM4 and send the concatenated vectors INLINEFORM5 as message to the guide. We can interpret continuous vector communication as a single, monolithic model because its architecture is end-to-end differentiable, enabling gradient-based optimization for training.
paragraph4 0.1ex plus0.1ex minus.1ex-1em Discrete symbols Like the continuous vector communication model, with discrete communication the tourist also uses separate channels for observations and actions, as well as a sum over positionally gated embeddings to generate observation embedding INLINEFORM0 . We pass this embedding through a sigmoid and generate a message INLINEFORM1 by sampling from the resulting Bernoulli distributions:
INLINEFORM0
The action message INLINEFORM0 is produced in the same way, and we obtain the final tourist message INLINEFORM1 through concatenating the messages.
The sampling operation of the communication channel makes the model non-differentiable, so we use policy gradients BIBREF9 , BIBREF10 to train the parameters INLINEFORM0 of the tourist model. That is, we estimate the gradient by INLINEFORM1
where the reward function INLINEFORM0 is the negative of the guide's loss (see Section SECREF25 ) and INLINEFORM1 is a state-value baseline to reduce variance. We use a linear transformation over the concatenated embeddings as the baseline prediction, i.e. INLINEFORM2 , and train it with a mean squared error loss.
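A hedged sketch of this training signal is given below; guide_loss_fn and baseline_layer are hypothetical stand-ins for the guide's loss computation and the linear baseline described above:

```python
import torch

def discrete_message_step(obs_emb, act_emb, guide_loss_fn, baseline_layer):
    """Sample a binary message and build the REINFORCE + baseline losses (sketch)."""
    logits = torch.cat([obs_emb, act_emb])             # concatenated channel pre-activations
    dist = torch.distributions.Bernoulli(probs=torch.sigmoid(logits))
    message = dist.sample()                             # non-differentiable sampling step
    with torch.no_grad():
        reward = -guide_loss_fn(message)                # reward = negative guide loss
    baseline = baseline_layer(torch.cat([obs_emb, act_emb]).detach())
    advantage = reward - baseline.detach()
    policy_loss = -(advantage * dist.log_prob(message).sum())
    baseline_loss = (baseline - reward).pow(2).mean()   # mean squared error for the baseline
    return message, policy_loss + baseline_loss
```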
Natural Language: Because observations and actions are of variable length, we use an LSTM encoder over the sequence of observation embeddings INLINEFORM0 , and extract its last hidden state INLINEFORM1 . We use a separate LSTM encoder for action embeddings INLINEFORM2 , and concatenate both INLINEFORM3 and INLINEFORM4 to the input of the LSTM decoder at each time step: DISPLAYFORM0
where INLINEFORM0 is a look-up table taking input tokens INLINEFORM1 . We train with teacher forcing, i.e. we optimize the cross-entropy loss: INLINEFORM2 . At test time, we explore the following decoding strategies: greedy decoding, sampling and beam search. We also fine-tune a trained tourist model (starting from a pre-trained model) with policy gradients in order to minimize the guide's prediction loss.
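The following sketch (dimensions, class and variable names are assumptions) illustrates how the two encoder states are concatenated to the decoder input at every step under teacher forcing:

```python
import torch
import torch.nn as nn

class NaturalLanguageTourist(nn.Module):
    """Sketch: LSTM encoders for observations/actions, fed into the decoder at every step."""
    def __init__(self, vocab_size, emb_dim=128, hidden=256):
        super().__init__()
        self.obs_encoder = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.act_encoder = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.word_emb = nn.Embedding(vocab_size, emb_dim)
        self.decoder = nn.LSTM(emb_dim + 2 * hidden, hidden, batch_first=True)
        self.vocab_out = nn.Linear(hidden, vocab_size)

    def forward(self, obs_seq, act_seq, tokens):
        # obs_seq, act_seq: (1, T, emb_dim); tokens: (1, L) gold utterance (teacher forcing)
        _, (h_obs, _) = self.obs_encoder(obs_seq)
        _, (h_act, _) = self.act_encoder(act_seq)
        context = torch.cat([h_obs[-1], h_act[-1]], dim=-1)            # (1, 2*hidden)
        context = context.unsqueeze(1).expand(-1, tokens.size(1), -1)  # repeat per step
        decoder_in = torch.cat([self.word_emb(tokens), context], dim=-1)
        out, _ = self.decoder(decoder_in)
        return self.vocab_out(out)   # logits for the cross-entropy loss
```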
The Guide
Given a tourist message INLINEFORM0 describing their observations and actions, the objective of the guide is to predict the tourist's location on the map. First, we outline the procedure for extracting observation embedding INLINEFORM1 and action embeddings INLINEFORM2 from the message INLINEFORM3 for each of the types of communication. Next, we discuss the MASC mechanism that grounds the observations and actions on the guide's map in order to predict the tourist's location.
Continuous: For the continuous communication model, we assign the observation message to the observation embedding, i.e. INLINEFORM0 . To extract the action embedding for time step INLINEFORM1 , we apply a linear layer to the action message, i.e. INLINEFORM2 .
Discrete: For discrete communication, we obtain the observation embedding INLINEFORM0 by applying a linear layer to the observation message, i.e. INLINEFORM1 . Similar to the continuous communication model, we use a linear layer over action message INLINEFORM2 to obtain action embedding INLINEFORM3 for time step INLINEFORM4 .
Natural Language: The message INLINEFORM0 contains information about observations and actions, so we use a recurrent neural network with an attention mechanism to extract the relevant observation and action embeddings. Specifically, we encode the message INLINEFORM1 , consisting of INLINEFORM2 tokens INLINEFORM3 taken from vocabulary INLINEFORM4 , with a bidirectional LSTM: DISPLAYFORM0
where INLINEFORM0 is the word embedding look-up table. We obtain observation embedding INLINEFORM1 through an attention mechanism over the hidden states INLINEFORM2 : DISPLAYFORM0
where INLINEFORM0 is a learned control embedding that is updated through a linear transformation of the previous control and observation embedding: INLINEFORM1 . We use the same mechanism to extract the action embedding INLINEFORM2 from the hidden states. For the observation embedding, we obtain the final representation by summing positionally gated embeddings, i.e., INLINEFORM3 .
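As an illustration, the attention read-out can be sketched as below; the dot-product scoring function is an assumption, since the extracted text does not specify its exact form:

```python
import torch

def attention_readout(hidden_states, control):
    # hidden_states: (L, h) BiLSTM states of the message; control: (h,) control embedding
    weights = torch.softmax(hidden_states @ control, dim=0)   # assumed dot-product scoring
    return weights @ hidden_states                            # extracted embedding of shape (h,)
```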
We represent the guide's map as INLINEFORM0 , where in this case INLINEFORM1 , and where each INLINEFORM2 -dimensional INLINEFORM3 location embedding INLINEFORM4 is computed as the sum of the guide's landmark embeddings for that location.
Motivation: While the guide's map representation contains only local landmark information, the tourist communicates a trajectory of the map (i.e. actions and observations from multiple locations), implying that directly comparing the tourist's message with the individual landmark embeddings is probably suboptimal. Instead, we want to aggregate landmark information from surrounding locations by imputing trajectories over the map to predict locations. We propose a mechanism for translating landmark embeddings according to state transitions (left, right, up, down), which can be expressed as a 2D convolution over the map embeddings. For simplicity, let us assume that the map embedding INLINEFORM0 is 1-dimensional, then a left action can be realized through application of the following INLINEFORM1 kernel: INLINEFORM2 which effectively shifts all values of INLINEFORM3 one position to the left. We propose to learn such state-transitions from the tourist message through a differentiable attention-mask over the spatial dimensions of a 3x3 convolution.
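The left-shift example can be checked numerically with an ordinary 2D convolution; the particular one-hot kernel position shown here depends on the coordinate convention and is only illustrative:

```python
import torch
import torch.nn.functional as F

m = torch.arange(16.0).view(1, 1, 4, 4)   # a toy 1-channel 4x4 "map embedding"
left = torch.zeros(1, 1, 3, 3)
left[0, 0, 1, 2] = 1.0                     # kernel that reads the cell to the right
shifted = F.conv2d(m, left, padding=1)     # each cell now holds its right neighbour,
                                           # i.e. the map content has moved one step left
```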
MASC: We linearly project each predicted action embedding INLINEFORM0 to a 9-dimensional vector INLINEFORM1 , normalize it by a softmax and subsequently reshape the vector into a 3x3 mask INLINEFORM2 : DISPLAYFORM0
We learn a 3x3 convolutional kernel INLINEFORM0 , with INLINEFORM1 features, and apply the mask INLINEFORM2 to the spatial dimensions of the convolution by first broadcasting its values along the feature dimensions, i.e. INLINEFORM3 , and subsequently taking the Hadamard product: INLINEFORM4 . For each action step INLINEFORM5 , we then apply a 2D convolution with masked weight INLINEFORM6 to obtain a new map embedding INLINEFORM7 , where we zero-pad the input to maintain identical spatial dimensions.
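A minimal sketch of a single MASC step, assuming PyTorch-style shapes (the class and variable names are ours, not the authors'):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MASCStep(nn.Module):
    """Sketch of one MASC application: action -> 3x3 mask -> masked 2D convolution."""
    def __init__(self, channels, action_dim):
        super().__init__()
        self.to_mask = nn.Linear(action_dim, 9)                  # action -> 3x3 mask logits
        self.kernel = nn.Parameter(torch.randn(channels, channels, 3, 3))

    def forward(self, map_emb, action_emb):
        # map_emb: (1, C, H, W); action_emb: (action_dim,)
        mask = torch.softmax(self.to_mask(action_emb), dim=-1).view(1, 1, 3, 3)
        masked = self.kernel * mask                              # broadcast over feature dims
        return F.conv2d(map_emb, masked, padding=1)              # zero-padded, same spatial size
```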
Prediction model: We repeat the MASC operation INLINEFORM0 times (i.e. once for each action), and then aggregate the map embeddings by a sum over positionally-gated embeddings: INLINEFORM1 . We score locations by taking the dot-product of the observation embedding INLINEFORM2 , which contains information about the sequence of observed landmarks by the tourist, and the map. We compute a distribution over the locations of the map INLINEFORM3 by taking a softmax over the computed scores: DISPLAYFORM0
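The location scoring itself reduces to a per-location dot product followed by a softmax, e.g.:

```python
import torch

def predict_location(map_emb, obs_emb):
    # map_emb: (C, H, W) aggregated map after the MASC steps; obs_emb: (C,)
    scores = torch.einsum('chw,c->hw', map_emb, obs_emb)   # dot product per location
    return torch.softmax(scores.flatten(), dim=0)          # distribution over map locations
```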
Predicting T: While emergent communication models use a fixed-length trajectory INLINEFORM0 , natural language messages may differ in the number of communicated observations and actions. Hence, we predict INLINEFORM1 from the communicated message. Specifically, we use a softmax regression layer over the last hidden state INLINEFORM2 of the RNN, and subsequently sample INLINEFORM3 from the resulting multinomial distribution: DISPLAYFORM0
We jointly train the INLINEFORM0 -prediction model via REINFORCE, with the guide's loss as reward function and a mean-reward baseline.
Comparisons
To better analyze the performance of the models incorporating MASC, we compare against a no-MASC baseline in our experiments, as well as a prediction upper bound.
No MASC: We compare the proposed MASC model with a model that does not include this mechanism. Whereas MASC predicts a convolution mask from the tourist message, the “No MASC” model uses INLINEFORM0 , the ordinary convolutional kernel, to convolve the map embedding INLINEFORM1 to obtain INLINEFORM2 . We also share the weights of this convolution at each time step.
Prediction upper-bound: Because we have access to the class-conditional likelihood INLINEFORM0 , we are able to compute the Bayes error rate (or irreducible error). No model (no matter how expressive) with any amount of data can ever obtain better localization accuracy as there are multiple locations consistent with the observations and actions.
Results and Discussion
In this section, we describe the findings of various experiments. First, we analyze how much information needs to be communicated for accurate localization in the Talk The Walk environment, and find that a short random path (including actions) is necessary. Next, for emergent language, we show that the MASC architecture can achieve very high localization accuracy, significantly outperforming the baseline that does not include this mechanism. We then turn our attention to the natural language experiments, and find that localization from human utterances is much harder, reaching an accuracy level that is below communicating a single landmark observation. We show that generated utterances from a conditional language model lead to significantly better localization performance, by successfully grounding the utterance on a single landmark observation (but not yet on multiple observations and actions). Finally, we show performance of the localization baseline on the full task, which can be used for future comparisons to this work.
Analysis of Localization Task
Task is not too easy: The upper-bound on localization performance in Table TABREF32 suggests that communicating a single landmark observation is not sufficient for accurate localization of the tourist ( INLINEFORM0 35% accuracy). This is an important result for the full navigation task because the need for two-way communication disappears if localization is too easy; if the guide knows the exact location of the tourist it suffices to communicate a list of instructions, which is then executed by the tourist. The uncertainty in the tourist's location is what drives the dialogue between the two agents.
Importance of actions: We observe that the upper bound for only communicating observations plateaus around 57% (even for INLINEFORM0 actions), whereas it exceeds 90% when we also take actions into account. This implies that, at least for random walks, it is essential to communicate a trajectory, including observations and actions, in order to achieve high localization accuracy.
Emergent Language Localization
We first report the results for tourist localization with emergent language in Table TABREF32 .
MASC improves performance: The MASC architecture significantly improves performance compared to models that do not include this mechanism. For instance, for INLINEFORM0 action, MASC already achieves 56.09% on the test set and this further increases to 69.85% for INLINEFORM1 . On the other hand, no-MASC models hit a plateau at 43%. In Appendix SECREF11 , we analyze learned MASC values, and show that communicated actions are often mapped to corresponding state-transitions.
Continuous vs discrete: We observe similar performance for continuous and discrete emergent communication models, implying that a discrete communication channel is not a limiting factor for localization performance.
Natural Language Localization
We report the results of tourist localization with natural language in Table TABREF36 . We compare the accuracy of the guide model (with MASC) trained on utterances from (i) humans, (ii) a supervised model with various decoding strategies, and (iii) a policy gradient model optimized with respect to the loss of a frozen, pre-trained guide model on human utterances.
Human utterances: Compared to emergent language, localization from human utterances is much harder, achieving only INLINEFORM0 on the test set. Here, we report localization from a single utterance, but in Appendix SECREF45 we show that including up to five dialogue utterances only improves performance to INLINEFORM1 . We also show that MASC outperforms no-MASC models for natural language communication.
Generated utterances: We also investigate generated tourist utterances from conditional language models. Interestingly, we observe that both the supervised model (with greedy and beam-search decoding) and the policy gradient model lead to an improvement of more than 10 accuracy points over the human utterances. However, their level of accuracy is slightly below the baseline of communicating a single observation, indicating that these models only learn to ground utterances in a single landmark observation.
Better grounding of generated utterances: We analyze natural language samples in Table TABREF38 , and confirm that, unlike human utterances, the generated utterances are talking about the observed landmarks. This observation explains why the generated utterances obtain higher localization accuracy. The current language models are most successful when conditioned on a single landmark observation; we show in Appendix UID43 that performance quickly deteriorates when the model is conditioned on more observations, suggesting that it cannot produce natural language utterances about multiple time steps.
Localization-based Baseline
Table TABREF36 shows results for the best localization models on the full task, evaluated via the random walk protocol defined in Algorithm SECREF12 .
Comparison with human annotators: Interestingly, our best localization model (continuous communication, with MASC, and INLINEFORM0 ) achieves 88.33% on the test set and thus exceeds human performance of 76.74% on the full task. While emergent models appear to be stronger localizers, humans might cope with their localization uncertainty through other mechanisms (e.g. better guidance, bias towards taking particular paths, etc.). The simplifying assumption of perfect perception also helps.
Number of actions: Unsurprisingly, humans take fewer steps (roughly 15) than our best random walk model (roughly 34). Our human annotators likely used some form of guidance to navigate faster to the target.
Conclusion
We introduced the Talk The Walk task and dataset, which consists of crowd-sourced dialogues in which two human annotators collaborate to navigate to target locations in the virtual streets of NYC. For the important localization sub-task, we proposed MASC—a novel grounding mechanism to learn state-transitions from the tourist's message—and showed that it improves localization performance for emergent and natural language. We use the localization model to provide baseline numbers on the Talk The Walk task, in order to facilitate future research.
Related Work
The Talk the Walk task and dataset facilitate future research on various important subfields of artificial intelligence, including grounded language learning, goal-oriented dialogue research and situated navigation. Here, we describe related previous work in these areas.
Related tasks: There has been a long line of work involving related tasks. Early work on task-oriented dialogue dates back to the early 90s with the introduction of the Map Task BIBREF11 and Maze Game BIBREF25 corpora. Recent efforts have led to larger-scale goal-oriented dialogue datasets, for instance to aid research on visually-grounded dialogue BIBREF2 , BIBREF1 , knowledge-base-grounded discourse BIBREF29 or negotiation tasks BIBREF36 . At the same time, there has been a big push to develop environments for embodied AI, many of which involve agents following natural language instructions with respect to an environment BIBREF13 , BIBREF50 , BIBREF5 , BIBREF39 , BIBREF19 , BIBREF18 , following up on early work in this area BIBREF38 , BIBREF20 . An early example of navigation using neural networks is BIBREF28 , who propose an online learning approach for robot navigation. Recently, there has been increased interest in using end-to-end trainable neural networks for learning to navigate indoor scenes BIBREF27 , BIBREF26 or large cities BIBREF17 , BIBREF40 , but, unlike our work, without multi-agent communication. The task of localization (without multi-agent communication) has also recently been studied BIBREF18 , BIBREF48 .
Grounded language learning: Grounded language learning is motivated by the observation that humans learn language embodied (grounded) in sensorimotor experience of the physical world BIBREF15 , BIBREF45 . On the one hand, work in multi-modal semantics has shown that grounding can lead to practical improvements on various natural language understanding tasks BIBREF14 , BIBREF31 . In robotics, researchers dissatisfied with purely symbolic accounts of meaning attempted to build robotic systems with the aim of grounding meaning in physical experience of the world BIBREF44 , BIBREF46 . Recently, grounding has also been applied to the learning of sentence representations BIBREF32 , image captioning BIBREF37 , BIBREF49 , visual question answering BIBREF12 , BIBREF22 , visual reasoning BIBREF30 , BIBREF42 , and grounded machine translation BIBREF43 , BIBREF23 . Grounding also plays a crucial role in the emerging research on multi-agent communication, where agents communicate (in natural language or otherwise) in order to solve a task with respect to their shared environment BIBREF35 , BIBREF21 , BIBREF41 , BIBREF24 , BIBREF36 , BIBREF47 , BIBREF34 .
Implementation Details
For the emergent communication models, we use an embedding size INLINEFORM0 . The natural language experiments use 128-dimensional word embeddings and a bidirectional RNN with 256 units. In all experiments, we train the guide with a cross entropy loss using the ADAM optimizer with default hyper-parameters BIBREF33 . We perform early stopping on the validation accuracy, and report the corresponding train, valid and test accuracy. We optimize the localization models with continuous, discrete and natural language communication channels for 200, 200, and 25 epochs, respectively. To facilitate further research on Talk The Walk, we make our code base for reproducing experiments publicly available at https://github.com/facebookresearch/talkthewalk.
Additional Natural Language Experiments
First, we investigate the sensitivity of tourist generation models to the trajectory length, finding that the model conditioned on a single observation (i.e. INLINEFORM0 ) achieves the best performance. In the next subsection, we further analyze localization models from human utterances by investigating MASC and no-MASC models with increasing dialogue context.
Tourist Generation Models
After training the supervised tourist model (conditioned on observations and actions from human expert trajectories), there are two ways to train an accompanying guide model. We can optimize a location prediction model on either (i) extracted human trajectories (as in the localization setup from human utterances) or (ii) all random paths of length INLINEFORM0 (as in the full task evaluation). Here, we investigate the impact of (1) using either human or random trajectories for training the guide model, and (2) the effect of varying the path length INLINEFORM1 during the full-task evaluation. For random trajectories, guide training uses the same path length INLINEFORM2 as is used during evaluation. We use a pre-trained tourist model with greedy decoding for generating the tourist utterances. Table TABREF40 summarizes the results.
Human vs random trajectories: We only observe small improvements for training on random trajectories. Human trajectories are thus diverse enough to generalize to random trajectories.
Effect of path length: There is a strong negative correlation between task success and the conditioned trajectory length. We observe that the full task performance quickly deteriorates for both human and random trajectories. This suggests that the tourist generation model cannot produce natural language utterances that describe multiple observations and actions. Although it is possible that the guide model cannot process such utterances, this is not very likely because the MASC architecture handles such messages successfully for emergent communication.
We report localization performance of tourist utterances generated by beam search decoding of varying beam size in Table TABREF40 . We find that performance decreases from 29.05% to 20.87% accuracy on the test set when we increase the beam-size from one to eight.
Localization from Human Utterances
We conduct an ablation study for MASC on natural language with varying dialogue context. Specifically, we compare localization accuracy of MASC and no-MASC models trained on the last [1, 3, 5] utterances of the dialogue (including guide utterances). We report these results in Table TABREF41 . In all cases, MASC outperforms the no-MASC models by several accuracy points. We also observe that mean predicted INLINEFORM0 (over the test set) increases from 1 to 2 when more dialogue context is included.
Visualizing MASC predictions
Figure FIGREF46 shows the MASC values for a learned model with emergent discrete communications and INLINEFORM0 actions. Specifically, we look at the predicted MASC values for different action sequences taken by the tourist. We observe that the first action is always mapped to the correct state-transition, but that the second and third MASC values do not always correspond to the right state-transitions.
Evaluation on Full Setup
We provide pseudo-code for evaluation of localization models on the full task in Algorithm SECREF12 , as well as results for all emergent communication models in Table TABREF55 .
Algorithm SECREF12 : Performance evaluation of the location prediction model on the full Talk The Walk setup.
Landmark Classification
While the guide has access to the landmark labels, the tourist needs to recognize these landmarks from raw perceptual information. In this section, we study landmark classification as a supervised learning problem to investigate the difficulty of perceptual grounding in Talk The Walk.
The Talk The Walk dataset contains a total of 307 different landmarks divided among nine classes; see Figure FIGREF62 for how they are distributed. The class distribution is fairly imbalanced, with shops and restaurants as the most frequent landmarks and relatively few play fields and theaters. We treat landmark recognition as a multi-label classification problem as there can be multiple landmarks on a corner.
For the task of landmark classification, we extract the relevant views of the 360 image from which a landmark is visible. Because landmarks are labeled to be on a specific corner of an intersection, we assume that they are visible from one of the orientations facing away from the intersection. For example, for a landmark on the northwest corner of an intersection, we extract views from both the north and west direction. The orientation-specific views are obtained by a planar projection of the full 360-image with a small field of view (60 degrees) to limit distortions. To cover the full field of view, we extract two images per orientation, with their horizontal focus point 30 degrees apart. Hence, we obtain eight images per 360 image with corresponding orientation INLINEFORM0 .
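As a sketch of the view-selection logic described above (the compass convention and the plus/minus 15 degree focus offsets are our reading of the text, not code from the dataset release):

```python
# Which headings to render for a landmark on a given intersection corner.
CORNER_ORIENTATIONS = {"NW": ("N", "W"), "NE": ("N", "E"),
                       "SW": ("S", "W"), "SE": ("S", "E")}
HEADING = {"N": 0, "E": 90, "S": 180, "W": 270}   # assumed compass convention

def view_headings(corner, offset=15):
    # two views per orientation with a 60-degree field of view,
    # horizontal focus points 30 degrees apart (i.e. +/- 15 degrees around the heading)
    return [(HEADING[o] + d) % 360
            for o in CORNER_ORIENTATIONS[corner] for d in (-offset, offset)]
```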
We run two kinds of pre-trained feature extractors over the extracted images: a text recognition model and a ResNet image model, both described below.
For the text recognition model, we use a learned look-up table INLINEFORM0 to embed the extracted text features INLINEFORM1 , and fuse all embeddings of four images through a bag of embeddings, i.e., INLINEFORM2 . We use a linear layer followed by a sigmoid to predict the probability for each class, i.e. INLINEFORM3 . We also experiment with replacing the look-up embeddings with pre-trained FastText embeddings BIBREF16 . For the ResNet model, we use a bag of embeddings over the four ResNet features, i.e. INLINEFORM4 , before we pass it through a linear layer to predict the class probabilities: INLINEFORM5 . We also conduct experiments where we first apply PCA to the extracted ResNet and FastText features before we feed them to the model.
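A minimal sketch of the bag-of-embeddings classifier described here, returning logits so that a weighted binary cross-entropy can be applied (see the next paragraph); the class name and feature dimensionality are assumptions:

```python
import torch
import torch.nn as nn

class BagOfViewsClassifier(nn.Module):
    """Sketch of the ResNet-feature variant: sum the per-view features, then a linear layer."""
    def __init__(self, feat_dim, n_classes=9):
        super().__init__()
        self.out = nn.Linear(feat_dim, n_classes)

    def forward(self, view_features):
        # view_features: (n_views, feat_dim) pre-extracted ResNet (or fused text) embeddings
        bag = view_features.sum(dim=0)   # bag of embeddings over the extracted views
        return self.out(bag)             # logits; apply a sigmoid for class probabilities
```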
To account for class imbalance, we train all described models with a binary cross entropy loss weighted by the inverted class frequency. We create an 80-20 class-conditional split of the dataset into a training and validation set. We train for 100 epochs and perform early stopping on the validation loss.
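One way to realize the inverse-class-frequency weighting is sketched below; the per-class counts are hypothetical placeholders, with the real counts coming from the distribution in Figure FIGREF62:

```python
import torch

# Hypothetical per-class landmark counts (nine classes); replace with the actual counts.
class_counts = torch.tensor([120., 90., 40., 30., 25., 15., 10., 5., 3.])
weights = class_counts.sum() / class_counts             # inverse class frequency
loss_fn = torch.nn.BCEWithLogitsLoss(pos_weight=weights)  # expects logits, as in the sketch above
```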
The F1 scores for the described methods are reported in Table TABREF65 . We compare to an “all positive” baseline that always predicts that the landmark class is visible and observe that all presented models struggle to outperform this baseline. Although 256-dimensional ResNet features achieve slightly better precision on the validation set, this comes at the cost of much worse recall and a lower F1 score. Our results indicate that perceptual grounding is a difficult task, which easily merits a paper in its own right, and so we leave further improvements (e.g. better text recognizers) for future work.
Dataset Details
Dataset split: We split the full dataset by assigning entire 4x4 grids (independent of the target location) to the train, valid or test set. Specifically, we design the split such that the valid set contains at least one intersection (out of four) that is not part of the train set. For the test set, all four intersections are novel. See our source code, available at https://github.com/facebookresearch/talkthewalk, for more details on how this split is realized.
Example:
Tourist: ACTION:TURNRIGHT ACTION:TURNRIGHT
Guide: Hello, what are you near?
Tourist: ACTION:TURNLEFT ACTION:TURNLEFT ACTION:TURNLEFT
Tourist: Hello, in front of me is a Brooks Brothers
Tourist: ACTION:TURNLEFT ACTION:FORWARD ACTION:TURNLEFT ACTION:TURNLEFT
Guide: Is that a shop or restaurant?
Tourist: ACTION:TURNLEFT
Tourist: It is a clothing shop.
Tourist: ACTION:TURNLEFT
Guide: You need to go to the intersection in the northwest corner of the map
Tourist: ACTION:TURNLEFT
Tourist: There appears to be a bank behind me.
Tourist: ACTION:TURNLEFT ACTION:TURNLEFT ACTION:TURNRIGHT ACTION:TURNRIGHT
Guide: Ok, turn left then go straight up that road
Tourist: ACTION:TURNLEFT ACTION:TURNLEFT ACTION:TURNLEFT ACTION:FORWARD ACTION:TURNRIGHT
ACTION:FORWARD ACTION:FORWARD ACTION:TURNLEFT ACTION:TURNLEFT ACTION:TURNLEFT
Guide: There should be shops on two of the corners but you
need to go to the corner without a shop.
Tourist: ACTION:FORWARD ACTION:FORWARD ACTION:FORWARD ACTION:TURNLEFT ACTION:TURNLEFT
Guide: let me know when you get there.
Tourist: on my left is Radio city Music hall
Tourist: ACTION:TURNLEFT ACTION:FORWARD ACTION:TURNLEFT ACTION:TURNRIGHT ACTION:TURNRIGHT
Tourist: I can't go straight any further.
Guide: ok. turn so that the theater is on your right.
Guide: then go straight
Tourist: That would be going back the way I came
Guide: yeah. I was looking at the wrong bank
Tourist: I'll notify when I am back at the brooks brothers, and the bank.
Tourist: ACTION:TURNRIGHT
Guide: make a right when the bank is on your left
Tourist: ACTION:FORWARD ACTION:FORWARD ACTION:TURNRIGHT
Tourist: Making the right at the bank.
Tourist: ACTION:FORWARD ACTION:FORWARD
Tourist: I can't go that way.
Tourist: ACTION:TURNLEFT
Tourist: Bank is ahead of me on the right
Tourist: ACTION:FORWARD ACTION:FORWARD ACTION:TURNLEFT
Guide: turn around on that intersection
Tourist: I can only go to the left or back the way I just came.
Tourist: ACTION:TURNLEFT
Guide: you're in the right place. do you see shops on the corners?
Guide: If you're on the corner with the bank, cross the street
Tourist: I'm back where I started by the shop and the bank.
Tourist: ACTION:TURNRIGHT
Guide: on the same side of the street?
Tourist: crossing the street now
Tourist: ACTION:FORWARD ACTION:FORWARD ACTION:TURNLEFT
Tourist: there is an I love new york shop across the street on the left from me now
Tourist: ACTION:TURNRIGHT ACTION:FORWARD
Guide: ok. I'll see if it's right.
Guide: EVALUATE_LOCATION
Guide: It's not right.
Tourist: What should I be on the look for?
Tourist: ACTION:TURNRIGHT ACTION:TURNRIGHT ACTION:TURNRIGHT
Guide: There should be shops on two corners but you need to be on one of the corners
without the shop.
Guide: Try the other corner.
Tourist: this intersection has 2 shop corners and a bank corner
Guide: yes. that's what I see on the map.
Tourist: should I go to the bank corner? or one of the shop corners?
or the blank corner (perhaps a hotel)
Tourist: ACTION:TURNLEFT ACTION:TURNLEFT ACTION:TURNRIGHT ACTION:TURNRIGHT
Guide: Go to the one near the hotel. The map says the hotel is a little
further down but it might be a little off.
Tourist: It's a big hotel it's possible.
Tourist: ACTION:FORWARD ACTION:TURNLEFT ACTION:FORWARD ACTION:TURNRIGHT
Tourist: I'm on the hotel corner
Guide: EVALUATE_LOCATION | dataset on Mechanical Turk involving human perception, action and communication |
e854edcc5e9111922e6e120ae17d062427c27ec1 | e854edcc5e9111922e6e120ae17d062427c27ec1_0 | Q: Do the authors report results only on English data?
Text: Introduction
In recent years, the spread of misinformation has become a growing concern for researchers and the public at large BIBREF1 . Researchers at MIT found that social media users are more likely to share false information than true information BIBREF2 . Due to renewed focus on finding ways to foster healthy political conversation, the profile of factcheckers has been raised.
Factcheckers positively influence public debate by publishing good quality information and asking politicians and journalists to retract misleading or false statements. By calling out lies and the blurring of the truth, they make those in positions of power accountable. This is a result of labour intensive work that involves monitoring the news for spurious claims and carrying out rigorous research to judge credibility. So far, it has only been possible to scale their output upwards by hiring more personnel. This is problematic because newsrooms need significant resources to employ factcheckers. Publication budgets have been decreasing, resulting in a steady decline in the size of their workforce BIBREF0 . Factchecking is not a directly profitable activity, which negatively affects the allocation of resources towards it in for-profit organisations. It is often taken on by charities and philanthropists instead.
To compensate for this shortfall, our strategy is to harness the latest developments in NLP to make factchecking more efficient and therefore less costly. To this end, the new field of automated factchecking has captured the imagination of both non-profits and start-ups BIBREF3 , BIBREF4 , BIBREF5 . It aims to speed up certain aspects of the factchecking process rather than create AI that can replace factchecking personnel. This includes monitoring claims that are made in the news, aiding decisions about which statements are the most important to check and automatically retrieving existing factchecks that are relevant to a new claim.
The claim detection and claim clustering methods that we set out in this paper can be applied to each of these. We sought to devise a system that would automatically detect claims in articles and compare them to previously submitted claims, storing the results so that a factchecker's work on one of these claims can be easily transferred to others in the same cluster.
Related Work
It is important to decide what sentences are claims before attempting to cluster them. The first such claim detection system to have been created is ClaimBuster BIBREF6 , which scores sentences with an SVM to determine how likely they are to be politically pertinent statements. Similarly, ClaimRank BIBREF7 uses real claims checked by factchecking institutions as training data in order to surface sentences that are worthy of factchecking.
These methods deal with the question of what is a politically interesting claim. In order to classify the objective qualities of what set apart different types of claims, the ClaimBuster team created PolitiTax BIBREF8 , a taxonomy of claims, and factchecking organisation Full Fact BIBREF9 developed their preferred annotation schema for statements in consultation with their own factcheckers. This research provides a more solid framework within which to construct claim detection classifiers.
The above considers whether or not a sentence is a claim, but often claims are subsections of sentences and multiple claims might be found in one sentence. In order to accommodate this, BIBREF10 proposes extracting phrases called Context Dependent Claims (CDC) that are relevant to a certain `Topic'. Along these lines, BIBREF11 proposes new definitions for frames to be incorporated into FrameNet BIBREF12 that are specific to facts, in particular those found in a political context.
Traditional text clustering methods, using TFIDF and some clustering algorithm, are poorly suited to the problem of clustering and comparing short texts, as they can be semantically very similar but use different words. This is a manifestation of the data sparsity problem with Bag-of-Words (BoW) models BIBREF16 . Dimensionality reduction methods such as Latent Dirichlet Allocation (LDA) can help solve this problem by giving a dense approximation of this sparse representation BIBREF17 . More recently, efforts in this area have used text embedding-based systems in order to capture dense representations of the texts BIBREF18 . Much of this recent work has relied on the increased focus on word and text embeddings. Text embeddings have been an increasingly popular tool in NLP since the introduction of Word2Vec BIBREF19 , and since then the number of different embeddings has exploded. While many focus on giving a vector representation of a word, an increasing number now exist that will give a vector representation of an entire sentence or text. Following on from this work, we seek to devise a system that can run online, performing text clustering on the embeddings of texts one at a time.
Some considerations to bear in mind when deciding on an embedding scheme to use are: the size of the final vector, the complexity of the model itself and, if using a pretrained implementation, the data the model has been trained on and whether it is trained in a supervised or unsupervised manner.
The size of the embedding can have numerous downstream effects. In our example we will be doing distance calculations on the resultant vectors and therefore any increase in length will increase the complexity of those distance calculations. We would therefore like as short a vector as possible, but we still wish to capture all salient information about the claim; longer vectors have more capacity to store information, both salient and non-salient.
A similar effect is seen for the complexity of the model. A more complicated model, with more trainable parameters, may be able to capture finer details about the text, but it will require a larger corpus to achieve this, and will require more computational time to calculate the embeddings. We should therefore attempt to find the simplest embedding system that can accurately solve our problem.
When attempting to use pretrained models to help in other areas, it is always important to ensure that the models you are using are trained on similar material, to increase the chance that their findings will generalise to the new problem. Many unsupervised text embeddings are trained on the CommonCrawl dataset of approx. 840 billion tokens. This gives a huge amount of data across many domains, but requires a similarly huge amount of computing power to train on the entire dataset. Supervised datasets are unlikely ever to approach such scale as they require human annotations which can be expensive to assemble. The SNLI entailment dataset is an example of a large open source dataset BIBREF20 . It features pairs of sentences along with labels specifying whether or not one entails the other. Google's Universal Sentence Encoder (USE) BIBREF14 is a sentence embedding created with a hybrid supervised/unsupervised method, leveraging both the vast amounts of unsupervised training data and the extra detail that can be derived from a supervised method. The SNLI dataset and the related MultiNLI dataset are often used for this because textual entailment is seen as a good basis for general Natural Language Understanding (NLU) BIBREF21 .
Method
It is much easier to build a dataset and reliably evaluate a model if the starting definitions are clear and objective. Questions around what is an interesting or pertinent claim are inherently subjective. For example, it is obvious that a politician will judge their opponents' claims to be more important to factcheck than their own.
Therefore, we built on the methodologies that dealt with the objective qualities of claims, which were the PolitiTax and Full Fact taxonomies. We annotated sentences from our own database of news articles based on a combination of these. We also used the Full Fact definition of a claim as a statement about the world that can be checked. Some examples of claims according to this definition are shown in Table TABREF3 . We decided the first statement was a claim since it declares the occurrence of an event, while the second was considered not to be a claim as it is an expression of feeling.
Full Fact's approach centred around using sentence embeddings as a feature engineering step, followed by a simple classifier such as logistic regression, which is what we used. They used Facebook's sentence embeddings, InferSent BIBREF13 , which was a recent breakthrough at the time. Such is the speed of new development in the field that since then, several papers describing textual embeddings have been published. Due to the fact that we had already evaluated embeddings for clustering, and therefore knew our system would rely on Google USE Large BIBREF14 , we decided to use this instead. We compared this to TFIDF and Full Fact's results as baselines. The results are displayed in Table TABREF4 .
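A sketch of this pipeline is shown below; the TensorFlow Hub handle is the publicly documented one for Google USE Large, and the classifier settings are illustrative rather than the exact ones used:

```python
import tensorflow_hub as hub
from sklearn.linear_model import LogisticRegression

# Google Universal Sentence Encoder Large (publicly available TF Hub module)
embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder-large/5")

def train_claim_classifier(sentences, labels):
    """Embed annotated sentences and fit a logistic regression claim/non-claim classifier."""
    features = embed(sentences).numpy()            # 512-dimensional sentence embeddings
    clf = LogisticRegression(max_iter=1000).fit(features, labels)
    return clf
```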
However, ClaimBuster and Full Fact focused on live factchecking of TV debates. Logically is a news aggregator and we analyse the bodies of published news stories. We found that in our corpus, the majority of sentences are claims and therefore our model needed to be as selective as possible. In practice, we choose to filter out sentences that are predictions since generally the substance of the claim cannot be fully checked until after the event has occurred. Likewise, we try to remove claims based on personal experience or anecdotal evidence as they are difficult to verify.
Choosing an embedding
In order to choose an embedding, we sought a dataset to represent our problem. Although no perfect matches exist, we decided upon the Quora duplicate question dataset BIBREF22 as the best match. To compare the embeddings, we computed the euclidean distance between the two questions in each pair using various embeddings, in order to study the distance between semantically similar and dissimilar questions.
The graphs in figure 1 show the distances between duplicate and non-duplicate questions using different embedding systems. The X axis shows the euclidean distance between vectors and the Y axis frequency. A perfect result would be a blue peak to the left and an entirely disconnected orange spike to the right, showing that all non-duplicate questions have a greater euclidean distance than the least similar duplicate pair of questions. As can be clearly seen in the figure above, Elmo BIBREF23 and Infersent BIBREF13 show almost no separation and therefore cannot be considered good models for this problem. A much greater disparity is shown by the Google USE models BIBREF14 , and even more so for the Google USE Large model. In fact the Google USE Large achieved an F1 score of 0.71 for this task without any specific training, simply by choosing a threshold below which all sentence pairs are considered duplicates.
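The threshold-based duplicate test can be sketched as follows (plain NumPy; the question embeddings are assumed to be precomputed):

```python
import numpy as np

def f1_at_threshold(emb_a, emb_b, labels, threshold):
    # emb_a, emb_b: (n_pairs, dim) question embeddings; labels: array of 1 (duplicate) / 0
    dists = np.linalg.norm(emb_a - emb_b, axis=1)
    preds = (dists < threshold).astype(int)        # pairs below the threshold => duplicate
    tp = np.sum((preds == 1) & (labels == 1))
    precision = tp / max(preds.sum(), 1)
    recall = tp / max(labels.sum(), 1)
    return 2 * precision * recall / max(precision + recall, 1e-9)
```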
In order to test whether these results generalised to our domain, we devised a test that would make use of what little data we had to evaluate. We had no original data on whether sentences were semantically similar, but we did have a corpus of articles clustered into stories. Working on the assumption that similar claims would be more likely to be in the same story, we developed an equation to judge how well our corpus of sentences was clustered, rewarding clusterings that match the article clustering and that place a large total number of claims into clusters. The precise formula is given below, where INLINEFORM0 is the proportion of claims in clusters from one story cluster, INLINEFORM1 is the proportion of claims in the correct claim cluster, where they are from the most common story cluster, and INLINEFORM2 is the number of claims placed in clusters. A, B and C are parameters to tune. INLINEFORM3
Figure: Formula to assess the correctness of claim clusters based on article clusters.
This method is limited in how well it can represent the problem, but it can give indications as to a good or bad clustering method or embedding, and can act as a check that the findings we obtained from the Quora dataset will generalise to our domain. We ran code which vectorized 2,000 sentences and then used the DBScan clustering method BIBREF24 to cluster using a grid search to find the best INLINEFORM0 value, maximizing this formula. We used DBScan as it mirrored the clustering method used to derive the original article clusters. The results for this experiment can be found in Table TABREF10 . We included TFIDF in the experiment as a baseline to judge other results. It is not suitable for our eventual purposes, but it is the basis of the original keyword-based model used to build the clusters. That being said, TFIDF performs very well, with only Google USE Large and Infersent coming close in terms of `accuracy'. In the case of Infersent, this comes with the penalty of a much smaller number of claims included in the clusters. Google USE Large, however, clusters a greater number and for this reason we chose to use Google's USE Large.
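A hedged sketch of the grid search is given below (scikit-learn DBSCAN; the epsilon range and min_samples value are assumptions, and score_fn stands in for the formula above with chosen A, B and C):

```python
import numpy as np
from sklearn.cluster import DBSCAN

def best_epsilon(embeddings, score_fn, eps_grid=np.linspace(0.1, 1.5, 15)):
    """Grid search for the DBSCAN epsilon that maximizes the cluster-quality formula."""
    best_eps, best_score = None, -np.inf
    for eps in eps_grid:
        labels = DBSCAN(eps=eps, min_samples=1).fit_predict(embeddings)
        score = score_fn(labels)
        if score > best_score:
            best_eps, best_score = eps, score
    return best_eps, best_score
```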
Since Google USE Large was the best-performing embedding in both the tests we devised, this was our chosen embedding to use for clustering. However as can be seen from the results shown above, this is not a perfect solution and the inaccuracy here will introduce inaccuracy further down the clustering pipeline.
Clustering Method
We decided to follow a methodology based upon the DBScan method of clustering BIBREF24 . DBScan considers all distances between pairs of points. If they are under INLINEFORM0 then those two are linked. Once the number of connected points exceeds a minimum size threshold, they are considered a cluster and all other points are considered to be unclustered. This method is advantageous for our purposes because unlike other methods, such as K-Means, it does not require the number of clusters to be specified. To create a system that can build clusters dynamically, adding one point at a time, we set the minimum cluster size to one, meaning that every point is a member of a cluster.
A potential disadvantage of this method is that because points require only one connection to a cluster to join it, they may only be related to one point in the cluster, but be considered in the same cluster as all of them. In small examples this is not a problem as all points in the cluster should be very similar. However as the number of points being considered grows, this behaviour raises the prospect of one or several borderline clustering decisions leading to massive clusters made from tenuous connections between genuine clusters. To mitigate this problem we used a method described in the Newslens paper BIBREF25 to solve a similar problem when clustering entire articles. We stored all of our claims in a graph with the connections between them added when the distance between them was determined to be less than INLINEFORM0 . To determine the final clusters we run a Louvain Community Detection BIBREF26 over this graph to split it into defined communities. This improved the compactness of a cluster. When clustering claims one by one, this algorithm can be performed on the connected subgraph featuring the new claim, to reduce the computation required.
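A sketch of the incremental graph update and community detection step is shown below (using NetworkX's Louvain implementation, available in recent versions; the original system may well have used a different Louvain library):

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import louvain_communities  # networkx >= 2.8

def add_claim(graph, store, claim_id, embedding, eps):
    """Add one claim to the similarity graph and re-cluster its connected component."""
    graph.add_node(claim_id)
    for other_id, other_emb in store.items():
        if np.linalg.norm(embedding - other_emb) < eps:
            graph.add_edge(claim_id, other_id)
    store[claim_id] = embedding
    component = graph.subgraph(nx.node_connected_component(graph, claim_id))
    return louvain_communities(component)

# usage sketch: graph, store = nx.Graph(), {}
#               clusters = add_claim(graph, store, "claim-1", np.random.rand(512), eps=0.8)
```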
As this method involves distance calculations between the claim being added and every existing claim, the time taken to add one claim will increase roughly linearly with respect to the number of previous claims. Through much optimization we have brought the computational time down to approximately 300ms per claim, which stays fairly static with respect to the number of previous claims.
Next Steps
The clustering described above is heavily dependent on the embedding used. The rate of advances in this field has been rapid in recent years, but an embedding will always be an imperfect representation of a claim and therefore always an area for improvement. A domain-specific embedding will likely offer a more accurate representation but creates problems with clustering claims from different domains. They also require a huge amount of data to give a good model and that is not possible in all domains. | Unanswerable
e854edcc5e9111922e6e120ae17d062427c27ec1 | e854edcc5e9111922e6e120ae17d062427c27ec1_1 | Q: Do the authors report results only on English data?
Text: Introduction
In recent years, the spread of misinformation has become a growing concern for researchers and the public at large BIBREF1 . Researchers at MIT found that social media users are more likely to share false information than true information BIBREF2 . Due to renewed focus on finding ways to foster healthy political conversation, the profile of factcheckers has been raised.
Factcheckers positively influence public debate by publishing good quality information and asking politicians and journalists to retract misleading or false statements. By calling out lies and the blurring of the truth, they make those in positions of power accountable. This is a result of labour intensive work that involves monitoring the news for spurious claims and carrying out rigorous research to judge credibility. So far, it has only been possible to scale their output upwards by hiring more personnel. This is problematic because newsrooms need significant resources to employ factcheckers. Publication budgets have been decreasing, resulting in a steady decline in the size of their workforce BIBREF0 . Factchecking is not a directly profitable activity, which negatively affects the allocation of resources towards it in for-profit organisations. It is often taken on by charities and philanthropists instead.
To compensate for this shortfall, our strategy is to harness the latest developments in NLP to make factchecking more efficient and therefore less costly. To this end, the new field of automated factchecking has captured the imagination of both non-profits and start-ups BIBREF3 , BIBREF4 , BIBREF5 . It aims to speed up certain aspects of the factchecking process rather than create AI that can replace factchecking personnel. This includes monitoring claims that are made in the news, aiding decisions about which statements are the most important to check and automatically retrieving existing factchecks that are relevant to a new claim.
The claim detection and claim clustering methods that we set out in this paper can be applied to each of these. We sought to devise a system that would automatically detect claims in articles and compare them to previously submitted claims. Storing the results to allow a factchecker's work on one of these claims to be easily transferred to others in the same cluster.
Related Work
It is important to decide what sentences are claims before attempting to cluster them. The first such claim detection system to have been created is ClaimBuster BIBREF6 , which scores sentences with an SVM to determine how likely they are to be politically pertinent statements. Similarly, ClaimRank BIBREF7 uses real claims checked by factchecking institutions as training data in order to surface sentences that are worthy of factchecking.
These methods deal with the question of what is a politically interesting claim. In order to classify the objective qualities of what set apart different types of claims, the ClaimBuster team created PolitiTax BIBREF8 , a taxonomy of claims, and factchecking organisation Full Fact BIBREF9 developed their preferred annotation schema for statements in consultation with their own factcheckers. This research provides a more solid framework within which to construct claim detection classifiers.
The above considers whether or not a sentence is a claim, but often claims are subsections of sentences and multiple claims might be found in one sentence. In order to accommodate this, BIBREF10 proposes extracting phrases called Context Dependent Claims (CDC) that are relevant to a certain `Topic'. Along these lines, BIBREF11 proposes new definitions for frames to be incorporated into FrameNet BIBREF12 that are specific to facts, in particular those found in a political context.
Traditional text clustering methods, using TFIDF and some clustering algorithm, are poorly suited to the problem of clustering and comparing short texts, as they can be semantically very similar but use different words. This is a manifestation of the the data sparsity problem with Bag-of-Words (BoW) models. BIBREF16 . Dimensionality reduction methods such as Latent Dirichlet Allocation (LDA) can help solve this problem by giving a dense approximation of this sparse representation BIBREF17 . More recently, efforts in this area have used text embedding-based systems in order to capture dense representation of the texts BIBREF18 . Much of this recent work has relied on the increase of focus in word and text embeddings. Text embeddings have been an increasingly popular tool in NLP since the introduction of Word2Vec BIBREF19 , and since then the number of different embeddings has exploded. While many focus on giving a vector representation of a word, an increasing number now exist that will give a vector representation of a entire sentence or text. Following on from this work, we seek to devise a system that can run online, performing text clustering on the embeddings of texts one at a time
Some considerations to bear in mind when deciding on an embedding scheme to use are: the size of the final vector, the complexity of the model itself and, if using a pretrained implementation, the data the model has been trained on and whether it is trained in a supervised or unsupervised manner.
The size of the embedding can have numerous results downstream. In our example we will be doing distance calculations on the resultant vectors and therefore any increase in length will increase the complexity of those distance calculations. We would therefore like as short a vector as possible, but we still wish to capture all salient information about the claim; longer vectors have more capacity to store information, both salient and non-salient.
A similar effect is seen for the complexity of the model. A more complicated model, with more trainable parameters, may be able to capture finer details about the text, but it will require a larger corpus to achieve this, and will require more computational time to calculate the embeddings. We should therefore attempt to find the simplest embedding system that can accurately solve our problem.
When attempting to use pretrained models to help in other areas, it is always important to ensure that the models you are using are trained on similar material, to increase the chance that their findings will generalise to the new problem. Many unsupervised text embeddings are trained on the CommonCrawl dataset of approx. 840 billion tokens. This gives a huge amount of data across many domains, but requires a similarly huge amount of computing power to train on the entire dataset. Supervised datasets are unlikely ever to approach such scale as they require human annotations which can be expensive to assemble. The SNLI entailment dataset is an example of a large open source dataset BIBREF20 . It features pairs of sentences along with labels specifying whether or not one entails the other. Google's Universal Sentence Encoder (USE) BIBREF14 is a sentence embedding created with a hybrid supervised/unsupervised method, leveraging both the vast amounts of unsupervised training data and the extra detail that can be derived from a supervised method. The SNLI dataset and the related MultiNLI dataset are often used for this because textual entailment is seen as a good basis for general Natural Language Understanding (NLU) BIBREF21 .
Method
It is much easier to build a dataset and reliably evaluate a model if the starting definitions are clear and objective. Questions around what is an interesting or pertinent claim are inherently subjective. For example, it is obvious that a politician will judge their opponents' claims to be more important to factcheck than their own.
Therefore, we built on the methodologies that dealt with the objective qualities of claims, which were the PolitiTax and Full Fact taxonomies. We annotated sentences from our own database of news articles based on a combination of these. We also used the Full Fact definition of a claim as a statement about the world that can be checked. Some examples of claims according to this definition are shown in Table TABREF3 . We decided the first statement was a claim since it declares the occurrence of an event, while the second was considered not to be a claim as it is an expression of feeling.
Full Fact's approach centred around using sentence embeddings as a feature engineering step, followed by a simple classifier such as logistic regression, which is what we used. They used Facebook's sentence embeddings, InferSent BIBREF13 , which was a recent breakthrough at the time. Such is the speed of new development in the field that since then, several papers describing textual embeddings have been published. Due to the fact that we had already evaluated embeddings for clustering, and therefore knew our system would rely on Google USE Large BIBREF14 , we decided to use this instead. We compared this to TFIDF and Full Fact's results as baselines. The results are displayed in Table TABREF4 .
However, ClaimBuster and Full Fact focused on live factchecking of TV debates. Logically is a news aggregator and we analyse the bodies of published news stories. We found that in our corpus, the majority of sentences are claims and therefore our model needed to be as selective as possible. In practice, we choose to filter out sentences that are predictions since generally the substance of the claim cannot be fully checked until after the event has occurred. Likewise, we try to remove claims based on personal experience or anecdotal evidence as they are difficult to verify.
Choosing an embedding
In order to choose an embedding, we sought a dataset to represent our problem. Although no perfect matches exist, we decided upon the Quora duplicate question dataset BIBREF22 as the best match. To study the embeddings, we computed the euclidean distance between the two questions using various embeddings, to study the distance between semantically similar and dissimilar questions.
The graphs in figure 1 show the distances between duplicate and non-duplicate questions using different embedding systems. The X axis shows the euclidean distance between vectors and the Y axis frequency. A perfect result would be a blue peak to the left and an entirely disconnected orange spike to the right, showing that all non-duplicate questions have a greater euclidean distance than the least similar duplicate pair of questions. As can be clearly seen in the figure above, Elmo BIBREF23 and Infersent BIBREF13 show almost no separation and therefore cannot be considered good models for this problem. A much greater disparity is shown by the Google USE models BIBREF14 , and even more for the Google USE Large model. In fact the Google USE Large achieved a F1 score of 0.71 for this task without any specific training, simply by choosing a threshold below which all sentence pairs are considered duplicates.
In order to test whether these results generalised to our domain, we devised a test that would make use of what little data we had to evaluate. We had no original data on whether sentences were semantically similar, but we did have a corpus of articles clustered into stories. Working on the assumption that similar claims would be more likely to be in the same story, we developed an equation to judge how well our corpus of sentences was clustered, rewarding clustering which matches the article clustering and the total number of claims clustered. The precise formula is given below, where INLINEFORM0 is the proportion of claims in clusters from one story cluster, INLINEFORM1 is the proportion of claims in the correct claim cluster, where they are from the most common story cluster, and INLINEFORM2 is the number of claims placed in clusters. A,B and C are parameters to tune. INLINEFORM3
figureFormula to assess the correctness of claim clusters based on article clusters
This method is limited in how well it can represent the problem, but it can give indications as to a good or bad clustering method or embedding, and can act as a check that the findings we obtained from the Quora dataset will generalise to our domain. We ran code which vectorized 2,000 sentences and then used the DBScan clustering method BIBREF24 to cluster, using a grid search to find the best INLINEFORM0 value, maximizing this formula. We used DBScan as it mirrored the clustering method used to derive the original article clusters. The results for this experiment can be found in Table TABREF10. We included TFIDF in the experiment as a baseline to judge other results. It is not suitable for our eventual purposes, but it is the basis of the original keyword-based model used to build the clusters. That being said, TFIDF performs very well, with only Google USE Large and InferSent coming close in terms of `accuracy'. In the case of InferSent, this comes with the penalty of a much smaller number of claims included in the clusters. Google USE Large, however, clusters a greater number and for this reason we chose to use Google's USE Large.
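The following sketch shows the shape of this grid-search experiment. Since the exact weighted formula is not reproduced here, cluster_quality_score is an illustrative stand-in that rewards agreement with the story clustering and the number of claims clustered; the file names, the eps grid and the min_samples setting are also assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_quality_score(labels, story_labels, A=1.0, C=0.01):
    """Illustrative stand-in for the formula described above: it rewards claim
    clusters whose members come from the same story cluster and rewards the
    number of claims placed in clusters. Substitute the exact weighted formula
    here. Story labels are assumed to be non-negative integers."""
    clustered = labels != -1
    n_clustered = int(clustered.sum())
    if n_clustered == 0:
        return 0.0
    agreement = 0.0
    for c in set(labels[clustered]):
        members = story_labels[labels == c]
        # count of this cluster's claims that share the most common story
        agreement += np.bincount(members).max()
    agreement /= n_clustered
    return A * agreement + C * n_clustered

embeddings = np.load("claim_embeddings.npy")   # the 2,000 vectorized sentences
story_labels = np.load("story_labels.npy")     # story cluster of each claim's article

best_eps, best_score = None, -np.inf
for eps in np.linspace(0.1, 1.5, 30):          # illustrative grid of eps values
    labels = DBSCAN(eps=eps, min_samples=2).fit_predict(embeddings)
    score = cluster_quality_score(labels, story_labels)
    if score > best_score:
        best_eps, best_score = eps, score

print(f"best eps: {best_eps:.3f} (score {best_score:.3f})")
```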
Since Google USE Large was the best-performing embedding in both the tests we devised, this was our chosen embedding to use for clustering. However as can be seen from the results shown above, this is not a perfect solution and the inaccuracy here will introduce inaccuracy further down the clustering pipeline.
Clustering Method
We decided to follow a methodology based upon the DBScan method of clustering BIBREF24. DBScan considers all distances between pairs of points. If they are under INLINEFORM0 then those two are linked. Once the number of connected points exceeds a minimum size threshold, they are considered a cluster and all other points are considered to be unclustered. This method is advantageous for our purposes because, unlike other methods such as K-Means, it does not require the number of clusters to be specified. To create a system that can build clusters dynamically, adding one point at a time, we set the minimum cluster size to one, meaning that every point is a member of a cluster.
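A minimal sketch of this configuration using scikit-learn's DBSCAN is given below; the eps value and file name are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN

points = np.load("claim_embeddings.npy")

# With min_samples=1 no point is ever labelled as noise (-1): every claim ends
# up in some cluster, even if it is a singleton, which is the behaviour we want
# when building clusters dynamically. The eps value here is illustrative.
labels = DBSCAN(eps=0.7, min_samples=1, metric="euclidean").fit_predict(points)
assert (labels != -1).all()
print(f"{labels.max() + 1} clusters for {len(points)} claims")
```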
A potential disadvantage of this method is that because points require only one connection to a cluster to join it, they may only be related to one point in the cluster, but be considered in the same cluster as all of them. In small examples this is not a problem as all points in the cluster should be very similar. However as the number of points being considered grows, this behaviour raises the prospect of one or several borderline clustering decisions leading to massive clusters made from tenuous connections between genuine clusters. To mitigate this problem we used a method described in the Newslens paper BIBREF25 to solve a similar problem when clustering entire articles. We stored all of our claims in a graph with the connections between them added when the distance between them was determined to be less than INLINEFORM0 . To determine the final clusters we run a Louvain Community Detection BIBREF26 over this graph to split it into defined communities. This improved the compactness of a cluster. When clustering claims one by one, this algorithm can be performed on the connected subgraph featuring the new claim, to reduce the computation required.
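A sketch of this graph-based refinement is shown below. It assumes a recent version of networkx (2.8 or later) that provides louvain_communities; the python-louvain package's best_partition function is an equivalent alternative, and the threshold value is illustrative.

```python
import itertools
import numpy as np
import networkx as nx
from networkx.algorithms.community import louvain_communities

EPS = 0.7  # illustrative distance threshold

embeddings = np.load("claim_embeddings.npy")

# One node per claim, with an edge whenever two claims are closer than the threshold.
G = nx.Graph()
G.add_nodes_from(range(len(embeddings)))
for i, j in itertools.combinations(range(len(embeddings)), 2):
    if np.linalg.norm(embeddings[i] - embeddings[j]) < EPS:
        G.add_edge(i, j)

# Split the graph into communities; each community becomes a claim cluster,
# which avoids chains of tenuous links merging otherwise distinct clusters.
clusters = louvain_communities(G, seed=0)
print(f"{len(clusters)} clusters")
```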
As this method involves distance calculations between the claim being added and every existing claim, the time taken to add one claim will increase roughly linearly with respect to the number of previous claims. Through much optimization we have brought the computational time down to approximately 300ms per claim, which stays fairly static with respect to the number of previous claims.
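The sketch below illustrates how a single new claim can be added online: distances to all existing claims are computed, edges below the threshold are added, and community detection is re-run only on the connected component containing the new claim. Function and variable names are illustrative, not the production code.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import louvain_communities

def add_claim(G, embeddings, new_vec, eps=0.7):
    """Add one claim to the graph and re-cluster only its connected component."""
    new_id = len(embeddings)
    # Distance from the new claim to every existing claim (the step that grows
    # roughly linearly with the number of previous claims).
    dists = np.linalg.norm(np.asarray(embeddings) - new_vec, axis=1) if embeddings else np.array([])
    G.add_node(new_id)
    for i, d in enumerate(dists):
        if d < eps:
            G.add_edge(new_id, i)
    embeddings.append(new_vec)
    # Community detection only on the connected component containing the new claim.
    component = nx.node_connected_component(G, new_id)
    return louvain_communities(G.subgraph(component), seed=0)

# Usage: start from an empty graph and list, then call add_claim per incoming claim.
G, embeddings = nx.Graph(), []
```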
Next Steps
The clustering described above is heavily dependent on the embedding used. The rate of advances in this field has been rapid in recent years, but an embedding will always be an imperfect representation of a claim and therefore always an area for improvement. A domain-specific embedding will likely offer a more accurate representation but creates problems with clustering claims from different domains. Such embeddings also require a huge amount of data to give a good model and that is not possible in all domains. | Unanswerable |
bd6cec2ab620e67b3e0e7946fc045230e6906020 | bd6cec2ab620e67b3e0e7946fc045230e6906020_0 | Q: How is the accuracy of the system measured?
Text: Introduction
In recent years, the spread of misinformation has become a growing concern for researchers and the public at large BIBREF1 . Researchers at MIT found that social media users are more likely to share false information than true information BIBREF2 . Due to renewed focus on finding ways to foster healthy political conversation, the profile of factcheckers has been raised.
Factcheckers positively influence public debate by publishing good quality information and asking politicians and journalists to retract misleading or false statements. By calling out lies and the blurring of the truth, they make those in positions of power accountable. This is a result of labour intensive work that involves monitoring the news for spurious claims and carrying out rigorous research to judge credibility. So far, it has only been possible to scale their output upwards by hiring more personnel. This is problematic because newsrooms need significant resources to employ factcheckers. Publication budgets have been decreasing, resulting in a steady decline in the size of their workforce BIBREF0 . Factchecking is not a directly profitable activity, which negatively affects the allocation of resources towards it in for-profit organisations. It is often taken on by charities and philanthropists instead.
To compensate for this shortfall, our strategy is to harness the latest developments in NLP to make factchecking more efficient and therefore less costly. To this end, the new field of automated factchecking has captured the imagination of both non-profits and start-ups BIBREF3 , BIBREF4 , BIBREF5 . It aims to speed up certain aspects of the factchecking process rather than create AI that can replace factchecking personnel. This includes monitoring claims that are made in the news, aiding decisions about which statements are the most important to check and automatically retrieving existing factchecks that are relevant to a new claim.
The claim detection and claim clustering methods that we set out in this paper can be applied to each of these. We sought to devise a system that would automatically detect claims in articles and compare them to previously submitted claims, storing the results so that a factchecker's work on one of these claims can easily be transferred to others in the same cluster.
Related Work
It is important to decide what sentences are claims before attempting to cluster them. The first such claim detection system to have been created is ClaimBuster BIBREF6 , which scores sentences with an SVM to determine how likely they are to be politically pertinent statements. Similarly, ClaimRank BIBREF7 uses real claims checked by factchecking institutions as training data in order to surface sentences that are worthy of factchecking.
These methods deal with the question of what is a politically interesting claim. In order to classify the objective qualities that set apart different types of claims, the ClaimBuster team created PolitiTax BIBREF8, a taxonomy of claims, and factchecking organisation Full Fact BIBREF9 developed their preferred annotation schema for statements in consultation with their own factcheckers. This research provides a more solid framework within which to construct claim detection classifiers.
The above considers whether or not a sentence is a claim, but often claims are subsections of sentences and multiple claims might be found in one sentence. In order to accommodate this, BIBREF10 proposes extracting phrases called Context Dependent Claims (CDC) that are relevant to a certain `Topic'. Along these lines, BIBREF11 proposes new definitions for frames to be incorporated into FrameNet BIBREF12 that are specific to facts, in particular those found in a political context.
Traditional text clustering methods, using TFIDF and some clustering algorithm, are poorly suited to the problem of clustering and comparing short texts, as short texts can be semantically very similar while using different words. This is a manifestation of the data sparsity problem with Bag-of-Words (BoW) models BIBREF16. Dimensionality reduction methods such as Latent Dirichlet Allocation (LDA) can help solve this problem by giving a dense approximation of this sparse representation BIBREF17. More recently, efforts in this area have used text embedding-based systems in order to capture a dense representation of the texts BIBREF18. Much of this recent work has relied on the increased focus on word and text embeddings. Text embeddings have been an increasingly popular tool in NLP since the introduction of Word2Vec BIBREF19, and since then the number of different embeddings has exploded. While many focus on giving a vector representation of a word, an increasing number now exist that will give a vector representation of an entire sentence or text. Following on from this work, we seek to devise a system that can run online, performing text clustering on the embeddings of texts one at a time.
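A small example of the sparsity problem is given below: two sentences with roughly the same meaning but almost no shared vocabulary receive a near-zero TFIDF cosine similarity, whereas a sentence embedding model would place them close together. The sentences are invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

a = "The chancellor raised income tax last year."
b = "Levies on earnings were increased twelve months ago by the Treasury."

tfidf = TfidfVectorizer().fit_transform([a, b])
# Close to zero despite the similar meaning, because almost no words overlap.
print(cosine_similarity(tfidf[0], tfidf[1]))
```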
Some considerations to bear in mind when deciding on an embedding scheme to use are: the size of the final vector, the complexity of the model itself and, if using a pretrained implementation, the data the model has been trained on and whether it is trained in a supervised or unsupervised manner.
The size of the embedding can have numerous results downstream. In our example we will be doing distance calculations on the resultant vectors and therefore any increase in length will increase the complexity of those distance calculations. We would therefore like as short a vector as possible, but we still wish to capture all salient information about the claim; longer vectors have more capacity to store information, both salient and non-salient.
A similar effect is seen for the complexity of the model. A more complicated model, with more trainable parameters, may be able to capture finer details about the text, but it will require a larger corpus to achieve this, and will require more computational time to calculate the embeddings. We should therefore attempt to find the simplest embedding system that can accurately solve our problem.
When attempting to use pretrained models to help in other areas, it is always important to ensure that the models you are using are trained on similar material, to increase the chance that their findings will generalise to the new problem. Many unsupervised text embeddings are trained on the CommonCrawl dataset of approx. 840 billion tokens. This gives a huge amount of data across many domains, but requires a similarly huge amount of computing power to train on the entire dataset. Supervised datasets are unlikely ever to approach such scale as they require human annotations which can be expensive to assemble. The SNLI entailment dataset is an example of a large open source dataset BIBREF20 . It features pairs of sentences along with labels specifying whether or not one entails the other. Google's Universal Sentence Encoder (USE) BIBREF14 is a sentence embedding created with a hybrid supervised/unsupervised method, leveraging both the vast amounts of unsupervised training data and the extra detail that can be derived from a supervised method. The SNLI dataset and the related MultiNLI dataset are often used for this because textual entailment is seen as a good basis for general Natural Language Understanding (NLU) BIBREF21 .
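As an illustration, the snippet below shows how USE embeddings can be obtained through TensorFlow Hub; the module URL is the publicly listed one for USE Large v5 and the example sentences are invented.

```python
import tensorflow_hub as hub

# Universal Sentence Encoder (Large); downloads the model on first use.
embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder-large/5")

sentences = [
    "The UK government spent 350 million pounds a week on EU membership.",
    "Britain sends £350m a week to Brussels.",
]
vectors = embed(sentences)   # one 512-dimensional row per sentence
print(vectors.shape)         # (2, 512)
```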
Method
It is much easier to build a dataset and reliably evaluate a model if the starting definitions are clear and objective. Questions around what is an interesting or pertinent claim are inherently subjective. For example, it is obvious that a politician will judge their opponents' claims to be more important to factcheck than their own.
Therefore, we built on the methodologies that dealt with the objective qualities of claims, which were the PolitiTax and Full Fact taxonomies. We annotated sentences from our own database of news articles based on a combination of these. We also used the Full Fact definition of a claim as a statement about the world that can be checked. Some examples of claims according to this definition are shown in Table TABREF3 . We decided the first statement was a claim since it declares the occurrence of an event, while the second was considered not to be a claim as it is an expression of feeling.
Full Fact's approach centred around using sentence embeddings as a feature engineering step, followed by a simple classifier such as logistic regression, which is what we used. They used Facebook's sentence embeddings, InferSent BIBREF13 , which was a recent breakthrough at the time. Such is the speed of new development in the field that since then, several papers describing textual embeddings have been published. Due to the fact that we had already evaluated embeddings for clustering, and therefore knew our system would rely on Google USE Large BIBREF14 , we decided to use this instead. We compared this to TFIDF and Full Fact's results as baselines. The results are displayed in Table TABREF4 .
However, ClaimBuster and Full Fact focused on live factchecking of TV debates. Logically is a news aggregator and we analyse the bodies of published news stories. We found that in our corpus, the majority of sentences are claims and therefore our model needed to be as selective as possible. In practice, we choose to filter out sentences that are predictions since generally the substance of the claim cannot be fully checked until after the event has occurred. Likewise, we try to remove claims based on personal experience or anecdotal evidence as they are difficult to verify.
Choosing an embedding
In order to choose an embedding, we sought a dataset to represent our problem. Although no perfect matches exist, we decided upon the Quora duplicate question dataset BIBREF22 as the best match. To study the embeddings, we computed the euclidean distance between the two questions using various embeddings, to study the distance between semantically similar and dissimilar questions.
The graphs in Figure 1 show the distances between duplicate and non-duplicate questions using different embedding systems. The X axis shows the Euclidean distance between vectors and the Y axis shows frequency. A perfect result would be a blue peak to the left and an entirely disconnected orange spike to the right, showing that all non-duplicate questions have a greater Euclidean distance than the least similar duplicate pair of questions. As can be clearly seen in the figure above, ELMo BIBREF23 and InferSent BIBREF13 show almost no separation and therefore cannot be considered good models for this problem. A much greater disparity is shown by the Google USE models BIBREF14, and even more so by the Google USE Large model. In fact, Google USE Large achieved an F1 score of 0.71 for this task without any specific training, simply by choosing a threshold below which all sentence pairs are considered duplicates.
In order to test whether these results generalised to our domain, we devised a test that would make use of what little data we had to evaluate. We had no original data on whether sentences were semantically similar, but we did have a corpus of articles clustered into stories. Working on the assumption that similar claims would be more likely to be in the same story, we developed an equation to judge how well our corpus of sentences was clustered, rewarding clusterings that match the article clustering as well as the total number of claims clustered. The precise formula is given below, where INLINEFORM0 is the proportion of claims in clusters from one story cluster, INLINEFORM1 is the proportion of claims in the correct claim cluster, where they are from the most common story cluster, and INLINEFORM2 is the number of claims placed in clusters. A, B, and C are parameters to tune. INLINEFORM3
figureFormula to assess the correctness of claim clusters based on article clusters
This method is limited in how well it can represent the problem, but it can give indications as to a good or bad clustering method or embedding, and can act as a check that the findings we obtained from the Quora dataset will generalise to our domain. We ran code which vectorized 2,000 sentences and then used the DBScan clustering method BIBREF24 to cluster, using a grid search to find the best INLINEFORM0 value, maximizing this formula. We used DBScan as it mirrored the clustering method used to derive the original article clusters. The results for this experiment can be found in Table TABREF10. We included TFIDF in the experiment as a baseline to judge other results. It is not suitable for our eventual purposes, but it is the basis of the original keyword-based model used to build the clusters. That being said, TFIDF performs very well, with only Google USE Large and InferSent coming close in terms of `accuracy'. In the case of InferSent, this comes with the penalty of a much smaller number of claims included in the clusters. Google USE Large, however, clusters a greater number and for this reason we chose to use Google's USE Large.
Since Google USE Large was the best-performing embedding in both the tests we devised, this was our chosen embedding to use for clustering. However as can be seen from the results shown above, this is not a perfect solution and the inaccuracy here will introduce inaccuracy further down the clustering pipeline.
Clustering Method
We decided to follow a methodology based upon the DBScan method of clustering BIBREF24. DBScan considers all distances between pairs of points. If they are under INLINEFORM0 then those two are linked. Once the number of connected points exceeds a minimum size threshold, they are considered a cluster and all other points are considered to be unclustered. This method is advantageous for our purposes because, unlike other methods such as K-Means, it does not require the number of clusters to be specified. To create a system that can build clusters dynamically, adding one point at a time, we set the minimum cluster size to one, meaning that every point is a member of a cluster.
A potential disadvantage of this method is that because points require only one connection to a cluster to join it, they may only be related to one point in the cluster, but be considered in the same cluster as all of them. In small examples this is not a problem as all points in the cluster should be very similar. However as the number of points being considered grows, this behaviour raises the prospect of one or several borderline clustering decisions leading to massive clusters made from tenuous connections between genuine clusters. To mitigate this problem we used a method described in the Newslens paper BIBREF25 to solve a similar problem when clustering entire articles. We stored all of our claims in a graph with the connections between them added when the distance between them was determined to be less than INLINEFORM0 . To determine the final clusters we run a Louvain Community Detection BIBREF26 over this graph to split it into defined communities. This improved the compactness of a cluster. When clustering claims one by one, this algorithm can be performed on the connected subgraph featuring the new claim, to reduce the computation required.
As this method involves distance calculations between the claim being added and every existing claim, the time taken to add one claim will increase roughly linearly with respect to the number of previous claims. Through much optimization we have brought the computational time down to approximately 300ms per claim, which stays fairly static with respect to the number of previous claims.
Next Steps
The clustering described above is heavily dependent on the embedding used. The rate of advances in this field has been rapid in recent years, but an embedding will always be an imperfect representation of a claim and therefore always an area for improvement. A domain-specific embedding will likely offer a more accurate representation but creates problems with clustering claims from different domains. Such embeddings also require a huge amount of data to give a good model and that is not possible in all domains. | F1 score of 0.71 for this task without any specific training, simply by choosing a threshold below which all sentence pairs are considered duplicates, distances between duplicate and non-duplicate questions using different embedding systems |
4b0ba460ae3ba7a813f204abd16cf631b871baca | 4b0ba460ae3ba7a813f204abd16cf631b871baca_0 | Q: How is an incoming claim used to retrieve similar factchecked claims?
| text clustering on the embeddings of texts |
63b0c93f0452d0e1e6355de1d0f3ff0fd67939fb | 63b0c93f0452d0e1e6355de1d0f3ff0fd67939fb_0 | Q: What existing corpus is used for comparison in these experiments?
| Quora duplicate question dataset BIBREF22 |
d27f23bcd80b12f6df8e03e65f9b150444925ecf | d27f23bcd80b12f6df8e03e65f9b150444925ecf_0 | Q: What are the components in the factchecking algorithm?
Text: Introduction
In recent years, the spread of misinformation has become a growing concern for researchers and the public at large BIBREF1 . Researchers at MIT found that social media users are more likely to share false information than true information BIBREF2 . Due to renewed focus on finding ways to foster healthy political conversation, the profile of factcheckers has been raised.
Factcheckers positively influence public debate by publishing good quality information and asking politicians and journalists to retract misleading or false statements. By calling out lies and the blurring of the truth, they make those in positions of power accountable. This is a result of labour intensive work that involves monitoring the news for spurious claims and carrying out rigorous research to judge credibility. So far, it has only been possible to scale their output upwards by hiring more personnel. This is problematic because newsrooms need significant resources to employ factcheckers. Publication budgets have been decreasing, resulting in a steady decline in the size of their workforce BIBREF0 . Factchecking is not a directly profitable activity, which negatively affects the allocation of resources towards it in for-profit organisations. It is often taken on by charities and philanthropists instead.
To compensate for this shortfall, our strategy is to harness the latest developments in NLP to make factchecking more efficient and therefore less costly. To this end, the new field of automated factchecking has captured the imagination of both non-profits and start-ups BIBREF3 , BIBREF4 , BIBREF5 . It aims to speed up certain aspects of the factchecking process rather than create AI that can replace factchecking personnel. This includes monitoring claims that are made in the news, aiding decisions about which statements are the most important to check and automatically retrieving existing factchecks that are relevant to a new claim.
The claim detection and claim clustering methods that we set out in this paper can be applied to each of these. We sought to devise a system that would automatically detect claims in articles and compare them to previously submitted claims, storing the results so that a factchecker's work on one of these claims can easily be transferred to others in the same cluster.
Related Work
It is important to decide what sentences are claims before attempting to cluster them. The first such claim detection system to have been created is ClaimBuster BIBREF6 , which scores sentences with an SVM to determine how likely they are to be politically pertinent statements. Similarly, ClaimRank BIBREF7 uses real claims checked by factchecking institutions as training data in order to surface sentences that are worthy of factchecking.
These methods deal with the question of what is a politically interesting claim. In order to classify the objective qualities that set apart different types of claims, the ClaimBuster team created PolitiTax BIBREF8, a taxonomy of claims, and factchecking organisation Full Fact BIBREF9 developed their preferred annotation schema for statements in consultation with their own factcheckers. This research provides a more solid framework within which to construct claim detection classifiers.
The above considers whether or not a sentence is a claim, but often claims are subsections of sentences and multiple claims might be found in one sentence. In order to accommodate this, BIBREF10 proposes extracting phrases called Context Dependent Claims (CDC) that are relevant to a certain `Topic'. Along these lines, BIBREF11 proposes new definitions for frames to be incorporated into FrameNet BIBREF12 that are specific to facts, in particular those found in a political context.
Traditional text clustering methods, using TFIDF and some clustering algorithm, are poorly suited to the problem of clustering and comparing short texts, as they can be semantically very similar but use different words. This is a manifestation of the data sparsity problem with Bag-of-Words (BoW) models BIBREF16 . Dimensionality reduction methods such as Latent Dirichlet Allocation (LDA) can help solve this problem by giving a dense approximation of this sparse representation BIBREF17 . More recently, efforts in this area have used text embedding-based systems in order to capture a dense representation of the texts BIBREF18 . Much of this recent work has relied on the increased focus on word and text embeddings. Text embeddings have been an increasingly popular tool in NLP since the introduction of Word2Vec BIBREF19 , and since then the number of different embeddings has exploded. While many focus on giving a vector representation of a word, an increasing number now exist that will give a vector representation of an entire sentence or text. Following on from this work, we seek to devise a system that can run online, performing text clustering on the embeddings of texts one at a time.
Some considerations to bear in mind when deciding on an embedding scheme to use are: the size of the final vector, the complexity of the model itself and, if using a pretrained implementation, the data the model has been trained on and whether it is trained in a supervised or unsupervised manner.
The size of the embedding can have numerous results downstream. In our example we will be doing distance calculations on the resultant vectors and therefore any increase in length will increase the complexity of those distance calculations. We would therefore like as short a vector as possible, but we still wish to capture all salient information about the claim; longer vectors have more capacity to store information, both salient and non-salient.
A similar effect is seen for the complexity of the model. A more complicated model, with more trainable parameters, may be able to capture finer details about the text, but it will require a larger corpus to achieve this, and will require more computational time to calculate the embeddings. We should therefore attempt to find the simplest embedding system that can accurately solve our problem.
When attempting to use pretrained models to help in other areas, it is always important to ensure that the models you are using are trained on similar material, to increase the chance that their findings will generalise to the new problem. Many unsupervised text embeddings are trained on the CommonCrawl dataset of approx. 840 billion tokens. This gives a huge amount of data across many domains, but requires a similarly huge amount of computing power to train on the entire dataset. Supervised datasets are unlikely ever to approach such scale as they require human annotations which can be expensive to assemble. The SNLI entailment dataset is an example of a large open source dataset BIBREF20 . It features pairs of sentences along with labels specifying whether or not one entails the other. Google's Universal Sentence Encoder (USE) BIBREF14 is a sentence embedding created with a hybrid supervised/unsupervised method, leveraging both the vast amounts of unsupervised training data and the extra detail that can be derived from a supervised method. The SNLI dataset and the related MultiNLI dataset are often used for this because textual entailment is seen as a good basis for general Natural Language Understanding (NLU) BIBREF21 .
Method
It is much easier to build a dataset and reliably evaluate a model if the starting definitions are clear and objective. Questions around what is an interesting or pertinent claim are inherently subjective. For example, it is obvious that a politician will judge their opponents' claims to be more important to factcheck than their own.
Therefore, we built on the methodologies that dealt with the objective qualities of claims, which were the PolitiTax and Full Fact taxonomies. We annotated sentences from our own database of news articles based on a combination of these. We also used the Full Fact definition of a claim as a statement about the world that can be checked. Some examples of claims according to this definition are shown in Table TABREF3 . We decided the first statement was a claim since it declares the occurrence of an event, while the second was considered not to be a claim as it is an expression of feeling.
Full Fact's approach centred around using sentence embeddings as a feature engineering step, followed by a simple classifier such as logistic regression, which is what we used. They used Facebook's sentence embeddings, InferSent BIBREF13 , which was a recent breakthrough at the time. Such is the speed of new development in the field that since then, several papers describing textual embeddings have been published. Due to the fact that we had already evaluated embeddings for clustering, and therefore knew our system would rely on Google USE Large BIBREF14 , we decided to use this instead. We compared this to TFIDF and Full Fact's results as baselines. The results are displayed in Table TABREF4 .
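To make the claim-detection setup above concrete, the following is a minimal sketch of embedding sentences with Google USE and fitting a logistic regression classifier on top. The TF-Hub handle, the toy training examples and all variable names are illustrative assumptions, not details taken from the systems described above.

```python
import tensorflow_hub as hub
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

# Load a Universal Sentence Encoder module (handle/version is an assumption).
embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder-large/5")

def featurise(sentences):
    # USE maps each sentence to a fixed-length vector usable as features.
    return embed(sentences).numpy()

# Toy annotated data: 1 = claim, 0 = not a claim.
train_sents = ["The UK gave £10bn in overseas aid last year.",
               "I feel this policy is a disgrace."]
train_labels = [1, 0]

clf = LogisticRegression(max_iter=1000).fit(featurise(train_sents), train_labels)

test_sents = ["Unemployment fell by 3% in 2018.", "What a wonderful day."]
print(classification_report([1, 0], clf.predict(featurise(test_sents))))
```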
However, ClaimBuster and Full Fact focused on live factchecking of TV debates. Logically is a news aggregator and we analyse the bodies of published news stories. We found that in our corpus, the majority of sentences are claims and therefore our model needed to be as selective as possible. In practice, we choose to filter out sentences that are predictions since generally the substance of the claim cannot be fully checked until after the event has occurred. Likewise, we try to remove claims based on personal experience or anecdotal evidence as they are difficult to verify.
Choosing an embedding
In order to choose an embedding, we sought a dataset to represent our problem. Although no perfect matches exist, we decided upon the Quora duplicate question dataset BIBREF22 as the best match. To study the embeddings, we computed the euclidean distance between the two questions using various embeddings, to study the distance between semantically similar and dissimilar questions.
The graphs in figure 1 show the distances between duplicate and non-duplicate questions using different embedding systems. The X axis shows the euclidean distance between vectors and the Y axis frequency. A perfect result would be a blue peak to the left and an entirely disconnected orange spike to the right, showing that all non-duplicate questions have a greater euclidean distance than the least similar duplicate pair of questions. As can be clearly seen in the figure above, Elmo BIBREF23 and Infersent BIBREF13 show almost no separation and therefore cannot be considered good models for this problem. A much greater disparity is shown by the Google USE models BIBREF14 , and even more for the Google USE Large model. In fact the Google USE Large achieved a F1 score of 0.71 for this task without any specific training, simply by choosing a threshold below which all sentence pairs are considered duplicates.
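The threshold analysis described above can be sketched as follows: compute the Euclidean distance between the embeddings of each question pair and sweep a cut-off below which pairs are predicted as duplicates, keeping the cut-off with the best F1. This is an illustrative reconstruction; the exact search procedure used is not specified above.

```python
import numpy as np
from sklearn.metrics import f1_score

def best_threshold(emb_a, emb_b, is_duplicate):
    """emb_a, emb_b: (n, d) arrays of question embeddings;
    is_duplicate: (n,) array of 0/1 gold labels."""
    dists = np.linalg.norm(emb_a - emb_b, axis=1)
    best_f1, best_t = 0.0, None
    for t in np.linspace(dists.min(), dists.max(), 200):
        preds = (dists < t).astype(int)   # closer than t => predicted duplicate
        f1 = f1_score(is_duplicate, preds)
        if f1 > best_f1:
            best_f1, best_t = f1, t
    return best_t, best_f1
```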
In order to test whether these results generalised to our domain, we devised a test that would make use of what little data we had to evaluate. We had no original data on whether sentences were semantically similar, but we did have a corpus of articles clustered into stories. Working on the assumption that similar claims would be more likely to be in the same story, we developed an equation to judge how well our corpus of sentences was clustered, rewarding clustering which matches the article clustering and the total number of claims clustered. The precise formula is given below, where INLINEFORM0 is the proportion of claims in clusters from one story cluster, INLINEFORM1 is the proportion of claims in the correct claim cluster, where they are from the most common story cluster, and INLINEFORM2 is the number of claims placed in clusters. A,B and C are parameters to tune. INLINEFORM3
Figure: Formula to assess the correctness of claim clusters based on article clusters
This method is limited in how well it can represent the problem, but it can give indications as to a good or bad clustering method or embedding, and can act as a check that the findings we obtained from the Quora dataset will generalise to our domain. We ran code which vectorized 2,000 sentences and then used the DBScan clustering method BIBREF24 to cluster using a grid search to find the best INLINEFORM0 value, maximizing this formula. We used DBScan as it mirrored the clustering method used to derive the original article clusters. The results for this experiment can be found in Table TABREF10 . We included TFIDF in the experiment as a baseline to judge other results. It is not suitable for our eventual purposes, but it is the basis of the original keyword-based model used to build the clusters. That being said, TFIDF performs very well, with only Google USE Large and Infersent coming close in terms of `accuracy'. In the case of Infersent, this comes with the penalty of a much smaller number of claims included in the clusters. Google USE Large, however, clusters a greater number and for this reason we chose to use Google's USE Large.
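A sketch of this grid search is given below. The claim vectors, the story-cluster ids and the `cluster_score` callable (standing in for the formula above, whose exact functional form and A, B, C weights are not reproduced here) are all assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def grid_search_eps(claim_vectors, story_ids, eps_grid, cluster_score):
    """claim_vectors: (n, d) sentence embeddings; story_ids: article-cluster id per claim;
    cluster_score(labels, story_ids) -> float implements the formula above."""
    best_eps, best_score = None, -np.inf
    for eps in eps_grid:
        labels = DBSCAN(eps=eps).fit_predict(claim_vectors)
        score = cluster_score(labels, story_ids)
        if score > best_score:
            best_eps, best_score = eps, score
    return best_eps, best_score
```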
Since Google USE Large was the best-performing embedding in both the tests we devised, this was our chosen embedding to use for clustering. However as can be seen from the results shown above, this is not a perfect solution and the inaccuracy here will introduce inaccuracy further down the clustering pipeline.
Clustering Method
We decided to base our methodology upon the DBScan method of clustering BIBREF24 . DBScan considers all distances between pairs of points. If they are under INLINEFORM0 then those two are linked. Once the number of connected points exceeds a minimum size threshold, they are considered a cluster and all other points are considered to be unclustered. This method is advantageous for our purposes because unlike other methods, such as K-Means, it does not require the number of clusters to be specified. To create a system that can build clusters dynamically, adding one point at a time, we set the minimum cluster size to one, meaning that every point is a member of a cluster.
A potential disadvantage of this method is that because points require only one connection to a cluster to join it, they may only be related to one point in the cluster, but be considered in the same cluster as all of them. In small examples this is not a problem as all points in the cluster should be very similar. However as the number of points being considered grows, this behaviour raises the prospect of one or several borderline clustering decisions leading to massive clusters made from tenuous connections between genuine clusters. To mitigate this problem we used a method described in the Newslens paper BIBREF25 to solve a similar problem when clustering entire articles. We stored all of our claims in a graph with the connections between them added when the distance between them was determined to be less than INLINEFORM0 . To determine the final clusters we run a Louvain Community Detection BIBREF26 over this graph to split it into defined communities. This improved the compactness of a cluster. When clustering claims one by one, this algorithm can be performed on the connected subgraph featuring the new claim, to reduce the computation required.
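The incremental clustering step described above can be sketched as follows, using networkx's Louvain implementation: each new claim is linked to every stored claim within the distance threshold, and community detection is re-run on the affected connected component. The threshold value, data structures and function names are illustrative, and the timing optimizations mentioned below are not included.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import louvain_communities

G = nx.Graph()            # nodes: claim ids, node attribute "vec": claim embedding
EPS = 0.8                 # distance threshold (tuned separately; illustrative value)

def add_claim(claim_id, vec):
    G.add_node(claim_id, vec=vec)
    for other, data in G.nodes(data=True):
        if other == claim_id:
            continue
        if np.linalg.norm(vec - data["vec"]) < EPS:
            G.add_edge(claim_id, other)
    # Re-partition only the connected component containing the new claim.
    component = nx.node_connected_component(G, claim_id)
    communities = louvain_communities(G.subgraph(component))
    # Return the community (claim cluster) that the new claim belongs to.
    return [c for c in communities if claim_id in c][0]
```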
As this method involves distance calculations between the claim being added and every existing claim, the time taken to add one claim will increase roughly linearly with respect to the number of previous claims. Through much optimization we have brought the computational time down to approximately 300ms per claim, which stays fairly static with respect to the number of previous claims.
Next Steps
The clustering described above is heavily dependent on the embedding used. The rate of advances in this field has been rapid in recent years, but an embedding will always be an imperfect representation of a claim and is therefore always an area for improvement. A domain-specific embedding will likely offer a more accurate representation but creates problems with clustering claims from different domains. Such embeddings also require a huge amount of data to give a good model, and that is not possible in all domains. | Unanswerable |
b11ee27f3de7dd4a76a1f158dc13c2331af37d9f | b11ee27f3de7dd4a76a1f158dc13c2331af37d9f_0 | Q: What is the baseline?
Text: Introduction
Reading comprehension (RC) has become a key benchmark for natural language understanding (NLU) systems and a large number of datasets are now available BIBREF0, BIBREF1, BIBREF2. However, these datasets suffer from annotation artifacts and other biases, which allow systems to “cheat”: Instead of learning to read texts, systems learn to exploit these biases and find answers via simple heuristics, such as looking for an entity with a matching semantic type BIBREF3, BIBREF4. To give another example, many RC datasets contain a large number of “easy” problems that can be solved by looking at the first few words of the question Sugawara2018. In order to provide a reliable measure of progress, an RC dataset thus needs to be robust to such simple heuristics.
Towards this goal, two important directions have been investigated. One direction is to improve the dataset itself, for example, so that it requires an RC system to perform multi-hop inferences BIBREF0 or to generate answers BIBREF1. Another direction is to request a system to output additional information about answers. Yang2018HotpotQA:Answering propose HotpotQA, an “explainable” multi-hop Question Answering (QA) task that requires a system to identify a set of sentences containing supporting evidence for the given answer. We follow the footsteps of Yang2018HotpotQA:Answering and explore an explainable multi-hop QA task.
In the community, two important types of explanations have been explored so far BIBREF5: (i) introspective explanation (how a decision is made), and (ii) justification explanation (collections of evidences to support the decision). In this sense, supporting facts in HotpotQA can be categorized as justification explanations. The advantage of using justification explanations as benchmark is that the task can be reduced to a standard classification task, which enables us to adopt standard evaluation metrics (e.g. a classification accuracy). However, this task setting does not evaluate a machine's ability to (i) extract relevant information from justification sentences and (ii) synthesize them to form coherent logical reasoning steps, which are equally important for NLU.
To address this issue, we propose RC-QED, an RC task that requires not only the answer to a question, but also an introspective explanation in the form of a natural language derivation (NLD). For example, given the question “Which record company released the song Barracuda?” and supporting documents shown in Figure FIGREF1, a system needs to give the answer “Portrait Records” and to provide the following NLD: 1.) Barracuda is on Little Queen, and 2.) Little Queen was released by Portrait Records.
The main difference between our work and HotpotQA is that they identify a set of sentences $\lbrace s_2,s_4\rbrace $, while RC-QED requires a system to generate its derivations in a correct order. This generation task enables us to measure a machine's logical reasoning ability mentioned above. Due to the subjective nature of the natural language derivation task, we evaluate the correctness of derivations generated by a system with multiple reference answers. Our contributions can be summarized as follows:
We create a large corpus consisting of 12,000 QA pairs and natural language derivations. The developed crowdsourcing annotation framework can be used for annotating other QA datasets with derivations.
Through an experiment using two baseline models, we highlight several challenges of RC-QED.
We will make the corpus of reasoning annotations and the baseline system publicly available at https://naoya-i.github.io/rc-qed/.
Task formulation: RC-QED ::: Input, output, and evaluation metrics
We formally define RC-QED as follows:
Given: (i) a question $Q$, and (ii) a set $S$ of supporting documents relevant to $Q$;
Find: (i) answerability $s \in \lbrace \textsf {Answerable},$ $\textsf {Unanswerable} \rbrace $, (ii) an answer $a$, and (iii) a sequence $R$ of derivation steps.
We evaluate each prediction with the following evaluation metrics:
Answerability: Correctness of model's decision on answerability (i.e. binary classification task) evaluated by Precision/Recall/F1.
Answer precision: Correctness of predicted answers (for Answerable predictions only). We follow the standard practice of RC community for evaluation (e.g. an accuracy in the case of multiple choice QA).
Derivation precision: Correctness of generated NLDs evaluated by ROUGE-L BIBREF6 (RG-L) and BLEU-4 (BL-4) BIBREF7. We follow the standard practice of evaluation for natural language generation BIBREF1. Derivation steps might be subjective, so we resort to multiple reference answers.
Task formulation: RC-QED ::: RC-QED$^{\rm E}$
This paper instantiates RC-QED by employing multiple choice, entity-based multi-hop QA BIBREF0 as a testbed (henceforth, RC-QED$^{\rm E}$). In entity-based multi-hop QA, machines need to combine relational facts between entities to derive an answer. For example, in Figure FIGREF1, understanding the facts about Barracuda, Little Queen, and Portrait Records stated in each article is required. This design choice restricts a problem domain, but it provides interesting challenges as discussed in Section SECREF46. In addition, such entity-based chaining is known to account for the majority of reasoning types required for multi-hop reasoning BIBREF2.
More formally, given (i) a question $Q=(r, q)$ represented by a binary relation $r$ and an entity $q$ (question entity), (ii) relevant articles $S$, and (iii) a set $C$ of candidate entities, systems are required to output (i) an answerability $s \in \lbrace \textsf {Answerable}, \textsf {Unanswerable} \rbrace $, (ii) an entity $e \in C$ (answer entity) such that $(q, r, e)$ holds, and (iii) a sequence $R$ of derivation steps as to why $e$ is believed to be an answer. We define derivation steps as an $m$-chain of relational facts to derive an answer, i.e. $(q, r_1, e_1), (e_1, r_2, e_2), ..., (e_{m-1}, r_{m-1}, e_m), (e_m, r_m, e_{m+1})$. Although we restrict the form of knowledge to entity relations, we use a natural language form to represent $r_i$ rather than a closed vocabulary (see Figure FIGREF1 for an example).
Data collection for RC-QED$^{\rm E}$ ::: Crowdsourcing interface
To acquire a large-scale corpus of NLDs, we use crowdsourcing (CS). Although CS is a powerful tool for large-scale dataset creation BIBREF2, BIBREF8, quality control for complex tasks is still challenging. We thus carefully design an incentive structure for crowdworkers, following Yang2018HotpotQA:Answering.
Initially, we provide crowdworkers with an instruction with example annotations, where we emphasize that they judge the truth of statements solely based on given articles, not based on their own knowledge.
Data collection for RC-QED$^{\rm E}$ ::: Crowdsourcing interface ::: Judgement task (Figure UID13).
Given a statement and articles, workers are asked to judge whether the statement can be derived from the articles at three grades: True, Likely (i.e. Answerable), or Unsure (i.e. Unanswerable). If a worker selects Unsure, we ask workers to tell us why they are unsure from two choices (“Not stated in the article” or “Other”).
Data collection for RC-QED$^{\rm E}$ ::: Crowdsourcing interface ::: Derivation task (Figure UID14).
If a worker selects True or Likely in the judgement task, we first ask which sentences in the given articles are justification explanations for a given statement, similarly to HotpotQA BIBREF2. The “summary” text boxes (i.e. NLDs) are then initialized with these selected sentences. We give a ¢6 bonus to those workers who select True or Likely. To encourage an abstraction of selected sentences, we also introduce a gamification scheme to give a bonus to those who provide shorter NLDs. Specifically, we probabilistically give another ¢14 bonus to workers according to a score they gain. The score is always shown on top of the screen, and changes according to the length of NLDs they write in real time. To discourage noisy annotations, we also warn crowdworkers that their work would be rejected for noisy submissions. We periodically run simple filtering to exclude noisy crowdworkers (e.g. workers who give more than 50 submissions with the same answers).
We deployed the task on Amazon Mechanical Turk (AMT). To see how reasoning varies across workers, we hire 3 crowdworkers per one instance. We hire reliable crowdworkers with $\ge 5,000$ HITs experiences and an approval rate of $\ge $ 99.0%, and pay ¢20 as a reward per instance.
Our data collection pipeline is expected to be applicable to other types of QAs other than entity-based multi-hop QA without any significant extensions, because the interface is not specifically designed for entity-centric reasoning.
Data collection for RC-QED$^{\rm E}$ ::: Dataset
Our study uses WikiHop BIBREF0, as it is an entity-based multi-hop QA dataset and has been actively used. We randomly sampled 10,000 instances from 43,738 training instances and 2,000 instances from 5,129 validation instances (i.e. 36,000 annotation tasks were published on AMT). We manually converted structured WikiHop question-answer pairs (e.g. locatedIn(Macchu Picchu, Peru)) into natural language statements (Macchu Picchu is located in Peru) using a simple conversion dictionary.
We use supporting documents provided by WikiHop. WikiHop collects supporting documents by finding Wikipedia articles that bridges a question entity $e_i$ and an answer entity $e_j$, where the link between articles is given by a hyperlink.
Data collection for RC-QED$^{\rm E}$ ::: Results
Table TABREF17 shows the statistics of responses and example annotations. Table TABREF17 also shows the abstractiveness of annotated NLDs ($a$), namely the number of tokens in an NLD divided by the number of tokens in its corresponding justification sentences. This indicates that annotated NLDs are indeed summarized. See Table TABREF53 in Appendix and Supplementary Material for more results.
Data collection for RC-QED$^{\rm E}$ ::: Results ::: Quality
To evaluate the quality of annotation results, we publish another CS task on AMT. We randomly sample 300 True and Likely responses in this evaluation. Given NLDs and a statement, 3 crowdworkers are asked if the NLDs can lead to the statement at four scale levels. If the answer is 4 or 3 (“yes” or “likely”), we additionally asked whether each derivation step can be derived from each supporting document; otherwise we asked them the reasons. For a fair evaluation, we encourage crowdworkers to annotate given NLDs with a lower score by stating that we give a bonus if they found a flaw of reasoning on the CS interface.
The evaluation results shown in Table TABREF24 indicate that the annotated NLDs are of high quality (Reachability), and each NLD is properly derived from supporting documents (Derivability).
On the other hand, we found the quality of 3-step NLDs is relatively lower than the others. Crowdworkers found that 45.3% of 294 (out of 900) 3-step NLDs has missing steps to derive a statement. Let us consider this example: for annotated NLDs “[1] Kouvola is located in Helsinki. [2] Helsinki is in the region of Uusimaa. [3] Uusimaa borders the regions Southwest Finland, Kymenlaakso and some others.” and for the statement “Kouvola is located in Kymenlaakso”, one worker pointed out the missing step “Uusimaa is in Kymenlaakso.”. We speculate that greater steps of reasoning make it difficult for crowdworkers to check the correctness of derivations during the writing task.
Data collection for RC-QED$^{\rm E}$ ::: Results ::: Agreement
For agreement on the number of NLDs, we obtained a Krippendorff's $\alpha $ of 0.223, indicating a fair agreement BIBREF9.
Our manual inspection of the 10 worst disagreements revealed that the majority (7/10) come from Unsure vs. non-Unsure. It also revealed that crowdworkers who labeled non-Unsure are reliable—6 out of 7 non-Unsure annotations can be judged as correct. This partially confirms the effectiveness of our incentive structure.
Baseline RC-QED$^{\rm E}$ model
To highlight the challenges and nature of RC-QED$^{\rm E}$, we create a simple, transparent, and interpretable baseline model.
Recent studies on knowledge graph completion (KGC) explore compositional inferences to combat the sparsity of knowledge bases BIBREF10, BIBREF11, BIBREF12. Given a query triplet $(h, r, t)$ (e.g. (Macchu Picchu, locatedIn, Peru)), a path ranking-based approach for KGC explicitly samples paths between $h$ and $t$ in a knowledge base (e.g. Macchu Picchu—locatedIn—Andes Mountain—countryOf—Peru), and constructs a feature vector of these paths. This feature vector is then used to calculate the compatibility between the query triplet and the sampled paths.
RC-QED$^{\rm E}$ can be naturally solved by path ranking-based KGC (PRKGC), where the query triplet and the sampled paths correspond to a question and derivation steps, respectively. PRKGC meets our purposes because of its glass-box nature: we can trace the derivation steps of the model easily.
Baseline RC-QED$^{\rm E}$ model ::: Knowledge graph construction
Given supporting documents $S$, we build a knowledge graph. We first apply a coreference resolver to $S$ and then create a directed graph $G(S)$. Therein, each node represents named entities (NEs) in $S$, and each edge represents textual relations between NEs extracted from $S$. Figure FIGREF27 illustrates an example of $G(S)$ constructed from supporting documents in Figure FIGREF1.
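As an illustration of this construction, the sketch below builds a directed multigraph from (head entity, textual relation, tail entity) triples, assuming NER and coreference resolution have already been applied by a separate step. The example triples paraphrase Figure FIGREF1, and reverse edges are marked with the ":-1" suffix used later in the path examples.

```python
import networkx as nx

def build_graph(textual_triples):
    g = nx.MultiDiGraph()   # multiple textual relations may link the same NEs
    for head, relation, tail in textual_triples:
        g.add_edge(head, tail, relation=relation)
        # Also store the reverse direction so paths can traverse it (":-1" edges).
        g.add_edge(tail, head, relation=relation + ":-1")
    return g

triples = [
    ("Barracuda", "is the most popular in their album", "Little Queen"),
    ("Little Queen", "was released in May 1977 on", "Portrait Records"),
]
G = build_graph(triples)
```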
Baseline RC-QED$^{\rm E}$ model ::: Path ranking-based KGC (PRKGC)
Given a question $Q=(q, r)$ and a candidate entity $c_i$, we estimate the plausibility of $(q, r, c_i)$ as follows:
where $\sigma $ is a sigmoid function, and $\mathbf {q, r, c_i}, \mathbf {\pi }(q, c_i)$ are vector representations of $q, r, c_i$ and a set $\pi (q, c_i)$ of shortest paths between $q$ and $c_i$ on $G(S)$. ${\rm MLP}(\cdot , \cdot )$ denotes a multi-layer perceptron. To encode entities into vectors $\mathbf {q, c_i}$, we use Long-Short Term Memory (LSTM) and take its last hidden state. For example, in Figure FIGREF27, $q =$ Barracuda and $c_i =$ Portrait Records yield $\pi (q, c_i) = \lbrace $Barracuda—is the most popular in their album—Little Queen—was released in May 1977 on—Portrait Records, Barracuda—was released from American band Heart—is the second album released by:-1—Little Queen—was released in May 1977 on—Portrait Records$\rbrace $.
To obtain path representations $\mathbf {\pi }(q, c_i)$, we attentively aggregate individual path representations: $\mathbf {\pi }(q, c_i) = \sum _j \alpha _j \mathbf {\pi _j}(q, c_i)$, where $\alpha _j$ is an attention for the $j$-th path. The attention values are calculated as follows: $\alpha _j = \exp ({\rm sc}(q, r, c_i, \pi _j)) / \sum _k \exp ({\rm sc}(q, r, c_i, \pi _k))$, where ${\rm sc}(q, r, c_i, \pi _j) = {\rm MLP}(\mathbf {q}, \mathbf {r}, \mathbf {c_i}, \mathbf {\pi _j})$. To obtain individual path representations $\mathbf {\pi _j}$, we follow toutanova-etal-2015-representing. We use a Bi-LSTM BIBREF13 with mean pooling over timestep in order to encourage similar paths to have similar path representations.
For the testing phase, we choose a candidate entity $c_i$ with the maximum probability $P(r|q, c_i)$ as an answer entity, and choose a path $\pi _j$ with the maximum attention value $\alpha _j$ as NLDs. To generate NLDs, we simply traverse the path from $q$ to $c_i$ and subsequently concatenate all entities and textual relations as one string. We output Unanswerable when (i) $\max _{c_i \in C} P(r|q, c_i) < \epsilon _k$ or (ii) $G(S)$ has no path between $q$ and all $c_i \in C$.
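A condensed PyTorch sketch of the scorer described above is given below. It assumes $q$, $r$ and $c_i$ are already encoded as vectors and that each path is given as a sequence of token embeddings; the layer sizes, MLP depth and weight-sharing choices are illustrative rather than the exact configuration used.

```python
import torch
import torch.nn as nn

class PRKGCScorer(nn.Module):
    def __init__(self, dim=100, hidden=50):
        super().__init__()
        # Bi-LSTM path encoder with mean pooling over timesteps.
        self.path_rnn = nn.LSTM(dim, hidden, bidirectional=True, batch_first=True)
        # MLPs for the path-attention score sc(.) and for the final score.
        self.att_mlp = nn.Sequential(nn.Linear(3 * dim + 2 * hidden, dim),
                                     nn.ReLU(), nn.Linear(dim, 1))
        self.out_mlp = nn.Sequential(nn.Linear(3 * dim + 2 * hidden, dim),
                                     nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, q, r, c, paths):
        # q, r, c: (dim,) vectors; paths: (n_paths, seq_len, dim) token embeddings.
        states, _ = self.path_rnn(paths)            # (n_paths, seq_len, 2*hidden)
        path_reps = states.mean(dim=1)              # mean pooling over timesteps
        qrc = torch.cat([q, r, c]).expand(path_reps.size(0), -1)
        scores = self.att_mlp(torch.cat([qrc, path_reps], dim=-1)).squeeze(-1)
        alpha = torch.softmax(scores, dim=0)        # attention over paths
        pi = (alpha.unsqueeze(-1) * path_reps).sum(dim=0)   # aggregated path rep.
        logit = self.out_mlp(torch.cat([q, r, c, pi]))
        return torch.sigmoid(logit), alpha          # P(r | q, c_i), path attention
```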
Baseline RC-QED$^{\rm E}$ model ::: Training
Let $\mathcal {K}^+$ be a set of question-answer pairs, where each instance consists of a triplet (a query entity $q_i$, a relation $r_i$, an answer entity $a_i$). Similarly, let $\mathcal {K}^-$ be a set of question-non-answer pairs. We minimize the following binary cross-entropy loss:
From the NLD point of view, this is unsupervised training. The model is expected to learn the score function ${\rm sc(\cdot )}$ to give higher scores to paths (i.e. NLD steps) that are useful for discriminating correct answers from wrong answers on its own. Highly scored NLDs might be useful for answer classification, but these are not guaranteed to be interpretable to humans.
Baseline RC-QED$^{\rm E}$ model ::: Training ::: Semi-supervising derivations
To address the above issue, we resort to gold-standard NLDs to guide the path scoring function ${\rm sc(\cdot )}$. Let $\mathcal {D}$ be question-answer pairs coupled with gold-standard NLDs, namely a binary vector $\mathbf {p}_i$, where the $j$-th value represents whether $j$-th path corresponds to a gold-standard NLD (1) or not (0). We apply the following cross-entropy loss to the path attention:
Experiments ::: Settings ::: Dataset
We aggregated crowdsourced annotations obtained in Section SECREF3. As a preprocessing step, we converted the NLD annotation to Unsure if the derivation contains the phrase needs to be mentioned. This is due to the fact that annotators misunderstood our instruction. When at least one crowdworker states that a statement is Unsure, we set the answerability to Unanswerable and discard the NLD annotations. Otherwise, we employ all NLD annotations from workers as multiple reference NLDs. The statistics are shown in Table TABREF36.
Regarding $\mathcal {K}^+, \mathcal {K}^-$, we extracted 867,936 instances from the training set of WikiHop BIBREF0. We reserve 10% of these instances as a validation set to find the best model. For $\mathcal {D}$, we used Answerable questions in the training set. To create supervision of path (i.e. $\mathbf {p}_i$), we selected the path that is most similar to all NLD annotations in terms of ROUGE-L F1.
Experiments ::: Settings ::: Hyperparameters
We used 100-dimensional vectors for entities, relations, and textual relation representations. We initialize these representations with 100-dimensional Glove Embeddings BIBREF14 and fine-tuned them during training. We retain only top-100,000 frequent words as a model vocabulary. We used Bi-LSTM with 50 dimensional hidden state as a textual relation encoder, and an LSTM with 100-dimensional hidden state as an entity encoder. We used the Adam optimizer (default parameters) BIBREF15 with a batch size of 32. We set the answerability threshold $\epsilon _k = 0.5$.
Experiments ::: Settings ::: Baseline
To check the integrity of the PRKGC model, we created a simple baseline model (shortest path model). It outputs a candidate entity with the shortest path length from a query entity on $G(S)$ as an answer. Similarly to the PRKGC model, it traverses the path to generate NLDs. It outputs Unanswerable if (i) a query entity is not reachable to any candidate entities on $G(S)$ or (ii) the shortest path length is more than 3.
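The shortest path baseline can be sketched as follows, assuming a graph with a "relation" attribute on each edge as in the construction sketch above; the function name, traversal details and string formatting are illustrative.

```python
import networkx as nx

def shortest_path_baseline(G, query_entity, candidates, max_hops=3):
    best, best_len = None, None
    for c in candidates:
        try:
            path = nx.shortest_path(G, query_entity, c)
        except (nx.NetworkXNoPath, nx.NodeNotFound):
            continue
        if best_len is None or len(path) < best_len:
            best, best_len = (c, path), len(path)
    if best is None or best_len - 1 > max_hops:     # path length measured in hops
        return "Unanswerable", None
    answer, path = best
    # Traverse the path and concatenate entities and textual relations as the NLD.
    nld = []
    for u, v in zip(path, path[1:]):
        rel = list(G.get_edge_data(u, v).values())[0]["relation"]
        nld.append(f"{u} {rel} {v}")
    return answer, nld
```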
Experiments ::: Results and discussion
As shown in Table TABREF37, the PRKGC models learned to reason over more than simple shortest paths. Yet, the PRKGC model does not give considerably good results, which indicates the non-triviality of RC-QED$^{\rm E}$. Although the PRKGC model does not receive supervision about human-generated NLDs, paths with the maximum score match human-generated NLDs to some extent.
Supervising path attentions (the PRKGC+NS model) is indeed effective for improving the human interpretability of generated NLDs. It also improves the generalization ability of question answering. We speculate that $L_d$ functions as a regularizer, which helps models to learn reasoning that is helpful beyond the training data. This observation is consistent with previous work where an evidence selection task is learned jointly with a main task BIBREF11, BIBREF2, BIBREF5.
As shown in Table TABREF43, as the required derivation step increases, the PRKGC+NS model suffers from predicting answer entities and generating correct NLDs. This indicates that the challenge of RC-QED$^{\rm E}$ is in how to extract relevant information from supporting documents and synthesize these multiple facts to derive an answer.
To obtain further insights, we manually analyzed generated NLDs. Table TABREF44 (a) illustrates a positive example, where the model identifies that altudoceras belongs to pseudogastrioceratinae, and that pseudogastrioceratinae is a subfamily of paragastrioceratidae. Some supporting sentences are already similar to human-generated NLDs, thus simply extracting textual relations works well for some problems.
On the other hand, typical derivation error is from non-human readable textual relations. In (b), the model states that bumped has a relationship of “,” with hands up, which is originally extracted from one of supporting sentences It contains the UK Top 60 singles “Bumped”, “Hands Up (4 Lovers)” and .... This provides a useful clue for answer prediction, but is not suitable as a derivation. One may address this issue by incorporating, for example, a relation extractor or a paraphrasing mechanism using recent advances of conditional language models BIBREF20.
Experiments ::: Results and discussion ::: QA performance.
To check the integrity of our baseline models, we compare our baseline models with existing neural models tailored for QA under the pure WikiHop setting (i.e. evaluation with only an accuracy of predicted answers). Note that these existing models do not output derivations. We thus cannot make a direct comparison, so it serves only as a reference. Because WikiHop has no answerability task, we enforced the PRKGC model to always output answers. As shown in Table TABREF45, the PRKGC models achieve a comparable performance to other sophisticated neural models.
Related work ::: RC datasets with explanations
There exist few RC datasets annotated with explanations (Table TABREF50). The most similar work to ours is the Science QA dataset BIBREF21, BIBREF22, BIBREF23, which provides a small set of NLDs annotated for analysis purposes. By developing the scalable crowdsourcing framework, our work provides an order of magnitude more NLDs, which can be used as a benchmark more reliably. In addition, it provides the community with new types of challenges not included in HotpotQA.
Related work ::: Analysis of RC models and datasets
There is a large body of work on analyzing the nature of RC datasets, motivated by the question of to what degree RC models understand natural language BIBREF3, BIBREF4. Several studies suggest that current RC datasets have unintended bias, which enables RC systems to rely on cheap heuristics to answer questions. For instance, Sugawara2018 show that some of these RC datasets contain a large number of “easy” questions that can be solved by cheap heuristics (e.g. by looking at the first few tokens of questions). Responding to their findings, we take a step further and explore the new task of RC that requires RC systems to give introspective explanations as well as answers. In addition, recent studies show that current RC models and NLP models are vulnerable to adversarial examples BIBREF29, BIBREF30, BIBREF31. Explicit modeling of NLDs is expected to regularize RC models, which could prevent RC models' strong dependence on unintended bias in training data (e.g. annotation artifact) BIBREF32, BIBREF8, BIBREF2, BIBREF5, as partially confirmed in Section SECREF46.
Related work ::: Other NLP corpora annotated with explanations
There are existing NLP tasks that require models to output explanations (Table TABREF50). FEVER BIBREF25 requires a system to judge the “factness” of a claim as well as to identify justification sentences. As discussed earlier, we take a step further from justification explanations to provide new challenges for NLU.
Several datasets are annotated with introspective explanations, ranging from textual entailments BIBREF8 to argumentative texts BIBREF26, BIBREF27, BIBREF33. All these datasets offer the classification task of single sentences or sentence pairs. The uniqueness of our dataset is that it measures a machine's ability to extract relevant information from a set of documents and to build coherent logical reasoning steps.
Conclusions
Towards RC models that can perform correct reasoning, we have proposed RC-QED that requires a system to output its introspective explanations, as well as answers. Instantiating RC-QED with entity-based multi-hop QA (RC-QED$^{\rm E}$), we have created a large-scale corpus of NLDs. The developed crowdsourcing annotation framework can be used for annotating other QA datasets with derivations. Our experiments using two simple baseline models have demonstrated that RC-QED$^{\rm E}$ is a non-trivial task, and that it indeed provides a challenging task of extracting and synthesizing relevant facts from supporting documents. We will make the corpus of reasoning annotations and baseline systems publicly available at https://naoya-i.github.io/rc-qed/.
One immediate future work is to expand the annotation to non-entity-based multi-hop QA datasets such as HotpotQA BIBREF2. For modeling, we plan to incorporate a generative mechanism based on recent advances in conditional language modeling.
Example annotations
Table TABREF53 shows examples of crowdsourced annotations. | path ranking-based KGC (PRKGC) |
7aba5e4483293f5847caad144ee0791c77164917 | 7aba5e4483293f5847caad144ee0791c77164917_0 | Q: What dataset was used in the experiment?
Text: Introduction
Reading comprehension (RC) has become a key benchmark for natural language understanding (NLU) systems and a large number of datasets are now available BIBREF0, BIBREF1, BIBREF2. However, these datasets suffer from annotation artifacts and other biases, which allow systems to “cheat”: Instead of learning to read texts, systems learn to exploit these biases and find answers via simple heuristics, such as looking for an entity with a matching semantic type BIBREF3, BIBREF4. To give another example, many RC datasets contain a large number of “easy” problems that can be solved by looking at the first few words of the question Sugawara2018. In order to provide a reliable measure of progress, an RC dataset thus needs to be robust to such simple heuristics.
Towards this goal, two important directions have been investigated. One direction is to improve the dataset itself, for example, so that it requires an RC system to perform multi-hop inferences BIBREF0 or to generate answers BIBREF1. Another direction is to request a system to output additional information about answers. Yang2018HotpotQA:Answering propose HotpotQA, an “explainable” multi-hop Question Answering (QA) task that requires a system to identify a set of sentences containing supporting evidence for the given answer. We follow the footsteps of Yang2018HotpotQA:Answering and explore an explainable multi-hop QA task.
In the community, two important types of explanations have been explored so far BIBREF5: (i) introspective explanation (how a decision is made), and (ii) justification explanation (collections of evidences to support the decision). In this sense, supporting facts in HotpotQA can be categorized as justification explanations. The advantage of using justification explanations as benchmark is that the task can be reduced to a standard classification task, which enables us to adopt standard evaluation metrics (e.g. a classification accuracy). However, this task setting does not evaluate a machine's ability to (i) extract relevant information from justification sentences and (ii) synthesize them to form coherent logical reasoning steps, which are equally important for NLU.
To address this issue, we propose RC-QED, an RC task that requires not only the answer to a question, but also an introspective explanation in the form of a natural language derivation (NLD). For example, given the question “Which record company released the song Barracuda?” and supporting documents shown in Figure FIGREF1, a system needs to give the answer “Portrait Records” and to provide the following NLD: 1.) Barracuda is on Little Queen, and 2.) Little Queen was released by Portrait Records.
The main difference between our work and HotpotQA is that they identify a set of sentences $\lbrace s_2,s_4\rbrace $, while RC-QED requires a system to generate its derivations in a correct order. This generation task enables us to measure a machine's logical reasoning ability mentioned above. Due to the subjective nature of the natural language derivation task, we evaluate the correctness of derivations generated by a system with multiple reference answers. Our contributions can be summarized as follows:
We create a large corpus consisting of 12,000 QA pairs and natural language derivations. The developed crowdsourcing annotation framework can be used for annotating other QA datasets with derivations.
Through an experiment using two baseline models, we highlight several challenges of RC-QED.
We will make the corpus of reasoning annotations and the baseline system publicly available at https://naoya-i.github.io/rc-qed/.
Task formulation: RC-QED ::: Input, output, and evaluation metrics
We formally define RC-QED as follows:
Given: (i) a question $Q$, and (ii) a set $S$ of supporting documents relevant to $Q$;
Find: (i) answerability $s \in \lbrace \textsf {Answerable},$ $\textsf {Unanswerable} \rbrace $, (ii) an answer $a$, and (iii) a sequence $R$ of derivation steps.
We evaluate each prediction with the following evaluation metrics:
Answerability: Correctness of model's decision on answerability (i.e. binary classification task) evaluated by Precision/Recall/F1.
Answer precision: Correctness of predicted answers (for Answerable predictions only). We follow the standard practice of RC community for evaluation (e.g. an accuracy in the case of multiple choice QA).
Derivation precision: Correctness of generated NLDs evaluated by ROUGE-L BIBREF6 (RG-L) and BLEU-4 (BL-4) BIBREF7. We follow the standard practice of evaluation for natural language generation BIBREF1. Derivation steps might be subjective, so we resort to multiple reference answers.
Task formulation: RC-QED ::: RC-QED$^{\rm E}$
This paper instantiates RC-QED by employing multiple choice, entity-based multi-hop QA BIBREF0 as a testbed (henceforth, RC-QED$^{\rm E}$). In entity-based multi-hop QA, machines need to combine relational facts between entities to derive an answer. For example, in Figure FIGREF1, understanding the facts about Barracuda, Little Queen, and Portrait Records stated in each article is required. This design choice restricts a problem domain, but it provides interesting challenges as discussed in Section SECREF46. In addition, such entity-based chaining is known to account for the majority of reasoning types required for multi-hop reasoning BIBREF2.
More formally, given (i) a question $Q=(r, q)$ represented by a binary relation $r$ and an entity $q$ (question entity), (ii) relevant articles $S$, and (iii) a set $C$ of candidate entities, systems are required to output (i) an answerability $s \in \lbrace \textsf {Answerable}, \textsf {Unanswerable} \rbrace $, (ii) an entity $e \in C$ (answer entity) such that $(q, r, e)$ holds, and (iii) a sequence $R$ of derivation steps as to why $e$ is believed to be an answer. We define derivation steps as an $m$-chain of relational facts to derive an answer, i.e. $(q, r_1, e_1), (e_1, r_2, e_2), ..., (e_{m-1}, r_{m-1}, e_m), (e_m, r_m, e_{m+1})$. Although we restrict the form of knowledge to entity relations, we use a natural language form to represent $r_i$ rather than a closed vocabulary (see Figure FIGREF1 for an example).
Data collection for RC-QED$^{\rm E}$ ::: Crowdsourcing interface
To acquire a large-scale corpus of NLDs, we use crowdsourcing (CS). Although CS is a powerful tool for large-scale dataset creation BIBREF2, BIBREF8, quality control for complex tasks is still challenging. We thus carefully design an incentive structure for crowdworkers, following Yang2018HotpotQA:Answering.
Initially, we provide crowdworkers with an instruction with example annotations, where we emphasize that they judge the truth of statements solely based on given articles, not based on their own knowledge.
Data collection for RC-QED$^{\rm E}$ ::: Crowdsourcing interface ::: Judgement task (Figure UID13).
Given a statement and articles, workers are asked to judge whether the statement can be derived from the articles at three grades: True, Likely (i.e. Answerable), or Unsure (i.e. Unanswerable). If a worker selects Unsure, we ask workers to tell us why they are unsure from two choices (“Not stated in the article” or “Other”).
Data collection for RC-QED$^{\rm E}$ ::: Crowdsourcing interface ::: Derivation task (Figure UID14).
If a worker selects True or Likely in the judgement task, we first ask which sentences in the given articles are justification explanations for a given statement, similarly to HotpotQA BIBREF2. The “summary” text boxes (i.e. NLDs) are then initialized with these selected sentences. We give a ¢6 bonus to those workers who select True or Likely. To encourage an abstraction of selected sentences, we also introduce a gamification scheme to give a bonus to those who provide shorter NLDs. Specifically, we probabilistically give another ¢14 bonus to workers according to a score they gain. The score is always shown on top of the screen, and changes according to the length of NLDs they write in real time. To discourage noisy annotations, we also warn crowdworkers that their work would be rejected for noisy submissions. We periodically run simple filtering to exclude noisy crowdworkers (e.g. workers who give more than 50 submissions with the same answers).
We deployed the task on Amazon Mechanical Turk (AMT). To see how reasoning varies across workers, we hire 3 crowdworkers per one instance. We hire reliable crowdworkers with $\ge 5,000$ HITs experiences and an approval rate of $\ge $ 99.0%, and pay ¢20 as a reward per instance.
Our data collection pipeline is expected to be applicable to other types of QAs other than entity-based multi-hop QA without any significant extensions, because the interface is not specifically designed for entity-centric reasoning.
Data collection for RC-QED$^{\rm E}$ ::: Dataset
Our study uses WikiHop BIBREF0, as it is an entity-based multi-hop QA dataset and has been actively used. We randomly sampled 10,000 instances from 43,738 training instances and 2,000 instances from 5,129 validation instances (i.e. 36,000 annotation tasks were published on AMT). We manually converted structured WikiHop question-answer pairs (e.g. locatedIn(Macchu Picchu, Peru)) into natural language statements (Macchu Picchu is located in Peru) using a simple conversion dictionary.
We use supporting documents provided by WikiHop. WikiHop collects supporting documents by finding Wikipedia articles that bridges a question entity $e_i$ and an answer entity $e_j$, where the link between articles is given by a hyperlink.
Data collection for RC-QED$^{\rm E}$ ::: Results
Table TABREF17 shows the statistics of responses and example annotations. Table TABREF17 also shows the abstractiveness of annotated NLDs ($a$), namely the number of tokens in an NLD divided by the number of tokens in its corresponding justification sentences. This indicates that annotated NLDs are indeed summarized. See Table TABREF53 in Appendix and Supplementary Material for more results.
Data collection for RC-QED$^{\rm E}$ ::: Results ::: Quality
To evaluate the quality of annotation results, we publish another CS task on AMT. We randomly sample 300 True and Likely responses in this evaluation. Given NLDs and a statement, 3 crowdworkers are asked if the NLDs can lead to the statement at four scale levels. If the answer is 4 or 3 (“yes” or “likely”), we additionally asked whether each derivation step can be derived from each supporting document; otherwise we asked them the reasons. For a fair evaluation, we encourage crowdworkers to annotate given NLDs with a lower score by stating that we give a bonus if they found a flaw of reasoning on the CS interface.
The evaluation results shown in Table TABREF24 indicate that the annotated NLDs are of high quality (Reachability), and each NLD is properly derived from supporting documents (Derivability).
On the other hand, we found the quality of 3-step NLDs is relatively lower than the others. Crowdworkers found that 45.3% of 294 (out of 900) 3-step NLDs has missing steps to derive a statement. Let us consider this example: for annotated NLDs “[1] Kouvola is located in Helsinki. [2] Helsinki is in the region of Uusimaa. [3] Uusimaa borders the regions Southwest Finland, Kymenlaakso and some others.” and for the statement “Kouvola is located in Kymenlaakso”, one worker pointed out the missing step “Uusimaa is in Kymenlaakso.”. We speculate that greater steps of reasoning make it difficult for crowdworkers to check the correctness of derivations during the writing task.
Data collection for RC-QED$^{\rm E}$ ::: Results ::: Agreement
For agreement on the number of NLDs, we obtained a Krippendorff's $\alpha $ of 0.223, indicating a fair agreement BIBREF9.
Our manual inspection of the 10 worst disagreements revealed that the majority (7/10) come from Unsure vs. non-Unsure. It also revealed that crowdworkers who labeled non-Unsure are reliable—6 out of 7 non-Unsure annotations can be judged as correct. This partially confirms the effectiveness of our incentive structure.
Baseline RC-QED$^{\rm E}$ model
To highlight the challenges and nature of RC-QED$^{\rm E}$, we create a simple, transparent, and interpretable baseline model.
Recent studies on knowledge graph completion (KGC) explore compositional inferences to combat the sparsity of knowledge bases BIBREF10, BIBREF11, BIBREF12. Given a query triplet $(h, r, t)$ (e.g. (Macchu Picchu, locatedIn, Peru)), a path ranking-based approach for KGC explicitly samples paths between $h$ and $t$ in a knowledge base (e.g. Macchu Picchu—locatedIn—Andes Mountain—countryOf—Peru), and constructs a feature vector of these paths. This feature vector is then used to calculate the compatibility between the query triplet and the sampled paths.
RC-QED$^{\rm E}$ can be naturally solved by path ranking-based KGC (PRKGC), where the query triplet and the sampled paths correspond to a question and derivation steps, respectively. PRKGC meets our purposes because of its glass-box nature: we can trace the derivation steps of the model easily.
Baseline RC-QED$^{\rm E}$ model ::: Knowledge graph construction
Given supporting documents $S$, we build a knowledge graph. We first apply a coreference resolver to $S$ and then create a directed graph $G(S)$. Therein, each node represents named entities (NEs) in $S$, and each edge represents textual relations between NEs extracted from $S$. Figure FIGREF27 illustrates an example of $G(S)$ constructed from supporting documents in Figure FIGREF1.
Baseline RC-QED$^{\rm E}$ model ::: Path ranking-based KGC (PRKGC)
Given a question $Q=(q, r)$ and a candidate entity $c_i$, we estimate the plausibility of $(q, r, c_i)$ as follows:
where $\sigma $ is a sigmoid function, and $\mathbf {q, r, c_i}, \mathbf {\pi }(q, c_i)$ are vector representations of $q, r, c_i$ and a set $\pi (q, c_i)$ of shortest paths between $q$ and $c_i$ on $G(S)$. ${\rm MLP}(\cdot , \cdot )$ denotes a multi-layer perceptron. To encode entities into vectors $\mathbf {q, c_i}$, we use Long-Short Term Memory (LSTM) and take its last hidden state. For example, in Figure FIGREF27, $q =$ Barracuda and $c_i =$ Portrait Records yield $\pi (q, c_i) = \lbrace $Barracuda—is the most popular in their album—Little Queen—was released in May 1977 on—Portrait Records, Barracuda—was released from American band Heart—is the second album released by:-1—Little Queen—was released in May 1977 on—Portrait Records$\rbrace $.
To obtain path representations $\mathbf {\pi }(q, c_i)$, we attentively aggregate individual path representations: $\mathbf {\pi }(q, c_i) = \sum _j \alpha _j \mathbf {\pi _j}(q, c_i)$, where $\alpha _j$ is an attention for the $j$-th path. The attention values are calculated as follows: $\alpha _j = \exp ({\rm sc}(q, r, c_i, \pi _j)) / \sum _k \exp ({\rm sc}(q, r, c_i, \pi _k))$, where ${\rm sc}(q, r, c_i, \pi _j) = {\rm MLP}(\mathbf {q}, \mathbf {r}, \mathbf {c_i}, \mathbf {\pi _j})$. To obtain individual path representations $\mathbf {\pi _j}$, we follow toutanova-etal-2015-representing. We use a Bi-LSTM BIBREF13 with mean pooling over timestep in order to encourage similar paths to have similar path representations.
For the testing phase, we choose a candidate entity $c_i$ with the maximum probability $P(r|q, c_i)$ as an answer entity, and choose a path $\pi _j$ with the maximum attention value $\alpha _j$ as NLDs. To generate NLDs, we simply traverse the path from $q$ to $c_i$ and subsequently concatenate all entities and textual relations as one string. We output Unanswerable when (i) $\max _{c_i \in C} P(r|q, c_i) < \epsilon _k$ or (ii) $G(S)$ has no path between $q$ and all $c_i \in C$.
Baseline RC-QED$^{\rm E}$ model ::: Training
Let $\mathcal {K}^+$ be a set of question-answer pairs, where each instance consists of a triplet (a query entity $q_i$, a relation $r_i$, an answer entity $a_i$). Similarly, let $\mathcal {K}^-$ be a set of question-non-answer pairs. We minimize the following binary cross-entropy loss:
From the NLD point of view, this is unsupervised training. The model is expected to learn the score function ${\rm sc(\cdot )}$ to give higher scores to paths (i.e. NLD steps) that are useful for discriminating correct answers from wrong answers on its own. Highly scored NLDs might be useful for answer classification, but these are not guaranteed to be interpretable to humans.
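The objective above can be sketched as a standard binary cross-entropy over positive and negative triplets. The `scorer(q, r, c, paths)` interface (returning the probability $P(r|q, c_i)$ and the path attention) and the absence of batching are simplifying assumptions.

```python
import torch
import torch.nn.functional as F

def answer_loss(scorer, positives, negatives):
    """positives / negatives: lists of (q_vec, r_vec, c_vec, path_embeddings)
    drawn from K+ and K- respectively."""
    losses = []
    for triplets, label in ((positives, 1.0), (negatives, 0.0)):
        for q, r, c, paths in triplets:
            prob, _ = scorer(q, r, c, paths)
            target = torch.full_like(prob, label)
            losses.append(F.binary_cross_entropy(prob, target))
    return torch.stack(losses).mean()
```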
Baseline RC-QED$^{\rm E}$ model ::: Training ::: Semi-supervising derivations
To address the above issue, we resort to gold-standard NLDs to guide the path scoring function ${\rm sc(\cdot )}$. Let $\mathcal {D}$ be question-answer pairs coupled with gold-standard NLDs, namely a binary vector $\mathbf {p}_i$, where the $j$-th value represents whether $j$-th path corresponds to a gold-standard NLD (1) or not (0). We apply the following cross-entropy loss to the path attention:
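A sketch of this supervision term is given below: a cross-entropy between the path attention and the binary gold-path vector $\mathbf {p}_i$. Normalising $\mathbf {p}_i$ into a distribution is an assumption about the exact form of the loss, which is not spelled out above.

```python
import torch

def derivation_loss(alpha, p, eps=1e-8):
    """alpha: (n_paths,) attention values summing to 1;
    p: (n_paths,) binary vector marking gold-standard NLD paths."""
    target = p / (p.sum() + eps)              # gold paths as a distribution
    return -(target * torch.log(alpha + eps)).sum()
```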
Experiments ::: Settings ::: Dataset
We aggregated crowdsourced annotations obtained in Section SECREF3. As a preprocessing step, we converted the NLD annotation to Unsure if the derivation contains the phrase needs to be mentioned. This is due to the fact that annotators misunderstood our instruction. When at least one crowdworker states that a statement is Unsure, we set the answerability to Unanswerable and discard the NLD annotations. Otherwise, we employ all NLD annotations from workers as multiple reference NLDs. The statistics are shown in Table TABREF36.
Regarding $\mathcal {K}^+, \mathcal {K}^-$, we extracted 867,936 instances from the training set of WikiHop BIBREF0. We reserve 10% of these instances as a validation set to find the best model. For $\mathcal {D}$, we used Answerable questions in the training set. To create supervision of path (i.e. $\mathbf {p}_i$), we selected the path that is most similar to all NLD annotations in terms of ROUGE-L F1.
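The path-supervision heuristic above can be sketched as follows, using the rouge-score package; its API is assumed here, and averaging over the multiple reference NLDs is an implementation assumption.

```python
from rouge_score import rouge_scorer

_rouge = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)

def select_gold_path(candidate_path_texts, reference_nlds):
    """Return the index of the candidate path most similar to the references
    under average ROUGE-L F1."""
    def avg_f1(text):
        scores = [_rouge.score(ref, text)["rougeL"].fmeasure for ref in reference_nlds]
        return sum(scores) / len(scores)
    return max(range(len(candidate_path_texts)),
               key=lambda j: avg_f1(candidate_path_texts[j]))
```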
Experiments ::: Settings ::: Hyperparameters
We used 100-dimensional vectors for entities, relations, and textual relation representations. We initialize these representations with 100-dimensional Glove Embeddings BIBREF14 and fine-tuned them during training. We retain only top-100,000 frequent words as a model vocabulary. We used Bi-LSTM with 50 dimensional hidden state as a textual relation encoder, and an LSTM with 100-dimensional hidden state as an entity encoder. We used the Adam optimizer (default parameters) BIBREF15 with a batch size of 32. We set the answerability threshold $\epsilon _k = 0.5$.
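For reference, the hyperparameters above can be collected into a configuration sketch; only values stated in the text are filled in, everything else follows library defaults, and the choice of PyTorch as the framework is an assumption.

```python
import torch

CONFIG = {
    "embedding_dim": 100,        # entities, relations, textual relations (GloVe-initialised)
    "vocab_size": 100_000,       # top frequent words retained
    "textual_relation_encoder": {"type": "BiLSTM", "hidden": 50},
    "entity_encoder": {"type": "LSTM", "hidden": 100},
    "batch_size": 32,
    "answerability_threshold": 0.5,
}

def make_optimizer(model):
    return torch.optim.Adam(model.parameters())   # default Adam parameters
```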
Experiments ::: Settings ::: Baseline
To check the integrity of the PRKGC model, we created a simple baseline model (shortest path model). It outputs a candidate entity with the shortest path length from a query entity on $G(S)$ as an answer. Similarly to the PRKGC model, it traverses the path to generate NLDs. It outputs Unanswerable if (i) a query entity is not reachable to any candidate entities on $G(S)$ or (ii) the shortest path length is more than 3.
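A sketch of this baseline with `networkx` follows; it assumes that $G(S)$ is a directed graph whose edges store their textual relation under a `relation` attribute, which is our encoding choice.

```python
import networkx as nx

def shortest_path_baseline(G, query_entity, candidates, max_hops=3):
    """Answer with the candidate closest to the query entity on G(S); otherwise Unanswerable."""
    best, best_len, best_path = None, None, None
    for c in candidates:
        try:
            path = nx.shortest_path(G, source=query_entity, target=c)
        except (nx.NetworkXNoPath, nx.NodeNotFound):
            continue
        hops = len(path) - 1
        if best_len is None or hops < best_len:
            best, best_len, best_path = c, hops, path
    if best is None or best_len > max_hops:
        return "Unanswerable", None
    # Verbalize the path as the NLD by concatenating entities and edge texts.
    pieces = []
    for a, b in zip(best_path, best_path[1:]):
        pieces.extend([a, G.edges[a, b].get("relation", "")])
    pieces.append(best_path[-1])
    return best, " ".join(p for p in pieces if p)
```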
Experiments ::: Results and discussion
As shown in Table TABREF37, the PRKGC models learned to reason over more than simple shortest paths. Yet, the PRKGC model does not give particularly good results, which indicates the non-triviality of RC-QED$^{\rm E}$. Although the PRKGC model does not receive supervision about human-generated NLDs, paths with the maximum score match human-generated NLDs to some extent.
Supervising path attentions (the PRKGC+NS model) is indeed effective for improving the human interpretability of generated NLDs. It also improves the generalization ability of question answering. We speculate that $L_d$ functions as a regularizer, which helps models to learn reasoning that is helpful beyond the training data. This observation is consistent with previous work where an evidence selection task is learned jointly with a main task BIBREF11, BIBREF2, BIBREF5.
As shown in Table TABREF43, as the required derivation step increases, the PRKGC+NS model suffers from predicting answer entities and generating correct NLDs. This indicates that the challenge of RC-QED$^{\rm E}$ is in how to extract relevant information from supporting documents and synthesize these multiple facts to derive an answer.
To obtain further insights, we manually analyzed generated NLDs. Table TABREF44 (a) illustrates a positive example, where the model identifies that altudoceras belongs to pseudogastrioceratinae, and that pseudogastrioceratinae is a subfamily of paragastrioceratidae. Some supporting sentences are already similar to human-generated NLDs, thus simply extracting textual relations works well for some problems.
On the other hand, a typical derivation error stems from textual relations that are not human-readable. In (b), the model states that bumped has a relationship of “,” with hands up, which is originally extracted from one of the supporting sentences It contains the UK Top 60 singles “Bumped”, “Hands Up (4 Lovers)” and .... This provides a useful clue for answer prediction, but is not suitable as a derivation. One may address this issue by incorporating, for example, a relation extractor or a paraphrasing mechanism using recent advances in conditional language models BIBREF20.
Experiments ::: Results and discussion ::: QA performance.
To check the integrity of our baseline models, we compare them with existing neural models tailored for QA under the pure WikiHop setting (i.e. evaluation with only an accuracy of predicted answers). Note that these existing models do not output derivations. We thus cannot make a direct comparison; the numbers serve only as a reference. Because WikiHop has no answerability task, we enforced the PRKGC model to always output answers. As shown in Table TABREF45, the PRKGC models achieve a comparable performance to other sophisticated neural models.
Related work ::: RC datasets with explanations
There exist only a few RC datasets annotated with explanations (Table TABREF50). The most similar work to ours is the Science QA dataset BIBREF21, BIBREF22, BIBREF23, which provides a small set of NLDs annotated for analysis purposes. By developing the scalable crowdsourcing framework, our work provides one order-of-magnitude larger NLDs which can be used as a benchmark more reliably. In addition, it provides the community with new types of challenges not included in HotpotQA.
Related work ::: Analysis of RC models and datasets
There is a large body of work on analyzing the nature of RC datasets, motivated by the question to what degree RC models understand natural language BIBREF3, BIBREF4. Several studies suggest that current RC datasets have unintended bias, which enables RC systems to rely on cheap heuristics to answer questions. For instance, Sugawara2018 show that some of these RC datasets contain a large number of “easy” questions that can be solved by cheap heuristics (e.g. by looking at the first few tokens of questions). Responding to their findings, we take a step further and explore the new task of RC that requires RC systems to give introspective explanations as well as answers. In addition, recent studies show that current RC models and NLP models are vulnerable to adversarial examples BIBREF29, BIBREF30, BIBREF31. Explicit modeling of NLDs is expected to regularize RC models, which could prevent RC models' strong dependence on unintended bias in training data (e.g. annotation artifacts) BIBREF32, BIBREF8, BIBREF2, BIBREF5, as partially confirmed in Section SECREF46.
Related work ::: Other NLP corpora annotated with explanations
There are existing NLP tasks that require models to output explanations (Table TABREF50). FEVER BIBREF25 requires a system to judge the “factness” of a claim as well as to identify justification sentences. As discussed earlier, we take a step further from justification explanations to provide new challenges for NLU.
Several datasets are annotated with introspective explanations, ranging from textual entailments BIBREF8 to argumentative texts BIBREF26, BIBREF27, BIBREF33. All these datasets offer the classification task of single sentences or sentence pairs. The uniqueness of our dataset is that it measures a machine's ability to extract relevant information from a set of documents and to build coherent logical reasoning steps.
Conclusions
Towards RC models that can perform correct reasoning, we have proposed RC-QED that requires a system to output its introspective explanations, as well as answers. Instantiating RC-QED with entity-based multi-hop QA (RC-QED$^{\rm E}$), we have created a large-scale corpus of NLDs. The developed crowdsourcing annotation framework can be used for annotating other QA datasets with derivations. Our experiments using two simple baseline models have demonstrated that RC-QED$^{\rm E}$ is a non-trivial task, and that it indeed provides a challenging task of extracting and synthesizing relevant facts from supporting documents. We will make the corpus of reasoning annotations and baseline systems publicly available at https://naoya-i.github.io/rc-qed/.
One immediate future work is to expand the annotation to non-entity-based multi-hop QA datasets such as HotpotQA BIBREF2. For modeling, we plan to incorporate a generative mechanism based on recent advances in conditional language modeling.
Example annotations
Table TABREF53 shows examples of crowdsourced annotations. | WikiHop |
565d668947ffa6d52dad019af79289420505889b | 565d668947ffa6d52dad019af79289420505889b_0 | Q: Did they use any crowdsourcing platform?
Text: Introduction
Reading comprehension (RC) has become a key benchmark for natural language understanding (NLU) systems and a large number of datasets are now available BIBREF0, BIBREF1, BIBREF2. However, these datasets suffer from annotation artifacts and other biases, which allow systems to “cheat”: Instead of learning to read texts, systems learn to exploit these biases and find answers via simple heuristics, such as looking for an entity with a matching semantic type BIBREF3, BIBREF4. To give another example, many RC datasets contain a large number of “easy” problems that can be solved by looking at the first few words of the question Sugawara2018. In order to provide a reliable measure of progress, an RC dataset thus needs to be robust to such simple heuristics.
Towards this goal, two important directions have been investigated. One direction is to improve the dataset itself, for example, so that it requires an RC system to perform multi-hop inferences BIBREF0 or to generate answers BIBREF1. Another direction is to request a system to output additional information about answers. Yang2018HotpotQA:Answering propose HotpotQA, an “explainable” multi-hop Question Answering (QA) task that requires a system to identify a set of sentences containing supporting evidence for the given answer. We follow the footsteps of Yang2018HotpotQA:Answering and explore an explainable multi-hop QA task.
In the community, two important types of explanations have been explored so far BIBREF5: (i) introspective explanation (how a decision is made), and (ii) justification explanation (collections of evidences to support the decision). In this sense, supporting facts in HotpotQA can be categorized as justification explanations. The advantage of using justification explanations as benchmark is that the task can be reduced to a standard classification task, which enables us to adopt standard evaluation metrics (e.g. a classification accuracy). However, this task setting does not evaluate a machine's ability to (i) extract relevant information from justification sentences and (ii) synthesize them to form coherent logical reasoning steps, which are equally important for NLU.
To address this issue, we propose RC-QED, an RC task that requires not only the answer to a question, but also an introspective explanation in the form of a natural language derivation (NLD). For example, given the question “Which record company released the song Barracuda?” and supporting documents shown in Figure FIGREF1, a system needs to give the answer “Portrait Records” and to provide the following NLD: 1.) Barracuda is on Little Queen, and 2.) Little Queen was released by Portrait Records.
The main difference between our work and HotpotQA is that they identify a set of sentences $\lbrace s_2,s_4\rbrace $, while RC-QED requires a system to generate its derivations in a correct order. This generation task enables us to measure a machine's logical reasoning ability mentioned above. Due to the subjective nature of the natural language derivation task, we evaluate the correctness of derivations generated by a system with multiple reference answers. Our contributions can be summarized as follows:
We create a large corpus consisting of 12,000 QA pairs and natural language derivations. The developed crowdsourcing annotation framework can be used for annotating other QA datasets with derivations.
Through an experiment using two baseline models, we highlight several challenges of RC-QED.
We will make the corpus of reasoning annotations and the baseline system publicly available at https://naoya-i.github.io/rc-qed/.
Task formulation: RC-QED ::: Input, output, and evaluation metrics
We formally define RC-QED as follows:
Given: (i) a question $Q$, and (ii) a set $S$ of supporting documents relevant to $Q$;
Find: (i) answerability $s \in \lbrace \textsf {Answerable},$ $\textsf {Unanswerable} \rbrace $, (ii) an answer $a$, and (iii) a sequence $R$ of derivation steps.
We evaluate each prediction with the following evaluation metrics:
Answerability: Correctness of model's decision on answerability (i.e. binary classification task) evaluated by Precision/Recall/F1.
Answer precision: Correctness of predicted answers (for Answerable predictions only). We follow the standard practice of RC community for evaluation (e.g. an accuracy in the case of multiple choice QA).
Derivation precision: Correctness of generated NLDs evaluated by ROUGE-L BIBREF6 (RG-L) and BLEU-4 (BL-4) BIBREF7. We follow the standard practice of evaluation for natural language generation BIBREF1. Derivation steps might be subjective, so we resort to multiple reference answers.
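As a concrete illustration of the derivation metric, the snippet below scores a generated derivation against multiple references with the `rouge-score` package and NLTK. Taking the best ROUGE-L over references is one common convention and may differ from the exact aggregation used here.

```python
from nltk.translate.bleu_score import sentence_bleu
from rouge_score import rouge_scorer

_rouge = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)

def derivation_precision(predicted_nld, reference_nlds):
    """Score a generated derivation against multiple reference NLDs."""
    rg_l = max(_rouge.score(ref, predicted_nld)["rougeL"].fmeasure for ref in reference_nlds)
    bl_4 = sentence_bleu([ref.split() for ref in reference_nlds],
                         predicted_nld.split())        # default weights give BLEU-4
    return {"RG-L": rg_l, "BL-4": bl_4}
```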
Task formulation: RC-QED ::: RC-QED$^{\rm E}$
This paper instantiates RC-QED by employing multiple choice, entity-based multi-hop QA BIBREF0 as a testbed (henceforth, RC-QED$^{\rm E}$). In entity-based multi-hop QA, machines need to combine relational facts between entities to derive an answer. For example, in Figure FIGREF1, understanding the facts about Barracuda, Little Queen, and Portrait Records stated in each article is required. This design choice restricts a problem domain, but it provides interesting challenges as discussed in Section SECREF46. In addition, such entity-based chaining is known to account for the majority of reasoning types required for multi-hop reasoning BIBREF2.
More formally, given (i) a question $Q=(r, q)$ represented by a binary relation $r$ and an entity $q$ (question entity), (ii) relevant articles $S$, and (iii) a set $C$ of candidate entities, systems are required to output (i) an answerability $s \in \lbrace \textsf {Answerable}, \textsf {Unanswerable} \rbrace $, (ii) an entity $e \in C$ (answer entity) such that $(q, r, e)$ holds, and (iii) a sequence $R$ of derivation steps as to why $e$ is believed to be an answer. We define derivation steps as a chain of $m$ relational facts that derives the answer, i.e. $(q, r_1, e_1), (e_1, r_2, e_2), ..., (e_{m-1}, r_{m-1}, e_m),$ $(e_m, r_m, e_{m+1})$. Although we restrict the form of knowledge to entity relations, we use a natural language form to represent $r_i$ rather than a closed vocabulary (see Figure FIGREF1 for an example).
Data collection for RC-QED$^{\rm E}$ ::: Crowdsourcing interface
To acquire a large-scale corpus of NLDs, we use crowdsourcing (CS). Although CS is a powerful tool for large-scale dataset creation BIBREF2, BIBREF8, quality control for complex tasks is still challenging. We thus carefully design an incentive structure for crowdworkers, following Yang2018HotpotQA:Answering.
Initially, we provide crowdworkers with an instruction with example annotations, where we emphasize that they judge the truth of statements solely based on given articles, not based on their own knowledge.
Data collection for RC-QED$^{\rm E}$ ::: Crowdsourcing interface ::: Judgement task (Figure UID13).
Given a statement and articles, workers are asked to judge whether the statement can be derived from the articles at three grades: True, Likely (i.e. Answerable), or Unsure (i.e. Unanswerable). If a worker selects Unsure, we ask workers to tell us why they are unsure from two choices (“Not stated in the article” or “Other”).
Data collection for RC-QED$^{\rm E}$ ::: Crowdsourcing interface ::: Derivation task (Figure UID14).
If a worker selects True or Likely in the judgement task, we first ask which sentences in the given articles are justification explanations for a given statement, similarly to HotpotQA BIBREF2. The “summary” text boxes (i.e. NLDs) are then initialized with these selected sentences. We give a ¢6 bonus to those workers who select True or Likely. To encourage an abstraction of selected sentences, we also introduce a gamification scheme to give a bonus to those who provide shorter NLDs. Specifically, we probabilistically give another ¢14 bonus to workers according to a score they gain. The score is always shown on top of the screen, and changes according to the length of NLDs they write in real time. To discourage noisy annotations, we also warn crowdworkers that their work would be rejected for noisy submissions. We periodically run simple filtering to exclude noisy crowdworkers (e.g. workers who give more than 50 submissions with the same answers).
We deployed the task on Amazon Mechanical Turk (AMT). To see how reasoning varies across workers, we hire 3 crowdworkers per one instance. We hire reliable crowdworkers with $\ge 5,000$ HITs experiences and an approval rate of $\ge $ 99.0%, and pay ¢20 as a reward per instance.
Our data collection pipeline is expected to be applicable to other types of QAs other than entity-based multi-hop QA without any significant extensions, because the interface is not specifically designed for entity-centric reasoning.
Data collection for RC-QED$^{\rm E}$ ::: Dataset
Our study uses WikiHop BIBREF0, as it is an entity-based multi-hop QA dataset and has been actively used. We randomly sampled 10,000 instances from 43,738 training instances and 2,000 instances from 5,129 validation instances (i.e. 36,000 annotation tasks were published on AMT). We manually converted structured WikiHop question-answer pairs (e.g. locatedIn(Macchu Picchu, Peru)) into natural language statements (Macchu Picchu is located in Peru) using a simple conversion dictionary.
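A minimal sketch of such a conversion step is given below; the templates are illustrative stand-ins, not the actual dictionary used for the dataset.

```python
# Illustrative templates mapping WikiHop relations to statement patterns.
TEMPLATES = {
    "locatedIn": "{subject} is located in {object}",
    "recordLabel": "{subject} was released by {object}",
}

def to_statement(relation, subject, obj):
    """Convert a structured query such as locatedIn(Macchu Picchu, Peru) into a statement."""
    template = TEMPLATES.get(relation, "{subject} has the relation '" + relation + "' to {object}")
    return template.format(subject=subject, object=obj)

# to_statement("locatedIn", "Macchu Picchu", "Peru") -> "Macchu Picchu is located in Peru"
```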
We use supporting documents provided by WikiHop. WikiHop collects supporting documents by finding Wikipedia articles that bridge a question entity $e_i$ and an answer entity $e_j$, where the link between articles is given by a hyperlink.
Data collection for RC-QED$^{\rm E}$ ::: Results
Table TABREF17 shows the statistics of responses and example annotations. Table TABREF17 also shows the abstractiveness of annotated NLDs ($a$), namely the number of tokens in an NLD divided by the number of tokens in its corresponding justification sentences. This indicates that annotated NLDs are indeed summarized. See Table TABREF53 in Appendix and Supplementary Material for more results.
Data collection for RC-QED$^{\rm E}$ ::: Results ::: Quality
To evaluate the quality of annotation results, we publish another CS task on AMT. We randomly sample 300 True and Likely responses in this evaluation. Given NLDs and a statement, 3 crowdworkers are asked if the NLDs can lead to the statement at four scale levels. If the answer is 4 or 3 (“yes” or “likely”), we additionally asked whether each derivation step can be derived from each supporting document; otherwise we asked them the reasons. For a fair evaluation, we encourage crowdworkers to annotate given NLDs with a lower score by stating that we give a bonus if they found a flaw of reasoning on the CS interface.
The evaluation results shown in Table TABREF24 indicate that the annotated NLDs are of high quality (Reachability), and each NLD is properly derived from supporting documents (Derivability).
On the other hand, we found that the quality of 3-step NLDs is relatively lower than the others. Crowdworkers found that 45.3% of 294 (out of 900) 3-step NLDs have missing steps to derive a statement. Let us consider this example: for the annotated NLDs “[1] Kouvola is located in Helsinki. [2] Helsinki is in the region of Uusimaa. [3] Uusimaa borders the regions Southwest Finland, Kymenlaakso and some others.” and the statement “Kouvola is located in Kymenlaakso”, one worker pointed out the missing step “Uusimaa is in Kymenlaakso.”. We speculate that a greater number of reasoning steps makes it difficult for crowdworkers to check the correctness of derivations during the writing task.
Data collection for RC-QED$^{\rm E}$ ::: Results ::: Agreement
For agreement on the number of NLDs, we obtained a Krippendorff's $\alpha $ of 0.223, indicating a fair agreement BIBREF9.
Our manual inspection of the 10 worst disagreements revealed that the majority (7/10) come from Unsure vs. non-Unsure. It also revealed that crowdworkers who labeled non-Unsure are reliable: 6 out of 7 non-Unsure annotations can be judged as correct. This partially confirms the effectiveness of our incentive structure.
Baseline RC-QED$^{\rm E}$ model
To highlight the challenges and nature of RC-QED$^{\rm E}$, we create a simple, transparent, and interpretable baseline model.
Recent studies on knowledge graph completion (KGC) explore compositional inferences to combat the sparsity of knowledge bases BIBREF10, BIBREF11, BIBREF12. Given a query triplet $(h, r, t)$ (e.g. (Macchu Picchu, locatedIn, Peru)), a path ranking-based approach for KGC explicitly samples paths between $h$ and $t$ in a knowledge base (e.g. Macchu Picchu—locatedIn—Andes Mountain—countryOf—Peru), and constructs a feature vector of these paths. This feature vector is then used to calculate the compatibility between the query triplet and the sampled paths.
RC-QED$^{\rm E}$ can be naturally solved by path ranking-based KGC (PRKGC), where the query triplet and the sampled paths correspond to a question and derivation steps, respectively. PRKGC meets our purposes because of its glassboxness: we can trace the derivation steps of the model easily.
Baseline RC-QED$^{\rm E}$ model ::: Knowledge graph construction
Given supporting documents $S$, we build a knowledge graph. We first apply a coreference resolver to $S$ and then create a directed graph $G(S)$. Therein, each node represents named entities (NEs) in $S$, and each edge represents textual relations between NEs extracted from $S$. Figure FIGREF27 illustrates an example of $G(S)$ constructed from supporting documents in Figure FIGREF1.
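A simplified sketch of this construction with spaCy and `networkx` is shown below. It omits coreference resolution and approximates a textual relation by the text spanning between two entity mentions in the same sentence; both are simplifications of the procedure described here.

```python
import spacy
import networkx as nx

nlp = spacy.load("en_core_web_sm")   # coreference resolution is omitted in this sketch

def build_knowledge_graph(documents):
    """Nodes are named entities; edges carry the text between entity mentions in a sentence."""
    G = nx.DiGraph()
    for doc_text in documents:
        doc = nlp(doc_text)
        for sent in doc.sents:
            ents = list(sent.ents)
            for e1, e2 in zip(ents, ents[1:]):
                relation = doc.text[e1.end_char:e2.start_char].strip(" ,.")
                if relation:
                    G.add_edge(e1.text, e2.text, relation=relation)
    return G
```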
Baseline RC-QED$^{\rm E}$ model ::: Path ranking-based KGC (PRKGC)
Given a question $Q=(q, r)$ and a candidate entity $c_i$, we estimate the plausibility of $(q, r, c_i)$ as follows:
where $\sigma $ is a sigmoid function, and $\mathbf {q, r, c_i}, \mathbf {\pi }(q, c_i)$ are vector representations of $q, r, c_i$ and a set $\pi (q, c_i)$ of shortest paths between $q$ and $c_i$ on $G(S)$. ${\rm MLP}(\cdot , \cdot )$ denotes a multi-layer perceptron. To encode entities into vectors $\mathbf {q, c_i}$, we use Long-Short Term Memory (LSTM) and take its last hidden state. For example, in Figure FIGREF27, $q =$ Barracuda and $c_i =$ Portrait Records yield $\pi (q, c_i) = \lbrace $Barracuda—is the most popular in their album—Little Queen—was released in May 1977 on—Portrait Records, Barracuda—was released from American band Heart—is the second album released by:-1—Little Queen—was released in May 1977 on—Portrait Records$\rbrace $.
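A sketch of this scoring head in PyTorch is given below; concatenating the four vector representations before the MLP is our assumption about how the inputs are combined.

```python
import torch
import torch.nn as nn

class PlausibilityScorer(nn.Module):
    """Estimates P(r | q, c) = sigmoid(MLP([q; r; c; pi(q, c)]))."""

    def __init__(self, dim):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(4 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, q_vec, r_vec, c_vec, path_vec):
        logits = self.mlp(torch.cat([q_vec, r_vec, c_vec, path_vec], dim=-1))
        return torch.sigmoid(logits).squeeze(-1)
```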
To obtain path representations $\mathbf {\pi }(q, c_i)$, we attentively aggregate individual path representations: $\mathbf {\pi }(q, c_i) = \sum _j \alpha _j \mathbf {\pi _j}(q, c_i)$, where $\alpha _j$ is an attention for the $j$-th path. The attention values are calculated as follows: $\alpha _j = \exp ({\rm sc}(q, r, c_i, \pi _j)) / \sum _k \exp ({\rm sc}(q, r, c_i, \pi _k))$, where ${\rm sc}(q, r, c_i, \pi _j) = {\rm MLP}(\mathbf {q}, \mathbf {r}, \mathbf {c_i}, \mathbf {\pi _j})$. To obtain individual path representations $\mathbf {\pi _j}$, we follow toutanova-etal-2015-representing. We use a Bi-LSTM BIBREF13 with mean pooling over timestep in order to encourage similar paths to have similar path representations.
For the testing phase, we choose a candidate entity $c_i$ with the maximum probability $P(r|q, c_i)$ as an answer entity, and choose a path $\pi _j$ with the maximum attention value $\alpha _j$ as NLDs. To generate NLDs, we simply traverse the path from $q$ to $c_i$ and subsequently concatenate all entities and textual relations as one string. We output Unanswerable when (i) $\max _{c_i \in C} P(r|q, c_i) < \epsilon _k$ or (ii) $G(S)$ has no path between $q$ and all $c_i \in C$.
Baseline RC-QED$^{\rm E}$ model ::: Training
Let $\mathcal {K}^+$ be a set of question-answer pairs, where each instance consists of a triplet (a query entity $q_i$, a relation $r_i$, an answer entity $a_i$). Similarly, let $\mathcal {K}^-$ be a set of question-non-answer pairs. We minimize the following binary cross-entropy loss:
From the NLD point of view, this is unsupervised training. The model is expected to learn the score function ${\rm sc(\cdot )}$ to give higher scores to paths (i.e. NLD steps) that are useful for discriminating correct answers from wrong answers on its own. Highly scored NLDs might be useful for answer classification, but they are not guaranteed to be interpretable to humans.
Baseline RC-QED$^{\rm E}$ model ::: Training ::: Semi-supervising derivations
To address the above issue, we resort to gold-standard NLDs to guide the path scoring function ${\rm sc(\cdot )}$. Let $\mathcal {D}$ be question-answer pairs coupled with gold-standard NLDs, namely a binary vector $\mathbf {p}_i$, where the $j$-th value represents whether $j$-th path corresponds to a gold-standard NLD (1) or not (0). We apply the following cross-entropy loss to the path attention:
Experiments ::: Settings ::: Dataset
We aggregated the crowdsourced annotations obtained in Section SECREF3. As a preprocessing step, we converted an NLD annotation to Unsure if the derivation contains the phrase needs to be mentioned, since this indicates that annotators misunderstood our instructions. When at least one crowdworker states that a statement is Unsure, we set the answerability to Unanswerable and discard the NLD annotations. Otherwise, we employ all NLD annotations from workers as multiple reference NLDs. The statistics are shown in Table TABREF36.
Regarding $\mathcal {K}^+, \mathcal {K}^-$, we extracted 867,936 instances from the training set of WikiHop BIBREF0. We reserve 10% of these instances as a validation set to find the best model. For $\mathcal {D}$, we used Answerable questions in the training set. To create supervision of path (i.e. $\mathbf {p}_i$), we selected the path that is most similar to all NLD annotations in terms of ROUGE-L F1.
Experiments ::: Settings ::: Hyperparameters
We used 100-dimensional vectors for entity, relation, and textual relation representations. We initialized these representations with 100-dimensional GloVe embeddings BIBREF14 and fine-tuned them during training. We retain only the top 100,000 most frequent words as the model vocabulary. We used a Bi-LSTM with a 50-dimensional hidden state as the textual relation encoder, and an LSTM with a 100-dimensional hidden state as the entity encoder. We used the Adam optimizer (default parameters) BIBREF15 with a batch size of 32. We set the answerability threshold $\epsilon _k = 0.5$.
Experiments ::: Settings ::: Baseline
To check the integrity of the PRKGC model, we created a simple baseline model (shortest path model). It outputs a candidate entity with the shortest path length from a query entity on $G(S)$ as an answer. Similarly to the PRKGC model, it traverses the path to generate NLDs. It outputs Unanswerable if (i) a query entity is not reachable to any candidate entities on $G(S)$ or (ii) the shortest path length is more than 3.
Experiments ::: Results and discussion
As shown in Table TABREF37, the PRKGC models learned to reason over more than simple shortest paths. Yet, the PRKGC model does not give particularly good results, which indicates the non-triviality of RC-QED$^{\rm E}$. Although the PRKGC model does not receive supervision about human-generated NLDs, paths with the maximum score match human-generated NLDs to some extent.
Supervising path attentions (the PRKGC+NS model) is indeed effective for improving the human interpretability of generated NLDs. It also improves the generalization ability of question answering. We speculate that $L_d$ functions as a regularizer, which helps models to learn reasoning that is helpful beyond the training data. This observation is consistent with previous work where an evidence selection task is learned jointly with a main task BIBREF11, BIBREF2, BIBREF5.
As shown in Table TABREF43, as the required derivation step increases, the PRKGC+NS model suffers from predicting answer entities and generating correct NLDs. This indicates that the challenge of RC-QED$^{\rm E}$ is in how to extract relevant information from supporting documents and synthesize these multiple facts to derive an answer.
To obtain further insights, we manually analyzed generated NLDs. Table TABREF44 (a) illustrates a positive example, where the model identifies that altudoceras belongs to pseudogastrioceratinae, and that pseudogastrioceratinae is a subfamily of paragastrioceratidae. Some supporting sentences are already similar to human-generated NLDs, thus simply extracting textual relations works well for some problems.
On the other hand, a typical derivation error stems from textual relations that are not human-readable. In (b), the model states that bumped has a relationship of “,” with hands up, which is originally extracted from one of the supporting sentences It contains the UK Top 60 singles “Bumped”, “Hands Up (4 Lovers)” and .... This provides a useful clue for answer prediction, but is not suitable as a derivation. One may address this issue by incorporating, for example, a relation extractor or a paraphrasing mechanism using recent advances in conditional language models BIBREF20.
Experiments ::: Results and discussion ::: QA performance.
To check the integrity of our baseline models, we compare them with existing neural models tailored for QA under the pure WikiHop setting (i.e. evaluation with only an accuracy of predicted answers). Note that these existing models do not output derivations. We thus cannot make a direct comparison; the numbers serve only as a reference. Because WikiHop has no answerability task, we enforced the PRKGC model to always output answers. As shown in Table TABREF45, the PRKGC models achieve a comparable performance to other sophisticated neural models.
Related work ::: RC datasets with explanations
There exist only a few RC datasets annotated with explanations (Table TABREF50). The most similar work to ours is the Science QA dataset BIBREF21, BIBREF22, BIBREF23, which provides a small set of NLDs annotated for analysis purposes. By developing the scalable crowdsourcing framework, our work provides one order-of-magnitude larger NLDs which can be used as a benchmark more reliably. In addition, it provides the community with new types of challenges not included in HotpotQA.
Related work ::: Analysis of RC models and datasets
There is a large body of work on analyzing the nature of RC datasets, motivated by the question to what degree RC models understand natural language BIBREF3, BIBREF4. Several studies suggest that current RC datasets have unintended bias, which enables RC systems to rely on cheap heuristics to answer questions. For instance, Sugawara2018 show that some of these RC datasets contain a large number of “easy” questions that can be solved by cheap heuristics (e.g. by looking at a first few tokens of questions). Responding to their findings, we take a step further and explore the new task of RC that requires RC systems to give introspective explanations as well as answers. In addition, recent studies show that current RC models and NLP models are vulnerable to adversarial examples BIBREF29, BIBREF30, BIBREF31. Explicit modeling of NLDs is expected to regularize RC models, which could prevent RC models' strong dependence on unintended bias in training data (e.g. annotation artifacts) BIBREF32, BIBREF8, BIBREF2, BIBREF5, as partially confirmed in Section SECREF46.
Related work ::: Other NLP corpora annotated with explanations
There are existing NLP tasks that require models to output explanations (Table TABREF50). FEVER BIBREF25 requires a system to judge the “factness” of a claim as well as to identify justification sentences. As discussed earlier, we take a step further from justification explanations to provide new challenges for NLU.
Several datasets are annotated with introspective explanations, ranging from textual entailments BIBREF8 to argumentative texts BIBREF26, BIBREF27, BIBREF33. All these datasets offer the classification task of single sentences or sentence pairs. The uniqueness of our dataset is that it measures a machine's ability to extract relevant information from a set of documents and to build coherent logical reasoning steps.
Conclusions
Towards RC models that can perform correct reasoning, we have proposed RC-QED that requires a system to output its introspective explanations, as well as answers. Instantiating RC-QED with entity-based multi-hop QA (RC-QED$^{\rm E}$), we have created a large-scale corpus of NLDs. The developed crowdsourcing annotation framework can be used for annotating other QA datasets with derivations. Our experiments using two simple baseline models have demonstrated that RC-QED$^{\rm E}$ is a non-trivial task, and that it indeed provides a challenging task of extracting and synthesizing relevant facts from supporting documents. We will make the corpus of reasoning annotations and baseline systems publicly available at https://naoya-i.github.io/rc-qed/.
One immediate future work is to expand the annotation to non-entity-based multi-hop QA datasets such as HotpotQA BIBREF2. For modeling, we plan to incorporate a generative mechanism based on recent advances in conditional language modeling.
Example annotations
Table TABREF53 shows examples of crowdsourced annotations. | Yes |
565d668947ffa6d52dad019af79289420505889b | 565d668947ffa6d52dad019af79289420505889b_1 | Q: Did they use any crowdsourcing platform?
Text: Introduction
Reading comprehension (RC) has become a key benchmark for natural language understanding (NLU) systems and a large number of datasets are now available BIBREF0, BIBREF1, BIBREF2. However, these datasets suffer from annotation artifacts and other biases, which allow systems to “cheat”: Instead of learning to read texts, systems learn to exploit these biases and find answers via simple heuristics, such as looking for an entity with a matching semantic type BIBREF3, BIBREF4. To give another example, many RC datasets contain a large number of “easy” problems that can be solved by looking at the first few words of the question Sugawara2018. In order to provide a reliable measure of progress, an RC dataset thus needs to be robust to such simple heuristics.
Towards this goal, two important directions have been investigated. One direction is to improve the dataset itself, for example, so that it requires an RC system to perform multi-hop inferences BIBREF0 or to generate answers BIBREF1. Another direction is to request a system to output additional information about answers. Yang2018HotpotQA:Answering propose HotpotQA, an “explainable” multi-hop Question Answering (QA) task that requires a system to identify a set of sentences containing supporting evidence for the given answer. We follow the footsteps of Yang2018HotpotQA:Answering and explore an explainable multi-hop QA task.
In the community, two important types of explanations have been explored so far BIBREF5: (i) introspective explanation (how a decision is made), and (ii) justification explanation (collections of evidences to support the decision). In this sense, supporting facts in HotpotQA can be categorized as justification explanations. The advantage of using justification explanations as benchmark is that the task can be reduced to a standard classification task, which enables us to adopt standard evaluation metrics (e.g. a classification accuracy). However, this task setting does not evaluate a machine's ability to (i) extract relevant information from justification sentences and (ii) synthesize them to form coherent logical reasoning steps, which are equally important for NLU.
To address this issue, we propose RC-QED, an RC task that requires not only the answer to a question, but also an introspective explanation in the form of a natural language derivation (NLD). For example, given the question “Which record company released the song Barracuda?” and supporting documents shown in Figure FIGREF1, a system needs to give the answer “Portrait Records” and to provide the following NLD: 1.) Barracuda is on Little Queen, and 2.) Little Queen was released by Portrait Records.
The main difference between our work and HotpotQA is that they identify a set of sentences $\lbrace s_2,s_4\rbrace $, while RC-QED requires a system to generate its derivations in a correct order. This generation task enables us to measure a machine's logical reasoning ability mentioned above. Due to the subjective nature of the natural language derivation task, we evaluate the correctness of derivations generated by a system with multiple reference answers. Our contributions can be summarized as follows:
We create a large corpus consisting of 12,000 QA pairs and natural language derivations. The developed crowdsourcing annotation framework can be used for annotating other QA datasets with derivations.
Through an experiment using two baseline models, we highlight several challenges of RC-QED.
We will make the corpus of reasoning annotations and the baseline system publicly available at https://naoya-i.github.io/rc-qed/.
Task formulation: RC-QED ::: Input, output, and evaluation metrics
We formally define RC-QED as follows:
Given: (i) a question $Q$, and (ii) a set $S$ of supporting documents relevant to $Q$;
Find: (i) answerability $s \in \lbrace \textsf {Answerable},$ $\textsf {Unanswerable} \rbrace $, (ii) an answer $a$, and (iii) a sequence $R$ of derivation steps.
We evaluate each prediction with the following evaluation metrics:
Answerability: Correctness of model's decision on answerability (i.e. binary classification task) evaluated by Precision/Recall/F1.
Answer precision: Correctness of predicted answers (for Answerable predictions only). We follow the standard practice of RC community for evaluation (e.g. an accuracy in the case of multiple choice QA).
Derivation precision: Correctness of generated NLDs evaluated by ROUGE-L BIBREF6 (RG-L) and BLEU-4 (BL-4) BIBREF7. We follow the standard practice of evaluation for natural language generation BIBREF1. Derivation steps might be subjective, so we resort to multiple reference answers.
Task formulation: RC-QED ::: RC-QED$^{\rm E}$
This paper instantiates RC-QED by employing multiple choice, entity-based multi-hop QA BIBREF0 as a testbed (henceforth, RC-QED$^{\rm E}$). In entity-based multi-hop QA, machines need to combine relational facts between entities to derive an answer. For example, in Figure FIGREF1, understanding the facts about Barracuda, Little Queen, and Portrait Records stated in each article is required. This design choice restricts a problem domain, but it provides interesting challenges as discussed in Section SECREF46. In addition, such entity-based chaining is known to account for the majority of reasoning types required for multi-hop reasoning BIBREF2.
More formally, given (i) a question $Q=(r, q)$ represented by a binary relation $r$ and an entity $q$ (question entity), (ii) relevant articles $S$, and (iii) a set $C$ of candidate entities, systems are required to output (i) an answerability $s \in \lbrace \textsf {Answerable}, \textsf {Unanswerable} \rbrace $, (ii) an entity $e \in C$ (answer entity) such that $(q, r, e)$ holds, and (iii) a sequence $R$ of derivation steps as to why $e$ is believed to be an answer. We define derivation steps as a chain of $m$ relational facts that derives the answer, i.e. $(q, r_1, e_1), (e_1, r_2, e_2), ..., (e_{m-1}, r_{m-1}, e_m),$ $(e_m, r_m, e_{m+1})$. Although we restrict the form of knowledge to entity relations, we use a natural language form to represent $r_i$ rather than a closed vocabulary (see Figure FIGREF1 for an example).
Data collection for RC-QED$^{\rm E}$ ::: Crowdsourcing interface
To acquire a large-scale corpus of NLDs, we use crowdsourcing (CS). Although CS is a powerful tool for large-scale dataset creation BIBREF2, BIBREF8, quality control for complex tasks is still challenging. We thus carefully design an incentive structure for crowdworkers, following Yang2018HotpotQA:Answering.
Initially, we provide crowdworkers with an instruction with example annotations, where we emphasize that they judge the truth of statements solely based on given articles, not based on their own knowledge.
Data collection for RC-QED$^{\rm E}$ ::: Crowdsourcing interface ::: Judgement task (Figure UID13).
Given a statement and articles, workers are asked to judge whether the statement can be derived from the articles at three grades: True, Likely (i.e. Answerable), or Unsure (i.e. Unanswerable). If a worker selects Unsure, we ask workers to tell us why they are unsure from two choices (“Not stated in the article” or “Other”).
Data collection for RC-QED$^{\rm E}$ ::: Crowdsourcing interface ::: Derivation task (Figure UID14).
If a worker selects True or Likely in the judgement task, we first ask which sentences in the given articles are justification explanations for a given statement, similarly to HotpotQA BIBREF2. The “summary” text boxes (i.e. NLDs) are then initialized with these selected sentences. We give a ¢6 bonus to those workers who select True or Likely. To encourage an abstraction of selected sentences, we also introduce a gamification scheme to give a bonus to those who provide shorter NLDs. Specifically, we probabilistically give another ¢14 bonus to workers according to a score they gain. The score is always shown on top of the screen, and changes according to the length of NLDs they write in real time. To discourage noisy annotations, we also warn crowdworkers that their work would be rejected for noisy submissions. We periodically run simple filtering to exclude noisy crowdworkers (e.g. workers who give more than 50 submissions with the same answers).
We deployed the task on Amazon Mechanical Turk (AMT). To see how reasoning varies across workers, we hire 3 crowdworkers per one instance. We hire reliable crowdworkers with $\ge 5,000$ HITs experiences and an approval rate of $\ge $ 99.0%, and pay ¢20 as a reward per instance.
Our data collection pipeline is expected to be applicable to other types of QAs other than entity-based multi-hop QA without any significant extensions, because the interface is not specifically designed for entity-centric reasoning.
Data collection for RC-QED$^{\rm E}$ ::: Dataset
Our study uses WikiHop BIBREF0, as it is an entity-based multi-hop QA dataset and has been actively used. We randomly sampled 10,000 instances from 43,738 training instances and 2,000 instances from 5,129 validation instances (i.e. 36,000 annotation tasks were published on AMT). We manually converted structured WikiHop question-answer pairs (e.g. locatedIn(Macchu Picchu, Peru)) into natural language statements (Macchu Picchu is located in Peru) using a simple conversion dictionary.
We use supporting documents provided by WikiHop. WikiHop collects supporting documents by finding Wikipedia articles that bridge a question entity $e_i$ and an answer entity $e_j$, where the link between articles is given by a hyperlink.
Data collection for RC-QED$^{\rm E}$ ::: Results
Table TABREF17 shows the statistics of responses and example annotations. Table TABREF17 also shows the abstractiveness of annotated NLDs ($a$), namely the number of tokens in an NLD divided by the number of tokens in its corresponding justification sentences. This indicates that annotated NLDs are indeed summarized. See Table TABREF53 in Appendix and Supplementary Material for more results.
Data collection for RC-QED$^{\rm E}$ ::: Results ::: Quality
To evaluate the quality of annotation results, we publish another CS task on AMT. We randomly sample 300 True and Likely responses in this evaluation. Given NLDs and a statement, 3 crowdworkers are asked if the NLDs can lead to the statement at four scale levels. If the answer is 4 or 3 (“yes” or “likely”), we additionally asked whether each derivation step can be derived from each supporting document; otherwise we asked them the reasons. For a fair evaluation, we encourage crowdworkers to annotate given NLDs with a lower score by stating that we give a bonus if they found a flaw of reasoning on the CS interface.
The evaluation results shown in Table TABREF24 indicate that the annotated NLDs are of high quality (Reachability), and each NLD is properly derived from supporting documents (Derivability).
On the other hand, we found that the quality of 3-step NLDs is relatively lower than the others. Crowdworkers found that 45.3% of 294 (out of 900) 3-step NLDs have missing steps to derive a statement. Let us consider this example: for the annotated NLDs “[1] Kouvola is located in Helsinki. [2] Helsinki is in the region of Uusimaa. [3] Uusimaa borders the regions Southwest Finland, Kymenlaakso and some others.” and the statement “Kouvola is located in Kymenlaakso”, one worker pointed out the missing step “Uusimaa is in Kymenlaakso.”. We speculate that a greater number of reasoning steps makes it difficult for crowdworkers to check the correctness of derivations during the writing task.
Data collection for RC-QED$^{\rm E}$ ::: Results ::: Agreement
For agreement on the number of NLDs, we obtained a Krippendorff's $\alpha $ of 0.223, indicating a fair agreement BIBREF9.
Our manual inspection of the 10 worst disagreements revealed that the majority (7/10) come from Unsure vs. non-Unsure. It also revealed that crowdworkers who labeled non-Unsure are reliable: 6 out of 7 non-Unsure annotations can be judged as correct. This partially confirms the effectiveness of our incentive structure.
Baseline RC-QED$^{\rm E}$ model
To highlight the challenges and nature of RC-QED$^{\rm E}$, we create a simple, transparent, and interpretable baseline model.
Recent studies on knowledge graph completion (KGC) explore compositional inferences to combat the sparsity of knowledge bases BIBREF10, BIBREF11, BIBREF12. Given a query triplet $(h, r, t)$ (e.g. (Macchu Picchu, locatedIn, Peru)), a path ranking-based approach for KGC explicitly samples paths between $h$ and $t$ in a knowledge base (e.g. Macchu Picchu—locatedIn—Andes Mountain—countryOf—Peru), and constructs a feature vector of these paths. This feature vector is then used to calculate the compatibility between the query triplet and the sampled paths.
RC-QED$^{\rm E}$ can be naturally solved by path ranking-based KGC (PRKGC), where the query triplet and the sampled paths correspond to a question and derivation steps, respectively. PRKGC meets our purposes because of its glassboxness: we can trace the derivation steps of the model easily.
Baseline RC-QED$^{\rm E}$ model ::: Knowledge graph construction
Given supporting documents $S$, we build a knowledge graph. We first apply a coreference resolver to $S$ and then create a directed graph $G(S)$. Therein, each node represents named entities (NEs) in $S$, and each edge represents textual relations between NEs extracted from $S$. Figure FIGREF27 illustrates an example of $G(S)$ constructed from supporting documents in Figure FIGREF1.
Baseline RC-QED$^{\rm E}$ model ::: Path ranking-based KGC (PRKGC)
Given a question $Q=(q, r)$ and a candidate entity $c_i$, we estimate the plausibility of $(q, r, c_i)$ as follows:
where $\sigma $ is a sigmoid function, and $\mathbf {q, r, c_i}, \mathbf {\pi }(q, c_i)$ are vector representations of $q, r, c_i$ and a set $\pi (q, c_i)$ of shortest paths between $q$ and $c_i$ on $G(S)$. ${\rm MLP}(\cdot , \cdot )$ denotes a multi-layer perceptron. To encode entities into vectors $\mathbf {q, c_i}$, we use Long-Short Term Memory (LSTM) and take its last hidden state. For example, in Figure FIGREF27, $q =$ Barracuda and $c_i =$ Portrait Records yield $\pi (q, c_i) = \lbrace $Barracuda—is the most popular in their album—Little Queen—was released in May 1977 on—Portrait Records, Barracuda—was released from American band Heart—is the second album released by:-1—Little Queen—was released in May 1977 on—Portrait Records$\rbrace $.
To obtain path representations $\mathbf {\pi }(q, c_i)$, we attentively aggregate individual path representations: $\mathbf {\pi }(q, c_i) = \sum _j \alpha _j \mathbf {\pi _j}(q, c_i)$, where $\alpha _j$ is an attention for the $j$-th path. The attention values are calculated as follows: $\alpha _j = \exp ({\rm sc}(q, r, c_i, \pi _j)) / \sum _k \exp ({\rm sc}(q, r, c_i, \pi _k))$, where ${\rm sc}(q, r, c_i, \pi _j) = {\rm MLP}(\mathbf {q}, \mathbf {r}, \mathbf {c_i}, \mathbf {\pi _j})$. To obtain individual path representations $\mathbf {\pi _j}$, we follow toutanova-etal-2015-representing. We use a Bi-LSTM BIBREF13 with mean pooling over timestep in order to encourage similar paths to have similar path representations.
For the testing phase, we choose a candidate entity $c_i$ with the maximum probability $P(r|q, c_i)$ as an answer entity, and choose a path $\pi _j$ with the maximum attention value $\alpha _j$ as NLDs. To generate NLDs, we simply traverse the path from $q$ to $c_i$ and subsequently concatenate all entities and textual relations as one string. We output Unanswerable when (i) $\max _{c_i \in C} P(r|q, c_i) < \epsilon _k$ or (ii) $G(S)$ has no path between $q$ and all $c_i \in C$.
Baseline RC-QED$^{\rm E}$ model ::: Training
Let $\mathcal {K}^+$ be a set of question-answer pairs, where each instance consists of a triplet (a query entity $q_i$, a relation $r_i$, an answer entity $a_i$). Similarly, let $\mathcal {K}^-$ be a set of question-non-answer pairs. We minimize the following binary cross-entropy loss:
From the NLD point of view, this is unsupervised training. The model is expected to learn the score function ${\rm sc(\cdot )}$ to give higher scores to paths (i.e. NLD steps) that are useful for discriminating correct answers from wrong answers on its own. Highly scored NLDs might be useful for answer classification, but they are not guaranteed to be interpretable to humans.
Baseline RC-QED$^{\rm E}$ model ::: Training ::: Semi-supervising derivations
To address the above issue, we resort to gold-standard NLDs to guide the path scoring function ${\rm sc(\cdot )}$. Let $\mathcal {D}$ be question-answer pairs coupled with gold-standard NLDs, namely a binary vector $\mathbf {p}_i$, where the $j$-th value represents whether $j$-th path corresponds to a gold-standard NLD (1) or not (0). We apply the following cross-entropy loss to the path attention:
Experiments ::: Settings ::: Dataset
We aggregated the crowdsourced annotations obtained in Section SECREF3. As a preprocessing step, we converted an NLD annotation to Unsure if the derivation contains the phrase needs to be mentioned, since this indicates that annotators misunderstood our instructions. When at least one crowdworker states that a statement is Unsure, we set the answerability to Unanswerable and discard the NLD annotations. Otherwise, we employ all NLD annotations from workers as multiple reference NLDs. The statistics are shown in Table TABREF36.
Regarding $\mathcal {K}^+, \mathcal {K}^-$, we extracted 867,936 instances from the training set of WikiHop BIBREF0. We reserve 10% of these instances as a validation set to find the best model. For $\mathcal {D}$, we used Answerable questions in the training set. To create supervision of path (i.e. $\mathbf {p}_i$), we selected the path that is most similar to all NLD annotations in terms of ROUGE-L F1.
Experiments ::: Settings ::: Hyperparameters
We used 100-dimensional vectors for entity, relation, and textual relation representations. We initialized these representations with 100-dimensional GloVe embeddings BIBREF14 and fine-tuned them during training. We retain only the top 100,000 most frequent words as the model vocabulary. We used a Bi-LSTM with a 50-dimensional hidden state as the textual relation encoder, and an LSTM with a 100-dimensional hidden state as the entity encoder. We used the Adam optimizer (default parameters) BIBREF15 with a batch size of 32. We set the answerability threshold $\epsilon _k = 0.5$.
Experiments ::: Settings ::: Baseline
To check the integrity of the PRKGC model, we created a simple baseline model (shortest path model). It outputs a candidate entity with the shortest path length from a query entity on $G(S)$ as an answer. Similarly to the PRKGC model, it traverses the path to generate NLDs. It outputs Unanswerable if (i) a query entity is not reachable to any candidate entities on $G(S)$ or (ii) the shortest path length is more than 3.
Experiments ::: Results and discussion
As shown in Table TABREF37, the PRKGC models learned to reason over more than simple shortest paths. Yet, the PRKGC model does not give particularly good results, which indicates the non-triviality of RC-QED$^{\rm E}$. Although the PRKGC model does not receive supervision about human-generated NLDs, paths with the maximum score match human-generated NLDs to some extent.
Supervising path attentions (the PRKGC+NS model) is indeed effective for improving the human interpretability of generated NLDs. It also improves the generalization ability of question answering. We speculate that $L_d$ functions as a regularizer, which helps models to learn reasoning that is helpful beyond the training data. This observation is consistent with previous work where an evidence selection task is learned jointly with a main task BIBREF11, BIBREF2, BIBREF5.
As shown in Table TABREF43, as the required derivation step increases, the PRKGC+NS model suffers from predicting answer entities and generating correct NLDs. This indicates that the challenge of RC-QED$^{\rm E}$ is in how to extract relevant information from supporting documents and synthesize these multiple facts to derive an answer.
To obtain further insights, we manually analyzed generated NLDs. Table TABREF44 (a) illustrates a positive example, where the model identifies that altudoceras belongs to pseudogastrioceratinae, and that pseudogastrioceratinae is a subfamily of paragastrioceratidae. Some supporting sentences are already similar to human-generated NLDs, thus simply extracting textual relations works well for some problems.
On the other hand, a typical derivation error stems from textual relations that are not human-readable. In (b), the model states that bumped has a relationship of “,” with hands up, which is originally extracted from one of the supporting sentences It contains the UK Top 60 singles “Bumped”, “Hands Up (4 Lovers)” and .... This provides a useful clue for answer prediction, but is not suitable as a derivation. One may address this issue by incorporating, for example, a relation extractor or a paraphrasing mechanism using recent advances in conditional language models BIBREF20.
Experiments ::: Results and discussion ::: QA performance.
To check the integrity of our baseline models, we compare them with existing neural models tailored for QA under the pure WikiHop setting (i.e. evaluation with only an accuracy of predicted answers). Note that these existing models do not output derivations. We thus cannot make a direct comparison; the numbers serve only as a reference. Because WikiHop has no answerability task, we enforced the PRKGC model to always output answers. As shown in Table TABREF45, the PRKGC models achieve a comparable performance to other sophisticated neural models.
Related work ::: RC datasets with explanations
There exist only a few RC datasets annotated with explanations (Table TABREF50). The most similar work to ours is the Science QA dataset BIBREF21, BIBREF22, BIBREF23, which provides a small set of NLDs annotated for analysis purposes. By developing the scalable crowdsourcing framework, our work provides one order-of-magnitude larger NLDs which can be used as a benchmark more reliably. In addition, it provides the community with new types of challenges not included in HotpotQA.
Related work ::: Analysis of RC models and datasets
There is a large body of work on analyzing the nature of RC datasets, motivated by the question to what degree RC models understand natural language BIBREF3, BIBREF4. Several studies suggest that current RC datasets have unintended bias, which enables RC systems to rely on cheap heuristics to answer questions. For instance, Sugawara2018 show that some of these RC datasets contain a large number of “easy” questions that can be solved by cheap heuristics (e.g. by looking at a first few tokens of questions). Responding to their findings, we take a step further and explore the new task of RC that requires RC systems to give introspective explanations as well as answers. In addition, recent studies show that current RC models and NLP models are vulnerable to adversarial examples BIBREF29, BIBREF30, BIBREF31. Explicit modeling of NLDs is expected to regularize RC models, which could prevent RC models' strong dependence on unintended bias in training data (e.g. annotation artifacts) BIBREF32, BIBREF8, BIBREF2, BIBREF5, as partially confirmed in Section SECREF46.
Related work ::: Other NLP corpora annotated with explanations
There are existing NLP tasks that require models to output explanations (Table TABREF50). FEVER BIBREF25 requires a system to judge the “factness” of a claim as well as to identify justification sentences. As discussed earlier, we take a step further from justification explanations to provide new challenges for NLU.
Several datasets are annotated with introspective explanations, ranging from textual entailments BIBREF8 to argumentative texts BIBREF26, BIBREF27, BIBREF33. All these datasets offer the classification task of single sentences or sentence pairs. The uniqueness of our dataset is that it measures a machine's ability to extract relevant information from a set of documents and to build coherent logical reasoning steps.
Conclusions
Towards RC models that can perform correct reasoning, we have proposed RC-QED that requires a system to output its introspective explanations, as well as answers. Instantiating RC-QED with entity-based multi-hop QA (RC-QED$^{\rm E}$), we have created a large-scale corpus of NLDs. The developed crowdsourcing annotation framework can be used for annotating other QA datasets with derivations. Our experiments using two simple baseline models have demonstrated that RC-QED$^{\rm E}$ is a non-trivial task, and that it indeed provides a challenging task of extracting and synthesizing relevant facts from supporting documents. We will make the corpus of reasoning annotations and baseline systems publicly available at https://naoya-i.github.io/rc-qed/.
One immediate future work is to expand the annotation to non-entity-based multi-hop QA datasets such as HotpotQA BIBREF2. For modeling, we plan to incorporate a generative mechanism based on recent advances in conditional language modeling.
Example annotations
Table TABREF53 shows examples of crowdsourced annotations. | Yes |
d83304c70fe66ae72e78aa1d183e9f18b7484cd6 | d83304c70fe66ae72e78aa1d183e9f18b7484cd6_0 | Q: How was the dataset annotated?
Text: Introduction
Reading comprehension (RC) has become a key benchmark for natural language understanding (NLU) systems and a large number of datasets are now available BIBREF0, BIBREF1, BIBREF2. However, these datasets suffer from annotation artifacts and other biases, which allow systems to “cheat”: Instead of learning to read texts, systems learn to exploit these biases and find answers via simple heuristics, such as looking for an entity with a matching semantic type BIBREF3, BIBREF4. To give another example, many RC datasets contain a large number of “easy” problems that can be solved by looking at the first few words of the question Sugawara2018. In order to provide a reliable measure of progress, an RC dataset thus needs to be robust to such simple heuristics.
Towards this goal, two important directions have been investigated. One direction is to improve the dataset itself, for example, so that it requires an RC system to perform multi-hop inferences BIBREF0 or to generate answers BIBREF1. Another direction is to request a system to output additional information about answers. Yang2018HotpotQA:Answering propose HotpotQA, an “explainable” multi-hop Question Answering (QA) task that requires a system to identify a set of sentences containing supporting evidence for the given answer. We follow the footsteps of Yang2018HotpotQA:Answering and explore an explainable multi-hop QA task.
In the community, two important types of explanations have been explored so far BIBREF5: (i) introspective explanations (how a decision is made), and (ii) justification explanations (collections of evidence to support the decision). In this sense, supporting facts in HotpotQA can be categorized as justification explanations. The advantage of using justification explanations as a benchmark is that the task can be reduced to a standard classification task, which enables us to adopt standard evaluation metrics (e.g. classification accuracy). However, this task setting does not evaluate a machine's ability to (i) extract relevant information from justification sentences and (ii) synthesize it to form coherent logical reasoning steps, both of which are equally important for NLU.
To address this issue, we propose RC-QED, an RC task that requires not only the answer to a question, but also an introspective explanation in the form of a natural language derivation (NLD). For example, given the question “Which record company released the song Barracuda?” and supporting documents shown in Figure FIGREF1, a system needs to give the answer “Portrait Records” and to provide the following NLD: 1.) Barracuda is on Little Queen, and 2.) Little Queen was released by Portrait Records.
The main difference between our work and HotpotQA is that they identify a set of sentences $\lbrace s_2,s_4\rbrace $, while RC-QED requires a system to generate its derivations in a correct order. This generation task enables us to measure the logical reasoning ability mentioned above. Due to the subjective nature of the natural language derivation task, we evaluate the correctness of system-generated derivations against multiple reference answers. Our contributions can be summarized as follows:
We create a large corpus consisting of 12,000 QA pairs and natural language derivations. The developed crowdsourcing annotation framework can be used for annotating other QA datasets with derivations.
Through an experiment using two baseline models, we highlight several challenges of RC-QED.
We will make the corpus of reasoning annotations and the baseline system publicly available at https://naoya-i.github.io/rc-qed/.
Task formulation: RC-QED ::: Input, output, and evaluation metrics
We formally define RC-QED as follows:
Given: (i) a question $Q$, and (ii) a set $S$ of supporting documents relevant to $Q$;
Find: (i) answerability $s \in \lbrace \textsf {Answerable},$ $\textsf {Unanswerable} \rbrace $, (ii) an answer $a$, and (iii) a sequence $R$ of derivation steps.
We evaluate each prediction with the following evaluation metrics:
Answerability: Correctness of model's decision on answerability (i.e. binary classification task) evaluated by Precision/Recall/F1.
Answer precision: Correctness of predicted answers (for Answerable predictions only). We follow the standard practice of RC community for evaluation (e.g. an accuracy in the case of multiple choice QA).
Derivation precision: Correctness of generated NLDs evaluated by ROUGE-L BIBREF6 (RG-L) and BLEU-4 (BL-4) BIBREF7. We follow the standard practice of evaluation for natural language generation BIBREF1. Derivation steps might be subjective, so we resort to multiple reference answers.
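To make the derivation metric concrete, the following is a minimal sketch of LCS-based ROUGE-L scoring against multiple reference NLDs; taking the maximum over references is one plausible aggregation, and the official ROUGE/BLEU toolkits would normally be used in place of this simplified implementation.

```python
def lcs_len(a, b):
    # Longest common subsequence length via dynamic programming.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if a[i - 1] == b[j - 1] \
                else max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

def rouge_l_f1(candidate, reference):
    # Token-level ROUGE-L F-measure between one candidate and one reference NLD.
    c, r = candidate.split(), reference.split()
    lcs = lcs_len(c, r)
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(c), lcs / len(r)
    return 2 * precision * recall / (precision + recall)

def derivation_score(candidate, references):
    # With multiple reference derivations, keep the best-matching one.
    return max(rouge_l_f1(candidate, ref) for ref in references)

print(derivation_score(
    "Barracuda is on Little Queen . Little Queen was released by Portrait Records .",
    ["Barracuda appears on Little Queen . Portrait Records released Little Queen ."]))
```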
Task formulation: RC-QED ::: RC-QED@!START@$^{\rm E}$@!END@
This paper instantiates RC-QED by employing multiple choice, entity-based multi-hop QA BIBREF0 as a testbed (henceforth, RC-QED$^{\rm E}$). In entity-based multi-hop QA, machines need to combine relational facts between entities to derive an answer. For example, in Figure FIGREF1, understanding the facts about Barracuda, Little Queen, and Portrait Records stated in each article is required. This design choice restricts a problem domain, but it provides interesting challenges as discussed in Section SECREF46. In addition, such entity-based chaining is known to account for the majority of reasoning types required for multi-hop reasoning BIBREF2.
More formally, given (i) a question $Q=(r, q)$ represented by a binary relation $r$ and an entity $q$ (question entity), (ii) relevant articles $S$, and (iii) a set $C$ of candidate entities, systems are required to output (i) an answerability $s \in \lbrace \textsf {Answerable}, \textsf {Unanswerable} \rbrace $, (ii) an entity $e \in C$ (answer entity) such that $(q, r, e)$ holds, and (iii) a sequence $R$ of derivation steps as to why $e$ is believed to be an answer. We define the derivation steps as an $m$-chain of relational facts that derives the answer, i.e. $(q, r_1, e_1), (e_1, r_2, e_2), \ldots , (e_{m-2}, r_{m-1}, e_{m-1}), (e_{m-1}, r_m, e_m)$, where $e_m$ is the answer entity. Although we restrict the form of knowledge to entity relations, we use a natural language form to represent $r_i$ rather than a closed vocabulary (see Figure FIGREF1 for an example).
Data collection for RC-QED@!START@$^{\rm E}$@!END@ ::: Crowdsourcing interface
To acquire a large-scale corpus of NLDs, we use crowdsourcing (CS). Although CS is a powerful tool for large-scale dataset creation BIBREF2, BIBREF8, quality control for complex tasks is still challenging. We thus carefully design an incentive structure for crowdworkers, following Yang2018HotpotQA:Answering.
Initially, we provide crowdworkers with instructions and example annotations, where we emphasize that they should judge the truth of statements solely based on the given articles, not on their own knowledge.
Data collection for RC-QED@!START@$^{\rm E}$@!END@ ::: Crowdsourcing interface ::: Judgement task (Figure @!START@UID13@!END@).
Given a statement and articles, workers are asked to judge whether the statement can be derived from the articles at three grades: True, Likely (i.e. Answerable), or Unsure (i.e. Unanswerable). If a worker selects Unsure, we ask workers to tell us why they are unsure from two choices (“Not stated in the article” or “Other”).
Data collection for RC-QED@!START@$^{\rm E}$@!END@ ::: Crowdsourcing interface ::: Derivation task (Figure @!START@UID14@!END@).
If a worker selects True or Likely in the judgement task, we first ask which sentences in the given articles are justification explanations for a given statement, similarly to HotpotQA BIBREF2. The “summary” text boxes (i.e. NLDs) are then initialized with these selected sentences. We give a ¢6 bonus to those workers who select True or Likely. To encourage an abstraction of selected sentences, we also introduce a gamification scheme to give a bonus to those who provide shorter NLDs. Specifically, we probabilistically give another ¢14 bonus to workers according to a score they gain. The score is always shown on top of the screen, and changes according to the length of NLDs they write in real time. To discourage noisy annotations, we also warn crowdworkers that their work would be rejected for noisy submissions. We periodically run simple filtering to exclude noisy crowdworkers (e.g. workers who give more than 50 submissions with the same answers).
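As an illustration of the periodic filtering mentioned above, a minimal sketch is given below; the threshold of 50 identical answers follows the text, while the input format and function name are assumptions.

```python
from collections import Counter, defaultdict

def flag_noisy_workers(submissions, max_identical=50):
    # submissions: iterable of (worker_id, answer) pairs.
    # Flag workers who give more than `max_identical` submissions
    # with exactly the same answer.
    per_worker = defaultdict(Counter)
    for worker_id, answer in submissions:
        per_worker[worker_id][answer] += 1
    return {w for w, counts in per_worker.items()
            if counts.most_common(1)[0][1] > max_identical}

subs = [("w1", "Unsure")] * 60 + [("w2", "True"), ("w2", "Likely")]
print(flag_noisy_workers(subs))  # {'w1'}
```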
We deployed the task on Amazon Mechanical Turk (AMT). To see how reasoning varies across workers, we hire 3 crowdworkers per instance. We hire reliable crowdworkers with $\ge 5,000$ HITs of experience and an approval rate of $\ge $ 99.0%, and pay ¢20 as a reward per instance.
Our data collection pipeline is expected to be applicable to types of QA other than entity-based multi-hop QA without any significant extension, because the interface is not specifically designed for entity-centric reasoning.
Data collection for RC-QED@!START@$^{\rm E}$@!END@ ::: Dataset
Our study uses WikiHop BIBREF0, as it is an entity-based multi-hop QA dataset and has been actively used. We randomly sampled 10,000 instances from 43,738 training instances and 2,000 instances from 5,129 validation instances (i.e. 36,000 annotation tasks were published on AMT). We manually converted structured WikiHop question-answer pairs (e.g. locatedIn(Macchu Picchu, Peru)) into natural language statements (Macchu Picchu is located in Peru) using a simple conversion dictionary.
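A minimal sketch of such a template-based conversion is shown below; the relation names and templates are illustrative stand-ins, not the authors' actual dictionary.

```python
# Hypothetical templates mapping WikiHop-style relation names to statements.
TEMPLATES = {
    "located_in": "{subj} is located in {obj}",
    "record_label": "{subj} was released by {obj}",
    "country_of_citizenship": "{subj} is a citizen of {obj}",
}

def to_statement(relation, subj, obj):
    # Fall back to a generic pattern for relations missing from the dictionary.
    template = TEMPLATES.get(relation, "{subj} has relation '" + relation + "' with {obj}")
    return template.format(subj=subj, obj=obj)

print(to_statement("located_in", "Macchu Picchu", "Peru"))
```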
We use the supporting documents provided by WikiHop. WikiHop collects supporting documents by finding Wikipedia articles that bridge a question entity $e_i$ and an answer entity $e_j$, where the link between articles is given by a hyperlink.
Data collection for RC-QED@!START@$^{\rm E}$@!END@ ::: Results
Table TABREF17 shows the statistics of responses and example annotations. Table TABREF17 also shows the abstractiveness of annotated NLDs ($a$), namely the number of tokens in an NLD divided by the number of tokens in its corresponding justification sentences. This indicates that annotated NLDs are indeed summarized. See Table TABREF53 in Appendix and Supplementary Material for more results.
Data collection for RC-QED@!START@$^{\rm E}$@!END@ ::: Results ::: Quality
To evaluate the quality of the annotation results, we publish another CS task on AMT. We randomly sample 300 True and Likely responses for this evaluation. Given NLDs and a statement, 3 crowdworkers are asked whether the NLDs lead to the statement, on a four-point scale. If the answer is 4 or 3 (“yes” or “likely”), we additionally ask whether each derivation step can be derived from each supporting document; otherwise we ask for the reasons. For a fair evaluation, we encourage crowdworkers to give lower scores by stating on the CS interface that we give a bonus if they find a flaw in the reasoning.
The evaluation results shown in Table TABREF24 indicate that the annotated NLDs are of high quality (Reachability), and each NLD is properly derived from supporting documents (Derivability).
On the other hand, we found that the quality of 3-step NLDs is relatively lower than that of the others. Crowdworkers found that 45.3% of the 294 (out of 900) 3-step NLDs have missing steps needed to derive a statement. Consider this example: for the annotated NLDs “[1] Kouvola is located in Helsinki. [2] Helsinki is in the region of Uusimaa. [3] Uusimaa borders the regions Southwest Finland, Kymenlaakso and some others.” and the statement “Kouvola is located in Kymenlaakso”, one worker pointed out the missing step “Uusimaa is in Kymenlaakso.” We speculate that a greater number of reasoning steps makes it difficult for crowdworkers to check the correctness of derivations during the writing task.
Data collection for RC-QED@!START@$^{\rm E}$@!END@ ::: Results ::: Agreement
For agreement on the number of NLDs, we obtained a Krippendorff's $\alpha $ of 0.223, indicating a fair agreement BIBREF9.
Our manual inspection of the 10 worst disagreements revealed that the majority (7/10) come from Unsure vs. non-Unsure. It also revealed that crowdworkers who labeled non-Unsure are reliable—6 out of 7 non-Unsure annotations can be judged as correct. This partially confirms the effectiveness of our incentive structure.
Baseline RC-QED@!START@$^{\rm E}$@!END@ model
To highlight the challenges and nature of RC-QED$^{\rm E}$, we create a simple, transparent, and interpretable baseline model.
Recent studies on knowledge graph completion (KGC) explore compositional inference to combat the sparsity of knowledge bases BIBREF10, BIBREF11, BIBREF12. Given a query triplet $(h, r, t)$ (e.g. (Macchu Picchu, locatedIn, Peru)), a path ranking-based approach to KGC explicitly samples paths between $h$ and $t$ in a knowledge base (e.g. Macchu Picchu—locatedIn—Andes Mountain—countryOf—Peru) and constructs a feature vector from these paths. This feature vector is then used to calculate the compatibility between the query triplet and the sampled paths.
RC-QED$^{\rm E}$ can be naturally solved by path ranking-based KGC (PRKGC), where the query triplet and the sampled paths correspond to a question and derivation steps, respectively. PRKGC meets our purposes because of its glassboxness: we can trace the derivation steps of the model easily.
Baseline RC-QED@!START@$^{\rm E}$@!END@ model ::: Knowledge graph construction
Given supporting documents $S$, we build a knowledge graph. We first apply a coreference resolver to $S$ and then create a directed graph $G(S)$. Therein, each node represents named entities (NEs) in $S$, and each edge represents textual relations between NEs extracted from $S$. Figure FIGREF27 illustrates an example of $G(S)$ constructed from supporting documents in Figure FIGREF1.
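A rough sketch of this construction is given below using networkx; the extraction heuristic (every ordered pair of named entities in a sentence, with the intervening tokens as the textual relation) and the input format are simplifying assumptions, and the actual pipeline additionally runs a coreference resolver beforehand.

```python
import networkx as nx

def build_graph(sentences):
    # sentences: list of (tokens, [(entity_text, start, end), ...]) pairs,
    # ideally after coreference resolution.
    graph = nx.MultiDiGraph()
    for tokens, entities in sentences:
        for i, (e1, _, end1) in enumerate(entities):
            for e2, start2, _ in entities[i + 1:]:
                # Tokens between the two mentions act as the textual relation.
                relation = " ".join(tokens[end1:start2])
                graph.add_edge(e1, e2, relation=relation)
                graph.add_edge(e2, e1, relation=relation + ":-1")  # inverse edge
    return graph

sents = [("Barracuda is the most popular in their album Little Queen".split(),
          [("Barracuda", 0, 1), ("Little Queen", 8, 10)])]
print(list(build_graph(sents).edges(data=True)))
```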
Baseline RC-QED@!START@$^{\rm E}$@!END@ model ::: Path ranking-based KGC (PRKGC)
Given a question $Q=(q, r)$ and a candidate entity $c_i$, we estimate the plausibility of $(q, r, c_i)$ as follows:
$P(r|q, c_i) = \sigma \big ({\rm MLP}(\mathbf {q}, \mathbf {r}, \mathbf {c_i}, \mathbf {\pi }(q, c_i))\big ),$
where $\sigma $ is a sigmoid function, and $\mathbf {q, r, c_i}, \mathbf {\pi }(q, c_i)$ are vector representations of $q, r, c_i$ and of the set $\pi (q, c_i)$ of shortest paths between $q$ and $c_i$ on $G(S)$. ${\rm MLP}(\cdot , \cdot )$ denotes a multi-layer perceptron. To encode entities into vectors $\mathbf {q, c_i}$, we use a Long Short-Term Memory network (LSTM) and take its last hidden state. For example, in Figure FIGREF27, $q =$ Barracuda and $c_i =$ Portrait Records yield $\pi (q, c_i) = \lbrace $Barracuda—is the most popular in their album—Little Queen—was released in May 1977 on—Portrait Records, Barracuda—was released from American band Heart—is the second album released by:-1—Little Queen—was released in May 1977 on—Portrait Records$\rbrace $.
To obtain path representations $\mathbf {\pi }(q, c_i)$, we attentively aggregate individual path representations: $\mathbf {\pi }(q, c_i) = \sum _j \alpha _j \mathbf {\pi _j}(q, c_i)$, where $\alpha _j$ is the attention weight for the $j$-th path. The attention values are calculated as follows: $\alpha _j = \exp ({\rm sc}(q, r, c_i, \pi _j)) / \sum _k \exp ({\rm sc}(q, r, c_i, \pi _k))$, where ${\rm sc}(q, r, c_i, \pi _j) = {\rm MLP}(\mathbf {q}, \mathbf {r}, \mathbf {c_i}, \mathbf {\pi _j})$. To obtain the individual path representations $\mathbf {\pi _j}$, we follow toutanova-etal-2015-representing. We use a Bi-LSTM BIBREF13 with mean pooling over timesteps in order to encourage similar paths to have similar representations.
For the testing phase, we choose a candidate entity $c_i$ with the maximum probability $P(r|q, c_i)$ as an answer entity, and choose a path $\pi _j$ with the maximum attention value $\alpha _j$ as NLDs. To generate NLDs, we simply traverse the path from $q$ to $c_i$ and subsequently concatenate all entities and textual relations as one string. We output Unanswerable when (i) $\max _{c_i \in C} P(r|q, c_i) < \epsilon _k$ or (ii) $G(S)$ has no path between $q$ and all $c_i \in C$.
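The decoding rule just described can be sketched as follows; `prob(c)` and `path_attention(c)` are stand-ins for the trained model's $P(r|q, c_i)$ and its per-path attention (assumed here to return (weight, list of entity/relation strings) pairs), and the graph is assumed to come from a construction like the sketch above.

```python
import networkx as nx

def decode(graph, q_entity, candidates, prob, path_attention, threshold=0.5):
    # Unanswerable if no candidate is reachable from the question entity.
    if q_entity not in graph:
        return "Unanswerable", None, None
    reachable = [c for c in candidates
                 if c in graph and nx.has_path(graph, q_entity, c)]
    if not reachable:
        return "Unanswerable", None, None
    best = max(reachable, key=prob)
    if prob(best) < threshold:
        return "Unanswerable", None, None
    # Verbalize the highest-attention path by concatenating entities and
    # textual relations along it.
    _, path_elements = max(path_attention(best), key=lambda x: x[0])
    return "Answerable", best, " ".join(path_elements)
```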
Baseline RC-QED@!START@$^{\rm E}$@!END@ model ::: Training
Let $\mathcal {K}^+$ be a set of question-answer pairs, where each instance consists of a triplet (a query entity $q_i$, a relation $r_i$, an answer entity $a_i$). Similarly, let $\mathcal {K}^-$ be a set of question-non-answer pairs. We minimize the following binary cross-entropy loss:
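In a standard form consistent with the definitions above (reconstructed here, since the displayed equation is missing; the paper's exact normalization or sampling of $\mathcal {K}^-$ may differ), the loss is $L_a = -\sum _{(q_i, r_i, a_i) \in \mathcal {K}^+} \log P(r_i|q_i, a_i) - \sum _{(q_j, r_j, a_j) \in \mathcal {K}^-} \log \big (1 - P(r_j|q_j, a_j)\big )$.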
From the NLD point of view, this is unsupervised training. The model is expected to learn, on its own, a score function ${\rm sc(\cdot )}$ that gives higher scores to paths (i.e. NLD steps) useful for discriminating correct answers from wrong ones. Highly scored NLDs might be useful for answer classification, but they are not guaranteed to be interpretable to humans.
Baseline RC-QED@!START@$^{\rm E}$@!END@ model ::: Training ::: Semi-supervising derivations
To address the above issue, we resort to gold-standard NLDs to guide the path scoring function ${\rm sc(\cdot )}$. Let $\mathcal {D}$ be question-answer pairs coupled with gold-standard NLDs, namely a binary vector $\mathbf {p}_i$, where the $j$-th value represents whether $j$-th path corresponds to a gold-standard NLD (1) or not (0). We apply the following cross-entropy loss to the path attention:
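One plausible instantiation, reconstructed from the description of $\mathbf {p}_i$ and the path attention $\alpha $ (the exact form of the missing display equation may differ), is $L_d = -\sum _{i \in \mathcal {D}} \sum _j \hat{p}_{ij} \log \alpha _{ij}$, where $\hat{\mathbf {p}}_i$ is $\mathbf {p}_i$ normalized to sum to one.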
Experiments ::: Settings ::: Dataset
We aggregated the crowdsourced annotations obtained in Section SECREF3. As preprocessing, we converted an NLD annotation to Unsure if the derivation contains the phrase needs to be mentioned, because such annotators misunderstood our instructions. When at least one crowdworker states that a statement is Unsure, we set the answerability to Unanswerable and discard the NLD annotations. Otherwise, we employ all NLD annotations from workers as multiple reference NLDs. The statistics are shown in Table TABREF36.
Regarding $\mathcal {K}^+, \mathcal {K}^-$, we extracted 867,936 instances from the training set of WikiHop BIBREF0. We reserve 10% of these instances as a validation set to find the best model. For $\mathcal {D}$, we used Answerable questions in the training set. To create supervision of path (i.e. $\mathbf {p}_i$), we selected the path that is most similar to all NLD annotations in terms of ROUGE-L F1.
Experiments ::: Settings ::: Hyperparameters
We used 100-dimensional vectors for entities, relations, and textual relation representations. We initialize these representations with 100-dimensional Glove Embeddings BIBREF14 and fine-tuned them during training. We retain only top-100,000 frequent words as a model vocabulary. We used Bi-LSTM with 50 dimensional hidden state as a textual relation encoder, and an LSTM with 100-dimensional hidden state as an entity encoder. We used the Adam optimizer (default parameters) BIBREF15 with a batch size of 32. We set the answerability threshold $\epsilon _k = 0.5$.
Experiments ::: Settings ::: Baseline
To check the integrity of the PRKGC model, we created a simple baseline model (shortest path model). It outputs a candidate entity with the shortest path length from a query entity on $G(S)$ as an answer. Similarly to the PRKGC model, it traverses the path to generate NLDs. It outputs Unanswerable if (i) a query entity is not reachable to any candidate entities on $G(S)$ or (ii) the shortest path length is more than 3.
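A minimal sketch of this baseline on a networkx graph is shown below, assuming the edge-attribute layout from the earlier graph-construction sketch; the 3-hop cutoff follows the text.

```python
import networkx as nx

def shortest_path_baseline(graph, q_entity, candidates, max_hops=3):
    best_candidate, best_path = None, None
    for c in candidates:
        if q_entity not in graph or c not in graph:
            continue
        try:
            path = nx.shortest_path(graph, q_entity, c)
        except nx.NetworkXNoPath:
            continue
        if best_path is None or len(path) < len(best_path):
            best_candidate, best_path = c, path
    if best_path is None or len(best_path) - 1 > max_hops:
        return "Unanswerable", None, None
    # Verbalize the path: interleave entities with the textual relations stored
    # on the edges (taking the first parallel edge for simplicity).
    parts = [best_path[0]]
    for u, v in zip(best_path, best_path[1:]):
        parts += [list(graph.get_edge_data(u, v).values())[0].get("relation", ""), v]
    return "Answerable", best_candidate, " ".join(parts)
```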
Experiments ::: Results and discussion
As shown in Table TABREF37, the PRKGC models learned to reason over more than simple shortest paths. Yet, the PRKGC model does not give particularly good results, which indicates the non-triviality of RC-QED$^{\rm E}$. Although the PRKGC model does not receive supervision from human-generated NLDs, the paths with the maximum score match human-generated NLDs to some extent.
Supervising path attentions (the PRKGC+NS model) is indeed effective for improving the human interpretability of generated NLDs. It also improves the generalization ability of question answering. We speculate that $L_d$ functions as a regularizer, which helps models learn reasoning that is helpful beyond the training data. This observation is consistent with previous work where an evidence selection task is learned jointly with a main task BIBREF11, BIBREF2, BIBREF5.
As shown in Table TABREF43, as the required derivation step increases, the PRKGC+NS model suffers from predicting answer entities and generating correct NLDs. This indicates that the challenge of RC-QED$^{\rm E}$ is in how to extract relevant information from supporting documents and synthesize these multiple facts to derive an answer.
To obtain further insights, we manually analyzed generated NLDs. Table TABREF44 (a) illustrates a positive example, where the model identifies that altudoceras belongs to pseudogastrioceratinae, and that pseudogastrioceratinae is a subfamily of paragastrioceratidae. Some supporting sentences are already similar to human-generated NLDs, thus simply extracting textual relations works well for some problems.
On the other hand, a typical derivation error stems from non-human-readable textual relations. In (b), the model states that bumped has the relationship “,” with hands up, which is originally extracted from one of the supporting sentences It contains the UK Top 60 singles “Bumped”, “Hands Up (4 Lovers)” and .... This provides a useful clue for answer prediction, but it is not suitable as a derivation. One may address this issue by incorporating, for example, a relation extractor or a paraphrasing mechanism based on recent advances in conditional language models BIBREF20.
Experiments ::: Results and discussion ::: QA performance.
To check the integrity of our baseline models, we compare them with existing neural models tailored for QA under the pure WikiHop setting (i.e. evaluation with only the accuracy of predicted answers). Note that these existing models do not output derivations, so a direct comparison is not possible and the numbers serve only as a reference. Because WikiHop has no answerability task, we forced the PRKGC model to always output answers. As shown in Table TABREF45, the PRKGC models achieve performance comparable to other sophisticated neural models.
Related work ::: RC datasets with explanations
Few RC datasets are annotated with explanations (Table TABREF50). The work most similar to ours is the Science QA dataset BIBREF21, BIBREF22, BIBREF23, which provides a small set of NLDs annotated for analysis purposes. By developing a scalable crowdsourcing framework, our work provides an order of magnitude more NLDs, which can be used as a benchmark more reliably. In addition, it provides the community with new types of challenges not included in HotpotQA.
Related work ::: Analysis of RC models and datasets
There is a large body of work on analyzing the nature of RC datasets, motivated by the question of to what degree RC models understand natural language BIBREF3, BIBREF4. Several studies suggest that current RC datasets have unintended biases, which enable RC systems to rely on cheap heuristics to answer questions. For instance, Sugawara2018 show that some of these RC datasets contain a large number of “easy” questions that can be solved by cheap heuristics (e.g. by looking at the first few tokens of a question). Responding to their findings, we take a step further and explore a new RC task that requires RC systems to give introspective explanations as well as answers. In addition, recent studies show that current RC models and NLP models are vulnerable to adversarial examples BIBREF29, BIBREF30, BIBREF31. Explicit modeling of NLDs is expected to regularize RC models, which could prevent RC models' strong dependence on unintended bias in training data (e.g. annotation artifacts) BIBREF32, BIBREF8, BIBREF2, BIBREF5, as partially confirmed in Section SECREF46.
Related work ::: Other NLP corpora annotated with explanations
There are existing NLP tasks that require models to output explanations (Table TABREF50). FEVER BIBREF25 requires a system to judge the “factness” of a claim as well as to identify justification sentences. As discussed earlier, we take a step further from justification explanations to provide new challenges for NLU.
Several datasets are annotated with introspective explanations, ranging from textual entailments BIBREF8 to argumentative texts BIBREF26, BIBREF27, BIBREF33. All these datasets offer the classification task of single sentences or sentence pairs. The uniqueness of our dataset is that it measures a machine's ability to extract relevant information from a set of documents and to build coherent logical reasoning steps.
Conclusions
Towards RC models that can perform correct reasoning, we have proposed RC-QED that requires a system to output its introspective explanations, as well as answers. Instantiating RC-QED with entity-based multi-hop QA (RC-QED$^{\rm E}$), we have created a large-scale corpus of NLDs. The developed crowdsourcing annotation framework can be used for annotating other QA datasets with derivations. Our experiments using two simple baseline models have demonstrated that RC-QED$^{\rm E}$ is a non-trivial task, and that it indeed provides a challenging task of extracting and synthesizing relevant facts from supporting documents. We will make the corpus of reasoning annotations and baseline systems publicly available at https://naoya-i.github.io/rc-qed/.
One immediate future work is to expand the annotation to non-entity-based multi-hop QA datasets such as HotpotQA BIBREF2. For modeling, we plan to incorporate a generative mechanism based on recent advances in conditional language modeling.
Example annotations
Table TABREF53 shows examples of crowdsourced annotations. | True, Likely (i.e. Answerable), or Unsure (i.e. Unanswerable), why they are unsure from two choices (“Not stated in the article” or “Other”), The “summary” text boxes |
e90ac9ee085dc2a9b6fe132245302bbce5f3f5ab | e90ac9ee085dc2a9b6fe132245302bbce5f3f5ab_0 | Q: What is the source of the proposed dataset?
Text: Introduction
Reading comprehension (RC) has become a key benchmark for natural language understanding (NLU) systems and a large number of datasets are now available BIBREF0, BIBREF1, BIBREF2. However, these datasets suffer from annotation artifacts and other biases, which allow systems to “cheat”: Instead of learning to read texts, systems learn to exploit these biases and find answers via simple heuristics, such as looking for an entity with a matching semantic type BIBREF3, BIBREF4. To give another example, many RC datasets contain a large number of “easy” problems that can be solved by looking at the first few words of the question Sugawara2018. In order to provide a reliable measure of progress, an RC dataset thus needs to be robust to such simple heuristics.
Towards this goal, two important directions have been investigated. One direction is to improve the dataset itself, for example, so that it requires an RC system to perform multi-hop inferences BIBREF0 or to generate answers BIBREF1. Another direction is to request a system to output additional information about answers. Yang2018HotpotQA:Answering propose HotpotQA, an “explainable” multi-hop Question Answering (QA) task that requires a system to identify a set of sentences containing supporting evidence for the given answer. We follow the footsteps of Yang2018HotpotQA:Answering and explore an explainable multi-hop QA task.
In the community, two important types of explanations have been explored so far BIBREF5: (i) introspective explanations (how a decision is made), and (ii) justification explanations (collections of evidence to support the decision). In this sense, supporting facts in HotpotQA can be categorized as justification explanations. The advantage of using justification explanations as a benchmark is that the task can be reduced to a standard classification task, which enables us to adopt standard evaluation metrics (e.g. classification accuracy). However, this task setting does not evaluate a machine's ability to (i) extract relevant information from justification sentences and (ii) synthesize it to form coherent logical reasoning steps, both of which are equally important for NLU.
To address this issue, we propose RC-QED, an RC task that requires not only the answer to a question, but also an introspective explanation in the form of a natural language derivation (NLD). For example, given the question “Which record company released the song Barracuda?” and supporting documents shown in Figure FIGREF1, a system needs to give the answer “Portrait Records” and to provide the following NLD: 1.) Barracuda is on Little Queen, and 2.) Little Queen was released by Portrait Records.
The main difference between our work and HotpotQA is that they identify a set of sentences $\lbrace s_2,s_4\rbrace $, while RC-QED requires a system to generate its derivations in a correct order. This generation task enables us to measure the logical reasoning ability mentioned above. Due to the subjective nature of the natural language derivation task, we evaluate the correctness of system-generated derivations against multiple reference answers. Our contributions can be summarized as follows:
We create a large corpus consisting of 12,000 QA pairs and natural language derivations. The developed crowdsourcing annotation framework can be used for annotating other QA datasets with derivations.
Through an experiment using two baseline models, we highlight several challenges of RC-QED.
We will make the corpus of reasoning annotations and the baseline system publicly available at https://naoya-i.github.io/rc-qed/.
Task formulation: RC-QED ::: Input, output, and evaluation metrics
We formally define RC-QED as follows:
Given: (i) a question $Q$, and (ii) a set $S$ of supporting documents relevant to $Q$;
Find: (i) answerability $s \in \lbrace \textsf {Answerable},$ $\textsf {Unanswerable} \rbrace $, (ii) an answer $a$, and (iii) a sequence $R$ of derivation steps.
We evaluate each prediction with the following evaluation metrics:
Answerability: Correctness of model's decision on answerability (i.e. binary classification task) evaluated by Precision/Recall/F1.
Answer precision: Correctness of predicted answers (for Answerable predictions only). We follow the standard practice of RC community for evaluation (e.g. an accuracy in the case of multiple choice QA).
Derivation precision: Correctness of generated NLDs evaluated by ROUGE-L BIBREF6 (RG-L) and BLEU-4 (BL-4) BIBREF7. We follow the standard practice of evaluation for natural language generation BIBREF1. Derivation steps might be subjective, so we resort to multiple reference answers.
Task formulation: RC-QED ::: RC-QED@!START@$^{\rm E}$@!END@
This paper instantiates RC-QED by employing multiple choice, entity-based multi-hop QA BIBREF0 as a testbed (henceforth, RC-QED$^{\rm E}$). In entity-based multi-hop QA, machines need to combine relational facts between entities to derive an answer. For example, in Figure FIGREF1, understanding the facts about Barracuda, Little Queen, and Portrait Records stated in each article is required. This design choice restricts a problem domain, but it provides interesting challenges as discussed in Section SECREF46. In addition, such entity-based chaining is known to account for the majority of reasoning types required for multi-hop reasoning BIBREF2.
More formally, given (i) a question $Q=(r, q)$ represented by a binary relation $r$ and an entity $q$ (question entity), (ii) relevant articles $S$, and (iii) a set $C$ of candidate entities, systems are required to output (i) an answerability $s \in \lbrace \textsf {Answerable}, \textsf {Unanswerable} \rbrace $, (ii) an entity $e \in C$ (answer entity) such that $(q, r, e)$ holds, and (iii) a sequence $R$ of derivation steps as to why $e$ is believed to be an answer. We define the derivation steps as an $m$-chain of relational facts that derives the answer, i.e. $(q, r_1, e_1), (e_1, r_2, e_2), \ldots , (e_{m-2}, r_{m-1}, e_{m-1}), (e_{m-1}, r_m, e_m)$, where $e_m$ is the answer entity. Although we restrict the form of knowledge to entity relations, we use a natural language form to represent $r_i$ rather than a closed vocabulary (see Figure FIGREF1 for an example).
Data collection for RC-QED@!START@$^{\rm E}$@!END@ ::: Crowdsourcing interface
To acquire a large-scale corpus of NLDs, we use crowdsourcing (CS). Although CS is a powerful tool for large-scale dataset creation BIBREF2, BIBREF8, quality control for complex tasks is still challenging. We thus carefully design an incentive structure for crowdworkers, following Yang2018HotpotQA:Answering.
Initially, we provide crowdworkers with instructions and example annotations, where we emphasize that they should judge the truth of statements solely based on the given articles, not on their own knowledge.
Data collection for RC-QED@!START@$^{\rm E}$@!END@ ::: Crowdsourcing interface ::: Judgement task (Figure @!START@UID13@!END@).
Given a statement and articles, workers are asked to judge whether the statement can be derived from the articles at three grades: True, Likely (i.e. Answerable), or Unsure (i.e. Unanswerable). If a worker selects Unsure, we ask workers to tell us why they are unsure from two choices (“Not stated in the article” or “Other”).
Data collection for RC-QED@!START@$^{\rm E}$@!END@ ::: Crowdsourcing interface ::: Derivation task (Figure @!START@UID14@!END@).
If a worker selects True or Likely in the judgement task, we first ask which sentences in the given articles are justification explanations for a given statement, similarly to HotpotQA BIBREF2. The “summary” text boxes (i.e. NLDs) are then initialized with these selected sentences. We give a ¢6 bonus to those workers who select True or Likely. To encourage an abstraction of selected sentences, we also introduce a gamification scheme to give a bonus to those who provide shorter NLDs. Specifically, we probabilistically give another ¢14 bonus to workers according to a score they gain. The score is always shown on top of the screen, and changes according to the length of NLDs they write in real time. To discourage noisy annotations, we also warn crowdworkers that their work would be rejected for noisy submissions. We periodically run simple filtering to exclude noisy crowdworkers (e.g. workers who give more than 50 submissions with the same answers).
We deployed the task on Amazon Mechanical Turk (AMT). To see how reasoning varies across workers, we hire 3 crowdworkers per instance. We hire reliable crowdworkers with $\ge 5,000$ HITs of experience and an approval rate of $\ge $ 99.0%, and pay ¢20 as a reward per instance.
Our data collection pipeline is expected to be applicable to types of QA other than entity-based multi-hop QA without any significant extension, because the interface is not specifically designed for entity-centric reasoning.
Data collection for RC-QED@!START@$^{\rm E}$@!END@ ::: Dataset
Our study uses WikiHop BIBREF0, as it is an entity-based multi-hop QA dataset and has been actively used. We randomly sampled 10,000 instances from 43,738 training instances and 2,000 instances from 5,129 validation instances (i.e. 36,000 annotation tasks were published on AMT). We manually converted structured WikiHop question-answer pairs (e.g. locatedIn(Macchu Picchu, Peru)) into natural language statements (Macchu Picchu is located in Peru) using a simple conversion dictionary.
We use the supporting documents provided by WikiHop. WikiHop collects supporting documents by finding Wikipedia articles that bridge a question entity $e_i$ and an answer entity $e_j$, where the link between articles is given by a hyperlink.
Data collection for RC-QED@!START@$^{\rm E}$@!END@ ::: Results
Table TABREF17 shows the statistics of responses and example annotations. Table TABREF17 also shows the abstractiveness of annotated NLDs ($a$), namely the number of tokens in an NLD divided by the number of tokens in its corresponding justification sentences. This indicates that annotated NLDs are indeed summarized. See Table TABREF53 in Appendix and Supplementary Material for more results.
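The abstractiveness ratio $a$ described above is straightforward to compute; the example tokens below are illustrative.

```python
def abstractiveness(nld_tokens, justification_tokens):
    # Tokens in the NLD divided by tokens in the justification sentences
    # it was initialized from (lower means more heavily summarized).
    return len(nld_tokens) / len(justification_tokens)

nld = "Barracuda is on the album Little Queen".split()
justification = ("The album Little Queen was released in May 1977 "
                 "and contains the single Barracuda").split()
print(round(abstractiveness(nld, justification), 2))
```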
Data collection for RC-QED@!START@$^{\rm E}$@!END@ ::: Results ::: Quality
To evaluate the quality of the annotation results, we publish another CS task on AMT. We randomly sample 300 True and Likely responses for this evaluation. Given NLDs and a statement, 3 crowdworkers are asked whether the NLDs lead to the statement, on a four-point scale. If the answer is 4 or 3 (“yes” or “likely”), we additionally ask whether each derivation step can be derived from each supporting document; otherwise we ask for the reasons. For a fair evaluation, we encourage crowdworkers to give lower scores by stating on the CS interface that we give a bonus if they find a flaw in the reasoning.
The evaluation results shown in Table TABREF24 indicate that the annotated NLDs are of high quality (Reachability), and each NLD is properly derived from supporting documents (Derivability).
On the other hand, we found that the quality of 3-step NLDs is relatively lower than that of the others. Crowdworkers found that 45.3% of the 294 (out of 900) 3-step NLDs have missing steps needed to derive a statement. Consider this example: for the annotated NLDs “[1] Kouvola is located in Helsinki. [2] Helsinki is in the region of Uusimaa. [3] Uusimaa borders the regions Southwest Finland, Kymenlaakso and some others.” and the statement “Kouvola is located in Kymenlaakso”, one worker pointed out the missing step “Uusimaa is in Kymenlaakso.” We speculate that a greater number of reasoning steps makes it difficult for crowdworkers to check the correctness of derivations during the writing task.
Data collection for RC-QED@!START@$^{\rm E}$@!END@ ::: Results ::: Agreement
For agreement on the number of NLDs, we obtained a Krippendorff's $\alpha $ of 0.223, indicating a fair agreement BIBREF9.
Our manual inspection of the 10 worst disagreements revealed that the majority (7/10) come from Unsure vs. non-Unsure. It also revealed that crowdworkers who labeled non-Unsure are reliable—6 out of 7 non-Unsure annotations can be judged as correct. This partially confirms the effectiveness of our incentive structure.
Baseline RC-QED@!START@$^{\rm E}$@!END@ model
To highlight the challenges and nature of RC-QED$^{\rm E}$, we create a simple, transparent, and interpretable baseline model.
Recent studies on knowledge graph completion (KGC) explore compositional inference to combat the sparsity of knowledge bases BIBREF10, BIBREF11, BIBREF12. Given a query triplet $(h, r, t)$ (e.g. (Macchu Picchu, locatedIn, Peru)), a path ranking-based approach to KGC explicitly samples paths between $h$ and $t$ in a knowledge base (e.g. Macchu Picchu—locatedIn—Andes Mountain—countryOf—Peru) and constructs a feature vector from these paths. This feature vector is then used to calculate the compatibility between the query triplet and the sampled paths.
RC-QED$^{\rm E}$ can be naturally solved by path ranking-based KGC (PRKGC), where the query triplet and the sampled paths correspond to a question and derivation steps, respectively. PRKGC meets our purposes because of its glassboxness: we can trace the derivation steps of the model easily.
Baseline RC-QED@!START@$^{\rm E}$@!END@ model ::: Knowledge graph construction
Given supporting documents $S$, we build a knowledge graph. We first apply a coreference resolver to $S$ and then create a directed graph $G(S)$. Therein, each node represents named entities (NEs) in $S$, and each edge represents textual relations between NEs extracted from $S$. Figure FIGREF27 illustrates an example of $G(S)$ constructed from supporting documents in Figure FIGREF1.
Baseline RC-QED@!START@$^{\rm E}$@!END@ model ::: Path ranking-based KGC (PRKGC)
Given a question $Q=(q, r)$ and a candidate entity $c_i$, we estimate the plausibility of $(q, r, c_i)$ as follows:
$P(r|q, c_i) = \sigma \big ({\rm MLP}(\mathbf {q}, \mathbf {r}, \mathbf {c_i}, \mathbf {\pi }(q, c_i))\big ),$
where $\sigma $ is a sigmoid function, and $\mathbf {q, r, c_i}, \mathbf {\pi }(q, c_i)$ are vector representations of $q, r, c_i$ and of the set $\pi (q, c_i)$ of shortest paths between $q$ and $c_i$ on $G(S)$. ${\rm MLP}(\cdot , \cdot )$ denotes a multi-layer perceptron. To encode entities into vectors $\mathbf {q, c_i}$, we use a Long Short-Term Memory network (LSTM) and take its last hidden state. For example, in Figure FIGREF27, $q =$ Barracuda and $c_i =$ Portrait Records yield $\pi (q, c_i) = \lbrace $Barracuda—is the most popular in their album—Little Queen—was released in May 1977 on—Portrait Records, Barracuda—was released from American band Heart—is the second album released by:-1—Little Queen—was released in May 1977 on—Portrait Records$\rbrace $.
To obtain path representations $\mathbf {\pi }(q, c_i)$, we attentively aggregate individual path representations: $\mathbf {\pi }(q, c_i) = \sum _j \alpha _j \mathbf {\pi _j}(q, c_i)$, where $\alpha _j$ is the attention weight for the $j$-th path. The attention values are calculated as follows: $\alpha _j = \exp ({\rm sc}(q, r, c_i, \pi _j)) / \sum _k \exp ({\rm sc}(q, r, c_i, \pi _k))$, where ${\rm sc}(q, r, c_i, \pi _j) = {\rm MLP}(\mathbf {q}, \mathbf {r}, \mathbf {c_i}, \mathbf {\pi _j})$. To obtain the individual path representations $\mathbf {\pi _j}$, we follow toutanova-etal-2015-representing. We use a Bi-LSTM BIBREF13 with mean pooling over timesteps in order to encourage similar paths to have similar representations.
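A small numpy sketch of this attentive pooling is given below; the linear scorer with random weights is a stand-in for the trained ${\rm sc}(\cdot )$ MLP, and the dimensionalities are illustrative.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def aggregate_paths(score_fn, q, r, c, path_vecs):
    # alpha_j is proportional to exp(sc(q, r, c, pi_j)); the pooled path
    # representation is the attention-weighted sum of the pi_j.
    scores = np.array([score_fn(q, r, c, p) for p in path_vecs])
    alpha = softmax(scores)
    return (alpha[:, None] * np.stack(path_vecs)).sum(axis=0), alpha

rng = np.random.default_rng(0)
d = 100
w = rng.normal(size=4 * d)  # stand-in linear scorer instead of a trained MLP
score = lambda q, r, c, p: float(w @ np.concatenate([q, r, c, p]))
q, r, c = rng.normal(size=d), rng.normal(size=d), rng.normal(size=d)
paths = [rng.normal(size=d) for _ in range(2)]
pooled, alpha = aggregate_paths(score, q, r, c, paths)
print(alpha, pooled.shape)
```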
For the testing phase, we choose a candidate entity $c_i$ with the maximum probability $P(r|q, c_i)$ as an answer entity, and choose a path $\pi _j$ with the maximum attention value $\alpha _j$ as NLDs. To generate NLDs, we simply traverse the path from $q$ to $c_i$ and subsequently concatenate all entities and textual relations as one string. We output Unanswerable when (i) $\max _{c_i \in C} P(r|q, c_i) < \epsilon _k$ or (ii) $G(S)$ has no path between $q$ and all $c_i \in C$.
Baseline RC-QED@!START@$^{\rm E}$@!END@ model ::: Training
Let $\mathcal {K}^+$ be a set of question-answer pairs, where each instance consists of a triplet (a query entity $q_i$, a relation $r_i$, an answer entity $a_i$). Similarly, let $\mathcal {K}^-$ be a set of question-non-answer pairs. We minimize the following binary cross-entropy loss:
From the NLD point of view, this is unsupervised training. The model is expected to learn, on its own, a score function ${\rm sc(\cdot )}$ that gives higher scores to paths (i.e. NLD steps) useful for discriminating correct answers from wrong ones. Highly scored NLDs might be useful for answer classification, but they are not guaranteed to be interpretable to humans.
Baseline RC-QED@!START@$^{\rm E}$@!END@ model ::: Training ::: Semi-supervising derivations
To address the above issue, we resort to gold-standard NLDs to guide the path scoring function ${\rm sc(\cdot )}$. Let $\mathcal {D}$ be question-answer pairs coupled with gold-standard NLDs, namely a binary vector $\mathbf {p}_i$, where the $j$-th value represents whether $j$-th path corresponds to a gold-standard NLD (1) or not (0). We apply the following cross-entropy loss to the path attention:
Experiments ::: Settings ::: Dataset
We aggregated the crowdsourced annotations obtained in Section SECREF3. As preprocessing, we converted an NLD annotation to Unsure if the derivation contains the phrase needs to be mentioned, because such annotators misunderstood our instructions. When at least one crowdworker states that a statement is Unsure, we set the answerability to Unanswerable and discard the NLD annotations. Otherwise, we employ all NLD annotations from workers as multiple reference NLDs. The statistics are shown in Table TABREF36.
Regarding $\mathcal {K}^+, \mathcal {K}^-$, we extracted 867,936 instances from the training set of WikiHop BIBREF0. We reserve 10% of these instances as a validation set to find the best model. For $\mathcal {D}$, we used Answerable questions in the training set. To create supervision of path (i.e. $\mathbf {p}_i$), we selected the path that is most similar to all NLD annotations in terms of ROUGE-L F1.
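A sketch of how the path supervision $\mathbf {p}_i$ could be constructed is shown below; averaging the per-reference similarities is an assumption about how “most similar to all NLD annotations” is operationalized, and any ROUGE-L implementation can be plugged in as `similarity` (the token-overlap F1 in the demo is just a cheap stand-in).

```python
def build_path_supervision(path_strings, reference_nlds, similarity):
    # Mark the single path most similar (on average) to the annotated NLDs.
    avg = [sum(similarity(p, ref) for ref in reference_nlds) / len(reference_nlds)
           for p in path_strings]
    best = max(range(len(path_strings)), key=lambda j: avg[j])
    return [1 if j == best else 0 for j in range(len(path_strings))]

token_f1 = lambda a, b: (2 * len(set(a.split()) & set(b.split()))
                         / (len(set(a.split())) + len(set(b.split()))))
print(build_path_supervision(
    ["Barracuda is on Little Queen", "Barracuda was released from American band Heart"],
    ["Barracuda is on the album Little Queen"], token_f1))
```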
Experiments ::: Settings ::: Hyperparameters
We used 100-dimensional vectors for entities, relations, and textual relation representations. We initialize these representations with 100-dimensional Glove Embeddings BIBREF14 and fine-tuned them during training. We retain only top-100,000 frequent words as a model vocabulary. We used Bi-LSTM with 50 dimensional hidden state as a textual relation encoder, and an LSTM with 100-dimensional hidden state as an entity encoder. We used the Adam optimizer (default parameters) BIBREF15 with a batch size of 32. We set the answerability threshold $\epsilon _k = 0.5$.
Experiments ::: Settings ::: Baseline
To check the integrity of the PRKGC model, we created a simple baseline model (shortest path model). It outputs a candidate entity with the shortest path length from a query entity on $G(S)$ as an answer. Similarly to the PRKGC model, it traverses the path to generate NLDs. It outputs Unanswerable if (i) a query entity is not reachable to any candidate entities on $G(S)$ or (ii) the shortest path length is more than 3.
Experiments ::: Results and discussion
As shown in Table TABREF37, the PRKGC models learned to reason over more than simple shortest paths. Yet, the PRKGC model does not give particularly good results, which indicates the non-triviality of RC-QED$^{\rm E}$. Although the PRKGC model does not receive supervision from human-generated NLDs, the paths with the maximum score match human-generated NLDs to some extent.
Supervising path attentions (the PRKGC+NS model) is indeed effective for improving the human interpretability of generated NLDs. It also improves the generalization ability of question answering. We speculate that $L_d$ functions as a regularizer, which helps models learn reasoning that is helpful beyond the training data. This observation is consistent with previous work where an evidence selection task is learned jointly with a main task BIBREF11, BIBREF2, BIBREF5.
As shown in Table TABREF43, as the required derivation step increases, the PRKGC+NS model suffers from predicting answer entities and generating correct NLDs. This indicates that the challenge of RC-QED$^{\rm E}$ is in how to extract relevant information from supporting documents and synthesize these multiple facts to derive an answer.
To obtain further insights, we manually analyzed generated NLDs. Table TABREF44 (a) illustrates a positive example, where the model identifies that altudoceras belongs to pseudogastrioceratinae, and that pseudogastrioceratinae is a subfamily of paragastrioceratidae. Some supporting sentences are already similar to human-generated NLDs, thus simply extracting textual relations works well for some problems.
On the other hand, a typical derivation error stems from non-human-readable textual relations. In (b), the model states that bumped has the relationship “,” with hands up, which is originally extracted from one of the supporting sentences It contains the UK Top 60 singles “Bumped”, “Hands Up (4 Lovers)” and .... This provides a useful clue for answer prediction, but it is not suitable as a derivation. One may address this issue by incorporating, for example, a relation extractor or a paraphrasing mechanism based on recent advances in conditional language models BIBREF20.
Experiments ::: Results and discussion ::: QA performance.
To check the integrity of our baseline models, we compare them with existing neural models tailored for QA under the pure WikiHop setting (i.e. evaluation with only the accuracy of predicted answers). Note that these existing models do not output derivations, so a direct comparison is not possible and the numbers serve only as a reference. Because WikiHop has no answerability task, we forced the PRKGC model to always output answers. As shown in Table TABREF45, the PRKGC models achieve performance comparable to other sophisticated neural models.
Related work ::: RC datasets with explanations
Few RC datasets are annotated with explanations (Table TABREF50). The work most similar to ours is the Science QA dataset BIBREF21, BIBREF22, BIBREF23, which provides a small set of NLDs annotated for analysis purposes. By developing a scalable crowdsourcing framework, our work provides an order of magnitude more NLDs, which can be used as a benchmark more reliably. In addition, it provides the community with new types of challenges not included in HotpotQA.
Related work ::: Analysis of RC models and datasets
There is a large body of work on analyzing the nature of RC datasets, motivated by the question of to what degree RC models understand natural language BIBREF3, BIBREF4. Several studies suggest that current RC datasets have unintended biases, which enable RC systems to rely on cheap heuristics to answer questions. For instance, Sugawara2018 show that some of these RC datasets contain a large number of “easy” questions that can be solved by cheap heuristics (e.g. by looking at the first few tokens of a question). Responding to their findings, we take a step further and explore a new RC task that requires RC systems to give introspective explanations as well as answers. In addition, recent studies show that current RC models and NLP models are vulnerable to adversarial examples BIBREF29, BIBREF30, BIBREF31. Explicit modeling of NLDs is expected to regularize RC models, which could prevent RC models' strong dependence on unintended bias in training data (e.g. annotation artifacts) BIBREF32, BIBREF8, BIBREF2, BIBREF5, as partially confirmed in Section SECREF46.
Related work ::: Other NLP corpora annotated with explanations
There are existing NLP tasks that require models to output explanations (Table TABREF50). FEVER BIBREF25 requires a system to judge the “factness” of a claim as well as to identify justification sentences. As discussed earlier, we take a step further from justification explanations to provide new challenges for NLU.
Several datasets are annotated with introspective explanations, ranging from textual entailments BIBREF8 to argumentative texts BIBREF26, BIBREF27, BIBREF33. All these datasets offer the classification task of single sentences or sentence pairs. The uniqueness of our dataset is that it measures a machine's ability to extract relevant information from a set of documents and to build coherent logical reasoning steps.
Conclusions
Towards RC models that can perform correct reasoning, we have proposed RC-QED that requires a system to output its introspective explanations, as well as answers. Instantiating RC-QED with entity-based multi-hop QA (RC-QED$^{\rm E}$), we have created a large-scale corpus of NLDs. The developed crowdsourcing annotation framework can be used for annotating other QA datasets with derivations. Our experiments using two simple baseline models have demonstrated that RC-QED$^{\rm E}$ is a non-trivial task, and that it indeed provides a challenging task of extracting and synthesizing relevant facts from supporting documents. We will make the corpus of reasoning annotations and baseline systems publicly available at https://naoya-i.github.io/rc-qed/.
One immediate future work is to expand the annotation to non-entity-based multi-hop QA datasets such as HotpotQA BIBREF2. For modeling, we plan to incorporate a generative mechanism based on recent advances in conditional language modeling.
Example annotations
Table TABREF53 shows examples of crowdsourced annotations. | Unanswerable |
5b029ad0d20b516ec11967baaf7d2006e8d7199f | 5b029ad0d20b516ec11967baaf7d2006e8d7199f_0 | Q: How many label options are there in the multi-label task?
Text: Introduction
Over the past few years, microblogs have become one of the most popular online social networks. Microblogging websites have evolved to become a source of varied kinds of information. This is due to the nature of microblogs: people post real-time messages about their opinions and express sentiment on a variety of topics, discuss current issues, complain, etc. Twitter is one such popular microblogging service where users create status messages (called “tweets"). With over 400 million tweets per day on Twitter, microblog users generate a large amount of data, covering very rich topics ranging from politics and sports to celebrity gossip. Because the user-generated content on microblogs covers rich topics and expresses the sentiments/opinions of the masses, mining and analyzing this information can prove very beneficial to both the industrial and the academic community. Tweet classification has attracted considerable attention because it has become very important to analyze people's sentiments and opinions on social networks.
Most of the current work on analysis of tweets is focused on sentiment analysis BIBREF0, BIBREF1, BIBREF2. Not much has been done on predicting outcomes of events based on the tweet sentiments, for example, predicting winners of presidential debates based on the tweets by analyzing the users' sentiments. This is possible intuitively because the sentiment of the users in their tweets towards the candidates is proportional to the performance of the candidates in the debate.
In this paper, we analyze three such events: 1) US Presidential Debates 2015-16, 2) Grammy Awards 2013, and 3) Super Bowl 2013. The main focus is on the analysis of the presidential debates. For the Grammys and the Super Bowl, we just perform sentiment analysis and try to predict the outcomes in the process. For the debates, in addition to the analysis done for the Grammys and Super Bowl, we also perform a trend analysis. Our analysis of the tweets for the debates is 3-fold as shown below.
Sentiment: Perform a sentiment analysis on the debates. This involves: building a machine learning model which learns the sentiment-candidate pair (candidate is the one to whom the tweet is being directed) from the training data and then using this model to predict the sentiment-candidate pairs of new tweets.
Predicting Outcome: Here, after predicting the sentiment-candidate pairs on the new data, we predict the winner of the debates based on the sentiments of the users.
Trends: Here, we analyze certain trends in the debates, such as the change in the users' sentiments towards the candidates over time (hours, days, months) and how the opinions of experts such as the Washington Post affect the sentiments of the users.
For the sentiment analysis, we look at our problem in a multi-label setting, our two labels being sentiment polarity and the candidate/category under consideration. We test both single-label and multi-label classifiers on the problem, and, as intuition suggests, the multi-label classifier RaKel performs better. A combination of document-embedding features BIBREF3 and topic features (essentially the document-topic probabilities) BIBREF4 is shown to give the best results. These features make sense intuitively because the document-embedding features take the context of the text into account, which is important for sentiment polarity classification, and the topic features take into account the topic of the tweet (who/what it is about).
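A rough sketch of this feature combination in scikit-learn is given below; LDA document-topic proportions stand in for the topic features, random vectors stand in for the document embeddings (doc2vec would be used in practice), and a one-vs-rest classifier stands in for the RaKel multi-label classifier. The tweets and labels are toy examples.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

tweets = ["Trump nailed that answer #GOPDebate",
          "Hillary dodged the question again",
          "Great night for Sanders, strong closing"]
# Each tweet carries two labels: sentiment polarity and the targeted candidate.
labels = [("positive", "Trump"), ("negative", "Clinton"), ("positive", "Sanders")]

counts = CountVectorizer().fit_transform(tweets)
topic_feats = LatentDirichletAllocation(n_components=2, random_state=0).fit_transform(counts)
doc_embed = np.random.RandomState(0).normal(size=(len(tweets), 8))  # doc2vec placeholder
X = np.hstack([doc_embed, topic_feats])

Y = MultiLabelBinarizer().fit_transform(labels)
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)
print(clf.predict(X))
```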
The prediction of debate outcomes is particularly interesting in our case. Most of the results seem to match the views of experts such as the political pundits of the Washington Post. This implies that certain rules used by said experts to score the candidates in the debates were in fact reflected in people's sentiments expressed over social media. This opens up a wide variety of possibilities for learning from users' sentiments on social media, which is sometimes referred to as the wisdom of the crowd.
We do find that public sentiment does not always coincide with the views of the experts. In such cases, it is interesting to check whether the experts' views can affect the public, for example, by spreading through social media microblogs such as Twitter. Hence, we also conduct experiments to compare public sentiment before and after the experts' views become public and thus observe the impact of the experts' views on public sentiment. In our analysis of the debates, we observe that in certain debates, such as the 5th Republican Debate held on December 15, 2015, the opinions of the users diverge from those of the experts. But we see the effect of the experts on the users' sentiment by looking at their opinions of the same candidates the next day.
Our main contribution is to examine how predictive the sentiments/opinions of users in social media microblogs are and how they compare to those of the experts. In essence, we find that the crowd wisdom in the microblog domain matches that of the experts in most cases. There are cases, however, where they don't match, but we observe that the crowd's sentiments are actually affected by the experts. This can be seen in our analysis of the presidential debates.
The rest of the paper is organized as follows: in section SECREF2, we review some of the literature. In section SECREF3, we discuss the collection and preprocessing of the data. Section SECREF4 details the approach taken, along with the features and the machine learning methods used. Section SECREF7 discusses the results of the experiments conducted and lastly section SECREF8 ends with a conclusion on the results including certain limitations and scopes for improvement to work on in the future.
Related Work
Sentiment analysis as a Natural Language Processing task has been handled at many levels of granularity. Specifically on the microblog front, some of the early results on sentiment analysis are by BIBREF0, BIBREF1, BIBREF2, BIBREF5, BIBREF6. Go et al. BIBREF0 applied distant supervision to classify tweet sentiment by using emoticons as noisy labels. Kouloumpis et al. BIBREF7 exploited hashtags in tweets to build training data. Chenhao Tan et al. BIBREF8 determined user-level sentiments on particular topics with the help of the social network graph.
There has been some work in event detection and extraction in microblogs as well. In BIBREF9, the authors describe a way to extract major life events of a user based on tweets that either congratulate/offer condolences. BIBREF10 build a key-word graph from the data and then detect communities in this graph (cluster) to find events. This works because words that describe similar events will form clusters. In BIBREF11, the authors use distant supervision to extract events. There has also been some work on event retrieval in microblogs BIBREF12. In BIBREF13, the authors detect time points in the twitter stream when an important event happens and then classify such events based on the sentiments they evoke using only non-textual features to do so. In BIBREF14, the authors study how much of the opinion extracted from Online Social Networks (OSN) user data is reflective of the opinion of the larger population. Researchers have also mined Twitter dataset to analyze public reaction to various events: from election debate performance BIBREF15, where the authors demonstrate visuals and metrics that can be used to detect sentiment pulse, anomalies in that pulse, and indications of controversial topics that can be used to inform the design of visual analytic systems for social media events, to movie box-office predictions on the release day BIBREF16. Mishne and Glance BIBREF17 correlate sentiments in blog posts with movie box-office scores. The correlations they observed for positive sentiments are fairly low and not sufficient to use for predictive purposes. Recently, several approaches involving machine learning and deep learning have also been used in the visual and language domains BIBREF18, BIBREF19, BIBREF20, BIBREF21, BIBREF22, BIBREF23, BIBREF24.
Data Set and Preprocessing ::: Data Collection
Twitter is a social networking and microblogging service that allows users to post real-time messages, called tweets. Tweets are very short messages, a maximum of 140 characters in length. Due to such a restriction in length, people tend to use a lot of acronyms, shorten words etc. In essence, the tweets are usually very noisy. There are several aspects to tweets such as: 1) Target: Users use the symbol “@" in their tweets to refer to other users on the microblog. 2) Hashtag: Hashtags are used by users to mark topics. This is done to increase the visibility of the tweets.
We conduct experiments on 3 different datasets, as mentioned earlier: 1) US Presidential Debates, 2) Grammy Awards 2013, 3) Super Bowl 2013. To construct our presidential debates dataset, we have used the Twitter Search API to collect the tweets. Since there was no publicly available dataset for this, we had to collect the data manually. The data was collected on 10 different presidential debates: 7 Republican and 3 Democratic, from October 2015 to March 2016. Different hashtags like “#GOP, #GOPDebate” were used to retrieve tweets specific to each debate. This is given in Table TABREF2. We extracted only English tweets for our dataset. We collected a total of 104961 tweets across all the debates. But there were some limitations with the API. Firstly, the server imposes a rate limit and discards tweets when the limit is reached. The second problem is that the API returns many duplicates. Thus, after removing the duplicates and irrelevant tweets, we were left with a total of 17304 tweets. This includes the tweets only on the day of the debate. We also collected tweets on the days following the debate.
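As an illustration of how such a collection step can be scripted, the sketch below pages through Twitter's v2 recent-search endpoint for a debate hashtag. The bearer token, the helper name, and the use of the v2 API are assumptions made for the example; the original 2015-16 collection used the Search API available at that time, and the v2 recent-search endpoint only covers roughly the last seven days.

```python
import requests

SEARCH_URL = "https://api.twitter.com/2/tweets/search/recent"  # Twitter API v2 recent-search endpoint
BEARER_TOKEN = "YOUR_BEARER_TOKEN"  # placeholder credential (assumption)

def collect_debate_tweets(hashtag, max_pages=5):
    """Collect English, non-retweet tweets for one debate hashtag, following pagination tokens."""
    headers = {"Authorization": f"Bearer {BEARER_TOKEN}"}
    params = {"query": f"{hashtag} lang:en -is:retweet", "max_results": 100}
    tweets, next_token = [], None
    for _ in range(max_pages):
        if next_token:
            params["next_token"] = next_token
        resp = requests.get(SEARCH_URL, headers=headers, params=params)
        resp.raise_for_status()
        payload = resp.json()
        tweets.extend(t["text"] for t in payload.get("data", []))
        next_token = payload.get("meta", {}).get("next_token")
        if next_token is None:
            break
    return tweets

# Example: tweets about the 5th Republican debate.
# debate_tweets = collect_debate_tweets("#GOPDebate")
```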
As for the other two datasets, we collected them from available online repositories. There were a total of 2580062 tweets for the Grammy Awards 2013, and a total of 2428391 tweets for the Super Bowl 2013. The statistics are given in Tables TABREF3 and TABREF3. The tweets for the Grammys were posted before and during the ceremony. However, we only use the tweets before the ceremony (after the nominations were announced) to predict the winners. As for the Super Bowl, the tweets were collected during the game. But we can predict interesting things like the Most Valuable Player etc. from the tweets. The tweets for both of these datasets were annotated and thus did not require any human intervention. However, the tweets for the debates had to be annotated.
Since we are using a supervised approach in this paper, we have all the tweets (for debates) in the training set human-annotated. The tweets were already annotated for the Grammys and Super Bowl. Some statistics about our datasets are presented in Tables TABREF3, TABREF3 and TABREF3. The annotations for the debate dataset comprised 2 labels for each tweet: 1) Candidate: This is the candidate of the debate to whom the tweet refers, 2) Sentiment: This represents the sentiment of the tweet towards that candidate. This is either positive or negative.
The task then becomes a case of multi-label classification. The candidate labels are not so trivial to obtain, because there are tweets that do not directly contain any candidate's name. For example, the tweets, “a business man for president??” and “a doctor might sure bring about a change in America!” are about Donald Trump and Ben Carson respectively. Thus, it is meaningful to have a multi-label task.
The annotations for the other two datasets are similar, in that one of the labels was the sentiment and the other was category-dependent in the outcome-prediction task, as discussed in the sections below. For example, if we are trying to predict the "Album of the Year" winners for the Grammy dataset, the second label would be the nominees for that category (album of the year).
Data Set and Preprocessing ::: Preprocessing
As noted earlier, tweets are generally noisy and thus require some preprocessing before they can be used. The following filters were applied to the tweets:
(1) Usernames: Since users often include usernames in their tweets to direct their message, we simplify these by replacing the usernames with the token “USER”. For example, @michael will be replaced by USER.
(2) URLs: In most of the tweets, users include links that add on to their text message. We replace the link address with the token “URL”.
(3) Repeated Letters: Oftentimes, users repeat letters in a word to add emphasis. For example, the word “lol” (which stands for “laugh out loud”) is sometimes written as “looooool” to emphasize the degree of funniness. We replace such repeated occurrences of letters (more than 2) with just 3 occurrences. We replace with 3 occurrences and not 2 so that we can distinguish the exaggerated usage from the regular one.
(4) Multiple Sentiments: Tweets which contain multiple sentiments are removed, such as “I hate Donald Trump, but I will vote for him”. This is done so that there is no ambiguity.
(5) Retweets: On Twitter, tweets of one user are often copied and posted by another user. This is known as retweeting, and such tweets are commonly abbreviated with “RT”. These are removed and only the original tweets are processed.
(6) Repeated Tweets: The Twitter API sometimes returns a tweet multiple times. We remove such duplicates to avoid putting extra weight on any particular tweet.
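A minimal sketch of these filters using Python's standard re module is given below. The exact regular expressions and function names are illustrative assumptions rather than the authors' code, and filter (4) is omitted because detecting multiple sentiments requires annotation rather than pattern matching.

```python
import re

def preprocess(tweet):
    """Apply filters (1), (2), (3) and (5) to a single tweet; returns None if the tweet is dropped."""
    if re.match(r"^\s*RT\b", tweet, flags=re.IGNORECASE):   # (5) drop retweets
        return None
    text = re.sub(r"@\w+", "USER", tweet)                    # (1) usernames -> USER
    text = re.sub(r"https?://\S+|www\.\S+", "URL", text)     # (2) links -> URL
    text = re.sub(r"(.)\1{2,}", r"\1\1\1", text)             # (3) cap repeated letters at 3
    return text

def preprocess_corpus(tweets):
    """Clean a list of tweets and drop duplicates returned by the API, i.e. filter (6)."""
    seen, cleaned = set(), []
    for t in tweets:
        out = preprocess(t)
        if out is not None and out not in seen:
            seen.add(out)
            cleaned.append(out)
    return cleaned

# preprocess("RT @michael: check https://t.co/xyz")  -> None (retweet)
# preprocess("@michael loooool")                     -> "USER loool"
```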
Methodology ::: Procedure
Our analysis of the debates is 3-fold including sentiment analysis, outcome prediction, and trend analysis.
Sentiment Analysis: To perform a sentiment analysis on the debates, we first extract all the features described below from all the tweets in the training data. We then build the different machine learning models described below on this set of features. After that, we evaluate the output produced by the models on unseen test data. The models essentially predict candidate-sentiment pairs for each tweet.
Outcome Prediction: Predict the outcome of the debates. After obtaining the sentiments on the test data for each tweet, we can compute the net normalized sentiment for each presidential candidate in the debate. This is done by looking at the number of positive and negative sentiments for each candidate. We then normalize the sentiment scores of each candidate to the same scale (0-1). After that, we rank the candidates based on the sentiment scores and predict the top $k$ as the winners. A code sketch of this scoring step is given after this list.
Trend Analysis: We also analyze certain trends of the debates. Firstly, we look at the change in sentiments of the users towards the candidates over time (hours, days, months). This is done by computing the sentiment scores for each candidate in each of the debates and seeing how they vary over time, across debates. Secondly, we examine the effect of the Washington Post on the views of the users. This is done by looking at the sentiments of the candidates (to predict winners) of a debate before and after the winners are announced by the experts in the Washington Post. This way, we can see if the Washington Post has had any effect on the sentiments of the users. Besides that, to study the behavior of the users, we also look at the correlation of the tweet volume with the number of viewers as well as the variation of tweet volume over time (hours, days, months) for debates.
As for the Grammys and the Super Bowl, we only perform the sentiment analysis and predict the outcomes.
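The sketch below makes the outcome-prediction scoring concrete: predicted (candidate, sentiment) pairs are aggregated into a net score per candidate, normalized to 0-1, and the top $k$ candidates are returned. The function name and the min-max normalization are illustrative assumptions; the paper only specifies that scores are normalized to the 0-1 range.

```python
from collections import Counter

def predict_winners(candidate_sentiment_pairs, k):
    """candidate_sentiment_pairs: iterable of (candidate, sentiment) with sentiment in {'positive', 'negative'}."""
    pos, neg = Counter(), Counter()
    for candidate, sentiment in candidate_sentiment_pairs:
        (pos if sentiment == "positive" else neg)[candidate] += 1
    # Net sentiment per candidate: positive count minus negative count.
    candidates = set(pos) | set(neg)
    net = {c: pos[c] - neg[c] for c in candidates}
    # Min-max normalize the scores to the 0-1 range so candidates are comparable.
    lo, hi = min(net.values()), max(net.values())
    span = (hi - lo) or 1
    scores = {c: (net[c] - lo) / span for c in candidates}
    # Rank by score and return the top-k candidates as the predicted winners.
    return sorted(scores, key=scores.get, reverse=True)[:k]

# predict_winners([("Trump", "negative"), ("Bush", "positive"), ("Bush", "positive")], k=1)
# -> ["Bush"]
```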
Methodology ::: Machine Learning Models
We compare 4 different models for performing our task of sentiment classification. We then pick the best-performing model for the task of outcome prediction. Here, we have two categories of algorithms: single-label and multi-label (we already discussed earlier why it is meaningful to have a multi-label task), because one can represent $<$candidate, sentiment$>$ as a single class label or have candidate and sentiment as two separate labels. They are listed below:
Methodology ::: Machine Learning Models ::: Single-label Classification
Naive Bayes: We use a multinomial Naive Bayes model. A tweet $t$ is assigned a class $c^{*}$ such that
$c^{*} = \arg \max _{c} \; P(c) \prod _{i=1}^{m} P(f_i \mid c)$
where there are $m$ features and $f_i$ represents the $i^{th}$ feature. A code sketch of this single-label setup is given after this list.
Support Vector Machines: Support Vector Machines (SVM) constructs a hyperplane or a set of hyperplanes in a high-dimensional space, which can then be used for classification. In our case, we use SVM with Sequential Minimal Optimization (SMO) BIBREF25, which is an algorithm for solving the quadratic programming (QP) problem that arises during the training of SVMs.
Elman Recurrent Neural Network: Recurrent Neural Networks (RNNs) are gaining popularity and are being applied to a wide variety of problems. They are a class of artificial neural networks, where connections between units form a directed cycle. This creates an internal state of the network which allows it to exhibit dynamic temporal behavior. The Elman RNN was proposed by Jeff Elman in the year 1990 BIBREF26. We use this in our task.
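For concreteness, a hedged sketch of the single-label setup with scikit-learn is shown below: the candidate and sentiment are collapsed into one class string and a multinomial Naive Bayes classifier is trained on unigram and bigram counts (a linear SVM could be swapped in the same way). The toy training examples and the label format are invented for illustration and are not from the annotated dataset.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Each tweet gets a single class of the form "<candidate>/<sentiment>" (toy examples, not real annotations).
train_texts = ["a business man for president", "i do not trust that business man at all"]
train_labels = ["Trump/positive", "Trump/negative"]

model = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),  # unigram + bigram counts, i.e. the (1+2)-gram features
    MultinomialNB(),                      # sklearn.svm.LinearSVC would give the SVM baseline
)
model.fit(train_texts, train_labels)
print(model.predict(["a business man might fix America"]))
```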
Methodology ::: Machine Learning Models ::: Multi-label Classification
RAkEL (RAndom k labELsets): RAkEL BIBREF27 is a multi-label classification algorithm built on the label powerset (LP) transformation: LP treats each distinct combination of labels as a single class, and RAkEL trains multiple LP classifiers, each on a random subset of the actual labels, and combines their predictions for classification.
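One possible implementation path for this setting — assuming the scikit-multilearn package, which the paper does not name — is sketched below: the candidate and sentiment labels are binarized into an indicator matrix and RAkEL is run with a Naive Bayes base classifier. The toy data and the labelset size are illustrative assumptions.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.preprocessing import MultiLabelBinarizer
from skmultilearn.ensemble import RakelD  # RAkEL with disjoint labelsets

texts = ["a business man for president", "i do not trust that business man at all"]
labels = [{"Trump", "positive"}, {"Trump", "negative"}]  # toy (candidate, sentiment) annotations

X = CountVectorizer(ngram_range=(1, 2)).fit_transform(texts)
mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)  # binary indicator matrix, one column per label value

clf = RakelD(base_classifier=MultinomialNB(), labelset_size=2)
clf.fit(X, Y)
print(mlb.classes_)
print(clf.predict(X).toarray())  # predicted indicator matrix for the training tweets
```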
Methodology ::: Feature Space
In order to classify the tweets, a set of features is extracted from each of the tweets, such as n-gram, part-of-speech etc. The details of these features are given below:
n-gram: This represents the frequency counts of n-grams, specifically that of unigrams and bigrams.
punctuation: The number of occurrences of punctuation symbols such as commas, exclamation marks etc.
POS (part-of-speech): The frequency of each POS tag is used as a feature.
prior polarity scoring: Here, we obtain the prior polarity of the words BIBREF6 using the Dictionary of Affect in Language (DAL) BIBREF28. This dictionary (DAL) of about 8000 English words assigns a pleasantness score to each word on a scale of 1-3. After normalizing, we can assign the words with polarity higher than $0.8$ as positive and less than $0.5$ as negative. If a word is not present in the dictionary, we look up its synonyms in WordNet: if a synonym is present in the dictionary, we assign the original word its synonym's score.
Twitter Specific features:
Number of hashtags ($\#$ symbol)
Number of user mentions (@ symbol)
Number of hyperlinks
Document embedding features: Here, we use the approach proposed by Mikolov et al. BIBREF3 to embed an entire tweet into a vector of features
Topic features: Here, LDA (Latent Dirichlet Allocation) BIBREF4 is used to extract topic-specific features for a tweet (document). This is basically the topic-document probability that is outputted by the model.
In the following experiments, we use 1-$gram$, 2-$gram$ and $(1+2)$-$gram$ to denote unigram, bigram and a combination of unigram and bigram features respectively. We also combine punctuation and the other features as miscellaneous features and use $MISC$ to denote this. We represent the document-embedding features by $DOC$ and topic-specific features by $TOPIC$.
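The best-performing $DOC + TOPIC$ combination can be approximated as follows — a sketch assuming gensim's Doc2Vec for the paragraph-embedding part and scikit-learn's LDA for the topic part; the vector size, number of topics, and other hyperparameters are illustrative and not taken from the paper.

```python
import numpy as np
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

def doc_topic_features(tweets, vector_size=100, n_topics=20):
    """Concatenate Doc2Vec embeddings (DOC) with LDA document-topic probabilities (TOPIC)."""
    tokenized = [t.lower().split() for t in tweets]

    # DOC: paragraph-vector embedding of each tweet (Mikolov et al.).
    tagged = [TaggedDocument(words, [i]) for i, words in enumerate(tokenized)]
    d2v = Doc2Vec(tagged, vector_size=vector_size, min_count=1, epochs=40)
    doc_vecs = np.array([d2v.infer_vector(words) for words in tokenized])

    # TOPIC: document-topic distribution from LDA; each row is P(topic | tweet).
    counts = CountVectorizer().fit_transform(tweets)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    topic_vecs = lda.fit_transform(counts)

    return np.hstack([doc_vecs, topic_vecs])
```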
Data Analysis
In this section, we analyze the presidential debates data and show some trends.
First, we look at the trend of the tweet frequency. Figure FIGREF21 shows the trends of the tweet frequency and the number of TV viewers as the debates progress over time. We observe from Figures FIGREF21 and FIGREF21 that for the first 5 debates considered, the trend of the number of TV viewers matches the trend of the number of tweets. However, we can see that towards the final debates, the frequency of the tweets decreases consistently. This shows an interesting fact: although people still watch the debates, the number of people who tweet about them is greatly reduced. The tweeting community consists mainly of younger users, and this suggests that much of the actively tweeting community did not watch the later debates; if they had, the two trends should ideally match.
Next, we look at the tweeting activity on the days around a debate: specifically on the day of the debate, the next day and two days later. Figure FIGREF22 shows the trend of the tweet frequency around the day of the 5th Republican debate, i.e., December 15, 2015. As can be seen, people tweet more, on average, on the day of the debate than a day or two after it. This makes sense intuitively because the debate would be fresh in their heads.
Then, we look at how people tweet in the hours of the debate: specifically during the debate, one hour after and then two hours after. Figure FIGREF23 shows the trend of the tweet frequency around the hour of the 5th Republican debate, i.e., December 15, 2015. We notice that people don't tweet much during the debate but the activity drastically increases after two hours. This might be because people were busy watching the debate and then took some time to process it before giving their opinions.
We have seen the frequency of tweets over time in the previous trends. Now, we will look at how the sentiments of the candidates change over time.
First, Figure FIGREF24 shows how the sentiments of the candidates changed across the debates. We find that many of the candidates have had ups and downs across the debates. These trends are interesting in that they give some useful information about what happened in a debate that caused the sentiments to change (sometimes drastically). For example, if we look at the graph for Donald Trump, we see that his sentiment was at its lowest point during the debate held on December 15. Looking into the debate, we can easily see why this was the case. At a certain point in the debate, Trump was asked about his ideas for the nuclear triad. It is very important that a presidential candidate knows about this, but Trump had no idea what the nuclear triad was and, in a transparent attempt to cover his tracks, resorted to a “we need to be strong” speech. This embarrassment may be why his sentiment went down during this debate.
Next, we investigate how the sentiments of the users towards the candidates change before and after the debate. In essence, we examine how the debate and the results of the debates given by the experts affect the sentiment of the candidates. Figure FIGREF25 shows the sentiments of the users towards the candidates during the 5th Republican Debate, 15th December 2015. It can be seen that the sentiments of the users towards the candidates do indeed change over the course of two days. One particular example is that of Jeb Bush. It seems that the populace are generally prejudiced towards the candidates, which is reflected in their sentiments towards the candidates on the day of the debate. The results of the Washington Post are released in the morning after the debate. One can see the winners suggested by the Washington Post in Table TABREF35. One of the winners in that debate according to them is Jeb Bush. Coincidentally, Figure FIGREF25 suggests that the sentiment of Bush has gone up one day after the debate (essentially, one day after the results given by the experts are out).
There is some influence, for better or worse, of these experts on the minds of the users in the early debates, but towards the final debates the sentiments of the users are mostly unwavering, as can be seen in Figure FIGREF25. Figure FIGREF25 is on the last Republican debate, and suggests that the opinions of the users do not change much with time. Essentially the users have seen enough debates to make up their own minds and their sentiments are not easily wavered.
Evaluation Metrics
In this section, we define the different evaluation metrics that we use for different tasks. We have two tasks at hand: 1) Sentiment Analysis, 2) Outcome Prediction. We use different metrics for these two tasks.
Evaluation Metrics ::: Sentiment Analysis
In the study of sentiment analysis, we use “Hamming Loss” to evaluate the performance of the different methods. Hamming Loss, based on Hamming distance, takes into account the prediction error and the missing error, normalized over the total number of classes and total number of examples BIBREF29. The Hamming Loss is given below:
$\text{Hamming Loss} = \frac{1}{|D| \, |L|} \sum _{i=1}^{|D|} |S_i \oplus Y_i|$
where $|D|$ is the number of examples in the dataset and $|L|$ is the number of labels. $S_i$ and $Y_i$ denote the sets of true and predicted labels for instance $i$ respectively. $\oplus $ stands for the XOR operation BIBREF30. Intuitively, the performance is better, when the Hamming Loss is smaller. 0 would be the ideal case.
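A direct computation of this quantity, assuming the true and predicted labels of each example are represented as Python sets, might look as follows.

```python
def hamming_loss(true_sets, pred_sets, all_labels):
    """Average fraction of labels per example that are either mispredicted or missed."""
    total = 0
    for S, Y in zip(true_sets, pred_sets):
        total += len(S.symmetric_difference(Y))  # set XOR: wrongly predicted + missing labels
    return total / (len(true_sets) * len(all_labels))

# hamming_loss([{"Trump", "negative"}], [{"Trump", "positive"}], ["Trump", "positive", "negative"])
# -> 2 / (1 * 3) ~= 0.67
```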
Evaluation Metrics ::: Outcome Prediction
For the case of outcome prediction, we will have a predicted set and an actual set of results. Thus, we can use common information retrieval metrics to evaluate the prediction performance. Those metrics are listed below:
Mean F-measure: F-measure takes into account both the precision and recall of the results. In essence, it takes into account how many of the relevant results are returned and also how many of the returned results are relevant:
$\text{Mean F-measure} = \frac{1}{|D|} \sum _{i=1}^{|D|} \frac{2 P_i R_i}{P_i + R_i}$
where $|D|$ is the number of queries (debates/categories for Grammy winners etc.), $P_i$ and $R_i$ are the precision and recall for the $i^{th}$ query.
Mean Average Precision: As a standard metric used in information retrieval, Mean Average Precision for a set of queries is the mean of the average precision scores for each query:
$\text{MAP} = \frac{1}{|D|} \sum _{i=1}^{|D|} \frac{1}{|RD_i|} \sum _{k} P_i(k) \, rel_i(k)$
where $|D|$ is the number of queries (e.g., debates), $P_i(k)$ is the precision at $k$ ($P@k$) for the $i^{th}$ query, $rel_i(k)$ is an indicator function, equaling 1 if the document at position $k$ for the $i^{th}$ query is relevant, else 0, and $|RD_i|$ is the number of relevant documents for the $i^{th}$ query.
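Both retrieval metrics can be computed in a few lines of Python; the sketch below assumes the predictions for each query are given as a ranked list and the ground truth as a set, which matches how the ranked candidate lists are evaluated later.

```python
def mean_f_measure(pred_lists, true_sets):
    """Mean over queries of the F1 computed from precision and recall of the returned list."""
    scores = []
    for pred, truth in zip(pred_lists, true_sets):
        hits = len(set(pred) & truth)
        p = hits / len(pred) if pred else 0.0
        r = hits / len(truth) if truth else 0.0
        scores.append(2 * p * r / (p + r) if p + r else 0.0)
    return sum(scores) / len(scores)

def mean_average_precision(pred_lists, true_sets):
    """Mean over queries of average precision along each ranked prediction list."""
    ap_scores = []
    for pred, truth in zip(pred_lists, true_sets):
        hits, ap = 0, 0.0
        for k, item in enumerate(pred, start=1):
            if item in truth:
                hits += 1
                ap += hits / k  # P@k, counted only at relevant positions
        ap_scores.append(ap / len(truth) if truth else 0.0)
    return sum(ap_scores) / len(ap_scores)
```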
Results ::: Sentiment Analysis
We use 3 different datasets for the problem of sentiment analysis, as already mentioned. We test the four different algorithms mentioned in Section SECREF6, with different combinations of the features described in Section SECREF10. To evaluate our models, we use the “Hamming Loss” metric as discussed in Section SECREF6. We use this metric because our problem is in the multi-label classification domain. However, the single-label classifiers like SVM, Naive Bayes, Elman RNN cannot be evaluated against this metric directly. To do this, we split the predicted labels into a format that is consistent with that of multi-label classifiers like RaKel. The results of the experiments for each of the datasets are given in Tables TABREF34, TABREF34 and TABREF34. In the table, $f_1$, $f_2$, $f_3$, $f_4$, $f_5$ and $f_6$ denote the features 1-$gram$, 2-$gram$, $(1+2)$-$gram$, $(1+2)$-$gram + MISC$, $DOC$ and $DOC + TOPIC$ respectively. Note that lower values of Hamming Loss are more desirable. We find that RaKel performs the best out of all the algorithms. RaKel is more suited for the task because our task is a multi-label classification problem. Among all the single-label classifiers, SVM performs the best. We also observe that as we use more complex feature spaces, the performance increases. This is true for almost all of the algorithms listed.
Our best-performing feature set is a combination of paragraph-embedding features and topic features from LDA. This makes sense intuitively because paragraph embedding takes into account the context in the text. Context is very important in determining the sentiment of a short text: having negative words in the text does not always mean that the text contains a negative sentiment. For example, the sentence “never say never is not a bad thing” has many negative words; but the sentence as a whole does not have a negative sentiment. This is why we need some kind of context information to accurately determine the sentiment. Thus, with these embedded features, one would be able to better determine the polarity of the sentence. The other label is the entity (candidate/song etc.) in consideration. Topic features make sense for this label because the entity can be considered the topic of the tweet: that label captures what or whom the tweet is about.
Results ::: Results for Outcome Prediction
In this section, we show the results for the outcome-prediction of the events. RaKel, as the best performing method, is trained to predict the sentiment-labels for the unlabeled data. The sentiment labels are then used to determine the outcome of the events. In the Tables (TABREF35, TABREF36, TABREF37) of outputs given, we only show as many predictions as there are winners.
Results ::: Results for Outcome Prediction ::: Presidential Debates
The results obtained for the outcome prediction task for the US presidential debates is shown in Table TABREF35. Table TABREF35 shows the winners as given in the Washington Post (3rd column) and the winners that are predicted by our system (2nd column). By comparing the set of results obtained from both the sources, we find that the set of candidates predicted match to a large extent with the winners given out by the Washington Post. The result suggests that the opinions of the social media community match with that of the journalists in most cases.
Results ::: Results for Outcome Prediction ::: Grammy Awards
A Grammy Award is given for outstanding achievement in the music industry. There are two types of awards: “General Field” awards, which are not restricted by genre, and genre-specific awards. Since there can be up to 80 categories of awards, we only focus on the main 4: 1) Album of the Year, 2) Record of the Year, 3) Song of the Year, and 4) Best New Artist. These are the main categories of the awards ceremony and the most anticipated. That is also why we choose to predict the outcomes of these categories based on the tweets. We use the tweets before the ceremony (but after the nominations) to predict the outcomes.
Basically, we have a list of nominations for each category. We filter the tweets based on these nominations and then predict the winner as in the case of the debates. The outcomes are listed in Table TABREF36. We see that, largely, the opinion of the users on the social network agrees with that of the deciding committee of the awards. The winners agree for all the categories except “Song of the Year”.
Results ::: Results for Outcome Prediction ::: Super Bowl
The Super Bowl is the annual championship game of the National Football League. We have collected the data for the year 2013. Here, the match was between the Baltimore Ravens and the San Francisco 49ers. The tweets that we collected were posted during the game. From these tweets, one could trivially determine the winner. But an interesting outcome would be to predict the Most Valuable Player (MVP) of the game. To determine this, we examined all the tweets and identified the candidate with the highest positive sentiment by the end of the game. The result in Table TABREF37 suggests that we are able to determine the outcomes accurately.
Table TABREF43 displays some evaluation metrics for this task. These were computed based on the predicted outcomes and the actual outcomes for each of the different datasets. Since the number of participants varies from debate-to-debate or category-to-category for Grammy etc., we cannot return a fixed number of winners for everything. So, the size of our returned ranked-list is set to half of the number of participants (except for the MVP for Super Bowl; there are so many players and returning half of them when only one of them is relevant is meaningless. So, we just return the top 10 players). As we can see from the metrics, the predicted outcomes match quite well with the actual ones (or the ones given by the experts).
Conclusions
This paper presents a study that compares the opinions of users on microblogs, which is essentially the crowd wisdom, to that of the experts in the field. Specifically, we explore three datasets: US Presidential Debates 2015-16, Grammy Awards 2013, Super Bowl 2013. We determined if the opinions of the crowd and the experts match by using the sentiments of the tweets to predict the outcomes of the debates/Grammys/Super Bowl. We observed that in most of the cases, the predictions were right, indicating that crowd wisdom is indeed worth looking at and mining sentiments in microblogs is useful. In some cases where there were disagreements, however, we observed that the opinions of the experts did have some influence on the opinions of the users. We also find that the features that were most useful in our case of multi-label classification were a combination of the document-embedding and topic features. | two labels |
79bd2ad4cb5c630ce69d5a859ed118132cae62d7 | 79bd2ad4cb5c630ce69d5a859ed118132cae62d7_0 | Q: What is the interannotator agreement of the crowd sourced users?
Text: Introduction
Over the past few years, microblogs have become one of the most popular online social networks. Microblogging websites have evolved to become a source of varied kinds of information. This is due to the nature of microblogs: people post real-time messages about their opinions and express sentiment on a variety of topics, discuss current issues, complain, etc. Twitter is one such popular microblogging service where users create status messages (called “tweets"). With over 400 million tweets per day on Twitter, microblog users generate large amount of data, which cover very rich topics ranging from politics, sports to celebrity gossip. Because the user generated content on microblogs covers rich topics and expresses sentiment/opinions of the mass, mining and analyzing this information can prove to be very beneficial both to the industrial and the academic community. Tweet classification has attracted considerable attention because it has become very important to analyze peoples' sentiments and opinions over social networks.
Most of the current work on analysis of tweets is focused on sentiment analysis BIBREF0, BIBREF1, BIBREF2. Not much has been done on predicting outcomes of events based on the tweet sentiments, for example, predicting winners of presidential debates based on the tweets by analyzing the users' sentiments. This is possible intuitively because the sentiment of the users in their tweets towards the candidates is proportional to the performance of the candidates in the debate.
In this paper, we analyze three such events: 1) US Presidential Debates 2015-16, 2) Grammy Awards 2013, and 3) Super Bowl 2013. The main focus is on the analysis of the presidential debates. For the Grammys and the Super Bowl, we just perform sentiment analysis and try to predict the outcomes in the process. For the debates, in addition to the analysis done for the Grammys and Super Bowl, we also perform a trend analysis. Our analysis of the tweets for the debates is 3-fold as shown below.
Sentiment: Perform a sentiment analysis on the debates. This involves: building a machine learning model which learns the sentiment-candidate pair (candidate is the one to whom the tweet is being directed) from the training data and then using this model to predict the sentiment-candidate pairs of new tweets.
Predicting Outcome: Here, after predicting the sentiment-candidate pairs on the new data, we predict the winner of the debates based on the sentiments of the users.
Trends: Here, we analyze certain trends of the debates like the change in sentiments of the users towards the candidates over time (hours, days, months) and how the opinion of experts such as Washington Post affect the sentiments of the users.
For the sentiment analysis, we look at our problem in a multi-label setting, our two labels being sentiment polarity and the candidate/category in consideration. We test both single-label classifiers and multi-label ones on the problem and as intuition suggests, the multi-label classifier RaKel performs better. A combination of document-embedding features BIBREF3 and topic features (essentially the document-topic probabilities) BIBREF4 is shown to give the best results. These features make sense intuitively because the document-embedding features take context of the text into account, which is important for sentiment polarity classification, and topic features take into account the topic of the tweet (who/what is it about).
The prediction of outcomes of debates is very interesting in our case. Most of the results seem to match with the views of some experts such as the political pundits of the Washington Post. This implies that certain rules that were used to score the candidates in the debates by said-experts were in fact reflected by reading peoples' sentiments expressed over social media. This opens up a wide variety of learning possibilities from users' sentiments on social media, which is sometimes referred to as the wisdom of crowd.
We do find out that the public sentiments are not always coincident with the views of the experts. In this case, it is interesting to check whether the views of the experts can affect the public, for example, by spreading through the social media microblogs such as Twitter. Hence, we also conduct experiments to compare the public sentiment before and after the experts' views become public and thus notice the impact of the experts' views on the public sentiment. In our analysis of the debates, we observe that in certain debates, such as the 5th Republican Debate, held on December 15, 2015, the opinions of the users vary from the experts. But we see the effect of the experts on the sentiment of the users by looking at their opinions of the same candidates the next day.
Our contributions are mainly: we want to see how predictive the sentiment/opinion of the users are in social media microblogs and how it compares to that of the experts. In essence, we find that the crowd wisdom in the microblog domain matches that of the experts in most cases. There are cases, however, where they don't match but we observe that the crowd's sentiments are actually affected by the experts. This can be seen in our analysis of the presidential debates.
The rest of the paper is organized as follows: in section SECREF2, we review some of the literature. In section SECREF3, we discuss the collection and preprocessing of the data. Section SECREF4 details the approach taken, along with the features and the machine learning methods used. Section SECREF7 discusses the results of the experiments conducted and lastly section SECREF8 ends with a conclusion on the results including certain limitations and scopes for improvement to work on in the future.
Related Work
Sentiment analysis as a Natural Language Processing task has been handled at many levels of granularity. Specifically on the microblog front, some of the early results on sentiment analysis are by BIBREF0, BIBREF1, BIBREF2, BIBREF5, BIBREF6. Go et al. BIBREF0 applied distant supervision to classify tweet sentiment by using emoticons as noisy labels. Kouloumpis et al. BIBREF7 exploited hashtags in tweets to build training data. Chenhao Tan et al. BIBREF8 determined user-level sentiments on particular topics with the help of the social network graph.
There has been some work in event detection and extraction in microblogs as well. In BIBREF9, the authors describe a way to extract major life events of a user based on tweets that either congratulate/offer condolences. BIBREF10 build a key-word graph from the data and then detect communities in this graph (cluster) to find events. This works because words that describe similar events will form clusters. In BIBREF11, the authors use distant supervision to extract events. There has also been some work on event retrieval in microblogs BIBREF12. In BIBREF13, the authors detect time points in the twitter stream when an important event happens and then classify such events based on the sentiments they evoke using only non-textual features to do so. In BIBREF14, the authors study how much of the opinion extracted from Online Social Networks (OSN) user data is reflective of the opinion of the larger population. Researchers have also mined Twitter dataset to analyze public reaction to various events: from election debate performance BIBREF15, where the authors demonstrate visuals and metrics that can be used to detect sentiment pulse, anomalies in that pulse, and indications of controversial topics that can be used to inform the design of visual analytic systems for social media events, to movie box-office predictions on the release day BIBREF16. Mishne and Glance BIBREF17 correlate sentiments in blog posts with movie box-office scores. The correlations they observed for positive sentiments are fairly low and not sufficient to use for predictive purposes. Recently, several approaches involving machine learning and deep learning have also been used in the visual and language domains BIBREF18, BIBREF19, BIBREF20, BIBREF21, BIBREF22, BIBREF23, BIBREF24.
Data Set and Preprocessing ::: Data Collection
Twitter is a social networking and microblogging service that allows users to post real-time messages, called tweets. Tweets are very short messages, a maximum of 140 characters in length. Due to such a restriction in length, people tend to use a lot of acronyms, shorten words etc. In essence, the tweets are usually very noisy. There are several aspects to tweets such as: 1) Target: Users use the symbol “@" in their tweets to refer to other users on the microblog. 2) Hashtag: Hashtags are used by users to mark topics. This is done to increase the visibility of the tweets.
We conduct experiments on 3 different datasets, as mentioned earlier: 1) US Presidential Debates, 2) Grammy Awards 2013, 3) Superbowl 2013. To construct our presidential debates dataset, we have used the Twitter Search API to collect the tweets. Since there was no publicly available dataset for this, we had to collect the data manually. The data was collected on 10 different presidential debates: 7 republican and 3 democratic, from October 2015 to March 2016. Different hashtags like “#GOP, #GOPDebate” were used to filter out tweets specific to the debate. This is given in Table TABREF2. We extracted only english tweets for our dataset. We collected a total of 104961 tweets were collected across all the debates. But there were some limitations with the API. Firstly, the server imposes a rate limit and discards tweets when the limit is reached. The second problem is that the API returns many duplicates. Thus, after removing the duplicates and irrelevant tweets, we were left with a total of 17304 tweets. This includes the tweets only on the day of the debate. We also collected tweets on the days following the debate.
As for the other two datasets, we collected them from available-online repositories. There were a total of 2580062 tweets for the Grammy Awards 2013, and a total of 2428391 tweets for the Superbowl 2013. The statistics are given in Tables TABREF3 and TABREF3. The tweets for the grammy were before the ceremony and during. However, we only use the tweets before the ceremony (after the nominations were announced), to predict the winners. As for the superbowl, the tweets collected were during the game. But we can predict interesting things like Most Valuable Player etc. from the tweets. The tweets for both of these datasets were annotated and thus did not require any human intervention. However, the tweets for the debates had to be annotated.
Since we are using a supervised approach in this paper, we have all the tweets (for debates) in the training set human-annotated. The tweets were already annotated for the Grammys and Super Bowl. Some statistics about our datasets are presented in Tables TABREF3, TABREF3 and TABREF3. The annotations for the debate dataset comprised of 2 labels for each tweet: 1) Candidate: This is the candidate of the debate to whom the tweet refers to, 2) Sentiment: This represents the sentiment of the tweet towards that candidate. This is either positive or negative.
The task then becomes a case of multi-label classification. The candidate labels are not so trivial to obtain, because there are tweets that do not directly contain any candidates' name. For example, the tweets, “a business man for president??” and “a doctor might sure bring about a change in America!” are about Donal Trump and Ben Carson respectively. Thus, it is meaningful to have a multi-label task.
The annotations for the other two datasets are similar, in that one of the labels was the sentiment and the other was category-dependent in the outcome-prediction task, as discussed in the sections below. For example, if we are trying to predict the "Album of the Year" winners for the Grammy dataset, the second label would be the nominees for that category (album of the year).
Data Set and Preprocessing ::: Preprocessing
As noted earlier, tweets are generally noisy and thus require some preprocessing done before using them. Several filters were applied to the tweets such as: (1) Usernames: Since users often include usernames in their tweets to direct their message, we simplify it by replacing the usernames with the token “USER”. For example, @michael will be replaced by USER. (2) URLs: In most of the tweets, users include links that add on to their text message. We convert/replace the link address to the token “URL”. (3) Repeated Letters: Oftentimes, users use repeated letters in a word to emphasize their notion. For example, the word “lol” (which stands for “laugh out loud”) is sometimes written as “looooool” to emphasize the degree of funnyness. We replace such repeated occurrences of letters (more than 2), with just 3 occurrences. We replace with 3 occurrences and not 2, so that we can distinguish the exaggerated usage from the regular ones. (4) Multiple Sentiments: Tweets which contain multiple sentiments are removed, such as "I hate Donald Trump, but I will vote for him". This is done so that there is no ambiguity. (5) Retweets: In Twitter, many times tweets of a person are copied and posted by another user. This is known as retweeting and they are commonly abbreviated with “RT”. These are removed and only the original tweets are processed. (6) Repeated Tweets: The Twitter API sometimes returns a tweet multiple times. We remove such duplicates to avoid putting extra weight on any particular tweet.
Methodology ::: Procedure
Our analysis of the debates is 3-fold including sentiment analysis, outcome prediction, and trend analysis.
Sentiment Analysis: To perform a sentiment analysis on the debates, we first extract all the features described below from all the tweets in the training data. We then build the different machine learning models described below on these set of features. After that, we evaluate the output produced by the models on unseen test data. The models essentially predict candidate-sentiment pairs for each tweet.
Outcome Prediction: Predict the outcome of the debates. After obtaining the sentiments on the test data for each tweet, we can compute the net normalized sentiment for each presidential candidate in the debate. This is done by looking at the number of positive and negative sentiments for each candidate. We then normalize the sentiment scores of each candidate to be in the same scale (0-1). After that, we rank the candidates based on the sentiment scores and predict the top $k$ as the winners.
Trend Analysis: We also analyze some certain trends of the debates. Firstly, we look at the change in sentiments of the users towards the candidates over time (hours, days, months). This is done by computing the sentiment scores for each candidate in each of the debates and seeing how it varies over time, across debates. Secondly, we examine the effect of Washington Post on the views of the users. This is done by looking at the sentiments of the candidates (to predict winners) of a debate before and after the winners are announced by the experts in Washington Post. This way, we can see if Washington Post has had any effect on the sentiments of the users. Besides that, to study the behavior of the users, we also look at the correlation of the tweet volume with the number of viewers as well as the variation of tweet volume over time (hours, days, months) for debates.
As for the Grammys and the Super Bowl, we only perform the sentiment analysis and predict the outcomes.
Methodology ::: Machine Learning Models
We compare 4 different models for performing our task of sentiment classification. We then pick the best performing model for the task of outcome prediction. Here, we have two categories of algorithms: single-label and multi-label (We already discussed above why it is meaningful to have a multi-label task earlier), because one can represent $<$candidate, sentiment$>$ as a single class label or have candidate and sentiment as two separate labels. They are listed below:
Methodology ::: Machine Learning Models ::: Single-label Classification
Naive Bayes: We use a multinomial Naive Bayes model. A tweet $t$ is assigned a class $c^{*}$ such that
where there are $m$ features and $f_i$ represents the $i^{th}$ feature.
Support Vector Machines: Support Vector Machines (SVM) constructs a hyperplane or a set of hyperplanes in a high-dimensional space, which can then be used for classification. In our case, we use SVM with Sequential Minimal Optimization (SMO) BIBREF25, which is an algorithm for solving the quadratic programming (QP) problem that arises during the training of SVMs.
Elman Recurrent Neural Network: Recurrent Neural Networks (RNNs) are gaining popularity and are being applied to a wide variety of problems. They are a class of artificial neural networks, where connections between units form a directed cycle. This creates an internal state of the network which allows it to exhibit dynamic temporal behavior. The Elman RNN was proposed by Jeff Elman in the year 1990 BIBREF26. We use this in our task.
Methodology ::: Machine Learning Models ::: Multi-label Classification
RAkEL (RAndom k labELsets): RAkEL BIBREF27 is a multi-label classification algorithm that uses labeled powerset (LP) transformation: it basically creates a single binary classifier for every label combination and then uses multiple LP classifiers, each trained on a random subset of the actual labels, for classification.
Methodology ::: Feature Space
In order to classify the tweets, a set of features is extracted from each of the tweets, such as n-gram, part-of-speech etc. The details of these features are given below:
n-gram: This represents the frequency counts of n-grams, specifically that of unigrams and bigrams.
punctuation: The number of occurrences of punctuation symbols such as commas, exclamation marks etc.
POS (part-of-speech): The frequency of each POS tagger is used as the feature.
prior polarity scoring: Here, we obtain the prior polarity of the words BIBREF6 using the Dictionary of Affect in Language (DAL) BIBREF28. This dictionary (DAL) of about 8000 English words assigns a pleasantness score to each word on a scale of 1-3. After normalizing, we can assign the words with polarity higher than $0.8$ as positive and less than $0.5$ as negative. If a word is not present in the dictionary, we lookup its synonyms in WordNet: if this word is there in the dictionary, we assign the original word its synonym's score.
Twitter Specific features:
Number of hashtags ($\#$ symbol)
Number of mentioning users ( symbol)
Number of hyperlinks
Document embedding features: Here, we use the approach proposed by Mikolov et al. BIBREF3 to embed an entire tweet into a vector of features
Topic features: Here, LDA (Latent Dirichlet Allocation) BIBREF4 is used to extract topic-specific features for a tweet (document). This is basically the topic-document probability that is outputted by the model.
In the following experiments, we use 1-$gram$, 2-$gram$ and $(1+2)$-$gram$ to denote unigram, bigram and a combination of unigram and bigram features respectively. We also combine punctuation and the other features as miscellaneous features and use $MISC$ to denote this. We represent the document-embedding features by $DOC$ and topic-specific features by $TOPIC$.
Data Analysis
In this section, we analyze the presidential debates data and show some trends.
First, we look at the trend of the tweet frequency. Figure FIGREF21 shows the trends of the tweet frequency and the number of TV viewers as the debates progress over time. We observe from Figures FIGREF21 and FIGREF21 that for the first 5 debates considered, the trend of the number of TV viewers matches the trend of the number of tweets. However, we can see that towards the final debates, the frequency of the tweets decreases consistently. This shows an interesting fact that although the people still watch the debates, the number of people who tweet about it are greatly reduced. But the tweeting community are mainly youngsters and this shows that most of the tweeting community, who actively tweet, didn't watch the later debates. Because if they did, then the trends should ideally match.
Next we look at how the tweeting activity is on days of the debate: specifically on the day of the debate, the next day and two days later. Figure FIGREF22 shows the trend of the tweet frequency around the day of the 5th republican debate, i.e December 15, 2015. As can be seen, the average number of people tweet more on the day of the debate than a day or two after it. This makes sense intuitively because the debate would be fresh in their heads.
Then, we look at how people tweet in the hours of the debate: specifically during the debate, one hour after and then two hours after. Figure FIGREF23 shows the trend of the tweet frequency around the hour of the 5th republican debate, i.e December 15, 2015. We notice that people don't tweet much during the debate but the activity drastically increases after two hours. This might be because people were busy watching the debate and then taking some time to process things, so that they can give their opinion.
We have seen the frequency of tweets over time in the previous trends. Now, we will look at how the sentiments of the candidates change over time.
First, Figure FIGREF24 shows how the sentiments of the candidates changed across the debates. We find that many of the candidates have had ups and downs towards in the debates. But these trends are interesting in that, it gives some useful information about what went down in the debate that caused the sentiments to change (sometimes drastically). For example, if we look at the graph for Donald Trump, we see that his sentiment was at its lowest point during the debate held on December 15. Looking into the debate, we can easily see why this was the case. At a certain point in the debate, Trump was asked about his ideas for the nuclear triad. It is very important that a presidential candidate knows about this, but Trump had no idea what the nuclear triad was and, in a transparent attempt to cover his tracks, resorted to a “we need to be strong" speech. It can be due to this embarrassment that his sentiment went down during this debate.
Next, we investigate how the sentiments of the users towards the candidates change before and after the debate. In essence, we examine how the debate and the results of the debates given by the experts affects the sentiment of the candidates. Figure FIGREF25 shows the sentiments of the users towards the candidate during the 5th Republican Debate, 15th December 2015. It can be seen that the sentiments of the users towards the candidates does indeed change over the course of two days. One particular example is that of Jeb Bush. It seems that the populace are generally prejudiced towards the candidates, which is reflected in their sentiments of the candidates on the day of the debate. The results of the Washington Post are released in the morning after the debate. One can see the winners suggested by the Washington Post in Table TABREF35. One of the winners in that debate according to them is Jeb Bush. Coincidentally, Figure FIGREF25 suggests that the sentiment of Bush has gone up one day after the debate (essentially, one day after the results given by the experts are out).
There is some influence, for better or worse, of these experts on the minds of the users in the early debates, but towards the final debates the sentiments of the users are mostly unwavering, as can be seen in Figure FIGREF25. Figure FIGREF25 is on the last Republican debate, and suggests that the opinions of the users do not change much with time. Essentially the users have seen enough debates to make up their own minds and their sentiments are not easily wavered.
Evaluation Metrics
In this section, we define the different evaluation metrics that we use for different tasks. We have two tasks at hand: 1) Sentiment Analysis, 2) Outcome Prediction. We use different metrics for these two tasks.
Evaluation Metrics ::: Sentiment Analysis
In the study of sentiment analysis, we use “Hamming Loss” to evaluate the performance of the different methods. Hamming Loss, based on Hamming distance, takes into account the prediction error and the missing error, normalized over the total number of classes and total number of examples BIBREF29. The Hamming Loss is given below:
where $|D|$ is the number of examples in the dataset and $|L|$ is the number of labels. $S_i$ and $Y_i$ denote the sets of true and predicted labels for instance $i$ respectively. $\oplus $ stands for the XOR operation BIBREF30. Intuitively, the performance is better, when the Hamming Loss is smaller. 0 would be the ideal case.
Evaluation Metrics ::: Outcome Prediction
For the case of outcome prediction, we will have a predicted set and an actual set of results. Thus, we can use common information retrieval metrics to evaluate the prediction performance. Those metrics are listed below:
Mean F-measure: F-measure takes into account both the precision and recall of the results. In essence, it takes into account how many of the relevant results are returned and also how many of the returned results are relevant.
where $|D|$ is the number of queries (debates/categories for grammy winners etc.), $P_i$ and $R_i$ are the precision and recall for the $i^{th}$ query.
Mean Average Precision: As a standard metric used in information retrieval, Mean Average Precision for a set of queries is mean of the average precision scores for each query:
where $|D|$ is the number of queries (e.g., debates), $P_i(k)$ is the precision at $k$ ($P@k$) for the $i^{th}$ query, $rel_i(k)$ is an indicator function, equaling 1 if the document at position $k$ for the $i^th$ query is relevant, else 0, and $|RD_i|$ is the number of relevant documents for the $i^{th}$ query.
Results ::: Sentiment Analysis
We use 3 different datasets for the problem of sentiment analysis, as already mentioned. We test the four different algorithms mentioned in Section SECREF6, with a different combination of features that are described in Section SECREF10. To evaluate our models, we use the “Hamming Loss” metric as discussed in Section SECREF6. We use this metric because our problem is in the multi-class classification domain. However, the single-label classifiers like SVM, Naive Bayes, Elman RNN cannot be evaluated against this metric directly. To do this, we split the predicted labels into a format that is consistent with that of multi-label classifiers like RaKel. The results of the experiments for each of the datasets are given in Tables TABREF34, TABREF34 and TABREF34. In the table, $f_1$, $f_2$, $f_3$, $f_4$, $f_5$ and $f_6$ denote the features 1-$gram$, 2-$gram$, $(1+2)$-$gram$, $(1+2)$-$gram + MISC$, $DOC$ and $DOC + TOPIC$ respectively. Note that lower values of hamming losses are more desirable. We find that RaKel performs the best out of all the algorithms. RaKel is more suited for the task because our task is a multi-class classification. Among all the single-label classifiers, SVM performs the best. We also observe that as we use more complex feature spaces, the performance increases. This is true for almost all of the algorithms listed.
Our best performing features is a combination of paragraph embedding features and topic features from LDA. This makes sense intuitively because paragraph-embedding takes into account the context in the text. Context is very important in determining the sentiment of a short text: having negative words in the text does not always mean that the text contains a negative sentiment. For example, the sentence “never say never is not a bad thing” has many negative words; but the sentence as a whole does not have a negative sentiment. This is why we need some kind of context information to accurately determine the sentiment. Thus, with these embedded features, one would be able to better determine the polarity of the sentence. The other label is the entity (candidate/song etc.) in consideration. Topic features here make sense because this can be considered as the topic of the tweet in some sense. This can be done because that label captures what or whom the tweet is about.
Results ::: Results for Outcome Prediction
In this section, we show the results for the outcome-prediction of the events. RaKel, as the best performing method, is trained to predict the sentiment-labels for the unlabeled data. The sentiment labels are then used to determine the outcome of the events. In the Tables (TABREF35, TABREF36, TABREF37) of outputs given, we only show as many predictions as there are winners.
Results ::: Results for Outcome Prediction ::: Presidential Debates
The results obtained for the outcome prediction task for the US presidential debates is shown in Table TABREF35. Table TABREF35 shows the winners as given in the Washington Post (3rd column) and the winners that are predicted by our system (2nd column). By comparing the set of results obtained from both the sources, we find that the set of candidates predicted match to a large extent with the winners given out by the Washington Post. The result suggests that the opinions of the social media community match with that of the journalists in most cases.
Results ::: Results for Outcome Prediction ::: Grammy Awards
A Grammy Award is given for outstanding achievement in the music industry. There are two types of awards: “General Field” awards, which are not restricted by genre, and genre-specific awards. Since there can be up to 80 categories of awards, we only focus on the main 4: 1) Album of the Year, 2) Record of the Year, 3) Song of the Year, and 4) Best New Artist. These are the main categories in the awards ceremony and the most looked forward to. That is also why we choose to predict the outcomes of these categories based on the tweets. We use the tweets before the ceremony (but after the nominations) to predict the outcomes.
Basically, we have a list of nominations for each category. We filter the tweets based on these nominations and then predict the winner as with the case of the debates. The outcomes are listed in Table TABREF36. We see that, largely, the opinions of the users on the social network agree with those of the deciding committee of the awards. The winners agree for all the categories except “Song of the Year”.
Results ::: Results for Outcome Prediction ::: Super Bowl
The Super Bowl is the annual championship game of the National Football League. We have collected the data for the year 2013. Here, the match was between the Baltimore Ravens and the San Francisco 49ers. The tweets that we have collected are during the game. From these tweets, one could trivially determine the winner. But an interesting outcome would be to predict the Most Valuable Player (MVP) during the game. To determine this, all the tweets were looked at and we determined the candidate with the highest positive sentiment by the end of the game. The result in Table TABREF37 suggests that we are able to determine the outcomes accurately.
Table TABREF43 displays some evaluation metrics for this task. These were computed based on the predicted outcomes and the actual outcomes for each of the different datasets. Since the number of participants varies from debate-to-debate or category-to-category for Grammy etc., we cannot return a fixed number of winners for everything. So, the size of our returned ranked-list is set to half of the number of participants (except for the MVP for Super Bowl; there are so many players and returning half of them when only one of them is relevant is meaningless. So, we just return the top 10 players). As we can see from the metrics, the predicted outcomes match quite well with the actual ones (or the ones given by the experts).
Conclusions
This paper presents a study that compares the opinions of users on microblogs, which is essentially the crowd wisdom, to that of the experts in the field. Specifically, we explore three datasets: US Presidential Debates 2015-16, Grammy Awards 2013, Super Bowl 2013. We determined if the opinions of the crowd and the experts match by using the sentiments of the tweets to predict the outcomes of the debates/Grammys/Super Bowl. We observed that in most of the cases, the predictions were right indicating that crowd wisdom is indeed worth looking at and mining sentiments in microblogs is useful. In some cases where there were disagreements, however, we observed that the opinions of the experts did have some influence on the opinions of the users. We also find that the features that were most useful in our case of multi-label classification was a combination of the document-embedding and topic features. | Unanswerable |
d3a1a53521f252f869fdae944db986931d9ffe48 | d3a1a53521f252f869fdae944db986931d9ffe48_0 | Q: Who are the experts?
Text: Introduction
Over the past few years, microblogs have become one of the most popular online social networks. Microblogging websites have evolved to become a source of varied kinds of information. This is due to the nature of microblogs: people post real-time messages about their opinions and express sentiment on a variety of topics, discuss current issues, complain, etc. Twitter is one such popular microblogging service where users create status messages (called “tweets"). With over 400 million tweets per day on Twitter, microblog users generate large amount of data, which cover very rich topics ranging from politics, sports to celebrity gossip. Because the user generated content on microblogs covers rich topics and expresses sentiment/opinions of the mass, mining and analyzing this information can prove to be very beneficial both to the industrial and the academic community. Tweet classification has attracted considerable attention because it has become very important to analyze peoples' sentiments and opinions over social networks.
Most of the current work on analysis of tweets is focused on sentiment analysis BIBREF0, BIBREF1, BIBREF2. Not much has been done on predicting outcomes of events based on the tweet sentiments, for example, predicting winners of presidential debates based on the tweets by analyzing the users' sentiments. This is possible intuitively because the sentiment of the users in their tweets towards the candidates is proportional to the performance of the candidates in the debate.
In this paper, we analyze three such events: 1) US Presidential Debates 2015-16, 2) Grammy Awards 2013, and 3) Super Bowl 2013. The main focus is on the analysis of the presidential debates. For the Grammys and the Super Bowl, we just perform sentiment analysis and try to predict the outcomes in the process. For the debates, in addition to the analysis done for the Grammys and Super Bowl, we also perform a trend analysis. Our analysis of the tweets for the debates is 3-fold as shown below.
Sentiment: Perform a sentiment analysis on the debates. This involves: building a machine learning model which learns the sentiment-candidate pair (candidate is the one to whom the tweet is being directed) from the training data and then using this model to predict the sentiment-candidate pairs of new tweets.
Predicting Outcome: Here, after predicting the sentiment-candidate pairs on the new data, we predict the winner of the debates based on the sentiments of the users.
Trends: Here, we analyze certain trends of the debates like the change in sentiments of the users towards the candidates over time (hours, days, months) and how the opinion of experts such as Washington Post affect the sentiments of the users.
For the sentiment analysis, we look at our problem in a multi-label setting, our two labels being sentiment polarity and the candidate/category in consideration. We test both single-label classifiers and multi-label ones on the problem and as intuition suggests, the multi-label classifier RaKel performs better. A combination of document-embedding features BIBREF3 and topic features (essentially the document-topic probabilities) BIBREF4 is shown to give the best results. These features make sense intuitively because the document-embedding features take context of the text into account, which is important for sentiment polarity classification, and topic features take into account the topic of the tweet (who/what is it about).
The prediction of outcomes of debates is very interesting in our case. Most of the results seem to match with the views of some experts such as the political pundits of the Washington Post. This implies that certain rules that were used to score the candidates in the debates by said-experts were in fact reflected by reading peoples' sentiments expressed over social media. This opens up a wide variety of learning possibilities from users' sentiments on social media, which is sometimes referred to as the wisdom of crowd.
We do find out that the public sentiments are not always coincident with the views of the experts. In this case, it is interesting to check whether the views of the experts can affect the public, for example, by spreading through the social media microblogs such as Twitter. Hence, we also conduct experiments to compare the public sentiment before and after the experts' views become public and thus notice the impact of the experts' views on the public sentiment. In our analysis of the debates, we observe that in certain debates, such as the 5th Republican Debate, held on December 15, 2015, the opinions of the users vary from the experts. But we see the effect of the experts on the sentiment of the users by looking at their opinions of the same candidates the next day.
Our contributions are mainly: we want to see how predictive the sentiment/opinion of the users are in social media microblogs and how it compares to that of the experts. In essence, we find that the crowd wisdom in the microblog domain matches that of the experts in most cases. There are cases, however, where they don't match but we observe that the crowd's sentiments are actually affected by the experts. This can be seen in our analysis of the presidential debates.
The rest of the paper is organized as follows: in section SECREF2, we review some of the literature. In section SECREF3, we discuss the collection and preprocessing of the data. Section SECREF4 details the approach taken, along with the features and the machine learning methods used. Section SECREF7 discusses the results of the experiments conducted and lastly section SECREF8 ends with a conclusion on the results including certain limitations and scopes for improvement to work on in the future.
Related Work
Sentiment analysis as a Natural Language Processing task has been handled at many levels of granularity. Specifically on the microblog front, some of the early results on sentiment analysis are by BIBREF0, BIBREF1, BIBREF2, BIBREF5, BIBREF6. Go et al. BIBREF0 applied distant supervision to classify tweet sentiment by using emoticons as noisy labels. Kouloumpis et al. BIBREF7 exploited hashtags in tweets to build training data. Chenhao Tan et al. BIBREF8 determined user-level sentiments on particular topics with the help of the social network graph.
There has been some work in event detection and extraction in microblogs as well. In BIBREF9, the authors describe a way to extract major life events of a user based on tweets that either congratulate/offer condolences. BIBREF10 build a key-word graph from the data and then detect communities in this graph (cluster) to find events. This works because words that describe similar events will form clusters. In BIBREF11, the authors use distant supervision to extract events. There has also been some work on event retrieval in microblogs BIBREF12. In BIBREF13, the authors detect time points in the twitter stream when an important event happens and then classify such events based on the sentiments they evoke using only non-textual features to do so. In BIBREF14, the authors study how much of the opinion extracted from Online Social Networks (OSN) user data is reflective of the opinion of the larger population. Researchers have also mined Twitter dataset to analyze public reaction to various events: from election debate performance BIBREF15, where the authors demonstrate visuals and metrics that can be used to detect sentiment pulse, anomalies in that pulse, and indications of controversial topics that can be used to inform the design of visual analytic systems for social media events, to movie box-office predictions on the release day BIBREF16. Mishne and Glance BIBREF17 correlate sentiments in blog posts with movie box-office scores. The correlations they observed for positive sentiments are fairly low and not sufficient to use for predictive purposes. Recently, several approaches involving machine learning and deep learning have also been used in the visual and language domains BIBREF18, BIBREF19, BIBREF20, BIBREF21, BIBREF22, BIBREF23, BIBREF24.
Data Set and Preprocessing ::: Data Collection
Twitter is a social networking and microblogging service that allows users to post real-time messages, called tweets. Tweets are very short messages, a maximum of 140 characters in length. Due to such a restriction in length, people tend to use a lot of acronyms, shorten words etc. In essence, the tweets are usually very noisy. There are several aspects to tweets such as: 1) Target: Users use the symbol “@" in their tweets to refer to other users on the microblog. 2) Hashtag: Hashtags are used by users to mark topics. This is done to increase the visibility of the tweets.
We conduct experiments on 3 different datasets, as mentioned earlier: 1) US Presidential Debates, 2) Grammy Awards 2013, 3) Super Bowl 2013. To construct our presidential debates dataset, we have used the Twitter Search API to collect the tweets. Since there was no publicly available dataset for this, we had to collect the data manually. The data was collected on 10 different presidential debates: 7 Republican and 3 Democratic, from October 2015 to March 2016. Hashtags like “#GOP” and “#GOPDebate” were used to select tweets specific to the debates. This is given in Table TABREF2. We extracted only English tweets for our dataset. A total of 104961 tweets were collected across all the debates. But there were some limitations with the API. Firstly, the server imposes a rate limit and discards tweets when the limit is reached. The second problem is that the API returns many duplicates. Thus, after removing the duplicates and irrelevant tweets, we were left with a total of 17304 tweets. This includes only the tweets posted on the day of each debate. We also collected tweets on the days following the debates.
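As an illustration of this filtering step, a minimal Python sketch is given below; the tweet dictionaries, the field names ("id", "text", "lang") and the hashtag list are assumptions made for the example, not the exact schema returned by the Twitter Search API.

DEBATE_TAGS = {"#gop", "#gopdebate", "#demdebate"}  # hypothetical hashtag list

def filter_debate_tweets(tweets):
    """Keep English, debate-specific tweets and drop duplicates returned by the API."""
    seen_ids, kept = set(), []
    for tw in tweets:
        if tw.get("lang") != "en":                     # keep only English tweets
            continue
        if tw["id"] in seen_ids:                       # drop duplicates returned by the API
            continue
        if not any(tag in tw["text"].lower() for tag in DEBATE_TAGS):
            continue                                   # keep only debate-related tweets
        seen_ids.add(tw["id"])
        kept.append(tw)
    return kept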
As for the other two datasets, we collected them from available online repositories. There were a total of 2580062 tweets for the Grammy Awards 2013, and a total of 2428391 tweets for the Super Bowl 2013. The statistics are given in Tables TABREF3 and TABREF3. The tweets for the Grammys were collected before and during the ceremony. However, we only use the tweets before the ceremony (after the nominations were announced) to predict the winners. As for the Super Bowl, the tweets were collected during the game. But we can predict interesting things like the Most Valuable Player etc. from the tweets. The tweets for both of these datasets were already annotated and thus did not require any human intervention. However, the tweets for the debates had to be annotated.
Since we are using a supervised approach in this paper, we have all the tweets (for debates) in the training set human-annotated. The tweets were already annotated for the Grammys and Super Bowl. Some statistics about our datasets are presented in Tables TABREF3, TABREF3 and TABREF3. The annotations for the debate dataset comprised of 2 labels for each tweet: 1) Candidate: This is the candidate of the debate to whom the tweet refers to, 2) Sentiment: This represents the sentiment of the tweet towards that candidate. This is either positive or negative.
The task then becomes a case of multi-label classification. The candidate labels are not so trivial to obtain, because there are tweets that do not directly contain any candidate's name. For example, the tweets, “a business man for president??” and “a doctor might sure bring about a change in America!” are about Donald Trump and Ben Carson respectively. Thus, it is meaningful to have a multi-label task.
The annotations for the other two datasets are similar, in that one of the labels was the sentiment and the other was category-dependent in the outcome-prediction task, as discussed in the sections below. For example, if we are trying to predict the "Album of the Year" winners for the Grammy dataset, the second label would be the nominees for that category (album of the year).
Data Set and Preprocessing ::: Preprocessing
As noted earlier, tweets are generally noisy and thus require some preprocessing done before using them. Several filters were applied to the tweets such as: (1) Usernames: Since users often include usernames in their tweets to direct their message, we simplify it by replacing the usernames with the token “USER”. For example, @michael will be replaced by USER. (2) URLs: In most of the tweets, users include links that add on to their text message. We convert/replace the link address to the token “URL”. (3) Repeated Letters: Oftentimes, users use repeated letters in a word to emphasize their notion. For example, the word “lol” (which stands for “laugh out loud”) is sometimes written as “looooool” to emphasize the degree of funnyness. We replace such repeated occurrences of letters (more than 2), with just 3 occurrences. We replace with 3 occurrences and not 2, so that we can distinguish the exaggerated usage from the regular ones. (4) Multiple Sentiments: Tweets which contain multiple sentiments are removed, such as "I hate Donald Trump, but I will vote for him". This is done so that there is no ambiguity. (5) Retweets: In Twitter, many times tweets of a person are copied and posted by another user. This is known as retweeting and they are commonly abbreviated with “RT”. These are removed and only the original tweets are processed. (6) Repeated Tweets: The Twitter API sometimes returns a tweet multiple times. We remove such duplicates to avoid putting extra weight on any particular tweet.
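A minimal Python sketch of filters (1)-(3), (5) and (6) is shown below; the regular expressions are approximations of the rules described above rather than the authors' exact implementation, and filter (4) is omitted because it required manual inspection.

import re

def preprocess(tweets):
    cleaned, seen = [], set()
    for text in tweets:
        if text.startswith("RT"):                       # (5) drop retweets
            continue
        text = re.sub(r"@\w+", "USER", text)            # (1) usernames -> USER
        text = re.sub(r"https?://\S+", "URL", text)     # (2) hyperlinks -> URL
        text = re.sub(r"(\w)\1{2,}", r"\1\1\1", text)   # (3) more than 2 repeated letters -> 3
        if text in seen:                                # (6) drop repeated tweets
            continue
        seen.add(text)
        cleaned.append(text)
    return cleaned

print(preprocess(["@michael looooool http://t.co/xyz", "RT @user copied tweet"]))
# -> ['USER loool URL']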
Methodology ::: Procedure
Our analysis of the debates is 3-fold including sentiment analysis, outcome prediction, and trend analysis.
Sentiment Analysis: To perform a sentiment analysis on the debates, we first extract all the features described below from all the tweets in the training data. We then build the different machine learning models described below on these set of features. After that, we evaluate the output produced by the models on unseen test data. The models essentially predict candidate-sentiment pairs for each tweet.
Outcome Prediction: Predict the outcome of the debates. After obtaining the sentiments on the test data for each tweet, we can compute the net normalized sentiment for each presidential candidate in the debate. This is done by looking at the number of positive and negative sentiments for each candidate. We then normalize the sentiment scores of each candidate to be in the same scale (0-1). After that, we rank the candidates based on the sentiment scores and predict the top $k$ as the winners.
Trend Analysis: We also analyze some certain trends of the debates. Firstly, we look at the change in sentiments of the users towards the candidates over time (hours, days, months). This is done by computing the sentiment scores for each candidate in each of the debates and seeing how it varies over time, across debates. Secondly, we examine the effect of Washington Post on the views of the users. This is done by looking at the sentiments of the candidates (to predict winners) of a debate before and after the winners are announced by the experts in Washington Post. This way, we can see if Washington Post has had any effect on the sentiments of the users. Besides that, to study the behavior of the users, we also look at the correlation of the tweet volume with the number of viewers as well as the variation of tweet volume over time (hours, days, months) for debates.
As for the Grammys and the Super Bowl, we only perform the sentiment analysis and predict the outcomes.
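The outcome-prediction step described above can be summarised by the following Python sketch; the (candidate, sentiment) pairs are assumed to be the classifier output, and the example pairs are invented for illustration.

from collections import Counter

def predict_winners(pairs, k):
    """Net sentiment per candidate, min-max normalised to [0, 1], top k returned."""
    score = Counter()
    for candidate, sentiment in pairs:
        score[candidate] += 1 if sentiment == "positive" else -1
    lo, hi = min(score.values()), max(score.values())
    norm = {c: (s - lo) / (hi - lo) if hi > lo else 0.5 for c, s in score.items()}
    return sorted(norm, key=norm.get, reverse=True)[:k]

pairs = [("Bush", "positive"), ("Trump", "negative"), ("Bush", "positive"),
         ("Cruz", "positive"), ("Trump", "positive")]
print(predict_winners(pairs, k=2))  # -> ['Bush', 'Cruz']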
Methodology ::: Machine Learning Models
We compare 4 different models for performing our task of sentiment classification. We then pick the best performing model for the task of outcome prediction. Here, we have two categories of algorithms: single-label and multi-label (We already discussed above why it is meaningful to have a multi-label task earlier), because one can represent $<$candidate, sentiment$>$ as a single class label or have candidate and sentiment as two separate labels. They are listed below:
Methodology ::: Machine Learning Models ::: Single-label Classification
Naive Bayes: We use a multinomial Naive Bayes model. A tweet $t$ is assigned a class $c^{*}$ such that $c^{*} = \operatorname{argmax}_{c} P(c) \prod_{i=1}^{m} P(f_i \mid c)$, where there are $m$ features and $f_i$ represents the $i^{th}$ feature.
Support Vector Machines: Support Vector Machines (SVM) constructs a hyperplane or a set of hyperplanes in a high-dimensional space, which can then be used for classification. In our case, we use SVM with Sequential Minimal Optimization (SMO) BIBREF25, which is an algorithm for solving the quadratic programming (QP) problem that arises during the training of SVMs.
Elman Recurrent Neural Network: Recurrent Neural Networks (RNNs) are gaining popularity and are being applied to a wide variety of problems. They are a class of artificial neural networks, where connections between units form a directed cycle. This creates an internal state of the network which allows it to exhibit dynamic temporal behavior. The Elman RNN was proposed by Jeff Elman in the year 1990 BIBREF26. We use this in our task.
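For concreteness, the sketch below trains two of these single-label baselines on $(1+2)$-gram counts with scikit-learn; LinearSVC merely stands in for the SMO-trained SVM used here, the Elman RNN is omitted, and the toy tweets and class labels are invented for the example.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

tweets = ["USER great answer on immigration tonight",
          "USER had no idea about the nuclear triad",
          "strong debate performance from the governor",
          "weak and evasive answers all night"]
labels = ["Trump_positive", "Trump_negative", "Bush_positive", "Bush_negative"]

for clf in (MultinomialNB(), LinearSVC()):
    # single class label = <candidate, sentiment> pair; features = unigram + bigram counts
    model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), clf)
    model.fit(tweets, labels)
    print(type(clf).__name__, model.predict(["no idea about the triad"]))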
Methodology ::: Machine Learning Models ::: Multi-label Classification
RAkEL (RAndom k labELsets): RAkEL BIBREF27 is a multi-label classification algorithm that uses the label powerset (LP) transformation: LP treats every distinct combination of labels as a single class, and RAkEL trains multiple LP classifiers, each on a random subset of the actual labels, and combines their predictions by voting for classification.
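A from-scratch sketch of the RAkEL idea is given below: label powerset classifiers trained on random k-labelsets and combined by voting. The base_clf_factory argument (any scikit-learn-style classifier factory) and the hyperparameter values are assumptions for illustration, not the exact configuration used in the experiments.

import random
from itertools import combinations
from collections import defaultdict

class RakelSketch:
    """Label powerset (LP) classifiers over random k-labelsets, combined by voting."""

    def __init__(self, base_clf_factory, k=2, n_models=5, seed=0):
        self.factory, self.k, self.n_models = base_clf_factory, k, n_models
        self.rng = random.Random(seed)

    def fit(self, X, Y):                                  # Y: one set of labels per example
        all_labels = sorted(set().union(*Y))
        candidates = list(combinations(all_labels, self.k))
        self.models = []
        for subset in self.rng.sample(candidates, min(self.n_models, len(candidates))):
            clf = self.factory()
            # LP transformation: every combination of this subset's labels is one class
            y_lp = ["|".join(sorted(set(subset) & y)) for y in Y]
            clf.fit(X, y_lp)
            self.models.append((set(subset), clf))
        return self

    def predict(self, X):
        votes = [defaultdict(lambda: [0, 0]) for _ in X]  # label -> [positive votes, total votes]
        for subset, clf in self.models:
            for i, pred in enumerate(clf.predict(X)):
                predicted = set(str(pred).split("|")) if pred else set()
                for label in subset:
                    votes[i][label][1] += 1
                    votes[i][label][0] += label in predicted
        return [{l for l, (yes, tot) in v.items() if yes / tot > 0.5} for v in votes]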
Methodology ::: Feature Space
In order to classify the tweets, a set of features is extracted from each of the tweets, such as n-gram, part-of-speech etc. The details of these features are given below:
n-gram: This represents the frequency counts of n-grams, specifically that of unigrams and bigrams.
punctuation: The number of occurrences of punctuation symbols such as commas, exclamation marks etc.
POS (part-of-speech): The frequency of each POS tag is used as a feature.
prior polarity scoring: Here, we obtain the prior polarity of the words BIBREF6 using the Dictionary of Affect in Language (DAL) BIBREF28. This dictionary (DAL) of about 8000 English words assigns a pleasantness score to each word on a scale of 1-3. After normalizing, we can assign the words with polarity higher than $0.8$ as positive and less than $0.5$ as negative. If a word is not present in the dictionary, we lookup its synonyms in WordNet: if this word is there in the dictionary, we assign the original word its synonym's score.
Twitter Specific features:
Number of hashtags ($\#$ symbol)
Number of mentioned users (@ symbol)
Number of hyperlinks
Document embedding features: Here, we use the approach proposed by Mikolov et al. BIBREF3 to embed an entire tweet into a vector of features
Topic features: Here, LDA (Latent Dirichlet Allocation) BIBREF4 is used to extract topic-specific features for a tweet (document). This is basically the topic-document probability that is outputted by the model.
In the following experiments, we use 1-$gram$, 2-$gram$ and $(1+2)$-$gram$ to denote unigram, bigram and a combination of unigram and bigram features respectively. We also combine punctuation and the other features as miscellaneous features and use $MISC$ to denote this. We represent the document-embedding features by $DOC$ and topic-specific features by $TOPIC$.
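To make the two best-performing feature groups concrete, the sketch below builds DOC features with gensim's Doc2Vec and TOPIC features with gensim's LDA; the toy tweets and all hyperparameter values are illustrative assumptions rather than the settings used in the experiments.

import numpy as np
from gensim.corpora import Dictionary
from gensim.models import LdaModel
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

tweets = [["USER", "strong", "debate", "answer"],
          ["weak", "answers", "all", "night"],
          ["great", "song", "of", "the", "year"]]

doc_model = Doc2Vec([TaggedDocument(t, [i]) for i, t in enumerate(tweets)],
                    vector_size=25, min_count=1, epochs=40)
dictionary = Dictionary(tweets)
lda = LdaModel(corpus=[dictionary.doc2bow(t) for t in tweets],
               id2word=dictionary, num_topics=3, passes=10, random_state=0)

def doc_plus_topic_features(tokens):
    doc_vec = doc_model.infer_vector(tokens)              # DOC: tweet embedding
    topic_probs = np.zeros(lda.num_topics)                # TOPIC: document-topic probabilities
    for topic_id, prob in lda.get_document_topics(dictionary.doc2bow(tokens),
                                                  minimum_probability=0.0):
        topic_probs[topic_id] = prob
    return np.concatenate([doc_vec, topic_probs])

print(doc_plus_topic_features(["strong", "debate"]).shape)  # (28,)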
Data Analysis
In this section, we analyze the presidential debates data and show some trends.
First, we look at the trend of the tweet frequency. Figure FIGREF21 shows the trends of the tweet frequency and the number of TV viewers as the debates progress over time. We observe from Figures FIGREF21 and FIGREF21 that for the first 5 debates considered, the trend of the number of TV viewers matches the trend of the number of tweets. However, we can see that towards the final debates, the frequency of the tweets decreases consistently. This shows an interesting fact: although people still watch the debates, the number of people who tweet about them is greatly reduced. The tweeting community is made up mainly of youngsters, and this suggests that most of the users who actively tweet did not watch the later debates; if they had, the two trends should ideally match.
Next, we look at the tweeting activity on the days around a debate: specifically on the day of the debate, the next day and two days later. Figure FIGREF22 shows the trend of the tweet frequency around the day of the 5th Republican debate, i.e., December 15, 2015. As can be seen, on average people tweet more on the day of the debate than a day or two after it. This makes sense intuitively because the debate would be fresh in their heads.
Then, we look at how people tweet in the hours of the debate: specifically during the debate, one hour after and then two hours after. Figure FIGREF23 shows the trend of the tweet frequency around the hour of the 5th republican debate, i.e December 15, 2015. We notice that people don't tweet much during the debate but the activity drastically increases after two hours. This might be because people were busy watching the debate and then taking some time to process things, so that they can give their opinion.
We have seen the frequency of tweets over time in the previous trends. Now, we will look at how the sentiments of the candidates change over time.
First, Figure FIGREF24 shows how the sentiments of the candidates changed across the debates. We find that many of the candidates have had ups and downs over the course of these debates. But these trends are interesting in that they give some useful information about what went down in the debate that caused the sentiments to change (sometimes drastically). For example, if we look at the graph for Donald Trump, we see that his sentiment was at its lowest point during the debate held on December 15. Looking into the debate, we can easily see why this was the case. At a certain point in the debate, Trump was asked about his ideas for the nuclear triad. It is very important that a presidential candidate knows about this, but Trump had no idea what the nuclear triad was and, in a transparent attempt to cover his tracks, resorted to a “we need to be strong” speech. It can be due to this embarrassment that his sentiment went down during this debate.
Next, we investigate how the sentiments of the users towards the candidates change before and after the debate. In essence, we examine how the debate and the results of the debates given by the experts affects the sentiment of the candidates. Figure FIGREF25 shows the sentiments of the users towards the candidate during the 5th Republican Debate, 15th December 2015. It can be seen that the sentiments of the users towards the candidates does indeed change over the course of two days. One particular example is that of Jeb Bush. It seems that the populace are generally prejudiced towards the candidates, which is reflected in their sentiments of the candidates on the day of the debate. The results of the Washington Post are released in the morning after the debate. One can see the winners suggested by the Washington Post in Table TABREF35. One of the winners in that debate according to them is Jeb Bush. Coincidentally, Figure FIGREF25 suggests that the sentiment of Bush has gone up one day after the debate (essentially, one day after the results given by the experts are out).
There is some influence, for better or worse, of these experts on the minds of the users in the early debates, but towards the final debates the sentiments of the users are mostly unwavering, as can be seen in Figure FIGREF25. Figure FIGREF25 is on the last Republican debate, and suggests that the opinions of the users do not change much with time. Essentially the users have seen enough debates to make up their own minds and their sentiments are not easily wavered.
Evaluation Metrics
In this section, we define the different evaluation metrics that we use for different tasks. We have two tasks at hand: 1) Sentiment Analysis, 2) Outcome Prediction. We use different metrics for these two tasks.
Evaluation Metrics ::: Sentiment Analysis
In the study of sentiment analysis, we use “Hamming Loss” to evaluate the performance of the different methods. Hamming Loss, based on Hamming distance, takes into account the prediction error and the missing error, normalized over the total number of classes and total number of examples BIBREF29. The Hamming Loss is given below: $\text{Hamming Loss} = \frac{1}{|D|\,|L|} \sum_{i=1}^{|D|} |S_i \oplus Y_i|$, where $|D|$ is the number of examples in the dataset and $|L|$ is the number of labels. $S_i$ and $Y_i$ denote the sets of true and predicted labels for instance $i$ respectively. $\oplus $ stands for the XOR operation (the symmetric difference of the two label sets) BIBREF30. Intuitively, the performance is better, when the Hamming Loss is smaller. 0 would be the ideal case.
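A direct Python implementation of this measure over true and predicted label sets could look as follows; the example label sets are invented for illustration.

def hamming_loss(true_sets, pred_sets, n_labels):
    """XOR (symmetric difference) of label sets, normalised by |D| * |L|."""
    total = sum(len(s ^ y) for s, y in zip(true_sets, pred_sets))
    return total / (len(true_sets) * n_labels)

true_sets = [{"Trump", "negative"}, {"Bush", "positive"}]
pred_sets = [{"Trump", "negative"}, {"Bush", "negative"}]
print(hamming_loss(true_sets, pred_sets, n_labels=4))  # 2 / (2 * 4) = 0.25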
Evaluation Metrics ::: Outcome Prediction
For the case of outcome prediction, we will have a predicted set and an actual set of results. Thus, we can use common information retrieval metrics to evaluate the prediction performance. Those metrics are listed below:
Mean F-measure: F-measure takes into account both the precision and recall of the results. In essence, it takes into account how many of the relevant results are returned and also how many of the returned results are relevant. The mean F-measure over all queries is $\frac{1}{|D|}\sum_{i=1}^{|D|}\frac{2\,P_i R_i}{P_i + R_i}$, where $|D|$ is the number of queries (debates/categories for Grammy winners etc.), and $P_i$ and $R_i$ are the precision and recall for the $i^{th}$ query.
Mean Average Precision: As a standard metric used in information retrieval, Mean Average Precision for a set of queries is the mean of the average precision scores for each query: $MAP = \frac{1}{|D|}\sum_{i=1}^{|D|}\frac{1}{|RD_i|}\sum_{k} P_i(k)\, rel_i(k)$, where $|D|$ is the number of queries (e.g., debates), $P_i(k)$ is the precision at $k$ ($P@k$) for the $i^{th}$ query, $rel_i(k)$ is an indicator function, equaling 1 if the document at position $k$ for the $i^{th}$ query is relevant, else 0, and $|RD_i|$ is the number of relevant documents for the $i^{th}$ query.
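Both metrics can be computed directly from the per-query ranked predictions and the sets of actual winners, as in the sketch below; the example rankings and winner sets are invented for illustration.

def mean_f_measure(pred_lists, relevant_sets):
    f_scores = []
    for pred, rel in zip(pred_lists, relevant_sets):
        hits = len(set(pred) & rel)
        if hits == 0:
            f_scores.append(0.0)
            continue
        p, r = hits / len(pred), hits / len(rel)
        f_scores.append(2 * p * r / (p + r))
    return sum(f_scores) / len(f_scores)

def mean_average_precision(pred_lists, relevant_sets):
    average_precisions = []
    for pred, rel in zip(pred_lists, relevant_sets):
        hits, precisions = 0, []
        for k, item in enumerate(pred, start=1):
            if item in rel:                    # rel_i(k) = 1 only at relevant positions
                hits += 1
                precisions.append(hits / k)    # P@k at that position
        average_precisions.append(sum(precisions) / len(rel))
    return sum(average_precisions) / len(average_precisions)

preds = [["Cruz", "Bush", "Trump"], ["Rubio", "Trump"]]   # hypothetical ranked predictions
actual = [{"Cruz", "Rubio"}, {"Trump"}]                   # hypothetical actual winners
print(mean_f_measure(preds, actual), mean_average_precision(preds, actual))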
Results ::: Sentiment Analysis
We use 3 different datasets for the problem of sentiment analysis, as already mentioned. We test the four different algorithms mentioned in Section SECREF6, with different combinations of the features described in Section SECREF10. To evaluate our models, we use the “Hamming Loss” metric as discussed in Section SECREF6. We use this metric because our problem is in the multi-label classification domain. However, the single-label classifiers like SVM, Naive Bayes and Elman RNN cannot be evaluated against this metric directly. To do so, we split their predicted labels into a format that is consistent with that of multi-label classifiers like RaKel. The results of the experiments for each of the datasets are given in Tables TABREF34, TABREF34 and TABREF34. In the table, $f_1$, $f_2$, $f_3$, $f_4$, $f_5$ and $f_6$ denote the features 1-$gram$, 2-$gram$, $(1+2)$-$gram$, $(1+2)$-$gram + MISC$, $DOC$ and $DOC + TOPIC$ respectively. Note that lower values of hamming losses are more desirable. We find that RaKel performs the best out of all the algorithms. RaKel is more suited for the task because our task is a multi-label classification. Among all the single-label classifiers, SVM performs the best. We also observe that as we use more complex feature spaces, the performance increases. This is true for almost all of the algorithms listed.
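The conversion used to score the single-label classifiers with Hamming Loss amounts to splitting each predicted class back into its two labels, as in the small sketch below; the candidate_sentiment encoding of the class names is an assumption made for the example.

def to_label_set(single_label):
    """Split a predicted '<candidate>_<sentiment>' class into the two-label form."""
    candidate, sentiment = single_label.rsplit("_", 1)
    return {candidate, sentiment}

print([to_label_set(p) for p in ["Trump_negative", "Bush_positive"]])
# two-label sets: {candidate, sentiment} for each prediction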
Our best performing feature set is a combination of paragraph embedding features and topic features from LDA. This makes sense intuitively because paragraph embedding takes into account the context in the text. Context is very important in determining the sentiment of a short text: having negative words in the text does not always mean that the text contains a negative sentiment. For example, the sentence “never say never is not a bad thing” has many negative words; but the sentence as a whole does not have a negative sentiment. This is why we need some kind of context information to accurately determine the sentiment. Thus, with these embedded features, one would be able to better determine the polarity of the sentence. The other label is the entity (candidate/song etc.) under consideration. Topic features make sense here because this label can be considered the topic of the tweet in some sense, since it captures what or whom the tweet is about.
Results ::: Results for Outcome Prediction
In this section, we show the results for the outcome-prediction of the events. RaKel, as the best performing method, is trained to predict the sentiment-labels for the unlabeled data. The sentiment labels are then used to determine the outcome of the events. In the Tables (TABREF35, TABREF36, TABREF37) of outputs given, we only show as many predictions as there are winners.
Results ::: Results for Outcome Prediction ::: Presidential Debates
The results obtained for the outcome prediction task for the US presidential debates is shown in Table TABREF35. Table TABREF35 shows the winners as given in the Washington Post (3rd column) and the winners that are predicted by our system (2nd column). By comparing the set of results obtained from both the sources, we find that the set of candidates predicted match to a large extent with the winners given out by the Washington Post. The result suggests that the opinions of the social media community match with that of the journalists in most cases.
Results ::: Results for Outcome Prediction ::: Grammy Awards
A Grammy Award is given for outstanding achievement in the music industry. There are two types of awards: “General Field” awards, which are not restricted by genre, and genre-specific awards. Since there can be up to 80 categories of awards, we only focus on the main 4: 1) Album of the Year, 2) Record of the Year, 3) Song of the Year, and 4) Best New Artist. These are the main categories in the awards ceremony and the most looked forward to. That is also why we choose to predict the outcomes of these categories based on the tweets. We use the tweets before the ceremony (but after the nominations) to predict the outcomes.
Basically, we have a list of nominations for each category. We filter the tweets based on these nominations and then predict the winner as with the case of the debates. The outcomes are listed in Table TABREF36. We see that, largely, the opinions of the users on the social network agree with those of the deciding committee of the awards. The winners agree for all the categories except “Song of the Year”.
Results ::: Results for Outcome Prediction ::: Super Bowl
The Super Bowl is the annual championship game of the National Football League. We have collected the data for the year 2013. Here, the match was between the Baltimore Ravens and the San Francisco 49ers. The tweets that we have collected are during the game. From these tweets, one could trivially determine the winner. But an interesting outcome would be to predict the Most Valuable Player (MVP) during the game. To determine this, all the tweets were looked at and we determined the candidate with the highest positive sentiment by the end of the game. The result in Table TABREF37 suggests that we are able to determine the outcomes accurately.
Table TABREF43 displays some evaluation metrics for this task. These were computed based on the predicted outcomes and the actual outcomes for each of the different datasets. Since the number of participants varies from debate-to-debate or category-to-category for Grammy etc., we cannot return a fixed number of winners for everything. So, the size of our returned ranked-list is set to half of the number of participants (except for the MVP for Super Bowl; there are so many players and returning half of them when only one of them is relevant is meaningless. So, we just return the top 10 players). As we can see from the metrics, the predicted outcomes match quite well with the actual ones (or the ones given by the experts).
Conclusions
This paper presents a study that compares the opinions of users on microblogs, which is essentially the crowd wisdom, to that of the experts in the field. Specifically, we explore three datasets: US Presidential Debates 2015-16, Grammy Awards 2013, Super Bowl 2013. We determined if the opinions of the crowd and the experts match by using the sentiments of the tweets to predict the outcomes of the debates/Grammys/Super Bowl. We observed that in most of the cases, the predictions were right indicating that crowd wisdom is indeed worth looking at and mining sentiments in microblogs is useful. In some cases where there were disagreements, however, we observed that the opinions of the experts did have some influence on the opinions of the users. We also find that the features that were most useful in our case of multi-label classification was a combination of the document-embedding and topic features. | political pundits of the Washington Post |
d3a1a53521f252f869fdae944db986931d9ffe48 | d3a1a53521f252f869fdae944db986931d9ffe48_1 | Q: Who are the experts?
Text: Introduction
Over the past few years, microblogs have become one of the most popular online social networks. Microblogging websites have evolved to become a source of varied kinds of information. This is due to the nature of microblogs: people post real-time messages about their opinions and express sentiment on a variety of topics, discuss current issues, complain, etc. Twitter is one such popular microblogging service where users create status messages (called “tweets"). With over 400 million tweets per day on Twitter, microblog users generate large amount of data, which cover very rich topics ranging from politics, sports to celebrity gossip. Because the user generated content on microblogs covers rich topics and expresses sentiment/opinions of the mass, mining and analyzing this information can prove to be very beneficial both to the industrial and the academic community. Tweet classification has attracted considerable attention because it has become very important to analyze peoples' sentiments and opinions over social networks.
Most of the current work on analysis of tweets is focused on sentiment analysis BIBREF0, BIBREF1, BIBREF2. Not much has been done on predicting outcomes of events based on the tweet sentiments, for example, predicting winners of presidential debates based on the tweets by analyzing the users' sentiments. This is possible intuitively because the sentiment of the users in their tweets towards the candidates is proportional to the performance of the candidates in the debate.
In this paper, we analyze three such events: 1) US Presidential Debates 2015-16, 2) Grammy Awards 2013, and 3) Super Bowl 2013. The main focus is on the analysis of the presidential debates. For the Grammys and the Super Bowl, we just perform sentiment analysis and try to predict the outcomes in the process. For the debates, in addition to the analysis done for the Grammys and Super Bowl, we also perform a trend analysis. Our analysis of the tweets for the debates is 3-fold as shown below.
Sentiment: Perform a sentiment analysis on the debates. This involves: building a machine learning model which learns the sentiment-candidate pair (candidate is the one to whom the tweet is being directed) from the training data and then using this model to predict the sentiment-candidate pairs of new tweets.
Predicting Outcome: Here, after predicting the sentiment-candidate pairs on the new data, we predict the winner of the debates based on the sentiments of the users.
Trends: Here, we analyze certain trends of the debates like the change in sentiments of the users towards the candidates over time (hours, days, months) and how the opinion of experts such as Washington Post affect the sentiments of the users.
For the sentiment analysis, we look at our problem in a multi-label setting, our two labels being sentiment polarity and the candidate/category in consideration. We test both single-label classifiers and multi-label ones on the problem and as intuition suggests, the multi-label classifier RaKel performs better. A combination of document-embedding features BIBREF3 and topic features (essentially the document-topic probabilities) BIBREF4 is shown to give the best results. These features make sense intuitively because the document-embedding features take context of the text into account, which is important for sentiment polarity classification, and topic features take into account the topic of the tweet (who/what is it about).
The prediction of outcomes of debates is very interesting in our case. Most of the results seem to match with the views of some experts such as the political pundits of the Washington Post. This implies that certain rules that were used to score the candidates in the debates by said-experts were in fact reflected by reading peoples' sentiments expressed over social media. This opens up a wide variety of learning possibilities from users' sentiments on social media, which is sometimes referred to as the wisdom of crowd.
We do find out that the public sentiments are not always coincident with the views of the experts. In this case, it is interesting to check whether the views of the experts can affect the public, for example, by spreading through the social media microblogs such as Twitter. Hence, we also conduct experiments to compare the public sentiment before and after the experts' views become public and thus notice the impact of the experts' views on the public sentiment. In our analysis of the debates, we observe that in certain debates, such as the 5th Republican Debate, held on December 15, 2015, the opinions of the users vary from the experts. But we see the effect of the experts on the sentiment of the users by looking at their opinions of the same candidates the next day.
Our contributions are mainly: we want to see how predictive the sentiment/opinion of the users are in social media microblogs and how it compares to that of the experts. In essence, we find that the crowd wisdom in the microblog domain matches that of the experts in most cases. There are cases, however, where they don't match but we observe that the crowd's sentiments are actually affected by the experts. This can be seen in our analysis of the presidential debates.
The rest of the paper is organized as follows: in section SECREF2, we review some of the literature. In section SECREF3, we discuss the collection and preprocessing of the data. Section SECREF4 details the approach taken, along with the features and the machine learning methods used. Section SECREF7 discusses the results of the experiments conducted and lastly section SECREF8 ends with a conclusion on the results including certain limitations and scopes for improvement to work on in the future.
Related Work
Sentiment analysis as a Natural Language Processing task has been handled at many levels of granularity. Specifically on the microblog front, some of the early results on sentiment analysis are by BIBREF0, BIBREF1, BIBREF2, BIBREF5, BIBREF6. Go et al. BIBREF0 applied distant supervision to classify tweet sentiment by using emoticons as noisy labels. Kouloumpis et al. BIBREF7 exploited hashtags in tweets to build training data. Chenhao Tan et al. BIBREF8 determined user-level sentiments on particular topics with the help of the social network graph.
There has been some work in event detection and extraction in microblogs as well. In BIBREF9, the authors describe a way to extract major life events of a user based on tweets that either congratulate/offer condolences. BIBREF10 build a key-word graph from the data and then detect communities in this graph (cluster) to find events. This works because words that describe similar events will form clusters. In BIBREF11, the authors use distant supervision to extract events. There has also been some work on event retrieval in microblogs BIBREF12. In BIBREF13, the authors detect time points in the twitter stream when an important event happens and then classify such events based on the sentiments they evoke using only non-textual features to do so. In BIBREF14, the authors study how much of the opinion extracted from Online Social Networks (OSN) user data is reflective of the opinion of the larger population. Researchers have also mined Twitter dataset to analyze public reaction to various events: from election debate performance BIBREF15, where the authors demonstrate visuals and metrics that can be used to detect sentiment pulse, anomalies in that pulse, and indications of controversial topics that can be used to inform the design of visual analytic systems for social media events, to movie box-office predictions on the release day BIBREF16. Mishne and Glance BIBREF17 correlate sentiments in blog posts with movie box-office scores. The correlations they observed for positive sentiments are fairly low and not sufficient to use for predictive purposes. Recently, several approaches involving machine learning and deep learning have also been used in the visual and language domains BIBREF18, BIBREF19, BIBREF20, BIBREF21, BIBREF22, BIBREF23, BIBREF24.
Data Set and Preprocessing ::: Data Collection
Twitter is a social networking and microblogging service that allows users to post real-time messages, called tweets. Tweets are very short messages, a maximum of 140 characters in length. Due to such a restriction in length, people tend to use a lot of acronyms, shorten words etc. In essence, the tweets are usually very noisy. There are several aspects to tweets such as: 1) Target: Users use the symbol “@" in their tweets to refer to other users on the microblog. 2) Hashtag: Hashtags are used by users to mark topics. This is done to increase the visibility of the tweets.
We conduct experiments on 3 different datasets, as mentioned earlier: 1) US Presidential Debates, 2) Grammy Awards 2013, 3) Super Bowl 2013. To construct our presidential debates dataset, we have used the Twitter Search API to collect the tweets. Since there was no publicly available dataset for this, we had to collect the data manually. The data was collected on 10 different presidential debates: 7 Republican and 3 Democratic, from October 2015 to March 2016. Hashtags like “#GOP” and “#GOPDebate” were used to select tweets specific to the debates. This is given in Table TABREF2. We extracted only English tweets for our dataset. A total of 104961 tweets were collected across all the debates. But there were some limitations with the API. Firstly, the server imposes a rate limit and discards tweets when the limit is reached. The second problem is that the API returns many duplicates. Thus, after removing the duplicates and irrelevant tweets, we were left with a total of 17304 tweets. This includes only the tweets posted on the day of each debate. We also collected tweets on the days following the debates.
As for the other two datasets, we collected them from available online repositories. There were a total of 2580062 tweets for the Grammy Awards 2013, and a total of 2428391 tweets for the Super Bowl 2013. The statistics are given in Tables TABREF3 and TABREF3. The tweets for the Grammys were collected before and during the ceremony. However, we only use the tweets before the ceremony (after the nominations were announced) to predict the winners. As for the Super Bowl, the tweets were collected during the game. But we can predict interesting things like the Most Valuable Player etc. from the tweets. The tweets for both of these datasets were already annotated and thus did not require any human intervention. However, the tweets for the debates had to be annotated.
Since we are using a supervised approach in this paper, we have all the tweets (for debates) in the training set human-annotated. The tweets were already annotated for the Grammys and Super Bowl. Some statistics about our datasets are presented in Tables TABREF3, TABREF3 and TABREF3. The annotations for the debate dataset comprised of 2 labels for each tweet: 1) Candidate: This is the candidate of the debate to whom the tweet refers to, 2) Sentiment: This represents the sentiment of the tweet towards that candidate. This is either positive or negative.
The task then becomes a case of multi-label classification. The candidate labels are not so trivial to obtain, because there are tweets that do not directly contain any candidate's name. For example, the tweets, “a business man for president??” and “a doctor might sure bring about a change in America!” are about Donald Trump and Ben Carson respectively. Thus, it is meaningful to have a multi-label task.
The annotations for the other two datasets are similar, in that one of the labels was the sentiment and the other was category-dependent in the outcome-prediction task, as discussed in the sections below. For example, if we are trying to predict the "Album of the Year" winners for the Grammy dataset, the second label would be the nominees for that category (album of the year).
Data Set and Preprocessing ::: Preprocessing
As noted earlier, tweets are generally noisy and thus require some preprocessing done before using them. Several filters were applied to the tweets such as: (1) Usernames: Since users often include usernames in their tweets to direct their message, we simplify it by replacing the usernames with the token “USER”. For example, @michael will be replaced by USER. (2) URLs: In most of the tweets, users include links that add on to their text message. We convert/replace the link address to the token “URL”. (3) Repeated Letters: Oftentimes, users use repeated letters in a word to emphasize their notion. For example, the word “lol” (which stands for “laugh out loud”) is sometimes written as “looooool” to emphasize the degree of funnyness. We replace such repeated occurrences of letters (more than 2), with just 3 occurrences. We replace with 3 occurrences and not 2, so that we can distinguish the exaggerated usage from the regular ones. (4) Multiple Sentiments: Tweets which contain multiple sentiments are removed, such as "I hate Donald Trump, but I will vote for him". This is done so that there is no ambiguity. (5) Retweets: In Twitter, many times tweets of a person are copied and posted by another user. This is known as retweeting and they are commonly abbreviated with “RT”. These are removed and only the original tweets are processed. (6) Repeated Tweets: The Twitter API sometimes returns a tweet multiple times. We remove such duplicates to avoid putting extra weight on any particular tweet.
Methodology ::: Procedure
Our analysis of the debates is 3-fold including sentiment analysis, outcome prediction, and trend analysis.
Sentiment Analysis: To perform a sentiment analysis on the debates, we first extract all the features described below from all the tweets in the training data. We then build the different machine learning models described below on these set of features. After that, we evaluate the output produced by the models on unseen test data. The models essentially predict candidate-sentiment pairs for each tweet.
Outcome Prediction: Predict the outcome of the debates. After obtaining the sentiments on the test data for each tweet, we can compute the net normalized sentiment for each presidential candidate in the debate. This is done by looking at the number of positive and negative sentiments for each candidate. We then normalize the sentiment scores of each candidate to be in the same scale (0-1). After that, we rank the candidates based on the sentiment scores and predict the top $k$ as the winners.
Trend Analysis: We also analyze some certain trends of the debates. Firstly, we look at the change in sentiments of the users towards the candidates over time (hours, days, months). This is done by computing the sentiment scores for each candidate in each of the debates and seeing how it varies over time, across debates. Secondly, we examine the effect of Washington Post on the views of the users. This is done by looking at the sentiments of the candidates (to predict winners) of a debate before and after the winners are announced by the experts in Washington Post. This way, we can see if Washington Post has had any effect on the sentiments of the users. Besides that, to study the behavior of the users, we also look at the correlation of the tweet volume with the number of viewers as well as the variation of tweet volume over time (hours, days, months) for debates.
As for the Grammys and the Super Bowl, we only perform the sentiment analysis and predict the outcomes.
Methodology ::: Machine Learning Models
We compare 4 different models for performing our task of sentiment classification. We then pick the best performing model for the task of outcome prediction. Here, we have two categories of algorithms: single-label and multi-label (We already discussed above why it is meaningful to have a multi-label task earlier), because one can represent $<$candidate, sentiment$>$ as a single class label or have candidate and sentiment as two separate labels. They are listed below:
Methodology ::: Machine Learning Models ::: Single-label Classification
Naive Bayes: We use a multinomial Naive Bayes model. A tweet $t$ is assigned a class $c^{*}$ such that $c^{*} = \operatorname{argmax}_{c} P(c) \prod_{i=1}^{m} P(f_i \mid c)$, where there are $m$ features and $f_i$ represents the $i^{th}$ feature.
Support Vector Machines: Support Vector Machines (SVM) constructs a hyperplane or a set of hyperplanes in a high-dimensional space, which can then be used for classification. In our case, we use SVM with Sequential Minimal Optimization (SMO) BIBREF25, which is an algorithm for solving the quadratic programming (QP) problem that arises during the training of SVMs.
Elman Recurrent Neural Network: Recurrent Neural Networks (RNNs) are gaining popularity and are being applied to a wide variety of problems. They are a class of artificial neural networks, where connections between units form a directed cycle. This creates an internal state of the network which allows it to exhibit dynamic temporal behavior. The Elman RNN was proposed by Jeff Elman in the year 1990 BIBREF26. We use this in our task.
Methodology ::: Machine Learning Models ::: Multi-label Classification
RAkEL (RAndom k labELsets): RAkEL BIBREF27 is a multi-label classification algorithm that uses the label powerset (LP) transformation: LP treats every distinct combination of labels as a single class, and RAkEL trains multiple LP classifiers, each on a random subset of the actual labels, and combines their predictions by voting for classification.
Methodology ::: Feature Space
In order to classify the tweets, a set of features is extracted from each of the tweets, such as n-gram, part-of-speech etc. The details of these features are given below:
n-gram: This represents the frequency counts of n-grams, specifically that of unigrams and bigrams.
punctuation: The number of occurrences of punctuation symbols such as commas, exclamation marks etc.
POS (part-of-speech): The frequency of each POS tag is used as a feature.
prior polarity scoring: Here, we obtain the prior polarity of the words BIBREF6 using the Dictionary of Affect in Language (DAL) BIBREF28. This dictionary (DAL) of about 8000 English words assigns a pleasantness score to each word on a scale of 1-3. After normalizing, we can assign the words with polarity higher than $0.8$ as positive and less than $0.5$ as negative. If a word is not present in the dictionary, we lookup its synonyms in WordNet: if this word is there in the dictionary, we assign the original word its synonym's score.
Twitter Specific features:
Number of hashtags ($\#$ symbol)
Number of mentioned users (@ symbol)
Number of hyperlinks
Document embedding features: Here, we use the approach proposed by Mikolov et al. BIBREF3 to embed an entire tweet into a vector of features
Topic features: Here, LDA (Latent Dirichlet Allocation) BIBREF4 is used to extract topic-specific features for a tweet (document). This is basically the topic-document probability that is outputted by the model.
In the following experiments, we use 1-$gram$, 2-$gram$ and $(1+2)$-$gram$ to denote unigram, bigram and a combination of unigram and bigram features respectively. We also combine punctuation and the other features as miscellaneous features and use $MISC$ to denote this. We represent the document-embedding features by $DOC$ and topic-specific features by $TOPIC$.
Data Analysis
In this section, we analyze the presidential debates data and show some trends.
First, we look at the trend of the tweet frequency. Figure FIGREF21 shows the trends of the tweet frequency and the number of TV viewers as the debates progress over time. We observe from Figures FIGREF21 and FIGREF21 that for the first 5 debates considered, the trend of the number of TV viewers matches the trend of the number of tweets. However, we can see that towards the final debates, the frequency of the tweets decreases consistently. This shows an interesting fact: although people still watch the debates, the number of people who tweet about them is greatly reduced. The tweeting community is made up mainly of youngsters, and this suggests that most of the users who actively tweet did not watch the later debates; if they had, the two trends should ideally match.
Next, we look at the tweeting activity on the days around a debate: specifically on the day of the debate, the next day and two days later. Figure FIGREF22 shows the trend of the tweet frequency around the day of the 5th Republican debate, i.e., December 15, 2015. As can be seen, people tweet more, on average, on the day of the debate than a day or two after it. This makes sense intuitively because the debate would still be fresh in their heads.
Then, we look at how people tweet in the hours of the debate: specifically during the debate, one hour after and then two hours after. Figure FIGREF23 shows the trend of the tweet frequency around the hour of the 5th republican debate, i.e December 15, 2015. We notice that people don't tweet much during the debate but the activity drastically increases after two hours. This might be because people were busy watching the debate and then taking some time to process things, so that they can give their opinion.
We have seen the frequency of tweets over time in the previous trends. Now, we will look at how the sentiments of the candidates change over time.
First, Figure FIGREF24 shows how the sentiments towards the candidates changed across the debates. We find that many of the candidates have had ups and downs across the debates. These trends are interesting in that they give some useful information about what happened in a debate that caused the sentiments to change (sometimes drastically). For example, if we look at the graph for Donald Trump, we see that his sentiment was at its lowest point during the debate held on December 15. Looking into the debate, we can easily see why this was the case. At a certain point in the debate, Trump was asked about his ideas for the nuclear triad. It is very important that a presidential candidate knows about this, but Trump had no idea what the nuclear triad was and, in a transparent attempt to cover his tracks, resorted to a “we need to be strong" speech. It may be due to this embarrassment that his sentiment went down during this debate.
Next, we investigate how the sentiments of the users towards the candidates change before and after the debate. In essence, we examine how the debate and the results of the debates given by the experts affect the sentiments towards the candidates. Figure FIGREF25 shows the sentiments of the users towards the candidates during the 5th Republican Debate, 15th December 2015. It can be seen that the sentiments of the users towards the candidates do indeed change over the course of two days. One particular example is that of Jeb Bush. It seems that the populace holds pre-existing biases towards the candidates, which is reflected in their sentiments towards the candidates on the day of the debate. The results of the Washington Post are released in the morning after the debate. One can see the winners suggested by the Washington Post in Table TABREF35. One of the winners in that debate according to them is Jeb Bush. Coincidentally, Figure FIGREF25 suggests that the sentiment towards Bush has gone up one day after the debate (essentially, one day after the results given by the experts are out).
There is some influence, for better or worse, of these experts on the minds of the users in the early debates, but towards the final debates the sentiments of the users are mostly unwavering, as can be seen in Figure FIGREF25. Figure FIGREF25 is on the last Republican debate, and suggests that the opinions of the users do not change much with time. Essentially the users have seen enough debates to make up their own minds and their sentiments are not easily wavered.
Evaluation Metrics
In this section, we define the different evaluation metrics that we use for different tasks. We have two tasks at hand: 1) Sentiment Analysis, 2) Outcome Prediction. We use different metrics for these two tasks.
Evaluation Metrics ::: Sentiment Analysis
In the study of sentiment analysis, we use “Hamming Loss” to evaluate the performance of the different methods. Hamming Loss, based on Hamming distance, takes into account the prediction error and the missing error, normalized over the total number of classes and total number of examples BIBREF29. The Hamming Loss is given below:
$\textit{Hamming Loss} = \frac{1}{|D|\,|L|} \sum _{i=1}^{|D|} |S_i \oplus Y_i|$, where $|D|$ is the number of examples in the dataset and $|L|$ is the number of labels. $S_i$ and $Y_i$ denote the sets of true and predicted labels for instance $i$ respectively. $\oplus $ stands for the XOR operation (here, the symmetric difference of the two label sets) BIBREF30. Intuitively, the performance is better when the Hamming Loss is smaller; 0 would be the ideal case.
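A small sketch of this computation on binary label-indicator matrices follows; the values are toy numbers, not results from the paper.

```python
import numpy as np

# Binary indicator matrices: rows = examples, columns = labels
# (e.g. candidate-sentiment pairs). Toy values for illustration.
Y_true = np.array([[1, 0, 0],
                   [0, 1, 1]])
Y_pred = np.array([[1, 1, 0],
                   [0, 1, 0]])

def hamming_loss(y_true, y_pred):
    # XOR flags every label that is wrongly predicted or missed,
    # averaged over all examples and all labels.
    return np.logical_xor(y_true, y_pred).mean()

print(hamming_loss(Y_true, Y_pred))  # 2 mismatches / 6 entries = 0.333...
```

scikit-learn's sklearn.metrics.hamming_loss computes the same quantity on such indicator matrices.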
Evaluation Metrics ::: Outcome Prediction
For the case of outcome prediction, we will have a predicted set and an actual set of results. Thus, we can use common information retrieval metrics to evaluate the prediction performance. Those metrics are listed below:
Mean F-measure: F-measure takes into account both the precision and recall of the results. In essence, it takes into account how many of the relevant results are returned and also how many of the returned results are relevant.
$F = \frac{1}{|D|} \sum _{i=1}^{|D|} \frac{2\, P_i R_i}{P_i + R_i}$, where $|D|$ is the number of queries (debates/categories for grammy winners etc.), $P_i$ and $R_i$ are the precision and recall for the $i^{th}$ query.
Mean Average Precision: As a standard metric used in information retrieval, Mean Average Precision for a set of queries is the mean of the average precision scores for each query:
$\textit{MAP} = \frac{1}{|D|} \sum _{i=1}^{|D|} \frac{\sum _{k} P_i(k)\, rel_i(k)}{|RD_i|}$, where $|D|$ is the number of queries (e.g., debates), $P_i(k)$ is the precision at $k$ ($P@k$) for the $i^{th}$ query, $rel_i(k)$ is an indicator function, equaling 1 if the document at position $k$ for the $i^{th}$ query is relevant, else 0, and $|RD_i|$ is the number of relevant documents for the $i^{th}$ query.
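Both metrics can be computed directly from the ranked prediction lists; a minimal sketch is given below, with made-up winners purely for illustration.

```python
def f_measure(predicted, relevant):
    predicted, relevant = set(predicted), set(relevant)
    if not predicted or not relevant:
        return 0.0
    p = len(predicted & relevant) / len(predicted)   # precision
    r = len(predicted & relevant) / len(relevant)    # recall
    return 2 * p * r / (p + r) if p + r else 0.0

def average_precision(ranked, relevant):
    relevant = set(relevant)
    hits, score = 0, 0.0
    for k, item in enumerate(ranked, start=1):
        if item in relevant:
            hits += 1
            score += hits / k        # P@k counted only at relevant positions
    return score / len(relevant) if relevant else 0.0

# One "query" per debate: our ranked winners vs. the experts' winners (toy data).
queries = [
    (["cruz", "trump", "rubio"], ["cruz", "rubio"]),
    (["clinton", "sanders"],     ["sanders"]),
]
mean_f  = sum(f_measure(pred, rel) for pred, rel in queries) / len(queries)
mean_ap = sum(average_precision(pred, rel) for pred, rel in queries) / len(queries)
print(round(mean_f, 3), round(mean_ap, 3))
```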
Results ::: Sentiment Analysis
We use 3 different datasets for the problem of sentiment analysis, as already mentioned. We test the four different algorithms mentioned in Section SECREF6, with different combinations of the features that are described in Section SECREF10. To evaluate our models, we use the “Hamming Loss” metric as discussed in Section SECREF6. We use this metric because our problem is in the multi-label classification domain. However, the single-label classifiers like SVM, Naive Bayes, Elman RNN cannot be evaluated against this metric directly. To do this, we split the predicted labels into a format that is consistent with that of multi-label classifiers like RaKel. The results of the experiments for each of the datasets are given in Tables TABREF34, TABREF34 and TABREF34. In the tables, $f_1$, $f_2$, $f_3$, $f_4$, $f_5$ and $f_6$ denote the features 1-$gram$, 2-$gram$, $(1+2)$-$gram$, $(1+2)$-$gram + MISC$, $DOC$ and $DOC + TOPIC$ respectively. Note that lower values of the Hamming Loss are more desirable. We find that RaKel performs the best out of all the algorithms. RaKel is more suited for the task because ours is a multi-label classification task. Among all the single-label classifiers, SVM performs the best. We also observe that as we use more complex feature spaces, the performance increases. This is true for almost all of the algorithms listed.
Our best performing feature set is a combination of paragraph embedding features and topic features from LDA. This makes sense intuitively because paragraph embedding takes into account the context of the text. Context is very important in determining the sentiment of a short text: having negative words in the text does not always mean that the text contains a negative sentiment. For example, the sentence “never say never is not a bad thing” has many negative words; but the sentence as a whole does not have a negative sentiment. This is why we need some kind of context information to accurately determine the sentiment. Thus, with these embedded features, one would be able to better determine the polarity of the sentence. The other label is the entity (candidate/song etc.) under consideration. Topic features make sense here because this label can be considered as the topic of the tweet in some sense: it captures what or whom the tweet is about.
Results ::: Results for Outcome Prediction
In this section, we show the results for the outcome-prediction of the events. RaKel, as the best performing method, is trained to predict the sentiment-labels for the unlabeled data. The sentiment labels are then used to determine the outcome of the events. In the Tables (TABREF35, TABREF36, TABREF37) of outputs given, we only show as many predictions as there are winners.
Results ::: Results for Outcome Prediction ::: Presidential Debates
The results obtained for the outcome prediction task for the US presidential debates is shown in Table TABREF35. Table TABREF35 shows the winners as given in the Washington Post (3rd column) and the winners that are predicted by our system (2nd column). By comparing the set of results obtained from both the sources, we find that the set of candidates predicted match to a large extent with the winners given out by the Washington Post. The result suggests that the opinions of the social media community match with that of the journalists in most cases.
Results ::: Results for Outcome Prediction ::: Grammy Awards
A Grammy Award is given for outstanding achievement in the music industry. There are two types of awards: “General Field” awards, which are not restricted by genre, and genre-specific awards. Since there can be up to 80 categories of awards, we only focus on the main 4: 1) Album of the Year, 2) Record of the Year, 3) Song of the Year, and 4) Best New Artist. These are the main categories of the awards ceremony and the most anticipated. That is also why we choose to predict the outcomes of these categories based on the tweets. We use the tweets before the ceremony (but after the nominations) to predict the outcomes.
Basically, we have a list of nominations for each category. We filter the tweets based on these nominations and then predict the winner as in the case of the debates. The outcomes are listed in Table TABREF36. We see that, largely, the opinions of the users on the social network agree with those of the deciding committee of the awards. The winners agree for all the categories except “Song of the Year”.
Results ::: Results for Outcome Prediction ::: Super Bowl
The Super Bowl is the annual championship game of the National Football League. We have collected the data for the year 2013. Here, the match was between the Baltimore Ravens and the San Francisco 49ers. The tweets that we have collected were posted during the game. From these tweets, one could trivially determine the winner. But an interesting outcome would be to predict the Most Valuable Player (MVP) of the game. To determine this, we examined all the tweets and determined the player with the highest positive sentiment by the end of the game. The result in Table TABREF37 suggests that we are able to determine the outcomes accurately.
Table TABREF43 displays some evaluation metrics for this task. These were computed based on the predicted outcomes and the actual outcomes for each of the different datasets. Since the number of participants varies from debate-to-debate or category-to-category for Grammy etc., we cannot return a fixed number of winners for everything. So, the size of our returned ranked-list is set to half of the number of participants (except for the MVP for Super Bowl; there are so many players and returning half of them when only one of them is relevant is meaningless. So, we just return the top 10 players). As we can see from the metrics, the predicted outcomes match quite well with the actual ones (or the ones given by the experts).
Conclusions
This paper presents a study that compares the opinions of users on microblogs, which is essentially the crowd wisdom, to that of the experts in the field. Specifically, we explore three datasets: US Presidential Debates 2015-16, Grammy Awards 2013, Super Bowl 2013. We determined if the opinions of the crowd and the experts match by using the sentiments of the tweets to predict the outcomes of the debates/Grammys/Super Bowl. We observed that in most of the cases, the predictions were right indicating that crowd wisdom is indeed worth looking at and mining sentiments in microblogs is useful. In some cases where there were disagreements, however, we observed that the opinions of the experts did have some influence on the opinions of the users. We also find that the features that were most useful in our case of multi-label classification was a combination of the document-embedding and topic features. | the experts in the field |
38e11663b03ac585863742044fd15a0e875ae9ab | 38e11663b03ac585863742044fd15a0e875ae9ab_0 | Q: Who is the crowd in these experiments?
Text: Introduction
Over the past few years, microblogs have become one of the most popular online social networks. Microblogging websites have evolved to become a source of varied kinds of information. This is due to the nature of microblogs: people post real-time messages about their opinions and express sentiment on a variety of topics, discuss current issues, complain, etc. Twitter is one such popular microblogging service where users create status messages (called “tweets"). With over 400 million tweets per day on Twitter, microblog users generate large amount of data, which cover very rich topics ranging from politics, sports to celebrity gossip. Because the user generated content on microblogs covers rich topics and expresses sentiment/opinions of the mass, mining and analyzing this information can prove to be very beneficial both to the industrial and the academic community. Tweet classification has attracted considerable attention because it has become very important to analyze peoples' sentiments and opinions over social networks.
Most of the current work on analysis of tweets is focused on sentiment analysis BIBREF0, BIBREF1, BIBREF2. Not much has been done on predicting outcomes of events based on the tweet sentiments, for example, predicting winners of presidential debates based on the tweets by analyzing the users' sentiments. This is possible intuitively because the sentiment of the users in their tweets towards the candidates is proportional to the performance of the candidates in the debate.
In this paper, we analyze three such events: 1) US Presidential Debates 2015-16, 2) Grammy Awards 2013, and 3) Super Bowl 2013. The main focus is on the analysis of the presidential debates. For the Grammys and the Super Bowl, we just perform sentiment analysis and try to predict the outcomes in the process. For the debates, in addition to the analysis done for the Grammys and Super Bowl, we also perform a trend analysis. Our analysis of the tweets for the debates is 3-fold as shown below.
Sentiment: Perform a sentiment analysis on the debates. This involves: building a machine learning model which learns the sentiment-candidate pair (candidate is the one to whom the tweet is being directed) from the training data and then using this model to predict the sentiment-candidate pairs of new tweets.
Predicting Outcome: Here, after predicting the sentiment-candidate pairs on the new data, we predict the winner of the debates based on the sentiments of the users.
Trends: Here, we analyze certain trends of the debates like the change in sentiments of the users towards the candidates over time (hours, days, months) and how the opinion of experts such as Washington Post affect the sentiments of the users.
For the sentiment analysis, we look at our problem in a multi-label setting, our two labels being sentiment polarity and the candidate/category in consideration. We test both single-label classifiers and multi-label ones on the problem and as intuition suggests, the multi-label classifier RaKel performs better. A combination of document-embedding features BIBREF3 and topic features (essentially the document-topic probabilities) BIBREF4 is shown to give the best results. These features make sense intuitively because the document-embedding features take context of the text into account, which is important for sentiment polarity classification, and topic features take into account the topic of the tweet (who/what is it about).
The prediction of outcomes of debates is very interesting in our case. Most of the results seem to match with the views of some experts such as the political pundits of the Washington Post. This implies that certain rules that were used to score the candidates in the debates by said-experts were in fact reflected by reading peoples' sentiments expressed over social media. This opens up a wide variety of learning possibilities from users' sentiments on social media, which is sometimes referred to as the wisdom of crowd.
We do find out that the public sentiments are not always coincident with the views of the experts. In this case, it is interesting to check whether the views of the experts can affect the public, for example, by spreading through the social media microblogs such as Twitter. Hence, we also conduct experiments to compare the public sentiment before and after the experts' views become public and thus notice the impact of the experts' views on the public sentiment. In our analysis of the debates, we observe that in certain debates, such as the 5th Republican Debate, held on December 15, 2015, the opinions of the users vary from the experts. But we see the effect of the experts on the sentiment of the users by looking at their opinions of the same candidates the next day.
Our contributions are mainly: we want to see how predictive the sentiment/opinion of the users are in social media microblogs and how it compares to that of the experts. In essence, we find that the crowd wisdom in the microblog domain matches that of the experts in most cases. There are cases, however, where they don't match but we observe that the crowd's sentiments are actually affected by the experts. This can be seen in our analysis of the presidential debates.
The rest of the paper is organized as follows: in section SECREF2, we review some of the literature. In section SECREF3, we discuss the collection and preprocessing of the data. Section SECREF4 details the approach taken, along with the features and the machine learning methods used. Section SECREF7 discusses the results of the experiments conducted and lastly section SECREF8 ends with a conclusion on the results including certain limitations and scopes for improvement to work on in the future.
Related Work
Sentiment analysis as a Natural Language Processing task has been handled at many levels of granularity. Specifically on the microblog front, some of the early results on sentiment analysis are by BIBREF0, BIBREF1, BIBREF2, BIBREF5, BIBREF6. Go et al. BIBREF0 applied distant supervision to classify tweet sentiment by using emoticons as noisy labels. Kouloumpis et al. BIBREF7 exploited hashtags in tweets to build training data. Chenhao Tan et al. BIBREF8 determined user-level sentiments on particular topics with the help of the social network graph.
There has been some work in event detection and extraction in microblogs as well. In BIBREF9, the authors describe a way to extract major life events of a user based on tweets that either congratulate/offer condolences. BIBREF10 build a key-word graph from the data and then detect communities in this graph (cluster) to find events. This works because words that describe similar events will form clusters. In BIBREF11, the authors use distant supervision to extract events. There has also been some work on event retrieval in microblogs BIBREF12. In BIBREF13, the authors detect time points in the twitter stream when an important event happens and then classify such events based on the sentiments they evoke using only non-textual features to do so. In BIBREF14, the authors study how much of the opinion extracted from Online Social Networks (OSN) user data is reflective of the opinion of the larger population. Researchers have also mined Twitter dataset to analyze public reaction to various events: from election debate performance BIBREF15, where the authors demonstrate visuals and metrics that can be used to detect sentiment pulse, anomalies in that pulse, and indications of controversial topics that can be used to inform the design of visual analytic systems for social media events, to movie box-office predictions on the release day BIBREF16. Mishne and Glance BIBREF17 correlate sentiments in blog posts with movie box-office scores. The correlations they observed for positive sentiments are fairly low and not sufficient to use for predictive purposes. Recently, several approaches involving machine learning and deep learning have also been used in the visual and language domains BIBREF18, BIBREF19, BIBREF20, BIBREF21, BIBREF22, BIBREF23, BIBREF24.
Data Set and Preprocessing ::: Data Collection
Twitter is a social networking and microblogging service that allows users to post real-time messages, called tweets. Tweets are very short messages, a maximum of 140 characters in length. Due to such a restriction in length, people tend to use a lot of acronyms, shorten words etc. In essence, the tweets are usually very noisy. There are several aspects to tweets such as: 1) Target: Users use the symbol “@" in their tweets to refer to other users on the microblog. 2) Hashtag: Hashtags are used by users to mark topics. This is done to increase the visibility of the tweets.
We conduct experiments on 3 different datasets, as mentioned earlier: 1) US Presidential Debates, 2) Grammy Awards 2013, 3) Super Bowl 2013. To construct our presidential debates dataset, we have used the Twitter Search API to collect the tweets. Since there was no publicly available dataset for this, we had to collect the data manually. The data was collected on 10 different presidential debates: 7 Republican and 3 Democratic, from October 2015 to March 2016. Different hashtags like “#GOP, #GOPDebate” were used to filter out tweets specific to the debate. This is given in Table TABREF2. We extracted only English tweets for our dataset. A total of 104961 tweets were collected across all the debates. But there were some limitations with the API. Firstly, the server imposes a rate limit and discards tweets when the limit is reached. The second problem is that the API returns many duplicates. Thus, after removing the duplicates and irrelevant tweets, we were left with a total of 17304 tweets. This count includes only the tweets posted on the day of each debate. We also collected tweets on the days following the debate.
As for the other two datasets, we collected them from repositories available online. There were a total of 2580062 tweets for the Grammy Awards 2013, and a total of 2428391 tweets for the Super Bowl 2013. The statistics are given in Tables TABREF3 and TABREF3. The tweets for the Grammys were posted before and during the ceremony. However, we only use the tweets before the ceremony (after the nominations were announced) to predict the winners. As for the Super Bowl, the tweets were collected during the game, but we can still predict interesting things like the Most Valuable Player from them. The tweets for both of these datasets were already annotated and thus did not require any human intervention. However, the tweets for the debates had to be annotated.
Since we are using a supervised approach in this paper, all the tweets (for the debates) in the training set are human-annotated. The tweets were already annotated for the Grammys and the Super Bowl. Some statistics about our datasets are presented in Tables TABREF3, TABREF3 and TABREF3. The annotations for the debate dataset comprise 2 labels for each tweet: 1) Candidate: the candidate of the debate to whom the tweet refers, 2) Sentiment: the sentiment of the tweet towards that candidate, which is either positive or negative.
The task then becomes a case of multi-label classification. The candidate labels are not so trivial to obtain, because there are tweets that do not directly contain any candidate's name. For example, the tweets “a business man for president??” and “a doctor might sure bring about a change in America!” are about Donald Trump and Ben Carson respectively. Thus, it is meaningful to have a multi-label task.
The annotations for the other two datasets are similar, in that one of the labels was the sentiment and the other was category-dependent in the outcome-prediction task, as discussed in the sections below. For example, if we are trying to predict the "Album of the Year" winners for the Grammy dataset, the second label would be the nominees for that category (album of the year).
Data Set and Preprocessing ::: Preprocessing
As noted earlier, tweets are generally noisy and thus require some preprocessing before being used. Several filters were applied to the tweets, such as: (1) Usernames: Since users often include usernames in their tweets to direct their message, we simplify this by replacing the usernames with the token “USER”. For example, @michael will be replaced by USER. (2) URLs: In most of the tweets, users include links that add on to their text message. We convert/replace the link address with the token “URL”. (3) Repeated Letters: Oftentimes, users use repeated letters in a word for emphasis. For example, the word “lol” (which stands for “laugh out loud”) is sometimes written as “looooool” to emphasize the degree of funniness. We replace such repeated occurrences of letters (more than 2) with just 3 occurrences. We replace with 3 occurrences and not 2, so that we can distinguish the exaggerated usage from the regular ones. (4) Multiple Sentiments: Tweets which contain multiple sentiments are removed, such as "I hate Donald Trump, but I will vote for him". This is done so that there is no ambiguity. (5) Retweets: On Twitter, tweets of one user are often copied and posted by another user. This is known as retweeting, and such tweets are commonly abbreviated with “RT”. These are removed and only the original tweets are processed. (6) Repeated Tweets: The Twitter API sometimes returns a tweet multiple times. We remove such duplicates to avoid putting extra weight on any particular tweet.
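A minimal sketch of filters (1)-(3), (5) and (6) with regular expressions is shown below; the patterns are illustrative assumptions rather than the authors' exact rules.

```python
import re

def preprocess(tweet):
    # (5) drop retweets entirely
    if re.match(r'^\s*RT\b', tweet):
        return None
    # (1) usernames -> USER
    tweet = re.sub(r'@\w+', 'USER', tweet)
    # (2) hyperlinks -> URL
    tweet = re.sub(r'https?://\S+|www\.\S+', 'URL', tweet)
    # (3) collapse 3+ repeated letters to exactly 3 ("looooool" -> "loool")
    tweet = re.sub(r'(\w)\1{2,}', r'\1\1\1', tweet)
    return tweet

seen, cleaned = set(), []
for t in ["@michael loooool check http://example.com",
          "RT @someone great debate tonight",
          "@michael loooool check http://example.com"]:
    p = preprocess(t)
    if p is not None and p not in seen:   # (6) drop exact duplicates
        seen.add(p)
        cleaned.append(p)
print(cleaned)   # ['USER loool check URL']
```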
Methodology ::: Procedure
Our analysis of the debates is 3-fold including sentiment analysis, outcome prediction, and trend analysis.
Sentiment Analysis: To perform a sentiment analysis on the debates, we first extract all the features described below from all the tweets in the training data. We then build the different machine learning models described below on these set of features. After that, we evaluate the output produced by the models on unseen test data. The models essentially predict candidate-sentiment pairs for each tweet.
Outcome Prediction: Predict the outcome of the debates. After obtaining the sentiments on the test data for each tweet, we can compute the net normalized sentiment for each presidential candidate in the debate. This is done by looking at the number of positive and negative sentiments for each candidate. We then normalize the sentiment scores of each candidate to be in the same scale (0-1). After that, we rank the candidates based on the sentiment scores and predict the top $k$ as the winners.
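A small sketch of this scoring step follows; the candidate names and counts are invented, and min-max scaling to the $[0, 1]$ range is assumed as the normalization.

```python
from collections import Counter

# Predicted (candidate, sentiment) pairs for the test tweets of one debate.
predictions = [("trump", "pos"), ("trump", "neg"), ("cruz", "pos"),
               ("cruz", "pos"), ("rubio", "neg"), ("trump", "pos")]

counts = Counter()
for candidate, sentiment in predictions:
    counts[candidate] += 1 if sentiment == "pos" else -1   # net sentiment

lo, hi = min(counts.values()), max(counts.values())
normalized = {c: (v - lo) / (hi - lo) if hi != lo else 0.5
              for c, v in counts.items()}                   # scale to [0, 1]

k = 2
winners = sorted(normalized, key=normalized.get, reverse=True)[:k]
print(winners)   # ['cruz', 'trump']
```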
Trend Analysis: We also analyze some certain trends of the debates. Firstly, we look at the change in sentiments of the users towards the candidates over time (hours, days, months). This is done by computing the sentiment scores for each candidate in each of the debates and seeing how it varies over time, across debates. Secondly, we examine the effect of Washington Post on the views of the users. This is done by looking at the sentiments of the candidates (to predict winners) of a debate before and after the winners are announced by the experts in Washington Post. This way, we can see if Washington Post has had any effect on the sentiments of the users. Besides that, to study the behavior of the users, we also look at the correlation of the tweet volume with the number of viewers as well as the variation of tweet volume over time (hours, days, months) for debates.
As for the Grammys and the Super Bowl, we only perform the sentiment analysis and predict the outcomes.
Methodology ::: Machine Learning Models
We compare 4 different models for performing our task of sentiment classification. We then pick the best performing model for the task of outcome prediction. Here, we have two categories of algorithms: single-label and multi-label (we already discussed above why it is meaningful to have a multi-label task), because one can represent $<$candidate, sentiment$>$ as a single class label or have candidate and sentiment as two separate labels. They are listed below:
Methodology ::: Machine Learning Models ::: Single-label Classification
Naive Bayes: We use a multinomial Naive Bayes model. A tweet $t$ is assigned a class $c^{*}$ such that
$c^{*} = \arg \max _{c} P(c) \prod _{i=1}^{m} P(f_i \mid c)$, where there are $m$ features and $f_i$ represents the $i^{th}$ feature (a scikit-learn sketch of these single-label baselines is given after this list).
Support Vector Machines: Support Vector Machines (SVM) constructs a hyperplane or a set of hyperplanes in a high-dimensional space, which can then be used for classification. In our case, we use SVM with Sequential Minimal Optimization (SMO) BIBREF25, which is an algorithm for solving the quadratic programming (QP) problem that arises during the training of SVMs.
Elman Recurrent Neural Network: Recurrent Neural Networks (RNNs) are gaining popularity and are being applied to a wide variety of problems. They are a class of artificial neural networks, where connections between units form a directed cycle. This creates an internal state of the network which allows it to exhibit dynamic temporal behavior. The Elman RNN was proposed by Jeff Elman in the year 1990 BIBREF26. We use this in our task.
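For reference, a minimal sketch of the two non-neural single-label baselines above (multinomial Naive Bayes and the SVM) with scikit-learn is shown below; the tiny corpus and the packed candidate-sentiment class labels are illustrative assumptions, and scikit-learn's SVC (backed by libsvm, an SMO-style solver) stands in for the SMO-based SVM described above.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import SVC

# Toy tweets; the single class label packs <candidate, sentiment> together.
tweets = ["trump was great tonight",
          "trump had no answer on the triad",
          "cruz gave a strong closing",
          "cruz dodged every question"]
labels = ["trump_pos", "trump_neg", "cruz_pos", "cruz_neg"]

vec = CountVectorizer(ngram_range=(1, 2))
X = vec.fit_transform(tweets)

nb = MultinomialNB().fit(X, labels)
svm = SVC(kernel="linear").fit(X, labels)  # libsvm backend, SMO-style optimizer

X_test = vec.transform(["a strong closing from cruz"])
print(nb.predict(X_test), svm.predict(X_test))
```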
Methodology ::: Machine Learning Models ::: Multi-label Classification
RAkEL (RAndom k labELsets): RAkEL BIBREF27 is a multi-label classification algorithm that uses the label powerset (LP) transformation: each distinct combination of labels is treated as a single class, and an ensemble of LP classifiers, each trained on a random subset of the actual labels, is used for classification.
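RAkEL itself trains several such LP classifiers, each on a random subset of $k$ labels, and combines their votes; libraries such as scikit-multilearn provide ready-made implementations. A minimal sketch of the underlying label-powerset idea on toy data (not the authors' implementation) is shown below.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Each tweet carries two labels: (candidate, sentiment). Toy data only.
tweets = ["trump was great tonight",
          "trump had no answer on the triad",
          "cruz gave a strong closing",
          "cruz dodged every question"]
label_pairs = [("trump", "pos"), ("trump", "neg"),
               ("cruz", "pos"), ("cruz", "neg")]

# Label-powerset transformation: every observed label combination is one class.
lp_classes = ["|".join(pair) for pair in label_pairs]  # e.g. "trump|pos"

vec = CountVectorizer()
X = vec.fit_transform(tweets)
clf = LogisticRegression(max_iter=1000).fit(X, lp_classes)

pred = clf.predict(vec.transform(["a strong closing statement from cruz"]))[0]
candidate, sentiment = pred.split("|")
print(candidate, sentiment)
```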
Methodology ::: Feature Space
In order to classify the tweets, a set of features is extracted from each of the tweets, such as n-gram, part-of-speech etc. The details of these features are given below:
n-gram: This represents the frequency counts of n-grams, specifically that of unigrams and bigrams.
punctuation: The number of occurrences of punctuation symbols such as commas, exclamation marks etc.
POS (part-of-speech): The frequency of each POS tag is used as a feature.
prior polarity scoring: Here, we obtain the prior polarity of the words BIBREF6 using the Dictionary of Affect in Language (DAL) BIBREF28. This dictionary (DAL) of about 8000 English words assigns a pleasantness score to each word on a scale of 1-3. After normalizing, we label words with a score higher than $0.8$ as positive and words with a score less than $0.5$ as negative. If a word is not present in the dictionary, we look up its synonyms in WordNet: if a synonym is present in the dictionary, we assign the original word its synonym's score (a small sketch of this lookup is given after this feature list).
Twitter Specific features:
Number of hashtags ($\#$ symbol)
Number of user mentions (@ symbol)
Number of hyperlinks
Document embedding features: Here, we use the approach proposed by Mikolov et al. BIBREF3 to embed an entire tweet into a vector of features
Topic features: Here, LDA (Latent Dirichlet Allocation) BIBREF4 is used to extract topic-specific features for a tweet (document). This is basically the document-topic probability distribution output by the model.
In the following experiments, we use 1-$gram$, 2-$gram$ and $(1+2)$-$gram$ to denote unigram, bigram and a combination of unigram and bigram features respectively. We also combine punctuation and the other features as miscellaneous features and use $MISC$ to denote this. We represent the document-embedding features by $DOC$ and topic-specific features by $TOPIC$.
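For the prior-polarity feature above, the sketch below illustrates the dictionary lookup with a WordNet fallback; the dal_scores dictionary and its values are placeholders, since the actual DAL scores are not reproduced here, and NLTK's WordNet interface (with the wordnet corpus downloaded) is assumed.

```python
from nltk.corpus import wordnet as wn  # requires nltk and the "wordnet" corpus

# Placeholder scores standing in for the normalized DAL pleasantness values.
dal_scores = {"happy": 0.92, "terrible": 0.12, "strong": 0.85}

def prior_polarity(word):
    score = dal_scores.get(word)
    if score is None:
        # Fall back to WordNet synonyms; use the first synonym found in the DAL.
        for syn in wn.synsets(word):
            for lemma in syn.lemma_names():
                if lemma in dal_scores:
                    score = dal_scores[lemma]
                    break
            if score is not None:
                break
    if score is None:
        return "unknown"
    return "positive" if score > 0.8 else "negative" if score < 0.5 else "neutral"

print(prior_polarity("strong"), prior_polarity("glad"))
```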
Data Analysis
In this section, we analyze the presidential debates data and show some trends.
First, we look at the trend of the tweet frequency. Figure FIGREF21 shows the trends of the tweet frequency and the number of TV viewers as the debates progress over time. We observe from Figures FIGREF21 and FIGREF21 that for the first 5 debates considered, the trend of the number of TV viewers matches the trend of the number of tweets. However, we can see that towards the final debates, the frequency of the tweets decreases consistently. This reveals an interesting fact: although people still watch the debates, the number of people who tweet about them is greatly reduced. Since the tweeting community consists mainly of younger users, this suggests that many of the active tweeters did not watch the later debates; if they had, the two trends should ideally match.
Next, we look at the tweeting activity on the days around a debate: specifically on the day of the debate, the next day and two days later. Figure FIGREF22 shows the trend of the tweet frequency around the day of the 5th Republican debate, i.e., December 15, 2015. As can be seen, people tweet more, on average, on the day of the debate than a day or two after it. This makes sense intuitively because the debate would still be fresh in their heads.
Then, we look at how people tweet in the hours of the debate: specifically during the debate, one hour after and then two hours after. Figure FIGREF23 shows the trend of the tweet frequency around the hour of the 5th republican debate, i.e December 15, 2015. We notice that people don't tweet much during the debate but the activity drastically increases after two hours. This might be because people were busy watching the debate and then taking some time to process things, so that they can give their opinion.
We have seen the frequency of tweets over time in the previous trends. Now, we will look at how the sentiments of the candidates change over time.
First, Figure FIGREF24 shows how the sentiments towards the candidates changed across the debates. We find that many of the candidates have had ups and downs across the debates. These trends are interesting in that they give some useful information about what happened in a debate that caused the sentiments to change (sometimes drastically). For example, if we look at the graph for Donald Trump, we see that his sentiment was at its lowest point during the debate held on December 15. Looking into the debate, we can easily see why this was the case. At a certain point in the debate, Trump was asked about his ideas for the nuclear triad. It is very important that a presidential candidate knows about this, but Trump had no idea what the nuclear triad was and, in a transparent attempt to cover his tracks, resorted to a “we need to be strong" speech. It may be due to this embarrassment that his sentiment went down during this debate.
Next, we investigate how the sentiments of the users towards the candidates change before and after the debate. In essence, we examine how the debate and the results of the debates given by the experts affect the sentiments towards the candidates. Figure FIGREF25 shows the sentiments of the users towards the candidates during the 5th Republican Debate, 15th December 2015. It can be seen that the sentiments of the users towards the candidates do indeed change over the course of two days. One particular example is that of Jeb Bush. It seems that the populace holds pre-existing biases towards the candidates, which is reflected in their sentiments towards the candidates on the day of the debate. The results of the Washington Post are released in the morning after the debate. One can see the winners suggested by the Washington Post in Table TABREF35. One of the winners in that debate according to them is Jeb Bush. Coincidentally, Figure FIGREF25 suggests that the sentiment towards Bush has gone up one day after the debate (essentially, one day after the results given by the experts are out).
There is some influence, for better or worse, of these experts on the minds of the users in the early debates, but towards the final debates the sentiments of the users are mostly unwavering, as can be seen in Figure FIGREF25. Figure FIGREF25 is on the last Republican debate, and suggests that the opinions of the users do not change much with time. Essentially the users have seen enough debates to make up their own minds and their sentiments are not easily wavered.
Evaluation Metrics
In this section, we define the different evaluation metrics that we use for different tasks. We have two tasks at hand: 1) Sentiment Analysis, 2) Outcome Prediction. We use different metrics for these two tasks.
Evaluation Metrics ::: Sentiment Analysis
In the study of sentiment analysis, we use “Hamming Loss” to evaluate the performance of the different methods. Hamming Loss, based on Hamming distance, takes into account the prediction error and the missing error, normalized over the total number of classes and total number of examples BIBREF29. The Hamming Loss is given below:
$\textit{Hamming Loss} = \frac{1}{|D|\,|L|} \sum _{i=1}^{|D|} |S_i \oplus Y_i|$, where $|D|$ is the number of examples in the dataset and $|L|$ is the number of labels. $S_i$ and $Y_i$ denote the sets of true and predicted labels for instance $i$ respectively. $\oplus $ stands for the XOR operation (here, the symmetric difference of the two label sets) BIBREF30. Intuitively, the performance is better when the Hamming Loss is smaller; 0 would be the ideal case.
Evaluation Metrics ::: Outcome Prediction
For the case of outcome prediction, we will have a predicted set and an actual set of results. Thus, we can use common information retrieval metrics to evaluate the prediction performance. Those metrics are listed below:
Mean F-measure: F-measure takes into account both the precision and recall of the results. In essence, it takes into account how many of the relevant results are returned and also how many of the returned results are relevant.
$F = \frac{1}{|D|} \sum _{i=1}^{|D|} \frac{2\, P_i R_i}{P_i + R_i}$, where $|D|$ is the number of queries (debates/categories for grammy winners etc.), $P_i$ and $R_i$ are the precision and recall for the $i^{th}$ query.
Mean Average Precision: As a standard metric used in information retrieval, Mean Average Precision for a set of queries is the mean of the average precision scores for each query:
$\textit{MAP} = \frac{1}{|D|} \sum _{i=1}^{|D|} \frac{\sum _{k} P_i(k)\, rel_i(k)}{|RD_i|}$, where $|D|$ is the number of queries (e.g., debates), $P_i(k)$ is the precision at $k$ ($P@k$) for the $i^{th}$ query, $rel_i(k)$ is an indicator function, equaling 1 if the document at position $k$ for the $i^{th}$ query is relevant, else 0, and $|RD_i|$ is the number of relevant documents for the $i^{th}$ query.
Results ::: Sentiment Analysis
We use 3 different datasets for the problem of sentiment analysis, as already mentioned. We test the four different algorithms mentioned in Section SECREF6, with different combinations of the features that are described in Section SECREF10. To evaluate our models, we use the “Hamming Loss” metric as discussed in Section SECREF6. We use this metric because our problem is in the multi-label classification domain. However, the single-label classifiers like SVM, Naive Bayes, Elman RNN cannot be evaluated against this metric directly. To do this, we split the predicted labels into a format that is consistent with that of multi-label classifiers like RaKel. The results of the experiments for each of the datasets are given in Tables TABREF34, TABREF34 and TABREF34. In the tables, $f_1$, $f_2$, $f_3$, $f_4$, $f_5$ and $f_6$ denote the features 1-$gram$, 2-$gram$, $(1+2)$-$gram$, $(1+2)$-$gram + MISC$, $DOC$ and $DOC + TOPIC$ respectively. Note that lower values of the Hamming Loss are more desirable. We find that RaKel performs the best out of all the algorithms. RaKel is more suited for the task because ours is a multi-label classification task. Among all the single-label classifiers, SVM performs the best. We also observe that as we use more complex feature spaces, the performance increases. This is true for almost all of the algorithms listed.
Our best performing feature set is a combination of paragraph embedding features and topic features from LDA. This makes sense intuitively because paragraph embedding takes into account the context of the text. Context is very important in determining the sentiment of a short text: having negative words in the text does not always mean that the text contains a negative sentiment. For example, the sentence “never say never is not a bad thing” has many negative words; but the sentence as a whole does not have a negative sentiment. This is why we need some kind of context information to accurately determine the sentiment. Thus, with these embedded features, one would be able to better determine the polarity of the sentence. The other label is the entity (candidate/song etc.) under consideration. Topic features make sense here because this label can be considered as the topic of the tweet in some sense: it captures what or whom the tweet is about.
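The $DOC$ (paragraph-embedding) features referred to here can be produced with gensim's Doc2Vec implementation of Mikolov et al.'s paragraph vectors; the sketch below is illustrative only, with a toy corpus and arbitrary hyperparameters rather than the settings used in the paper.

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

tweets = ["never say never is not a bad thing".split(),
          "trump was strong in the debate tonight".split(),
          "not impressed with the answers tonight".split()]
corpus = [TaggedDocument(words, [i]) for i, words in enumerate(tweets)]

model = Doc2Vec(vector_size=50, min_count=1, epochs=40)
model.build_vocab(corpus)
model.train(corpus, total_examples=model.corpus_count, epochs=model.epochs)

# Embed an unseen tweet into a 50-dimensional feature vector.
vec = model.infer_vector("a bad night for the candidates".split())
print(vec.shape)   # (50,)
```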
Results ::: Results for Outcome Prediction
In this section, we show the results for the outcome-prediction of the events. RaKel, as the best performing method, is trained to predict the sentiment-labels for the unlabeled data. The sentiment labels are then used to determine the outcome of the events. In the Tables (TABREF35, TABREF36, TABREF37) of outputs given, we only show as many predictions as there are winners.
Results ::: Results for Outcome Prediction ::: Presidential Debates
The results obtained for the outcome prediction task for the US presidential debates is shown in Table TABREF35. Table TABREF35 shows the winners as given in the Washington Post (3rd column) and the winners that are predicted by our system (2nd column). By comparing the set of results obtained from both the sources, we find that the set of candidates predicted match to a large extent with the winners given out by the Washington Post. The result suggests that the opinions of the social media community match with that of the journalists in most cases.
Results ::: Results for Outcome Prediction ::: Grammy Awards
A Grammy Award is given for outstanding achievement in the music industry. There are two types of awards: “General Field” awards, which are not restricted by genre, and genre-specific awards. Since there can be up to 80 categories of awards, we only focus on the main 4: 1) Album of the Year, 2) Record of the Year, 3) Song of the Year, and 4) Best New Artist. These are the main categories of the awards ceremony and the most anticipated. That is also why we choose to predict the outcomes of these categories based on the tweets. We use the tweets before the ceremony (but after the nominations) to predict the outcomes.
Basically, we have a list of nominations for each category. We filter the tweets based on these nominations and then predict the winner as in the case of the debates. The outcomes are listed in Table TABREF36. We see that, largely, the opinions of the users on the social network agree with those of the deciding committee of the awards. The winners agree for all the categories except “Song of the Year”.
Results ::: Results for Outcome Prediction ::: Super Bowl
The Super Bowl is the annual championship game of the National Football League. We have collected the data for the year 2013. Here, the match was between the Baltimore Ravens and the San Francisco 49ers. The tweets that we have collected were posted during the game. From these tweets, one could trivially determine the winner. But an interesting outcome would be to predict the Most Valuable Player (MVP) of the game. To determine this, we examined all the tweets and determined the player with the highest positive sentiment by the end of the game. The result in Table TABREF37 suggests that we are able to determine the outcomes accurately.
Table TABREF43 displays some evaluation metrics for this task. These were computed based on the predicted outcomes and the actual outcomes for each of the different datasets. Since the number of participants varies from debate-to-debate or category-to-category for Grammy etc., we cannot return a fixed number of winners for everything. So, the size of our returned ranked-list is set to half of the number of participants (except for the MVP for Super Bowl; there are so many players and returning half of them when only one of them is relevant is meaningless. So, we just return the top 10 players). As we can see from the metrics, the predicted outcomes match quite well with the actual ones (or the ones given by the experts).
Conclusions
This paper presents a study that compares the opinions of users on microblogs, which is essentially the crowd wisdom, to that of the experts in the field. Specifically, we explore three datasets: US Presidential Debates 2015-16, Grammy Awards 2013, Super Bowl 2013. We determined if the opinions of the crowd and the experts match by using the sentiments of the tweets to predict the outcomes of the debates/Grammys/Super Bowl. We observed that in most of the cases, the predictions were right indicating that crowd wisdom is indeed worth looking at and mining sentiments in microblogs is useful. In some cases where there were disagreements, however, we observed that the opinions of the experts did have some influence on the opinions of the users. We also find that the features that were most useful in our case of multi-label classification was a combination of the document-embedding and topic features. | peoples' sentiments expressed over social media |
14421b7ae4459b647033b3ccba635d4ba7bb114b | 14421b7ae4459b647033b3ccba635d4ba7bb114b_0 | Q: How do you establish the ground truth of who won a debate?
Text: Introduction
Over the past few years, microblogs have become one of the most popular online social networks. Microblogging websites have evolved to become a source of varied kinds of information. This is due to the nature of microblogs: people post real-time messages about their opinions and express sentiment on a variety of topics, discuss current issues, complain, etc. Twitter is one such popular microblogging service where users create status messages (called “tweets"). With over 400 million tweets per day on Twitter, microblog users generate large amount of data, which cover very rich topics ranging from politics, sports to celebrity gossip. Because the user generated content on microblogs covers rich topics and expresses sentiment/opinions of the mass, mining and analyzing this information can prove to be very beneficial both to the industrial and the academic community. Tweet classification has attracted considerable attention because it has become very important to analyze peoples' sentiments and opinions over social networks.
Most of the current work on analysis of tweets is focused on sentiment analysis BIBREF0, BIBREF1, BIBREF2. Not much has been done on predicting outcomes of events based on the tweet sentiments, for example, predicting winners of presidential debates based on the tweets by analyzing the users' sentiments. This is possible intuitively because the sentiment of the users in their tweets towards the candidates is proportional to the performance of the candidates in the debate.
In this paper, we analyze three such events: 1) US Presidential Debates 2015-16, 2) Grammy Awards 2013, and 3) Super Bowl 2013. The main focus is on the analysis of the presidential debates. For the Grammys and the Super Bowl, we just perform sentiment analysis and try to predict the outcomes in the process. For the debates, in addition to the analysis done for the Grammys and Super Bowl, we also perform a trend analysis. Our analysis of the tweets for the debates is 3-fold as shown below.
Sentiment: Perform a sentiment analysis on the debates. This involves: building a machine learning model which learns the sentiment-candidate pair (candidate is the one to whom the tweet is being directed) from the training data and then using this model to predict the sentiment-candidate pairs of new tweets.
Predicting Outcome: Here, after predicting the sentiment-candidate pairs on the new data, we predict the winner of the debates based on the sentiments of the users.
Trends: Here, we analyze certain trends of the debates like the change in sentiments of the users towards the candidates over time (hours, days, months) and how the opinion of experts such as Washington Post affect the sentiments of the users.
For the sentiment analysis, we look at our problem in a multi-label setting, our two labels being sentiment polarity and the candidate/category in consideration. We test both single-label classifiers and multi-label ones on the problem and as intuition suggests, the multi-label classifier RaKel performs better. A combination of document-embedding features BIBREF3 and topic features (essentially the document-topic probabilities) BIBREF4 is shown to give the best results. These features make sense intuitively because the document-embedding features take context of the text into account, which is important for sentiment polarity classification, and topic features take into account the topic of the tweet (who/what is it about).
The prediction of outcomes of debates is very interesting in our case. Most of the results seem to match with the views of some experts such as the political pundits of the Washington Post. This implies that certain rules that were used to score the candidates in the debates by said-experts were in fact reflected by reading peoples' sentiments expressed over social media. This opens up a wide variety of learning possibilities from users' sentiments on social media, which is sometimes referred to as the wisdom of crowd.
We do find out that the public sentiments are not always coincident with the views of the experts. In this case, it is interesting to check whether the views of the experts can affect the public, for example, by spreading through the social media microblogs such as Twitter. Hence, we also conduct experiments to compare the public sentiment before and after the experts' views become public and thus notice the impact of the experts' views on the public sentiment. In our analysis of the debates, we observe that in certain debates, such as the 5th Republican Debate, held on December 15, 2015, the opinions of the users vary from the experts. But we see the effect of the experts on the sentiment of the users by looking at their opinions of the same candidates the next day.
Our contributions are mainly: we want to see how predictive the sentiment/opinion of the users are in social media microblogs and how it compares to that of the experts. In essence, we find that the crowd wisdom in the microblog domain matches that of the experts in most cases. There are cases, however, where they don't match but we observe that the crowd's sentiments are actually affected by the experts. This can be seen in our analysis of the presidential debates.
The rest of the paper is organized as follows: in section SECREF2, we review some of the literature. In section SECREF3, we discuss the collection and preprocessing of the data. Section SECREF4 details the approach taken, along with the features and the machine learning methods used. Section SECREF7 discusses the results of the experiments conducted and lastly section SECREF8 ends with a conclusion on the results including certain limitations and scopes for improvement to work on in the future.
Related Work
Sentiment analysis as a Natural Language Processing task has been handled at many levels of granularity. Specifically on the microblog front, some of the early results on sentiment analysis are by BIBREF0, BIBREF1, BIBREF2, BIBREF5, BIBREF6. Go et al. BIBREF0 applied distant supervision to classify tweet sentiment by using emoticons as noisy labels. Kouloumpis et al. BIBREF7 exploited hashtags in tweets to build training data. Chenhao Tan et al. BIBREF8 determined user-level sentiments on particular topics with the help of the social network graph.
There has been some work in event detection and extraction in microblogs as well. In BIBREF9, the authors describe a way to extract major life events of a user based on tweets that either congratulate/offer condolences. BIBREF10 build a key-word graph from the data and then detect communities in this graph (cluster) to find events. This works because words that describe similar events will form clusters. In BIBREF11, the authors use distant supervision to extract events. There has also been some work on event retrieval in microblogs BIBREF12. In BIBREF13, the authors detect time points in the twitter stream when an important event happens and then classify such events based on the sentiments they evoke using only non-textual features to do so. In BIBREF14, the authors study how much of the opinion extracted from Online Social Networks (OSN) user data is reflective of the opinion of the larger population. Researchers have also mined Twitter dataset to analyze public reaction to various events: from election debate performance BIBREF15, where the authors demonstrate visuals and metrics that can be used to detect sentiment pulse, anomalies in that pulse, and indications of controversial topics that can be used to inform the design of visual analytic systems for social media events, to movie box-office predictions on the release day BIBREF16. Mishne and Glance BIBREF17 correlate sentiments in blog posts with movie box-office scores. The correlations they observed for positive sentiments are fairly low and not sufficient to use for predictive purposes. Recently, several approaches involving machine learning and deep learning have also been used in the visual and language domains BIBREF18, BIBREF19, BIBREF20, BIBREF21, BIBREF22, BIBREF23, BIBREF24.
Data Set and Preprocessing ::: Data Collection
Twitter is a social networking and microblogging service that allows users to post real-time messages, called tweets. Tweets are very short messages, a maximum of 140 characters in length. Due to such a restriction in length, people tend to use a lot of acronyms, shorten words etc. In essence, the tweets are usually very noisy. There are several aspects to tweets such as: 1) Target: Users use the symbol “@" in their tweets to refer to other users on the microblog. 2) Hashtag: Hashtags are used by users to mark topics. This is done to increase the visibility of the tweets.
We conduct experiments on 3 different datasets, as mentioned earlier: 1) US Presidential Debates, 2) Grammy Awards 2013, 3) Super Bowl 2013. To construct our presidential debates dataset, we have used the Twitter Search API to collect the tweets. Since there was no publicly available dataset for this, we had to collect the data manually. The data was collected on 10 different presidential debates: 7 Republican and 3 Democratic, from October 2015 to March 2016. Different hashtags like “#GOP, #GOPDebate” were used to filter out tweets specific to the debate. This is given in Table TABREF2. We extracted only English tweets for our dataset. A total of 104961 tweets were collected across all the debates. But there were some limitations with the API. Firstly, the server imposes a rate limit and discards tweets when the limit is reached. The second problem is that the API returns many duplicates. Thus, after removing the duplicates and irrelevant tweets, we were left with a total of 17304 tweets. This count includes only the tweets posted on the day of each debate. We also collected tweets on the days following the debate.
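As an illustration of this collection step, a minimal sketch with the current tweepy client is shown below; the paper used the older Search API, so the client class, query operators and bearer token here are assumptions about a present-day setup rather than the authors' original code.

```python
import tweepy

client = tweepy.Client(bearer_token="YOUR_BEARER_TOKEN")  # placeholder credential

seen_ids, tweets = set(), []
# "lang:en" keeps English tweets; "-is:retweet" drops retweets at query time.
response = client.search_recent_tweets(query="#GOPDebate lang:en -is:retweet",
                                        max_results=100)
for tweet in response.data or []:
    if tweet.id not in seen_ids:          # the API can return duplicates
        seen_ids.add(tweet.id)
        tweets.append(tweet.text)

print(len(tweets))
```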
As for the other two datasets, we collected them from repositories available online. There were a total of 2580062 tweets for the Grammy Awards 2013, and a total of 2428391 tweets for the Super Bowl 2013. The statistics are given in Tables TABREF3 and TABREF3. The tweets for the Grammys were posted before and during the ceremony. However, we only use the tweets before the ceremony (after the nominations were announced) to predict the winners. As for the Super Bowl, the tweets were collected during the game, but we can still predict interesting things like the Most Valuable Player from them. The tweets for both of these datasets were already annotated and thus did not require any human intervention. However, the tweets for the debates had to be annotated.
Since we are using a supervised approach in this paper, all the tweets (for the debates) in the training set are human-annotated. The tweets were already annotated for the Grammys and the Super Bowl. Some statistics about our datasets are presented in Tables TABREF3, TABREF3 and TABREF3. The annotations for the debate dataset comprise 2 labels for each tweet: 1) Candidate: the candidate of the debate to whom the tweet refers, 2) Sentiment: the sentiment of the tweet towards that candidate, which is either positive or negative.
The task then becomes a case of multi-label classification. The candidate labels are not so trivial to obtain, because there are tweets that do not directly contain any candidate's name. For example, the tweets “a business man for president??” and “a doctor might sure bring about a change in America!” are about Donald Trump and Ben Carson respectively. Thus, it is meaningful to have a multi-label task.
The annotations for the other two datasets are similar, in that one of the labels was the sentiment and the other was category-dependent in the outcome-prediction task, as discussed in the sections below. For example, if we are trying to predict the "Album of the Year" winners for the Grammy dataset, the second label would be the nominees for that category (album of the year).
Data Set and Preprocessing ::: Preprocessing
As noted earlier, tweets are generally noisy and thus require some preprocessing before being used. Several filters are applied to the tweets: (1) Usernames: Since users often include usernames in their tweets to direct their message, we simplify this by replacing the usernames with the token “USER”. For example, @michael will be replaced by USER. (2) URLs: In most of the tweets, users include links that add on to their text message. We replace the link address with the token “URL”. (3) Repeated Letters: Oftentimes, users repeat letters in a word for emphasis. For example, the word “lol” (which stands for “laugh out loud”) is sometimes written as “looooool” to emphasize the degree of funniness. We replace such repeated occurrences of letters (more than 2) with just 3 occurrences. We replace them with 3 occurrences and not 2 so that we can distinguish the exaggerated usage from the regular one. (4) Multiple Sentiments: Tweets which contain multiple sentiments are removed, such as "I hate Donald Trump, but I will vote for him". This is done so that there is no ambiguity. (5) Retweets: On Twitter, tweets of a person are often copied and posted by another user. This is known as retweeting, and such tweets are commonly abbreviated with “RT”. These are removed and only the original tweets are processed. (6) Repeated Tweets: The Twitter API sometimes returns a tweet multiple times. We remove such duplicates to avoid putting extra weight on any particular tweet.
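To make these filters concrete, the following is a minimal Python sketch of steps (1), (2), (3), (5) and (6); the function names and regular expressions are illustrative assumptions rather than the exact implementation used in this work, and step (4) is omitted because it requires sentiment annotations.

```python
import re

def preprocess_tweet(text):
    """Apply filters (1), (2), (3) and (5) to a single tweet.
    Returns None for tweets that should be discarded (retweets)."""
    # (5) Drop retweets entirely; only original tweets are processed.
    if text.strip().lower().startswith("rt "):
        return None
    # (1) Replace @usernames with a generic USER token.
    text = re.sub(r"@\w+", "USER", text)
    # (2) Replace hyperlinks with a generic URL token.
    text = re.sub(r"https?://\S+|www\.\S+", "URL", text, flags=re.IGNORECASE)
    # (3) Collapse letters repeated more than twice down to exactly three,
    #     so exaggerated spellings stay distinguishable from regular ones.
    text = re.sub(r"(\w)\1{2,}", r"\1\1\1", text)
    return text.strip()

def deduplicate(tweets):
    """(6) Remove exact duplicates returned by the API, preserving order."""
    seen, unique = set(), []
    for t in tweets:
        if t not in seen:
            seen.add(t)
            unique.append(t)
    return unique

if __name__ == "__main__":
    raw = ["RT @foo: looooool",
           "@michael this debate is goooood http://t.co/x",
           "@michael this debate is goooood http://t.co/x"]
    cleaned = [preprocess_tweet(t) for t in deduplicate(raw)]
    print([c for c in cleaned if c is not None])
```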
Methodology ::: Procedure
Our analysis of the debates is three-fold: sentiment analysis, outcome prediction, and trend analysis.
Sentiment Analysis: To perform a sentiment analysis on the debates, we first extract all the features described below from all the tweets in the training data. We then build the different machine learning models described below on this set of features. After that, we evaluate the output produced by the models on unseen test data. The models essentially predict candidate-sentiment pairs for each tweet.
Outcome Prediction: Here, we predict the outcome of the debates. After obtaining the sentiments on the test data for each tweet, we compute the net normalized sentiment for each presidential candidate in the debate. This is done by looking at the number of positive and negative sentiments for each candidate. We then normalize the sentiment scores of each candidate to the same scale (0-1). After that, we rank the candidates based on the sentiment scores and predict the top $k$ as the winners.
Trend Analysis: We also analyze some certain trends of the debates. Firstly, we look at the change in sentiments of the users towards the candidates over time (hours, days, months). This is done by computing the sentiment scores for each candidate in each of the debates and seeing how it varies over time, across debates. Secondly, we examine the effect of Washington Post on the views of the users. This is done by looking at the sentiments of the candidates (to predict winners) of a debate before and after the winners are announced by the experts in Washington Post. This way, we can see if Washington Post has had any effect on the sentiments of the users. Besides that, to study the behavior of the users, we also look at the correlation of the tweet volume with the number of viewers as well as the variation of tweet volume over time (hours, days, months) for debates.
As for the Grammys and the Super Bowl, we only perform the sentiment analysis and predict the outcomes.
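The outcome-prediction step described above reduces to counting positive and negative labels per candidate, rescaling the net counts, and taking the top $k$. The sketch below is one plausible Python reading of that procedure; min-max normalization and the function name are our own assumptions rather than details given in this work.

```python
from collections import Counter

def predict_winners(predictions, k):
    """predictions: iterable of (candidate, sentiment) pairs, sentiment being
    'positive' or 'negative'. Returns the top-k candidates by normalized
    net sentiment score."""
    pos, neg = Counter(), Counter()
    for candidate, sentiment in predictions:
        (pos if sentiment == "positive" else neg)[candidate] += 1

    # Net sentiment per candidate, then rescaled to the [0, 1] range.
    candidates = set(pos) | set(neg)
    net = {c: pos[c] - neg[c] for c in candidates}
    lo, hi = min(net.values()), max(net.values())
    scale = (hi - lo) or 1
    scores = {c: (net[c] - lo) / scale for c in candidates}

    return sorted(scores, key=scores.get, reverse=True)[:k]

if __name__ == "__main__":
    preds = [("Trump", "negative"), ("Rubio", "positive"),
             ("Cruz", "positive"), ("Cruz", "positive"), ("Trump", "positive")]
    print(predict_winners(preds, k=2))   # ['Cruz', 'Rubio']
```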
Methodology ::: Machine Learning Models
We compare 4 different models for performing our task of sentiment classification. We then pick the best performing model for the task of outcome prediction. Here, we have two categories of algorithms: single-label and multi-label (we discussed earlier why it is meaningful to have a multi-label task), because one can either represent $<$candidate, sentiment$>$ as a single class label or have candidate and sentiment as two separate labels. They are listed below:
Methodology ::: Machine Learning Models ::: Single-label Classification
Naive Bayes: We use a multinomial Naive Bayes model. A tweet $t$ is assigned a class $c^{*}$ such that $c^{*} = \arg \max _{c} P(c)\prod _{i=1}^{m}P(f_{i}|c)$,
where there are $m$ features and $f_i$ represents the $i^{th}$ feature.
Support Vector Machines: Support Vector Machines (SVM) constructs a hyperplane or a set of hyperplanes in a high-dimensional space, which can then be used for classification. In our case, we use SVM with Sequential Minimal Optimization (SMO) BIBREF25, which is an algorithm for solving the quadratic programming (QP) problem that arises during the training of SVMs.
Elman Recurrent Neural Network: Recurrent Neural Networks (RNNs) are gaining popularity and are being applied to a wide variety of problems. They are a class of artificial neural networks, where connections between units form a directed cycle. This creates an internal state of the network which allows it to exhibit dynamic temporal behavior. The Elman RNN was proposed by Jeff Elman in the year 1990 BIBREF26. We use this in our task.
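As a rough illustration of the single-label baselines above, the sketch below trains Naive Bayes and SVM classifiers with scikit-learn on toy data; the tweets and labels are invented, unigram+bigram counts stand in for the full feature space, and scikit-learn's SVC is used here instead of the SMO implementation referenced above.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Toy stand-ins for preprocessed tweets and their sentiment labels.
tweets = ["USER great answer on immigration", "worst debate performance ever",
          "USER solid night overall", "completely lost on foreign policy"]
labels = ["positive", "negative", "positive", "negative"]

nb = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
svm = make_pipeline(CountVectorizer(ngram_range=(1, 2)), SVC(kernel="linear"))

for name, model in [("NaiveBayes", nb), ("SVM", svm)]:
    scores = cross_val_score(model, tweets, labels, cv=2)
    print(name, scores.mean())
```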
Methodology ::: Machine Learning Models ::: Multi-label Classification
RAkEL (RAndom k labELsets): RAkEL BIBREF27 is a multi-label classification algorithm that uses the label powerset (LP) transformation: each combination of labels is treated as a single class, and multiple LP classifiers, each trained on a random subset of the actual labels, are used for classification.
Methodology ::: Feature Space
In order to classify the tweets, a set of features is extracted from each of the tweets, such as n-gram, part-of-speech etc. The details of these features are given below:
n-gram: This represents the frequency counts of n-grams, specifically that of unigrams and bigrams.
punctuation: The number of occurrences of punctuation symbols such as commas, exclamation marks etc.
POS (part-of-speech): The frequency of each POS tag is used as a feature.
prior polarity scoring: Here, we obtain the prior polarity of the words BIBREF6 using the Dictionary of Affect in Language (DAL) BIBREF28. This dictionary (DAL) of about 8000 English words assigns a pleasantness score to each word on a scale of 1-3. After normalizing, we regard words with a polarity higher than $0.8$ as positive and lower than $0.5$ as negative. If a word is not present in the dictionary, we look up its synonyms in WordNet: if a synonym is present in the dictionary, we assign the original word its synonym's score.
Twitter Specific features:
Number of hashtags ($\#$ symbol)
Number of user mentions (the “@” symbol)
Number of hyperlinks
Document embedding features: Here, we use the approach proposed by Mikolov et al. BIBREF3 to embed an entire tweet into a vector of features
Topic features: Here, LDA (Latent Dirichlet Allocation) BIBREF4 is used to extract topic-specific features for a tweet (document). This is basically the topic-document probability that is outputted by the model.
In the following experiments, we use 1-$gram$, 2-$gram$ and $(1+2)$-$gram$ to denote unigram, bigram and a combination of unigram and bigram features respectively. We also combine punctuation and the other features as miscellaneous features and use $MISC$ to denote this. We represent the document-embedding features by $DOC$ and topic-specific features by $TOPIC$.
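The snippet below sketches two of the features listed above, the n-gram counts and the prior polarity scores; the tiny pleasantness dictionary is only a stand-in for the real DAL lexicon, and the thresholds follow the 0.8/0.5 rule described earlier.

```python
from sklearn.feature_extraction.text import CountVectorizer
from nltk.corpus import wordnet  # requires: nltk.download('wordnet')

# Stand-in for normalized DAL pleasantness scores (the real DAL has ~8000 words).
DAL = {"good": 0.9, "great": 0.95, "bad": 0.2, "terrible": 0.1, "debate": 0.6}

def prior_polarity(word):
    """'positive' (> 0.8), 'negative' (< 0.5) or 'neutral', falling back to
    WordNet synonyms when the word is missing from the dictionary."""
    score = DAL.get(word)
    if score is None:
        for syn in wordnet.synsets(word):
            for lemma in syn.lemma_names():
                if lemma in DAL:
                    score = DAL[lemma]
                    break
            if score is not None:
                break
    if score is None:
        return "neutral"
    return "positive" if score > 0.8 else "negative" if score < 0.5 else "neutral"

def polarity_counts(tweet):
    tags = [prior_polarity(w) for w in tweet.lower().split()]
    return {p: tags.count(p) for p in ("positive", "negative", "neutral")}

ngrams = CountVectorizer(ngram_range=(1, 2))          # 1-gram and 2-gram counts
X = ngrams.fit_transform(["a great debate", "a terrible debate"])
print(polarity_counts("a great debate"))
```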
Data Analysis
In this section, we analyze the presidential debates data and show some trends.
First, we look at the trend of the tweet frequency. Figure FIGREF21 shows the trends of the tweet frequency and the number of TV viewers as the debates progress over time. We observe from Figures FIGREF21 and FIGREF21 that for the first 5 debates considered, the trend of the number of TV viewers matches the trend of the number of tweets. However, we can see that towards the final debates, the frequency of the tweets decreases consistently. This shows an interesting fact: although people still watch the debates, the number of people who tweet about them is greatly reduced. Since the actively tweeting community is largely made up of younger users, this suggests that most of them did not watch the later debates; if they had, the trends should ideally match.
Next we look at the tweeting activity on the days around a debate: specifically on the day of the debate, the next day and two days later. Figure FIGREF22 shows the trend of the tweet frequency around the day of the 5th Republican debate, i.e., December 15, 2015. As can be seen, on average people tweet more on the day of the debate than a day or two after it. This makes sense intuitively because the debate would be fresh in their heads.
Then, we look at how people tweet in the hours of the debate: specifically during the debate, one hour after and then two hours after. Figure FIGREF23 shows the trend of the tweet frequency around the hour of the 5th republican debate, i.e December 15, 2015. We notice that people don't tweet much during the debate but the activity drastically increases after two hours. This might be because people were busy watching the debate and then taking some time to process things, so that they can give their opinion.
We have seen the frequency of tweets over time in the previous trends. Now, we will look at how the sentiments of the candidates change over time.
First, Figure FIGREF24 shows how the sentiments of the candidates changed across the debates. We find that many of the candidates have had ups and downs across the debates. These trends are interesting in that they give some useful information about what went down in a debate that caused the sentiments to change (sometimes drastically). For example, if we look at the graph for Donald Trump, we see that his sentiment was at its lowest point during the debate held on December 15. Looking into the debate, we can easily see why this was the case. At a certain point in the debate, Trump was asked about his ideas for the nuclear triad. It is very important that a presidential candidate knows about this, but Trump had no idea what the nuclear triad was and, in a transparent attempt to cover his tracks, resorted to a “we need to be strong” speech. It may be due to this embarrassment that his sentiment went down during this debate.
Next, we investigate how the sentiments of the users towards the candidates change before and after the debate. In essence, we examine how the debate and the results of the debate given by the experts affect the sentiment towards the candidates. Figure FIGREF25 shows the sentiments of the users towards the candidates during the 5th Republican Debate, 15th December 2015. It can be seen that the sentiments of the users towards the candidates do indeed change over the course of two days. One particular example is that of Jeb Bush. It seems that the populace is generally prejudiced towards the candidates, which is reflected in their sentiments on the day of the debate. The results of the Washington Post are released on the morning after the debate. One can see the winners suggested by the Washington Post in Table TABREF35. One of the winners of that debate according to them is Jeb Bush. Coincidentally, Figure FIGREF25 suggests that the sentiment of Bush went up one day after the debate (essentially, one day after the results given by the experts are out).
There is some influence, for better or worse, of these experts on the minds of the users in the early debates, but towards the final debates the sentiments of the users are mostly unwavering, as can be seen in Figure FIGREF25. Figure FIGREF25 is on the last Republican debate, and suggests that the opinions of the users do not change much with time. Essentially the users have seen enough debates to make up their own minds and their sentiments are not easily wavered.
Evaluation Metrics
In this section, we define the different evaluation metrics that we use for different tasks. We have two tasks at hand: 1) Sentiment Analysis, 2) Outcome Prediction. We use different metrics for these two tasks.
Evaluation Metrics ::: Sentiment Analysis
In the study of sentiment analysis, we use the “Hamming Loss” to evaluate the performance of the different methods. The Hamming Loss, based on the Hamming distance, takes into account the prediction error and the missing error, normalized over the total number of classes and the total number of examples BIBREF29. The Hamming Loss is given below: $\text{HammingLoss} = \frac{1}{|D||L|}\sum _{i=1}^{|D|}|S_{i}\oplus Y_{i}|$,
where $|D|$ is the number of examples in the dataset and $|L|$ is the number of labels. $S_i$ and $Y_i$ denote the sets of true and predicted labels for instance $i$ respectively. $\oplus $ stands for the XOR operation BIBREF30. Intuitively, the performance is better, when the Hamming Loss is smaller. 0 would be the ideal case.
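A minimal sketch of this metric over predicted and true label sets, assuming the labels are represented as Python sets of candidate-sentiment pairs:

```python
def hamming_loss(true_sets, pred_sets, all_labels):
    """Symmetric difference (XOR) of true and predicted label sets,
    normalized by |D| * |L| as in the definition above."""
    D, L = len(true_sets), len(all_labels)
    total = sum(len(S ^ Y) for S, Y in zip(true_sets, pred_sets))
    return total / (D * L)

labels = {"Trump+pos", "Trump+neg", "Cruz+pos", "Cruz+neg"}
true = [{"Trump+neg"}, {"Cruz+pos"}]
pred = [{"Trump+neg"}, {"Cruz+neg"}]
print(hamming_loss(true, pred, labels))   # 2 mismatches / (2 * 4) = 0.25
```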
Evaluation Metrics ::: Outcome Prediction
For the case of outcome prediction, we will have a predicted set and an actual set of results. Thus, we can use common information retrieval metrics to evaluate the prediction performance. Those metrics are listed below:
Mean F-measure: The F-measure takes into account both the precision and recall of the results. In essence, it takes into account how many of the relevant results are returned and also how many of the returned results are relevant: $\text{Mean F-measure} = \frac{1}{|D|}\sum _{i=1}^{|D|}\frac{2P_{i}R_{i}}{P_{i}+R_{i}}$,
where $|D|$ is the number of queries (debates/categories for grammy winners etc.), $P_i$ and $R_i$ are the precision and recall for the $i^{th}$ query.
Mean Average Precision: As a standard metric used in information retrieval, the Mean Average Precision for a set of queries is the mean of the average precision scores for each query: $\text{MAP} = \frac{1}{|D|}\sum _{i=1}^{|D|}\frac{\sum _{k}P_{i}(k)\times rel_{i}(k)}{|RD_{i}|}$,
where $|D|$ is the number of queries (e.g., debates), $P_i(k)$ is the precision at $k$ ($P@k$) for the $i^{th}$ query, $rel_i(k)$ is an indicator function, equaling 1 if the document at position $k$ for the $i^{th}$ query is relevant, else 0, and $|RD_i|$ is the number of relevant documents for the $i^{th}$ query.
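The two retrieval metrics can be computed directly from the returned ranked lists and the sets of true winners; the sketch below is an illustrative implementation with invented example queries.

```python
def mean_f_measure(queries):
    """queries: list of (retrieved_list, relevant_set) pairs."""
    f_scores = []
    for retrieved, relevant in queries:
        hits = len(set(retrieved) & relevant)
        p = hits / len(retrieved) if retrieved else 0.0
        r = hits / len(relevant) if relevant else 0.0
        f_scores.append(2 * p * r / (p + r) if p + r else 0.0)
    return sum(f_scores) / len(f_scores)

def mean_average_precision(queries):
    ap_scores = []
    for retrieved, relevant in queries:
        hits, precisions = 0, []
        for k, item in enumerate(retrieved, start=1):
            if item in relevant:
                hits += 1
                precisions.append(hits / k)   # P@k at each relevant position
        ap_scores.append(sum(precisions) / len(relevant) if relevant else 0.0)
    return sum(ap_scores) / len(ap_scores)

queries = [(["Rubio", "Cruz", "Trump"], {"Rubio", "Christie"}),
           (["Cruz", "Trump"], {"Cruz"})]
print(mean_f_measure(queries), mean_average_precision(queries))
```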
Results ::: Sentiment Analysis
We use 3 different datasets for the problem of sentiment analysis, as already mentioned. We test the four different algorithms mentioned in Section SECREF6, with a different combination of features that are described in Section SECREF10. To evaluate our models, we use the “Hamming Loss” metric as discussed in Section SECREF6. We use this metric because our problem is in the multi-class classification domain. However, the single-label classifiers like SVM, Naive Bayes, Elman RNN cannot be evaluated against this metric directly. To do this, we split the predicted labels into a format that is consistent with that of multi-label classifiers like RaKel. The results of the experiments for each of the datasets are given in Tables TABREF34, TABREF34 and TABREF34. In the table, $f_1$, $f_2$, $f_3$, $f_4$, $f_5$ and $f_6$ denote the features 1-$gram$, 2-$gram$, $(1+2)$-$gram$, $(1+2)$-$gram + MISC$, $DOC$ and $DOC + TOPIC$ respectively. Note that lower values of hamming losses are more desirable. We find that RaKel performs the best out of all the algorithms. RaKel is more suited for the task because our task is a multi-class classification. Among all the single-label classifiers, SVM performs the best. We also observe that as we use more complex feature spaces, the performance increases. This is true for almost all of the algorithms listed.
Our best performing feature set is a combination of paragraph-embedding features and topic features from LDA. This makes sense intuitively because paragraph embedding takes into account the context in the text. Context is very important in determining the sentiment of a short text: having negative words in the text does not always mean that the text contains a negative sentiment. For example, the sentence “never say never is not a bad thing” has many negative words, but the sentence as a whole does not have a negative sentiment. This is why we need some kind of context information to accurately determine the sentiment. Thus, with these embedded features, one is able to better determine the polarity of the sentence. The other label is the entity (candidate/song etc.) under consideration. Topic features make sense here because this label can be considered the topic of the tweet in some sense, since it captures what or whom the tweet is about.
Results ::: Results for Outcome Prediction
In this section, we show the results for the outcome-prediction of the events. RaKel, as the best performing method, is trained to predict the sentiment-labels for the unlabeled data. The sentiment labels are then used to determine the outcome of the events. In the Tables (TABREF35, TABREF36, TABREF37) of outputs given, we only show as many predictions as there are winners.
Results ::: Results for Outcome Prediction ::: Presidential Debates
The results obtained for the outcome prediction task for the US presidential debates is shown in Table TABREF35. Table TABREF35 shows the winners as given in the Washington Post (3rd column) and the winners that are predicted by our system (2nd column). By comparing the set of results obtained from both the sources, we find that the set of candidates predicted match to a large extent with the winners given out by the Washington Post. The result suggests that the opinions of the social media community match with that of the journalists in most cases.
Results ::: Results for Outcome Prediction ::: Grammy Awards
A Grammy Award is given for outstanding achievement in the music industry. There are two types of awards: “General Field” awards, which are not restricted by genre, and genre-specific awards. Since there can be up to 80 categories of awards, we only focus on the main 4: 1) Album of the Year, 2) Record of the Year, 3) Song of the Year, and 4) Best New Artist. These are the main categories in the awards ceremony and the most looked forward to. That is also why we choose to predict the outcomes of these categories based on the tweets. We use the tweets before the ceremony (but after the nominations) to predict the outcomes.
Basically, we have a list of nominations for each category. We filter the tweets based on these nominations and then predict the winner as in the case of the debates. The outcomes are listed in Table TABREF36. We see that, largely, the opinions of the users on the social network agree with those of the deciding committee of the awards. The winners agree for all the categories except “Song of the Year”.
Results ::: Results for Outcome Prediction ::: Super Bowl
The Super Bowl is the annual championship game of the National Football League. We have collected the data for the year 2013. Here, the match was between the Baltimore Ravens and the San Francisco 49ers. The tweets that we have collected are during the game. From these tweets, one could trivially determine the winner. But an interesting outcome would be to predict the Most Valuable Player (MVP) during the game. To determine this, all the tweets were looked at and we determined the candidate with the highest positive sentiment by the end of the game. The result in Table TABREF37 suggests that we are able to determine the outcomes accurately.
Table TABREF43 displays some evaluation metrics for this task. These were computed based on the predicted outcomes and the actual outcomes for each of the different datasets. Since the number of participants varies from debate-to-debate or category-to-category for Grammy etc., we cannot return a fixed number of winners for everything. So, the size of our returned ranked-list is set to half of the number of participants (except for the MVP for Super Bowl; there are so many players and returning half of them when only one of them is relevant is meaningless. So, we just return the top 10 players). As we can see from the metrics, the predicted outcomes match quite well with the actual ones (or the ones given by the experts).
Conclusions
This paper presents a study that compares the opinions of users on microblogs, which is essentially the crowd wisdom, to those of the experts in the field. Specifically, we explore three datasets: US Presidential Debates 2015-16, Grammy Awards 2013, Super Bowl 2013. We determined whether the opinions of the crowd and the experts match by using the sentiments of the tweets to predict the outcomes of the debates/Grammys/Super Bowl. We observed that in most of the cases the predictions were right, indicating that crowd wisdom is indeed worth looking at and mining sentiments in microblogs is useful. In some cases where there were disagreements, however, we observed that the opinions of the experts did have some influence on the opinions of the users. We also find that the features that were most useful in our case of multi-label classification were a combination of the document-embedding and topic features. | experts in Washington Post |
52f7e42fe8f27d800d1189251dfec7446f0e1d3b | 52f7e42fe8f27d800d1189251dfec7446f0e1d3b_0 | Q: How much better is performance of proposed method than state-of-the-art methods in experiments?
Text: Introduction
In the past decade, many large-scale Knowledge Graphs (KGs), such as Freebase BIBREF0, DBpedia BIBREF1 and YAGO BIBREF2, have been built to represent complex human knowledge about the real world in a machine-readable format. The facts in KGs are usually encoded in the form of triples $(\textit {head entity}, relation, \textit {tail entity})$ (denoted $(h, r, t)$ in this study) through the Resource Description Framework, e.g., $(\textit {Donald Trump}, Born In, \textit {New York City})$. Figure FIGREF2 shows a subgraph of a knowledge graph about the family of Donald Trump. In many KGs, we can observe that some relations indicate attributes of entities, such as the $\textit {Born}$ and $\textit {Abstract}$ in Figure FIGREF2, and others indicate relations between entities (where both the head entity and the tail entity are real-world entities). Hence, the relationships in a KG can be divided into relations and attributes, and correspondingly into two types of triples, namely relation triples and attribute triples BIBREF3. A relation triple in a KG represents a relationship between entities, e.g., $(\textit {Donald Trump},Father of, \textit {Ivanka Trump})$, while an attribute triple denotes a literal attribute value of an entity, e.g., $(\textit {Donald Trump},Born, \textit {"June 14, 1946"})$.
Knowledge graphs have become an important basis for many artificial intelligence applications, such as recommendation systems BIBREF4, question answering BIBREF5 and information retrieval BIBREF6, and are attracting growing interest in both the academic and industrial communities. A common approach to apply KGs in these artificial intelligence applications is through embedding, which provides a simple method to encode both entities and relations into continuous low-dimensional embedding spaces. Hence, learning distributional representations of knowledge graphs has attracted much research attention in recent years. TransE BIBREF7 is a seminal work in learning low-dimensional vector representations for both entities and relations. The basic idea behind TransE is that the embedding $\textbf {t}$ of the tail entity should be close to the head entity's embedding $\textbf {h}$ plus the relation vector $\textbf {r}$ if $(h, r, t)$ holds, which indicates $\textbf {h}+\textbf {r}\approx \textbf {t}$. This model provides a flexible way to improve the ability to complete KGs, such as predicting missing items in the knowledge graph. Since then, several methods like TransH BIBREF8 and TransR BIBREF9, which represent the relational translation in other effective forms, have been proposed. Recent attempts have focused on either incorporating extra information beyond KG triples BIBREF10, BIBREF11, BIBREF12, BIBREF13, or designing more complicated strategies BIBREF14, BIBREF15, BIBREF16.
While these methods have achieved promising results in KG completion and link prediction, existing knowledge graph embedding methods still have room for improvement. First, TransE and most of its extensions only take direct relations between entities into consideration. We argue that the high-order structural relationships between entities also contain rich semantic information and that incorporating this information can improve model performance. For example, the fact $\textit {Donald Trump}\stackrel{Father of}{\longrightarrow }\textit {Ivanka Trump}\stackrel{Spouse}{\longrightarrow }\textit {Jared Kushner} $ indicates a relationship between the entity Donald Trump and the entity Jared Kushner. Several path-based methods have attempted to take multiple-step relation paths into consideration for learning the high-order structural information of KGs BIBREF17, BIBREF18. Note, however, that the huge number of paths poses a critical complexity challenge for these methods. In order to enable efficient path modeling, these methods have to make approximations by sampling or by applying a path selection algorithm. We argue that making such approximations has a large impact on the final performance.
Second, to the best of our knowledge, most existing knowledge graph embedding methods only leverage relation triples of KGs while ignoring a large number of attribute triples. Therefore, these methods easily suffer from the sparseness and incompleteness of the knowledge graph. Even worse, structure information alone usually cannot distinguish the different meanings of relations and entities in different triples. We believe that the rich information encoded in attribute triples can help explore richer semantic information and further improve the performance of knowledge graph embedding. For example, we can learn the date of birth and the abstract of Donald Trump from the values of Born and Abstract in Figure FIGREF2. There are a huge number of attribute triples in real KGs; for example, the statistics in BIBREF3 show that attribute triples are three times as numerous as relation triples in English DBpedia (2016-04). A few recent attempts have tried to incorporate attribute triples BIBREF11, BIBREF12. However, there are two limitations in these methods. One is that only a part of the attribute triples is used, e.g., only the entity description is used in BIBREF12. The other is that some attempts jointly model the attribute triples and relation triples in one unified optimization problem, so the losses of the two kinds of triples have to be carefully balanced during optimization. For example, BIBREF3 use hyper-parameters to weight the losses of the two kinds of triples in their models.
Considering the limitations of existing knowledge graph embedding methods, we believe it is of critical importance to develop a model that can capture both the high-order structural and the attribute information of KGs in an efficient, explicit and unified manner. Towards this end, inspired by the recent developments of graph convolutional networks (GCN) BIBREF19, which have the potential of achieving this goal but have not been explored much for knowledge graph embedding, we propose Knowledge Graph Attention Networks for Enhancing Knowledge Graph Embedding (KANE). The key idea of KANE is to aggregate all attribute triples with bias and perform embedding propagation based on relation triples when calculating the representation of a given entity. Specifically, two careful designs are equipped in KANE to address the above two challenges: 1) recursive embedding propagation based on relation triples, which updates an entity embedding. By performing such recursive embedding propagation, the high-order structural information of KGs can be successfully captured in linear time complexity; and 2) multi-head attention-based aggregation. The weight of each attribute triple can be learned by applying the neural attention mechanism BIBREF20.
In experiments, we evaluate our model on two KG tasks: knowledge graph completion and entity classification. Experimental results on three datasets show that our method significantly outperforms state-of-the-art methods.
The main contributions of this study are as follows:
1) We highlight the importance of explicitly modeling the high-order structural and attribute information of KGs to provide better knowledge graph embedding.
2) We propose a new method, KANE, which can capture both the high-order structural and the attribute information of KGs in an efficient, explicit and unified manner under the graph convolutional network framework.
3) We conduct experiments on three datasets, demonstrating the effectiveness of KANE and its interpretability in understanding the importance of high-order relations.
Related Work
In recent years, there have been many efforts in knowledge graph embedding aiming to encode entities and relations into continuous low-dimensional embedding spaces. Knowledge graph embedding provides a simple and effective method to apply KGs in various artificial intelligence applications, and hence has attracted much research attention in recent years. The general methodology is to define a score function for the triples and finally learn the representations of entities and relations by minimizing the loss function $f_r(h,t)$, which implies some type of transformation on $\textbf {h}$ and $\textbf {t}$. TransE BIBREF7 is a seminal work in knowledge graph embedding, which assumes that the embedding $\textbf {t}$ of the tail entity should be close to the head entity's embedding $\textbf {h}$ plus the relation vector $\textbf {r}$ when $(h, r, t)$ holds, as mentioned in section “Introduction". Hence, TransE defines the following loss function: $f_r(h,t)=||\textbf {h}+\textbf {r}-\textbf {t}||$.
TransE, which regards the relation as a translation between the head entity and the tail entity, is inspired by word2vec BIBREF21, where relationships between words often correspond to translations in the latent feature space. This model achieves a good trade-off between computational efficiency and accuracy in KGs with thousands of relations, but it has flaws in dealing with one-to-many, many-to-one and many-to-many relations.
In order to address this issue, TransH BIBREF8 models a relation as a relation-specific hyperplane together with a translation on it, allowing entities to have distinct representations under different relations. TransR BIBREF9 models entities and relations in separate spaces, i.e., entity space and relation spaces, and performs translation from the entity space to the relation spaces. TransD BIBREF22 captures the diversity of relations and entities simultaneously by defining dynamic mapping matrices. Recent attempts can be divided into two categories: (i) those which try to incorporate additional information to further improve the performance of knowledge graph embedding, e.g., entity types or concepts BIBREF13, relation paths BIBREF17, textual descriptions BIBREF11, BIBREF12 and logical rules BIBREF23; (ii) those which try to design more complicated strategies, e.g., deep neural network models BIBREF24.
Apart from TransE and its extensions, some efforts measure plausibility by matching the latent semantics of entities and relations. The basic idea behind these models is that plausible triples of a KG are assigned low energies. For example, the Distant Model BIBREF25 defines two different projections for the head and tail entities in a specific relation, i.e., $\textbf {M}_{r,1}$ and $\textbf {M}_{r,2}$, and assumes that the vectors of the head and tail entities can be transformed by these two projections. The loss function is $f_r(h,t)=||\textbf {M}_{r,1}\textbf {h}-\textbf {M}_{r,2}\textbf {t}||_{1}$.
Our KANE is conceptually advantageous over existing methods in that: 1) it directly factors high-order relations into the predictive model in linear time, which avoids the labor-intensive process of materializing paths and is thus more efficient and convenient to use; 2) it directly encodes all attribute triples when learning the representations of entities, which captures rich semantic information and further improves the performance of knowledge graph embedding; and 3) KANE directly factors high-order relations and attribute information into the predictive model in an efficient, explicit and unified manner, so all related parameters are tailored for optimizing the embedding objective.
Problem Formulation
In this study, we consider two kinds of triples existing in KGs: relation triples and attribute triples. Relation triples denote the relations between entities, while attribute triples describe attributes of entities. Since both relation and attribute triples convey important information about an entity, we take both of them into consideration in the task of learning entity representations. We let $I $ denote the set of IRIs (Internationalized Resource Identifiers), $B $ the set of blank nodes, and $L $ the set of literals (denoted by quoted strings). The relation triples and attribute triples can be formalized as follows:
Definition 1. Relation and Attribute Triples: A set of Relation triples $ T_{R} $ can be represented by $ T_{R} \subset E \times R \times E $, where $E \subset I \cup B $ is set of entities, $R \subset I$ is set of relations between entities. Similarly, $ T_{A} \subset E \times R \times A $ is the set of attribute triples, where $ A \subset I \cup B \cup L $ is the set of attribute values.
Definition 2. Knowledge Graph: A KG consists of a combination of relation triples in the form of $ (h, r, t)\in T_{R} $, and attribute triples in form of $ (h, r, a)\in T_{A} $. Formally, we represent a KG as $G=(E,R,A,T_{R},T_{A})$, where $E=\lbrace h,t|(h,r,t)\in T_{R} \cup (h,r,a)\in T_{A}\rbrace $ is set of entities, $R =\lbrace r|(h,r,t)\in T_{R} \cup (h,r,a)\in T_{A}\rbrace $ is set of relations, $A=\lbrace a|(h,r,a)\in T_{A}\rbrace $, respectively.
The purpose of this study is to learn an embedding-based model which can capture both the high-order structural and the attribute information of KGs and which assigns continuous representations to each element of the triples in the form $ (\textbf {h}, \textbf {r}, \textbf {t})$ and $ (\textbf {h}, \textbf {r}, \textbf {a})$, where the boldfaced $\textbf {h}\in \mathbb {R}^{k}$, $\textbf {r}\in \mathbb {R}^{k}$, $\textbf {t}\in \mathbb {R}^{k}$ and $\textbf {a}\in \mathbb {R}^{k}$ denote the embedding vectors of the head entity $h$, relation $r$, tail entity $t$ and attribute $a$, respectively.
Next, we detail our proposed model which models both high-order structural and attribute information of KGs in an efficient, explicit and unified manner under the graph convolutional networks framework.
Proposed Model
In this section, we present the proposed model in detail. We first introduce the overall framework of KANE, then discuss the input embedding of entities, relations and values in KGs, the design of embedding propagation layers based on graph attention network and the loss functions for link predication and entity classification task, respectively.
Proposed Model ::: Overall Architecture
The process of KANE is illustrated in Figure FIGREF2. We introduce the architecture of KANE from left to right. As shown in Figure FIGREF2, KANE takes all triples of the knowledge graph as input. The attribute embedding layer embeds every value in the attribute triples into a continuous vector space while preserving its semantic information. To capture the high-order structural information of KGs, we use an attention-based embedding propagation method. This method recursively propagates the embeddings of entities from an entity's neighbors and aggregates the neighbors with different weights. The final embeddings of entities, relations and values are fed into two different deep neural networks for two different tasks: link prediction and entity classification.
Proposed Model ::: Attribute Embedding Layer
The value in an attribute triple is usually a sentence or a word. To encode the representation of a value from its sentence or word, we need to encode the variable-length sequence into a fixed-length vector. In this study, we adopt two different encoders to model the attribute value.
Bag-of-Words Encoder. The representation of an attribute value can be generated by a summation of the word embeddings of the value. We denote the attribute value $a$ as a word sequence $a = w_{1},...,w_{n}$, where $w_{i}$ is the word at position $i$. The embedding of $a$ can be defined as $\textbf {a}=\sum _{i=1}^{n}\textbf {w}_{i}$,
where $\textbf {w}_{i}\in \mathbb {R}^{k}$ is the word embedding of $w_{i}$.
The Bag-of-Words Encoder is a simple and intuitive method which can capture the relative importance of words. However, this method suffers in that two strings that contain the same words in a different order will have the same representation.
LSTM Encoder. In order to overcome the limitation of the Bag-of-Words encoder, we consider using an LSTM network to encode a sequence of words in the attribute value into a single vector. The final hidden state of the LSTM network is selected as the representation of the attribute value, i.e., $\textbf {a}=f_{lstm}(w_{1},...,w_{n})$,
where $f_{lstm}$ is the LSTM network.
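A minimal PyTorch sketch of the two attribute-value encoders, assuming the values have already been tokenized and mapped to word ids; dimensions and vocabulary size are illustrative.

```python
import torch
import torch.nn as nn

class BagOfWordsEncoder(nn.Module):
    """Attribute value embedding as the sum of its word embeddings."""
    def __init__(self, vocab_size, dim):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, dim)

    def forward(self, word_ids):             # word_ids: (batch, seq_len)
        return self.word_emb(word_ids).sum(dim=1)

class LSTMEncoder(nn.Module):
    """Attribute value embedding as the final hidden state of an LSTM."""
    def __init__(self, vocab_size, dim):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)

    def forward(self, word_ids):
        _, (h_n, _) = self.lstm(self.word_emb(word_ids))
        return h_n[-1]                        # (batch, dim)

word_ids = torch.randint(0, 1000, (4, 6))    # 4 attribute values, 6 tokens each
print(BagOfWordsEncoder(1000, 128)(word_ids).shape,
      LSTMEncoder(1000, 128)(word_ids).shape)
```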
Proposed Model ::: Embedding Propagation Layer
Next we describe the details of the recursive embedding propagation method built upon the architecture of the graph convolutional network. Moreover, by exploiting the idea of graph attention networks, our method learns to assign varying levels of importance to the entities in every entity's neighborhood and can generate attentive weights for cascaded embedding propagation. In this study, the embedding propagation layer consists of two main components: attentive embedding propagation and embedding aggregation. Here, we start by describing the attentive embedding propagation.
Attentive Embedding Propagation: Considering a KG $G$, the input to our layer is a set of entity, relation and attribute value embeddings. We use $\textbf {h}\in \mathbb {R}^{k}$ to denote the embedding of entity $h$. The neighborhood of entity $h$ can be described by $\mathcal {N}_{h} = \lbrace t,a|(h,r,t)\in T_{R} \cup (h,r,a)\in T_{A}\rbrace $. The purpose of attentive embedding propagation is to encode $\mathcal {N}_{h}$ and output a vector $\vec{\textbf {h}}$ as the new embedding of entity $h$.
In order to obtain sufficient expressive power, one learnable linear transformation $\textbf {W}\in \mathbb {R}^{k^{^{\prime }} \times k}$ is adopted to transform the input embeddings into a higher-level feature space. In this study, we take a triple $(h,r,t)$ as an example, and the output vector $\vec{\textbf {h}}$ can be formulated as follows:
where $\pi (h,r,t)$ is the attention coefficient, which indicates the importance of entity $t$ to entity $h$.
In this study, the attention coefficients also control how much information is propagated from the neighborhood through the relation. To make the attention coefficients easily comparable across different entities, the attention coefficient $\pi (h,r,t)$ is computed using a softmax function over all the triples connected with $h$. The softmax function can be formulated as follows:
Hereafter, we implement the attention coefficients $\pi (h,r,t)$ through a single-layer feedforward neural network, which is formulated as follows:
where LeakyReLU is selected as the activation function.
As shown in Equation DISPLAY_FORM13, the attention coefficient depends on the distance between the head entity's embedding plus the relation vector and the tail entity's embedding, which follows the idea behind TransE that the embedding $\textbf {t}$ of the tail entity should be close to the head entity's embedding $\textbf {h}$ plus the relation vector $\textbf {r}$ if $(h, r, t)$ holds.
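The sketch below is one plausible PyTorch reading of a single attentive propagation step for one entity: the negated TransE-style distance is used as the unnormalized attention score, passed through LeakyReLU and a softmax over the neighborhood, and the transformed neighbor embeddings are aggregated with these weights. The exact attention network in this work (a learned single-layer feedforward network) may differ in detail, so this is an illustrative simplification rather than the definitive implementation.

```python
import torch
import torch.nn.functional as F

def propagate(h_emb, neighbor_emb, rel_emb, W):
    """One attentive propagation step for a single head entity.

    h_emb:        (k,)    embedding of the head entity
    neighbor_emb: (n, k)  embeddings of neighboring entities / attribute values
    rel_emb:      (n, k)  embeddings of the connecting relations
    W:            (k', k) shared linear transformation
    """
    # Smaller ||h + r - t|| should mean higher attention, so negate the distance.
    dist = torch.norm(h_emb + rel_emb - neighbor_emb, p=1, dim=1)
    scores = F.leaky_relu(-dist)
    pi = torch.softmax(scores, dim=0)                    # attention coefficients

    transformed = neighbor_emb @ W.t()                   # (n, k')
    return (pi.unsqueeze(1) * transformed).sum(dim=0)    # (k',)

k, k2, n = 128, 128, 5
out = propagate(torch.randn(k), torch.randn(n, k), torch.randn(n, k),
                torch.randn(k2, k))
print(out.shape)    # torch.Size([128])
```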
Embedding Aggregation. To stabilize the learning process of attention, we perform multi-head attention on the final layer. Specifically, we use $m$ attention mechanisms to execute the transformation of Equation DISPLAY_FORM11. An aggregator is needed to combine all embeddings of the multi-head graph attention layer. In this study, we adopt two types of aggregators:
Concatenation Aggregator concatenates all embeddings of multi-head graph attention, followed by a nonlinear transformation:
where $\mathop {\Big |\Big |}$ represents concatenation, $ \pi (h,r,t)^{i}$ are the normalized attention coefficients computed by the $i$-th attentive embedding propagation, and $\textbf {W}^{i}$ denotes the linear transformation of the input embedding.
Averaging Aggregator sums all embeddings of multi-head graph attention and the output embedding in the final is calculated applying averaging:
In order to encode the high-order connectivity information in KGs, we use multiple embedding propagation layers to gather the deep information propagated from the neighbors. More formally, the embedding of entity $h$ in the $l$-th layer can be defined as follows:
After performing $L$ embedding propagation layers, we obtain the final embeddings of entities, relations and attribute values, which include both the high-order structural and the attribute information of KGs. Next, we discuss the loss functions of KANE for the two different tasks and introduce the learning and optimization details.
Proposed Model ::: Output Layer and Training Details
Here, we introduce the learning and optimization details for our method. Two different loss functions are carefully designed for the two different KG tasks, namely knowledge graph completion and entity classification. The details of these two loss functions are discussed next.
Knowledge graph completion. This task is a classical task in the knowledge graph representation learning community. Specifically, two subtasks are included in knowledge graph completion: entity prediction and link prediction. Entity prediction aims to infer the missing head/tail entities in the test set when one of them is missing, while link prediction focuses on completing a triple when the relation is missing. In this study, we borrow the idea of the translational scoring function from TransE, in which the embedding $\textbf {t}$ of the tail entity should be close to the head entity's embedding $\textbf {h}$ plus the relation vector $\textbf {r}$ if $(h, r, t)$ holds, which indicates $d(h+r,t)= ||\textbf {h}+\textbf {r}- \textbf {t}||$. Specifically, we train our model using a hinge loss function, given formally as $\mathcal {L}=\sum _{(h,r,t)\in T}\sum _{(h^{\prime },r^{\prime },t^{\prime })\in T^{\prime }}\left[\gamma +d(\textbf {h}+\textbf {r},\textbf {t})-d(\textbf {h}^{\prime }+\textbf {r}^{\prime },\textbf {t}^{\prime })\right]_{+}$,
where $\gamma >0$ is a margin hyper-parameter, $[x ]_{+}$ denotes the positive part of $x$, $T=T_{R} \cup T_{A}$ is the set of valid triples, and $T^{\prime }$ is the set of corrupted triples, which can be formulated as $T^{\prime }=\lbrace (h^{\prime },r,t)\mid h^{\prime }\in E\rbrace \cup \lbrace (h,r,t^{\prime })\mid t^{\prime }\in E\rbrace $, i.e., triples whose head or tail has been replaced.
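A hedged PyTorch sketch of this training objective, using raw embedding tables for brevity (in KANE the embeddings produced by the propagation layers would be used) and corrupting each positive triple by replacing its head or tail with a random entity:

```python
import random
import torch

def corrupt(triples, num_entities):
    """Replace the head or the tail of each triple with a random entity."""
    corrupted = []
    for h, r, t in triples:
        if random.random() < 0.5:
            corrupted.append((random.randrange(num_entities), r, t))
        else:
            corrupted.append((h, r, random.randrange(num_entities)))
    return corrupted

def hinge_loss(ent_emb, rel_emb, pos, neg, gamma=1.0):
    """sum of [gamma + d(pos) - d(neg)]_+ with d(h,r,t) = ||h + r - t||_1."""
    def d(triples):
        h = ent_emb[[x[0] for x in triples]]
        r = rel_emb[[x[1] for x in triples]]
        t = ent_emb[[x[2] for x in triples]]
        return torch.norm(h + r - t, p=1, dim=1)
    return torch.clamp(gamma + d(pos) - d(neg), min=0).sum()

num_entities, num_relations, k = 100, 10, 64
ent_emb = torch.randn(num_entities, k, requires_grad=True)
rel_emb = torch.randn(num_relations, k, requires_grad=True)
pos = [(0, 1, 2), (3, 4, 5)]
loss = hinge_loss(ent_emb, rel_emb, pos, corrupt(pos, num_entities))
loss.backward()
print(float(loss))
```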
Entity Classification. For the task of entity classification, we simply use a fully connected layer and a binary cross-entropy (BCE) loss over a sigmoid activation on the output of the last layer. We minimize the binary cross-entropy on all labeled entities, given formally as:
where $E_{D}$ is the set of entities that have labels, $C$ is the dimension of the output features, which is equal to the number of classes, $y_{ej}$ is the label indicator of entity $e$ for the $j$-th class, and $\sigma (x)$ is the sigmoid function $\sigma (x) = \frac{1}{1+e^{-x}}$.
We optimize these two loss functions using mini-batch stochastic gradient descent (SGD) over the possible $\textbf {h}$, $\textbf {r}$, $\textbf {t}$, with the chain rule applied to update all parameters. At each step, we update the parameter $\textbf {h}^{\tau +1}\leftarrow \textbf {h}^{\tau }-\lambda \nabla _{\textbf {h}}\mathcal {L}$, where $\tau $ labels the iteration step and $\lambda $ is the learning rate.
Experiments ::: Date sets
In this study, we evaluate our model on three real KGs, including two typical large-scale knowledge graphs, Freebase BIBREF0 and DBpedia BIBREF1, and a self-constructed game knowledge graph. First, we adopt a dataset extracted from Freebase, i.e., FB24K, which was used by BIBREF26. Then, we collect extra entities and relations from DBpedia that have at least 100 mentions BIBREF7 and that can be linked to the entities in FB24K by the sameAs triples. Finally, we build a dataset named DBP24K. In addition, we build a game dataset from our game knowledge graph, named Game30K. The statistics of the datasets are listed in Table TABREF24.
Experiments ::: Experiments Setting
In evaluation, we compare our method with three types of models:
1) Typical Methods. Three typical knowledge graph embedding methods, TransE, TransR and TransH, are selected as baselines. For TransE, the dissimilarity measure is implemented with the L1-norm, and both relations and entities are replaced during negative sampling. For TransR, we directly use the source code released in BIBREF9. For better performance, the replacement of relations in negative sampling is utilized according to the authors' suggestion.
2) Path-based Methods. We compare our method with two typical path-based models, PTransE and ALL-PATHS BIBREF18. PTransE is the first method to model relation paths in the KG embedding task, and ALL-PATHS improves PTransE through a dynamic programming algorithm which can incorporate all relation paths of bounded length.
3) Attribute-incorporated Methods. Several state-of-the-art attribute-incorporated methods, including R-GCN BIBREF24 and KR-EAR BIBREF26, are compared with our method on the three real datasets.
In addition, four variants of KANE, each of which defines its own way of computing the attribute value embedding and performing embedding aggregation, are used as baselines in the evaluation. We name these four variants KANE (BOW+Concatenation), KANE (BOW+Average), KANE (LSTM+Concatenation) and KANE (LSTM+Average). Our method is learned with mini-batch SGD. As for hyper-parameters, we select the batch size among {16, 32, 64, 128} and the learning rate $\lambda $ for SGD among {0.1, 0.01, 0.001}. For a fair comparison, we also set the vector dimensions of all entities and relations to the same $k \in ${128, 258, 512, 1024}, use the same dissimilarity measure ($l_{1}$ or $l_{2}$ distance) in the loss function, and the same number of negative examples $n$ among {1, 10, 20, 40}. The training time on all datasets is limited to at most 400 epochs. The best models are selected by a grid search and early stopping on the validation sets.
Experiments ::: Entity Classification ::: Evaluation Protocol.
In entity classification, the aim is to predict the type of an entity. For all baseline models, we first obtain the entity embeddings on the different datasets using the default parameter settings from their original papers or implementations. Then, logistic regression is used as the classifier, which regards the entity's embedding as its feature vector. In the evaluation, we randomly select 10% of the training set as the validation set and use accuracy as the evaluation metric.
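A small scikit-learn sketch of this evaluation protocol with randomly generated stand-ins for the learned embeddings and type labels:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

embeddings = np.random.randn(500, 128)          # stand-in for entity embeddings
labels = np.random.randint(0, 5, size=500)      # stand-in for entity types

X_train, X_val, y_train, y_val = train_test_split(
    embeddings, labels, test_size=0.1, random_state=42)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("validation accuracy:", accuracy_score(y_val, clf.predict(X_val)))
```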
Experiments ::: Entity Classification ::: Test Performance.
Experimental results of entity classification on the test sets of all the datasets are shown in Table TABREF25. The results clearly demonstrate that our proposed method significantly outperforms state-of-the-art results in accuracy on the three datasets. For a more in-depth performance analysis, we note: (1) Among all baselines, path-based methods and attribute-incorporated methods outperform the three typical methods. This indicates that incorporating extra information can improve knowledge graph embedding performance; (2) The four variants of KANE always outperform the baseline methods. The main reasons why KANE works well are twofold: 1) KANE can capture high-order structural information of KGs in an efficient, explicit manner and pass this information to neighboring entities; 2) KANE leverages the rich information encoded in attribute triples, and this rich semantic information can further improve the performance of knowledge graph embedding; (3) The variant of KANE that uses the LSTM encoder and the concatenation aggregator outperforms the other variants. The main reason is that the LSTM encoder can distinguish word order and the concatenation aggregator combines all embeddings of the multi-head attention in a higher-level feature space, which provides sufficient expressive power.
Experiments ::: Entity Classification ::: Efficiency Evaluation.
Figure FIGREF30 shows the test accuracy with increasing epochs on DBP24K and Game30K. We can see that the test accuracy first increases rapidly in the first ten epochs, but reaches a stable stage after epoch 40. Figure FIGREF31 shows the test accuracy with different embedding sizes and training data proportions. We note that too small an embedding size or training data proportion cannot capture sufficient global information. In order to further analyze the embeddings learned by our method, we use the t-SNE tool BIBREF27 to visualize the learned embeddings. Figure FIGREF32 shows the visualization of the 256-dimensional entity embeddings on Game30K learned by KANE, R-GCN, PTransE and TransE. We observe that our method can learn more discriminative entity embeddings than the other methods.
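The t-SNE visualization can be reproduced along the following lines; the embeddings and class labels below are random stand-ins for the learned ones.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

embeddings = np.random.randn(300, 256)          # stand-in for 256-d entity embeddings
types = np.random.randint(0, 4, size=300)       # entity classes used for coloring

coords = TSNE(n_components=2, random_state=0).fit_transform(embeddings)
plt.scatter(coords[:, 0], coords[:, 1], c=types, s=8, cmap="tab10")
plt.title("t-SNE of entity embeddings (Game30K)")
plt.savefig("tsne_game30k.png", dpi=150)
```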
Experiments ::: Knowledge Graph Completion
The purpose of knowledge graph completion is to complete a triple $(h, r, t)$ when one of $h, r, t$ is missing, a task which is used in much of the literature BIBREF7. Two measures are considered as our evaluation metrics: (1) the mean rank of correct entities or relations (Mean Rank); (2) the proportion of correct entities or relations ranked in the top 1 (Hits@1, for relations) or top 10 (Hits@10, for entities). Following the setting in BIBREF7, we also adopt the two evaluation settings named "raw" and "filter" in order to avoid misleading behavior.
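A sketch of how Mean Rank and Hits@k can be computed for tail prediction under the raw and filtered settings, assuming a translational score as above; in practice the filter set would contain all known triples from the training, validation and test sets.

```python
import torch

def rank_tails(ent_emb, rel_emb, test_triples, known_triples, hits_k=10):
    """Return (raw mean rank, filtered mean rank, filtered Hits@k)."""
    known = set(known_triples)
    raw_ranks, filt_ranks = [], []
    for h, r, t in test_triples:
        scores = torch.norm(ent_emb[h] + rel_emb[r] - ent_emb, p=1, dim=1)
        raw_ranks.append((torch.argsort(scores) == t).nonzero().item() + 1)
        # Filtered setting: push other known correct tails below the target.
        mask = torch.tensor([(h, r, e) in known and e != t
                             for e in range(len(ent_emb))])
        scores_f = scores.clone()
        scores_f[mask] = float("inf")
        filt_ranks.append((torch.argsort(scores_f) == t).nonzero().item() + 1)
    hits = sum(rank <= hits_k for rank in filt_ranks) / len(filt_ranks)
    return (sum(raw_ranks) / len(raw_ranks),
            sum(filt_ranks) / len(filt_ranks), hits)

ent_emb, rel_emb = torch.randn(50, 32), torch.randn(5, 32)
test = [(0, 1, 2), (3, 2, 4)]
print(rank_tails(ent_emb, rel_emb, test, known_triples=test))
```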
The results of entity and relation prediction on FB24K are shown in Table TABREF33. These results indicate that KANE still outperforms the other baselines significantly and consistently. This also verifies the necessity of modeling the high-order structural and attribute information of KGs in knowledge graph embedding models.
Conclusion and Future Work
Many recent works have demonstrated the benefits of knowledge graph embedding in applications such as knowledge graph completion and relation extraction. However, we argue that knowledge graph embedding methods still have room for improvement. First, TransE and most of its extensions only take direct relations between entities into consideration. Second, most existing knowledge graph embedding methods just leverage relation triples of KGs while ignoring a large number of attribute triples. In order to overcome these limitations, inspired by the recent developments of graph convolutional networks, we propose a new knowledge graph embedding method, named KANE. The key idea of KANE is to aggregate all attribute triples with bias and perform embedding propagation based on relation triples when calculating the representation of a given entity. Empirical results on three datasets show that KANE significantly outperforms seven state-of-the-art methods. | Accuracy of best proposed method KANE (LSTM+Concatenation) are 0.8011, 0.8592, 0.8605 compared to best state-of-the art method R-GCN + LR 0.7721, 0.8193, 0.8229 on three datasets respectively. |
00e6324ecd454f5d4b2a4b27fcf4104855ff8ee2 | 00e6324ecd454f5d4b2a4b27fcf4104855ff8ee2_0 | Q: What further analysis is done?
Text: Introduction
In the past decade, many large-scale Knowledge Graphs (KGs), such as Freebase BIBREF0, DBpedia BIBREF1 and YAGO BIBREF2, have been built to represent complex human knowledge about the real world in a machine-readable format. The facts in KGs are usually encoded in the form of triples $(\textit {head entity}, relation, \textit {tail entity})$ (denoted $(h, r, t)$ in this study) through the Resource Description Framework, e.g., $(\textit {Donald Trump}, Born In, \textit {New York City})$. Figure FIGREF2 shows a subgraph of a knowledge graph about the family of Donald Trump. In many KGs, we can observe that some relations indicate attributes of entities, such as the $\textit {Born}$ and $\textit {Abstract}$ in Figure FIGREF2, and others indicate relations between entities (where both the head entity and the tail entity are real-world entities). Hence, the relationships in a KG can be divided into relations and attributes, and correspondingly into two types of triples, namely relation triples and attribute triples BIBREF3. A relation triple in a KG represents a relationship between entities, e.g., $(\textit {Donald Trump},Father of, \textit {Ivanka Trump})$, while an attribute triple denotes a literal attribute value of an entity, e.g., $(\textit {Donald Trump},Born, \textit {"June 14, 1946"})$.
Knowledge graphs have become an important basis for many artificial intelligence applications, such as recommendation systems BIBREF4, question answering BIBREF5 and information retrieval BIBREF6, and are attracting growing interest in both the academic and industrial communities. A common approach to apply KGs in these artificial intelligence applications is through embedding, which provides a simple method to encode both entities and relations into continuous low-dimensional embedding spaces. Hence, learning distributional representations of knowledge graphs has attracted much research attention in recent years. TransE BIBREF7 is a seminal work in learning low-dimensional vector representations for both entities and relations. The basic idea behind TransE is that the embedding $\textbf {t}$ of the tail entity should be close to the head entity's embedding $\textbf {h}$ plus the relation vector $\textbf {r}$ if $(h, r, t)$ holds, which indicates $\textbf {h}+\textbf {r}\approx \textbf {t}$. This model provides a flexible way to improve the ability to complete KGs, such as predicting missing items in the knowledge graph. Since then, several methods like TransH BIBREF8 and TransR BIBREF9, which represent the relational translation in other effective forms, have been proposed. Recent attempts have focused on either incorporating extra information beyond KG triples BIBREF10, BIBREF11, BIBREF12, BIBREF13, or designing more complicated strategies BIBREF14, BIBREF15, BIBREF16.
While these methods have achieved promising results in KG completion and link prediction, existing knowledge graph embedding methods still have room for improvement. First, TransE and most of its extensions only take direct relations between entities into consideration. We argue that the high-order structural relationships between entities also contain rich semantic information and that incorporating this information can improve model performance. For example, the fact $\textit {Donald Trump}\stackrel{Father of}{\longrightarrow }\textit {Ivanka Trump}\stackrel{Spouse}{\longrightarrow }\textit {Jared Kushner} $ indicates a relationship between the entity Donald Trump and the entity Jared Kushner. Several path-based methods have attempted to take multiple-step relation paths into consideration for learning the high-order structural information of KGs BIBREF17, BIBREF18. Note, however, that the huge number of paths poses a critical complexity challenge for these methods. In order to enable efficient path modeling, these methods have to make approximations by sampling or by applying a path selection algorithm. We argue that making such approximations has a large impact on the final performance.
Second, to the best of our knowledge, most existing knowledge graph embedding methods only leverage relation triples of KGs while ignoring a large number of attribute triples. Therefore, these methods easily suffer from the sparseness and incompleteness of the knowledge graph. Even worse, structure information alone usually cannot distinguish the different meanings of relations and entities in different triples. We believe that the rich information encoded in attribute triples can help explore richer semantic information and further improve the performance of knowledge graph embedding. For example, we can learn the date of birth and the abstract of Donald Trump from the values of Born and Abstract in Figure FIGREF2. There are a huge number of attribute triples in real KGs; for example, the statistics in BIBREF3 show that attribute triples are three times as numerous as relation triples in English DBpedia (2016-04). A few recent attempts have tried to incorporate attribute triples BIBREF11, BIBREF12. However, there are two limitations in these methods. One is that only a part of the attribute triples is used, e.g., only the entity description is used in BIBREF12. The other is that some attempts jointly model the attribute triples and relation triples in one unified optimization problem, so the losses of the two kinds of triples have to be carefully balanced during optimization. For example, BIBREF3 use hyper-parameters to weight the losses of the two kinds of triples in their models.
Considering the limitations of existing knowledge graph embedding methods, we believe it is of critical importance to develop a model that can capture both the high-order structural and the attribute information of KGs in an efficient, explicit and unified manner. Towards this end, inspired by the recent developments of graph convolutional networks (GCN) BIBREF19, which have the potential of achieving this goal but have not been explored much for knowledge graph embedding, we propose Knowledge Graph Attention Networks for Enhancing Knowledge Graph Embedding (KANE). The key idea of KANE is to aggregate all attribute triples with bias and perform embedding propagation based on relation triples when calculating the representation of a given entity. Specifically, two careful designs are equipped in KANE to address the above two challenges: 1) recursive embedding propagation based on relation triples, which updates an entity embedding. By performing such recursive embedding propagation, the high-order structural information of KGs can be successfully captured in linear time complexity; and 2) multi-head attention-based aggregation. The weight of each attribute triple can be learned by applying the neural attention mechanism BIBREF20.
In experiments, we evaluate our model on two KG tasks: knowledge graph completion and entity classification. Experimental results on three datasets show that our method significantly outperforms state-of-the-art methods.
The main contributions of this study are as follows:
1) We highlight the importance of explicitly modeling the high-order structural and attribute information of KGs to provide better knowledge graph embedding.
2) We propose a new method, KANE, which can capture both the high-order structural and the attribute information of KGs in an efficient, explicit and unified manner under the graph convolutional networks framework.
3) We conduct experiments on three datasets, demonstrating the effectiveness of KANE and its interpretability in understanding the importance of high-order relations.
Related Work
In recent years, there have been many efforts in knowledge graph embedding aiming to encode the entities and relations of KGs into a continuous low-dimensional embedding space. Knowledge graph embedding provides a simple and effective way to apply KGs in various artificial intelligence applications, and has therefore attracted much research attention. The general methodology is to define a score function $f_r(h,t)$ for triples, which implies some type of transformation on $\textbf {h}$ and $\textbf {t}$, and to learn the representations of entities and relations by minimizing the corresponding loss. TransE BIBREF7 is a seminal work in knowledge graph embedding, which assumes that the embedding $\textbf {t}$ of the tail entity should be close to the head entity's embedding $\textbf {h}$ plus the relation vector $\textbf {r}$ when $(h, r, t)$ holds, as mentioned in the section “Introduction". Hence, TransE defines the following loss function:
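A sketch of this margin-based objective, written in our own notation and assuming the standard TransE formulation with a corrupted-triple set $S^{\prime }$ (it is not copied from the original equations): $\mathcal {L}=\sum _{(h,r,t)\in S}\sum _{(h^{\prime },r,t^{\prime })\in S^{\prime }}\left[\gamma + d(\textbf {h}+\textbf {r},\textbf {t}) - d(\textbf {h}^{\prime }+\textbf {r},\textbf {t}^{\prime })\right]_{+}$, where $d(\textbf {h}+\textbf {r},\textbf {t})=||\textbf {h}+\textbf {r}-\textbf {t}||$, $S$ is the set of observed triples, $S^{\prime }$ is the set of triples corrupted by replacing the head or tail entity, $\gamma $ is a margin hyper-parameter, and $[x]_{+}$ denotes the positive part of $x$.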
TransE, which regards a relation as a translation between the head entity and the tail entity, is inspired by word2vec BIBREF21, where relationships between words often correspond to translations in the latent feature space. This model achieves a good trade-off between computational efficiency and accuracy in KGs with thousands of relations, but it has flaws in dealing with one-to-many, many-to-one and many-to-many relations.
In order to address this issue, TransH BIBREF8 models a relation as a relation-specific hyperplane together with a translation on it, allowing entities to have distinct representations under different relations. TransR BIBREF9 models entities and relations in separate spaces, i.e., entity space and relation spaces, and performs translation from the entity space to the relation spaces. TransD BIBREF22 captures the diversity of relations and entities simultaneously by defining dynamic mapping matrices. Recent attempts can be divided into two categories: (i) those which try to incorporate additional information to further improve the performance of knowledge graph embedding, e.g., entity types or concepts BIBREF13, relation paths BIBREF17, textual descriptions BIBREF11, BIBREF12 and logical rules BIBREF23; (ii) those which try to design more complicated strategies, e.g., deep neural network models BIBREF24.
Apart from TransE and its extensions, some efforts measure plausibility by matching the latent semantics of entities and relations. The basic idea behind these models is that plausible triples of a KG are assigned low energies. For example, the Distant Model BIBREF25 defines two different projections for the head and tail entity of a specific relation, i.e., $\textbf {M}_{r,1}$ and $\textbf {M}_{r,2}$, so that the vectors of the head and tail entity can be transformed by these two projections. The loss function is $f_r(h,t)=||\textbf {M}_{r,1}\textbf {h}-\textbf {M}_{r,2}\textbf {t}||_{1}$.
Our KANE is conceptually advantageous over existing methods in that: 1) it directly factors high-order relations into the predictive model in linear time, which avoids the labor-intensive process of materializing paths and is thus more efficient and convenient to use; 2) it directly encodes all attribute triples when learning entity representations, which captures rich semantic information and further improves the performance of knowledge graph embedding; and 3) it handles high-order relations and attribute information in an efficient, explicit and unified manner, so all related parameters are tailored for optimizing the embedding objective.
Problem Formulation
In this study, we consider two kinds of triples existing in KGs: relation triples and attribute triples. Relation triples denote relations between entities, while attribute triples describe attributes of entities. Since both kinds of triples carry important information about an entity, we take both of them into consideration in the task of learning entity representations. We let $I $ denote the set of IRIs (Internationalized Resource Identifiers), $B $ the set of blank nodes, and $L $ the set of literals (denoted by quoted strings). The relation triples and attribute triples can be formalized as follows:
Definition 1. Relation and Attribute Triples: A set of Relation triples $ T_{R} $ can be represented by $ T_{R} \subset E \times R \times E $, where $E \subset I \cup B $ is set of entities, $R \subset I$ is set of relations between entities. Similarly, $ T_{A} \subset E \times R \times A $ is the set of attribute triples, where $ A \subset I \cup B \cup L $ is the set of attribute values.
Definition 2. Knowledge Graph: A KG consists of a combination of relation triples in the form of $ (h, r, t)\in T_{R} $, and attribute triples in form of $ (h, r, a)\in T_{A} $. Formally, we represent a KG as $G=(E,R,A,T_{R},T_{A})$, where $E=\lbrace h,t|(h,r,t)\in T_{R} \cup (h,r,a)\in T_{A}\rbrace $ is set of entities, $R =\lbrace r|(h,r,t)\in T_{R} \cup (h,r,a)\in T_{A}\rbrace $ is set of relations, $A=\lbrace a|(h,r,a)\in T_{A}\rbrace $, respectively.
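To make this formalization concrete, the following is a minimal Python sketch (not part of the original work) of how the relation triples $T_{R}$, the attribute triples $T_{A}$ and the derived sets $E$, $R$ and $A$ could be stored; the example triples are taken from Figure FIGREF2.

    from collections import defaultdict

    class KnowledgeGraph:
        """Stores relation triples T_R and attribute triples T_A of a KG."""

        def __init__(self, relation_triples, attribute_triples):
            self.relation_triples = list(relation_triples)    # T_R: (h, r, t)
            self.attribute_triples = list(attribute_triples)  # T_A: (h, r, a)
            self.entities, self.relations, self.attributes = set(), set(), set()
            for h, r, t in self.relation_triples:
                self.entities.update((h, t))
                self.relations.add(r)
            for h, r, a in self.attribute_triples:
                self.entities.add(h)
                self.relations.add(r)
                self.attributes.add(a)
            # neighborhood N_h: tails and attribute values connected to h
            self.neighbors = defaultdict(list)
            for h, r, t in self.relation_triples:
                self.neighbors[h].append((r, t))
            for h, r, a in self.attribute_triples:
                self.neighbors[h].append((r, a))

    kg = KnowledgeGraph(
        relation_triples=[("Donald Trump", "Father of", "Ivanka Trump")],
        attribute_triples=[("Donald Trump", "Born", "June 14, 1946")],
    )
    print(kg.neighbors["Donald Trump"])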
The purpose of this study is to learn an embedding-based model which can capture both the high-order structural and the attribute information of KGs and which assigns a continuous representation to each element of the triples in the forms $ (\textbf {h}, \textbf {r}, \textbf {t})$ and $ (\textbf {h}, \textbf {r}, \textbf {a})$, where the boldfaced $\textbf {h}\in \mathbb {R}^{k}$, $\textbf {r}\in \mathbb {R}^{k}$, $\textbf {t}\in \mathbb {R}^{k}$ and $\textbf {a}\in \mathbb {R}^{k}$ denote the embedding vectors of head entity $h$, relation $r$, tail entity $t$ and attribute $a$ respectively.
Next, we detail our proposed model which models both high-order structural and attribute information of KGs in an efficient, explicit and unified manner under the graph convolutional networks framework.
Proposed Model
In this section, we present the proposed model in detail. We first introduce the overall framework of KANE, then discuss the input embeddings of entities, relations and values in KGs, the design of the embedding propagation layers based on the graph attention network, and the loss functions for the link prediction and entity classification tasks, respectively.
Proposed Model ::: Overall Architecture
The process of KANE is illustrated in Figure FIGREF2. We introduce the architecture of KANE from left to right. As shown in Figure FIGREF2, the model takes the whole set of triples of the knowledge graph as input. The attribute embedding layer embeds every value in the attribute triples into a continuous vector space while preserving its semantic information. To capture the high-order structural information of KGs, we use an attention-based embedding propagation method. This method recursively propagates the embeddings of entities from an entity's neighbors and aggregates the neighbors with different weights. The final embeddings of entities, relations and values are fed into two different deep neural networks for two different tasks, link prediction and entity classification.
Proposed Model ::: Attribute Embedding Layer
The value in an attribute triple is usually a sentence or a word. To obtain the representation of a value from its sentence or word, we need to encode the variable-length sentence into a fixed-length vector. In this study, we adopt two different encoders to model the attribute value.
Bag-of-Words Encoder. The representation of an attribute value can be generated by a summation of all word embeddings of the value. We denote the attribute value $a$ as a word sequence $a = w_{1},...,w_{n}$, where $w_{i}$ is the word at position $i$. The embedding of $\textbf {a}$ can be defined as follows.
where $\textbf {w}_{i}\in \mathbb {R}^{k}$ is the word embedding of $w_{i}$.
The Bag-of-Words Encoder is a simple and intuitive method which can capture the relative importance of words. However, it suffers from the fact that two strings that contain the same words in different orders will have the same representation.
LSTM Encoder. In order to overcome the limitation of the Bag-of-Words encoder, we consider using an LSTM network to encode the sequence of words in an attribute value into a single vector. The final hidden state of the LSTM network is selected as the representation of the attribute value.
where $f_{lstm}$ is the LSTM network.
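A minimal PyTorch sketch of the two encoders described above; the vocabulary size, the dimension $k$ and the class name are illustrative assumptions rather than the authors' implementation.

    import torch
    import torch.nn as nn

    class AttributeValueEncoder(nn.Module):
        """Encodes the word sequence of an attribute value into a fixed-length vector."""

        def __init__(self, vocab_size, k, use_lstm=False):
            super().__init__()
            self.word_emb = nn.Embedding(vocab_size, k)
            self.use_lstm = use_lstm
            if use_lstm:
                self.lstm = nn.LSTM(k, k, batch_first=True)

        def forward(self, word_ids):                 # word_ids: (batch, seq_len)
            w = self.word_emb(word_ids)              # (batch, seq_len, k)
            if self.use_lstm:
                _, (h_n, _) = self.lstm(w)           # final hidden state of the LSTM
                return h_n.squeeze(0)                # (batch, k)
            return w.sum(dim=1)                      # bag-of-words: sum of word embeddings

    encoder = AttributeValueEncoder(vocab_size=10000, k=128, use_lstm=True)
    a = encoder(torch.randint(0, 10000, (4, 6)))     # embeddings of 4 attribute values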
Proposed Model ::: Embedding Propagation Layer
Next we describe the details of the recursive embedding propagation method, which builds upon the architecture of graph convolutional networks. Moreover, by exploiting the idea of graph attention networks, our method learns to assign varying levels of importance to the entities in every entity's neighborhood and can generate attentive weights for the cascaded embedding propagation. In this study, the embedding propagation layer consists of two main components: attentive embedding propagation and embedding aggregation. We start by describing the attentive embedding propagation.
Attentive Embedding Propagation: Considering a KG $G$, the input to our layer is the set of entity, relation and attribute value embeddings. We use $\textbf {h}\in \mathbb {R}^{k}$ to denote the embedding of entity $h$. The neighborhood of entity $h$ can be described by $\mathcal {N}_{h} = \lbrace t,a|(h,r,t)\in T_{R} \cup (h,r,a)\in T_{A}\rbrace $. The purpose of attentive embedding propagation is to encode $\mathcal {N}_{h}$ and output a vector $\vec{\textbf {h}}$ as the new embedding of entity $h$.
In order to obtain sufficient expressive power, one learnable linear transformation $\textbf {W}\in \mathbb {R}^{k^{^{\prime }} \times k}$ is adopted to transform the input embeddings into a higher-level feature space. In this study, we take a triple $(h,r,t)$ as an example, and the output vector $\vec{\textbf {h}}$ can be formulated as follows:
where $\pi (h,r,t)$ is the attention coefficient, which indicates the importance of entity $t$ to entity $h$.
In this study, the attention coefficients also control how much information is propagated from the neighborhood through each relation. To make attention coefficients easily comparable between different entities, the attention coefficient $\pi (h,r,t)$ is computed using a softmax function over all the triples connected with $h$. The softmax function can be formulated as follows:
Hereafter, we implement the attention coefficients $\pi (h,r,t)$ through a single-layer feedforward neural network, which is formulated as follows:
where LeakyReLU is selected as the activation function.
As shown in Equation DISPLAY_FORM13, the attention coefficient score depends on the distance between the head entity's embedding $\textbf {h}$ plus the relation vector $\textbf {r}$ and the tail entity's embedding $\textbf {t}$, which follows the idea behind TransE that $\textbf {h}+\textbf {r}$ should be close to $\textbf {t}$ if $(h, r, t)$ holds.
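Since the exact parameterization is given by Equations DISPLAY_FORM11 and DISPLAY_FORM13, the following is only a plausible PyTorch sketch of attentive embedding propagation under the stated ideas: a TransE-style translation $\textbf {h}+\textbf {r}-\textbf {t}$, scored by a single-layer feed-forward network with LeakyReLU, softmax-normalized over the neighborhood $\mathcal {N}_{h}$, and used to weight the transformed neighbor embeddings. Function and variable names are ours.

    import torch
    import torch.nn.functional as F

    def attention_coefficients(h, rel_emb, nbr_emb, W, a_vec):
        """Softmax-normalized attention over the neighborhood N_h of one entity.

        h:       (k,)    embedding of entity h
        rel_emb: (n, k)  relation embeddings of the n triples in N_h
        nbr_emb: (n, k)  embeddings of the corresponding tails / attribute values
        W:       (k2, k) shared linear transformation
        a_vec:   (k2,)   weights of the single-layer feed-forward scorer
        """
        trans = (h.unsqueeze(0) + rel_emb - nbr_emb) @ W.T   # TransE-style h + r - t, transformed
        scores = F.leaky_relu(trans @ a_vec)                 # unnormalized coefficients
        return F.softmax(scores, dim=0)                      # pi(h, r, t) over N_h

    def propagate(h, rel_emb, nbr_emb, W, a_vec):
        """New embedding of h: attention-weighted sum of transformed neighbor embeddings."""
        pi = attention_coefficients(h, rel_emb, nbr_emb, W, a_vec)
        return (pi.unsqueeze(1) * (nbr_emb @ W.T)).sum(dim=0)

    k, k2, n = 128, 64, 5
    h_new = propagate(torch.randn(k), torch.randn(n, k), torch.randn(n, k),
                      torch.randn(k2, k), torch.randn(k2))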
Embedding Aggregation. To stabilize the learning process of attention, we perform multi-head attention in the final layer. Specifically, we use $m$ attention mechanisms to execute the transformation of Equation DISPLAY_FORM11. An aggregator is then needed to combine all embeddings of the multi-head graph attention layer. In this study, we adopt two types of aggregators:
Concatenation Aggregator concatenates all embeddings of multi-head graph attention, followed by a nonlinear transformation:
where $\mathop {\Big |\Big |}$ represents concatenation, $ \pi (h,r,t)^{i}$ are normalized attention coefficient computed by the $i$-th attentive embedding propagation, and $\textbf {W}^{i}$ denotes the linear transformation of input embedding.
Averaging Aggregator sums all embeddings of multi-head graph attention and the output embedding in the final is calculated applying averaging:
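A minimal sketch of the two aggregators over $m$ head outputs; the ReLU applied after concatenation is an assumed choice of nonlinearity, and the head outputs here are random placeholders.

    import torch

    def concat_aggregate(head_outputs):
        """Concatenation aggregator: concatenate the m head outputs, then a nonlinearity."""
        return torch.relu(torch.cat(head_outputs, dim=-1))    # shape (m * k',)

    def average_aggregate(head_outputs):
        """Averaging aggregator: element-wise mean of the m head outputs."""
        return torch.stack(head_outputs, dim=0).mean(dim=0)   # shape (k',)

    heads = [torch.randn(64) for _ in range(4)]   # outputs of m = 4 attention heads (illustrative)
    h_concat = concat_aggregate(heads)            # used in the KANE (...+Concatenation) variants
    h_avg = average_aggregate(heads)              # used in the KANE (...+Average) variants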
In order to encode the high-order connectivity information of KGs, we use multiple embedding propagation layers to gather the deeper information propagated from the neighbors. More formally, the embedding of entity $h$ in the $l$-th layer can be defined as follows:
After performing $L$ embedding propagation layers, we can get the final embedding of entities, relations and attribute values, which include both high-order structural and attribute information of KGs. Next, we discuss the loss functions of KANE for two different tasks and introduce the learning and optimization detail.
Proposed Model ::: Output Layer and Training Details
Here, we introduce the learning and optimization details of our method. Two different loss functions are carefully designed for the two different KG tasks, knowledge graph completion and entity classification. The details of these two loss functions are discussed next.
Knowledge graph completion. This is a classical task in the knowledge graph representation learning community. Specifically, two subtasks are included in knowledge graph completion: entity prediction and link prediction. Entity prediction aims to infer the missing head/tail entities in the test set when one of them is absent, while link prediction focuses on completing a triple when the relation is missing. In this study, we borrow the idea of the translational scoring function from TransE: the embedding $\textbf {t}$ of the tail entity should be close to the head entity's embedding $\textbf {h}$ plus the relation vector $\textbf {r}$ if $(h, r, t)$ holds, which gives $d(h+r,t)= ||\textbf {h}+\textbf {r}- \textbf {t}||$. Specifically, we train our model using the hinge-loss function, given formally as
where $\gamma >0$ is a margin hyper-parameter, $[x ]_{+}$ denotes the positive part of $x$, $T=T_{R} \cup T_{A}$ is the set of valid triples, and $T^{\prime }$ is set of corrupted triples which can be formulated as:
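A minimal PyTorch sketch of this hinge loss, with corrupted triples built by replacing the head or the tail entity with a random one; the lookup tables, sizes and example triples are illustrative assumptions.

    import random
    import torch

    def hinge_loss(ent_emb, rel_emb, triples, gamma=1.0):
        """Margin-based loss over valid triples and corrupted triples T'."""
        num_entities = ent_emb.num_embeddings
        loss = 0.0
        for h, r, t in triples:
            h_v, r_v, t_v = ent_emb.weight[h], rel_emb.weight[r], ent_emb.weight[t]
            # corrupt the head or the tail to obtain a negative triple
            if random.random() < 0.5:
                h_n, t_n = random.randrange(num_entities), t
            else:
                h_n, t_n = h, random.randrange(num_entities)
            pos = torch.norm(h_v + r_v - t_v)                       # d(h + r, t)
            neg = torch.norm(ent_emb.weight[h_n] + r_v - ent_emb.weight[t_n])
            loss = loss + torch.clamp(gamma + pos - neg, min=0.0)   # [x]_+
        return loss

    ent_emb, rel_emb = torch.nn.Embedding(100, 128), torch.nn.Embedding(20, 128)
    loss = hinge_loss(ent_emb, rel_emb, [(0, 3, 7), (5, 1, 9)])
    loss.backward()    # gradients for the mini-batch SGD update described below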
Entity Classification. For the task of entity classification, we simply use a fully connected layer and the binary cross-entropy (BCE) loss over a sigmoid activation on the output of the last layer. We minimize the binary cross-entropy on all labeled entities, given formally as:
where $E_{D}$ is the set of entities that have labels, $C$ is the dimension of the output features, which is equal to the number of classes, $y_{ej}$ is the label indicator of entity $e$ for the $j$-th class, and $\sigma (x)$ is the sigmoid function $\sigma (x) = \frac{1}{1+e^{-x}}$.
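A minimal PyTorch sketch of this classification head; the use of BCEWithLogitsLoss (which fuses the sigmoid and the binary cross-entropy) and the dimensions are our assumptions.

    import torch
    import torch.nn as nn

    class EntityClassifier(nn.Module):
        """Fully connected layer over entity embeddings, trained with sigmoid + BCE."""

        def __init__(self, emb_dim, num_classes):
            super().__init__()
            self.fc = nn.Linear(emb_dim, num_classes)
            self.loss_fn = nn.BCEWithLogitsLoss()     # sigmoid and BCE in a single call

        def forward(self, entity_emb, labels=None):
            logits = self.fc(entity_emb)              # (batch, C)
            if labels is None:
                return torch.sigmoid(logits)
            return self.loss_fn(logits, labels)

    clf = EntityClassifier(emb_dim=128, num_classes=10)
    emb = torch.randn(32, 128)                        # embeddings of 32 labeled entities
    y = torch.randint(0, 2, (32, 10)).float()         # label indicators y_ej
    loss = clf(emb, y)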
We optimize these two loss functions using mini-batch stochastic gradient descent (SGD) over the possible $\textbf {h}$, $\textbf {r}$, $\textbf {t}$, with the chain rule applied to update all parameters. At each step, we update a parameter as $\textbf {h}^{\tau +1}\leftarrow \textbf {h}^{\tau }-\lambda \nabla _{\textbf {h}}\mathcal {L}$, where $\tau $ labels the iteration step and $\lambda $ is the learning rate.
Experiments ::: Data sets
In this study, we evaluate our model on three real KGs, including two typical large-scale knowledge graphs, Freebase BIBREF0 and DBpedia BIBREF1, and a self-constructed game knowledge graph. First, we adapt a dataset extracted from Freebase, i.e., FB24K, which was used by BIBREF26. Then, we collect extra entities and relations from DBpedia that have at least 100 mentions BIBREF7 and that can be linked to the entities in FB24K by sameAs triples. Finally, we build a dataset named DBP24K. In addition, we build a game dataset from our game knowledge graph, named Game30K. The statistics of the datasets are listed in Table TABREF24.
Experiments ::: Experiments Setting
In evaluation, we compare our method with three types of models:
1) Typical Methods. Three typical knowledge graph embedding methods, TransE, TransR and TransH, are selected as baselines. For TransE, the dissimilarity measure is implemented with the L1-norm, and both relations and entities are replaced during negative sampling. For TransR, we directly use the source code released in BIBREF9. For better performance, the replacement of relations in negative sampling is utilized according to the suggestion of the authors.
2) Path-based Methods. We compare our method with two typical path-based models, PTransE and ALL-PATHS BIBREF18. PTransE is the first method to model relation paths in the KG embedding task, and ALL-PATHS improves PTransE through a dynamic programming algorithm which can incorporate all relation paths of bounded length.
3) Attribute-incorporated Methods. Several state-of-the-art attribute-incorporated methods, including R-GCN BIBREF24 and KR-EAR BIBREF26, are used to compare with our method on the three real datasets.
In addition, four variants of KANE, each of which defines its own way of computing the attribute value embedding and the embedding aggregation, are used as baselines in the evaluation. In this study, we name these four variants KANE (BOW+Concatenation), KANE (BOW+Average), KANE (LSTM+Concatenation) and KANE (LSTM+Average). Our method is learned with mini-batch SGD. As for hyper-parameters, we select the batch size among {16, 32, 64, 128} and the learning rate $\lambda $ for SGD among {0.1, 0.01, 0.001}. For a fair comparison, we also set the vector dimension of all entities and relations to the same $k \in ${128, 258, 512, 1024}, use the same dissimilarity measure ($l_{1}$ or $l_{2}$ distance) in the loss function, and the same number of negative examples $n$ among {1, 10, 20, 40}. The training time on both data sets is limited to at most 400 epochs. The best models are selected by a grid search and early stopping on the validation sets.
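For reference, the search space described above can be written down as a small grid; the dictionary below simply restates the listed candidate values, and the selection itself is the grid search with early stopping mentioned in the text.

    from itertools import product

    search_space = {
        "batch_size": [16, 32, 64, 128],
        "learning_rate": [0.1, 0.01, 0.001],
        "embedding_dim": [128, 258, 512, 1024],
        "dissimilarity": ["l1", "l2"],
        "num_negatives": [1, 10, 20, 40],
    }
    configs = [dict(zip(search_space, values))
               for values in product(*search_space.values())]
    # each configuration is trained for at most 400 epochs; the best model is
    # selected by early stopping on the validation set
    print(len(configs), "candidate configurations")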
Experiments ::: Entity Classification ::: Evaluation Protocol.
In entity classification, the aim is to predict the type of an entity. For all baseline models, we first obtain the entity embeddings on the different datasets using the default parameter settings of their original papers or implementations. Then, logistic regression is used as the classifier, which regards the entity embeddings as its input features. In the evaluation, we randomly select 10% of the training set as the validation set and use accuracy as the evaluation metric.
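A minimal sketch of this protocol with scikit-learn; the arrays of embeddings and type labels are hypothetical placeholders for the learned embeddings and the entity types.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    X = np.random.randn(1000, 128)            # learned entity embeddings (placeholder)
    y = np.random.randint(0, 5, size=1000)    # entity types (placeholder)

    X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.1, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print("validation accuracy:", accuracy_score(y_val, clf.predict(X_val)))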
Experiments ::: Entity Classification ::: Test Performance.
Experimental results of entity classification on the test sets of all the datasets are shown in Table TABREF25. The results clearly demonstrate that our proposed method significantly outperforms state-of-the-art results on accuracy for all three datasets. For a more in-depth performance analysis, we note: (1) Among all baselines, path-based methods and attribute-incorporated methods outperform the three typical methods. This indicates that incorporating extra information can improve knowledge graph embedding performance; (2) The four variants of KANE always outperform the baseline methods. The main reasons why KANE works well are two-fold: 1) KANE captures the high-order structural information of KGs in an efficient, explicit manner and passes this information to the neighboring entities; 2) KANE leverages the rich information encoded in attribute triples, and this rich semantic information further improves the performance of knowledge graph embedding; (3) The variant of KANE that uses the LSTM encoder and the concatenation aggregator outperforms the other variants. The main reason is that the LSTM encoder can distinguish word order, and the concatenation aggregator combines all embeddings of the multi-head attention in a higher-level feature space, which provides sufficient expressive power.
Experiments ::: Entity Classification ::: Efficiency Evaluation.
Figure FIGREF30 shows the test accuracy with increasing epochs on DBP24K and Game30K. We can see that the test accuracy first increases rapidly in the first ten iterations, but reaches a stable stage when the epoch is larger than 40. Figure FIGREF31 shows the test accuracy with different embedding sizes and training data proportions. We note that too small an embedding size or training data proportion cannot generate sufficient global information. In order to further analyze the embeddings learned by our method, we use the t-SNE tool BIBREF27 to visualize them. Figure FIGREF32 shows the visualization of the 256-dimensional entity embeddings on Game30K learned by KANE, R-GCN, PTransE and TransE. We observe that our method learns more discriminative entity embeddings than the other methods.
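A minimal sketch of this visualization using scikit-learn's t-SNE and matplotlib; the embedding array, labels and output file name are hypothetical placeholders.

    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.manifold import TSNE

    emb = np.random.randn(500, 256)           # 256-dimensional entity embeddings (placeholder)
    labels = np.random.randint(0, 5, 500)     # entity types used to color the points

    points = TSNE(n_components=2, random_state=0).fit_transform(emb)
    plt.scatter(points[:, 0], points[:, 1], c=labels, s=5)
    plt.savefig("game30k_tsne.png")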
Experiments ::: Knowledge Graph Completion
The purpose of knowledge graph completion is to complete a triple $(h, r, t)$ when one of $h, r, t$ is missing, and it has been used in much of the literature BIBREF7. Two measures are considered as our evaluation metrics: (1) the mean rank of correct entities or relations (Mean Rank); (2) the proportion of correct entities or relations ranked in the top 1 (Hits@1, for relations) or top 10 (Hits@10, for entities). Following the setting in BIBREF7, we also adopt the two evaluation settings named "raw" and "filter" in order to avoid misleading behavior.
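A minimal NumPy sketch of the entity-prediction metrics under a TransE-style distance, where passing the set of known triples switches from the "raw" to the "filter" setting; the function and variable names are ours.

    import numpy as np

    def rank_tail(h, r, t, ent_emb, rel_emb, known_triples=None):
        """Rank of the true tail t among all candidates for the query (h, r, ?)."""
        scores = np.linalg.norm(ent_emb[h] + rel_emb[r] - ent_emb, axis=1)  # lower is better
        if known_triples is not None:                    # "filter" setting
            for e in range(len(ent_emb)):
                if e != t and (h, r, e) in known_triples:
                    scores[e] = np.inf
        return int((scores < scores[t]).sum()) + 1

    def evaluate(test_triples, ent_emb, rel_emb, known_triples=None, k=10):
        ranks = [rank_tail(h, r, t, ent_emb, rel_emb, known_triples)
                 for h, r, t in test_triples]
        return {"MeanRank": float(np.mean(ranks)),
                "Hits@%d" % k: float(np.mean([r <= k for r in ranks]))}

    ent_emb, rel_emb = np.random.randn(100, 64), np.random.randn(20, 64)
    print(evaluate([(0, 3, 7), (5, 1, 9)], ent_emb, rel_emb))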
The results of entity and relation prediction on FB24K are shown in Table TABREF33. These results indicate that KANE still outperforms the other baselines significantly and consistently. This also verifies the necessity of modeling the high-order structural and attribute information of KGs in knowledge graph embedding models.
Conclusion and Future Work
Many recent works have demonstrated the benefits of knowledge graph embedding in knowledge graph completion, such as relation extraction. However, We argue that knowledge graph embedding method still have room for improvement. First, TransE and its most extensions only take direct relations between entities into consideration. Second, most existing knowledge graph embedding methods just leverage relation triples of KGs while ignoring a large number of attribute triples. In order to overcome these limitation, inspired by the recent developments of graph convolutional networks, we propose a new knowledge graph embedding methods, named KANE. The key ideal of KANE is to aggregate all attribute triples with bias and perform embedding propagation based on relation triples when calculating the representations of given entity. Empirical results on three datasets show that KANE significantly outperforms seven state-of-arts methods. | we use t-SNE tool BIBREF27 to visualize the learned embedding |
aa0d67c2a1bc222d1f2d9e5d51824352da5bb6dc | aa0d67c2a1bc222d1f2d9e5d51824352da5bb6dc_0 | Q: What seven state-of-the-art methods are used for comparison?
Text: Introduction
In the past decade, many large-scale Knowledge Graphs (KGs), such as Freebase BIBREF0, DBpedia BIBREF1 and YAGO BIBREF2, have been built to represent complex human knowledge about the real world in a machine-readable format. The facts in KGs are usually encoded in the form of triples $(\textit {head entity}, relation, \textit {tail entity})$ (denoted $(h, r, t)$ in this study) through the Resource Description Framework, e.g., $(\textit {Donald Trump}, Born In, \textit {New York City})$. Figure FIGREF2 shows the subgraph of a knowledge graph about the family of Donald Trump. In many KGs, we can observe that some relations indicate attributes of entities, such as $\textit {Born}$ and $\textit {Abstract}$ in Figure FIGREF2, while others indicate relations between entities (where both the head entity and the tail entity are real-world entities). Hence, the relationships in a KG can be divided into relations and attributes, and correspondingly into two types of triples, namely relation triples and attribute triples BIBREF3. A relation triple in a KG represents a relationship between entities, e.g., $(\textit {Donald Trump},Father of, \textit {Ivanka Trump})$, while an attribute triple denotes a literal attribute value of an entity, e.g., $(\textit {Donald Trump},Born, \textit {"June 14, 1946"})$.
Knowledge graphs have become an important basis for many artificial intelligence applications, such as recommendation systems BIBREF4, question answering BIBREF5 and information retrieval BIBREF6, and are attracting growing interest in both the academic and industrial communities. A common approach to applying KGs in these artificial intelligence applications is through embedding, which provides a simple method to encode both entities and relations into a continuous low-dimensional embedding space. Hence, learning distributional representations of knowledge graphs has attracted much research attention in recent years. TransE BIBREF7 is a seminal work in learning low-dimensional vectors for both entities and relations. The basic idea behind TransE is that the embedding $\textbf {t}$ of the tail entity should be close to the head entity's embedding $\textbf {h}$ plus the relation vector $\textbf {r}$ if $(h, r, t)$ holds, which indicates $\textbf {h}+\textbf {r}\approx \textbf {t}$. This model provides a flexible way to improve the ability to complete KGs, such as predicting missing items in a knowledge graph. Since then, several methods like TransH BIBREF8 and TransR BIBREF9, which represent the relational translation in other effective forms, have been proposed. Recent attempts have focused on either incorporating extra information beyond KG triples BIBREF10, BIBREF11, BIBREF12, BIBREF13, or designing more complicated strategies BIBREF14, BIBREF15, BIBREF16.
While these methods have achieved promising results in KG completion and link predication, existing knowledge graph embedding methods still have room for improvement. First, TransE and its most extensions only take direct relations between entities into consideration. We argue that the high-order structural relationship between entities also contain rich semantic relationships and incorporating these information can improve model performance. For example the fact $\textit {Donald Trump}\stackrel{Father of}{\longrightarrow }\textit {Ivanka Trump}\stackrel{Spouse}{\longrightarrow }\textit {Jared Kushner} $ indicates the relationship between entity Donald Trump and entity Jared Kushner. Several path-based methods have attempted to take multiple-step relation paths into consideration for learning high-order structural information of KGs BIBREF17, BIBREF18. But note that huge number of paths posed a critical complexity challenge on these methods. In order to enable efficient path modeling, these methods have to make approximations by sampling or applying path selection algorithm. We argue that making approximations has a large impact on the final performance.
Second, to the best of our knowledge, most existing knowledge graph embedding methods just leverage relation triples of KGs while ignoring a large number of attribute triples. Therefore, these methods easily suffer from sparseness and incompleteness of knowledge graph. Even worse, structure information usually cannot distinguish the different meanings of relations and entities in different triples. We believe that these rich information encoded in attribute triples can help explore rich semantic information and further improve the performance of knowledge graph. For example, we can learn date of birth and abstraction from values of Born and Abstract about Donald Trump in Figure FIGREF2. There are a huge number of attribute triples in real KGs, for example the statistical results in BIBREF3 shows attribute triples are three times as many as relationship triples in English DBpedia (2016-04). Recent a few attempts try to incorporate attribute triples BIBREF11, BIBREF12. However, these are two limitations existing in these methods. One is that only a part of attribute triples are used in the existing methods, such as only entity description is used in BIBREF12. The other is some attempts try to jointly model the attribute triples and relation triples in one unified optimization problem. The loss of two kinds triples has to be carefully balanced during optimization. For example, BIBREF3 use hyper-parameters to weight the loss of two kinds triples in their models.
Considering limitations of existing knowledge graph embedding methods, we believe it is of critical importance to develop a model that can capture both high-order structural and attribute information of KGs in an efficient, explicit and unified manner. Towards this end, inspired by the recent developments of graph convolutional networks (GCN) BIBREF19, which have the potential of achieving the goal but have not been explored much for knowledge graph embedding, we propose Knowledge Graph Attention Networks for Enhancing Knowledge Graph Embedding (KANE). The key ideal of KANE is to aggregate all attribute triples with bias and perform embedding propagation based on relation triples when calculating the representations of given entity. Specifically, two carefully designs are equipped in KANE to correspondingly address the above two challenges: 1) recursive embedding propagation based on relation triples, which updates a entity embedding. Through performing such recursively embedding propagation, the high-order structural information of kGs can be successfully captured in a linear time complexity; and 2) multi-head attention-based aggregation. The weight of each attribute triples can be learned through applying the neural attention mechanism BIBREF20.
In experiments, we evaluate our model on two KGs tasks including knowledge graph completion and entity classification. Experimental results on three datasets shows that our method can significantly outperforms state-of-arts methods.
The main contributions of this study are as follows:
1) We highlight the importance of explicitly modeling the high-order structural and attribution information of KGs to provide better knowledge graph embedding.
2) We propose a new method, KANE, which can capture both the high-order structural and the attribute information of KGs in an efficient, explicit and unified manner under the graph convolutional networks framework.
3) We conduct experiments on three datasets, demonstrating the effectiveness of KANE and its interpretability in understanding the importance of high-order relations.
Related Work
In recent years, there are many efforts in Knowledge Graph Embeddings for KGs aiming to encode entities and relations into a continuous low-dimensional embedding spaces. Knowledge Graph Embedding provides a very simply and effective methods to apply KGs in various artificial intelligence applications. Hence, Knowledge Graph Embeddings has attracted many research attentions in recent years. The general methodology is to define a score function for the triples and finally learn the representations of entities and relations by minimizing the loss function $f_r(h,t)$, which implies some types of transformations on $\textbf {h}$ and $\textbf {t}$. TransE BIBREF7 is a seminal work in knowledge graph embedding, which assumes the embedding $\textbf {t}$ of tail entity should be close to the head entity's embedding $\textbf {r}$ plus the relation vector $\textbf {t}$ when $(h, r, t)$ holds as mentioned in section “Introduction". Hence, TransE defines the following loss function:
TransE regarding the relation as a translation between head entity and tail entity is inspired by the word2vec BIBREF21, where relationships between words often correspond to translations in latent feature space. This model achieves a good trade-off between computational efficiency and accuracy in KGs with thousands of relations. but this model has flaws in dealing with one-to-many, many-to-one and many-to-many relations.
In order to address this issue, TransH BIBREF8 models a relation as a relation-specific hyperplane together with a translation on it, allowing entities to have distinct representation in different relations. TransR BIBREF9 models entities and relations in separate spaces, i.e., entity space and relation spaces, and performs translation from entity spaces to relation spaces. TransD BIBREF22 captures the diversity of relations and entities simultaneously by defining dynamic mapping matrix. Recent attempts can be divided into two categories: (i) those which tries to incorporate additional information to further improve the performance of knowledge graph embedding, e.g., entity types or concepts BIBREF13, relations paths BIBREF17, textual descriptions BIBREF11, BIBREF12 and logical rules BIBREF23; (ii) those which tries to design more complicated strategies, e.g., deep neural network models BIBREF24.
Except for TransE and its extensions, some efforts measure plausibility by matching latent semantics of entities and relations. The basic idea behind these models is that the plausible triples of a KG is assigned low energies. For examples, Distant Model BIBREF25 defines two different projections for head and tail entity in a specific relation, i.e., $\textbf {M}_{r,1}$ and $\textbf {M}_{r,2}$. It represents the vectors of head and tail entity can be transformed by these two projections. The loss function is $f_r(h,t)=||\textbf {M}_{r,1}\textbf {h}-\textbf {M}_{r,2}\textbf {t}||_{1}$.
Our KANE is conceptually advantageous to existing methods in that: 1) it directly factors high-order relations into the predictive model in linear time which avoids the labor intensive process of materializing paths, thus is more efficient and convenient to use; 2) it directly encodes all attribute triples in learning representation of entities which can capture rich semantic information and further improve the performance of knowledge graph embedding, and 3) KANE can directly factors high-order relations and attribute information into the predictive model in an efficient, explicit and unified manner, thus all related parameters are tailored for optimizing the embedding objective.
Problem Formulation
In this study, wo consider two kinds of triples existing in KGs: relation triples and attribute triples. Relation triples denote the relation between entities, while attribute triples describe attributes of entities. Both relation and attribute triples denotes important information about entity, we will take both of them into consideration in the task of learning representation of entities. We let $I $ denote the set of IRIs (Internationalized Resource Identifier), $B $ are the set of blank nodes, and $L $ are the set of literals (denoted by quoted strings). The relation triples and attribute triples can be formalized as follows:
Definition 1. Relation and Attribute Triples: A set of Relation triples $ T_{R} $ can be represented by $ T_{R} \subset E \times R \times E $, where $E \subset I \cup B $ is set of entities, $R \subset I$ is set of relations between entities. Similarly, $ T_{A} \subset E \times R \times A $ is the set of attribute triples, where $ A \subset I \cup B \cup L $ is the set of attribute values.
Definition 2. Knowledge Graph: A KG consists of a combination of relation triples in the form of $ (h, r, t)\in T_{R} $, and attribute triples in form of $ (h, r, a)\in T_{A} $. Formally, we represent a KG as $G=(E,R,A,T_{R},T_{A})$, where $E=\lbrace h,t|(h,r,t)\in T_{R} \cup (h,r,a)\in T_{A}\rbrace $ is set of entities, $R =\lbrace r|(h,r,t)\in T_{R} \cup (h,r,a)\in T_{A}\rbrace $ is set of relations, $A=\lbrace a|(h,r,a)\in T_{A}\rbrace $, respectively.
The purpose of this study is try to use embedding-based model which can capture both high-order structural and attribute information of KGs that assigns a continuous representations for each element of triples in the form $ (\textbf {h}, \textbf {r}, \textbf {t})$ and $ (\textbf {h}, \textbf {r}, \textbf {a})$, where Boldfaced $\textbf {h}\in \mathbb {R}^{k}$, $\textbf {r}\in \mathbb {R}^{k}$, $\textbf {t}\in \mathbb {R}^{k}$ and $\textbf {a}\in \mathbb {R}^{k}$ denote the embedding vector of head entity $h$, relation $r$, tail entity $t$ and attribute $a$ respectively.
Next, we detail our proposed model which models both high-order structural and attribute information of KGs in an efficient, explicit and unified manner under the graph convolutional networks framework.
Proposed Model
In this section, we present the proposed model in detail. We first introduce the overall framework of KANE, then discuss the input embedding of entities, relations and values in KGs, the design of embedding propagation layers based on graph attention network and the loss functions for link predication and entity classification task, respectively.
Proposed Model ::: Overall Architecture
The process of KANE is illustrated in Figure FIGREF2. We introduce the architecture of KANE from left to right. As shown in Figure FIGREF2, the whole triples of knowledge graph as input. The task of attribute embedding lays is embedding every value in attribute triples into a continuous vector space while preserving the semantic information. To capture both high-order structural information of KGs, we used an attention-based embedding propagation method. This method can recursively propagate the embeddings of entities from an entity's neighbors, and aggregate the neighbors with different weights. The final embedding of entities, relations and values are feed into two different deep neural network for two different tasks including link predication and entity classification.
Proposed Model ::: Attribute Embedding Layer
The value in attribute triples usually is sentence or a word. To encode the representation of value from its sentence or word, we need to encode the variable-length sentences to a fixed-length vector. In this study, we adopt two different encoders to model the attribute value.
Bag-of-Words Encoder. The representation of attribute value can be generated by a summation of all words embeddings of values. We denote the attribute value $a$ as a word sequence $a = w_{1},...,w_{n}$, where $w_{i}$ is the word at position $i$. The embedding of $\textbf {a}$ can be defined as follows.
where $\textbf {w}_{i}\in \mathbb {R}^{k}$ is the word embedding of $w_{i}$.
Bag-of-Words Encoder is a simple and intuitive method, which can capture the relative importance of words. But this method suffers in that two strings that contains the same words with different order will have the same representation.
LSTM Encoder. In order to overcome the limitation of Bag-of-Word encoder, we consider using LSTM networks to encoder a sequence of words in attribute value into a single vector. The final hidden state of the LSTM networks is selected as a representation of the attribute value.
where $f_{lstm}$ is the LSTM network.
Proposed Model ::: Embedding Propagation Layer
Next we describe the details of recursively embedding propagation method building upon the architecture of graph convolution network. Moreover, by exploiting the idea of graph attention network, out method learn to assign varying levels of importance to entity in every entity's neighborhood and can generate attentive weights of cascaded embedding propagation. In this study, embedding propagation layer consists of two mainly components: attentive embedding propagation and embedding aggregation. Here, we start by describing the attentive embedding propagation.
Attentive Embedding Propagation: Considering an KG $G$, the input to our layer is a set of entities, relations and attribute values embedding. We use $\textbf {h}\in \mathbb {R}^{k}$ to denote the embedding of entity $h$. The neighborhood of entity $h$ can be described by $\mathcal {N}_{h} = \lbrace t,a|(h,r,t)\in T_{R} \cup (h,r,a)\in T_{A}\rbrace $. The purpose of attentive embedding propagation is encode $\mathcal {N}_{h}$ and output a vector $\vec{\textbf {h}}$ as the new embedding of entity $h$.
In order to obtain sufficient expressive power, one learnable linear transformation $\textbf {W}\in \mathbb {R}^{k^{^{\prime }} \times k}$ is adopted to transform the input embeddings into higher level feature space. In this study, we take a triple $(h,r,t)$ as example and the output a vector $\vec{\textbf {h}}$ can be formulated as follows:
where $\pi (h,r,t)$ is attention coefficients which indicates the importance of entity's $t$ to entities $h$ .
In this study, the attention coefficients also control how many information being propagated from its neighborhood through the relation. To make attention coefficients easily comparable between different entities, the attention coefficient $\pi (h,r,t)$ can be computed using a softmax function over all the triples connected with $h$. The softmax function can be formulated as follows:
Hereafter, we implement the attention coefficients $\pi (h,r,t)$ through a single-layer feedforward neural network, which is formulated as follows:
where the leakyRelu is selected as activation function.
As shown in Equation DISPLAY_FORM13, the attention coefficient score is depend on the distance head entity $h$ and the tail entity $t$ plus the relation $r$, which follows the idea behind TransE that the embedding $\textbf {t}$ of head entity should be close to the tail entity's embedding $\textbf {r}$ plus the relation vector $\textbf {t}$ if $(h, r, t)$ holds.
Embedding Aggregation. To stabilize the learning process of attention, we perform multi-head attention on final layer. Specifically, we use $m$ attention mechanism to execute the transformation of Equation DISPLAY_FORM11. A aggregators is needed to combine all embeddings of multi-head graph attention layer. In this study, we adapt two types of aggregators:
Concatenation Aggregator concatenates all embeddings of multi-head graph attention, followed by a nonlinear transformation:
where $\mathop {\Big |\Big |}$ represents concatenation, $ \pi (h,r,t)^{i}$ are normalized attention coefficient computed by the $i$-th attentive embedding propagation, and $\textbf {W}^{i}$ denotes the linear transformation of input embedding.
Averaging Aggregator sums all embeddings of multi-head graph attention and the output embedding in the final is calculated applying averaging:
In order to encode the high-order connectivity information in KGs, we use multiple embedding propagation layers to gathering the deep information propagated from the neighbors. More formally, the embedding of entity $h$ in $l$-th layers can be defined as follows:
After performing $L$ embedding propagation layers, we can get the final embedding of entities, relations and attribute values, which include both high-order structural and attribute information of KGs. Next, we discuss the loss functions of KANE for two different tasks and introduce the learning and optimization detail.
Proposed Model ::: Output Layer and Training Details
Here, we introduce the learning and optimization details for our method. Two different loss functions are carefully designed fro two different tasks of KG, which include knowledge graph completion and entity classification. Next details of these two loss functions are discussed.
knowledge graph completion. This task is a classical task in knowledge graph representation learning community. Specifically, two subtasks are included in knowledge graph completion: entity predication and link predication. Entity predication aims to infer the impossible head/tail entities in testing datasets when one of them is missing, while the link predication focus on complete a triple when relation is missing. In this study, we borrow the idea of translational scoring function from TransE, which the embedding $\textbf {t}$ of tail entity should be close to the head entity's embedding $\textbf {r}$ plus the relation vector $\textbf {t}$ if $(h, r, t)$ holds, which indicates $d(h+r,t)= ||\textbf {h}+\textbf {r}- \textbf {t}||$. Specifically, we train our model using hinge-loss function, given formally as
where $\gamma >0$ is a margin hyper-parameter, $[x ]_{+}$ denotes the positive part of $x$, $T=T_{R} \cup T_{A}$ is the set of valid triples, and $T^{\prime }$ is set of corrupted triples which can be formulated as:
Entity Classification. For the task of entity classification, we simple uses a fully connected layers and binary cross-entropy loss (BCE) over sigmoid activation on the output of last layer. We minimize the binary cross-entropy on all labeled entities, given formally as:
where $E_{D}$ is the set of entities indicates have labels, $C$ is the dimension of the output features, which is equal to the number of classes, $y_{ej}$ is the label indicator of entity $e$ for $j$-th class, and $\sigma (x)$ is sigmoid function $\sigma (x) = \frac{1}{1+e^{-x}}$.
We optimize these two loss functions using mini-batch stochastic gradient decent (SGD) over the possible $\textbf {h}$, $\textbf {r}$, $\textbf {t}$, with the chin rule that applying to update all parameters. At each step, we update the parameter $\textbf {h}^{\tau +1}\leftarrow \textbf {h}^{\tau }-\lambda \nabla _{\textbf {h}}\mathcal {L}$, where $\tau $ labels the iteration step and $\lambda $ is the learning rate.
Experiments ::: Data sets
In this study, we evaluate our model on three real KG including two typical large-scale knowledge graph: Freebase BIBREF0, DBpedia BIBREF1 and a self-construction game knowledge graph. First, we adapt a dataset extracted from Freebase, i.e., FB24K, which used by BIBREF26. Then, we collect extra entities and relations that from DBpedia which that they should have at least 100 mentions BIBREF7 and they could link to the entities in the FB24K by the sameAs triples. Finally, we build a datasets named as DBP24K. In addition, we build a game datasets from our game knowledge graph, named as Game30K. The statistics of datasets are listed in Table TABREF24.
Experiments ::: Experiments Setting
In evaluation, we compare our method with three types of models:
1) Typical Methods. Three typical knowledge graph embedding methods includes TransE, TransR and TransH are selected as baselines. For TransE, the dissimilarity measure is implemented with L1-norm, and relation as well as entity are replaced during negative sampling. For TransR, we directly use the source codes released in BIBREF9. In order for better performance, the replacement of relation in negative sampling is utilized according to the suggestion of author.
2) Path-based Methods. We compare our method with two typical path-based model include PTransE, and ALL-PATHS BIBREF18. PTransE is the first method to model relation path in KG embedding task, and ALL-PATHS improve the PTransE through a dynamic programming algorithm which can incorporate all relation paths of bounded length.
3) Attribute-incorporated Methods. Several state-of-art attribute-incorporated methods including R-GCN BIBREF24 and KR-EAR BIBREF26 are used to compare with our methods on three real datasets.
In addition, four variants of KANE which each of which correspondingly defines its specific way of computing the attribute value embedding and embedding aggregation are used as baseline in evaluation. In this study, we name four three variants as KANE (BOW+Concatenation), KANE (BOW+Average), and KANE (LSTM+Concatenation), KANE (LSTM+Average). Our method is learned with mini-batch SGD. As for hyper-parameters, we select batch size among {16, 32, 64, 128}, learning rate $\lambda $ for SGD among {0.1, 0.01, 0.001}. For a fair comparison, we also set the vector dimensions of all entity and relation to the same $k \in ${128, 258, 512, 1024}, the same dissimilarity measure $l_{1}$ or $l_{2}$ distance in loss function, and the same number of negative examples $n$ among {1, 10, 20, 40}. The training time on both data sets is limited to at most 400 epochs. The best models are selected by a grid search and early stopping on validation sets.
Experiments ::: Entity Classification ::: Evaluation Protocol.
In entity classification, the aim is to predicate the type of entity. For all baseline models, we first get the entity embedding in different datasets through default parameter settings as in their original papers or implementations.Then, Logistic Regression is used as classifier, which regards the entity's embeddings as feature of classifier. In evaluation, we random selected 10% of training set as validation set and accuracy as evaluation metric.
Experiments ::: Entity Classification ::: Test Performance.
Experimental results of entity classification on the test sets of all the datasets is shown in Table TABREF25. The results is clearly demonstrate that our proposed method significantly outperforms state-of-art results on accuracy for three datasets. For more in-depth performance analysis, we note: (1) Among all baselines, Path-based methods and Attribute-incorporated methods outperform three typical methods. This indicates that incorporating extra information can improve the knowledge graph embedding performance; (2) Four variants of KANE always outperform baseline methods. The main reasons why KANE works well are two fold: 1) KANE can capture high-order structural information of KGs in an efficient, explicit manner and passe these information to their neighboring; 2) KANE leverages rich information encoded in attribute triples. These rich semantic information can further improve the performance of knowledge graph; (3) The variant of KANE that use LSTM Encoder and Concatenation aggregator outperform other variants. The main reasons is that LSTM encoder can distinguish the word order and concatenation aggregator combine all embedding of multi-head attention in a higher leaver feature space, which can obtain sufficient expressive power.
Experiments ::: Entity Classification ::: Efficiency Evaluation.
Figure FIGREF30 shows the test accuracy with increasing epoch on DBP24K and Game30K. We can see that test accuracy first rapidly increased in the first ten iterations, but reaches a stable stages when epoch is larger than 40. Figure FIGREF31 shows test accuracy with different embedding size and training data proportions. We can note that too small embedding size or training data proportions can not generate sufficient global information. In order to further analysis the embeddings learned by our method, we use t-SNE tool BIBREF27 to visualize the learned embedding. Figure FIGREF32 shows the visualization of 256 dimensional entity's embedding on Game30K learned by KANE, R-GCN, PransE and TransE. We observe that our method can learn more discriminative entity's embedding than other other methods.
Experiments ::: Knowledge Graph Completion
The purpose of knowledge graph completion is to complete a triple $(h, r, t)$ when one of $h, r, t$ is missing, which is used many literature BIBREF7. Two measures are considered as our evaluation metrics: (1) the mean rank of correct entities or relations (Mean Rank); (2) the proportion of correct entities or relations ranked in top1 (Hits@1, for relations) or top 10 (Hits@10, for entities). Following the setting in BIBREF7, we also adopt the two evaluation settings named "raw" and "filter" in order to avoid misleading behavior.
The results of entity and relation predication on FB24K are shown in the Table TABREF33. This results indicates that KANE still outperforms other baselines significantly and consistently. This also verifies the necessity of modeling high-order structural and attribute information of KGs in Knowledge graph embedding models.
Conclusion and Future Work
Many recent works have demonstrated the benefits of knowledge graph embedding in knowledge graph completion, such as relation extraction. However, We argue that knowledge graph embedding method still have room for improvement. First, TransE and its most extensions only take direct relations between entities into consideration. Second, most existing knowledge graph embedding methods just leverage relation triples of KGs while ignoring a large number of attribute triples. In order to overcome these limitation, inspired by the recent developments of graph convolutional networks, we propose a new knowledge graph embedding methods, named KANE. The key ideal of KANE is to aggregate all attribute triples with bias and perform embedding propagation based on relation triples when calculating the representations of given entity. Empirical results on three datasets show that KANE significantly outperforms seven state-of-arts methods. | TransE, TransR and TransH, PTransE, and ALL-PATHS, R-GCN BIBREF24 and KR-EAR BIBREF26 |
cf0085c1d7bd9bc9932424e4aba4e6812d27f727 | cf0085c1d7bd9bc9932424e4aba4e6812d27f727_0 | Q: What three datasets are used to measure performance?
Text: Introduction
In the past decade, many large-scale Knowledge Graphs (KGs), such as Freebase BIBREF0, DBpedia BIBREF1 and YAGO BIBREF2 have been built to represent human complex knowledge about the real-world in the machine-readable format. The facts in KGs are usually encoded in the form of triples $(\textit {head entity}, relation, \textit {tail entity})$ (denoted $(h, r, t)$ in this study) through the Resource Description Framework, e.g.,$(\textit {Donald Trump}, Born In, \textit {New York City})$. Figure FIGREF2 shows the subgraph of knowledge graph about the family of Donald Trump. In many KGs, we can observe that some relations indicate attributes of entities, such as the $\textit {Born}$ and $\textit {Abstract}$ in Figure FIGREF2, and others indicates the relations between entities (the head entity and tail entity are real world entity). Hence, the relationship in KG can be divided into relations and attributes, and correspondingly two types of triples, namely relation triples and attribute triples BIBREF3. A relation triples in KGs represents relationship between entities, e.g.,$(\textit {Donald Trump},Father of, \textit {Ivanka Trump})$, while attribute triples denote a literal attribute value of an entity, e.g.,$(\textit {Donald Trump},Born, \textit {"June 14, 1946"})$.
Knowledge graphs have became important basis for many artificial intelligence applications, such as recommendation system BIBREF4, question answering BIBREF5 and information retrieval BIBREF6, which is attracting growing interests in both academia and industry communities. A common approach to apply KGs in these artificial intelligence applications is through embedding, which provide a simple method to encode both entities and relations into a continuous low-dimensional embedding spaces. Hence, learning distributional representation of knowledge graph has attracted many research attentions in recent years. TransE BIBREF7 is a seminal work in representation learning low-dimensional vectors for both entities and relations. The basic idea behind TransE is that the embedding $\textbf {t}$ of tail entity should be close to the head entity's embedding $\textbf {r}$ plus the relation vector $\textbf {t}$ if $(h, r, t)$ holds, which indicates $\textbf {h}+\textbf {r}\approx \textbf {t}$. This model provide a flexible way to improve the ability in completing the KGs, such as predicating the missing items in knowledge graph. Since then, several methods like TransH BIBREF8 and TransR BIBREF9, which represent the relational translation in other effective forms, have been proposed. Recent attempts focused on either incorporating extra information beyond KG triples BIBREF10, BIBREF11, BIBREF12, BIBREF13, or designing more complicated strategies BIBREF14, BIBREF15, BIBREF16.
While these methods have achieved promising results in KG completion and link prediction, existing knowledge graph embedding methods still have room for improvement. First, TransE and most of its extensions only take direct relations between entities into consideration. We argue that the high-order structural relationships between entities also contain rich semantic information, and incorporating this information can improve model performance. For example, the fact $\textit {Donald Trump}\stackrel{Father of}{\longrightarrow }\textit {Ivanka Trump}\stackrel{Spouse}{\longrightarrow }\textit {Jared Kushner} $ indicates a relationship between the entity Donald Trump and the entity Jared Kushner. Several path-based methods have attempted to take multiple-step relation paths into consideration for learning the high-order structural information of KGs BIBREF17, BIBREF18. However, the huge number of paths poses a critical complexity challenge for these methods. In order to enable efficient path modeling, they have to make approximations by sampling or by applying path selection algorithms. We argue that making such approximations has a large impact on the final performance.
Second, to the best of our knowledge, most existing knowledge graph embedding methods only leverage the relation triples of KGs while ignoring a large number of attribute triples. Therefore, these methods easily suffer from the sparseness and incompleteness of the knowledge graph. Even worse, structure information alone usually cannot distinguish the different meanings of relations and entities in different triples. We believe that the rich information encoded in attribute triples can help capture richer semantics and further improve the performance of knowledge graph embedding. For example, we can learn the date of birth and the abstract of Donald Trump from the values of Born and Abstract in Figure FIGREF2. There are a huge number of attribute triples in real KGs; for example, the statistics in BIBREF3 show that attribute triples are three times as numerous as relation triples in English DBpedia (2016-04). A few recent attempts try to incorporate attribute triples BIBREF11, BIBREF12. However, two limitations exist in these methods. One is that only a part of the attribute triples are used, for instance only the entity description is used in BIBREF12. The other is that some attempts jointly model the attribute triples and relation triples in one unified optimization problem, so the losses of the two kinds of triples have to be carefully balanced during optimization. For example, BIBREF3 use hyper-parameters to weight the losses of the two kinds of triples in their models.
Considering the limitations of existing knowledge graph embedding methods, we believe it is of critical importance to develop a model that can capture both the high-order structural and the attribute information of KGs in an efficient, explicit and unified manner. Towards this end, inspired by the recent developments of graph convolutional networks (GCN) BIBREF19, which have the potential to achieve this goal but have not been explored much for knowledge graph embedding, we propose Knowledge Graph Attention Networks for Enhancing Knowledge Graph Embedding (KANE). The key idea of KANE is to aggregate all attribute triples with bias and to perform embedding propagation based on relation triples when calculating the representation of a given entity. Specifically, two careful designs are equipped in KANE to address the above two challenges: 1) recursive embedding propagation based on relation triples, which updates an entity embedding from the embeddings of its neighbors. By performing such recursive embedding propagation, the high-order structural information of KGs can be captured with linear time complexity; and 2) multi-head attention-based aggregation, in which the weight of each attribute triple can be learned by applying the neural attention mechanism BIBREF20.
In experiments, we evaluate our model on two KG tasks: knowledge graph completion and entity classification. Experimental results on three datasets show that our method significantly outperforms state-of-the-art methods.
The main contributions of this study are as follows:
1) We highlight the importance of explicitly modeling the high-order structural and attribute information of KGs to provide better knowledge graph embedding.
2) We propose a new method, KANE, which can capture both high-order structural and attribute information of KGs in an efficient, explicit and unified manner under the graph convolutional network framework.
3) We conduct experiments on three datasets, demonstrating the effectiveness of KANE and its interpretability in understanding the importance of high-order relations.
Related Work
In recent years, there have been many efforts in knowledge graph embedding aiming to encode entities and relations into a continuous low-dimensional embedding space. Knowledge graph embedding provides a simple and effective way to apply KGs in various artificial intelligence applications, and has therefore attracted much research attention. The general methodology is to define a score function $f_r(h,t)$ for triples, which implies some type of transformation on $\textbf {h}$ and $\textbf {t}$, and to learn the representations of entities and relations by minimizing the corresponding loss. TransE BIBREF7 is a seminal work in knowledge graph embedding, which assumes that the embedding $\textbf {t}$ of the tail entity should be close to the head entity's embedding $\textbf {h}$ plus the relation vector $\textbf {r}$ when $(h, r, t)$ holds, as mentioned in the section “Introduction". Hence, TransE defines the following loss function:
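For reference, the margin-based ranking loss introduced in the original TransE paper has the standard form below; we state it in this document's notation as a reminder rather than as the authors' exact equation, with $d$ a dissimilarity measure such as the $l_1$ or $l_2$ distance and $T_{R}^{\prime }$ a set of corrupted triples.

$$\mathcal {L}_{\mathrm {TransE}} = \sum _{(h,r,t)\in T_{R}}\ \sum _{(h^{\prime },r,t^{\prime })\in T_{R}^{\prime }} \left[\gamma + d(\textbf {h}+\textbf {r},\textbf {t}) - d(\textbf {h}^{\prime }+\textbf {r},\textbf {t}^{\prime })\right]_{+}$$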
By regarding the relation as a translation between the head entity and the tail entity, TransE is inspired by word2vec BIBREF21, where relationships between words often correspond to translations in the latent feature space. This model achieves a good trade-off between computational efficiency and accuracy in KGs with thousands of relations, but it has flaws in dealing with one-to-many, many-to-one and many-to-many relations.
In order to address this issue, TransH BIBREF8 models a relation as a relation-specific hyperplane together with a translation on it, allowing entities to have distinct representations under different relations. TransR BIBREF9 models entities and relations in separate spaces, i.e., an entity space and relation spaces, and performs translation from the entity space to the relation spaces. TransD BIBREF22 captures the diversity of relations and entities simultaneously by defining dynamic mapping matrices. Recent attempts can be divided into two categories: (i) those which try to incorporate additional information to further improve the performance of knowledge graph embedding, e.g., entity types or concepts BIBREF13, relation paths BIBREF17, textual descriptions BIBREF11, BIBREF12 and logical rules BIBREF23; (ii) those which try to design more complicated strategies, e.g., deep neural network models BIBREF24.
Besides TransE and its extensions, some efforts measure plausibility by matching the latent semantics of entities and relations. The basic idea behind these models is that the plausible triples of a KG are assigned low energies. For example, the Distance Model BIBREF25 defines two different projections for the head and tail entity under a specific relation, i.e., $\textbf {M}_{r,1}$ and $\textbf {M}_{r,2}$, so that the vectors of the head and tail entity are transformed by these two projections. The loss function is $f_r(h,t)=||\textbf {M}_{r,1}\textbf {h}-\textbf {M}_{r,2}\textbf {t}||_{1}$.
Our KANE is conceptually advantageous over existing methods in that: 1) it directly factors high-order relations into the predictive model in linear time, which avoids the labor-intensive process of materializing paths and is thus more efficient and convenient to use; 2) it directly encodes all attribute triples when learning entity representations, which captures rich semantic information and further improves the performance of knowledge graph embedding; and 3) it factors high-order relations and attribute information into the predictive model in an efficient, explicit and unified manner, so that all related parameters are tailored for optimizing the embedding objective.
Problem Formulation
In this study, we consider the two kinds of triples existing in KGs: relation triples and attribute triples. Relation triples denote relations between entities, while attribute triples describe attributes of entities. Since both relation and attribute triples encode important information about an entity, we take both of them into consideration in the task of learning entity representations. We let $I $ denote the set of IRIs (Internationalized Resource Identifiers), $B $ the set of blank nodes, and $L $ the set of literals (denoted by quoted strings). The relation triples and attribute triples can be formalized as follows:
Definition 1. Relation and Attribute Triples: A set of relation triples $ T_{R} $ can be represented by $ T_{R} \subset E \times R \times E $, where $E \subset I \cup B $ is the set of entities and $R \subset I$ is the set of relations between entities. Similarly, $ T_{A} \subset E \times R \times A $ is the set of attribute triples, where $ A \subset I \cup B \cup L $ is the set of attribute values.
Definition 2. Knowledge Graph: A KG consists of a combination of relation triples in the form $ (h, r, t)\in T_{R} $ and attribute triples in the form $ (h, r, a)\in T_{A} $. Formally, we represent a KG as $G=(E,R,A,T_{R},T_{A})$, where $E=\lbrace h,t|(h,r,t)\in T_{R} \cup (h,r,a)\in T_{A}\rbrace $ is the set of entities, $R =\lbrace r|(h,r,t)\in T_{R} \cup (h,r,a)\in T_{A}\rbrace $ is the set of relations, and $A=\lbrace a|(h,r,a)\in T_{A}\rbrace $ is the set of attribute values, respectively.
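A minimal sketch of how these two kinds of triples might be held in memory is given below; the container names and the toy triples are illustrative assumptions, not part of the datasets used later.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RelationTriple:
    head: str      # entity in E
    relation: str  # relation in R
    tail: str      # entity in E

@dataclass(frozen=True)
class AttributeTriple:
    head: str      # entity in E
    relation: str  # relation in R
    value: str     # literal attribute value in A

relation_triples = {RelationTriple("Donald_Trump", "Father_of", "Ivanka_Trump")}
attribute_triples = {AttributeTriple("Donald_Trump", "Born", "June 14, 1946")}

# The KG G = (E, R, A, T_R, T_A) represented as plain sets.
entities = {t.head for t in relation_triples | attribute_triples} | {t.tail for t in relation_triples}
relations = {t.relation for t in relation_triples | attribute_triples}
attribute_values = {t.value for t in attribute_triples}
print(len(entities), len(relations), len(attribute_values))
```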
The purpose of this study is to learn an embedding-based model that can capture both the high-order structural and the attribute information of KGs and that assigns a continuous representation to each element of the triples $ (\textbf {h}, \textbf {r}, \textbf {t})$ and $ (\textbf {h}, \textbf {r}, \textbf {a})$, where the boldfaced $\textbf {h}\in \mathbb {R}^{k}$, $\textbf {r}\in \mathbb {R}^{k}$, $\textbf {t}\in \mathbb {R}^{k}$ and $\textbf {a}\in \mathbb {R}^{k}$ denote the embedding vectors of the head entity $h$, relation $r$, tail entity $t$ and attribute $a$, respectively.
Next, we detail our proposed model, which captures both the high-order structural and the attribute information of KGs in an efficient, explicit and unified manner under the graph convolutional network framework.
Proposed Model
In this section, we present the proposed model in detail. We first introduce the overall framework of KANE, then discuss the input embeddings of entities, relations and values in KGs, the design of the embedding propagation layers based on the graph attention network, and the loss functions for the link prediction and entity classification tasks, respectively.
Proposed Model ::: Overall Architecture
The process of KANE is illustrated in Figure FIGREF2. We introduce the architecture of KANE from left to right. As shown in Figure FIGREF2, the whole set of triples of the knowledge graph is taken as input. The task of the attribute embedding layer is to embed every value in the attribute triples into a continuous vector space while preserving its semantic information. To capture the high-order structural information of KGs, we use an attention-based embedding propagation method. This method recursively propagates the embeddings of entities from an entity's neighbors and aggregates the neighbors with different weights. The final embeddings of entities, relations and values are fed into two different deep neural networks for two different tasks: link prediction and entity classification.
Proposed Model ::: Attribute Embedding Layer
The value in an attribute triple is usually a sentence or a word. To encode the representation of a value from its sentence or word, we need to encode the variable-length sentence into a fixed-length vector. In this study, we adopt two different encoders to model the attribute values.
Bag-of-Words Encoder. The representation of an attribute value can be generated by summing all word embeddings of the value. We denote the attribute value $a$ as a word sequence $a = w_{1},...,w_{n}$, where $w_{i}$ is the word at position $i$. The embedding of $\textbf {a}$ can be defined as follows.
where $\textbf {w}_{i}\in \mathbb {R}^{k}$ is the word embedding of $w_{i}$.
The Bag-of-Words encoder is a simple and intuitive method that can capture the relative importance of words. However, it suffers from the fact that two strings that contain the same words in a different order will have the same representation.
LSTM Encoder. In order to overcome the limitation of the Bag-of-Words encoder, we consider using an LSTM network to encode a sequence of words in an attribute value into a single vector. The final hidden state of the LSTM network is selected as the representation of the attribute value.
where $f_{lstm}$ is the LSTM network.
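The sketch below illustrates the two attribute-value encoders described above in PyTorch; the vocabulary size, embedding dimension and padding handling are illustrative assumptions rather than the paper's actual configuration.

```python
import torch
import torch.nn as nn

class AttributeValueEncoder(nn.Module):
    """Encodes a tokenized attribute value (word-id sequence) into a single vector."""
    def __init__(self, vocab_size=10000, dim=128, mode="lstm"):
        super().__init__()
        self.mode = mode
        self.word_emb = nn.Embedding(vocab_size, dim, padding_idx=0)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)

    def forward(self, token_ids):                # token_ids: (batch, seq_len)
        w = self.word_emb(token_ids)             # (batch, seq_len, dim)
        if self.mode == "bow":
            return w.sum(dim=1)                  # order-insensitive summation
        _, (h_n, _) = self.lstm(w)               # final hidden state keeps word order
        return h_n[-1]                           # (batch, dim)

encoder = AttributeValueEncoder(mode="lstm")
toy_value = torch.randint(1, 10000, (2, 6))      # two attribute values, 6 tokens each
print(encoder(toy_value).shape)                  # torch.Size([2, 128])
```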
Proposed Model ::: Embedding Propagation Layer
Next, we describe the details of the recursive embedding propagation method, which builds upon the architecture of graph convolutional networks. Moreover, by exploiting the idea of graph attention networks, our method learns to assign varying levels of importance to the entities in every entity's neighborhood and can generate attentive weights for cascaded embedding propagation. In this study, the embedding propagation layer consists of two main components: attentive embedding propagation and embedding aggregation. We start by describing the attentive embedding propagation.
Attentive Embedding Propagation: Considering a KG $G$, the input to our layer is a set of entity, relation and attribute value embeddings. We use $\textbf {h}\in \mathbb {R}^{k}$ to denote the embedding of entity $h$. The neighborhood of entity $h$ can be described by $\mathcal {N}_{h} = \lbrace t,a|(h,r,t)\in T_{R} \cup (h,r,a)\in T_{A}\rbrace $. The purpose of attentive embedding propagation is to encode $\mathcal {N}_{h}$ and output a vector $\vec{\textbf {h}}$ as the new embedding of entity $h$.
In order to obtain sufficient expressive power, one learnable linear transformation $\textbf {W}\in \mathbb {R}^{k^{^{\prime }} \times k}$ is adopted to transform the input embeddings into a higher-level feature space. In this study, we take a triple $(h,r,t)$ as an example, and the output vector $\vec{\textbf {h}}$ can be formulated as follows:
where $\pi (h,r,t)$ is the attention coefficient, which indicates the importance of entity $t$ to entity $h$.
In this study, the attention coefficients also control how much information is propagated from the neighborhood through the relation. To make the attention coefficients easily comparable across different entities, the attention coefficient $\pi (h,r,t)$ is computed using a softmax function over all the triples connected with $h$. The softmax function can be formulated as follows:
We implement the attention coefficient $\pi (h,r,t)$ through a single-layer feedforward neural network, which is formulated as follows:
where LeakyReLU is selected as the activation function.
As shown in Equation DISPLAY_FORM13, the attention coefficient score depends on the distance between the head entity's embedding $\textbf {h}$ plus the relation vector $\textbf {r}$ and the tail entity's embedding $\textbf {t}$, which follows the idea behind TransE that the embedding $\textbf {t}$ of the tail entity should be close to the head entity's embedding $\textbf {h}$ plus the relation vector $\textbf {r}$ if $(h, r, t)$ holds.
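A compact sketch of this TransE-style attention over a single entity's neighborhood is given below; using the negated distance $-\Vert \textbf {h}+\textbf {r}-\textbf {t}\Vert $ passed through LeakyReLU and a softmax is our reading of the description above, so the exact scoring details should be taken as an assumption.

```python
import torch
import torch.nn.functional as F

def attention_coefficients(h, rels, neighbors, negative_slope=0.2):
    """h: (k,); rels, neighbors: (n, k), one row per triple connected with h.
    Returns softmax-normalized attention coefficients pi(h, r, t) of shape (n,)."""
    # The closer h + r is to t, the larger the (negated-distance) score.
    scores = -torch.norm(h.unsqueeze(0) + rels - neighbors, p=1, dim=1)
    scores = F.leaky_relu(scores, negative_slope=negative_slope)
    return F.softmax(scores, dim=0)

def propagate(h, rels, neighbors, W):
    """One attentive propagation step producing the new embedding of h."""
    pi = attention_coefficients(h, rels, neighbors)          # (n,)
    transformed = neighbors @ W.T                            # (n, k')
    return (pi.unsqueeze(1) * transformed).sum(dim=0)        # (k',)

k, k_out, n = 8, 8, 3
h, rels, nbrs = torch.randn(k), torch.randn(n, k), torch.randn(n, k)
W = torch.randn(k_out, k)
print(propagate(h, rels, nbrs, W).shape)                     # torch.Size([8])
```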
Embedding Aggregation. To stabilize the learning process of attention, we perform multi-head attention on the final layer. Specifically, we use $m$ attention mechanisms to execute the transformation of Equation DISPLAY_FORM11. An aggregator is then needed to combine all embeddings of the multi-head graph attention layer. In this study, we adopt two types of aggregators:
Concatenation Aggregator concatenates all embeddings of the multi-head graph attention, followed by a nonlinear transformation:
where $\mathop {\Big |\Big |}$ represents concatenation, $ \pi (h,r,t)^{i}$ are the normalized attention coefficients computed by the $i$-th attentive embedding propagation, and $\textbf {W}^{i}$ denotes the linear transformation of the input embedding.
Averaging Aggregator sums all embeddings of the multi-head graph attention, and the final output embedding is calculated by averaging:
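Building on the propagation sketch above, the two aggregators can be contrasted as follows; the head count and the use of a shared nonlinearity here are assumptions for illustration.

```python
import torch

def multi_head_aggregate(head_outputs, mode="concat"):
    """head_outputs: list of m tensors of shape (k',), one per attention head."""
    stacked = torch.stack(head_outputs, dim=0)        # (m, k')
    if mode == "concat":
        return torch.tanh(stacked.reshape(-1))        # (m * k',) after a nonlinearity
    return stacked.mean(dim=0)                        # (k',) averaging aggregator

heads = [torch.randn(8) for _ in range(4)]            # m = 4 attention heads
print(multi_head_aggregate(heads, "concat").shape)    # torch.Size([32])
print(multi_head_aggregate(heads, "average").shape)   # torch.Size([8])
```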
In order to encode the high-order connectivity information of KGs, we use multiple embedding propagation layers to gather the deeper information propagated from the neighbors. More formally, the embedding of entity $h$ in the $l$-th layer can be defined as follows:
After performing $L$ embedding propagation layers, we obtain the final embeddings of entities, relations and attribute values, which include both the high-order structural and the attribute information of KGs. Next, we discuss the loss functions of KANE for the two different tasks and introduce the learning and optimization details.
Proposed Model ::: Output Layer and Training Details
Here, we introduce the learning and optimization details of our method. Two different loss functions are carefully designed for the two different KG tasks, namely knowledge graph completion and entity classification. The details of these two loss functions are discussed next.
Knowledge graph completion. This is a classical task in the knowledge graph representation learning community. Specifically, two subtasks are included in knowledge graph completion: entity prediction and link prediction. Entity prediction aims to infer the missing head/tail entity in the test set when one of them is absent, while link prediction focuses on completing a triple when the relation is missing. In this study, we borrow the translational scoring function from TransE, in which the embedding $\textbf {t}$ of the tail entity should be close to the head entity's embedding $\textbf {h}$ plus the relation vector $\textbf {r}$ if $(h, r, t)$ holds, which indicates $d(h+r,t)= ||\textbf {h}+\textbf {r}- \textbf {t}||$. Specifically, we train our model using a hinge loss function, given formally as
where $\gamma >0$ is a margin hyper-parameter, $[x ]_{+}$ denotes the positive part of $x$, $T=T_{R} \cup T_{A}$ is the set of valid triples, and $T^{\prime }$ is the set of corrupted triples, which can be formulated as:
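A sketch of this hinge loss with corrupted triples is shown below; building negatives by replacing the head or the tail with a random entity is the usual TransE-style construction and is assumed here rather than taken from the paper.

```python
import random
import torch

def corrupt(triple, entities):
    """Build a corrupted triple by replacing the head or the tail with a random entity."""
    h, r, t = triple
    if random.random() < 0.5:
        return (random.choice(entities), r, t)
    return (h, r, random.choice(entities))

def hinge_loss(pos_emb, neg_emb, gamma=1.0):
    """pos_emb, neg_emb: tuples of (h, r, t) embedding tensors of shape (batch, k)."""
    d_pos = torch.norm(pos_emb[0] + pos_emb[1] - pos_emb[2], p=1, dim=1)
    d_neg = torch.norm(neg_emb[0] + neg_emb[1] - neg_emb[2], p=1, dim=1)
    return torch.clamp(gamma + d_pos - d_neg, min=0).mean()

batch, k = 4, 8
pos = tuple(torch.randn(batch, k, requires_grad=True) for _ in range(3))
neg = tuple(torch.randn(batch, k) for _ in range(3))
loss = hinge_loss(pos, neg)
loss.backward()                      # gradients flow back to the positive embeddings
print(float(loss))
```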
Entity Classification. For the task of entity classification, we simply use a fully connected layer and a binary cross-entropy (BCE) loss over a sigmoid activation on the output of the last layer. We minimize the binary cross-entropy on all labeled entities, given formally as:
where $E_{D}$ is the set of entities that have labels, $C$ is the dimension of the output features, which is equal to the number of classes, $y_{ej}$ is the label indicator of entity $e$ for the $j$-th class, and $\sigma (x)$ is the sigmoid function $\sigma (x) = \frac{1}{1+e^{-x}}$.
We optimize these two loss functions using mini-batch stochastic gradient descent (SGD) over the possible $\textbf {h}$, $\textbf {r}$, $\textbf {t}$, with the chain rule applied to update all parameters. At each step, we update the parameters as $\textbf {h}^{\tau +1}\leftarrow \textbf {h}^{\tau }-\lambda \nabla _{\textbf {h}}\mathcal {L}$, where $\tau $ labels the iteration step and $\lambda $ is the learning rate.
Experiments ::: Data sets
In this study, we evaluate our model on three real KGs, including two typical large-scale knowledge graphs, Freebase BIBREF0 and DBpedia BIBREF1, and a self-constructed game knowledge graph. First, we adopt a dataset extracted from Freebase, i.e., FB24K, which is used by BIBREF26. Then, we collect extra entities and relations from DBpedia that have at least 100 mentions BIBREF7 and that can be linked to the entities in FB24K by sameAs triples. From these we build a dataset named DBP24K. In addition, we build a game dataset from our game knowledge graph, named Game30K. The statistics of the datasets are listed in Table TABREF24.
Experiments ::: Experiments Setting
In the evaluation, we compare our method with three types of models:
1) Typical Methods. Three typical knowledge graph embedding methods, TransE, TransH and TransR, are selected as baselines. For TransE, the dissimilarity measure is implemented with the L1-norm, and both relations and entities are replaced during negative sampling. For TransR, we directly use the source code released in BIBREF9. For better performance, the replacement of relations in negative sampling is utilized according to the authors' suggestion.
2) Path-based Methods. We compare our method with two typical path-based models, PTransE and ALL-PATHS BIBREF18. PTransE is the first method to model relation paths in the KG embedding task, and ALL-PATHS improves PTransE through a dynamic programming algorithm which can incorporate all relation paths of bounded length.
3) Attribute-incorporated Methods. Several state-of-the-art attribute-incorporated methods, including R-GCN BIBREF24 and KR-EAR BIBREF26, are compared with our method on the three real datasets.
In addition, four variants of KANE, each of which defines its own way of computing the attribute value embedding and of aggregating the embeddings, are used as baselines in the evaluation. We name these four variants KANE (BOW+Concatenation), KANE (BOW+Average), KANE (LSTM+Concatenation) and KANE (LSTM+Average). Our method is learned with mini-batch SGD. As for hyper-parameters, we select the batch size among {16, 32, 64, 128} and the learning rate $\lambda $ for SGD among {0.1, 0.01, 0.001}. For a fair comparison, we also set the vector dimension of all entities and relations to the same $k \in ${128, 258, 512, 1024}, use the same dissimilarity measure ($l_{1}$ or $l_{2}$ distance) in the loss function, and the same number of negative examples $n$ among {1, 10, 20, 40}. The training time on all datasets is limited to at most 400 epochs. The best models are selected by a grid search and early stopping on the validation sets.
Experiments ::: Entity Classification ::: Evaluation Protocol.
In entity classification, the aim is to predict the type of an entity. For all baseline models, we first obtain the entity embeddings on the different datasets using the default parameter settings of their original papers or implementations. Then, Logistic Regression is used as the classifier, which takes the entity's embedding as its feature. In the evaluation, we randomly select 10% of the training set as the validation set and use accuracy as the evaluation metric.
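This evaluation protocol can be summarized by the scikit-learn sketch below; the array names, class count and random embeddings are placeholders standing in for embeddings learned by KANE or a baseline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# entity_embeddings: (num_entities, k) learned by some model; labels: entity types.
rng = np.random.default_rng(0)
entity_embeddings = rng.normal(size=(1000, 128))
labels = rng.integers(0, 5, size=1000)

X_train, X_val, y_train, y_val = train_test_split(
    entity_embeddings, labels, test_size=0.1, random_state=0)  # 10% held out for validation
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("validation accuracy:", accuracy_score(y_val, clf.predict(X_val)))
```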
Experiments ::: Entity Classification ::: Test Performance.
The experimental results of entity classification on the test sets of all the datasets are shown in Table TABREF25. The results clearly demonstrate that our proposed method significantly outperforms state-of-the-art results on accuracy for all three datasets. For a more in-depth performance analysis, we note: (1) Among all baselines, the path-based methods and attribute-incorporated methods outperform the three typical methods. This indicates that incorporating extra information can improve knowledge graph embedding performance; (2) The four variants of KANE always outperform the baseline methods. The main reasons why KANE works well are two-fold: 1) KANE captures the high-order structural information of KGs in an efficient, explicit manner and passes this information to its neighbors; 2) KANE leverages the rich information encoded in attribute triples, and this rich semantic information can further improve the performance of knowledge graph embedding; (3) The variant of KANE that uses the LSTM encoder and the concatenation aggregator outperforms the other variants. The main reason is that the LSTM encoder can distinguish word order and the concatenation aggregator combines all embeddings of the multi-head attention in a higher-level feature space, which provides sufficient expressive power.
Experiments ::: Entity Classification ::: Efficiency Evaluation.
Figure FIGREF30 shows the test accuracy with increasing epochs on DBP24K and Game30K. We can see that the test accuracy first increases rapidly in the first ten epochs, but reaches a stable stage when the number of epochs is larger than 40. Figure FIGREF31 shows the test accuracy with different embedding sizes and training data proportions. We note that too small an embedding size or training data proportion cannot generate sufficient global information. In order to further analyze the embeddings learned by our method, we use the t-SNE tool BIBREF27 to visualize the learned embeddings. Figure FIGREF32 shows the visualization of the 256-dimensional entity embeddings on Game30K learned by KANE, R-GCN, PTransE and TransE. We observe that our method learns more discriminative entity embeddings than the other methods.
Experiments ::: Knowledge Graph Completion
The purpose of knowledge graph completion is to complete a triple $(h, r, t)$ when one of $h, r, t$ is missing, a task widely used in the literature BIBREF7. Two measures are used as our evaluation metrics: (1) the mean rank of correct entities or relations (Mean Rank); (2) the proportion of correct entities or relations ranked in the top 1 (Hits@1, for relations) or top 10 (Hits@10, for entities). Following the setting in BIBREF7, we also adopt the two evaluation settings named "raw" and "filter" in order to avoid misleading behavior.
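These metrics can be computed per test triple as in the sketch below; the scoring values are placeholders, and the "filter" setting simply masks other known-correct candidates before ranking.

```python
import numpy as np

def rank_of_correct(scores, correct_idx, known_correct=None, filtered=False):
    """scores: distance of every candidate (lower is better). Returns the 1-based rank."""
    scores = scores.copy()
    if filtered and known_correct is not None:
        for idx in known_correct:                 # mask other valid answers ("filter")
            if idx != correct_idx:
                scores[idx] = np.inf
    return int(np.sum(scores < scores[correct_idx])) + 1

ranks = [rank_of_correct(np.random.rand(100), correct_idx=i % 100) for i in range(500)]
mean_rank = float(np.mean(ranks))
hits_at_10 = float(np.mean([r <= 10 for r in ranks]))
print(f"Mean Rank: {mean_rank:.1f}, Hits@10: {hits_at_10:.3f}")
```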
The results of entity and relation prediction on FB24K are shown in Table TABREF33. These results indicate that KANE still outperforms the other baselines significantly and consistently. This also verifies the necessity of modeling the high-order structural and attribute information of KGs in knowledge graph embedding models.
Conclusion and Future Work
Many recent works have demonstrated the benefits of knowledge graph embedding in knowledge graph completion, such as relation extraction. However, we argue that knowledge graph embedding methods still have room for improvement. First, TransE and most of its extensions only take direct relations between entities into consideration. Second, most existing knowledge graph embedding methods only leverage the relation triples of KGs while ignoring a large number of attribute triples. In order to overcome these limitations, inspired by the recent developments of graph convolutional networks, we propose a new knowledge graph embedding method, named KANE. The key idea of KANE is to aggregate all attribute triples with bias and to perform embedding propagation based on relation triples when calculating the representation of a given entity. Empirical results on three datasets show that KANE significantly outperforms seven state-of-the-art methods. | FB24K, DBP24K, Game30K
cf0085c1d7bd9bc9932424e4aba4e6812d27f727 | cf0085c1d7bd9bc9932424e4aba4e6812d27f727_1 | Q: What three datasets are used to measure performance?
Text: [same KANE paper text as above] | Freebase BIBREF0, DBpedia BIBREF1 and a self-construction game knowledge graph
586b7470be91efe246c3507b05e30651ea6b9832 | 586b7470be91efe246c3507b05e30651ea6b9832_0 | Q: How does KANE capture both high-order structural and attribute information of KGs in an efficient, explicit and unified manner?
Text: Introduction
In the past decade, many large-scale Knowledge Graphs (KGs), such as Freebase BIBREF0, DBpedia BIBREF1 and YAGO BIBREF2 have been built to represent human complex knowledge about the real-world in the machine-readable format. The facts in KGs are usually encoded in the form of triples $(\textit {head entity}, relation, \textit {tail entity})$ (denoted $(h, r, t)$ in this study) through the Resource Description Framework, e.g.,$(\textit {Donald Trump}, Born In, \textit {New York City})$. Figure FIGREF2 shows the subgraph of knowledge graph about the family of Donald Trump. In many KGs, we can observe that some relations indicate attributes of entities, such as the $\textit {Born}$ and $\textit {Abstract}$ in Figure FIGREF2, and others indicates the relations between entities (the head entity and tail entity are real world entity). Hence, the relationship in KG can be divided into relations and attributes, and correspondingly two types of triples, namely relation triples and attribute triples BIBREF3. A relation triples in KGs represents relationship between entities, e.g.,$(\textit {Donald Trump},Father of, \textit {Ivanka Trump})$, while attribute triples denote a literal attribute value of an entity, e.g.,$(\textit {Donald Trump},Born, \textit {"June 14, 1946"})$.
Knowledge graphs have became important basis for many artificial intelligence applications, such as recommendation system BIBREF4, question answering BIBREF5 and information retrieval BIBREF6, which is attracting growing interests in both academia and industry communities. A common approach to apply KGs in these artificial intelligence applications is through embedding, which provide a simple method to encode both entities and relations into a continuous low-dimensional embedding spaces. Hence, learning distributional representation of knowledge graph has attracted many research attentions in recent years. TransE BIBREF7 is a seminal work in representation learning low-dimensional vectors for both entities and relations. The basic idea behind TransE is that the embedding $\textbf {t}$ of tail entity should be close to the head entity's embedding $\textbf {r}$ plus the relation vector $\textbf {t}$ if $(h, r, t)$ holds, which indicates $\textbf {h}+\textbf {r}\approx \textbf {t}$. This model provide a flexible way to improve the ability in completing the KGs, such as predicating the missing items in knowledge graph. Since then, several methods like TransH BIBREF8 and TransR BIBREF9, which represent the relational translation in other effective forms, have been proposed. Recent attempts focused on either incorporating extra information beyond KG triples BIBREF10, BIBREF11, BIBREF12, BIBREF13, or designing more complicated strategies BIBREF14, BIBREF15, BIBREF16.
While these methods have achieved promising results in KG completion and link predication, existing knowledge graph embedding methods still have room for improvement. First, TransE and its most extensions only take direct relations between entities into consideration. We argue that the high-order structural relationship between entities also contain rich semantic relationships and incorporating these information can improve model performance. For example the fact $\textit {Donald Trump}\stackrel{Father of}{\longrightarrow }\textit {Ivanka Trump}\stackrel{Spouse}{\longrightarrow }\textit {Jared Kushner} $ indicates the relationship between entity Donald Trump and entity Jared Kushner. Several path-based methods have attempted to take multiple-step relation paths into consideration for learning high-order structural information of KGs BIBREF17, BIBREF18. But note that huge number of paths posed a critical complexity challenge on these methods. In order to enable efficient path modeling, these methods have to make approximations by sampling or applying path selection algorithm. We argue that making approximations has a large impact on the final performance.
Second, to the best of our knowledge, most existing knowledge graph embedding methods only leverage the relation triples of KGs while ignoring a large number of attribute triples. Therefore, these methods easily suffer from the sparseness and incompleteness of knowledge graphs. Even worse, structural information alone usually cannot distinguish the different meanings of relations and entities in different triples. We believe that the rich information encoded in attribute triples can help explore rich semantic information and further improve the performance of knowledge graph embedding. For example, we can learn the date of birth and the abstract of Donald Trump from the values of Born and Abstract in Figure FIGREF2. There are a huge number of attribute triples in real KGs; for example, the statistics in BIBREF3 show that attribute triples are three times as numerous as relation triples in English DBpedia (2016-04). A few recent attempts try to incorporate attribute triples BIBREF11, BIBREF12. However, there are two limitations in these methods. One is that only a part of the attribute triples is used, e.g., only the entity description is used in BIBREF12. The other is that some attempts jointly model the attribute triples and relation triples in one unified optimization problem, so the losses of the two kinds of triples have to be carefully balanced during optimization. For example, BIBREF3 use hyper-parameters to weight the losses of the two kinds of triples in their models.
Considering the limitations of existing knowledge graph embedding methods, we believe it is of critical importance to develop a model that can capture both the high-order structural and the attribute information of KGs in an efficient, explicit and unified manner. Towards this end, inspired by the recent developments of graph convolutional networks (GCN) BIBREF19, which have the potential of achieving this goal but have not been explored much for knowledge graph embedding, we propose Knowledge Graph Attention Networks for Enhancing Knowledge Graph Embedding (KANE). The key idea of KANE is to aggregate all attribute triples with bias and to perform embedding propagation based on relation triples when calculating the representation of a given entity. Specifically, two careful designs are incorporated in KANE to address the above two challenges: 1) recursive embedding propagation based on relation triples, which updates an entity's embedding; by performing such recursive embedding propagation, the high-order structural information of KGs can be captured in linear time complexity; and 2) multi-head attention-based aggregation, where the weight of each attribute triple is learned through a neural attention mechanism BIBREF20.
In experiments, we evaluate our model on two KG tasks, knowledge graph completion and entity classification. Experimental results on three datasets show that our method significantly outperforms state-of-the-art methods.
The main contributions of this study are as follows:
1) We highlight the importance of explicitly modeling the high-order structural and attribute information of KGs to provide better knowledge graph embeddings.
2) We propose a new method, KANE, which can capture both the high-order structural and the attribute information of KGs in an efficient, explicit and unified manner under the graph convolutional network framework.
3) We conduct experiments on three datasets, demonstrating the effectiveness of KANE and its interpretability in understanding the importance of high-order relations.
Related Work
In recent years, there have been many efforts on knowledge graph embedding, aiming to encode entities and relations into continuous low-dimensional embedding spaces. Knowledge graph embedding provides a simple and effective way to apply KGs in various artificial intelligence applications, and has therefore attracted much research attention. The general methodology is to define a score function $f_r(h,t)$ for triples, which implies some type of transformation on $\textbf {h}$ and $\textbf {t}$, and to learn the representations of entities and relations by minimizing the corresponding loss. TransE BIBREF7 is a seminal work in knowledge graph embedding, which assumes that the embedding $\textbf {t}$ of the tail entity should be close to the head entity's embedding $\textbf {h}$ plus the relation vector $\textbf {r}$ when $(h, r, t)$ holds, as mentioned in the section “Introduction". Hence, TransE defines the following loss function:
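(In its standard form, stated here as a hedged reconstruction consistent with the scoring function $d(h+r,t)= ||\textbf {h}+\textbf {r}- \textbf {t}||$ used later in this paper, the margin-based ranking loss is $\mathcal {L}=\sum _{(h,r,t)\in T_{R}}\sum _{(h^{\prime },r,t^{\prime })\in T^{\prime }}\left[\gamma + d(\textbf {h}+\textbf {r},\textbf {t}) - d(\textbf {h}^{\prime }+\textbf {r},\textbf {t}^{\prime })\right]_{+}$, where $\gamma $ is a margin hyper-parameter and $T^{\prime }$ is a set of corrupted triples.)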
TransE, which regards a relation as a translation between the head and tail entities, is inspired by word2vec BIBREF21, where relationships between words often correspond to translations in the latent feature space. This model achieves a good trade-off between computational efficiency and accuracy in KGs with thousands of relations, but it has flaws in dealing with one-to-many, many-to-one and many-to-many relations.
In order to address this issue, TransH BIBREF8 models a relation as a relation-specific hyperplane together with a translation on it, allowing entities to have distinct representations under different relations. TransR BIBREF9 models entities and relations in separate spaces, i.e., an entity space and relation spaces, and performs translation from the entity space to the relation spaces. TransD BIBREF22 captures the diversity of relations and entities simultaneously by defining dynamic mapping matrices. Recent attempts can be divided into two categories: (i) those which try to incorporate additional information to further improve the performance of knowledge graph embedding, e.g., entity types or concepts BIBREF13, relation paths BIBREF17, textual descriptions BIBREF11, BIBREF12 and logical rules BIBREF23; (ii) those which try to design more complicated strategies, e.g., deep neural network models BIBREF24.
Besides TransE and its extensions, some efforts measure the plausibility of triples by matching the latent semantics of entities and relations. The basic idea behind these models is that plausible triples of a KG are assigned low energies. For example, the Distant Model BIBREF25 defines two different projections for the head and tail entities of a specific relation, i.e., $\textbf {M}_{r,1}$ and $\textbf {M}_{r,2}$, so that the vectors of the head and tail entities are transformed by these two projections. The loss function is $f_r(h,t)=||\textbf {M}_{r,1}\textbf {h}-\textbf {M}_{r,2}\textbf {t}||_{1}$.
Our KANE is conceptually advantageous over existing methods in that: 1) it directly factors high-order relations into the predictive model in linear time, which avoids the labor-intensive process of materializing paths and is thus more efficient and convenient to use; 2) it directly encodes all attribute triples when learning entity representations, which captures rich semantic information and further improves the performance of knowledge graph embedding; and 3) it factors high-order relations and attribute information into the predictive model in an efficient, explicit and unified manner, so that all related parameters are tailored for optimizing the embedding objective.
Problem Formulation
In this study, we consider two kinds of triples existing in KGs: relation triples and attribute triples. Relation triples denote relations between entities, while attribute triples describe attributes of entities. Since both relation and attribute triples convey important information about an entity, we take both of them into consideration in the task of learning entity representations. We let $I $ denote the set of IRIs (Internationalized Resource Identifiers), $B $ the set of blank nodes, and $L $ the set of literals (denoted by quoted strings). Relation triples and attribute triples can be formalized as follows:
Definition 1. Relation and Attribute Triples: A set of relation triples $ T_{R} $ can be represented by $ T_{R} \subset E \times R \times E $, where $E \subset I \cup B $ is the set of entities and $R \subset I$ is the set of relations between entities. Similarly, $ T_{A} \subset E \times R \times A $ is the set of attribute triples, where $ A \subset I \cup B \cup L $ is the set of attribute values.
Definition 2. Knowledge Graph: A KG consists of a combination of relation triples in the form $ (h, r, t)\in T_{R} $ and attribute triples in the form $ (h, r, a)\in T_{A} $. Formally, we represent a KG as $G=(E,R,A,T_{R},T_{A})$, where $E=\lbrace h,t|(h,r,t)\in T_{R} \cup (h,r,a)\in T_{A}\rbrace $ is the set of entities, $R =\lbrace r|(h,r,t)\in T_{R} \cup (h,r,a)\in T_{A}\rbrace $ is the set of relations, and $A=\lbrace a|(h,r,a)\in T_{A}\rbrace $ is the set of attribute values.
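As an illustration only (not the authors' implementation), a KG with both kinds of triples can be represented with a minimal Python container such as the following; the example triples are taken from Figure FIGREF2 and the class name is our own.

```python
from collections import defaultdict

class KnowledgeGraph:
    """Minimal container for a KG G = (E, R, A, T_R, T_A)."""
    def __init__(self):
        self.relation_triples = set()    # T_R: (head entity, relation, tail entity)
        self.attribute_triples = set()   # T_A: (head entity, relation, attribute value)
        self.neighborhood = defaultdict(set)  # N_h: (relation, tail/value) pairs per entity

    def add_relation_triple(self, h, r, t):
        self.relation_triples.add((h, r, t))
        self.neighborhood[h].add((r, t))

    def add_attribute_triple(self, h, r, a):
        self.attribute_triples.add((h, r, a))
        self.neighborhood[h].add((r, a))

kg = KnowledgeGraph()
kg.add_relation_triple("Donald_Trump", "father_of", "Ivanka_Trump")
kg.add_attribute_triple("Donald_Trump", "born", "June 14, 1946")
print(kg.neighborhood["Donald_Trump"])
```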
The purpose of this study is to learn an embedding-based model that captures both the high-order structural and the attribute information of KGs, and that assigns a continuous representation to each element of a triple in the form $ (\textbf {h}, \textbf {r}, \textbf {t})$ or $ (\textbf {h}, \textbf {r}, \textbf {a})$, where the boldfaced $\textbf {h}\in \mathbb {R}^{k}$, $\textbf {r}\in \mathbb {R}^{k}$, $\textbf {t}\in \mathbb {R}^{k}$ and $\textbf {a}\in \mathbb {R}^{k}$ denote the embedding vectors of the head entity $h$, relation $r$, tail entity $t$ and attribute $a$, respectively.
Next, we detail our proposed model which models both high-order structural and attribute information of KGs in an efficient, explicit and unified manner under the graph convolutional networks framework.
Proposed Model
In this section, we present the proposed model in detail. We first introduce the overall framework of KANE, then discuss the input embeddings of entities, relations and values in KGs, the design of the embedding propagation layers based on the graph attention network, and the loss functions for the link prediction and entity classification tasks, respectively.
Proposed Model ::: Overall Architecture
The process of KANE is illustrated in Figure FIGREF2. We introduce the architecture of KANE from left to right. As shown in Figure FIGREF2, all triples of the knowledge graph are taken as input. The task of the attribute embedding layer is to embed every value in the attribute triples into a continuous vector space while preserving its semantic information. To capture the high-order structural information of KGs, we use an attention-based embedding propagation method. This method can recursively propagate the embeddings of entities from an entity's neighbors and aggregate the neighbors with different weights. The final embeddings of entities, relations and values are fed into two different deep neural networks for two different tasks, link prediction and entity classification.
Proposed Model ::: Attribute Embedding Layer
The value in an attribute triple is usually a sentence or a single word. To obtain the representation of a value from its sentence or word, we need to encode the variable-length sentence into a fixed-length vector. In this study, we adopt two different encoders to model the attribute value.
Bag-of-Words Encoder. The representation of an attribute value can be generated by summing all word embeddings of the value. We denote the attribute value $a$ as a word sequence $a = w_{1},...,w_{n}$, where $w_{i}$ is the word at position $i$. The embedding of $\textbf {a}$ can then be defined as $\textbf {a} = \sum _{i=1}^{n}\textbf {w}_{i}$,
where $\textbf {w}_{i}\in \mathbb {R}^{k}$ is the word embedding of $w_{i}$.
The Bag-of-Words encoder is a simple and intuitive method that can capture the relative importance of words. However, it suffers from the fact that two strings containing the same words in a different order will have the same representation.
LSTM Encoder. In order to overcome this limitation of the Bag-of-Words encoder, we consider using an LSTM network to encode the sequence of words in an attribute value into a single vector. The final hidden state of the LSTM network is selected as the representation of the attribute value, i.e., $\textbf {a} = f_{lstm}(w_{1},...,w_{n})$,
where $f_{lstm}$ is the LSTM network.
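As a hedged sketch of these two encoders (not the authors' code), the following PyTorch module implements the bag-of-words summation and the LSTM final-hidden-state variant; the vocabulary size, dimension k and tokenization are placeholder assumptions.

```python
import torch
import torch.nn as nn

class AttributeValueEncoder(nn.Module):
    """Encodes a tokenized attribute value into a k-dimensional vector."""
    def __init__(self, vocab_size=10000, k=128, mode="lstm"):
        super().__init__()
        self.mode = mode
        self.word_emb = nn.Embedding(vocab_size, k)
        self.lstm = nn.LSTM(input_size=k, hidden_size=k, batch_first=True)

    def forward(self, token_ids):            # token_ids: (batch, seq_len)
        w = self.word_emb(token_ids)          # (batch, seq_len, k)
        if self.mode == "bow":
            return w.sum(dim=1)               # bag-of-words: sum of word embeddings
        _, (h_n, _) = self.lstm(w)            # final hidden state of the LSTM
        return h_n[-1]                        # (batch, k)

encoder = AttributeValueEncoder(mode="bow")
dummy = torch.randint(0, 10000, (2, 6))       # two toy values of six tokens each
print(encoder(dummy).shape)                   # torch.Size([2, 128])
```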
Proposed Model ::: Embedding Propagation Layer
Next, we describe the details of the recursive embedding propagation method, which builds upon the architecture of graph convolutional networks. Moreover, by exploiting the idea of graph attention networks, our method learns to assign varying levels of importance to the entities in every entity's neighborhood and can generate attentive weights for cascaded embedding propagation. In this study, the embedding propagation layer consists of two main components: attentive embedding propagation and embedding aggregation. We start by describing the attentive embedding propagation.
Attentive Embedding Propagation: Consider a KG $G$. The input to our layer is the set of entity, relation and attribute value embeddings. We use $\textbf {h}\in \mathbb {R}^{k}$ to denote the embedding of entity $h$. The neighborhood of entity $h$ can be described by $\mathcal {N}_{h} = \lbrace t,a|(h,r,t)\in T_{R} \cup (h,r,a)\in T_{A}\rbrace $. The purpose of attentive embedding propagation is to encode $\mathcal {N}_{h}$ and output a vector $\vec{\textbf {h}}$ as the new embedding of entity $h$.
In order to obtain sufficient expressive power, a learnable linear transformation $\textbf {W}\in \mathbb {R}^{k^{^{\prime }} \times k}$ is adopted to transform the input embeddings into a higher-level feature space. In this study, we take a triple $(h,r,t)$ as an example, and the output vector $\vec{\textbf {h}}$ can be formulated as follows:
where $\pi (h,r,t)$ is the attention coefficient, which indicates the importance of entity $t$ to entity $h$.
In this study, the attention coefficients also control how much information is propagated from the neighborhood through the relation. To make the attention coefficients easily comparable across different entities, the attention coefficient $\pi (h,r,t)$ is computed using a softmax function over all the triples connected with $h$. The softmax function can be formulated as follows:
Hereafter, we implement the attention coefficients $\pi (h,r,t)$ through a single-layer feedforward neural network, which is formulated as follows:
where LeakyReLU is selected as the activation function.
As shown in Equation DISPLAY_FORM13, the attention coefficient score depends on the distance between the head entity's embedding $\textbf {h}$ plus the relation vector $\textbf {r}$ and the tail entity's embedding $\textbf {t}$, which follows the idea behind TransE that the embedding $\textbf {t}$ of the tail entity should be close to the head entity's embedding $\textbf {h}$ plus the relation vector $\textbf {r}$ if $(h, r, t)$ holds.
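Since the exact equations are given only by reference, the following PyTorch sketch shows one plausible reading of the attentive propagation step under our stated assumptions: a TransE-style translation h + r - t scored by a single-layer network with LeakyReLU, a softmax over the neighborhood, and a weighted sum of transformed neighbor embeddings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def attentive_propagation(h, neighbors, W, att):
    """
    h:         (k,) embedding of the head entity
    neighbors: list of (r, t) embedding pairs, each of shape (k,)
    W:         (k_out, k) shared linear transformation
    att:       nn.Linear(k_out, 1) scoring layer for the attention coefficients
    """
    scores = []
    for r, t in neighbors:
        # TransE-style translation: importance based on h + r versus t
        scores.append(att(F.leaky_relu(W @ (h + r - t))))
    pi = F.softmax(torch.stack(scores).squeeze(-1), dim=0)   # normalized coefficients
    messages = torch.stack([W @ t for _, t in neighbors])    # transformed neighbors
    return (pi.unsqueeze(-1) * messages).sum(dim=0)          # new embedding of h

k, k_out = 8, 8
W = torch.randn(k_out, k)
att = nn.Linear(k_out, 1)
h = torch.randn(k)
neighbors = [(torch.randn(k), torch.randn(k)) for _ in range(3)]
print(attentive_propagation(h, neighbors, W, att).shape)     # torch.Size([8])
```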
Embedding Aggregation. To stabilize the learning process of attention, we perform multi-head attention on the final layer. Specifically, we use $m$ attention mechanisms to execute the transformation of Equation DISPLAY_FORM11. An aggregator is then needed to combine all embeddings from the multi-head graph attention layer. In this study, we adopt two types of aggregators:
Concatenation Aggregator concatenates all embeddings of multi-head graph attention, followed by a nonlinear transformation:
where $\mathop {\Big |\Big |}$ represents concatenation, $ \pi (h,r,t)^{i}$ are the normalized attention coefficients computed by the $i$-th attentive embedding propagation, and $\textbf {W}^{i}$ denotes the linear transformation of the input embedding.
Averaging Aggregator sums all embeddings of the multi-head graph attention, and the final output embedding is calculated by averaging:
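The two aggregators can be sketched as follows (illustrative only); head_outputs stands for the list of per-head propagation results, and the nonlinearity in the concatenation aggregator is assumed to be tanh.

```python
import torch

def concat_aggregator(head_outputs):
    # concatenate the m head outputs, then apply a nonlinear transformation
    return torch.tanh(torch.cat(head_outputs, dim=-1))

def average_aggregator(head_outputs):
    # element-wise average of the m head outputs
    return torch.stack(head_outputs, dim=0).mean(dim=0)

heads = [torch.randn(8) for _ in range(4)]    # m = 4 attention heads
print(concat_aggregator(heads).shape)         # torch.Size([32])
print(average_aggregator(heads).shape)        # torch.Size([8])
```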
In order to encode the high-order connectivity information in KGs, we use multiple embedding propagation layers to gather the deeper information propagated from the neighbors. More formally, the embedding of entity $h$ in the $l$-th layer can be defined as follows:
After performing $L$ embedding propagation layers, we obtain the final embeddings of entities, relations and attribute values, which encode both the high-order structural and the attribute information of KGs. Next, we discuss the loss functions of KANE for the two different tasks and introduce the learning and optimization details.
Proposed Model ::: Output Layer and Training Details
Here, we introduce the learning and optimization details of our method. Two different loss functions are carefully designed for the two different KG tasks, knowledge graph completion and entity classification. The details of these two loss functions are discussed next.
Knowledge graph completion. This is a classical task in the knowledge graph representation learning community. Specifically, two subtasks are included in knowledge graph completion: entity prediction and link prediction. Entity prediction aims to infer the missing head/tail entity of a test triple when one of them is absent, while link prediction focuses on completing a triple when the relation is missing. In this study, we borrow the translational scoring function from TransE, in which the embedding $\textbf {t}$ of the tail entity should be close to the head entity's embedding $\textbf {h}$ plus the relation vector $\textbf {r}$ if $(h, r, t)$ holds, which indicates $d(h+r,t)= ||\textbf {h}+\textbf {r}- \textbf {t}||$. Specifically, we train our model using a hinge loss function, given formally as
where $\gamma >0$ is a margin hyper-parameter, $[x ]_{+}$ denotes the positive part of $x$, $T=T_{R} \cup T_{A}$ is the set of valid triples, and $T^{\prime }$ is the set of corrupted triples, which can be formulated as:
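A minimal sketch of this margin-based objective with tail-corrupted negatives, consistent with $d(h+r,t)= ||\textbf {h}+\textbf {r}- \textbf {t}||$ (illustrative, not the released code):

```python
import torch
import torch.nn.functional as F

def transe_score(h, r, t):
    # d(h + r, t) = ||h + r - t||, smaller means more plausible
    return torch.norm(h + r - t, p=2, dim=-1)

def hinge_loss(pos, neg, entity_emb, relation_emb, gamma=1.0):
    """pos/neg: (batch, 3) index tensors of (head, relation, tail)."""
    d_pos = transe_score(entity_emb[pos[:, 0]], relation_emb[pos[:, 1]], entity_emb[pos[:, 2]])
    d_neg = transe_score(entity_emb[neg[:, 0]], relation_emb[neg[:, 1]], entity_emb[neg[:, 2]])
    return F.relu(gamma + d_pos - d_neg).mean()   # [gamma + d(pos) - d(neg)]_+

entity_emb = torch.randn(100, 64, requires_grad=True)
relation_emb = torch.randn(20, 64, requires_grad=True)
pos = torch.randint(0, 20, (32, 3))
pos[:, 0] = torch.randint(0, 100, (32,))
pos[:, 2] = torch.randint(0, 100, (32,))
neg = pos.clone()
neg[:, 2] = torch.randint(0, 100, (32,))      # corrupt the tail entity
print(hinge_loss(pos, neg, entity_emb, relation_emb))
```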
Entity Classification. For the task of entity classification, we simply use a fully connected layer and a binary cross-entropy (BCE) loss over a sigmoid activation on the output of the last layer. We minimize the binary cross-entropy over all labeled entities, given formally as:
where $E_{D}$ is the set of entities that have labels, $C$ is the dimension of the output features, which is equal to the number of classes, $y_{ej}$ is the label indicator of entity $e$ for the $j$-th class, and $\sigma (x)$ is the sigmoid function $\sigma (x) = \frac{1}{1+e^{-x}}$.
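The classification head can be sketched as follows (illustrative; dimensions are placeholders, and BCEWithLogitsLoss is used as a numerically stable combination of the sigmoid and BCE described above):

```python
import torch
import torch.nn as nn

k, num_classes = 128, 10
classifier = nn.Linear(k, num_classes)                   # fully connected output layer
criterion = nn.BCEWithLogitsLoss()                       # sigmoid + binary cross-entropy

entity_embeddings = torch.randn(16, k)                   # 16 labeled entities
labels = torch.randint(0, 2, (16, num_classes)).float()  # multi-label indicators y_ej
loss = criterion(classifier(entity_embeddings), labels)
print(loss.item())
```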
We optimize these two loss functions using mini-batch stochastic gradient descent (SGD) over the possible $\textbf {h}$, $\textbf {r}$, $\textbf {t}$, with the chain rule applied to update all parameters. At each step, we update the parameters as $\textbf {h}^{\tau +1}\leftarrow \textbf {h}^{\tau }-\lambda \nabla _{\textbf {h}}\mathcal {L}$, where $\tau $ labels the iteration step and $\lambda $ is the learning rate.
Experiments ::: Data sets
In this study, we evaluate our model on three real KGs, including two typical large-scale knowledge graphs, Freebase BIBREF0 and DBpedia BIBREF1, and a self-constructed game knowledge graph. First, we adopt a dataset extracted from Freebase, i.e., FB24K, which was used by BIBREF26. Then, we collect extra entities and relations from DBpedia that have at least 100 mentions BIBREF7 and that can be linked to entities in FB24K by sameAs triples. Finally, we build a dataset named DBP24K. In addition, we build a game dataset from our game knowledge graph, named Game30K. The statistics of the datasets are listed in Table TABREF24.
Experiments ::: Experiments Setting
In evaluation, we compare our method with three types of models:
1) Typical Methods. Three typical knowledge graph embedding methods, TransE, TransR and TransH, are selected as baselines. For TransE, the dissimilarity measure is implemented with the L1-norm, and both relations and entities are replaced during negative sampling. For TransR, we directly use the source code released in BIBREF9. For better performance, relation replacement in negative sampling is used according to the authors' suggestion.
2) Path-based Methods. We compare our method with two typical path-based models, PTransE and ALL-PATHS BIBREF18. PTransE is the first method to model relation paths in the KG embedding task, and ALL-PATHS improves PTransE through a dynamic programming algorithm which can incorporate all relation paths of bounded length.
3) Attribute-incorporated Methods. Several state-of-the-art attribute-incorporated methods, including R-GCN BIBREF24 and KR-EAR BIBREF26, are compared with our method on the three real datasets.
In addition, four variants of KANE, each of which defines its specific way of computing the attribute value embedding and performing embedding aggregation, are used as baselines in the evaluation. We name the four variants KANE (BOW+Concatenation), KANE (BOW+Average), KANE (LSTM+Concatenation) and KANE (LSTM+Average). Our method is learned with mini-batch SGD. As for hyper-parameters, we select the batch size among {16, 32, 64, 128} and the learning rate $\lambda $ for SGD among {0.1, 0.01, 0.001}. For a fair comparison, we also set the vector dimensions of all entities and relations to the same $k \in ${128, 258, 512, 1024}, use the same dissimilarity measure ($l_{1}$ or $l_{2}$ distance) in the loss function, and the same number of negative examples $n$ among {1, 10, 20, 40}. Training on all datasets is limited to at most 400 epochs. The best models are selected by a grid search and early stopping on the validation sets.
Experiments ::: Entity Classification ::: Evaluation Protocol.
In entity classification, the aim is to predict the type of an entity. For all baseline models, we first obtain the entity embeddings on the different datasets using the default parameter settings from their original papers or implementations. Then, logistic regression is used as the classifier, which regards the entity embeddings as its features. In the evaluation, we randomly select 10% of the training set as the validation set and use accuracy as the evaluation metric.
Experiments ::: Entity Classification ::: Test Performance.
Experimental results of entity classification on the test sets of all the datasets are shown in Table TABREF25. The results clearly demonstrate that our proposed method significantly outperforms state-of-the-art results on accuracy for all three datasets. For a more in-depth performance analysis, we note: (1) Among all baselines, path-based methods and attribute-incorporated methods outperform the three typical methods. This indicates that incorporating extra information can improve knowledge graph embedding performance; (2) The four variants of KANE always outperform the baseline methods. The main reasons why KANE works well are two-fold: 1) KANE captures the high-order structural information of KGs in an efficient, explicit manner and passes this information to its neighbors; 2) KANE leverages the rich information encoded in attribute triples, and this rich semantic information can further improve the performance of knowledge graph embedding; (3) The variant of KANE that uses the LSTM encoder and the Concatenation aggregator outperforms the other variants. The main reason is that the LSTM encoder can distinguish word order and the concatenation aggregator combines all embeddings of the multi-head attention in a higher-level feature space, which provides sufficient expressive power.
Experiments ::: Entity Classification ::: Efficiency Evaluation.
Figure FIGREF30 shows the test accuracy with increasing epochs on DBP24K and Game30K. We can see that the test accuracy first increases rapidly in the first ten epochs, but reaches a stable stage when the epoch number is larger than 40. Figure FIGREF31 shows the test accuracy with different embedding sizes and training data proportions. We note that too small an embedding size or training data proportion cannot generate sufficient global information. To further analyze the embeddings learned by our method, we use the t-SNE tool BIBREF27 to visualize the learned embeddings. Figure FIGREF32 shows the visualization of the 256-dimensional entity embeddings on Game30K learned by KANE, R-GCN, PTransE and TransE. We observe that our method learns more discriminative entity embeddings than the other methods.
Experiments ::: Knowledge Graph Completion
The purpose of knowledge graph completion is to complete a triple $(h, r, t)$ when one of $h, r, t$ is missing, and it has been used in much of the literature BIBREF7. Two measures are considered as our evaluation metrics: (1) the mean rank of correct entities or relations (Mean Rank); and (2) the proportion of correct entities or relations ranked in the top 1 (Hits@1, for relations) or top 10 (Hits@10, for entities). Following the setting in BIBREF7, we also adopt the two evaluation settings named "raw" and "filter" in order to avoid misleading behavior.
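For clarity, the two metrics can be computed as in the following sketch; in the "filter" setting, candidate corruptions that already appear as valid triples would be removed before ranking.

```python
def rank_of_correct(scored_candidates, correct_id):
    """scored_candidates: list of (candidate_id, score); lower score = more plausible."""
    ordering = sorted(scored_candidates, key=lambda x: x[1])
    return [cid for cid, _ in ordering].index(correct_id) + 1

def mean_rank_and_hits(ranks, k=10):
    mean_rank = sum(ranks) / len(ranks)
    hits_at_k = sum(1 for r in ranks if r <= k) / len(ranks)
    return mean_rank, hits_at_k

ranks = [1, 3, 12, 2, 40, 7]              # toy ranks of the correct entities
print(mean_rank_and_hits(ranks, k=10))    # mean rank ~10.8, Hits@10 ~0.67
```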
The results of entity and relation prediction on FB24K are shown in Table TABREF33. These results indicate that KANE still outperforms the other baselines significantly and consistently. This also verifies the necessity of modeling the high-order structural and attribute information of KGs in knowledge graph embedding models.
Conclusion and Future Work
Many recent works have demonstrated the benefits of knowledge graph embedding in knowledge graph completion, such as relation extraction. However, we argue that knowledge graph embedding methods still have room for improvement. First, TransE and most of its extensions only take direct relations between entities into consideration. Second, most existing knowledge graph embedding methods just leverage the relation triples of KGs while ignoring a large number of attribute triples. In order to overcome these limitations, and inspired by the recent developments of graph convolutional networks, we propose a new knowledge graph embedding method, named KANE. The key idea of KANE is to aggregate all attribute triples with bias and to perform embedding propagation based on relation triples when calculating the representation of a given entity. Empirical results on three datasets show that KANE significantly outperforms seven state-of-the-art methods. | To capture both high-order structural information of KGs, we used an attention-based embedding propagation method.
31b20a4bab09450267dfa42884227103743e3426 | 31b20a4bab09450267dfa42884227103743e3426_0 | Q: What are recent works on knowledge graph embeddings authors mention?
entity types or concepts BIBREF13, relations paths BIBREF17, textual descriptions BIBREF11, BIBREF12, logical rules BIBREF23, deep neural network models BIBREF24
45306b26447ea4b120655d6bb2e3636079d3d6e0 | 45306b26447ea4b120655d6bb2e3636079d3d6e0_0 | Q: Do they report results only on English data?
Text: Introduction
The ubiquity of communication devices has made social media highly accessible. The content on these media reflects a user's day-to-day activities. This includes content created under the influence of alcohol. In popular culture, this has been referred to as `drunk-texting'. In this paper, we introduce automatic `drunk-texting prediction' as a computational task. Given a tweet, the goal is to automatically identify if it was written by a drunk user. We refer to tweets written under the influence of alcohol as `drunk tweets', and the opposite as `sober tweets'.
A key challenge is to obtain an annotated dataset. We use hashtag-based supervision so that the authors of the tweets mention if they were drunk at the time of posting a tweet. We create three datasets by using different strategies that are related to the use of hashtags. We then present SVM-based classifiers that use N-gram and stylistic features such as capitalisation, spelling errors, etc. Through our experiments, we make subtle points related to: (a) the performance of our features, (b) how our approach compares against human ability to detect drunk-texting, (c) most discriminative stylistic features, and (d) an error analysis that points to future work. To the best of our knowledge, this is a first study that shows the feasibility of text-based analysis for drunk-texting prediction.
Motivation
Past studies show the relation between alcohol abuse and unsociable behaviour such as aggression BIBREF0, crime BIBREF1, suicide attempts BIBREF2, drunk driving BIBREF3, and risky sexual behaviour BIBREF4. The authors of a study on attempted suicide state that “those responsible for assessing cases of attempted suicide should be adept at detecting alcohol misuse”. Thus, a drunk-texting prediction system can be used to identify individuals susceptible to these behaviours, or for investigative purposes after an incident.
Drunk-texting may also cause regret. Mail Goggles prompts a user to solve math questions before sending an email on weekend evenings. Some Android applications avoid drunk-texting by blocking outgoing texts at the click of a button. However, to the best of our knowledge, these tools require a user command to begin blocking. An ongoing text-based analysis will be more helpful, especially since it offers a more natural setting by monitoring stream of social media text and not explicitly seeking user input. Thus, automatic drunk-texting prediction will improve systems aimed to avoid regrettable drunk-texting. To the best of our knowledge, ours is the first study that does a quantitative analysis, in terms of prediction of the drunk state by using textual clues.
Several studies have examined linguistic traits associated with emotion expression and mental health issues, suicidal tendencies, criminal status, etc. BIBREF5, BIBREF6. NLP techniques have been used in the past to address social safety and mental health issues BIBREF7.
Definition and Challenges
Drunk-texting prediction is the task of classifying a text as drunk or sober. For example, a tweet `Feeling buzzed. Can't remember how the evening went' must be predicted as `drunk', whereas, `Returned from work late today, the traffic was bad' must be predicted as `sober'. The challenges are:
Dataset Creation
We use hashtag-based supervision to create our datasets, similar to tasks like emotion classification BIBREF8 . The tweets are downloaded using Twitter API (https://dev.twitter.com/). We remove non-Unicode characters, and eliminate tweets that contain hyperlinks and also tweets that are shorter than 6 words in length. Finally, hashtags used to indicate drunk or sober tweets are removed so that they provide labels, but do not act as features. The dataset is available on request. As a result, we create three datasets, each using a different strategy for sober tweets, as follows:
The drunk tweets for Datasets 1 and 2 are the same. Figure FIGREF9 shows a word-cloud for these drunk tweets (with stop words and forms of the word `drunk' removed), created using WordItOut. The size of a word indicates its frequency. In addition to topical words such as `bar', `bottle' and `wine', the word-cloud shows sentiment words such as `love' or `damn', along with profane words.
Heuristics other than these hashtags could have been used for dataset creation. For example, timestamps would have been a good option to account for the time at which a tweet was posted. However, they could not be used because users' local times were not available, since very few users had geolocation enabled.
Feature Design
The complete set of features is shown in Table TABREF7. There are two sets of features: (a) N-gram features, and (b) stylistic features. We use unigrams and bigrams as N-gram features, considering both presence and count.
Table TABREF7 shows the complete set of stylistic features of our prediction system. POS ratios are a set of features that record the proportion of each POS tag in the dataset (for example, the proportion of nouns/adjectives, etc.). The POS tags and named entity mentions are obtained from NLTK BIBREF9. Discourse connectors are identified based on a manually created list. Spelling errors are identified using the Enchant spell checker. The repeated characters feature captures a situation in which a word contains a letter that is repeated three or more times, as in the case of happpy. Since drunk-texting is often associated with emotional expression, we also incorporate a set of sentiment-based features. These features include the count/presence of emoticons and the sentiment ratio. The sentiment ratio is the proportion of positive and negative words in the tweet. To determine positive and negative words, we use the MPQA sentiment lexicon. To identify a more refined set of words that correspond to the two classes, we also estimated 20 topics for the dataset using an LDA model BIBREF10. We then consider the top 10 words per topic, for both classes. This results in 400 LDA-specific unigrams that are then used as features.
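As an illustration only, the sketch below computes a few of these stylistic signals (length, repeated characters, capitalisation, emoticon count, sentiment ratio); the tiny word lists stand in for the MPQA lexicon and the exact ratio definition is an assumption, not the authors' code.

```python
# Minimal sketch of a handful of the stylistic features described above.
# POSITIVE/NEGATIVE are toy stand-ins for the MPQA lexicon; the smoothing in
# sentiment_ratio is an assumption, since the paper does not give the formula.
import re

POSITIVE = {"love", "great", "happy"}
NEGATIVE = {"damn", "bad", "hate"}
EMOTICON = re.compile(r"[:;=][-^']?[()DPp]")

def stylistic_features(tweet):
    tokens = [t.strip(".,!?").lower() for t in tweet.split()]
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    return {
        "length": len(tokens),
        "repeated_chars": int(bool(re.search(r"(\w)\1{2,}", tweet))),
        "capitalised_words": sum(t.isupper() for t in tweet.split()),
        "emoticon_count": len(EMOTICON.findall(tweet)),
        "sentiment_ratio": pos / (neg + 1),
    }

print(stylistic_features("I LOVE this bar soooo much :)"))
```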
Evaluation
Using the two sets of features, we train SVM classifiers BIBREF11 . We show the five-fold cross-validation performance of our features on Datasets 1 and 2, in Section SECREF17 , and on Dataset H in Section SECREF21 . Section SECREF22 presents an error analysis. Accuracy, positive/negative precision and positive/negative recall are shown as A, PP/NP and PR/NR respectively. `Drunk' forms the positive class, while `Sober' forms the negative class.
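For clarity, a small helper in the paper's notation (A, PP/NP, PR/NR) might look like the sketch below; the gold and predicted labels are invented examples.

```python
# Accuracy (A), positive/negative precision (PP/NP) and positive/negative recall
# (PR/NR), with 'drunk' as the positive class; labels below are toy examples.
def report(gold, pred, positive="drunk", negative="sober"):
    def prec_rec(cls):
        tp = sum(g == cls and p == cls for g, p in zip(gold, pred))
        fp = sum(g != cls and p == cls for g, p in zip(gold, pred))
        fn = sum(g == cls and p != cls for g, p in zip(gold, pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        return prec, rec
    acc = sum(g == p for g, p in zip(gold, pred)) / len(gold)
    pp, pr = prec_rec(positive)
    np_, nr = prec_rec(negative)
    return {"A": acc, "PP": pp, "PR": pr, "NP": np_, "NR": nr}

gold = ["drunk", "drunk", "sober", "sober"]
pred = ["drunk", "sober", "sober", "sober"]
print(report(gold, pred))  # A=0.75, PP=1.0, PR=0.5, NP~0.67, NR=1.0
```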
Performance for Datasets 1 and 2
Table TABREF14 shows the performance for five-fold cross-validation for Datasets 1 and 2. In case of Dataset 1, we observe that N-gram features achieve an accuracy of 85.5%. We see that our stylistic features alone exhibit degraded performance, with an accuracy of 75.6%, in the case of Dataset 1. Table TABREF16 shows top stylistic features, when trained on the two datasets. Spelling errors, POS ratios for nouns (POS_NOUN), length and sentiment ratios appear in both lists, in addition to LDA-based unigrams. However, negative recall reduces to a mere 3.2%. This degradation implies that our features capture a subset of drunk tweets and that there are properties of drunk tweets that may be more subtle. When both N-gram and stylistic features are used, there is negligible improvement. The accuracy for Dataset 2 increases from 77.9% to 78.1%. Precision/Recall metrics do not change significantly either. The best accuracy of our classifier is 78.1% for all features, and 75.6% for stylistic features. This shows that text-based clues can indeed be used for drunk-texting prediction.
Performance for Held-out Dataset H
Using held-out dataset H, we evaluate how our system performs in comparison to humans. Three annotators, A1-A3, mark each tweet in Dataset H as drunk or sober. Table TABREF19 shows a moderate agreement between our annotators (for example, it is 0.42 for A1 and A2). Table TABREF20 compares our classifier with humans. Our human annotators perform the task with an average accuracy of 68.8%, while our classifier (with all features) trained on Dataset 2 reaches 64%. The classifier trained on Dataset 2 performs better than the one trained on Dataset 1.
Error Analysis
Some categories of errors that occur are:
Incorrect hashtag supervision: The tweet `Can't believe I lost my bag last night, literally had everything in! Thanks god the bar man found it' was marked with `#Drunk'. However, this tweet is not likely to be a drunk tweet, but rather describes a drunk episode in retrospect. Our classifier predicts it as sober.
Seemingly sober tweets: Human annotators as well as our classifier could not identify whether `Will you take her on a date? But really she does like you' was drunk, although the author of the tweet had marked it so. This example also highlights the difficulty of drunk-texting prediction.
Pragmatic difficulty: The tweet `National dress of Ireland is one's one vomit.. my family is lovely' was correctly identified by our human annotators as a drunk tweet. This tweet contains an element of humour and topic change, but our classifier could not capture it.
Conclusion & Future Work
In this paper, we introduce automatic drunk-texting prediction as the task of predicting a tweet as drunk or sober. First, we justify the need for drunk-texting prediction as means of identifying risky social behavior arising out of alcohol abuse, and the need to build tools that avoid privacy leaks due to drunk-texting. We then highlight the challenges of drunk-texting prediction: one of the challenges is selection of negative examples (sober tweets). Using hashtag-based supervision, we create three datasets annotated with drunk or sober labels. We then present SVM-based classifiers which use two sets of features: N-gram and stylistic features. Our drunk prediction system obtains a best accuracy of 78.1%. We observe that our stylistic features add negligible value to N-gram features. We use our heldout dataset to compare how our system performs against human annotators. While human annotators achieve an accuracy of 68.8%, our system reaches reasonably close and performs with a best accuracy of 64%.
Our analysis of the task and experimental findings make a case for drunk-texting prediction as a useful and feasible NLP application. | Yes |
0c08af6e4feaf801185f2ec97c4da04c8b767ad6 | 0c08af6e4feaf801185f2ec97c4da04c8b767ad6_0 | Q: Do the authors mention any confounds to their study?
Text: Introduction
The ubiquity of communication devices has made social media highly accessible. The content on these media reflects a user's day-to-day activities. This includes content created under the influence of alcohol. In popular culture, this has been referred to as `drunk-texting'. In this paper, we introduce automatic `drunk-texting prediction' as a computational task. Given a tweet, the goal is to automatically identify if it was written by a drunk user. We refer to tweets written under the influence of alcohol as `drunk tweets', and the opposite as `sober tweets'.
A key challenge is to obtain an annotated dataset. We use hashtag-based supervision so that the authors of the tweets mention if they were drunk at the time of posting a tweet. We create three datasets by using different strategies that are related to the use of hashtags. We then present SVM-based classifiers that use N-gram and stylistic features such as capitalisation, spelling errors, etc. Through our experiments, we make subtle points related to: (a) the performance of our features, (b) how our approach compares against human ability to detect drunk-texting, (c) most discriminative stylistic features, and (d) an error analysis that points to future work. To the best of our knowledge, this is a first study that shows the feasibility of text-based analysis for drunk-texting prediction.
Motivation
Past studies show the relation between alcohol abuse and unsociable behaviour such as aggression BIBREF0, crime BIBREF1, suicide attempts BIBREF2, drunk driving BIBREF3, and risky sexual behaviour BIBREF4. The authors of a study on attempted suicide state that “those responsible for assessing cases of attempted suicide should be adept at detecting alcohol misuse”. Thus, a drunk-texting prediction system can be used to identify individuals susceptible to these behaviours, or for investigative purposes after an incident.
Drunk-texting may also cause regret. Mail Goggles prompts a user to solve math questions before sending an email on weekend evenings. Some Android applications avoid drunk-texting by blocking outgoing texts at the click of a button. However, to the best of our knowledge, these tools require a user command to begin blocking. An ongoing text-based analysis will be more helpful, especially since it offers a more natural setting by monitoring stream of social media text and not explicitly seeking user input. Thus, automatic drunk-texting prediction will improve systems aimed to avoid regrettable drunk-texting. To the best of our knowledge, ours is the first study that does a quantitative analysis, in terms of prediction of the drunk state by using textual clues.
Several studies have examined linguistic traits associated with emotion expression and mental health issues, suicidal tendencies, criminal status, etc. BIBREF5, BIBREF6. NLP techniques have been used in the past to address social safety and mental health issues BIBREF7.
Definition and Challenges
Drunk-texting prediction is the task of classifying a text as drunk or sober. For example, a tweet `Feeling buzzed. Can't remember how the evening went' must be predicted as `drunk', whereas, `Returned from work late today, the traffic was bad' must be predicted as `sober'. The challenges are:
Dataset Creation
We use hashtag-based supervision to create our datasets, similar to tasks like emotion classification BIBREF8 . The tweets are downloaded using Twitter API (https://dev.twitter.com/). We remove non-Unicode characters, and eliminate tweets that contain hyperlinks and also tweets that are shorter than 6 words in length. Finally, hashtags used to indicate drunk or sober tweets are removed so that they provide labels, but do not act as features. The dataset is available on request. As a result, we create three datasets, each using a different strategy for sober tweets, as follows:
The drunk tweets for Datasets 1 and 2 are the same. Figure FIGREF9 shows a word-cloud for these drunk tweets (with stop words and forms of the word `drunk' removed), created using WordItOut. The size of a word indicates its frequency. In addition to topical words such as `bar', `bottle' and `wine', the word-cloud shows sentiment words such as `love' or `damn', along with profane words.
Heuristics other than these hashtags could have been used for dataset creation. For example, timestamps would have been a good option to account for the time at which a tweet was posted. However, they could not be used because users' local times were not available, since very few users had geolocation enabled.
Feature Design
The complete set of features is shown in Table TABREF7. There are two sets of features: (a) N-gram features, and (b) stylistic features. We use unigrams and bigrams as N-gram features, considering both presence and count.
Table TABREF7 shows the complete set of stylistic features of our prediction system. POS ratios are a set of features that record the proportion of each POS tag in the dataset (for example, the proportion of nouns/adjectives, etc.). The POS tags and named entity mentions are obtained from NLTK BIBREF9. Discourse connectors are identified based on a manually created list. Spelling errors are identified using the Enchant spell checker. The repeated characters feature captures a situation in which a word contains a letter that is repeated three or more times, as in the case of happpy. Since drunk-texting is often associated with emotional expression, we also incorporate a set of sentiment-based features. These features include the count/presence of emoticons and the sentiment ratio. The sentiment ratio is the proportion of positive and negative words in the tweet. To determine positive and negative words, we use the MPQA sentiment lexicon. To identify a more refined set of words that correspond to the two classes, we also estimated 20 topics for the dataset using an LDA model BIBREF10. We then consider the top 10 words per topic, for both classes. This results in 400 LDA-specific unigrams that are then used as features.
Evaluation
Using the two sets of features, we train SVM classifiers BIBREF11 . We show the five-fold cross-validation performance of our features on Datasets 1 and 2, in Section SECREF17 , and on Dataset H in Section SECREF21 . Section SECREF22 presents an error analysis. Accuracy, positive/negative precision and positive/negative recall are shown as A, PP/NP and PR/NR respectively. `Drunk' forms the positive class, while `Sober' forms the negative class.
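As a rough sketch of this setup (not the authors' implementation), a linear SVM over n-gram counts can be scored with five-fold cross-validation as follows; the toy tweets and labels are placeholders.

```python
# Sketch of five-fold cross-validation for an n-gram + linear SVM classifier.
# scikit-learn's LinearSVC stands in for the SVM implementation cited as BIBREF11,
# and the tweets/labels are invented placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

tweets = ["feeling buzzed cant remember the evening",
          "returned from work late today traffic was bad",
          "one more bottle of wine tonight",
          "finished the report and went for a run"] * 5
labels = ["drunk", "sober", "drunk", "sober"] * 5

model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LinearSVC())
scores = cross_val_score(model, tweets, labels, cv=5, scoring="accuracy")
print(scores.mean())
```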
Performance for Datasets 1 and 2
Table TABREF14 shows the performance for five-fold cross-validation for Datasets 1 and 2. In case of Dataset 1, we observe that N-gram features achieve an accuracy of 85.5%. We see that our stylistic features alone exhibit degraded performance, with an accuracy of 75.6%, in the case of Dataset 1. Table TABREF16 shows top stylistic features, when trained on the two datasets. Spelling errors, POS ratios for nouns (POS_NOUN), length and sentiment ratios appear in both lists, in addition to LDA-based unigrams. However, negative recall reduces to a mere 3.2%. This degradation implies that our features capture a subset of drunk tweets and that there are properties of drunk tweets that may be more subtle. When both N-gram and stylistic features are used, there is negligible improvement. The accuracy for Dataset 2 increases from 77.9% to 78.1%. Precision/Recall metrics do not change significantly either. The best accuracy of our classifier is 78.1% for all features, and 75.6% for stylistic features. This shows that text-based clues can indeed be used for drunk-texting prediction.
Performance for Held-out Dataset H
Using held-out dataset H, we evaluate how our system performs in comparison to humans. Three annotators, A1-A3, mark each tweet in Dataset H as drunk or sober. Table TABREF19 shows a moderate agreement between our annotators (for example, it is 0.42 for A1 and A2). Table TABREF20 compares our classifier with humans. Our human annotators perform the task with an average accuracy of 68.8%, while our classifier (with all features) trained on Dataset 2 reaches 64%. The classifier trained on Dataset 2 performs better than the one trained on Dataset 1.
Error Analysis
Some categories of errors that occur are:
Incorrect hashtag supervision: The tweet `Can't believe I lost my bag last night, literally had everything in! Thanks god the bar man found it' was marked with `#Drunk'. However, this tweet is not likely to be a drunk tweet, but rather describes a drunk episode in retrospect. Our classifier predicts it as sober.
Seemingly sober tweets: Human annotators as well as our classifier could not identify whether `Will you take her on a date? But really she does like you' was drunk, although the author of the tweet had marked it so. This example also highlights the difficulty of drunk-texting prediction.
Pragmatic difficulty: The tweet `National dress of Ireland is one's one vomit.. my family is lovely' was correctly identified by our human annotators as a drunk tweet. This tweet contains an element of humour and topic change, but our classifier could not capture it.
Conclusion & Future Work
In this paper, we introduce automatic drunk-texting prediction as the task of predicting a tweet as drunk or sober. First, we justify the need for drunk-texting prediction as means of identifying risky social behavior arising out of alcohol abuse, and the need to build tools that avoid privacy leaks due to drunk-texting. We then highlight the challenges of drunk-texting prediction: one of the challenges is selection of negative examples (sober tweets). Using hashtag-based supervision, we create three datasets annotated with drunk or sober labels. We then present SVM-based classifiers which use two sets of features: N-gram and stylistic features. Our drunk prediction system obtains a best accuracy of 78.1%. We observe that our stylistic features add negligible value to N-gram features. We use our heldout dataset to compare how our system performs against human annotators. While human annotators achieve an accuracy of 68.8%, our system reaches reasonably close and performs with a best accuracy of 64%.
Our analysis of the task and experimental findings make a case for drunk-texting prediction as a useful and feasible NLP application. | No |
6412e97373e8e9ae3aa20aa17abef8326dc05450 | 6412e97373e8e9ae3aa20aa17abef8326dc05450_0 | Q: What baseline model is used?
Text: Introduction
The ubiquity of communication devices has made social media highly accessible. The content on these media reflects a user's day-to-day activities. This includes content created under the influence of alcohol. In popular culture, this has been referred to as `drunk-texting'. In this paper, we introduce automatic `drunk-texting prediction' as a computational task. Given a tweet, the goal is to automatically identify if it was written by a drunk user. We refer to tweets written under the influence of alcohol as `drunk tweets', and the opposite as `sober tweets'.
A key challenge is to obtain an annotated dataset. We use hashtag-based supervision so that the authors of the tweets mention if they were drunk at the time of posting a tweet. We create three datasets by using different strategies that are related to the use of hashtags. We then present SVM-based classifiers that use N-gram and stylistic features such as capitalisation, spelling errors, etc. Through our experiments, we make subtle points related to: (a) the performance of our features, (b) how our approach compares against human ability to detect drunk-texting, (c) most discriminative stylistic features, and (d) an error analysis that points to future work. To the best of our knowledge, this is a first study that shows the feasibility of text-based analysis for drunk-texting prediction.
Motivation
Past studies show the relation between alcohol abuse and unsociable behaviour such as aggression BIBREF0, crime BIBREF1, suicide attempts BIBREF2, drunk driving BIBREF3, and risky sexual behaviour BIBREF4. The authors of a study on attempted suicide state that “those responsible for assessing cases of attempted suicide should be adept at detecting alcohol misuse”. Thus, a drunk-texting prediction system can be used to identify individuals susceptible to these behaviours, or for investigative purposes after an incident.
Drunk-texting may also cause regret. Mail Goggles prompts a user to solve math questions before sending an email on weekend evenings. Some Android applications avoid drunk-texting by blocking outgoing texts at the click of a button. However, to the best of our knowledge, these tools require a user command to begin blocking. An ongoing text-based analysis will be more helpful, especially since it offers a more natural setting by monitoring stream of social media text and not explicitly seeking user input. Thus, automatic drunk-texting prediction will improve systems aimed to avoid regrettable drunk-texting. To the best of our knowledge, ours is the first study that does a quantitative analysis, in terms of prediction of the drunk state by using textual clues.
Several studies have examined linguistic traits associated with emotion expression and mental health issues, suicidal tendencies, criminal status, etc. BIBREF5, BIBREF6. NLP techniques have been used in the past to address social safety and mental health issues BIBREF7.
Definition and Challenges
Drunk-texting prediction is the task of classifying a text as drunk or sober. For example, a tweet `Feeling buzzed. Can't remember how the evening went' must be predicted as `drunk', whereas, `Returned from work late today, the traffic was bad' must be predicted as `sober'. The challenges are:
Dataset Creation
We use hashtag-based supervision to create our datasets, similar to tasks like emotion classification BIBREF8 . The tweets are downloaded using Twitter API (https://dev.twitter.com/). We remove non-Unicode characters, and eliminate tweets that contain hyperlinks and also tweets that are shorter than 6 words in length. Finally, hashtags used to indicate drunk or sober tweets are removed so that they provide labels, but do not act as features. The dataset is available on request. As a result, we create three datasets, each using a different strategy for sober tweets, as follows:
The drunk tweets for Datasets 1 and 2 are the same. Figure FIGREF9 shows a word-cloud for these drunk tweets (with stop words and forms of the word `drunk' removed), created using WordItOut. The size of a word indicates its frequency. In addition to topical words such as `bar', `bottle' and `wine', the word-cloud shows sentiment words such as `love' or `damn', along with profane words.
Heuristics other than these hashtags could have been used for dataset creation. For example, timestamps would have been a good option to account for the time at which a tweet was posted. However, they could not be used because users' local times were not available, since very few users had geolocation enabled.
Feature Design
The complete set of features is shown in Table TABREF7. There are two sets of features: (a) N-gram features, and (b) stylistic features. We use unigrams and bigrams as N-gram features, considering both presence and count.
Table TABREF7 shows the complete set of stylistic features of our prediction system. POS ratios are a set of features that record the proportion of each POS tag in the dataset (for example, the proportion of nouns/adjectives, etc.). The POS tags and named entity mentions are obtained from NLTK BIBREF9. Discourse connectors are identified based on a manually created list. Spelling errors are identified using the Enchant spell checker. The repeated characters feature captures a situation in which a word contains a letter that is repeated three or more times, as in the case of happpy. Since drunk-texting is often associated with emotional expression, we also incorporate a set of sentiment-based features. These features include the count/presence of emoticons and the sentiment ratio. The sentiment ratio is the proportion of positive and negative words in the tweet. To determine positive and negative words, we use the MPQA sentiment lexicon. To identify a more refined set of words that correspond to the two classes, we also estimated 20 topics for the dataset using an LDA model BIBREF10. We then consider the top 10 words per topic, for both classes. This results in 400 LDA-specific unigrams that are then used as features.
Evaluation
Using the two sets of features, we train SVM classifiers BIBREF11 . We show the five-fold cross-validation performance of our features on Datasets 1 and 2, in Section SECREF17 , and on Dataset H in Section SECREF21 . Section SECREF22 presents an error analysis. Accuracy, positive/negative precision and positive/negative recall are shown as A, PP/NP and PR/NR respectively. `Drunk' forms the positive class, while `Sober' forms the negative class.
Performance for Datasets 1 and 2
Table TABREF14 shows the performance for five-fold cross-validation for Datasets 1 and 2. In case of Dataset 1, we observe that N-gram features achieve an accuracy of 85.5%. We see that our stylistic features alone exhibit degraded performance, with an accuracy of 75.6%, in the case of Dataset 1. Table TABREF16 shows top stylistic features, when trained on the two datasets. Spelling errors, POS ratios for nouns (POS_NOUN), length and sentiment ratios appear in both lists, in addition to LDA-based unigrams. However, negative recall reduces to a mere 3.2%. This degradation implies that our features capture a subset of drunk tweets and that there are properties of drunk tweets that may be more subtle. When both N-gram and stylistic features are used, there is negligible improvement. The accuracy for Dataset 2 increases from 77.9% to 78.1%. Precision/Recall metrics do not change significantly either. The best accuracy of our classifier is 78.1% for all features, and 75.6% for stylistic features. This shows that text-based clues can indeed be used for drunk-texting prediction.
Performance for Held-out Dataset H
Using held-out dataset H, we evaluate how our system performs in comparison to humans. Three annotators, A1-A3, mark each tweet in Dataset H as drunk or sober. Table TABREF19 shows a moderate agreement between our annotators (for example, it is 0.42 for A1 and A2). Table TABREF20 compares our classifier with humans. Our human annotators perform the task with an average accuracy of 68.8%, while our classifier (with all features) trained on Dataset 2 reaches 64%. The classifier trained on Dataset 2 performs better than the one trained on Dataset 1.
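The paper does not state which agreement statistic Table TABREF19 uses; Cohen's kappa is one common choice for pairwise agreement, sketched below with invented annotator labels.

```python
# Illustrative only: pairwise inter-annotator agreement via Cohen's kappa.
# The label sequences are invented; the paper's agreement measure is not specified.
from sklearn.metrics import cohen_kappa_score

annotator_1 = ["drunk", "sober", "drunk", "sober", "drunk", "sober"]
annotator_2 = ["drunk", "sober", "sober", "sober", "drunk", "drunk"]
print(cohen_kappa_score(annotator_1, annotator_2))  # ~0.33 for this toy example
```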
Error Analysis
Some categories of errors that occur are:
Incorrect hashtag supervision: The tweet `Can't believe I lost my bag last night, literally had everything in! Thanks god the bar man found it' was marked with `#Drunk'. However, this tweet is not likely to be a drunk tweet, but rather describes a drunk episode in retrospect. Our classifier predicts it as sober.
Seemingly sober tweets: Human annotators as well as our classifier could not identify whether `Will you take her on a date? But really she does like you' was drunk, although the author of the tweet had marked it so. This example also highlights the difficulty of drunk-texting prediction.
Pragmatic difficulty: The tweet `National dress of Ireland is one's one vomit.. my family is lovely' was correctly identified by our human annotators as a drunk tweet. This tweet contains an element of humour and topic change, but our classifier could not capture it.
Conclusion & Future Work
In this paper, we introduce automatic drunk-texting prediction as the task of predicting a tweet as drunk or sober. First, we justify the need for drunk-texting prediction as means of identifying risky social behavior arising out of alcohol abuse, and the need to build tools that avoid privacy leaks due to drunk-texting. We then highlight the challenges of drunk-texting prediction: one of the challenges is selection of negative examples (sober tweets). Using hashtag-based supervision, we create three datasets annotated with drunk or sober labels. We then present SVM-based classifiers which use two sets of features: N-gram and stylistic features. Our drunk prediction system obtains a best accuracy of 78.1%. We observe that our stylistic features add negligible value to N-gram features. We use our heldout dataset to compare how our system performs against human annotators. While human annotators achieve an accuracy of 68.8%, our system reaches reasonably close and performs with a best accuracy of 64%.
Our analysis of the task and experimental findings make a case for drunk-texting prediction as a useful and feasible NLP application. | Human evaluators |
957bda6b421ef7d2839c3cec083404ac77721f14 | 957bda6b421ef7d2839c3cec083404ac77721f14_0 | Q: What stylistic features are used to detect drunk texts?
Text: Introduction
The ubiquity of communication devices has made social media highly accessible. The content on these media reflects a user's day-to-day activities. This includes content created under the influence of alcohol. In popular culture, this has been referred to as `drunk-texting'. In this paper, we introduce automatic `drunk-texting prediction' as a computational task. Given a tweet, the goal is to automatically identify if it was written by a drunk user. We refer to tweets written under the influence of alcohol as `drunk tweets', and the opposite as `sober tweets'.
A key challenge is to obtain an annotated dataset. We use hashtag-based supervision so that the authors of the tweets mention if they were drunk at the time of posting a tweet. We create three datasets by using different strategies that are related to the use of hashtags. We then present SVM-based classifiers that use N-gram and stylistic features such as capitalisation, spelling errors, etc. Through our experiments, we make subtle points related to: (a) the performance of our features, (b) how our approach compares against human ability to detect drunk-texting, (c) most discriminative stylistic features, and (d) an error analysis that points to future work. To the best of our knowledge, this is a first study that shows the feasibility of text-based analysis for drunk-texting prediction.
Motivation
Past studies show the relation between alcohol abuse and unsociable behaviour such as aggression BIBREF0, crime BIBREF1, suicide attempts BIBREF2, drunk driving BIBREF3, and risky sexual behaviour BIBREF4. The authors of a study on attempted suicide state that “those responsible for assessing cases of attempted suicide should be adept at detecting alcohol misuse”. Thus, a drunk-texting prediction system can be used to identify individuals susceptible to these behaviours, or for investigative purposes after an incident.
Drunk-texting may also cause regret. Mail Goggles prompts a user to solve math questions before sending an email on weekend evenings. Some Android applications avoid drunk-texting by blocking outgoing texts at the click of a button. However, to the best of our knowledge, these tools require a user command to begin blocking. An ongoing text-based analysis will be more helpful, especially since it offers a more natural setting by monitoring stream of social media text and not explicitly seeking user input. Thus, automatic drunk-texting prediction will improve systems aimed to avoid regrettable drunk-texting. To the best of our knowledge, ours is the first study that does a quantitative analysis, in terms of prediction of the drunk state by using textual clues.
Several studies have examined linguistic traits associated with emotion expression and mental health issues, suicidal tendencies, criminal status, etc. BIBREF5, BIBREF6. NLP techniques have been used in the past to address social safety and mental health issues BIBREF7.
Definition and Challenges
Drunk-texting prediction is the task of classifying a text as drunk or sober. For example, a tweet `Feeling buzzed. Can't remember how the evening went' must be predicted as `drunk', whereas, `Returned from work late today, the traffic was bad' must be predicted as `sober'. The challenges are:
Dataset Creation
We use hashtag-based supervision to create our datasets, similar to tasks like emotion classification BIBREF8 . The tweets are downloaded using Twitter API (https://dev.twitter.com/). We remove non-Unicode characters, and eliminate tweets that contain hyperlinks and also tweets that are shorter than 6 words in length. Finally, hashtags used to indicate drunk or sober tweets are removed so that they provide labels, but do not act as features. The dataset is available on request. As a result, we create three datasets, each using a different strategy for sober tweets, as follows:
The drunk tweets for Datasets 1 and 2 are the same. Figure FIGREF9 shows a word-cloud for these drunk tweets (with stop words and forms of the word `drunk' removed), created using WordItOut. The size of a word indicates its frequency. In addition to topical words such as `bar', `bottle' and `wine', the word-cloud shows sentiment words such as `love' or `damn', along with profane words.
Heuristics other than these hashtags could have been used for dataset creation. For example, timestamps would have been a good option to account for the time at which a tweet was posted. However, they could not be used because users' local times were not available, since very few users had geolocation enabled.
Feature Design
The complete set of features is shown in Table TABREF7. There are two sets of features: (a) N-gram features, and (b) stylistic features. We use unigrams and bigrams as N-gram features, considering both presence and count.
Table TABREF7 shows the complete set of stylistic features of our prediction system. POS ratios are a set of features that record the proportion of each POS tag in the dataset (for example, the proportion of nouns/adjectives, etc.). The POS tags and named entity mentions are obtained from NLTK BIBREF9. Discourse connectors are identified based on a manually created list. Spelling errors are identified using the Enchant spell checker. The repeated characters feature captures a situation in which a word contains a letter that is repeated three or more times, as in the case of happpy. Since drunk-texting is often associated with emotional expression, we also incorporate a set of sentiment-based features. These features include the count/presence of emoticons and the sentiment ratio. The sentiment ratio is the proportion of positive and negative words in the tweet. To determine positive and negative words, we use the MPQA sentiment lexicon. To identify a more refined set of words that correspond to the two classes, we also estimated 20 topics for the dataset using an LDA model BIBREF10. We then consider the top 10 words per topic, for both classes. This results in 400 LDA-specific unigrams that are then used as features.
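A hedged sketch of the LDA-based unigram features follows: fit a topic model and keep the top words per topic as additional features. The toy corpus, two topics and five top words are illustrative; the paper uses 20 topics and the top 10 words per topic.

```python
# Sketch of deriving LDA-specific unigram features; corpus and sizes are toy values.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = ["drunk night bar wine shots",
        "office meeting deadline report",
        "bottle of wine with friends",
        "traffic was bad on the way to work"]
vec = CountVectorizer()
X = vec.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

vocab = vec.get_feature_names_out()
topic_words = set()
for topic in lda.components_:
    top = topic.argsort()[-5:]                 # top 5 words of this topic
    topic_words.update(vocab[i] for i in top)
print(sorted(topic_words))                     # these unigrams become extra features
```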
Evaluation
Using the two sets of features, we train SVM classifiers BIBREF11 . We show the five-fold cross-validation performance of our features on Datasets 1 and 2, in Section SECREF17 , and on Dataset H in Section SECREF21 . Section SECREF22 presents an error analysis. Accuracy, positive/negative precision and positive/negative recall are shown as A, PP/NP and PR/NR respectively. `Drunk' forms the positive class, while `Sober' forms the negative class.
Performance for Datasets 1 and 2
Table TABREF14 shows the performance for five-fold cross-validation for Datasets 1 and 2. In case of Dataset 1, we observe that N-gram features achieve an accuracy of 85.5%. We see that our stylistic features alone exhibit degraded performance, with an accuracy of 75.6%, in the case of Dataset 1. Table TABREF16 shows top stylistic features, when trained on the two datasets. Spelling errors, POS ratios for nouns (POS_NOUN), length and sentiment ratios appear in both lists, in addition to LDA-based unigrams. However, negative recall reduces to a mere 3.2%. This degradation implies that our features capture a subset of drunk tweets and that there are properties of drunk tweets that may be more subtle. When both N-gram and stylistic features are used, there is negligible improvement. The accuracy for Dataset 2 increases from 77.9% to 78.1%. Precision/Recall metrics do not change significantly either. The best accuracy of our classifier is 78.1% for all features, and 75.6% for stylistic features. This shows that text-based clues can indeed be used for drunk-texting prediction.
Performance for Held-out Dataset H
Using held-out dataset H, we evaluate how our system performs in comparison to humans. Three annotators, A1-A3, mark each tweet in Dataset H as drunk or sober. Table TABREF19 shows a moderate agreement between our annotators (for example, it is 0.42 for A1 and A2). Table TABREF20 compares our classifier with humans. Our human annotators perform the task with an average accuracy of 68.8%, while our classifier (with all features) trained on Dataset 2 reaches 64%. The classifier trained on Dataset 2 performs better than the one trained on Dataset 1.
Error Analysis
Some categories of errors that occur are:
Incorrect hashtag supervision: The tweet `Can't believe I lost my bag last night, literally had everything in! Thanks god the bar man found it' was marked with `#Drunk'. However, this tweet is not likely to be a drunk tweet, but rather describes a drunk episode in retrospect. Our classifier predicts it as sober.
Seemingly sober tweets: Human annotators as well as our classifier could not identify whether `Will you take her on a date? But really she does like you' was drunk, although the author of the tweet had marked it so. This example also highlights the difficulty of drunk-texting prediction.
Pragmatic difficulty: The tweet `National dress of Ireland is one's one vomit.. my family is lovely' was correctly identified by our human annotators as a drunk tweet. This tweet contains an element of humour and topic change, but our classifier could not capture it.
Conclusion & Future Work
In this paper, we introduce automatic drunk-texting prediction as the task of predicting a tweet as drunk or sober. First, we justify the need for drunk-texting prediction as means of identifying risky social behavior arising out of alcohol abuse, and the need to build tools that avoid privacy leaks due to drunk-texting. We then highlight the challenges of drunk-texting prediction: one of the challenges is selection of negative examples (sober tweets). Using hashtag-based supervision, we create three datasets annotated with drunk or sober labels. We then present SVM-based classifiers which use two sets of features: N-gram and stylistic features. Our drunk prediction system obtains a best accuracy of 78.1%. We observe that our stylistic features add negligible value to N-gram features. We use our heldout dataset to compare how our system performs against human annotators. While human annotators achieve an accuracy of 68.8%, our system reaches reasonably close and performs with a best accuracy of 64%.
Our analysis of the task and experimental findings make a case for drunk-texting prediction as a useful and feasible NLP application. | LDA unigrams (Presence/Count), POS Ratio, #Named Entity Mentions, #Discourse Connectors, Spelling errors, Repeated characters, Capitalisation, Length, Emoticon (Presence/Count )
and Sentiment Ratio |
957bda6b421ef7d2839c3cec083404ac77721f14 | 957bda6b421ef7d2839c3cec083404ac77721f14_1 | Q: What stylistic features are used to detect drunk texts?
Text: Introduction
The ubiquity of communication devices has made social media highly accessible. The content on these media reflects a user's day-to-day activities. This includes content created under the influence of alcohol. In popular culture, this has been referred to as `drunk-texting'. In this paper, we introduce automatic `drunk-texting prediction' as a computational task. Given a tweet, the goal is to automatically identify if it was written by a drunk user. We refer to tweets written under the influence of alcohol as `drunk tweets', and the opposite as `sober tweets'.
A key challenge is to obtain an annotated dataset. We use hashtag-based supervision so that the authors of the tweets mention if they were drunk at the time of posting a tweet. We create three datasets by using different strategies that are related to the use of hashtags. We then present SVM-based classifiers that use N-gram and stylistic features such as capitalisation, spelling errors, etc. Through our experiments, we make subtle points related to: (a) the performance of our features, (b) how our approach compares against human ability to detect drunk-texting, (c) most discriminative stylistic features, and (d) an error analysis that points to future work. To the best of our knowledge, this is a first study that shows the feasibility of text-based analysis for drunk-texting prediction.
Motivation
Past studies show the relation between alcohol abuse and unsociable behaviour such as aggression BIBREF0, crime BIBREF1, suicide attempts BIBREF2, drunk driving BIBREF3, and risky sexual behaviour BIBREF4. The authors of a study on attempted suicide state that “those responsible for assessing cases of attempted suicide should be adept at detecting alcohol misuse”. Thus, a drunk-texting prediction system can be used to identify individuals susceptible to these behaviours, or for investigative purposes after an incident.
Drunk-texting may also cause regret. Mail Goggles prompts a user to solve math questions before sending an email on weekend evenings. Some Android applications avoid drunk-texting by blocking outgoing texts at the click of a button. However, to the best of our knowledge, these tools require a user command to begin blocking. An ongoing text-based analysis will be more helpful, especially since it offers a more natural setting by monitoring stream of social media text and not explicitly seeking user input. Thus, automatic drunk-texting prediction will improve systems aimed to avoid regrettable drunk-texting. To the best of our knowledge, ours is the first study that does a quantitative analysis, in terms of prediction of the drunk state by using textual clues.
Several studies have examined linguistic traits associated with emotion expression and mental health issues, suicidal tendencies, criminal status, etc. BIBREF5, BIBREF6. NLP techniques have been used in the past to address social safety and mental health issues BIBREF7.
Definition and Challenges
Drunk-texting prediction is the task of classifying a text as drunk or sober. For example, a tweet `Feeling buzzed. Can't remember how the evening went' must be predicted as `drunk', whereas, `Returned from work late today, the traffic was bad' must be predicted as `sober'. The challenges are:
Dataset Creation
We use hashtag-based supervision to create our datasets, similar to tasks like emotion classification BIBREF8 . The tweets are downloaded using Twitter API (https://dev.twitter.com/). We remove non-Unicode characters, and eliminate tweets that contain hyperlinks and also tweets that are shorter than 6 words in length. Finally, hashtags used to indicate drunk or sober tweets are removed so that they provide labels, but do not act as features. The dataset is available on request. As a result, we create three datasets, each using a different strategy for sober tweets, as follows:
The drunk tweets for Datasets 1 and 2 are the same. Figure FIGREF9 shows a word-cloud for these drunk tweets (with stop words and forms of the word `drunk' removed), created using WordItOut. The size of a word indicates its frequency. In addition to topical words such as `bar', `bottle' and `wine', the word-cloud shows sentiment words such as `love' or `damn', along with profane words.
Heuristics other than these hashtags could have been used for dataset creation. For example, timestamps would have been a good option to account for the time at which a tweet was posted. However, they could not be used because users' local times were not available, since very few users had geolocation enabled.
Feature Design
The complete set of features is shown in Table TABREF7. There are two sets of features: (a) N-gram features, and (b) stylistic features. We use unigrams and bigrams as N-gram features, considering both presence and count.
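For illustration, the presence and count variants of the n-gram features can be built as in the sketch below; scikit-learn is used as a stand-in, and the example tweets are invented.

```python
# Unigram/bigram features in their two variants: raw counts vs. binary presence.
from sklearn.feature_extraction.text import CountVectorizer

corpus = ["so so drunk right now", "late night at the office again"]
count_vec = CountVectorizer(ngram_range=(1, 2))                  # n-gram counts
presence_vec = CountVectorizer(ngram_range=(1, 2), binary=True)  # n-gram presence

counts = count_vec.fit_transform(corpus).toarray()
presence = presence_vec.fit_transform(corpus).toarray()
print(counts[0].max(), presence[0].max())  # 2 vs 1 for the repeated unigram "so"
```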
Table TABREF7 shows the complete set of stylistic features of our prediction system. POS ratios are a set of features that record the proportion of each POS tag in the dataset (for example, the proportion of nouns/adjectives, etc.). The POS tags and named entity mentions are obtained from NLTK BIBREF9. Discourse connectors are identified based on a manually created list. Spelling errors are identified using the Enchant spell checker. The repeated characters feature captures a situation in which a word contains a letter that is repeated three or more times, as in the case of happpy. Since drunk-texting is often associated with emotional expression, we also incorporate a set of sentiment-based features. These features include the count/presence of emoticons and the sentiment ratio. The sentiment ratio is the proportion of positive and negative words in the tweet. To determine positive and negative words, we use the MPQA sentiment lexicon. To identify a more refined set of words that correspond to the two classes, we also estimated 20 topics for the dataset using an LDA model BIBREF10. We then consider the top 10 words per topic, for both classes. This results in 400 LDA-specific unigrams that are then used as features.
Evaluation
Using the two sets of features, we train SVM classifiers BIBREF11 . We show the five-fold cross-validation performance of our features on Datasets 1 and 2, in Section SECREF17 , and on Dataset H in Section SECREF21 . Section SECREF22 presents an error analysis. Accuracy, positive/negative precision and positive/negative recall are shown as A, PP/NP and PR/NR respectively. `Drunk' forms the positive class, while `Sober' forms the negative class.
Performance for Datasets 1 and 2
Table TABREF14 shows the performance for five-fold cross-validation for Datasets 1 and 2. In case of Dataset 1, we observe that N-gram features achieve an accuracy of 85.5%. We see that our stylistic features alone exhibit degraded performance, with an accuracy of 75.6%, in the case of Dataset 1. Table TABREF16 shows top stylistic features, when trained on the two datasets. Spelling errors, POS ratios for nouns (POS_NOUN), length and sentiment ratios appear in both lists, in addition to LDA-based unigrams. However, negative recall reduces to a mere 3.2%. This degradation implies that our features capture a subset of drunk tweets and that there are properties of drunk tweets that may be more subtle. When both N-gram and stylistic features are used, there is negligible improvement. The accuracy for Dataset 2 increases from 77.9% to 78.1%. Precision/Recall metrics do not change significantly either. The best accuracy of our classifier is 78.1% for all features, and 75.6% for stylistic features. This shows that text-based clues can indeed be used for drunk-texting prediction.
Performance for Held-out Dataset H
Using held-out dataset H, we evaluate how our system performs in comparison to humans. Three annotators, A1-A3, mark each tweet in Dataset H as drunk or sober. Table TABREF19 shows a moderate agreement between our annotators (for example, it is 0.42 for A1 and A2). Table TABREF20 compares our classifier with humans. Our human annotators perform the task with an average accuracy of 68.8%, while our classifier (with all features) trained on Dataset 2 reaches 64%. The classifier trained on Dataset 2 performs better than the one trained on Dataset 1.
Error Analysis
Some categories of errors that occur are:
Incorrect hashtag supervision: The tweet `Can't believe I lost my bag last night, literally had everything in! Thanks god the bar man found it' was marked with `#Drunk'. However, this tweet is not likely to be a drunk tweet, but rather describes a drunk episode in retrospect. Our classifier predicts it as sober.
Seemingly sober tweets: Human annotators as well as our classifier could not identify whether `Will you take her on a date? But really she does like you' was drunk, although the author of the tweet had marked it so. This example also highlights the difficulty of drunk-texting prediction.
Pragmatic difficulty: The tweet `National dress of Ireland is one's one vomit.. my family is lovely' was correctly identified by our human annotators as a drunk tweet. This tweet contains an element of humour and topic change, but our classifier could not capture it.
Conclusion & Future Work
In this paper, we introduce automatic drunk-texting prediction as the task of predicting a tweet as drunk or sober. First, we justify the need for drunk-texting prediction as means of identifying risky social behavior arising out of alcohol abuse, and the need to build tools that avoid privacy leaks due to drunk-texting. We then highlight the challenges of drunk-texting prediction: one of the challenges is selection of negative examples (sober tweets). Using hashtag-based supervision, we create three datasets annotated with drunk or sober labels. We then present SVM-based classifiers which use two sets of features: N-gram and stylistic features. Our drunk prediction system obtains a best accuracy of 78.1%. We observe that our stylistic features add negligible value to N-gram features. We use our heldout dataset to compare how our system performs against human annotators. While human annotators achieve an accuracy of 68.8%, our system reaches reasonably close and performs with a best accuracy of 64%.
Our analysis of the task and experimental findings make a case for drunk-texting prediction as a useful and feasible NLP application. | LDA unigrams (Presence/Count), POS Ratio, #Named Entity Mentions, #Discourse Connectors, Spelling errors, Repeated characters, Capitalization, Length, Emoticon (Presence/Count), Sentiment Ratio. |
368317b4fd049511e00b441c2e9550ded6607c37 | 368317b4fd049511e00b441c2e9550ded6607c37_0 | Q: Is the data acquired under distant supervision verified by humans at any stage?
Text: Introduction
The ubiquity of communication devices has made social media highly accessible. The content on these media reflects a user's day-to-day activities. This includes content created under the influence of alcohol. In popular culture, this has been referred to as `drunk-texting'. In this paper, we introduce automatic `drunk-texting prediction' as a computational task. Given a tweet, the goal is to automatically identify if it was written by a drunk user. We refer to tweets written under the influence of alcohol as `drunk tweets', and the opposite as `sober tweets'.
A key challenge is to obtain an annotated dataset. We use hashtag-based supervision so that the authors of the tweets mention if they were drunk at the time of posting a tweet. We create three datasets by using different strategies that are related to the use of hashtags. We then present SVM-based classifiers that use N-gram and stylistic features such as capitalisation, spelling errors, etc. Through our experiments, we make subtle points related to: (a) the performance of our features, (b) how our approach compares against human ability to detect drunk-texting, (c) most discriminative stylistic features, and (d) an error analysis that points to future work. To the best of our knowledge, this is a first study that shows the feasibility of text-based analysis for drunk-texting prediction.
Motivation
Past studies show the relation between alcohol abuse and unsociable behaviour such as aggression BIBREF0, crime BIBREF1, suicide attempts BIBREF2, drunk driving BIBREF3, and risky sexual behaviour BIBREF4. The authors of a study on attempted suicide state that “those responsible for assessing cases of attempted suicide should be adept at detecting alcohol misuse”. Thus, a drunk-texting prediction system can be used to identify individuals susceptible to these behaviours, or for investigative purposes after an incident.
Drunk-texting may also cause regret. Mail Goggles prompts a user to solve math questions before sending an email on weekend evenings. Some Android applications avoid drunk-texting by blocking outgoing texts at the click of a button. However, to the best of our knowledge, these tools require a user command to begin blocking. An ongoing text-based analysis will be more helpful, especially since it offers a more natural setting by monitoring stream of social media text and not explicitly seeking user input. Thus, automatic drunk-texting prediction will improve systems aimed to avoid regrettable drunk-texting. To the best of our knowledge, ours is the first study that does a quantitative analysis, in terms of prediction of the drunk state by using textual clues.
Several studies have examined linguistic traits associated with emotion expression and mental health issues, suicidal tendencies, criminal status, etc. BIBREF5, BIBREF6. NLP techniques have been used in the past to address social safety and mental health issues BIBREF7.
Definition and Challenges
Drunk-texting prediction is the task of classifying a text as drunk or sober. For example, a tweet `Feeling buzzed. Can't remember how the evening went' must be predicted as `drunk', whereas, `Returned from work late today, the traffic was bad' must be predicted as `sober'. The challenges are:
Dataset Creation
We use hashtag-based supervision to create our datasets, similar to tasks like emotion classification BIBREF8 . The tweets are downloaded using Twitter API (https://dev.twitter.com/). We remove non-Unicode characters, and eliminate tweets that contain hyperlinks and also tweets that are shorter than 6 words in length. Finally, hashtags used to indicate drunk or sober tweets are removed so that they provide labels, but do not act as features. The dataset is available on request. As a result, we create three datasets, each using a different strategy for sober tweets, as follows:
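A minimal sketch of this clean-up step is shown below; the hashtag lists are hypothetical examples, and dropping non-ASCII characters is only an approximation of the paper's removal of non-Unicode characters.

```python
# Sketch of the tweet filtering described above: drop tweets with hyperlinks or
# fewer than 6 tokens, strip stray non-ASCII characters, and remove the label
# hashtags so they cannot leak into the features. Hashtag sets are assumptions.
import re

DRUNK_TAGS = {"#drunk", "#drank", "#imdrunk"}   # hypothetical label hashtags
SOBER_TAGS = {"#sober", "#notdrinking"}

def clean(tweet):
    tweet = tweet.encode("ascii", "ignore").decode()        # crude character clean-up
    if re.search(r"https?://\S+", tweet):                   # discard tweets with hyperlinks
        return None
    tokens = [t for t in tweet.split()
              if t.lower() not in DRUNK_TAGS | SOBER_TAGS]  # labels, not features
    return " ".join(tokens) if len(tokens) >= 6 else None

print(clean("Feeling buzzed cant remember how the evening went #drunk"))
```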
The drunk tweets for Datasets 1 and 2 are the same. Figure FIGREF9 shows a word-cloud for these drunk tweets (with stop words and forms of the word `drunk' removed), created using WordItOut. The size of a word indicates its frequency. In addition to topical words such as `bar', `bottle' and `wine', the word-cloud shows sentiment words such as `love' or `damn', along with profane words.
Heuristics other than these hashtags could have been used for dataset creation. For example, timestamps would have been a good option to account for the time at which a tweet was posted. However, they could not be used because users' local times were not available, since very few users had geolocation enabled.
Feature Design
The complete set of features is shown in Table TABREF7. There are two sets of features: (a) N-gram features, and (b) stylistic features. We use unigrams and bigrams as N-gram features, considering both presence and count.
Table TABREF7 shows the complete set of stylistic features of our prediction system. POS ratios are a set of features that record the proportion of each POS tag in the dataset (for example, the proportion of nouns/adjectives, etc.). The POS tags and named entity mentions are obtained from NLTK BIBREF9. Discourse connectors are identified based on a manually created list. Spelling errors are identified using the Enchant spell checker. The repeated characters feature captures a situation in which a word contains a letter that is repeated three or more times, as in the case of happpy. Since drunk-texting is often associated with emotional expression, we also incorporate a set of sentiment-based features. These features include the count/presence of emoticons and the sentiment ratio. The sentiment ratio is the proportion of positive and negative words in the tweet. To determine positive and negative words, we use the MPQA sentiment lexicon. To identify a more refined set of words that correspond to the two classes, we also estimated 20 topics for the dataset using an LDA model BIBREF10. We then consider the top 10 words per topic, for both classes. This results in 400 LDA-specific unigrams that are then used as features.
Evaluation
Using the two sets of features, we train SVM classifiers BIBREF11 . We show the five-fold cross-validation performance of our features on Datasets 1 and 2 in Section SECREF17 , and on Dataset H in Section SECREF21 . Section SECREF22 presents an error analysis. Accuracy, positive/negative precision and positive/negative recall are shown as A, PP/NP and PR/NR respectively. `Drunk' forms the positive class, while `Sober' forms the negative class.
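A hedged sketch of this training and evaluation loop: scikit-learn's LinearSVC stands in for the SVM implementation of BIBREF11 , and the toy tweets and three folds are placeholders for the real hashtag-labelled datasets and the five-fold setup.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_validate
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy corpus standing in for the hashtag-labelled tweets (1 = drunk, 0 = sober).
tweets = ["feeling buzzed tonight", "traffic was bad on the way home",
          "one more shot at the bar", "finished the report before dinner",
          "cant even walk straight lol", "early morning run then work"]
labels = [1, 0, 1, 0, 1, 0]

clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LinearSVC())
# cv=3 only because the toy corpus is tiny; the paper uses five-fold cross-validation.
scores = cross_validate(clf, tweets, labels, cv=3,
                        scoring=["accuracy", "precision", "recall"])
print(scores["test_accuracy"].mean())
```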
Performance for Datasets 1 and 2
Table TABREF14 shows the performance for five-fold cross-validation for Datasets 1 and 2. In case of Dataset 1, we observe that N-gram features achieve an accuracy of 85.5%. We see that our stylistic features alone exhibit degraded performance, with an accuracy of 75.6%, in the case of Dataset 1. Table TABREF16 shows top stylistic features, when trained on the two datasets. Spelling errors, POS ratios for nouns (POS_NOUN), length and sentiment ratios appear in both lists, in addition to LDA-based unigrams. However, negative recall reduces to a mere 3.2%. This degradation implies that our features capture a subset of drunk tweets and that there are properties of drunk tweets that may be more subtle. When both N-gram and stylistic features are used, there is negligible improvement. The accuracy for Dataset 2 increases from 77.9% to 78.1%. Precision/Recall metrics do not change significantly either. The best accuracy of our classifier is 78.1% for all features, and 75.6% for stylistic features. This shows that text-based clues can indeed be used for drunk-texting prediction.
Performance for Held-out Dataset H
Using the held-out dataset H, we evaluate how our system performs in comparison to humans. Three annotators, A1-A3, mark each tweet in Dataset H as drunk or sober. Table TABREF19 shows a moderate agreement between our annotators (for example, it is 0.42 for A1 and A2). Table TABREF20 compares our classifier with humans. Our human annotators perform the task with an average accuracy of 68.8%, while our classifier (with all features) trained on Dataset 2 reaches 64%. The classifier trained on Dataset 2 performs better than the one trained on Dataset 1.
Error Analysis
Some categories of errors that occur are:
Incorrect hashtag supervision: The tweet `Can't believe I lost my bag last night, literally had everything in! Thanks god the bar man found it' was marked with `#Drunk'. However, this tweet is not likely to be a drunk tweet, but rather describes a drunk episode in retrospect. Our classifier predicts it as sober.
Seemingly sober tweets: Human annotators as well as our classifier could not identify whether `Will you take her on a date? But really she does like you' was drunk, although the author of the tweet had marked it so. This example also highlights the difficulty of drunk-texting prediction.
Pragmatic difficulty: The tweet `National dress of Ireland is one's one vomit.. my family is lovely' was correctly identified by our human annotators as a drunk tweet. This tweet contains an element of humour and topic change, but our classifier could not capture it.
Conclusion & Future Work
In this paper, we introduce automatic drunk-texting prediction as the task of predicting a tweet as drunk or sober. First, we justify the need for drunk-texting prediction as a means of identifying risky social behavior arising out of alcohol abuse, and the need to build tools that avoid privacy leaks due to drunk-texting. We then highlight the challenges of drunk-texting prediction: one of the challenges is the selection of negative examples (sober tweets). Using hashtag-based supervision, we create three datasets annotated with drunk or sober labels. We then present SVM-based classifiers which use two sets of features: N-gram and stylistic features. Our drunk prediction system obtains a best accuracy of 78.1%. We observe that our stylistic features add negligible value to N-gram features. We use our held-out dataset to compare how our system performs against human annotators. While human annotators achieve an accuracy of 68.8%, our system comes reasonably close, with a best accuracy of 64%.
Our analysis of the task and experimental findings make a case for drunk-texting prediction as a useful and feasible NLP application. | Yes |
b3ec918827cd22b16212265fcdd5b3eadee654ae | b3ec918827cd22b16212265fcdd5b3eadee654ae_0 | Q: What hashtags are used for distant supervision?
Text: Introduction
The ubiquity of communication devices has made social media highly accessible. The content on these media reflects a user's day-to-day activities. This includes content created under the influence of alcohol. In popular culture, this has been referred to as `drunk-texting'. In this paper, we introduce automatic `drunk-texting prediction' as a computational task. Given a tweet, the goal is to automatically identify if it was written by a drunk user. We refer to tweets written under the influence of alcohol as `drunk tweets', and the opposite as `sober tweets'.
A key challenge is to obtain an annotated dataset. We use hashtag-based supervision so that the authors of the tweets mention if they were drunk at the time of posting a tweet. We create three datasets by using different strategies that are related to the use of hashtags. We then present SVM-based classifiers that use N-gram and stylistic features such as capitalisation, spelling errors, etc. Through our experiments, we make subtle points related to: (a) the performance of our features, (b) how our approach compares against human ability to detect drunk-texting, (c) most discriminative stylistic features, and (d) an error analysis that points to future work. To the best of our knowledge, this is a first study that shows the feasibility of text-based analysis for drunk-texting prediction.
Motivation
Past studies show the relation between alcohol abuse and unsociable behaviour such as aggression BIBREF0 , crime BIBREF1 , suicide attempts BIBREF2 , drunk driving BIBREF3 , and risky sexual behaviour BIBREF4 . The authors of BIBREF2 state that “those responsible for assessing cases of attempted suicide should be adept at detecting alcohol misuse”. Thus, a drunk-texting prediction system can be used to identify individuals susceptible to these behaviours, or for investigative purposes after an incident.
Drunk-texting may also cause regret. Mail Goggles prompts a user to solve math questions before sending an email on weekend evenings. Some Android applications prevent drunk-texting by blocking outgoing texts at the click of a button. However, to the best of our knowledge, these tools require a user command to begin blocking. An ongoing text-based analysis would be more helpful, especially since it offers a more natural setting by monitoring a stream of social media text rather than explicitly seeking user input. Thus, automatic drunk-texting prediction will improve systems aimed at avoiding regrettable drunk-texting. To the best of our knowledge, ours is the first study that performs a quantitative analysis of predicting the drunk state using textual clues.
Several studies have examined linguistic traits associated with emotion expression and mental health issues, suicidal nature, criminal status, etc. BIBREF5 , BIBREF6 . NLP techniques have been used in the past to address social safety and mental health issues BIBREF7 .
Definition and Challenges
Drunk-texting prediction is the task of classifying a text as drunk or sober. For example, a tweet `Feeling buzzed. Can't remember how the evening went' must be predicted as `drunk', whereas, `Returned from work late today, the traffic was bad' must be predicted as `sober'. The challenges are:
Dataset Creation
We use hashtag-based supervision to create our datasets, similar to tasks like emotion classification BIBREF8 . The tweets are downloaded using Twitter API (https://dev.twitter.com/). We remove non-Unicode characters, and eliminate tweets that contain hyperlinks and also tweets that are shorter than 6 words in length. Finally, hashtags used to indicate drunk or sober tweets are removed so that they provide labels, but do not act as features. The dataset is available on request. As a result, we create three datasets, each using a different strategy for sober tweets, as follows:
The drunk tweets for Datasets 1 and 2 are the same. Figure FIGREF9 shows a word-cloud for these drunk tweets (with stop words and forms of the word `drunk' removed), created using WordItOut. The size of a word indicates its frequency. In addition to topical words such as `bar', `bottle' and `wine', the word-cloud shows sentiment words such as `love' or `damn', along with profane words.
Heuristics other than these hashtags could have been used for dataset creation. For example, timestamps were a good option to account for the time at which a tweet was posted. However, they could not be used because users' local times were not available, since very few users had geolocation enabled.
Feature Design
The complete set of features is shown in Table TABREF7 . There are two sets of features: (a) N-gram features, and (b) stylistic features. We use unigrams and bigrams as N-gram features, considering both presence and count.
Table TABREF7 shows the complete set of stylistic features of our prediction system. POS ratios are a set of features that record the proportion of each POS tag in the dataset (for example, the proportion of nouns/adjectives, etc.). The POS tags and named entity mentions are obtained from NLTK BIBREF9 . Discourse connectors are identified based on a manually created list. Spelling errors are identified using the enchant spell checker. The repeated characters feature captures a situation in which a word contains a letter that is repeated three or more times, as in the case of `happpy'. Since drunk-texting is often associated with emotional expression, we also incorporate a set of sentiment-based features. These features include count/presence of emoticons and sentiment ratio. Sentiment ratio is the proportion of positive and negative words in the tweet. To determine positive and negative words, we use the MPQA sentiment lexicon. To identify a more refined set of words that correspond to the two classes, we also estimate 20 topics for the dataset using an LDA model BIBREF10 . We then consider the top 10 words per topic, for both classes. This results in 400 LDA-specific unigrams that are then used as features.
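One possible way to derive the LDA-specific unigram features described above is sketched below with scikit-learn. The corpus and the per-class split are left to the caller; to approximate the 400 unigrams one would run this once per class with 20 topics and 10 words per topic.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

def lda_unigrams(docs, n_topics=20, top_k=10):
    """Top-k words of each LDA topic, collected as a unigram feature vocabulary."""
    vec = CountVectorizer(stop_words="english")
    counts = vec.fit_transform(docs)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    lda.fit(counts)
    vocab = vec.get_feature_names_out()
    words = set()
    for topic in lda.components_:        # one weight vector per topic
        top = topic.argsort()[-top_k:]   # indices of the k highest-weight words
        words.update(vocab[i] for i in top)
    return words

# Tiny usage example; in the paper this yields 400 unigrams across both classes.
print(sorted(lda_unigrams(["wine at the bar tonight", "water and sleep after work",
                           "beer bottle on the table", "coffee before the meeting"],
                          n_topics=2, top_k=3)))
```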
Evaluation
Using the two sets of features, we train SVM classifiers BIBREF11 . We show the five-fold cross-validation performance of our features on Datasets 1 and 2, in Section SECREF17 , and on Dataset H in Section SECREF21 . Section SECREF22 presents an error analysis. Accuracy, positive/negative precision and positive/negative recall are shown as A, PP/NP and PR/NR respectively. `Drunk' forms the positive class, while `Sober' forms the negative class.
Performance for Datasets 1 and 2
Table TABREF14 shows the performance for five-fold cross-validation for Datasets 1 and 2. In case of Dataset 1, we observe that N-gram features achieve an accuracy of 85.5%. We see that our stylistic features alone exhibit degraded performance, with an accuracy of 75.6%, in the case of Dataset 1. Table TABREF16 shows top stylistic features, when trained on the two datasets. Spelling errors, POS ratios for nouns (POS_NOUN), length and sentiment ratios appear in both lists, in addition to LDA-based unigrams. However, negative recall reduces to a mere 3.2%. This degradation implies that our features capture a subset of drunk tweets and that there are properties of drunk tweets that may be more subtle. When both N-gram and stylistic features are used, there is negligible improvement. The accuracy for Dataset 2 increases from 77.9% to 78.1%. Precision/Recall metrics do not change significantly either. The best accuracy of our classifier is 78.1% for all features, and 75.6% for stylistic features. This shows that text-based clues can indeed be used for drunk-texting prediction.
Performance for Held-out Dataset H
Using the held-out dataset H, we evaluate how our system performs in comparison to humans. Three annotators, A1-A3, mark each tweet in Dataset H as drunk or sober. Table TABREF19 shows a moderate agreement between our annotators (for example, it is 0.42 for A1 and A2). Table TABREF20 compares our classifier with humans. Our human annotators perform the task with an average accuracy of 68.8%, while our classifier (with all features) trained on Dataset 2 reaches 64%. The classifier trained on Dataset 2 performs better than the one trained on Dataset 1.
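For reference, the agreement and accuracy numbers reported here can be computed as below; the label vectors are hypothetical stand-ins for the Dataset H annotations.

```python
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Hypothetical labels for a tiny held-out sample (1 = drunk, 0 = sober).
gold = [1, 0, 1, 1, 0, 0]     # hashtag-derived labels
a1   = [1, 0, 0, 1, 0, 0]     # annotator A1
a2   = [1, 1, 0, 1, 0, 0]     # annotator A2
pred = [1, 0, 1, 0, 0, 1]     # classifier output

print("A1-A2 kappa:", cohen_kappa_score(a1, a2))
print("A1 accuracy:", accuracy_score(gold, a1))
print("Classifier accuracy:", accuracy_score(gold, pred))
```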
Error Analysis
Some categories of errors that occur are:
Incorrect hashtag supervision: The tweet `Can't believe I lost my bag last night, literally had everything in! Thanks god the bar man found it' was marked with `#Drunk'. However, this tweet is not likely to be a drunk tweet, but rather describes a drunk episode in retrospect. Our classifier predicts it as sober.
Seemingly sober tweets: Human annotators as well as our classifier could not identify whether `Will you take her on a date? But really she does like you' was drunk, although the author of the tweet had marked it so. This example also highlights the difficulty of drunk-texting prediction.
Pragmatic difficulty: The tweet `National dress of Ireland is one's one vomit.. my family is lovely' was correctly identified by our human annotators as a drunk tweet. This tweet contains an element of humour and topic change, but our classifier could not capture it.
Conclusion & Future Work
In this paper, we introduce automatic drunk-texting prediction as the task of predicting a tweet as drunk or sober. First, we justify the need for drunk-texting prediction as a means of identifying risky social behavior arising out of alcohol abuse, and the need to build tools that avoid privacy leaks due to drunk-texting. We then highlight the challenges of drunk-texting prediction: one of the challenges is the selection of negative examples (sober tweets). Using hashtag-based supervision, we create three datasets annotated with drunk or sober labels. We then present SVM-based classifiers which use two sets of features: N-gram and stylistic features. Our drunk prediction system obtains a best accuracy of 78.1%. We observe that our stylistic features add negligible value to N-gram features. We use our held-out dataset to compare how our system performs against human annotators. While human annotators achieve an accuracy of 68.8%, our system comes reasonably close, with a best accuracy of 64%.
Our analysis of the task and experimental findings make a case for drunk-texting prediction as a useful and feasible NLP application. | Unanswerable |
387970ebc7ef99f302f318d047f708274c0e8f21 | 387970ebc7ef99f302f318d047f708274c0e8f21_0 | Q: Do the authors equate drunk tweeting with drunk texting?
Text: Introduction
The ubiquity of communication devices has made social media highly accessible. The content on these media reflects a user's day-to-day activities. This includes content created under the influence of alcohol. In popular culture, this has been referred to as `drunk-texting'. In this paper, we introduce automatic `drunk-texting prediction' as a computational task. Given a tweet, the goal is to automatically identify if it was written by a drunk user. We refer to tweets written under the influence of alcohol as `drunk tweets', and the opposite as `sober tweets'.
A key challenge is to obtain an annotated dataset. We use hashtag-based supervision so that the authors of the tweets mention if they were drunk at the time of posting a tweet. We create three datasets by using different strategies that are related to the use of hashtags. We then present SVM-based classifiers that use N-gram and stylistic features such as capitalisation, spelling errors, etc. Through our experiments, we make subtle points related to: (a) the performance of our features, (b) how our approach compares against human ability to detect drunk-texting, (c) most discriminative stylistic features, and (d) an error analysis that points to future work. To the best of our knowledge, this is a first study that shows the feasibility of text-based analysis for drunk-texting prediction.
Motivation
Past studies show the relation between alcohol abuse and unsociable behaviour such as aggression BIBREF0 , crime BIBREF1 , suicide attempts BIBREF2 , drunk driving BIBREF3 , and risky sexual behaviour BIBREF4 . The authors of BIBREF2 state that “those responsible for assessing cases of attempted suicide should be adept at detecting alcohol misuse”. Thus, a drunk-texting prediction system can be used to identify individuals susceptible to these behaviours, or for investigative purposes after an incident.
Drunk-texting may also cause regret. Mail Goggles prompts a user to solve math questions before sending an email on weekend evenings. Some Android applications prevent drunk-texting by blocking outgoing texts at the click of a button. However, to the best of our knowledge, these tools require a user command to begin blocking. An ongoing text-based analysis would be more helpful, especially since it offers a more natural setting by monitoring a stream of social media text rather than explicitly seeking user input. Thus, automatic drunk-texting prediction will improve systems aimed at avoiding regrettable drunk-texting. To the best of our knowledge, ours is the first study that performs a quantitative analysis of predicting the drunk state using textual clues.
Several studies have examined linguistic traits associated with emotion expression and mental health issues, suicidal nature, criminal status, etc. BIBREF5 , BIBREF6 . NLP techniques have been used in the past to address social safety and mental health issues BIBREF7 .
Definition and Challenges
Drunk-texting prediction is the task of classifying a text as drunk or sober. For example, a tweet `Feeling buzzed. Can't remember how the evening went' must be predicted as `drunk', whereas, `Returned from work late today, the traffic was bad' must be predicted as `sober'. The challenges are:
Dataset Creation
We use hashtag-based supervision to create our datasets, similar to tasks like emotion classification BIBREF8 . The tweets are downloaded using Twitter API (https://dev.twitter.com/). We remove non-Unicode characters, and eliminate tweets that contain hyperlinks and also tweets that are shorter than 6 words in length. Finally, hashtags used to indicate drunk or sober tweets are removed so that they provide labels, but do not act as features. The dataset is available on request. As a result, we create three datasets, each using a different strategy for sober tweets, as follows:
The drunk tweets for Datasets 1 and 2 are the same. Figure FIGREF9 shows a word-cloud for these drunk tweets (with stop words and forms of the word `drunk' removed), created using WordItOut. The size of a word indicates its frequency. In addition to topical words such as `bar', `bottle' and `wine', the word-cloud shows sentiment words such as `love' or `damn', along with profane words.
Heuristics other than these hashtags could have been used for dataset creation. For example, timestamps were a good option to account for the time at which a tweet was posted. However, they could not be used because users' local times were not available, since very few users had geolocation enabled.
Feature Design
The complete set of features is shown in Table TABREF7 . There are two sets of features: (a) N-gram features, and (b) stylistic features. We use unigrams and bigrams as N-gram features, considering both presence and count.
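One way to realise the presence and count variants of the N-gram features, assuming scikit-learn is acceptable as the feature extractor (the text does not state which toolkit was used):

```python
from sklearn.feature_extraction.text import CountVectorizer

docs = ["feeling buzzed tonight", "feeling fine, back from work"]

# Count features: unigrams and bigrams with their frequencies.
count_vec = CountVectorizer(ngram_range=(1, 2))
counts = count_vec.fit_transform(docs)

# Presence features: the same n-grams, but binarised to 0/1.
presence_vec = CountVectorizer(ngram_range=(1, 2), binary=True)
presence = presence_vec.fit_transform(docs)

print(count_vec.get_feature_names_out())
print(counts.toarray())
print(presence.toarray())
```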
Table TABREF7 shows the complete set of stylistic features of our prediction system. POS ratios are a set of features that record the proportion of each POS tag in the dataset (for example, the proportion of nouns/adjectives, etc.). The POS tags and named entity mentions are obtained from NLTK BIBREF9 . Discourse connectors are identified based on a manually created list. Spelling errors are identified using the enchant spell checker. The repeated characters feature captures a situation in which a word contains a letter that is repeated three or more times, as in the case of `happpy'. Since drunk-texting is often associated with emotional expression, we also incorporate a set of sentiment-based features. These features include count/presence of emoticons and sentiment ratio. Sentiment ratio is the proportion of positive and negative words in the tweet. To determine positive and negative words, we use the MPQA sentiment lexicon. To identify a more refined set of words that correspond to the two classes, we also estimate 20 topics for the dataset using an LDA model BIBREF10 . We then consider the top 10 words per topic, for both classes. This results in 400 LDA-specific unigrams that are then used as features.
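A sketch of two of the stylistic features, using the NLTK tagger and the pyenchant spell checker mentioned above. The specific POS ratios computed and the "en_US" dictionary are assumptions made for illustration.

```python
# Requires: nltk.download("punkt"), nltk.download("averaged_perceptron_tagger")
import enchant   # pyenchant spell checker
import nltk

def pos_ratios_and_spell_errors(tweet):
    tokens = nltk.word_tokenize(tweet)
    tags = [tag for _, tag in nltk.pos_tag(tokens)]
    n = max(len(tags), 1)
    # Proportion of nouns and adjectives among all POS tags.
    features = {
        "noun_ratio": sum(t.startswith("NN") for t in tags) / n,
        "adj_ratio": sum(t.startswith("JJ") for t in tags) / n,
    }
    checker = enchant.Dict("en_US")
    features["spelling_errors"] = sum(
        1 for tok in tokens if tok.isalpha() and not checker.check(tok))
    return features

print(pos_ratios_and_spell_errors("I am sooo happpy at the barr tonight"))
```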
Evaluation
Using the two sets of features, we train SVM classifiers BIBREF11 . We show the five-fold cross-validation performance of our features on Datasets 1 and 2, in Section SECREF17 , and on Dataset H in Section SECREF21 . Section SECREF22 presents an error analysis. Accuracy, positive/negative precision and positive/negative recall are shown as A, PP/NP and PR/NR respectively. `Drunk' forms the positive class, while `Sober' forms the negative class.
Performance for Datasets 1 and 2
Table TABREF14 shows the performance for five-fold cross-validation for Datasets 1 and 2. In case of Dataset 1, we observe that N-gram features achieve an accuracy of 85.5%. We see that our stylistic features alone exhibit degraded performance, with an accuracy of 75.6%, in the case of Dataset 1. Table TABREF16 shows top stylistic features, when trained on the two datasets. Spelling errors, POS ratios for nouns (POS_NOUN), length and sentiment ratios appear in both lists, in addition to LDA-based unigrams. However, negative recall reduces to a mere 3.2%. This degradation implies that our features capture a subset of drunk tweets and that there are properties of drunk tweets that may be more subtle. When both N-gram and stylistic features are used, there is negligible improvement. The accuracy for Dataset 2 increases from 77.9% to 78.1%. Precision/Recall metrics do not change significantly either. The best accuracy of our classifier is 78.1% for all features, and 75.6% for stylistic features. This shows that text-based clues can indeed be used for drunk-texting prediction.
Performance for Held-out Dataset H
Using the held-out dataset H, we evaluate how our system performs in comparison to humans. Three annotators, A1-A3, mark each tweet in Dataset H as drunk or sober. Table TABREF19 shows a moderate agreement between our annotators (for example, it is 0.42 for A1 and A2). Table TABREF20 compares our classifier with humans. Our human annotators perform the task with an average accuracy of 68.8%, while our classifier (with all features) trained on Dataset 2 reaches 64%. The classifier trained on Dataset 2 performs better than the one trained on Dataset 1.
Error Analysis
Some categories of errors that occur are:
Incorrect hashtag supervision: The tweet `Can't believe I lost my bag last night, literally had everything in! Thanks god the bar man found it' was marked with `#Drunk'. However, this tweet is not likely to be a drunk tweet, but rather describes a drunk episode in retrospect. Our classifier predicts it as sober.
Seemingly sober tweets: Human annotators as well as our classifier could not identify whether `Will you take her on a date? But really she does like you' was drunk, although the author of the tweet had marked it so. This example also highlights the difficulty of drunk-texting prediction.
Pragmatic difficulty: The tweet `National dress of Ireland is one's one vomit.. my family is lovely' was correctly identified by our human annotators as a drunk tweet. This tweet contains an element of humour and topic change, but our classifier could not capture it.
Conclusion & Future Work
In this paper, we introduce automatic drunk-texting prediction as the task of predicting a tweet as drunk or sober. First, we justify the need for drunk-texting prediction as a means of identifying risky social behavior arising out of alcohol abuse, and the need to build tools that avoid privacy leaks due to drunk-texting. We then highlight the challenges of drunk-texting prediction: one of the challenges is the selection of negative examples (sober tweets). Using hashtag-based supervision, we create three datasets annotated with drunk or sober labels. We then present SVM-based classifiers which use two sets of features: N-gram and stylistic features. Our drunk prediction system obtains a best accuracy of 78.1%. We observe that our stylistic features add negligible value to N-gram features. We use our held-out dataset to compare how our system performs against human annotators. While human annotators achieve an accuracy of 68.8%, our system comes reasonably close, with a best accuracy of 64%.
Our analysis of the task and experimental findings make a case for drunk-texting prediction as a useful and feasible NLP application. | Yes |
2fffff59e57b8dbcaefb437a6b3434fc137f813b | 2fffff59e57b8dbcaefb437a6b3434fc137f813b_0 | Q: What corpus was the source of the OpenIE extractions?
Text: Introduction
Effective question answering (QA) systems have been a long-standing quest of AI research. Structured curated KBs have been used successfully for this task BIBREF0 , BIBREF1 . However, these KBs are expensive to build and typically domain-specific. Automatically constructed open vocabulary (subject; predicate; object) style tuples have broader coverage, but have only been used for simple questions where a single tuple suffices BIBREF2 , BIBREF3 .
Our goal in this work is to develop a QA system that can perform reasoning with Open IE BIBREF4 tuples for complex multiple-choice questions that require tuples from multiple sentences. Such a system can answer complex questions in resource-poor domains where curated knowledge is unavailable. Elementary-level science exams is one such domain, requiring complex reasoning BIBREF5 . Due to the lack of a large-scale structured KB, state-of-the-art systems for this task either rely on shallow reasoning with large text corpora BIBREF6 , BIBREF7 or deeper, structured reasoning with a small amount of automatically acquired BIBREF8 or manually curated BIBREF9 knowledge.
Consider the following question from an Alaska state 4th grade science test:
Which object in our solar system reflects light and is a satellite that orbits around one planet? (A) Earth (B) Mercury (C) the Sun (D) the Moon
This question is challenging for QA systems because of its complex structure and the need for multi-fact reasoning. A natural way to answer it is by combining facts such as (Moon; is; in the solar system), (Moon; reflects; light), (Moon; is; satellite), and (Moon; orbits; around one planet).
A candidate system for such reasoning, and which we draw inspiration from, is the TableILP system of BIBREF9 . TableILP treats QA as a search for an optimal subgraph that connects terms in the question and answer via rows in a set of curated tables, and solves the optimization problem using Integer Linear Programming (ILP). We similarly want to search for an optimal subgraph. However, a large, automatically extracted tuple KB makes the reasoning context different on three fronts: (a) unlike reasoning with tables, chaining tuples is less important and reliable as join rules aren't available; (b) conjunctive evidence becomes paramount, as, unlike a long table row, a single tuple is less likely to cover the entire question; and (c) again, unlike table rows, tuples are noisy, making combining redundant evidence essential. Consequently, a table-knowledge centered inference model isn't the best fit for noisy tuples.
To address this challenge, we present a new ILP-based model of inference with tuples, implemented in a reasoner called TupleInf. We demonstrate that TupleInf significantly outperforms TableILP by 11.8% on a broad set of over 1,300 science questions, without requiring manually curated tables, using a substantially simpler ILP formulation, and generalizing well to higher grade levels. The gains persist even when both solvers are provided identical knowledge. This demonstrates for the first time how Open IE based QA can be extended from simple lookup questions to an effective system for complex questions.
Related Work
We discuss two classes of related work: retrieval-based web question-answering (simple reasoning with large scale KB) and science question-answering (complex reasoning with small KB).
Tuple Inference Solver
We first describe the tuples used by our solver. We define a tuple as (subject; predicate; objects) with zero or more objects. We refer to the subject, predicate, and objects as the fields of the tuple.
Tuple KB
We use the text corpora (S) from BIBREF6 aristo2016:combining to build our tuple KB. For each test set, we use the corresponding training questions $Q_\mathit {tr}$ to retrieve domain-relevant sentences from S. Specifically, for each multiple-choice question $(q,A) \in Q_\mathit {tr}$ and each choice $a \in A$ , we use all non-stopword tokens in $q$ and $a$ as an ElasticSearch query against S. We take the top 200 hits, run Open IE v4, and aggregate the resulting tuples over all $a \in A$ and over all questions in $Q_\mathit {tr}$ to create the tuple KB (T).
Tuple Selection
Given a multiple-choice question $qa$ with question text $q$ and answer choices A= $\lbrace a_i\rbrace $ , we select the most relevant tuples from $T$ and $S$ as follows.
Selecting from Tuple KB: We use an inverted index to find the 1,000 tuples that have the most overlapping tokens with the question tokens $tok(qa)$. We also filter out any tuples that overlap only with $tok(q)$ as they do not support any answer. We compute the normalized TF-IDF score treating the question $q$ as a query and each tuple $t$ as a document: $\textit{tf}(x, q) = 1 \text{ if } x \in q$; $\textit{idf}(x) = \log(1 + N/n_x)$; $\textit{tf-idf}(t, q) = \sum_{x \in t \cap q} \textit{idf}(x)$
where $N$ is the number of tuples in the KB and $n_x$ is the number of tuples containing $x$ . We normalize the tf-idf score by the number of tokens in $t$ and $q$ . We finally take the 50 top-scoring tuples $T_{qa}$ .
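The scoring described above can be implemented directly. In this sketch, `tuple_freq` and `n_tuples` are precomputed KB statistics, and dividing by the product of the two token counts is one plausible reading of "normalize by the number of tokens in $t$ and $q$".

```python
import math

def tfidf_score(question_tokens, tuple_tokens, n_tuples, tuple_freq):
    """Normalised tf-idf overlap between a question and a single tuple.

    tuple_freq[x] = number of KB tuples containing token x; n_tuples = |KB|.
    """
    q, t = set(question_tokens), set(tuple_tokens)
    score = sum(math.log(1 + n_tuples / tuple_freq[x]) for x in t & q)
    # Normalise by the number of tokens in the tuple and in the question.
    return score / (len(tuple_tokens) * len(question_tokens))

def top_k_tuples(question_tokens, candidates, n_tuples, tuple_freq, k=50):
    scored = [(tfidf_score(question_tokens, tup, n_tuples, tuple_freq), tup)
              for tup in candidates]
    return [tup for _, tup in sorted(scored, key=lambda p: p[0], reverse=True)[:k]]
```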
On-the-fly tuples from text: To handle questions from new domains not covered by the training set, we extract additional tuples on the fly from S (similar to BIBREF17 knowlhunting). We perform the same ElasticSearch query described earlier for building T. We ignore sentences that cover none or all answer choices as they are not discriminative. We also ignore long sentences ( $>$ 300 characters) and sentences with negation as they tend to lead to noisy inference. We then run Open IE on these sentences and re-score the resulting tuples using the Jaccard score due to the lossy nature of Open IE, and finally take the 50 top-scoring tuples $T^{\prime }_{qa}$ .
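The on-the-fly rescoring is a plain Jaccard overlap between question tokens and tuple tokens; here tuples are assumed to be represented by their token sequences.

```python
def jaccard(question_tokens, tuple_tokens):
    q, t = set(question_tokens), set(tuple_tokens)
    return len(q & t) / len(q | t) if q | t else 0.0

def rescore_on_the_fly(question_tokens, extracted_tuples, k=50):
    """Re-score Open IE tuples extracted from retrieved sentences and keep the top k."""
    scored = sorted(extracted_tuples,
                    key=lambda tup: jaccard(question_tokens, tup),
                    reverse=True)
    return scored[:k]
```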
Support Graph Search
Similar to TableILP, we view the QA task as searching for a graph that best connects the terms in the question (qterms) with an answer choice via the knowledge; see Figure 1 for a simple illustrative example. Unlike standard alignment models used for tasks such as Recognizing Textual Entailment (RTE) BIBREF18 , however, we must score alignments between a set $T_{qa} \cup T^{\prime }_{qa}$ of structured tuples and a (potentially multi-sentence) multiple-choice question $qa$ .
The qterms, answer choices, and tuples fields form the set of possible vertices, $\mathcal {V}$ , of the support graph. Edges connecting qterms to tuple fields and tuple fields to answer choices form the set of possible edges, $\mathcal {E}$ . The support graph, $G(V, E)$ , is a subgraph of $\mathcal {G}(\mathcal {V}, \mathcal {E})$ where $V$ and $E$ denote “active” nodes and edges, resp. We define the desired behavior of an optimal support graph via an ILP model as follows.
Similar to TableILP, we score the support graph based on the weight of the active nodes and edges. Each edge $e(t, h)$ is weighted based on a word-overlap score. While TableILP used WordNet BIBREF19 paths to compute the weight, this measure results in unreliable scores when faced with longer phrases found in Open IE tuples.
Compared to a curated KB, it is easy to find Open IE tuples that match irrelevant parts of the questions. To mitigate this issue, we improve the scoring of qterms in our ILP objective to focus on important terms. Since the later terms in a question tend to provide the most critical information, we scale qterm coefficients based on their position. Also, qterms that appear in almost all of the selected tuples tend not to be discriminative as any tuple would support such a qterm. Hence we scale the coefficients by the inverse frequency of the tokens in the selected tuples.
Since Open IE tuples do not come with schema and join rules, we can define a substantially simpler model compared to TableILP. This reduces the reasoning capability but also eliminates the reliance on hand-authored join rules and regular expressions used in TableILP. We discovered (see empirical evaluation) that this simple model can achieve the same score as TableILP on the Regents test (target test set used by TableILP) and generalizes better to different grade levels.
We define active vertices and edges using ILP constraints: an active edge must connect two active vertices and an active vertex must have at least one active edge. To avoid positive edge coefficients in the objective function resulting in spurious edges in the support graph, we limit the number of active edges from an active tuple, question choice, tuple fields, and qterms (first group of constraints in Table 1 ). Our model is also capable of using multiple tuples to support different parts of the question as illustrated in Figure 1 . To avoid spurious tuples that only connect with the question (or choice) or ignore the relation being expressed in the tuple, we add constraints that require each tuple to connect a qterm with an answer choice (second group of constraints in Table 1 ).
We also define new constraints based on the Open IE tuple structure. Since an Open IE tuple expresses a fact about the tuple's subject, we require the subject to be active in the support graph. To avoid issues such as (Planet; orbit; Sun) matching the sample question in the introduction (“Which object $\ldots $ orbits around a planet”), we also add an ordering constraint (third group in Table 1 ).
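To give a feel for the optimization, here is a much-simplified support-graph ILP written with PuLP and solved with CBC. It keeps only a few of the constraints discussed above (edges require an active tuple, an active tuple must touch both a question term and the answer choice, and a cap on the number of active tuples); the full TupleInf constraint set, its weights, and its use of the SCIP engine are not reproduced. The `weight` function is any word-overlap score supplied by the caller.

```python
import pulp

def support_graph_score(qterms, choice, tuples, weight, max_tuples=3):
    """Score one answer choice with a much-simplified support-graph ILP.

    `tuples` is a list of token lists; `weight(text, tuple_tokens)` is any
    word-overlap score between a piece of text and a tuple.
    """
    prob = pulp.LpProblem("support_graph", pulp.LpMaximize)
    t_act = [pulp.LpVariable(f"t_{i}", cat="Binary") for i in range(len(tuples))]
    q_edge, c_edge, objective = {}, {}, []

    for i, tup in enumerate(tuples):
        for j, q in enumerate(qterms):
            e = pulp.LpVariable(f"q_{i}_{j}", cat="Binary")
            q_edge[i, j] = e
            objective.append(weight(q, tup) * e)
            prob += e <= t_act[i]                      # edge only if the tuple is active
        e = pulp.LpVariable(f"c_{i}", cat="Binary")
        c_edge[i] = e
        objective.append(weight(choice, tup) * e)
        prob += e <= t_act[i]
        # An active tuple must touch both a question term and the answer choice.
        prob += pulp.lpSum(q_edge[i, j] for j in range(len(qterms))) >= t_act[i]
        prob += c_edge[i] >= t_act[i]

    prob += pulp.lpSum(t_act) <= max_tuples            # limit conjunctive evidence
    prob += pulp.lpSum(objective)                      # objective function
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return pulp.value(prob.objective)
```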
It is worth mentioning that TupleInf only combines parallel evidence, i.e., each tuple must connect words in the question to the answer choice. For reliable multi-hop reasoning using Open IE tuples, we can add inter-tuple connections to the support graph search, controlled by a small number of rules over the Open IE predicates. Learning such rules for the science domain is an open problem and a potential avenue of future work.
Experiments
Comparing our method with two state-of-the-art systems for 4th and 8th grade science exams, we demonstrate that (a) TupleInf with only automatically extracted tuples significantly outperforms TableILP with its original curated knowledge as well as with additional tuples, and (b) TupleInf's complementary approach to IR leads to an improved ensemble. Numbers in bold indicate statistical significance based on the Binomial exact test BIBREF20 at $p=0.05$ .
We consider two question sets. (1) 4th Grade set (1220 train, 1304 test) is a 10x larger superset of the NY Regents questions BIBREF6 , and includes professionally written licensed questions. (2) 8th Grade set (293 train, 282 test) contains 8th grade questions from various states.
We consider two knowledge sources. The Sentence corpus (S) consists of domain-targeted ~80K sentences and 280 GB of plain text extracted from web pages used by BIBREF6 aristo2016:combining. This corpus is used by the IR solver and also used to create the tuple KB T and on-the-fly tuples $T^{\prime }_{qa}$ . Additionally, TableILP uses ~70 curated tables (C) designed for 4th grade NY Regents exams.
We compare TupleInf with two state-of-the-art baselines. IR is a simple yet powerful information-retrieval baseline BIBREF6 that selects the answer option with the best matching sentence in a corpus. TableILP is the state-of-the-art structured inference baseline BIBREF9 developed for science questions.
Results
Table 2 shows that TupleInf, with no curated knowledge, outperforms TableILP on both question sets by more than 11%. The lower half of the table shows that even when both solvers are given the same knowledge (C+T), the improved selection and simplified model of TupleInf results in a statistically significant improvement. Our simple model, TupleInf(C + T), also achieves scores comparable to TableILP on the latter's target Regents questions (61.4% vs TableILP's reported 61.5%) without any specialized rules.
Table 3 shows that while TupleInf achieves similar scores as the IR solver, the approaches are complementary (structured lossy knowledge reasoning vs. lossless sentence retrieval). The two solvers, in fact, differ on 47.3% of the training questions. To exploit this complementarity, we train an ensemble system BIBREF6 which, as shown in the table, provides a substantial boost over the individual solvers. Further, IR + TupleInf is consistently better than IR + TableILP. Finally, in combination with IR and the statistical association based PMI solver (that scores 54.1% by itself) of BIBREF6 aristo2016:combining, TupleInf achieves a score of 58.2% as compared to TableILP's ensemble score of 56.7% on the 4th grade set, again attesting to TupleInf's strength.
Error Analysis
We describe four classes of failures that we observed, and the future work they suggest.
Missing Important Words: Which material will spread out to completely fill a larger container? (A) air (B) ice (C) sand (D) water
In this question, we have tuples that support water will spread out and fill a larger container but miss the critical word “completely”. An approach capable of detecting salient question words could help avoid that.
Lossy IE: Which action is the best method to separate a mixture of salt and water? ...
The IR solver correctly answers this question by using the sentence: Separate the salt and water mixture by evaporating the water. However, TupleInf is not able to answer this question as Open IE is unable to extract tuples from this imperative sentence. While the additional structure from Open IE is useful for more robust matching, converting sentences to Open IE tuples may lose important bits of information.
Bad Alignment: Which of the following gases is necessary for humans to breathe in order to live? (A) Oxygen (B) Carbon dioxide (C) Helium (D) Water vapor
TupleInf returns “Carbon dioxide” as the answer because of the tuple (humans; breathe out; carbon dioxide). The chunk “to breathe” in the question has a high alignment score to the “breathe out” relation in the tuple even though they have completely different meanings. Improving the phrase alignment can mitigate this issue.
Out of scope: Deer live in forest for shelter. If the forest was cut down, which situation would most likely happen?...
Such questions that require modeling a state presented in the question and reasoning over the state are out of scope of our solver.
Conclusion
We presented a new QA system, TupleInf, that can reason over a large, potentially noisy tuple KB to answer complex questions. Our results show that TupleInf is a new state-of-the-art structured solver for elementary-level science that does not rely on curated knowledge and generalizes to higher grades. Errors due to lossy IE and misalignments suggest future work in incorporating context and distributional measures.
Appendix: ILP Model Details
To build the ILP model, we first need to get the question terms (qterms) from the question by chunking it using an in-house chunker based on the POS tagger from FACTORIE.
Experiment Details
We use the SCIP ILP optimization engine BIBREF21 to optimize our ILP model. To get the score for each answer choice $a_i$ , we force the active variable for that choice $x_{a_i}$ to be one and use the objective function value of the ILP model as the score. For evaluations, we use a 2-core 2.5 GHz Amazon EC2 linux machine with 16 GB RAM. To evaluate TableILP and TupleInf on curated tables and tuples, we converted them into the expected format of each solver as follows.
Using curated tables with TupleInf
For each question, we select the 7 best matching tables using the tf-idf score of the table w.r.t. the question tokens and top 20 rows from each table using the Jaccard similarity of the row with the question. (same as BIBREF9 tableilp2016). We then convert the table rows into the tuple structure using the relations defined by TableILP. For every pair of cells connected by a relation, we create a tuple with the two cells as the subject and primary object with the relation as the predicate. The other cells of the table are used as additional objects to provide context to the solver. We pick top-scoring 50 tuples using the Jaccard score.
Using Open IE tuples with TableILP
We create an additional table in TableILP with all the tuples in $T$ . Since TableILP uses fixed-length $(subject; predicate; object)$ triples, we need to map tuples with multiple objects to this format. For each object, $O_i$ in the input Open IE tuple $(S; P; O_1; O_2 \ldots )$ , we add a triple $(S; P; O_i)$ to this table. | domain-targeted $~$ 80K sentences and 280 GB of plain text extracted from web pages used by BIBREF6 aristo2016:combining |
2fffff59e57b8dbcaefb437a6b3434fc137f813b | 2fffff59e57b8dbcaefb437a6b3434fc137f813b_1 | Q: What corpus was the source of the OpenIE extractions?
Text: Introduction
Effective question answering (QA) systems have been a long-standing quest of AI research. Structured curated KBs have been used successfully for this task BIBREF0 , BIBREF1 . However, these KBs are expensive to build and typically domain-specific. Automatically constructed open vocabulary (subject; predicate; object) style tuples have broader coverage, but have only been used for simple questions where a single tuple suffices BIBREF2 , BIBREF3 .
Our goal in this work is to develop a QA system that can perform reasoning with Open IE BIBREF4 tuples for complex multiple-choice questions that require tuples from multiple sentences. Such a system can answer complex questions in resource-poor domains where curated knowledge is unavailable. Elementary-level science exams is one such domain, requiring complex reasoning BIBREF5 . Due to the lack of a large-scale structured KB, state-of-the-art systems for this task either rely on shallow reasoning with large text corpora BIBREF6 , BIBREF7 or deeper, structured reasoning with a small amount of automatically acquired BIBREF8 or manually curated BIBREF9 knowledge.
Consider the following question from an Alaska state 4th grade science test:
Which object in our solar system reflects light and is a satellite that orbits around one planet? (A) Earth (B) Mercury (C) the Sun (D) the Moon
This question is challenging for QA systems because of its complex structure and the need for multi-fact reasoning. A natural way to answer it is by combining facts such as (Moon; is; in the solar system), (Moon; reflects; light), (Moon; is; satellite), and (Moon; orbits; around one planet).
A candidate system for such reasoning, and which we draw inspiration from, is the TableILP system of BIBREF9 . TableILP treats QA as a search for an optimal subgraph that connects terms in the question and answer via rows in a set of curated tables, and solves the optimization problem using Integer Linear Programming (ILP). We similarly want to search for an optimal subgraph. However, a large, automatically extracted tuple KB makes the reasoning context different on three fronts: (a) unlike reasoning with tables, chaining tuples is less important and reliable as join rules aren't available; (b) conjunctive evidence becomes paramount, as, unlike a long table row, a single tuple is less likely to cover the entire question; and (c) again, unlike table rows, tuples are noisy, making combining redundant evidence essential. Consequently, a table-knowledge centered inference model isn't the best fit for noisy tuples.
To address this challenge, we present a new ILP-based model of inference with tuples, implemented in a reasoner called TupleInf. We demonstrate that TupleInf significantly outperforms TableILP by 11.8% on a broad set of over 1,300 science questions, without requiring manually curated tables, using a substantially simpler ILP formulation, and generalizing well to higher grade levels. The gains persist even when both solvers are provided identical knowledge. This demonstrates for the first time how Open IE based QA can be extended from simple lookup questions to an effective system for complex questions.
Related Work
We discuss two classes of related work: retrieval-based web question-answering (simple reasoning with large scale KB) and science question-answering (complex reasoning with small KB).
Tuple Inference Solver
We first describe the tuples used by our solver. We define a tuple as (subject; predicate; objects) with zero or more objects. We refer to the subject, predicate, and objects as the fields of the tuple.
Tuple KB
We use the text corpora (S) from BIBREF6 aristo2016:combining to build our tuple KB. For each test set, we use the corresponding training questions $Q_\mathit {tr}$ to retrieve domain-relevant sentences from S. Specifically, for each multiple-choice question $(q,A) \in Q_\mathit {tr}$ and each choice $a \in A$ , we use all non-stopword tokens in $q$ and $a$ as an ElasticSearch query against S. We take the top 200 hits, run Open IE v4, and aggregate the resulting tuples over all $a \in A$ and over all questions in $Q_\mathit {tr}$ to create the tuple KB (T).
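A sketch of this KB-construction loop is given below; `search` and `extract_tuples` are placeholders for the ElasticSearch retrieval and the Open IE v4 extractor used here, and the stopword list is a toy stand-in. Tuples are assumed to be returned as hashable token tuples so they can be aggregated in a set.

```python
STOPWORDS = {"the", "a", "an", "of", "in", "is", "which", "to", "and", "for"}

def build_query(question, choice):
    """Non-stopword tokens of the question and one answer choice, joined as a query."""
    tokens = (question + " " + choice).lower().split()
    return " ".join(t for t in tokens if t not in STOPWORDS)

def build_tuple_kb(training_questions, search, extract_tuples, top_hits=200):
    """Aggregate Open IE tuples over all training questions and answer choices.

    `search(query, k)` returns retrieved sentences; `extract_tuples(sentence)`
    returns (subject; predicate; objects) tuples. Both are hypothetical hooks.
    """
    kb = set()
    for question, choices in training_questions:
        for choice in choices:
            for sentence in search(build_query(question, choice), top_hits):
                kb.update(extract_tuples(sentence))
    return kb
```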
Tuple Selection
Given a multiple-choice question $qa$ with question text $q$ and answer choices A= $\lbrace a_i\rbrace $ , we select the most relevant tuples from $T$ and $S$ as follows.
Selecting from Tuple KB: We use an inverted index to find the 1,000 tuples that have the most overlapping tokens with the question tokens $tok(qa)$. We also filter out any tuples that overlap only with $tok(q)$ as they do not support any answer. We compute the normalized TF-IDF score treating the question $q$ as a query and each tuple $t$ as a document: $\textit{tf}(x, q) = 1 \text{ if } x \in q$; $\textit{idf}(x) = \log(1 + N/n_x)$; $\textit{tf-idf}(t, q) = \sum_{x \in t \cap q} \textit{idf}(x)$
where $N$ is the number of tuples in the KB and $n_x$ is the number of tuples containing $x$ . We normalize the tf-idf score by the number of tokens in $t$ and $q$ . We finally take the 50 top-scoring tuples $T_{qa}$ .
On-the-fly tuples from text: To handle questions from new domains not covered by the training set, we extract additional tuples on the fly from S (similar to BIBREF17 knowlhunting). We perform the same ElasticSearch query described earlier for building T. We ignore sentences that cover none or all answer choices as they are not discriminative. We also ignore long sentences ( $>$ 300 characters) and sentences with negation as they tend to lead to noisy inference. We then run Open IE on these sentences and re-score the resulting tuples using the Jaccard score due to the lossy nature of Open IE, and finally take the 50 top-scoring tuples $T^{\prime }_{qa}$ .
Support Graph Search
Similar to TableILP, we view the QA task as searching for a graph that best connects the terms in the question (qterms) with an answer choice via the knowledge; see Figure 1 for a simple illustrative example. Unlike standard alignment models used for tasks such as Recognizing Textual Entailment (RTE) BIBREF18 , however, we must score alignments between a set $T_{qa} \cup T^{\prime }_{qa}$ of structured tuples and a (potentially multi-sentence) multiple-choice question $qa$ .
The qterms, answer choices, and tuples fields form the set of possible vertices, $\mathcal {V}$ , of the support graph. Edges connecting qterms to tuple fields and tuple fields to answer choices form the set of possible edges, $\mathcal {E}$ . The support graph, $G(V, E)$ , is a subgraph of $\mathcal {G}(\mathcal {V}, \mathcal {E})$ where $V$ and $E$ denote “active” nodes and edges, resp. We define the desired behavior of an optimal support graph via an ILP model as follows.
Similar to TableILP, we score the support graph based on the weight of the active nodes and edges. Each edge $e(t, h)$ is weighted based on a word-overlap score. While TableILP used WordNet BIBREF19 paths to compute the weight, this measure results in unreliable scores when faced with longer phrases found in Open IE tuples.
Compared to a curated KB, it is easy to find Open IE tuples that match irrelevant parts of the questions. To mitigate this issue, we improve the scoring of qterms in our ILP objective to focus on important terms. Since the later terms in a question tend to provide the most critical information, we scale qterm coefficients based on their position. Also, qterms that appear in almost all of the selected tuples tend not to be discriminative as any tuple would support such a qterm. Hence we scale the coefficients by the inverse frequency of the tokens in the selected tuples.
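One plausible realisation of the qterm re-weighting described above is sketched here; the linear position scaling and the log inverse-frequency term are assumptions, since the exact functions are not given.

```python
import math
from collections import Counter

def qterm_coefficients(qterms, selected_tuples):
    """Scale question-term weights by position and by inverse tuple frequency."""
    freq = Counter(tok for tup in selected_tuples for tok in set(tup))
    n = len(selected_tuples)
    coeffs = {}
    for pos, term in enumerate(qterms):
        position_scale = (pos + 1) / len(qterms)           # later terms matter more
        rarity_scale = math.log(1 + n / (1 + freq[term]))  # discount ubiquitous terms
        coeffs[term] = position_scale * rarity_scale
    return coeffs
```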
Since Open IE tuples do not come with schema and join rules, we can define a substantially simpler model compared to TableILP. This reduces the reasoning capability but also eliminates the reliance on hand-authored join rules and regular expressions used in TableILP. We discovered (see empirical evaluation) that this simple model can achieve the same score as TableILP on the Regents test (target test set used by TableILP) and generalizes better to different grade levels.
We define active vertices and edges using ILP constraints: an active edge must connect two active vertices and an active vertex must have at least one active edge. To avoid positive edge coefficients in the objective function resulting in spurious edges in the support graph, we limit the number of active edges from an active tuple, question choice, tuple fields, and qterms (first group of constraints in Table 1 ). Our model is also capable of using multiple tuples to support different parts of the question as illustrated in Figure 1 . To avoid spurious tuples that only connect with the question (or choice) or ignore the relation being expressed in the tuple, we add constraints that require each tuple to connect a qterm with an answer choice (second group of constraints in Table 1 ).
We also define new constraints based on the Open IE tuple structure. Since an Open IE tuple expresses a fact about the tuple's subject, we require the subject to be active in the support graph. To avoid issues such as (Planet; orbit; Sun) matching the sample question in the introduction (“Which object $\ldots $ orbits around a planet”), we also add an ordering constraint (third group in Table 1 ).
It is worth mentioning that TupleInf only combines parallel evidence, i.e., each tuple must connect words in the question to the answer choice. For reliable multi-hop reasoning using Open IE tuples, we can add inter-tuple connections to the support graph search, controlled by a small number of rules over the Open IE predicates. Learning such rules for the science domain is an open problem and a potential avenue of future work.
Experiments
Comparing our method with two state-of-the-art systems for 4th and 8th grade science exams, we demonstrate that (a) TupleInf with only automatically extracted tuples significantly outperforms TableILP with its original curated knowledge as well as with additional tuples, and (b) TupleInf's complementary approach to IR leads to an improved ensemble. Numbers in bold indicate statistical significance based on the Binomial exact test BIBREF20 at $p=0.05$ .
We consider two question sets. (1) 4th Grade set (1220 train, 1304 test) is a 10x larger superset of the NY Regents questions BIBREF6 , and includes professionally written licensed questions. (2) 8th Grade set (293 train, 282 test) contains 8th grade questions from various states.
We consider two knowledge sources. The Sentence corpus (S) consists of domain-targeted ~80K sentences and 280 GB of plain text extracted from web pages used by BIBREF6 aristo2016:combining. This corpus is used by the IR solver and also used to create the tuple KB T and on-the-fly tuples $T^{\prime }_{qa}$ . Additionally, TableILP uses ~70 curated tables (C) designed for 4th grade NY Regents exams.
We compare TupleInf with two state-of-the-art baselines. IR is a simple yet powerful information-retrieval baseline BIBREF6 that selects the answer option with the best matching sentence in a corpus. TableILP is the state-of-the-art structured inference baseline BIBREF9 developed for science questions.
Results
Table 2 shows that TupleInf, with no curated knowledge, outperforms TableILP on both question sets by more than 11%. The lower half of the table shows that even when both solvers are given the same knowledge (C+T), the improved selection and simplified model of TupleInf results in a statistically significant improvement. Our simple model, TupleInf(C + T), also achieves scores comparable to TableILP on the latter's target Regents questions (61.4% vs TableILP's reported 61.5%) without any specialized rules.
Table 3 shows that while TupleInf achieves similar scores as the IR solver, the approaches are complementary (structured lossy knowledge reasoning vs. lossless sentence retrieval). The two solvers, in fact, differ on 47.3% of the training questions. To exploit this complementarity, we train an ensemble system BIBREF6 which, as shown in the table, provides a substantial boost over the individual solvers. Further, IR + TupleInf is consistently better than IR + TableILP. Finally, in combination with IR and the statistical association based PMI solver (that scores 54.1% by itself) of BIBREF6 aristo2016:combining, TupleInf achieves a score of 58.2% as compared to TableILP's ensemble score of 56.7% on the 4th grade set, again attesting to TupleInf's strength.
Error Analysis
We describe four classes of failures that we observed, and the future work they suggest.
Missing Important Words: Which material will spread out to completely fill a larger container? (A) air (B) ice (C) sand (D) water
In this question, we have tuples that support water will spread out and fill a larger container but miss the critical word “completely”. An approach capable of detecting salient question words could help avoid that.
Lossy IE: Which action is the best method to separate a mixture of salt and water? ...
The IR solver correctly answers this question by using the sentence: Separate the salt and water mixture by evaporating the water. However, TupleInf is not able to answer this question as Open IE is unable to extract tuples from this imperative sentence. While the additional structure from Open IE is useful for more robust matching, converting sentences to Open IE tuples may lose important bits of information.
Bad Alignment: Which of the following gases is necessary for humans to breathe in order to live? (A) Oxygen (B) Carbon dioxide (C) Helium (D) Water vapor
TupleInf returns “Carbon dioxide” as the answer because of the tuple (humans; breathe out; carbon dioxide). The chunk “to breathe” in the question has a high alignment score to the “breathe out” relation in the tuple even though they have completely different meanings. Improving the phrase alignment can mitigate this issue.
Out of scope: Deer live in forest for shelter. If the forest was cut down, which situation would most likely happen?...
Such questions that require modeling a state presented in the question and reasoning over the state are out of scope of our solver.
Conclusion
We presented a new QA system, TupleInf, that can reason over a large, potentially noisy tuple KB to answer complex questions. Our results show that TupleInf is a new state-of-the-art structured solver for elementary-level science that does not rely on curated knowledge and generalizes to higher grades. Errors due to lossy IE and misalignments suggest future work in incorporating context and distributional measures.
Appendix: ILP Model Details
To build the ILP model, we first need to get the question terms (qterms) from the question by chunking it using an in-house chunker based on the POS tagger from FACTORIE.
Experiment Details
We use the SCIP ILP optimization engine BIBREF21 to optimize our ILP model. To get the score for each answer choice $a_i$ , we force the active variable for that choice $x_{a_i}$ to be one and use the objective function value of the ILP model as the score. For evaluations, we use a 2-core 2.5 GHz Amazon EC2 linux machine with 16 GB RAM. To evaluate TableILP and TupleInf on curated tables and tuples, we converted them into the expected format of each solver as follows.
Using curated tables with TupleInf
For each question, we select the 7 best matching tables using the tf-idf score of the table w.r.t. the question tokens, and the top 20 rows from each table using the Jaccard similarity of the row with the question (same as BIBREF9 tableilp2016). We then convert the table rows into the tuple structure using the relations defined by TableILP. For every pair of cells connected by a relation, we create a tuple with the two cells as the subject and primary object, and the relation as the predicate. The other cells of the table are used as additional objects to provide context to the solver. We pick the 50 top-scoring tuples using the Jaccard score.
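One plausible rendering of this row-to-tuple conversion, with a made-up relation map and a simple Jaccard helper (the column names and relation metadata are assumptions):

```python
# Sketch: convert curated table rows into (subject; predicate; objects...) tuples.
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def rows_to_tuples(rows, relations, question_tokens, top_k=50):
    """rows: list of dicts {column: cell}; relations: list of (col_i, col_j, predicate)."""
    tuples = []
    for row in rows:
        for col_i, col_j, predicate in relations:
            subj, obj = row[col_i], row[col_j]
            # remaining cells become additional objects that give context
            context = [v for k, v in row.items() if k not in (col_i, col_j)]
            tuples.append((subj, predicate, [obj] + context))

    def score(t):
        subj, pred, objs = t
        toks = set((subj + " " + pred + " " + " ".join(objs)).lower().split())
        return jaccard(toks, question_tokens)

    return sorted(tuples, key=score, reverse=True)[:top_k]
```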
Using Open IE tuples with TableILP
We create an additional table in TableILP with all the tuples in $T$ . Since TableILP uses fixed-length $(subject; predicate; object)$ triples, we need to map tuples with multiple objects to this format. For each object, $O_i$ in the input Open IE tuple $(S; P; O_1; O_2 \ldots )$ , we add a triple $(S; P; O_i)$ to this table. | Unanswerable |
eb95af36347ed0e0808e19963fe4d058e2ce3c9f | eb95af36347ed0e0808e19963fe4d058e2ce3c9f_0 | Q: What is the accuracy of the proposed technique?
Text: Introduction
Effective question answering (QA) systems have been a long-standing quest of AI research. Structured curated KBs have been used successfully for this task BIBREF0 , BIBREF1 . However, these KBs are expensive to build and typically domain-specific. Automatically constructed open vocabulary (subject; predicate; object) style tuples have broader coverage, but have only been used for simple questions where a single tuple suffices BIBREF2 , BIBREF3 .
Our goal in this work is to develop a QA system that can perform reasoning with Open IE BIBREF4 tuples for complex multiple-choice questions that require tuples from multiple sentences. Such a system can answer complex questions in resource-poor domains where curated knowledge is unavailable. Elementary-level science exams is one such domain, requiring complex reasoning BIBREF5 . Due to the lack of a large-scale structured KB, state-of-the-art systems for this task either rely on shallow reasoning with large text corpora BIBREF6 , BIBREF7 or deeper, structured reasoning with a small amount of automatically acquired BIBREF8 or manually curated BIBREF9 knowledge.
Consider the following question from an Alaska state 4th grade science test:
Which object in our solar system reflects light and is a satellite that orbits around one planet? (A) Earth (B) Mercury (C) the Sun (D) the Moon
This question is challenging for QA systems because of its complex structure and the need for multi-fact reasoning. A natural way to answer it is by combining facts such as (Moon; is; in the solar system), (Moon; reflects; light), (Moon; is; satellite), and (Moon; orbits; around one planet).
A candidate system for such reasoning, and which we draw inspiration from, is the TableILP system of BIBREF9 . TableILP treats QA as a search for an optimal subgraph that connects terms in the question and answer via rows in a set of curated tables, and solves the optimization problem using Integer Linear Programming (ILP). We similarly want to search for an optimal subgraph. However, a large, automatically extracted tuple KB makes the reasoning context different on three fronts: (a) unlike reasoning with tables, chaining tuples is less important and reliable as join rules aren't available; (b) conjunctive evidence becomes paramount, as, unlike a long table row, a single tuple is less likely to cover the entire question; and (c) again, unlike table rows, tuples are noisy, making combining redundant evidence essential. Consequently, a table-knowledge centered inference model isn't the best fit for noisy tuples.
To address this challenge, we present a new ILP-based model of inference with tuples, implemented in a reasoner called TupleInf. We demonstrate that TupleInf significantly outperforms TableILP by 11.8% on a broad set of over 1,300 science questions, without requiring manually curated tables, using a substantially simpler ILP formulation, and generalizing well to higher grade levels. The gains persist even when both solvers are provided identical knowledge. This demonstrates for the first time how Open IE based QA can be extended from simple lookup questions to an effective system for complex questions.
Related Work
We discuss two classes of related work: retrieval-based web question-answering (simple reasoning with large scale KB) and science question-answering (complex reasoning with small KB).
Tuple Inference Solver
We first describe the tuples used by our solver. We define a tuple as (subject; predicate; objects) with zero or more objects. We refer to the subject, predicate, and objects as the fields of the tuple.
Tuple KB
We use the text corpora (S) from BIBREF6 aristo2016:combining to build our tuple KB. For each test set, we use the corresponding training questions $Q_\mathit {tr}$ to retrieve domain-relevant sentences from S. Specifically, for each multiple-choice question $(q,A) \in Q_\mathit {tr}$ and each choice $a \in A$ , we use all non-stopword tokens in $q$ and $a$ as an ElasticSearch query against S. We take the top 200 hits, run Open IE v4, and aggregate the resulting tuples over all $a \in A$ and over all questions in $Q_\mathit {tr}$ to create the tuple KB (T).
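A minimal sketch of this retrieval step, assuming an ElasticSearch index named "sentences" with a "text" field and the elasticsearch-py 8.x client; the actual index layout, stopword list, and Open IE invocation are not specified here and are assumptions.

```python
# Sketch of domain-relevant sentence retrieval for building the tuple KB.
from elasticsearch import Elasticsearch  # assumes the 8.x-style client

STOPWORDS = {"the", "a", "an", "of", "in", "to", "is", "which", "what"}  # toy list
es = Elasticsearch("http://localhost:9200")  # assumed local index

def retrieve_sentences(question, choice, size=200):
    query_tokens = [t for t in (question + " " + choice).lower().split()
                    if t not in STOPWORDS]
    resp = es.search(index="sentences",                    # assumed index name
                     query={"match": {"text": " ".join(query_tokens)}},
                     size=size)
    return [hit["_source"]["text"] for hit in resp["hits"]["hits"]]

# Each returned sentence would then be run through Open IE v4 and the resulting
# tuples aggregated over all choices and training questions to form T.
```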
Tuple Selection
Given a multiple-choice question $qa$ with question text $q$ and answer choices A= $\lbrace a_i\rbrace $ , we select the most relevant tuples from $T$ and $S$ as follows.
Selecting from Tuple KB: We use an inverted index to find the 1,000 tuples that have the most overlapping tokens with the question tokens $tok(qa)$. We also filter out any tuples that overlap only with $tok(q)$ as they do not support any answer. We compute the normalized TF-IDF score treating the question $q$ as a query and each tuple $t$ as a document: $\textit{tf}(x, q) = 1 \text{ if } x \in q$; $\textit{idf}(x) = \log (1 + N/n_x)$; $\textit{tf-idf}(t, q) = \sum _{x \in t \cap q} \textit{idf}(x)$,
where $N$ is the number of tuples in the KB and $n_x$ is the number of tuples containing $x$. We normalize the tf-idf score by the number of tokens in $t$ and $q$. We finally take the 50 top-scoring tuples $T_{qa}$.
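This scoring scheme translates almost directly into code; the whitespace tokenization and the exact form of the length normalization below are simplifying assumptions.

```python
# Sketch of the normalized tf-idf tuple scoring described above.
import math

def select_tuples(question_tokens, tuples, top_k=50):
    """tuples: list of token lists, one per KB tuple; returns the top_k scored tuples."""
    N = len(tuples)
    n = {}                                    # n_x: number of tuples containing token x
    for t in tuples:
        for x in set(t):
            n[x] = n.get(x, 0) + 1
    q = set(question_tokens)

    def score(t):
        overlap = set(t) & q
        raw = sum(math.log(1 + N / n[x]) for x in overlap)    # sum of idf(x)
        return raw / (len(t) * len(question_tokens))          # length normalization

    return sorted(tuples, key=score, reverse=True)[:top_k]
```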
On-the-fly tuples from text: To handle questions from new domains not covered by the training set, we extract additional tuples on the fly from S (similar to BIBREF17 knowlhunting). We perform the same ElasticSearch query described earlier for building T. We ignore sentences that cover none or all answer choices as they are not discriminative. We also ignore long sentences ( $>$ 300 characters) and sentences with negation as they tend to lead to noisy inference. We then run Open IE on these sentences and re-score the resulting tuples using the Jaccard score due to the lossy nature of Open IE, and finally take the 50 top-scoring tuples $T^{\prime }_{qa}$ .
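The filtering heuristics can be sketched as follows; the negation cue list and whitespace tokenization are assumptions.

```python
# Sketch of the on-the-fly sentence filtering and Jaccard re-scoring.
NEGATIONS = {"not", "no", "never", "cannot", "n't"}   # assumed negation cues

def keep_sentence(sentence, answer_choices):
    if len(sentence) > 300:
        return False
    toks = set(sentence.lower().split())
    if toks & NEGATIONS:
        return False
    covered = sum(1 for a in answer_choices
                  if set(a.lower().split()) & toks)
    return 0 < covered < len(answer_choices)   # drop none- or all-covering sentences

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def rescore_tuples(open_ie_tuples, question_tokens, top_k=50):
    """open_ie_tuples: list of token lists extracted by Open IE from the kept sentences."""
    scored = sorted(open_ie_tuples,
                    key=lambda t: jaccard(t, question_tokens), reverse=True)
    return scored[:top_k]
```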
Support Graph Search
Similar to TableILP, we view the QA task as searching for a graph that best connects the terms in the question (qterms) with an answer choice via the knowledge; see Figure 1 for a simple illustrative example. Unlike standard alignment models used for tasks such as Recognizing Textual Entailment (RTE) BIBREF18 , however, we must score alignments between a set $T_{qa} \cup T^{\prime }_{qa}$ of structured tuples and a (potentially multi-sentence) multiple-choice question $qa$ .
The qterms, answer choices, and tuples fields form the set of possible vertices, $\mathcal {V}$ , of the support graph. Edges connecting qterms to tuple fields and tuple fields to answer choices form the set of possible edges, $\mathcal {E}$ . The support graph, $G(V, E)$ , is a subgraph of $\mathcal {G}(\mathcal {V}, \mathcal {E})$ where $V$ and $E$ denote “active” nodes and edges, resp. We define the desired behavior of an optimal support graph via an ILP model as follows.
Similar to TableILP, we score the support graph based on the weight of the active nodes and edges. Each edge $e(t, h)$ is weighted based on a word-overlap score. While TableILP used WordNet BIBREF19 paths to compute the weight, this measure results in unreliable scores when faced with longer phrases found in Open IE tuples.
Compared to a curated KB, it is easy to find Open IE tuples that match irrelevant parts of the questions. To mitigate this issue, we improve the scoring of qterms in our ILP objective to focus on important terms. Since the later terms in a question tend to provide the most critical information, we scale qterm coefficients based on their position. Also, qterms that appear in almost all of the selected tuples tend not to be discriminative as any tuple would support such a qterm. Hence we scale the coefficients by the inverse frequency of the tokens in the selected tuples.
Since Open IE tuples do not come with schema and join rules, we can define a substantially simpler model compared to TableILP. This reduces the reasoning capability but also eliminates the reliance on hand-authored join rules and regular expressions used in TableILP. We discovered (see empirical evaluation) that this simple model can achieve the same score as TableILP on the Regents test (target test set used by TableILP) and generalizes better to different grade levels.
We define active vertices and edges using ILP constraints: an active edge must connect two active vertices and an active vertex must have at least one active edge. To avoid positive edge coefficients in the objective function resulting in spurious edges in the support graph, we limit the number of active edges from an active tuple, question choice, tuple fields, and qterms (first group of constraints in Table 1 ). Our model is also capable of using multiple tuples to support different parts of the question as illustrated in Figure 1 . To avoid spurious tuples that only connect with the question (or choice) or ignore the relation being expressed in the tuple, we add constraints that require each tuple to connect a qterm with an answer choice (second group of constraints in Table 1 ).
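To make these constraint groups concrete, the toy PuLP model below encodes "an active edge implies active endpoints" and "an active tuple must link a qterm to an answer choice" for a tiny hand-built graph. The weights, the graph itself, and the omission of the cardinality limits from Table 1 are all simplifications for illustration.

```python
# Tiny illustrative support-graph ILP (PuLP), not the full Table 1 model.
import pulp

qterms = ["moon", "reflects light"]
choices = ["the Moon", "the Sun"]
tuples_ = ["(Moon; reflects; light)"]
edges = [("moon", tuples_[0], 0.9), (tuples_[0], "the Moon", 0.8),
         ("reflects light", tuples_[0], 0.7), (tuples_[0], "the Sun", 0.1)]

prob = pulp.LpProblem("support_graph", pulp.LpMaximize)
v = {n: pulp.LpVariable(f"v_{i}", cat="Binary")
     for i, n in enumerate(qterms + choices + tuples_)}
e = {(a, b): pulp.LpVariable(f"e_{i}", cat="Binary")
     for i, (a, b, _) in enumerate(edges)}

prob += pulp.lpSum(w * e[(a, b)] for a, b, w in edges)          # maximize edge weight
for a, b, _ in edges:                                           # active edge => active ends
    prob += e[(a, b)] <= v[a]
    prob += e[(a, b)] <= v[b]
prob += pulp.lpSum(v[c] for c in choices) == 1                  # pick exactly one answer
for t in tuples_:                                               # tuple must touch a qterm...
    prob += pulp.lpSum(e[(q, t)] for q in qterms if (q, t) in e) >= v[t]
    prob += pulp.lpSum(e[(t, c)] for c in choices if (t, c) in e) >= v[t]  # ...and a choice
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([c for c in choices if v[c].value() == 1])
```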
We also define new constraints based on the Open IE tuple structure. Since an Open IE tuple expresses a fact about the tuple's subject, we require the subject to be active in the support graph. To avoid issues such as (Planet; orbit; Sun) matching the sample question in the introduction (“Which object $\ldots $ orbits around a planet”), we also add an ordering constraint (third group in Table 1 ).
It's worth mentioning that TupleInf only combines parallel evidence, i.e., each tuple must connect words in the question to the answer choice. For reliable multi-hop reasoning using Open IE tuples, we can add inter-tuple connections to the support graph search, controlled by a small number of rules over the Open IE predicates. Learning such rules for the science domain is an open problem and a potential avenue for future work.
Experiments
Comparing our method with two state-of-the-art systems for 4th and 8th grade science exams, we demonstrate that (a) TupleInf with only automatically extracted tuples significantly outperforms TableILP with its original curated knowledge as well as with additional tuples, and (b) TupleInf's complementary approach to IR leads to an improved ensemble. Numbers in bold indicate statistical significance based on the Binomial exact test BIBREF20 at $p=0.05$ .
We consider two question sets. (1) 4th Grade set (1220 train, 1304 test) is a 10x larger superset of the NY Regents questions BIBREF6 , and includes professionally written licensed questions. (2) 8th Grade set (293 train, 282 test) contains 8th grade questions from various states.
We consider two knowledge sources. The Sentence corpus (S) consists of domain-targeted $\sim$ 80K sentences and 280 GB of plain text extracted from web pages used by BIBREF6 aristo2016:combining. This corpus is used by the IR solver and also used to create the tuple KB T and on-the-fly tuples $T^{\prime }_{qa}$. Additionally, TableILP uses $\sim$ 70 curated tables (C) designed for 4th grade NY Regents exams.
We compare TupleInf with two state-of-the-art baselines. IR is a simple yet powerful information-retrieval baseline BIBREF6 that selects the answer option with the best matching sentence in a corpus. TableILP is the state-of-the-art structured inference baseline BIBREF9 developed for science questions.
Results
Table 2 shows that TupleInf, with no curated knowledge, outperforms TableILP on both question sets by more than 11%. The lower half of the table shows that even when both solvers are given the same knowledge (C+T), the improved selection and simplified model of TupleInf results in a statistically significant improvement. Our simple model, TupleInf(C + T), also achieves scores comparable to TableILP on the latter's target Regents questions (61.4% vs TableILP's reported 61.5%) without any specialized rules.
Table 3 shows that while TupleInf achieves similar scores as the IR solver, the approaches are complementary (structured lossy knowledge reasoning vs. lossless sentence retrieval). The two solvers, in fact, differ on 47.3% of the training questions. To exploit this complementarity, we train an ensemble system BIBREF6 which, as shown in the table, provides a substantial boost over the individual solvers. Further, IR + TupleInf is consistently better than IR + TableILP. Finally, in combination with IR and the statistical association based PMI solver (that scores 54.1% by itself) of BIBREF6 aristo2016:combining, TupleInf achieves a score of 58.2% as compared to TableILP's ensemble score of 56.7% on the 4th grade set, again attesting to TupleInf's strength.
Error Analysis
We describe four classes of failures that we observed, and the future work they suggest.
Missing Important Words: Which material will spread out to completely fill a larger container? (A) air (B) ice (C) sand (D) water
In this question, we have tuples that support “water will spread out and fill a larger container” but miss the critical word “completely”. An approach capable of detecting salient question words could help avoid such errors.
Lossy IE: Which action is the best method to separate a mixture of salt and water? ...
The IR solver correctly answers this question by using the sentence: Separate the salt and water mixture by evaporating the water. However, TupleInf is not able to answer this question as Open IE is unable to extract tuples from this imperative sentence. While the additional structure from Open IE is useful for more robust matching, converting sentences to Open IE tuples may lose important bits of information.
Bad Alignment: Which of the following gases is necessary for humans to breathe in order to live? (A) Oxygen (B) Carbon dioxide (C) Helium (D) Water vapor
TupleInf returns “Carbon dioxide” as the answer because of the tuple (humans; breathe out; carbon dioxide). The chunk “to breathe” in the question has a high alignment score to the “breathe out” relation in the tuple even though they have completely different meanings. Improving the phrase alignment can mitigate this issue.
Out of scope: Deer live in forest for shelter. If the forest was cut down, which situation would most likely happen?...
Such questions that require modeling a state presented in the question and reasoning over the state are out of scope of our solver.
Conclusion
We presented a new QA system, TupleInf, that can reason over a large, potentially noisy tuple KB to answer complex questions. Our results show that TupleInf is a new state-of-the-art structured solver for elementary-level science that does not rely on curated knowledge and generalizes to higher grades. Errors due to lossy IE and misalignments suggest future work in incorporating context and distributional measures.
Appendix: ILP Model Details
To build the ILP model, we first need to extract the question terms (qterms) by chunking the question with an in-house chunker based on the POS tagger from FACTORIE.
Experiment Details
We use the SCIP ILP optimization engine BIBREF21 to optimize our ILP model. To get the score for each answer choice $a_i$ , we force the active variable for that choice $x_{a_i}$ to be one and use the objective function value of the ILP model as the score. For evaluations, we use a 2-core 2.5 GHz Amazon EC2 linux machine with 16 GB RAM. To evaluate TableILP and TupleInf on curated tables and tuples, we converted them into the expected format of each solver as follows.
Using curated tables with TupleInf
For each question, we select the 7 best matching tables using the tf-idf score of the table w.r.t. the question tokens, and the top 20 rows from each table using the Jaccard similarity of the row with the question (same as BIBREF9 tableilp2016). We then convert the table rows into the tuple structure using the relations defined by TableILP. For every pair of cells connected by a relation, we create a tuple with the two cells as the subject and primary object, and the relation as the predicate. The other cells of the table are used as additional objects to provide context to the solver. We pick the 50 top-scoring tuples using the Jaccard score.
Using Open IE tuples with TableILP
We create an additional table in TableILP with all the tuples in $T$ . Since TableILP uses fixed-length $(subject; predicate; object)$ triples, we need to map tuples with multiple objects to this format. For each object, $O_i$ in the input Open IE tuple $(S; P; O_1; O_2 \ldots )$ , we add a triple $(S; P; O_i)$ to this table. | 51.7 and 51.6 on 4th and 8th grade question sets with no curated knowledge. 47.5 and 48.0 on 4th and 8th grade question sets when both solvers are given the same knowledge |
cd1792929b9fa5dd5b1df0ae06fc6aece4c97424 | cd1792929b9fa5dd5b1df0ae06fc6aece4c97424_0 | Q: Is an entity linking process used?
Text: Introduction
Effective question answering (QA) systems have been a long-standing quest of AI research. Structured curated KBs have been used successfully for this task BIBREF0 , BIBREF1 . However, these KBs are expensive to build and typically domain-specific. Automatically constructed open vocabulary (subject; predicate; object) style tuples have broader coverage, but have only been used for simple questions where a single tuple suffices BIBREF2 , BIBREF3 .
Our goal in this work is to develop a QA system that can perform reasoning with Open IE BIBREF4 tuples for complex multiple-choice questions that require tuples from multiple sentences. Such a system can answer complex questions in resource-poor domains where curated knowledge is unavailable. Elementary-level science exams is one such domain, requiring complex reasoning BIBREF5 . Due to the lack of a large-scale structured KB, state-of-the-art systems for this task either rely on shallow reasoning with large text corpora BIBREF6 , BIBREF7 or deeper, structured reasoning with a small amount of automatically acquired BIBREF8 or manually curated BIBREF9 knowledge.
Consider the following question from an Alaska state 4th grade science test:
Which object in our solar system reflects light and is a satellite that orbits around one planet? (A) Earth (B) Mercury (C) the Sun (D) the Moon
This question is challenging for QA systems because of its complex structure and the need for multi-fact reasoning. A natural way to answer it is by combining facts such as (Moon; is; in the solar system), (Moon; reflects; light), (Moon; is; satellite), and (Moon; orbits; around one planet).
A candidate system for such reasoning, and which we draw inspiration from, is the TableILP system of BIBREF9 . TableILP treats QA as a search for an optimal subgraph that connects terms in the question and answer via rows in a set of curated tables, and solves the optimization problem using Integer Linear Programming (ILP). We similarly want to search for an optimal subgraph. However, a large, automatically extracted tuple KB makes the reasoning context different on three fronts: (a) unlike reasoning with tables, chaining tuples is less important and reliable as join rules aren't available; (b) conjunctive evidence becomes paramount, as, unlike a long table row, a single tuple is less likely to cover the entire question; and (c) again, unlike table rows, tuples are noisy, making combining redundant evidence essential. Consequently, a table-knowledge centered inference model isn't the best fit for noisy tuples.
To address this challenge, we present a new ILP-based model of inference with tuples, implemented in a reasoner called TupleInf. We demonstrate that TupleInf significantly outperforms TableILP by 11.8% on a broad set of over 1,300 science questions, without requiring manually curated tables, using a substantially simpler ILP formulation, and generalizing well to higher grade levels. The gains persist even when both solvers are provided identical knowledge. This demonstrates for the first time how Open IE based QA can be extended from simple lookup questions to an effective system for complex questions.
Related Work
We discuss two classes of related work: retrieval-based web question-answering (simple reasoning with large scale KB) and science question-answering (complex reasoning with small KB).
Tuple Inference Solver
We first describe the tuples used by our solver. We define a tuple as (subject; predicate; objects) with zero or more objects. We refer to the subject, predicate, and objects as the fields of the tuple.
Tuple KB
We use the text corpora (S) from BIBREF6 aristo2016:combining to build our tuple KB. For each test set, we use the corresponding training questions $Q_\mathit {tr}$ to retrieve domain-relevant sentences from S. Specifically, for each multiple-choice question $(q,A) \in Q_\mathit {tr}$ and each choice $a \in A$ , we use all non-stopword tokens in $q$ and $a$ as an ElasticSearch query against S. We take the top 200 hits, run Open IE v4, and aggregate the resulting tuples over all $a \in A$ and over all questions in $Q_\mathit {tr}$ to create the tuple KB (T).
Tuple Selection
Given a multiple-choice question $qa$ with question text $q$ and answer choices A= $\lbrace a_i\rbrace $ , we select the most relevant tuples from $T$ and $S$ as follows.
Selecting from Tuple KB: We use an inverted index to find the 1,000 tuples that have the most overlapping tokens with the question tokens $tok(qa)$. We also filter out any tuples that overlap only with $tok(q)$ as they do not support any answer. We compute the normalized TF-IDF score treating the question $q$ as a query and each tuple $t$ as a document: $\textit{tf}(x, q) = 1 \text{ if } x \in q$; $\textit{idf}(x) = \log (1 + N/n_x)$; $\textit{tf-idf}(t, q) = \sum _{x \in t \cap q} \textit{idf}(x)$,
where $N$ is the number of tuples in the KB and $n_x$ is the number of tuples containing $x$. We normalize the tf-idf score by the number of tokens in $t$ and $q$. We finally take the 50 top-scoring tuples $T_{qa}$.
On-the-fly tuples from text: To handle questions from new domains not covered by the training set, we extract additional tuples on the fly from S (similar to BIBREF17 knowlhunting). We perform the same ElasticSearch query described earlier for building T. We ignore sentences that cover none or all answer choices as they are not discriminative. We also ignore long sentences ( $>$ 300 characters) and sentences with negation as they tend to lead to noisy inference. We then run Open IE on these sentences and re-score the resulting tuples using the Jaccard score due to the lossy nature of Open IE, and finally take the 50 top-scoring tuples $T^{\prime }_{qa}$ .
Support Graph Search
Similar to TableILP, we view the QA task as searching for a graph that best connects the terms in the question (qterms) with an answer choice via the knowledge; see Figure 1 for a simple illustrative example. Unlike standard alignment models used for tasks such as Recognizing Textual Entailment (RTE) BIBREF18 , however, we must score alignments between a set $T_{qa} \cup T^{\prime }_{qa}$ of structured tuples and a (potentially multi-sentence) multiple-choice question $qa$ .
The qterms, answer choices, and tuples fields form the set of possible vertices, $\mathcal {V}$ , of the support graph. Edges connecting qterms to tuple fields and tuple fields to answer choices form the set of possible edges, $\mathcal {E}$ . The support graph, $G(V, E)$ , is a subgraph of $\mathcal {G}(\mathcal {V}, \mathcal {E})$ where $V$ and $E$ denote “active” nodes and edges, resp. We define the desired behavior of an optimal support graph via an ILP model as follows.
Similar to TableILP, we score the support graph based on the weight of the active nodes and edges. Each edge $e(t, h)$ is weighted based on a word-overlap score. While TableILP used WordNet BIBREF19 paths to compute the weight, this measure results in unreliable scores when faced with longer phrases found in Open IE tuples.
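The exact word-overlap formula is not spelled out here, so the sketch below uses a symmetric overlap coefficient as one plausible stand-in for the edge weight.

```python
# Illustrative word-overlap edge weight between a tuple field and a question chunk.
STOP = {"the", "a", "an", "of", "to", "in", "is"}       # toy stopword list

def overlap_score(field, chunk):
    f = {w for w in field.lower().split() if w not in STOP}
    c = {w for w in chunk.lower().split() if w not in STOP}
    if not f or not c:
        return 0.0
    # average of the two directional coverages (an assumption, not the paper's formula)
    return 0.5 * (len(f & c) / len(f) + len(f & c) / len(c))

print(overlap_score("reflects light", "reflects light and is a satellite"))
```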
Compared to a curated KB, it is easy to find Open IE tuples that match irrelevant parts of the questions. To mitigate this issue, we improve the scoring of qterms in our ILP objective to focus on important terms. Since the later terms in a question tend to provide the most critical information, we scale qterm coefficients based on their position. Also, qterms that appear in almost all of the selected tuples tend not to be discriminative as any tuple would support such a qterm. Hence we scale the coefficients by the inverse frequency of the tokens in the selected tuples.
Since Open IE tuples do not come with schema and join rules, we can define a substantially simpler model compared to TableILP. This reduces the reasoning capability but also eliminates the reliance on hand-authored join rules and regular expressions used in TableILP. We discovered (see empirical evaluation) that this simple model can achieve the same score as TableILP on the Regents test (target test set used by TableILP) and generalizes better to different grade levels.
We define active vertices and edges using ILP constraints: an active edge must connect two active vertices and an active vertex must have at least one active edge. To avoid positive edge coefficients in the objective function resulting in spurious edges in the support graph, we limit the number of active edges from an active tuple, question choice, tuple fields, and qterms (first group of constraints in Table 1 ). Our model is also capable of using multiple tuples to support different parts of the question as illustrated in Figure 1 . To avoid spurious tuples that only connect with the question (or choice) or ignore the relation being expressed in the tuple, we add constraints that require each tuple to connect a qterm with an answer choice (second group of constraints in Table 1 ).
We also define new constraints based on the Open IE tuple structure. Since an Open IE tuple expresses a fact about the tuple's subject, we require the subject to be active in the support graph. To avoid issues such as (Planet; orbit; Sun) matching the sample question in the introduction (“Which object $\ldots $ orbits around a planet”), we also add an ordering constraint (third group in Table 1 ).
It's worth mentioning that TupleInf only combines parallel evidence, i.e., each tuple must connect words in the question to the answer choice. For reliable multi-hop reasoning using Open IE tuples, we can add inter-tuple connections to the support graph search, controlled by a small number of rules over the Open IE predicates. Learning such rules for the science domain is an open problem and a potential avenue for future work.
Experiments
Comparing our method with two state-of-the-art systems for 4th and 8th grade science exams, we demonstrate that (a) TupleInf with only automatically extracted tuples significantly outperforms TableILP with its original curated knowledge as well as with additional tuples, and (b) TupleInf's complementary approach to IR leads to an improved ensemble. Numbers in bold indicate statistical significance based on the Binomial exact test BIBREF20 at $p=0.05$ .
We consider two question sets. (1) 4th Grade set (1220 train, 1304 test) is a 10x larger superset of the NY Regents questions BIBREF6 , and includes professionally written licensed questions. (2) 8th Grade set (293 train, 282 test) contains 8th grade questions from various states.
We consider two knowledge sources. The Sentence corpus (S) consists of domain-targeted $\sim$ 80K sentences and 280 GB of plain text extracted from web pages used by BIBREF6 aristo2016:combining. This corpus is used by the IR solver and also used to create the tuple KB T and on-the-fly tuples $T^{\prime }_{qa}$. Additionally, TableILP uses $\sim$ 70 curated tables (C) designed for 4th grade NY Regents exams.
We compare TupleInf with two state-of-the-art baselines. IR is a simple yet powerful information-retrieval baseline BIBREF6 that selects the answer option with the best matching sentence in a corpus. TableILP is the state-of-the-art structured inference baseline BIBREF9 developed for science questions.
Results
Table 2 shows that TupleInf, with no curated knowledge, outperforms TableILP on both question sets by more than 11%. The lower half of the table shows that even when both solvers are given the same knowledge (C+T), the improved selection and simplified model of TupleInf results in a statistically significant improvement. Our simple model, TupleInf(C + T), also achieves scores comparable to TableILP on the latter's target Regents questions (61.4% vs TableILP's reported 61.5%) without any specialized rules.
Table 3 shows that while TupleInf achieves similar scores as the IR solver, the approaches are complementary (structured lossy knowledge reasoning vs. lossless sentence retrieval). The two solvers, in fact, differ on 47.3% of the training questions. To exploit this complementarity, we train an ensemble system BIBREF6 which, as shown in the table, provides a substantial boost over the individual solvers. Further, IR + TupleInf is consistently better than IR + TableILP. Finally, in combination with IR and the statistical association based PMI solver (that scores 54.1% by itself) of BIBREF6 aristo2016:combining, TupleInf achieves a score of 58.2% as compared to TableILP's ensemble score of 56.7% on the 4th grade set, again attesting to TupleInf's strength.
Error Analysis
We describe four classes of failures that we observed, and the future work they suggest.
Missing Important Words: Which material will spread out to completely fill a larger container? (A) air (B) ice (C) sand (D) water
In this question, we have tuples that support “water will spread out and fill a larger container” but miss the critical word “completely”. An approach capable of detecting salient question words could help avoid such errors.
Lossy IE: Which action is the best method to separate a mixture of salt and water? ...
The IR solver correctly answers this question by using the sentence: Separate the salt and water mixture by evaporating the water. However, TupleInf is not able to answer this question as Open IE is unable to extract tuples from this imperative sentence. While the additional structure from Open IE is useful for more robust matching, converting sentences to Open IE tuples may lose important bits of information.
Bad Alignment: Which of the following gases is necessary for humans to breathe in order to live? (A) Oxygen (B) Carbon dioxide (C) Helium (D) Water vapor
TupleInf returns “Carbon dioxide” as the answer because of the tuple (humans; breathe out; carbon dioxide). The chunk “to breathe” in the question has a high alignment score to the “breathe out” relation in the tuple even though they have completely different meanings. Improving the phrase alignment can mitigate this issue.
Out of scope: Deer live in forest for shelter. If the forest was cut down, which situation would most likely happen?...
Such questions that require modeling a state presented in the question and reasoning over the state are out of scope of our solver.
Conclusion
We presented a new QA system, TupleInf, that can reason over a large, potentially noisy tuple KB to answer complex questions. Our results show that TupleInf is a new state-of-the-art structured solver for elementary-level science that does not rely on curated knowledge and generalizes to higher grades. Errors due to lossy IE and misalignments suggest future work in incorporating context and distributional measures.
Appendix: ILP Model Details
To build the ILP model, we first need to extract the question terms (qterms) by chunking the question with an in-house chunker based on the POS tagger from FACTORIE.
Experiment Details
We use the SCIP ILP optimization engine BIBREF21 to optimize our ILP model. To get the score for each answer choice $a_i$ , we force the active variable for that choice $x_{a_i}$ to be one and use the objective function value of the ILP model as the score. For evaluations, we use a 2-core 2.5 GHz Amazon EC2 linux machine with 16 GB RAM. To evaluate TableILP and TupleInf on curated tables and tuples, we converted them into the expected format of each solver as follows.
Using curated tables with TupleInf
For each question, we select the 7 best matching tables using the tf-idf score of the table w.r.t. the question tokens, and the top 20 rows from each table using the Jaccard similarity of the row with the question (same as BIBREF9 tableilp2016). We then convert the table rows into the tuple structure using the relations defined by TableILP. For every pair of cells connected by a relation, we create a tuple with the two cells as the subject and primary object, and the relation as the predicate. The other cells of the table are used as additional objects to provide context to the solver. We pick the 50 top-scoring tuples using the Jaccard score.
Using Open IE tuples with TableILP
We create an additional table in TableILP with all the tuples in $T$ . Since TableILP uses fixed-length $(subject; predicate; object)$ triples, we need to map tuples with multiple objects to this format. For each object, $O_i$ in the input Open IE tuple $(S; P; O_1; O_2 \ldots )$ , we add a triple $(S; P; O_i)$ to this table. | No |
65d34041ffa4564385361979a08706b10b92ebc7 | 65d34041ffa4564385361979a08706b10b92ebc7_0 | Q: Are the OpenIE extractions all triples?
Text: Introduction
Effective question answering (QA) systems have been a long-standing quest of AI research. Structured curated KBs have been used successfully for this task BIBREF0 , BIBREF1 . However, these KBs are expensive to build and typically domain-specific. Automatically constructed open vocabulary (subject; predicate; object) style tuples have broader coverage, but have only been used for simple questions where a single tuple suffices BIBREF2 , BIBREF3 .
Our goal in this work is to develop a QA system that can perform reasoning with Open IE BIBREF4 tuples for complex multiple-choice questions that require tuples from multiple sentences. Such a system can answer complex questions in resource-poor domains where curated knowledge is unavailable. Elementary-level science exams is one such domain, requiring complex reasoning BIBREF5 . Due to the lack of a large-scale structured KB, state-of-the-art systems for this task either rely on shallow reasoning with large text corpora BIBREF6 , BIBREF7 or deeper, structured reasoning with a small amount of automatically acquired BIBREF8 or manually curated BIBREF9 knowledge.
Consider the following question from an Alaska state 4th grade science test:
Which object in our solar system reflects light and is a satellite that orbits around one planet? (A) Earth (B) Mercury (C) the Sun (D) the Moon
This question is challenging for QA systems because of its complex structure and the need for multi-fact reasoning. A natural way to answer it is by combining facts such as (Moon; is; in the solar system), (Moon; reflects; light), (Moon; is; satellite), and (Moon; orbits; around one planet).
A candidate system for such reasoning, and which we draw inspiration from, is the TableILP system of BIBREF9 . TableILP treats QA as a search for an optimal subgraph that connects terms in the question and answer via rows in a set of curated tables, and solves the optimization problem using Integer Linear Programming (ILP). We similarly want to search for an optimal subgraph. However, a large, automatically extracted tuple KB makes the reasoning context different on three fronts: (a) unlike reasoning with tables, chaining tuples is less important and reliable as join rules aren't available; (b) conjunctive evidence becomes paramount, as, unlike a long table row, a single tuple is less likely to cover the entire question; and (c) again, unlike table rows, tuples are noisy, making combining redundant evidence essential. Consequently, a table-knowledge centered inference model isn't the best fit for noisy tuples.
To address this challenge, we present a new ILP-based model of inference with tuples, implemented in a reasoner called TupleInf. We demonstrate that TupleInf significantly outperforms TableILP by 11.8% on a broad set of over 1,300 science questions, without requiring manually curated tables, using a substantially simpler ILP formulation, and generalizing well to higher grade levels. The gains persist even when both solvers are provided identical knowledge. This demonstrates for the first time how Open IE based QA can be extended from simple lookup questions to an effective system for complex questions.
Related Work
We discuss two classes of related work: retrieval-based web question-answering (simple reasoning with large scale KB) and science question-answering (complex reasoning with small KB).
Tuple Inference Solver
We first describe the tuples used by our solver. We define a tuple as (subject; predicate; objects) with zero or more objects. We refer to the subject, predicate, and objects as the fields of the tuple.
Tuple KB
We use the text corpora (S) from BIBREF6 aristo2016:combining to build our tuple KB. For each test set, we use the corresponding training questions $Q_\mathit {tr}$ to retrieve domain-relevant sentences from S. Specifically, for each multiple-choice question $(q,A) \in Q_\mathit {tr}$ and each choice $a \in A$ , we use all non-stopword tokens in $q$ and $a$ as an ElasticSearch query against S. We take the top 200 hits, run Open IE v4, and aggregate the resulting tuples over all $a \in A$ and over all questions in $Q_\mathit {tr}$ to create the tuple KB (T).
Tuple Selection
Given a multiple-choice question $qa$ with question text $q$ and answer choices A= $\lbrace a_i\rbrace $ , we select the most relevant tuples from $T$ and $S$ as follows.
Selecting from Tuple KB: We use an inverted index to find the 1,000 tuples that have the most overlapping tokens with the question tokens $tok(qa)$. We also filter out any tuples that overlap only with $tok(q)$ as they do not support any answer. We compute the normalized TF-IDF score treating the question $q$ as a query and each tuple $t$ as a document: $\textit{tf}(x, q) = 1 \text{ if } x \in q$; $\textit{idf}(x) = \log (1 + N/n_x)$; $\textit{tf-idf}(t, q) = \sum _{x \in t \cap q} \textit{idf}(x)$,
where $N$ is the number of tuples in the KB and $n_x$ is the number of tuples containing $x$. We normalize the tf-idf score by the number of tokens in $t$ and $q$. We finally take the 50 top-scoring tuples $T_{qa}$.
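A minimal inverted index of the kind mentioned above, mapping tokens to tuple ids for fast candidate lookup (purely illustrative):

```python
# Sketch of an inverted index from tokens to tuple ids for fast candidate lookup.
from collections import defaultdict, Counter

def build_index(tuples):
    """tuples: list of token lists; returns token -> set of tuple indices."""
    index = defaultdict(set)
    for i, t in enumerate(tuples):
        for tok in set(t):
            index[tok].add(i)
    return index

def candidates(index, question_tokens, top_k=1000):
    counts = Counter()
    for tok in question_tokens:
        for i in index.get(tok, ()):
            counts[i] += 1                      # number of overlapping tokens
    return [i for i, _ in counts.most_common(top_k)]
```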
On-the-fly tuples from text: To handle questions from new domains not covered by the training set, we extract additional tuples on the fly from S (similar to BIBREF17 knowlhunting). We perform the same ElasticSearch query described earlier for building T. We ignore sentences that cover none or all answer choices as they are not discriminative. We also ignore long sentences ( $>$ 300 characters) and sentences with negation as they tend to lead to noisy inference. We then run Open IE on these sentences and re-score the resulting tuples using the Jaccard score due to the lossy nature of Open IE, and finally take the 50 top-scoring tuples $T^{\prime }_{qa}$ .
Support Graph Search
Similar to TableILP, we view the QA task as searching for a graph that best connects the terms in the question (qterms) with an answer choice via the knowledge; see Figure 1 for a simple illustrative example. Unlike standard alignment models used for tasks such as Recognizing Textual Entailment (RTE) BIBREF18 , however, we must score alignments between a set $T_{qa} \cup T^{\prime }_{qa}$ of structured tuples and a (potentially multi-sentence) multiple-choice question $qa$ .
The qterms, answer choices, and tuples fields form the set of possible vertices, $\mathcal {V}$ , of the support graph. Edges connecting qterms to tuple fields and tuple fields to answer choices form the set of possible edges, $\mathcal {E}$ . The support graph, $G(V, E)$ , is a subgraph of $\mathcal {G}(\mathcal {V}, \mathcal {E})$ where $V$ and $E$ denote “active” nodes and edges, resp. We define the desired behavior of an optimal support graph via an ILP model as follows.
Similar to TableILP, we score the support graph based on the weight of the active nodes and edges. Each edge $e(t, h)$ is weighted based on a word-overlap score. While TableILP used WordNet BIBREF19 paths to compute the weight, this measure results in unreliable scores when faced with longer phrases found in Open IE tuples.
Compared to a curated KB, it is easy to find Open IE tuples that match irrelevant parts of the questions. To mitigate this issue, we improve the scoring of qterms in our ILP objective to focus on important terms. Since the later terms in a question tend to provide the most critical information, we scale qterm coefficients based on their position. Also, qterms that appear in almost all of the selected tuples tend not to be discriminative as any tuple would support such a qterm. Hence we scale the coefficients by the inverse frequency of the tokens in the selected tuples.
Since Open IE tuples do not come with schema and join rules, we can define a substantially simpler model compared to TableILP. This reduces the reasoning capability but also eliminates the reliance on hand-authored join rules and regular expressions used in TableILP. We discovered (see empirical evaluation) that this simple model can achieve the same score as TableILP on the Regents test (target test set used by TableILP) and generalizes better to different grade levels.
We define active vertices and edges using ILP constraints: an active edge must connect two active vertices and an active vertex must have at least one active edge. To avoid positive edge coefficients in the objective function resulting in spurious edges in the support graph, we limit the number of active edges from an active tuple, question choice, tuple fields, and qterms (first group of constraints in Table 1 ). Our model is also capable of using multiple tuples to support different parts of the question as illustrated in Figure 1 . To avoid spurious tuples that only connect with the question (or choice) or ignore the relation being expressed in the tuple, we add constraints that require each tuple to connect a qterm with an answer choice (second group of constraints in Table 1 ).
We also define new constraints based on the Open IE tuple structure. Since an Open IE tuple expresses a fact about the tuple's subject, we require the subject to be active in the support graph. To avoid issues such as (Planet; orbit; Sun) matching the sample question in the introduction (“Which object $\ldots $ orbits around a planet”), we also add an ordering constraint (third group in Table 1 ).
It's worth mentioning that TupleInf only combines parallel evidence, i.e., each tuple must connect words in the question to the answer choice. For reliable multi-hop reasoning using Open IE tuples, we can add inter-tuple connections to the support graph search, controlled by a small number of rules over the Open IE predicates. Learning such rules for the science domain is an open problem and a potential avenue for future work.
Experiments
Comparing our method with two state-of-the-art systems for 4th and 8th grade science exams, we demonstrate that (a) TupleInf with only automatically extracted tuples significantly outperforms TableILP with its original curated knowledge as well as with additional tuples, and (b) TupleInf's complementary approach to IR leads to an improved ensemble. Numbers in bold indicate statistical significance based on the Binomial exact test BIBREF20 at $p=0.05$ .
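As an illustration of the significance test, one common way to apply an exact binomial test to a solver comparison is to restrict attention to questions where the two systems disagree and test whether one wins more than half of them; this is only one plausible reading, not necessarily the exact procedure behind BIBREF20.

```python
# Illustrative exact binomial (sign-style) test on questions where two solvers differ.
from scipy.stats import binomtest

def compare(solver_a_correct, solver_b_correct):
    """Inputs: parallel lists of booleans, one entry per question."""
    a_wins = sum(a and not b for a, b in zip(solver_a_correct, solver_b_correct))
    b_wins = sum(b and not a for a, b in zip(solver_a_correct, solver_b_correct))
    result = binomtest(a_wins, a_wins + b_wins, p=0.5, alternative="two-sided")
    return a_wins, b_wins, result.pvalue
```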
We consider two question sets. (1) 4th Grade set (1220 train, 1304 test) is a 10x larger superset of the NY Regents questions BIBREF6 , and includes professionally written licensed questions. (2) 8th Grade set (293 train, 282 test) contains 8th grade questions from various states.
We consider two knowledge sources. The Sentence corpus (S) consists of domain-targeted $\sim$ 80K sentences and 280 GB of plain text extracted from web pages used by BIBREF6 aristo2016:combining. This corpus is used by the IR solver and also used to create the tuple KB T and on-the-fly tuples $T^{\prime }_{qa}$. Additionally, TableILP uses $\sim$ 70 curated tables (C) designed for 4th grade NY Regents exams.
We compare TupleInf with two state-of-the-art baselines. IR is a simple yet powerful information-retrieval baseline BIBREF6 that selects the answer option with the best matching sentence in a corpus. TableILP is the state-of-the-art structured inference baseline BIBREF9 developed for science questions.
Results
Table 2 shows that TupleInf, with no curated knowledge, outperforms TableILP on both question sets by more than 11%. The lower half of the table shows that even when both solvers are given the same knowledge (C+T), the improved selection and simplified model of TupleInf results in a statistically significant improvement. Our simple model, TupleInf(C + T), also achieves scores comparable to TableILP on the latter's target Regents questions (61.4% vs TableILP's reported 61.5%) without any specialized rules.
Table 3 shows that while TupleInf achieves similar scores as the IR solver, the approaches are complementary (structured lossy knowledge reasoning vs. lossless sentence retrieval). The two solvers, in fact, differ on 47.3% of the training questions. To exploit this complementarity, we train an ensemble system BIBREF6 which, as shown in the table, provides a substantial boost over the individual solvers. Further, IR + TupleInf is consistently better than IR + TableILP. Finally, in combination with IR and the statistical association based PMI solver (that scores 54.1% by itself) of BIBREF6 aristo2016:combining, TupleInf achieves a score of 58.2% as compared to TableILP's ensemble score of 56.7% on the 4th grade set, again attesting to TupleInf's strength.
Error Analysis
We describe four classes of failures that we observed, and the future work they suggest.
Missing Important Words: Which material will spread out to completely fill a larger container? (A) air (B) ice (C) sand (D) water
In this question, we have tuples that support “water will spread out and fill a larger container” but miss the critical word “completely”. An approach capable of detecting salient question words could help avoid such errors.
Lossy IE: Which action is the best method to separate a mixture of salt and water? ...
The IR solver correctly answers this question by using the sentence: Separate the salt and water mixture by evaporating the water. However, TupleInf is not able to answer this question as Open IE is unable to extract tuples from this imperative sentence. While the additional structure from Open IE is useful for more robust matching, converting sentences to Open IE tuples may lose important bits of information.
Bad Alignment: Which of the following gases is necessary for humans to breathe in order to live? (A) Oxygen (B) Carbon dioxide (C) Helium (D) Water vapor
TupleInf returns “Carbon dioxide” as the answer because of the tuple (humans; breathe out; carbon dioxide). The chunk “to breathe” in the question has a high alignment score to the “breathe out” relation in the tuple even though they have completely different meanings. Improving the phrase alignment can mitigate this issue.
Out of scope: Deer live in forest for shelter. If the forest was cut down, which situation would most likely happen?...
Such questions that require modeling a state presented in the question and reasoning over the state are out of scope of our solver.
Conclusion
We presented a new QA system, TupleInf, that can reason over a large, potentially noisy tuple KB to answer complex questions. Our results show that TupleInf is a new state-of-the-art structured solver for elementary-level science that does not rely on curated knowledge and generalizes to higher grades. Errors due to lossy IE and misalignments suggest future work in incorporating context and distributional measures.
Appendix: ILP Model Details
To build the ILP model, we first need to extract the question terms (qterms) by chunking the question with an in-house chunker based on the POS tagger from FACTORIE.
Experiment Details
We use the SCIP ILP optimization engine BIBREF21 to optimize our ILP model. To get the score for each answer choice $a_i$ , we force the active variable for that choice $x_{a_i}$ to be one and use the objective function value of the ILP model as the score. For evaluations, we use a 2-core 2.5 GHz Amazon EC2 linux machine with 16 GB RAM. To evaluate TableILP and TupleInf on curated tables and tuples, we converted them into the expected format of each solver as follows.
Using curated tables with TupleInf
For each question, we select the 7 best matching tables using the tf-idf score of the table w.r.t. the question tokens, and the top 20 rows from each table using the Jaccard similarity of the row with the question (same as BIBREF9 tableilp2016). We then convert the table rows into the tuple structure using the relations defined by TableILP. For every pair of cells connected by a relation, we create a tuple with the two cells as the subject and primary object, and the relation as the predicate. The other cells of the table are used as additional objects to provide context to the solver. We pick the 50 top-scoring tuples using the Jaccard score.
Using Open IE tuples with TableILP
We create an additional table in TableILP with all the tuples in $T$ . Since TableILP uses fixed-length $(subject; predicate; object)$ triples, we need to map tuples with multiple objects to this format. For each object, $O_i$ in the input Open IE tuple $(S; P; O_1; O_2 \ldots )$ , we add a triple $(S; P; O_i)$ to this table. | No |
e215fa142102f7f9eeda9c9eb8d2aeff7f2a33ed | e215fa142102f7f9eeda9c9eb8d2aeff7f2a33ed_0 | Q: What method was used to generate the OpenIE extractions?
Text: Introduction
Effective question answering (QA) systems have been a long-standing quest of AI research. Structured curated KBs have been used successfully for this task BIBREF0 , BIBREF1 . However, these KBs are expensive to build and typically domain-specific. Automatically constructed open vocabulary (subject; predicate; object) style tuples have broader coverage, but have only been used for simple questions where a single tuple suffices BIBREF2 , BIBREF3 .
Our goal in this work is to develop a QA system that can perform reasoning with Open IE BIBREF4 tuples for complex multiple-choice questions that require tuples from multiple sentences. Such a system can answer complex questions in resource-poor domains where curated knowledge is unavailable. Elementary-level science exams is one such domain, requiring complex reasoning BIBREF5 . Due to the lack of a large-scale structured KB, state-of-the-art systems for this task either rely on shallow reasoning with large text corpora BIBREF6 , BIBREF7 or deeper, structured reasoning with a small amount of automatically acquired BIBREF8 or manually curated BIBREF9 knowledge.
Consider the following question from an Alaska state 4th grade science test:
Which object in our solar system reflects light and is a satellite that orbits around one planet? (A) Earth (B) Mercury (C) the Sun (D) the Moon
This question is challenging for QA systems because of its complex structure and the need for multi-fact reasoning. A natural way to answer it is by combining facts such as (Moon; is; in the solar system), (Moon; reflects; light), (Moon; is; satellite), and (Moon; orbits; around one planet).
A candidate system for such reasoning, and which we draw inspiration from, is the TableILP system of BIBREF9 . TableILP treats QA as a search for an optimal subgraph that connects terms in the question and answer via rows in a set of curated tables, and solves the optimization problem using Integer Linear Programming (ILP). We similarly want to search for an optimal subgraph. However, a large, automatically extracted tuple KB makes the reasoning context different on three fronts: (a) unlike reasoning with tables, chaining tuples is less important and reliable as join rules aren't available; (b) conjunctive evidence becomes paramount, as, unlike a long table row, a single tuple is less likely to cover the entire question; and (c) again, unlike table rows, tuples are noisy, making combining redundant evidence essential. Consequently, a table-knowledge centered inference model isn't the best fit for noisy tuples.
To address this challenge, we present a new ILP-based model of inference with tuples, implemented in a reasoner called TupleInf. We demonstrate that TupleInf significantly outperforms TableILP by 11.8% on a broad set of over 1,300 science questions, without requiring manually curated tables, using a substantially simpler ILP formulation, and generalizing well to higher grade levels. The gains persist even when both solvers are provided identical knowledge. This demonstrates for the first time how Open IE based QA can be extended from simple lookup questions to an effective system for complex questions.
Related Work
We discuss two classes of related work: retrieval-based web question-answering (simple reasoning with large scale KB) and science question-answering (complex reasoning with small KB).
Tuple Inference Solver
We first describe the tuples used by our solver. We define a tuple as (subject; predicate; objects) with zero or more objects. We refer to the subject, predicate, and objects as the fields of the tuple.
Tuple KB
We use the text corpora (S) from BIBREF6 aristo2016:combining to build our tuple KB. For each test set, we use the corresponding training questions $Q_\mathit {tr}$ to retrieve domain-relevant sentences from S. Specifically, for each multiple-choice question $(q,A) \in Q_\mathit {tr}$ and each choice $a \in A$ , we use all non-stopword tokens in $q$ and $a$ as an ElasticSearch query against S. We take the top 200 hits, run Open IE v4, and aggregate the resulting tuples over all $a \in A$ and over all questions in $Q_\mathit {tr}$ to create the tuple KB (T).
Tuple Selection
Given a multiple-choice question $qa$ with question text $q$ and answer choices A= $\lbrace a_i\rbrace $ , we select the most relevant tuples from $T$ and $S$ as follows.
Selecting from Tuple KB: We use an inverted index to find the 1,000 tuples that have the most overlapping tokens with the question tokens $tok(qa)$. We also filter out any tuples that overlap only with $tok(q)$ as they do not support any answer. We compute the normalized TF-IDF score treating the question $q$ as a query and each tuple $t$ as a document: $\textit{tf}(x, q) = 1 \text{ if } x \in q$; $\textit{idf}(x) = \log (1 + N/n_x)$; $\textit{tf-idf}(t, q) = \sum _{x \in t \cap q} \textit{idf}(x)$,
where $N$ is the number of tuples in the KB and $n_x$ is the number of tuples containing $x$. We normalize the tf-idf score by the number of tokens in $t$ and $q$. We finally take the 50 top-scoring tuples $T_{qa}$.
On-the-fly tuples from text: To handle questions from new domains not covered by the training set, we extract additional tuples on the fly from S (similar to BIBREF17 knowlhunting). We perform the same ElasticSearch query described earlier for building T. We ignore sentences that cover none or all answer choices as they are not discriminative. We also ignore long sentences ( $>$ 300 characters) and sentences with negation as they tend to lead to noisy inference. We then run Open IE on these sentences and re-score the resulting tuples using the Jaccard score due to the lossy nature of Open IE, and finally take the 50 top-scoring tuples $T^{\prime }_{qa}$ .
Support Graph Search
Similar to TableILP, we view the QA task as searching for a graph that best connects the terms in the question (qterms) with an answer choice via the knowledge; see Figure 1 for a simple illustrative example. Unlike standard alignment models used for tasks such as Recognizing Textual Entailment (RTE) BIBREF18 , however, we must score alignments between a set $T_{qa} \cup T^{\prime }_{qa}$ of structured tuples and a (potentially multi-sentence) multiple-choice question $qa$ .
The qterms, answer choices, and tuple fields form the set of possible vertices, $\mathcal {V}$ , of the support graph. Edges connecting qterms to tuple fields and tuple fields to answer choices form the set of possible edges, $\mathcal {E}$ . The support graph, $G(V, E)$ , is a subgraph of $\mathcal {G}(\mathcal {V}, \mathcal {E})$ where $V$ and $E$ denote “active” nodes and edges, respectively. We define the desired behavior of an optimal support graph via an ILP model as follows.
Similar to TableILP, we score the support graph based on the weight of the active nodes and edges. Each edge $e(t, h)$ is weighted based on a word-overlap score. While TableILP used WordNet BIBREF19 paths to compute the weight, this measure results in unreliable scores when faced with longer phrases found in Open IE tuples.
Compared to a curated KB, it is easy to find Open IE tuples that match irrelevant parts of the questions. To mitigate this issue, we improve the scoring of qterms in our ILP objective to focus on important terms. Since the later terms in a question tend to provide the most critical information, we scale qterm coefficients based on their position. Also, qterms that appear in almost all of the selected tuples tend not to be discriminative as any tuple would support such a qterm. Hence we scale the coefficients by the inverse frequency of the tokens in the selected tuples.
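The exact scaling functions are not given in the text, so the sketch below only illustrates the two ideas — later positions weigh more, frequent qterms weigh less — with arbitrary linear and inverse forms chosen for illustration.

    def qterm_coefficients(qterms, selected_tuples):
        coeffs = {}
        for position, term in enumerate(qterms, start=1):
            tuple_freq = sum(1 for t in selected_tuples
                             if any(term in field.lower() for field in t))
            position_weight = position / len(qterms)             # later terms matter more
            coeffs[term] = position_weight / (1.0 + tuple_freq)  # damp near-ubiquitous qterms
        return coeffs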
Since Open IE tuples do not come with schema and join rules, we can define a substantially simpler model compared to TableILP. This reduces the reasoning capability but also eliminates the reliance on hand-authored join rules and regular expressions used in TableILP. We discovered (see empirical evaluation) that this simple model can achieve the same score as TableILP on the Regents test (target test set used by TableILP) and generalizes better to different grade levels.
We define active vertices and edges using ILP constraints: an active edge must connect two active vertices and an active vertex must have at least one active edge. To avoid positive edge coefficients in the objective function resulting in spurious edges in the support graph, we limit the number of active edges from an active tuple, question choice, tuple fields, and qterms (first group of constraints in Table 1 ). Our model is also capable of using multiple tuples to support different parts of the question as illustrated in Figure 1 . To avoid spurious tuples that only connect with the question (or choice) or ignore the relation being expressed in the tuple, we add constraints that require each tuple to connect a qterm with an answer choice (second group of constraints in Table 1 ).
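To make the model concrete, here is a heavily reduced sketch in PuLP covering only the two constraint groups just described (edge/vertex activity limits and the qterm-to-choice requirement); the subject-activity and ordering constraints discussed next are omitted, node weights and edge-count caps are left out, and the actual system is solved with SCIP rather than PuLP's default solver, so this is only an illustration.

    from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary

    def build_support_graph_ilp(qterms, choices, tuples, edge_weight):
        # edge_weight(node_text, tuple) -> float is an assumed word-overlap scorer
        prob = LpProblem("support_graph", LpMaximize)
        x_t = [LpVariable(f"t_{i}", cat=LpBinary) for i in range(len(tuples))]
        x_a = {a: LpVariable(f"a_{j}", cat=LpBinary) for j, a in enumerate(choices)}
        nodes = list(qterms) + list(choices)
        x_e = {(j, i): LpVariable(f"e_{j}_{i}", cat=LpBinary)
               for j in range(len(nodes)) for i in range(len(tuples))}
        # objective: total weight of the active edges (node weights omitted for brevity)
        prob += lpSum(edge_weight(nodes[j], tuples[i]) * var for (j, i), var in x_e.items())
        q_idx = list(range(len(qterms)))
        a_idx = list(range(len(qterms), len(nodes)))
        # an active edge requires its tuple to be active
        for (j, i), var in x_e.items():
            prob += var <= x_t[i]
        # an active tuple must connect at least one qterm and one answer choice
        for i in range(len(tuples)):
            prob += lpSum(x_e[(j, i)] for j in q_idx) >= x_t[i]
            prob += lpSum(x_e[(j, i)] for j in a_idx) >= x_t[i]
        # an edge into an answer choice requires that choice to be active
        for j, a in zip(a_idx, choices):
            for i in range(len(tuples)):
                prob += x_e[(j, i)] <= x_a[a]
        return prob, x_a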
We also define new constraints based on the Open IE tuple structure. Since an Open IE tuple expresses a fact about the tuple's subject, we require the subject to be active in the support graph. To avoid issues such as (Planet; orbit; Sun) matching the sample question in the introduction (“Which object $\ldots $ orbits around a planet”), we also add an ordering constraint (third group in Table 1 ).
It is worth mentioning that TupleInf only combines parallel evidence, i.e., each tuple must connect words in the question to the answer choice. For reliable multi-hop reasoning using Open IE tuples, we can add inter-tuple connections to the support graph search, controlled by a small number of rules over the Open IE predicates. Learning such rules for the science domain is an open problem and a potential avenue of future work.
Experiments
Comparing our method with two state-of-the-art systems for 4th and 8th grade science exams, we demonstrate that (a) TupleInf with only automatically extracted tuples significantly outperforms TableILP with its original curated knowledge as well as with additional tuples, and (b) TupleInf's complementary approach to IR leads to an improved ensemble. Numbers in bold indicate statistical significance based on the Binomial exact test BIBREF20 at $p=0.05$ .
We consider two question sets. (1) 4th Grade set (1220 train, 1304 test) is a 10x larger superset of the NY Regents questions BIBREF6 , and includes professionally written licensed questions. (2) 8th Grade set (293 train, 282 test) contains 8th grade questions from various states.
We consider two knowledge sources. The Sentence corpus (S) consists of domain-targeted $\sim$ 80K sentences and 280 GB of plain text extracted from web pages used by BIBREF6 aristo2016:combining. This corpus is used by the IR solver and also used to create the tuple KB T and on-the-fly tuples $T^{\prime }_{qa}$ . Additionally, TableILP uses $\sim$ 70 Curated tables (C) designed for 4th grade NY Regents exams.
We compare TupleInf with two state-of-the-art baselines. IR is a simple yet powerful information-retrieval baseline BIBREF6 that selects the answer option with the best matching sentence in a corpus. TableILP is the state-of-the-art structured inference baseline BIBREF9 developed for science questions.
Results
Table 2 shows that TupleInf, with no curated knowledge, outperforms TableILP on both question sets by more than 11%. The lower half of the table shows that even when both solvers are given the same knowledge (C+T), the improved selection and simplified model of TupleInf results in a statistically significant improvement. Our simple model, TupleInf(C + T), also achieves scores comparable to TableILP on the latter's target Regents questions (61.4% vs TableILP's reported 61.5%) without any specialized rules.
Table 3 shows that while TupleInf achieves similar scores as the IR solver, the approaches are complementary (structured lossy knowledge reasoning vs. lossless sentence retrieval). The two solvers, in fact, differ on 47.3% of the training questions. To exploit this complementarity, we train an ensemble system BIBREF6 which, as shown in the table, provides a substantial boost over the individual solvers. Further, IR + TupleInf is consistently better than IR + TableILP. Finally, in combination with IR and the statistical association based PMI solver (that scores 54.1% by itself) of BIBREF6 aristo2016:combining, TupleInf achieves a score of 58.2% as compared to TableILP's ensemble score of 56.7% on the 4th grade set, again attesting to TupleInf's strength.
Error Analysis
We describe four classes of failures that we observed, and the future work they suggest.
Missing Important Words: Which material will spread out to completely fill a larger container? (A) air (B) ice (C) sand (D) water
In this question, we have tuples that support “water will spread out and fill a larger container” but miss the critical word “completely”. An approach capable of detecting salient question words could help avoid such failures.
Lossy IE: Which action is the best method to separate a mixture of salt and water? ...
The IR solver correctly answers this question by using the sentence: Separate the salt and water mixture by evaporating the water. However, TupleInf is not able to answer this question as Open IE is unable to extract tuples from this imperative sentence. While the additional structure from Open IE is useful for more robust matching, converting sentences to Open IE tuples may lose important bits of information.
Bad Alignment: Which of the following gases is necessary for humans to breathe in order to live? (A) Oxygen (B) Carbon dioxide (C) Helium (D) Water vapor
TupleInf returns “Carbon dioxide” as the answer because of the tuple (humans; breathe out; carbon dioxide). The chunk “to breathe” in the question has a high alignment score to the “breathe out” relation in the tuple even though they have completely different meanings. Improving the phrase alignment can mitigate this issue.
Out of scope: Deer live in forest for shelter. If the forest was cut down, which situation would most likely happen?...
Such questions that require modeling a state presented in the question and reasoning over the state are out of scope of our solver.
Conclusion
We presented a new QA system, TupleInf, that can reason over a large, potentially noisy tuple KB to answer complex questions. Our results show that TupleInf is a new state-of-the-art structured solver for elementary-level science that does not rely on curated knowledge and generalizes to higher grades. Errors due to lossy IE and misalignments suggest future work in incorporating context and distributional measures.
Appendix: ILP Model Details
To build the ILP model, we first need to extract the question terms (qterms) from the question, by chunking it with an in-house chunker based on the POS tagger from FACTORIE.
Experiment Details
We use the SCIP ILP optimization engine BIBREF21 to optimize our ILP model. To get the score for each answer choice $a_i$ , we force the active variable for that choice, $x_{a_i}$ , to be one and use the objective function value of the ILP model as the score. For evaluations, we use a 2-core 2.5 GHz Amazon EC2 Linux machine with 16 GB RAM. To evaluate TableILP and TupleInf on curated tables and tuples, we converted them into the expected format of each solver as follows.
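Continuing the PuLP sketch from earlier (the actual implementation drives SCIP directly), the per-choice scoring loop would look roughly like this, reusing the hypothetical build_support_graph_ilp builder defined above:

    from pulp import value

    def score_choices(qterms, choices, tuples, edge_weight):
        scores = {}
        for a in choices:
            prob, x_a = build_support_graph_ilp(qterms, choices, tuples, edge_weight)
            prob += x_a[a] == 1    # force the active variable for this choice to one
            prob.solve()
            scores[a] = value(prob.objective)
        return scores

The predicted answer is then simply the choice with the highest objective value.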
Using curated tables with TupleInf
For each question, we select the 7 best-matching tables using the tf-idf score of the table w.r.t. the question tokens, and the top 20 rows from each table using the Jaccard similarity of the row with the question (same as BIBREF9 tableilp2016). We then convert the table rows into the tuple structure using the relations defined by TableILP. For every pair of cells connected by a relation, we create a tuple with the two cells as the subject and primary object, and the relation as the predicate. The other cells of the table are used as additional objects to provide context to the solver. We finally pick the 50 top-scoring tuples using the Jaccard score.
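A sketch of this row-to-tuple conversion is shown below, assuming each TableILP relation is given as a (subject_column, object_column, predicate) triple; the actual relation representation in TableILP may differ.

    def row_to_tuples(row, relations):
        tuples = []
        for subj_col, obj_col, predicate in relations:
            context = [cell for k, cell in enumerate(row) if k not in (subj_col, obj_col)]
            # subject and primary object come from the related cells;
            # the remaining cells are appended as extra objects for context
            tuples.append((row[subj_col], predicate, row[obj_col], *context))
        return tuples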
Using Open IE tuples with TableILP
We create an additional table in TableILP with all the tuples in $T$ . Since TableILP uses fixed-length $(subject; predicate; object)$ triples, we need to map tuples with multiple objects to this format. For each object, $O_i$ in the input Open IE tuple $(S; P; O_1; O_2 \ldots )$ , we add a triple $(S; P; O_i)$ to this table. | for each multiple-choice question $(q,A) \in Q_\mathit {tr}$ and each choice $a \in A$ , we use all non-stopword tokens in $q$ and $a$ as an ElasticSearch query against S, take the top 200 hits, run Open IE v4, and aggregate the resulting tuples over all $a \in A$ and over all questions in $Q_\mathit {tr}$ |
a8545f145d5ea2202cb321c8f93e75ad26fcf4aa | a8545f145d5ea2202cb321c8f93e75ad26fcf4aa_0 | Q: Can the method answer multi-hop questions? | Yes
417dabd43d6266044d38ed88dbcb5fdd7a426b22 | 417dabd43d6266044d38ed88dbcb5fdd7a426b22_0 | Q: What was the textual source to which OpenIE was applied? | domain-targeted $\sim$ 80K sentences and 280 GB of plain text extracted from web pages used by BIBREF6 aristo2016:combining
fed230cef7c130f6040fb04304a33bbc17ca3a36 | fed230cef7c130f6040fb04304a33bbc17ca3a36_0 | Q: What OpenIE method was used to generate the extractions?
Text: Introduction
Effective question answering (QA) systems have been a long-standing quest of AI research. Structured curated KBs have been used successfully for this task BIBREF0 , BIBREF1 . However, these KBs are expensive to build and typically domain-specific. Automatically constructed open vocabulary (subject; predicate; object) style tuples have broader coverage, but have only been used for simple questions where a single tuple suffices BIBREF2 , BIBREF3 .
Our goal in this work is to develop a QA system that can perform reasoning with Open IE BIBREF4 tuples for complex multiple-choice questions that require tuples from multiple sentences. Such a system can answer complex questions in resource-poor domains where curated knowledge is unavailable. Elementary-level science exams is one such domain, requiring complex reasoning BIBREF5 . Due to the lack of a large-scale structured KB, state-of-the-art systems for this task either rely on shallow reasoning with large text corpora BIBREF6 , BIBREF7 or deeper, structured reasoning with a small amount of automatically acquired BIBREF8 or manually curated BIBREF9 knowledge.
Consider the following question from an Alaska state 4th grade science test:
Which object in our solar system reflects light and is a satellite that orbits around one planet? (A) Earth (B) Mercury (C) the Sun (D) the Moon
This question is challenging for QA systems because of its complex structure and the need for multi-fact reasoning. A natural way to answer it is by combining facts such as (Moon; is; in the solar system), (Moon; reflects; light), (Moon; is; satellite), and (Moon; orbits; around one planet).
A candidate system for such reasoning, and which we draw inspiration from, is the TableILP system of BIBREF9 . TableILP treats QA as a search for an optimal subgraph that connects terms in the question and answer via rows in a set of curated tables, and solves the optimization problem using Integer Linear Programming (ILP). We similarly want to search for an optimal subgraph. However, a large, automatically extracted tuple KB makes the reasoning context different on three fronts: (a) unlike reasoning with tables, chaining tuples is less important and reliable as join rules aren't available; (b) conjunctive evidence becomes paramount, as, unlike a long table row, a single tuple is less likely to cover the entire question; and (c) again, unlike table rows, tuples are noisy, making combining redundant evidence essential. Consequently, a table-knowledge centered inference model isn't the best fit for noisy tuples.
To address this challenge, we present a new ILP-based model of inference with tuples, implemented in a reasoner called TupleInf. We demonstrate that TupleInf significantly outperforms TableILP by 11.8% on a broad set of over 1,300 science questions, without requiring manually curated tables, using a substantially simpler ILP formulation, and generalizing well to higher grade levels. The gains persist even when both solvers are provided identical knowledge. This demonstrates for the first time how Open IE based QA can be extended from simple lookup questions to an effective system for complex questions.
Related Work
We discuss two classes of related work: retrieval-based web question-answering (simple reasoning with large scale KB) and science question-answering (complex reasoning with small KB).
Tuple Inference Solver
We first describe the tuples used by our solver. We define a tuple as (subject; predicate; objects) with zero or more objects. We refer to the subject, predicate, and objects as the fields of the tuple.
Tuple KB
We use the text corpora (S) from BIBREF6 aristo2016:combining to build our tuple KB. For each test set, we use the corresponding training questions $Q_\mathit {tr}$ to retrieve domain-relevant sentences from S. Specifically, for each multiple-choice question $(q,A) \in Q_\mathit {tr}$ and each choice $a \in A$ , we use all non-stopword tokens in $q$ and $a$ as an ElasticSearch query against S. We take the top 200 hits, run Open IE v4, and aggregate the resulting tuples over all $a \in A$ and over all questions in $Q_\mathit {tr}$ to create the tuple KB (T).
Tuple Selection
Given a multiple-choice question $qa$ with question text $q$ and answer choices A= $\lbrace a_i\rbrace $ , we select the most relevant tuples from $T$ and $S$ as follows.
Selecting from Tuple KB: We use an inverted index to find the 1,000 tuples that have the most overlapping tokens with question tokens $tok(qa).$ . We also filter out any tuples that overlap only with $tok(q)$ as they do not support any answer. We compute the normalized TF-IDF score treating the question, $q$ as a query and each tuple, $t$ as a document: $ &\textit {tf}(x, q)=1\; \textmd {if x} \in q ; \textit {idf}(x) = log(1 + N/n_x) \\ &\textit {tf-idf}(t, q)=\sum _{x \in t\cap q} idf(x) $
where $N$ is the number of tuples in the KB and $n_x$ are the number of tuples containing $x$ . We normalize the tf-idf score by the number of tokens in $t$ and $q$ . We finally take the 50 top-scoring tuples $T_{qa}$ .
On-the-fly tuples from text: To handle questions from new domains not covered by the training set, we extract additional tuples on the fly from S (similar to BIBREF17 knowlhunting). We perform the same ElasticSearch query described earlier for building T. We ignore sentences that cover none or all answer choices as they are not discriminative. We also ignore long sentences ( $>$ 300 characters) and sentences with negation as they tend to lead to noisy inference. We then run Open IE on these sentences and re-score the resulting tuples using the Jaccard score due to the lossy nature of Open IE, and finally take the 50 top-scoring tuples $T^{\prime }_{qa}$ .
Support Graph Search
Similar to TableILP, we view the QA task as searching for a graph that best connects the terms in the question (qterms) with an answer choice via the knowledge; see Figure 1 for a simple illustrative example. Unlike standard alignment models used for tasks such as Recognizing Textual Entailment (RTE) BIBREF18 , however, we must score alignments between a set $T_{qa} \cup T^{\prime }_{qa}$ of structured tuples and a (potentially multi-sentence) multiple-choice question $qa$ .
The qterms, answer choices, and tuples fields form the set of possible vertices, $\mathcal {V}$ , of the support graph. Edges connecting qterms to tuple fields and tuple fields to answer choices form the set of possible edges, $\mathcal {E}$ . The support graph, $G(V, E)$ , is a subgraph of $\mathcal {G}(\mathcal {V}, \mathcal {E})$ where $V$ and $E$ denote “active” nodes and edges, resp. We define the desired behavior of an optimal support graph via an ILP model as follows.
Similar to TableILP, we score the support graph based on the weight of the active nodes and edges. Each edge $e(t, h)$ is weighted based on a word-overlap score. While TableILP used WordNet BIBREF19 paths to compute the weight, this measure results in unreliable scores when faced with longer phrases found in Open IE tuples.
Compared to a curated KB, it is easy to find Open IE tuples that match irrelevant parts of the questions. To mitigate this issue, we improve the scoring of qterms in our ILP objective to focus on important terms. Since the later terms in a question tend to provide the most critical information, we scale qterm coefficients based on their position. Also, qterms that appear in almost all of the selected tuples tend not to be discriminative as any tuple would support such a qterm. Hence we scale the coefficients by the inverse frequency of the tokens in the selected tuples.
Since Open IE tuples do not come with schema and join rules, we can define a substantially simpler model compared to TableILP. This reduces the reasoning capability but also eliminates the reliance on hand-authored join rules and regular expressions used in TableILP. We discovered (see empirical evaluation) that this simple model can achieve the same score as TableILP on the Regents test (target test set used by TableILP) and generalizes better to different grade levels.
We define active vertices and edges using ILP constraints: an active edge must connect two active vertices and an active vertex must have at least one active edge. To avoid positive edge coefficients in the objective function resulting in spurious edges in the support graph, we limit the number of active edges from an active tuple, question choice, tuple fields, and qterms (first group of constraints in Table 1 ). Our model is also capable of using multiple tuples to support different parts of the question as illustrated in Figure 1 . To avoid spurious tuples that only connect with the question (or choice) or ignore the relation being expressed in the tuple, we add constraints that require each tuple to connect a qterm with an answer choice (second group of constraints in Table 1 ).
We also define new constraints based on the Open IE tuple structure. Since an Open IE tuple expresses a fact about the tuple's subject, we require the subject to be active in the support graph. To avoid issues such as (Planet; orbit; Sun) matching the sample question in the introduction (“Which object $\ldots $ orbits around a planet”), we also add an ordering constraint (third group in Table 1 ).
Its worth mentioning that TupleInf only combines parallel evidence i.e. each tuple must connect words in the question to the answer choice. For reliable multi-hop reasoning using OpenIE tuples, we can add inter-tuple connections to the support graph search, controlled by a small number of rules over the OpenIE predicates. Learning such rules for the Science domain is an open problem and potential avenue of future work.
Experiments
Comparing our method with two state-of-the-art systems for 4th and 8th grade science exams, we demonstrate that (a) TupleInf with only automatically extracted tuples significantly outperforms TableILP with its original curated knowledge as well as with additional tuples, and (b) TupleInf's complementary approach to IR leads to an improved ensemble. Numbers in bold indicate statistical significance based on the Binomial exact test BIBREF20 at $p=0.05$ .
We consider two question sets. (1) 4th Grade set (1220 train, 1304 test) is a 10x larger superset of the NY Regents questions BIBREF6 , and includes professionally written licensed questions. (2) 8th Grade set (293 train, 282 test) contains 8th grade questions from various states.
We consider two knowledge sources. The Sentence corpus (S) consists of domain-targeted $~$ 80K sentences and 280 GB of plain text extracted from web pages used by BIBREF6 aristo2016:combining. This corpus is used by the IR solver and also used to create the tuple KB T and on-the-fly tuples $T^{\prime }_{qa}$ . Additionally, TableILP uses $\sim $ 70 Curated tables (C) designed for 4th grade NY Regents exams.
We compare TupleInf with two state-of-the-art baselines. IR is a simple yet powerful information-retrieval baseline BIBREF6 that selects the answer option with the best matching sentence in a corpus. TableILP is the state-of-the-art structured inference baseline BIBREF9 developed for science questions.
Results
Table 2 shows that TupleInf, with no curated knowledge, outperforms TableILP on both question sets by more than 11%. The lower half of the table shows that even when both solvers are given the same knowledge (C+T), the improved selection and simplified model of TupleInf results in a statistically significant improvement. Our simple model, TupleInf(C + T), also achieves scores comparable to TableILP on the latter's target Regents questions (61.4% vs TableILP's reported 61.5%) without any specialized rules.
Table 3 shows that while TupleInf achieves similar scores as the IR solver, the approaches are complementary (structured lossy knowledge reasoning vs. lossless sentence retrieval). The two solvers, in fact, differ on 47.3% of the training questions. To exploit this complementarity, we train an ensemble system BIBREF6 which, as shown in the table, provides a substantial boost over the individual solvers. Further, IR + TupleInf is consistently better than IR + TableILP. Finally, in combination with IR and the statistical association based PMI solver (that scores 54.1% by itself) of BIBREF6 aristo2016:combining, TupleInf achieves a score of 58.2% as compared to TableILP's ensemble score of 56.7% on the 4th grade set, again attesting to TupleInf's strength.
Error Analysis
We describe four classes of failures that we observed, and the future work they suggest.
Missing Important Words: Which material will spread out to completely fill a larger container? (A)air (B)ice (C)sand (D)water
In this question, we have tuples that support water will spread out and fill a larger container but miss the critical word “completely”. An approach capable of detecting salient question words could help avoid that.
Lossy IE: Which action is the best method to separate a mixture of salt and water? ...
The IR solver correctly answers this question by using the sentence: Separate the salt and water mixture by evaporating the water. However, TupleInf is not able to answer this question as Open IE is unable to extract tuples from this imperative sentence. While the additional structure from Open IE is useful for more robust matching, converting sentences to Open IE tuples may lose important bits of information.
Bad Alignment: Which of the following gases is necessary for humans to breathe in order to live?(A) Oxygen(B) Carbon dioxide(C) Helium(D) Water vapor
TupleInf returns “Carbon dioxide” as the answer because of the tuple (humans; breathe out; carbon dioxide). The chunk “to breathe” in the question has a high alignment score to the “breathe out” relation in the tuple even though they have completely different meanings. Improving the phrase alignment can mitigate this issue.
Out of scope: Deer live in forest for shelter. If the forest was cut down, which situation would most likely happen?...
Such questions that require modeling a state presented in the question and reasoning over the state are out of scope of our solver.
Conclusion
We presented a new QA system, TupleInf, that can reason over a large, potentially noisy tuple KB to answer complex questions. Our results show that TupleInf is a new state-of-the-art structured solver for elementary-level science that does not rely on curated knowledge and generalizes to higher grades. Errors due to lossy IE and misalignments suggest future work in incorporating context and distributional measures.
Appendix: ILP Model Details
To build the ILP model, we first need to get the question terms (qterms) by chunking the question using an in-house chunker based on the POS tagger from FACTORIE.
Experiment Details
We use the SCIP ILP optimization engine BIBREF21 to optimize our ILP model. To get the score for each answer choice $a_i$ , we force the active variable for that choice $x_{a_i}$ to be one and use the objective function value of the ILP model as the score. For evaluations, we use a 2-core 2.5 GHz Amazon EC2 linux machine with 16 GB RAM. To evaluate TableILP and TupleInf on curated tables and tuples, we converted them into the expected format of each solver as follows.
Using curated tables with TupleInf
For each question, we select the 7 best-matching tables using the tf-idf score of the table w.r.t. the question tokens, and the top 20 rows from each table using the Jaccard similarity of the row with the question (same as BIBREF9 tableilp2016). We then convert the table rows into the tuple structure using the relations defined by TableILP. For every pair of cells connected by a relation, we create a tuple with the two cells as the subject and primary object and the relation as the predicate. The other cells of the table are used as additional objects to provide context to the solver. We pick the 50 top-scoring tuples using the Jaccard score.
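As a rough illustration of this conversion and selection step, the following Python sketch turns curated table rows into tuples and keeps the best-matching ones by Jaccard overlap with the question. The table and tuple representations and the helper names are assumptions for illustration, not the authors' code.

```python
# Hypothetical sketch of the curated-table-to-tuple conversion; the data
# structures are illustrative, not taken from the TupleInf implementation.

def jaccard(a, b):
    """Jaccard similarity between two token collections."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def rows_to_tuples(rows, relations):
    """Convert table rows into (subject; predicate; objects) tuples.

    `relations` maps a pair of column indices (i, j) to the predicate label
    defined between those two columns.  The remaining cells of each row
    become additional objects that provide context.
    """
    tuples = []
    for row in rows:                                   # row: list of cell strings
        for (i, j), predicate in relations.items():
            context = [cell for k, cell in enumerate(row) if k not in (i, j)]
            tuples.append({"subj": row[i], "pred": predicate,
                           "objs": [row[j]] + context})
    return tuples

def top_tuples(tuples, question_tokens, k=50):
    """Keep the k tuples with the highest Jaccard overlap with the question."""
    def tuple_tokens(t):
        return (t["subj"] + " " + t["pred"] + " " + " ".join(t["objs"])).lower().split()
    return sorted(tuples,
                  key=lambda t: jaccard(tuple_tokens(t), question_tokens),
                  reverse=True)[:k]
```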
Using Open IE tuples with TableILP
We create an additional table in TableILP with all the tuples in $T$ . Since TableILP uses fixed-length $(subject; predicate; object)$ triples, we need to map tuples with multiple objects to this format. For each object, $O_i$ in the input Open IE tuple $(S; P; O_1; O_2 \ldots )$ , we add a triple $(S; P; O_i)$ to this table. | for each multiple-choice question $(q,A) \in Q_\mathit {tr}$ and each choice $a \in A$ , we use all non-stopword tokens in $q$ and $a$ as an ElasticSearch query against S, take the top 200 hits, run Open IE v4, and aggregate the resulting tuples over all $a \in A$ and over all questions in $Q_\mathit {tr}$ |
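The mapping from multi-object Open IE tuples to fixed-length triples is mechanical; a minimal sketch follows, with an illustrative example tuple that is not taken from the actual KB.

```python
# Minimal sketch: flatten a multi-object Open IE tuple (S; P; O1; O2; ...)
# into the fixed-length (subject; predicate; object) triples TableILP expects.

def flatten_tuple(subject, predicate, objects):
    """Return one (S; P; O_i) triple per object of the input tuple."""
    return [(subject, predicate, obj) for obj in objects]

# Illustrative example:
triples = flatten_tuple("Moon", "orbits", ["around one planet", "in the solar system"])
# -> [('Moon', 'orbits', 'around one planet'), ('Moon', 'orbits', 'in the solar system')]
```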
7917d44e952b58ea066dc0b485d605c9a1fe3dda | 7917d44e952b58ea066dc0b485d605c9a1fe3dda_0 | Q: Is their method capable of multi-hop reasoning?
Text: Introduction
Effective question answering (QA) systems have been a long-standing quest of AI research. Structured curated KBs have been used successfully for this task BIBREF0 , BIBREF1 . However, these KBs are expensive to build and typically domain-specific. Automatically constructed open vocabulary (subject; predicate; object) style tuples have broader coverage, but have only been used for simple questions where a single tuple suffices BIBREF2 , BIBREF3 .
Our goal in this work is to develop a QA system that can perform reasoning with Open IE BIBREF4 tuples for complex multiple-choice questions that require tuples from multiple sentences. Such a system can answer complex questions in resource-poor domains where curated knowledge is unavailable. Elementary-level science exams is one such domain, requiring complex reasoning BIBREF5 . Due to the lack of a large-scale structured KB, state-of-the-art systems for this task either rely on shallow reasoning with large text corpora BIBREF6 , BIBREF7 or deeper, structured reasoning with a small amount of automatically acquired BIBREF8 or manually curated BIBREF9 knowledge.
Consider the following question from an Alaska state 4th grade science test:
Which object in our solar system reflects light and is a satellite that orbits around one planet? (A) Earth (B) Mercury (C) the Sun (D) the Moon
This question is challenging for QA systems because of its complex structure and the need for multi-fact reasoning. A natural way to answer it is by combining facts such as (Moon; is; in the solar system), (Moon; reflects; light), (Moon; is; satellite), and (Moon; orbits; around one planet).
A candidate system for such reasoning, and which we draw inspiration from, is the TableILP system of BIBREF9 . TableILP treats QA as a search for an optimal subgraph that connects terms in the question and answer via rows in a set of curated tables, and solves the optimization problem using Integer Linear Programming (ILP). We similarly want to search for an optimal subgraph. However, a large, automatically extracted tuple KB makes the reasoning context different on three fronts: (a) unlike reasoning with tables, chaining tuples is less important and reliable as join rules aren't available; (b) conjunctive evidence becomes paramount, as, unlike a long table row, a single tuple is less likely to cover the entire question; and (c) again, unlike table rows, tuples are noisy, making combining redundant evidence essential. Consequently, a table-knowledge centered inference model isn't the best fit for noisy tuples.
To address this challenge, we present a new ILP-based model of inference with tuples, implemented in a reasoner called TupleInf. We demonstrate that TupleInf significantly outperforms TableILP by 11.8% on a broad set of over 1,300 science questions, without requiring manually curated tables, using a substantially simpler ILP formulation, and generalizing well to higher grade levels. The gains persist even when both solvers are provided identical knowledge. This demonstrates for the first time how Open IE based QA can be extended from simple lookup questions to an effective system for complex questions.
Related Work
We discuss two classes of related work: retrieval-based web question-answering (simple reasoning with large scale KB) and science question-answering (complex reasoning with small KB).
Tuple Inference Solver
We first describe the tuples used by our solver. We define a tuple as (subject; predicate; objects) with zero or more objects. We refer to the subject, predicate, and objects as the fields of the tuple.
Tuple KB
We use the text corpora (S) from BIBREF6 aristo2016:combining to build our tuple KB. For each test set, we use the corresponding training questions $Q_\mathit {tr}$ to retrieve domain-relevant sentences from S. Specifically, for each multiple-choice question $(q,A) \in Q_\mathit {tr}$ and each choice $a \in A$ , we use all non-stopword tokens in $q$ and $a$ as an ElasticSearch query against S. We take the top 200 hits, run Open IE v4, and aggregate the resulting tuples over all $a \in A$ and over all questions in $Q_\mathit {tr}$ to create the tuple KB (T).
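A rough sketch of this retrieval-and-extraction loop is shown below; the index name, the stopword list, the elasticsearch-py (v7-style) query, and the run_openie() wrapper around Open IE v4 are all assumptions made for illustration, not the authors' pipeline.

```python
# Hypothetical sketch of the tuple-KB construction step.
from collections import Counter
from elasticsearch import Elasticsearch

STOPWORDS = {"the", "a", "an", "of", "to", "in", "is", "which", "what", "and"}

def build_tuple_kb(es: Elasticsearch, train_questions, run_openie):
    """Aggregate Open IE tuples retrieved for every (question, choice) pair."""
    kb = Counter()
    for question, choices in train_questions:
        for choice in choices:
            tokens = [t for t in (question + " " + choice).lower().split()
                      if t not in STOPWORDS]
            hits = es.search(index="sentences",
                             body={"query": {"match": {"text": " ".join(tokens)}}},
                             size=200)["hits"]["hits"]
            for hit in hits:
                for spo in run_openie(hit["_source"]["text"]):  # (subj, pred, obj1, ...)
                    kb[spo] += 1
    return kb
```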
Tuple Selection
Given a multiple-choice question $qa$ with question text $q$ and answer choices A= $\lbrace a_i\rbrace $ , we select the most relevant tuples from $T$ and $S$ as follows.
Selecting from Tuple KB: We use an inverted index to find the 1,000 tuples that have the most overlapping tokens with the question tokens $tok(qa)$. We also filter out any tuples that overlap only with $tok(q)$ as they do not support any answer. We compute the normalized TF-IDF score treating the question $q$ as a query and each tuple $t$ as a document: $\textit{tf}(x, q) = 1 \;\text{if } x \in q; \quad \textit{idf}(x) = \log (1 + N/n_x); \quad \textit{tf-idf}(t, q) = \sum _{x \in t \cap q} \textit{idf}(x)$
where $N$ is the number of tuples in the KB and $n_x$ are the number of tuples containing $x$ . We normalize the tf-idf score by the number of tokens in $t$ and $q$ . We finally take the 50 top-scoring tuples $T_{qa}$ .
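In code, this scoring boils down to summing idf over the shared tokens and normalizing. The sketch below assumes each tuple is represented as a list of its field tokens and treats the normalization as division by both lengths, which is one reading of the text rather than the verified implementation.

```python
# Sketch of the tuple scoring above; the exact normalization is an assumption.
import math

def build_idf(kb_tuples):
    """idf(x) = log(1 + N / n_x), with n_x the number of KB tuples containing x."""
    n = len(kb_tuples)
    df = {}
    for tokens in kb_tuples:
        for x in set(tokens):
            df[x] = df.get(x, 0) + 1
    return {x: math.log(1 + n / n_x) for x, n_x in df.items()}

def tfidf_score(tuple_tokens, question_tokens, idf):
    """Sum idf over shared tokens, normalized by the tuple and question lengths."""
    shared = set(tuple_tokens) & set(question_tokens)
    score = sum(idf.get(x, 0.0) for x in shared)
    return score / (len(tuple_tokens) * len(question_tokens))
```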
On-the-fly tuples from text: To handle questions from new domains not covered by the training set, we extract additional tuples on the fly from S (similar to BIBREF17 knowlhunting). We perform the same ElasticSearch query described earlier for building T. We ignore sentences that cover none or all answer choices as they are not discriminative. We also ignore long sentences ( $>$ 300 characters) and sentences with negation as they tend to lead to noisy inference. We then run Open IE on these sentences and re-score the resulting tuples using the Jaccard score due to the lossy nature of Open IE, and finally take the 50 top-scoring tuples $T^{\prime }_{qa}$ .
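The sentence filters described here are simple string-level checks; a sketch follows, where the thresholds come from the text and the negation word list is an assumption.

```python
# Sketch of the on-the-fly sentence filtering before Open IE extraction.
NEGATION = {"not", "no", "never", "cannot", "n't", "without"}  # illustrative list

def keep_sentence(sentence, answer_choices):
    """Discard non-discriminative, overly long, or negated sentences."""
    text = sentence.lower()
    covered = sum(choice.lower() in text for choice in answer_choices)
    if covered == 0 or covered == len(answer_choices):   # covers none or all choices
        return False
    if len(sentence) > 300:                               # long sentences -> noisy tuples
        return False
    if NEGATION & set(text.split()):                      # negation -> noisy inference
        return False
    return True
```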
Support Graph Search
Similar to TableILP, we view the QA task as searching for a graph that best connects the terms in the question (qterms) with an answer choice via the knowledge; see Figure 1 for a simple illustrative example. Unlike standard alignment models used for tasks such as Recognizing Textual Entailment (RTE) BIBREF18 , however, we must score alignments between a set $T_{qa} \cup T^{\prime }_{qa}$ of structured tuples and a (potentially multi-sentence) multiple-choice question $qa$ .
The qterms, answer choices, and tuples fields form the set of possible vertices, $\mathcal {V}$ , of the support graph. Edges connecting qterms to tuple fields and tuple fields to answer choices form the set of possible edges, $\mathcal {E}$ . The support graph, $G(V, E)$ , is a subgraph of $\mathcal {G}(\mathcal {V}, \mathcal {E})$ where $V$ and $E$ denote “active” nodes and edges, resp. We define the desired behavior of an optimal support graph via an ILP model as follows.
Similar to TableILP, we score the support graph based on the weight of the active nodes and edges. Each edge $e(t, h)$ is weighted based on a word-overlap score. While TableILP used WordNet BIBREF19 paths to compute the weight, this measure results in unreliable scores when faced with longer phrases found in Open IE tuples.
Compared to a curated KB, it is easy to find Open IE tuples that match irrelevant parts of the questions. To mitigate this issue, we improve the scoring of qterms in our ILP objective to focus on important terms. Since the later terms in a question tend to provide the most critical information, we scale qterm coefficients based on their position. Also, qterms that appear in almost all of the selected tuples tend not to be discriminative as any tuple would support such a qterm. Hence we scale the coefficients by the inverse frequency of the tokens in the selected tuples.
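Concretely, the coefficient of each qterm in the objective can be scaled up by its position in the question and damped by how many selected tuples contain it. The scaling functions in the sketch below are illustrative choices, not the exact ones used in TupleInf.

```python
# Illustrative qterm weighting: later terms weigh more, ubiquitous terms less.
def qterm_coefficients(qterms, selected_tuples):
    """`selected_tuples` is assumed to be a list of token sets, one per tuple."""
    n = len(qterms)
    coefficients = {}
    for position, term in enumerate(qterms):
        position_scale = (position + 1) / n                        # later terms matter more
        tuple_count = sum(term in tup for tup in selected_tuples)  # tuples mentioning term
        inverse_frequency = 1.0 / (1 + tuple_count)                # discount frequent terms
        coefficients[term] = position_scale * inverse_frequency
    return coefficients
```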
Since Open IE tuples do not come with schema and join rules, we can define a substantially simpler model compared to TableILP. This reduces the reasoning capability but also eliminates the reliance on hand-authored join rules and regular expressions used in TableILP. We discovered (see empirical evaluation) that this simple model can achieve the same score as TableILP on the Regents test (target test set used by TableILP) and generalizes better to different grade levels.
We define active vertices and edges using ILP constraints: an active edge must connect two active vertices and an active vertex must have at least one active edge. To avoid positive edge coefficients in the objective function resulting in spurious edges in the support graph, we limit the number of active edges from an active tuple, question choice, tuple fields, and qterms (first group of constraints in Table 1 ). Our model is also capable of using multiple tuples to support different parts of the question as illustrated in Figure 1 . To avoid spurious tuples that only connect with the question (or choice) or ignore the relation being expressed in the tuple, we add constraints that require each tuple to connect a qterm with an answer choice (second group of constraints in Table 1 ).
We also define new constraints based on the Open IE tuple structure. Since an Open IE tuple expresses a fact about the tuple's subject, we require the subject to be active in the support graph. To avoid issues such as (Planet; orbit; Sun) matching the sample question in the introduction (“Which object $\ldots $ orbits around a planet”), we also add an ordering constraint (third group in Table 1 ).
It is worth mentioning that TupleInf only combines parallel evidence, i.e., each tuple must connect words in the question to the answer choice. For reliable multi-hop reasoning using Open IE tuples, we can add inter-tuple connections to the support graph search, controlled by a small number of rules over the Open IE predicates. Learning such rules for the science domain is an open problem and a potential avenue of future work.
Experiments
Comparing our method with two state-of-the-art systems for 4th and 8th grade science exams, we demonstrate that (a) TupleInf with only automatically extracted tuples significantly outperforms TableILP with its original curated knowledge as well as with additional tuples, and (b) TupleInf's complementary approach to IR leads to an improved ensemble. Numbers in bold indicate statistical significance based on the Binomial exact test BIBREF20 at $p=0.05$ .
We consider two question sets. (1) 4th Grade set (1220 train, 1304 test) is a 10x larger superset of the NY Regents questions BIBREF6 , and includes professionally written licensed questions. (2) 8th Grade set (293 train, 282 test) contains 8th grade questions from various states.
We consider two knowledge sources. The Sentence corpus (S) consists of domain-targeted $\sim$80K sentences and 280 GB of plain text extracted from web pages used by BIBREF6 aristo2016:combining. This corpus is used by the IR solver and also used to create the tuple KB $T$ and on-the-fly tuples $T^{\prime }_{qa}$. Additionally, TableILP uses $\sim$70 Curated tables (C) designed for 4th grade NY Regents exams.
We compare TupleInf with two state-of-the-art baselines. IR is a simple yet powerful information-retrieval baseline BIBREF6 that selects the answer option with the best matching sentence in a corpus. TableILP is the state-of-the-art structured inference baseline BIBREF9 developed for science questions.
Results
Table 2 shows that TupleInf, with no curated knowledge, outperforms TableILP on both question sets by more than 11%. The lower half of the table shows that even when both solvers are given the same knowledge (C+T), the improved selection and simplified model of TupleInf results in a statistically significant improvement. Our simple model, TupleInf(C + T), also achieves scores comparable to TableILP on the latter's target Regents questions (61.4% vs TableILP's reported 61.5%) without any specialized rules.
Table 3 shows that while TupleInf achieves similar scores as the IR solver, the approaches are complementary (structured lossy knowledge reasoning vs. lossless sentence retrieval). The two solvers, in fact, differ on 47.3% of the training questions. To exploit this complementarity, we train an ensemble system BIBREF6 which, as shown in the table, provides a substantial boost over the individual solvers. Further, IR + TupleInf is consistently better than IR + TableILP. Finally, in combination with IR and the statistical association based PMI solver (that scores 54.1% by itself) of BIBREF6 aristo2016:combining, TupleInf achieves a score of 58.2% as compared to TableILP's ensemble score of 56.7% on the 4th grade set, again attesting to TupleInf's strength.
Error Analysis
We describe four classes of failures that we observed, and the future work they suggest.
Missing Important Words: Which material will spread out to completely fill a larger container? (A)air (B)ice (C)sand (D)water
In this question, we have tuples that support water will spread out and fill a larger container but miss the critical word “completely”. An approach capable of detecting salient question words could help avoid that.
Lossy IE: Which action is the best method to separate a mixture of salt and water? ...
The IR solver correctly answers this question by using the sentence: Separate the salt and water mixture by evaporating the water. However, TupleInf is not able to answer this question as Open IE is unable to extract tuples from this imperative sentence. While the additional structure from Open IE is useful for more robust matching, converting sentences to Open IE tuples may lose important bits of information.
Bad Alignment: Which of the following gases is necessary for humans to breathe in order to live?(A) Oxygen(B) Carbon dioxide(C) Helium(D) Water vapor
TupleInf returns “Carbon dioxide” as the answer because of the tuple (humans; breathe out; carbon dioxide). The chunk “to breathe” in the question has a high alignment score to the “breathe out” relation in the tuple even though they have completely different meanings. Improving the phrase alignment can mitigate this issue.
Out of scope: Deer live in forest for shelter. If the forest was cut down, which situation would most likely happen?...
Such questions that require modeling a state presented in the question and reasoning over the state are out of scope of our solver.
Conclusion
We presented a new QA system, TupleInf, that can reason over a large, potentially noisy tuple KB to answer complex questions. Our results show that TupleInf is a new state-of-the-art structured solver for elementary-level science that does not rely on curated knowledge and generalizes to higher grades. Errors due to lossy IE and misalignments suggest future work in incorporating context and distributional measures.
Appendix: ILP Model Details
To build the ILP model, we first need to get the question terms (qterms) by chunking the question using an in-house chunker based on the POS tagger from FACTORIE.
Experiment Details
We use the SCIP ILP optimization engine BIBREF21 to optimize our ILP model. To get the score for each answer choice $a_i$ , we force the active variable for that choice $x_{a_i}$ to be one and use the objective function value of the ILP model as the score. For evaluations, we use a 2-core 2.5 GHz Amazon EC2 linux machine with 16 GB RAM. To evaluate TableILP and TupleInf on curated tables and tuples, we converted them into the expected format of each solver as follows.
Using curated tables with TupleInf
For each question, we select the 7 best-matching tables using the tf-idf score of the table w.r.t. the question tokens, and the top 20 rows from each table using the Jaccard similarity of the row with the question (same as BIBREF9 tableilp2016). We then convert the table rows into the tuple structure using the relations defined by TableILP. For every pair of cells connected by a relation, we create a tuple with the two cells as the subject and primary object and the relation as the predicate. The other cells of the table are used as additional objects to provide context to the solver. We pick the 50 top-scoring tuples using the Jaccard score.
Using Open IE tuples with TableILP
We create an additional table in TableILP with all the tuples in $T$ . Since TableILP uses fixed-length $(subject; predicate; object)$ triples, we need to map tuples with multiple objects to this format. For each object, $O_i$ in the input Open IE tuple $(S; P; O_1; O_2 \ldots )$ , we add a triple $(S; P; O_i)$ to this table. | Yes |
7d5ba230522df1890619dedcfb310160958223c1 | 7d5ba230522df1890619dedcfb310160958223c1_0 | Q: Do the authors offer any hypothesis about why the dense mode outperformed the sparse one?
Text: Introduction
Word sense disambiguation (WSD) is a natural language processing task of identifying the particular word senses of polysemous words used in a sentence. Recently, a lot of attention was paid to the problem of WSD for the Russian language BIBREF0 , BIBREF1 , BIBREF2 . This problem is especially difficult because of both linguistic issues – namely, the rich morphology of Russian and other Slavic languages in general – and technical challenges like the lack of software and language resources required for addressing the problem.
To address these issues, we present Watasense, an unsupervised system for word sense disambiguation. We describe its architecture and conduct an evaluation on three datasets for Russian. The choice of an unsupervised system is motivated by the absence of resources that would enable a supervised system for under-resourced languages. Watasense is not strictly tied to the Russian language and can be applied to any language for which a tokenizer, part-of-speech tagger, lemmatizer, and a sense inventory are available.
The rest of the paper is organized as follows. Section 2 reviews related work. Section 3 presents the Watasense word sense disambiguation system, presents its architecture, and describes the unsupervised word sense disambiguation methods bundled with it. Section 4 evaluates the system on a gold standard for Russian. Section 5 concludes with final remarks.
Related Work
Although the problem of WSD has been addressed in many SemEval campaigns BIBREF3 , BIBREF4 , BIBREF5 , we focus here on word sense disambiguation systems rather than on the research methodologies.
Among the freely available systems, IMS (“It Makes Sense”) is a supervised WSD system initially designed for the English language BIBREF6 . The system uses a support vector machine classifier to infer the particular sense of a word in the sentence given its contextual sentence-level features. Pywsd is a Python library implementing several popular WSD algorithms. It offers both the classical Lesk algorithm for WSD and path-based algorithms that heavily use WordNet and similar lexical ontologies. DKPro WSD BIBREF7 is a general-purpose framework for WSD that uses a lexical ontology as the sense inventory and offers a variety of WordNet-based algorithms. Babelfy BIBREF8 is a WSD system that uses BabelNet, a large-scale multilingual lexical ontology available for most natural languages. Due to the broad coverage of BabelNet, Babelfy offers entity linking as part of the WSD functionality.
Panchenko:17:emnlp present an unsupervised WSD system that is also knowledge-free: its sense inventory is induced based on the JoBimText framework, and disambiguation is performed by computing the semantic similarity between the context and the candidate senses BIBREF9 . Pelevina:16 proposed a similar approach to WSD, but based on dense vector representations (word embeddings), called SenseGram. Similarly to SenseGram, our WSD system is based on averaging of word embeddings on the basis of an automatically induced sense inventory. A crucial difference, however, is that we induce our sense inventory from synonymy dictionaries and not distributional word vectors. While this requires more manually created resources, a potential advantage of our approach is that the resulting inventory contains less noise.
Watasense, an Unsupervised System for Word Sense Disambiguation
Watasense is implemented in the Python programming language using the scikit-learn BIBREF10 and Gensim BIBREF11 libraries. Watasense offers a Web interface (Figure FIGREF2 ), a command-line tool, and an application programming interface (API) for deployment within other applications.
System Architecture
A sentence is represented as a list of spans. A span is a quadruple: INLINEFORM0 , where INLINEFORM1 is the word or the token, INLINEFORM2 is the part of speech tag, INLINEFORM3 is the lemma, INLINEFORM4 is the position of the word in the sentence. These data are provided by a tokenizer, part-of-speech tagger, and lemmatizer specific to the given language. The WSD results are represented as a map of spans to the corresponding word sense identifiers.
The sense inventory is a list of synsets. A synset is represented by three bags of words: the synonyms, the hypernyms, and the union of the two former – the bag. For performance reasons, an inverted index mapping each word to the set of synsets it belongs to is constructed on initialization.
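A minimal sketch of that inverted index, assuming each synset exposes its bag (synonyms plus hypernyms) as an iterable of words:

```python
# Minimal sketch of the word-to-synsets inverted index built on initialization.
from collections import defaultdict

def build_inverted_index(synsets):
    """Map every word of a synset's bag to the ids of the synsets containing it."""
    index = defaultdict(set)
    for synset_id, synset in enumerate(synsets):
        for word in synset["bag"]:
            index[word].add(synset_id)
    return index
```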
Each word sense disambiguation method extends the BaseWSD class. This class provides the end user with a generic interface for WSD and also encapsulates common routines for data pre-processing. The inherited classes like SparseWSD and DenseWSD should implement the disambiguate_word(...) method that disambiguates the given word in the given sentence. Both classes use the bag representation of synsets on initialization. As a result, for WSD, not just the synonyms are used, but also the hypernyms corresponding to the synsets. The UML class diagram is presented in Figure FIGREF4 .
Watasense supports two sources of word vectors: it can either read the word vector dataset in the binary Word2Vec format or use Word2Vec-Pyro4, a general-purpose word vector server. The use of a remote word vector server is recommended due to the reduction of memory footprint per each Watasense process.
User Interface
FIGREF2 shows the Web interface of Watasense. It is composed of two primary activities. The first is the text input and the method selection ( FIGREF2 ). The second is the display of the disambiguation results with part of speech highlighting ( FIGREF7 ). Those words with resolved polysemy are underlined; the tooltips with the details are raised on hover.
Word Sense Disambiguation
We use two different unsupervised approaches for word sense disambiguation. The first, called `sparse model', uses a straightforward sparse vector space model, as widely used in Information Retrieval, to represent contexts and synsets. The second, called `dense model', represents synsets and contexts in a dense, low-dimensional space by averaging word embeddings.
In the vector space model approach, we follow the sparse context-based disambiguation method BIBREF12 , BIBREF13 . For estimating the sense of the word INLINEFORM0 in a sentence, we search for such a synset INLINEFORM1 that maximizes the cosine similarity to the sentence vector: DISPLAYFORM0
where INLINEFORM0 is the set of words forming the synset, INLINEFORM1 is the set of words forming the sentence. On initialization, the synsets represented in the sense inventory are transformed into the INLINEFORM2 -weighted word-synset sparse matrix efficiently represented in the memory using the compressed sparse row format. Given a sentence, a similar transformation is done to obtain the sparse vector representation of the sentence in the same space as the word-synset matrix. Then, for each word to disambiguate, we retrieve the synset containing this word that maximizes the cosine similarity between the sparse sentence vector and the sparse synset vector. Let INLINEFORM3 be the maximal number of synsets containing a word and INLINEFORM4 be the maximal size of a synset. Therefore, disambiguation of the whole sentence INLINEFORM5 requires INLINEFORM6 operations using the efficient sparse matrix representation.
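A condensed sketch of this sparse pipeline, using scikit-learn's TfidfVectorizer and sparse cosine similarity in place of the system's own weighting code (an approximation, not the exact Watasense implementation):

```python
# Sparse-model sketch: tf-idf word-synset CSR matrix + cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def fit_synset_space(synsets):
    """Build the tf-idf weighted word-synset matrix once, on initialization."""
    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform(" ".join(s["bag"]) for s in synsets)  # CSR format
    return vectorizer, matrix

def sparse_disambiguate(word, sentence_lemmas, vectorizer, matrix, index):
    """Return the id of the synset containing `word` closest to the sentence vector."""
    candidates = sorted(index.get(word, ()))
    if not candidates:
        return None
    sentence_vec = vectorizer.transform([" ".join(sentence_lemmas)])
    similarities = cosine_similarity(sentence_vec, matrix[candidates]).ravel()
    return candidates[int(similarities.argmax())]
```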
In the synset embeddings model approach, we follow SenseGram BIBREF14 and apply it to the synsets induced from a graph of synonyms. We transform every synset into its dense vector representation by averaging the word embeddings corresponding to each constituent word: DISPLAYFORM0
where INLINEFORM0 denotes the word embedding of INLINEFORM1 . We do the same transformation for the sentence vectors. Then, given a word INLINEFORM2 , a sentence INLINEFORM3 , we find the synset INLINEFORM4 that maximizes the cosine similarity to the sentence: DISPLAYFORM0
On initialization, we pre-compute the dense synset vectors by averaging the corresponding word embeddings. Given a sentence, we similarly compute the dense sentence vector by averaging the vectors of the words belonging to non-auxiliary parts of speech, i.e., nouns, adjectives, adverbs, verbs, etc. Then, given a word to disambiguate, we retrieve the synset that maximizes the cosine similarity between the dense sentence vector and the dense synset vector. Thus, given the number of dimensions INLINEFORM0 , disambiguation of the whole sentence INLINEFORM1 requires INLINEFORM2 operations.
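The dense model reduces to vector averaging plus a cosine comparison; the sketch below assumes gensim KeyedVectors for the pre-trained embeddings, and the file path in the comment is illustrative.

```python
# Dense-model sketch: average word embeddings for synsets and the sentence,
# then pick the candidate synset with maximal cosine similarity.
import numpy as np
from gensim.models import KeyedVectors

# e.g. wv = KeyedVectors.load_word2vec_format("vectors.bin", binary=True)  # illustrative path

def average_vector(words, wv):
    vectors = [wv[w] for w in words if w in wv]
    return np.mean(vectors, axis=0) if vectors else np.zeros(wv.vector_size)

def cosine(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b) / denom if denom else 0.0

def dense_disambiguate(word, content_lemmas, synsets, index, wv):
    """Return the id of the candidate synset closest to the averaged sentence vector."""
    candidates = index.get(word, ())
    if not candidates:
        return None
    sentence_vec = average_vector(content_lemmas, wv)
    return max(candidates,
               key=lambda sid: cosine(average_vector(synsets[sid]["bag"], wv), sentence_vec))
```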
Evaluation
We conduct our experiments using the evaluation methodology of SemEval 2010 Task 14: Word Sense Induction & Disambiguation BIBREF5 . In the gold standard, each word is provided with a set of instances, i.e., the sentences containing the word. Each instance is manually annotated with the single sense identifier according to a pre-defined sense inventory. Each participating system estimates the sense labels for these ambiguous words, which can be viewed as a clustering of instances, according to sense labels. The system's clustering is compared to the gold-standard clustering for evaluation.
Quality Measure
The original SemEval 2010 Task 14 used the V-Measure external clustering measure BIBREF5 . However, this measure is maximized by clustering each sentence into its own distinct cluster, i.e., a `dummy' singleton baseline. This is achieved by the system deciding that every ambiguous word in every sentence corresponds to a different word sense. To cope with this issue, we follow a similar study BIBREF1 and instead use the adjusted Rand index (ARI) proposed by Hubert:85 as an evaluation measure.
In order to provide the overall value of ARI, we follow the addition approach used in BIBREF1 . Since the quality measure is computed for each lemma individually, the total value is a weighted sum, namely DISPLAYFORM0
where INLINEFORM0 is the lemma, INLINEFORM1 is the set of the instances for the lemma INLINEFORM2 , INLINEFORM3 is the adjusted Rand index computed for the lemma INLINEFORM4 . Thus, the contribution of each lemma to the total score is proportional to the number of instances of this lemma.
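In practice this weighted aggregation is a few lines on top of scikit-learn's adjusted_rand_score; the per-lemma data structure below is an assumption for illustration.

```python
# Sketch of the weighted ARI aggregation; `per_lemma` maps each lemma to its
# gold and predicted sense labels over that lemma's instances.
from sklearn.metrics import adjusted_rand_score

def overall_ari(per_lemma):
    total_instances = sum(len(gold) for gold, _ in per_lemma.values())
    weighted = sum(len(gold) * adjusted_rand_score(gold, predicted)
                   for gold, predicted in per_lemma.values())
    return weighted / total_instances
```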
Dataset
We evaluate the word sense disambiguation methods in Watasense against three baselines: an unsupervised approach for learning multi-prototype word embeddings called AdaGram BIBREF15 , same sense for all the instances per lemma (One), and one sense per instance (Singletons). The AdaGram model is trained on the combination of RuWac, Lib.Ru, and the Russian Wikipedia with the overall vocabulary size of 2 billion tokens BIBREF1 .
As the gold-standard dataset, we use the WSD training dataset for Russian created during RUSSE'2018: A Shared Task on Word Sense Induction and Disambiguation for the Russian Language BIBREF16 . The dataset has 31 words covered by INLINEFORM0 instances in the bts-rnc subset and 5 words covered by 439 instances in the wiki-wiki subset.
The following different sense inventories have been used during the evaluation:
[leftmargin=4mm]
Watlink, a word sense network constructed automatically. It uses the synsets induced in an unsupervised way by the Watset[CWnolog, MCL] method BIBREF2 and the semantic relations from such dictionaries as Wiktionary referred as Joint INLINEFORM0 Exp INLINEFORM1 SWN in Ustalov:17:dialogue. This is the only automatically built inventory we use in the evaluation.
RuThes, a large-scale lexical ontology for Russian created by a group of expert lexicographers BIBREF17 .
RuWordNet, a semi-automatic conversion of the RuThes lexical ontology into a WordNet-like structure BIBREF18 .
Since the Dense model requires word embeddings, we used the 500-dimensional word vectors from the Russian Distributional Thesaurus BIBREF19 . These vectors are obtained using the Skip-gram approach trained on the lib.rus.ec text corpus.
Results
We compare the evaluation results obtained for the Sparse and Dense approaches with three baselines: the AdaGram model (AdaGram), the same sense for all the instances per lemma (One) and one sense per instance (Singletons). The evaluation results are presented in Table TABREF25 . The columns bts-rnc and wiki-wiki represent the overall value of ARI according to Equation ( EQREF15 ). The column Avg. consists of the weighted average of the datasets w.r.t. the number of instances.
We observe that the SenseGram-based approach for word sense disambiguation yields substantially better results in every case (Table TABREF25 ). The primary reason for that is the implicit handling of similar words due to the averaging of dense word vectors for semantically related words. Thus, we recommend using the dense approach in further studies. Although the AdaGram approach trained on a large text corpus showed better results according to the weighted average, this result does not transfer to languages for which corpora of comparable size are not available.
Conclusion
In this paper, we presented Watasense, an open source unsupervised word sense disambiguation system that is parameterized only by a word sense inventory. It supports both sparse and dense sense representations. We were able to show that the dense approach substantially outperforms the sparse approach on three different sense inventories for Russian. We recommend using the dense approach in further studies due to its smoothing capabilities that reduce sparseness. In further studies, we will look at the problem of phrase neighbors that influence the sentence vector representations.
Finally, we would like to emphasize the fact that Watasense has a simple API for integrating different algorithms for WSD. At the same time, it requires only a basic set of language processing tools to be available: a tokenizer, a part-of-speech tagger, a lemmatizer, and a sense inventory, which means that low-resource languages can benefit from its usage.
Acknowledgements
We acknowledge the support of the Deutsche Forschungsgemeinschaft (DFG) under the project “Joining Ontologies and Semantics Induced from Text” (JOIN-T), the RFBR under the projects no. 16-37-00203 mol_a and no. 16-37-00354 mol_a, and the RFH under the project no. 16-04-12019. The research was supported by the Ministry of Education and Science of the Russian Federation Agreement no. 02.A03.21.0006. The calculations were carried out using the supercomputer “Uran” at the Krasovskii Institute of Mathematics and Mechanics. | Yes |
a48cc6d3d322a7b159ff40ec162a541bf74321eb | a48cc6d3d322a7b159ff40ec162a541bf74321eb_0 | Q: What evaluation is conducted?
Text: Introduction
Word sense disambiguation (WSD) is a natural language processing task of identifying the particular word senses of polysemous words used in a sentence. Recently, a lot of attention was paid to the problem of WSD for the Russian language BIBREF0 , BIBREF1 , BIBREF2 . This problem is especially difficult because of both linguistic issues – namely, the rich morphology of Russian and other Slavic languages in general – and technical challenges like the lack of software and language resources required for addressing the problem.
To address these issues, we present Watasense, an unsupervised system for word sense disambiguation. We describe its architecture and conduct an evaluation on three datasets for Russian. The choice of an unsupervised system is motivated by the absence of resources that would enable a supervised system for under-resourced languages. Watasense is not strictly tied to the Russian language and can be applied to any language for which a tokenizer, part-of-speech tagger, lemmatizer, and a sense inventory are available.
The rest of the paper is organized as follows. Section 2 reviews related work. Section 3 presents the Watasense word sense disambiguation system, presents its architecture, and describes the unsupervised word sense disambiguation methods bundled with it. Section 4 evaluates the system on a gold standard for Russian. Section 5 concludes with final remarks.
Related Work
Although the problem of WSD has been addressed in many SemEval campaigns BIBREF3 , BIBREF4 , BIBREF5 , we focus here on word sense disambiguation systems rather than on the research methodologies.
Among the freely available systems, IMS (“It Makes Sense”) is a supervised WSD system initially designed for the English language BIBREF6 . The system uses a support vector machine classifier to infer the particular sense of a word in the sentence given its contextual sentence-level features. Pywsd is a Python library implementing several popular WSD algorithms. It offers both the classical Lesk algorithm for WSD and path-based algorithms that heavily use WordNet and similar lexical ontologies. DKPro WSD BIBREF7 is a general-purpose framework for WSD that uses a lexical ontology as the sense inventory and offers a variety of WordNet-based algorithms. Babelfy BIBREF8 is a WSD system that uses BabelNet, a large-scale multilingual lexical ontology available for most natural languages. Due to the broad coverage of BabelNet, Babelfy offers entity linking as part of the WSD functionality.
Panchenko:17:emnlp present an unsupervised WSD system that is also knowledge-free: its sense inventory is induced based on the JoBimText framework, and disambiguation is performed by computing the semantic similarity between the context and the candidate senses BIBREF9 . Pelevina:16 proposed a similar approach to WSD, but based on dense vector representations (word embeddings), called SenseGram. Similarly to SenseGram, our WSD system is based on averaging of word embeddings on the basis of an automatically induced sense inventory. A crucial difference, however, is that we induce our sense inventory from synonymy dictionaries and not distributional word vectors. While this requires more manually created resources, a potential advantage of our approach is that the resulting inventory contains less noise.
Watasense, an Unsupervised System for Word Sense Disambiguation
Watasense is implemented in the Python programming language using the scikit-learn BIBREF10 and Gensim BIBREF11 libraries. Watasense offers a Web interface (Figure FIGREF2 ), a command-line tool, and an application programming interface (API) for deployment within other applications.
System Architecture
A sentence is represented as a list of spans. A span is a quadruple: INLINEFORM0 , where INLINEFORM1 is the word or the token, INLINEFORM2 is the part of speech tag, INLINEFORM3 is the lemma, INLINEFORM4 is the position of the word in the sentence. These data are provided by a tokenizer, part-of-speech tagger, and lemmatizer specific to the given language. The WSD results are represented as a map of spans to the corresponding word sense identifiers.
The sense inventory is a list of synsets. A synset is represented by three bags of words: the synonyms, the hypernyms, and the union of the two former – the bag. For performance reasons, an inverted index mapping each word to the set of synsets it belongs to is constructed on initialization.
Each word sense disambiguation method extends the BaseWSD class. This class provides the end user with a generic interface for WSD and also encapsulates common routines for data pre-processing. The inherited classes like SparseWSD and DenseWSD should implement the disambiguate_word(...) method that disambiguates the given word in the given sentence. Both classes use the bag representation of synsets on initialization. As a result, for WSD, not just the synonyms are used, but also the hypernyms corresponding to the synsets. The UML class diagram is presented in Figure FIGREF4 .
Watasense supports two sources of word vectors: it can either read the word vector dataset in the binary Word2Vec format or use Word2Vec-Pyro4, a general-purpose word vector server. The use of a remote word vector server is recommended due to the reduction of memory footprint per each Watasense process.
User Interface
FIGREF2 shows the Web interface of Watasense. It is composed of two primary activities. The first is the text input and the method selection ( FIGREF2 ). The second is the display of the disambiguation results with part of speech highlighting ( FIGREF7 ). Those words with resolved polysemy are underlined; the tooltips with the details are raised on hover.
Word Sense Disambiguation
We use two different unsupervised approaches for word sense disambiguation. The first, called `sparse model', uses a straightforward sparse vector space model, as widely used in Information Retrieval, to represent contexts and synsets. The second, called `dense model', represents synsets and contexts in a dense, low-dimensional space by averaging word embeddings.
In the vector space model approach, we follow the sparse context-based disambiguation method BIBREF12 , BIBREF13 . For estimating the sense of the word INLINEFORM0 in a sentence, we search for such a synset INLINEFORM1 that maximizes the cosine similarity to the sentence vector: DISPLAYFORM0
where INLINEFORM0 is the set of words forming the synset, INLINEFORM1 is the set of words forming the sentence. On initialization, the synsets represented in the sense inventory are transformed into the INLINEFORM2 -weighted word-synset sparse matrix efficiently represented in the memory using the compressed sparse row format. Given a sentence, a similar transformation is done to obtain the sparse vector representation of the sentence in the same space as the word-synset matrix. Then, for each word to disambiguate, we retrieve the synset containing this word that maximizes the cosine similarity between the sparse sentence vector and the sparse synset vector. Let INLINEFORM3 be the maximal number of synsets containing a word and INLINEFORM4 be the maximal size of a synset. Therefore, disambiguation of the whole sentence INLINEFORM5 requires INLINEFORM6 operations using the efficient sparse matrix representation.
In the synset embeddings model approach, we follow SenseGram BIBREF14 and apply it to the synsets induced from a graph of synonyms. We transform every synset into its dense vector representation by averaging the word embeddings corresponding to each constituent word: DISPLAYFORM0
where INLINEFORM0 denotes the word embedding of INLINEFORM1 . We do the same transformation for the sentence vectors. Then, given a word INLINEFORM2 , a sentence INLINEFORM3 , we find the synset INLINEFORM4 that maximizes the cosine similarity to the sentence: DISPLAYFORM0
On initialization, we pre-compute the dense synset vectors by averaging the corresponding word embeddings. Given a sentence, we similarly compute the dense sentence vector by averaging the vectors of the words belonging to non-auxiliary parts of speech, i.e., nouns, adjectives, adverbs, verbs, etc. Then, given a word to disambiguate, we retrieve the synset that maximizes the cosine similarity between the dense sentence vector and the dense synset vector. Thus, given the number of dimensions INLINEFORM0 , disambiguation of the whole sentence INLINEFORM1 requires INLINEFORM2 operations.
Evaluation
We conduct our experiments using the evaluation methodology of SemEval 2010 Task 14: Word Sense Induction & Disambiguation BIBREF5 . In the gold standard, each word is provided with a set of instances, i.e., the sentences containing the word. Each instance is manually annotated with the single sense identifier according to a pre-defined sense inventory. Each participating system estimates the sense labels for these ambiguous words, which can be viewed as a clustering of instances, according to sense labels. The system's clustering is compared to the gold-standard clustering for evaluation.
Quality Measure
The original SemEval 2010 Task 14 used the V-Measure external clustering measure BIBREF5 . However, this measure is maximized by clustering each sentence into its own distinct cluster, i.e., a `dummy' singleton baseline. This is achieved by the system deciding that every ambiguous word in every sentence corresponds to a different word sense. To cope with this issue, we follow a similar study BIBREF1 and instead use the adjusted Rand index (ARI) proposed by Hubert:85 as an evaluation measure.
In order to provide the overall value of ARI, we follow the addition approach used in BIBREF1 . Since the quality measure is computed for each lemma individually, the total value is a weighted sum, namely DISPLAYFORM0
where INLINEFORM0 is the lemma, INLINEFORM1 is the set of the instances for the lemma INLINEFORM2 , INLINEFORM3 is the adjusted Rand index computed for the lemma INLINEFORM4 . Thus, the contribution of each lemma to the total score is proportional to the number of instances of this lemma.
Dataset
We evaluate the word sense disambiguation methods in Watasense against three baselines: an unsupervised approach for learning multi-prototype word embeddings called AdaGram BIBREF15 , same sense for all the instances per lemma (One), and one sense per instance (Singletons). The AdaGram model is trained on the combination of RuWac, Lib.Ru, and the Russian Wikipedia with the overall vocabulary size of 2 billion tokens BIBREF1 .
As the gold-standard dataset, we use the WSD training dataset for Russian created during RUSSE'2018: A Shared Task on Word Sense Induction and Disambiguation for the Russian Language BIBREF16 . The dataset has 31 words covered by INLINEFORM0 instances in the bts-rnc subset and 5 words covered by 439 instances in the wiki-wiki subset.
The following different sense inventories have been used during the evaluation:
[leftmargin=4mm]
Watlink, a word sense network constructed automatically. It uses the synsets induced in an unsupervised way by the Watset[CWnolog, MCL] method BIBREF2 and the semantic relations from such dictionaries as Wiktionary referred as Joint INLINEFORM0 Exp INLINEFORM1 SWN in Ustalov:17:dialogue. This is the only automatically built inventory we use in the evaluation.
RuThes, a large-scale lexical ontology for Russian created by a group of expert lexicographers BIBREF17 .
RuWordNet, a semi-automatic conversion of the RuThes lexical ontology into a WordNet-like structure BIBREF18 .
Since the Dense model requires word embeddings, we used the 500-dimensional word vectors from the Russian Distributional Thesaurus BIBREF19 . These vectors are obtained using the Skip-gram approach trained on the lib.rus.ec text corpus.
Results
We compare the evaluation results obtained for the Sparse and Dense approaches with three baselines: the AdaGram model (AdaGram), the same sense for all the instances per lemma (One) and one sense per instance (Singletons). The evaluation results are presented in Table TABREF25 . The columns bts-rnc and wiki-wiki represent the overall value of ARI according to Equation ( EQREF15 ). The column Avg. consists of the weighted average of the datasets w.r.t. the number of instances.
We observe that the SenseGram-based approach for word sense disambiguation yields substantially better results in every case (Table TABREF25 ). The primary reason for that is the implicit handling of similar words due to the averaging of dense word vectors for semantically related words. Thus, we recommend using the dense approach in further studies. Although the AdaGram approach trained on a large text corpus showed better results according to the weighted average, this result does not transfer to languages for which corpora of comparable size are not available.
Conclusion
In this paper, we presented Watasense, an open source unsupervised word sense disambiguation system that is parameterized only by a word sense inventory. It supports both sparse and dense sense representations. We were able to show that the dense approach substantially outperforms the sparse approach on three different sense inventories for Russian. We recommend using the dense approach in further studies due to its smoothing capabilities that reduce sparseness. In further studies, we will look at the problem of phrase neighbors that influence the sentence vector representations.
Finally, we would like to emphasize the fact that Watasense has a simple API for integrating different algorithms for WSD. At the same time, it requires only a basic set of language processing tools to be available: a tokenizer, a part-of-speech tagger, a lemmatizer, and a sense inventory, which means that low-resource languages can benefit from its usage.
Acknowledgements
We acknowledge the support of the Deutsche Forschungsgemeinschaft (DFG) under the project “Joining Ontologies and Semantics Induced from Text” (JOIN-T), the RFBR under the projects no. 16-37-00203 mol_a and no. 16-37-00354 mol_a, and the RFH under the project no. 16-04-12019. The research was supported by the Ministry of Education and Science of the Russian Federation Agreement no. 02.A03.21.0006. The calculations were carried out using the supercomputer “Uran” at the Krasovskii Institute of Mathematics and Mechanics. | Word Sense Induction & Disambiguation |
2bc0bb7d3688fdd2267c582ca593e2ce72718a91 | 2bc0bb7d3688fdd2267c582ca593e2ce72718a91_0 | Q: Which corpus of synsets are used?
Text: Introduction
Word sense disambiguation (WSD) is a natural language processing task of identifying the particular word senses of polysemous words used in a sentence. Recently, a lot of attention was paid to the problem of WSD for the Russian language BIBREF0 , BIBREF1 , BIBREF2 . This problem is especially difficult because of both linguistic issues – namely, the rich morphology of Russian and other Slavic languages in general – and technical challenges like the lack of software and language resources required for addressing the problem.
To address these issues, we present Watasense, an unsupervised system for word sense disambiguation. We describe its architecture and conduct an evaluation on three datasets for Russian. The choice of an unsupervised system is motivated by the absence of resources that would enable a supervised system for under-resourced languages. Watasense is not strictly tied to the Russian language and can be applied to any language for which a tokenizer, part-of-speech tagger, lemmatizer, and a sense inventory are available.
The rest of the paper is organized as follows. Section 2 reviews related work. Section 3 presents the Watasense word sense disambiguation system, presents its architecture, and describes the unsupervised word sense disambiguation methods bundled with it. Section 4 evaluates the system on a gold standard for Russian. Section 5 concludes with final remarks.
Related Work
Although the problem of WSD has been addressed in many SemEval campaigns BIBREF3 , BIBREF4 , BIBREF5 , we focus here on word sense disambiguation systems rather than on the research methodologies.
Among the freely available systems, IMS (“It Makes Sense”) is a supervised WSD system initially designed for the English language BIBREF6 . The system uses a support vector machine classifier to infer the particular sense of a word in the sentence given its contextual sentence-level features. Pywsd is a Python library implementing several popular WSD algorithms. It offers both the classical Lesk algorithm for WSD and path-based algorithms that heavily use WordNet and similar lexical ontologies. DKPro WSD BIBREF7 is a general-purpose framework for WSD that uses a lexical ontology as the sense inventory and offers a variety of WordNet-based algorithms. Babelfy BIBREF8 is a WSD system that uses BabelNet, a large-scale multilingual lexical ontology available for most natural languages. Due to the broad coverage of BabelNet, Babelfy offers entity linking as part of the WSD functionality.
Panchenko:17:emnlp present an unsupervised WSD system that is also knowledge-free: its sense inventory is induced based on the JoBimText framework, and disambiguation is performed by computing the semantic similarity between the context and the candidate senses BIBREF9 . Pelevina:16 proposed a similar approach to WSD, but based on dense vector representations (word embeddings), called SenseGram. Similarly to SenseGram, our WSD system is based on averaging of word embeddings on the basis of an automatically induced sense inventory. A crucial difference, however, is that we induce our sense inventory from synonymy dictionaries and not distributional word vectors. While this requires more manually created resources, a potential advantage of our approach is that the resulting inventory contains less noise.
Watasense, an Unsupervised System for Word Sense Disambiguation
Watasense is implemented in the Python programming language using the scikit-learn BIBREF10 and Gensim BIBREF11 libraries. Watasense offers a Web interface (Figure FIGREF2 ), a command-line tool, and an application programming interface (API) for deployment within other applications.
System Architecture
A sentence is represented as a list of spans. A span is a quadruple: INLINEFORM0 , where INLINEFORM1 is the word or the token, INLINEFORM2 is the part of speech tag, INLINEFORM3 is the lemma, INLINEFORM4 is the position of the word in the sentence. These data are provided by a tokenizer, part-of-speech tagger, and lemmatizer specific to the given language. The WSD results are represented as a map of spans to the corresponding word sense identifiers.
The sense inventory is a list of synsets. A synset is represented by three bags of words: the synonyms, the hypernyms, and the union of the two former – the bag. For performance reasons, an inverted index mapping each word to the set of synsets it belongs to is constructed on initialization.
Each word sense disambiguation method extends the BaseWSD class. This class provides the end user with a generic interface for WSD and also encapsulates common routines for data pre-processing. The inherited classes like SparseWSD and DenseWSD should implement the disambiguate_word(...) method that disambiguates the given word in the given sentence. Both classes use the bag representation of synsets on initialization. As a result, for WSD, not just the synonyms are used, but also the hypernyms corresponding to the synsets. The UML class diagram is presented in Figure FIGREF4 .
Watasense supports two sources of word vectors: it can either read the word vector dataset in the binary Word2Vec format or use Word2Vec-Pyro4, a general-purpose word vector server. The use of a remote word vector server is recommended due to the reduction of memory footprint per each Watasense process.
User Interface
FIGREF2 shows the Web interface of Watasense. It is composed of two primary activities. The first is the text input and the method selection ( FIGREF2 ). The second is the display of the disambiguation results with part of speech highlighting ( FIGREF7 ). Those words with resolved polysemy are underlined; the tooltips with the details are raised on hover.
Word Sense Disambiguation
We use two different unsupervised approaches for word sense disambiguation. The first, called `sparse model', uses a straightforward sparse vector space model, as widely used in Information Retrieval, to represent contexts and synsets. The second, called `dense model', represents synsets and contexts in a dense, low-dimensional space by averaging word embeddings.
In the vector space model approach, we follow the sparse context-based disambiguation method BIBREF12 , BIBREF13 . For estimating the sense of the word INLINEFORM0 in a sentence, we search for such a synset INLINEFORM1 that maximizes the cosine similarity to the sentence vector: DISPLAYFORM0
where INLINEFORM0 is the set of words forming the synset, INLINEFORM1 is the set of words forming the sentence. On initialization, the synsets represented in the sense inventory are transformed into the INLINEFORM2 -weighted word-synset sparse matrix efficiently represented in the memory using the compressed sparse row format. Given a sentence, a similar transformation is done to obtain the sparse vector representation of the sentence in the same space as the word-synset matrix. Then, for each word to disambiguate, we retrieve the synset containing this word that maximizes the cosine similarity between the sparse sentence vector and the sparse synset vector. Let INLINEFORM3 be the maximal number of synsets containing a word and INLINEFORM4 be the maximal size of a synset. Therefore, disambiguation of the whole sentence INLINEFORM5 requires INLINEFORM6 operations using the efficient sparse matrix representation.
In the synset embeddings model approach, we follow SenseGram BIBREF14 and apply it to the synsets induced from a graph of synonyms. We transform every synset into its dense vector representation by averaging the word embeddings corresponding to each constituent word: $\vec{S} = \frac{1}{|S|} \sum_{w \in S} \vec{w}$,
where $\vec{w}$ denotes the word embedding of the word $w$. We apply the same transformation to the sentence vectors. Then, given a word $w$ and a sentence $s$, we find the synset $\hat{S}$ containing $w$ that maximizes the cosine similarity to the sentence: $\hat{S} = \arg\max_{S \ni w} \cos(\vec{S}, \vec{s})$.
On initialization, we pre-compute the dense synset vectors by averaging the corresponding word embeddings. Given a sentence, we similarly compute the dense sentence vector by averaging the vectors of the words belonging to non-auxiliary parts of speech, i.e., nouns, adjectives, adverbs, verbs, etc. Then, given a word to disambiguate, we retrieve the synset that maximizes the cosine similarity between the dense sentence vector and the dense synset vector. Thus, given the number of dimensions $d$, disambiguation of the whole sentence $s$ requires $O(|s| \cdot k \cdot d)$ operations.
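A correspondingly small sketch of the dense model is given below; the three-dimensional toy embeddings stand in for real Word2Vec vectors, and everything else is likewise an illustrative assumption.

# Dense WSD sketch: average the embeddings of a synset's words and of the
# sentence words, then pick the candidate synset with the highest cosine.
import numpy as np

embeddings = {  # toy 3-dimensional vectors standing in for real word embeddings
    "bank": np.array([0.5, 0.5, 0.0]),
    "river": np.array([0.9, 0.1, 0.0]),
    "money": np.array([0.0, 0.2, 0.9]),
    "deposit": np.array([0.1, 0.1, 0.8]),
}
synsets = {"bank#1": ["bank", "river"], "bank#2": ["bank", "money", "deposit"]}

def average(words):
    vectors = [embeddings[w] for w in words if w in embeddings]
    return np.mean(vectors, axis=0) if vectors else np.zeros(3)

def cosine(a, b):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

synset_vectors = {sid: average(words) for sid, words in synsets.items()}  # pre-computed once

def disambiguate(word, sentence_tokens):
    sentence_vector = average(sentence_tokens)
    candidates = [sid for sid, words in synsets.items() if word in words]
    return max(candidates, key=lambda sid: cosine(synset_vectors[sid], sentence_vector))

print(disambiguate("bank", ["deposit", "money", "bank"]))  # -> bank#2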
Evaluation
We conduct our experiments using the evaluation methodology of SemEval 2010 Task 14: Word Sense Induction & Disambiguation BIBREF5 . In the gold standard, each word is provided with a set of instances, i.e., the sentences containing the word. Each instance is manually annotated with the single sense identifier according to a pre-defined sense inventory. Each participating system estimates the sense labels for these ambiguous words, which can be viewed as a clustering of instances, according to sense labels. The system's clustering is compared to the gold-standard clustering for evaluation.
Quality Measure
The original SemEval 2010 Task 14 used the V-Measure external clustering measure BIBREF5 . However, this measure is maximized by clustering each sentence into its own distinct cluster, i.e., a `dummy' singleton baseline. This is achieved by the system deciding that every ambiguous word in every sentence corresponds to a different word sense. To cope with this issue, we follow a similar study BIBREF1 and instead use the adjusted Rand index (ARI) proposed by Hubert:85 as the evaluation measure.
In order to provide the overall value of ARI, we follow the addition approach used in BIBREF1 . Since the quality measure is computed for each lemma individually, the total value is a weighted sum, namely $\mathrm{ARI} = \sum_{w} |I_w| \cdot \mathrm{ARI}_w \big/ \sum_{w} |I_w|$,
where $w$ is the lemma, $I_w$ is the set of the instances for the lemma $w$, and $\mathrm{ARI}_w$ is the adjusted Rand index computed for the lemma $w$. Thus, the contribution of each lemma to the total score is proportional to the number of instances of this lemma.
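The aggregation reduces to a few lines given per-lemma labelings; the gold and predicted sense labels below are made up for illustration, and scikit-learn's adjusted_rand_score is assumed to be an acceptable stand-in for the original evaluation script.

# Weighted ARI sketch: per-lemma ARI values are combined in proportion to
# the number of instances of each lemma.
from sklearn.metrics import adjusted_rand_score

per_lemma = {  # lemma -> (gold sense labels, predicted sense labels), one label per instance
    "bank": ([1, 1, 2, 2, 2], [1, 1, 2, 2, 1]),
    "plant": ([1, 2, 2], [1, 2, 2]),
}

total_instances = sum(len(gold) for gold, _ in per_lemma.values())
weighted_ari = sum(
    len(gold) * adjusted_rand_score(gold, pred) for gold, pred in per_lemma.values()
) / total_instances
print(round(weighted_ari, 3))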
Dataset
We evaluate the word sense disambiguation methods in Watasense against three baselines: an unsupervised approach for learning multi-prototype word embeddings called AdaGram BIBREF15 , same sense for all the instances per lemma (One), and one sense per instance (Singletons). The AdaGram model is trained on the combination of RuWac, Lib.Ru, and the Russian Wikipedia with the overall vocabulary size of 2 billion tokens BIBREF1 .
As the gold-standard dataset, we use the WSD training dataset for Russian created during RUSSE'2018: A Shared Task on Word Sense Induction and Disambiguation for the Russian Language BIBREF16 . The dataset has 31 words covered by INLINEFORM0 instances in the bts-rnc subset and 5 words covered by 439 instances in the wiki-wiki subset.
The following different sense inventories have been used during the evaluation:
Watlink, a word sense network constructed automatically. It uses the synsets induced in an unsupervised way by the Watset[CWnolog, MCL] method BIBREF2 and the semantic relations from dictionaries such as Wiktionary, referred to as Joint INLINEFORM0 Exp INLINEFORM1 SWN in Ustalov:17:dialogue. This is the only automatically built inventory we use in the evaluation.
RuThes, a large-scale lexical ontology for Russian created by a group of expert lexicographers BIBREF17 .
RuWordNet, a semi-automatic conversion of the RuThes lexical ontology into a WordNet-like structure BIBREF18 .
Since the Dense model requires word embeddings, we used the 500-dimensional word vectors from the Russian Distributional Thesaurus BIBREF19 . These vectors are obtained using the Skip-gram approach trained on the lib.rus.ec text corpus.
Results
We compare the evaluation results obtained for the Sparse and Dense approaches with three baselines: the AdaGram model (AdaGram), the same sense for all the instances per lemma (One) and one sense per instance (Singletons). The evaluation results are presented in Table TABREF25 . The columns bts-rnc and wiki-wiki represent the overall value of ARI according to Equation ( EQREF15 ). The column Avg. consists of the weighted average of the datasets w.r.t. the number of instances.
We observe that the SenseGram-based approach for word sense disambiguation yields substantially better results in every case (Table TABREF25 ). The primary reason for that is the implicit handling of similar words due to the averaging of dense word vectors for semantically related words. Thus, we recommend using the dense approach in further studies. Although the AdaGram approach trained on a large text corpus showed better results according to the weighted average, this result does not transfer to languages with less available corpus size.
Conclusion
In this paper, we presented Watasense, an open source unsupervised word sense disambiguation system that is parameterized only by a word sense inventory. It supports both sparse and dense sense representations. We were able to show that the dense approach substantially improves over the sparse approach on three different sense inventories for Russian. We recommend using the dense approach in further studies due to its smoothing capabilities that reduce sparseness. In further studies, we will look at the problem of phrase neighbors that influence the sentence vector representations.
Finally, we would like to emphasize the fact that Watasense has a simple API for integrating different algorithms for WSD. At the same time, it requires only a basic set of language processing tools to be available: a tokenizer, a part-of-speech tagger, a lemmatizer, and a sense inventory, which means that low-resourced languages can benefit from its usage.
Acknowledgements
We acknowledge the support of the Deutsche Forschungsgemeinschaft (DFG) under the project “Joining Ontologies and Semantics Induced from Text” (JOIN-T), the RFBR under the projects no. 16-37-00203 mol_a and no. 16-37-00354 mol_a, and the RFH under the project no. 16-04-12019. The research was supported by the Ministry of Education and Science of the Russian Federation Agreement no. 02.A03.21.0006. The calculations were carried out using the supercomputer “Uran” at the Krasovskii Institute of Mathematics and Mechanics. | Wiktionary |
8c073b7ea8cb5cc54d7fecb8f4bf88c1fb621b19 | 8c073b7ea8cb5cc54d7fecb8f4bf88c1fb621b19_0 | Q: What measure of semantic similarity is used?
Text: Introduction
Word sense disambiguation (WSD) is a natural language processing task of identifying the particular word senses of polysemous words used in a sentence. Recently, a lot of attention was paid to the problem of WSD for the Russian language BIBREF0 , BIBREF1 , BIBREF2 . This problem is especially difficult because of both linguistic issues – namely, the rich morphology of Russian and other Slavic languages in general – and technical challenges like the lack of software and language resources required for addressing the problem.
To address these issues, we present Watasense, an unsupervised system for word sense disambiguation. We describe its architecture and conduct an evaluation on three datasets for Russian. The choice of an unsupervised system is motivated by the absence of resources that would enable a supervised system for under-resourced languages. Watasense is not strictly tied to the Russian language and can be applied to any language for which a tokenizer, part-of-speech tagger, lemmatizer, and a sense inventory are available.
The rest of the paper is organized as follows. Section 2 reviews related work. Section 3 presents the Watasense word sense disambiguation system, presents its architecture, and describes the unsupervised word sense disambiguation methods bundled with it. Section 4 evaluates the system on a gold standard for Russian. Section 5 concludes with final remarks.
Related Work
Although the problem of WSD has been addressed in many SemEval campaigns BIBREF3 , BIBREF4 , BIBREF5 , we focus here on word sense disambiguation systems rather than on the research methodologies.
Among the freely available systems, IMS (“It Makes Sense”) is a supervised WSD system designed initially for the English language BIBREF6 . The system uses a support vector machine classifier to infer the particular sense of a word in the sentence given its contextual sentence-level features. Pywsd is an implementation of several popular WSD algorithms implemented in a library for the Python programming language. It offers both the classical Lesk algorithm for WSD and path-based algorithms that heavily use the WordNet and similar lexical ontologies. DKPro WSD BIBREF7 is a general-purpose framework for WSD that uses a lexical ontology as the sense inventory and offers the variety of WordNet-based algorithms. Babelfy BIBREF8 is a WSD system that uses BabelNet, a large-scale multilingual lexical ontology available for most natural languages. Due to the broad coverage of BabelNet, Babelfy offers entity linking as part of the WSD functionality.
Panchenko:17:emnlp present an unsupervised WSD system that is also knowledge-free: its sense inventory is induced based on the JoBimText framework, and disambiguation is performed by computing the semantic similarity between the context and the candidate senses BIBREF9 . Pelevina:16 proposed a similar approach to WSD, but based on dense vector representations (word embeddings), called SenseGram. Similarly to SenseGram, our WSD system is based on averaging of word embeddings on the basis of an automatically induced sense inventory. A crucial difference, however, is that we induce our sense inventory from synonymy dictionaries and not distributional word vectors. While this requires more manually created resources, a potential advantage of our approach is that the resulting inventory contains less noise.
Watasense, an Unsupervised System for Word Sense Disambiguation
Watasense is implemented in the Python programming language using the scikit-learn BIBREF10 and Gensim BIBREF11 libraries. Watasense offers a Web interface (Figure FIGREF2 ), a command-line tool, and an application programming interface (API) for deployment within other applications.
System Architecture
A sentence is represented as a list of spans. A span is a quadruple $(w, p, l, i)$, where $w$ is the word or token, $p$ is its part-of-speech tag, $l$ is its lemma, and $i$ is the position of the word in the sentence. These data are provided by a tokenizer, a part-of-speech tagger, and a lemmatizer that are specific to the given language. The WSD results are represented as a map from spans to the corresponding word sense identifiers.
The sense inventory is a list of synsets. A synset is represented by three bags of words: the synonyms, the hypernyms, and the union of the two former, referred to as the bag. For performance reasons, an inverted index is constructed on initialization to map each word to the set of synsets in which it occurs.
Each word sense disambiguation method extends the BaseWSD class. This class provides the end user with a generic interface for WSD and also encapsulates common routines for data pre-processing. The inherited classes, such as SparseWSD and DenseWSD, should implement the disambiguate_word(...) method that disambiguates the given word in the given sentence. Both classes use the bag representation of synsets on initialization. As a result, not only the synonyms but also the hypernyms corresponding to the synsets are used for WSD. The UML class diagram is presented in Figure FIGREF4.
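A rough sketch of this interface is shown below; only the class names BaseWSD, SparseWSD, DenseWSD and the disambiguate_word(...) method come from the description above, while the constructor signature, argument names, and the disambiguate(...) helper are assumptions.

# Interface sketch of the WSD class hierarchy described above (not the actual code).
from abc import ABC, abstractmethod

class BaseWSD(ABC):
    def __init__(self, inventory):
        self.inventory = inventory  # list of synsets, each represented as a bag of words

    @abstractmethod
    def disambiguate_word(self, sentence, index):
        """Return a synset identifier for the word at position `index` in `sentence`."""

    def disambiguate(self, sentence):
        # Common routine: resolve every word of the sentence (a list of spans).
        return {span: self.disambiguate_word(sentence, i) for i, span in enumerate(sentence)}

class SparseWSD(BaseWSD):
    def disambiguate_word(self, sentence, index):
        ...  # cosine similarity over tf-idf-weighted sparse vectors

class DenseWSD(BaseWSD):
    def disambiguate_word(self, sentence, index):
        ...  # cosine similarity over averaged word embeddings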
Watasense supports two sources of word vectors: it can either read a word vector dataset in the binary Word2Vec format or use Word2Vec-Pyro4, a general-purpose word vector server. The use of a remote word vector server is recommended because it reduces the memory footprint of each Watasense process.
User Interface
FIGREF2 shows the Web interface of Watasense. It is composed of two primary activities. The first is the text input and the method selection ( FIGREF2 ). The second is the display of the disambiguation results with part of speech highlighting ( FIGREF7 ). Those words with resolved polysemy are underlined; the tooltips with the details are raised on hover.
Word Sense Disambiguation
We use two different unsupervised approaches for word sense disambiguation. The first, called `sparse model', uses a straightforward sparse vector space model, as widely used in Information Retrieval, to represent contexts and synsets. The second, called `dense model', represents synsets and contexts in a dense, low-dimensional space by averaging word embeddings.
In the vector space model approach, we follow the sparse context-based disambiguation method BIBREF12 , BIBREF13 . To estimate the sense of a word $w$ in a sentence $s$, we search for the synset $\hat{S}$ containing $w$ that maximizes the cosine similarity to the sentence vector: $\hat{S} = \arg\max_{S \ni w} \cos(\vec{S}, \vec{s})$,
where $S$ is the set of words forming the synset and $s$ is the set of words forming the sentence. On initialization, the synsets represented in the sense inventory are transformed into a tf-idf-weighted word-synset sparse matrix efficiently represented in memory using the compressed sparse row format. Given a sentence, a similar transformation is done to obtain the sparse vector representation of the sentence in the same space as the word-synset matrix. Then, for each word to disambiguate, we retrieve the synset containing this word that maximizes the cosine similarity between the sparse sentence vector and the sparse synset vector. Let $k$ be the maximal number of synsets containing a word and $m$ be the maximal size of a synset. Disambiguation of the whole sentence $s$ therefore requires $O(|s| \cdot k \cdot m)$ operations using the efficient sparse matrix representation.
In the synset embeddings model approach, we follow SenseGram BIBREF14 and apply it to the synsets induced from a graph of synonyms. We transform every synset into its dense vector representation by averaging the word embeddings corresponding to each constituent word: $\vec{S} = \frac{1}{|S|} \sum_{w \in S} \vec{w}$,
where $\vec{w}$ denotes the word embedding of the word $w$. We apply the same transformation to the sentence vectors. Then, given a word $w$ and a sentence $s$, we find the synset $\hat{S}$ containing $w$ that maximizes the cosine similarity to the sentence: $\hat{S} = \arg\max_{S \ni w} \cos(\vec{S}, \vec{s})$.
On initialization, we pre-compute the dense synset vectors by averaging the corresponding word embeddings. Given a sentence, we similarly compute the dense sentence vector by averaging the vectors of the words belonging to non-auxiliary parts of speech, i.e., nouns, adjectives, adverbs, verbs, etc. Then, given a word to disambiguate, we retrieve the synset that maximizes the cosine similarity between the dense sentence vector and the dense synset vector. Thus, given the number of dimensions $d$, disambiguation of the whole sentence $s$ requires $O(|s| \cdot k \cdot d)$ operations.
Evaluation
We conduct our experiments using the evaluation methodology of SemEval 2010 Task 14: Word Sense Induction & Disambiguation BIBREF5 . In the gold standard, each word is provided with a set of instances, i.e., the sentences containing the word. Each instance is manually annotated with the single sense identifier according to a pre-defined sense inventory. Each participating system estimates the sense labels for these ambiguous words, which can be viewed as a clustering of instances, according to sense labels. The system's clustering is compared to the gold-standard clustering for evaluation.
Quality Measure
The original SemEval 2010 Task 14 used the V-Measure external clustering measure BIBREF5 . However, this measure is maximized by clustering each sentence into its own distinct cluster, i.e., a `dummy' singleton baseline. This is achieved by the system deciding that every ambiguous word in every sentence corresponds to a different word sense. To cope with this issue, we follow a similar study BIBREF1 and instead use the adjusted Rand index (ARI) proposed by Hubert:85 as the evaluation measure.
In order to provide the overall value of ARI, we follow the addition approach used in BIBREF1 . Since the quality measure is computed for each lemma individually, the total value is a weighted sum, namely $\mathrm{ARI} = \sum_{w} |I_w| \cdot \mathrm{ARI}_w \big/ \sum_{w} |I_w|$,
where $w$ is the lemma, $I_w$ is the set of the instances for the lemma $w$, and $\mathrm{ARI}_w$ is the adjusted Rand index computed for the lemma $w$. Thus, the contribution of each lemma to the total score is proportional to the number of instances of this lemma.
Dataset
We evaluate the word sense disambiguation methods in Watasense against three baselines: an unsupervised approach for learning multi-prototype word embeddings called AdaGram BIBREF15 , same sense for all the instances per lemma (One), and one sense per instance (Singletons). The AdaGram model is trained on the combination of RuWac, Lib.Ru, and the Russian Wikipedia with the overall vocabulary size of 2 billion tokens BIBREF1 .
As the gold-standard dataset, we use the WSD training dataset for Russian created during RUSSE'2018: A Shared Task on Word Sense Induction and Disambiguation for the Russian Language BIBREF16 . The dataset has 31 words covered by INLINEFORM0 instances in the bts-rnc subset and 5 words covered by 439 instances in the wiki-wiki subset.
The following different sense inventories have been used during the evaluation:
Watlink, a word sense network constructed automatically. It uses the synsets induced in an unsupervised way by the Watset[CWnolog, MCL] method BIBREF2 and the semantic relations from dictionaries such as Wiktionary, referred to as Joint INLINEFORM0 Exp INLINEFORM1 SWN in Ustalov:17:dialogue. This is the only automatically built inventory we use in the evaluation.
RuThes, a large-scale lexical ontology for Russian created by a group of expert lexicographers BIBREF17 .
RuWordNet, a semi-automatic conversion of the RuThes lexical ontology into a WordNet-like structure BIBREF18 .
Since the Dense model requires word embeddings, we used the 500-dimensional word vectors from the Russian Distributional Thesaurus BIBREF19 . These vectors are obtained using the Skip-gram approach trained on the lib.rus.ec text corpus.
Results
We compare the evaluation results obtained for the Sparse and Dense approaches with three baselines: the AdaGram model (AdaGram), the same sense for all the instances per lemma (One) and one sense per instance (Singletons). The evaluation results are presented in Table TABREF25 . The columns bts-rnc and wiki-wiki represent the overall value of ARI according to Equation ( EQREF15 ). The column Avg. consists of the weighted average of the datasets w.r.t. the number of instances.
We observe that the SenseGram-based approach for word sense disambiguation yields substantially better results in every case (Table TABREF25 ). The primary reason for that is the implicit handling of similar words due to the averaging of dense word vectors for semantically related words. Thus, we recommend using the dense approach in further studies. Although the AdaGram approach trained on a large text corpus showed better results according to the weighted average, this result does not transfer to languages with less available corpus size.
Conclusion
In this paper, we presented Watasense, an open source unsupervised word sense disambiguation system that is parameterized only by a word sense inventory. It supports both sparse and dense sense representations. We were able to show that the dense approach substantially improves over the sparse approach on three different sense inventories for Russian. We recommend using the dense approach in further studies due to its smoothing capabilities that reduce sparseness. In further studies, we will look at the problem of phrase neighbors that influence the sentence vector representations.
Finally, we would like to emphasize the fact that Watasense has a simple API for integrating different algorithms for WSD. At the same time, it requires only a basic set of language processing tools to be available: a tokenizer, a part-of-speech tagger, a lemmatizer, and a sense inventory, which means that low-resourced languages can benefit from its usage.
Acknowledgements
We acknowledge the support of the Deutsche Forschungsgemeinschaft (DFG) under the project “Joining Ontologies and Semantics Induced from Text” (JOIN-T), the RFBR under the projects no. 16-37-00203 mol_a and no. 16-37-00354 mol_a, and the RFH under the project no. 16-04-12019. The research was supported by the Ministry of Education and Science of the Russian Federation Agreement no. 02.A03.21.0006. The calculations were carried out using the supercomputer “Uran” at the Krasovskii Institute of Mathematics and Mechanics. | cosine similarity |
dcb18516369c3cf9838e83168357aed6643ae1b8 | dcb18516369c3cf9838e83168357aed6643ae1b8_0 | Q: Which retrieval system was used for baselines?
Text: Introduction
Factoid Question Answering (QA) aims to extract answers, from an underlying knowledge source, to information seeking questions posed in natural language. Depending on the knowledge source available there are two main approaches for factoid QA. Structured sources, including Knowledge Bases (KBs) such as Freebase BIBREF1 , are easier to process automatically since the information is organized according to a fixed schema. In this case the question is parsed into a logical form in order to query against the KB. However, even the largest KBs are often incomplete BIBREF2 , BIBREF3 , and hence can only answer a limited subset of all possible factoid questions.
For this reason the focus is now shifting towards unstructured sources, such as Wikipedia articles, which hold a vast quantity of information in textual form and, in principle, can be used to answer a much larger collection of questions. Extracting the correct answer from unstructured text is, however, challenging, and typical QA pipelines consist of the following two components: (1) searching for the passages relevant to the given question, and (2) reading the retrieved text in order to select a span of text which best answers the question BIBREF4 , BIBREF5 .
Like most other language technologies, the current research focus for both these steps is firmly on machine learning based approaches for which performance improves with the amount of data available. Machine reading performance, in particular, has been significantly boosted in the last few years with the introduction of large-scale reading comprehension datasets such as CNN / DailyMail BIBREF6 and Squad BIBREF7 . State-of-the-art systems for these datasets BIBREF8 , BIBREF9 focus solely on step (2) above, in effect assuming the relevant passage of text is already known.
In this paper, we introduce two new datasets for QUestion Answering by Search And Reading – Quasar. The datasets each consist of factoid question-answer pairs and a corresponding large background corpus to facilitate research into the combined problem of retrieval and comprehension. Quasar-S consists of 37,362 cloze-style questions constructed from definitions of software entities available on the popular website Stack Overflow. The answer to each question is restricted to be another software entity, from an output vocabulary of 4874 entities. Quasar-T consists of 43,013 trivia questions collected from various internet sources by a trivia enthusiast. The answers to these questions are free-form spans of text, though most are noun phrases.
While production quality QA systems may have access to the entire world wide web as a knowledge source, for Quasar we restrict our search to specific background corpora. This is necessary to avoid uninteresting solutions which directly extract answers from the sources from which the questions were constructed. For Quasar-S we construct the knowledge source by collecting top 50 threads tagged with each entity in the dataset on the Stack Overflow website. For Quasar-T we use ClueWeb09 BIBREF0 , which contains about 1 billion web pages collected between January and February 2009. Figure 1 shows some examples.
Unlike existing reading comprehension tasks, the Quasar tasks go beyond the ability to only understand a given passage, and require the ability to answer questions given large corpora. Prior datasets (such as those used in BIBREF4 ) are constructed by first selecting a passage and then constructing questions about that passage. This design (intentionally) ignores some of the subproblems required to answer open-domain questions from corpora, namely searching for passages that may contain candidate answers, and aggregating information/resolving conflicts between candidates from many passages. The purpose of Quasar is to allow research into these subproblems, and in particular whether the search step can benefit from integration and joint training with downstream reading systems.
Additionally, Quasar-S has the interesting feature of being a closed-domain dataset about computer programming, and successful approaches to it must develop domain-expertise and a deep understanding of the background corpus. To our knowledge it is one of the largest closed-domain QA datasets available. Quasar-T, on the other hand, consists of open-domain questions based on trivia, which refers to “bits of information, often of little importance". Unlike previous open-domain systems which rely heavily on the redundancy of information on the web to correctly answer questions, we hypothesize that Quasar-T requires a deeper reading of documents to answer correctly.
We evaluate Quasar against human testers, as well as several baselines ranging from naïve heuristics to state-of-the-art machine readers. The best performing baselines achieve $33.6\%$ and $28.5\%$ on Quasar-S and Quasar-T, while human performance is $50\%$ and $60.6\%$ respectively. For the automatic systems, we see an interesting tension between searching and reading accuracies – retrieving more documents in the search phase leads to a higher coverage of answers, but makes the comprehension task more difficult. We also collect annotations on a subset of the development set questions to allow researchers to analyze the categories in which their system performs well or falls short. We plan to release these annotations along with the datasets, and our retrieved documents for each question.
Dataset Construction
Each dataset consists of a collection of records with one QA problem per record. For each record, we include some question text, a context document relevant to the question, a set of candidate solutions, and the correct solution. In this section, we describe how each of these fields was generated for each Quasar variant.
Question sets
The software question set was built from the definitional “excerpt” entry for each tag (entity) on StackOverflow. For example the excerpt for the “java“ tag is, “Java is a general-purpose object-oriented programming language designed to be used in conjunction with the Java Virtual Machine (JVM).” Not every excerpt includes the tag being defined (which we will call the “head tag”), so we prepend the head tag to the front of the string to guarantee relevant results later on in the pipeline. We then completed preprocessing of the software questions by downcasing and tokenizing the string using a custom tokenizer compatible with special characters in software terms (e.g. “.net”, “c++”). Each preprocessed excerpt was then converted to a series of cloze questions using a simple heuristic: first searching the string for mentions of other entities, then replacing each mention in turn with a placeholder string (Figure 2 ).
This heuristic is noisy, since the software domain often overloads existing English words (e.g. “can” may refer to a Controller Area Network bus; “swap” may refer to the temporary storage of inactive pages of memory on disk; “using” may refer to a namespacing keyword). To improve precision we scored each cloze based on the relative incidence of the term in an English corpus versus in our StackOverflow one, and discarded all clozes scoring below a threshold. This means our dataset does not include any cloze questions for terms which are common in English (such as “can” “swap” and “using”, but also “image” “service” and “packet”). A more sophisticated entity recognition system could make recall improvements here.
The trivia question set was built from a collection of just under 54,000 trivia questions collected by Reddit user 007craft and released in December 2015. The raw dataset was noisy, having been scraped from multiple sources with variable attention to detail in formatting, spelling, and accuracy. We filtered the raw questions to remove unparseable entries as well as any True/False or multiple choice questions, for a total of 52,000 free-response style questions remaining. The questions range in difficulty, from straightforward (“Who recorded the song `Rocket Man”' “Elton John”) to difficult (“What was Robin Williams paid for Disney's Aladdin in 1982” “Scale $485 day + Picasso Painting”) to debatable (“According to Earth Medicine what's the birth totem for march” “The Falcon”)
Context Retrieval
The context document for each record consists of a list of ranked and scored pseudodocuments relevant to the question.
Context documents for each query were generated in a two-phase fashion, first collecting a large pool of semirelevant text, then filling a temporary index with short or long pseudodocuments from the pool, and finally selecting a set of $N$ top-ranking pseudodocuments (100 short or 20 long) from the temporary index.
For Quasar-S, the pool of text for each question was composed of 50+ question-and-answer threads scraped from http://stackoverflow.com. StackOverflow keeps a running tally of the top-voted questions for each tag in their knowledge base; we used Scrapy to pull the top 50 question posts for each tag, along with any answer-post responses and metadata (tags, authorship, comments). From each thread we pulled all text not marked as code, and split it into sentences using the Stanford NLP sentence segmenter, truncating sentences to 2048 characters. Each sentence was marked with a thread identifier, a post identifier, and the tags for the thread. Long pseudodocuments were either the full post (in the case of question posts), or the full post and its head question (in the case of answer posts), comments included. Short pseudodocuments were individual sentences.
To build the context documents for Quasar-S, the pseudodocuments for the entire corpus were loaded into a disk-based lucene index, each annotated with its thread ID and the tags for the thread. This index was queried for each cloze using the following lucene syntax:
SHOULD(PHRASE(question text))
SHOULD(BOOLEAN(question text))
MUST(tags:$headtag)
where “question text” refers to the sequence of tokens in the cloze question, with the placeholder removed. The first SHOULD term indicates that an exact phrase match to the question text should score highly. The second SHOULD term indicates that any partial match to tokens in the question text should also score highly, roughly in proportion to the number of terms matched. The MUST term indicates that only pseudodocuments annotated with the head tag of the cloze should be considered.
The top $100N$ pseudodocuments were retrieved, and the top $N$ unique pseudodocuments were added to the context document along with their lucene retrieval score. Any questions showing zero results for this query were discarded.
For Quasar-T, the pool of text for each question was composed of 100 HTML documents retrieved from ClueWeb09. Each question-answer pair was converted to a #combine query in the Indri query language to comply with the ClueWeb09 batch query service, using simple regular expression substitution rules to remove (s/[.(){}<>:*`_]+//g) or replace (s/[,?']+/ /g) illegal characters. Any questions generating syntax errors after this step were discarded. We then extracted the plaintext from each HTML document using Jericho. For long pseudodocuments we used the full page text, truncated to 2048 characters. For short pseudodocuments we used individual sentences as extracted by the Stanford NLP sentence segmenter, truncated to 200 characters.
To build the context documents for the trivia set, the pseudodocuments from the pool were collected into an in-memory lucene index and queried using the question text only (the answer text was not included for this step). The structure of the query was identical to the query for Quasar-S, without the head tag filter:
SHOULD(PHRASE(question text))
SHOULD(BOOLEAN(question text))
The top $100N$ pseudodocuments were retrieved, and the top $N$ unique pseudodocuments were added to the context document along with their lucene retrieval score. Any questions showing zero results for this query were discarded.
Candidate solutions
The list of candidate solutions provided with each record is guaranteed to contain the correct answer to the question. Quasar-S used a closed vocabulary of 4874 tags as its candidate list. Since the questions in Quasar-T are in free-response format, we constructed a separate list of candidate solutions for each question. Since most of the correct answers were noun phrases, we took each sequence of NN* -tagged tokens in the context document, as identified by the Stanford NLP Maxent POS tagger, as the candidate list for each record. If this list did not include the correct answer, it was added to the list.
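A simple way to reproduce this candidate extraction step is sketched below, with NLTK's perceptron tagger standing in for the Stanford Maxent tagger used in the paper; the punkt and averaged_perceptron_tagger resources must be downloaded beforehand.

# Candidate extraction sketch: collect maximal runs of NN*-tagged tokens.
import nltk  # requires: nltk.download('punkt'); nltk.download('averaged_perceptron_tagger')

def noun_phrase_candidates(text):
    tagged = nltk.pos_tag(nltk.word_tokenize(text))
    candidates, current = [], []
    for token, tag in tagged:
        if tag.startswith("NN"):
            current.append(token)
        elif current:
            candidates.append(" ".join(current))
            current = []
    if current:
        candidates.append(" ".join(current))
    return candidates

print(noun_phrase_candidates("Elton John recorded the song Rocket Man in 1972."))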
Postprocessing
Once context documents had been built, we extracted the subset of questions where the answer string, excluded from the query for the two-phase search, was nonetheless present in the context document. This subset allows us to evaluate the performance of the reading system independently from the search system, while the full set allows us to evaluate the performance of Quasar as a whole. We also split the full set into training, validation and test sets. The final size of each data subset after all discards is listed in Table 1 .
Metrics
Evaluation is straightforward on Quasar-S since each answer comes from a fixed output vocabulary of entities, and we report the average accuracy of predictions as the evaluation metric. For Quasar-T, the answers may be free-form spans of text, and the same answer may be expressed in different terms, which makes evaluation difficult. Here we pick the two metrics from BIBREF7 , BIBREF19 . In preprocessing the answer we remove punctuation, white-space and definite and indefinite articles from the strings. Then, exact match measures whether the two strings, after preprocessing, are equal or not. For F1 match we first construct a bag of tokens for each string, followed by preprocessing of each token, and measure the F1 score of the overlap between the two bags of tokens. These metrics are far from perfect for Quasar-T; for example, our human testers were penalized for entering “0” as the answer instead of “zero”. However, a comparison between systems may still be meaningful.
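Both measures can be written compactly; the normalization below (lowercasing, dropping punctuation, whitespace, and articles) follows the description above but is a sketch rather than the exact evaluation script.

# Exact match and bag-of-tokens F1 between a predicted and a gold answer.
import re
from collections import Counter

def normalize(text):
    text = re.sub(r"[^\w\s]", " ", text.lower())  # drop punctuation
    return [t for t in text.split() if t not in {"a", "an", "the"}]

def exact_match(prediction, gold):
    return normalize(prediction) == normalize(gold)

def f1(prediction, gold):
    pred, ref = Counter(normalize(prediction)), Counter(normalize(gold))
    overlap = sum((pred & ref).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / sum(pred.values()), overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(exact_match("the Elton John", "Elton John"), round(f1("Sir Elton John", "Elton John"), 2))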
Human Evaluation
To put the difficulty of the introduced datasets into perspective, we evaluated human performance on answering the questions. For each dataset, we recruited one domain expert (a developer with several years of programming experience for Quasar-S, and an avid trivia enthusiast for Quasar-T) and $1-3$ non-experts. Each volunteer was presented with randomly selected questions from the development set and asked to answer them via an online app. The experts were evaluated in a “closed-book” setting, i.e. they did not have access to any external resources. The non-experts were evaluated in an “open-book” setting, where they had access to a search engine over the short pseudo-documents extracted for each dataset (as described in Section "Context Retrieval" ). We decided to use short pseudo-documents for this exercise to reduce the burden of reading on the volunteers, though we note that the long pseudo-documents have greater coverage of answers.
We also asked the volunteers to provide annotations to categorize the type of each question they were asked, and a label for whether the question was ambiguous. For Quasar-S the annotators were asked to mark the relation between the head entity (from whose definition the cloze was constructed) and the answer entity. For Quasar-T the annotators were asked to mark the genre of the question (e.g., Arts & Literature) and the entity type of the answer (e.g., Person). When multiple annotators marked the same question differently, we took the majority vote when possible and discarded ties. In total we collected 226 relation annotations for 136 questions in Quasar-S, out of which 27 were discarded due to conflicting ties, leaving a total of 109 annotated questions. For Quasar-T we collected annotations for a total of 144 questions, out of which 12 we marked as ambiguous. In the remaining 132, a total of 214 genres were annotated (a question could be annotated with multiple genres), while 10 questions had conflicting entity-type annotations which we discarded, leaving 122 total entity-type annotations. Figure 3 shows the distribution of these annotations.
Baseline Systems
We evaluate several baselines on Quasar, ranging from simple heuristics to deep neural networks. Some predict a single token / entity as the answer, while others predict a span of tokens.
MF-i (Maximum Frequency) counts the number of occurrences of each candidate answer in the retrieved context and returns the one with maximum frequency. MF-e is the same as MF-i except it excludes the candidates present in the query. WD (Word Distance) measures the sum of distances from a candidate to other non-stopword tokens in the passage which are also present in the query. For the cloze-style Quasar-S the distances are measured by first aligning the query placeholder to the candidate in the passage, and then measuring the offsets between other tokens in the query and their mentions in the passage. The maximum distance for any token is capped at a specified threshold, which is tuned on the validation set.
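A sketch of the two frequency heuristics is given below; the candidate list, the question, and the concatenated context are assumed to be available as plain strings.

# Maximum-frequency baselines: score each candidate answer by how often it occurs
# in the retrieved context; MF-e additionally drops candidates present in the question.
def maximum_frequency(candidates, context, question="", exclude_query=False):
    best, best_count = None, -1
    for candidate in candidates:
        if exclude_query and candidate.lower() in question.lower():
            continue
        count = context.lower().count(candidate.lower())
        if count > best_count:
            best, best_count = candidate, count
    return best

context = "Rocket Man was recorded by Elton John in 1972. Rocket Man, the Rocket Man single, became a hit for Elton John."
print(maximum_frequency(["Elton John", "Rocket Man", "1972"], context))  # MF-i -> Rocket Man
print(maximum_frequency(["Elton John", "Rocket Man", "1972"], context,
                        question="Who recorded the song Rocket Man?", exclude_query=True))  # MF-e -> Elton John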
For Quasar-T we also test the Sliding Window (SW) and Sliding Window + Distance (SW+D) baselines proposed in BIBREF13 . The scores were computed for the list of candidate solutions described in Section "Context Retrieval" .
For Quasar-S, since the answers come from a fixed vocabulary of entities, we test language model baselines which predict the most likely entity to appear in a given context. We train three n-gram baselines using the SRILM toolkit BIBREF21 for $n=3,4,5$ on the entire corpus of all Stack Overflow posts. The output predictions are restricted to the output vocabulary of entities.
We also train a bidirectional Recurrent Neural Network (RNN) language model (based on GRU units). This model encodes both the left and right context of an entity using forward and backward GRUs, and then concatenates the final states from both to predict the entity through a softmax layer. Training is performed on the entire corpus of Stack Overflow posts, with the loss computed only over mentions of entities in the output vocabulary. This approach benefits from looking at both sides of the cloze in a query to predict the entity, as compared to the single-sided n-gram baselines.
Reading comprehension models are trained to extract the answer from the given passage. We test two recent architectures on Quasar using publicly available code from the authors .
The GA Reader BIBREF8 is a multi-layer neural network which extracts a single token from the passage to answer a given query. At the time of writing it had state-of-the-art performance on several cloze-style datasets for QA. For Quasar-S we train and test GA on all instances for which the correct answer is found within the retrieved context. For Quasar-T we train and test GA on all instances where the answer is in the context and is a single token.
The BiDAF model BIBREF9 is also a multi-layer neural network which predicts a span of text from the passage as the answer to a given query. At the time of writing it had state-of-the-art performance among published models on the Squad dataset. For Quasar-T we train and test BiDAF on all instances where the answer is in the retrieved context.
Results
Several baselines rely on the retrieved context to extract the answer to a question. For these, we refer to the fraction of instances for which the correct answer is present in the context as Search Accuracy. The performance of the baseline among these instances is referred to as the Reading Accuracy, and the overall performance (which is a product of the two) is referred to as the Overall Accuracy. In Figure 4 we compare how these three vary as the number of context documents is varied. Naturally, the search accuracy increases as the context size increases, however at the same time reading performance decreases since the task of extracting the answer becomes harder for longer documents. Hence, simply retrieving more documents is not sufficient – finding the few most relevant ones will allow the reader to work best.
In Tables 2 and 3 we compare all baselines when the context size is tuned to maximize the overall accuracy on the validation set. For Quasar-S the best performing baseline is the BiRNN language model, which achieves $33.6\%$ accuracy. The GA model achieves $48.3\%$ accuracy on the set of instances for which the answer is in context, however, a search accuracy of only $65\%$ means its overall performance is lower. This can improve with improved retrieval. For Quasar-T, both the neural models significantly outperform the heuristic models, with BiDAF getting the highest F1 score of $28.5\%$ .
The best performing baselines, however, lag behind human performance by $16.4\%$ and $32.1\%$ for Quasar-S and Quasar-T respectively, indicating the strong potential for improvement. Interestingly, for human performance we observe that non-experts are able to match or beat the performance of experts when given access to the background corpus for searching the answers. We also emphasize that the human performance is limited by either the knowledge of the experts, or the usefulness of the search engine for non-experts; it should not be viewed as an upper bound for automatic systems which can potentially use the entire background corpus. Further analysis of the human and baseline performance in each category of annotated questions is provided in Appendix "Performance Analysis" .
Conclusion
We have presented the Quasar datasets for promoting research into two related tasks for QA – searching a large corpus of text for relevant passages, and reading the passages to extract answers. We have also described baseline systems for the two tasks which perform reasonably but lag behind human performance. While the searching performance improves as we retrieve more context, the reading performance typically goes down. Hence, future work, in addition to improving these components individually, should also focus on joint approaches to optimizing the two on end-task performance. The datasets, including the documents retrieved by our system and the human annotations, are available at https://github.com/bdhingra/quasar.
Acknowledgments
This work was funded by NSF under grants CCF-1414030 and IIS-1250956 and by grants from Google.
Quasar-S Relation Definitions
Table 4 includes the definition of all the annotated relations for Quasar-S.
Performance Analysis
Figure 5 shows a comparison of the human performance with the best performing baseline for each category of annotated questions. We see consistent differences between the two, except in the following cases. For Quasar-S, Bi-RNN performs comparably to humans for the developed-with and runs-on categories, but much worse in the has-component and is-a categories. For Quasar-T, BiDAF performs comparably to humans in the sports category, but much worse in history & religion and language, or when the answer type is a number or date/time. | The dataset comes with a ranked set of relevant documents. Hence the baselines do not use a retrieval system. |
f46a907360d75ad566620e7f6bf7746497b6e4a9 | f46a907360d75ad566620e7f6bf7746497b6e4a9_0 | Q: What word embeddings were used?
Text: Introduction
Named Entity Recognition (NER) is an information extraction subtask that is responsible for detecting entity elements in raw text and determining the category to which each element belongs; these categories include the names of persons, organizations, locations, expressions of times, quantities, monetary values and percentages.
The problem of NER is described as follows:
Input: A sentence S consisting of a sequence of $n$ words: $S= w_1,w_2,w_3,…,w_n$ ($w_i$: the $i^{th}$ word)
Output: The sequence of $n$ labels $y_1,y_2,y_3,…,y_n$. Each $y_i$ label represents the category which $w_i$ belongs to.
For example, given a sentence:
Input: Giám đốc điều hành Tim Cook của Apple vừa giới thiệu 2 điện thoại iPhone, đồng hồ thông minh mới, lớn hơn ở sự kiện Flint Center, Cupertino.
(Apple CEO Tim Cook introduces 2 new, larger iPhones, Smart Watch at Cupertino Flint Center event)
The algorithm will output:
Output: ⟨O⟩Giám đốc điều hành⟨O⟩ ⟨PER⟩Tim Cook⟨PER⟩ ⟨O⟩của⟨O⟩ ⟨ORG⟩Apple⟨ORG⟩ ⟨O⟩vừa giới thiệu 2 điện thoại iPhone, đồng hồ thông minh mới, lớn hơn ở sự kiện⟨O⟩ ⟨ORG⟩Flint Center⟨ORG⟩, ⟨LOC⟩Cupertino⟨LOC⟩.
Here LOC, PER and ORG denote the name of a location, person and organization, respectively. Note that O means Other (not a named entity). We do not show the O label in the following examples because we only care about named entities.
In this paper, we analyze the common errors of previous state-of-the-art Deep Neural Network (DNN) techniques on the VLSP corpus. This may help later researchers understand the common errors made by these state-of-the-art models so that they can build on this analysis to improve them.
Section 2 discusses the work related to this paper. We present the method for evaluating and analyzing the types of errors in Section 3. The data used for testing and error analysis is introduced in Section 4, where we also describe the deep neural network methods and pre-trained word embeddings used in the experiments. Section 5 details the errors and evaluations. Finally, we present our suggestions for addressing the above errors.
Related work
Previously available NER systems did not use DNNs, for example, the MITRE Identification Scrubber Toolkit (MIST) BIBREF0, Stanford NER BIBREF1, BANNER BIBREF2 and NERsuite BIBREF3. NER systems for Vietnamese language processing used traditional machine learning methods such as Maximum Entropy Markov Model (MEMM), Support Vector Machine (SVM) and Conditional Random Field (CRF). In particular, most of the toolkits for the NER task attempted to use MEMM BIBREF4 and CRF BIBREF5 to solve this problem.
Nowadays, because of the increase in data, DNN methods are used widely. They have achieved great results on NER tasks; for example, Guillaume Lample et al. with BLSTM-CRF in BIBREF6 report a 90.94 F1 score, Chiu et al. with BLSTM-CNN in BIBREF7 got a 91.62 F1 score, Xuezhe Ma and Eduard Hovy with BLSTM-CNN-CRF in BIBREF8 achieved an F1 score of 91.21, and Thai-Hoang Pham and Phuong Le-Hong with BLSTM-CNN-CRF in BIBREF9 got an 88.59% F1 score. These DNN models are also the state-of-the-art models.
Error-analysis method
The results of our analysis experiments are reported in precision and recall over all labels (name of person, location, organization and miscellaneous). The process of analyzing errors has 2 steps:
Step 1: We use two state-of-the-art models, BLSTM-CNN-CRF and BLSTM-CRF, to train and test on VLSP’s NER corpus. In our experiments, we use word embeddings as features in the two systems.
Step 2: Based on the best results (BLSTM-CNN-CRF), error analysis is performed over five types of errors (No extraction, No annotation, Wrong range, Wrong tag, Wrong range and tag), in a way similar to BIBREF10, but we analyze both gold labels and predicted labels (more details in Figures 1 and 2).
A token (an entity name may contain more than one word) is extracted as a correct entity by the model if both of the following are correct:
The length of it (range) is correct: The word beginning and the end is the same as gold data (annotator).
The label (tag) of it is correct: The label is the same as in gold data.
If it does not meet the two requirements above, it is a wrong entity (an error). Therefore, we divide the errors into five different types, which are described in detail as follows:
No extraction: The error where the model did not extract tokens as a named entity (NE) though the tokens were annotated as a NE.
LSTM-CNN-CRF: Việt_Nam
Annotator: ⟨LOC⟩ Việt_Nam ⟨LOC⟩
No annotation: The error where the model extracted tokens as an NE though the tokens were not annotated as a NE.
LSTM-CNN-CRF: ⟨PER⟩ Châu Âu ⟨PER⟩
Annotator: Châu Âu
Wrong range: The error where the model extracted tokens as an NE and only the range was wrong. (The extracted tokens were partially annotated or they were part of the annotated tokens.)
LSTM-CNN-CRF: ⟨PER⟩ Ca_sĩ Nguyễn Văn A ⟨PER⟩
Annotator: Ca_sĩ ⟨PER⟩ Nguyễn Văn A ⟨PER⟩
Wrong tag: The error where the model extracted tokens as an NE and only the tag type was wrong.
LSTM-CNN-CRF: Khám phá ⟨PER⟩ Yangsuri ⟨PER⟩
Annotator: Khám phá ⟨LOC⟩ Yangsuri ⟨LOC⟩
Wrong range and tag: The error where the model extracted tokens as an NE and both the range and the tag type were wrong.
LSTM-CNN-CRF: ⟨LOC⟩ gian_hàng Apple ⟨LOC⟩
Annotator: gian_hàng ⟨ORG⟩ Apple ⟨ORG⟩
We compare the predicted NEs to the gold NEs (Fig. 1): if they have the same range, the predicted NE is either correct or a Wrong tag. If it has a different range from the gold NE, we determine which type of error it is. If it has no overlap at all, it is a No extraction. If it has an overlap and the tag is the same as the gold NE, it is a Wrong range. Finally, it is a Wrong range and tag if it has an overlap but the tag is different. The steps in Fig. 2 are the same as in Fig. 1; the only difference is that we compare the gold NE to the predicted NE, and the No extraction type becomes No annotation.
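The decision procedure of Figures 1 and 2 can be written directly as overlap and tag checks; representing entities as (start, end, tag) triples over token positions is an assumption made for this sketch, not the authors' code.

# Classify a gold entity against the model output according to the five error types.
def classify(gold, predicted_entities):
    g_start, g_end, g_tag = gold
    for p_start, p_end, p_tag in predicted_entities:
        same_range = (p_start, p_end) == (g_start, g_end)
        overlap = p_start < g_end and g_start < p_end
        if same_range and p_tag == g_tag:
            return "Correct"
        if same_range:
            return "Wrong tag"
        if overlap and p_tag == g_tag:
            return "Wrong range"
        if overlap:
            return "Wrong range and tag"
    return "No extraction"

gold_ne = (3, 6, "PER")                  # e.g. "Tran Quang Ha"
predicted = [(2, 6, "PER")]              # e.g. "vo_si Tran Quang Ha"
print(classify(gold_ne, predicted))      # -> Wrong range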
Data and model ::: Data sets
To conduct the error analysis of the model, we used the corpus provided by VLSP 2016 - Named Entity Recognition. The dataset contains four different types of labels: Location (LOC), Person (PER), Organization (ORG) and Miscellaneous - the name of an entity that does not belong to the three types above (Table TABREF15). Although the corpus also contains POS and chunk information, we do not use them as features in our model.
There are two folders with 267 text files of training data and 45 text files of test data, each with its own format. We take the first 21 text files, the last 22 text files, 22 sentences of the 22nd text file and 55 sentences of the 245th text file as development data. The remaining files become the training data. The test set is the same as the one VLSP provides. Finally, we have only three text files based on the CoNLL 2003 format: train, dev and test.
Data and model ::: Pre-trained word Embeddings
We use the word embeddings for Vietnamese that were created by Kyubyong Park and by Edouard Grave et al.:
Kyubyong Park: In his project, he uses two methods, fastText and word2vec, to generate word embeddings from Wikipedia database backup dumps. His word embeddings are 100-dimensional vectors, and the vocabulary contains about 10k words.
Edouard Grave et al. BIBREF11: They use the fastText tool to generate word embeddings from Wikipedia. The format is the same as Kyubyong's, but their embeddings are 300-dimensional vectors and the vocabulary contains about 200k words.
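Assuming both sets of vectors are available in the standard word2vec text format (.vec), they can be loaded, for instance, with Gensim's KeyedVectors; the file names below are placeholders.

# Loading the pre-trained Vietnamese vectors with Gensim; file names are placeholders.
from gensim.models import KeyedVectors

park_vectors = KeyedVectors.load_word2vec_format("vi_park.vec", binary=False)    # 100 dimensions
grave_vectors = KeyedVectors.load_word2vec_format("wiki.vi.vec", binary=False)   # 300 dimensions

print(park_vectors.vector_size, grave_vectors.vector_size)
word = "việt_nam"
if word in park_vectors:
    print(park_vectors[word][:5])  # first five components of the embedding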
Data and model ::: Model
Among the state-of-the-art methods for NER, BLSTM-CNN-CRF is the end-to-end deep neural network model that achieves the best F-score BIBREF9. Therefore, we decided to conduct the experiments on this model and analyze its errors.
We run experiments with the Ma and Hovy (2016) model BIBREF8, using the source code provided by Motoki Sato, and analyze the errors from its output. Before settling on this result, we ran some other methods, but this one, with the Vietnamese pre-trained word embeddings provided by Kyubyong Park, obtains the best result. Other results are shown in Table 2.
Experiment and Results
Table 2 shows our experiments on the two models with and without pre-trained word embeddings – KP denotes Kyubyong Park’s pre-trained word embeddings and EG denotes Edouard Grave’s pre-trained word embeddings.
We compare the outputs of the BLSTM-CNN-CRF model (predicted) to the annotated data (gold) and analyze the errors. Table 3 shows the performance of the BLSTM-CNN-CRF model. In our experiments, we use three evaluation measures (precision, recall, and F1 score) to assess our experimental results; they are described in Table 3: "correctNE" is the number of entities the model labeled correctly, "goldNE" is the number of entities annotated in the gold data, and "foundNE" is the number of entities the model predicted (no matter whether they are correct or not).
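Given the three counts defined above, the scores reduce to a few lines; the numbers in the example call are illustrative, not taken from Table 3.

# Precision, recall, and F1 from the counts described above.
def precision_recall_f1(correct_ne, found_ne, gold_ne):
    precision = correct_ne / found_ne if found_ne else 0.0
    recall = correct_ne / gold_ne if gold_ne else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

print(precision_recall_f1(correct_ne=1200, found_ne=1400, gold_ne=1600))  # illustrative counts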
In Table 3 above, we can see that the recall on the ORG label is the lowest. The reason is that almost all ORG labels in the test file are brand names that appear neither in the training data nor in the pre-trained word embeddings. On the other hand, the characters inside these brand names also appear inside person names in the training data. The context from both sides of the sentence (future and past features) also makes the model "think" of the named entity as something it is not.
Table 4 shows that the biggest number of errors is No extraction. The errors were counted using the logical sum (OR) of the gold labels and the predicted labels (predicted by the model). The second most frequent error was Wrong tag, which means the model extracted an NE but with the wrong tag.
Experiment and Results ::: Error analysis on gold data
First of all, we compare the predicted NEs to the gold NEs (Fig. 1). Table 4 shows the summary of errors by type based on the gold labels: "correct" is the number of gold tags that the model predicted correctly, "error" is the number of gold tags that the model predicted incorrectly, and "total" is their sum. The next four columns show the number of each error type for each label.
Table 5 shows that Person, Location and Organization are the main reasons why the No extraction and Wrong tag counts are high.
After analyzing based on the gold NEs, we find that the reasons are the following:
Almost all the wrongly predicted NEs do not appear in the training data or in the pre-trained embeddings. The vectors of these NEs are initialized randomly; therefore, these vectors are poor, i.e., they carry no semantic information.
The "weird" ORG NE appears in the sentence together with other words that form a PER context, so this "weird" ORG NE is labeled as PER.
For example:
gold data: VĐV được xem là đầu_tiên ký hợp_đồng quảng_cáo là võ_sĩ ⟨PER⟩ Trần Quang Hạ ⟨PER⟩ sau khi đoạt HCV taekwondo Asiad ⟨LOC⟩ Hiroshima ⟨LOC⟩.
(The athlete considered the first to sign an advertising contract is the boxer Tran Quang Ha, after winning the taekwondo gold medal at the Hiroshima Asiad.)
predicted data: …là võ_sĩ ⟨PER⟩Trần Quang Hạ⟨PER⟩ sau khi đoạt HCV taekwondo Asiad ⟨PER⟩Hiroshima⟨PER⟩.
Some mistakes of the model come from the training set; for example, an anonymized person named "P." appears many times in the training set, so when the model meets "P." in the context of "P. 3 Quận 9" (Ward 3, District 9) – where "P." stands for "Phường" (Ward) – the model predicts "P." as a PER.
Training data: nếu ⟨PER⟩P.⟨PER⟩ có ở đây – (If P. were here) Predicted data: ⟨PER⟩P. 3⟨PER⟩, Gò_vấp – (Ward 3, Go_vap District)
Experiment and Results ::: Analysis on predicted data
Table 6 shows the summary of errors by type based on the predicted data. After analyzing the errors on the predicted and gold data, we noticed that the difference between these errors lies mainly in the No annotation and No extraction types. Therefore, we only mention the main reasons for No annotation:
Most of the wrong labels that the model assigns are brand names (e.g., Charriol, Dream, Jupiter, ...), abbreviated words (XKLD – xuất khẩu lao động (labour export)), movie names, etc. All of these words appear neither in the training data nor in the word embeddings. The likely reasons are the following:
The vectors of these words are random so the semantic aspect is poor.
The hidden states of these words also rely on past feature (forward pass) and future feature (backward pass) of the sentence. Therefore, they are assigned wrongly because of their context.
These words are primarily capitalized or all capital letters, so they are assigned as a name entity. This error is caused by the CNN layer extract characters information of the word.
Table 7 shows the detail of errors on predicted data where we will see number kind of errors on each label.
Experiment and Results ::: Errors of annotators
After examining the training and test data, we realized that the data has many problems that need to be fixed in future experiments. The annotators are not consistent between the training data and the test data; the details are as follows:
Organizations are labeled in the training data but not in the test data:
Training data: ⟨ORG⟩ Sở Y_tế ⟨ORG⟩ (Department of Health)
Test data: Sở Y_tế (Department of Health)
Explanation: "Sở Y_tế" in the training and test data is the same organization entity; however, the one in the test data is not labeled.
Entities with the same meaning are labeled differently between the training data and the test data:
Training data: ⟨MISC⟩ người Việt ⟨MISC⟩ (Vietnamese people)
Test data: dân ⟨LOC⟩ Việt ⟨LOC⟩ (Vietnamese people)
Explanation: Both "người Việt" in the training data and "dân Việt" in the test data have the same meaning, but they are labeled differently.
The ranges of entities differ between the training data and the test data:
Training data: ⟨LOC⟩ làng Atâu ⟨LOC⟩ (Atâu village)
Test data: làng ⟨LOC⟩ Hàn_Quốc ⟨LOC⟩ (Korea village)
Explanation: The two villages differ only in name, but they are labeled with different ranges.
Capitalization rules are not applied consistently when a token is considered part of an entity:
Training data: ⟨ORG⟩ Công_ty Inmasco ⟨ORG⟩ (Inmasco Company)
Training data: công_ty con (subsidiaries)
Test data: công_ty ⟨ORG⟩ Yeon Young Entertainment ⟨ORG⟩ (Yeon Young Entertainment company)
Explanation: When it refers to a company with a specific name, it should be labeled ⟨ORG⟩ Công_ty Yeon Young Entertainment ⟨ORG⟩ with a capital "C".
Conclusion
In this paper, we have presented a thorough study of the distinctive error distributions produced by Bi-LSTM-CNN-CRF for the Vietnamese language. This should be helpful for researchers who want to create better NER models.
Based on the analysis results, we suggest some possible directions for improving the model and the data for data-driven Vietnamese NER in the future:
The word at the beginning of a sentence is capitalized, so if a person name appears in this position, the model tends to ignore it (No extraction). To address this issue, we can use the POS feature together with the BIO format (Inside, Outside, Beginning) BIBREF6 at the top (CRF) layer.
Unifying the labeling of the annotators between the train, dev and test sets would improve both the data quality and the classifier.
It would also help to have pre-trained word embeddings that cover the data, and the word segmentation algorithm needs to be more accurate. | Kyubyong Park, Edouard Grave et al BIBREF11 |
79d999bdf8a343ce5b2739db3833661a1deab742 | 79d999bdf8a343ce5b2739db3833661a1deab742_0 | Q: What type of errors were produced by the BLSTM-CNN-CRF system?
Text: Introduction
Named Entity Recognition (NER) is an information extraction subtask responsible for detecting entity elements in raw text and determining the category to which each element belongs; these categories include the names of persons, organizations and locations, and expressions of times, quantities, monetary values and percentages.
The problem of NER is described as follows:
Input: a sentence S consisting of a sequence of $n$ words: $S= w_1,w_2,w_3,…,w_n$ ($w_i$: the $i^{th}$ word)
Output: a sequence of $n$ labels $y_1,y_2,y_3,…,y_n$, where each label $y_i$ represents the category to which $w_i$ belongs.
For example, given a sentence:
Input: Giám đốc điều hành Tim Cook của Apple vừa giới thiệu 2 điện thoại iPhone, đồng hồ thông minh mới, lớn hơn ở sự kiện Flint Center, Cupertino.
(Apple CEO Tim Cook introduces 2 new, larger iPhones, Smart Watch at Cupertino Flint Center event)
The algorithm will output:
Output: ⟨O⟩Giám đốc điều hành⟨O⟩ ⟨PER⟩Tim Cook⟨PER⟩ ⟨O⟩của⟨O⟩ ⟨ORG⟩Apple⟨ORG⟩ ⟨O⟩vừa giới thiệu 2 điện thoại iPhone, đồng hồ thông minh mới, lớn hơn ở sự kiện⟨O⟩ ⟨ORG⟩Flint Center⟨ORG⟩, ⟨LOC⟩Cupertino⟨LOC⟩.
Here LOC, PER and ORG denote the name of a location, a person and an organization, respectively. Note that O means Other (not a named entity). We do not show the O label in the following examples because we only care about names of entities.
In this paper, we analyze common errors of previous state-of-the-art Deep Neural Network (DNN) techniques on the VLSP corpus. This gives later researchers an overview of the common errors made by these state-of-the-art models, which they can rely on to improve their own models.
Section 2 discusses related work. Section 3 presents our method for evaluating and analyzing the types of errors. Section 4 introduces the data used for testing and error analysis, together with the deep neural network methods and pre-trained word embeddings used in our experiments. Section 5 details the errors and evaluations. We end with our suggestions for addressing the above errors.
Related work
Previous publicly available NER systems did not use DNNs, for example the MITRE Identification Scrubber Toolkit (MIST) BIBREF0, Stanford NER BIBREF1, BANNER BIBREF2 and NERsuite BIBREF3. NER systems for Vietnamese language processing used traditional machine learning methods such as the Maximum Entropy Markov Model (MEMM), Support Vector Machines (SVM) and Conditional Random Fields (CRF). In particular, most of the toolkits for the NER task attempted to use MEMM BIBREF4 and CRF BIBREF5 to solve this problem.
Nowadays, because of the increase in available data, DNN methods are widely used. They have achieved great results on NER tasks: for example, Guillaume Lample et al. with BLSTM-CRF in BIBREF6 report a 90.94 F1 score, Chiu et al. with BLSTM-CNN in BIBREF7 obtain a 91.62 F1 score, Xuezhe Ma and Eduard Hovy with BLSTM-CNN-CRF in BIBREF8 achieve an F1 score of 91.21, and Thai-Hoang Pham and Phuong Le-Hong with BLSTM-CNN-CRF in BIBREF9 obtain an 88.59% F1 score. These DNN models are also the state-of-the-art models.
Error-analysis method
The results of our analysis experiments are reported as precision and recall over all labels (names of persons, locations, organizations and miscellaneous). The error-analysis process has two steps:
Step 1: We use two state-of-the-art models, BLSTM-CNN-CRF and BLSTM-CRF, to train and test on VLSP's NER corpus. In our experiments, we use word embeddings as features in the two systems.
Step 2: Based on the best results (BLSTM-CNN-CRF), error analysis is performed over five types of errors (No extraction, No annotation, Wrong range, Wrong tag, Wrong range and tag), in a way similar to BIBREF10, but we analyze both the gold labels and the predicted labels (more details in Figures 1 and 2).
A token (an entity name may contain more than one word) is extracted as a correct entity by the model if both of the following are correct:
Its length (range) is correct: the beginning and ending words are the same as in the gold data (annotator).
Its label (tag) is correct: the label is the same as in the gold data.
If it does not meet the two requirements above, it is a wrong entity (an error). We therefore divide the errors into five different types, described in detail as follows:
No extraction: The error where the model did not extract tokens as a name entity (NE) though the tokens were annotated as a NE.
LSTM-CNN-CRF: Việt_Nam
Annotator: ⟨LOC⟩ Việt_Nam ⟨LOC⟩
No annotation: The error where the model extracted tokens as an NE though the tokens were not annotated as an NE.
LSTM-CNN-CRF: ⟨PER⟩ Châu Âu ⟨PER⟩
Annotator: Châu Âu
Wrong range: The error where the model extracted tokens as an NE and only the range was wrong (the extracted tokens were partially annotated, or they were part of the annotated tokens).
LSTM-CNN-CRF: ⟨PER⟩ Ca_sĩ Nguyễn Văn A ⟨PER⟩
Annotator:
Ca_sĩ ⟨PER⟩ Nguyễn Văn A ⟨PER⟩
Wrong tag: The error where the model extracted tokens as an NE and only the tag type was wrong.
LSTM-CNN-CRF: Khám phá ⟨PER⟩ Yangsuri ⟨PER⟩
Annotator:
Khám phá ⟨LOC⟩ Yangsuri ⟨LOC⟩
Wrong range and tag: The error where the model extracted tokens as an NE and both the range and the tag type were wrong.
LSTM-CNN-CRF: ⟨LOC⟩ gian_hàng Apple ⟨LOC⟩
Annotator:
gian_hàng ⟨ORG⟩ Apple ⟨ORG⟩
We compare the predicted NEs to the gold NEs (Fig. 1). If they have the same range, the predicted NE is either correct or a Wrong tag. If it has a different range from the gold NE, we determine what type of error it is: if there is no overlap, it is a No extraction; if there is an overlap and the tag is the same as the gold NE, it is a Wrong range; finally, if there is an overlap but the tag is different, it is a Wrong range and tag. The steps in Fig. 2 are the same as in Fig. 1, the only difference being that we compare the gold NE to the predicted NE, and the No extraction type becomes No annotation.
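This decision procedure can be written down directly. The sketch below is our own illustrative implementation, not the authors' code; entities are assumed to be (start, end, tag) spans, and the function classifies one gold span against the predicted spans following the Fig. 1 logic (swapping the roles of gold and predicted gives the Fig. 2 variant, with No extraction becoming No annotation).

```python
def overlaps(a, b):
    """True if two (start, end, tag) spans share at least one token."""
    return a[0] < b[1] and b[0] < a[1]

def classify_gold_span(gold, predicted):
    """Classify one gold entity against the predicted entities (Fig. 1 logic)."""
    start, end, tag = gold
    # Same range: either correct or only the tag differs.
    for p in predicted:
        if (p[0], p[1]) == (start, end):
            return "correct" if p[2] == tag else "wrong tag"
    # Different range: check for any overlap.
    overlapping = [p for p in predicted if overlaps(gold, p)]
    if not overlapping:
        return "no extraction"
    if any(p[2] == tag for p in overlapping):
        return "wrong range"
    return "wrong range and tag"

# Toy example: a gold PER span with a wrong predicted boundary, and a missed LOC.
gold = [(0, 2, "PER"), (5, 6, "LOC")]
pred = [(0, 3, "PER")]
print([classify_gold_span(g, pred) for g in gold])
# -> ['wrong range', 'no extraction']
```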
Data and model ::: Data sets
To conduct the error analysis of the model, we used the corpus provided by VLSP 2016 - Named Entity Recognition. The dataset contains four different types of labels: Location (LOC), Person (PER), Organization (ORG) and Miscellaneous – names of entities that do not belong to the three types above (Table TABREF15). Although the corpus also contains POS and chunk information, we do not use them as features in our model.
There are two folders, with 267 text files of training data and 45 text files of test data, each in its own format. We take the first 21 text files, the last 22 text files, 22 sentences of the 22nd text file and 55 sentences of the 245th text file as development data. The remaining files become the training data. The test file is the same as the one VLSP provided. Finally, we have three text files based on the CoNLL 2003 format: train, dev and test.
Data and model ::: Pre-trained word Embeddings
We use the Vietnamese word embeddings created by Kyubyong Park and by Edouard Grave et al.:
Kyubyong Park: in his project, he uses two methods, fastText and word2vec, to generate word embeddings from Wikipedia database backup dumps. His word embeddings are 100-dimensional vectors, and the vocabulary contains about 10k words.
Edouard Grave et al. BIBREF11: they use the fastText tool to generate word embeddings from Wikipedia. The format is the same as Kyubyong's, but their embeddings are 300-dimensional vectors and the vocabulary contains about 200k words.
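As an illustration of how such embeddings are typically plugged into a model, the sketch below builds an embedding matrix from a word2vec/fastText-style text file (one word per line followed by its vector); out-of-vocabulary words fall back to random vectors, which is exactly the situation blamed for several errors later in the paper. File names, vocabulary and dimensions here are placeholders, not the actual resources.

```python
import numpy as np

def load_vectors(path, dim):
    """Read a text-format embedding file: `word v1 v2 ... v_dim` per line."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            if len(parts) != dim + 1:
                continue  # skip the header or malformed lines
            vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors

def build_embedding_matrix(vocab, vectors, dim, seed=0):
    """Row i holds the vector of vocab[i]; unknown words keep random vectors."""
    rng = np.random.default_rng(seed)
    matrix = rng.uniform(-0.25, 0.25, size=(len(vocab), dim)).astype(np.float32)
    for i, word in enumerate(vocab):
        if word in vectors:
            matrix[i] = vectors[word]
    return matrix

# Placeholder path and vocabulary, for illustration only.
# vectors = load_vectors("vi_fasttext_300d.vec", dim=300)
# matrix = build_embedding_matrix(["tôi", "Việt_Nam", "<unk>"], vectors, dim=300)
```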
Data and model ::: Model
Based on the state-of-the-art methods for NER, BLSTM-CNN-CRF is the end-to-end deep neural network model that achieves the best F-score BIBREF9. Therefore, we decided to conduct the experiments on this model and analyze its errors.
We run the experiments with the Ma and Hovy (2016) model BIBREF8, using the source code provided by Motoki Sato, and analyze the errors from its output. Before deciding to analyze this result, we ran some other methods, but this model with the Vietnamese pre-trained word embeddings provided by Kyubyong Park obtains the best result. The other results are shown in Table 2.
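For readers who want a concrete picture of the architecture being analyzed, the sketch below outlines a BLSTM-CNN tagger in PyTorch producing per-token emission scores, following the general Ma and Hovy (2016) design (character CNN + word embeddings, then a BiLSTM). It is our simplified reconstruction, not the Motoki Sato implementation used in the experiments; hyperparameters are placeholders, and a CRF layer (e.g. torchcrf.CRF from the pytorch-crf package) would normally be applied on top of the emissions.

```python
import torch
import torch.nn as nn

class BLSTMCNNTagger(nn.Module):
    """Char-CNN + word embedding -> BiLSTM -> per-token emission scores."""
    def __init__(self, n_words, n_chars, n_tags,
                 word_dim=100, char_dim=30, char_filters=30, hidden=200):
        super().__init__()
        self.word_emb = nn.Embedding(n_words, word_dim)
        self.char_emb = nn.Embedding(n_chars, char_dim)
        self.char_cnn = nn.Conv1d(char_dim, char_filters, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(word_dim + char_filters, hidden,
                            batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_tags)

    def forward(self, words, chars):
        # words: (batch, seq_len); chars: (batch, seq_len, word_len)
        b, s, w = chars.shape
        c = self.char_emb(chars.view(b * s, w)).transpose(1, 2)  # (b*s, char_dim, w)
        c = torch.relu(self.char_cnn(c)).max(dim=2).values       # (b*s, char_filters)
        c = c.view(b, s, -1)
        x = torch.cat([self.word_emb(words), c], dim=-1)
        h, _ = self.lstm(x)
        return self.out(h)                                       # emission scores

# Shape check with random inputs (toy sizes).
model = BLSTMCNNTagger(n_words=100, n_chars=50, n_tags=9)
emissions = model(torch.randint(0, 100, (2, 7)), torch.randint(0, 50, (2, 7, 12)))
print(emissions.shape)  # torch.Size([2, 7, 9])
```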
Experiment and Results
Table 2 shows our experiments on the two models with and without different pre-trained word embeddings – KP means Kyubyong Park's pre-trained word embeddings and EG means Edouard Grave's pre-trained word embeddings.
We compare the outputs of the BLSTM-CNN-CRF model (predicted) to the annotated data (gold) and analyze the errors. Table 3 shows the performance of the BLSTM-CNN-CRF model. In our experiments, we use three evaluation measures (precision, recall, and F1 score) to assess the experimental results; they are computed from the quantities described in Table 3: "correctNE", the number of entity labels that the model predicts correctly; "goldNE", the number of entity labels annotated by the annotators in the gold data; and "foundNE", the number of entity labels the model outputs (whether correct or not).
In Table 3, we can see that the recall on the ORG label is the lowest. The reason is that almost all ORG entities in the test file are brand names that appear neither in the training data nor in the pre-trained word embeddings. Moreover, the characters inside these brand names also occur inside person names in the training data, and the context from both sides of the sentence (future and past features) also leads the model to treat the named entity differently from what it should be.
Table 4 shows that the most frequent error type is No extraction. The errors were counted by taking the logical sum (OR) of the gold labels and the labels predicted by the model. The second most frequent error type is Wrong tag, meaning the model extracts a named entity but assigns the wrong tag.
Experiment and Results ::: Error analysis on gold data
First of all, we compare the predicted NEs to the gold NEs (Fig. 1). Table 4 summarizes the errors by type based on the gold labels: "correct" is the number of gold tags that the model predicted correctly, "error" is the number of gold tags that the model predicted incorrectly, and "total" is their sum. The next four columns show the number of errors of each type for each label.
Table 5 shows that the Person, Location and Organization labels are the main reason why No extraction and Wrong tag are high.
After analyzing the errors based on the gold NEs, we identified the following reasons:
Almost all of the wrongly handled NEs do not appear in the training data or in the pre-trained embeddings. Their vectors are therefore initialized randomly, so these vectors are poor, i.e. they carry no semantic information.
A "weird" ORG NE that appears in a sentence together with other words providing PER context is going to be labeled as PER.
For example:
gold data: VĐV được xem là đầu_tiên ký hợp_đồng quảng_cáo là võ_sĩ ⟨PER⟩ Trần Quang Hạ ⟨PER⟩ sau khi đoạt HCV taekwondo Asiad ⟨LOC⟩ Hiroshima ⟨LOC⟩.
(The athlete considered the first to sign an advertising contract is the boxer Tran Quang Ha, after winning the taekwondo gold medal at the Hiroshima Asiad.)
predicted data: …là võ_sĩ ⟨PER⟩Trần Quang Hạ⟨PER⟩ sau khi đoạt HCV taekwondo Asiad ⟨PER⟩Hiroshima⟨PER⟩.
Some mistakes of the model come from the training set. For example, an anonymized person named "P." appears many times in the training set, so when the model meets "P." in the context "P. 3 Quận 9" (Ward 3, District 9), where "P." stands for "Phường" (Ward), the model predicts "P." as a PER.
Training data: nếu ⟨PER⟩P.⟨PER⟩ có ở đây – (If P. were here) Predicted data: ⟨PER⟩P. 3⟨PER⟩, Gò_vấp – (Ward 3, Go_vap District)
Experiment and Results ::: Analysis on predicted data
Table 6 shows the summary of errors by type based on the predicted data. After analyzing the errors on the predicted and gold data, we noticed that the difference between these errors lies mainly in the No annotation and No extraction types. Therefore, we only discuss the main reasons for No annotation:
Most of the wrong labels that the model assigns are brand names (e.g. Charriol, Dream, Jupiter, ...), abbreviated words (XKLD – xuất khẩu lao động (labour export)), movie names, etc. None of these words appear in the training data or in the word embeddings. The likely reasons are the following:
The vectors of these words are random, so their semantic content is poor.
The hidden states of these words also rely on the past features (forward pass) and future features (backward pass) of the sentence, so they are assigned wrongly because of their context.
These words are mostly capitalized or written in all capital letters, so they are tagged as named entities. This error is caused by the CNN layer, which extracts character-level information from the word.
Table 7 shows the details of the errors on the predicted data, i.e. the number of errors of each kind for each label.
Experiment and Results ::: Errors of annotators
After examining the training and test data, we realized that the data has many problems that need to be fixed in future experiments. The annotators are not consistent between the training data and the test data; the details are as follows:
Organizations are labeled in the training data but not in the test data:
Training data: ⟨ORG⟩ Sở Y_tế ⟨ORG⟩ (Department of Health)
Test data: Sở Y_tế (Department of Health)
Explanation: "Sở Y_tế" in the training and test data is the same organization entity; however, the one in the test data is not labeled.
Entities with the same meaning are labeled differently between the training data and the test data:
Training data: ⟨MISC⟩ người Việt ⟨MISC⟩ (Vietnamese people)
Test data: dân ⟨LOC⟩ Việt ⟨LOC⟩ (Vietnamese people)
Explanation: Both "người Việt" in the training data and "dân Việt" in the test data have the same meaning, but they are labeled differently.
The ranges of entities differ between the training data and the test data:
Training data: ⟨LOC⟩ làng Atâu ⟨LOC⟩ (Atâu village)
Test data: làng ⟨LOC⟩ Hàn_Quốc ⟨LOC⟩ (Korea village)
Explanation: The two villages differ only in name, but they are labeled with different ranges.
Capitalization rules are not applied consistently when a token is considered part of an entity:
Training data: ⟨ORG⟩ Công_ty Inmasco ⟨ORG⟩ (Inmasco Company)
Training data: công_ty con (subsidiaries)
Test data: công_ty ⟨ORG⟩ Yeon Young Entertainment ⟨ORG⟩ (Yeon Young Entertainment company)
Explanation: When it refers to a company with a specific name, it should be labeled ⟨ORG⟩ Công_ty Yeon Young Entertainment ⟨ORG⟩ with a capital "C".
Conclusion
In this paper, we have presented a thorough study of the distinctive error distributions produced by Bi-LSTM-CNN-CRF for the Vietnamese language. This should be helpful for researchers who want to create better NER models.
Based on the analysis results, we suggest some possible directions for improving the model and the data for data-driven Vietnamese NER in the future:
The word at the beginning of a sentence is capitalized, so if a person name appears in this position, the model tends to ignore it (No extraction). To address this issue, we can use the POS feature together with the BIO format (Inside, Outside, Beginning) BIBREF6 at the top (CRF) layer.
Unifying the labeling of the annotators between the train, dev and test sets would improve both the data quality and the classifier.
It would also help to have pre-trained word embeddings that cover the data, and the word segmentation algorithm needs to be more accurate. | No extraction, No annotation, Wrong range, Wrong tag, Wrong range and tag |
71d59c36225b5ee80af11d3568bdad7425f17b0c | 71d59c36225b5ee80af11d3568bdad7425f17b0c_0 | Q: How much better was the BLSTM-CNN-CRF than the BLSTM-CRF?
Text: Introduction
Named Entity Recognition (NER) is an information extraction subtask responsible for detecting entity elements in raw text and determining the category to which each element belongs; these categories include the names of persons, organizations and locations, and expressions of times, quantities, monetary values and percentages.
The problem of NER is described as follows:
Input: a sentence S consisting of a sequence of $n$ words: $S= w_1,w_2,w_3,…,w_n$ ($w_i$: the $i^{th}$ word)
Output: a sequence of $n$ labels $y_1,y_2,y_3,…,y_n$, where each label $y_i$ represents the category to which $w_i$ belongs.
For example, given a sentence:
Input: Giám đốc điều hành Tim Cook của Apple vừa giới thiệu 2 điện thoại iPhone, đồng hồ thông minh mới, lớn hơn ở sự kiện Flint Center, Cupertino.
(Apple CEO Tim Cook introduces 2 new, larger iPhones, Smart Watch at Cupertino Flint Center event)
The algorithm will output:
Output: ⟨O⟩Giám đốc điều hành⟨O⟩ ⟨PER⟩Tim Cook⟨PER⟩ ⟨O⟩của⟨O⟩ ⟨ORG⟩Apple⟨ORG⟩ ⟨O⟩vừa giới thiệu 2 điện thoại iPhone, đồng hồ thông minh mới, lớn hơn ở sự kiện⟨O⟩ ⟨ORG⟩Flint Center⟨ORG⟩, ⟨LOC⟩Cupertino⟨LOC⟩.
Here LOC, PER and ORG denote the name of a location, a person and an organization, respectively. Note that O means Other (not a named entity). We do not show the O label in the following examples because we only care about names of entities.
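To make the labeled output format concrete, the following sketch (ours, for illustration only) converts the bracketed annotation style used in these examples into per-token BIO labels of the kind a sequence tagger actually predicts.

```python
import re

def brackets_to_bio(text):
    """Turn '⟨PER⟩Tim Cook⟨PER⟩ của ⟨ORG⟩Apple⟨ORG⟩' into (token, BIO label) pairs."""
    tokens, labels = [], []
    pattern = re.compile(r"⟨(\w+)⟩(.*?)⟨\1⟩")  # matched opening/closing markers
    pos = 0
    for m in pattern.finditer(text):
        for tok in text[pos:m.start()].split():   # plain text between entities
            tokens.append(tok)
            labels.append("O")
        for i, tok in enumerate(m.group(2).split()):
            tokens.append(tok)
            labels.append(("B-" if i == 0 else "I-") + m.group(1))
        pos = m.end()
    for tok in text[pos:].split():                # trailing plain text
        tokens.append(tok)
        labels.append("O")
    return list(zip(tokens, labels))

print(brackets_to_bio("⟨PER⟩Tim Cook⟨PER⟩ của ⟨ORG⟩Apple⟨ORG⟩"))
# [('Tim', 'B-PER'), ('Cook', 'I-PER'), ('của', 'O'), ('Apple', 'B-ORG')]
```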
In this paper, we analyze common errors of previous state-of-the-art Deep Neural Network (DNN) techniques on the VLSP corpus. This gives later researchers an overview of the common errors made by these state-of-the-art models, which they can rely on to improve their own models.
Section 2 discusses related work. Section 3 presents our method for evaluating and analyzing the types of errors. Section 4 introduces the data used for testing and error analysis, together with the deep neural network methods and pre-trained word embeddings used in our experiments. Section 5 details the errors and evaluations. We end with our suggestions for addressing the above errors.
Related work
Previous publicly available NER systems did not use DNNs, for example the MITRE Identification Scrubber Toolkit (MIST) BIBREF0, Stanford NER BIBREF1, BANNER BIBREF2 and NERsuite BIBREF3. NER systems for Vietnamese language processing used traditional machine learning methods such as the Maximum Entropy Markov Model (MEMM), Support Vector Machines (SVM) and Conditional Random Fields (CRF). In particular, most of the toolkits for the NER task attempted to use MEMM BIBREF4 and CRF BIBREF5 to solve this problem.
Nowadays, because of the increase in available data, DNN methods are widely used. They have achieved great results on NER tasks: for example, Guillaume Lample et al. with BLSTM-CRF in BIBREF6 report a 90.94 F1 score, Chiu et al. with BLSTM-CNN in BIBREF7 obtain a 91.62 F1 score, Xuezhe Ma and Eduard Hovy with BLSTM-CNN-CRF in BIBREF8 achieve an F1 score of 91.21, and Thai-Hoang Pham and Phuong Le-Hong with BLSTM-CNN-CRF in BIBREF9 obtain an 88.59% F1 score. These DNN models are also the state-of-the-art models.
Error-analysis method
The results of our analysis experiments are reported as precision and recall over all labels (names of persons, locations, organizations and miscellaneous). The error-analysis process has two steps:
Step 1: We use two state-of-the-art models, BLSTM-CNN-CRF and BLSTM-CRF, to train and test on VLSP's NER corpus. In our experiments, we use word embeddings as features in the two systems.
Step 2: Based on the best results (BLSTM-CNN-CRF), error analysis is performed over five types of errors (No extraction, No annotation, Wrong range, Wrong tag, Wrong range and tag), in a way similar to BIBREF10, but we analyze both the gold labels and the predicted labels (more details in Figures 1 and 2).
A token (an entity name may contain more than one word) is extracted as a correct entity by the model if both of the following are correct:
Its length (range) is correct: the beginning and ending words are the same as in the gold data (annotator).
Its label (tag) is correct: the label is the same as in the gold data.
If it does not meet the two requirements above, it is a wrong entity (an error). We therefore divide the errors into five different types, described in detail as follows:
No extraction: The error where the model did not extract tokens as a name entity (NE) though the tokens were annotated as a NE.
LSTM-CNN-CRF: Việt_Nam
Annotator: ⟨LOC⟩ Việt_Nam ⟨LOC⟩
No annotation: The error where the model extracted tokens as an NE though the tokens were not annotated as an NE.
LSTM-CNN-CRF: ⟨PER⟩ Châu Âu ⟨PER⟩
Annotator: Châu Âu
Wrong range: The error where the model extracted tokens as an NE and only the range was wrong (the extracted tokens were partially annotated, or they were part of the annotated tokens).
LSTM-CNN-CRF: ⟨PER⟩ Ca_sĩ Nguyễn Văn A ⟨PER⟩
Annotator:
Ca_sĩ ⟨PER⟩ Nguyễn Văn A ⟨PER⟩
Wrong tag: The error where the model extracted tokens as an NE and only the tag type was wrong.
LSTM-CNN-CRF: Khám phá ⟨PER⟩ Yangsuri ⟨PER⟩
Annotator:
Khám phá ⟨LOC⟩ Yangsuri ⟨LOC⟩
Wrong range and tag: The error where the model extracted tokens as an NE and both the range and the tag type were wrong.
LSTM-CNN-CRF: ⟨LOC⟩ gian_hàng Apple ⟨LOC⟩
Annotator:
gian_hàng ⟨ORG⟩ Apple ⟨ORG⟩
We compare the predicted NEs to the gold NEs (Fig. 1). If they have the same range, the predicted NE is either correct or a Wrong tag. If it has a different range from the gold NE, we determine what type of error it is: if there is no overlap, it is a No extraction; if there is an overlap and the tag is the same as the gold NE, it is a Wrong range; finally, if there is an overlap but the tag is different, it is a Wrong range and tag. The steps in Fig. 2 are the same as in Fig. 1, the only difference being that we compare the gold NE to the predicted NE, and the No extraction type becomes No annotation.
Data and model ::: Data sets
To conduct the error analysis of the model, we used the corpus provided by VLSP 2016 - Named Entity Recognition. The dataset contains four different types of labels: Location (LOC), Person (PER), Organization (ORG) and Miscellaneous – names of entities that do not belong to the three types above (Table TABREF15). Although the corpus also contains POS and chunk information, we do not use them as features in our model.
There are two folders, with 267 text files of training data and 45 text files of test data, each in its own format. We take the first 21 text files, the last 22 text files, 22 sentences of the 22nd text file and 55 sentences of the 245th text file as development data. The remaining files become the training data. The test file is the same as the one VLSP provided. Finally, we have three text files based on the CoNLL 2003 format: train, dev and test.
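For reference, a minimal reader for the resulting CoNLL-2003-style files might look like the sketch below. This is our own illustration; the column layout (token first, NER tag last, blank lines between sentences) is an assumption about the converted files, not a specification from the paper.

```python
def read_conll(path):
    """Yield sentences as lists of (token, ner_tag) pairs.

    Assumes one token per line, whitespace-separated columns,
    the NER tag in the last column, and blank lines between sentences.
    """
    sentences, current = [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                if current:
                    sentences.append(current)
                    current = []
                continue
            cols = line.split()
            current.append((cols[0], cols[-1]))
    if current:
        sentences.append(current)
    return sentences

# Placeholder file names for the three converted splits.
# train, dev, test = read_conll("train.txt"), read_conll("dev.txt"), read_conll("test.txt")
```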
Data and model ::: Pre-trained word Embeddings
We use the Vietnamese word embeddings created by Kyubyong Park and by Edouard Grave et al.:
Kyubyong Park: in his project, he uses two methods, fastText and word2vec, to generate word embeddings from Wikipedia database backup dumps. His word embeddings are 100-dimensional vectors, and the vocabulary contains about 10k words.
Edouard Grave et al. BIBREF11: they use the fastText tool to generate word embeddings from Wikipedia. The format is the same as Kyubyong's, but their embeddings are 300-dimensional vectors and the vocabulary contains about 200k words.
Data and model ::: Model
Based on the state-of-the-art methods for NER, BLSTM-CNN-CRF is the end-to-end deep neural network model that achieves the best F-score BIBREF9. Therefore, we decided to conduct the experiments on this model and analyze its errors.
We run the experiments with the Ma and Hovy (2016) model BIBREF8, using the source code provided by Motoki Sato, and analyze the errors from its output. Before deciding to analyze this result, we ran some other methods, but this model with the Vietnamese pre-trained word embeddings provided by Kyubyong Park obtains the best result. The other results are shown in Table 2.
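Because several errors discussed below are attributed to the character-level CNN inside this architecture (e.g. capitalized unknown words being tagged as entities), the following sketch isolates that component: a per-word character encoder of the kind used in BLSTM-CNN-CRF. It is our illustrative reconstruction with placeholder sizes, not the code used in the experiments.

```python
import torch
import torch.nn as nn

class CharCNN(nn.Module):
    """Encode each word from its characters with a 1-D convolution + max pooling."""
    def __init__(self, n_chars, char_dim=30, n_filters=30, kernel=3):
        super().__init__()
        self.emb = nn.Embedding(n_chars, char_dim, padding_idx=0)
        self.conv = nn.Conv1d(char_dim, n_filters, kernel, padding=kernel // 2)

    def forward(self, chars):
        # chars: (batch, seq_len, max_word_len) of character ids
        b, s, w = chars.shape
        x = self.emb(chars.view(b * s, w)).transpose(1, 2)  # (b*s, char_dim, w)
        x = torch.relu(self.conv(x)).max(dim=2).values      # (b*s, n_filters)
        return x.view(b, s, -1)                             # per-word char features

enc = CharCNN(n_chars=120)
feats = enc(torch.randint(0, 120, (2, 5, 10)))
print(feats.shape)  # torch.Size([2, 5, 30])
```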
Experiment and Results
Table 2 shows our experiments on the two models with and without different pre-trained word embeddings – KP means Kyubyong Park's pre-trained word embeddings and EG means Edouard Grave's pre-trained word embeddings.
We compare the outputs of the BLSTM-CNN-CRF model (predicted) to the annotated data (gold) and analyze the errors. Table 3 shows the performance of the BLSTM-CNN-CRF model. In our experiments, we use three evaluation measures (precision, recall, and F1 score) to assess the experimental results; they are computed from the quantities described in Table 3: "correctNE", the number of entity labels that the model predicts correctly; "goldNE", the number of entity labels annotated by the annotators in the gold data; and "foundNE", the number of entity labels the model outputs (whether correct or not).
In Table 3, we can see that the recall on the ORG label is the lowest. The reason is that almost all ORG entities in the test file are brand names that appear neither in the training data nor in the pre-trained word embeddings. Moreover, the characters inside these brand names also occur inside person names in the training data, and the context from both sides of the sentence (future and past features) also leads the model to treat the named entity differently from what it should be.
Table 4 shows that the most frequent error type is No extraction. The errors were counted by taking the logical sum (OR) of the gold labels and the labels predicted by the model. The second most frequent error type is Wrong tag, meaning the model extracts a named entity but assigns the wrong tag.
Experiment and Results ::: Error analysis on gold data
First of all, we compare the predicted NEs to the gold NEs (Fig. 1). Table 4 summarizes the errors by type based on the gold labels: "correct" is the number of gold tags that the model predicted correctly, "error" is the number of gold tags that the model predicted incorrectly, and "total" is their sum. The next four columns show the number of errors of each type for each label.
Table 5 shows that the Person, Location and Organization labels are the main reason why No extraction and Wrong tag are high.
After analyzing the errors based on the gold NEs, we identified the following reasons:
Almost all of the wrongly handled NEs do not appear in the training data or in the pre-trained embeddings. Their vectors are therefore initialized randomly, so these vectors are poor, i.e. they carry no semantic information.
A "weird" ORG NE that appears in a sentence together with other words providing PER context is going to be labeled as PER.
For example:
gold data: VĐV được xem là đầu_tiên ký hợp_đồng quảng_cáo là võ_sĩ ⟨PER⟩ Trần Quang Hạ ⟨PER⟩ sau khi đoạt HCV taekwondo Asiad ⟨LOC⟩ Hiroshima ⟨LOC⟩.
(The athlete considered the first to sign an advertising contract is the boxer Tran Quang Ha, after winning the taekwondo gold medal at the Hiroshima Asiad.)
predicted data: …là võ_sĩ ⟨PER⟩Trần Quang Hạ⟨PER⟩ sau khi đoạt HCV taekwondo Asiad ⟨PER⟩Hiroshima⟨PER⟩.
Some mistakes of the model come from the training set. For example, an anonymized person named "P." appears many times in the training set, so when the model meets "P." in the context "P. 3 Quận 9" (Ward 3, District 9), where "P." stands for "Phường" (Ward), the model predicts "P." as a PER.
Training data: nếu ⟨PER⟩P.⟨PER⟩ có ở đây – (If P. were here) Predicted data: ⟨PER⟩P. 3⟨PER⟩, Gò_vấp – (Ward 3, Go_vap District)
Experiment and Results ::: Analysis on predicted data
Table 6 shows the summary of errors by type based on the predicted data. After analyzing the errors on the predicted and gold data, we noticed that the difference between these errors lies mainly in the No annotation and No extraction types. Therefore, we only discuss the main reasons for No annotation:
Most of the wrong labels that the model assigns are brand names (e.g. Charriol, Dream, Jupiter, ...), abbreviated words (XKLD – xuất khẩu lao động (labour export)), movie names, etc. None of these words appear in the training data or in the word embeddings. The likely reasons are the following:
The vectors of these words are random, so their semantic content is poor.
The hidden states of these words also rely on the past features (forward pass) and future features (backward pass) of the sentence, so they are assigned wrongly because of their context.
These words are mostly capitalized or written in all capital letters, so they are tagged as named entities. This error is caused by the CNN layer, which extracts character-level information from the word.
Table 7 shows the details of the errors on the predicted data, i.e. the number of errors of each kind for each label.
Experiment and Results ::: Errors of annotators
After examining the training and test data, we realized that the data has many problems that need to be fixed in future experiments. The annotators are not consistent between the training data and the test data; the details are as follows:
Organizations are labeled in the training data but not in the test data:
Training data: ⟨ORG⟩ Sở Y_tế ⟨ORG⟩ (Department of Health)
Test data: Sở Y_tế (Department of Health)
Explanation: "Sở Y_tế" in the training and test data is the same organization entity; however, the one in the test data is not labeled.
Entities with the same meaning are labeled differently between the training data and the test data:
Training data: ⟨MISC⟩ người Việt ⟨MISC⟩ (Vietnamese people)
Test data: dân ⟨LOC⟩ Việt ⟨LOC⟩ (Vietnamese people)
Explanation: Both "người Việt" in the training data and "dân Việt" in the test data have the same meaning, but they are labeled differently.
The ranges of entities differ between the training data and the test data:
Training data: ⟨LOC⟩ làng Atâu ⟨LOC⟩ (Atâu village)
Test data: làng ⟨LOC⟩ Hàn_Quốc ⟨LOC⟩ (Korea village)
Explanation: The two villages differ only in name, but they are labeled with different ranges.
Capitalization rules are not applied consistently when a token is considered part of an entity:
Training data: ⟨ORG⟩ Công_ty Inmasco ⟨ORG⟩ (Inmasco Company)
Training data: công_ty con (subsidiaries)
Test data: công_ty ⟨ORG⟩ Yeon Young Entertainment ⟨ORG⟩ (Yeon Young Entertainment company)
Explanation: When it refers to a company with a specific name, it should be labeled ⟨ORG⟩ Công_ty Yeon Young Entertainment ⟨ORG⟩ with a capital "C".
Conclusion
In this paper, we have presented a thorough study of the distinctive error distributions produced by Bi-LSTM-CNN-CRF for the Vietnamese language. This should be helpful for researchers who want to create better NER models.
Based on the analysis results, we suggest some possible directions for improving the model and the data for data-driven Vietnamese NER in the future:
The word at the beginning of a sentence is capitalized, so if a person name appears in this position, the model tends to ignore it (No extraction). To address this issue, we can use the POS feature together with the BIO format (Inside, Outside, Beginning) BIBREF6 at the top (CRF) layer.
Unifying the labeling of the annotators between the train, dev and test sets would improve both the data quality and the classifier.
It would also help to have pre-trained word embeddings that cover the data, and the word segmentation algorithm needs to be more accurate. | Best BLSTM-CNN-CRF had F1 score 86.87 vs 86.69 of best BLSTM-CRF |
efc65e5032588da4a134d121fe50d49fe8fe5e8c | efc65e5032588da4a134d121fe50d49fe8fe5e8c_0 | Q: What supplemental tasks are used for multitask learning?
Text: Introduction
Community question answering (cQA) is a paradigm that provides forums for users to ask or answer questions on any topic with barely any restrictions. In the past decade, these websites have attracted a great number of users, and have accumulated a large collection of question-comment threads generated by these users. However, the low restriction results in a high variation in answer quality, which makes it time-consuming to search for useful information from the existing content. It would therefore be valuable to automate the procedure of ranking related questions and comments for users with a new question, or when looking for solutions from comments of an existing question.
Automation of cQA forums can be divided into three tasks: question-comment relevance (Task A), question-question relevance (Task B), and question-external comment relevance (Task C). One might think that classic retrieval models like language models for information retrieval BIBREF0 could solve these tasks. However, a big challenge for cQA tasks is that users are used to expressing similar meanings with different words, which creates gaps when matching questions based on common words. Other challenges include informal usage of language, highly diverse content of comments, and variation in the length of both questions and comments.
To overcome these issues, most previous work (e.g. SemEval 2015 BIBREF1 ) relied heavily on additional features and reasoning capabilities. In BIBREF2 , a neural attention-based model was proposed for automatically recognizing entailment relations between pairs of natural language sentences. In this study, we first modify this model for all three cQA tasks. We also extend this framework into a jointly trained model when the external resources are available, i.e. selecting an external comment when we know the question that the external comment answers (Task C).
Our ultimate objective is to classify relevant questions and comments without complicated handcrafted features. By applying RNN-based encoders, we avoid heavily engineered features and learn the representation automatically. In addition, an attention mechanism augments encoders with the ability to attend to past outputs directly. This becomes helpful when encoding longer sequences, since we no longer need to compress all information into a fixed-length vector representation.
In our view, existing annotated cQA corpora are generally too small to properly train an end-to-end neural network. To address this, we investigate transfer learning by pretraining the recurrent systems on other corpora, and also generating additional instances from existing cQA corpus.
Related Work
Earlier work on community question answering relied heavily on feature engineering, linguistic tools, and external resources. BIBREF3 and BIBREF4 utilized rich non-textual features such as the answer's profile. BIBREF5 syntactically analyzed the question and extracted named entity features. BIBREF6 demonstrated that a textual entailment system can enhance cQA tasks by casting question answering as logical entailment.
More recent work incorporated word vectors into the feature extraction system and, based on them, designed different distance metrics between question and answer BIBREF7 BIBREF8. While these approaches showed effectiveness, it is difficult to generalize them to common cQA tasks, since linguistic tools and external resources may be restrictive in other languages and the features are highly customized for each cQA task.
Very recent work on answer selection also involved the use of neural networks. BIBREF9 used an LSTM to construct a joint vector based on both the question and the answer and then converted it into a learning-to-rank problem. BIBREF10 proposed several convolutional neural network (CNN) architectures for cQA. Our method differs in that an RNN encoder is applied here, and by adding an attention mechanism we jointly learn which words in the question to focus on, which also enables qualitative analysis. During classification, we feed the extracted vector into a feed-forward neural network directly instead of using mean/max pooling over the time steps.
Method
In this section, we first discuss long short-term memory (LSTM) units and an associated attention mechanism. Next, we explain how we can encode a pair of sentences into a dense vector for predicting relationships using an LSTM with an attention mechanism. Finally, we apply these models to predict question-question similarity, question-comment similarity, and question-external comment similarity.
LSTM Models
LSTMs have shown great success in many different fields. An LSTM unit contains a memory cell with self-connections, as well as three multiplicative gates to control information flow. Given input vector $x_t$ , previous hidden outputs $h_{t-1}$ , and previous cell state $c_{t-1}$ , LSTM units operate as follows:
$$\begin{aligned} X &= \begin{bmatrix} x_t\\[0.3em] h_{t-1}\\[0.3em] \end{bmatrix}\\ i_t &= \sigma (\mathbf {W_{iX}}X + \mathbf {W_{ic}}c_{t-1} + \mathbf {b_i})\\ f_t &= \sigma (\mathbf {W_{fX}}X + \mathbf {W_{fc}}c_{t-1} + \mathbf {b_f})\\ o_t &= \sigma (\mathbf {W_{oX}}X + \mathbf {W_{oc}}c_{t-1} + \mathbf {b_o})\\ c_t &= f_t \odot c_{t-1} + i_t \odot \tanh (\mathbf {W_{cX}}X + \mathbf {b_c})\\ h_t &= o_t \odot \tanh (c_t) \end{aligned}$$ (Eq. 3)
where $i_t$, $f_t$, $o_t$ are input, forget, and output gates, respectively. The sigmoid function $\sigma(\cdot)$ is a soft gate function controlling the amount of information flow. The $W$s and $b$s are model parameters to learn.
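A direct transcription of these equations into code can help check the dimensions. The sketch below is ours (NumPy, toy sizes): it implements one LSTM step as written, with the gates computed from the stacked input $X = [x_t; h_{t-1}]$ and the dependence on $c_{t-1}$ kept as element-wise (diagonal) peephole weights, which is a simplification we introduce for brevity.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, p):
    """One LSTM step following Eq. 3; p holds the weight matrices and biases."""
    X = np.concatenate([x_t, h_prev])                       # stacked input
    i = sigmoid(p["W_iX"] @ X + p["W_ic"] * c_prev + p["b_i"])
    f = sigmoid(p["W_fX"] @ X + p["W_fc"] * c_prev + p["b_f"])
    o = sigmoid(p["W_oX"] @ X + p["W_oc"] * c_prev + p["b_o"])
    c = f * c_prev + i * np.tanh(p["W_cX"] @ X + p["b_c"])
    h = o * np.tanh(c)
    return h, c

# Toy dimensions: input size 4, hidden size 3.
rng = np.random.default_rng(0)
d_in, d_h = 4, 3
p = {k: rng.standard_normal((d_h, d_in + d_h)) for k in ("W_iX", "W_fX", "W_oX", "W_cX")}
p.update({k: rng.standard_normal(d_h) for k in ("W_ic", "W_fc", "W_oc",
                                                "b_i", "b_f", "b_o", "b_c")})
h, c = lstm_step(rng.standard_normal(d_in), np.zeros(d_h), np.zeros(d_h), p)
print(h.shape, c.shape)  # (3,) (3,)
```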
Neural Attention
A traditional RNN encoder-decoder approach BIBREF11 first encodes an arbitrary length input sequence into a fixed-length dense vector that can be used as input to subsequent classification models, or to initialize the hidden state of a secondary decoder. However, the requirement to compress all necessary information into a single fixed length vector can be problematic. A neural attention model BIBREF12 BIBREF13 has been recently proposed to alleviate this issue by enabling the network to attend to past outputs when decoding. Thus, the encoder no longer needs to represent an entire sequence with one vector; instead, it encodes information into a sequence of vectors, and adaptively chooses a subset of the vectors when decoding.
Predicting Relationships of Object Pairs with an Attention Model
In our cQA tasks, the pair of objects are (question, question) or (question, comment), and the relationship is relevant/irrelevant. The left side of Figure 1 shows one intuitive way to predict relationships using RNNs. Parallel LSTMs encode two objects independently, and then concatenate their outputs as an input to a feed-forward neural network (FNN) with a softmax output layer for classification.
The representations of the two objects are generated independently in this manner. However, we are more interested in the relationship instead of the object representations themselves. Therefore, we consider a serialized LSTM-encoder model in the right side of Figure 1 that is similar to that in BIBREF2 , but also allows an augmented feature input to the FNN classifier.
Figure 2 illustrates our attention framework in more detail. The first LSTM reads one object, and passes information through hidden units to the second LSTM. The second LSTM then reads the other object and generates the representation of this pair after the entire sequence is processed. We build another FNN that takes this representation as input to classify the relationship of this pair.
By adding an attention mechanism to the encoder, we allow the second LSTM to attend to the sequence of output vectors from the first LSTM, and hence generate a weighted representation of the first object according to both objects. Let $h_N$ be the last output of the second LSTM and $M = [h_1, h_2, \cdots , h_L]$ be the sequence of output vectors of the first object. The weighted representation of the first object is
$$h^{\prime } = \sum _{i=1}^{L} \alpha _i h_i$$ (Eq. 7)
The weight is computed by
$$\alpha _i = \dfrac{exp(a(h_i,h_N))}{\sum _{j=1}^{L}exp(a(h_j,h_N))}$$ (Eq. 8)
where $a()$ is the importance model that produces a higher score for $(h_i, h_N)$ if $h_i$ is useful to determine the object pair's relationship. We parametrize this model using another FNN. Note that in our framework, we also allow other augmented features (e.g., the ranking score from the IR system) to enhance the classifier. So the final input to the classifier will be $h_N$ , $h^{\prime }$ , as well as augmented features.
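The weighting in Eq. 7–8 amounts to a softmax over importance scores. The sketch below is our own illustration: the importance model $a(\cdot,\cdot)$ is reduced to a small one-hidden-layer scorer of our choosing, and all inputs are random stand-ins.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def attend(M, h_N, W, v):
    """Weighted representation h' of the first object (Eq. 7-8).

    M:   (L, d) output vectors of the first LSTM
    h_N: (d,)   last output of the second LSTM
    The importance model a(h_i, h_N) is a one-hidden-layer FNN with
    parameters W (hidden x 2d) and v (hidden,) -- our own choice here.
    """
    scores = np.array([v @ np.tanh(W @ np.concatenate([h_i, h_N])) for h_i in M])
    alpha = softmax(scores)            # Eq. 8
    return alpha @ M, alpha            # Eq. 7

rng = np.random.default_rng(1)
L, d, hidden = 5, 8, 16
M, h_N = rng.standard_normal((L, d)), rng.standard_normal(d)
h_prime, alpha = attend(M, h_N, rng.standard_normal((hidden, 2 * d)),
                        rng.standard_normal(hidden))
print(h_prime.shape, alpha.round(2))
```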
Modeling Question-External Comments
For task C, in addition to an original question (oriQ) and an external comment (relC), the question which relC commented on is also given (relQ). To incorporate this extra information, we consider a multitask learning framework which jointly learns to predict the relationships of the three pairs (oriQ/relQ, oriQ/relC, relQ/relC).
Figure 3 shows our framework: the three lower models are separate serialized LSTM-encoders for the three respective object pairs, whereas the upper model is an FNN that takes as input the concatenation of the outputs of three encoders, and predicts the relationships for all three pairs. More specifically, the output layer consists of three softmax layers where each one is intended to predict the relationship of one particular pair.
For the overall loss function, we combine three separate loss functions using a heuristic weight vector $\beta $ that allocates a higher weight to the main task (oriQ-relC relationship prediction) as follows:
$$\mathcal {L} = \beta _1 \mathcal {L}_1 + \beta _2 \mathcal {L}_2 + \beta _3 \mathcal {L}_3$$ (Eq. 11)
By doing so, we hypothesize that the related tasks can improve the main task by leveraging commonality among all tasks.
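In a typical implementation, the combined objective of Eq. 11 is a weighted sum of three cross-entropy terms computed over the shared representation. The PyTorch sketch below is our own minimal illustration: the encoder outputs are random stand-ins, and the weights follow the heuristic reported later in the paper ($0.8$ for the main oriQ–relC loss and $0.1$ for each auxiliary loss; the ordering of the terms here is ours).

```python
import torch
import torch.nn as nn

betas = (0.8, 0.1, 0.1)                  # largest weight on the main task's loss
criterion = nn.CrossEntropyLoss()

batch, d = 4, 32
# Stand-ins for the three serialized LSTM-encoder outputs (oriQ/relC, oriQ/relQ, relQ/relC).
pair_repr = [torch.randn(batch, d) for _ in range(3)]

shared = nn.Sequential(nn.Linear(3 * d, 64), nn.Tanh())    # upper FNN over the concatenation
heads = nn.ModuleList(nn.Linear(64, 2) for _ in range(3))  # one softmax head per pair
labels = [torch.randint(0, 2, (batch,)) for _ in range(3)]

z = shared(torch.cat(pair_repr, dim=-1))
losses = [criterion(head(z), y) for head, y in zip(heads, labels)]
total_loss = sum(b * l for b, l in zip(betas, losses))     # Eq. 11
total_loss.backward()
print(float(total_loss))
```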
Experiments
We evaluate our approach on all three cQA tasks, using the cQA datasets provided by the SemEval 2016 task. The cQA data is organized as follows: there are 267 original questions, each question has 10 related questions, and each related question has 10 comments. Therefore, for task A there are a total of 26,700 question-comment pairs, for task B there are 2,670 question-question pairs, and for task C there are 26,700 question-comment pairs. The test dataset includes 50 questions, 500 related questions and 5,000 comments, which do not overlap with the training set. To evaluate the performance, we use mean average precision (MAP) and F1 score.
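For completeness, mean average precision over ranked candidates can be computed as in the short sketch below (our illustration; the rankings and gold sets are toy data, not SemEval examples).

```python
def average_precision(ranked_ids, relevant_ids):
    """AP for one query: ranked_ids is the system ranking, relevant_ids the gold set."""
    hits, score = 0, 0.0
    for rank, cand in enumerate(ranked_ids, start=1):
        if cand in relevant_ids:
            hits += 1
            score += hits / rank
    return score / len(relevant_ids) if relevant_ids else 0.0

def mean_average_precision(all_rankings, all_relevant):
    aps = [average_precision(r, g) for r, g in zip(all_rankings, all_relevant)]
    return sum(aps) / len(aps)

# Toy example: two queries with their ranked candidates and gold relevant sets.
rankings = [["c3", "c1", "c7", "c2"], ["c5", "c9", "c4"]]
gold = [{"c1", "c2"}, {"c4"}]
print(mean_average_precision(rankings, gold))
```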
Preliminary Results
Table 2 shows the initial results using the RNN encoder for the different tasks. We observe that the attention model always gets better results than the RNN without attention, especially for task C. However, the RNN model achieves a very low F1 score; for task B, it is even worse than the random baseline. We believe the reason is that task B has only 2,670 pairs for training, which is very limited for a reasonably sized neural network. For task C, we believe the problem is highly imbalanced data: since the related comments did not directly comment on the original question, more than $90\%$ of the comments are labeled as irrelevant to the original question. The low F1 (with high precision and low recall) means our system tends to label most comments as irrelevant. In the following section, we investigate methods to address these issues.
Robust Parameter Initialization
One way to improve models trained on limited data is to use external data to pretrain the neural network. We therefore considered two different datasets for this task.
Cross-domain: The Stanford natural language inference (SNLI) corpus BIBREF17 has a huge amount of cleaned premise and hypothesis pairs. Unfortunately the pairs are for a different task. The relationship between the premise and hypothesis may be similar to the relation between questions and comments, but may also be different.
In-domain: since task A has reasonable performance and its network is well trained, we could use it directly to initialize task B.
To utilize the data, we first trained the model on each auxiliary dataset (SNLI or Task A) and then removed the softmax layer. After that, we retrained the network using the target data with a randomly initialized softmax layer.
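Operationally, "removing the softmax layer" just means re-initializing the final classification layer while keeping the pretrained weights below it. The PyTorch sketch below shows that step; it is our own illustration with an arbitrary stand-in model, not the actual cQA network.

```python
import torch.nn as nn

def make_model(n_classes):
    # Stand-in for the encoder + FNN classifier used in the paper.
    return nn.Sequential(
        nn.Linear(300, 128), nn.ReLU(),        # "encoder" part (kept)
        nn.Linear(128, n_classes),             # classification/softmax layer (replaced)
    )

# 1) Pretrain on the auxiliary data (SNLI or task A)...
pretrained = make_model(n_classes=3)
# ... the training loop on the auxiliary corpus would go here ...

# 2) Re-use everything except the final layer for the target task.
target_model = make_model(n_classes=2)
target_state = {k: v for k, v in pretrained.state_dict().items()
                if not k.startswith("2.")}     # drop the last Linear ("2.weight", "2.bias")
target_model.load_state_dict(target_state, strict=False)

# 3) Retrain on the (small) target data with the freshly initialized output layer.
```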
For task A, SNLI cannot improve the MAP or F1 scores; actually, it slightly hurts the performance. We surmise that this is probably because the domain is different. Further investigation is needed: for example, we could use only the parameters of the embedding layers. For task B, SNLI yields a slight improvement on MAP ($0.2\%$), and Task A gives $1.2\%$ on top of that. No improvement was observed on F1. For task C, pretraining with task A is also better than using SNLI (task A is $1\%$ better than the baseline, while SNLI is almost the same).
In summary, the in-domain pretraining seems better, but overall, the improvement is less than we expected, especially for task B, which only has very limited target data. We will not make a conclusion here since more investigation is needed.
Multitask Learning
As mentioned in Section "Modeling Question-External Comments", we also explored a multitask learning framework that jointly learns to predict the relationships of all three tasks. We set the weight to $0.8$ for the main task (task C) and $0.1$ for each of the auxiliary tasks. The MAP score did not improve, but F1 increased to $0.1617$. We believe this is because the other tasks have more balanced labels, which improves the shared parameters for task C.
Augmented data
There are many sources of external question-answer pairs that could be used in our tasks, for example WebQuestions (introduced by the authors of the SEMPRE system BIBREF18) and the SimpleQuestions dataset. All of them are positive examples for our task, and we can easily create negative examples from them. Initial experiments indicate that it is very easy to overfit these obvious negative examples. We believe this is because such negative examples are non-informative for our task and just introduce noise.
Since the external data seems to hurt the performance, we try to use in-domain pairs to enhance task B and task C. For task B, if related question 1 (rel1) and related question 2 (rel2) are both relevant to the original question, then we add a positive sample (rel1, rel2, 1). If one of rel1 and rel2 is irrelevant and the other is relevant, we add a negative sample (rel1, rel2, 0). After doing this, the number of samples for task B increases from $2,670$ to $11,810$. By applying this method, the MAP score increased slightly from $0.5723$ to $0.5789$, but the F1 score improved from $0.4334$ to $0.5860$.
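The pair-generation rule just described can be written compactly. The sketch below is our illustration, assuming each original question comes with a list of (related_question, is_relevant) judgments; the question texts are made up.

```python
from itertools import combinations

def augment_task_b(related):
    """Build extra question-question pairs from one original question's
    related questions, given as (question_text, is_relevant) tuples."""
    pairs = []
    for (q1, rel1), (q2, rel2) in combinations(related, 2):
        if rel1 and rel2:
            pairs.append((q1, q2, 1))          # both relevant -> positive pair
        elif rel1 != rel2:
            pairs.append((q1, q2, 0))          # one relevant, one not -> negative pair
    return pairs                               # both irrelevant -> no pair added

related = [("How to renew a visa?", True),
           ("Visa renewal procedure?", True),
           ("Best beaches in Doha?", False)]
print(augment_task_b(related))
```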
For task C, we used task A's data directly. The results are very similar, with a slight improvement on MAP but a large improvement on the F1 score, from $0.1449$ to $0.2064$.
Augmented features
To further enhance the system, we incorporate a one-hot vector of the original IR ranking as an additional feature into the FNN classifier. Table 3 shows the results. Comparing the models with and without augmented features, we can see large improvements for tasks B and C. The F1 score for task A degrades slightly but MAP improves. This might be because task A already had a substantial amount of training data.
Comparison with Other Systems
Table 4 gives the final comparison between the different models (we only list the MAP score because it is the official score for the challenge). Since the two baseline models did not use any additional data, in this table our system was also restricted to the provided training data. For task A, we can see that when there is enough training data, our single system already performs better than a very strong feature-rich system. For task B, since only limited training data is given, both the feature-rich system and our system are worse than the IR system. For task C, our system also obtains results comparable to the feature-rich system. If we do a simple system combination (averaging the rank scores) between our system and the IR system, the combined system gives large gains on tasks B and C. This implies that our system is complementary to the IR system.
Analysis of Attention Mechanism
In addition to quantitative analysis, it is natural to qualitatively evaluate the performance of the attention mechanism by visualizing the weight distribution of each instance. We randomly picked several instances from the test set in task A, for which the sentence lengths are more moderate for demonstration. These examples are shown in Figure 5 , and categorized into short, long, and noisy sentences for discussion. A darker blue patch refers to a larger weight relative to other words in the same sentence.
Short Sentences
Figure 5 illustrates two cQA examples whose questions are relatively short. The comments corresponding to these questions are “...snorkeling two days ago off the coast of dukhan...” and “the doha international airport...”. We can observe that our model successfully learns to focus on the most representative part of the question pertaining to classifying the relationship, which is "place for snorkeling" for the first example and “place can ... visited in qatar” for the second example.
Long Sentences
In Figure 5 , we investigate two examples with longer questions, which both contain 63 words. Interestingly, the distribution of weights does not become more uniform; the model still focuses attention on a small number of hot words, for example, “puppy dog for ... mall” and “hectic driving in doha ... car insurance ... quite costly”. Additionally, some words that appear frequently but carry little information for classification are assigned very small weights, such as I/we/my, is/am, like, and to.
Noisy Sentence
Due to the open nature of cQA forums, some content is noisy. Figure 5 is an example with excessive usage of question marks. Again, our model exhibits its robustness by allocating very low weights to the noise symbols and therefore excludes the noninformative content.
Conclusion
In this paper, we demonstrate that a general RNN encoder framework can be applied to community question answering tasks. By adding a neural attention mechanism, we showed quantitatively and qualitatively that attention can improve the RNN encoder framework. To deal with a more realistic scenario, we expanded the framework to incorporate metadata as augmented inputs to a FNN classifier, and pretrained models on larger datasets, increasing both stability and performance. Our model is consistently better than or comparable to a strong feature-rich baseline system, and is superior to an IR-based system when there is a reasonable amount of training data.
Our model is complimentary with an IR-based system that uses vast amounts of external resources but trained for general purposes. By combining the two systems, it exceeds the feature-rich and IR-based system in all three tasks.
Moreover, our approach is also language independent. We have also performed preliminary experiments on the Arabic portion of the SemEval-2016 cQA task. The results are competitive with a hand-tuned strong baseline from SemEval-2015.
Future work could proceed in two directions: first, we can enrich the existing system by incorporating available metadata and preprocessing data with morphological normalization and out-of-vocabulary mappings; second, we can reinforce our model by carrying out word-by-word and history-aware attention mechanisms instead of attending only when reading the last word. | Multitask learning is used for the task of predicting relevance of a comment on a different question to a given question, where the supplemental tasks are predicting relevance between the questions, and between the comment and the corresponding question |
a30958c7123d1ad4723dcfd19d8346ccedb136d5 | a30958c7123d1ad4723dcfd19d8346ccedb136d5_0 | Q: Is the improvement actually coming from using an RNN?
Text: Introduction
Community question answering (cQA) is a paradigm that provides forums for users to ask or answer questions on any topic with barely any restrictions. In the past decade, these websites have attracted a great number of users, and have accumulated a large collection of question-comment threads generated by these users. However, the low restriction results in a high variation in answer quality, which makes it time-consuming to search for useful information from the existing content. It would therefore be valuable to automate the procedure of ranking related questions and comments for users with a new question, or when looking for solutions from comments of an existing question.
Automation of cQA forums can be divided into three tasks: question-comment relevance (Task A), question-question relevance (Task B), and question-external comment relevance (Task C). One might think that classic retrieval models like language models for information retrieval BIBREF0 could solve these tasks. However, a big challenge for cQA tasks is that users are used to expressing similar meanings with different words, which creates gaps when matching questions based on common words. Other challenges include informal usage of language, highly diverse content of comments, and variation in the length of both questions and comments.
To overcome these issues, most previous work (e.g. SemEval 2015 BIBREF1 ) relied heavily on additional features and reasoning capabilities. In BIBREF2 , a neural attention-based model was proposed for automatically recognizing entailment relations between pairs of natural language sentences. In this study, we first modify this model for all three cQA tasks. We also extend this framework into a jointly trained model when the external resources are available, i.e. selecting an external comment when we know the question that the external comment answers (Task C).
Our ultimate objective is to classify relevant questions and comments without complicated handcrafted features. By applying RNN-based encoders, we avoid heavily engineered features and learn the representation automatically. In addition, an attention mechanism augments encoders with the ability to attend to past outputs directly. This becomes helpful when encoding longer sequences, since we no longer need to compress all information into a fixed-length vector representation.
In our view, existing annotated cQA corpora are generally too small to properly train an end-to-end neural network. To address this, we investigate transfer learning by pretraining the recurrent systems on other corpora, and also generating additional instances from existing cQA corpus.
Related Work
Earlier work on community question answering relied heavily on feature engineering, linguistic tools, and external resources. BIBREF3 and BIBREF4 utilized rich non-textual features such as the answerer's profile. BIBREF5 syntactically analyzed the question and extracted named entity features. BIBREF6 demonstrated that a textual entailment system can enhance the cQA task by casting question answering as logical entailment.
More recent work incorporated word vectors into the feature extraction system and, based on them, designed different distance metrics between question and answer BIBREF7 BIBREF8 . While these approaches showed effectiveness, it is difficult to generalize them to common cQA tasks, since linguistic tools and external resources may be restrictive in other languages and the features are highly customized for each cQA task.
Very recent work on answer selection also involved the use of neural networks. BIBREF9 used an LSTM to construct a joint vector based on both the question and the answer and then cast the problem as learning to rank. BIBREF10 proposed several convolutional neural network (CNN) architectures for cQA. Our method differs in that an RNN encoder is applied here, and by adding an attention mechanism we jointly learn which words in the question to focus on, which also makes qualitative analysis possible. During classification, we feed the extracted vector into a feed-forward neural network directly instead of applying mean/max pooling over the time steps.
Method
In this section, we first discuss long short-term memory (LSTM) units and an associated attention mechanism. Next, we explain how we can encode a pair of sentences into a dense vector for predicting relationships using an LSTM with an attention mechanism. Finally, we apply these models to predict question-question similarity, question-comment similarity, and question-external comment similarity.
LSTM Models
LSTMs have shown great success in many different fields. An LSTM unit contains a memory cell with self-connections, as well as three multiplicative gates to control information flow. Given input vector $x_t$ , previous hidden outputs $h_{t-1}$ , and previous cell state $c_{t-1}$ , LSTM units operate as follows:
$$X &= \begin{bmatrix} x_t\\[0.3em] h_{t-1}\\[0.3em] \end{bmatrix}\\ i_t &= \sigma (\mathbf {W_{iX}}X + \mathbf {W_{ic}}c_{t-1} + \mathbf {b_i})\\ f_t &= \sigma (\mathbf {W_{fX}}X + \mathbf {W_{fc}}c_{t-1} + \mathbf {b_f})\\ o_t &= \sigma (\mathbf {W_{oX}}X + \mathbf {W_{oc}}c_{t-1} + \mathbf {b_o})\\ c_t &= f_t \odot c_{t-1} + i_t \odot tanh(\mathbf {W_{cX}}X + \mathbf {b_c})\\ h_t &= o_t \odot tanh(c_t)$$ (Eq. 3)
where $i_t$ , $f_t$ , $o_t$ are the input, forget, and output gates, respectively. The sigmoid function $\sigma ()$ is a soft gating function controlling the amount of information flow. The weight matrices $W$ and bias vectors $b$ are model parameters to be learned.
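To make the gate equations above concrete, here is a minimal NumPy sketch of a single step; it follows Eq. (3) literally (including the cell-state terms inside the gates), and the parameter names, shapes, and dictionary layout are illustrative assumptions rather than the paper's actual implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, p):
    """One LSTM step following Eq. (3); p holds the W_* matrices and b_* vectors."""
    X = np.concatenate([x_t, h_prev])                            # X = [x_t ; h_{t-1}]
    i = sigmoid(p["W_iX"] @ X + p["W_ic"] @ c_prev + p["b_i"])   # input gate
    f = sigmoid(p["W_fX"] @ X + p["W_fc"] @ c_prev + p["b_f"])   # forget gate
    o = sigmoid(p["W_oX"] @ X + p["W_oc"] @ c_prev + p["b_o"])   # output gate
    c_t = f * c_prev + i * np.tanh(p["W_cX"] @ X + p["b_c"])     # new cell state
    h_t = o * np.tanh(c_t)                                       # new hidden output
    return h_t, c_t
```

With hidden size $d$ and input size $m$ , W_iX has shape (d, m+d) and W_ic shape (d, d); in practice one would normally rely on a library LSTM cell rather than this hand-rolled version.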
Neural Attention
A traditional RNN encoder-decoder approach BIBREF11 first encodes an arbitrary length input sequence into a fixed-length dense vector that can be used as input to subsequent classification models, or to initialize the hidden state of a secondary decoder. However, the requirement to compress all necessary information into a single fixed length vector can be problematic. A neural attention model BIBREF12 BIBREF13 has been recently proposed to alleviate this issue by enabling the network to attend to past outputs when decoding. Thus, the encoder no longer needs to represent an entire sequence with one vector; instead, it encodes information into a sequence of vectors, and adaptively chooses a subset of the vectors when decoding.
Predicting Relationships of Object Pairs with an Attention Model
In our cQA tasks, the pair of objects are (question, question) or (question, comment), and the relationship is relevant/irrelevant. The left side of Figure 1 shows one intuitive way to predict relationships using RNNs. Parallel LSTMs encode two objects independently, and then concatenate their outputs as an input to a feed-forward neural network (FNN) with a softmax output layer for classification.
The representations of the two objects are generated independently in this manner. However, we are more interested in the relationship instead of the object representations themselves. Therefore, we consider a serialized LSTM-encoder model in the right side of Figure 1 that is similar to that in BIBREF2 , but also allows an augmented feature input to the FNN classifier.
Figure 2 illustrates our attention framework in more detail. The first LSTM reads one object, and passes information through hidden units to the second LSTM. The second LSTM then reads the other object and generates the representation of this pair after the entire sequence is processed. We build another FNN that takes this representation as input to classify the relationship of this pair.
By adding an attention mechanism to the encoder, we allow the second LSTM to attend to the sequence of output vectors from the first LSTM, and hence generate a weighted representation of the first object according to both objects. Let $h_N$ be the last output of the second LSTM and $M = [h_1, h_2, \cdots , h_L]$ be the sequence of output vectors of the first object. The weighted representation of the first object is
$$h^{\prime } = \sum _{i=1}^{L} \alpha _i h_i$$ (Eq. 7)
The weight is computed by
$$\alpha _i = \dfrac{exp(a(h_i,h_N))}{\sum _{j=1}^{L}exp(a(h_j,h_N))}$$ (Eq. 8)
where $a()$ is the importance model that produces a higher score for $(h_i, h_N)$ if $h_i$ is useful to determine the object pair's relationship. We parametrize this model using another FNN. Note that in our framework, we also allow other augmented features (e.g., the ranking score from the IR system) to enhance the classifier. So the final input to the classifier will be $h_N$ , $h^{\prime }$ , as well as augmented features.
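The pooling in Eq. (7)–(8) and the assembly of the classifier input can be sketched as follows; the toy bilinear scorer stands in for the FNN-based importance model $a(\cdot ,\cdot )$ and is an assumption for illustration, as are all shapes and names.

```python
import numpy as np

def attend(M, h_N, score):
    """M: (L, d) outputs h_1..h_L of the first LSTM; h_N: (d,) last output of the
    second LSTM. score(h_i, h_N) plays the role of the importance model a(., .)."""
    s = np.array([score(h_i, h_N) for h_i in M])
    alpha = np.exp(s - s.max())
    alpha = alpha / alpha.sum()          # Eq. (8): softmax over positions 1..L
    h_prime = alpha @ M                  # Eq. (7): weighted sum of the h_i
    return h_prime, alpha

# Toy bilinear importance model (an assumption, not the paper's exact FNN scorer).
d = 4
W_a = np.random.randn(d, d)
score = lambda h_i, h_N: float(h_i @ W_a @ h_N)

M, h_N, aug = np.random.randn(6, d), np.random.randn(d), np.array([0.3])
h_prime, alpha = attend(M, h_N, score)
classifier_input = np.concatenate([h_N, h_prime, aug])   # fed to the FNN classifier
```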
Modeling Question-External Comments
For task C, in addition to an original question (oriQ) and an external comment (relC), the question which relC commented on is also given (relQ). To incorporate this extra information, we consider a multitask learning framework which jointly learns to predict the relationships of the three pairs (oriQ/relQ, oriQ/relC, relQ/relC).
Figure 3 shows our framework: the three lower models are separate serialized LSTM-encoders for the three respective object pairs, whereas the upper model is an FNN that takes as input the concatenation of the outputs of three encoders, and predicts the relationships for all three pairs. More specifically, the output layer consists of three softmax layers where each one is intended to predict the relationship of one particular pair.
For the overall loss function, we combine three separate loss functions using a heuristic weight vector $\beta $ that allocates a higher weight to the main task (oriQ-relC relationship prediction) as follows:
$$\mathcal {L} = \beta _1 \mathcal {L}_1 + \beta _2 \mathcal {L}_2 + \beta _3 \mathcal {L}_3$$ (Eq. 11)
By doing so, we hypothesize that the related tasks can improve the main task by leveraging commonality among all tasks.
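As a small illustration, the combined objective of Eq. (11) is just a weighted sum of the three per-pair losses; the helper below is a sketch, and the default weights mirror the 0.8/0.1/0.1 setting reported later in the experiments rather than anything prescribed by the equation itself.

```python
def combined_loss(loss_oriQ_relC, loss_oriQ_relQ, loss_relQ_relC,
                  betas=(0.8, 0.1, 0.1)):
    """Weighted combination of the three softmax losses, with the largest
    weight on the main task (oriQ-relC relevance)."""
    b1, b2, b3 = betas
    return b1 * loss_oriQ_relC + b2 * loss_oriQ_relQ + b3 * loss_relQ_relC
```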
Experiments
We evaluate our approach on all three cQA tasks, using the cQA datasets provided by the SemEval 2016 task. The data is organized as follows: there are 267 original questions, each original question has 10 related questions, and each related question has 10 comments. Therefore, for task A there are a total of 26,700 question-comment pairs, for task B there are 2,670 question-question pairs, and for task C there are 26,700 question-comment pairs. The test set includes 50 questions, 500 related questions, and 5,000 comments, none of which overlap with the training set. To evaluate performance, we use mean average precision (MAP) and F1 score.
Preliminary Results
Table 2 shows the initial results using the RNN encoder for the different tasks. We observe that the attention model always obtains better results than the RNN without attention, especially for task C. However, the RNN model achieves a very low F1 score; for task B, it is even worse than the random baseline. We believe the reason is that task B provides only 2,670 training pairs, which is very limited for training a reasonable neural network. For task C, we believe the problem is the highly imbalanced data: since the related comments did not directly comment on the original question, more than $90\%$ of the comments are labeled as irrelevant to the original question. The low F1 (with high precision and low recall) means our system tends to label most comments as irrelevant. In the following section, we investigate methods to address these issues.
Robust Parameter Initialization
One way to improve models trained on limited data is to use external data to pretrain the neural network. We therefore considered two different datasets for this task.
Cross-domain: The Stanford natural language inference (SNLI) corpus BIBREF17 has a huge amount of cleaned premise and hypothesis pairs. Unfortunately the pairs are for a different task. The relationship between the premise and hypothesis may be similar to the relation between questions and comments, but may also be different.
In-domain: since task A seems to have reasonable performance and its network is well trained, we can use it directly to initialize task B.
To utilize the data, we first trained the model on each auxiliary data (SNLI or Task A) and then removed the softmax layer. After that, we retrain the network using the target data with a softmax layer that was randomly initialized.
For task A, SNLI pretraining does not improve MAP or F1 scores; in fact, it slightly hurts performance. We surmise that this is because the domain is different. Further investigation is needed: for example, we could transfer only the embedding-layer parameters. For task B, SNLI yields a slight improvement in MAP ( $0.2\%$ ), and pretraining on task A gives a further $1.2\%$ on top of that. No improvement was observed in F1. For task C, pretraining on task A is also better than using SNLI (task A is $1\%$ better than the baseline, while SNLI is almost the same).
In summary, the in-domain pretraining seems better, but overall, the improvement is less than we expected, especially for task B, which only has very limited target data. We will not make a conclusion here since more investigation is needed.
Multitask Learning
As mentioned in Section "Modeling Question-External Comments" , we also explored a multitask learning framework that jointly learns to predict the relationships of all three tasks. We set $0.8$ for the main task (task C) and $0.1$ for the other auxiliary tasks. The MAP score did not improve, but F1 increases to $0.1617$ . We believe this is because other tasks have more balanced labels, which improves the shared parameters for task C.
Augmented data
There are many sources of external question-answer pairs that could be used in our tasks, for example WebQuestions (introduced by the authors of the SEMPRE system BIBREF18 ) and the SimpleQuestions dataset. All of them provide positive examples for our task, and we can easily create negative examples from them. Initial experiments indicate that it is very easy to overfit to these obvious negative examples. We believe this is because such negative examples are non-informative for our task and just introduce noise.
Since the external data seems to hurt performance, we instead try to use in-domain pairs to enhance task B and task C. For task B, if related question 1 (rel1) and related question 2 (rel2) are both relevant to the original question, we add a positive sample (rel1, rel2, 1). If one of rel1 and rel2 is irrelevant and the other is relevant, we add a negative sample (rel1, rel2, 0). After doing this, the number of task B samples increases from $2,670$ to $11,810$ . With this method, the MAP score increased slightly from $0.5723$ to $0.5789$ , but the F1 score improved from $0.4334$ to $0.5860$ .
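The pair-generation rule just described can be written down directly; the sketch below assumes a simple dict-of-lists representation of the gold relevance labels, which is only for illustration.

```python
from itertools import combinations

def augment_task_b(related):
    """related: dict mapping an original question id to a list of
    (related_question, is_relevant) tuples. Returns extra (q1, q2, label) samples."""
    extra = []
    for rels in related.values():
        for (q1, r1), (q2, r2) in combinations(rels, 2):
            if r1 and r2:
                extra.append((q1, q2, 1))   # both relevant -> positive pair
            elif r1 != r2:
                extra.append((q1, q2, 0))   # exactly one relevant -> negative pair
    return extra
```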
For task C, we used task A's data directly. The results are very similar with a slight improvement on MAP, but large improvement on F1 score from $0.1449$ to $0.2064$ .
Augmented features
To further enhance the system, we incorporate a one hot vector of the original IR ranking as an additional feature into the FNN classifier. Table 3 shows the results. In comparing the models with and without augmented features, we can see large improvement for task B and C. The F1 score for task A degrades slightly but MAP improves. This might be because task A already had a substantial amount of training data.
Comparison with Other Systems
Table 4 gives the final comparison between the different models (we only list the MAP score because it is the official score for the challenge). Since the two baseline models did not use any additional data, in this table our system was also restricted to the provided training data. For task A, we can see that with enough training data our single system already performs better than a very strong feature-rich system. For task B, since only limited training data is given, both the feature-rich system and our system are worse than the IR system. For task C, our system also obtains comparable results to the feature-rich system. If we perform a simple system combination (averaging the rank scores) between our system and the IR system, the combined system gives large gains on tasks B and C. This implies that our system is complementary to the IR system.
Analysis of Attention Mechanism
In addition to quantitative analysis, it is natural to qualitatively evaluate the performance of the attention mechanism by visualizing the weight distribution of each instance. We randomly picked several instances from the test set in task A, for which the sentence lengths are more moderate for demonstration. These examples are shown in Figure 5 , and categorized into short, long, and noisy sentences for discussion. A darker blue patch refers to a larger weight relative to other words in the same sentence.
Short Sentences
Figure 5 illustrates two cQA examples whose questions are relatively short. The comments corresponding to these questions are “...snorkeling two days ago off the coast of dukhan...” and “the doha international airport...”. We can observe that our model successfully learns to focus on the most representative part of the question pertaining to classifying the relationship, which is "place for snorkeling" for the first example and “place can ... visited in qatar” for the second example.
Long Sentences
In Figure 5 , we investigate two examples with longer questions, which both contain 63 words. Interestingly, the distribution of weights does not become more uniform; the model still focuses attention on a small number of hot words, for example, “puppy dog for ... mall” and “hectic driving in doha ... car insurance ... quite costly”. Additionally, some words that appear frequently but carry little information for classification are assigned very small weights, such as I/we/my, is/am, like, and to.
Noisy Sentence
Due to the open nature of cQA forums, some content is noisy. Figure 5 is an example with excessive usage of question marks. Again, our model exhibits its robustness by allocating very low weights to the noise symbols and therefore excludes the noninformative content.
Conclusion
In this paper, we demonstrate that a general RNN encoder framework can be applied to community question answering tasks. By adding a neural attention mechanism, we showed quantitatively and qualitatively that attention can improve the RNN encoder framework. To deal with a more realistic scenario, we expanded the framework to incorporate metadata as augmented inputs to a FNN classifier, and pretrained models on larger datasets, increasing both stability and performance. Our model is consistently better than or comparable to a strong feature-rich baseline system, and is superior to an IR-based system when there is a reasonable amount of training data.
Our model is complementary to an IR-based system that uses vast amounts of external resources but is trained for general purposes. By combining the two systems, we exceed both the feature-rich and the IR-based systems in all three tasks.
Moreover, our approach is also language independent. We have also performed preliminary experiments on the Arabic portion of the SemEval-2016 cQA task. The results are competitive with a hand-tuned strong baseline from SemEval-2015.
Future work could proceed in two directions: first, we can enrich the existing system by incorporating available metadata and preprocessing data with morphological normalization and out-of-vocabulary mappings; second, we can reinforce our model by carrying out word-by-word and history-aware attention mechanisms instead of attending only when reading the last word. | No |
08333e4dd1da7d6b5e9b645d40ec9d502823f5d7 | 08333e4dd1da7d6b5e9b645d40ec9d502823f5d7_0 | Q: How much performance gap between their approach and the strong handcrafted method?
Text: Introduction
Community question answering (cQA) is a paradigm that provides forums for users to ask or answer questions on any topic with barely any restrictions. In the past decade, these websites have attracted a great number of users, and have accumulated a large collection of question-comment threads generated by these users. However, the low restriction results in a high variation in answer quality, which makes it time-consuming to search for useful information from the existing content. It would therefore be valuable to automate the procedure of ranking related questions and comments for users with a new question, or when looking for solutions from comments of an existing question.
Automation of cQA forums can be divided into three tasks: question-comment relevance (Task A), question-question relevance (Task B), and question-external comment relevance (Task C). One might think that classic retrieval models like language models for information retrieval BIBREF0 could solve these tasks. However, a big challenge for cQA tasks is that users are used to expressing similar meanings with different words, which creates gaps when matching questions based on common words. Other challenges include informal usage of language, highly diverse content of comments, and variation in the length of both questions and comments.
To overcome these issues, most previous work (e.g. SemEval 2015 BIBREF1 ) relied heavily on additional features and reasoning capabilities. In BIBREF2 , a neural attention-based model was proposed for automatically recognizing entailment relations between pairs of natural language sentences. In this study, we first modify this model for all three cQA tasks. We also extend this framework into a jointly trained model when the external resources are available, i.e. selecting an external comment when we know the question that the external comment answers (Task C).
Our ultimate objective is to classify relevant questions and comments without complicated handcrafted features. By applying RNN-based encoders, we avoid heavily engineered features and learn the representation automatically. In addition, an attention mechanism augments encoders with the ability to attend to past outputs directly. This becomes helpful when encoding longer sequences, since we no longer need to compress all information into a fixed-length vector representation.
In our view, existing annotated cQA corpora are generally too small to properly train an end-to-end neural network. To address this, we investigate transfer learning by pretraining the recurrent systems on other corpora, and also generating additional instances from existing cQA corpus.
Related Work
Earlier work on community question answering relied heavily on feature engineering, linguistic tools, and external resources. BIBREF3 and BIBREF4 utilized rich non-textual features such as the answerer's profile. BIBREF5 syntactically analyzed the question and extracted named entity features. BIBREF6 demonstrated that a textual entailment system can enhance the cQA task by casting question answering as logical entailment.
More recent work incorporated word vectors into the feature extraction system and, based on them, designed different distance metrics between question and answer BIBREF7 BIBREF8 . While these approaches showed effectiveness, it is difficult to generalize them to common cQA tasks, since linguistic tools and external resources may be restrictive in other languages and the features are highly customized for each cQA task.
Very recent work on answer selection also involved the use of neural networks. BIBREF9 used an LSTM to construct a joint vector based on both the question and the answer and then cast the problem as learning to rank. BIBREF10 proposed several convolutional neural network (CNN) architectures for cQA. Our method differs in that an RNN encoder is applied here, and by adding an attention mechanism we jointly learn which words in the question to focus on, which also makes qualitative analysis possible. During classification, we feed the extracted vector into a feed-forward neural network directly instead of applying mean/max pooling over the time steps.
Method
In this section, we first discuss long short-term memory (LSTM) units and an associated attention mechanism. Next, we explain how we can encode a pair of sentences into a dense vector for predicting relationships using an LSTM with an attention mechanism. Finally, we apply these models to predict question-question similarity, question-comment similarity, and question-external comment similarity.
LSTM Models
LSTMs have shown great success in many different fields. An LSTM unit contains a memory cell with self-connections, as well as three multiplicative gates to control information flow. Given input vector $x_t$ , previous hidden outputs $h_{t-1}$ , and previous cell state $c_{t-1}$ , LSTM units operate as follows:
$$X &= \begin{bmatrix} x_t\\[0.3em] h_{t-1}\\[0.3em] \end{bmatrix}\\ i_t &= \sigma (\mathbf {W_{iX}}X + \mathbf {W_{ic}}c_{t-1} + \mathbf {b_i})\\ f_t &= \sigma (\mathbf {W_{fX}}X + \mathbf {W_{fc}}c_{t-1} + \mathbf {b_f})\\ o_t &= \sigma (\mathbf {W_{oX}}X + \mathbf {W_{oc}}c_{t-1} + \mathbf {b_o})\\ c_t &= f_t \odot c_{t-1} + i_t \odot tanh(\mathbf {W_{cX}}X + \mathbf {b_c})\\ h_t &= o_t \odot tanh(c_t)$$ (Eq. 3)
where $i_t$ , $f_t$ , $o_t$ are the input, forget, and output gates, respectively. The sigmoid function $\sigma ()$ is a soft gating function controlling the amount of information flow. The weight matrices $W$ and bias vectors $b$ are model parameters to be learned.
Neural Attention
A traditional RNN encoder-decoder approach BIBREF11 first encodes an arbitrary length input sequence into a fixed-length dense vector that can be used as input to subsequent classification models, or to initialize the hidden state of a secondary decoder. However, the requirement to compress all necessary information into a single fixed length vector can be problematic. A neural attention model BIBREF12 BIBREF13 has been recently proposed to alleviate this issue by enabling the network to attend to past outputs when decoding. Thus, the encoder no longer needs to represent an entire sequence with one vector; instead, it encodes information into a sequence of vectors, and adaptively chooses a subset of the vectors when decoding.
Predicting Relationships of Object Pairs with an Attention Model
In our cQA tasks, the pair of objects are (question, question) or (question, comment), and the relationship is relevant/irrelevant. The left side of Figure 1 shows one intuitive way to predict relationships using RNNs. Parallel LSTMs encode two objects independently, and then concatenate their outputs as an input to a feed-forward neural network (FNN) with a softmax output layer for classification.
The representations of the two objects are generated independently in this manner. However, we are more interested in the relationship instead of the object representations themselves. Therefore, we consider a serialized LSTM-encoder model in the right side of Figure 1 that is similar to that in BIBREF2 , but also allows an augmented feature input to the FNN classifier.
Figure 2 illustrates our attention framework in more detail. The first LSTM reads one object, and passes information through hidden units to the second LSTM. The second LSTM then reads the other object and generates the representation of this pair after the entire sequence is processed. We build another FNN that takes this representation as input to classify the relationship of this pair.
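The serialized encoding can be sketched as two unrolled passes that share the running state; `lstm_step_a` and `lstm_step_b` stand for the two LSTMs' step functions and are assumed helpers, not code from the paper.

```python
import numpy as np

def encode_pair(obj_a, obj_b, lstm_step_a, lstm_step_b, hidden_size):
    """obj_a, obj_b: lists of word vectors for the two objects.
    Returns M, the outputs of the first LSTM (kept for attention),
    and h_N, the final output of the second LSTM (the pair representation)."""
    h = np.zeros(hidden_size)
    c = np.zeros(hidden_size)
    M = []
    for x in obj_a:                 # first LSTM reads object A
        h, c = lstm_step_a(x, h, c)
        M.append(h)
    for x in obj_b:                 # second LSTM continues from A's final state
        h, c = lstm_step_b(x, h, c)
    return np.stack(M), h
```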
By adding an attention mechanism to the encoder, we allow the second LSTM to attend to the sequence of output vectors from the first LSTM, and hence generate a weighted representation of the first object according to both objects. Let $h_N$ be the last output of the second LSTM and $M = [h_1, h_2, \cdots , h_L]$ be the sequence of output vectors of the first object. The weighted representation of the first object is
$$h^{\prime } = \sum _{i=1}^{L} \alpha _i h_i$$ (Eq. 7)
The weight is computed by
$$\alpha _i = \dfrac{exp(a(h_i,h_N))}{\sum _{j=1}^{L}exp(a(h_j,h_N))}$$ (Eq. 8)
where $a()$ is the importance model that produces a higher score for $(h_i, h_N)$ if $h_i$ is useful to determine the object pair's relationship. We parametrize this model using another FNN. Note that in our framework, we also allow other augmented features (e.g., the ranking score from the IR system) to enhance the classifier. So the final input to the classifier will be $h_N$ , $h^{\prime }$ , as well as augmented features.
Modeling Question-External Comments
For task C, in addition to an original question (oriQ) and an external comment (relC), the question which relC commented on is also given (relQ). To incorporate this extra information, we consider a multitask learning framework which jointly learns to predict the relationships of the three pairs (oriQ/relQ, oriQ/relC, relQ/relC).
Figure 3 shows our framework: the three lower models are separate serialized LSTM-encoders for the three respective object pairs, whereas the upper model is an FNN that takes as input the concatenation of the outputs of three encoders, and predicts the relationships for all three pairs. More specifically, the output layer consists of three softmax layers where each one is intended to predict the relationship of one particular pair.
For the overall loss function, we combine three separate loss functions using a heuristic weight vector $\beta $ that allocates a higher weight to the main task (oriQ-relC relationship prediction) as follows:
$$\mathcal {L} = \beta _1 \mathcal {L}_1 + \beta _2 \mathcal {L}_2 + \beta _3 \mathcal {L}_3$$ (Eq. 11)
By doing so, we hypothesize that the related tasks can improve the main task by leveraging commonality among all tasks.
Experiments
We evaluate our approach on all three cQA tasks, using the cQA datasets provided by the SemEval 2016 task. The data is organized as follows: there are 267 original questions, each original question has 10 related questions, and each related question has 10 comments. Therefore, for task A there are a total of 26,700 question-comment pairs, for task B there are 2,670 question-question pairs, and for task C there are 26,700 question-comment pairs. The test set includes 50 questions, 500 related questions, and 5,000 comments, none of which overlap with the training set. To evaluate performance, we use mean average precision (MAP) and F1 score.
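Since MAP is the official ranking metric, a small reference implementation is useful to fix conventions; this sketch assumes every relevant candidate of a query appears in its ranked list (true here, because all candidates are ranked) and treats a query with no relevant candidate as contributing zero.

```python
def average_precision(ranked_relevance):
    """ranked_relevance: 0/1 labels of the candidates in ranked order for one query."""
    hits, precisions = 0, []
    for rank, rel in enumerate(ranked_relevance, start=1):
        if rel:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / hits if hits else 0.0

def mean_average_precision(queries):
    """queries: list of ranked 0/1 relevance lists, one per original question."""
    return sum(average_precision(q) for q in queries) / len(queries)
```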
Preliminary Results
Table 2 shows the initial results using the RNN encoder for the different tasks. We observe that the attention model always obtains better results than the RNN without attention, especially for task C. However, the RNN model achieves a very low F1 score; for task B, it is even worse than the random baseline. We believe the reason is that task B provides only 2,670 training pairs, which is very limited for training a reasonable neural network. For task C, we believe the problem is the highly imbalanced data: since the related comments did not directly comment on the original question, more than $90\%$ of the comments are labeled as irrelevant to the original question. The low F1 (with high precision and low recall) means our system tends to label most comments as irrelevant. In the following section, we investigate methods to address these issues.
Robust Parameter Initialization
One way to improve models trained on limited data is to use external data to pretrain the neural network. We therefore considered two different datasets for this task.
Cross-domain: The Stanford natural language inference (SNLI) corpus BIBREF17 has a huge amount of cleaned premise and hypothesis pairs. Unfortunately the pairs are for a different task. The relationship between the premise and hypothesis may be similar to the relation between questions and comments, but may also be different.
In-domain: since task A seems to have reasonable performance and its network is well trained, we can use it directly to initialize task B.
To utilize the data, we first trained the model on each auxiliary data (SNLI or Task A) and then removed the softmax layer. After that, we retrain the network using the target data with a softmax layer that was randomly initialized.
For task A, SNLI pretraining does not improve MAP or F1 scores; in fact, it slightly hurts performance. We surmise that this is because the domain is different. Further investigation is needed: for example, we could transfer only the embedding-layer parameters. For task B, SNLI yields a slight improvement in MAP ( $0.2\%$ ), and pretraining on task A gives a further $1.2\%$ on top of that. No improvement was observed in F1. For task C, pretraining on task A is also better than using SNLI (task A is $1\%$ better than the baseline, while SNLI is almost the same).
In summary, the in-domain pretraining seems better, but overall, the improvement is less than we expected, especially for task B, which only has very limited target data. We will not make a conclusion here since more investigation is needed.
Multitask Learning
As mentioned in Section "Modeling Question-External Comments" , we also explored a multitask learning framework that jointly learns to predict the relationships of all three tasks. We set $0.8$ for the main task (task C) and $0.1$ for the other auxiliary tasks. The MAP score did not improve, but F1 increases to $0.1617$ . We believe this is because other tasks have more balanced labels, which improves the shared parameters for task C.
Augmented data
There are many sources of external question-answer pairs that could be used in our tasks, for example WebQuestions (introduced by the authors of the SEMPRE system BIBREF18 ) and the SimpleQuestions dataset. All of them provide positive examples for our task, and we can easily create negative examples from them. Initial experiments indicate that it is very easy to overfit to these obvious negative examples. We believe this is because such negative examples are non-informative for our task and just introduce noise.
Since the external data seems to hurt performance, we instead try to use in-domain pairs to enhance task B and task C. For task B, if related question 1 (rel1) and related question 2 (rel2) are both relevant to the original question, we add a positive sample (rel1, rel2, 1). If one of rel1 and rel2 is irrelevant and the other is relevant, we add a negative sample (rel1, rel2, 0). After doing this, the number of task B samples increases from $2,670$ to $11,810$ . With this method, the MAP score increased slightly from $0.5723$ to $0.5789$ , but the F1 score improved from $0.4334$ to $0.5860$ .
For task C, we used task A's data directly. The results are very similar with a slight improvement on MAP, but large improvement on F1 score from $0.1449$ to $0.2064$ .
Augmented features
To further enhance the system, we incorporate a one hot vector of the original IR ranking as an additional feature into the FNN classifier. Table 3 shows the results. In comparing the models with and without augmented features, we can see large improvement for task B and C. The F1 score for task A degrades slightly but MAP improves. This might be because task A already had a substantial amount of training data.
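The augmented feature itself is only a one-hot encoding of the candidate's position in the original IR ranking; a minimal sketch is shown below, with the depth of 10 matching the number of candidates per question (the concatenation comment is illustrative).

```python
import numpy as np

def ir_rank_one_hot(rank, num_candidates=10):
    """rank: 1-based position of the candidate in the original IR ranking."""
    v = np.zeros(num_candidates)
    v[rank - 1] = 1.0
    return v

# Appended to the encoder outputs before the FNN classifier, e.g.:
# classifier_input = np.concatenate([h_N, h_prime, ir_rank_one_hot(rank)])
```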
Comparison with Other Systems
Table 4 gives the final comparison between the different models (we only list the MAP score because it is the official score for the challenge). Since the two baseline models did not use any additional data, in this table our system was also restricted to the provided training data. For task A, we can see that with enough training data our single system already performs better than a very strong feature-rich system. For task B, since only limited training data is given, both the feature-rich system and our system are worse than the IR system. For task C, our system also obtains comparable results to the feature-rich system. If we perform a simple system combination (averaging the rank scores) between our system and the IR system, the combined system gives large gains on tasks B and C. This implies that our system is complementary to the IR system.
Analysis of Attention Mechanism
In addition to quantitative analysis, it is natural to qualitatively evaluate the performance of the attention mechanism by visualizing the weight distribution of each instance. We randomly picked several instances from the test set in task A, for which the sentence lengths are more moderate for demonstration. These examples are shown in Figure 5 , and categorized into short, long, and noisy sentences for discussion. A darker blue patch refers to a larger weight relative to other words in the same sentence.
Short Sentences
Figure 5 illustrates two cQA examples whose questions are relatively short. The comments corresponding to these questions are “...snorkeling two days ago off the coast of dukhan...” and “the doha international airport...”. We can observe that our model successfully learns to focus on the most representative part of the question pertaining to classifying the relationship, which is "place for snorkeling" for the first example and “place can ... visited in qatar” for the second example.
Long Sentences
In Figure 5 , we investigate two examples with longer questions, which both contain 63 words. Interestingly, the distribution of weights does not become more uniform; the model still focuses attention on a small number of hot words, for example, “puppy dog for ... mall” and “hectic driving in doha ... car insurance ... quite costly”. Additionally, some words that appear frequently but carry little information for classification are assigned very small weights, such as I/we/my, is/am, like, and to.
Noisy Sentence
Due to the open nature of cQA forums, some content is noisy. Figure 5 is an example with excessive usage of question marks. Again, our model exhibits its robustness by allocating very low weights to the noise symbols and therefore excludes the noninformative content.
Conclusion
In this paper, we demonstrate that a general RNN encoder framework can be applied to community question answering tasks. By adding a neural attention mechanism, we showed quantitatively and qualitatively that attention can improve the RNN encoder framework. To deal with a more realistic scenario, we expanded the framework to incorporate metadata as augmented inputs to a FNN classifier, and pretrained models on larger datasets, increasing both stability and performance. Our model is consistently better than or comparable to a strong feature-rich baseline system, and is superior to an IR-based system when there is a reasonable amount of training data.
Our model is complementary to an IR-based system that uses vast amounts of external resources but is trained for general purposes. By combining the two systems, we exceed both the feature-rich and the IR-based systems in all three tasks.
Moreover, our approach is also language independent. We have also performed preliminary experiments on the Arabic portion of the SemEval-2016 cQA task. The results are competitive with a hand-tuned strong baseline from SemEval-2015.
Future work could proceed in two directions: first, we can enrich the existing system by incorporating available metadata and preprocessing data with morphological normalization and out-of-vocabulary mappings; second, we can reinforce our model by carrying out word-by-word and history-aware attention mechanisms instead of attending only when reading the last word. | 0.007 MAP on Task A, 0.032 MAP on Task B, 0.055 MAP on Task C |
bc1bc92920a757d5ec38007a27d0f49cb2dde0d1 | bc1bc92920a757d5ec38007a27d0f49cb2dde0d1_0 | Q: What is a strong feature-based method?
Text: Introduction
Community question answering (cQA) is a paradigm that provides forums for users to ask or answer questions on any topic with barely any restrictions. In the past decade, these websites have attracted a great number of users, and have accumulated a large collection of question-comment threads generated by these users. However, the low restriction results in a high variation in answer quality, which makes it time-consuming to search for useful information from the existing content. It would therefore be valuable to automate the procedure of ranking related questions and comments for users with a new question, or when looking for solutions from comments of an existing question.
Automation of cQA forums can be divided into three tasks: question-comment relevance (Task A), question-question relevance (Task B), and question-external comment relevance (Task C). One might think that classic retrieval models like language models for information retrieval BIBREF0 could solve these tasks. However, a big challenge for cQA tasks is that users are used to expressing similar meanings with different words, which creates gaps when matching questions based on common words. Other challenges include informal usage of language, highly diverse content of comments, and variation in the length of both questions and comments.
To overcome these issues, most previous work (e.g. SemEval 2015 BIBREF1 ) relied heavily on additional features and reasoning capabilities. In BIBREF2 , a neural attention-based model was proposed for automatically recognizing entailment relations between pairs of natural language sentences. In this study, we first modify this model for all three cQA tasks. We also extend this framework into a jointly trained model when the external resources are available, i.e. selecting an external comment when we know the question that the external comment answers (Task C).
Our ultimate objective is to classify relevant questions and comments without complicated handcrafted features. By applying RNN-based encoders, we avoid heavily engineered features and learn the representation automatically. In addition, an attention mechanism augments encoders with the ability to attend to past outputs directly. This becomes helpful when encoding longer sequences, since we no longer need to compress all information into a fixed-length vector representation.
In our view, existing annotated cQA corpora are generally too small to properly train an end-to-end neural network. To address this, we investigate transfer learning by pretraining the recurrent systems on other corpora, and also generating additional instances from existing cQA corpus.
Related Work
Earlier work on community question answering relied heavily on feature engineering, linguistic tools, and external resources. BIBREF3 and BIBREF4 utilized rich non-textual features such as the answerer's profile. BIBREF5 syntactically analyzed the question and extracted named entity features. BIBREF6 demonstrated that a textual entailment system can enhance the cQA task by casting question answering as logical entailment.
More recent work incorporated word vectors into the feature extraction system and, based on them, designed different distance metrics between question and answer BIBREF7 BIBREF8 . While these approaches showed effectiveness, it is difficult to generalize them to common cQA tasks, since linguistic tools and external resources may be restrictive in other languages and the features are highly customized for each cQA task.
Very recent work on answer selection also involved the use of neural networks. BIBREF9 used an LSTM to construct a joint vector based on both the question and the answer and then cast the problem as learning to rank. BIBREF10 proposed several convolutional neural network (CNN) architectures for cQA. Our method differs in that an RNN encoder is applied here, and by adding an attention mechanism we jointly learn which words in the question to focus on, which also makes qualitative analysis possible. During classification, we feed the extracted vector into a feed-forward neural network directly instead of applying mean/max pooling over the time steps.
Method
In this section, we first discuss long short-term memory (LSTM) units and an associated attention mechanism. Next, we explain how we can encode a pair of sentences into a dense vector for predicting relationships using an LSTM with an attention mechanism. Finally, we apply these models to predict question-question similarity, question-comment similarity, and question-external comment similarity.
LSTM Models
LSTMs have shown great success in many different fields. An LSTM unit contains a memory cell with self-connections, as well as three multiplicative gates to control information flow. Given input vector $x_t$ , previous hidden outputs $h_{t-1}$ , and previous cell state $c_{t-1}$ , LSTM units operate as follows:
$$X &= \begin{bmatrix} x_t\\[0.3em] h_{t-1}\\[0.3em] \end{bmatrix}\\ i_t &= \sigma (\mathbf {W_{iX}}X + \mathbf {W_{ic}}c_{t-1} + \mathbf {b_i})\\ f_t &= \sigma (\mathbf {W_{fX}}X + \mathbf {W_{fc}}c_{t-1} + \mathbf {b_f})\\ o_t &= \sigma (\mathbf {W_{oX}}X + \mathbf {W_{oc}}c_{t-1} + \mathbf {b_o})\\ c_t &= f_t \odot c_{t-1} + i_t \odot tanh(\mathbf {W_{cX}}X + \mathbf {b_c})\\ h_t &= o_t \odot tanh(c_t)$$ (Eq. 3)
where $i_t$ , $f_t$ , $o_t$ are the input, forget, and output gates, respectively. The sigmoid function $\sigma ()$ is a soft gating function controlling the amount of information flow. The weight matrices $W$ and bias vectors $b$ are model parameters to be learned.
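To show how such a cell is used in context, the sketch below unrolls an LSTM over a tokenized sentence, producing the sequence of hidden outputs that the attention mechanism later consumes; the embedding matrix and the `lstm_step` helper (implementing the gate equations above) are assumptions for illustration only.

```python
import numpy as np

def encode_sequence(token_ids, embeddings, lstm_step, hidden_size):
    """token_ids: list of vocabulary indices; embeddings: (V, m) matrix.
    Returns all hidden outputs as an (L, d) array and the final (h, c) state."""
    h = np.zeros(hidden_size)
    c = np.zeros(hidden_size)
    outputs = []
    for idx in token_ids:
        x_t = embeddings[idx]             # word vector lookup
        h, c = lstm_step(x_t, h, c)       # gates as in Eq. (3)
        outputs.append(h)
    return np.stack(outputs), (h, c)
```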
Neural Attention
A traditional RNN encoder-decoder approach BIBREF11 first encodes an arbitrary length input sequence into a fixed-length dense vector that can be used as input to subsequent classification models, or to initialize the hidden state of a secondary decoder. However, the requirement to compress all necessary information into a single fixed length vector can be problematic. A neural attention model BIBREF12 BIBREF13 has been recently proposed to alleviate this issue by enabling the network to attend to past outputs when decoding. Thus, the encoder no longer needs to represent an entire sequence with one vector; instead, it encodes information into a sequence of vectors, and adaptively chooses a subset of the vectors when decoding.
Predicting Relationships of Object Pairs with an Attention Model
In our cQA tasks, the pair of objects are (question, question) or (question, comment), and the relationship is relevant/irrelevant. The left side of Figure 1 shows one intuitive way to predict relationships using RNNs. Parallel LSTMs encode two objects independently, and then concatenate their outputs as an input to a feed-forward neural network (FNN) with a softmax output layer for classification.
The representations of the two objects are generated independently in this manner. However, we are more interested in the relationship instead of the object representations themselves. Therefore, we consider a serialized LSTM-encoder model in the right side of Figure 1 that is similar to that in BIBREF2 , but also allows an augmented feature input to the FNN classifier.
Figure 2 illustrates our attention framework in more detail. The first LSTM reads one object, and passes information through hidden units to the second LSTM. The second LSTM then reads the other object and generates the representation of this pair after the entire sequence is processed. We build another FNN that takes this representation as input to classify the relationship of this pair.
By adding an attention mechanism to the encoder, we allow the second LSTM to attend to the sequence of output vectors from the first LSTM, and hence generate a weighted representation of the first object according to both objects. Let $h_N$ be the last output of the second LSTM and $M = [h_1, h_2, \cdots , h_L]$ be the sequence of output vectors of the first object. The weighted representation of the first object is
$$h^{\prime } = \sum _{i=1}^{L} \alpha _i h_i$$ (Eq. 7)
The weight is computed by
$$\alpha _i = \dfrac{exp(a(h_i,h_N))}{\sum _{j=1}^{L}exp(a(h_j,h_N))}$$ (Eq. 8)
where $a()$ is the importance model that produces a higher score for $(h_i, h_N)$ if $h_i$ is useful to determine the object pair's relationship. We parametrize this model using another FNN. Note that in our framework, we also allow other augmented features (e.g., the ranking score from the IR system) to enhance the classifier. So the final input to the classifier will be $h_N$ , $h^{\prime }$ , as well as augmented features.
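The classifier on top is a plain feed-forward network over the concatenation of $h_N$ , $h^{\prime }$ , and any augmented features; a one-hidden-layer sketch with a softmax output is given below, with sizes and the tanh nonlinearity chosen arbitrarily for illustration.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fnn_classify(h_N, h_prime, aug, W1, b1, W2, b2):
    """Relevance classifier over [h_N ; h' ; augmented features]."""
    x = np.concatenate([h_N, h_prime, aug])
    hidden = np.tanh(W1 @ x + b1)          # single hidden layer
    return softmax(W2 @ hidden + b2)       # probabilities of relevant / irrelevant
```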
Modeling Question-External Comments
For task C, in addition to an original question (oriQ) and an external comment (relC), the question which relC commented on is also given (relQ). To incorporate this extra information, we consider a multitask learning framework which jointly learns to predict the relationships of the three pairs (oriQ/relQ, oriQ/relC, relQ/relC).
Figure 3 shows our framework: the three lower models are separate serialized LSTM-encoders for the three respective object pairs, whereas the upper model is an FNN that takes as input the concatenation of the outputs of three encoders, and predicts the relationships for all three pairs. More specifically, the output layer consists of three softmax layers where each one is intended to predict the relationship of one particular pair.
For the overall loss function, we combine three separate loss functions using a heuristic weight vector $\beta $ that allocates a higher weight to the main task (oriQ-relC relationship prediction) as follows:
$$\mathcal {L} = \beta _1 \mathcal {L}_1 + \beta _2 \mathcal {L}_2 + \beta _3 \mathcal {L}_3$$ (Eq. 11)
By doing so, we hypothesize that the related tasks can improve the main task by leveraging commonality among all tasks.
Experiments
We evaluate our approach on all three cQA tasks, using the cQA datasets provided by the SemEval 2016 task. The data is organized as follows: there are 267 original questions, each original question has 10 related questions, and each related question has 10 comments. Therefore, for task A there are a total of 26,700 question-comment pairs, for task B there are 2,670 question-question pairs, and for task C there are 26,700 question-comment pairs. The test set includes 50 questions, 500 related questions, and 5,000 comments, none of which overlap with the training set. To evaluate performance, we use mean average precision (MAP) and F1 score.
Preliminary Results
Table 2 shows the initial results using the RNN encoder for the different tasks. We observe that the attention model always obtains better results than the RNN without attention, especially for task C. However, the RNN model achieves a very low F1 score; for task B, it is even worse than the random baseline. We believe the reason is that task B provides only 2,670 training pairs, which is very limited for training a reasonable neural network. For task C, we believe the problem is the highly imbalanced data: since the related comments did not directly comment on the original question, more than $90\%$ of the comments are labeled as irrelevant to the original question. The low F1 (with high precision and low recall) means our system tends to label most comments as irrelevant. In the following section, we investigate methods to address these issues.
Robust Parameter Initialization
One way to improve models trained on limited data is to use external data to pretrain the neural network. We therefore considered two different datasets for this task.
Cross-domain: The Stanford natural language inference (SNLI) corpus BIBREF17 has a huge amount of cleaned premise and hypothesis pairs. Unfortunately the pairs are for a different task. The relationship between the premise and hypothesis may be similar to the relation between questions and comments, but may also be different.
In-domain: since task A seems to have reasonable performance and its network is well trained, we can use it directly to initialize task B.
To utilize the data, we first trained the model on each auxiliary data (SNLI or Task A) and then removed the softmax layer. After that, we retrain the network using the target data with a softmax layer that was randomly initialized.
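This pretrain-then-retrain recipe amounts to copying every learned parameter except the output layer; the sketch below illustrates the idea with a flat parameter dictionary, which is an assumed representation rather than the paper's actual code.

```python
import numpy as np

def transfer_parameters(pretrained, hidden_size, num_classes, rng=np.random):
    """pretrained: dict of layer name -> weights learned on the auxiliary task
    (SNLI or task A). Every layer is kept except the softmax layer, which is
    re-initialized before retraining on the target task's data."""
    params = dict(pretrained)
    params["softmax_W"] = 0.01 * rng.randn(num_classes, hidden_size)
    params["softmax_b"] = np.zeros(num_classes)
    return params
```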
For task A, SNLI pretraining does not improve MAP or F1 scores; in fact, it slightly hurts performance. We surmise that this is because the domain is different. Further investigation is needed: for example, we could transfer only the embedding-layer parameters. For task B, SNLI yields a slight improvement in MAP ( $0.2\%$ ), and pretraining on task A gives a further $1.2\%$ on top of that. No improvement was observed in F1. For task C, pretraining on task A is also better than using SNLI (task A is $1\%$ better than the baseline, while SNLI is almost the same).
In summary, the in-domain pretraining seems better, but overall, the improvement is less than we expected, especially for task B, which only has very limited target data. We will not make a conclusion here since more investigation is needed.
Multitask Learning
As mentioned in Section "Modeling Question-External Comments" , we also explored a multitask learning framework that jointly learns to predict the relationships of all three tasks. We set $0.8$ for the main task (task C) and $0.1$ for the other auxiliary tasks. The MAP score did not improve, but F1 increases to $0.1617$ . We believe this is because other tasks have more balanced labels, which improves the shared parameters for task C.
Augmented data
There are many sources of external question-answer pairs that could be used in our tasks, for example WebQuestions (introduced by the authors of the SEMPRE system BIBREF18 ) and the SimpleQuestions dataset. All of them provide positive examples for our task, and we can easily create negative examples from them. Initial experiments indicate that it is very easy to overfit to these obvious negative examples. We believe this is because such negative examples are non-informative for our task and just introduce noise.
Since the external data seems to hurt performance, we instead try to use in-domain pairs to enhance task B and task C. For task B, if related question 1 (rel1) and related question 2 (rel2) are both relevant to the original question, we add a positive sample (rel1, rel2, 1). If one of rel1 and rel2 is irrelevant and the other is relevant, we add a negative sample (rel1, rel2, 0). After doing this, the number of task B samples increases from $2,670$ to $11,810$ . With this method, the MAP score increased slightly from $0.5723$ to $0.5789$ , but the F1 score improved from $0.4334$ to $0.5860$ .
For task C, we used task A's data directly. The results are very similar with a slight improvement on MAP, but large improvement on F1 score from $0.1449$ to $0.2064$ .
Augmented features
To further enhance the system, we incorporate a one hot vector of the original IR ranking as an additional feature into the FNN classifier. Table 3 shows the results. In comparing the models with and without augmented features, we can see large improvement for task B and C. The F1 score for task A degrades slightly but MAP improves. This might be because task A already had a substantial amount of training data.
Comparison with Other Systems
Table 4 gives the final comparison between the different models (we only list the MAP score because it is the official score for the challenge). Since the two baseline models did not use any additional data, in this table our system was also restricted to the provided training data. For task A, we can see that with enough training data our single system already performs better than a very strong feature-rich system. For task B, since only limited training data is given, both the feature-rich system and our system are worse than the IR system. For task C, our system also obtains comparable results to the feature-rich system. If we perform a simple system combination (averaging the rank scores) between our system and the IR system, the combined system gives large gains on tasks B and C. This implies that our system is complementary to the IR system.
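The simple combination mentioned above can be sketched as per-candidate averaging of the two systems' rank scores followed by re-sorting; the equal default weight corresponds to a plain average, and the dict-based interface is an assumption for illustration.

```python
def combine_rankings(neural_scores, ir_scores, weight=0.5):
    """neural_scores, ir_scores: dicts mapping candidate id -> rank score
    (higher means more relevant). Returns candidate ids re-ranked by the
    weighted average of the two scores."""
    combined = {
        cid: weight * neural_scores[cid] + (1.0 - weight) * ir_scores[cid]
        for cid in neural_scores
    }
    return sorted(combined, key=combined.get, reverse=True)
```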
Analysis of Attention Mechanism
In addition to quantitative analysis, it is natural to qualitatively evaluate the performance of the attention mechanism by visualizing the weight distribution of each instance. We randomly picked several instances from the test set in task A, for which the sentence lengths are more moderate for demonstration. These examples are shown in Figure 5 , and categorized into short, long, and noisy sentences for discussion. A darker blue patch refers to a larger weight relative to other words in the same sentence.
Short Sentences
Figure 5 illustrates two cQA examples whose questions are relatively short. The comments corresponding to these questions are “...snorkeling two days ago off the coast of dukhan...” and “the doha international airport...”. We can observe that our model successfully learns to focus on the most representative part of the question pertaining to classifying the relationship, which is "place for snorkeling" for the first example and “place can ... visited in qatar” for the second example.
Long Sentences
In Figure 5 , we investigate two examples with longer questions, which both contain 63 words. Interestingly, the distribution of weights does not become more uniform; the model still focuses attention on a small number of hot words, for example, “puppy dog for ... mall” and “hectic driving in doha ... car insurance ... quite costly”. Additionally, some words that appear frequently but carry little information for classification are assigned very small weights, such as I/we/my, is/am, like, and to.
Noisy Sentence
Due to the open nature of cQA forums, some content is noisy. Figure 5 is an example with excessive usage of question marks. Again, our model exhibits its robustness by allocating very low weights to the noise symbols and therefore excludes the noninformative content.
Conclusion
In this paper, we demonstrate that a general RNN encoder framework can be applied to community question answering tasks. By adding a neural attention mechanism, we showed quantitatively and qualitatively that attention can improve the RNN encoder framework. To deal with a more realistic scenario, we expanded the framework to incorporate metadata as augmented inputs to a FNN classifier, and pretrained models on larger datasets, increasing both stability and performance. Our model is consistently better than or comparable to a strong feature-rich baseline system, and is superior to an IR-based system when there is a reasonable amount of training data.
Our model is complementary to an IR-based system that uses vast amounts of external resources but is trained for general purposes. By combining the two systems, we exceed both the feature-rich and the IR-based systems in all three tasks.
Moreover, our approach is also language independent. We have also performed preliminary experiments on the Arabic portion of the SemEval-2016 cQA task. The results are competitive with a hand-tuned strong baseline from SemEval-2015.
Future work could proceed in two directions: first, we can enrich the existing system by incorporating available metadata and preprocessing data with morphological normalization and out-of-vocabulary mappings; second, we can reinforce our model by carrying out word-by-word and history-aware attention mechanisms instead of attending only when reading the last word. | Unanswerable |
942eb1f7b243cdcfd47f176bcc71de2ef48a17c4 | 942eb1f7b243cdcfd47f176bcc71de2ef48a17c4_0 | Q: Did they experiment in other languages?
Text: Introduction
Community question answering (cQA) is a paradigm that provides forums for users to ask or answer questions on any topic with barely any restrictions. In the past decade, these websites have attracted a great number of users, and have accumulated a large collection of question-comment threads generated by these users. However, the low restriction results in a high variation in answer quality, which makes it time-consuming to search for useful information from the existing content. It would therefore be valuable to automate the procedure of ranking related questions and comments for users with a new question, or when looking for solutions from comments of an existing question.
Automation of cQA forums can be divided into three tasks: question-comment relevance (Task A), question-question relevance (Task B), and question-external comment relevance (Task C). One might think that classic retrieval models like language models for information retrieval BIBREF0 could solve these tasks. However, a big challenge for cQA tasks is that users are used to expressing similar meanings with different words, which creates gaps when matching questions based on common words. Other challenges include informal usage of language, highly diverse content of comments, and variation in the length of both questions and comments.
To overcome these issues, most previous work (e.g. SemEval 2015 BIBREF1 ) relied heavily on additional features and reasoning capabilities. In BIBREF2 , a neural attention-based model was proposed for automatically recognizing entailment relations between pairs of natural language sentences. In this study, we first modify this model for all three cQA tasks. We also extend this framework into a jointly trained model when the external resources are available, i.e. selecting an external comment when we know the question that the external comment answers (Task C).
Our ultimate objective is to classify relevant questions and comments without complicated handcrafted features. By applying RNN-based encoders, we avoid heavily engineered features and learn the representation automatically. In addition, an attention mechanism augments encoders with the ability to attend to past outputs directly. This becomes helpful when encoding longer sequences, since we no longer need to compress all information into a fixed-length vector representation.
In our view, existing annotated cQA corpora are generally too small to properly train an end-to-end neural network. To address this, we investigate transfer learning by pretraining the recurrent systems on other corpora, and also generating additional instances from existing cQA corpus.
Related Work
Earlier work on community question answering relied heavily on feature engineering, linguistic tools, and external resources. BIBREF3 and BIBREF4 utilized rich non-textual features such as the answerer's profile. BIBREF5 syntactically analyzed the question and extracted named entity features. BIBREF6 demonstrated that a textual entailment system can enhance the cQA task by casting question answering as logical entailment.
More recent work incorporated word vectors into the feature extraction system and, based on them, designed different distance metrics between question and answer BIBREF7 BIBREF8 . While these approaches showed effectiveness, it is difficult to generalize them to common cQA tasks, since linguistic tools and external resources may be restricted in other languages and the features are highly customized for each cQA task.
Very recent work on answer selection also involved the use of neural networks. BIBREF9 used an LSTM to construct a joint vector based on both the question and the answer and then treated the task as a learning-to-rank problem. BIBREF10 proposed several convolutional neural network (CNN) architectures for cQA. Our method differs in that an RNN encoder is applied here, and by adding an attention mechanism we jointly learn which words in the question to focus on, which also makes qualitative analysis possible. During classification, we feed the extracted vector into a feed-forward neural network directly instead of using mean/max pooling over the individual time steps.
Method
In this section, we first discuss long short-term memory (LSTM) units and an associated attention mechanism. Next, we explain how we can encode a pair of sentences into a dense vector for predicting relationships using an LSTM with an attention mechanism. Finally, we apply these models to predict question-question similarity, question-comment similarity, and question-external comment similarity.
LSTM Models
LSTMs have shown great success in many different fields. An LSTM unit contains a memory cell with self-connections, as well as three multiplicative gates to control information flow. Given input vector $x_t$ , previous hidden outputs $h_{t-1}$ , and previous cell state $c_{t-1}$ , LSTM units operate as follows:
$$X &= \begin{bmatrix} x_t\\[0.3em] h_{t-1}\\[0.3em] \end{bmatrix}\\ i_t &= \sigma (\mathbf {W_{iX}}X + \mathbf {W_{ic}}c_{t-1} + \mathbf {b_i})\\ f_t &= \sigma (\mathbf {W_{fX}}X + \mathbf {W_{fc}}c_{t-1} + \mathbf {b_f})\\ o_t &= \sigma (\mathbf {W_{oX}}X + \mathbf {W_{oc}}c_{t-1} + \mathbf {b_o})\\ c_t &= f_t \odot c_{t-1} + i_t \odot tanh(\mathbf {W_{cX}}X + \mathbf {b_c})\\ h_t &= o_t \odot tanh(c_t)$$ (Eq. 3)
where $i_t$ , $f_t$ , $o_t$ are input, forget, and output gates, respectively. The sigmoid function $\sigma ()$ is a soft gate function controlling the amount of information flow. $W$ s and $b$ s are model parameters to learn.
Neural Attention
A traditional RNN encoder-decoder approach BIBREF11 first encodes an arbitrary length input sequence into a fixed-length dense vector that can be used as input to subsequent classification models, or to initialize the hidden state of a secondary decoder. However, the requirement to compress all necessary information into a single fixed length vector can be problematic. A neural attention model BIBREF12 BIBREF13 has been recently proposed to alleviate this issue by enabling the network to attend to past outputs when decoding. Thus, the encoder no longer needs to represent an entire sequence with one vector; instead, it encodes information into a sequence of vectors, and adaptively chooses a subset of the vectors when decoding.
Predicting Relationships of Object Pairs with an Attention Model
In our cQA tasks, the pair of objects are (question, question) or (question, comment), and the relationship is relevant/irrelevant. The left side of Figure 1 shows one intuitive way to predict relationships using RNNs. Parallel LSTMs encode two objects independently, and then concatenate their outputs as an input to a feed-forward neural network (FNN) with a softmax output layer for classification.
The representations of the two objects are generated independently in this manner. However, we are more interested in the relationship instead of the object representations themselves. Therefore, we consider a serialized LSTM-encoder model in the right side of Figure 1 that is similar to that in BIBREF2 , but also allows an augmented feature input to the FNN classifier.
Figure 2 illustrates our attention framework in more detail. The first LSTM reads one object, and passes information through hidden units to the second LSTM. The second LSTM then reads the other object and generates the representation of this pair after the entire sequence is processed. We build another FNN that takes this representation as input to classify the relationship of this pair.
By adding an attention mechanism to the encoder, we allow the second LSTM to attend to the sequence of output vectors from the first LSTM, and hence generate a weighted representation of first object according to both objects. Let $h_N$ be the last output of second LSTM and $M = [h_1, h_2, \cdots , h_L]$ be the sequence of output vectors of the first object. The weighted representation of the first object is
$$h^{\prime } = \sum _{i=1}^{L} \alpha _i h_i$$ (Eq. 7)
The weight is computed by
$$\alpha _i = \dfrac{exp(a(h_i,h_N))}{\sum _{j=1}^{L}exp(a(h_j,h_N))}$$ (Eq. 8)
where $a()$ is the importance model that produces a higher score for $(h_i, h_N)$ if $h_i$ is useful to determine the object pair's relationship. We parametrize this model using another FNN. Note that in our framework, we also allow other augmented features (e.g., the ranking score from the IR system) to enhance the classifier. So the final input to the classifier will be $h_N$ , $h^{\prime }$ , as well as augmented features.
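For concreteness, Eq. 7–8 can be computed as in the minimal NumPy sketch below; the particular scorer passed as score_fn is an assumption, since the text only states that the importance model a() is parametrized by another FNN:

import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def attend(M, h_N, score_fn):
    # M: L x d matrix of the first LSTM's outputs [h_1, ..., h_L]
    # h_N: last output of the second LSTM
    # score_fn: the importance model a(h_i, h_N)
    scores = np.array([score_fn(h_i, h_N) for h_i in M])
    alpha = softmax(scores)                     # Eq. 8
    h_prime = (alpha[:, None] * M).sum(axis=0)  # Eq. 7
    return h_prime, alpha

# The FNN classifier then takes the concatenation of h_N, h_prime and any
# augmented features (e.g., the IR ranking score) as input.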
Modeling Question-External Comments
For task C, in addition to an original question (oriQ) and an external comment (relC), the question which relC commented on is also given (relQ). To incorporate this extra information, we consider a multitask learning framework which jointly learns to predict the relationships of the three pairs (oriQ/relQ, oriQ/relC, relQ/relC).
Figure 3 shows our framework: the three lower models are separate serialized LSTM-encoders for the three respective object pairs, whereas the upper model is an FNN that takes as input the concatenation of the outputs of three encoders, and predicts the relationships for all three pairs. More specifically, the output layer consists of three softmax layers where each one is intended to predict the relationship of one particular pair.
For the overall loss function, we combine three separate loss functions using a heuristic weight vector $\beta $ that allocates a higher weight to the main task (oriQ-relC relationship prediction) as follows:
$$\mathcal {L} = \beta _1 \mathcal {L}_1 + \beta _2 \mathcal {L}_2 + \beta _3 \mathcal {L}_3$$ (Eq. 11)
By doing so, we hypothesize that the related tasks can improve the main task by leveraging commonality among all tasks.
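A minimal sketch of the combined objective in Eq. 11, assuming each sub-loss is an ordinary cross-entropy over its own softmax output; the weights shown are the ones reported later in the experiments (0.8 for the main task, 0.1 for each auxiliary task):

import numpy as np

def cross_entropy(p, y):
    # p: predicted distribution, y: one-hot gold label
    return -float(np.sum(y * np.log(p + 1e-12)))

def multitask_loss(preds, golds, beta=(0.8, 0.1, 0.1)):
    # preds/golds are ordered as (oriQ-relC, oriQ-relQ, relQ-relC), with the
    # main task weighted most heavily, as in Eq. 11.
    return sum(b * cross_entropy(p, y) for b, p, y in zip(beta, preds, golds))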
Experiments
We evaluate our approach on all three cQA tasks. We use the cQA datasets provided by the SemEval 2016 task. The cQA data is organized as follows: there are 267 original questions, each question has 10 related questions, and each related question has 10 comments. Therefore, for task A, there is a total of 26,700 question-comment pairs. For task B, there are 2,670 question-question pairs. For task C, there are 26,700 question-comment pairs. The test dataset includes 50 questions, 500 related questions and 5,000 comments, none of which overlap with the training set. To evaluate the performance, we use mean average precision (MAP) and F1 score.
Preliminary Results
Table 2 shows the initial results using the RNN encoder for different tasks. We observe that the attention model always gets better results than the RNN without attention, especially for task C. However, the RNN model achieves a very low F1 score. For task B, it is even worse than the random baseline. We believe the reason is that task B has only 2,670 training pairs, which is very limited for training a reasonable neural network. For task C, we believe the problem is highly imbalanced data. Since the related comments did not directly comment on the original question, more than $90\%$ of the comments are labeled as irrelevant to the original question. The low F1 (with high precision and low recall) means our system tends to label most comments as irrelevant. In the following section, we investigate methods to address these issues.
Robust Parameter Initialization
One way to improve models trained on limited data is to use external data to pretrain the neural network. We therefore considered two different datasets for this task.
Cross-domain: The Stanford natural language inference (SNLI) corpus BIBREF17 has a huge amount of cleaned premise and hypothesis pairs. Unfortunately the pairs are for a different task. The relationship between the premise and hypothesis may be similar to the relation between questions and comments, but may also be different.
In-domain: since task A seems has reasonable performance, and the network is also well-trained, we could use it directly to initialize task B.
To utilize the data, we first trained the model on each auxiliary data (SNLI or Task A) and then removed the softmax layer. After that, we retrain the network using the target data with a softmax layer that was randomly initialized.
For task A, the SNLI cannot improve MAP or F1 scores. Actually it slightly hurts the performance. We surmise that it is probably because the domain is different. Further investigation is needed: for example, we could only use the parameter for embedding layers etc. For task B, the SNLI yields a slight improvement on MAP ( $0.2\%$ ), and Task A could give ( $1.2\%$ ) on top of that. No improvement was observed on F1. For task C, pretraining by task A is also better than using SNLI (task A is $1\%$ better than the baseline, while SNLI is almost the same).
In summary, the in-domain pretraining seems better, but overall, the improvement is less than we expected, especially for task B, which only has very limited target data. We will not make a conclusion here since more investigation is needed.
Multitask Learning
As mentioned in Section "Modeling Question-External Comments" , we also explored a multitask learning framework that jointly learns to predict the relationships of all three tasks. We set $0.8$ for the main task (task C) and $0.1$ for the other auxiliary tasks. The MAP score did not improve, but F1 increases to $0.1617$ . We believe this is because other tasks have more balanced labels, which improves the shared parameters for task C.
Augmented data
There are many sources of external question-answer pairs that could be used in our tasks, for example, WebQuestions (introduced by the authors of the SEMPRE system BIBREF18 ) and the SimpleQuestions dataset. All of them are positive examples for our task and we can easily create negative examples from them. Initial experiments indicate that it is very easy to overfit to these obvious negative examples. We believe this is because our negative examples are non-informative for our task and just introduce noise.
Since the external data seems to hurt the performance, we try to use the in-domain pairs to enhance task B and task C. For task B, if related question 1 (rel1) and related question 2 (rel2) are both relevant to the original question, then we add a positive sample (rel1, rel2, 1). If one of rel1 and rel2 is irrelevant and the other is relevant, we add a negative sample (rel1, rel2, 0). After doing this, the number of task B samples increases from $2,670$ to $11,810$ . By applying this method, the MAP score increased slightly from $0.5723$ to $0.5789$ but the F1 score improved from $0.4334$ to $0.5860$ .
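The augmentation rule just described can be sketched as follows; the input structure related, a list of (question, is_relevant) pairs for one original question, is a hypothetical helper of our own:

from itertools import combinations

def augment_task_b(related):
    extra = []
    for (q1, r1), (q2, r2) in combinations(related, 2):
        if r1 and r2:
            extra.append((q1, q2, 1))   # both relevant -> positive pair
        elif r1 != r2:
            extra.append((q1, q2, 0))   # exactly one relevant -> negative pair
    return extra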
For task C, we used task A's data directly. The results are very similar with a slight improvement on MAP, but large improvement on F1 score from $0.1449$ to $0.2064$ .
Augmented features
To further enhance the system, we incorporate a one hot vector of the original IR ranking as an additional feature into the FNN classifier. Table 3 shows the results. In comparing the models with and without augmented features, we can see large improvement for task B and C. The F1 score for task A degrades slightly but MAP improves. This might be because task A already had a substantial amount of training data.
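A sketch of the augmented feature construction described above, assuming the IR rank is a 0-based position within the 10-item retrieved list:

import numpy as np

def ir_rank_one_hot(rank, list_size=10):
    v = np.zeros(list_size)
    v[rank] = 1.0
    return v

# The FNN classifier input becomes the concatenation of h_N, h_prime (the
# attention-weighted representation) and this one-hot ranking vector.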
Comparison with Other Systems
Table 4 gives the final comparison between different models (we only list the MAP score because it is the official score for the challenge). Since the two baseline models did not use any additional data, in this table our system was also restricted to the provided training data. For task A, we can see that if there is enough training data our single system already performs better than a very strong feature-rich system. For task B, since only limited training data is given, both the feature-rich system and our system are worse than the IR system. For task C, our system also achieves results comparable to the feature-rich system. If we do a simple system combination (averaging the rank scores) between our system and the IR system, the combined system gives large gains on tasks B and C. This implies that our system is complementary to the IR system.
Analysis of Attention Mechanism
In addition to quantitative analysis, it is natural to qualitatively evaluate the performance of the attention mechanism by visualizing the weight distribution of each instance. We randomly picked several instances from the test set in task A, for which the sentence lengths are more moderate for demonstration. These examples are shown in Figure 5 , and categorized into short, long, and noisy sentences for discussion. A darker blue patch refers to a larger weight relative to other words in the same sentence.
Short Sentences
Figure 5 illustrates two cQA examples whose questions are relatively short. The comments corresponding to these questions are “...snorkeling two days ago off the coast of dukhan...” and “the doha international airport...”. We can observe that our model successfully learns to focus on the most representative part of the question pertaining to classifying the relationship, which is "place for snorkeling" for the first example and “place can ... visited in qatar” for the second example.
Long Sentences
In Figure 5 , we investigate two examples with longer questions, which both contain 63 words. Interestingly, the distribution of weights does not become more uniform; the model still focuses attention on a small number of hot words, for example, “puppy dog for ... mall” and “hectic driving in doha ... car insurance ... quite costly”. Additionally, some words that appear frequently but carry little information for classification are assigned very small weights, such as I/we/my, is/am, like, and to.
Noisy Sentence
Due to the open nature of cQA forums, some content is noisy. Figure 5 is an example with excessive usage of question marks. Again, our model exhibits its robustness by allocating very low weights to the noise symbols and therefore excludes the noninformative content.
Conclusion
In this paper, we demonstrate that a general RNN encoder framework can be applied to community question answering tasks. By adding a neural attention mechanism, we showed quantitatively and qualitatively that attention can improve the RNN encoder framework. To deal with a more realistic scenario, we expanded the framework to incorporate metadata as augmented inputs to a FNN classifier, and pretrained models on larger datasets, increasing both stability and performance. Our model is consistently better than or comparable to a strong feature-rich baseline system, and is superior to an IR-based system when there is a reasonable amount of training data.
Our model is complementary to an IR-based system that uses vast amounts of external resources but is trained for general purposes. When the two systems are combined, the result exceeds both the feature-rich and IR-based systems in all three tasks.
Moreover, our approach is also language independent. We have also performed preliminary experiments on the Arabic portion of the SemEval-2016 cQA task. The results are competitive with a hand-tuned strong baseline from SemEval-2015.
Future work could proceed in two directions: first, we can enrich the existing system by incorporating available metadata and preprocessing data with morphological normalization and out-of-vocabulary mappings; second, we can reinforce our model by carrying out word-by-word and history-aware attention mechanisms instead of attending only when reading the last word. | Yes |
9bffc9a9c527e938b2a95ba60c483a916dbd1f6b | 9bffc9a9c527e938b2a95ba60c483a916dbd1f6b_0 | Q: Do they use multi-attention heads?
Text: Introduction
Targeted sentiment classification is a fine-grained sentiment analysis task, which aims at determining the sentiment polarities (e.g., negative, neutral, or positive) of a sentence over “opinion targets” that explicitly appear in the sentence. For example, given a sentence “I hated their service, but their food was great”, the sentiment polarities for the target “service” and “food” are negative and positive respectively. A target is usually an entity or an entity aspect.
In recent years, neural network models have been designed to automatically learn useful low-dimensional representations from targets and contexts and have obtained promising results BIBREF0 , BIBREF1 . However, these neural network models are still in their infancy when it comes to the fine-grained targeted sentiment classification task.
The attention mechanism, which has been successfully used in machine translation BIBREF2 , is incorporated to force the model to pay more attention to context words with closer semantic relations to the target. Some studies already use attention to generate target-specific sentence representations BIBREF3 , BIBREF4 , BIBREF5 or to transform sentence representations according to target words BIBREF6 . However, these studies depend on complex recurrent neural networks (RNNs) as the sequence encoder to compute the hidden semantics of texts.
The first problem with previous works is that the modeling of text relies on RNNs. RNNs, such as LSTM, are very expressive, but they are hard to parallelize and backpropagation through time (BPTT) requires large amounts of memory and computation. Moreover, essentially every training algorithm of RNN is the truncated BPTT, which affects the model's ability to capture dependencies over longer time scales BIBREF7 . Although LSTM can alleviate the vanishing gradient problem to a certain extent and thus maintain long distance information, this usually requires a large amount of training data. Another problem that previous studies ignore is the label unreliability issue, since neutral sentiment is a fuzzy sentimental state and brings difficulty for model learning. As far as we know, we are the first to raise the label unreliability issue in the targeted sentiment classification task.
This paper proposes an attention-based model to solve the problems above. Specifically, our model eschews recurrence and employs attention as a competitive alternative to draw the introspective and interactive semantics between target and context words. To deal with the label unreliability issue, we employ a label smoothing regularization to encourage the model to be less confident with fuzzy labels. We also apply pre-trained BERT BIBREF8 to this task and show that our model enhances the performance of the basic BERT model. Experimental results on three benchmark datasets show that the proposed model achieves competitive performance and is a lightweight alternative to the best RNN-based models.
The main contributions of this work are presented as follows:
Related Work
Research approaches to the targeted sentiment classification task include traditional machine learning methods and neural network methods.
Traditional machine learning methods, including rule-based methods BIBREF9 and statistic-based methods BIBREF10 , mainly focus on extracting a set of features like sentiment lexicons features and bag-of-words features to train a sentiment classifier BIBREF11 . The performance of these methods highly depends on the effectiveness of the feature engineering works, which are labor intensive.
In recent years, neural network methods are getting more and more attention as they do not need handcrafted features and can encode sentences with low-dimensional word vectors in which rich semantic information is retained. In order to incorporate target words into a model, Tang et al. tang2016effective propose TD-LSTM to extend LSTM by using two single-directional LSTMs to model the left context and right context of the target word respectively. Tang et al. tang2016aspect design MemNet, which consists of a multi-hop attention mechanism with an external memory to capture the importance of each context word concerning the given target. Multiple attention is paid to the memory represented by word embeddings to build higher semantic information. Wang et al. wang2016attention propose ATAE-LSTM, which concatenates target embeddings with word representations and lets targets participate in computing attention weights. Chen et al. chen2017recurrent propose RAM, which adopts a multiple-attention mechanism on the memory built with bidirectional LSTM and nonlinearly combines the attention results with gated recurrent units (GRUs). Ma et al. ma2017interactive propose IAN, which learns the representations of the target and context with two attention networks interactively.
Proposed Methodology
Given a context sequence INLINEFORM0 and a target sequence INLINEFORM1 , where INLINEFORM2 is a sub-sequence of INLINEFORM3 . The goal of this model is to predict the sentiment polarity of the sentence INLINEFORM4 over the target INLINEFORM5 .
Figure FIGREF9 illustrates the overall architecture of the proposed Attentional Encoder Network (AEN), which mainly consists of an embedding layer, an attentional encoder layer, a target-specific attention layer, and an output layer. Embedding layer has two types: GloVe embedding and BERT embedding. Accordingly, the models are named AEN-GloVe and AEN-BERT.
Embedding Layer
Let INLINEFORM0 to be the pre-trained GloVe BIBREF12 embedding matrix, where INLINEFORM1 is the dimension of word vectors and INLINEFORM2 is the vocabulary size. Then we map each word INLINEFORM3 to its corresponding embedding vector INLINEFORM4 , which is a column in the embedding matrix INLINEFORM5 .
BERT embedding uses the pre-trained BERT to generate word vectors of sequence. In order to facilitate the training and fine-tuning of BERT model, we transform the given context and target to “[CLS] + context + [SEP]” and “[CLS] + target + [SEP]” respectively.
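A minimal sketch of the sequence construction described above; tokenization and sub-word handling by the pre-trained BERT model are omitted:

def bert_inputs(context, target):
    # AEN-BERT feeds the context and the target as two separate sequences.
    ctx_seq = "[CLS] " + context + " [SEP]"
    tgt_seq = "[CLS] " + target + " [SEP]"
    return ctx_seq, tgt_seq

# The BERT-SPC baseline discussed later instead uses the single sequence
# "[CLS] " + context + " [SEP] " + target + " [SEP]".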
Attentional Encoder Layer
The attentional encoder layer is a parallelizable and interactive alternative of LSTM and is applied to compute the hidden states of the input embeddings. This layer consists of two submodules: the Multi-Head Attention (MHA) and the Point-wise Convolution Transformation (PCT).
Multi-Head Attention (MHA) is attention that can perform multiple attention functions in parallel. Different from the Transformer BIBREF13 , we use Intra-MHA for introspective context word modeling and Inter-MHA for context-perceptive target word modeling, which is more lightweight, and the target is modeled according to a given context.
An attention function maps a key sequence INLINEFORM0 and a query sequence INLINEFORM1 to an output sequence INLINEFORM2 : DISPLAYFORM0
where INLINEFORM0 denotes the alignment function which learns the semantic relevance between INLINEFORM1 and INLINEFORM2 : DISPLAYFORM0
where INLINEFORM0 are learnable weights.
MHA can learn n_head different scores in parallel subspaces and is very powerful for alignments. The INLINEFORM0 outputs are concatenated and projected to the specified hidden dimension INLINEFORM1 , namely, DISPLAYFORM0
where “ INLINEFORM0 ” denotes vector concatenation, INLINEFORM1 , INLINEFORM2 is the output of the INLINEFORM3 -th head attention and INLINEFORM4 .
Intra-MHA, or multi-head self-attention, is a special case of the typical attention mechanism in which INLINEFORM0 . Given a context embedding INLINEFORM1 , we can get the introspective context representation INLINEFORM2 by: DISPLAYFORM0
The learned context representation INLINEFORM0 is aware of long-term dependencies.
Inter-MHA is the generally used form of attention mechanism that INLINEFORM0 is different from INLINEFORM1 . Given a context embedding INLINEFORM2 and a target embedding INLINEFORM3 , we can get the context-perceptive target representation INLINEFORM4 by: DISPLAYFORM0
After this interactive procedure, each given target word INLINEFORM0 will have a composed representation selected from context embeddings INLINEFORM1 . Then we get the context-perceptive target words modeling INLINEFORM2 .
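The MHA computation can be sketched as below. Because the exact alignment function is not reproduced in this text (it appears only as a placeholder), a simple bilinear score is assumed here purely for illustration:

import numpy as np

def softmax_rows(S):
    S = S - S.max(axis=-1, keepdims=True)
    E = np.exp(S)
    return E / E.sum(axis=-1, keepdims=True)

def one_head(K, Q, Wk, Wq):
    # Score every (key, query) pair with an assumed bilinear alignment,
    # normalize per query, and compose each output from the keys.
    S = (Q @ Wq) @ (K @ Wk).T          # n_q x n_k alignment scores
    return softmax_rows(S) @ K

def multi_head_attention(K, Q, heads, Wo):
    # heads: list of (Wk, Wq) parameter pairs, one per attention head
    O = np.concatenate([one_head(K, Q, Wk, Wq) for Wk, Wq in heads], axis=-1)
    return O @ Wo                       # project back to the hidden dimension

# Intra-MHA: multi_head_attention(context, context, ...)
# Inter-MHA: multi_head_attention(context, target, ...)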
A Point-wise Convolution Transformation (PCT) can transform contextual information gathered by the MHA. Point-wise means that the kernel sizes are 1 and the same transformation is applied to every single token belonging to the input. Formally, given an input sequence INLINEFORM0 , PCT is defined as: DISPLAYFORM0
where INLINEFORM0 stands for the ELU activation, INLINEFORM1 is the convolution operator, INLINEFORM2 and INLINEFORM3 are the learnable weights of the two convolutional kernels, INLINEFORM4 and INLINEFORM5 are biases of the two convolutional kernels.
Given INLINEFORM0 and INLINEFORM1 , PCTs are applied to get the output hidden states of the attentional encoder layer INLINEFORM2 and INLINEFORM3 by: DISPLAYFORM0
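Because the kernel sizes are 1, the two convolutions act independently on each token; a minimal sketch under that reading:

import numpy as np

def elu(z, alpha=1.0):
    return np.where(z > 0, z, alpha * (np.exp(z) - 1.0))

def pct(H, W1, b1, W2, b2):
    # H: n x d sequence of hidden states; both kernel-size-1 convolutions
    # reduce to position-wise affine transforms, with ELU after the first.
    return elu(H @ W1 + b1) @ W2 + b2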
Target-specific Attention Layer
After we obtain the introspective context representation INLINEFORM0 and the context-perceptive target representation INLINEFORM1 , we employ another MHA to obtain the target-specific context representation INLINEFORM2 by: DISPLAYFORM0
The multi-head attention function here also has its independent parameters.
Output Layer
We get the final representations of the previous outputs by average pooling, concatenate them as the final comprehensive representation INLINEFORM0 , and use a fully connected layer to project the concatenated vector into the space of the targeted INLINEFORM1 classes. DISPLAYFORM0
where INLINEFORM0 is the predicted sentiment polarity distribution, INLINEFORM1 and INLINEFORM2 are learnable parameters.
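A sketch of the pooling and projection described above; the softmax is written out explicitly and reps stands for the output sequences of the previous layers:

import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def output_layer(reps, W, b):
    # Average-pool each representation over its length, concatenate the
    # pooled vectors, and project to the C sentiment classes.
    o = np.concatenate([R.mean(axis=0) for R in reps])
    return softmax(W @ o + b)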
Regularization and Model Training
Since neutral sentiment is a very fuzzy sentimental state, training samples labeled neutral are unreliable. We employ a Label Smoothing Regularization (LSR) term in the loss function, which penalizes low-entropy output distributions BIBREF14 . LSR can reduce overfitting by preventing the network from assigning the full probability to each training example during training, replacing the 0 and 1 targets for a classifier with smoothed values like 0.1 or 0.9.
For a training sample INLINEFORM0 with the original ground-truth label distribution INLINEFORM1 , we replace INLINEFORM2 with DISPLAYFORM0
where INLINEFORM0 is the prior distribution over labels , and INLINEFORM1 is the smoothing parameter. In this paper, we set the prior label distribution to be uniform INLINEFORM2 .
LSR is equivalent to the KL divergence between the prior label distribution INLINEFORM0 and the network's predicted distribution INLINEFORM1 . Formally, LSR term is defined as: DISPLAYFORM0
The objective function (loss function) to be optimized is the cross-entropy loss with INLINEFORM0 and INLINEFORM1 regularization, which is defined as: DISPLAYFORM0
where INLINEFORM0 is the ground truth represented as a one-hot vector, INLINEFORM1 is the predicted sentiment distribution vector given by the output layer, INLINEFORM2 is the coefficient for INLINEFORM3 regularization term, and INLINEFORM4 is the parameter set.
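Assuming the standard label-smoothing form with a uniform prior (the exact formula appears only as a placeholder in this text), the training objective can be sketched as:

import numpy as np

def smoothed_target(y_onehot, eps=0.2):
    # Mix the one-hot label with a uniform prior over the C classes; eps is
    # the smoothing parameter, set to 0.2 in the experiments below.
    C = y_onehot.shape[0]
    return (1.0 - eps) * y_onehot + eps / C

def objective(y_pred, y_onehot, params, lam, eps=0.2):
    # Cross-entropy against the smoothed labels plus L2 regularization.
    y_s = smoothed_target(y_onehot, eps)
    ce = -float(np.sum(y_s * np.log(y_pred + 1e-12)))
    l2 = lam * sum(float(np.sum(p ** 2)) for p in params)
    return ce + l2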
Datasets and Experimental Settings
We conduct experiments on three datasets: SemEval 2014 Task 4 BIBREF15 dataset composed of Restaurant reviews and Laptop reviews, and ACL 14 Twitter dataset gathered by Dong et al. dong2014adaptive. These datasets are labeled with three sentiment polarities: positive, neutral and negative. Table TABREF31 shows the number of training and test instances in each category.
Word embeddings in AEN-GloVe do not get updated in the learning process, but we fine-tune pre-trained BERT in AEN-BERT. Embedding dimension INLINEFORM0 is 300 for GloVe and is 768 for pre-trained BERT. Dimension of hidden states INLINEFORM1 is set to 300. The weights of our model are initialized with Glorot initialization BIBREF16 . During training, we set label smoothing parameter INLINEFORM2 to 0.2 BIBREF14 , the coefficient INLINEFORM3 of INLINEFORM4 regularization item is INLINEFORM5 and dropout rate is 0.1. Adam optimizer BIBREF17 is applied to update all the parameters. We adopt the Accuracy and Macro-F1 metrics to evaluate the performance of the model.
Model Comparisons
In order to comprehensively evaluate and analyze the performance of AEN-GloVe, we list 7 baseline models and design 4 ablations of AEN-GloVe. We also design a basic BERT-based model to evaluate the performance of AEN-BERT.
Non-RNN based baselines:
INLINEFORM0 Feature-based SVM BIBREF18 is a traditional support vector machine based model with extensive feature engineering.
INLINEFORM0 Rec-NN BIBREF0 firstly uses rules to transform the dependency tree and put the opinion target at the root, and then learns the sentence representation toward target via semantic composition using Recursive NNs.
INLINEFORM0 MemNet BIBREF19 uses multiple hops of attention layers over the context word embeddings for sentence representation to explicitly capture the importance of each context word.
RNN based baselines:
INLINEFORM0 TD-LSTM BIBREF1 extends LSTM by using two LSTM networks to model the left context with target and the right context with target respectively. The left and right target-dependent representations are concatenated for predicting the sentiment polarity of the target.
INLINEFORM0 ATAE-LSTM BIBREF3 strengthens the effect of target embeddings, which appends the target embeddings with each word embeddings and use LSTM with attention to get the final representation for classification.
INLINEFORM0 IAN BIBREF4 learns the representations of the target and context with two LSTMs and attentions interactively, which generates the representations for targets and contexts with respect to each other.
INLINEFORM0 RAM BIBREF5 strengthens MemNet by representing memory with bidirectional LSTM and using a gated recurrent unit network to combine the multiple attention outputs for sentence representation.
AEN-GloVe ablations:
INLINEFORM0 AEN-GloVe w/o PCT ablates PCT module.
INLINEFORM0 AEN-GloVe w/o MHA ablates MHA module.
INLINEFORM0 AEN-GloVe w/o LSR ablates label smoothing regularization.
INLINEFORM0 AEN-GloVe-BiLSTM replaces the attentional encoder layer with two bidirectional LSTM.
Basic BERT-based model:
INLINEFORM0 BERT-SPC feeds sequence “[CLS] + context + [SEP] + target + [SEP]” into the basic BERT model for sentence pair classification task.
Main Results
Table TABREF34 shows the performance comparison of AEN with other models. BERT-SPC and AEN-BERT obtain substantial accuracy improvements, which shows the power of pre-trained BERT on small-data task. The overall performance of AEN-BERT is better than BERT-SPC, which suggests that it is important to design a downstream network customized to a specific task. As the prior knowledge in the pre-trained BERT is not specific to any particular domain, further fine-tuning on the specific task is necessary for releasing the true power of BERT.
The overall performance of TD-LSTM is not good since it only makes a rough treatment of the target words. ATAE-LSTM, IAN and RAM are attention-based models; they consistently exceed the TD-LSTM method on the Restaurant and Laptop datasets. RAM is better than the other RNN-based models, but it does not perform well on the Twitter dataset, which might be because bidirectional LSTM is not good at modeling small and ungrammatical text.
Feature-based SVM is still a competitive baseline, but relying on manually-designed features. Rec-NN gets the worst performances among all neural network baselines as dependency parsing is not guaranteed to work well on ungrammatical short texts such as tweets and comments. Like AEN, MemNet also eschews recurrence, but its overall performance is not good since it does not model the hidden semantic of embeddings, and the result of the last attention is essentially a linear combination of word embeddings.
Model Analysis
As shown in Table TABREF34 , the AEN-GloVe ablations fall short of the full AEN-GloVe in both accuracy and macro-F1. This result shows that all of these discarded components are crucial for good performance. Comparing the results of AEN-GloVe and AEN-GloVe w/o LSR, we observe that the accuracy of AEN-GloVe w/o LSR drops significantly on all three datasets. We could attribute this phenomenon to the unreliability of the training samples with neutral sentiment. The overall performance of AEN-GloVe and AEN-GloVe-BiLSTM is relatively close, though AEN-GloVe performs better on the Restaurant dataset. More importantly, AEN-GloVe has fewer parameters and is easier to parallelize.
To figure out whether the proposed AEN-GloVe is a lightweight alternative to recurrent models, we study the model size of each model on the Restaurant dataset. Statistical results are reported in Table TABREF37 . We implement all the compared models based on the same source code infrastructure, use the same hyperparameters, and run them on the same GPU.
RNN-based and BERT-based models indeed have larger model sizes. ATAE-LSTM, IAN, RAM, and AEN-GloVe-BiLSTM are all attention-based RNN models; memory optimization for these models is more difficult because the encoded hidden states must be kept in memory simultaneously in order to perform the attention mechanisms. MemNet has the lowest model size as it only has one shared attention layer and two linear layers, and it does not calculate hidden states of word embeddings. AEN-GloVe is the second most lightweight model, since it takes somewhat more parameters than MemNet to model the hidden states of sequences. As a comparison, the model size of AEN-GloVe-BiLSTM is more than twice that of AEN-GloVe, but it does not bring any performance improvements.
Conclusion
In this work, we propose an attentional encoder network for the targeted sentiment classification task, which employs attention-based encoders to model the interaction between context and target. We raise the label unreliability issue and add a label smoothing regularization to encourage the model to be less confident with fuzzy labels. We also apply pre-trained BERT to this task and obtain new state-of-the-art results. Experiments and analysis demonstrate the effectiveness and lightweight nature of the proposed model. | Yes
8434974090491a3c00eed4f22a878f0b70970713 | 8434974090491a3c00eed4f22a878f0b70970713_0 | Q: How big is their model?
Text: Introduction
Targeted sentiment classification is a fine-grained sentiment analysis task, which aims at determining the sentiment polarities (e.g., negative, neutral, or positive) of a sentence over “opinion targets” that explicitly appear in the sentence. For example, given a sentence “I hated their service, but their food was great”, the sentiment polarities for the target “service” and “food” are negative and positive respectively. A target is usually an entity or an entity aspect.
In recent years, neural network models have been designed to automatically learn useful low-dimensional representations from targets and contexts and have obtained promising results BIBREF0 , BIBREF1 . However, these neural network models are still in their infancy when it comes to the fine-grained targeted sentiment classification task.
The attention mechanism, which has been successfully used in machine translation BIBREF2 , is incorporated to force the model to pay more attention to context words with closer semantic relations to the target. Some studies already use attention to generate target-specific sentence representations BIBREF3 , BIBREF4 , BIBREF5 or to transform sentence representations according to target words BIBREF6 . However, these studies depend on complex recurrent neural networks (RNNs) as the sequence encoder to compute the hidden semantics of texts.
The first problem with previous works is that the modeling of text relies on RNNs. RNNs, such as LSTM, are very expressive, but they are hard to parallelize and backpropagation through time (BPTT) requires large amounts of memory and computation. Moreover, essentially every training algorithm of RNN is the truncated BPTT, which affects the model's ability to capture dependencies over longer time scales BIBREF7 . Although LSTM can alleviate the vanishing gradient problem to a certain extent and thus maintain long distance information, this usually requires a large amount of training data. Another problem that previous studies ignore is the label unreliability issue, since neutral sentiment is a fuzzy sentimental state and brings difficulty for model learning. As far as we know, we are the first to raise the label unreliability issue in the targeted sentiment classification task.
This paper proposes an attention-based model to solve the problems above. Specifically, our model eschews recurrence and employs attention as a competitive alternative to draw the introspective and interactive semantics between target and context words. To deal with the label unreliability issue, we employ a label smoothing regularization to encourage the model to be less confident with fuzzy labels. We also apply pre-trained BERT BIBREF8 to this task and show that our model enhances the performance of the basic BERT model. Experimental results on three benchmark datasets show that the proposed model achieves competitive performance and is a lightweight alternative to the best RNN-based models.
The main contributions of this work are presented as follows:
Related Work
Research approaches to the targeted sentiment classification task include traditional machine learning methods and neural network methods.
Traditional machine learning methods, including rule-based methods BIBREF9 and statistic-based methods BIBREF10 , mainly focus on extracting a set of features like sentiment lexicons features and bag-of-words features to train a sentiment classifier BIBREF11 . The performance of these methods highly depends on the effectiveness of the feature engineering works, which are labor intensive.
In recent years, neural network methods are getting more and more attention as they do not need handcrafted features and can encode sentences with low-dimensional word vectors in which rich semantic information is retained. In order to incorporate target words into a model, Tang et al. tang2016effective propose TD-LSTM to extend LSTM by using two single-directional LSTMs to model the left context and right context of the target word respectively. Tang et al. tang2016aspect design MemNet, which consists of a multi-hop attention mechanism with an external memory to capture the importance of each context word concerning the given target. Multiple attention is paid to the memory represented by word embeddings to build higher semantic information. Wang et al. wang2016attention propose ATAE-LSTM, which concatenates target embeddings with word representations and lets targets participate in computing attention weights. Chen et al. chen2017recurrent propose RAM, which adopts a multiple-attention mechanism on the memory built with bidirectional LSTM and nonlinearly combines the attention results with gated recurrent units (GRUs). Ma et al. ma2017interactive propose IAN, which learns the representations of the target and context with two attention networks interactively.
Proposed Methodology
Given a context sequence INLINEFORM0 and a target sequence INLINEFORM1 , where INLINEFORM2 is a sub-sequence of INLINEFORM3 . The goal of this model is to predict the sentiment polarity of the sentence INLINEFORM4 over the target INLINEFORM5 .
Figure FIGREF9 illustrates the overall architecture of the proposed Attentional Encoder Network (AEN), which mainly consists of an embedding layer, an attentional encoder layer, a target-specific attention layer, and an output layer. Embedding layer has two types: GloVe embedding and BERT embedding. Accordingly, the models are named AEN-GloVe and AEN-BERT.
Embedding Layer
Let INLINEFORM0 to be the pre-trained GloVe BIBREF12 embedding matrix, where INLINEFORM1 is the dimension of word vectors and INLINEFORM2 is the vocabulary size. Then we map each word INLINEFORM3 to its corresponding embedding vector INLINEFORM4 , which is a column in the embedding matrix INLINEFORM5 .
BERT embedding uses the pre-trained BERT to generate word vectors of sequence. In order to facilitate the training and fine-tuning of BERT model, we transform the given context and target to “[CLS] + context + [SEP]” and “[CLS] + target + [SEP]” respectively.
Attentional Encoder Layer
The attentional encoder layer is a parallelizable and interactive alternative of LSTM and is applied to compute the hidden states of the input embeddings. This layer consists of two submodules: the Multi-Head Attention (MHA) and the Point-wise Convolution Transformation (PCT).
Multi-Head Attention (MHA) is attention that can perform multiple attention functions in parallel. Different from the Transformer BIBREF13 , we use Intra-MHA for introspective context word modeling and Inter-MHA for context-perceptive target word modeling, which is more lightweight, and the target is modeled according to a given context.
An attention function maps a key sequence INLINEFORM0 and a query sequence INLINEFORM1 to an output sequence INLINEFORM2 : DISPLAYFORM0
where INLINEFORM0 denotes the alignment function which learns the semantic relevance between INLINEFORM1 and INLINEFORM2 : DISPLAYFORM0
where INLINEFORM0 are learnable weights.
MHA can learn n_head different scores in parallel subspaces and is very powerful for alignments. The INLINEFORM0 outputs are concatenated and projected to the specified hidden dimension INLINEFORM1 , namely, DISPLAYFORM0
where “ INLINEFORM0 ” denotes vector concatenation, INLINEFORM1 , INLINEFORM2 is the output of the INLINEFORM3 -th head attention and INLINEFORM4 .
Intra-MHA, or multi-head self-attention, is a special case of the typical attention mechanism in which INLINEFORM0 . Given a context embedding INLINEFORM1 , we can get the introspective context representation INLINEFORM2 by: DISPLAYFORM0
The learned context representation INLINEFORM0 is aware of long-term dependencies.
Inter-MHA is the generally used form of attention mechanism that INLINEFORM0 is different from INLINEFORM1 . Given a context embedding INLINEFORM2 and a target embedding INLINEFORM3 , we can get the context-perceptive target representation INLINEFORM4 by: DISPLAYFORM0
After this interactive procedure, each given target word INLINEFORM0 will have a composed representation selected from context embeddings INLINEFORM1 . Then we get the context-perceptive target words modeling INLINEFORM2 .
A Point-wise Convolution Transformation (PCT) can transform contextual information gathered by the MHA. Point-wise means that the kernel sizes are 1 and the same transformation is applied to every single token belonging to the input. Formally, given an input sequence INLINEFORM0 , PCT is defined as: DISPLAYFORM0
where INLINEFORM0 stands for the ELU activation, INLINEFORM1 is the convolution operator, INLINEFORM2 and INLINEFORM3 are the learnable weights of the two convolutional kernels, INLINEFORM4 and INLINEFORM5 are biases of the two convolutional kernels.
Given INLINEFORM0 and INLINEFORM1 , PCTs are applied to get the output hidden states of the attentional encoder layer INLINEFORM2 and INLINEFORM3 by: DISPLAYFORM0
Target-specific Attention Layer
After we obtain the introspective context representation INLINEFORM0 and the context-perceptive target representation INLINEFORM1 , we employ another MHA to obtain the target-specific context representation INLINEFORM2 by: DISPLAYFORM0
The multi-head attention function here also has its independent parameters.
Output Layer
We get the final representations of the previous outputs by average pooling, concatenate them as the final comprehensive representation INLINEFORM0 , and use a fully connected layer to project the concatenated vector into the space of the targeted INLINEFORM1 classes. DISPLAYFORM0
where INLINEFORM0 is the predicted sentiment polarity distribution, INLINEFORM1 and INLINEFORM2 are learnable parameters.
Regularization and Model Training
Since neutral sentiment is a very fuzzy sentimental state, training samples labeled neutral are unreliable. We employ a Label Smoothing Regularization (LSR) term in the loss function, which penalizes low-entropy output distributions BIBREF14 . LSR can reduce overfitting by preventing the network from assigning the full probability to each training example during training, replacing the 0 and 1 targets for a classifier with smoothed values like 0.1 or 0.9.
For a training sample INLINEFORM0 with the original ground-truth label distribution INLINEFORM1 , we replace INLINEFORM2 with DISPLAYFORM0
where INLINEFORM0 is the prior distribution over labels , and INLINEFORM1 is the smoothing parameter. In this paper, we set the prior label distribution to be uniform INLINEFORM2 .
LSR is equivalent to the KL divergence between the prior label distribution INLINEFORM0 and the network's predicted distribution INLINEFORM1 . Formally, LSR term is defined as: DISPLAYFORM0
The objective function (loss function) to be optimized is the cross-entropy loss with INLINEFORM0 and INLINEFORM1 regularization, which is defined as: DISPLAYFORM0
where INLINEFORM0 is the ground truth represented as a one-hot vector, INLINEFORM1 is the predicted sentiment distribution vector given by the output layer, INLINEFORM2 is the coefficient for INLINEFORM3 regularization term, and INLINEFORM4 is the parameter set.
Datasets and Experimental Settings
We conduct experiments on three datasets: SemEval 2014 Task 4 BIBREF15 dataset composed of Restaurant reviews and Laptop reviews, and ACL 14 Twitter dataset gathered by Dong et al. dong2014adaptive. These datasets are labeled with three sentiment polarities: positive, neutral and negative. Table TABREF31 shows the number of training and test instances in each category.
Word embeddings in AEN-GloVe do not get updated in the learning process, but we fine-tune pre-trained BERT in AEN-BERT. Embedding dimension INLINEFORM0 is 300 for GloVe and is 768 for pre-trained BERT. Dimension of hidden states INLINEFORM1 is set to 300. The weights of our model are initialized with Glorot initialization BIBREF16 . During training, we set label smoothing parameter INLINEFORM2 to 0.2 BIBREF14 , the coefficient INLINEFORM3 of INLINEFORM4 regularization item is INLINEFORM5 and dropout rate is 0.1. Adam optimizer BIBREF17 is applied to update all the parameters. We adopt the Accuracy and Macro-F1 metrics to evaluate the performance of the model.
Model Comparisons
In order to comprehensively evaluate and analyze the performance of AEN-GloVe, we list 7 baseline models and design 4 ablations of AEN-GloVe. We also design a basic BERT-based model to evaluate the performance of AEN-BERT.
Non-RNN based baselines:
INLINEFORM0 Feature-based SVM BIBREF18 is a traditional support vector machine based model with extensive feature engineering.
INLINEFORM0 Rec-NN BIBREF0 firstly uses rules to transform the dependency tree and put the opinion target at the root, and then learns the sentence representation toward target via semantic composition using Recursive NNs.
INLINEFORM0 MemNet BIBREF19 uses multiple hops of attention layers over the context word embeddings for sentence representation to explicitly capture the importance of each context word.
RNN based baselines:
INLINEFORM0 TD-LSTM BIBREF1 extends LSTM by using two LSTM networks to model the left context with target and the right context with target respectively. The left and right target-dependent representations are concatenated for predicting the sentiment polarity of the target.
INLINEFORM0 ATAE-LSTM BIBREF3 strengthens the effect of target embeddings, which appends the target embeddings with each word embeddings and use LSTM with attention to get the final representation for classification.
INLINEFORM0 IAN BIBREF4 learns the representations of the target and context with two LSTMs and attentions interactively, which generates the representations for targets and contexts with respect to each other.
INLINEFORM0 RAM BIBREF5 strengthens MemNet by representing memory with bidirectional LSTM and using a gated recurrent unit network to combine the multiple attention outputs for sentence representation.
AEN-GloVe ablations:
INLINEFORM0 AEN-GloVe w/o PCT ablates PCT module.
INLINEFORM0 AEN-GloVe w/o MHA ablates MHA module.
INLINEFORM0 AEN-GloVe w/o LSR ablates label smoothing regularization.
INLINEFORM0 AEN-GloVe-BiLSTM replaces the attentional encoder layer with two bidirectional LSTM.
Basic BERT-based model:
INLINEFORM0 BERT-SPC feeds sequence “[CLS] + context + [SEP] + target + [SEP]” into the basic BERT model for sentence pair classification task.
Main Results
Table TABREF34 shows the performance comparison of AEN with other models. BERT-SPC and AEN-BERT obtain substantial accuracy improvements, which shows the power of pre-trained BERT on small-data task. The overall performance of AEN-BERT is better than BERT-SPC, which suggests that it is important to design a downstream network customized to a specific task. As the prior knowledge in the pre-trained BERT is not specific to any particular domain, further fine-tuning on the specific task is necessary for releasing the true power of BERT.
The overall performance of TD-LSTM is not good since it only makes a rough treatment of the target words. ATAE-LSTM, IAN and RAM are attention-based models; they consistently exceed the TD-LSTM method on the Restaurant and Laptop datasets. RAM is better than the other RNN-based models, but it does not perform well on the Twitter dataset, which might be because bidirectional LSTM is not good at modeling small and ungrammatical text.
Feature-based SVM is still a competitive baseline, but relying on manually-designed features. Rec-NN gets the worst performances among all neural network baselines as dependency parsing is not guaranteed to work well on ungrammatical short texts such as tweets and comments. Like AEN, MemNet also eschews recurrence, but its overall performance is not good since it does not model the hidden semantic of embeddings, and the result of the last attention is essentially a linear combination of word embeddings.
Model Analysis
As shown in Table TABREF34 , the AEN-GloVe ablations fall short of the full AEN-GloVe in both accuracy and macro-F1. This result shows that all of these discarded components are crucial for good performance. Comparing the results of AEN-GloVe and AEN-GloVe w/o LSR, we observe that the accuracy of AEN-GloVe w/o LSR drops significantly on all three datasets. We could attribute this phenomenon to the unreliability of the training samples with neutral sentiment. The overall performance of AEN-GloVe and AEN-GloVe-BiLSTM is relatively close, though AEN-GloVe performs better on the Restaurant dataset. More importantly, AEN-GloVe has fewer parameters and is easier to parallelize.
To figure out whether the proposed AEN-GloVe is a lightweight alternative to recurrent models, we study the model size of each model on the Restaurant dataset. Statistical results are reported in Table TABREF37 . We implement all the compared models based on the same source code infrastructure, use the same hyperparameters, and run them on the same GPU.
RNN-based and BERT-based models indeed have larger model sizes. ATAE-LSTM, IAN, RAM, and AEN-GloVe-BiLSTM are all attention-based RNN models; memory optimization for these models is more difficult because the encoded hidden states must be kept in memory simultaneously in order to perform the attention mechanisms. MemNet has the lowest model size as it only has one shared attention layer and two linear layers, and it does not calculate hidden states of word embeddings. AEN-GloVe is the second most lightweight model, since it takes somewhat more parameters than MemNet to model the hidden states of sequences. As a comparison, the model size of AEN-GloVe-BiLSTM is more than twice that of AEN-GloVe, but it does not bring any performance improvements.
Conclusion
In this work, we propose an attentional encoder network for the targeted sentiment classification task, which employs attention-based encoders to model the interaction between context and target. We raise the label unreliability issue and add a label smoothing regularization to encourage the model to be less confident with fuzzy labels. We also apply pre-trained BERT to this task and obtain new state-of-the-art results. Experiments and analysis demonstrate the effectiveness and lightweight nature of the proposed model. | Proposed model has 1.16 million parameters and 11.04 MB.
b67420da975689e47d3ea1c12b601851018c4071 | b67420da975689e47d3ea1c12b601851018c4071_0 | Q: How is their model different from BERT?
Text: Introduction
Targeted sentiment classification is a fine-grained sentiment analysis task, which aims at determining the sentiment polarities (e.g., negative, neutral, or positive) of a sentence over “opinion targets” that explicitly appear in the sentence. For example, given a sentence “I hated their service, but their food was great”, the sentiment polarities for the target “service” and “food” are negative and positive respectively. A target is usually an entity or an entity aspect.
In recent years, neural network models have been designed to automatically learn useful low-dimensional representations from targets and contexts and have obtained promising results BIBREF0 , BIBREF1 . However, these neural network models are still in their infancy when it comes to the fine-grained targeted sentiment classification task.
The attention mechanism, which has been successfully used in machine translation BIBREF2 , is incorporated to force the model to pay more attention to context words with closer semantic relations to the target. Some studies already use attention to generate target-specific sentence representations BIBREF3 , BIBREF4 , BIBREF5 or to transform sentence representations according to target words BIBREF6 . However, these studies depend on complex recurrent neural networks (RNNs) as the sequence encoder to compute the hidden semantics of texts.
The first problem with previous works is that the modeling of text relies on RNNs. RNNs, such as LSTM, are very expressive, but they are hard to parallelize and backpropagation through time (BPTT) requires large amounts of memory and computation. Moreover, essentially every training algorithm of RNN is the truncated BPTT, which affects the model's ability to capture dependencies over longer time scales BIBREF7 . Although LSTM can alleviate the vanishing gradient problem to a certain extent and thus maintain long distance information, this usually requires a large amount of training data. Another problem that previous studies ignore is the label unreliability issue, since neutral sentiment is a fuzzy sentimental state and brings difficulty for model learning. As far as we know, we are the first to raise the label unreliability issue in the targeted sentiment classification task.
This paper proposes an attention-based model to solve the problems above. Specifically, our model eschews recurrence and employs attention as a competitive alternative to draw the introspective and interactive semantics between target and context words. To deal with the label unreliability issue, we employ a label smoothing regularization to encourage the model to be less confident with fuzzy labels. We also apply pre-trained BERT BIBREF8 to this task and show that our model enhances the performance of the basic BERT model. Experimental results on three benchmark datasets show that the proposed model achieves competitive performance and is a lightweight alternative to the best RNN-based models.
The main contributions of this work are presented as follows:
Related Work
Research approaches to the targeted sentiment classification task include traditional machine learning methods and neural network methods.
Traditional machine learning methods, including rule-based methods BIBREF9 and statistic-based methods BIBREF10 , mainly focus on extracting a set of features like sentiment lexicons features and bag-of-words features to train a sentiment classifier BIBREF11 . The performance of these methods highly depends on the effectiveness of the feature engineering works, which are labor intensive.
In recent years, neural network methods are getting more and more attention as they do not need handcrafted features and can encode sentences with low-dimensional word vectors where rich semantic information stained. In order to incorporate target words into a model, Tang et al. tang2016effective propose TD-LSTM to extend LSTM by using two single-directional LSTM to model the left context and right context of the target word respectively. Tang et al. tang2016aspect design MemNet which consists of a multi-hop attention mechanism with an external memory to capture the importance of each context word concerning the given target. Multiple attention is paid to the memory represented by word embeddings to build higher semantic information. Wang et al. wang2016attention propose ATAE-LSTM which concatenates target embeddings with word representations and let targets participate in computing attention weights. Chen et al. chen2017recurrent propose RAM which adopts multiple-attention mechanism on the memory built with bidirectional LSTM and nonlinearly combines the attention results with gated recurrent units (GRUs). Ma et al. ma2017interactive propose IAN which learns the representations of the target and context with two attention networks interactively.
Proposed Methodology
Given a context sequence INLINEFORM0 and a target sequence INLINEFORM1 , where INLINEFORM2 is a sub-sequence of INLINEFORM3 . The goal of this model is to predict the sentiment polarity of the sentence INLINEFORM4 over the target INLINEFORM5 .
Figure FIGREF9 illustrates the overall architecture of the proposed Attentional Encoder Network (AEN), which mainly consists of an embedding layer, an attentional encoder layer, a target-specific attention layer, and an output layer. Embedding layer has two types: GloVe embedding and BERT embedding. Accordingly, the models are named AEN-GloVe and AEN-BERT.
Embedding Layer
Let INLINEFORM0 to be the pre-trained GloVe BIBREF12 embedding matrix, where INLINEFORM1 is the dimension of word vectors and INLINEFORM2 is the vocabulary size. Then we map each word INLINEFORM3 to its corresponding embedding vector INLINEFORM4 , which is a column in the embedding matrix INLINEFORM5 .
The BERT embedding uses the pre-trained BERT to generate word vectors for a sequence. In order to facilitate the training and fine-tuning of the BERT model, we transform the given context and target to “[CLS] + context + [SEP]” and “[CLS] + target + [SEP]” respectively.
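As an illustration of this input format, the snippet below uses the Hugging Face transformers tokenizer; this library choice is an assumption here (the original implementation is not specified in this extract), and the tokenizer inserts [CLS] and [SEP] automatically.

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
context = "I hated their service, but their food was great"
target = "food"

context_enc = tokenizer(context)  # yields "[CLS] + context + [SEP]" token ids
target_enc = tokenizer(target)    # yields "[CLS] + target + [SEP]" token ids
print(tokenizer.convert_ids_to_tokens(context_enc["input_ids"]))
print(tokenizer.convert_ids_to_tokens(target_enc["input_ids"]))
```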
Attentional Encoder Layer
The attentional encoder layer is a parallelizable and interactive alternative of LSTM and is applied to compute the hidden states of the input embeddings. This layer consists of two submodules: the Multi-Head Attention (MHA) and the Point-wise Convolution Transformation (PCT).
Multi-Head Attention (MHA) is attention that can perform multiple attention functions in parallel. Different from the Transformer BIBREF13 , we use Intra-MHA for introspective context word modeling and Inter-MHA for context-perceptive target word modeling, which is more lightweight, and the target is modeled according to a given context.
An attention function maps a key sequence INLINEFORM0 and a query sequence INLINEFORM1 to an output sequence INLINEFORM2 : DISPLAYFORM0
where INLINEFORM0 denotes the alignment function which learns the semantic relevance between INLINEFORM1 and INLINEFORM2 : DISPLAYFORM0
where INLINEFORM0 are learnable weights.
MHA can learn n_head different scores in parallel child spaces and is very powerful for alignments. The INLINEFORM0 outputs are concatenated and projected to the specified hidden dimension INLINEFORM1 , namely, DISPLAYFORM0
where “ INLINEFORM0 ” denotes vector concatenation, INLINEFORM1 , INLINEFORM2 is the output of the INLINEFORM3 -th head attention and INLINEFORM4 .
Intra-MHA, or multi-head self-attention, is the special case of the typical attention mechanism in which INLINEFORM0 . Given a context embedding INLINEFORM1 , we can get the introspective context representation INLINEFORM2 by: DISPLAYFORM0
The learned context representation INLINEFORM0 is aware of long-term dependencies.
Inter-MHA is the generally used form of attention mechanism that INLINEFORM0 is different from INLINEFORM1 . Given a context embedding INLINEFORM2 and a target embedding INLINEFORM3 , we can get the context-perceptive target representation INLINEFORM4 by: DISPLAYFORM0
After this interactive procedure, each given target word INLINEFORM0 will have a composed representation selected from context embeddings INLINEFORM1 . Then we get the context-perceptive target words modeling INLINEFORM2 .
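A minimal sketch of the two attention variants is given below, assuming PyTorch and a standard scaled dot-product alignment in place of the learned alignment function defined above (whose exact form is elided in this extract); tensor names and shapes are illustrative.

```python
import torch
import torch.nn as nn

class MHA(nn.Module):
    """Intra-MHA: forward(x, x). Inter-MHA: forward(target_emb, context_emb)."""
    def __init__(self, embed_dim, hidden_dim, n_head=8):
        super().__init__()
        assert hidden_dim % n_head == 0
        self.n_head, self.d_head = n_head, hidden_dim // n_head
        self.w_q = nn.Linear(embed_dim, hidden_dim)
        self.w_k = nn.Linear(embed_dim, hidden_dim)
        self.w_v = nn.Linear(embed_dim, hidden_dim)
        self.w_o = nn.Linear(hidden_dim, hidden_dim)  # projection after head concatenation

    def forward(self, query, key):
        b, len_q, _ = query.shape
        len_k = key.shape[1]
        q = self.w_q(query).view(b, len_q, self.n_head, self.d_head).transpose(1, 2)
        k = self.w_k(key).view(b, len_k, self.n_head, self.d_head).transpose(1, 2)
        v = self.w_v(key).view(b, len_k, self.n_head, self.d_head).transpose(1, 2)
        attn = torch.softmax(q @ k.transpose(-2, -1) / self.d_head ** 0.5, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, len_q, -1)
        return self.w_o(out)

context = torch.randn(2, 20, 300)          # (batch, context length, embedding dim)
target = torch.randn(2, 3, 300)            # (batch, target length, embedding dim)
c_intra = MHA(300, 300)(context, context)  # introspective context representation
t_inter = MHA(300, 300)(target, context)   # context-perceptive target representation
```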
A Point-wise Convolution Transformation (PCT) can transform contextual information gathered by the MHA. Point-wise means that the kernel sizes are 1 and the same transformation is applied to every single token belonging to the input. Formally, given an input sequence INLINEFORM0 , PCT is defined as: DISPLAYFORM0
where INLINEFORM0 stands for the ELU activation, INLINEFORM1 is the convolution operator, INLINEFORM2 and INLINEFORM3 are the learnable weights of the two convolutional kernels, INLINEFORM4 and INLINEFORM5 are biases of the two convolutional kernels.
Given INLINEFORM0 and INLINEFORM1 , PCTs are applied to get the output hidden states of the attentional encoder layer INLINEFORM2 and INLINEFORM3 by: DISPLAYFORM0
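A sketch of the point-wise convolution transformation, assuming PyTorch: a kernel size of 1 applies the same two-layer transformation with ELU to every token position.

```python
import torch
import torch.nn as nn

class PCT(nn.Module):
    def __init__(self, hidden_dim):
        super().__init__()
        self.conv1 = nn.Conv1d(hidden_dim, hidden_dim, kernel_size=1)
        self.conv2 = nn.Conv1d(hidden_dim, hidden_dim, kernel_size=1)
        self.elu = nn.ELU()

    def forward(self, h):
        # h: (batch, seq_len, hidden_dim); Conv1d expects (batch, channels, seq_len)
        x = h.transpose(1, 2)
        x = self.conv2(self.elu(self.conv1(x)))
        return x.transpose(1, 2)

h = torch.randn(2, 20, 300)
print(PCT(300)(h).shape)  # torch.Size([2, 20, 300])
```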
Target-specific Attention Layer
After we obtain the introspective context representation INLINEFORM0 and the context-perceptive target representation INLINEFORM1 , we employ another MHA to obtain the target-specific context representation INLINEFORM2 by: DISPLAYFORM0
The multi-head attention function here also has its independent parameters.
Output Layer
We get the final representations of the previous outputs by average pooling, concatenate them as the final comprehensive representation INLINEFORM0 , and use a fully connected layer to project the concatenated vector into the space of the targeted INLINEFORM1 classes. DISPLAYFORM0
where INLINEFORM0 is the predicted sentiment polarity distribution, INLINEFORM1 and INLINEFORM2 are learnable parameters.
Regularization and Model Training
Since neutral sentiment is a very fuzzy sentimental state, training samples labeled as neutral are unreliable. We employ a Label Smoothing Regularization (LSR) term in the loss function, which penalizes low-entropy output distributions BIBREF14 . LSR can reduce overfitting by preventing a network from assigning the full probability to each training example during training; it replaces the 0 and 1 targets for a classifier with smoothed values like 0.1 or 0.9.
For a training sample INLINEFORM0 with the original ground-truth label distribution INLINEFORM1 , we replace INLINEFORM2 with DISPLAYFORM0
where INLINEFORM0 is the prior distribution over labels , and INLINEFORM1 is the smoothing parameter. In this paper, we set the prior label distribution to be uniform INLINEFORM2 .
LSR is equivalent to the KL divergence between the prior label distribution INLINEFORM0 and the network's predicted distribution INLINEFORM1 . Formally, LSR term is defined as: DISPLAYFORM0
The objective function (loss function) to be optimized is the cross-entropy loss with INLINEFORM0 and INLINEFORM1 regularization, which is defined as: DISPLAYFORM0
where INLINEFORM0 is the ground truth represented as a one-hot vector, INLINEFORM1 is the predicted sentiment distribution vector given by the output layer, INLINEFORM2 is the coefficient for INLINEFORM3 regularization term, and INLINEFORM4 is the parameter set.
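A minimal sketch of the smoothed-target cross-entropy follows, assuming PyTorch, the uniform prior stated above, and the smoothing parameter of 0.2 used in the experimental settings; the L2 term can be added separately, for example via the optimizer's weight_decay.

```python
import torch
import torch.nn.functional as F

def label_smoothing_loss(logits, labels, epsilon=0.2):
    """Cross-entropy against targets smoothed toward a uniform prior over labels."""
    n_classes = logits.shape[-1]
    one_hot = F.one_hot(labels, n_classes).float()
    q_prime = (1.0 - epsilon) * one_hot + epsilon / n_classes  # smoothed target distribution
    log_probs = F.log_softmax(logits, dim=-1)
    return -(q_prime * log_probs).sum(dim=-1).mean()

logits = torch.randn(4, 3)            # (batch, polarity classes)
labels = torch.tensor([0, 2, 1, 1])
print(label_smoothing_loss(logits, labels))
```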
Datasets and Experimental Settings
We conduct experiments on three datasets: SemEval 2014 Task 4 BIBREF15 dataset composed of Restaurant reviews and Laptop reviews, and ACL 14 Twitter dataset gathered by Dong et al. dong2014adaptive. These datasets are labeled with three sentiment polarities: positive, neutral and negative. Table TABREF31 shows the number of training and test instances in each category.
Word embeddings in AEN-GloVe do not get updated in the learning process, but we fine-tune pre-trained BERT in AEN-BERT. Embedding dimension INLINEFORM0 is 300 for GloVe and is 768 for pre-trained BERT. Dimension of hidden states INLINEFORM1 is set to 300. The weights of our model are initialized with Glorot initialization BIBREF16 . During training, we set label smoothing parameter INLINEFORM2 to 0.2 BIBREF14 , the coefficient INLINEFORM3 of INLINEFORM4 regularization item is INLINEFORM5 and dropout rate is 0.1. Adam optimizer BIBREF17 is applied to update all the parameters. We adopt the Accuracy and Macro-F1 metrics to evaluate the performance of the model.
Model Comparisons
In order to comprehensively evaluate and analyze the performance of AEN-GloVe, we list 7 baseline models and design 4 ablations of AEN-GloVe. We also design a basic BERT-based model to evaluate the performance of AEN-BERT.
Non-RNN based baselines:
INLINEFORM0 Feature-based SVM BIBREF18 is a traditional support vector machine based model with extensive feature engineering.
INLINEFORM0 Rec-NN BIBREF0 firstly uses rules to transform the dependency tree and put the opinion target at the root, and then learns the sentence representation toward target via semantic composition using Recursive NNs.
INLINEFORM0 MemNet BIBREF19 uses multi-hops of attention layers on the context word embeddings for sentence representation to explicitly captures the importance of each context word.
RNN based baselines:
INLINEFORM0 TD-LSTM BIBREF1 extends LSTM by using two LSTM networks to model the left context with target and the right context with target respectively. The left and right target-dependent representations are concatenated for predicting the sentiment polarity of the target.
INLINEFORM0 ATAE-LSTM BIBREF3 strengthens the effect of target embeddings, which appends the target embeddings with each word embeddings and use LSTM with attention to get the final representation for classification.
INLINEFORM0 IAN BIBREF4 learns the representations of the target and context with two LSTMs and attentions interactively, which generates the representations for targets and contexts with respect to each other.
INLINEFORM0 RAM BIBREF5 strengthens MemNet by representing memory with bidirectional LSTM and using a gated recurrent unit network to combine the multiple attention outputs for sentence representation.
AEN-GloVe ablations:
INLINEFORM0 AEN-GloVe w/o PCT ablates PCT module.
INLINEFORM0 AEN-GloVe w/o MHA ablates MHA module.
INLINEFORM0 AEN-GloVe w/o LSR ablates label smoothing regularization.
INLINEFORM0 AEN-GloVe-BiLSTM replaces the attentional encoder layer with two bidirectional LSTM.
Basic BERT-based model:
INLINEFORM0 BERT-SPC feeds sequence “[CLS] + context + [SEP] + target + [SEP]” into the basic BERT model for sentence pair classification task.
Main Results
Table TABREF34 shows the performance comparison of AEN with other models. BERT-SPC and AEN-BERT obtain substantial accuracy improvements, which shows the power of pre-trained BERT on small-data task. The overall performance of AEN-BERT is better than BERT-SPC, which suggests that it is important to design a downstream network customized to a specific task. As the prior knowledge in the pre-trained BERT is not specific to any particular domain, further fine-tuning on the specific task is necessary for releasing the true power of BERT.
The overall performance of TD-LSTM is not good since it only makes a rough treatment of the target words. ATAE-LSTM, IAN and RAM are attention-based models; they stably exceed the TD-LSTM method on the Restaurant and Laptop datasets. RAM is better than the other RNN-based models, but it does not perform well on the Twitter dataset, which might be because bidirectional LSTM is not good at modeling small and ungrammatical text.
Feature-based SVM is still a competitive baseline, but it relies on manually designed features. Rec-NN gets the worst performance among all neural network baselines, as dependency parsing is not guaranteed to work well on ungrammatical short texts such as tweets and comments. Like AEN, MemNet also eschews recurrence, but its overall performance is not good since it does not model the hidden semantics of embeddings, and the result of the last attention is essentially a linear combination of word embeddings.
Model Analysis
As shown in Table TABREF34 , the AEN-GloVe ablations do not match the full AEN-GloVe in either accuracy or macro-F1. This result shows that all of these discarded components are crucial for a good performance. Comparing the results of AEN-GloVe and AEN-GloVe w/o LSR, we observe that the accuracy of AEN-GloVe w/o LSR drops significantly on all three datasets. We could attribute this phenomenon to the unreliability of the training samples with neutral sentiment. The overall performance of AEN-GloVe and AEN-GloVe-BiLSTM is relatively close, but AEN-GloVe performs better on the Restaurant dataset. More importantly, AEN-GloVe has fewer parameters and is easier to parallelize.
To figure out whether the proposed AEN-GloVe is a lightweight alternative to recurrent models, we study the model size of each model on the Restaurant dataset. Statistical results are reported in Table TABREF37 . We implement all the compared models based on the same source code infrastructure, use the same hyperparameters, and run them on the same GPU.
RNN-based and BERT-based models indeed have larger model sizes. ATAE-LSTM, IAN, RAM, and AEN-GloVe-BiLSTM are all attention-based RNN models; memory optimization for these models is more difficult, as the encoded hidden states must be kept in memory simultaneously in order to perform the attention mechanisms. MemNet has the lowest model size as it only has one shared attention layer and two linear layers, and it does not calculate hidden states of word embeddings. AEN-GloVe is the second most lightweight model, since it uses somewhat more parameters than MemNet to model the hidden states of sequences. As a comparison, the model size of AEN-GloVe-BiLSTM is more than twice that of AEN-GloVe, but this does not bring any performance improvement.
Conclusion
In this work, we propose an attentional encoder network for the targeted sentiment classification task, which employs attention-based encoders for the modeling between context and target. We raise the label unreliability issue and add a label smoothing regularization to encourage the model to be less confident with fuzzy labels. We also apply pre-trained BERT to this task and obtain new state-of-the-art results. Experiments and analysis demonstrate the effectiveness and light weight of the proposed model. | overall architecture of the proposed Attentional Encoder Network (AEN), which mainly consists of an embedding layer, an attentional encoder layer, a target-specific attention layer, and an output layer. |
a4e66e842be1438e5cd8d7cb2a2c589f494aee27 | a4e66e842be1438e5cd8d7cb2a2c589f494aee27_0 | Q: Which tested technique was the worst performer?
Text: Introduction
Humans experience a variety of complex emotions in daily life. These emotions are heavily reflected in our language, in both spoken and written forms.
Many recent advances in natural language processing on emotions have focused on product reviews BIBREF0 and tweets BIBREF1, BIBREF2. These datasets are often limited in length (e.g. by the number of words in tweets), purpose (e.g. product reviews), or emotional spectrum (e.g. binary classification).
Character dialogues and narratives in storytelling usually carry strong emotions. A memorable story is often one in which the emotional journey of the characters resonates with the reader. Indeed, emotion is one of the most important aspects of narratives. In order to characterize narrative emotions properly, we must move beyond binary constraints (e.g. good or bad, happy or sad).
In this paper, we introduce the Dataset for Emotions of Narrative Sequences (DENS) for emotion analysis, consisting of passages from long-form fictional narratives from both classic literature and modern stories in English. The data samples consist of self-contained passages that span several sentences and a variety of subjects. Each sample is annotated by using one of 9 classes and an indicator for annotator agreement.
Background
Using the categorical basic emotion model BIBREF3, BIBREF4, BIBREF5 studied creating lexicons from tweets for use in emotion analysis. Recently, BIBREF1, BIBREF6 and BIBREF2 proposed shared-tasks for multi-class emotion analysis based on tweets.
Fewer works have been reported on understanding emotions in narratives. Emotional Arc BIBREF7 is one recent advance in this direction. The work used lexicons and unsupervised learning methods based on unlabelled passages from titles in Project Gutenberg.
For labelled datasets on narratives, BIBREF8 provided a sentence-level annotated corpus of children's stories and BIBREF9 provided phrase-level annotations on selected Project Gutenberg titles.
To the best of our knowledge, the dataset in this work is the first to provide multi-class emotion labels on passages, selected from both Project Gutenberg and modern narratives. The dataset is available upon request for non-commercial, research only purposes.
Dataset
In this section, we describe the process used to collect and annotate the dataset.
Dataset ::: Plutchik’s Wheel of Emotions
The dataset is annotated based on a modified Plutchik’s wheel of emotions.
The original Plutchik’s wheel consists of 8 primary emotions: Joy, Sadness, Anger, Fear, Anticipation, Surprise, Trust, Disgust. In addition, more complex emotions can be formed by combining two basic emotions. For example, Love is defined as a combination of Joy and Trust (Fig. 1).
The intensity of an emotion is also captured in Plutchik's wheel. For example, the primary emotion of Anger can vary between Annoyance (mild) and Rage (intense).
We conducted an initial survey based on 100 stories with a significant fraction sampled from the romance genre. We asked readers to identify the major emotion exhibited in each story from a choice of the original 8 primary emotions.
We found that readers have significant difficulty in identifying Trust as an emotion associated with romantic stories. Hence, we modified our annotation scheme by removing Trust and adding Love. We also added the Neutral category to denote passages that do not exhibit any emotional content.
The final annotation categories for the dataset are: Joy, Sadness, Anger, Fear, Anticipation, Surprise, Love, Disgust, Neutral.
Dataset ::: Passage Selection
We selected both classic and modern narratives in English for this dataset. The modern narratives were sampled based on popularity from Wattpad. We parsed selected narratives into passages, where a passage is considered to be eligible for annotation if it contained between 40 and 200 tokens.
In long-form narratives, many non-conversational passages are intended for transition or scene introduction, and may not carry any emotion. We divided the eligible passages into two parts, and one part was pruned using selected emotion-rich but ambiguous lexicons such as cry, punch, kiss, etc. Then we mixed this pruned part with the unpruned part for annotation in order to reduce the number of neutral passages. See Appendix SECREF25 for the lexicons used.
Dataset ::: Mechanical Turk (MTurk)
MTurk was set up using the standard sentiment template, and the crowd annotators were instructed to `pick the best/major emotion embodied in the passage'.
We further provided instructions to clarify the intensity of an emotion, such as: “Rage/Annoyance is a form of Anger”, “Serenity/Ecstasy is a form of Joy”, and “Love includes Romantic/Family/Friendship”, along with sample passages.
We required all annotators to have a `master' MTurk qualification. Each passage was labelled by 3 unique annotators. Only passages with a majority agreement between annotators were accepted as valid. This is equivalent to a Fleiss's $\kappa $ score of greater than $0.4$.
For passages without majority agreement between annotators, we consolidated their labels using in-house data annotators who are experts in narrative content. A passage is accepted as valid if the in-house annotator's label matched any one of the MTurk annotators' labels. The remaining passages are discarded. We provide the fraction of annotator agreement for each label in the dataset.
Though passages may lose some emotional context when read independently of the complete narrative, we believe annotator agreement on our dataset supports the assertion that small excerpts can still convey coherent emotions.
During the annotation process, several annotators suggested that we include additional emotions such as confused, pain, and jealousy, which are common to narratives. As they were not part of the original Plutchik’s wheel, we decided not to include them. An interesting future direction is to study the relationship between emotions such as ‘pain versus sadness’ or ‘confused versus surprise’ and improve the emotion model for narratives.
Dataset ::: Dataset Statistics
The dataset contains a total of 9710 passages, with an average of 6.24 sentences per passage, 16.16 words per sentence, and an average length of 86 words.
The vocabulary size is 28K (when lowercased). It contains over 1600 unique titles across multiple categories, including 88 titles (1520 passages) from Project Gutenberg. All of the modern narratives were written after the year 2000, with a notable amount of themes in coming-of-age, strong-female-lead, and LGBTQ+. The genre distribution is listed in Table TABREF8.
In the final dataset, 21.0% of the data has consensus between all annotators, 73.5% has majority agreement, and 5.48% has labels assigned after consultation with in-house annotators.
The distribution of data points over labels with top lexicons (lower-cased, normalized) is shown in Table TABREF9. Note that the Disgust category is very small and should be discarded. Furthermore, we suspect that the data labelled as Surprise may be noisier than other categories and should be discarded as well.
Table TABREF10 shows a few examples of labelled data from classic titles. More examples can be found in Table TABREF26 in the Appendix SECREF27.
Benchmarks
We performed benchmark experiments on the dataset using several different algorithms. In all experiments, we have discarded the data labelled with Surprise and Disgust.
We pre-processed the data by using the SpaCy pipeline. We masked out named entities with entity-type specific placeholders to reduce the chance of benchmark models utilizing named entities as a basis for classification.
Benchmark results are shown in Table TABREF17. The dataset is approximately balanced after discarding the Surprise and Disgust classes. We report the average micro-F1 scores, with 5-fold cross validation for each technique.
We provide a brief overview of each benchmark experiment below. Among all of the benchmarks, Bidirectional Encoder Representations from Transformers (BERT) BIBREF11 achieved the best performance with a 0.604 micro-F1 score.
Overall, we observed that deep-learning based techniques performed better than lexical based methods. This suggests that a method which attends to context and themes could do well on the dataset.
Benchmarks ::: Bag-of-Words-based Benchmarks
We computed bag-of-words-based benchmarks using the following methods (a minimal sketch of the first configuration is given after the list):
Classification with TF-IDF + Linear SVM (TF-IDF + SVM)
Classification with Depeche++ Emotion lexicons BIBREF12 + Linear SVM (Depeche + SVM)
Classification with NRC Emotion lexicons BIBREF13, BIBREF14 + Linear SVM (NRC + SVM)
Combination of TF-IDF and NRC Emotion lexicons (TF-NRC + SVM)
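As referenced above, here is a minimal sketch of the first configuration (TF-IDF features with a linear SVM), assuming scikit-learn; the passages and labels below are placeholders, not items from the dataset.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Placeholder passages and labels for illustration only.
passages = ["He wept quietly by the window.", "She laughed and pulled him into a hug."]
labels = ["Sadness", "Joy"]

clf = make_pipeline(TfidfVectorizer(lowercase=True), LinearSVC())
clf.fit(passages, labels)
print(clf.predict(["Tears streamed down his face."]))

# With the full dataset, the reported 5-fold cross-validated micro-F1 would be computed as:
# from sklearn.model_selection import cross_val_score
# scores = cross_val_score(clf, all_passages, all_labels, cv=5, scoring="f1_micro")
```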
Benchmarks ::: Doc2Vec + SVM
We also used simple classification models with learned embeddings. We trained a Doc2Vec model BIBREF15 using the dataset and used the embedding document vectors as features for a linear SVM classifier.
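A sketch of this configuration, assuming gensim (version 4 or later, where document vectors live under .dv) and scikit-learn; the tokenized passages, labels, and hyperparameters are illustrative.

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.svm import LinearSVC

# Illustrative tokenized passages and labels.
passages = [["he", "wept", "quietly"], ["she", "laughed", "with", "joy"]]
labels = ["Sadness", "Joy"]

docs = [TaggedDocument(words=w, tags=[i]) for i, w in enumerate(passages)]
d2v = Doc2Vec(docs, vector_size=50, min_count=1, epochs=20)

X = [d2v.dv[i] for i in range(len(passages))]  # learned document vectors as features
clf = LinearSVC().fit(X, labels)
print(clf.predict([d2v.infer_vector(["tears", "fell"])]))
```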
Benchmarks ::: Hierarchical RNN
For this benchmark, we considered a Hierarchical RNN, following BIBREF16. We used two BiLSTMs BIBREF17 with 256 units each to model sentences and documents. The tokens of a sentence were processed independently of other sentence tokens. For each direction in the token-level BiLSTM, the last outputs were concatenated and fed into the sentence-level BiLSTM as inputs.
The outputs of the BiLSTM were connected to 2 dense layers with 256 ReLU units and a Softmax layer. We initialized tokens with publicly available embeddings trained with GloVe BIBREF18. Sentence boundaries were provided by SpaCy. Dropout was applied to the dense hidden layers during training.
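A compact sketch of this hierarchy, assuming PyTorch and a single passage per forward pass; the GloVe initialization is noted but omitted, and any layer sizes beyond those stated above are assumptions.

```python
import torch
import torch.nn as nn

class HierarchicalRNN(nn.Module):
    """Token-level BiLSTM -> sentence vectors -> sentence-level BiLSTM -> dense layers."""
    def __init__(self, vocab_size, emb_dim=300, hidden=256, n_classes=7):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)  # would be initialized from GloVe
        self.word_lstm = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.sent_lstm = nn.LSTM(2 * hidden, hidden, bidirectional=True, batch_first=True)
        self.head = nn.Sequential(nn.Linear(2 * hidden, 256), nn.ReLU(), nn.Dropout(0.5),
                                  nn.Linear(256, 256), nn.ReLU(), nn.Dropout(0.5),
                                  nn.Linear(256, n_classes))

    def forward(self, passage):
        # passage: (n_sentences, n_tokens) of token ids for one passage
        _, (h_w, _) = self.word_lstm(self.emb(passage))
        sent_vecs = torch.cat([h_w[0], h_w[1]], dim=-1)   # concat last outputs per direction
        _, (h_s, _) = self.sent_lstm(sent_vecs.unsqueeze(0))
        doc_vec = torch.cat([h_s[0], h_s[1]], dim=-1)
        return self.head(doc_vec)                          # (1, n_classes) logits

model = HierarchicalRNN(vocab_size=28000)
print(model(torch.randint(0, 28000, (6, 16))).shape)  # torch.Size([1, 7])
```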
Benchmarks ::: Bi-directional RNN and Self-Attention (BiRNN + Self-Attention)
One challenge with RNN-based solutions for text classification is finding the best way to combine word-level representations into higher-level representations.
Self-attention BIBREF19, BIBREF20, BIBREF21 has been adapted to text classification, providing improved interpretability and performance. We used BIBREF20 as the basis of this benchmark.
The benchmark used a layered Bi-directional RNN (60 units) with GRU cells and a dense layer. Both self-attention layers were 60 units in size and cross-entropy was used as the cost function.
Note that we have omitted the orthogonal regularizer term, since this dataset is relatively small compared to the traditional datasets used for training such a model. We did not observe any significant performance gain while using the regularizer term in our experiments.
Benchmarks ::: ELMo embedding and Bi-directional RNN (ELMo + BiRNN)
Deep Contextualized Word Representations (ELMo) BIBREF22 have shown recent success in a number of NLP tasks. The unsupervised nature of the language model allows it to utilize a large amount of available unlabelled data in order to learn better representations of words.
We used the pre-trained ELMo model (v2) available on Tensorhub for this benchmark. We fed the word embeddings of ELMo as input into a one layer Bi-directional RNN (16 units) with GRU cells (with dropout) and a dense layer. Cross-entropy was used as the cost function.
Benchmarks ::: Fine-tuned BERT
Bidirectional Encoder Representations from Transformers (BERT) BIBREF11 has achieved state-of-the-art results on several NLP tasks, including sentence classification.
We used the fine-tuning procedure outlined in the original work to adapt the pre-trained uncased BERT$_\textrm {{\scriptsize LARGE}}$ to a multi-class passage classification task. This technique achieved the best result among our benchmarks, with an average micro-F1 score of 60.4%.
Conclusion
We introduce DENS, a dataset for multi-class emotion analysis from long-form narratives in English. We provide a number of benchmark results based on models ranging from bag-of-word models to methods based on pre-trained language models (ELMo and BERT).
Our benchmark results demonstrate that this dataset provides a novel challenge in emotion analysis. The results also demonstrate that attention-based models could significantly improve performance on classification tasks such as emotion analysis.
Interesting future directions for this work include: 1. incorporating common-sense knowledge into emotion analysis to capture semantic context and 2. using few-shot learning to bootstrap and improve performance of underrepresented emotions.
Finally, as narrative passages often involve interactions between multiple emotions, one avenue for future datasets could be to focus on the multi-emotion complexities of human language and their contextual interactions.
Appendices ::: Sample Data
Table TABREF26 shows sample passages from classic titles with corresponding labels. | Depeche + SVM |
cb78e280e3340b786e81636431834b75824568c3 | cb78e280e3340b786e81636431834b75824568c3_0 | Q: How many emotions do they look at?
Text: Introduction
Humans experience a variety of complex emotions in daily life. These emotions are heavily reflected in our language, in both spoken and written forms.
Many recent advances in natural language processing on emotions have focused on product reviews BIBREF0 and tweets BIBREF1, BIBREF2. These datasets are often limited in length (e.g. by the number of words in tweets), purpose (e.g. product reviews), or emotional spectrum (e.g. binary classification).
Character dialogues and narratives in storytelling usually carry strong emotions. A memorable story is often one in which the emotional journey of the characters resonates with the reader. Indeed, emotion is one of the most important aspects of narratives. In order to characterize narrative emotions properly, we must move beyond binary constraints (e.g. good or bad, happy or sad).
In this paper, we introduce the Dataset for Emotions of Narrative Sequences (DENS) for emotion analysis, consisting of passages from long-form fictional narratives from both classic literature and modern stories in English. The data samples consist of self-contained passages that span several sentences and a variety of subjects. Each sample is annotated by using one of 9 classes and an indicator for annotator agreement.
Background
Using the categorical basic emotion model BIBREF3, BIBREF4, BIBREF5 studied creating lexicons from tweets for use in emotion analysis. Recently, BIBREF1, BIBREF6 and BIBREF2 proposed shared-tasks for multi-class emotion analysis based on tweets.
Fewer works have been reported on understanding emotions in narratives. Emotional Arc BIBREF7 is one recent advance in this direction. The work used lexicons and unsupervised learning methods based on unlabelled passages from titles in Project Gutenberg.
For labelled datasets on narratives, BIBREF8 provided a sentence-level annotated corpus of children's stories and BIBREF9 provided phrase-level annotations on selected Project Gutenberg titles.
To the best of our knowledge, the dataset in this work is the first to provide multi-class emotion labels on passages, selected from both Project Gutenberg and modern narratives. The dataset is available upon request for non-commercial, research only purposes.
Dataset
In this section, we describe the process used to collect and annotate the dataset.
Dataset ::: Plutchik’s Wheel of Emotions
The dataset is annotated based on a modified Plutchik’s wheel of emotions.
The original Plutchik’s wheel consists of 8 primary emotions: Joy, Sadness, Anger, Fear, Anticipation, Surprise, Trust, Disgust. In addition, more complex emotions can be formed by combining two basic emotions. For example, Love is defined as a combination of Joy and Trust (Fig. 1).
The intensity of an emotion is also captured in Plutchik's wheel. For example, the primary emotion of Anger can vary between Annoyance (mild) and Rage (intense).
We conducted an initial survey based on 100 stories with a significant fraction sampled from the romance genre. We asked readers to identify the major emotion exhibited in each story from a choice of the original 8 primary emotions.
We found that readers have significant difficulty in identifying Trust as an emotion associated with romantic stories. Hence, we modified our annotation scheme by removing Trust and adding Love. We also added the Neutral category to denote passages that do not exhibit any emotional content.
The final annotation categories for the dataset are: Joy, Sadness, Anger, Fear, Anticipation, Surprise, Love, Disgust, Neutral.
Dataset ::: Passage Selection
We selected both classic and modern narratives in English for this dataset. The modern narratives were sampled based on popularity from Wattpad. We parsed selected narratives into passages, where a passage is considered to be eligible for annotation if it contained between 40 and 200 tokens.
In long-form narratives, many non-conversational passages are intended for transition or scene introduction, and may not carry any emotion. We divided the eligible passages into two parts, and one part was pruned using selected emotion-rich but ambiguous lexicons such as cry, punch, kiss, etc. Then we mixed this pruned part with the unpruned part for annotation in order to reduce the number of neutral passages. See Appendix SECREF25 for the lexicons used.
Dataset ::: Mechanical Turk (MTurk)
MTurk was set up using the standard sentiment template, and the crowd annotators were instructed to `pick the best/major emotion embodied in the passage'.
We further provided instructions to clarify the intensity of an emotion, such as: “Rage/Annoyance is a form of Anger”, “Serenity/Ecstasy is a form of Joy”, and “Love includes Romantic/Family/Friendship”, along with sample passages.
We required all annotators to have a `master' MTurk qualification. Each passage was labelled by 3 unique annotators. Only passages with a majority agreement between annotators were accepted as valid. This is equivalent to a Fleiss's $\kappa $ score of greater than $0.4$.
For passages without majority agreement between annotators, we consolidated their labels using in-house data annotators who are experts in narrative content. A passage is accepted as valid if the in-house annotator's label matched any one of the MTurk annotators' labels. The remaining passages are discarded. We provide the fraction of annotator agreement for each label in the dataset.
Though passages may lose some emotional context when read independently of the complete narrative, we believe annotator agreement on our dataset supports the assertion that small excerpts can still convey coherent emotions.
During the annotation process, several annotators suggested that we include additional emotions such as confused, pain, and jealousy, which are common to narratives. As they were not part of the original Plutchik’s wheel, we decided not to include them. An interesting future direction is to study the relationship between emotions such as ‘pain versus sadness’ or ‘confused versus surprise’ and improve the emotion model for narratives.
Dataset ::: Dataset Statistics
The dataset contains a total of 9710 passages, with an average of 6.24 sentences per passage, 16.16 words per sentence, and an average length of 86 words.
The vocabulary size is 28K (when lowercased). It contains over 1600 unique titles across multiple categories, including 88 titles (1520 passages) from Project Gutenberg. All of the modern narratives were written after the year 2000, with a notable amount of themes in coming-of-age, strong-female-lead, and LGBTQ+. The genre distribution is listed in Table TABREF8.
In the final dataset, 21.0% of the data has consensus between all annotators, 73.5% has majority agreement, and 5.48% has labels assigned after consultation with in-house annotators.
The distribution of data points over labels with top lexicons (lower-cased, normalized) is shown in Table TABREF9. Note that the Disgust category is very small and should be discarded. Furthermore, we suspect that the data labelled as Surprise may be noisier than other categories and should be discarded as well.
Table TABREF10 shows a few examples of labelled data from classic titles. More examples can be found in Table TABREF26 in the Appendix SECREF27.
Benchmarks
We performed benchmark experiments on the dataset using several different algorithms. In all experiments, we have discarded the data labelled with Surprise and Disgust.
We pre-processed the data by using the SpaCy pipeline. We masked out named entities with entity-type specific placeholders to reduce the chance of benchmark models utilizing named entities as a basis for classification.
Benchmark results are shown in Table TABREF17. The dataset is approximately balanced after discarding the Surprise and Disgust classes. We report the average micro-F1 scores, with 5-fold cross validation for each technique.
We provide a brief overview of each benchmark experiment below. Among all of the benchmarks, Bidirectional Encoder Representations from Transformers (BERT) BIBREF11 achieved the best performance with a 0.604 micro-F1 score.
Overall, we observed that deep-learning based techniques performed better than lexical based methods. This suggests that a method which attends to context and themes could do well on the dataset.
Benchmarks ::: Bag-of-Words-based Benchmarks
We computed bag-of-words-based benchmarks using the following methods:
Classification with TF-IDF + Linear SVM (TF-IDF + SVM)
Classification with Depeche++ Emotion lexicons BIBREF12 + Linear SVM (Depeche + SVM)
Classification with NRC Emotion lexicons BIBREF13, BIBREF14 + Linear SVM (NRC + SVM)
Combination of TF-IDF and NRC Emotion lexicons (TF-NRC + SVM)
Benchmarks ::: Doc2Vec + SVM
We also used simple classification models with learned embeddings. We trained a Doc2Vec model BIBREF15 using the dataset and used the embedding document vectors as features for a linear SVM classifier.
Benchmarks ::: Hierarchical RNN
For this benchmark, we considered a Hierarchical RNN, following BIBREF16. We used two BiLSTMs BIBREF17 with 256 units each to model sentences and documents. The tokens of a sentence were processed independently of other sentence tokens. For each direction in the token-level BiLSTM, the last outputs were concatenated and fed into the sentence-level BiLSTM as inputs.
The outputs of the BiLSTM were connected to 2 dense layers with 256 ReLU units and a Softmax layer. We initialized tokens with publicly available embeddings trained with GloVe BIBREF18. Sentence boundaries were provided by SpaCy. Dropout was applied to the dense hidden layers during training.
Benchmarks ::: Bi-directional RNN and Self-Attention (BiRNN + Self-Attention)
One challenge with RNN-based solutions for text classification is finding the best way to combine word-level representations into higher-level representations.
Self-attention BIBREF19, BIBREF20, BIBREF21 has been adapted to text classification, providing improved interpretability and performance. We used BIBREF20 as the basis of this benchmark.
The benchmark used a layered Bi-directional RNN (60 units) with GRU cells and a dense layer. Both self-attention layers were 60 units in size and cross-entropy was used as the cost function.
Note that we have omitted the orthogonal regularizer term, since this dataset is relatively small compared to the traditional datasets used for training such a model. We did not observe any significant performance gain while using the regularizer term in our experiments.
Benchmarks ::: ELMo embedding and Bi-directional RNN (ELMo + BiRNN)
Deep Contextualized Word Representations (ELMo) BIBREF22 have shown recent success in a number of NLP tasks. The unsupervised nature of the language model allows it to utilize a large amount of available unlabelled data in order to learn better representations of words.
We used the pre-trained ELMo model (v2) available on Tensorhub for this benchmark. We fed the word embeddings of ELMo as input into a one layer Bi-directional RNN (16 units) with GRU cells (with dropout) and a dense layer. Cross-entropy was used as the cost function.
Benchmarks ::: Fine-tuned BERT
Bidirectional Encoder Representations from Transformers (BERT) BIBREF11 has achieved state-of-the-art results on several NLP tasks, including sentence classification.
We used the fine-tuning procedure outlined in the original work to adapt the pre-trained uncased BERT$_\textrm {{\scriptsize LARGE}}$ to a multi-class passage classification task. This technique achieved the best result among our benchmarks, with an average micro-F1 score of 60.4%.
Conclusion
We introduce DENS, a dataset for multi-class emotion analysis from long-form narratives in English. We provide a number of benchmark results based on models ranging from bag-of-word models to methods based on pre-trained language models (ELMo and BERT).
Our benchmark results demonstrate that this dataset provides a novel challenge in emotion analysis. The results also demonstrate that attention-based models could significantly improve performance on classification tasks such as emotion analysis.
Interesting future directions for this work include: 1. incorporating common-sense knowledge into emotion analysis to capture semantic context and 2. using few-shot learning to bootstrap and improve performance of underrepresented emotions.
Finally, as narrative passages often involve interactions between multiple emotions, one avenue for future datasets could be to focus on the multi-emotion complexities of human language and their contextual interactions.
Appendices ::: Sample Data
Table TABREF26 shows sample passages from classic titles with corresponding labels. | 9 |
2941874356e98eb2832ba22eae9cb08ec8ce0308 | 2941874356e98eb2832ba22eae9cb08ec8ce0308_0 | Q: What are the baseline benchmarks?
Text: Introduction
Humans experience a variety of complex emotions in daily life. These emotions are heavily reflected in our language, in both spoken and written forms.
Many recent advances in natural language processing on emotions have focused on product reviews BIBREF0 and tweets BIBREF1, BIBREF2. These datasets are often limited in length (e.g. by the number of words in tweets), purpose (e.g. product reviews), or emotional spectrum (e.g. binary classification).
Character dialogues and narratives in storytelling usually carry strong emotions. A memorable story is often one in which the emotional journey of the characters resonates with the reader. Indeed, emotion is one of the most important aspects of narratives. In order to characterize narrative emotions properly, we must move beyond binary constraints (e.g. good or bad, happy or sad).
In this paper, we introduce the Dataset for Emotions of Narrative Sequences (DENS) for emotion analysis, consisting of passages from long-form fictional narratives from both classic literature and modern stories in English. The data samples consist of self-contained passages that span several sentences and a variety of subjects. Each sample is annotated by using one of 9 classes and an indicator for annotator agreement.
Background
Using the categorical basic emotion model BIBREF3, BIBREF4, BIBREF5 studied creating lexicons from tweets for use in emotion analysis. Recently, BIBREF1, BIBREF6 and BIBREF2 proposed shared-tasks for multi-class emotion analysis based on tweets.
Fewer works have been reported on understanding emotions in narratives. Emotional Arc BIBREF7 is one recent advance in this direction. The work used lexicons and unsupervised learning methods based on unlabelled passages from titles in Project Gutenberg.
For labelled datasets on narratives, BIBREF8 provided a sentence-level annotated corpus of children's stories and BIBREF9 provided phrase-level annotations on selected Project Gutenberg titles.
To the best of our knowledge, the dataset in this work is the first to provide multi-class emotion labels on passages, selected from both Project Gutenberg and modern narratives. The dataset is available upon request for non-commercial, research only purposes.
Dataset
In this section, we describe the process used to collect and annotate the dataset.
Dataset ::: Plutchik’s Wheel of Emotions
The dataset is annotated based on a modified Plutchik’s wheel of emotions.
The original Plutchik’s wheel consists of 8 primary emotions: Joy, Sadness, Anger, Fear, Anticipation, Surprise, Trust, Disgust. In addition, more complex emotions can be formed by combining two basic emotions. For example, Love is defined as a combination of Joy and Trust (Fig. 1).
The intensity of an emotion is also captured in Plutchik's wheel. For example, the primary emotion of Anger can vary between Annoyance (mild) and Rage (intense).
We conducted an initial survey based on 100 stories with a significant fraction sampled from the romance genre. We asked readers to identify the major emotion exhibited in each story from a choice of the original 8 primary emotions.
We found that readers have significant difficulty in identifying Trust as an emotion associated with romantic stories. Hence, we modified our annotation scheme by removing Trust and adding Love. We also added the Neutral category to denote passages that do not exhibit any emotional content.
The final annotation categories for the dataset are: Joy, Sadness, Anger, Fear, Anticipation, Surprise, Love, Disgust, Neutral.
Dataset ::: Passage Selection
We selected both classic and modern narratives in English for this dataset. The modern narratives were sampled based on popularity from Wattpad. We parsed selected narratives into passages, where a passage is considered to be eligible for annotation if it contained between 40 and 200 tokens.
In long-form narratives, many non-conversational passages are intended for transition or scene introduction, and may not carry any emotion. We divided the eligible passages into two parts, and one part was pruned using selected emotion-rich but ambiguous lexicons such as cry, punch, kiss, etc. Then we mixed this pruned part with the unpruned part for annotation in order to reduce the number of neutral passages. See Appendix SECREF25 for the lexicons used.
Dataset ::: Mechanical Turk (MTurk)
MTurk was set up using the standard sentiment template, and the crowd annotators were instructed to `pick the best/major emotion embodied in the passage'.
We further provided instructions to clarify the intensity of an emotion, such as: “Rage/Annoyance is a form of Anger”, “Serenity/Ecstasy is a form of Joy”, and “Love includes Romantic/Family/Friendship”, along with sample passages.
We required all annotators to have a `master' MTurk qualification. Each passage was labelled by 3 unique annotators. Only passages with a majority agreement between annotators were accepted as valid. This is equivalent to a Fleiss's $\kappa $ score of greater than $0.4$.
For passages without majority agreement between annotators, we consolidated their labels using in-house data annotators who are experts in narrative content. A passage is accepted as valid if the in-house annotator's label matched any one of the MTurk annotators' labels. The remaining passages are discarded. We provide the fraction of annotator agreement for each label in the dataset.
Though passages may lose some emotional context when read independently of the complete narrative, we believe annotator agreement on our dataset supports the assertion that small excerpts can still convey coherent emotions.
During the annotation process, several annotators suggested that we include additional emotions such as confused, pain, and jealousy, which are common to narratives. As they were not part of the original Plutchik’s wheel, we decided not to include them. An interesting future direction is to study the relationship between emotions such as ‘pain versus sadness’ or ‘confused versus surprise’ and improve the emotion model for narratives.
Dataset ::: Dataset Statistics
The dataset contains a total of 9710 passages, with an average of 6.24 sentences per passage, 16.16 words per sentence, and an average length of 86 words.
The vocabulary size is 28K (when lowercased). It contains over 1600 unique titles across multiple categories, including 88 titles (1520 passages) from Project Gutenberg. All of the modern narratives were written after the year 2000, with a notable amount of themes in coming-of-age, strong-female-lead, and LGBTQ+. The genre distribution is listed in Table TABREF8.
In the final dataset, 21.0% of the data has consensus between all annotators, 73.5% has majority agreement, and 5.48% has labels assigned after consultation with in-house annotators.
The distribution of data points over labels with top lexicons (lower-cased, normalized) is shown in Table TABREF9. Note that the Disgust category is very small and should be discarded. Furthermore, we suspect that the data labelled as Surprise may be noisier than other categories and should be discarded as well.
Table TABREF10 shows a few examples of labelled data from classic titles. More examples can be found in Table TABREF26 in the Appendix SECREF27.
Benchmarks
We performed benchmark experiments on the dataset using several different algorithms. In all experiments, we have discarded the data labelled with Surprise and Disgust.
We pre-processed the data by using the SpaCy pipeline. We masked out named entities with entity-type specific placeholders to reduce the chance of benchmark models utilizing named entities as a basis for classification.
Benchmark results are shown in Table TABREF17. The dataset is approximately balanced after discarding the Surprise and Disgust classes. We report the average micro-F1 scores, with 5-fold cross validation for each technique.
We provide a brief overview of each benchmark experiment below. Among all of the benchmarks, Bidirectional Encoder Representations from Transformers (BERT) BIBREF11 achieved the best performance with a 0.604 micro-F1 score.
Overall, we observed that deep-learning based techniques performed better than lexical based methods. This suggests that a method which attends to context and themes could do well on the dataset.
Benchmarks ::: Bag-of-Words-based Benchmarks
We computed bag-of-words-based benchmarks using the following methods:
Classification with TF-IDF + Linear SVM (TF-IDF + SVM)
Classification with Depeche++ Emotion lexicons BIBREF12 + Linear SVM (Depeche + SVM)
Classification with NRC Emotion lexicons BIBREF13, BIBREF14 + Linear SVM (NRC + SVM)
Combination of TF-IDF and NRC Emotion lexicons (TF-NRC + SVM)
Benchmarks ::: Doc2Vec + SVM
We also used simple classification models with learned embeddings. We trained a Doc2Vec model BIBREF15 using the dataset and used the embedding document vectors as features for a linear SVM classifier.
Benchmarks ::: Hierarchical RNN
For this benchmark, we considered a Hierarchical RNN, following BIBREF16. We used two BiLSTMs BIBREF17 with 256 units each to model sentences and documents. The tokens of a sentence were processed independently of other sentence tokens. For each direction in the token-level BiLSTM, the last outputs were concatenated and fed into the sentence-level BiLSTM as inputs.
The outputs of the BiLSTM were connected to 2 dense layers with 256 ReLU units and a Softmax layer. We initialized tokens with publicly available embeddings trained with GloVe BIBREF18. Sentence boundaries were provided by SpaCy. Dropout was applied to the dense hidden layers during training.
Benchmarks ::: Bi-directional RNN and Self-Attention (BiRNN + Self-Attention)
One challenge with RNN-based solutions for text classification is finding the best way to combine word-level representations into higher-level representations.
Self-attention BIBREF19, BIBREF20, BIBREF21 has been adapted to text classification, providing improved interpretability and performance. We used BIBREF20 as the basis of this benchmark.
The benchmark used a layered Bi-directional RNN (60 units) with GRU cells and a dense layer. Both self-attention layers were 60 units in size and cross-entropy was used as the cost function.
Note that we have omitted the orthogonal regularizer term, since this dataset is relatively small compared to the traditional datasets used for training such a model. We did not observe any significant performance gain while using the regularizer term in our experiments.
Benchmarks ::: ELMo embedding and Bi-directional RNN (ELMo + BiRNN)
Deep Contextualized Word Representations (ELMo) BIBREF22 have shown recent success in a number of NLP tasks. The unsupervised nature of the language model allows it to utilize a large amount of available unlabelled data in order to learn better representations of words.
We used the pre-trained ELMo model (v2) available on Tensorhub for this benchmark. We fed the word embeddings of ELMo as input into a one layer Bi-directional RNN (16 units) with GRU cells (with dropout) and a dense layer. Cross-entropy was used as the cost function.
Benchmarks ::: Fine-tuned BERT
Bidirectional Encoder Representations from Transformers (BERT) BIBREF11 has achieved state-of-the-art results on several NLP tasks, including sentence classification.
We used the fine-tuning procedure outlined in the original work to adapt the pre-trained uncased BERT$_\textrm {{\scriptsize LARGE}}$ to a multi-class passage classification task. This technique achieved the best result among our benchmarks, with an average micro-F1 score of 60.4%.
Conclusion
We introduce DENS, a dataset for multi-class emotion analysis from long-form narratives in English. We provide a number of benchmark results based on models ranging from bag-of-word models to methods based on pre-trained language models (ELMo and BERT).
Our benchmark results demonstrate that this dataset provides a novel challenge in emotion analysis. The results also demonstrate that attention-based models could significantly improve performance on classification tasks such as emotion analysis.
Interesting future directions for this work include: 1. incorporating common-sense knowledge into emotion analysis to capture semantic context and 2. using few-shot learning to bootstrap and improve performance of underrepresented emotions.
Finally, as narrative passages often involve interactions between multiple emotions, one avenue for future datasets could be to focus on the multi-emotion complexities of human language and their contextual interactions.
Appendices ::: Sample Data
Table TABREF26 shows sample passages from classic titles with corresponding labels. | TF-IDF + SVM, Depeche + SVM, NRC + SVM, TF-NRC + SVM, Doc2Vec + SVM, Hierarchical RNN, BiRNN + Self-Attention, ELMo + BiRNN, Fine-tuned BERT |
4e50e9965059899d15d3c3a0c0a2d73e0c5802a0 | 4e50e9965059899d15d3c3a0c0a2d73e0c5802a0_0 | Q: What is the size of this dataset?
Text: Introduction
Humans experience a variety of complex emotions in daily life. These emotions are heavily reflected in our language, in both spoken and written forms.
Many recent advances in natural language processing on emotions have focused on product reviews BIBREF0 and tweets BIBREF1, BIBREF2. These datasets are often limited in length (e.g. by the number of words in tweets), purpose (e.g. product reviews), or emotional spectrum (e.g. binary classification).
Character dialogues and narratives in storytelling usually carry strong emotions. A memorable story is often one in which the emotional journey of the characters resonates with the reader. Indeed, emotion is one of the most important aspects of narratives. In order to characterize narrative emotions properly, we must move beyond binary constraints (e.g. good or bad, happy or sad).
In this paper, we introduce the Dataset for Emotions of Narrative Sequences (DENS) for emotion analysis, consisting of passages from long-form fictional narratives from both classic literature and modern stories in English. The data samples consist of self-contained passages that span several sentences and a variety of subjects. Each sample is annotated by using one of 9 classes and an indicator for annotator agreement.
Background
Using the categorical basic emotion model BIBREF3, BIBREF4, BIBREF5 studied creating lexicons from tweets for use in emotion analysis. Recently, BIBREF1, BIBREF6 and BIBREF2 proposed shared-tasks for multi-class emotion analysis based on tweets.
Fewer works have been reported on understanding emotions in narratives. Emotional Arc BIBREF7 is one recent advance in this direction. The work used lexicons and unsupervised learning methods based on unlabelled passages from titles in Project Gutenberg.
For labelled datasets on narratives, BIBREF8 provided a sentence-level annotated corpus of children's stories and BIBREF9 provided phrase-level annotations on selected Project Gutenberg titles.
To the best of our knowledge, the dataset in this work is the first to provide multi-class emotion labels on passages, selected from both Project Gutenberg and modern narratives. The dataset is available upon request for non-commercial, research only purposes.
Dataset
In this section, we describe the process used to collect and annotate the dataset.
Dataset ::: Plutchik’s Wheel of Emotions
The dataset is annotated based on a modified Plutchik’s wheel of emotions.
The original Plutchik’s wheel consists of 8 primary emotions: Joy, Sadness, Anger, Fear, Anticipation, Surprise, Trust, Disgust. In addition, more complex emotions can be formed by combining two basic emotions. For example, Love is defined as a combination of Joy and Trust (Fig. 1).
The intensity of an emotion is also captured in Plutchik's wheel. For example, the primary emotion of Anger can vary between Annoyance (mild) and Rage (intense).
We conducted an initial survey based on 100 stories with a significant fraction sampled from the romance genre. We asked readers to identify the major emotion exhibited in each story from a choice of the original 8 primary emotions.
We found that readers have significant difficulty in identifying Trust as an emotion associated with romantic stories. Hence, we modified our annotation scheme by removing Trust and adding Love. We also added the Neutral category to denote passages that do not exhibit any emotional content.
The final annotation categories for the dataset are: Joy, Sadness, Anger, Fear, Anticipation, Surprise, Love, Disgust, Neutral.
Dataset ::: Passage Selection
We selected both classic and modern narratives in English for this dataset. The modern narratives were sampled based on popularity from Wattpad. We parsed selected narratives into passages, where a passage is considered to be eligible for annotation if it contained between 40 and 200 tokens.
In long-form narratives, many non-conversational passages are intended for transition or scene introduction, and may not carry any emotion. We divided the eligible passages into two parts, and one part was pruned using selected emotion-rich but ambiguous lexicons such as cry, punch, kiss, etc. Then we mixed this pruned part with the unpruned part for annotation in order to reduce the number of neutral passages. See Appendix SECREF25 for the lexicons used.
Dataset ::: Mechanical Turk (MTurk)
MTurk was set up using the standard sentiment template, and the crowd annotators were instructed to `pick the best/major emotion embodied in the passage'.
We further provided instructions to clarify the intensity of an emotion, such as: “Rage/Annoyance is a form of Anger”, “Serenity/Ecstasy is a form of Joy”, and “Love includes Romantic/Family/Friendship”, along with sample passages.
We required all annotators to have a `master' MTurk qualification. Each passage was labelled by 3 unique annotators. Only passages with a majority agreement between annotators were accepted as valid. This is equivalent to a Fleiss's $\kappa $ score of greater than $0.4$.
For passages without majority agreement between annotators, we consolidated their labels using in-house data annotators who are experts in narrative content. A passage is accepted as valid if the in-house annotator's label matched any one of the MTurk annotators' labels. The remaining passages are discarded. We provide the fraction of annotator agreement for each label in the dataset.
Though passages may lose some emotional context when read independently of the complete narrative, we believe annotator agreement on our dataset supports the assertion that small excerpts can still convey coherent emotions.
During the annotation process, several annotators suggested that we include additional emotions such as confused, pain, and jealousy, which are common in narratives. As they were not part of the original Plutchik’s wheel, we decided not to include them. An interesting future direction is to study the relationship between emotions such as ‘pain versus sadness’ or ‘confused versus surprise’ and improve the emotion model for narratives.
Dataset ::: Dataset Statistics
The dataset contains a total of 9710 passages, with an average of 6.24 sentences per passage, 16.16 words per sentence, and an average length of 86 words.
The vocabulary size is 28K (when lowercased). It contains over 1600 unique titles across multiple categories, including 88 titles (1520 passages) from Project Gutenberg. All of the modern narratives were written after the year 2000, with a notable number of themes in coming-of-age, strong-female-lead, and LGBTQ+. The genre distribution is listed in Table TABREF8.
In the final dataset, 21.0% of the data has consensus between all annotators, 73.5% has majority agreement, and 5.48% has labels assigned after consultation with in-house annotators.
The distribution of data points over labels with top lexicons (lower-cased, normalized) is shown in Table TABREF9. Note that the Disgust category is very small and should be discarded. Furthermore, we suspect that the data labelled as Surprise may be noisier than other categories and should be discarded as well.
Table TABREF10 shows a few examples of labelled data from classic titles. More examples can be found in Table TABREF26 in the Appendix SECREF27.
Benchmarks
We performed benchmark experiments on the dataset using several different algorithms. In all experiments, we have discarded the data labelled with Surprise and Disgust.
We pre-processed the data by using the SpaCy pipeline. We masked out named entities with entity-type specific placeholders to reduce the chance of benchmark models utilizing named entities as a basis for classification.
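As a rough illustration of this preprocessing step, the sketch below uses spaCy's small English model; the bracketed placeholder format is our own assumption rather than the one used for the benchmarks.

```python
import spacy

nlp = spacy.load("en_core_web_sm")

def mask_entities(text):
    """Replace named entities with entity-type specific placeholders, e.g. [PERSON]."""
    doc = nlp(text)
    pieces, last = [], 0
    for ent in doc.ents:
        pieces.append(text[last:ent.start_char])
        pieces.append(f"[{ent.label_}]")   # placeholder format is an assumption
        last = ent.end_char
    pieces.append(text[last:])
    return "".join(pieces)

print(mask_entities("Elizabeth met Mr. Darcy in London last spring."))
```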
Benchmark results are shown in Table TABREF17. The dataset is approximately balanced after discarding the Surprise and Disgust classes. We report the average micro-F1 scores, with 5-fold cross validation for each technique.
We provide a brief overview of each benchmark experiment below. Among all of the benchmarks, Bidirectional Encoder Representations from Transformers (BERT) BIBREF11 achieved the best performance with a 0.604 micro-F1 score.
Overall, we observed that deep-learning based techniques performed better than lexical based methods. This suggests that a method which attends to context and themes could do well on the dataset.
Benchmarks ::: Bag-of-Words-based Benchmarks
We computed bag-of-words-based benchmarks using the following methods:
Classification with TF-IDF + Linear SVM (TF-IDF + SVM)
Classification with Depeche++ Emotion lexicons BIBREF12 + Linear SVM (Depeche + SVM)
Classification with NRC Emotion lexicons BIBREF13, BIBREF14 + Linear SVM (NRC + SVM)
Combination of TF-IDF and NRC Emotion lexicons (TF-NRC + SVM)
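As a sketch of the first of these baselines (TF-IDF + SVM), the scikit-learn pipeline below mirrors the 5-fold micro-F1 evaluation protocol described above; the vectorizer settings and variable names are assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

def tfidf_svm_benchmark(passages, labels):
    """passages: list of passage strings; labels: list of emotion labels."""
    model = make_pipeline(TfidfVectorizer(lowercase=True), LinearSVC())
    # 5-fold cross-validation with micro-F1, matching the evaluation protocol above
    scores = cross_val_score(model, passages, labels, cv=5, scoring="f1_micro")
    return scores.mean()
```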
Benchmarks ::: Doc2Vec + SVM
We also used simple classification models with learned embeddings. We trained a Doc2Vec model BIBREF15 using the dataset and used the embedding document vectors as features for a linear SVM classifier.
Benchmarks ::: Hierarchical RNN
For this benchmark, we considered a Hierarchical RNN, following BIBREF16. We used two BiLSTMs BIBREF17 with 256 units each to model sentences and documents. The tokens of a sentence were processed independently of other sentence tokens. For each direction in the token-level BiLSTM, the last outputs were concatenated and fed into the sentence-level BiLSTM as inputs.
The outputs of the BiLSTM were connected to 2 dense layers with 256 ReLU units and a Softmax layer. We initialized tokens with publicly available embeddings trained with GloVe BIBREF18. Sentence boundaries were provided by SpaCy. Dropout was applied to the dense hidden layers during training.
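A PyTorch sketch of this hierarchical architecture for a single document is given below, assuming 7 output classes (after discarding Surprise and Disgust); the embedding size and any detail beyond what is stated above are assumptions, and the softmax is folded into the cross-entropy loss.

```python
import torch
import torch.nn as nn

class HierarchicalRNN(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hidden=256, n_classes=7, dropout=0.5):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)          # initialized from GloVe in practice
        self.word_lstm = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.sent_lstm = nn.LSTM(2 * hidden, hidden, bidirectional=True, batch_first=True)
        self.classifier = nn.Sequential(
            nn.Linear(2 * hidden, 256), nn.ReLU(), nn.Dropout(dropout),
            nn.Linear(256, 256), nn.ReLU(), nn.Dropout(dropout),
            nn.Linear(256, n_classes),
        )

    def forward(self, doc):  # doc: LongTensor [n_sentences, max_tokens], padded
        _, (h, _) = self.word_lstm(self.emb(doc))      # h: [2, n_sentences, hidden]
        sent_vecs = torch.cat([h[0], h[1]], dim=-1)    # last outputs of both directions
        _, (h, _) = self.sent_lstm(sent_vecs.unsqueeze(0))
        doc_vec = torch.cat([h[0], h[1]], dim=-1)      # [1, 2 * hidden]
        return self.classifier(doc_vec)                # logits; softmax applied in the loss
```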
Benchmarks ::: Bi-directional RNN and Self-Attention (BiRNN + Self-Attention)
One challenge with RNN-based solutions for text classification is finding the best way to combine word-level representations into higher-level representations.
Self-attention BIBREF19, BIBREF20, BIBREF21 has been adapted to text classification, providing improved interpretability and performance. We used BIBREF20 as the basis of this benchmark.
The benchmark used a layered Bi-directional RNN (60 units) with GRU cells and a dense layer. Both self-attention layers were 60 units in size and cross-entropy was used as the cost function.
Note that we have omitted the orthogonal regularizer term, since this dataset is relatively small compared to the traditional datasets used for training such a model. We did not observe any significant performance gain while using the regularizer term in our experiments.
Benchmarks ::: ELMo embedding and Bi-directional RNN (ELMo + BiRNN)
Deep Contextualized Word Representations (ELMo) BIBREF22 have shown recent success in a number of NLP tasks. The unsupervised nature of the language model allows it to utilize a large amount of available unlabelled data in order to learn better representations of words.
We used the pre-trained ELMo model (v2) available on Tensorhub for this benchmark. We fed the word embeddings of ELMo as input into a one layer Bi-directional RNN (16 units) with GRU cells (with dropout) and a dense layer. Cross-entropy was used as the cost function.
Benchmarks ::: Fine-tuned BERT
Bidirectional Encoder Representations from Transformers (BERT) BIBREF11 has achieved state-of-the-art results on several NLP tasks, including sentence classification.
We used the fine-tuning procedure outlined in the original work to adapt the pre-trained uncased BERT$_\textrm {{\scriptsize LARGE}}$ to a multi-class passage classification task. This technique achieved the best result among our benchmarks, with an average micro-F1 score of 60.4%.
Conclusion
We introduce DENS, a dataset for multi-class emotion analysis from long-form narratives in English. We provide a number of benchmark results based on models ranging from bag-of-word models to methods based on pre-trained language models (ELMo and BERT).
Our benchmark results demonstrate that this dataset provides a novel challenge in emotion analysis. The results also demonstrate that attention-based models could significantly improve performance on classification tasks such as emotion analysis.
Interesting future directions for this work include: 1. incorporating common-sense knowledge into emotion analysis to capture semantic context and 2. using few-shot learning to bootstrap and improve performance of underrepresented emotions.
Finally, as narrative passages often involve interactions between multiple emotions, one avenue for future datasets could be to focus on the multi-emotion complexities of human language and their contextual interactions.
Appendices ::: Sample Data
Table TABREF26 shows sample passages from classic titles with corresponding labels. | 9710 passages, with an average of 6.24 sentences per passage, 16.16 words per sentence, and an average length of 86 words |
67d8e50ddcc870db71c94ad0ad7f8a59a6c67ca6 | 67d8e50ddcc870db71c94ad0ad7f8a59a6c67ca6_0 | Q: How many annotators were there?
Text: Introduction
Humans experience a variety of complex emotions in daily life. These emotions are heavily reflected in our language, in both spoken and written forms.
Many recent advances in natural language processing on emotions have focused on product reviews BIBREF0 and tweets BIBREF1, BIBREF2. These datasets are often limited in length (e.g. by the number of words in tweets), purpose (e.g. product reviews), or emotional spectrum (e.g. binary classification).
Character dialogues and narratives in storytelling usually carry strong emotions. A memorable story is often one in which the emotional journey of the characters resonates with the reader. Indeed, emotion is one of the most important aspects of narratives. In order to characterize narrative emotions properly, we must move beyond binary constraints (e.g. good or bad, happy or sad).
In this paper, we introduce the Dataset for Emotions of Narrative Sequences (DENS) for emotion analysis, consisting of passages from long-form fictional narratives from both classic literature and modern stories in English. The data samples consist of self-contained passages that span several sentences and a variety of subjects. Each sample is annotated by using one of 9 classes and an indicator for annotator agreement.
Background
Using the categorical basic emotion model BIBREF3, BIBREF4, BIBREF5 studied creating lexicons from tweets for use in emotion analysis. Recently, BIBREF1, BIBREF6 and BIBREF2 proposed shared-tasks for multi-class emotion analysis based on tweets.
Fewer works have been reported on understanding emotions in narratives. Emotional Arc BIBREF7 is one recent advance in this direction. The work used lexicons and unsupervised learning methods based on unlabelled passages from titles in Project Gutenberg.
For labelled datasets on narratives, BIBREF8 provided a sentence-level annotated corpus of children's stories and BIBREF9 provided phrase-level annotations on selected Project Gutenberg titles.
To the best of our knowledge, the dataset in this work is the first to provide multi-class emotion labels on passages, selected from both Project Gutenberg and modern narratives. The dataset is available upon request for non-commercial, research only purposes.
Dataset
In this section, we describe the process used to collect and annotate the dataset.
Dataset ::: Plutchik’s Wheel of Emotions
The dataset is annotated based on a modified Plutchik’s wheel of emotions.
The original Plutchik’s wheel consists of 8 primary emotions: Joy, Sadness, Anger, Fear, Anticipation, Surprise, Trust, Disgust. In addition, more complex emotions can be formed by combining two basic emotions. For example, Love is defined as a combination of Joy and Trust (Fig. 1).
The intensity of an emotion is also captured in Plutchik's wheel. For example, the primary emotion of Anger can vary between Annoyance (mild) and Rage (intense).
We conducted an initial survey based on 100 stories with a significant fraction sampled from the romance genre. We asked readers to identify the major emotion exhibited in each story from a choice of the original 8 primary emotions.
We found that readers have significant difficulty in identifying Trust as an emotion associated with romantic stories. Hence, we modified our annotation scheme by removing Trust and adding Love. We also added the Neutral category to denote passages that do not exhibit any emotional content.
The final annotation categories for the dataset are: Joy, Sadness, Anger, Fear, Anticipation, Surprise, Love, Disgust, Neutral.
Dataset ::: Passage Selection
We selected both classic and modern narratives in English for this dataset. The modern narratives were sampled based on popularity from Wattpad. We parsed selected narratives into passages, where a passage is considered to be eligible for annotation if it contained between 40 and 200 tokens.
In long-form narratives, many non-conversational passages are intended for transition or scene introduction, and may not carry any emotion. We divided the eligible passages into two parts, and one part was pruned using selected emotion-rich but ambiguous lexicons such as cry, punch, kiss, etc. Then we mixed this pruned part with the unpruned part for annotation in order to reduce the number of neutral passages. See Appendix SECREF25 for the lexicons used.
Dataset ::: Mechanical Turk (MTurk)
MTurk was set up using the standard sentiment template, and the crowd annotators were instructed to `pick the best/major emotion embodied in the passage'.
We further provided instructions to clarify the intensity of an emotion, such as: “Rage/Annoyance is a form of Anger”, “Serenity/Ecstasy is a form of Joy”, and “Love includes Romantic/Family/Friendship”, along with sample passages.
We required all annotators to have a `master' MTurk qualification. Each passage was labelled by 3 unique annotators. Only passages with a majority agreement between annotators were accepted as valid. This is equivalent to a Fleiss's $\kappa $ score of greater than $0.4$.
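For reference, agreement at this threshold can be checked with a standard Fleiss's kappa implementation; the sketch below uses statsmodels on toy integer-coded labels (the ratings shown are purely illustrative, not data from this dataset).

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# rows = passages, columns = the 3 annotators; categories are integer-coded emotions
ratings = np.array([
    [0, 0, 6],
    [1, 1, 1],
    [2, 3, 2],
])
table, _ = aggregate_raters(ratings)   # -> (n_passages, n_categories) count table
print(fleiss_kappa(table))             # the acceptance bar above corresponds to > 0.4
```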
For passages without majority agreement between annotators, we consolidated their labels using in-house data annotators who are experts in narrative content. A passage is accepted as valid if the in-house annotator's label matched any one of the MTurk annotators' labels. The remaining passages are discarded. We provide the fraction of annotator agreement for each label in the dataset.
Though passages may lose some emotional context when read independently of the complete narrative, we believe annotator agreement on our dataset supports the assertion that small excerpts can still convey coherent emotions.
During the annotation process, several annotators suggested that we include additional emotions such as confused, pain, and jealousy, which are common in narratives. As they were not part of the original Plutchik’s wheel, we decided not to include them. An interesting future direction is to study the relationship between emotions such as ‘pain versus sadness’ or ‘confused versus surprise’ and improve the emotion model for narratives.
Dataset ::: Dataset Statistics
The dataset contains a total of 9710 passages, with an average of 6.24 sentences per passage, 16.16 words per sentence, and an average length of 86 words.
The vocabulary size is 28K (when lowercased). It contains over 1600 unique titles across multiple categories, including 88 titles (1520 passages) from Project Gutenberg. All of the modern narratives were written after the year 2000, with a notable number of themes in coming-of-age, strong-female-lead, and LGBTQ+. The genre distribution is listed in Table TABREF8.
In the final dataset, 21.0% of the data has consensus between all annotators, 73.5% has majority agreement, and 5.48% has labels assigned after consultation with in-house annotators.
The distribution of data points over labels with top lexicons (lower-cased, normalized) is shown in Table TABREF9. Note that the Disgust category is very small and should be discarded. Furthermore, we suspect that the data labelled as Surprise may be noisier than other categories and should be discarded as well.
Table TABREF10 shows a few examples of labelled data from classic titles. More examples can be found in Table TABREF26 in the Appendix SECREF27.
Benchmarks
We performed benchmark experiments on the dataset using several different algorithms. In all experiments, we have discarded the data labelled with Surprise and Disgust.
We pre-processed the data by using the SpaCy pipeline. We masked out named entities with entity-type specific placeholders to reduce the chance of benchmark models utilizing named entities as a basis for classification.
Benchmark results are shown in Table TABREF17. The dataset is approximately balanced after discarding the Surprise and Disgust classes. We report the average micro-F1 scores, with 5-fold cross validation for each technique.
We provide a brief overview of each benchmark experiment below. Among all of the benchmarks, Bidirectional Encoder Representations from Transformers (BERT) BIBREF11 achieved the best performance with a 0.604 micro-F1 score.
Overall, we observed that deep-learning based techniques performed better than lexical based methods. This suggests that a method which attends to context and themes could do well on the dataset.
Benchmarks ::: Bag-of-Words-based Benchmarks
We computed bag-of-words-based benchmarks using the following methods:
Classification with TF-IDF + Linear SVM (TF-IDF + SVM)
Classification with Depeche++ Emotion lexicons BIBREF12 + Linear SVM (Depeche + SVM)
Classification with NRC Emotion lexicons BIBREF13, BIBREF14 + Linear SVM (NRC + SVM)
Combination of TF-IDF and NRC Emotion lexicons (TF-NRC + SVM)
Benchmarks ::: Doc2Vec + SVM
We also used simple classification models with learned embeddings. We trained a Doc2Vec model BIBREF15 using the dataset and used the embedding document vectors as features for a linear SVM classifier.
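A sketch of this baseline with the gensim 4 Doc2Vec API and scikit-learn follows; the vector size, epoch count, and whitespace tokenization are assumptions rather than the settings used for the reported numbers.

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.svm import LinearSVC

def doc2vec_svm(passages, labels):
    """Train Doc2Vec on the passages, then a linear SVM on the document vectors."""
    corpus = [TaggedDocument(p.lower().split(), [i]) for i, p in enumerate(passages)]
    d2v = Doc2Vec(corpus, vector_size=100, epochs=20, min_count=2)
    features = [d2v.dv[i] for i in range(len(passages))]
    clf = LinearSVC().fit(features, labels)
    return d2v, clf
```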
Benchmarks ::: Hierarchical RNN
For this benchmark, we considered a Hierarchical RNN, following BIBREF16. We used two BiLSTMs BIBREF17 with 256 units each to model sentences and documents. The tokens of a sentence were processed independently of other sentence tokens. For each direction in the token-level BiLSTM, the last outputs were concatenated and fed into the sentence-level BiLSTM as inputs.
The outputs of the BiLSTM were connected to 2 dense layers with 256 ReLU units and a Softmax layer. We initialized tokens with publicly available embeddings trained with GloVe BIBREF18. Sentence boundaries were provided by SpaCy. Dropout was applied to the dense hidden layers during training.
Benchmarks ::: Bi-directional RNN and Self-Attention (BiRNN + Self-Attention)
One challenge with RNN-based solutions for text classification is finding the best way to combine word-level representations into higher-level representations.
Self-attention BIBREF19, BIBREF20, BIBREF21 has been adapted to text classification, providing improved interpretability and performance. We used BIBREF20 as the basis of this benchmark.
The benchmark used a layered Bi-directional RNN (60 units) with GRU cells and a dense layer. Both self-attention layers were 60 units in size and cross-entropy was used as the cost function.
Note that we have omitted the orthogonal regularizer term, since this dataset is relatively small compared to the traditional datasets used for training such a model. We did not observe any significant performance gain while using the regularizer term in our experiments.
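A PyTorch sketch in the spirit of this benchmark is shown below; only the 60-unit GRU and 60-unit attention layers are stated above, so the embedding size and the number of attention heads are assumptions.

```python
import torch
import torch.nn as nn

class SelfAttentiveBiGRU(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hidden=60, attn=60, heads=4, n_classes=7):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.gru = nn.GRU(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.w1 = nn.Linear(2 * hidden, attn, bias=False)   # first self-attention layer
        self.w2 = nn.Linear(attn, heads, bias=False)        # second self-attention layer
        self.out = nn.Linear(heads * 2 * hidden, n_classes)

    def forward(self, tokens):                      # tokens: LongTensor [batch, seq_len]
        h, _ = self.gru(self.emb(tokens))           # [batch, seq_len, 2 * hidden]
        a = torch.softmax(self.w2(torch.tanh(self.w1(h))), dim=1)  # [batch, seq_len, heads]
        m = torch.einsum("bsh,bsd->bhd", a, h)      # attention-weighted passage matrix
        return self.out(m.flatten(1))               # logits; cross-entropy as the cost function
```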
Benchmarks ::: ELMo embedding and Bi-directional RNN (ELMo + BiRNN)
Deep Contextualized Word Representations (ELMo) BIBREF22 have shown recent success in a number of NLP tasks. The unsupervised nature of the language model allows it to utilize a large amount of available unlabelled data in order to learn better representations of words.
We used the pre-trained ELMo model (v2) available on Tensorhub for this benchmark. We fed the word embeddings of ELMo as input into a one layer Bi-directional RNN (16 units) with GRU cells (with dropout) and a dense layer. Cross-entropy was used as the cost function.
Benchmarks ::: Fine-tuned BERT
Bidirectional Encoder Representations from Transformers (BERT) BIBREF11 has achieved state-of-the-art results on several NLP tasks, including sentence classification.
We used the fine-tuning procedure outlined in the original work to adapt the pre-trained uncased BERT$_\textrm {{\scriptsize LARGE}}$ to a multi-class passage classification task. This technique achieved the best result among our benchmarks, with an average micro-F1 score of 60.4%.
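The original work predates today's common toolkits, so as a stand-in rather than the authors' code, the sketch below shows the corresponding inference step with the Hugging Face transformers (v4) API; fine-tuning follows the standard sequence-classification recipe with cross-entropy.

```python
import torch
from transformers import BertForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-large-uncased")
model = BertForSequenceClassification.from_pretrained("bert-large-uncased", num_labels=7)

def classify(passage):
    inputs = tokenizer(passage, truncation=True, max_length=256, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return logits.argmax(dim=-1).item()   # index of the predicted emotion class
```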
Conclusion
We introduce DENS, a dataset for multi-class emotion analysis from long-form narratives in English. We provide a number of benchmark results based on models ranging from bag-of-word models to methods based on pre-trained language models (ELMo and BERT).
Our benchmark results demonstrate that this dataset provides a novel challenge in emotion analysis. The results also demonstrate that attention-based models could significantly improve performance on classification tasks such as emotion analysis.
Interesting future directions for this work include: 1. incorporating common-sense knowledge into emotion analysis to capture semantic context and 2. using few-shot learning to bootstrap and improve performance of underrepresented emotions.
Finally, as narrative passages often involve interactions between multiple emotions, one avenue for future datasets could be to focus on the multi-emotion complexities of human language and their contextual interactions.
Appendices ::: Sample Data
Table TABREF26 shows sample passages from classic titles with corresponding labels. | 3 |
aecb485ea7d501094e50ad022ade4f0c93088d80 | aecb485ea7d501094e50ad022ade4f0c93088d80_0 | Q: Can SCRF be used to pretrain the model?
Text: Introduction
State-of-the-art speech recognition accuracy has significantly improved over the past few years since the application of deep neural networks BIBREF0 , BIBREF1 . Recently, it has been shown that with the application of both neural network acoustic model and language model, an automatic speech recognizer can approach human-level accuracy on the Switchboard conversational speech recognition benchmark using around 2,000 hours of transcribed data BIBREF2 . While progress is mainly driven by well engineered neural network architectures and a large amount of training data, the hidden Markov model (HMM) that has been the backbone for speech recognition for decades is still playing a central role. Though tremendously successful for the problem of speech recognition, the HMM-based pipeline factorizes the whole system into several components, and building these components separately may be less computationally efficient when developing a large-scale system from thousands to hundreds of thousands of examples BIBREF3 .
Recently, along with hybrid HMM/NN frameworks for speech recognition, there has been increasing interest in end-to-end training approaches. The key idea is to directly map the input acoustic frames to output characters or words without the intermediate alignment to context-dependent phones used by HMMs. In particular, three architectures have been proposed for the goal of end-to-end learning: connectionist temporal classification (CTC) BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , sequence-to-sequence with attention model BIBREF8 , BIBREF9 , BIBREF10 , and neural network segmental conditional random field (SCRF) BIBREF11 , BIBREF12 . These end-to-end models simplify the pipeline of speech recognition significantly. They do not require intermediate alignment or segmentation like HMMs, instead, the alignment or segmentation is marginalized out during training for CTC and SCRF or inferred by the attention mechanism. In terms of the recognition accuracy, however, the end-to-end models usually lag behind their HMM-based counterparts. Though CTC has been shown to outperform HMM systems BIBREF13 , the improvement is based on the use of context-dependent phone targets and a very large amount of training data. Therefore, it has almost the same system complexity as HMM acoustic models. When the training data is less abundant, it has been shown that the accuracy of CTC systems degrades significantly BIBREF14 .
However, end-to-end models have the flexibility to be combined to mitigate their individual weaknesses. For instance, multitask learning with attention models has been investigated for machine translation BIBREF15 , and Mandarin speech recognition using joint Character-Pinyin training BIBREF16 . In BIBREF17 , Kim et al. proposed a multitask learning approach to train a joint attention model and a CTC model using a shared encoder. They showed that the CTC auxiliary task can help the attention model to overcome the misalignment problem in the initial few epochs, and speed up the convergence of the attention model. Another nice property of the multitask learning approach is that the joint model can still be trained end-to-end. Inspired by this work, we study end-to-end training of a joint CTC and SCRF model using an interpolated loss function. The key difference of our study from BIBREF17 is that the two loss functions of the CTC and attention models are locally normalized for each output token, and they are both trained using the cross entropy criterion. However, the SCRF loss function is normalized at the sequence-level, which is similar to the sequence discriminative training objective function for HMMs. From this perspective, the interpolation of CTC and SCRF loss functions is analogous to the sequence discriminative training of HMMs with CE regularization to overcome overfitting, where a sequence-level loss is also interpolated with a frame-level loss, e.g., BIBREF18 . Similar to the observations in BIBREF17 , we demonstrate that the joint training approach improves the recognition accuracies of both CTC and SCRF acoustic models. Further, we also show that CTC can be used to pretrain the neural network feature extractor to speed up the convergence of the joint model. Experiments were performed on the TIMIT database.
Segmental Conditional Random Fields
SCRF is a variant of the linear-chain CRF model where each output token corresponds to a segment of input tokens instead of a single input instance. In the context of speech recognition, given a sequence of input vectors of $T$ frames ${X} = ( {x}_1, \cdots , {x}_T )$ and its corresponding sequence of output labels ${y} = ( y_1, \cdots , y_J)$ , the zero-order linear-chain CRF defines the sequence-level conditional probability as

$$P({y} \mid {X}) = \frac{1}{Z({X})} \prod _{t=1}^{T} \exp f \left( y_t, {x}_t \right),$$

where $Z({X})$ denotes the normalization term, and $T=J$ . Extension to higher order models is straightforward, but it is usually computationally much more expensive. The model defined above requires the length of ${X}$ and ${y}$ to be equal, which makes it inappropriate for speech recognition because the lengths of the input and output sequences are not equal. For the case where $T\ge J$ as in speech recognition, SCRF defines the sequence-level conditional probability with the auxiliary segment labels ${E} = ({e}_1, \cdots , {e}_J) $ as

$$P({y}, {E} \mid {X}) = \frac{1}{Z({X})} \prod _{j=1}^{J} \exp f \left( y_j, {e}_j, \bar{x}_j \right),$$

where $\mathbf {e}_j = \langle s_{j}, n_{j} \rangle $ is a tuple of the beginning ( $s_j$ ) and the end ( $n_j$ ) time tag for the segment of $y_j$ , and $n_j > s_j$ while $n_j, s_j \in [1, T]$ ; $y_j \in \mathcal {Y}$ and $\mathcal {Y}$ denotes the vocabulary set; $\bar{x}_j$ is the embedding vector of the segment corresponding to the token $y_j$ . In this case, $Z({X})$ sums over all the possible $({y}, {E})$ pairs, i.e.,
$$Z({X}) = \sum _{y,E} \prod _{j=1}^J \exp f \left( y_j, {e}_j, \bar{x}_j \right).$$ (Eq. 1)
Similar to other CRFs, the function $f(\cdot )$ is defined as
$$f \left( y_j, {e}_j, \bar{x}_j \right) = \mathbf {w}^\top \Phi (y_j, {e}_j, \bar{x}_j),$$ (Eq. 2)
where $\Phi (\cdot )$ denotes the feature function, and $\mathbf {w}$ is the weight vector. Most conventional approaches for SCRF-based acoustic models use a manually defined feature function $\Phi (\cdot )$ , where the features and segment boundary information are provided by an auxiliary system BIBREF19 , BIBREF20 . In BIBREF21 , BIBREF12 , we proposed an end-to-end training approach for SCRFs, where $\Phi (\cdot )$ was defined with neural networks, and the segmental level features were learned by RNNs. The model was referred to as the segmental RNN (SRNN), and it will be used as the implementation of the SCRF acoustic model for multitask learning in this study.
Feature Function and Acoustic Embedding
SRNN uses an RNN to learn segmental level acoustic embeddings. Given the input sequence ${X} = ({x}_1, \cdots , {x}_T)$ , we need to compute the embedding vector $\bar{x}_j$ in Eq. ( 2 ) corresponding to the segment ${e}_j = \langle s_j, n_j\rangle $ . Since the segment boundaries are known, it is straightforward to employ an RNN to map the segment into a vector as

$$\begin{bmatrix} {h}_{s_j} \\ {h}_{s_j+1} \\ \vdots \\ {h}_{n_j} \end{bmatrix} = \begin{bmatrix} \text{RNN}({h}_0, {x}_{s_j}) \\ \text{RNN}({h}_{s_j}, {x}_{s_j+1}) \\ \vdots \\ \text{RNN}({h}_{n_j-1}, {x}_{n_j}) \end{bmatrix}$$

where ${h}_0$ denotes the initial hidden state, which is initialized to be zero. RNN( $\cdot $ ) denotes the nonlinear recurrence operation used in an RNN, which takes the previous hidden state and the feature vector at the current timestep as inputs, and produces an updated hidden state vector. Given the recurrent hidden states, the embedding vector can be simply defined as $\bar{x}_j= {h}_{n_j}$ as in our previous work BIBREF12 . However, the drawback of this implementation is the large memory cost, as we need to store the array of hidden states $({h}_{s_j}, \cdots , {h}_{n_j})$ for all the possible segments $\langle s_j, n_j\rangle $ . If we denote $H$ as the dimension of an RNN hidden state, the memory cost will be on the order of $O(T^2H)$ , where $T$ is the length of $X$ . It is especially problematic for the joint model as the CTC model requires additional memory space. In this work, we adopt another approach that requires much less memory. In this approach, we use an RNN to read the whole input sequence as

$$\begin{bmatrix} {h}_{1} \\ {h}_{2} \\ \vdots \\ {h}_{T} \end{bmatrix} = \begin{bmatrix} \text{RNN}({h}_0, {x}_{1}) \\ \text{RNN}({h}_{1}, {x}_{2}) \\ \vdots \\ \text{RNN}({h}_{T-1}, {x}_{T}) \end{bmatrix}$$

and we define the embedding vector for the segment ${e}_j = \langle s_j, n_j\rangle $ as

$$\bar{x}_j = \begin{bmatrix} {h}_{s_j} \\ {h}_{n_j} \end{bmatrix}$$

In this case, we only provide the context information for the feature function $\Phi (\cdot )$ to extract segmental features. We refer to this approach as context-aware embedding. Since we only need to read the input sequence once, the memory requirement is on the order of $O(TH)$ , which is much smaller. The cost, however, is a slight degradation of the recognition accuracy. This model is illustrated by Figure 1 .
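A PyTorch sketch of the context-aware embedding is given below: the RNN reads the whole utterance once, and each segment is represented by the hidden states at its boundaries (0-based frame indices and a unidirectional LSTM are assumptions).

```python
import torch
import torch.nn as nn

class ContextAwareEmbedding(nn.Module):
    """Read the input once (O(TH) memory) and represent a segment <s, n> by [h_s; h_n]."""
    def __init__(self, feat_dim, hidden):
        super().__init__()
        self.rnn = nn.LSTM(feat_dim, hidden, batch_first=True)

    def forward(self, x, segments):        # x: [1, T, feat_dim]; segments: list of (s, n) pairs
        h, _ = self.rnn(x)                 # h: [1, T, hidden]
        h = h.squeeze(0)
        return torch.stack([torch.cat([h[s], h[n]]) for s, n in segments])
```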
The feature function $\Phi (\cdot )$ also requires a vector representation of the label $y_j$ . This embedding vector can be obtained using a linear embedding matrix, following common practice for RNN language models. More specifically, $y_j$ is first represented as a one-hot vector ${v}_j$ , and it is then mapped into a continuous space by a linear embedding matrix ${M}$ as
$${u}_j = {M v}_j$$ (Eq. 4)
Given the acoustic embedding $\bar{x}_j$ and label embedding ${u}_j$ , the feature function $\Phi (\cdot )$ can be represented as

$$\Phi (y_j, {e}_j, \bar{x}_j) = \sigma ({W}_1 {u}_j + {W}_2 \bar{x}_j + {b}),$$

where $\sigma $ denotes a non-linear activation function (e.g., sigmoid or tanh); ${W}_1, {W}_2$ and ${b}$ are weight matrices and a bias vector. This equation corresponds to one layer of non-linear transformation. In fact, it is straightforward to stack multiple nonlinear layers in this feature function.
Loss Function
For speech recognition, the segmentation labels ${E}$ are usually unknown in the training set. In this case, we cannot train the model directly by maximizing the conditional probability defined in the Segmental Conditional Random Fields section. However, the problem can be addressed by marginalizing out the segmentation variable as

$$\mathcal {L}_{\mathit {scrf}} = -\log P({y} \mid {X}) = -\log \sum _{{E}} P({y}, {E} \mid {X}) = -\log \underbrace{\sum _{{E}} \prod _{j} \exp f \left( y_j, {e}_j, \bar{x}_j \right)}_{Z({X}, {y})} + \log Z({X}),$$

where $Z({X}, {y})$ denotes the summation over all the possible segmentations when only ${y}$ is observed. To simplify notation, the objective function $\mathcal {L}_{\mathit {scrf}}$ is defined here with only one training utterance.
However, the number of possible segmentations is exponential in the length of ${X}$ , which makes the naïve computation of both $Z({X}, {y})$ and $Z({X})$ impractical. To address this problem, a dynamic programming algorithm can be applied, which can reduce the computational complexity to $O(T^2\cdot |\mathcal {Y}|)$ BIBREF22 . The computational cost can be further reduced by limiting the maximum length of all the possible segments. The reader is referred to BIBREF12 for further details including the decoding algorithm.
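A sketch of the forward-style dynamic program for $Z({X})$ with a limited maximum segment length is shown below; the layout of the precomputed segment scores is an assumption, and $Z({X}, {y})$ can be computed analogously by constraining the label of the $j$-th segment to $y_j$.

```python
import torch

def log_partition(scores, max_len):
    """Compute log Z(X) for a zero-order SCRF.

    scores[s, d, y] is assumed to hold f(y, <s+1, s+d+1>, x_bar) for the segment
    starting after frame s (0-based) and spanning d+1 frames; shape [T, L, |Y|].
    """
    T, L, _ = scores.shape
    alpha = [scores.new_tensor(0.0)]                 # log alpha(0) = 0 for the empty prefix
    for t in range(1, T + 1):
        terms = [alpha[t - d] + torch.logsumexp(scores[t - d, d - 1], dim=-1)
                 for d in range(1, min(max_len, L, t) + 1)]
        alpha.append(torch.logsumexp(torch.stack(terms), dim=0))
    return alpha[T]                                  # O(T * L * |Y|) instead of O(T^2 * |Y|)
```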
Connectionist Temporal Classification
CTC also directly computes the conditional probability $P(y \mid X)$ , with the key difference from SCRF in that it normalizes the probabilistic distribution at the frame level. To address the problem of length mismatch between the input and output sequences, CTC allows repetitions of output labels and introduces a special blank token ( $-$ ), which represents the probability of not emitting any label at a particular time step. The conditional probability is then obtained by summing over the probabilities of all the paths that correspond to $y$ after merging the repeated labels and removing the blank tokens, i.e.,

$$P({y} \mid {X}) = \sum _{\pi \in \Psi ({y})} P(\pi \mid {X}),$$

where $\Psi (y)$ denotes the set of all possible paths that correspond to $y$ after repetitions of labels and insertions of the blank token. Now that the length of $\pi $ is the same as that of $X$ , the probability $P(\pi \mid X)$ is then approximated by the independence assumption as

$$P(\pi \mid {X}) \approx \prod _{t=1}^{T} P(\pi _t \mid {x}_t),$$

where $\pi _t $ ranges over $\mathcal {Y}\cup \lbrace -\rbrace $ , and $P(\pi _t \mid {x}_t)$ can be computed using the softmax function. The training criterion for CTC is to maximize the conditional probability of the ground truth labels, which is equivalent to minimizing the negative log likelihood:

$$\mathcal {L}_{\mathit {ctc}} = -\log P({y} \mid {X}),$$

which can be reformulated as the CE criterion. More details regarding the computation of the loss and the backpropagation algorithm to train CTC models can be found in BIBREF23 .
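In practice this frame-level objective is available off the shelf; the hedged PyTorch sketch below applies `nn.CTCLoss` to the outputs of the shared encoder (the blank index and the projection layer are assumptions).

```python
import torch.nn as nn
import torch.nn.functional as F

ctc = nn.CTCLoss(blank=0, zero_infinity=True)

def ctc_loss(encoder_out, proj, targets, input_lengths, target_lengths):
    """encoder_out: [T, batch, H] hidden states of the shared RNN encoder;
    proj: linear layer mapping H -> |Y| + 1 (one extra unit for the blank token)."""
    log_probs = F.log_softmax(proj(encoder_out), dim=-1)   # frame-level softmax
    return ctc(log_probs, targets, input_lengths, target_lengths)
```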
Joint Training Loss
Training the two models jointly is trivial. We can simply interpolate the CTC and SCRF loss functions as

$$\mathcal {L} = \lambda \mathcal {L}_{\mathit {ctc}} + (1-\lambda )\mathcal {L}_{\mathit {scrf}},$$

where $\lambda \in [0, 1]$ is the interpolation weight. The two models share the same neural network for feature extraction. In this work, we focus on the RNN with long short-term memory (LSTM) BIBREF24 units for feature extraction. Other types of neural architecture, e.g., convolutional neural network (CNN) or combinations of CNN and RNN, may be considered in future work.
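A minimal sketch of one training step on the interpolated objective with a shared encoder; the two head modules are placeholders assumed to return $\mathcal {L}_{\mathit {ctc}}$ and $\mathcal {L}_{\mathit {scrf}}$ respectively.

```python
def joint_step(encoder, ctc_head, scrf_head, batch, optimizer, lam=0.5):
    """One SGD step on the interpolated multitask loss."""
    feats, labels = batch
    h = encoder(feats)                                   # shared LSTM feature extraction
    loss = lam * ctc_head(h, labels) + (1.0 - lam) * scrf_head(h, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```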
Experiments
Our experiments were performed on the TIMIT database, and both the SRNN and CTC models were implemented using the DyNet toolkit BIBREF25 . We followed the standard protocol of the TIMIT dataset, and our experiments were based on the Kaldi recipe BIBREF26 . We used the core test set as our evaluation set, which has 192 utterances. Our models were trained with 48 phonemes, and their predictions were converted to 39 phonemes before scoring. The dimension of $\mathbf {u}_j$ was fixed to be 64, and the dimension of $\mathbf {w}$ in Eq. ( 2 ) was also 64. We set the initial SGD learning rate to be 0.1, and we exponentially decayed the learning rate by a factor of 0.75 when the validation error stopped decreasing. We also subsampled the acoustic sequence by a factor of 4 using the hierarchical RNN as in BIBREF12 . Our models were trained with dropout regularization BIBREF27 , using a specific implementation for recurrent networks BIBREF28 . The dropout rate was 0.2 unless specified otherwise. Our models were randomly initialized with the same random seed.
Baseline Results
Table 1 shows the baseline results of SRNN and CTC models using two different kinds of features. The FBANK features are 120-dimensional with delta and delta-delta coefficients, and the fMLLR features are 40-dimensional, which were obtained from a Kaldi baseline system. We used a 3-layer bidirectional LSTMs for feature extraction, and we used the greedy best path decoding algorithm for both models. Our SRNN and CTC achieved comparable phone error rate (PER) for both kinds of features. However, for the CTC system, Graves et al. BIBREF29 obtained a better result, using about the same size of neural network (3 hidden layers with 250 hidden units of bidirectional LSTMs), compared to ours (18.6% vs. 19.9%). Apart from the implementation difference of using different code bases, Graves et al. BIBREF29 applied the prefix decoding with beam search, which may have lower search error than our best path decoding algorithm.
Multitask Learning Results
Table 2 shows results of multitask learning for CTC and SRNN using the interpolated loss defined in the Joint Training Loss section. We only show results of using LSTMs with 250 dimensional hidden states. The interpolation weight was set to be 0.5. In our experiments, tuning the interpolation weight did not further improve the recognition accuracy. From Table 2 , we can see that multitask learning improves recognition accuracies of both SRNN and CTC acoustic models, which may be due to the regularization effect of the joint training loss. The improvement for FBANK features is much larger than for fMLLR features. In particular, with multitask learning, the recognition accuracy of our CTC system with best path decoding is comparable to the results obtained by Graves et al. BIBREF29 with beam search decoding.
One of the major drawbacks of SCRF models is their high computational cost. In our experiments, the CTC model is around 3–4 times faster than the SRNN model that uses the same RNN encoder. The joint model by multitask learning is slightly more expensive than the stand-alone SRNN model. To cut down the computational cost, we investigated if CTC can be used to pretrain the RNN encoder to speed up the training of the joint model. This is analogous to sequence training of HMM acoustic models, where the network is usually pretrained by the frame-level CE criterion. Figure 2 shows the convergence curves of the joint model with and without CTC pretraining, and we see pretraining indeed improves the convergence speed of the joint model.
Conclusion
We investigated multitask learning with CTC and SCRF for speech recognition in this paper. Using an RNN encoder for feature extraction, both CTC and SCRF can be trained end-to-end, and the two models can be trained together by interpolating the two loss functions. From experiments on the TIMIT dataset, the multitask learning approach improved the recognition accuracies of both CTC and SCRF acoustic models. We also showed that CTC can be used to pretrain the RNN encoder, speeding up the training of the joint model. In the future, we will study the multitask learning approach for larger-scale speech recognition tasks, where the CTC pretraining approach may be more helpful to overcome the problem of high computational cost.
Acknowledgements
We thank the NVIDIA Corporation for the donation of a Titan X GPU. | No |
2fea3c955ff78220b2c31a8ad1322bc77f6706f8 | 2fea3c955ff78220b2c31a8ad1322bc77f6706f8_0 | Q: What conclusions are drawn from the syntactic analysis?
Text: Introduction
A common way for marking information about gender, number, and case in language is morphology, or the structure of a given word in the language. However, different languages mark such information in different ways – for example, in some languages gender may be marked on the head word of a syntactic dependency relation, while in other languages it is marked on the dependent, on both, or on none of them BIBREF0 . This morphological diversity creates a challenge for machine translation, as there are ambiguous cases where more than one correct translation exists for the same source sentence. For example, while the English sentence “I love language” is ambiguous with respect to the gender of the speaker, Hebrew marks verbs for the gender of their subject and does not allow gender-neutral translation. This allows two possible Hebrew translations – one in a masculine and the other in a feminine form. As a consequence, a sentence-level translator (either human or machine) must commit to the gender of the speaker, adding information that is not present in the source. Without additional context, this choice must be done arbitrarily by relying on language conventions, world knowledge or statistical (stereotypical) knowledge.
Indeed, the English sentence “I work as a doctor” is translated into Hebrew by Google Translate using the masculine verb form oved, indicating a male speaker, while “I work as a nurse” is translated with the feminine form ovedet, indicating a female speaker (verified on March 2019). While this is still an issue, there have been recent efforts to reduce it for specific language pairs.
We present a simple black-box method to influence the interpretation chosen by an NMT system in these ambiguous cases. More concretely, we construct pre-defined textual hints about the gender and number of the speaker and the audience (the interlocutors), which we concatenate to a given input sentence that we would like to translate accordingly. We then show that a black-box NMT system makes the desired morphological decisions according to the given hint, even when no other evidence is available on the source side. While adding those hints results in additional text on the target side, we show that it is simple to remove, leaving only the desired translation.
Our method is appealing as it only requires simple pre- and post-processing of the inputs and outputs, without considering the system internals, or requiring specific annotated data and training procedure as in previous work BIBREF1 . We show that in spite of its simplicity, it is effective in resolving many of the ambiguities and improves the translation quality by up to 2.3 BLEU when given the correct hints, which may be inferred from text metadata or other sources. Finally, we perform a fine-grained syntactic analysis of the translations generated using our method which shows its effectiveness.
Morphological Ambiguity in Translation
Different languages use different morphological features marking different properties on different elements. For example, English marks for number, case, aspect, tense, person, and degree of comparison. However, English does not mark gender on nouns and verbs. Even when a certain property is marked, languages differ in the form and location of the marking BIBREF0 . For example, marking can occur on the head of a syntactic dependency construction, on its argument, on both (requiring agreement), or on none of them. Translation systems must generate correct target-language morphology as part of the translation process. This requires knowledge of both the source-side and target-side morphology. Current state-of-the-art translation systems do capture many aspects of natural language, including morphology, when a relevant context is available BIBREF2 , BIBREF3 , but resort to “guessing” based on the training-data statistics when it is not. Complications arise when different languages convey different kinds of information in their morphological systems. In such cases, a translation system may be required to remove information available in the source sentence, or to add information not available in it, where the latter can be especially tricky.
Black-Box Knowledge Injection
Our goal is to supply an NMT system with knowledge regarding the speaker and interlocutor of first-person sentences, in order to produce the desired target-side morphology when the information is not available in the source sentence. The approach we take in the current work is that of black-box injection, in which we attempt to inject knowledge to the input in order to influence the output of a trained NMT system, without having access to its internals or its training procedure as proposed by vanmassenhove-hardmeier-way:2018:EMNLP.
We are motivated by recent work by BIBREF4 who showed that NMT systems learn to track coreference chains when presented with sufficient discourse context. We conjecture that there are enough sentence-internal pronominal coreference chains appearing in the training data of large-scale NMT systems, such that state-of-the-art NMT systems can and do track sentence-internal coreference. We devise a wrapper method to make use of this coreference tracking ability by introducing artificial antecedents that unambiguously convey the desired gender and number properties of the speaker and audience.
More concretely, a sentence such as “I love you” is ambiguous with respect to the gender of the speaker and the gender and number of the audience. However, sentences such as “I love you, she told him” are unambiguous given the coreference groups {I, she} and {you, him} which determine I to be feminine singular and you to be masculine singular. We can thus inject the desired information by prefixing a sentence with short generic sentence fragment such as “She told him:” or “She told them that”, relying on the NMT system's coreference tracking abilities to trigger the correctly marked translation, and then remove the redundant translated prefix from the generated target sentence. We observed that using a parataxis construction (i.e. “she said to him:”) almost exclusively results in target-side parataxis as well (in 99.8% of our examples), making it easy to identify and strip the translated version from the target side. Moreover, because the parataxis construction is grammatically isolated from the rest of the sentence, it can be stripped without requiring additional changes or modification to the rest of the sentence, ensuring grammaticality.
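A sketch of this black-box wrapper is given below; the `translate` callable stands in for any sentence-level MT API, and the colon-based stripping heuristic is a simplification of the procedure described above, not the exact implementation.

```python
def translate_with_hint(translate, sentence, speaker="She", audience="them"):
    """Prepend an unambiguous parataxis prefix, translate, then strip its translation."""
    hinted = f"{speaker} said to {audience}: {sentence}"
    output = translate(hinted)
    # the parataxis prefix is almost always translated as a clause ending with a colon,
    # so everything up to and including the first colon is removed
    head, sep, tail = output.partition(":")
    return tail.strip() if sep else output
```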
Experiments & Results
To demonstrate our method in a black-box setting, we focus our experiments on Google's machine translation system (GMT), accessed through its Cloud API. To test the method on real-world sentences, we consider a monologue from the stand-up comedy show “Sarah Silverman: A Speck of Dust”. The monologue consists of 1,244 English sentences, all by a female speaker conveyed to a plural, gender-neutral audience. Our parallel corpus consists of the 1,244 English sentences from the transcript, and their corresponding Hebrew translations based on the Hebrew subtitles. We translate the monologue one sentence at a time through the Google Cloud API. Eyeballing the results suggests that most of the translations use the incorrect, but default, masculine and singular forms for the speaker and the audience, respectively. We expect that by adding the relevant condition of “female speaking to an audience” we will get better translations, affecting both the gender of the speaker and the number of the audience.
To verify this, we experiment with translating the sentences with the following variations:
No Prefix—The baseline translation as returned by the GMT system.
“He said:”—Signaling a male speaker. We expect to further skew the system towards masculine forms.
“She said:”—Signaling a female speaker and unknown audience. As this matches the actual speaker's gender, we expect an improvement in translation of first-person pronouns and verbs with first-person pronouns as subjects.
“I said to them:”—Signaling an unknown speaker and plural audience.
“He said to them:”—Masculine speaker and plural audience.
“She said to them:”—Female speaker and plural audience—the complete, correct condition. We expect the best translation accuracy on this setup.
“He/she said to him/her”—Here we set an (incorrect) singular gender-marked audience, to investigate our ability to control the audience morphology.
Quantitative Results
We compare the different conditions by comparing BLEU BIBREF5 with respect to the reference Hebrew translations. We use the multi-bleu.perl script from the Moses toolkit BIBREF6 . Table shows BLEU scores for the different prefixes. The numbers match our expectations: Generally, providing an incorrect speaker and/or audience information decreases the BLEU scores, while providing the correct information substantially improves it - we see an increase of up to 2.3 BLEU over the baseline. We note the BLEU score improves in all cases, even when given the wrong gender of either the speaker or the audience. We hypothesise this improvement stems from the addition of the word “said” which hints the model to generate a more “spoken” language which matches the tested scenario. Providing correct information for both speaker and audience usually helps more than providing correct information to either one of them individually. The one outlier is providing “She” for the speaker and “her” for the audience. While this is not the correct scenario, we hypothesise it gives an improvement in BLEU as it further reinforces the female gender in the sentence.
Qualitative Results
The BLEU score is an indication of how close the automated translation is to the reference translation, but does not tell us what exactly changed concerning the gender and number properties we attempt to control. We perform a finer-grained analysis focusing on the relation between the injected speaker and audience information, and the morphological realizations of the corresponding elements. We parse the translations and the references using a Hebrew dependency parser. In addition to the parse structure, the parser also performs morphological analysis and tagging of the individual tokens. We then perform the following analysis.
Speaker's Gender Effects: We search for first-person singular pronouns with subject case (ani, unmarked for gender, corresponding to the English I), and consider the gender of their governing verbs (or adjectives in copular constructions such as `I am nice'). The possible genders are `masculine', `feminine' and `both', where the latter indicates a case where the non-diacriticized written form admits both a masculine and a feminine reading. We expect the gender to match the ones requested in the prefix.
Interlocutors' Gender and Number Effects: We search for second-person pronouns and consider their gender and number. For pronouns in subject position, we also consider the gender and number of their governing verbs (or adjectives in copular constructions). For a singular audience, we expect the gender and number to match the requested ones. For a plural audience, we expect the masculine-plural forms.
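A sketch of the speaker-side analysis with the Stanza UD pipeline for Hebrew, used here as a stand-in for the parser above; the morphological feature strings follow Universal Dependencies conventions, and `both` is returned when no gender feature is present on the governing word.

```python
import stanza

nlp = stanza.Pipeline("he")   # Hebrew tokenization, tagging, and dependency parsing

def speaker_verb_genders(translation):
    """Gender of the word governing each first-person singular subject pronoun."""
    genders = []
    for sent in nlp(translation).sentences:
        for w in sent.words:
            if w.upos == "PRON" and w.deprel == "nsubj" and w.feats \
                    and "Person=1" in w.feats and "Number=Sing" in w.feats and w.head > 0:
                feats = sent.words[w.head - 1].feats or ""
                gender = next((f.split("=")[1] for f in feats.split("|")
                               if f.startswith("Gender=")), "both")
                genders.append(gender)
    return genders
```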
Results: Speaker. Figure FIGREF3 shows the result for controlling the morphological properties of the speaker ({he, she, I} said). It shows the proportion of gender-inflected verbs for the various conditions and the reference. We see that the baseline system severely under-predicts the feminine form of verbs as compared to the reference. The “He said” conditions further decreases the number of feminine verbs, while the “I said” conditions bring it back to the baseline level. Finally, the “She said” prefixes substantially increase the number of feminine-marked verbs, bringing the proportion much closer to that of the reference (though still under-predicting some of the feminine cases).
Results: Audience. The chart in Figure FIGREF3 shows the results for controlling the number of the audience (...to them vs nothing). It shows the proportion of singular vs. plural second-person pronouns on the various conditions. It shows a similar trend: the baseline system severely under-predicts the plural forms with respect to the reference translation, while adding the “to them” condition brings the proportion much closer to that of the reference.
Comparison to vanmassenhove-hardmeier-way:2018:EMNLP
Closely related to our work, vanmassenhove-hardmeier-way:2018:EMNLP proposed a method and an English-French test set to evaluate gender-aware translation, based on the Europarl corpus BIBREF7 . We evaluate our method (using Google Translate and the given prefixes) on their test set to see whether it is applicable to another language pair and domain. Table shows the results of our approach vs. their published results and the Google Translate baseline. As may be expected, Google Translate outperforms their system as it is trained on a different corpus and may use more complex machine translation models. Using our method improves the BLEU score even further.
Other Languages
To test our method’s outputs on multiple languages, we run our pre- and post-processing steps with Google Translate using examples we sourced from native speakers of different languages. For every example we have an English sentence and two translations in the corresponding language, one in masculine and one in feminine form. Not all examples use the same source English sentence, as different languages mark different information. Table shows that for these specific examples our method worked on INLINEFORM0 of the languages we had examples for, while for INLINEFORM1 languages both translations are masculine, and for 1 language both are feminine.
Related Work
E17-1101 showed that given input with author traits like gender, it is possible to retain those traits in Statistical Machine Translation (SMT) models. W17-4727 showed that incorporating morphological analysis in the decoder improves NMT performance for morphologically rich languages. burlot:hal-01618387 presented a new protocol for evaluating the morphological competence of MT systems, indicating that current translation systems only manage to capture some morphological phenomena correctly. Regarding the application of constraints in NMT, N16-1005 presented a method for controlling the politeness level in the generated output. DBLP:journals/corr/FiclerG17aa showed how to guide a neural text generation system towards style and content parameters like the level of professionalism, subjective/objective, sentiment and others. W17-4811 showed that incorporating more context when translating subtitles can improve the coherence of the generated translations. Most closely to our work, vanmassenhove-hardmeier-way:2018:EMNLP also addressed the missing gender information by training proprietary models with a gender-indicating-prefix. We differ from this work by treating the problem in a black-box manner, and by addressing additional information like the number of the speaker and the gender and number of the audience.
Conclusions
We highlight the problem of translating between languages with different morphological systems, in which the target translation must contain gender and number information that is not available in the source. We propose a method for injecting such information into a pre-trained NMT model in a black-box setting. We demonstrate the effectiveness of this method by showing an improvement of 2.3 BLEU in an English-to-Hebrew translation setting where the speaker and audience gender can be inferred. We also perform a fine-grained syntactic analysis that shows how our method enables to control the morphological realization of first and second-person pronouns, together with verbs and adjectives related to them. In future work we would like to explore automatic generation of the injected context, or the use of cross-sentence context to infer the injected information. | our method enables to control the morphological realization of first and second-person pronouns, together with verbs and adjectives related to them |
faa4f28a2f2968cecb770d9379ab2cfcaaf5cfab | faa4f28a2f2968cecb770d9379ab2cfcaaf5cfab_0 | Q: What type of syntactic analysis is performed?
Text: Introduction
A common way for marking information about gender, number, and case in language is morphology, or the structure of a given word in the language. However, different languages mark such information in different ways – for example, in some languages gender may be marked on the head word of a syntactic dependency relation, while in other languages it is marked on the dependent, on both, or on none of them BIBREF0 . This morphological diversity creates a challenge for machine translation, as there are ambiguous cases where more than one correct translation exists for the same source sentence. For example, while the English sentence “I love language” is ambiguous with respect to the gender of the speaker, Hebrew marks verbs for the gender of their subject and does not allow gender-neutral translation. This allows two possible Hebrew translations – one in a masculine and the other in a feminine form. As a consequence, a sentence-level translator (either human or machine) must commit to the gender of the speaker, adding information that is not present in the source. Without additional context, this choice must be done arbitrarily by relying on language conventions, world knowledge or statistical (stereotypical) knowledge.
Indeed, the English sentence “I work as a doctor” is translated into Hebrew by Google Translate using the masculine verb form oved, indicating a male speaker, while “I work as a nurse” is translated with the feminine form ovedet, indicating a female speaker (verified on March 2019). While this is still an issue, there have been recent efforts to reduce it for specific language pairs.
We present a simple black-box method to influence the interpretation chosen by an NMT system in these ambiguous cases. More concretely, we construct pre-defined textual hints about the gender and number of the speaker and the audience (the interlocutors), which we concatenate to a given input sentence that we would like to translate accordingly. We then show that a black-box NMT system makes the desired morphological decisions according to the given hint, even when no other evidence is available on the source side. While adding those hints results in additional text on the target side, we show that it is simple to remove, leaving only the desired translation.
Our method is appealing as it only requires simple pre- and post-processing of the inputs and outputs, without considering the system internals, or requiring specific annotated data and training procedure as in previous work BIBREF1 . We show that in spite of its simplicity, it is effective in resolving many of the ambiguities and improves the translation quality by up to 2.3 BLEU when given the correct hints, which may be inferred from text metadata or other sources. Finally, we perform a fine-grained syntactic analysis of the translations generated using our method, which shows its effectiveness.
Morphological Ambiguity in Translation
Different languages use different morphological features marking different properties on different elements. For example, English marks for number, case, aspect, tense, person, and degree of comparison. However, English does not mark gender on nouns and verbs. Even when a certain property is marked, languages differ in the form and location of the marking BIBREF0 . For example, marking can occur on the head of a syntactic dependency construction, on its argument, on both (requiring agreement), or on none of them. Translation systems must generate correct target-language morphology as part of the translation process. This requires knowledge of both the source-side and target-side morphology. Current state-of-the-art translation systems do capture many aspects of natural language, including morphology, when a relevant context is available BIBREF2 , BIBREF3 , but resort to “guessing” based on the training-data statistics when it is not. Complications arise when different languages convey different kinds of information in their morphological systems. In such cases, a translation system may be required to remove information available in the source sentence, or to add information not available in it, where the latter can be especially tricky.
Black-Box Knowledge Injection
Our goal is to supply an NMT system with knowledge regarding the speaker and interlocutor of first-person sentences, in order to produce the desired target-side morphology when the information is not available in the source sentence. The approach we take in the current work is that of black-box injection, in which we attempt to inject knowledge to the input in order to influence the output of a trained NMT system, without having access to its internals or its training procedure as proposed by vanmassenhove-hardmeier-way:2018:EMNLP.
We are motivated by recent work by BIBREF4 who showed that NMT systems learn to track coreference chains when presented with sufficient discourse context. We conjecture that there are enough sentence-internal pronominal coreference chains appearing in the training data of large-scale NMT systems, such that state-of-the-art NMT systems can and do track sentence-internal coreference. We devise a wrapper method to make use of this coreference tracking ability by introducing artificial antecedents that unambiguously convey the desired gender and number properties of the speaker and audience.
More concretely, a sentence such as “I love you” is ambiguous with respect to the gender of the speaker and the gender and number of the audience. However, sentences such as “I love you, she told him” are unambiguous given the coreference groups {I, she} and {you, him} which determine I to be feminine singular and you to be masculine singular. We can thus inject the desired information by prefixing a sentence with short generic sentence fragment such as “She told him:” or “She told them that”, relying on the NMT system's coreference tracking abilities to trigger the correctly marked translation, and then remove the redundant translated prefix from the generated target sentence. We observed that using a parataxis construction (i.e. “she said to him:”) almost exclusively results in target-side parataxis as well (in 99.8% of our examples), making it easy to identify and strip the translated version from the target side. Moreover, because the parataxis construction is grammatically isolated from the rest of the sentence, it can be stripped without requiring additional changes or modification to the rest of the sentence, ensuring grammaticality.
Experiments & Results
To demonstrate our method in a black-box setting, we focus our experiments on Google's machine translation system (GMT), accessed through its Cloud API. To test the method on real-world sentences, we consider a monologue from the stand-up comedy show “Sarah Silverman: A Speck of Dust”. The monologue consists of 1,244 English sentences, all by a female speaker conveyed to a plural, gender-neutral audience. Our parallel corpus consists of the 1,244 English sentences from the transcript, and their corresponding Hebrew translations based on the Hebrew subtitles. We translate the monologue one sentence at a time through the Google Cloud API. Eyeballing the results suggests that most of the translations use the incorrect, but default, masculine and singular forms for the speaker and the audience, respectively. We expect that by adding the relevant condition of “female speaking to an audience” we will get better translations, affecting both the gender of the speaker and the number of the audience.
To verify this, we experiment with translating the sentences with the following variations: No Prefix—The baseline translation as returned by the GMT system. “He said:”—Signaling a male speaker. We expect to further skew the system towards masculine forms. “She said:”—Signaling a female speaker and unknown audience. As this matches the actual speaker's gender, we expect an improvement in translation of first-person pronouns and verbs with first-person pronouns as subjects. “I said to them:”—Signaling an unknown speaker and plural audience. “He said to them:”—Masculine speaker and plural audience. “She said to them:”—Female speaker and plural audience—the complete, correct condition. We expect the best translation accuracy on this setup. “He/she said to him/her”—Here we set an (incorrect) singular gender-marked audience, to investigate our ability to control the audience morphology.
Quantitative Results
We compare the different conditions by comparing BLEU BIBREF5 with respect to the reference Hebrew translations. We use the multi-bleu.perl script from the Moses toolkit BIBREF6 . Table shows BLEU scores for the different prefixes. The numbers match our expectations: Generally, providing an incorrect speaker and/or audience information decreases the BLEU scores, while providing the correct information substantially improves it - we see an increase of up to 2.3 BLEU over the baseline. We note the BLEU score improves in all cases, even when given the wrong gender of either the speaker or the audience. We hypothesise this improvement stems from the addition of the word “said” which hints the model to generate a more “spoken” language which matches the tested scenario. Providing correct information for both speaker and audience usually helps more than providing correct information to either one of them individually. The one outlier is providing “She” for the speaker and “her” for the audience. While this is not the correct scenario, we hypothesise it gives an improvement in BLEU as it further reinforces the female gender in the sentence.
Qualitative Results
The BLEU score is an indication of how close the automated translation is to the reference translation, but does not tell us what exactly changed concerning the gender and number properties we attempt to control. We perform a finer-grained analysis focusing on the relation between the injected speaker and audience information, and the morphological realizations of the corresponding elements. We parse the translations and the references using a Hebrew dependency parser. In addition to the parse structure, the parser also performs morphological analysis and tagging of the individual tokens. We then perform the following analysis.
Speaker's Gender Effects: We search for first-person singular pronouns with subject case (ani, unmarked for gender, corresponding to the English I), and consider the gender of their governing verbs (or adjectives in copular constructions such as `I am nice'). The possible genders are `masculine', `feminine' and `both', where the latter indicates a case where the non-diacriticized written form admits both a masculine and a feminine reading. We expect the gender to match the one requested in the prefix.
Interlocutors' Gender and Number Effects: We search for second-person pronouns and consider their gender and number. For pronouns in subject position, we also consider the gender and number of their governing verbs (or adjectives in copular constructions). For a singular audience, we expect the gender and number to match the requested ones. For a plural audience, we expect the masculine-plural forms.
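As an illustration of the speaker-side check, assume the Hebrew parser's output has been converted to token dictionaries carrying CoNLL-U-style fields (id, head, deprel, upos and a feats dictionary); these field names and the "Both" fallback for unmarked forms are our assumptions for the sketch, not the parser's actual interface. The audience-side check is analogous, filtering on second-person features instead.

    def speaker_verb_genders(tokens):
        # Gender of verbs/adjectives governing a 1st-person singular subject pronoun.
        by_id = {t["id"]: t for t in tokens}
        genders = []
        for t in tokens:
            feats = t.get("feats") or {}
            if (t["upos"] == "PRON"
                    and feats.get("Person") == "1"
                    and feats.get("Number") == "Sing"
                    and t.get("deprel") == "nsubj"):
                head = by_id.get(t["head"])
                if head is not None and head["upos"] in ("VERB", "ADJ"):
                    head_feats = head.get("feats") or {}
                    genders.append(head_feats.get("Gender", "Both"))
        return genders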
Results: Speaker. Figure FIGREF3 shows the result for controlling the morphological properties of the speaker ({he, she, I} said). It shows the proportion of gender-inflected verbs for the various conditions and the reference. We see that the baseline system severely under-predicts the feminine form of verbs as compared to the reference. The “He said” conditions further decrease the number of feminine verbs, while the “I said” conditions bring it back to the baseline level. Finally, the “She said” prefixes substantially increase the number of feminine-marked verbs, bringing the proportion much closer to that of the reference (though still under-predicting some of the feminine cases).
Results: Audience. The chart in Figure FIGREF3 shows the results for controlling the number of the audience (...to them vs nothing). It shows the proportion of singular vs. plural second-person pronouns on the various conditions. It shows a similar trend: the baseline system severely under-predicts the plural forms with respect to the reference translation, while adding the “to them” condition brings the proportion much closer to that of the reference.
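The per-condition proportions behind these charts can be derived by simple counting over the extracted labels (for example, the gender labels returned by speaker_verb_genders above); this helper is our own bookkeeping sketch, not part of the paper's code.

    from collections import Counter

    def label_proportions(labels, categories=("Masc", "Fem", "Both")):
        # labels: one morphological label per matched token, for a single condition.
        counts = Counter(labels)
        total = sum(counts.values()) or 1
        return {c: counts[c] / total for c in categories}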
Comparison to vanmassenhove-hardmeier-way:2018:EMNLP
Closely related to our work, vanmassenhove-hardmeier-way:2018:EMNLP proposed a method and an English-French test set to evaluate gender-aware translation, based on the Europarl corpus BIBREF7 . We evaluate our method (using Google Translate and the given prefixes) on their test set to see whether it is applicable to another language pair and domain. Table shows the results of our approach vs. their published results and the Google Translate baseline. As may be expected, Google Translate outperforms their system as it is trained on a different corpus and may use more complex machine translation models. Using our method improves the BLEU score even further.
Other Languages
To test our method’s outputs on multiple languages, we run our pre-and post-processing steps with Google Translate using examples we sourced from native speakers of different languages. For every example we have an English sentence and two translations in the corresponding language, one in masculine and one in feminine form. Not all examples are using the same source English sentence as different languages mark different information. Table shows that for these specific examples our method worked on INLINEFORM0 of the languages we had examples for, while for INLINEFORM1 languages both translations are masculine, and for 1 language both are feminine.
Related Work
E17-1101 showed that given input with author traits like gender, it is possible to retain those traits in Statistical Machine Translation (SMT) models. W17-4727 showed that incorporating morphological analysis in the decoder improves NMT performance for morphologically rich languages. burlot:hal-01618387 presented a new protocol for evaluating the morphological competence of MT systems, indicating that current translation systems only manage to capture some morphological phenomena correctly. Regarding the application of constraints in NMT, N16-1005 presented a method for controlling the politeness level in the generated output. DBLP:journals/corr/FiclerG17aa showed how to guide a neural text generation system towards style and content parameters like the level of professionalism, subjective/objective, sentiment and others. W17-4811 showed that incorporating more context when translating subtitles can improve the coherence of the generated translations. Most closely to our work, vanmassenhove-hardmeier-way:2018:EMNLP also addressed the missing gender information by training proprietary models with a gender-indicating-prefix. We differ from this work by treating the problem in a black-box manner, and by addressing additional information like the number of the speaker and the gender and number of the audience.
Conclusions
We highlight the problem of translating between languages with different morphological systems, in which the target translation must contain gender and number information that is not available in the source. We propose a method for injecting such information into a pre-trained NMT model in a black-box setting. We demonstrate the effectiveness of this method by showing an improvement of 2.3 BLEU in an English-to-Hebrew translation setting where the speaker and audience gender can be inferred. We also perform a fine-grained syntactic analysis that shows how our method enables to control the morphological realization of first and second-person pronouns, together with verbs and adjectives related to them. In future work we would like to explore automatic generation of the injected context, or the use of cross-sentence context to infer the injected information. | Speaker's Gender Effects, Interlocutors' Gender and Number Effects |
da068b20988883bc324e55c073fb9c1a5c39be33 | da068b20988883bc324e55c073fb9c1a5c39be33_0 | Q: How is it demonstrated that the correct gender and number information is injected using this system?
Text: Introduction
A common way for marking information about gender, number, and case in language is morphology, or the structure of a given word in the language. However, different languages mark such information in different ways – for example, in some languages gender may be marked on the head word of a syntactic dependency relation, while in other languages it is marked on the dependent, on both, or on none of them BIBREF0 . This morphological diversity creates a challenge for machine translation, as there are ambiguous cases where more than one correct translation exists for the same source sentence. For example, while the English sentence “I love language” is ambiguous with respect to the gender of the speaker, Hebrew marks verbs for the gender of their subject and does not allow gender-neutral translation. This allows two possible Hebrew translations – one in a masculine and the other in a feminine form. As a consequence, a sentence-level translator (either human or machine) must commit to the gender of the speaker, adding information that is not present in the source. Without additional context, this choice must be done arbitrarily by relying on language conventions, world knowledge or statistical (stereotypical) knowledge.
Indeed, the English sentence “I work as a doctor” is translated into Hebrew by Google Translate using the masculine verb form oved, indicating a male speaker, while “I work as a nurse” is translated with the feminine form ovedet, indicating a female speaker (verified on March 2019). While this is still an issue, there have been recent efforts to reduce it for specific language pairs.
We present a simple black-box method to influence the interpretation chosen by an NMT system in these ambiguous cases. More concretely, we construct pre-defined textual hints about the gender and number of the speaker and the audience (the interlocutors), which we concatenate to a given input sentence that we would like to translate accordingly. We then show that a black-box NMT system makes the desired morphological decisions according to the given hint, even when no other evidence is available on the source side. While adding those hints results in additional text on the target side, we show that it is simple to remove, leaving only the desired translation.
Our method is appealing as it only requires simple pre- and post-processing of the inputs and outputs, without considering the system internals, or requiring specific annotated data and training procedure as in previous work BIBREF1 . We show that in spite of its simplicity, it is effective in resolving many of the ambiguities and improves the translation quality by up to 2.3 BLEU when given the correct hints, which may be inferred from text metadata or other sources. Finally, we perform a fine-grained syntactic analysis of the translations generated using our method, which shows its effectiveness.
Morphological Ambiguity in Translation
Different languages use different morphological features marking different properties on different elements. For example, English marks for number, case, aspect, tense, person, and degree of comparison. However, English does not mark gender on nouns and verbs. Even when a certain property is marked, languages differ in the form and location of the marking BIBREF0 . For example, marking can occur on the head of a syntactic dependency construction, on its argument, on both (requiring agreement), or on none of them. Translation systems must generate correct target-language morphology as part of the translation process. This requires knowledge of both the source-side and target-side morphology. Current state-of-the-art translation systems do capture many aspects of natural language, including morphology, when a relevant context is available BIBREF2 , BIBREF3 , but resort to “guessing” based on the training-data statistics when it is not. Complications arise when different languages convey different kinds of information in their morphological systems. In such cases, a translation system may be required to remove information available in the source sentence, or to add information not available in it, where the latter can be especially tricky.
Black-Box Knowledge Injection
Our goal is to supply an NMT system with knowledge regarding the speaker and interlocutor of first-person sentences, in order to produce the desired target-side morphology when the information is not available in the source sentence. The approach we take in the current work is that of black-box injection, in which we attempt to inject knowledge to the input in order to influence the output of a trained NMT system, without having access to its internals or its training procedure as proposed by vanmassenhove-hardmeier-way:2018:EMNLP.
We are motivated by recent work by BIBREF4 who showed that NMT systems learn to track coreference chains when presented with sufficient discourse context. We conjecture that there are enough sentence-internal pronominal coreference chains appearing in the training data of large-scale NMT systems, such that state-of-the-art NMT systems can and do track sentence-internal coreference. We devise a wrapper method to make use of this coreference tracking ability by introducing artificial antecedents that unambiguously convey the desired gender and number properties of the speaker and audience.
More concretely, a sentence such as “I love you” is ambiguous with respect to the gender of the speaker and the gender and number of the audience. However, sentences such as “I love you, she told him” are unambiguous given the coreference groups {I, she} and {you, him} which determine I to be feminine singular and you to be masculine singular. We can thus inject the desired information by prefixing a sentence with short generic sentence fragment such as “She told him:” or “She told them that”, relying on the NMT system's coreference tracking abilities to trigger the correctly marked translation, and then remove the redundant translated prefix from the generated target sentence. We observed that using a parataxis construction (i.e. “she said to him:”) almost exclusively results in target-side parataxis as well (in 99.8% of our examples), making it easy to identify and strip the translated version from the target side. Moreover, because the parataxis construction is grammatically isolated from the rest of the sentence, it can be stripped without requiring additional changes or modification to the rest of the sentence, ensuring grammaticality.
Experiments & Results
To demonstrate our method in a black-box setting, we focus our experiments on Google's machine translation system (GMT), accessed through its Cloud API. To test the method on real-world sentences, we consider a monologue from the stand-up comedy show “Sarah Silverman: A Speck of Dust”. The monologue consists of 1,244 English sentences, all by a female speaker conveyed to a plural, gender-neutral audience. Our parallel corpus consists of the 1,244 English sentences from the transcript, and their corresponding Hebrew translations based on the Hebrew subtitles. We translate the monologue one sentence at a time through the Google Cloud API. Eyeballing the results suggests that most of the translations use the incorrect, but default, masculine and singular forms for the speaker and the audience, respectively. We expect that by adding the relevant condition of “female speaking to an audience” we will get better translations, affecting both the gender of the speaker and the number of the audience.
To verify this, we experiment with translating the sentences with the following variations: No Prefix—The baseline translation as returned by the GMT system. “He said:”—Signaling a male speaker. We expect to further skew the system towards masculine forms. “She said:”—Signaling a female speaker and unknown audience. As this matches the actual speaker's gender, we expect an improvement in translation of first-person pronouns and verbs with first-person pronouns as subjects. “I said to them:”—Signaling an unknown speaker and plural audience. “He said to them:”—Masculine speaker and plural audience. “She said to them:”—Female speaker and plural audience—the complete, correct condition. We expect the best translation accuracy on this setup. “He/she said to him/her”—Here we set an (incorrect) singular gender-marked audience, to investigate our ability to control the audience morphology.
Quantitative Results
We compare the different conditions by comparing BLEU BIBREF5 with respect to the reference Hebrew translations. We use the multi-bleu.perl script from the Moses toolkit BIBREF6 . Table shows BLEU scores for the different prefixes. The numbers match our expectations: Generally, providing an incorrect speaker and/or audience information decreases the BLEU scores, while providing the correct information substantially improves it - we see an increase of up to 2.3 BLEU over the baseline. We note the BLEU score improves in all cases, even when given the wrong gender of either the speaker or the audience. We hypothesise this improvement stems from the addition of the word “said” which hints the model to generate a more “spoken” language which matches the tested scenario. Providing correct information for both speaker and audience usually helps more than providing correct information to either one of them individually. The one outlier is providing “She” for the speaker and “her” for the audience. While this is not the correct scenario, we hypothesise it gives an improvement in BLEU as it further reinforces the female gender in the sentence.
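Scoring a condition then amounts to writing the stripped translations to a file and piping them through the Moses script; the file paths below are placeholders, and only the standard multi-bleu.perl command-line usage (reference file as argument, hypotheses on stdin) is assumed.

    import subprocess

    def multi_bleu(hypothesis_path, reference_path, script="multi-bleu.perl"):
        # Run the Moses multi-bleu.perl script and return its BLEU report line.
        with open(hypothesis_path, "rb") as hyp:
            result = subprocess.run(
                ["perl", script, reference_path],
                stdin=hyp,
                capture_output=True,
                check=True,
            )
        return result.stdout.decode("utf-8").strip()

    # Example: print(multi_bleu("she_said_to_them.he.txt", "reference.he.txt"))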
Qualitative Results
The BLEU score is an indication of how close the automated translation is to the reference translation, but does not tell us what exactly changed concerning the gender and number properties we attempt to control. We perform a finer-grained analysis focusing on the relation between the injected speaker and audience information, and the morphological realizations of the corresponding elements. We parse the translations and the references using a Hebrew dependency parser. In addition to the parse structure, the parser also performs morphological analysis and tagging of the individual tokens. We then perform the following analysis.
Speaker's Gender Effects: We search for first-person singular pronouns with subject case (ani, unmarked for gender, corresponding to the English I), and consider the gender of their governing verbs (or adjectives in copular constructions such as `I am nice'). The possible genders are `masculine', `feminine' and `both', where the latter indicates a case where the non-diacriticized written form admits both a masculine and a feminine reading. We expect the gender to match the one requested in the prefix.
Interlocutors' Gender and Number Effects: We search for second-person pronouns and consider their gender and number. For pronouns in subject position, we also consider the gender and number of their governing verbs (or adjectives in copular constructions). For a singular audience, we expect the gender and number to match the requested ones. For a plural audience, we expect the masculine-plural forms.
Results: Speaker. Figure FIGREF3 shows the result for controlling the morphological properties of the speaker ({he, she, I} said). It shows the proportion of gender-inflected verbs for the various conditions and the reference. We see that the baseline system severely under-predicts the feminine form of verbs as compared to the reference. The “He said” conditions further decrease the number of feminine verbs, while the “I said” conditions bring it back to the baseline level. Finally, the “She said” prefixes substantially increase the number of feminine-marked verbs, bringing the proportion much closer to that of the reference (though still under-predicting some of the feminine cases).
Results: Audience. The chart in Figure FIGREF3 shows the results for controlling the number of the audience (...to them vs nothing). It shows the proportion of singular vs. plural second-person pronouns on the various conditions. It shows a similar trend: the baseline system severely under-predicts the plural forms with respect to the reference translation, while adding the “to them” condition brings the proportion much closer to that of the reference.
Comparison to vanmassenhove-hardmeier-way:2018:EMNLP
Closely related to our work, vanmassenhove-hardmeier-way:2018:EMNLP proposed a method and an English-French test set to evaluate gender-aware translation, based on the Europarl corpus BIBREF7 . We evaluate our method (using Google Translate and the given prefixes) on their test set to see whether it is applicable to another language pair and domain. Table shows the results of our approach vs. their published results and the Google Translate baseline. As may be expected, Google Translate outperforms their system as it is trained on a different corpus and may use more complex machine translation models. Using our method improves the BLEU score even further.
Other Languages
To test our method’s outputs on multiple languages, we run our pre-and post-processing steps with Google Translate using examples we sourced from native speakers of different languages. For every example we have an English sentence and two translations in the corresponding language, one in masculine and one in feminine form. Not all examples are using the same source English sentence as different languages mark different information. Table shows that for these specific examples our method worked on INLINEFORM0 of the languages we had examples for, while for INLINEFORM1 languages both translations are masculine, and for 1 language both are feminine.
Related Work
E17-1101 showed that given input with author traits like gender, it is possible to retain those traits in Statistical Machine Translation (SMT) models. W17-4727 showed that incorporating morphological analysis in the decoder improves NMT performance for morphologically rich languages. burlot:hal-01618387 presented a new protocol for evaluating the morphological competence of MT systems, indicating that current translation systems only manage to capture some morphological phenomena correctly. Regarding the application of constraints in NMT, N16-1005 presented a method for controlling the politeness level in the generated output. DBLP:journals/corr/FiclerG17aa showed how to guide a neural text generation system towards style and content parameters like the level of professionalism, subjective/objective, sentiment and others. W17-4811 showed that incorporating more context when translating subtitles can improve the coherence of the generated translations. Most closely to our work, vanmassenhove-hardmeier-way:2018:EMNLP also addressed the missing gender information by training proprietary models with a gender-indicating-prefix. We differ from this work by treating the problem in a black-box manner, and by addressing additional information like the number of the speaker and the gender and number of the audience.
Conclusions
We highlight the problem of translating between languages with different morphological systems, in which the target translation must contain gender and number information that is not available in the source. We propose a method for injecting such information into a pre-trained NMT model in a black-box setting. We demonstrate the effectiveness of this method by showing an improvement of 2.3 BLEU in an English-to-Hebrew translation setting where the speaker and audience gender can be inferred. We also perform a fine-grained syntactic analysis that shows how our method enables to control the morphological realization of first and second-person pronouns, together with verbs and adjectives related to them. In future work we would like to explore automatic generation of the injected context, or the use of cross-sentence context to infer the injected information. | correct information substantially improves it - we see an increase of up to 2.3 BLEU over the baseline, Finally, the “She said” prefixes substantially increase the number of feminine-marked verbs, bringing the proportion much closer to that of the reference |
0d6d5b6c00551dd0d2519f117ea81d1e9e8785ec | 0d6d5b6c00551dd0d2519f117ea81d1e9e8785ec_0 | Q: Which neural machine translation system is used?
Text: Introduction
A common way for marking information about gender, number, and case in language is morphology, or the structure of a given word in the language. However, different languages mark such information in different ways – for example, in some languages gender may be marked on the head word of a syntactic dependency relation, while in other languages it is marked on the dependent, on both, or on none of them BIBREF0 . This morphological diversity creates a challenge for machine translation, as there are ambiguous cases where more than one correct translation exists for the same source sentence. For example, while the English sentence “I love language” is ambiguous with respect to the gender of the speaker, Hebrew marks verbs for the gender of their subject and does not allow gender-neutral translation. This allows two possible Hebrew translations – one in a masculine and the other in a feminine form. As a consequence, a sentence-level translator (either human or machine) must commit to the gender of the speaker, adding information that is not present in the source. Without additional context, this choice must be done arbitrarily by relying on language conventions, world knowledge or statistical (stereotypical) knowledge.
Indeed, the English sentence “I work as a doctor” is translated into Hebrew by Google Translate using the masculine verb form oved, indicating a male speaker, while “I work as a nurse” is translated with the feminine form ovedet, indicating a female speaker (verified on March 2019). While this is still an issue, there have been recent efforts to reduce it for specific language pairs.
We present a simple black-box method to influence the interpretation chosen by an NMT system in these ambiguous cases. More concretely, we construct pre-defined textual hints about the gender and number of the speaker and the audience (the interlocutors), which we concatenate to a given input sentence that we would like to translate accordingly. We then show that a black-box NMT system makes the desired morphological decisions according to the given hint, even when no other evidence is available on the source side. While adding those hints results in additional text on the target side, we show that it is simple to remove, leaving only the desired translation.
Our method is appealing as it only requires simple pre- and post-processing of the inputs and outputs, without considering the system internals, or requiring specific annotated data and training procedure as in previous work BIBREF1 . We show that in spite of its simplicity, it is effective in resolving many of the ambiguities and improves the translation quality by up to 2.3 BLEU when given the correct hints, which may be inferred from text metadata or other sources. Finally, we perform a fine-grained syntactic analysis of the translations generated using our method, which shows its effectiveness.
Morphological Ambiguity in Translation
Different languages use different morphological features marking different properties on different elements. For example, English marks for number, case, aspect, tense, person, and degree of comparison. However, English does not mark gender on nouns and verbs. Even when a certain property is marked, languages differ in the form and location of the marking BIBREF0 . For example, marking can occur on the head of a syntactic dependency construction, on its argument, on both (requiring agreement), or on none of them. Translation systems must generate correct target-language morphology as part of the translation process. This requires knowledge of both the source-side and target-side morphology. Current state-of-the-art translation systems do capture many aspects of natural language, including morphology, when a relevant context is available BIBREF2 , BIBREF3 , but resort to “guessing” based on the training-data statistics when it is not. Complications arise when different languages convey different kinds of information in their morphological systems. In such cases, a translation system may be required to remove information available in the source sentence, or to add information not available in it, where the latter can be especially tricky.
Black-Box Knowledge Injection
Our goal is to supply an NMT system with knowledge regarding the speaker and interlocutor of first-person sentences, in order to produce the desired target-side morphology when the information is not available in the source sentence. The approach we take in the current work is that of black-box injection, in which we attempt to inject knowledge to the input in order to influence the output of a trained NMT system, without having access to its internals or its training procedure as proposed by vanmassenhove-hardmeier-way:2018:EMNLP.
We are motivated by recent work by BIBREF4 who showed that NMT systems learn to track coreference chains when presented with sufficient discourse context. We conjecture that there are enough sentence-internal pronominal coreference chains appearing in the training data of large-scale NMT systems, such that state-of-the-art NMT systems can and do track sentence-internal coreference. We devise a wrapper method to make use of this coreference tracking ability by introducing artificial antecedents that unambiguously convey the desired gender and number properties of the speaker and audience.
More concretely, a sentence such as “I love you” is ambiguous with respect to the gender of the speaker and the gender and number of the audience. However, sentences such as “I love you, she told him” are unambiguous given the coreference groups {I, she} and {you, him} which determine I to be feminine singular and you to be masculine singular. We can thus inject the desired information by prefixing a sentence with short generic sentence fragment such as “She told him:” or “She told them that”, relying on the NMT system's coreference tracking abilities to trigger the correctly marked translation, and then remove the redundant translated prefix from the generated target sentence. We observed that using a parataxis construction (i.e. “she said to him:”) almost exclusively results in target-side parataxis as well (in 99.8% of our examples), making it easy to identify and strip the translated version from the target side. Moreover, because the parataxis construction is grammatically isolated from the rest of the sentence, it can be stripped without requiring additional changes or modification to the rest of the sentence, ensuring grammaticality.
Experiments & Results
To demonstrate our method in a black-box setting, we focus our experiments on Google's machine translation system (GMT), accessed through its Cloud API. To test the method on real-world sentences, we consider a monologue from the stand-up comedy show “Sarah Silverman: A Speck of Dust”. The monologue consists of 1,244 English sentences, all by a female speaker conveyed to a plural, gender-neutral audience. Our parallel corpus consists of the 1,244 English sentences from the transcript, and their corresponding Hebrew translations based on the Hebrew subtitles. We translate the monologue one sentence at a time through the Google Cloud API. Eyeballing the results suggests that most of the translations use the incorrect, but default, masculine and singular forms for the speaker and the audience, respectively. We expect that by adding the relevant condition of “female speaking to an audience” we will get better translations, affecting both the gender of the speaker and the number of the audience.
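For reference, the per-sentence calls can be made with the Cloud Translation Basic (v2) Python client; this is a sketch assuming default application credentials are configured, and it is not the exact client or settings used for the experiments.

    from google.cloud import translate_v2 as translate

    client = translate.Client()

    def gmt_translate(text, target="he", source="en"):
        # Translate a single sentence English -> Hebrew through the Cloud API.
        result = client.translate(text, target_language=target, source_language=source)
        return result["translatedText"]  # may contain HTML entities that need unescaping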
To verify this, we experiment with translating the sentences with the following variations: No Prefix—The baseline translation as returned by the GMT system. “He said:”—Signaling a male speaker. We expect to further skew the system towards masculine forms. “She said:”—Signaling a female speaker and unknown audience. As this matches the actual speaker's gender, we expect an improvement in translation of first-person pronouns and verbs with first-person pronouns as subjects. “I said to them:”—Signaling an unknown speaker and plural audience. “He said to them:”—Masculine speaker and plural audience. “She said to them:”—Female speaker and plural audience—the complete, correct condition. We expect the best translation accuracy on this setup. “He/she said to him/her”—Here we set an (incorrect) singular gender-marked audience, to investigate our ability to control the audience morphology.
Quantitative Results
We compare the different conditions by comparing BLEU BIBREF5 with respect to the reference Hebrew translations. We use the multi-bleu.perl script from the Moses toolkit BIBREF6 . Table shows BLEU scores for the different prefixes. The numbers match our expectations: Generally, providing an incorrect speaker and/or audience information decreases the BLEU scores, while providing the correct information substantially improves it - we see an increase of up to 2.3 BLEU over the baseline. We note the BLEU score improves in all cases, even when given the wrong gender of either the speaker or the audience. We hypothesise this improvement stems from the addition of the word “said” which hints the model to generate a more “spoken” language which matches the tested scenario. Providing correct information for both speaker and audience usually helps more than providing correct information to either one of them individually. The one outlier is providing “She” for the speaker and “her” for the audience. While this is not the correct scenario, we hypothesise it gives an improvement in BLEU as it further reinforces the female gender in the sentence.
Qualitative Results
The BLEU score is an indication of how close the automated translation is to the reference translation, but does not tell us what exactly changed concerning the gender and number properties we attempt to control. We perform a finer-grained analysis focusing on the relation between the injected speaker and audience information, and the morphological realizations of the corresponding elements. We parse the translations and the references using a Hebrew dependency parser. In addition to the parse structure, the parser also performs morphological analysis and tagging of the individual tokens. We then perform the following analysis.
Speaker's Gender Effects: We search for first-person singular pronouns with subject case (ani, unmarked for gender, corresponding to the English I), and consider the gender of their governing verbs (or adjectives in copular constructions such as `I am nice'). The possible genders are `masculine', `feminine' and `both', where the latter indicates a case where the non-diacriticized written form admits both a masculine and a feminine reading. We expect the gender to match the one requested in the prefix.
Interlocutors' Gender and Number Effects: We search for second-person pronouns and consider their gender and number. For pronouns in subject position, we also consider the gender and number of their governing verbs (or adjectives in copular constructions). For a singular audience, we expect the gender and number to match the requested ones. For a plural audience, we expect the masculine-plural forms.
Results: Speaker. Figure FIGREF3 shows the result for controlling the morphological properties of the speaker ({he, she, I} said). It shows the proportion of gender-inflected verbs for the various conditions and the reference. We see that the baseline system severely under-predicts the feminine form of verbs as compared to the reference. The “He said” conditions further decrease the number of feminine verbs, while the “I said” conditions bring it back to the baseline level. Finally, the “She said” prefixes substantially increase the number of feminine-marked verbs, bringing the proportion much closer to that of the reference (though still under-predicting some of the feminine cases).
Results: Audience. The chart in Figure FIGREF3 shows the results for controlling the number of the audience (...to them vs nothing). It shows the proportion of singular vs. plural second-person pronouns on the various conditions. It shows a similar trend: the baseline system severely under-predicts the plural forms with respect to the reference translation, while adding the “to them” condition brings the proportion much closer to that of the reference.
Comparison to vanmassenhove-hardmeier-way:2018:EMNLP
Closely related to our work, vanmassenhove-hardmeier-way:2018:EMNLP proposed a method and an English-French test set to evaluate gender-aware translation, based on the Europarl corpus BIBREF7 . We evaluate our method (using Google Translate and the given prefixes) on their test set to see whether it is applicable to another language pair and domain. Table shows the results of our approach vs. their published results and the Google Translate baseline. As may be expected, Google Translate outperforms their system as it is trained on a different corpus and may use more complex machine translation models. Using our method improves the BLEU score even further.
Other Languages
To test our method’s outputs on multiple languages, we run our pre-and post-processing steps with Google Translate using examples we sourced from native speakers of different languages. For every example we have an English sentence and two translations in the corresponding language, one in masculine and one in feminine form. Not all examples are using the same source English sentence as different languages mark different information. Table shows that for these specific examples our method worked on INLINEFORM0 of the languages we had examples for, while for INLINEFORM1 languages both translations are masculine, and for 1 language both are feminine.
Related Work
E17-1101 showed that given input with author traits like gender, it is possible to retain those traits in Statistical Machine Translation (SMT) models. W17-4727 showed that incorporating morphological analysis in the decoder improves NMT performance for morphologically rich languages. burlot:hal-01618387 presented a new protocol for evaluating the morphological competence of MT systems, indicating that current translation systems only manage to capture some morphological phenomena correctly. Regarding the application of constraints in NMT, N16-1005 presented a method for controlling the politeness level in the generated output. DBLP:journals/corr/FiclerG17aa showed how to guide a neural text generation system towards style and content parameters like the level of professionalism, subjective/objective, sentiment and others. W17-4811 showed that incorporating more context when translating subtitles can improve the coherence of the generated translations. Most closely to our work, vanmassenhove-hardmeier-way:2018:EMNLP also addressed the missing gender information by training proprietary models with a gender-indicating-prefix. We differ from this work by treating the problem in a black-box manner, and by addressing additional information like the number of the speaker and the gender and number of the audience.
Conclusions
We highlight the problem of translating between languages with different morphological systems, in which the target translation must contain gender and number information that is not available in the source. We propose a method for injecting such information into a pre-trained NMT model in a black-box setting. We demonstrate the effectiveness of this method by showing an improvement of 2.3 BLEU in an English-to-Hebrew translation setting where the speaker and audience gender can be inferred. We also perform a fine-grained syntactic analysis that shows how our method enables to control the morphological realization of first and second-person pronouns, together with verbs and adjectives related to them. In future work we would like to explore automatic generation of the injected context, or the use of cross-sentence context to infer the injected information. | Google's machine translation system (GMT) |
edcde2b675cf8a362a63940b2bbdf02c150fe01f | edcde2b675cf8a362a63940b2bbdf02c150fe01f_0 | Q: What are the components of the black-box context injection system?
Text: Introduction
A common way for marking information about gender, number, and case in language is morphology, or the structure of a given word in the language. However, different languages mark such information in different ways – for example, in some languages gender may be marked on the head word of a syntactic dependency relation, while in other languages it is marked on the dependent, on both, or on none of them BIBREF0 . This morphological diversity creates a challenge for machine translation, as there are ambiguous cases where more than one correct translation exists for the same source sentence. For example, while the English sentence “I love language” is ambiguous with respect to the gender of the speaker, Hebrew marks verbs for the gender of their subject and does not allow gender-neutral translation. This allows two possible Hebrew translations – one in a masculine and the other in a feminine form. As a consequence, a sentence-level translator (either human or machine) must commit to the gender of the speaker, adding information that is not present in the source. Without additional context, this choice must be done arbitrarily by relying on language conventions, world knowledge or statistical (stereotypical) knowledge.
Indeed, the English sentence “I work as a doctor” is translated into Hebrew by Google Translate using the masculine verb form oved, indicating a male speaker, while “I work as a nurse” is translated with the feminine form ovedet, indicating a female speaker (verified on March 2019). While this is still an issue, there have been recent efforts to reduce it for specific language pairs.
We present a simple black-box method to influence the interpretation chosen by an NMT system in these ambiguous cases. More concretely, we construct pre-defined textual hints about the gender and number of the speaker and the audience (the interlocutors), which we concatenate to a given input sentence that we would like to translate accordingly. We then show that a black-box NMT system makes the desired morphological decisions according to the given hint, even when no other evidence is available on the source side. While adding those hints results in additional text on the target side, we show that it is simple to remove, leaving only the desired translation.
Our method is appealing as it only requires simple pre- and post-processing of the inputs and outputs, without considering the system internals, or requiring specific annotated data and training procedure as in previous work BIBREF1 . We show that in spite of its simplicity, it is effective in resolving many of the ambiguities and improves the translation quality by up to 2.3 BLEU when given the correct hints, which may be inferred from text metadata or other sources. Finally, we perform a fine-grained syntactic analysis of the translations generated using our method, which shows its effectiveness.
Morphological Ambiguity in Translation
Different languages use different morphological features marking different properties on different elements. For example, English marks for number, case, aspect, tense, person, and degree of comparison. However, English does not mark gender on nouns and verbs. Even when a certain property is marked, languages differ in the form and location of the marking BIBREF0 . For example, marking can occur on the head of a syntactic dependency construction, on its argument, on both (requiring agreement), or on none of them. Translation systems must generate correct target-language morphology as part of the translation process. This requires knowledge of both the source-side and target-side morphology. Current state-of-the-art translation systems do capture many aspects of natural language, including morphology, when a relevant context is available BIBREF2 , BIBREF3 , but resort to “guessing” based on the training-data statistics when it is not. Complications arise when different languages convey different kinds of information in their morphological systems. In such cases, a translation system may be required to remove information available in the source sentence, or to add information not available in it, where the latter can be especially tricky.
Black-Box Knowledge Injection
Our goal is to supply an NMT system with knowledge regarding the speaker and interlocutor of first-person sentences, in order to produce the desired target-side morphology when the information is not available in the source sentence. The approach we take in the current work is that of black-box injection, in which we attempt to inject knowledge to the input in order to influence the output of a trained NMT system, without having access to its internals or its training procedure as proposed by vanmassenhove-hardmeier-way:2018:EMNLP.
We are motivated by recent work by BIBREF4 who showed that NMT systems learn to track coreference chains when presented with sufficient discourse context. We conjecture that there are enough sentence-internal pronominal coreference chains appearing in the training data of large-scale NMT systems, such that state-of-the-art NMT systems can and do track sentence-internal coreference. We devise a wrapper method to make use of this coreference tracking ability by introducing artificial antecedents that unambiguously convey the desired gender and number properties of the speaker and audience.
More concretely, a sentence such as “I love you” is ambiguous with respect to the gender of the speaker and the gender and number of the audience. However, sentences such as “I love you, she told him” are unambiguous given the coreference groups {I, she} and {you, him} which determine I to be feminine singular and you to be masculine singular. We can thus inject the desired information by prefixing a sentence with short generic sentence fragment such as “She told him:” or “She told them that”, relying on the NMT system's coreference tracking abilities to trigger the correctly marked translation, and then remove the redundant translated prefix from the generated target sentence. We observed that using a parataxis construction (i.e. “she said to him:”) almost exclusively results in target-side parataxis as well (in 99.8% of our examples), making it easy to identify and strip the translated version from the target side. Moreover, because the parataxis construction is grammatically isolated from the rest of the sentence, it can be stripped without requiring additional changes or modification to the rest of the sentence, ensuring grammaticality.
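To make the pre- and post-processing concrete, the wrapper can be reduced to two string operations around any black-box translation call. The sketch below is a minimal illustration under that assumption; the function names and the translate callable are ours, not part of the paper's released code, and the stripping rule relies on the parataxis observation above (the injected fragment translates to a clause ending in a colon).

    def inject_context(source, prefix="She said to them:"):
        # Prepend an unambiguous antecedent marking speaker gender and audience number.
        return prefix + " " + source

    def strip_translated_prefix(translation):
        # The translated parataxis fragment ends at the first colon; drop it.
        head, sep, tail = translation.partition(":")
        return tail.strip() if sep else translation.strip()

    def gender_aware_translate(source, translate, prefix="She said to them:"):
        # translate is any black-box MT callable: str -> str.
        return strip_translated_prefix(translate(inject_context(source, prefix)))

Because the injected fragment is grammatically isolated from the rest of the sentence, dropping everything up to the first colon leaves the remaining target sentence intact in the vast majority of cases.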
Experiments & Results
To demonstrate our method in a black-box setting, we focus our experiments on Google's machine translation system (GMT), accessed through its Cloud API. To test the method on real-world sentences, we consider a monologue from the stand-up comedy show “Sarah Silverman: A Speck of Dust”. The monologue consists of 1,244 English sentences, all by a female speaker conveyed to a plural, gender-neutral audience. Our parallel corpus consists of the 1,244 English sentences from the transcript, and their corresponding Hebrew translations based on the Hebrew subtitles. We translate the monologue one sentence at a time through the Google Cloud API. Eyeballing the results suggests that most of the translations use the incorrect, but default, masculine and singular forms for the speaker and the audience, respectively. We expect that by adding the relevant condition of “female speaking to an audience” we will get better translations, affecting both the gender of the speaker and the number of the audience.
To verify this, we experiment with translating the sentences with the following variations: No Prefix—The baseline translation as returned by the GMT system. “He said:”—Signaling a male speaker. We expect to further skew the system towards masculine forms. “She said:”—Signaling a female speaker and unknown audience. As this matches the actual speaker's gender, we expect an improvement in translation of first-person pronouns and verbs with first-person pronouns as subjects. “I said to them:”—Signaling an unknown speaker and plural audience. “He said to them:”—Masculine speaker and plural audience. “She said to them:”—Female speaker and plural audience—the complete, correct condition. We expect the best translation accuracy on this setup. “He/she said to him/her”—Here we set an (incorrect) singular gender-marked audience, to investigate our ability to control the audience morphology.
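These conditions can be written down as a small mapping from condition name to injected prefix and applied with the gender_aware_translate helper sketched above; the names and the single "he said to her" entry standing in for the combined "He/she said to him/her" probe are our own shorthand, not identifiers from the paper.

    CONDITIONS = {
        "no_prefix": None,
        "he_said": "He said:",
        "she_said": "She said:",
        "i_said_to_them": "I said to them:",
        "he_said_to_them": "He said to them:",
        "she_said_to_them": "She said to them:",  # correct speaker and audience
        "he_said_to_her": "He said to her:",      # incorrect singular, gender-marked audience
    }

    def translate_corpus(sentences, translate, prefix):
        # One stripped translation per source sentence for a single condition.
        if prefix is None:
            return [translate(s) for s in sentences]
        return [gender_aware_translate(s, translate, prefix) for s in sentences]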
Quantitative Results
We compare the different conditions by computing BLEU BIBREF5 with respect to the reference Hebrew translations. We use the multi-bleu.perl script from the Moses toolkit BIBREF6 . Table shows BLEU scores for the different prefixes. The numbers match our expectations: Generally, providing incorrect speaker and/or audience information decreases the BLEU scores, while providing the correct information substantially improves it - we see an increase of up to 2.3 BLEU over the baseline. We note the BLEU score improves in all cases, even when given the wrong gender of either the speaker or the audience. We hypothesise this improvement stems from the addition of the word “said”, which hints to the model to generate more “spoken” language that matches the tested scenario. Providing correct information for both speaker and audience usually helps more than providing correct information for either one of them individually. The one outlier is providing “She” for the speaker and “her” for the audience. While this is not the correct scenario, we hypothesise it gives an improvement in BLEU as it further reinforces the female gender in the sentence.
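For reference, the same Moses script can be driven from Python as follows (file paths are placeholders; the script reads the hypothesis from stdin and takes the reference file as an argument):

```python
import subprocess

def multi_bleu(hyp_path, ref_path, script="multi-bleu.perl"):
    """Score a hypothesis file against a reference file with the Moses
    multi-bleu.perl script and return its textual report."""
    with open(hyp_path, "rb") as hyp:
        result = subprocess.run(["perl", script, ref_path],
                                stdin=hyp, capture_output=True, check=True)
    return result.stdout.decode()  # e.g. "BLEU = 24.10, ..."
```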
Qualitative Results
The BLEU score is an indication of how close the automated translation is to the reference translation, but does not tell us what exactly changed concerning the gender and number properties we attempt to control. We perform a finer-grained analysis focusing on the relation between the injected speaker and audience information, and the morphological realizations of the corresponding elements. We parse the translations and the references using a Hebrew dependency parser. In addition to the parse structure, the parser also performs morphological analysis and tagging of the individual tokens. We then perform the following analysis.
Speaker's Gender Effects: We search for first-person singular pronouns with subject case (ani, unmarked for gender, corresponding to the English I), and consider the gender of their governing verbs (or adjectives in copular constructions such as `I am nice'). The possible genders are `masculine', `feminine' and `both', where the latter indicates a case where the non-diacriticized written form admits both a masculine and a feminine reading. We expect the gender to match the one requested in the prefix.
Interlocutors' Gender and Number Effects: We search for second-person pronouns and consider their gender and number. For pronouns in subject position, we also consider the gender and number of their governing verbs (or adjectives in copular constructions). For a singular audience, we expect the gender and number to match the requested ones. For a plural audience, we expect the masculine-plural forms.
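A sketch of the speaker-side count, assuming the parser output has been loaded into per-token dictionaries; the field names ('form', 'pos', 'gender', 'head') are assumptions about that format, not the actual parser API:

```python
from collections import Counter

def speaker_gender_counts(sentences):
    """Count the gender of verbs/adjectives governing the first-person
    subject pronoun 'ani'. Each sentence is a list of token dicts with
    'form', 'pos', 'gender' and 'head' (index of the governing token);
    these field names are assumptions about the parser output format."""
    counts = Counter()
    for tokens in sentences:
        for tok in tokens:
            if tok["form"] == "ani" and tok.get("head") is not None:
                gov = tokens[tok["head"]]
                if gov["pos"] in ("VERB", "ADJ"):
                    counts[gov.get("gender", "both")] += 1
    return counts  # e.g. Counter({'feminine': 120, 'masculine': 80, 'both': 30})
```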
Results: Speaker. Figure FIGREF3 shows the result for controlling the morphological properties of the speaker ({he, she, I} said). It shows the proportion of gender-inflected verbs for the various conditions and the reference. We see that the baseline system severely under-predicts the feminine form of verbs as compared to the reference. The “He said” conditions further decrease the number of feminine verbs, while the “I said” conditions bring it back to the baseline level. Finally, the “She said” prefixes substantially increase the number of feminine-marked verbs, bringing the proportion much closer to that of the reference (though still under-predicting some of the feminine cases).
Results: Audience. The chart in Figure FIGREF3 shows the results for controlling the number of the audience (...to them vs nothing). It shows the proportion of singular vs. plural second-person pronouns on the various conditions. It shows a similar trend: the baseline system severely under-predicts the plural forms with respect to the reference translation, while adding the “to them” condition brings the proportion much closer to that of the reference.
Comparison to Vanmassenhove, Hardmeier and Way (2018)
Closely related to our work, Vanmassenhove, Hardmeier and Way (2018) proposed a method and an English-French test set to evaluate gender-aware translation, based on the Europarl corpus BIBREF7 . We evaluate our method (using Google Translate and the given prefixes) on their test set to see whether it is applicable to another language pair and domain. Table shows the results of our approach vs. their published results and the Google Translate baseline. As may be expected, Google Translate outperforms their system as it is trained on a different corpus and may use more complex machine translation models. Using our method improves the BLEU score even further.
Other Languages
To test our method’s outputs on multiple languages, we run our pre- and post-processing steps with Google Translate using examples we sourced from native speakers of different languages. For every example we have an English sentence and two translations in the corresponding language, one in masculine and one in feminine form. Not all examples use the same source English sentence, as different languages mark different information. Table shows that for these specific examples our method worked on INLINEFORM0 of the languages we had examples for, while for INLINEFORM1 languages both translations are masculine, and for 1 language both are feminine.
Related Work
E17-1101 showed that given input with author traits like gender, it is possible to retain those traits in Statistical Machine Translation (SMT) models. W17-4727 showed that incorporating morphological analysis in the decoder improves NMT performance for morphologically rich languages. burlot:hal-01618387 presented a new protocol for evaluating the morphological competence of MT systems, indicating that current translation systems only manage to capture some morphological phenomena correctly. Regarding the application of constraints in NMT, N16-1005 presented a method for controlling the politeness level in the generated output. DBLP:journals/corr/FiclerG17aa showed how to guide a neural text generation system towards style and content parameters like the level of professionalism, subjective/objective, sentiment and others. W17-4811 showed that incorporating more context when translating subtitles can improve the coherence of the generated translations. Most closely related to our work, Vanmassenhove, Hardmeier and Way (2018) also addressed the missing gender information by training proprietary models with a gender-indicating prefix. We differ from this work by treating the problem in a black-box manner, and by addressing additional information like the number of the speaker and the gender and number of the audience.
Conclusions
We highlight the problem of translating between languages with different morphological systems, in which the target translation must contain gender and number information that is not available in the source. We propose a method for injecting such information into a pre-trained NMT model in a black-box setting. We demonstrate the effectiveness of this method by showing an improvement of 2.3 BLEU in an English-to-Hebrew translation setting where the speaker and audience gender can be inferred. We also perform a fine-grained syntactic analysis that shows how our method enables us to control the morphological realization of first and second-person pronouns, together with verbs and adjectives related to them. In future work we would like to explore automatic generation of the injected context, or the use of cross-sentence context to infer the injected information. | supply an NMT system with knowledge regarding the speaker and interlocutor of first-person sentences |
d20d6c8ecd7cb0126479305d27deb0c8b642b09f | d20d6c8ecd7cb0126479305d27deb0c8b642b09f_0 | Q: What normalization techniques are mentioned?
Text: Introduction
Although development of the first speech recognition systems began half a century ago, there has been a significant increase in the accuracy of ASR systems and in the number of their applications over the past ten years, even for low-resource languages BIBREF0 , BIBREF1 .
This is mainly due to the widespread application of deep learning and the very effective performance of neural networks in hybrid recognition systems (DNN-HMM). However, over the last few years there has been a trend to change the traditional ASR training paradigm. End-to-end training systems gradually displace the complex multistage learning process (including training of GMMs BIBREF2 , clustering of allophones’ states, aligning of speech to clustered senones, training neural networks with cross-entropy loss, followed by retraining with a sequence-discriminative criterion). The new approach implies training the system in one global step, working only with acoustic data and reference texts, and significantly simplifies or, in some cases, even completely excludes the decoding process. It also avoids the problem of out-of-vocabulary words (OOV), because an end-to-end system trained with parts of words as targets can construct new words itself using graphemes or subword units, while traditional DNN-HMM systems are limited by the language model vocabulary.
The whole variety of end-to-end systems can be divided into 3 main categories: Connectionist Temporal Classification (CTC) BIBREF3 ; Sequence-to-sequence models with attention mechanism BIBREF4 ; RNN-Transducers BIBREF5 .
Connectionist Temporal Classification (CTC) approach uses loss functions that utilize all possible alignments between reference text and audio data. Targets for CTC-based system can be phonemes, graphemes, syllables and other subword units and even whole words. However, a lot more data is usually required to train such systems well, compared to traditional hybrid systems.
Sequence-to-sequence models are used to map entire input sequences to output sequences without any assumptions about their alignment. The most popular architecture for sequence-to-sequence models is the encoder-decoder model with attention. The encoder and decoder are usually constructed using recurrent neural networks; the basic attention mechanism calculates energy weights that emphasize the importance of the encoder vectors for the current decoding step, and then sums these vectors weighted by the energies. Encoder-decoder models with an attention mechanism show results close to traditional DNN-HMM systems and in some cases surpass them, but for a number of reasons their usage is still rather limited. First of all, this is related to the fact that such systems show the best results when the duration of real utterances is close to the duration of utterances from the training data. However, when the duration difference increases, the performance degrades significantly BIBREF4 .
Moreover, the entire utterance must be processed by the encoder before the decoder can start its work. This is the reason why it is hard to apply the approach to recognize long recordings or streaming audio. Segmenting long recordings into shorter utterances solves the duration issue, but leads to a context break, and eventually negatively affects recognition accuracy. Secondly, the computational complexity of encoder-decoder models is high because of the use of recurrent networks, so these models are rather slow and hard to parallelize.
The idea of the RNN-Transducer is an extension of CTC and provides the ability to model inner dependencies separately and jointly between elements of both input (audio frames) and output (phonemes and other subword units) sequences. Despite their mathematical elegance, such systems are very complicated and hard to implement, so they are still rarely used, although several impressive results were obtained using this technique.
The CTC-based approach is easier to implement, scales better and has many “degrees of freedom”, which makes it possible to significantly improve baseline systems and achieve results close to the state of the art. Moreover, CTC-based systems are well compatible with traditional WFST-decoders and can be easily integrated with conventional ASR systems.
Besides, as already mentioned, CTC-systems are rather sensitive to the amount of training data, so it is very important to study how to build an effective CTC-based recognition system using a small amount of training samples. This is especially true for low-resource languages, where we have only a few dozen hours of speech. Building ASR systems for low-resource languages is one of the aims of the international Babel program, funded by the Intelligence Advanced Research Projects Activity (IARPA). Within the program extensive research was carried out, resulting in the creation of a number of modern ASR systems for low-resource languages. Recently, end-to-end approaches were applied to this task, showing expectedly worse results than traditional systems, although the difference is rather small.
In this paper we explore a number of ways to improve end-to-end CTC-based systems in low-resource scenarios using the Turkish language dataset from the IARPA Babel collection. In the next section we describe in more detail different versions of CTC-systems and their application to low-resource speech recognition. Section 3 describes the experiments and their results. Section 4 summarizes the results and discusses possible ways for further work.
Related work
Development of CTC-based systems originates from the paper BIBREF3 where the CTC loss was introduced. This loss is the total probability of a label sequence given an observation sequence, which takes into account all possible alignments induced by a given word sequence.
Although the number of possible alignments increases exponentially with sequence length, there is an efficient algorithm to compute the CTC loss based on the dynamic programming principle (known as the forward-backward algorithm). This algorithm operates on the posterior probabilities of observing any output sequence element at a given time frame, and the CTC loss is differentiable with respect to these probabilities.
Therefore, if an acoustic model is based on the neural network which estimates these posteriors, its training may be performed with a conventional error back-propagation gradient descent BIBREF6 . Training of ASR system based on such a model does not require an explicit alignment of input utterance to the elements of output sequence and thus may be performed in end-to-end fashion. It is also important that CTC loss accumulates the information about the whole output sequence, and hence its optimization is in some sense an alternative to the traditional fine-tuning of neural network acoustic models by means of sequence-discriminative criteria such as sMBR BIBREF7 etc. The implementation of CTC is conventionally based on RNN/LSTM networks, including bidirectional ones as acoustic models, since they are known to model long context effectively.
An important component of CTC is a special “blank” symbol which fills in gaps between meaningful elements of the output sequence to equalize its length to the number of frames in the input sequence. It corresponds to a separate output neuron, and blank symbols are deleted from the recognized sequence to obtain the final result. In BIBREF8 a modification of the CTC loss was proposed, referred to as the Auto SeGmentation criterion (ASG loss), which does not use blank symbols. Instead of using “blank”, a simple transition probability model for the output symbols is introduced. This leads to a significant simplification and speedup of computations. Moreover, improved recognition results compared to the basic CTC loss were obtained.
DeepSpeech BIBREF9 developed by Baidu Inc. was one of the first systems that demonstrated the effectiveness of CTC-based speech recognition in LVCSR tasks. Being trained on 2300 hours of English Conversational Telephone Speech data, it demonstrated state-of-the-art results on the Hub5'00 evaluation set. Research in this direction continued and resulted in the DeepSpeech2 architecture BIBREF10 , composed of both convolutional and recurrent layers. This system demonstrates improved accuracy of recognition of both English and Mandarin speech. Another successful example of applying CTC to LVCSR tasks is the EESEN system BIBREF11 . It integrates an RNN-based model trained with the CTC criterion into the conventional WFST-based decoder from the Kaldi toolkit BIBREF12 . The paper BIBREF13 shows that end-to-end systems may be successfully built from convolutional layers only instead of recurrent ones. It was demonstrated that using Gated Convolutional Units (GLU-CNNs) and training with the ASG loss leads to state-of-the-art results on the LibriSpeech database (960 hours of training data).
Recently, a new modification of the DeepSpeech2 architecture was proposed in BIBREF14 . Several lower convolutional layers were replaced with a deep residual network with depth-wise separable convolutions. This modification, along with strong regularization and data augmentation techniques, leads to results close to DeepSpeech2 in spite of the significantly lower amount of data used for training. Indeed, one of the models was trained with only 80 hours of speech data (which were augmented with noisy and speed-perturbed versions of the original data).
These results suggest that CTC can be successfully applied for the training of ASR systems for low-resource languages, in particular, for those included in Babel research program (the amount of training data for them is normally 40 to 80 hours of speech).
Currently, the Babel corpus contains data for more than 20 languages, and for most of them quite good traditional ASR systems have been built BIBREF15 , BIBREF16 , BIBREF17 . In order to improve speech recognition accuracy for a given language, data from other languages is widely used as well. It can be used to train a multilingual system via multitask learning or to obtain high-level multilingual representations, usually bottleneck features, extracted from a pre-trained multilingual network.
One of the first attempts to build an ASR system for low-resource Babel languages using CTC-based end-to-end training was made recently BIBREF18 . Although the obtained results are somewhat worse compared to the state-of-the-art traditional systems, they still demonstrate that the CTC-based approach is viable for building low-resource ASR systems. The aim of our work is to investigate some ways to improve the obtained results.
Basic setup
For all experiments we used conversational speech from IARPA Babel Turkish Language Pack (LDC2016S10). This corpus contains about 80 hours of transcribed speech for training and 10 hours for development. The dataset is rather small compared to widely used benchmarks for conversational speech: English Switchboard corpus (300 hours, LDC97S62) and Fisher dataset (2000 hours, LDC2004S13 and LDC2005S13).
As targets we use 32 symbols: 29 lowercase characters of Turkish alphabet BIBREF19 , apostrophe, space and special 〈blank〉 character that means “no output”. Thus we do not use any prior linguistic knowledge and also avoid OOV problem as the system can construct new words directly.
All models are trained with the CTC loss. Input features are 40 mel-scaled log filterbank energies (FBanks) computed every 10 ms with a 25 ms window, concatenated with deltas and delta-deltas (120 features per vector). We also tried to use spectrograms and experimented with different normalization techniques.
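A sketch of this feature pipeline using librosa (the toolkit choice, the 8 kHz sampling rate and the flooring constant are assumptions; the window and hop sizes follow the 25 ms / 10 ms setup above):

```python
import numpy as np
import librosa

def fbank_features(wav_path, sr=8000, n_mels=40):
    """40 log mel filterbank energies with a 25 ms window and 10 ms hop,
    stacked with deltas and delta-deltas (3 * 40 = 120 features per frame)."""
    y, sr = librosa.load(wav_path, sr=sr)
    win, hop = int(0.025 * sr), int(0.010 * sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=512,
                                         win_length=win, hop_length=hop,
                                         n_mels=n_mels)
    logmel = np.log(mel + 1e-6)                       # floor to avoid log(0)
    d1 = librosa.feature.delta(logmel)
    d2 = librosa.feature.delta(logmel, order=2)
    return np.concatenate([logmel, d1, d2], axis=0).T  # shape: (frames, 120)
```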
For decoding we used character-based beam search BIBREF20 with a 3-gram language model built with the SRILM package BIBREF21 , finding the sequence of characters c that maximizes the following objective BIBREF9 : Q(c) = log(P(c|x)) + α·log(P_lm(c)) + β·word_count(c),
where α is the language model weight and β is the word insertion penalty.
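In code, the score of a beam-search hypothesis is then simply the following (the lm_logprob callable is a stand-in for an SRILM/KenLM-style query; this is an illustrative sketch, not the decoder used here):

```python
def hypothesis_score(ctc_logprob, text, lm_logprob, alpha, beta):
    """Q(c) = log P(c|x) + alpha * log P_lm(c) + beta * word_count(c)."""
    return ctc_logprob + alpha * lm_logprob(text) + beta * len(text.split())
```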
For all experiments we used INLINEFORM0 , INLINEFORM1 , and performed decoding with beam width equal to 100 and 2000, which is not very large compared to 7000 and more active hypotheses used in traditional WFST decoders (e.g. many Kaldi recipes do decoding with INLINEFORM2 ).
To compare with other published results BIBREF18 , BIBREF22 we used the Sclite BIBREF23 scoring package to measure the results of decoding with beam width 2000; this scoring takes into account incomplete words and spoken noise in the reference texts and doesn't penalize the model if it recognizes these pieces incorrectly.
We also report WER (word error rate) for a simple argmax decoder (taking the label with the maximum output at each time step and then applying the CTC decoding rule – collapse repeated labels and remove “blanks”).
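The argmax decoder amounts to a few lines (a sketch; id2char maps output indices to the 32 target symbols, and the blank index is an assumption):

```python
import numpy as np

def ctc_greedy_decode(logits, id2char, blank=0):
    """Take the most probable label at each frame, collapse repeats,
    and drop blanks (the standard CTC collapsing rule)."""
    best = np.asarray(logits).argmax(axis=-1)
    chars, prev = [], None
    for idx in best:
        if idx != prev and idx != blank:
            chars.append(id2char[idx])
        prev = idx
    return "".join(chars)
```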
Experiments with architecture
We explored the behavior of different neural network architectures in the case when rather little data is available. We used multi-layer bidirectional LSTM networks, tried a fully-convolutional architecture similar to Wav2Letter BIBREF8 and explored the DeepSpeech-like architecture developed by Salesforce (DS-SF) BIBREF14 .
The convolutional model consists of 11 convolutional layers with batch normalization after each layer. The DeepSpeech-like architecture consists of a 5-layer residual network with depth-wise separable convolutions followed by a 4-layer bidirectional Gated Recurrent Unit (GRU) network, as described in BIBREF14 .
Our baseline bidirectional LSTM is a 6-layer network with 320 hidden units per direction, as in BIBREF18 . We also tried to use the bLSTM to label every second frame (20 ms), concatenating pairs of consecutive outputs of the first layer and taking these as inputs to the second layer of the model.
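A PyTorch sketch of this baseline (the framework choice and the blank index are assumptions; 120-dimensional inputs, 6 bidirectional layers with 320 units per direction and 32 output symbols follow the setup described above):

```python
import torch
import torch.nn as nn

class BLSTMCTC(nn.Module):
    def __init__(self, n_feats=120, hidden=320, layers=6, n_symbols=32):
        super().__init__()
        self.rnn = nn.LSTM(n_feats, hidden, num_layers=layers,
                           bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden, n_symbols)

    def forward(self, x):                        # x: (batch, frames, 120)
        h, _ = self.rnn(x)
        return self.out(h).log_softmax(dim=-1)   # (batch, frames, 32)

# CTC training step (blank index 0 is an assumption):
# logp = model(feats).transpose(0, 1)            # (frames, batch, 32) for CTCLoss
# loss = nn.CTCLoss(blank=0)(logp, targets, input_lengths, target_lengths)
```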
The performance of our baseline models is shown in Table TABREF6 .
Loss modification: segmenting during training
It is known that the CTC loss is very unstable for long utterances BIBREF3 , and smaller utterances are more useful for this task. Some techniques were developed to help the model converge faster, e.g. sortagrad BIBREF10 (using shorter segments at the beginning of training).
To compute the CTC loss we use all possible alignments between audio features and reference text, but only some of the alignments make sense. Traditional DNN-HMM systems also use iterative training, finding the best alignment and then training the neural network to approximate this alignment. Therefore, we propose the following algorithm to use segmentation during training (a code sketch follows the list below):
compute CTC-alignment (find the sequence of targets with minimal loss that can be mapped to real targets by collapsing repeated characters and removing blanks)
perform greedy decoding (argmax on each step)
find “well-recognized” words with INLINEFORM0 ( INLINEFORM1 is a hyperparameter): the segment should start and end with a space; a word is “well-recognized” when its argmax decoding is equal to the computed alignment
if the word is “well-recognized”, divide the utterance into 5 segments: left segment before space, left space, the word, right space and right segment
compute the CTC loss for all these segments separately and do back-propagation as usual
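A sketch of the final step, assuming the alignment/greedy comparison above has already produced the frame and target spans of the five segments (PyTorch; the blank index 0 is an assumption):

```python
import torch
import torch.nn as nn

ctc = nn.CTCLoss(blank=0, zero_infinity=True)

def segmented_ctc_loss(log_probs, targets, segments):
    """log_probs: (T, 1, C) log-probabilities of one utterance; targets: (L,)
    reference labels; segments: list of ((f_start, f_end), (t_start, t_end))
    spans covering the five pieces (left context, left space, word, right
    space, right context). The CTC losses of the pieces are summed."""
    total = log_probs.new_zeros(())
    for (fs, fe), (ts, te) in segments:
        lp, tgt = log_probs[fs:fe], targets[ts:te]
        if lp.shape[0] == 0 or tgt.numel() == 0:
            continue
        total = total + ctc(lp, tgt.unsqueeze(0),
                            torch.tensor([lp.shape[0]]),
                            torch.tensor([tgt.numel()]))
    return total
```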
The results of training with this criterion are shown in Table TABREF13 . The proposed criterion doesn't lead to consistent improvement while decoding with large beam width (2000), but shows significant improvement when decoding with smaller beam (100). We plan to further explore utilizing alignment information during training.
Using different features
We explored different normalization techniques. FBanks with cepstral mean normalization (CMN) perform better than raw FBanks. We found adding variance normalization to the mean normalization (CMVN) unnecessary for the task. Using deltas and delta-deltas improves the model, so we used them in the other experiments. Models trained with spectrogram features converge more slowly and to a worse minimum, but the difference when using CMN is not very big compared to FBanks.
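For concreteness, the two per-utterance normalization variants look like this (a minimal sketch):

```python
import numpy as np

def cmn(feats):
    """Cepstral mean normalization: subtract the per-utterance feature mean."""
    return feats - feats.mean(axis=0, keepdims=True)

def cmvn(feats, eps=1e-8):
    """Mean and variance normalization of the same features."""
    mu = feats.mean(axis=0, keepdims=True)
    sigma = feats.std(axis=0, keepdims=True)
    return (feats - mu) / (sigma + eps)
```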
Varying model size and number of layers
Experiments with a varying number of hidden units in 6-layer bLSTM models are presented in Table TABREF17 . Models with 512 and 768 hidden units are worse than the one with 320, but the model with 1024 hidden units is significantly better than the others. We also observed that the model with 6 layers performs better than the others.
Training the best model
To train our best model we chose the best network from our experiments (a 6-layer bLSTM with 1024 hidden units), trained it with the Adam optimizer and fine-tuned it with SGD with momentum using exponential learning rate decay. The best model, trained with speed and volume perturbation BIBREF24 , achieved 45.8% WER, which is the best published end-to-end result on the Babel Turkish dataset using in-domain data. For comparison, the WER of the model trained using in-domain data in BIBREF18 is 53.1%, and 48.7% when using 4 additional languages (including the English Switchboard dataset). It is also not far from the Kaldi DNN-HMM system BIBREF22 with 43.8% WER.
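A sketch of this two-stage schedule in PyTorch (all hyperparameter values are placeholders; the speed and volume perturbation of BIBREF24 is applied to the audio beforehand and is not shown):

```python
import torch

def make_optimizer(model, stage):
    """Stage 1: Adam; stage 2: fine-tuning with SGD + momentum and
    exponential learning-rate decay (all hyperparameters are placeholders)."""
    if stage == 1:
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        sched = None
    else:
        opt = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)
        sched = torch.optim.lr_scheduler.ExponentialLR(opt, gamma=0.95)
    return opt, sched  # call sched.step() once per epoch in stage 2
```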
Conclusions and future work
In this paper we explored different end-to-end architectures in a low-resource ASR task using the Babel Turkish dataset. We considered different ways to improve performance and proposed a promising CTC-loss modification that uses segmentation during training. Our final system achieved 45.8% WER using in-domain data only, which is the best published result for Turkish end-to-end systems. Our work also shows that a well-tuned end-to-end system can achieve results very close to traditional DNN-HMM systems even for low-resource languages. In future work we plan to further investigate different loss modifications (Gram-CTC, ASG) and try to use RNN-Transducers and multi-task learning.
Acknowledgements
This work was financially supported by the Ministry of Education and Science of the Russian Federation, Contract 14.575.21.0132 (IDRFMEFI57517X0132). | FBanks with cepstral mean normalization (CMN), variance with mean normalization (CMVN) |
11e6b79f1f48ddc6c580c4d0a3cb9bcb42decb17 | 11e6b79f1f48ddc6c580c4d0a3cb9bcb42decb17_0 | Q: What features do they experiment with?
Text: (the input duplicates the IARPA Babel Turkish paper above) | 40 mel-scaled log filterbank energies (FBanks) computed every 10 ms with 25 ms window, deltas and delta-deltas (120 features in vector), spectrogram |
2677b88c2def3ed94e25a776599555a788d197f2 | 2677b88c2def3ed94e25a776599555a788d197f2_0 | Q: Which architecture is their best model?
Text: Introduction
Although development of the first speech recognition systems began half a century ago, there has been a significant increase of the accuracy of ASR systems and number of their applications for the recent ten years, even for low-resource languages BIBREF0 , BIBREF1 .
This is mainly due to widespread applying of deep learning and very effective performance of neural networks in hybrid recognition systems (DNN-HMM). However, for last few years there has been a trend to change traditional ASR training paradigm. End-to-end training systems gradually displace complex multistage learning process (including training of GMMs BIBREF2 , clustering of allophones’ states, aligning of speech to clustered senones, training neural networks with cross-entropy loss, followed by retraining with sequence-discriminative criterion). The new approach implies training the system in one global step, working only with acoustic data and reference texts, and significantly simplifies or even completely excludes in some cases the decoding process. It also avoids the problem of out-of-vocabulary words (OOV), because end-to-end system, trained with parts of the words as targets, can construct new words itself using graphemes or subword units, while traditional DNN-HMM systems are limited with language model vocabulary.
The whole variety of end-to-end systems can be divided into 3 main categories: Connectionist Temporal Classification (CTC) BIBREF3 ; Sequence-to-sequence models with attention mechanism BIBREF4 ; RNN-Transducers BIBREF5 .
Connectionist Temporal Classification (CTC) approach uses loss functions that utilize all possible alignments between reference text and audio data. Targets for CTC-based system can be phonemes, graphemes, syllables and other subword units and even whole words. However, a lot more data is usually required to train such systems well, compared to traditional hybrid systems.
Sequence-to-sequence models are used to map entire input sequences to output sequences without any assumptions about their alignment. The most popular architecture for sequence-to-sequence models is encoder-decoder model with attention. Encoder and decoder are usually constructed using recurrent neural networks, basic attention mechanism calculates energy weights that emphasize importance of encoder vectors for decoding on this step, and then sums all these vectors with energy weights. Encoder-decoder models with attention mechanism show results close to traditional DNN-HMM systems and in some cases surpass them, but for a number of reasons their usage is still rather limited. First of all, this is related to the fact, that such systems show best results when the duration of real utterances is close to the duration of utterances from training data. However, when the duration difference increases, the performance degrades significantly BIBREF4 .
Moreover, the entire utterance must be preprocessed by encoder before start of decoder's work. This is the reason, why it is hard to apply the approach to recognize long recordings or streaming audio. Segmenting long recordings into shorter utterances solves the duration issue, but leads to a context break, and eventually negatively affects recognition accuracy. Secondly, the computational complexity of encoder-decoder models is high because of recurrent networks usage, so these models are rather slow and hard to parallelize.
The idea of RNN-Transducer is an extension of CTC and provides the ability to model inner dependencies separately and jointly between elements of both input (audio frames) and output (phonemes and other subword units) sequences. Despite of mathematical elegance, such systems are very complicated and hard to implement, so they are still rarely used, although several impressive results were obtained using this technique.
CTC-based approach is easier to implement, better scaled and has many “degrees of freedom”, which allows to significantly improve baseline systems and achieve results close to state-of-the-art. Moreover, CTC-based systems are well compatible with traditional WFST-decoders and can be easily integrated with conventional ASR systems.
Besides, as already mentioned, CTC-systems are rather sensitive to the amount of training data, so it is very relevant to study how to build effective CTC-based recognition system using a small amount of training samples. It is especially actual for low-resource languages, where we have only a few dozen hours of speech. Building ASR system for low-resource languages is one of the aims of international Babel program, funded by the Intelligence Advanced Research Projects Activity (IARPA). Within the program extensive research was carried out, resulting in creation of a number of modern ASR systems for low-resource languages. Recently, end-to-end approaches were applied to this task, showing expectedly worse results than traditional systems, although the difference is rather small.
In this paper we explore a number of ways to improve end-to-end CTC-based systems in low-resource scenarios using the Turkish language dataset from the IARPA Babel collection. In the next section we describe in more details different versions of CTC-systems and their application for low-resource speech recognition. Section 3 describes the experiments and their results. Section 4 summarizes the results and discusses possible ways for further work.
Related work
Development of CTC-based systems originates from the paper BIBREF3 where CTC loss was introduced. This loss is a total probability of labels sequence given observation sequence, which takes into account all possible alignments induced by a given words sequence.
Although a number of possible alignments increases exponentially with sequences’ lengths, there is an efficient algorithm to compute CTC loss based on dynamic programming principle (known as Forward-Backward algorithm). This algorithm operates with posterior probabilities of any output sequence element observation given the time frame and CTC loss is differentiable with respect to these probabilities.
Therefore, if an acoustic model is based on the neural network which estimates these posteriors, its training may be performed with a conventional error back-propagation gradient descent BIBREF6 . Training of ASR system based on such a model does not require an explicit alignment of input utterance to the elements of output sequence and thus may be performed in end-to-end fashion. It is also important that CTC loss accumulates the information about the whole output sequence, and hence its optimization is in some sense an alternative to the traditional fine-tuning of neural network acoustic models by means of sequence-discriminative criteria such as sMBR BIBREF7 etc. The implementation of CTC is conventionally based on RNN/LSTM networks, including bidirectional ones as acoustic models, since they are known to model long context effectively.
The important component of CTC is a special “blank” symbol which fills in gaps between meaningful elements of output sequence to equalize its length to the number of frames in the input sequence. It corresponds to a separate output neuron, and blank symbols are deleted from the recognized sequence to obtain the final result. In BIBREF8 a modification of CTC loss was proposed, referred as Auto SeGmentation criterion (ASG loss), which does not use blank symbols. Instead of using “blank”, a simple transition probability model for an output symbols is introduced. This leads to a significant simplification and speedup of computations. Moreover, the improved recognition results compared to basic CTC loss were obtained.
DeepSpeech BIBREF9 developed by Baidu Inc. was one of the first systems that demonstrated an effectiveness of CTC-based speech recognition in LVCSR tasks. Being trained on 2300 hours of English Conversational Telephone Speech data, it demonstrated state-of-the-art results on Hub5'00 evaluation set. Research in this direction continued and resulted in DeepSpeech2 architecture BIBREF10 , composed of both convolutional and recurrent layers. This system demonstrates improved accuracy of recognition of both English and Mandarin speech. Another successful example of applying CTC to LVCSR tasks is EESEN system BIBREF11 . It integrates an RNN-based model trained with CTC criterion to the conventional WFST-based decoder from the Kaldi toolkit BIBREF12 . The paper BIBREF13 shows that end-to-end systems may be successfully built from convolutional layers only instead of recurrent ones. It was demonstrated that using Gated Convolutional Units (GLU-CNNs) and training with ASG-loss leads to the state-of-the-art results on the LibriSpeech database (960 hours of training data).
Recently, a new modification of DeepSpeech2 architecture was proposed in BIBREF14 . Several lower convolutional layers were replaced with a deep residual network with depth-wise separable convolutions. This modification along with using strong regularization and data augmentation techniques leads to the results close to DeepSpeech2 in spite of significantly lower amount of data used for training. Indeed, one of the models was trained with only 80 hours of speech data (which were augmented with noisy and speed-perturbed versions of original data).
These results suggest that CTC can be successfully applied for the training of ASR systems for low-resource languages, in particular, for those included in Babel research program (the amount of training data for them is normally 40 to 80 hours of speech).
Currently, Babel corpus contains data for more than 20 languages, and for most of them quite good traditional ASR system were built BIBREF15 , BIBREF16 , BIBREF17 . In order to improve speech recognition accuracy for a given language, data from other languages is widely used as well. It can be used to train multilingual system via multitask learning or to obtain high-level multilingual representations, usually bottleneck features, extracted from a pre-trained multilingual network.
One of the first attempts to build ASR system for low-resource BABEL languages using CTC-based end-to-end training was made recently BIBREF18 . Despite the obtained results are somewhat worse compared to the state-of-the-art traditional systems, they still demonstrate that CTC-based approach is viable for building low-resource ASR systems. The aim of our work is to investigate some ways to improve the obtained results.
Basic setup
For all experiments we used conversational speech from IARPA Babel Turkish Language Pack (LDC2016S10). This corpus contains about 80 hours of transcribed speech for training and 10 hours for development. The dataset is rather small compared to widely used benchmarks for conversational speech: English Switchboard corpus (300 hours, LDC97S62) and Fisher dataset (2000 hours, LDC2004S13 and LDC2005S13).
As targets we use 32 symbols: 29 lowercase characters of Turkish alphabet BIBREF19 , apostrophe, space and special 〈blank〉 character that means “no output”. Thus we do not use any prior linguistic knowledge and also avoid OOV problem as the system can construct new words directly.
All models are trained with CTC-loss. Input features are 40 mel-scaled log filterbank enegries (FBanks) computed every 10 ms with 25 ms window, concatenated with deltas and delta-deltas (120 features in vector). We also tried to use spectrogram and experimented with different normalization techniques.
For decoding we used character-based beam search BIBREF20 with 3-gram language model build with SRILM package BIBREF21 finding sequence of characters INLINEFORM0 that maximizes the following objective BIBREF9 : INLINEFORM1
where INLINEFORM0 is language model weight and INLINEFORM1 is word insertion penalty.
For all experiments we used INLINEFORM0 , INLINEFORM1 , and performed decoding with beam width equal to 100 and 2000, which is not very large compared to 7000 and more active hypotheses used in traditional WFST decoders (e.g. many Kaldi recipes do decoding with INLINEFORM2 ).
To compare with other published results BIBREF18 , BIBREF22 we used Sclite BIBREF23 scoring package to measure results of decoding with beam width 2000, that takes into account incomplete words and spoken noise in reference texts and doesn't penalize model if it incorrectly recognize these pieces.
Also we report WER (word error rate) for simple argmax decoder (taking labels with maximum output on each time step and than applying CTC decoding rule – collapse repeated labels and remove “blanks”).
Experiments with architecture
We tried to explore the behavior of different neural network architectures in case when rather small data is available. We used multi-layer bidirectional LSTM networks, tried fully-convolutional architecture similar to Wav2Letter BIBREF8 and explored DeepSpeech-like architecture developed by Salesforce (DS-SF) BIBREF14 .
The convolutional model consists of 11 convolutional layers with batch normalization after each layer. The DeepSpeech-like architecture consists of 5-layers residual network with depth-wise separable convolutions followed by 4-layer bidirectional Gated Recurrent Unit (GRU) as described in BIBREF14 .
Our baseline bidirectional LSTM is 6-layers network with 320 hidden units per direction as in BIBREF18 . Also we tried to use bLSTM to label every second frame (20 ms) concatenating every first output from first layer with second and taking this as input for second model layer.
The performance of our baseline models is shown in Table TABREF6 .
Loss modification: segmenting during training
It is known that the CTC-loss is very unstable for long utterances BIBREF3 , and shorter utterances are more useful for this task. Some techniques were developed to help the model converge faster, e.g. SortaGrad BIBREF10 (using shorter segments at the beginning of training).
To compute the CTC-loss we use all possible alignments between the audio features and the reference text, but only some of the alignments make sense. Traditional DNN-HMM systems also use iterative training that finds the best alignment and then trains a neural network to approximate it. Therefore, we propose the following algorithm to use segmentation during training (a simplified sketch follows the list):
compute CTC-alignment (find the sequence of targets with minimal loss that can be mapped to real targets by collapsing repeated characters and removing blanks)
perform greedy decoding (argmax on each step)
find “well-recognized” words that are at least a minimum length (a hyperparameter): the segment should start and end with a space, and a word is “well-recognized” when the argmax decoding is equal to the computed alignment
if the word is “well-recognized”, divide the utterance into 5 segments: the left segment before the space, the left space, the word, the right space and the right segment
compute the CTC-loss for all these segments separately and do back-propagation as usual
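The following is a simplified, self-contained sketch of the “well-recognized” word search on frame-level labels; the label indices, the minimum-length rule and the exact matching condition are illustrative assumptions, not the authors' implementation.

```python
# Find frame spans of "well-recognized" words: the frame-level argmax labels
# must agree with the CTC alignment over the whole word, and the word must be
# followed by a space frame. Losses would then be computed per segment.
SPACE, BLANK = 1, 0  # assumed label indices

def well_recognized_spans(alignment, argmax_labels, min_frames=3):
    """alignment, argmax_labels: equal-length lists of per-frame label indices."""
    spans, start = [], None
    for t, lab in enumerate(alignment):
        if lab == SPACE:
            if start is not None and t - start >= min_frames:
                if all(alignment[f] == argmax_labels[f] for f in range(start, t)):
                    spans.append((start, t))        # (first frame, frame after last)
            start = None
        elif start is None and lab != BLANK:
            start = t
    return spans
```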
The results of training with this criterion are shown in Table TABREF13 . The proposed criterion does not lead to a consistent improvement when decoding with a large beam width (2000), but shows a significant improvement when decoding with a smaller beam (100). We plan to further explore utilizing alignment information during training.
Using different features
We explored different normalization techniques. FBanks with cepstral mean normalization (CMN) perform better than raw FBanks. We found variance normalization on top of mean normalization (CMVN) unnecessary for the task. Using deltas and delta-deltas improves the model, so we used them in the other experiments. Models trained with spectrogram features converge more slowly and to a worse minimum, but when CMN is used the difference compared to FBanks is not very big.
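Per-utterance CMN amounts to subtracting the time-axis mean of each feature dimension, as in the following one-line sketch.

```python
# Per-utterance cepstral mean normalization.
import torch

def cmn(features: torch.Tensor) -> torch.Tensor:
    """features: (num_dims, num_frames); returns mean-normalized features."""
    return features - features.mean(dim=1, keepdim=True)
```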
Varying model size and number of layers
Experiments with a varying number of hidden units in 6-layer bLSTM models are presented in Table TABREF17 . Models with 512 and 768 hidden units are worse than the one with 320, but the model with 1024 hidden units is significantly better than the others. We also observed that the model with 6 layers performs better than the other depths we tried.
Training the best model
To train our best model we chose the best network from our experiments (a 6-layer bLSTM with 1024 hidden units), trained it with the Adam optimizer and fine-tuned it with SGD with momentum using exponential learning rate decay. The best model, trained with speed and volume perturbation BIBREF24 , achieved 45.8% WER, which is the best published end-to-end result on the Babel Turkish dataset using in-domain data. For comparison, the WER of the model trained using in-domain data in BIBREF18 is 53.1%, and 48.7% when using 4 additional languages (including the English Switchboard dataset). It is also not far from the Kaldi DNN-HMM system BIBREF22 with 43.8% WER.
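A sketch of speed and volume perturbation with torchaudio's sox effects is shown below; the perturbation factors are common choices and are assumptions, not values reported here.

```python
# Speed and volume perturbation of a waveform via sox effects.
import torchaudio

def perturb(waveform, sample_rate, speed=1.1, volume=1.25):
    effects = [["speed", str(speed)], ["rate", str(sample_rate)], ["vol", str(volume)]]
    augmented, _ = torchaudio.sox_effects.apply_effects_tensor(
        waveform, sample_rate, effects)
    return augmented
```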
Conclusions and future work
In this paper we explored different end-to-end architectures in a low-resource ASR task using the Babel Turkish dataset. We considered different ways to improve performance and proposed a promising CTC-loss modification that uses segmentation during training. Our final system achieved 45.8% WER using in-domain data only, which is the best published result for Turkish end-to-end systems. Our work also shows that a well-tuned end-to-end system can achieve results very close to traditional DNN-HMM systems even for low-resource languages. In future work we plan to further investigate different loss modifications (Gram-CTC, ASG) and try to use RNN-Transducers and multi-task learning.
Acknowledgements
This work was financially supported by the Ministry of Education and Science of the Russian Federation, Contract 14.575.21.0132 (IDRFMEFI57517X0132). | 6-layer bLSTM with 1024 hidden units |
8ca31caa34cc5b65dc1d01d0d1f36bf8c4928805 | 8ca31caa34cc5b65dc1d01d0d1f36bf8c4928805_0 | Q: What kind of spontaneous speech is used?
Text: Introduction
Although the development of the first speech recognition systems began half a century ago, there has been a significant increase in the accuracy of ASR systems and in the number of their applications over the last ten years, even for low-resource languages BIBREF0 , BIBREF1 .
This is mainly due to the widespread application of deep learning and the very effective performance of neural networks in hybrid recognition systems (DNN-HMM). However, in the last few years there has been a trend towards changing the traditional ASR training paradigm. End-to-end training systems are gradually displacing the complex multistage learning process (which includes training GMMs BIBREF2 , clustering allophone states, aligning speech to clustered senones, training neural networks with a cross-entropy loss, followed by retraining with a sequence-discriminative criterion). The new approach implies training the system in one global step, working only with acoustic data and reference texts, and significantly simplifies or in some cases even completely eliminates the decoding process. It also avoids the problem of out-of-vocabulary words (OOV), because an end-to-end system trained with parts of words as targets can construct new words itself using graphemes or subword units, while traditional DNN-HMM systems are limited by the language model vocabulary.
The whole variety of end-to-end systems can be divided into 3 main categories: Connectionist Temporal Classification (CTC) BIBREF3 ; Sequence-to-sequence models with attention mechanism BIBREF4 ; RNN-Transducers BIBREF5 .
The Connectionist Temporal Classification (CTC) approach uses loss functions that utilize all possible alignments between the reference text and the audio data. Targets for CTC-based systems can be phonemes, graphemes, syllables and other subword units, and even whole words. However, a lot more data is usually required to train such systems well, compared to traditional hybrid systems.
Sequence-to-sequence models are used to map entire input sequences to output sequences without any assumptions about their alignment. The most popular architecture for sequence-to-sequence models is the encoder-decoder model with attention. The encoder and decoder are usually constructed using recurrent neural networks; the basic attention mechanism calculates energy weights that emphasize the importance of encoder vectors for decoding at the current step, and then sums all these vectors weighted by the energies. Encoder-decoder models with an attention mechanism show results close to traditional DNN-HMM systems and in some cases surpass them, but for a number of reasons their usage is still rather limited. First of all, such systems show their best results when the duration of real utterances is close to the duration of utterances from the training data; when the duration difference increases, the performance degrades significantly BIBREF4 .
Moreover, the entire utterance must be processed by the encoder before the decoder can start its work. This is the reason why it is hard to apply the approach to the recognition of long recordings or streaming audio. Segmenting long recordings into shorter utterances solves the duration issue, but leads to a context break, and eventually negatively affects recognition accuracy. Secondly, the computational complexity of encoder-decoder models is high because of the use of recurrent networks, so these models are rather slow and hard to parallelize.
The idea of the RNN-Transducer is an extension of CTC and provides the ability to model inner dependencies separately and jointly between elements of both the input (audio frames) and output (phonemes and other subword units) sequences. Despite their mathematical elegance, such systems are complicated and hard to implement, so they are still rarely used, although several impressive results have been obtained with this technique.
The CTC-based approach is easier to implement, scales better and has many “degrees of freedom”, which allows one to significantly improve baseline systems and achieve results close to the state of the art. Moreover, CTC-based systems are well compatible with traditional WFST decoders and can be easily integrated with conventional ASR systems.
Besides, as already mentioned, CTC systems are rather sensitive to the amount of training data, so it is very relevant to study how to build an effective CTC-based recognition system using a small amount of training samples. This is especially relevant for low-resource languages, where we have only a few dozen hours of speech. Building ASR systems for low-resource languages is one of the aims of the international Babel program, funded by the Intelligence Advanced Research Projects Activity (IARPA). Within the program extensive research was carried out, resulting in the creation of a number of modern ASR systems for low-resource languages. Recently, end-to-end approaches were applied to this task, showing expectedly worse results than traditional systems, although the difference is rather small.
In this paper we explore a number of ways to improve end-to-end CTC-based systems in low-resource scenarios using the Turkish language dataset from the IARPA Babel collection. In the next section we describe in more detail different versions of CTC systems and their application to low-resource speech recognition. Section 3 describes the experiments and their results. Section 4 summarizes the results and discusses possible ways for further work.
Related work
The development of CTC-based systems originates from the paper BIBREF3 where the CTC loss was introduced. This loss is the total probability of a label sequence given an observation sequence, which takes into account all possible alignments induced by a given word sequence.
Although the number of possible alignments grows exponentially with the sequence lengths, there is an efficient algorithm to compute the CTC loss based on the dynamic programming principle (known as the Forward-Backward algorithm). This algorithm operates on the posterior probabilities of observing each output sequence element at a given time frame, and the CTC loss is differentiable with respect to these probabilities.
Therefore, if an acoustic model is based on a neural network which estimates these posteriors, its training may be performed with conventional error back-propagation gradient descent BIBREF6 . Training an ASR system based on such a model does not require an explicit alignment of the input utterance to the elements of the output sequence and thus may be performed in an end-to-end fashion. It is also important that the CTC loss accumulates information about the whole output sequence, and hence its optimization is in some sense an alternative to the traditional fine-tuning of neural network acoustic models by means of sequence-discriminative criteria such as sMBR BIBREF7 . The implementation of CTC is conventionally based on RNN/LSTM networks, including bidirectional ones, as acoustic models, since they are known to model long context effectively.
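As a concrete illustration, a minimal sketch of CTC training with a modern toolkit's built-in loss is shown below; shapes and sizes are arbitrary examples, not values from any of the cited systems.

```python
# Minimal CTC training step with PyTorch's built-in loss.
import torch
import torch.nn.functional as F

T, B, C = 200, 4, 32                  # input frames, batch size, labels (incl. blank)
logits = torch.randn(T, B, C, requires_grad=True)   # stand-in for network outputs
log_probs = logits.log_softmax(-1)    # (T, B, C), as expected by ctc_loss
targets = torch.randint(1, C, (B, 30), dtype=torch.long)
input_lengths = torch.full((B,), T, dtype=torch.long)
target_lengths = torch.full((B,), 30, dtype=torch.long)

loss = F.ctc_loss(log_probs, targets, input_lengths, target_lengths, blank=0)
loss.backward()                       # gradients flow back through the network outputs
```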
An important component of CTC is a special “blank” symbol which fills the gaps between meaningful elements of the output sequence to equalize its length with the number of frames in the input sequence. It corresponds to a separate output neuron, and blank symbols are deleted from the recognized sequence to obtain the final result. In BIBREF8 a modification of the CTC loss was proposed, referred to as the Auto SeGmentation criterion (ASG loss), which does not use blank symbols. Instead of using “blank”, a simple transition probability model for the output symbols is introduced. This leads to a significant simplification and speedup of the computations. Moreover, improved recognition results compared to the basic CTC loss were obtained.
DeepSpeech BIBREF9 , developed by Baidu Inc., was one of the first systems that demonstrated the effectiveness of CTC-based speech recognition in LVCSR tasks. Being trained on 2300 hours of English Conversational Telephone Speech data, it demonstrated state-of-the-art results on the Hub5'00 evaluation set. Research in this direction continued and resulted in the DeepSpeech2 architecture BIBREF10 , composed of both convolutional and recurrent layers. This system demonstrates improved recognition accuracy for both English and Mandarin speech. Another successful example of applying CTC to LVCSR tasks is the EESEN system BIBREF11 . It integrates an RNN-based model trained with the CTC criterion into the conventional WFST-based decoder from the Kaldi toolkit BIBREF12 . The paper BIBREF13 shows that end-to-end systems may be successfully built from convolutional layers only instead of recurrent ones. It was demonstrated that using Gated Convolutional Units (GLU-CNNs) and training with the ASG loss leads to state-of-the-art results on the LibriSpeech database (960 hours of training data).
Recently, a new modification of the DeepSpeech2 architecture was proposed in BIBREF14 . Several lower convolutional layers were replaced with a deep residual network with depth-wise separable convolutions. This modification, along with strong regularization and data augmentation techniques, leads to results close to DeepSpeech2 despite the significantly lower amount of data used for training. Indeed, one of the models was trained with only 80 hours of speech data (which were augmented with noisy and speed-perturbed versions of the original data).
These results suggest that CTC can be successfully applied for the training of ASR systems for low-resource languages, in particular, for those included in Babel research program (the amount of training data for them is normally 40 to 80 hours of speech).
Currently, the Babel corpus contains data for more than 20 languages, and for most of them quite good traditional ASR systems have been built BIBREF15 , BIBREF16 , BIBREF17 . In order to improve speech recognition accuracy for a given language, data from other languages is widely used as well. It can be used to train a multilingual system via multitask learning or to obtain high-level multilingual representations, usually bottleneck features, extracted from a pre-trained multilingual network.
One of the first attempts to build an ASR system for low-resource BABEL languages using CTC-based end-to-end training was made recently BIBREF18 . Although the obtained results are somewhat worse than those of state-of-the-art traditional systems, they still demonstrate that the CTC-based approach is viable for building low-resource ASR systems. The aim of our work is to investigate some ways to improve the obtained results.
Basic setup
For all experiments we used conversational speech from IARPA Babel Turkish Language Pack (LDC2016S10). This corpus contains about 80 hours of transcribed speech for training and 10 hours for development. The dataset is rather small compared to widely used benchmarks for conversational speech: English Switchboard corpus (300 hours, LDC97S62) and Fisher dataset (2000 hours, LDC2004S13 and LDC2005S13).
As targets we use 32 symbols: the 29 lowercase characters of the Turkish alphabet BIBREF19 , the apostrophe, the space and a special 〈blank〉 character that means “no output”. Thus we do not use any prior linguistic knowledge and also avoid the OOV problem, as the system can construct new words directly.
All models are trained with the CTC-loss. Input features are 40 mel-scaled log filterbank energies (FBanks) computed every 10 ms with a 25 ms window, concatenated with deltas and delta-deltas (120 features per vector). We also tried to use spectrograms and experimented with different normalization techniques.
For decoding we used character-based beam search BIBREF20 with a 3-gram language model built with the SRILM package BIBREF21 , finding the sequence of characters $c$ that maximizes the following objective BIBREF9 : $$Q(c) = \log P_{\text{CTC}}(c \mid x) + \alpha \log P_{\text{LM}}(c) + \beta \, \text{word\_count}(c),$$
where $\alpha$ is the language model weight and $\beta$ is the word insertion penalty.
For all experiments we used fixed values of $\alpha$ and $\beta$, and performed decoding with beam widths of 100 and 2000, which is not very large compared to the 7000 or more active hypotheses used in traditional WFST decoders (e.g. the setting used in many Kaldi recipes).
To compare with other published results BIBREF18 , BIBREF22 we used the Sclite BIBREF23 scoring package to measure the results of decoding with beam width 2000; it takes into account incomplete words and spoken noise in the reference texts and does not penalize the model if it recognizes these pieces incorrectly.
We also report WER (word error rate) for a simple argmax decoder (taking the label with maximum output at each time step and then applying the CTC decoding rule – collapse repeated labels and remove “blanks”).
Experiments with architecture
We explored the behavior of different neural network architectures in the case when rather little data is available. We used multi-layer bidirectional LSTM networks, tried a fully-convolutional architecture similar to Wav2Letter BIBREF8 and explored the DeepSpeech-like architecture developed by Salesforce (DS-SF) BIBREF14 .
The convolutional model consists of 11 convolutional layers with batch normalization after each layer. The DeepSpeech-like architecture consists of a 5-layer residual network with depth-wise separable convolutions followed by a 4-layer bidirectional Gated Recurrent Unit (GRU), as described in BIBREF14 .
Our baseline bidirectional LSTM is a 6-layer network with 320 hidden units per direction, as in BIBREF18 . We also tried using the bLSTM to label every second frame (20 ms), concatenating each pair of consecutive outputs from the first layer and taking the result as the input to the second layer of the model.
The performance of our baseline models is shown in Table TABREF6 .
Loss modification: segmenting during training
It is known that the CTC-loss is very unstable for long utterances BIBREF3 , and shorter utterances are more useful for this task. Some techniques were developed to help the model converge faster, e.g. SortaGrad BIBREF10 (using shorter segments at the beginning of training).
To compute the CTC-loss we use all possible alignments between the audio features and the reference text, but only some of the alignments make sense. Traditional DNN-HMM systems also use iterative training that finds the best alignment and then trains a neural network to approximate it. Therefore, we propose the following algorithm to use segmentation during training:
compute CTC-alignment (find the sequence of targets with minimal loss that can be mapped to real targets by collapsing repeated characters and removing blanks)
perform greedy decoding (argmax on each step)
find “well-recognized” words that are at least a minimum length (a hyperparameter): the segment should start and end with a space, and a word is “well-recognized” when the argmax decoding is equal to the computed alignment
if the word is “well-recognized”, divide the utterance into 5 segments: the left segment before the space, the left space, the word, the right space and the right segment
compute the CTC-loss for all these segments separately and do back-propagation as usual
The results of training with this criterion are shown in Table TABREF13 . The proposed criterion does not lead to a consistent improvement when decoding with a large beam width (2000), but shows a significant improvement when decoding with a smaller beam (100). We plan to further explore utilizing alignment information during training.
Using different features
We explored different normalization techniques. FBanks with cepstral mean normalization (CMN) perform better than raw FBanks. We found variance normalization on top of mean normalization (CMVN) unnecessary for the task. Using deltas and delta-deltas improves the model, so we used them in the other experiments. Models trained with spectrogram features converge more slowly and to a worse minimum, but when CMN is used the difference compared to FBanks is not very big.
Varying model size and number of layers
Experiments with a varying number of hidden units in 6-layer bLSTM models are presented in Table TABREF17 . Models with 512 and 768 hidden units are worse than the one with 320, but the model with 1024 hidden units is significantly better than the others. We also observed that the model with 6 layers performs better than the other depths we tried.
Training the best model
To train our best model we chose the best network from our experiments (a 6-layer bLSTM with 1024 hidden units), trained it with the Adam optimizer and fine-tuned it with SGD with momentum using exponential learning rate decay. The best model, trained with speed and volume perturbation BIBREF24 , achieved 45.8% WER, which is the best published end-to-end result on the Babel Turkish dataset using in-domain data. For comparison, the WER of the model trained using in-domain data in BIBREF18 is 53.1%, and 48.7% when using 4 additional languages (including the English Switchboard dataset). It is also not far from the Kaldi DNN-HMM system BIBREF22 with 43.8% WER.
Conclusions and future work
In this paper we explored different end-to-end architectures in a low-resource ASR task using the Babel Turkish dataset. We considered different ways to improve performance and proposed a promising CTC-loss modification that uses segmentation during training. Our final system achieved 45.8% WER using in-domain data only, which is the best published result for Turkish end-to-end systems. Our work also shows that a well-tuned end-to-end system can achieve results very close to traditional DNN-HMM systems even for low-resource languages. In future work we plan to further investigate different loss modifications (Gram-CTC, ASG) and try to use RNN-Transducers and multi-task learning.
Acknowledgements
This work was financially supported by the Ministry of Education and Science of the Russian Federation, Contract 14.575.21.0132 (IDRFMEFI57517X0132). | Unanswerable |
9ab43f941c11a4b09a0e4aea61b4a5b4612e7933 | 9ab43f941c11a4b09a0e4aea61b4a5b4612e7933_0 | Q: What approach did previous models use for multi-span questions?
Text: Introduction
The task of reading comprehension, where systems must understand a single passage of text well enough to answer arbitrary questions about it, has seen significant progress in the last few years. With models reaching human performance on the popular SQuAD dataset BIBREF0, and with many of the most popular reading comprehension datasets having been solved BIBREF1, BIBREF2, a new dataset, DROP BIBREF3, was recently published.
DROP aimed to present questions that require more complex reasoning to answer than those of previous datasets, in the hope of pushing the field towards a more comprehensive analysis of paragraphs of text. In addition to questions whose answers are a single continuous span from the paragraph text (questions of a type already included in SQuAD), DROP introduced additional types of questions. Among these new types were questions that require simple numerical reasoning, i.e., questions whose answer is the result of a simple arithmetic expression containing numbers from the passage, and questions whose answers consist of several spans taken from the paragraph or the question itself, which we will denote as "multi-span questions".
Of all the existing models that tried to tackle DROP, only one model BIBREF4 directly targeted multi-span questions in a manner that wasn't just a by-product of the model's overall performance. In this paper, we propose a new method for tackling multi-span questions. Our method takes a different path from that of the aforementioned model. It does not try to generalize the existing approach for tackling single-span questions, but instead attempts to attack this issue with a new, tag-based, approach.
Related Work
Numerically-aware QANet (NAQANet) BIBREF3 was the model released with DROP. It uses QANET BIBREF5, at the time the best-performing published model on SQuAD 1.1 BIBREF0 (without data augmentation or pretraining), as the encoder. On top of QANET, NAQANet adds four different output layers, which we refer to as "heads". Each of these heads is designed to tackle a specific question type from DROP, where these types were identified by DROP's authors post-creation of the dataset. These four heads are (1) Passage span head, designed for producing answers that consist of a single span from the passage. This head deals with the type of questions already introduced in SQuAD. (2) Question span head, for answers that consist of a single span from the question. (3) Arithmetic head, for answers that require adding or subtracting numbers from the passage. (4) Count head, for answers that require counting and sorting entities from the text. In addition, to determine which head should be used to predict an answer, a 4-way categorical variable, as per the number of heads, is trained. We denote this categorical variable as the "head predictor".
Numerically-aware BERT (NABERT+) BIBREF6 introduced two main improvements over NAQANET. The first was to replace the QANET encoder with BERT. This change alone resulted in an absolute improvement of more than eight points in both EM and F1 metrics. The second improvement was to the arithmetic head, consisting of the addition of "standard numbers" and "templates". Standard numbers were predefined numbers which were added as additional inputs to the arithmetic head, regardless of their occurrence in the passage. Templates were an attempt to enrich the head's arithmetic capabilities, by adding the ability of doing simple multiplications and divisions between up to three numbers.
MTMSN BIBREF4 is the first, and only model so far, that specifically tried to tackle the multi-span questions of DROP. Their approach consisted of two parts. The first was to train a dedicated categorical variable to predict the number of spans to extract. The second was to generalize the single-span head method of extracting a span, by utilizing the non-maximum suppression (NMS) algorithm BIBREF7 to find the most probable set of non-overlapping spans. The number of spans to extract was determined by the aforementioned categorical variable.
Additionally, MTMSN introduced two new other, non span-related, components. The first was a new "negation" head, meant to deal with questions deemed as requiring logical negation (e.g. "How many percent were not German?"). The second was improving the arithmetic head by using beam search to re-rank candidate arithmetic expressions.
Model
Problem statement. Given a pair $(x^P,x^Q)$ of a passage and a question respectively, both comprised of tokens from a vocabulary $V$, we wish to predict an answer $y$. The answer could be either a collection of spans from the input, or a number, supposedly arrived at by performing arithmetic reasoning on the input. We want to estimate $p(y;x^P,x^Q)$.
The basic structure of our model is shared with NABERT+, which in turn is shared with that of NAQANET (the model initially released with DROP). Consequently, meticulously presenting every part of our model would very likely prove redundant. As a reasonable compromise, we will introduce the shared parts with more brevity, and will go into greater detail when presenting our contributions.
Model ::: NABERT+
Assume there are $K$ answer heads in the model, with their weights denoted by $\theta $. For each pair $(x^P,x^Q)$ we assume a latent categorical random variable $z\in \left\lbrace 1,\ldots \,K\right\rbrace $ such that the probability of an answer $y$ is
$$p\left(y ; x^{P},x^{Q},\theta \right) = \sum _{z=1}^{K} p\left(z ; x^{P},x^{Q},\theta \right) \, p\left(y \vert z ; x^{P},x^{Q},\theta \right),$$
where each component of the mixture corresponds to an output head such that
Note that a head is not always capable of producing the correct answer $y_\text{gold}$ for each type of question, in which case $p\left(y_\text{gold} \vert z ; x^{P},x^{Q},\theta \right)=0$. For example, the arithmetic head, whose output is always a single number, cannot possibly produce a correct answer for a multi-span question.
For a multi-span question with an answer composed of $l$ spans, denote $y_{{\text{gold}}_{\textit {MS}}}=\left\lbrace y_{{\text{gold}}_1}, \ldots , y_{{\text{gold}}_l} \right\rbrace $. NAQANET and NABERT+ had no head capable of outputting correct answers for multi-span questions. Instead of ignoring them in training, both models settled on using "semi-correct answers": each $y_\text{gold} \in y_{{\text{gold}}_{\textit {MS}}}$ was considered to be a correct answer (only in training). By deliberately encouraging the model to provide partial answers for multi-span questions, they were able to improve the corresponding F1 score. As our model does have a head with the ability to answer multi-span questions correctly, we didn't provide the aforementioned semi-correct answers to any of the other heads. Otherwise, we would have skewed the predictions of the head predictor and effectively mislead the other heads to believe they could predict correct answers for multi-span questions.
Model ::: NABERT+ ::: Heads Shared with NABERT+
Before going over the answer heads, two additional components should be introduced - the summary vectors, and the head predictor.
Summary vectors. The summary vectors are two fixed-size learned representations of the question and the passage, which serve as an input for some of the heads. To create the summary vectors, first define $\mathbf {T}$ as BERT's output on a $(x^{P},x^{Q})$ input. Then, let $\mathbf {T}^{P}$ and $\mathbf {T}^{Q}$ be subsequences of T that correspond to $x^P$ and $x^Q$ respectively. Finally, let us also define Bdim as the dimension of the tokens in $\mathbf {T}$ (e.g 768 for BERTbase), and have $\mathbf {W}^P \in \mathbb {R}^\texttt {Bdim}$ and $\mathbf {W}^Q \in \mathbb {R}^\texttt {Bdim}$ as learned linear layers. Then, the summary vectors are computed as:
Head predictor. A learned categorical variable with its number of outcomes equal to the number of answer heads in the model. Used to assign probabilities for using each of the heads in prediction.
where FFN is a two-layer feed-forward network with RELU activation.
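A hedged sketch of these two components is given below, assuming (as in earlier NAQANet-style models) that each summary vector is an attention-weighted sum of the corresponding token representations; the exact pooling is not spelled out above, so treat it as an assumption. Dimensions are for BERT-base and five answer heads.

```python
# Summary vectors and head predictor (illustrative reconstruction).
import torch
import torch.nn as nn

class SummaryAndHeadPredictor(nn.Module):
    def __init__(self, bdim=768, num_heads=5):
        super().__init__()
        self.w_p = nn.Linear(bdim, 1)          # W^P
        self.w_q = nn.Linear(bdim, 1)          # W^Q
        self.ffn = nn.Sequential(nn.Linear(2 * bdim, bdim), nn.ReLU(),
                                 nn.Linear(bdim, num_heads))

    def forward(self, t_p, t_q):               # (passage_len, bdim), (question_len, bdim)
        h_p = (torch.softmax(self.w_p(t_p), dim=0) * t_p).sum(dim=0)
        h_q = (torch.softmax(self.w_q(t_q), dim=0) * t_q).sum(dim=0)
        head_logits = self.ffn(torch.cat([h_p, h_q], dim=-1))
        return h_p, h_q, head_logits.log_softmax(dim=-1)
```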
Passage span. Define $\textbf {W}^S \in \mathbb {R}^\texttt {Bdim}$ and $\textbf {W}^E \in \mathbb {R}^\texttt {Bdim}$ as learned vectors. Then the probabilities of the start and end positions of a passage span are computed as
Question span. The probabilities of the start and end positions of a question span are computed as
where $\textbf {e}^{|\textbf {T}^Q|}\otimes \textbf {h}^P$ repeats $\textbf {h}^P$ for each component of $\textbf {T}^Q$.
Count. Counting is treated as a multi-class prediction problem with the numbers 0-9 as possible labels. The label probabilities are computed as
Arithmetic. As in NAQANET, this head obtains all of the numbers from the passage, and assigns a plus, minus or zero ("ignore") to each number. As BERT uses wordpiece tokenization, some numbers are broken up into multiple tokens. Following NABERT+, we chose to represent each number by its first wordpiece. That is, if $\textbf {N}^i$ is the set of tokens corresponding to the $i^\text{th}$ number, we define a number representation as $\textbf {h}_i^N = \textbf {N}^i_0$.
The selection of the sign for each number is a multi-class prediction problem with options $\lbrace 0, +, -\rbrace $, and the probabilities for the signs are given by
As for NABERT+'s two additional arithmetic features, we decided on using only the standard numbers, as the benefits from using templates were deemed inconclusive. Note that unlike the single-span heads, which are related to our introduction of a multi-span head, the arithmetic and count heads were not intended to play a significant role in our work. We didn't aim to improve results on these types of questions, perhaps only as a by-product of improving the general reading comprehension ability of our model.
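For illustration, the following small sketch shows how a predicted sign assignment is turned into a numeric answer; the rounding mirrors the precision fix discussed in the preprocessing section below.

```python
# Turn a sign assignment from the arithmetic head into a numeric answer.
def evaluate_expression(numbers, signs):
    """numbers: list of floats; signs: list with entries in {0, 1, -1}."""
    return round(sum(s * n for s, n in zip(signs, numbers)), 5)

# e.g. evaluate_expression([5.8, 6.6, 3.0], [1, 1, 0]) == 12.4
```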
Model ::: Multi-Span Head
A subset of questions that wasn't directly dealt with by the base models (NAQANET, NABERT+) is questions that have an answer which is composed of multiple non-continuous spans. We suggest a head that will be able to deal with both single-span and multi-span questions.
To model an answer which is a collection of spans, the multi-span head uses the $\mathtt {BIO}$ tagging format BIBREF8: $\mathtt {B}$ is used to mark the beginning of a span, $\mathtt {I}$ is used to mark the inside of a span and $\mathtt {O}$ is used to mark tokens not included in a span. In this way, we get a sequence of chunks that can be decoded to a final answer - a collection of spans.
As words are broken up by the wordpiece tokenization for BERT, we decided on only considering the representation of the first sub-token of the word to tag, following the NER task from BIBREF2.
For the $i$-th token of an input, the probability to be assigned a $\text{tag} \in \left\lbrace {\mathtt {B},\mathtt {I},\mathtt {O}} \right\rbrace $ is computed as
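A minimal sketch of such a tagging head is shown below; the single linear classifier over BERT's token representations and the tag-index order are assumptions made for illustration.

```python
# Multi-span head: independent 3-way (B/I/O) classification per token.
import torch
import torch.nn as nn

class MultiSpanHead(nn.Module):
    def __init__(self, bdim=768, num_tags=3):      # tags: O=0, B=1, I=2 (assumed order)
        super().__init__()
        self.classifier = nn.Linear(bdim, num_tags)

    def forward(self, token_reprs):                 # (seq_len, bdim)
        return self.classifier(token_reprs).log_softmax(dim=-1)   # (seq_len, 3)
```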
Model ::: Objective and Training
To train our model, we try to maximize the log-likelihood of the correct answer $p(y_\text{gold};x^{P},x^{Q},\theta )$ as defined in Section SECREF2. If no head is capable of predicting the gold answer, the sample is skipped.
We enumerate over every answer head $z\in \left\lbrace \textit {PS}, \textit {QS}, \textit {C}, \textit {A}, \textit {MS}\right\rbrace $ (Passage Span, Question Span, Count, Arithmetic, Multi-Span) to compute each of the objective's addends:
Note that we are in a weakly supervised setup: the answer type is not given, and neither is the correct arithmetic expression required for deriving some answers. Therefore, it is possible that $y_\text{gold}$ could be derived by more than one way, even from the same head, with no indication of which is the "correct" one.
We use the weakly supervised training method used in NABERT+ and NAQANET. Based on BIBREF9, for each head we find all the executions that evaluate to the correct answer and maximize their marginal likelihood .
For a datapoint $\left(y, x^{P}, x^{Q} \right)$ let $\chi ^z$ be the set of all possible ways to get $y$ for answer head $z\in \left\lbrace \textit {PS}, \textit {QS}, \textit {C}, \textit {A}, \textit {MS}\right\rbrace $. Then, as in NABERT+, we have
Finally, for the arithmetic head, let $\mu $ be the set of all the standard numbers and the numbers from the passage, and let $\mathbf {\chi }^{\textit {A}}$ be the set of correct sign assignments to these numbers. Then, we have
Model ::: Objective and Training ::: Multi-Span Head Training Objective
Denote by ${\chi }^{\textit {MS}}$ the set of correct tag sequences. If the concatenation of a question and a passage is $m$ tokens long, then denote a correct tag sequence as $\left(\text{tag}_1,\ldots ,\text{tag}_m\right)$.
We approximate the likelihood of a tag sequence by assuming independence between the sequence's positions, and multiplying the likelihoods of all the correct tags in the sequence. Then, we have
Model ::: Objective and Training ::: Multi-Span Head Correct Tag Sequences
Since a given multi-span answer is a collection of spans, it is required to obtain its matching tag sequences in order to compute the training objective.
In what we consider to be a correct tag sequence, each answer span will be marked at least once. Due to the weakly supervised setup, we consider all the question/passage spans that match the answer spans as being correct. To illustrate, consider the following simple example. Given the text "X Y Z Z" and the correct multi-span answer ["Y", "Z"], there are three correct tag sequences: $\mathtt {O\,B\,B\,B}$,$\quad $ $\mathtt {O\,B\,B\,O}$,$\quad $ $\mathtt {O\,B\,O\,B}$.
Model ::: Objective and Training ::: Dealing with too Many Correct Tag Sequences
The number of correct tag sequences can be expressed by
$$\prod _{i=1}^{s} \left(2^{\#_i} - 1\right),$$
where $s$ is the number of spans in the answer and $\#_i$ is the number of times the $i^\text{th}$ span appears in the text.
For questions with a reasonable amount of correct tag sequences, we generate all of them before the training starts. However, there is a small group of questions for which the amount of such sequences is between 10,000 and 100,000,000 - too many to generate and train on. In such cases, inspired by BIBREF9, instead of just using an arbitrary subset of the correct sequences, we use beam search to generate the top-k predictions of the training model, and then filter out the incorrect sequences. Compared to using an arbitrary subset, using these sequences causes the optimization to be done with respect to answers more compatible with the model. If no correct tag sequences were predicted within the top-k, we use the tag sequence that has all of the answer spans marked.
Model ::: Tag Sequence Prediction with the Multi-Span Head
Based on the outputs $\textbf {p}_{i}^{{\text{tag}}_{i}}$ we would like to predict the most likely sequence given the $\mathtt {BIO}$ constraints. Denote $\textit {validSeqs}$ as the set of all $\mathtt {BIO}$ sequences of length $m$ that are valid according to the rules specified in Section SECREF5. The $\mathtt {BIO}$ tag sequence to predict is then
We considered the following approaches:
Model ::: Tag Sequence Prediction with the Multi-Span Head ::: Viterbi Decoding
A natural candidate for getting the most likely sequence is Viterbi decoding, BIBREF10 with transition probabilities learned by a $\mathtt {BIO}$ constrained Conditional Random Field (CRF) BIBREF11. However, further inspection of our sequence's properties reveals that such a computational effort is probably not necessary, as explained in following paragraphs.
Model ::: Tag Sequence Prediction with the Multi-Span Head ::: Beam Search
Due to our use of $\mathtt {BIO}$ tags and their constraints, observe that past tag predictions only affect future tag predictions from the last $\mathtt {B}$ prediction and as long as the best tag to predict is $\mathtt {I}$. Considering the frequency and length of the correct spans in the question and the passage, effectively there's no effect of past sequence's positions on future ones, other than a very few positions ahead. Together with the fact that at each prediction step there are no more than 3 tags to consider, it means using beam search to get the most likely sequence is very reasonable and even allows near-optimal results with small beam width values.
Model ::: Tag Sequence Prediction with the Multi-Span Head ::: Greedy Tagging
Notice that greedy tagging does not enforce the $\mathtt {BIO}$ constraints. However, since the multi-span head's training objective adheres to the $\mathtt {BIO}$ constraints via being given the correct tag sequences, we can expect that even with greedy tagging the predictions will mostly adhere to these constraints as well. In case there are violations, their amendment is required post-prediction. Albeit faster, greedy tagging resulted in a small performance hit, as seen in Table TABREF26.
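A sketch of decoding a (possibly constraint-violating) tag sequence into answer spans is given below; treating a stray I as a B is one possible amendment and is an assumption, since the exact repair rule is not specified above.

```python
# Convert a B/I/O tag sequence into answer spans, repairing stray I tags.
def tags_to_spans(tokens, tags):
    spans, current = [], []
    for token, tag in zip(tokens, tags):
        if tag == "B" or (tag == "I" and not current):   # stray I -> start a new span
            if current:
                spans.append(" ".join(current))
            current = [token]
        elif tag == "I":
            current.append(token)
        else:                                             # "O"
            if current:
                spans.append(" ".join(current))
            current = []
    if current:
        spans.append(" ".join(current))
    return spans
```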
Preprocessing
We tokenize the passage, question, and all answer texts using the BERT uncased wordpiece tokenizer from huggingface. The tokenization resulting from each $(x^P,x^Q)$ input pair is truncated at 512 tokens so it can be fed to BERT as an input. However, before tokenizing the dataset texts, we perform additional preprocessing as listed below.
Preprocessing ::: Simple Preprocessing ::: Improved Textual Parsing
The raw dataset included almost a thousand HTML entities that did not get parsed properly, e.g. an unparsed non-breaking space entity appearing instead of a simple space. In addition, we fixed some quirks that were introduced by the original Wikipedia parsing method. For example, when encountering a reference to an external source that included a specific page from that reference, the original parser ended up introducing a redundant ":<PAGE NUMBER>" into the parsed text.
Preprocessing ::: Simple Preprocessing ::: Improved Handling of Numbers
Although we previously stated that we aren't focusing on improving arithmetic performance, while analyzing the training process we encountered two arithmetic-related issues that could be resolved rather quickly: a precision issue and a number extraction issue. Regarding precision, we noticed that while either generating expressions for the arithmetic head, or using the arithmetic head to predict a numeric answer, the value resulting from an arithmetic operation would not always yield the exact result due to floating point precision limitations. For example, $5.8 + 6.6 = 12.3999...$ instead of $12.4$. This issue has caused a significant performance hit of about 1.5 points for both F1 and EM and was fixed by simply rounding numbers to 5 decimal places, assuming that no answer requires a greater precision. Regarding number extraction, we noticed that some numeric entities, required in order to produce a correct answer, weren't being extracted from the passage. Examples include ordinals (121st, 189th) and some "per-" units (1,580.7/km2, 1050.95/month).
Preprocessing ::: Using NER for Cleaning Up Multi-Span Questions
The training dataset contains multi-span questions with answers that are clearly incorrect, with examples shown in Table TABREF22. In order to mitigate this, we applied an answer-cleaning technique using a pretrained Named Entity Recognition (NER) model BIBREF12 in the following manner: (1) Pre-define question prefixes whose answer spans are expected to contain only a specific entity type and filter the matching questions. (2) For a given answer of a filtered question, remove any span that does not contain at least one token of the expected type, where the types are determined by applying the NER model on the passage. For example, if a question starts with "who scored", we expect that any valid span will include a person entity ($\mathtt {PER}$). By applying such rules, we discovered that at least 3% of the multi-span questions in the training dataset included incorrect spans. As our analysis of prefixes wasn't exhaustive, we believe that this method could yield further gains. Table TABREF22 shows a few of our cleaning method results, where we perfectly clean the first two questions, and partially clean a third question.
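A hedged sketch of this cleaning rule follows, using spaCy purely as an example NER backend and a tiny illustrative prefix table; neither is the authors' actual setup.

```python
# NER-based cleaning: drop answer spans lacking a token of the expected entity type.
import spacy

nlp = spacy.load("en_core_web_sm")
EXPECTED_TYPE = {"who scored": "PERSON", "who threw": "PERSON"}   # assumed subset

def clean_answer(question, passage, answer_spans):
    prefix = next((p for p in EXPECTED_TYPE if question.lower().startswith(p)), None)
    if prefix is None:
        return answer_spans                       # no rule defined for this question
    wanted = EXPECTED_TYPE[prefix]
    entity_tokens = {tok.text for ent in nlp(passage).ents
                     if ent.label_ == wanted for tok in ent}
    # keep a span only if it contains at least one token of the expected type
    return [span for span in answer_spans
            if any(tok in entity_tokens for tok in span.split())]
```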
Training
The starting point for our implementation was the NABERT+ model, which in turn was based on allenai's NAQANET. Our implementation can be found on GitHub. All three models utilize the allennlp framework. The pretrained BERT models were supplied by huggingface. For our base model we used bert-base-uncased. For our large models we used the standard bert-large-uncased-whole-word-masking and the squad fine-tuned bert-large-uncased- whole-word-masking-finetuned-squad.
Due to limited computational resources, we did not perform any hyperparameter searching. We preferred to focus our efforts on the ablation studies, in the hope of gaining further insight into the effect of the components that we ourselves introduced. For ease of performance comparison, we followed NABERT+'s training settings: we used the BERT Adam optimizer from huggingface with default settings and a learning rate of $1e^{-5}$. The only difference was that we used a batch size of 12. We trained our base model for 20 epochs. For the large models we used a batch size of 3 with a learning rate of $5e^{-6}$ and trained for 5 epochs, except for the model without the single-span heads, which was trained with a batch size of 2 for 7 epochs. F1 was used as our validation metric. All models were trained on a single GPU with 12-16GB of memory.
Results and Discussion ::: Performance on DROP's Development Set
Table TABREF24 shows the results on DROP's development set. Compared to our base models, our large models exhibit a substantial improvement across all metrics.
Results and Discussion ::: Performance on DROP's Development Set ::: Comparison to the NABERT+ Baseline
We can see that our base model surpasses the NABERT+ baseline in every metric. The major improvement in multi-span performance was expected, as our multi-span head was introduced specifically to tackle this type of questions. For the other types, most of the improvement came from better preprocessing. A more detailed discussion can be found in Section SECREF36.
Results and Discussion ::: Performance on DROP's Development Set ::: Comparison to MTMSN
Notice that different BERTlarge models were used, so the comparison is less direct. Overall, our large models exhibit similar results to those of MTMSNlarge.
For multi-span questions we achieve a significantly better performance. While a breakdown of metrics was only available for MTMSNlarge, notice that even when comparing these metrics to our base model, we still achieve a 12.2 absolute improvement in EM, and a 2.3 improvement in F1. All that, while keeping in mind we compare a base model to a large model (for reference, note the 8 point improvement between MTMSNbase and MTMSNlarge in both EM and F1). Our best model, large-squad, exhibits a huge improvement of 29.7 in EM and 15.1 in F1 compared to MTMSNlarge.
When comparing single-span performance, our best model exhibits slightly better results, but it should be noted that it retains the single-span heads from NABERT+, while in MTMSN they have one head to predict both single-span and multi-span answers. For a fairer comparison, we trained our model with the single-span heads removed, where our multi-span head remained the only head aimed for handling span questions. With this no-single-span-heads setting, while our multi-span performance even improved a bit, our single-span performance suffered a slight drop, ending up trailing by 0.8 in EM and 0.6 in F1 compared to MTMSN. Therefore, it could prove beneficial to try and analyze the reasons behind each model's (ours and MTMSN) relative advantages, and perhaps try to combine them into a more holistic approach of tackling span questions.
Results and Discussion ::: Performance on DROP's Test Set
Table TABREF25 shows the results on DROP's test set, with our model being the best overall as of the time of writing, and not just on multi-span questions.
Results and Discussion ::: Ablation Studies
In order to analyze the effect of each of our changes, we conduct ablation studies on the development set, depicted in Table TABREF26.
Not using the simple preprocessing from Section SECREF17 resulted in a 2.5 point decrease in both EM and F1. The numeric questions were the most affected, with their performance dropping by 3.5 points. Given that number questions make up about 61% of the dataset, we can deduce that our improved number handling is responsible for about a 2.1 point gain, while the rest could be attributed to the improved Wikipedia parsing.
Although NER span cleaning (Section SECREF23) affected only 3% of the multi-span questions, it provided a solid improvement of 5.4 EM in multi-span questions and 1.5 EM in single-span questions. The single-span improvement is probably due to the combination of better multi-span head learning as a result of fixing multi-span questions and the fact that the multi-span head can answer single-span questions as well.
Not using the single-span heads results in a slight drop in multi-span performance, and a noticeable drop in single-span performance. However when performing the same comparison between our large models (see Table TABREF24), this performance gap becomes significantly smaller.
As expected, not using the multi-span head causes the multi-span performance to plummet. Note that for this ablation test the single-span heads were permitted to train on multi-span questions.
Compared to using greedy decoding in the prediction of multi-span questions, using beam search results in a small improvement. We used a beam width of 5, and didn't perform extensive tuning of the beam width.
Conclusion
In this work, we introduced a new approach for tackling multi-span questions in reading comprehension datasets. This approach is based on individually tagging each token with a categorical tag, relying on the tokens' contextual representation to bridge the information gap resulting from the tokens being tagged individually.
First, we show that integrating this new approach into an existing model, NABERT+, does not hinder performance on other question types, while substantially improving the results on multi-span questions. Later, we compare our results to the current state-of-the-art on multi-span questions. We show that our model has a clear advantage in handling multi-span questions, with a 29.7 absolute improvement in EM, and a 15.1 absolute improvement in F1. Furthermore, we show that our model slightly eclipses the current state-of-the-art results on the entire DROP dataset. Finally, we present some ablation studies, analyzing the benefit gained from individual components of our model.
We believe that combining our tag-based approach for handling multi-span questions with current successful techniques for handling single-span questions could prove beneficial in finding better, more holistic ways, of tackling span questions in general.
Future Work ::: A Different Loss for Multi-span Questions
Currently, for each individual span, we optimize the average likelihood over all its possible tag sequences (see Section SECREF9). A different approach could be not taking each possible tag sequence into account but only the most likely one. This could provide the model more flexibility during training and the ability to focus on the more "correct" tag sequences.
Future Work ::: Explore Utilization of Non-First Wordpiece Sub-Tokens
As mentioned in Section SECREF5, we only considered the representation of the first wordpiece sub-token in our model. It would be interesting to see how different approaches to utilize the other sub-tokens' representations in the tagging task affect the results. | Only MTMSM specifically tried to tackle the multi-span questions. Their approach consisted of two parts: first train a dedicated categorical variable to predict the number of spans to extract and the second was to generalize the single-span head method of extracting a span |
5a02a3dd26485a4e4a77411b50b902d2bda3731b | 5a02a3dd26485a4e4a77411b50b902d2bda3731b_0 | Q: How they use sequence tagging to answer multi-span questions?
Text: Introduction
The task of reading comprehension, where systems must understand a single passage of text well enough to answer arbitrary questions about it, has seen significant progress in the last few years. With models reaching human performance on the popular SQuAD dataset BIBREF0, and with many of the most popular reading comprehension datasets having been solved BIBREF1, BIBREF2, a new dataset, DROP BIBREF3, was recently published.
DROP aimed to present questions that require more complex reasoning to answer than those of previous datasets, in the hope of pushing the field towards a more comprehensive analysis of paragraphs of text. In addition to questions whose answers are a single continuous span from the paragraph text (questions of a type already included in SQuAD), DROP introduced additional types of questions. Among these new types were questions that require simple numerical reasoning, i.e., questions whose answer is the result of a simple arithmetic expression containing numbers from the passage, and questions whose answers consist of several spans taken from the paragraph or the question itself, which we will denote as "multi-span questions".
Of all the existing models that tried to tackle DROP, only one model BIBREF4 directly targeted multi-span questions in a manner that wasn't just a by-product of the model's overall performance. In this paper, we propose a new method for tackling multi-span questions. Our method takes a different path from that of the aforementioned model. It does not try to generalize the existing approach for tackling single-span questions, but instead attempts to attack this issue with a new, tag-based, approach.
Related Work
Numerically-aware QANet (NAQANet) BIBREF3 was the model released with DROP. It uses QANET BIBREF5, at the time the best-performing published model on SQuAD 1.1 BIBREF0 (without data augmentation or pretraining), as the encoder. On top of QANET, NAQANet adds four different output layers, which we refer to as "heads". Each of these heads is designed to tackle a specific question type from DROP, where these types were identified by DROP's authors post-creation of the dataset. These four heads are (1) Passage span head, designed for producing answers that consist of a single span from the passage. This head deals with the type of questions already introduced in SQuAD. (2) Question span head, for answers that consist of a single span from the question. (3) Arithmetic head, for answers that require adding or subtracting numbers from the passage. (4) Count head, for answers that require counting and sorting entities from the text. In addition, to determine which head should be used to predict an answer, a 4-way categorical variable, as per the number of heads, is trained. We denote this categorical variable as the "head predictor".
Numerically-aware BERT (NABERT+) BIBREF6 introduced two main improvements over NAQANET. The first was to replace the QANET encoder with BERT. This change alone resulted in an absolute improvement of more than eight points in both EM and F1 metrics. The second improvement was to the arithmetic head, consisting of the addition of "standard numbers" and "templates". Standard numbers were predefined numbers which were added as additional inputs to the arithmetic head, regardless of their occurrence in the passage. Templates were an attempt to enrich the head's arithmetic capabilities, by adding the ability of doing simple multiplications and divisions between up to three numbers.
MTMSN BIBREF4 is the first, and only model so far, that specifically tried to tackle the multi-span questions of DROP. Their approach consisted of two parts. The first was to train a dedicated categorical variable to predict the number of spans to extract. The second was to generalize the single-span head method of extracting a span, by utilizing the non-maximum suppression (NMS) algorithm BIBREF7 to find the most probable set of non-overlapping spans. The number of spans to extract was determined by the aforementioned categorical variable.
Additionally, MTMSN introduced two new other, non span-related, components. The first was a new "negation" head, meant to deal with questions deemed as requiring logical negation (e.g. "How many percent were not German?"). The second was improving the arithmetic head by using beam search to re-rank candidate arithmetic expressions.
Model
Problem statement. Given a pair $(x^P,x^Q)$ of a passage and a question respectively, both comprised of tokens from a vocabulary $V$, we wish to predict an answer $y$. The answer could be either a collection of spans from the input, or a number, supposedly arrived at by performing arithmetic reasoning on the input. We want to estimate $p(y;x^P,x^Q)$.
The basic structure of our model is shared with NABERT+, which in turn is shared with that of NAQANET (the model initially released with DROP). Consequently, meticulously presenting every part of our model would very likely prove redundant. As a reasonable compromise, we will introduce the shared parts with more brevity, and will go into greater detail when presenting our contributions.
Model ::: NABERT+
Assume there are $K$ answer heads in the model, with their weights denoted by $\theta $. For each pair $(x^P,x^Q)$ we assume a latent categorical random variable $z\in \left\lbrace 1,\ldots \,K\right\rbrace $ such that the probability of an answer $y$ is
$$p\left(y ; x^{P},x^{Q},\theta \right) = \sum _{z=1}^{K} p\left(z ; x^{P},x^{Q},\theta \right) \, p\left(y \vert z ; x^{P},x^{Q},\theta \right),$$
where each component of the mixture corresponds to an output head such that
Note that a head is not always capable of producing the correct answer $y_\text{gold}$ for each type of question, in which case $p\left(y_\text{gold} \vert z ; x^{P},x^{Q},\theta \right)=0$. For example, the arithmetic head, whose output is always a single number, cannot possibly produce a correct answer for a multi-span question.
For a multi-span question with an answer composed of $l$ spans, denote $y_{{\text{gold}}_{\textit {MS}}}=\left\lbrace y_{{\text{gold}}_1}, \ldots , y_{{\text{gold}}_l} \right\rbrace $. NAQANET and NABERT+ had no head capable of outputting correct answers for multi-span questions. Instead of ignoring them in training, both models settled on using "semi-correct answers": each $y_\text{gold} \in y_{{\text{gold}}_{\textit {MS}}}$ was considered to be a correct answer (only in training). By deliberately encouraging the model to provide partial answers for multi-span questions, they were able to improve the corresponding F1 score. As our model does have a head with the ability to answer multi-span questions correctly, we didn't provide the aforementioned semi-correct answers to any of the other heads. Otherwise, we would have skewed the predictions of the head predictor and effectively mislead the other heads to believe they could predict correct answers for multi-span questions.
Model ::: NABERT+ ::: Heads Shared with NABERT+
Before going over the answer heads, two additional components should be introduced - the summary vectors, and the head predictor.
Summary vectors. The summary vectors are two fixed-size learned representations of the question and the passage, which serve as an input for some of the heads. To create the summary vectors, first define $\mathbf {T}$ as BERT's output on a $(x^{P},x^{Q})$ input. Then, let $\mathbf {T}^{P}$ and $\mathbf {T}^{Q}$ be subsequences of T that correspond to $x^P$ and $x^Q$ respectively. Finally, let us also define Bdim as the dimension of the tokens in $\mathbf {T}$ (e.g 768 for BERTbase), and have $\mathbf {W}^P \in \mathbb {R}^\texttt {Bdim}$ and $\mathbf {W}^Q \in \mathbb {R}^\texttt {Bdim}$ as learned linear layers. Then, the summary vectors are computed as:
Head predictor. A learned categorical variable with its number of outcomes equal to the number of answer heads in the model. Used to assign probabilities for using each of the heads in prediction.
where FFN is a two-layer feed-forward network with ReLU activation.
Passage span. Define $\textbf {W}^S \in \mathbb {R}^\texttt {Bdim}$ and $\textbf {W}^E \in \mathbb {R}^\texttt {Bdim}$ as learned vectors. Then the probabilities of the start and end positions of a passage span are computed as
Question span. The probabilities of the start and end positions of a question span are computed as
where $\textbf {e}^{|\textbf {T}^Q|}\otimes \textbf {h}^P$ repeats $\textbf {h}^P$ for each component of $\textbf {T}^Q$.
Count. Counting is treated as a multi-class prediction problem with the numbers 0-9 as possible labels. The label probabilities are computed as
Arithmetic. As in NAQANET, this head obtains all of the numbers from the passage and assigns a plus, minus, or zero ("ignore") to each number. As BERT uses wordpiece tokenization, some numbers are broken up into multiple tokens. Following NABERT+, we chose to represent each number by its first wordpiece. That is, if $\textbf {N}^i$ is the set of tokens corresponding to the $i^\text{th}$ number, we define a number representation as $\textbf {h}_i^N = \textbf {N}^i_0$.
The selection of the sign for each number is a multi-class prediction problem with options $\lbrace 0, +, -\rbrace $, and the probabilities for the signs are given by
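The probability expression is again omitted here, but the mechanics of turning sign predictions into a numeric answer can be illustrated with a short sketch; the per-number argmax decoding below is an assumption for illustration, not necessarily the exact decoding used by NABERT+:

```python
def arithmetic_answer(numbers, sign_logits):
    """numbers:     list of floats extracted from the passage.
    sign_logits: list of [score_zero, score_plus, score_minus] per number.
    Returns the value of the implied arithmetic expression."""
    signs = [0.0, 1.0, -1.0]
    total = 0.0
    for value, logits in zip(numbers, sign_logits):
        best = max(range(3), key=lambda k: logits[k])  # argmax over {0, +, -}
        total += signs[best] * value
    return round(total, 5)  # rounding mirrors the precision fix described later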
As for NABERT+'s two additional arithmetic features, we decided on using only the standard numbers, as the benefits from using templates were deemed inconclusive. Note that unlike the single-span heads, which are related to our introduction of a multi-span head, the arithmetic and count heads were not intended to play a significant role in our work. We didn't aim to improve results on these types of questions, perhaps only as a by-product of improving the general reading comprehension ability of our model.
Model ::: Multi-Span Head
One subset of questions not directly dealt with by the base models (NAQANET, NABERT+) is questions whose answer is composed of multiple non-contiguous spans. We suggest a head that is able to deal with both single-span and multi-span questions.
To model an answer which is a collection of spans, the multi-span head uses the $\mathtt {BIO}$ tagging format BIBREF8: $\mathtt {B}$ is used to mark the beginning of a span, $\mathtt {I}$ is used to mark the inside of a span and $\mathtt {O}$ is used to mark tokens not included in a span. In this way, we get a sequence of chunks that can be decoded to a final answer - a collection of spans.
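As a minimal sketch of how such a $\mathtt {BIO}$ sequence can be decoded back into a collection of spans (operating on plain tokens for clarity; the actual model works over wordpieces):

```python
def decode_bio(tokens, tags):
    """Turn a BIO tag sequence into a list of text spans.

    tokens: list of tokens.
    tags:   list of tags from {"B", "I", "O"}, same length as tokens.
    """
    spans, current = [], []
    for token, tag in zip(tokens, tags):
        if tag == "B":
            if current:
                spans.append(" ".join(current))
            current = [token]
        elif tag == "I" and current:
            current.append(token)
        else:  # "O" (an "I" with no open span is treated the same way)
            if current:
                spans.append(" ".join(current))
            current = []
    if current:
        spans.append(" ".join(current))
    return spans

# decode_bio(["X", "Y", "Z", "Z"], ["O", "B", "B", "O"]) -> ["Y", "Z"]
```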
As words are broken up by the wordpiece tokenization for BERT, we decided on only considering the representation of the first sub-token of the word to tag, following the NER task from BIBREF2.
For the $i$-th token of an input, the probability to be assigned a $\text{tag} \in \left\lbrace {\mathtt {B},\mathtt {I},\mathtt {O}} \right\rbrace $ is computed as
Model ::: Objective and Training
To train our model, we try to maximize the log-likelihood of the correct answer $p(y_\text{gold};x^{P},x^{Q},\theta )$ as defined in Section SECREF2. If no head is capable of predicting the gold answer, the sample is skipped.
We enumerate over every answer head $z\in \left\lbrace \textit {PS}, \textit {QS}, \textit {C}, \textit {A}, \textit {MS}\right\rbrace $ (Passage Span, Question Span, Count, Arithmetic, Multi-Span) to compute each of the objective's addends:
Note that we are in a weakly supervised setup: the answer type is not given, and neither is the correct arithmetic expression required for deriving some answers. Therefore, it is possible that $y_\text{gold}$ could be derived by more than one way, even from the same head, with no indication of which is the "correct" one.
We use the weakly supervised training method used in NABERT+ and NAQANET. Based on BIBREF9, for each head we find all the executions that evaluate to the correct answer and maximize their marginal likelihood .
For a datapoint $\left(y, x^{P}, x^{Q} \right)$ let $\chi ^z$ be the set of all possible ways to get $y$ for answer head $z\in \left\lbrace \textit {PS}, \textit {QS}, \textit {C}, \textit {A}, \textit {MS}\right\rbrace $. Then, as in NABERT+, we have
Finally, for the arithmetic head, let $\mu $ be the set of all the standard numbers and the numbers from the passage, and let $\mathbf {\chi }^{\textit {A}}$ be the set of correct sign assignments to these numbers. Then, we have
Model ::: Objective and Training ::: Multi-Span Head Training Objective
Denote by ${\chi }^{\textit {MS}}$ the set of correct tag sequences. If the concatenation of a question and a passage is $m$ tokens long, then denote a correct tag sequence as $\left(\text{tag}_1,\ldots ,\text{tag}_m\right)$.
We approximate the likelihood of a tag sequence by assuming independence between the sequence's positions, and multiplying the likelihoods of all the correct tags in the sequence. Then, we have
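The resulting expression is not reproduced in this text; a plausible reconstruction, using the per-position tag probabilities $\mathbf {p}_{i}^{\text{tag}_i}$ defined later and the marginalization over ${\chi }^{\textit {MS}}$ from the previous subsection, is:

```latex
p\left(y_\text{gold} \vert z=\textit{MS} ; x^{P},x^{Q},\theta \right)
  \approx \sum_{\left(\text{tag}_1,\ldots ,\text{tag}_m\right) \in \chi ^{\textit {MS}}}
          \prod_{i=1}^{m} \mathbf {p}_{i}^{\text{tag}_i}
```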
Model ::: Objective and Training ::: Multi-Span Head Correct Tag Sequences
Since a given multi-span answer is a collection of spans, it is required to obtain its matching tag sequences in order to compute the training objective.
In what we consider to be a correct tag sequence, each answer span will be marked at least once. Due to the weakly supervised setup, we consider all the question/passage spans that match the answer spans as being correct. To illustrate, consider the following simple example. Given the text "X Y Z Z" and the correct multi-span answer ["Y", "Z"], there are three correct tag sequences: $\mathtt {O\,B\,B\,B}$,$\quad $ $\mathtt {O\,B\,B\,O}$,$\quad $ $\mathtt {O\,B\,O\,B}$.
Model ::: Objective and Training ::: Dealing with too Many Correct Tag Sequences
The number of correct tag sequences can be expressed by
where $s$ is the number of spans in the answer and $\#_i$ is the number of times the $i^\text{th}$ span appears in the text.
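The expression itself is omitted in this text. Under the stated rules (each answer span must be marked in at least one of its occurrences, and occurrences are marked independently), the count works out to $\prod _{i=1}^{s}\left(2^{\#_i}-1\right)$; this is a reconstruction from the surrounding description rather than a quotation, but it agrees with the "X Y Z Z" example:

```python
def num_correct_tag_sequences(occurrence_counts):
    """occurrence_counts[i] = number of times the i-th answer span appears in the text.
    Each span must be marked in at least one of its occurrences."""
    total = 1
    for count in occurrence_counts:
        total *= 2 ** count - 1
    return total

# "X Y Z Z" with answer ["Y", "Z"]: "Y" appears once, "Z" twice -> 1 * 3 = 3 sequences
assert num_correct_tag_sequences([1, 2]) == 3
```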
For questions with a reasonable amount of correct tag sequences, we generate all of them before the training starts. However, there is a small group of questions for which the amount of such sequences is between 10,000 and 100,000,000 - too many to generate and train on. In such cases, inspired by BIBREF9, instead of just using an arbitrary subset of the correct sequences, we use beam search to generate the top-k predictions of the training model, and then filter out the incorrect sequences. Compared to using an arbitrary subset, using these sequences causes the optimization to be done with respect to answers more compatible with the model. If no correct tag sequences were predicted within the top-k, we use the tag sequence that has all of the answer spans marked.
Model ::: Tag Sequence Prediction with the Multi-Span Head
Based on the outputs $\textbf {p}_{i}^{{\text{tag}}_{i}}$ we would like to predict the most likely sequence given the $\mathtt {BIO}$ constraints. Denote $\textit {validSeqs}$ as the set of all $\mathtt {BIO}$ sequences of length $m$ that are valid according to the rules specified in Section SECREF5. The $\mathtt {BIO}$ tag sequence to predict is then
We considered the following approaches:
Model ::: Tag Sequence Prediction with the Multi-Span Head ::: Viterbi Decoding
A natural candidate for getting the most likely sequence is Viterbi decoding, BIBREF10 with transition probabilities learned by a $\mathtt {BIO}$ constrained Conditional Random Field (CRF) BIBREF11. However, further inspection of our sequence's properties reveals that such a computational effort is probably not necessary, as explained in following paragraphs.
Model ::: Tag Sequence Prediction with the Multi-Span Head ::: Beam Search
Due to our use of $\mathtt {BIO}$ tags and their constraints, observe that past tag predictions only affect future tag predictions from the last $\mathtt {B}$ prediction and as long as the best tag to predict is $\mathtt {I}$. Considering the frequency and length of the correct spans in the question and the passage, effectively there's no effect of past sequence's positions on future ones, other than a very few positions ahead. Together with the fact that at each prediction step there are no more than 3 tags to consider, it means using beam search to get the most likely sequence is very reasonable and even allows near-optimal results with small beam width values.
Model ::: Tag Sequence Prediction with the Multi-Span Head ::: Greedy Tagging
Notice that greedy tagging does not enforce the $\mathtt {BIO}$ constraints. However, since the multi-span head's training objective adheres to the $\mathtt {BIO}$ constraints via being given the correct tag sequences, we can expect that even with greedy tagging the predictions will mostly adhere to these constraints as well. In case there are violations, their amendment is required post-prediction. Albeit faster, greedy tagging resulted in a small performance hit, as seen in Table TABREF26.
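The amendment policy is not spelled out above; a minimal sketch of one possible post-prediction repair (the only possible $\mathtt {BIO}$ violation being an $\mathtt {I}$ that does not continue a span) is:

```python
def repair_bio(tags):
    """Amend greedy predictions so they satisfy the BIO constraints."""
    fixed = []
    for i, tag in enumerate(tags):
        if tag == "I" and (i == 0 or fixed[-1] == "O"):
            fixed.append("B")  # one simple policy: promote the stray "I" to a "B"
        else:
            fixed.append(tag)
    return fixed

# repair_bio(["O", "I", "I", "O"]) -> ["O", "B", "I", "O"]
```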
Preprocessing
We tokenize the passage, question, and all answer texts using the BERT uncased wordpiece tokenizer from huggingface. The tokenization resulting from each $(x^P,x^Q)$ input pair is truncated at 512 tokens so it can be fed to BERT as an input. However, before tokenizing the dataset texts, we perform additional preprocessing as listed below.
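A sketch of this step using the current transformers API; the original implementation predates this API and used an earlier huggingface release, so the exact calls below are an assumption:

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

def encode_pair(question, passage):
    # Question and passage are packed into a single input and truncated at
    # BERT's 512-token limit; [CLS]/[SEP] markers are added by the tokenizer.
    return tokenizer(question, passage, truncation=True, max_length=512)
```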
Preprocessing ::: Simple Preprocessing ::: Improved Textual Parsing
The raw dataset included almost a thousand HTML entities that did not get parsed properly, e.g. a non-breaking-space entity ("&nbsp;") left in the text instead of a simple space. In addition, we fixed some quirks introduced by the original Wikipedia parsing method. For example, when encountering a reference to an external source that included a specific page from that reference, the original parser ended up introducing a redundant ":<PAGE NUMBER>" into the parsed text.
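A sketch of this cleanup; the pattern used for the ":<PAGE NUMBER>" artifact is hypothetical, since the exact surrounding context of that artifact is not specified:

```python
import html
import re

def clean_passage(text):
    text = html.unescape(text)  # decodes leftover entities such as "&nbsp;" or "&amp;"
    # Hypothetical pattern for the ":<PAGE NUMBER>" artifact left by the original
    # Wikipedia parser; a real rule would likely need to be more targeted.
    text = re.sub(r":\s*\d+\b", "", text)
    return text
```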
Preprocessing ::: Simple Preprocessing ::: Improved Handling of Numbers
Although we previously stated that we aren't focusing on improving arithmetic performance, while analyzing the training process we encountered two arithmetic-related issues that could be resolved rather quickly: a precision issue and a number extraction issue. Regarding precision, we noticed that while either generating expressions for the arithmetic head, or using the arithmetic head to predict a numeric answer, the value resulting from an arithmetic operation would not always yield the exact result due to floating point precision limitations. For example, $5.8 + 6.6 = 12.3999...$ instead of $12.4$. This issue has caused a significant performance hit of about 1.5 points for both F1 and EM and was fixed by simply rounding numbers to 5 decimal places, assuming that no answer requires a greater precision. Regarding number extraction, we noticed that some numeric entities, required in order to produce a correct answer, weren't being extracted from the passage. Examples include ordinals (121st, 189th) and some "per-" units (1,580.7/km2, 1050.95/month).
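A sketch of both fixes; the exact rounding helper and number pattern used in the implementation are not given, so the ones below are illustrative:

```python
import re

def safe_round(value, places=5):
    # 5.8 + 6.6 evaluates to 12.399999999999999; rounding restores 12.4
    return round(value, places)

# A broader number pattern that also catches ordinals ("121st") and "per-" units
# ("1,580.7/km2", "1050.95/month") by matching the numeric prefix alone.
NUMBER_RE = re.compile(r"(?<!\w)\d{1,3}(?:,\d{3})+(?:\.\d+)?|(?<!\w)\d+(?:\.\d+)?")

def extract_numbers(text):
    return [float(m.group().replace(",", "")) for m in NUMBER_RE.finditer(text)]

assert safe_round(5.8 + 6.6) == 12.4
assert extract_numbers("121st and 1,580.7/km2 and 1050.95/month") == [121.0, 1580.7, 1050.95]
```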
Preprocessing ::: Using NER for Cleaning Up Multi-Span Questions
The training dataset contains multi-span questions with answers that are clearly incorrect, with examples shown in Table TABREF22. In order to mitigate this, we applied an answer-cleaning technique using a pretrained Named Entity Recognition (NER) model BIBREF12 in the following manner: (1) Pre-define question prefixes whose answer spans are expected to contain only a specific entity type and filter the matching questions. (2) For a given answer of a filtered question, remove any span that does not contain at least one token of the expected type, where the types are determined by applying the NER model on the passage. For example, if a question starts with "who scored", we expect that any valid span will include a person entity ($\mathtt {PER}$). By applying such rules, we discovered that at least 3% of the multi-span questions in the training dataset included incorrect spans. As our analysis of prefixes wasn't exhaustive, we believe that this method could yield further gains. Table TABREF22 shows a few of our cleaning method results, where we perfectly clean the first two questions, and partially clean a third question.
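A sketch of this cleaning rule; the prefix-to-entity-type table and the NER interface below are placeholders for whatever the implementation actually used:

```python
# Hypothetical prefix -> required entity type rules; the paper's full list isn't given.
PREFIX_RULES = {
    "who scored": "PER",
    "which team": "ORG",
}

def clean_multi_span_answer(question, answer_spans, passage_entities):
    """passage_entities: dict mapping a passage token to its predicted NER type
    (obtained by running a pretrained NER model over the passage)."""
    question = question.lower()
    for prefix, required_type in PREFIX_RULES.items():
        if question.startswith(prefix):
            return [
                span for span in answer_spans
                if any(passage_entities.get(tok) == required_type for tok in span.split())
            ]
    return answer_spans  # question not covered by any rule: leave the answer unchanged
```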
Training
The starting point for our implementation was the NABERT+ model, which in turn was based on allenai's NAQANET. Our implementation can be found on GitHub. All three models utilize the allennlp framework. The pretrained BERT models were supplied by huggingface. For our base model we used bert-base-uncased. For our large models we used the standard bert-large-uncased-whole-word-masking and the SQuAD-fine-tuned bert-large-uncased-whole-word-masking-finetuned-squad.
Due to limited computational resources, we did not perform any hyperparameter searching. We preferred to focus our efforts on the ablation studies, in hope to gain further insights on the effect of the components that we ourselves introduced. For ease of performance comparison, we followed NABERT+'s training settings: we used the BERT Adam optimizer from huggingface with default settings and a learning rate of $1e^{-5}$. The only difference was that we used a batch size of 12. We trained our base model for 20 epochs. For the large models we used a batch size of 3 with a learning rate of $5e^{-6}$ and trained for 5 epochs, except for the model without the single-span heads that was trained with a batch size of 2 for 7 epochs. F1 was used as our validation metric. All models were trained on a single GPU with 12-16GB of memory.
Results and Discussion ::: Performance on DROP's Development Set
Table TABREF24 shows the results on DROP's development set. Compared to our base models, our large models exhibit a substantial improvement across all metrics.
Results and Discussion ::: Performance on DROP's Development Set ::: Comparison to the NABERT+ Baseline
We can see that our base model surpasses the NABERT+ baseline in every metric. The major improvement in multi-span performance was expected, as our multi-span head was introduced specifically to tackle this type of questions. For the other types, most of the improvement came from better preprocessing. A more detailed discussion could be found in Section (SECREF36).
Results and Discussion ::: Performance on DROP's Development Set ::: Comparison to MTMSN
Notice that different BERTlarge models were used, so the comparison is less direct. Overall, our large models exhibit similar results to those of MTMSNlarge.
For multi-span questions we achieve a significantly better performance. While a breakdown of metrics was only available for MTMSNlarge, notice that even when comparing these metrics to our base model, we still achieve a 12.2 absolute improvement in EM, and a 2.3 improvement in F1. All that, while keeping in mind we compare a base model to a large model (for reference, note the 8 point improvement between MTMSNbase and MTMSNlarge in both EM and F1). Our best model, large-squad, exhibits a huge improvement of 29.7 in EM and 15.1 in F1 compared to MTMSNlarge.
When comparing single-span performance, our best model exhibits slightly better results, but it should be noted that it retains the single-span heads from NABERT+, while in MTMSN they have one head to predict both single-span and multi-span answers. For a fairer comparison, we trained our model with the single-span heads removed, where our multi-span head remained the only head aimed for handling span questions. With this no-single-span-heads setting, while our multi-span performance even improved a bit, our single-span performance suffered a slight drop, ending up trailing by 0.8 in EM and 0.6 in F1 compared to MTMSN. Therefore, it could prove beneficial to try and analyze the reasons behind each model's (ours and MTMSN) relative advantages, and perhaps try to combine them into a more holistic approach of tackling span questions.
Results and Discussion ::: Performance on DROP's Test Set
Table TABREF25 shows the results on DROP's test set, with our model being the best overall as of the time of writing, and not just on multi-span questions.
Results and Discussion ::: Ablation Studies
In order to analyze the effect of each of our changes, we conduct ablation studies on the development set, depicted in Table TABREF26.
Not using the simple preprocessing from Section SECREF17 resulted in a 2.5 point decrease in both EM and F1. The numeric questions were the most affected, with their performance dropping by 3.5 points. Given that number questions make up about 61% of the dataset, we can deduce that our improved number handling is responsible for about a 2.1 point gain, while the rest can be attributed to the improved Wikipedia parsing.
Although NER span cleaning (Section SECREF23) affected only 3% of the multi-span questions, it provided a solid improvement of 5.4 EM in multi-span questions and 1.5 EM in single-span questions. The single-span improvement is probably due to the combination of better multi-span head learning as a result of fixing multi-span questions and the fact that the multi-span head can answer single-span questions as well.
Not using the single-span heads results in a slight drop in multi-span performance, and a noticeable drop in single-span performance. However when performing the same comparison between our large models (see Table TABREF24), this performance gap becomes significantly smaller.
As expected, not using the multi-span head causes the multi-span performance to plummet. Note that for this ablation test the single-span heads were permitted to train on multi-span questions.
Compared to using greedy decoding in the prediction of multi-span questions, using beam search results in a small improvement. We used a beam width of 5 and did not perform extensive tuning of it.
Conclusion
In this work, we introduced a new approach for tackling multi-span questions in reading comprehension datasets. This approach is based on individually tagging each token with a categorical tag, relying on the tokens' contextual representation to bridge the information gap resulting from the tokens being tagged individually.
First, we show that integrating this new approach into an existing model, NABERT+, does not hinder performance on other question types, while substantially improving the results on multi-span questions. Later, we compare our results to the current state-of-the-art on multi-span questions. We show that our model has a clear advantage in handling multi-span questions, with a 29.7 absolute improvement in EM, and a 15.1 absolute improvement in F1. Furthermore, we show that our model slightly eclipses the current state-of-the-art results on the entire DROP dataset. Finally, we present some ablation studies, analyzing the benefit gained from individual components of our model.
We believe that combining our tag-based approach for handling multi-span questions with current successful techniques for handling single-span questions could prove beneficial in finding better, more holistic ways, of tackling span questions in general.
Future Work ::: A Different Loss for Multi-span Questions
Currently, for each individual span, we optimize the average likelihood over all its possible tag sequences (see Section SECREF9). A different approach could be to take into account not every possible tag sequence, but only the most likely one. This could give the model more flexibility during training and the ability to focus on the more "correct" tag sequences.
Future Work ::: Explore Utilization of Non-First Wordpiece Sub-Tokens
As mentioned in Section SECREF5, we only considered the representation of the first wordpiece sub-token in our model. It would be interesting to see how different approaches to utilize the other sub-tokens' representations in the tagging task affect the results. | To model an answer which is a collection of spans, the multi-span head uses the $\mathtt {BIO}$ tagging format BIBREF8: $\mathtt {B}$ is used to mark the beginning of a span, $\mathtt {I}$ is used to mark the inside of a span and $\mathtt {O}$ is used to mark tokens not included in a span |
579941de2838502027716bae88e33e79e69997a6 | 579941de2838502027716bae88e33e79e69997a6_0 | Q: What is the difference in performance between the proposed model and the state of the art on other question types?
Text: Introduction
The task of reading comprehension, where systems must understand a single passage of text well enough to answer arbitrary questions about it, has seen significant progress in the last few years. With models reaching human performance on the popular SQuAD dataset BIBREF0, and with many of the most popular reading comprehension datasets having been solved BIBREF1, BIBREF2, a new dataset, DROP BIBREF3, was recently published.
DROP aimed to present questions that require more complex reasoning in order to answer than that of previous datasets, in a hope to push the field towards a more comprehensive analysis of paragraphs of text. In addition to questions whose answers are a single continuous span from the paragraph text (questions of a type already included in SQuAD), DROP introduced additional types of questions. Among these new types were questions that require simple numerical reasoning, i.e questions whose answer is the result of a simple arithmetic expression containing numbers from the passage, and questions whose answers consist of several spans taken from the paragraph or the question itself, what we will denote as "multi-span questions".
Of all the existing models that tried to tackle DROP, only one model BIBREF4 directly targeted multi-span questions in a manner that wasn't just a by-product of the model's overall performance. In this paper, we propose a new method for tackling multi-span questions. Our method takes a different path from that of the aforementioned model. It does not try to generalize the existing approach for tackling single-span questions, but instead attempts to attack this issue with a new, tag-based, approach.
Related Work
Numerically-aware QANet (NAQANet) BIBREF3 was the model released with DROP. It uses QANET BIBREF5, at the time the best-performing published model on SQuAD 1.1 BIBREF0 (without data augmentation or pretraining), as the encoder. On top of QANET, NAQANet adds four different output layers, which we refer to as "heads". Each of these heads is designed to tackle a specific question type from DROP, where these types were identified by DROP's authors after the dataset's creation. These four heads are (1) Passage span head, designed for producing answers that consist of a single span from the passage. This head deals with the type of questions already introduced in SQuAD. (2) Question span head, for answers that consist of a single span from the question. (3) Arithmetic head, for answers that require adding or subtracting numbers from the passage. (4) Count head, for answers that require counting and sorting entities from the text. In addition, to determine which head should be used to predict an answer, a 4-way categorical variable, as per the number of heads, is trained. We denote this categorical variable as the "head predictor".
Numerically-aware BERT (NABERT+) BIBREF6 introduced two main improvements over NAQANET. The first was to replace the QANET encoder with BERT. This change alone resulted in an absolute improvement of more than eight points in both EM and F1 metrics. The second improvement was to the arithmetic head, consisting of the addition of "standard numbers" and "templates". Standard numbers were predefined numbers which were added as additional inputs to the arithmetic head, regardless of their occurrence in the passage. Templates were an attempt to enrich the head's arithmetic capabilities, by adding the ability of doing simple multiplications and divisions between up to three numbers.
MTMSN BIBREF4 is the first, and only model so far, that specifically tried to tackle the multi-span questions of DROP. Their approach consisted of two parts. The first was to train a dedicated categorical variable to predict the number of spans to extract. The second was to generalize the single-span head method of extracting a span, by utilizing the non-maximum suppression (NMS) algorithm BIBREF7 to find the most probable set of non-overlapping spans. The number of spans to extract was determined by the aforementioned categorical variable.
Additionally, MTMSN introduced two other new, non-span-related components. The first was a new "negation" head, meant to deal with questions deemed to require logical negation (e.g. "How many percent were not German?"). The second was an improvement to the arithmetic head, using beam search to re-rank candidate arithmetic expressions.
Model
Problem statement. Given a pair $(x^P,x^Q)$ of a passage and a question respectively, both composed of tokens from a vocabulary $V$, we wish to predict an answer $y$. The answer can be either a collection of spans from the input, or a number, arrived at by performing arithmetic reasoning on the input. We want to estimate $p(y;x^P,x^Q)$.
The basic structure of our model is shared with NABERT+, which in turn is shared with that of NAQANET (the model initially released with DROP). Consequently, meticulously presenting every part of our model would very likely prove redundant. As a reasonable compromise, we will introduce the shared parts with more brevity, and will go into greater detail when presenting our contributions.
Model ::: NABERT+
Assume there are $K$ answer heads in the model and their weights denoted by $\theta $. For each pair $(x^P,x^Q)$ we assume a latent categorical random variable $z\in \left\lbrace 1,\ldots \,K\right\rbrace $ such that the probability of an answer $y$ is
where each component of the mixture corresponds to an output head such that
Note that a head is not always capable of producing the correct answer $y_\text{gold}$ for each type of question, in which case $p\left(y_\text{gold} \vert z ; x^{P},x^{Q},\theta \right)=0$. For example, the arithmetic head, whose output is always a single number, cannot possibly produce a correct answer for a multi-span question.
For a multi-span question with an answer composed of $l$ spans, denote $y_{{\text{gold}}_{\textit {MS}}}=\left\lbrace y_{{\text{gold}}_1}, \ldots , y_{{\text{gold}}_l} \right\rbrace $. NAQANET and NABERT+ had no head capable of outputting correct answers for multi-span questions. Instead of ignoring them in training, both models settled on using "semi-correct answers": each $y_\text{gold} \in y_{{\text{gold}}_{\textit {MS}}}$ was considered to be a correct answer (only in training). By deliberately encouraging the model to provide partial answers for multi-span questions, they were able to improve the corresponding F1 score. As our model does have a head with the ability to answer multi-span questions correctly, we didn't provide the aforementioned semi-correct answers to any of the other heads. Otherwise, we would have skewed the predictions of the head predictor and effectively mislead the other heads to believe they could predict correct answers for multi-span questions.
Model ::: NABERT+ ::: Heads Shared with NABERT+
Before going over the answer heads, two additional components should be introduced - the summary vectors, and the head predictor.
Summary vectors. The summary vectors are two fixed-size learned representations of the question and the passage, which serve as an input for some of the heads. To create the summary vectors, first define $\mathbf {T}$ as BERT's output on a $(x^{P},x^{Q})$ input. Then, let $\mathbf {T}^{P}$ and $\mathbf {T}^{Q}$ be subsequences of T that correspond to $x^P$ and $x^Q$ respectively. Finally, let us also define Bdim as the dimension of the tokens in $\mathbf {T}$ (e.g 768 for BERTbase), and have $\mathbf {W}^P \in \mathbb {R}^\texttt {Bdim}$ and $\mathbf {W}^Q \in \mathbb {R}^\texttt {Bdim}$ as learned linear layers. Then, the summary vectors are computed as:
Head predictor. A learned categorical variable with its number of outcomes equal to the number of answer heads in the model. Used to assign probabilities for using each of the heads in prediction.
where FFN is a two-layer feed-forward network with ReLU activation.
Passage span. Define $\textbf {W}^S \in \mathbb {R}^\texttt {Bdim}$ and $\textbf {W}^E \in \mathbb {R}^\texttt {Bdim}$ as learned vectors. Then the probabilities of the start and end positions of a passage span are computed as
Question span. The probabilities of the start and end positions of a question span are computed as
where $\textbf {e}^{|\textbf {T}^Q|}\otimes \textbf {h}^P$ repeats $\textbf {h}^P$ for each component of $\textbf {T}^Q$.
Count. Counting is treated as a multi-class prediction problem with the numbers 0-9 as possible labels. The label probabilities are computed as
Arithmetic. As in NAQANET, this head obtains all of the numbers from the passage and assigns a plus, minus, or zero ("ignore") to each number. As BERT uses wordpiece tokenization, some numbers are broken up into multiple tokens. Following NABERT+, we chose to represent each number by its first wordpiece. That is, if $\textbf {N}^i$ is the set of tokens corresponding to the $i^\text{th}$ number, we define a number representation as $\textbf {h}_i^N = \textbf {N}^i_0$.
The selection of the sign for each number is a multi-class prediction problem with options $\lbrace 0, +, -\rbrace $, and the probabilities for the signs are given by
As for NABERT+'s two additional arithmetic features, we decided on using only the standard numbers, as the benefits from using templates were deemed inconclusive. Note that unlike the single-span heads, which are related to our introduction of a multi-span head, the arithmetic and count heads were not intended to play a significant role in our work. We didn't aim to improve results on these types of questions, perhaps only as a by-product of improving the general reading comprehension ability of our model.
Model ::: Multi-Span Head
A subset of questions that wasn't directly dealt with by the base models (NAQANET, NABERT+) is questions that have an answer which is composed of multiple non-continuous spans. We suggest a head that will be able to deal with both single-span and multi-span questions.
To model an answer which is a collection of spans, the multi-span head uses the $\mathtt {BIO}$ tagging format BIBREF8: $\mathtt {B}$ is used to mark the beginning of a span, $\mathtt {I}$ is used to mark the inside of a span and $\mathtt {O}$ is used to mark tokens not included in a span. In this way, we get a sequence of chunks that can be decoded to a final answer - a collection of spans.
As words are broken up by the wordpiece tokenization for BERT, we decided on only considering the representation of the first sub-token of the word to tag, following the NER task from BIBREF2.
For the $i$-th token of an input, the probability to be assigned a $\text{tag} \in \left\lbrace {\mathtt {B},\mathtt {I},\mathtt {O}} \right\rbrace $ is computed as
Model ::: Objective and Training
To train our model, we try to maximize the log-likelihood of the correct answer $p(y_\text{gold};x^{P},x^{Q},\theta )$ as defined in Section SECREF2. If no head is capable of predicting the gold answer, the sample is skipped.
We enumerate over every answer head $z\in \left\lbrace \textit {PS}, \textit {QS}, \textit {C}, \textit {A}, \textit {MS}\right\rbrace $ (Passage Span, Question Span, Count, Arithmetic, Multi-Span) to compute each of the objective's addends:
Note that we are in a weakly supervised setup: the answer type is not given, and neither is the correct arithmetic expression required for deriving some answers. Therefore, it is possible that $y_\text{gold}$ could be derived by more than one way, even from the same head, with no indication of which is the "correct" one.
We use the weakly supervised training method used in NABERT+ and NAQANET. Based on BIBREF9, for each head we find all the executions that evaluate to the correct answer and maximize their marginal likelihood .
For a datapoint $\left(y, x^{P}, x^{Q} \right)$ let $\chi ^z$ be the set of all possible ways to get $y$ for answer head $z\in \left\lbrace \textit {PS}, \textit {QS}, \textit {C}, \textit {A}, \textit {MS}\right\rbrace $. Then, as in NABERT+, we have
Finally, for the arithmetic head, let $\mu $ be the set of all the standard numbers and the numbers from the passage, and let $\mathbf {\chi }^{\textit {A}}$ be the set of correct sign assignments to these numbers. Then, we have
Model ::: Objective and Training ::: Multi-Span Head Training Objective
Denote by ${\chi }^{\textit {MS}}$ the set of correct tag sequences. If the concatenation of a question and a passage is $m$ tokens long, then denote a correct tag sequence as $\left(\text{tag}_1,\ldots ,\text{tag}_m\right)$.
We approximate the likelihood of a tag sequence by assuming independence between the sequence's positions, and multiplying the likelihoods of all the correct tags in the sequence. Then, we have
Model ::: Objective and Training ::: Multi-Span Head Correct Tag Sequences
Since a given multi-span answer is a collection of spans, it is required to obtain its matching tag sequences in order to compute the training objective.
In what we consider to be a correct tag sequence, each answer span will be marked at least once. Due to the weakly supervised setup, we consider all the question/passage spans that match the answer spans as being correct. To illustrate, consider the following simple example. Given the text "X Y Z Z" and the correct multi-span answer ["Y", "Z"], there are three correct tag sequences: $\mathtt {O\,B\,B\,B}$,$\quad $ $\mathtt {O\,B\,B\,O}$,$\quad $ $\mathtt {O\,B\,O\,B}$.
Model ::: Objective and Training ::: Dealing with too Many Correct Tag Sequences
The number of correct tag sequences can be expressed by
where $s$ is the number of spans in the answer and $\#_i$ is the number of times the $i^\text{th}$ span appears in the text.
For questions with a reasonable amount of correct tag sequences, we generate all of them before the training starts. However, there is a small group of questions for which the amount of such sequences is between 10,000 and 100,000,000 - too many to generate and train on. In such cases, inspired by BIBREF9, instead of just using an arbitrary subset of the correct sequences, we use beam search to generate the top-k predictions of the training model, and then filter out the incorrect sequences. Compared to using an arbitrary subset, using these sequences causes the optimization to be done with respect to answers more compatible with the model. If no correct tag sequences were predicted within the top-k, we use the tag sequence that has all of the answer spans marked.
Model ::: Tag Sequence Prediction with the Multi-Span Head
Based on the outputs $\textbf {p}_{i}^{{\text{tag}}_{i}}$ we would like to predict the most likely sequence given the $\mathtt {BIO}$ constraints. Denote $\textit {validSeqs}$ as the set of all $\mathtt {BIO}$ sequences of length $m$ that are valid according to the rules specified in Section SECREF5. The $\mathtt {BIO}$ tag sequence to predict is then
We considered the following approaches:
Model ::: Tag Sequence Prediction with the Multi-Span Head ::: Viterbi Decoding
A natural candidate for getting the most likely sequence is Viterbi decoding, BIBREF10 with transition probabilities learned by a $\mathtt {BIO}$ constrained Conditional Random Field (CRF) BIBREF11. However, further inspection of our sequence's properties reveals that such a computational effort is probably not necessary, as explained in following paragraphs.
Model ::: Tag Sequence Prediction with the Multi-Span Head ::: Beam Search
Due to our use of $\mathtt {BIO}$ tags and their constraints, observe that past tag predictions only affect future tag predictions from the last $\mathtt {B}$ prediction and as long as the best tag to predict is $\mathtt {I}$. Considering the frequency and length of the correct spans in the question and the passage, effectively there's no effect of past sequence's positions on future ones, other than a very few positions ahead. Together with the fact that at each prediction step there are no more than 3 tags to consider, it means using beam search to get the most likely sequence is very reasonable and even allows near-optimal results with small beam width values.
Model ::: Tag Sequence Prediction with the Multi-Span Head ::: Greedy Tagging
Notice that greedy tagging does not enforce the $\mathtt {BIO}$ constraints. However, since the multi-span head's training objective adheres to the $\mathtt {BIO}$ constraints via being given the correct tag sequences, we can expect that even with greedy tagging the predictions will mostly adhere to these constraints as well. In case there are violations, their amendment is required post-prediction. Albeit faster, greedy tagging resulted in a small performance hit, as seen in Table TABREF26.
Preprocessing
We tokenize the passage, question, and all answer texts using the BERT uncased wordpiece tokenizer from huggingface. The tokenization resulting from each $(x^P,x^Q)$ input pair is truncated at 512 tokens so it can be fed to BERT as an input. However, before tokenizing the dataset texts, we perform additional preprocessing as listed below.
Preprocessing ::: Simple Preprocessing ::: Improved Textual Parsing
The raw dataset included almost a thousand HTML entities that did not get parsed properly, e.g. a non-breaking-space entity ("&nbsp;") left in the text instead of a simple space. In addition, we fixed some quirks introduced by the original Wikipedia parsing method. For example, when encountering a reference to an external source that included a specific page from that reference, the original parser ended up introducing a redundant ":<PAGE NUMBER>" into the parsed text.
Preprocessing ::: Simple Preprocessing ::: Improved Handling of Numbers
Although we previously stated that we aren't focusing on improving arithmetic performance, while analyzing the training process we encountered two arithmetic-related issues that could be resolved rather quickly: a precision issue and a number extraction issue. Regarding precision, we noticed that while either generating expressions for the arithmetic head, or using the arithmetic head to predict a numeric answer, the value resulting from an arithmetic operation would not always yield the exact result due to floating point precision limitations. For example, $5.8 + 6.6 = 12.3999...$ instead of $12.4$. This issue has caused a significant performance hit of about 1.5 points for both F1 and EM and was fixed by simply rounding numbers to 5 decimal places, assuming that no answer requires a greater precision. Regarding number extraction, we noticed that some numeric entities, required in order to produce a correct answer, weren't being extracted from the passage. Examples include ordinals (121st, 189th) and some "per-" units (1,580.7/km2, 1050.95/month).
Preprocessing ::: Using NER for Cleaning Up Multi-Span Questions
The training dataset contains multi-span questions with answers that are clearly incorrect, with examples shown in Table TABREF22. In order to mitigate this, we applied an answer-cleaning technique using a pretrained Named Entity Recognition (NER) model BIBREF12 in the following manner: (1) Pre-define question prefixes whose answer spans are expected to contain only a specific entity type and filter the matching questions. (2) For a given answer of a filtered question, remove any span that does not contain at least one token of the expected type, where the types are determined by applying the NER model on the passage. For example, if a question starts with "who scored", we expect that any valid span will include a person entity ($\mathtt {PER}$). By applying such rules, we discovered that at least 3% of the multi-span questions in the training dataset included incorrect spans. As our analysis of prefixes wasn't exhaustive, we believe that this method could yield further gains. Table TABREF22 shows a few of our cleaning method results, where we perfectly clean the first two questions, and partially clean a third question.
Training
The starting point for our implementation was the NABERT+ model, which in turn was based on allenai's NAQANET. Our implementation can be found on GitHub. All three models utilize the allennlp framework. The pretrained BERT models were supplied by huggingface. For our base model we used bert-base-uncased. For our large models we used the standard bert-large-uncased-whole-word-masking and the SQuAD-fine-tuned bert-large-uncased-whole-word-masking-finetuned-squad.
Due to limited computational resources, we did not perform any hyperparameter searching. We preferred to focus our efforts on the ablation studies, in hope to gain further insights on the effect of the components that we ourselves introduced. For ease of performance comparison, we followed NABERT+'s training settings: we used the BERT Adam optimizer from huggingface with default settings and a learning rate of $1e^{-5}$. The only difference was that we used a batch size of 12. We trained our base model for 20 epochs. For the large models we used a batch size of 3 with a learning rate of $5e^{-6}$ and trained for 5 epochs, except for the model without the single-span heads that was trained with a batch size of 2 for 7 epochs. F1 was used as our validation metric. All models were trained on a single GPU with 12-16GB of memory.
Results and Discussion ::: Performance on DROP's Development Set
Table TABREF24 shows the results on DROP's development set. Compared to our base models, our large models exhibit a substantial improvement across all metrics.
Results and Discussion ::: Performance on DROP's Development Set ::: Comparison to the NABERT+ Baseline
We can see that our base model surpasses the NABERT+ baseline in every metric. The major improvement in multi-span performance was expected, as our multi-span head was introduced specifically to tackle this type of questions. For the other types, most of the improvement came from better preprocessing. A more detailed discussion could be found in Section (SECREF36).
Results and Discussion ::: Performance on DROP's Development Set ::: Comparison to MTMSN
Notice that different BERTlarge models were used, so the comparison is less direct. Overall, our large models exhibit similar results to those of MTMSNlarge.
For multi-span questions we achieve a significantly better performance. While a breakdown of metrics was only available for MTMSNlarge, notice that even when comparing these metrics to our base model, we still achieve a 12.2 absolute improvement in EM, and a 2.3 improvement in F1. All that, while keeping in mind we compare a base model to a large model (for reference, note the 8 point improvement between MTMSNbase and MTMSNlarge in both EM and F1). Our best model, large-squad, exhibits a huge improvement of 29.7 in EM and 15.1 in F1 compared to MTMSNlarge.
When comparing single-span performance, our best model exhibits slightly better results, but it should be noted that it retains the single-span heads from NABERT+, while in MTMSN they have one head to predict both single-span and multi-span answers. For a fairer comparison, we trained our model with the single-span heads removed, where our multi-span head remained the only head aimed for handling span questions. With this no-single-span-heads setting, while our multi-span performance even improved a bit, our single-span performance suffered a slight drop, ending up trailing by 0.8 in EM and 0.6 in F1 compared to MTMSN. Therefore, it could prove beneficial to try and analyze the reasons behind each model's (ours and MTMSN) relative advantages, and perhaps try to combine them into a more holistic approach of tackling span questions.
Results and Discussion ::: Performance on DROP's Test Set
Table TABREF25 shows the results on DROP's test set, with our model being the best overall as of the time of writing, and not just on multi-span questions.
Results and Discussion ::: Ablation Studies
In order to analyze the effect of each of our changes, we conduct ablation studies on the development set, depicted in Table TABREF26.
Not using the simple preprocessing from Section SECREF17 resulted in a 2.5 point decrease in both EM and F1. The numeric questions were the most affected, with their performance dropping by 3.5 points. Given that number questions make up about 61% of the dataset, we can deduce that our improved number handling is responsible for about a 2.1 point gain, while the rest can be attributed to the improved Wikipedia parsing.
Although NER span cleaning (Section SECREF23) affected only 3% of the multi-span questions, it provided a solid improvement of 5.4 EM in multi-span questions and 1.5 EM in single-span questions. The single-span improvement is probably due to the combination of better multi-span head learning as a result of fixing multi-span questions and the fact that the multi-span head can answer single-span questions as well.
Not using the single-span heads results in a slight drop in multi-span performance, and a noticeable drop in single-span performance. However when performing the same comparison between our large models (see Table TABREF24), this performance gap becomes significantly smaller.
As expected, not using the multi-span head causes the multi-span performance to plummet. Note that for this ablation test the single-span heads were permitted to train on multi-span questions.
Compared to using greedy decoding in the prediction of multi-span questions, using beam search results in a small improvement. We used a beam width of 5 and did not perform extensive tuning of it.
Conclusion
In this work, we introduced a new approach for tackling multi-span questions in reading comprehension datasets. This approach is based on individually tagging each token with a categorical tag, relying on the tokens' contextual representation to bridge the information gap resulting from the tokens being tagged individually.
First, we show that integrating this new approach into an existing model, NABERT+, does not hinder performance on other question types, while substantially improving the results on multi-span questions. Later, we compare our results to the current state-of-the-art on multi-span questions. We show that our model has a clear advantage in handling multi-span questions, with a 29.7 absolute improvement in EM, and a 15.1 absolute improvement in F1. Furthermore, we show that our model slightly eclipses the current state-of-the-art results on the entire DROP dataset. Finally, we present some ablation studies, analyzing the benefit gained from individual components of our model.
We believe that combining our tag-based approach for handling multi-span questions with current successful techniques for handling single-span questions could prove beneficial in finding better, more holistic ways, of tackling span questions in general.
Future Work ::: A Different Loss for Multi-span Questions
Currently, for each individual span, we optimize the average likelihood over all its possible tag sequences (see Section SECREF9). A different approach could be to take into account not every possible tag sequence, but only the most likely one. This could give the model more flexibility during training and the ability to focus on the more "correct" tag sequences.
Future Work ::: Explore Utilization of Non-First Wordpiece Sub-Tokens
As mentioned in Section SECREF5, we only considered the representation of the first wordpiece sub-token in our model. It would be interesting to see how different approaches to utilizing the other sub-tokens' representations in the tagging task affect the results. | For single-span questions, the proposed LARGE-SQUAD improves performance over the MTMSNlarge baseline by 2.1 EM and 1.55 F1.
For number-type questions, the MTMSNlarge baseline improves over LARGE-SQUAD by 3.11 EM and 2.98 F1.
For date questions, LARGE-SQUAD improves by 2.02 EM, while MTMSNlarge improves by 4.39 F1. |
9a65cfff4d99e4f9546c72dece2520cae6231810 | 9a65cfff4d99e4f9546c72dece2520cae6231810_0 | Q: What is the performance of the proposed model on the entire DROP dataset?
Text: Introduction
The task of reading comprehension, where systems must understand a single passage of text well enough to answer arbitrary questions about it, has seen significant progress in the last few years. With models reaching human performance on the popular SQuAD dataset BIBREF0, and with many of the most popular reading comprehension datasets having been solved BIBREF1, BIBREF2, a new dataset, DROP BIBREF3, was recently published.
DROP aimed to present questions that require more complex reasoning in order to answer than that of previous datasets, in a hope to push the field towards a more comprehensive analysis of paragraphs of text. In addition to questions whose answers are a single continuous span from the paragraph text (questions of a type already included in SQuAD), DROP introduced additional types of questions. Among these new types were questions that require simple numerical reasoning, i.e questions whose answer is the result of a simple arithmetic expression containing numbers from the passage, and questions whose answers consist of several spans taken from the paragraph or the question itself, what we will denote as "multi-span questions".
Of all the existing models that tried to tackle DROP, only one model BIBREF4 directly targeted multi-span questions in a manner that wasn't just a by-product of the model's overall performance. In this paper, we propose a new method for tackling multi-span questions. Our method takes a different path from that of the aforementioned model. It does not try to generalize the existing approach for tackling single-span questions, but instead attempts to attack this issue with a new, tag-based, approach.
Related Work
Numerically-aware QANet (NAQANet) BIBREF3 was the model released with DROP. It uses QANET BIBREF5, at the time the best-performing published model on SQuAD 1.1 BIBREF0 (without data augmentation or pretraining), as the encoder. On top of QANET, NAQANet adds four different output layers, which we refer to as "heads". Each of these heads is designed to tackle a specific question type from DROP, where these types were identified by DROP's authors after the dataset's creation. These four heads are (1) Passage span head, designed for producing answers that consist of a single span from the passage. This head deals with the type of questions already introduced in SQuAD. (2) Question span head, for answers that consist of a single span from the question. (3) Arithmetic head, for answers that require adding or subtracting numbers from the passage. (4) Count head, for answers that require counting and sorting entities from the text. In addition, to determine which head should be used to predict an answer, a 4-way categorical variable, as per the number of heads, is trained. We denote this categorical variable as the "head predictor".
Numerically-aware BERT (NABERT+) BIBREF6 introduced two main improvements over NAQANET. The first was to replace the QANET encoder with BERT. This change alone resulted in an absolute improvement of more than eight points in both EM and F1 metrics. The second improvement was to the arithmetic head, consisting of the addition of "standard numbers" and "templates". Standard numbers were predefined numbers which were added as additional inputs to the arithmetic head, regardless of their occurrence in the passage. Templates were an attempt to enrich the head's arithmetic capabilities, by adding the ability of doing simple multiplications and divisions between up to three numbers.
MTMSN BIBREF4 is the first, and only model so far, that specifically tried to tackle the multi-span questions of DROP. Their approach consisted of two parts. The first was to train a dedicated categorical variable to predict the number of spans to extract. The second was to generalize the single-span head method of extracting a span, by utilizing the non-maximum suppression (NMS) algorithm BIBREF7 to find the most probable set of non-overlapping spans. The number of spans to extract was determined by the aforementioned categorical variable.
Additionally, MTMSN introduced two other new, non-span-related components. The first was a new "negation" head, meant to deal with questions deemed to require logical negation (e.g. "How many percent were not German?"). The second was an improvement to the arithmetic head, using beam search to re-rank candidate arithmetic expressions.
Model
Problem statement. Given a pair $(x^P,x^Q)$ of a passage and a question respectively, both composed of tokens from a vocabulary $V$, we wish to predict an answer $y$. The answer can be either a collection of spans from the input, or a number, arrived at by performing arithmetic reasoning on the input. We want to estimate $p(y;x^P,x^Q)$.
The basic structure of our model is shared with NABERT+, which in turn is shared with that of NAQANET (the model initially released with DROP). Consequently, meticulously presenting every part of our model would very likely prove redundant. As a reasonable compromise, we will introduce the shared parts with more brevity, and will go into greater detail when presenting our contributions.
Model ::: NABERT+
Assume there are $K$ answer heads in the model and their weights denoted by $\theta $. For each pair $(x^P,x^Q)$ we assume a latent categorical random variable $z\in \left\lbrace 1,\ldots \,K\right\rbrace $ such that the probability of an answer $y$ is
where each component of the mixture corresponds to an output head such that
Note that a head is not always capable of producing the correct answer $y_\text{gold}$ for each type of question, in which case $p\left(y_\text{gold} \vert z ; x^{P},x^{Q},\theta \right)=0$. For example, the arithmetic head, whose output is always a single number, cannot possibly produce a correct answer for a multi-span question.
For a multi-span question with an answer composed of $l$ spans, denote $y_{{\text{gold}}_{\textit {MS}}}=\left\lbrace y_{{\text{gold}}_1}, \ldots , y_{{\text{gold}}_l} \right\rbrace $. NAQANET and NABERT+ had no head capable of outputting correct answers for multi-span questions. Instead of ignoring them in training, both models settled on using "semi-correct answers": each $y_\text{gold} \in y_{{\text{gold}}_{\textit {MS}}}$ was considered to be a correct answer (only in training). By deliberately encouraging the model to provide partial answers for multi-span questions, they were able to improve the corresponding F1 score. As our model does have a head with the ability to answer multi-span questions correctly, we didn't provide the aforementioned semi-correct answers to any of the other heads. Otherwise, we would have skewed the predictions of the head predictor and effectively mislead the other heads to believe they could predict correct answers for multi-span questions.
Model ::: NABERT+ ::: Heads Shared with NABERT+
Before going over the answer heads, two additional components should be introduced - the summary vectors, and the head predictor.
Summary vectors. The summary vectors are two fixed-size learned representations of the question and the passage, which serve as an input for some of the heads. To create the summary vectors, first define $\mathbf {T}$ as BERT's output on a $(x^{P},x^{Q})$ input. Then, let $\mathbf {T}^{P}$ and $\mathbf {T}^{Q}$ be subsequences of T that correspond to $x^P$ and $x^Q$ respectively. Finally, let us also define Bdim as the dimension of the tokens in $\mathbf {T}$ (e.g 768 for BERTbase), and have $\mathbf {W}^P \in \mathbb {R}^\texttt {Bdim}$ and $\mathbf {W}^Q \in \mathbb {R}^\texttt {Bdim}$ as learned linear layers. Then, the summary vectors are computed as:
Head predictor. A learned categorical variable with its number of outcomes equal to the number of answer heads in the model. Used to assign probabilities for using each of the heads in prediction.
where FFN is a two-layer feed-forward network with ReLU activation.
Passage span. Define $\textbf {W}^S \in \mathbb {R}^\texttt {Bdim}$ and $\textbf {W}^E \in \mathbb {R}^\texttt {Bdim}$ as learned vectors. Then the probabilities of the start and end positions of a passage span are computed as
Question span. The probabilities of the start and end positions of a question span are computed as
where $\textbf {e}^{|\textbf {T}^Q|}\otimes \textbf {h}^P$ repeats $\textbf {h}^P$ for each component of $\textbf {T}^Q$.
Count. Counting is treated as a multi-class prediction problem with the numbers 0-9 as possible labels. The label probabilities are computed as
Arithmetic. As in NAQANET, this head obtains all of the numbers from the passage and assigns a plus, minus, or zero ("ignore") to each number. As BERT uses wordpiece tokenization, some numbers are broken up into multiple tokens. Following NABERT+, we chose to represent each number by its first wordpiece. That is, if $\textbf {N}^i$ is the set of tokens corresponding to the $i^\text{th}$ number, we define a number representation as $\textbf {h}_i^N = \textbf {N}^i_0$.
The selection of the sign for each number is a multi-class prediction problem with options $\lbrace 0, +, -\rbrace $, and the probabilities for the signs are given by
As for NABERT+'s two additional arithmetic features, we decided on using only the standard numbers, as the benefits from using templates were deemed inconclusive. Note that unlike the single-span heads, which are related to our introduction of a multi-span head, the arithmetic and count heads were not intended to play a significant role in our work. We didn't aim to improve results on these types of questions, perhaps only as a by-product of improving the general reading comprehension ability of our model.
Model ::: Multi-Span Head
A subset of questions that wasn't directly dealt with by the base models (NAQANET, NABERT+) is questions that have an answer which is composed of multiple non-continuous spans. We suggest a head that will be able to deal with both single-span and multi-span questions.
To model an answer which is a collection of spans, the multi-span head uses the $\mathtt {BIO}$ tagging format BIBREF8: $\mathtt {B}$ is used to mark the beginning of a span, $\mathtt {I}$ is used to mark the inside of a span and $\mathtt {O}$ is used to mark tokens not included in a span. In this way, we get a sequence of chunks that can be decoded to a final answer - a collection of spans.
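For illustration, a minimal sketch (not the paper's code) of decoding a $\mathtt {BIO}$ tag sequence back into a collection of spans:

def decode_bio(tokens, tags):
    # Decode a BIO tag sequence into a list of span strings.
    spans, current = [], []
    for token, tag in zip(tokens, tags):
        if tag == "B":                  # a new span starts here
            if current:
                spans.append(current)
            current = [token]
        elif tag == "I" and current:    # continue the current span
            current.append(token)
        else:                           # "O" (or a stray "I") closes any open span
            if current:
                spans.append(current)
            current = []
    if current:
        spans.append(current)
    return [" ".join(s) for s in spans]

# "X Y Z Z" with tags O B B O -> ["Y", "Z"]
print(decode_bio(["X", "Y", "Z", "Z"], ["O", "B", "B", "O"]))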
As words are broken up by the wordpiece tokenization for BERT, we decided on only considering the representation of the first sub-token of the word to tag, following the NER task from BIBREF2.
For the $i$-th token of an input, the probability to be assigned a $\text{tag} \in \left\lbrace {\mathtt {B},\mathtt {I},\mathtt {O}} \right\rbrace $ is computed as
Model ::: Objective and Training
To train our model, we try to maximize the log-likelihood of the correct answer $p(y_\text{gold};x^{P},x^{Q},\theta )$ as defined in Section SECREF2. If no head is capable of predicting the gold answer, the sample is skipped.
We enumerate over every answer head $z\in \left\lbrace \textit {PS}, \textit {QS}, \textit {C}, \textit {A}, \textit {MS}\right\rbrace $ (Passage Span, Question Span, Count, Arithmetic, Multi-Span) to compute each of the objective's addends:
Note that we are in a weakly supervised setup: the answer type is not given, and neither is the correct arithmetic expression required for deriving some answers. Therefore, it is possible that $y_\text{gold}$ could be derived by more than one way, even from the same head, with no indication of which is the "correct" one.
We use the weakly supervised training method used in NABERT+ and NAQANET. Based on BIBREF9, for each head we find all the executions that evaluate to the correct answer and maximize their marginal likelihood .
For a datapoint $\left(y, x^{P}, x^{Q} \right)$ let $\chi ^z$ be the set of all possible ways to get $y$ for answer head $z\in \left\lbrace \textit {PS}, \textit {QS}, \textit {C}, \textit {A}, \textit {MS}\right\rbrace $. Then, as in NABERT+, we have
Finally, for the arithmetic head, let $\mu $ be the set of all the standard numbers and the numbers from the passage, and let $\mathbf {\chi }^{\textit {A}}$ be the set of correct sign assignments to these numbers. Then, we have
Model ::: Objective and Training ::: Multi-Span Head Training Objective
Denote by ${\chi }^{\textit {MS}}$ the set of correct tag sequences. If the concatenation of a question and a passage is $m$ tokens long, then denote a correct tag sequence as $\left(\text{tag}_1,\ldots ,\text{tag}_m\right)$.
We approximate the likelihood of a tag sequence by assuming independence between the sequence's positions, and multiplying the likelihoods of all the correct tags in the sequence. Then, we have
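For illustration only, the following sketch (not the authors' code; shapes and the tag indexing B=0, I=1, O=2 are assumed) computes the log of this marginal likelihood for a given set of correct tag sequences:

import torch

def multi_span_marginal_log_likelihood(log_probs, correct_sequences):
    # log_probs: (m, 3) per-token log-probabilities over (B, I, O) tags.
    # correct_sequences: list of correct tag sequences, each a list of m tag indices.
    # Assuming independence between positions, the likelihood of one sequence is the
    # product of its per-token tag probabilities; the objective marginalizes over
    # all correct sequences.
    seq = torch.tensor(correct_sequences)                  # (num_seqs, m)
    positions = torch.arange(log_probs.size(0))            # (m,)
    token_ll = log_probs[positions, seq]                   # (num_seqs, m)
    per_sequence_ll = token_ll.sum(dim=1)                  # log-likelihood of each sequence
    return torch.logsumexp(per_sequence_ll, dim=0)         # log of the marginal likelihood

# toy usage: 4 tokens, two correct sequences
log_probs = torch.log_softmax(torch.randn(4, 3), dim=-1)
print(multi_span_marginal_log_likelihood(log_probs, [[2, 0, 0, 2], [2, 0, 2, 0]]))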
Model ::: Objective and Training ::: Multi-Span Head Correct Tag Sequences
Since a given multi-span answer is a collection of spans, it is required to obtain its matching tag sequences in order to compute the training objective.
In what we consider to be a correct tag sequence, each answer span will be marked at least once. Due to the weakly supervised setup, we consider all the question/passage spans that match the answer spans as being correct. To illustrate, consider the following simple example. Given the text "X Y Z Z" and the correct multi-span answer ["Y", "Z"], there are three correct tag sequences: $\mathtt {O\,B\,B\,B}$,$\quad $ $\mathtt {O\,B\,B\,O}$,$\quad $ $\mathtt {O\,B\,O\,B}$.
Model ::: Objective and Training ::: Dealing with too Many Correct Tag Sequences
The number of correct tag sequences can be expressed by
where $s$ is the number of spans in the answer and $\#_i$ is the number of times the $i^\text{th}$ span appears in the text.
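Judging from the worked example above, one consistent reading of this expression is $\prod _{i=1}^{s}\left(2^{\#_i}-1\right)$: each answer span independently requires a non-empty choice among its $\#_i$ occurrences. This is a reconstruction rather than a quotation of the paper; the small sketch below merely evaluates it:

from math import prod  # Python 3.8+

def num_correct_tag_sequences(occurrences):
    # occurrences[i] = #_i, the number of times the i-th answer span appears
    # in the text; each span must be marked at least once, independently.
    return prod(2 ** n - 1 for n in occurrences)

# "X Y Z Z" with answer ["Y", "Z"]: Y appears once, Z twice -> 1 * 3 = 3 sequences
print(num_correct_tag_sequences([1, 2]))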
For questions with a reasonable amount of correct tag sequences, we generate all of them before the training starts. However, there is a small group of questions for which the amount of such sequences is between 10,000 and 100,000,000 - too many to generate and train on. In such cases, inspired by BIBREF9, instead of just using an arbitrary subset of the correct sequences, we use beam search to generate the top-k predictions of the training model, and then filter out the incorrect sequences. Compared to using an arbitrary subset, using these sequences causes the optimization to be done with respect to answers more compatible with the model. If no correct tag sequences were predicted within the top-k, we use the tag sequence that has all of the answer spans marked.
Model ::: Tag Sequence Prediction with the Multi-Span Head
Based on the outputs $\textbf {p}_{i}^{{\text{tag}}_{i}}$ we would like to predict the most likely sequence given the $\mathtt {BIO}$ constraints. Denote $\textit {validSeqs}$ as the set of all $\mathtt {BIO}$ sequences of length $m$ that are valid according to the rules specified in Section SECREF5. The $\mathtt {BIO}$ tag sequence to predict is then
We considered the following approaches:
Model ::: Tag Sequence Prediction with the Multi-Span Head ::: Viterbi Decoding
A natural candidate for getting the most likely sequence is Viterbi decoding BIBREF10, with transition probabilities learned by a $\mathtt {BIO}$ constrained Conditional Random Field (CRF) BIBREF11. However, further inspection of our sequence's properties reveals that such a computational effort is probably not necessary, as explained in the following paragraphs.
Model ::: Tag Sequence Prediction with the Multi-Span Head ::: Beam Search
Due to our use of $\mathtt {BIO}$ tags and their constraints, observe that past tag predictions only affect future tag predictions from the last $\mathtt {B}$ prediction and as long as the best tag to predict is $\mathtt {I}$. Considering the frequency and length of the correct spans in the question and the passage, effectively there's no effect of past sequence's positions on future ones, other than a very few positions ahead. Together with the fact that at each prediction step there are no more than 3 tags to consider, it means using beam search to get the most likely sequence is very reasonable and even allows near-optimal results with small beam width values.
Model ::: Tag Sequence Prediction with the Multi-Span Head ::: Greedy Tagging
Notice that greedy tagging does not enforce the $\mathtt {BIO}$ constraints. However, since the multi-span head's training objective adheres to the $\mathtt {BIO}$ constraints via being given the correct tag sequences, we can expect that even with greedy tagging the predictions will mostly adhere to these constraints as well. In case there are violations, their amendment is required post-prediction. Albeit faster, greedy tagging resulted in a small performance hit, as seen in Table TABREF26.
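One simple post-prediction amendment (an illustrative choice; the paper does not specify the exact fix) is to promote any stray $\mathtt {I}$ that follows an $\mathtt {O}$ into a $\mathtt {B}$:

def greedy_tags_with_bio_fix(probs):
    # probs: per-token (p_B, p_I, p_O) triples. Pick the argmax tag greedily,
    # then repair BIO violations by promoting a stray 'I' (one following 'O') to 'B'.
    tags = ["BIO"[max(range(3), key=lambda k: p[k])] for p in probs]
    prev = "O"
    for i, t in enumerate(tags):
        if t == "I" and prev == "O":
            tags[i] = "B"
        prev = tags[i]
    return tags

print(greedy_tags_with_bio_fix([(0.1, 0.7, 0.2), (0.2, 0.6, 0.2), (0.1, 0.1, 0.8)]))
# greedy tags I I O are repaired to B I O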
Preprocessing
We tokenize the passage, question, and all answer texts using the BERT uncased wordpiece tokenizer from huggingface. The tokenization resulting from each $(x^P,x^Q)$ input pair is truncated at 512 tokens so it can be fed to BERT as an input. However, before tokenizing the dataset texts, we perform additional preprocessing as listed below.
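For reference, the basic tokenization and truncation step might look roughly as follows with a recent version of huggingface's transformers library (a sketch, not necessarily the authors' exact code or library version):

from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
question = "How many field goals were kicked in the first half?"  # made-up example
passage = "..."                                                   # the paragraph text
# question and passage are packed into a single input and truncated to BERT's limit
input_ids = tokenizer.encode(question, passage, truncation=True, max_length=512)
tokens = tokenizer.convert_ids_to_tokens(input_ids)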
Preprocessing ::: Simple Preprocessing ::: Improved Textual Parsing
The raw dataset included almost a thousand HTML entities that did not get parsed properly, e.g. " " instead of a simple space. In addition, we fixed some quirks that were introduced by the original Wikipedia parsing method. For example, when encountering a reference to an external source that included a specific page from that reference, the original parser ended up introducing a redundant ":<PAGE NUMBER>" into the parsed text.
Preprocessing ::: Simple Preprocessing ::: Improved Handling of Numbers
Although we previously stated that we aren't focusing on improving arithmetic performance, while analyzing the training process we encountered two arithmetic-related issues that could be resolved rather quickly: a precision issue and a number extraction issue. Regarding precision, we noticed that while either generating expressions for the arithmetic head, or using the arithmetic head to predict a numeric answer, the value resulting from an arithmetic operation would not always yield the exact result due to floating point precision limitations. For example, $5.8 + 6.6 = 12.3999...$ instead of $12.4$. This issue has caused a significant performance hit of about 1.5 points for both F1 and EM and was fixed by simply rounding numbers to 5 decimal places, assuming that no answer requires a greater precision. Regarding number extraction, we noticed that some numeric entities, required in order to produce a correct answer, weren't being extracted from the passage. Examples include ordinals (121st, 189th) and some "per-" units (1,580.7/km2, 1050.95/month).
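The precision fix amounts to standard rounding of floating point results, e.g. in Python:

>>> 5.8 + 6.6          # floating point: not exactly 12.4
12.399999999999999
>>> round(5.8 + 6.6, 5)
12.4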
Preprocessing ::: Using NER for Cleaning Up Multi-Span Questions
The training dataset contains multi-span questions with answers that are clearly incorrect, with examples shown in Table TABREF22. In order to mitigate this, we applied an answer-cleaning technique using a pretrained Named Entity Recognition (NER) model BIBREF12 in the following manner: (1) Pre-define question prefixes whose answer spans are expected to contain only a specific entity type and filter the matching questions. (2) For a given answer of a filtered question, remove any span that does not contain at least one token of the expected type, where the types are determined by applying the NER model on the passage. For example, if a question starts with "who scored", we expect that any valid span will include a person entity ($\mathtt {PER}$). By applying such rules, we discovered that at least 3% of the multi-span questions in the training dataset included incorrect spans. As our analysis of prefixes wasn't exhaustive, we believe that this method could yield further gains. Table TABREF22 shows a few of our cleaning method results, where we perfectly clean the first two questions, and partially clean a third question.
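A simplified sketch of this filtering rule (illustrative only; the NER interface and example values are stand-ins for the actual model used):

def clean_multi_span_answer(answer_spans, passage_entities, expected_type):
    # answer_spans: list of answer span strings.
    # passage_entities: dict mapping a passage token to its predicted NER type
    # (a simplified stand-in for the real NER output).
    # Keep only spans containing at least one token of the expected type.
    def has_expected_entity(span):
        return any(passage_entities.get(tok) == expected_type for tok in span.split())
    return [s for s in answer_spans if has_expected_entity(s)]

# e.g. for a "who scored ..." question we expect a PER entity in every span
entities = {"Smith": "PER", "Jones": "PER", "touchdown": None}
print(clean_multi_span_answer(["Smith", "the touchdown", "Jones"], entities, "PER"))
# -> ["Smith", "Jones"]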
Training
The starting point for our implementation was the NABERT+ model, which in turn was based on allenai's NAQANET. Our implementation can be found on GitHub. All three models utilize the allennlp framework. The pretrained BERT models were supplied by huggingface. For our base model we used bert-base-uncased. For our large models we used the standard bert-large-uncased-whole-word-masking and the squad fine-tuned bert-large-uncased-whole-word-masking-finetuned-squad.
Due to limited computational resources, we did not perform any hyperparameter searching. We preferred to focus our efforts on the ablation studies, in the hope of gaining further insights into the effect of the components that we ourselves introduced. For ease of performance comparison, we followed NABERT+'s training settings: we used the BERT Adam optimizer from huggingface with default settings and a learning rate of $1e^{-5}$. The only difference was that we used a batch size of 12. We trained our base model for 20 epochs. For the large models we used a batch size of 3 with a learning rate of $5e^{-6}$ and trained for 5 epochs, except for the model without the single-span heads that was trained with a batch size of 2 for 7 epochs. F1 was used as our validation metric. All models were trained on a single GPU with 12-16GB of memory.
Results and Discussion ::: Performance on DROP's Development Set
Table TABREF24 shows the results on DROP's development set. Compared to our base models, our large models exhibit a substantial improvement across all metrics.
Results and Discussion ::: Performance on DROP's Development Set ::: Comparison to the NABERT+ Baseline
We can see that our base model surpasses the NABERT+ baseline in every metric. The major improvement in multi-span performance was expected, as our multi-span head was introduced specifically to tackle this type of questions. For the other types, most of the improvement came from better preprocessing. A more detailed discussion could be found in Section (SECREF36).
Results and Discussion ::: Performance on DROP's Development Set ::: Comparison to MTMSN
Notice that different BERTlarge models were used, so the comparison is less direct. Overall, our large models exhibit similar results to those of MTMSNlarge.
For multi-span questions we achieve a significantly better performance. While a breakdown of metrics was only available for MTMSNlarge, notice that even when comparing these metrics to our base model, we still achieve a 12.2 absolute improvement in EM, and a 2.3 improvement in F1. All that, while keeping in mind we compare a base model to a large model (for reference, note the 8 point improvement between MTMSNbase and MTMSNlarge in both EM and F1). Our best model, large-squad, exhibits a huge improvement of 29.7 in EM and 15.1 in F1 compared to MTMSNlarge.
When comparing single-span performance, our best model exhibits slightly better results, but it should be noted that it retains the single-span heads from NABERT+, while in MTMSN they have one head to predict both single-span and multi-span answers. For a fairer comparison, we trained our model with the single-span heads removed, where our multi-span head remained the only head aimed for handling span questions. With this no-single-span-heads setting, while our multi-span performance even improved a bit, our single-span performance suffered a slight drop, ending up trailing by 0.8 in EM and 0.6 in F1 compared to MTMSN. Therefore, it could prove beneficial to try and analyze the reasons behind each model's (ours and MTMSN) relative advantages, and perhaps try to combine them into a more holistic approach of tackling span questions.
Results and Discussion ::: Performance on DROP's Test Set
Table TABREF25 shows the results on DROP's test set, with our model being the best overall as of the time of writing, and not just on multi-span questions.
Results and Discussion ::: Ablation Studies
In order to analyze the effect of each of our changes, we conduct ablation studies on the development set, depicted in Table TABREF26.
Not using the simple preprocessing from Section SECREF17 resulted in a 2.5 point decrease in both EM and F1. The numeric questions were the most affected, with their performance dropping by 3.5 points. Given that number questions make up about 61% of the dataset, we can deduce that our improved number handling is responsible for about a 2.1 point gain, while the rest could be attributed to the improved Wikipedia parsing.
Although NER span cleaning (Section SECREF23) affected only 3% of the multi-span questions, it provided a solid improvement of 5.4 EM in multi-span questions and 1.5 EM in single-span questions. The single-span improvement is probably due to the combination of better multi-span head learning as a result of fixing multi-span questions and the fact that the multi-span head can answer single-span questions as well.
Not using the single-span heads results in a slight drop in multi-span performance, and a noticeable drop in single-span performance. However when performing the same comparison between our large models (see Table TABREF24), this performance gap becomes significantly smaller.
As expected, not using the multi-span head causes the multi-span performance to plummet. Note that for this ablation test the single-span heads were permitted to train on multi-span questions.
Compared to using greedy decoding in the prediction of multi-span questions, using beam search results in a small improvement. We used a beam width of 5 and didn't perform extensive tuning of this value.
Conclusion
In this work, we introduced a new approach for tackling multi-span questions in reading comprehension datasets. This approach is based on individually tagging each token with a categorical tag, relying on the tokens' contextual representation to bridge the information gap resulting from the tokens being tagged individually.
First, we show that integrating this new approach into an existing model, NABERT+, does not hinder performance on other question types, while substantially improving the results on multi-span questions. Later, we compare our results to the current state-of-the-art on multi-span questions. We show that our model has a clear advantage in handling multi-span questions, with a 29.7 absolute improvement in EM, and a 15.1 absolute improvement in F1. Furthermore, we show that our model slightly eclipses the current state-of-the-art results on the entire DROP dataset. Finally, we present some ablation studies, analyzing the benefit gained from individual components of our model.
We believe that combining our tag-based approach for handling multi-span questions with current successful techniques for handling single-span questions could prove beneficial in finding better, more holistic ways, of tackling span questions in general.
Future Work ::: A Different Loss for Multi-span Questions
Currently, for each individual span, we optimize the average likelihood over all its possible tag sequences (see Section SECREF9). A different approach could be to take into account only the most likely tag sequence rather than every possible one. This could provide the model more flexibility during training and the ability to focus on the more "correct" tag sequences.
Future Work ::: Explore Utilization of Non-First Wordpiece Sub-Tokens
As mentioned in Section SECREF5, we only considered the representation of the first wordpiece sub-token in our model. It would be interesting to see how different approaches to utilize the other sub-tokens' representations in the tagging task affect the results. | The proposed model achieves EM 77.63 and F1 80.73 on the test and EM 76.95 and F1 80.25 on the dev
a9def7958eac7b9a780403d4f136927f756bab83 | a9def7958eac7b9a780403d4f136927f756bab83_0 | Q: What is the previous model that attempted to tackle multi-span questions as a part of its design?
Text: Introduction
The task of reading comprehension, where systems must understand a single passage of text well enough to answer arbitrary questions about it, has seen significant progress in the last few years. With models reaching human performance on the popular SQuAD dataset BIBREF0, and with many of the most popular reading comprehension datasets having been solved BIBREF1, BIBREF2, a new dataset, DROP BIBREF3, was recently published.
DROP aimed to present questions that require more complex reasoning to answer than those of previous datasets, in the hope of pushing the field towards a more comprehensive analysis of paragraphs of text. In addition to questions whose answers are a single continuous span from the paragraph text (questions of a type already included in SQuAD), DROP introduced additional types of questions. Among these new types were questions that require simple numerical reasoning, i.e. questions whose answer is the result of a simple arithmetic expression containing numbers from the passage, and questions whose answers consist of several spans taken from the paragraph or the question itself, which we will denote as "multi-span questions".
Of all the existing models that tried to tackle DROP, only one model BIBREF4 directly targeted multi-span questions in a manner that wasn't just a by-product of the model's overall performance. In this paper, we propose a new method for tackling multi-span questions. Our method takes a different path from that of the aforementioned model. It does not try to generalize the existing approach for tackling single-span questions, but instead attempts to attack this issue with a new, tag-based, approach.
Related Work
Numerically-aware QANet (NAQANET) BIBREF3 was the model released with DROP. It uses QANET BIBREF5, at the time the best-performing published model on SQuAD 1.1 BIBREF0 (without data augmentation or pretraining), as the encoder. On top of QANET, NAQANet adds four different output layers, which we refer to as "heads". Each of these heads is designed to tackle a specific question type from DROP, where these types were identified by DROP's authors post-creation of the dataset. These four heads are (1) Passage span head, designed for producing answers that consist of a single span from the passage. This head deals with the type of questions already introduced in SQuAD. (2) Question span head, for answers that consist of a single span from the question. (3) Arithmetic head, for answers that require adding or subtracting numbers from the passage. (4) Count head, for answers that require counting and sorting entities from the text. In addition, to determine which head should be used to predict an answer, a 4-way categorical variable, as per the number of heads, is trained. We denote this categorical variable as the "head predictor".
Numerically-aware BERT (NABERT+) BIBREF6 introduced two main improvements over NAQANET. The first was to replace the QANET encoder with BERT. This change alone resulted in an absolute improvement of more than eight points in both EM and F1 metrics. The second improvement was to the arithmetic head, consisting of the addition of "standard numbers" and "templates". Standard numbers were predefined numbers which were added as additional inputs to the arithmetic head, regardless of their occurrence in the passage. Templates were an attempt to enrich the head's arithmetic capabilities, by adding the ability of doing simple multiplications and divisions between up to three numbers.
MTMSN BIBREF4 is the first, and only model so far, that specifically tried to tackle the multi-span questions of DROP. Their approach consisted of two parts. The first was to train a dedicated categorical variable to predict the number of spans to extract. The second was to generalize the single-span head method of extracting a span, by utilizing the non-maximum suppression (NMS) algorithm BIBREF7 to find the most probable set of non-overlapping spans. The number of spans to extract was determined by the aforementioned categorical variable.
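As a rough illustration of this idea (not MTMSN's implementation), a greedy non-maximum suppression over scored span candidates can be sketched as:

def select_spans_nms(candidates, num_spans):
    # candidates: list of (start, end, score) tuples, end inclusive.
    # Greedily keep the highest-scoring spans that do not overlap an
    # already-kept span, until num_spans spans are selected.
    selected = []
    for start, end, score in sorted(candidates, key=lambda c: -c[2]):
        overlaps = any(not (end < s or start > e) for s, e, _ in selected)
        if not overlaps:
            selected.append((start, end, score))
        if len(selected) == num_spans:
            break
    return selected

print(select_spans_nms([(0, 2, 0.9), (1, 3, 0.8), (5, 6, 0.7)], num_spans=2))
# -> [(0, 2, 0.9), (5, 6, 0.7)]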
Additionally, MTMSN introduced two other new, non-span-related components. The first was a new "negation" head, meant to deal with questions deemed to require logical negation (e.g. "How many percent were not German?"). The second was an improvement to the arithmetic head, using beam search to re-rank candidate arithmetic expressions.
Model
Problem statement. Given a pair $(x^P,x^Q)$ of a passage and a question respectively, both comprised of tokens from a vocabulary $V$, we wish to predict an answer $y$. The answer could be either a collection of spans from the input, or a number, supposedly arrived at by performing arithmetic reasoning on the input. We want to estimate $p(y;x^P,x^Q)$.
The basic structure of our model is shared with NABERT+, which in turn is shared with that of NAQANET (the model initially released with DROP). Consequently, meticulously presenting every part of our model would very likely prove redundant. As a reasonable compromise, we will introduce the shared parts with more brevity, and will go into greater detail when presenting our contributions.
Model ::: NABERT+
Assume there are $K$ answer heads in the model and their weights denoted by $\theta $. For each pair $(x^P,x^Q)$ we assume a latent categorical random variable $z\in \left\lbrace 1,\ldots \,K\right\rbrace $ such that the probability of an answer $y$ is
where each component of the mixture corresponds to an output head such that
Note that a head is not always capable of producing the correct answer $y_\text{gold}$ for each type of question, in which case $p\left(y_\text{gold} \vert z ; x^{P},x^{Q},\theta \right)=0$. For example, the arithmetic head, whose output is always a single number, cannot possibly produce a correct answer for a multi-span question.
For a multi-span question with an answer composed of $l$ spans, denote $y_{{\text{gold}}_{\textit {MS}}}=\left\lbrace y_{{\text{gold}}_1}, \ldots , y_{{\text{gold}}_l} \right\rbrace $. NAQANET and NABERT+ had no head capable of outputting correct answers for multi-span questions. Instead of ignoring them in training, both models settled on using "semi-correct answers": each $y_\text{gold} \in y_{{\text{gold}}_{\textit {MS}}}$ was considered to be a correct answer (only in training). By deliberately encouraging the model to provide partial answers for multi-span questions, they were able to improve the corresponding F1 score. As our model does have a head with the ability to answer multi-span questions correctly, we didn't provide the aforementioned semi-correct answers to any of the other heads. Otherwise, we would have skewed the predictions of the head predictor and effectively mislead the other heads to believe they could predict correct answers for multi-span questions.
Model ::: NABERT+ ::: Heads Shared with NABERT+
Before going over the answer heads, two additional components should be introduced - the summary vectors, and the head predictor.
Summary vectors. The summary vectors are two fixed-size learned representations of the question and the passage, which serve as an input for some of the heads. To create the summary vectors, first define $\mathbf {T}$ as BERT's output on a $(x^{P},x^{Q})$ input. Then, let $\mathbf {T}^{P}$ and $\mathbf {T}^{Q}$ be subsequences of T that correspond to $x^P$ and $x^Q$ respectively. Finally, let us also define Bdim as the dimension of the tokens in $\mathbf {T}$ (e.g 768 for BERTbase), and have $\mathbf {W}^P \in \mathbb {R}^\texttt {Bdim}$ and $\mathbf {W}^Q \in \mathbb {R}^\texttt {Bdim}$ as learned linear layers. Then, the summary vectors are computed as:
Head predictor. A learned categorical variable with its number of outcomes equal to the number of answer heads in the model. Used to assign probabilities for using each of the heads in prediction.
where FFN is a two-layer feed-forward network with RELU activation.
Passage span. Define $\textbf {W}^S \in \mathbb {R}^\texttt {Bdim}$ and $\textbf {W}^E \in \mathbb {R}^\texttt {Bdim}$ as learned vectors. Then the probabilities of the start and end positions of a passage span are computed as
Question span. The probabilities of the start and end positions of a question span are computed as
where $\textbf {e}^{|\textbf {T}^Q|}\otimes \textbf {h}^P$ repeats $\textbf {h}^P$ for each component of $\textbf {T}^Q$.
Count. Counting is treated as a multi-class prediction problem with the numbers 0-9 as possible labels. The label probabilities are computed as
Arithmetic. As in NAQANET, this head obtains all of the numbers from the passage, and assigns a plus, minus or zero ("ignore") for each number. As BERT uses wordpiece tokenization, some numbers are broken up into multiple tokens. Following NABERT+, we chose to represent each number by its first wordpiece. That is, if $\textbf {N}^i$ is the set of tokens corresponding to the $i^\text{th}$ number, we define a number representation as $\textbf {h}_i^N = \textbf {N}^i_0$.
The selection of the sign for each number is a multi-class prediction problem with options $\lbrace 0, +, -\rbrace $, and the probabilities for the signs are given by
As for NABERT+'s two additional arithmetic features, we decided on using only the standard numbers, as the benefits from using templates were deemed inconclusive. Note that unlike the single-span heads, which are related to our introduction of a multi-span head, the arithmetic and count heads were not intended to play a significant role in our work. We didn't aim to improve results on these types of questions, perhaps only as a by-product of improving the general reading comprehension ability of our model.
Model ::: Multi-Span Head
A subset of questions that wasn't directly dealt with by the base models (NAQANET, NABERT+) is questions that have an answer which is composed of multiple non-continuous spans. We suggest a head that will be able to deal with both single-span and multi-span questions.
To model an answer which is a collection of spans, the multi-span head uses the $\mathtt {BIO}$ tagging format BIBREF8: $\mathtt {B}$ is used to mark the beginning of a span, $\mathtt {I}$ is used to mark the inside of a span and $\mathtt {O}$ is used to mark tokens not included in a span. In this way, we get a sequence of chunks that can be decoded to a final answer - a collection of spans.
As words are broken up by the wordpiece tokenization for BERT, we decided on only considering the representation of the first sub-token of the word to tag, following the NER task from BIBREF2.
For the $i$-th token of an input, the probability to be assigned a $\text{tag} \in \left\lbrace {\mathtt {B},\mathtt {I},\mathtt {O}} \right\rbrace $ is computed as
Model ::: Objective and Training
To train our model, we try to maximize the log-likelihood of the correct answer $p(y_\text{gold};x^{P},x^{Q},\theta )$ as defined in Section SECREF2. If no head is capable of predicting the gold answer, the sample is skipped.
We enumerate over every answer head $z\in \left\lbrace \textit {PS}, \textit {QS}, \textit {C}, \textit {A}, \textit {MS}\right\rbrace $ (Passage Span, Question Span, Count, Arithmetic, Multi-Span) to compute each of the objective's addends:
Note that we are in a weakly supervised setup: the answer type is not given, and neither is the correct arithmetic expression required for deriving some answers. Therefore, it is possible that $y_\text{gold}$ could be derived by more than one way, even from the same head, with no indication of which is the "correct" one.
We use the weakly supervised training method used in NABERT+ and NAQANET. Based on BIBREF9, for each head we find all the executions that evaluate to the correct answer and maximize their marginal likelihood .
For a datapoint $\left(y, x^{P}, x^{Q} \right)$ let $\chi ^z$ be the set of all possible ways to get $y$ for answer head $z\in \left\lbrace \textit {PS}, \textit {QS}, \textit {C}, \textit {A}, \textit {MS}\right\rbrace $. Then, as in NABERT+, we have
Finally, for the arithmetic head, let $\mu $ be the set of all the standard numbers and the numbers from the passage, and let $\mathbf {\chi }^{\textit {A}}$ be the set of correct sign assignments to these numbers. Then, we have
Model ::: Objective and Training ::: Multi-Span Head Training Objective
Denote by ${\chi }^{\textit {MS}}$ the set of correct tag sequences. If the concatenation of a question and a passage is $m$ tokens long, then denote a correct tag sequence as $\left(\text{tag}_1,\ldots ,\text{tag}_m\right)$.
We approximate the likelihood of a tag sequence by assuming independence between the sequence's positions, and multiplying the likelihoods of all the correct tags in the sequence. Then, we have
Model ::: Objective and Training ::: Multi-Span Head Correct Tag Sequences
Since a given multi-span answer is a collection of spans, it is required to obtain its matching tag sequences in order to compute the training objective.
In what we consider to be a correct tag sequence, each answer span will be marked at least once. Due to the weakly supervised setup, we consider all the question/passage spans that match the answer spans as being correct. To illustrate, consider the following simple example. Given the text "X Y Z Z" and the correct multi-span answer ["Y", "Z"], there are three correct tag sequences: $\mathtt {O\,B\,B\,B}$,$\quad $ $\mathtt {O\,B\,B\,O}$,$\quad $ $\mathtt {O\,B\,O\,B}$.
Model ::: Objective and Training ::: Dealing with too Many Correct Tag Sequences
The number of correct tag sequences can be expressed by
where $s$ is the number of spans in the answer and $\#_i$ is the number of times the $i^\text{th}$ span appears in the text.
For questions with a reasonable amount of correct tag sequences, we generate all of them before the training starts. However, there is a small group of questions for which the amount of such sequences is between 10,000 and 100,000,000 - too many to generate and train on. In such cases, inspired by BIBREF9, instead of just using an arbitrary subset of the correct sequences, we use beam search to generate the top-k predictions of the training model, and then filter out the incorrect sequences. Compared to using an arbitrary subset, using these sequences causes the optimization to be done with respect to answers more compatible with the model. If no correct tag sequences were predicted within the top-k, we use the tag sequence that has all of the answer spans marked.
Model ::: Tag Sequence Prediction with the Multi-Span Head
Based on the outputs $\textbf {p}_{i}^{{\text{tag}}_{i}}$ we would like to predict the most likely sequence given the $\mathtt {BIO}$ constraints. Denote $\textit {validSeqs}$ as the set of all $\mathtt {BIO}$ sequences of length $m$ that are valid according to the rules specified in Section SECREF5. The $\mathtt {BIO}$ tag sequence to predict is then
We considered the following approaches:
Model ::: Tag Sequence Prediction with the Multi-Span Head ::: Viterbi Decoding
A natural candidate for getting the most likely sequence is Viterbi decoding BIBREF10, with transition probabilities learned by a $\mathtt {BIO}$ constrained Conditional Random Field (CRF) BIBREF11. However, further inspection of our sequence's properties reveals that such a computational effort is probably not necessary, as explained in the following paragraphs.
Model ::: Tag Sequence Prediction with the Multi-Span Head ::: Beam Search
Due to our use of $\mathtt {BIO}$ tags and their constraints, observe that past tag predictions only affect future tag predictions from the last $\mathtt {B}$ prediction and as long as the best tag to predict is $\mathtt {I}$. Considering the frequency and length of the correct spans in the question and the passage, effectively there's no effect of past sequence's positions on future ones, other than a very few positions ahead. Together with the fact that at each prediction step there are no more than 3 tags to consider, it means using beam search to get the most likely sequence is very reasonable and even allows near-optimal results with small beam width values.
Model ::: Tag Sequence Prediction with the Multi-Span Head ::: Greedy Tagging
Notice that greedy tagging does not enforce the $\mathtt {BIO}$ constraints. However, since the multi-span head's training objective adheres to the $\mathtt {BIO}$ constraints via being given the correct tag sequences, we can expect that even with greedy tagging the predictions will mostly adhere to these constraints as well. In case there are violations, their amendment is required post-prediction. Albeit faster, greedy tagging resulted in a small performance hit, as seen in Table TABREF26.
Preprocessing
We tokenize the passage, question, and all answer texts using the BERT uncased wordpiece tokenizer from huggingface. The tokenization resulting from each $(x^P,x^Q)$ input pair is truncated at 512 tokens so it can be fed to BERT as an input. However, before tokenizing the dataset texts, we perform additional preprocessing as listed below.
Preprocessing ::: Simple Preprocessing ::: Improved Textual Parsing
The raw dataset included almost a thousand HTML entities that did not get parsed properly, e.g. " " instead of a simple space. In addition, we fixed some quirks that were introduced by the original Wikipedia parsing method. For example, when encountering a reference to an external source that included a specific page from that reference, the original parser ended up introducing a redundant ":<PAGE NUMBER>" into the parsed text.
Preprocessing ::: Simple Preprocessing ::: Improved Handling of Numbers
Although we previously stated that we aren't focusing on improving arithmetic performance, while analyzing the training process we encountered two arithmetic-related issues that could be resolved rather quickly: a precision issue and a number extraction issue. Regarding precision, we noticed that while either generating expressions for the arithmetic head, or using the arithmetic head to predict a numeric answer, the value resulting from an arithmetic operation would not always yield the exact result due to floating point precision limitations. For example, $5.8 + 6.6 = 12.3999...$ instead of $12.4$. This issue has caused a significant performance hit of about 1.5 points for both F1 and EM and was fixed by simply rounding numbers to 5 decimal places, assuming that no answer requires a greater precision. Regarding number extraction, we noticed that some numeric entities, required in order to produce a correct answer, weren't being extracted from the passage. Examples include ordinals (121st, 189th) and some "per-" units (1,580.7/km2, 1050.95/month).
Preprocessing ::: Using NER for Cleaning Up Multi-Span Questions
The training dataset contains multi-span questions with answers that are clearly incorrect, with examples shown in Table TABREF22. In order to mitigate this, we applied an answer-cleaning technique using a pretrained Named Entity Recognition (NER) model BIBREF12 in the following manner: (1) Pre-define question prefixes whose answer spans are expected to contain only a specific entity type and filter the matching questions. (2) For a given answer of a filtered question, remove any span that does not contain at least one token of the expected type, where the types are determined by applying the NER model on the passage. For example, if a question starts with "who scored", we expect that any valid span will include a person entity ($\mathtt {PER}$). By applying such rules, we discovered that at least 3% of the multi-span questions in the training dataset included incorrect spans. As our analysis of prefixes wasn't exhaustive, we believe that this method could yield further gains. Table TABREF22 shows a few of our cleaning method results, where we perfectly clean the first two questions, and partially clean a third question.
Training
The starting point for our implementation was the NABERT+ model, which in turn was based on allenai's NAQANET. Our implementation can be found on GitHub. All three models utilize the allennlp framework. The pretrained BERT models were supplied by huggingface. For our base model we used bert-base-uncased. For our large models we used the standard bert-large-uncased-whole-word-masking and the squad fine-tuned bert-large-uncased-whole-word-masking-finetuned-squad.
Due to limited computational resources, we did not perform any hyperparameter searching. We preferred to focus our efforts on the ablation studies, in the hope of gaining further insights into the effect of the components that we ourselves introduced. For ease of performance comparison, we followed NABERT+'s training settings: we used the BERT Adam optimizer from huggingface with default settings and a learning rate of $1e^{-5}$. The only difference was that we used a batch size of 12. We trained our base model for 20 epochs. For the large models we used a batch size of 3 with a learning rate of $5e^{-6}$ and trained for 5 epochs, except for the model without the single-span heads that was trained with a batch size of 2 for 7 epochs. F1 was used as our validation metric. All models were trained on a single GPU with 12-16GB of memory.
Results and Discussion ::: Performance on DROP's Development Set
Table TABREF24 shows the results on DROP's development set. Compared to our base models, our large models exhibit a substantial improvement across all metrics.
Results and Discussion ::: Performance on DROP's Development Set ::: Comparison to the NABERT+ Baseline
We can see that our base model surpasses the NABERT+ baseline in every metric. The major improvement in multi-span performance was expected, as our multi-span head was introduced specifically to tackle this type of questions. For the other types, most of the improvement came from better preprocessing. A more detailed discussion could be found in Section (SECREF36).
Results and Discussion ::: Performance on DROP's Development Set ::: Comparison to MTMSN
Notice that different BERTlarge models were used, so the comparison is less direct. Overall, our large models exhibit similar results to those of MTMSNlarge.
For multi-span questions we achieve a significantly better performance. While a breakdown of metrics was only available for MTMSNlarge, notice that even when comparing these metrics to our base model, we still achieve a 12.2 absolute improvement in EM, and a 2.3 improvement in F1. All that, while keeping in mind we compare a base model to a large model (for reference, note the 8 point improvement between MTMSNbase and MTMSNlarge in both EM and F1). Our best model, large-squad, exhibits a huge improvement of 29.7 in EM and 15.1 in F1 compared to MTMSNlarge.
When comparing single-span performance, our best model exhibits slightly better results, but it should be noted that it retains the single-span heads from NABERT+, while in MTMSN they have one head to predict both single-span and multi-span answers. For a fairer comparison, we trained our model with the single-span heads removed, where our multi-span head remained the only head aimed for handling span questions. With this no-single-span-heads setting, while our multi-span performance even improved a bit, our single-span performance suffered a slight drop, ending up trailing by 0.8 in EM and 0.6 in F1 compared to MTMSN. Therefore, it could prove beneficial to try and analyze the reasons behind each model's (ours and MTMSN) relative advantages, and perhaps try to combine them into a more holistic approach of tackling span questions.
Results and Discussion ::: Performance on DROP's Test Set
Table TABREF25 shows the results on DROP's test set, with our model being the best overall as of the time of writing, and not just on multi-span questions.
Results and Discussion ::: Ablation Studies
In order to analyze the effect of each of our changes, we conduct ablation studies on the development set, depicted in Table TABREF26.
Not using the simple preprocessing from Section SECREF17 resulted in a 2.5 point decrease in both EM and F1. The numeric questions were the most affected, with their performance dropping by 3.5 points. Given that number questions make up about 61% of the dataset, we can deduce that our improved number handling is responsible for about a 2.1 point gain, while the rest could be attributed to the improved Wikipedia parsing.
Although NER span cleaning (Section SECREF23) affected only 3% of the multi-span questions, it provided a solid improvement of 5.4 EM in multi-span questions and 1.5 EM in single-span questions. The single-span improvement is probably due to the combination of better multi-span head learning as a result of fixing multi-span questions and the fact that the multi-span head can answer single-span questions as well.
Not using the single-span heads results in a slight drop in multi-span performance, and a noticeable drop in single-span performance. However when performing the same comparison between our large models (see Table TABREF24), this performance gap becomes significantly smaller.
As expected, not using the multi-span head causes the multi-span performance to plummet. Note that for this ablation test the single-span heads were permitted to train on multi-span questions.
Compared to using greedy decoding in the prediction of multi-span questions, using beam search results in a small improvement. We used a beam width of 5 and didn't perform extensive tuning of this value.
Conclusion
In this work, we introduced a new approach for tackling multi-span questions in reading comprehension datasets. This approach is based on individually tagging each token with a categorical tag, relying on the tokens' contextual representation to bridge the information gap resulting from the tokens being tagged individually.
First, we show that integrating this new approach into an existing model, NABERT+, does not hinder performance on other question types, while substantially improving the results on multi-span questions. Later, we compare our results to the current state-of-the-art on multi-span questions. We show that our model has a clear advantage in handling multi-span questions, with a 29.7 absolute improvement in EM, and a 15.1 absolute improvement in F1. Furthermore, we show that our model slightly eclipses the current state-of-the-art results on the entire DROP dataset. Finally, we present some ablation studies, analyzing the benefit gained from individual components of our model.
We believe that combining our tag-based approach for handling multi-span questions with current successful techniques for handling single-span questions could prove beneficial in finding better, more holistic ways, of tackling span questions in general.
Future Work ::: A Different Loss for Multi-span Questions
Currently, for each individual span, we optimize the average likelihood over all its possible tag sequences (see Section SECREF9). A different approach could be to take into account only the most likely tag sequence rather than every possible one. This could provide the model more flexibility during training and the ability to focus on the more "correct" tag sequences.
Future Work ::: Explore Utilization of Non-First Wordpiece Sub-Tokens
As mentioned in Section SECREF5, we only considered the representation of the first wordpiece sub-token in our model. It would be interesting to see how different approaches to utilize the other sub-tokens' representations in the tagging task affect the results. | MTMSN BIBREF4 |
547be35cff38028648d199ad39fb48236cfb99ee | 547be35cff38028648d199ad39fb48236cfb99ee_0 | Q: How much more data does the model trained using XR loss have access to, compared to the fully supervised model?
Text: Introduction
Data annotation is a key bottleneck in many data driven algorithms. Specifically, deep learning models, which became a prominent tool in many data driven tasks in recent years, require large datasets to work well. However, many tasks require manual annotations which are relatively hard to obtain at scale. An attractive alternative is lightly supervised learning BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , in which the objective function is supplemented by a set of domain-specific soft-constraints over the model's predictions on unlabeled data. For example, in label regularization BIBREF0 the model is trained to fit the true label proportions of an unlabeled dataset. Label regularization is special case of expectation regularization (XR) BIBREF0 , in which the model is trained to fit the conditional probabilities of labels given features.
In this work we consider the case of correlated tasks, in the sense that knowing the labels for task A provides information on the expected label composition of task B. We demonstrate the approach using sentence-level and aspect-level sentiment analysis, which we use as a running example: knowing that a sentence has positive sentiment label (task A), we can expect that most aspects within this sentence (task B) will also have positive label. While this expectation may be noisy on the individual example level, it holds well in aggregate: given a set of positively-labeled sentences, we can robustly estimate the proportion of positively-labeled aspects within this set. For example, in a random set of positive sentences, we expect to find 90% positive aspects, while in a set of negative sentences, we expect to find 70% negative aspects. These proportions can be easily either guessed or estimated from a small set.
We propose a novel application of the XR framework for transfer learning in this setup. We present an algorithm (Sec SECREF12 ) that, given a corpus labeled for task A (sentence-level sentiment), learns a classifier for performing task B (aspect-level sentiment) instead, without a direct supervision signal for task B. We note that the label information for task A is only used at training time. Furthermore, due to the stochastic nature of the estimation, the task A labels need not be fully accurate, allowing us to make use of noisy predictions which are assigned by an automatic classifier (Sections SECREF12 and SECREF4 ). In other words, given a medium-sized sentiment corpus with sentence-level labels, and a large collection of un-annotated text from the same distribution, we can train an accurate aspect-level sentiment classifier.
The XR loss allows us to use task A labels for training task B predictors. This ability seamlessly integrates into other semi-supervised schemes: we can use the XR loss on top of a pre-trained model to fine-tune the pre-trained representation to the target task, and we can also take the model trained using XR loss and plentiful data and fine-tune it to the target task using the available small-scale annotated data. In Section SECREF56 we explore these options and show that our XR framework improves the results also when applied on top of a pre-trained Bert-based model BIBREF9 .
Finally, to make the XR framework applicable to large-scale deep-learning setups, we propose a stochastic batched approximation procedure (Section SECREF19 ). Source code is available at https://github.com/MatanBN/XRTransfer.
Lightly Supervised Learning
An effective way to supplement small annotated datasets is to use lightly supervised learning, in which the objective function is supplemented by a set of domain-specific soft-constraints over the model's predictions on unlabeled data. Previous work in lightly-supervised learning focused on training classifiers by using prior knowledge of label proportions BIBREF2 , BIBREF3 , BIBREF10 , BIBREF0 , BIBREF11 , BIBREF12 , BIBREF7 , BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 , BIBREF8 or prior knowledge of features label associations BIBREF1 , BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 . In the context of NLP, BIBREF17 suggested to use distributional similarities of words to train sequence models for part-of-speech tagging and a classified ads information extraction task. BIBREF19 used background lexical information in terms of word-class associations to train a sentiment classifier. BIBREF21 , BIBREF22 suggested to exploit the bilingual correlations between a resource rich language and a resource poor language to train a classifier for the resource poor language in a lightly supervised manner.
Expectation Regularization (XR)
Expectation Regularization (XR) BIBREF0 is a lightly supervised learning method, in which the model is trained to fit the conditional probabilities of labels given features. In the context of NLP, XR was used by BIBREF20 to train twitter-user attribute prediction using hundreds of noisy distributional expectations based on census demographics. Here, we suggest using XR to train a target task (aspect-level sentiment) based on the output of a related source-task classifier (sentence-level sentiment).
The main idea of XR is moving from a fully supervised situation in which each data-point INLINEFORM0 has an associated label INLINEFORM1 , to a setup in which sets of data points INLINEFORM2 are associated with corresponding label proportions INLINEFORM3 over that set.
Formally, let INLINEFORM0 be a set of data points, INLINEFORM1 be a set of INLINEFORM2 class labels, INLINEFORM3 be a set of sets where INLINEFORM4 for every INLINEFORM5 , and let INLINEFORM6 be the label distribution of set INLINEFORM7 . For example, INLINEFORM8 would indicate that 70% of data points in INLINEFORM9 are expected to have class 0, 20% are expected to have class 1 and 10% are expected to have class 2. Let INLINEFORM10 be a parameterized function with parameters INLINEFORM11 from INLINEFORM12 to a vector of conditional probabilities over labels in INLINEFORM13 . We write INLINEFORM14 to denote the probability assigned to the INLINEFORM15 th event (the conditional probability of INLINEFORM16 given INLINEFORM17 ).
A typical objective when training on fully labeled data of INLINEFORM0 pairs is to maximize the likelihood of the labeled data using the cross-entropy loss, INLINEFORM1
Instead, in XR our data comes in the form of pairs INLINEFORM0 of sets and their corresponding expected label proportions, and we aim to optimize INLINEFORM1 to fit the label distribution INLINEFORM2 over INLINEFORM3 , for all INLINEFORM4 .
As counting the number of predicted class labels over a set INLINEFORM0 leads to a non-differentiable objective, BIBREF0 suggest to relax it and use instead the model's posterior distribution INLINEFORM1 over the set: DISPLAYFORM0 DISPLAYFORM1
where INLINEFORM0 indicates the INLINEFORM1 th entry in INLINEFORM2 . Then, we would like to set INLINEFORM3 such that INLINEFORM4 and INLINEFORM5 are close. BIBREF0 suggest to use KL-divergence for this. KL-divergence is composed of two parts: INLINEFORM6 INLINEFORM7
Since INLINEFORM0 is constant, we only need to minimize INLINEFORM1 , therefore the loss function becomes: DISPLAYFORM0
Notice that computing INLINEFORM0 requires summation over INLINEFORM1 for the entire set INLINEFORM2 , which can be prohibitive. We present batched approximation (Section SECREF19 ) to overcome this.
BIBREF0 find that XR might find a degenerate solution. For example, in a three-class classification task, where INLINEFORM0 , it might find a solution such that INLINEFORM1 for every instance; as a result, every instance will be classified the same. To avoid this, BIBREF0 suggest penalizing flat distributions by using a temperature coefficient T, as follows: DISPLAYFORM0
where z is a feature vector and W and b are the linear classifier parameters.
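To make the relaxed objective concrete, here is a minimal PyTorch sketch (illustrative only; names, shapes and the exact placement of the temperature are assumptions) of the per-set XR loss:

import torch

def xr_loss(logits, expected_proportions, temperature=1.0):
    # Relaxed XR loss for one set U_j, as we read equations (2)-(4).
    # logits: (|U_j|, num_classes) classifier scores for the instances in U_j.
    # expected_proportions: (num_classes,) expected label distribution p_j.
    posteriors = torch.softmax(logits / temperature, dim=-1)  # per-instance posteriors
    q_hat = posteriors.mean(dim=0)                            # model's label proportions over the set
    q_hat = q_hat / q_hat.sum()                               # explicit normalization, as in eq. (3)
    return -(expected_proportions * torch.log(q_hat)).sum()   # cross-entropy against p_j

loss = xr_loss(torch.randn(64, 3), torch.tensor([0.7, 0.2, 0.1]), temperature=2.0)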
Aspect-based Sentiment Classification
In the aspect-based sentiment classification (ABSC) task, we are given a sentence and an aspect, and need to determine the sentiment that is expressed towards the aspect. For example the sentence “Excellent food, although the interior could use some help.“ has two aspects: food and interior, a positive sentiment is expressed about the food, but a negative sentiment is expressed about the interior. A sentence INLINEFORM0 , may contain 0 or more aspects INLINEFORM1 , where each aspect corresponds to a sub-sequence of the original sentence, and has an associated sentiment label (Neg, Pos, or Neu). Concretely, we follow the task definition in the SemEval-2015 and SemEval-2016 shared tasks BIBREF23 , BIBREF24 , in which the relevant aspects are given and the task focuses on finding the sentiment label of the aspects.
While sentence-level sentiment labels are relatively easy to obtain, aspect-level annotations are much scarcer, as demonstrated by the small datasets of the SemEval shared tasks.
Transfer-training between related tasks with XR
Algorithm (Stochastic Batched XR). Inputs: a dataset INLINEFORM0 , batch size INLINEFORM1 , differentiable classifier INLINEFORM2 . While not converged: INLINEFORM3 random( INLINEFORM4 ) ; INLINEFORM5 random-choice( INLINEFORM6 , INLINEFORM7 ) ; INLINEFORM8 INLINEFORM9 INLINEFORM10 INLINEFORM11 ; compute loss INLINEFORM12 (eq (4)) ; compute gradients and update INLINEFORM13 INLINEFORM14
Consider two classification tasks over a shared input space, a source task INLINEFORM0 from INLINEFORM1 to INLINEFORM2 and a target task INLINEFORM3 from INLINEFORM4 to INLINEFORM5 , which are related through a conditional distribution INLINEFORM6 . In other words, a labeling decision for task INLINEFORM7 induces an expected label distribution over the task INLINEFORM8 . For a set of datapoints INLINEFORM9 that share a source label INLINEFORM10 , we expect to see a target label distribution of INLINEFORM11 .
Given a large unlabeled dataset INLINEFORM0 , a small labeled dataset for the target task INLINEFORM1 , classifier INLINEFORM2 (or sufficient training data to train one) for the source task, we wish to use INLINEFORM3 and INLINEFORM4 to train a good classifier INLINEFORM5 for the target task. This can be achieved using the following procedure.
Apply INLINEFORM0 to INLINEFORM1 , resulting in a noisy source-side labels INLINEFORM2 for the target task.
Estimate the conditional probability INLINEFORM0 table using MLE estimates over INLINEFORM1 INLINEFORM2
where INLINEFORM0 is a counting function over INLINEFORM1 .
Apply INLINEFORM0 to the unlabeled data INLINEFORM1 resulting in labels INLINEFORM2 . Split INLINEFORM3 into INLINEFORM4 sets INLINEFORM5 according to the labeling induced by INLINEFORM6 : INLINEFORM7
Use Algorithm SECREF12 to train a classifier for the target task using input pairs INLINEFORM0 and the XR loss.
In words, by using XR training, we use the expected label proportions over the target task, given the predicted labels of the source task, to train a classifier for the target task.
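A compact sketch of the estimation and splitting steps is given below; the function and variable names are illustrative assumptions rather than the released code.

```python
from collections import Counter, defaultdict

def estimate_proportions(source_clf, target_labeled, classes=("Pos", "Neg", "Neu")):
    """MLE estimate of p(target label | predicted source label) from the small labeled target set.

    target_labeled: iterable of (x, target_label) pairs; source_clf(x) returns a source label.
    """
    counts = defaultdict(Counter)
    for x, y_target in target_labeled:
        counts[source_clf(x)][y_target] += 1
    return {
        y_source: [c[y] / sum(c.values()) for y in classes]
        for y_source, c in counts.items()
    }

def split_by_source_label(source_clf, unlabeled):
    """Partition the unlabeled inputs into sets U_y keyed by the predicted source label."""
    sets = defaultdict(list)
    for x in unlabeled:
        sets[source_clf(x)].append(x)
    return sets
```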
Stochastic Batched Training for Deep XR
BIBREF0 and subsequent work take the base classifier INLINEFORM0 to be a logistic regression classifier, for which they manually derive gradients for the XR loss and train with L-BFGS BIBREF25 . However, nothing precludes us from using an arbitrary neural network instead, as long as it culminates in a softmax layer.
One complicating factor is that the computation of INLINEFORM0 in equation ( EQREF5 ) requires a summation over INLINEFORM1 for the entire set INLINEFORM2 , which in our setup may contain hundreds of thousands of examples, making gradient computation and optimization impractical. We instead propose a stochastic batched approximation in which, instead of requiring that the full constraint set INLINEFORM3 match the expected label posterior distribution, we require that sufficiently large random subsets of it match the distribution. At each training step we compute the loss and update the gradient with respect to a different random subset. Specifically, in each training step we sample a random pair INLINEFORM4 , sample a random subset INLINEFORM5 of INLINEFORM6 of size INLINEFORM7 , and compute the local XR loss of set INLINEFORM8 : DISPLAYFORM0
where INLINEFORM0 is computed by summing over the elements of INLINEFORM1 rather than of INLINEFORM2 in equations ( EQREF5 –2). The stochastic batched XR training algorithm is given in Algorithm SECREF12 . For large enough INLINEFORM3 , the expected label distribution of the subset is the same as that of the complete set.
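A possible rendering of this training loop in Python is sketched below; it reuses the xr_loss sketch from above, and assumes a model that maps a list of fragments to a logits tensor and a standard PyTorch optimizer.

```python
import random

def train_stochastic_xr(model, optimizer, sets, proportions,
                        batch_size=450, steps=10000, temperature=1.0):
    """Stochastic Batched XR: each step fits a random subset of one set U_j to p~_j.

    sets: dict mapping a source label j to the list of fragments in U_j.
    proportions: dict mapping j to the expected label distribution p~_j (a tensor of size C).
    model: callable mapping a list of fragments to a logits tensor of shape (b, C).
    """
    labels = list(sets.keys())
    for _ in range(steps):
        j = random.choice(labels)                                   # sample a random pair (U_j, p~_j)
        batch = random.sample(sets[j], min(batch_size, len(sets[j])))
        loss = xr_loss(model(batch), proportions[j], temperature)   # local XR loss of the subset
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

Sampling a fresh subset at every step keeps memory bounded while still matching the expected proportions in aggregate.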
Application to Aspect-based Sentiment
We demonstrate the procedure given above by training an Aspect-based Sentiment Classifier (ABSC) using sentence-level sentiment signals.
Relating the classification tasks
We observe that while the sentence-level sentiment does not determine the sentiment of individual aspects (a positive sentence may contain negative remarks about some aspects), it is very predictive of the proportion of sentiment labels of the fragments within a sentence. Positively labeled sentences are likely to have more positive aspects and fewer negative ones, and vice-versa for negatively-labeled sentences. While these proportions may vary on the individual sentence level, we expect them to be stable when aggregating fragments from several sentences: when considering a large enough sample of fragments that all come from positively labeled sentences, we expect the different samples to have roughly similar label proportions to each other. This situation is ideally suited for performing XR training, as described in section SECREF12 .
The application to ABSC is almost straightforward, but is complicated a bit by the decomposition of sentences into fragments: each sentence-level decision now corresponds to multiple fragment-level decisions. Thus, we apply the sentence-level (task A) classifier INLINEFORM0 to the aspect-level corpus INLINEFORM1 at the sentence level and then associate the predicted sentence labels with each of the fragments, resulting in fragment-level labels. Similarly, when we apply INLINEFORM2 to the unlabeled data INLINEFORM3 we again do it at the sentence level, but the sets INLINEFORM4 are composed of fragments, not sentences: INLINEFORM5
We then apply Algorithm SECREF12 as is: at each training step we sample a source label INLINEFORM0 Pos,Neg,Neu INLINEFORM1 , sample INLINEFORM2 fragments from INLINEFORM3 , and use the XR loss to fit the expected fragment-label proportions over these INLINEFORM4 fragments to INLINEFORM5 . Figure FIGREF21 illustrates the procedure.
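A small sketch of this grouping step, under the same naming assumptions as before:

```python
from collections import defaultdict

def fragment_sets_from_sentences(sentence_clf, sentences, fragments_of):
    """Group fragments into sets U_y keyed by the sentence-level label predicted for their sentence."""
    sets = defaultdict(list)
    for sentence in sentences:
        predicted = sentence_clf(sentence)       # noisy sentence-level prediction (Pos/Neg/Neu)
        sets[predicted].extend(fragments_of(sentence))
    return sets
```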
Classification Architecture
We model the ABSC problem by associating each (sentence,aspect) pair with a sentence-fragment, and constructing a neural classifier from fragments to sentiment labels. We heuristically decompose a sentence into fragments. We use the same BiLSTM based neural architecture for both sentence classification and fragment classification.
We now describe the procedure we use to associate a sentence fragment with each (sentence,aspect) pair. The shared tasks data associates each aspect with a pivot-phrase INLINEFORM0 , where pivot phrase INLINEFORM1 is defined as a pre-determined sequence of words that is contained within the sentence. For a sentence INLINEFORM2 , a set of pivot phrases INLINEFORM3 and a specific pivot phrase INLINEFORM4 , we consult the constituency parse tree of INLINEFORM5 and look for tree nodes that satisfy the following conditions:
The node governs the desired pivot phrase INLINEFORM0 .
The node governs either a verb (VB, VBD, VBN, VBG, VBP, VBZ) or an adjective (JJ, JJR, JJS), which is different than any INLINEFORM0 .
The node governs a minimal number of pivot phrases from INLINEFORM0 , ideally only INLINEFORM1 .
We then select the highest node in the tree that satisfies all conditions. The span governed by this node is taken as the fragment associated with aspect INLINEFORM0 . The decomposition procedure is demonstrated in Figure FIGREF22 .
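A simplified sketch of this decomposition heuristic, using an NLTK constituency tree, is shown below; the condition checks are approximate (substring matching, no handling of repeated words) and the exact tie-breaking is an assumption, not the authors' exact heuristic.

```python
from nltk import Tree

VERB_TAGS = {"VB", "VBD", "VBN", "VBG", "VBP", "VBZ"}
ADJ_TAGS = {"JJ", "JJR", "JJS"}

def fragment_for_aspect(parse: Tree, pivot: str, all_pivots):
    """Pick the fragment for one aspect: a node that governs the pivot phrase and a
    verb/adjective outside the pivots, preferring nodes with fewer other pivots and,
    among those, the largest (highest) span."""
    best, best_key = None, None
    pivot_words = {w for p in all_pivots for w in p.split()}
    for node in parse.subtrees():
        words = node.leaves()
        span = " ".join(words)
        if pivot not in span:
            continue                                    # condition 1: governs the pivot phrase
        has_predicate = any(
            tag in VERB_TAGS or tag in ADJ_TAGS
            for word, tag in node.pos() if word not in pivot_words
        )
        if not has_predicate:
            continue                                    # condition 2: governs a non-pivot verb/adjective
        other_pivots = sum(1 for p in all_pivots if p != pivot and p in span)
        key = (other_pivots, -len(words))               # condition 3 plus "highest node" tie-break
        if best_key is None or key < best_key:
            best_key, best = key, node
    return " ".join(best.leaves()) if best is not None else " ".join(parse.leaves())
```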
When aspect-level information is given, we take the pivot-phrases to be the requested aspects. When aspect-level information is not available, we take each noun in the sentence to be a pivot-phrase.
Our classification model is a simple 1-layer BiLSTM encoder (a concatenation of the final states of a forward- and a backward-running LSTM) followed by a linear predictor. The encoder is fed either a complete sentence or a sentence fragment.
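A minimal PyTorch sketch of such an encoder is given below; the embedding and hidden dimensions are illustrative assumptions, not the paper's hyper-parameters.

```python
import torch
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    """1-layer BiLSTM encoder; the concatenated final forward/backward states feed a linear predictor."""

    def __init__(self, vocab_size, emb_dim=300, hidden_dim=150, num_classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden_dim, num_layers=1,
                               batch_first=True, bidirectional=True)
        self.predictor = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, token_ids):                       # token_ids: (batch, seq_len)
        embedded = self.embed(token_ids)
        _, (h_n, _) = self.encoder(embedded)            # h_n: (2, batch, hidden_dim)
        encoded = torch.cat([h_n[0], h_n[1]], dim=-1)   # last forward and backward states
        return self.predictor(encoded)                  # unnormalized class scores
```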
Main Results
Table TABREF44 compares these baselines to three XR conditions.
The first condition, BiLSTM-XR-Dev, performs XR training on the automatically-labeled sentence-level dataset. The only access it has to aspect-level annotation is for estimating the proportions of labels for each sentence-level label, which is done based on the validation set of SemEval-2015 (i.e., 20% of the train set). The XR setting is very effective: without using any in-task data, this model already surpasses all other models, both supervised and semi-supervised, except for the BIBREF35 , BIBREF34 models which achieve higher F1 scores. We note that in contrast to XR, the competing models have complete access to the supervised aspect-based labels. The second condition, BiLSTM-XR, is similar but now the model is allowed to estimate the conditional label proportions based on the entire aspect-based training set (the classifier still does not have direct access to the labels beyond the aggregate proportion information). This improves results further, showing the importance of accurately estimating the proportions. Finally, in BiLSTM-XR+Finetuning, we follow the XR training with fully supervised fine-tuning on the small labeled dataset, using the attention-based model of BIBREF35 . This achieves the best results, and also surpasses the semi-supervised BIBREF35 baseline on accuracy, while matching it on F1.
We report significance tests for the robustness of the method under random parameter initialization. Our reported numbers are averaged over five random initializations. Since the datasets are unbalanced w.r.t. the label distribution, we report both accuracy and macro-F1.
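For reference, both metrics can be computed with scikit-learn as in the generic snippet below (not tied to the paper's evaluation scripts):

```python
from sklearn.metrics import accuracy_score, f1_score

def evaluate(y_true, y_pred):
    # Macro-F1 averages the per-class F1 scores, so minority classes count as much as the majority class.
    return {"accuracy": accuracy_score(y_true, y_pred),
            "macro_f1": f1_score(y_true, y_pred, average="macro")}
```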
The XR training is also more stable than the other semi-supervised baselines, achieving substantially lower standard deviations across different runs.
Further experiments
In each experiment in this section we estimate the proportions using the SemEval-2015 train set.
How does the XR training scale with the amount of unlabeled data? Figure FIGREF54 a shows the macro-F1 scores on the entire SemEval-2016 dataset, with different unlabeled corpus sizes (measured in number of sentences). An unannotated corpus of INLINEFORM0 sentences is sufficient to surpass the results of the INLINEFORM1 sentence-level trained classifier, and more unannotated data further improves the results.
Our method requires a sentence level classifier INLINEFORM0 to label both the target-task corpus and the unlabeled corpus. How does the quality of this classifier affect the overall XR training? We vary the amount of supervision used to train INLINEFORM1 from 0 sentences (assigning the same label to all sentences), to 100, 1000, 5000 and 10000 sentences. We again measure macro-F1 on the entire SemEval 2016 corpus.
The results in Figure FIGREF54 b show that when using the prior distribution of aspects (0 sentences), the model struggles to learn from this signal: it learns mostly to predict the majority class, and hence reaches a very low F1 score of 35.28. The more data the sentence-level classifier is given, the better the results of training with our method on its labels: with classifiers trained on 100, 1000, 5000 and 10000 labeled sentences, we get F1 scores of 53.81, 58.84, 61.81 and 65.58, respectively. Improvements in the source-task classifier's quality clearly contribute to the target-task accuracy.
The Stochastic Batched XR algorithm (Algorithm SECREF12 ) samples a batch of INLINEFORM0 examples at each step to estimate the posterior label distribution used in the loss computation. How does the size of INLINEFORM1 affect the results? We use INLINEFORM2 fragments in our main experiments, but smaller values of INLINEFORM3 reduce GPU memory load and may train better in practice. We tested our method with varying values of INLINEFORM4 on a sample of INLINEFORM5 , using batches that are composed of fragments of 5, 25, 100, 450, 1000 and 4500 sentences. The results are shown in Figure FIGREF54 c. Setting INLINEFORM6 results in low scores. Setting INLINEFORM7 yields a better F1 score but with high variance across runs. For INLINEFORM8 fragments the results begin to stabilize; we also see a slight decrease in F1 scores with larger batch sizes. We attribute this drop, despite the better gradient estimates, to the general trend that larger batch sizes are harder to train with stochastic gradient methods.
Pre-training, Bert
The XR training can be performed also over pre-trained representations. We experiment with two pre-training methods: (1) pre-training by training the BiLSTM model to predict the noisy sentence-level predictions. (2) Using the pre-trained Bert representation BIBREF9 . For (1), we compare the effect of pre-train on unlabeled corpora of sizes of INLINEFORM0 , INLINEFORM1 and INLINEFORM2 sentences. Results in Figure FIGREF54 d show that this form of pre-training is effective for smaller unlabeled corpora but evens out for larger ones.
For the Bert experiments, we use the Bert-base model with INLINEFORM1 sets, 30 epochs for XR training or sentence-level fine-tuning, and 15 epochs for aspect-based fine-tuning. For each training method we evaluated the model on the dev set after each epoch and chose the best model. We compare the following setups:
-Bert INLINEFORM0 Aspect Based Finetuning: a pretrained Bert model fine-tuned to the aspect-based task.
-Bert INLINEFORM0 : a pretrained Bert model fine-tuned to the sentence-level task on the INLINEFORM1 sentences, and tested by predicting fragment-level sentiment.
-Bert INLINEFORM0 INLINEFORM1 INLINEFORM2 Aspect Based Finetuning: a pretrained Bert model fine-tuned to the sentence-level task, and fine-tuned again to the aspect-based one.
-Bert INLINEFORM0 XR: a pretrained Bert model followed by XR training using our method.
-Bert INLINEFORM0 XR INLINEFORM1 Aspect Based Finetuning: a pretrained Bert model followed by XR training and then fine-tuned to the aspect-level task.
The results are presented in Table TABREF55 . As before, aspect-based fine-tuning is beneficial for both SemEval-16 and SemEval-15. Training a BiLSTM with XR surpasses the pre-trained Bert models, and using XR training on top of the pre-trained Bert models increases the results substantially further.
Discussion
We presented a transfer learning method based on expectation regularization (XR), and demonstrated its effectiveness for training aspect-based sentiment classifiers using sentence-level supervision. The method achieves state-of-the-art results for the task, and is also effective for improving on top of a strong pre-trained Bert model. The proposed method provides an additional data-efficient tool in the modeling arsenal, which can be applied on its own or together with another training method, in situations where there is a conditional relation between the labels of a source task for which we have supervision and a target task for which we don't.
While we demonstrated the approach on the sentiment domain, the required conditional dependence between task labels is present in many situations. Other possible applications of the method include training language identification of tweets given geo-location supervision (knowing the geographical region gives a prior on the languages spoken), training predictors for renal failure from textual medical records given a classifier for diabetes (there is a strong correlation between the two conditions), training a political affiliation classifier from social media tweets based on age-group classifiers, zip-code information, or social-status classifiers (there are known correlations between all of these and political affiliation), training hate-speech detection based on emotion detection, and so on.
Acknowledgements
The work was supported in part by The Israeli Science Foundation (grant number 1555/15). | Unanswerable |
47a30eb4d0d6f5f2ff4cdf6487265a25c1b18fd8 | 47a30eb4d0d6f5f2ff4cdf6487265a25c1b18fd8_0 | Q: Does the system trained only using XR loss outperform the fully supervised neural system?
| Yes |
e42fbf6c183abf1c6c2321957359c7683122b48e | e42fbf6c183abf1c6c2321957359c7683122b48e_0 | Q: How accurate is the aspect based sentiment classifier trained only using the XR loss?
Text: Introduction
Data annotation is a key bottleneck in many data-driven algorithms. Specifically, deep learning models, which became a prominent tool in many data-driven tasks in recent years, require large datasets to work well. However, many tasks require manual annotations which are relatively hard to obtain at scale. An attractive alternative is lightly supervised learning BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , in which the objective function is supplemented by a set of domain-specific soft-constraints over the model's predictions on unlabeled data. For example, in label regularization BIBREF0 the model is trained to fit the true label proportions of an unlabeled dataset. Label regularization is a special case of expectation regularization (XR) BIBREF0 , in which the model is trained to fit the conditional probabilities of labels given features.
In this work we consider the case of correlated tasks, in the sense that knowing the labels for task A provides information on the expected label composition of task B. We demonstrate the approach using sentence-level and aspect-level sentiment analysis, which we use as a running example: knowing that a sentence has positive sentiment label (task A), we can expect that most aspects within this sentence (task B) will also have positive label. While this expectation may be noisy on the individual example level, it holds well in aggregate: given a set of positively-labeled sentences, we can robustly estimate the proportion of positively-labeled aspects within this set. For example, in a random set of positive sentences, we expect to find 90% positive aspects, while in a set of negative sentences, we expect to find 70% negative aspects. These proportions can be easily either guessed or estimated from a small set.
We propose a novel application of the XR framework for transfer learning in this setup. We present an algorithm (Sec SECREF12 ) that, given a corpus labeled for task A (sentence-level sentiment), learns a classifier for performing task B (aspect-level sentiment) instead, without a direct supervision signal for task B. We note that the label information for task A is only used at training time. Furthermore, due to the stochastic nature of the estimation, the task A labels need not be fully accurate, allowing us to make use of noisy predictions which are assigned by an automatic classifier (Sections SECREF12 and SECREF4 ). In other words, given a medium-sized sentiment corpus with sentence-level labels, and a large collection of un-annotated text from the same distribution, we can train an accurate aspect-level sentiment classifier.
The XR loss allows us to use task A labels for training task B predictors. This ability seamlessly integrates into other semi-supervised schemes: we can use the XR loss on top of a pre-trained model to fine-tune the pre-trained representation to the target task, and we can also take the model trained using XR loss and plentiful data and fine-tune it to the target task using the available small-scale annotated data. In Section SECREF56 we explore these options and show that our XR framework improves the results also when applied on top of a pre-trained Bert-based model BIBREF9 .
Finally, to make the XR framework applicable to large-scale deep-learning setups, we propose a stochastic batched approximation procedure (Section SECREF19 ). Source code is available at https://github.com/MatanBN/XRTransfer.
Lightly Supervised Learning
An effective way to supplement small annotated datasets is to use lightly supervised learning, in which the objective function is supplemented by a set of domain-specific soft-constraints over the model's predictions on unlabeled data. Previous work in lightly-supervised learning focused on training classifiers by using prior knowledge of label proportions BIBREF2 , BIBREF3 , BIBREF10 , BIBREF0 , BIBREF11 , BIBREF12 , BIBREF7 , BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 , BIBREF8 or prior knowledge of features label associations BIBREF1 , BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 . In the context of NLP, BIBREF17 suggested to use distributional similarities of words to train sequence models for part-of-speech tagging and a classified ads information extraction task. BIBREF19 used background lexical information in terms of word-class associations to train a sentiment classifier. BIBREF21 , BIBREF22 suggested to exploit the bilingual correlations between a resource rich language and a resource poor language to train a classifier for the resource poor language in a lightly supervised manner.
Expectation Regularization (XR)
Expectation Regularization (XR) BIBREF0 is a lightly supervised learning method, in which the model is trained to fit the conditional probabilities of labels given features. In the context of NLP, XR was used by BIBREF20 to train twitter-user attribute prediction using hundreds of noisy distributional expectations based on census demographics. Here, we suggest using XR to train a target task (aspect-level sentiment) based on the output of a related source-task classifier (sentence-level sentiment).
The main idea of XR is moving from a fully supervised situation in which each data-point INLINEFORM0 has an associated label INLINEFORM1 , to a setup in which sets of data points INLINEFORM2 are associated with corresponding label proportions INLINEFORM3 over that set.
Formally, let INLINEFORM0 be a set of data points, INLINEFORM1 be a set of INLINEFORM2 class labels, INLINEFORM3 be a set of sets where INLINEFORM4 for every INLINEFORM5 , and let INLINEFORM6 be the label distribution of set INLINEFORM7 . For example, INLINEFORM8 would indicate that 70% of data points in INLINEFORM9 are expected to have class 0, 20% are expected to have class 1 and 10% are expected to have class 2. Let INLINEFORM10 be a parameterized function with parameters INLINEFORM11 from INLINEFORM12 to a vector of conditional probabilities over labels in INLINEFORM13 . We write INLINEFORM14 to denote the probability assigned to the INLINEFORM15 th event (the conditional probability of INLINEFORM16 given INLINEFORM17 ).
A typical objective when training on fully labeled data of INLINEFORM0 pairs is to maximize the likelihood of the labeled data using the cross-entropy loss, INLINEFORM1
Instead, in XR our data comes in the form of pairs INLINEFORM0 of sets and their corresponding expected label proportions, and we aim to optimize INLINEFORM1 to fit the label distribution INLINEFORM2 over INLINEFORM3 , for all INLINEFORM4 .
As counting the number of predicted class labels over a set INLINEFORM0 leads to a non-differentiable objective, BIBREF0 suggest to relax it and use instead the model's posterior distribution INLINEFORM1 over the set: DISPLAYFORM0 DISPLAYFORM1
where INLINEFORM0 indicates the INLINEFORM1 th entry in INLINEFORM2 . We would then like to set INLINEFORM3 such that INLINEFORM4 and INLINEFORM5 are close, and BIBREF0 suggest using the KL-divergence for this. The KL-divergence is composed of two parts: INLINEFORM6 INLINEFORM7
Since INLINEFORM0 is constant, we only need to minimize INLINEFORM1 ; the loss function therefore becomes: DISPLAYFORM0
Notice that computing INLINEFORM0 requires a summation over INLINEFORM1 for the entire set INLINEFORM2 , which can be prohibitive. We present a batched approximation (Section SECREF19 ) to overcome this.
BIBREF0 find that XR might converge to a degenerate solution. For example, in a three-class classification task, where INLINEFORM0 , it might find a solution such that INLINEFORM1 for every instance; as a result, every instance will be classified the same. To avoid this, BIBREF0 suggest penalizing flat distributions by using a temperature coefficient T, as follows: DISPLAYFORM0
where z is a feature vector and W and b are the parameters of the linear classifier.
Aspect-based Sentiment Classification
In the aspect-based sentiment classification (ABSC) task, we are given a sentence and an aspect, and need to determine the sentiment that is expressed towards the aspect. For example, the sentence “Excellent food, although the interior could use some help.” has two aspects: food and interior; a positive sentiment is expressed about the food, but a negative sentiment is expressed about the interior. A sentence INLINEFORM0 may contain zero or more aspects INLINEFORM1 , where each aspect corresponds to a sub-sequence of the original sentence and has an associated sentiment label (Neg, Pos, or Neu). Concretely, we follow the task definition in the SemEval-2015 and SemEval-2016 shared tasks BIBREF23 , BIBREF24 , in which the relevant aspects are given and the task focuses on finding the sentiment label of the aspects.
While sentence-level sentiment labels are relatively easy to obtain, aspect-level annotations are much scarcer, as demonstrated by the small datasets of the SemEval shared tasks.
Transfer-training between related tasks with XR
Stochastic Batched XR (Algorithm SECREF12 ). Inputs: a dataset INLINEFORM0 of sets paired with expected label proportions, a batch size INLINEFORM1 , and a differentiable classifier INLINEFORM2 . Repeat until convergence: sample a random set together with its expected label proportions, sample a random subset of it of size INLINEFORM1 , compute the XR loss of the subset (eq. (4)), then compute gradients and update the classifier parameters.
Consider two classification tasks over a shared input space, a source task INLINEFORM0 from INLINEFORM1 to INLINEFORM2 and a target task INLINEFORM3 from INLINEFORM4 to INLINEFORM5 , which are related through a conditional distribution INLINEFORM6 . In other words, a labeling decision for task INLINEFORM7 induces an expected label distribution over the task INLINEFORM8 . For a set of datapoints INLINEFORM9 that share a source label INLINEFORM10 , we expect to see a target label distribution of INLINEFORM11 .
Given a large unlabeled dataset INLINEFORM0 , a small labeled dataset for the target task INLINEFORM1 , classifier INLINEFORM2 (or sufficient training data to train one) for the source task, we wish to use INLINEFORM3 and INLINEFORM4 to train a good classifier INLINEFORM5 for the target task. This can be achieved using the following procedure.
Apply INLINEFORM0 to INLINEFORM1 , resulting in noisy source-side labels INLINEFORM2 for the target task.
Estimate the conditional probability INLINEFORM0 table using MLE estimates over INLINEFORM1 INLINEFORM2
where INLINEFORM0 is a counting function over INLINEFORM1 .
Apply INLINEFORM0 to the unlabeled data INLINEFORM1 resulting in labels INLINEFORM2 . Split INLINEFORM3 into INLINEFORM4 sets INLINEFORM5 according to the labeling induced by INLINEFORM6 : INLINEFORM7
Use Algorithm SECREF12 to train a classifier for the target task using input pairs INLINEFORM0 and the XR loss.
In words, by using XR training, we use the expected label proportions over the target task given predicted labels of the source task, to train a target-class classifier.
Stochastic Batched Training for Deep XR
BIBREF0 and subsequent work take the base classifier INLINEFORM0 to be a logistic regression classifier, for which they manually derive gradients for the XR loss and train with L-BFGS BIBREF25 . However, nothing precludes us from using an arbitrary neural network instead, as long as it culminates in a softmax layer.
One complicating factor is that the computation of INLINEFORM0 in equation ( EQREF5 ) requires a summation over INLINEFORM1 for the entire set INLINEFORM2 , which in our setup may contain hundreds of thousands of examples, making gradient computation and optimization impractical. We instead propose a stochastic batched approximation in which, instead of requiring that the full constraint set INLINEFORM3 match the expected label posterior distribution, we require that sufficiently large random subsets of it match the distribution. At each training step we compute the loss and update the gradient with respect to a different random subset. Specifically, in each training step we sample a random pair INLINEFORM4 , sample a random subset INLINEFORM5 of INLINEFORM6 of size INLINEFORM7 , and compute the local XR loss of set INLINEFORM8 : DISPLAYFORM0
where INLINEFORM0 is computed by summing over the elements of INLINEFORM1 rather than of INLINEFORM2 in equations ( EQREF5 –2). The stochastic batched XR training algorithm is given in Algorithm SECREF12 . For large enough INLINEFORM3 , the expected label distribution of the subset is the same as that of the complete set.
Application to Aspect-based Sentiment
We demonstrate the procedure given above by training Aspect-based Sentiment Classifier (ABSC) using sentence-level sentiment signals.
Relating the classification tasks
We observe that while the sentence-level sentiment does not determine the sentiment of individual aspects (a positive sentence may contain negative remarks about some aspects), it is very predictive of the proportion of sentiment labels of the fragments within a sentence. Positively labeled sentences are likely to have more positive aspects and fewer negative ones, and vice-versa for negatively-labeled sentences. While these proportions may vary on the individual sentence level, we expect them to be stable when aggregating fragments from several sentences: when considering a large enough sample of fragments that all come from positively labeled sentences, we expect the different samples to have roughly similar label proportions to each other. This situation is ideally suited for performing XR training, as described in section SECREF12 .
The application to ABSC is almost straightforward, but is complicated a bit by the decomposition of sentences into fragments: each sentence level decision now corresponds to multiple fragment-level decisions. Thus, we apply the sentence-level (task A) classifier INLINEFORM0 on the aspect-level corpus INLINEFORM1 by applying it on the sentence level and then associating the predicted sentence labels with each of the fragments, resulting in fragment-level labeling. Similarly, when we apply INLINEFORM2 to the unlabeled data INLINEFORM3 we again do it at the sentence level, but the sets INLINEFORM4 are composed of fragments, not sentences: INLINEFORM5
We then apply algorithm SECREF12 as is: at each step of training we sample a source label INLINEFORM0 Pos,Neg,Neu INLINEFORM1 , sample INLINEFORM2 fragments from INLINEFORM3 , and use the XR loss to fit the expected fragment-label proportions over these INLINEFORM4 fragments to INLINEFORM5 . Figure FIGREF21 illustrates the procedure.
Classification Architecture
We model the ABSC problem by associating each (sentence,aspect) pair with a sentence-fragment, and constructing a neural classifier from fragments to sentiment labels. We heuristically decompose a sentence into fragments. We use the same BiLSTM based neural architecture for both sentence classification and fragment classification.
We now describe the procedure we use to associate a sentence fragment with each (sentence,aspect) pair. The shared tasks data associates each aspect with a pivot-phrase INLINEFORM0 , where pivot phrase INLINEFORM1 is defined as a pre-determined sequence of words that is contained within the sentence. For a sentence INLINEFORM2 , a set of pivot phrases INLINEFORM3 and a specific pivot phrase INLINEFORM4 , we consult the constituency parse tree of INLINEFORM5 and look for tree nodes that satisfy the following conditions:
The node governs the desired pivot phrase INLINEFORM0 .
The node governs either a verb (VB, VBD, VBN, VBG, VBP, VBZ) or an adjective (JJ, JJR, JJS), which is different than any INLINEFORM0 .
The node governs a minimal number of pivot phrases from INLINEFORM0 , ideally only INLINEFORM1 .
We then select the highest node in the tree that satisfies all conditions. The span governed by this node is taken as the fragment associated with aspect INLINEFORM0 . The decomposition procedure is demonstrated in Figure FIGREF22 .
When aspect-level information is given, we take the pivot-phrases to be the requested aspects. When aspect-level information is not available, we take each noun in the sentence to be a pivot-phrase.
Our classification model is a simple 1-layer BiLSTM encoder (a concatenation of the last states of a forward and a backward running LSTMs) followed by a linear-predictor. The encoder is fed either a complete sentence or a sentence fragment.
Main Results
Table TABREF44 compares these baselines to three XR conditions.
The first condition, BiLSTM-XR-Dev, performs XR training on the automatically-labeled sentence-level dataset. The only access it has to aspect-level annotation is for estimating the proportions of labels for each sentence-level label, which is done based on the validation set of SemEval-2015 (i.e., 20% of the train set). The XR setting is very effective: without using any in-task data, this model already surpasses all other models, both supervised and semi-supervised, except for the BIBREF35 , BIBREF34 models which achieve higher F1 scores. We note that in contrast to XR, the competing models have complete access to the supervised aspect-based labels. The second condition, BiLSTM-XR, is similar but now the model is allowed to estimate the conditional label proportions based on the entire aspect-based training set (the classifier still does not have direct access to the labels beyond the aggregate proportion information). This improves results further, showing the importance of accurately estimating the proportions. Finally, in BiLSTM-XR+Finetuning, we follow the XR training with fully supervised fine-tuning on the small labeled dataset, using the attention-based model of BIBREF35 . This achieves the best results, and also surpasses the semi-supervised BIBREF35 baseline on accuracy, while matching it on F1.
We report significance tests for the robustness of the method under random parameter initialization. Our reported numbers are averaged over five random initializations. Since the datasets are unbalanced w.r.t. the label distribution, we report both accuracy and macro-F1.
The XR training is also more stable than the other semi-supervised baselines, achieving substantially lower standard deviations across different runs.
Further experiments
In each experiment in this section we estimate the proportions using the SemEval-2015 train set.
How does the XR training scale with the amount of unlabeled data? Figure FIGREF54 a shows the macro-F1 scores on the entire SemEval-2016 dataset, with different unlabeled corpus sizes (measured in number of sentences). An unannotated corpus of INLINEFORM0 sentences is sufficient to surpass the results of the INLINEFORM1 sentence-level trained classifier, and more unannotated data further improves the results.
Our method requires a sentence level classifier INLINEFORM0 to label both the target-task corpus and the unlabeled corpus. How does the quality of this classifier affect the overall XR training? We vary the amount of supervision used to train INLINEFORM1 from 0 sentences (assigning the same label to all sentences), to 100, 1000, 5000 and 10000 sentences. We again measure macro-F1 on the entire SemEval 2016 corpus.
The results in Figure FIGREF54 b show that when using the prior distributions of aspects (0), the model struggles to learn from this signal: it learns mostly to predict the majority class, and hence reaches a very low F1 score of 35.28. The more data given to the sentence-level classifier, the better the potential results will be when training with our method using the classifier's labels: with classifiers trained on 100, 1000, 5000 and 10000 labeled sentences, we get F1 scores of 53.81, 58.84, 61.81 and 65.58 respectively. Improvements in the source task classifier's quality clearly contribute to the target task accuracy.
The Stochastic Batched XR algorithm (Algorithm SECREF12 ) samples a batch of INLINEFORM0 examples at each step to estimate the posterior label distribution used in the loss computation. How does the size of INLINEFORM1 affect the results? We use INLINEFORM2 fragments in our main experiments, but smaller values of INLINEFORM3 reduce GPU memory load and may train better in practice. We tested our method with varying values of INLINEFORM4 on a sample of INLINEFORM5 , using batches that are composed of fragments of 5, 25, 100, 450, 1000 and 4500 sentences. The results are shown in Figure FIGREF54 c. Setting INLINEFORM6 results in low scores. Setting INLINEFORM7 yields a better F1 score, but with high variance across runs. For INLINEFORM8 fragments the results begin to stabilize; we also see a slight decrease in F1-scores with larger batch sizes. We attribute this drop, despite the better gradient estimation, to the general trend of larger batch sizes being harder to train with stochastic gradient methods.
Pre-training, Bert
The XR training can be performed also over pre-trained representations. We experiment with two pre-training methods: (1) pre-training by training the BiLSTM model to predict the noisy sentence-level predictions. (2) Using the pre-trained Bert representation BIBREF9 . For (1), we compare the effect of pre-train on unlabeled corpora of sizes of INLINEFORM0 , INLINEFORM1 and INLINEFORM2 sentences. Results in Figure FIGREF54 d show that this form of pre-training is effective for smaller unlabeled corpora but evens out for larger ones.
For the Bert experiments, we use the Bert-base model with INLINEFORM1 sets, 30 epochs for XR training or sentence-level fine-tuning, and 15 epochs for aspect-based fine-tuning. For each training method we evaluated the model on the dev set after each epoch and chose the best model. We compare the following setups:
-Bert INLINEFORM0 Aspect Based Finetuning: pretrained bert model finetuned to the aspect based task.
-Bert INLINEFORM0 : A pretrained bert model finetuned to the sentence level task on the INLINEFORM1 sentences, and tested by predicting fragment-level sentiment.
-Bert INLINEFORM0 INLINEFORM1 INLINEFORM2 Aspect Based Finetuning: pretrained bert model finetuned to the sentence level task, and finetuned again to the aspect based one.
-Bert INLINEFORM0 XR: pretrained bert model followed by XR training using our method.
-Bert INLINEFORM0 XR INLINEFORM1 Aspect Based Finetuning: pretrained bert followed by XR training and then fine-tuned to the aspect level task.
The results are presented in Table TABREF55 . As before, aspect-based fine-tuning is beneficial for both SemEval-16 and SemEval-15. Training a BiLSTM with XR surpasses the pre-trained Bert models, and using XR training on top of the pre-trained Bert models substantially increases the results even further.
Discussion
We presented a transfer learning method based on expectation regularization (XR), and demonstrated its effectiveness for training aspect-based sentiment classifiers using sentence-level supervision. The method achieves state-of-the-art results for the task, and is also effective for improving on top of a strong pre-trained Bert model. The proposed method provides an additional data-efficient tool in the modeling arsenal, which can be applied on its own or together with another training method, in situations where there is a conditional relation between the labels of a source task for which we have supervision and those of a target task for which we do not.
While we demonstrated the approach on the sentiment domain, the required conditional dependence between task labels is present in many situations. Other possible applications of the method include training language identification of tweets given geo-location supervision (knowing the geographical region gives a prior on the languages spoken), training predictors for renal failure from textual medical records given a classifier for diabetes (there is a strong correlation between the two conditions), training a political affiliation classifier from social media tweets based on age-group classifiers, zip-code information, or social-status classifiers (there are known correlations between all of these and political affiliation), training hate-speech detection based on emotion detection, and so on.
Acknowledgements
The work was supported in part by The Israeli Science Foundation (grant number 1555/15). | BiLSTM-XR-Dev Estimation accuracy is 83.31 for SemEval-15 and 87.68 for SemEval-16.
BiLSTM-XR accuracy is 83.31 for SemEval-15 and 88.12 for SemEval-16.
|
e574f0f733fb98ecef3c64044004aa7a320439be | e574f0f733fb98ecef3c64044004aa7a320439be_0 | Q: How is the expectation regularization loss defined?
Text: Introduction
Data annotation is a key bottleneck in many data driven algorithms. Specifically, deep learning models, which have become a prominent tool in many data driven tasks in recent years, require large datasets to work well. However, many tasks require manual annotations which are relatively hard to obtain at scale. An attractive alternative is lightly supervised learning BIBREF1 , BIBREF2 , BIBREF3 , BIBREF4 , BIBREF5 , BIBREF6 , BIBREF7 , BIBREF8 , in which the objective function is supplemented by a set of domain-specific soft-constraints over the model's predictions on unlabeled data. For example, in label regularization BIBREF0 the model is trained to fit the true label proportions of an unlabeled dataset. Label regularization is a special case of expectation regularization (XR) BIBREF0 , in which the model is trained to fit the conditional probabilities of labels given features.
In this work we consider the case of correlated tasks, in the sense that knowing the labels for task A provides information on the expected label composition of task B. We demonstrate the approach using sentence-level and aspect-level sentiment analysis, which we use as a running example: knowing that a sentence has positive sentiment label (task A), we can expect that most aspects within this sentence (task B) will also have positive label. While this expectation may be noisy on the individual example level, it holds well in aggregate: given a set of positively-labeled sentences, we can robustly estimate the proportion of positively-labeled aspects within this set. For example, in a random set of positive sentences, we expect to find 90% positive aspects, while in a set of negative sentences, we expect to find 70% negative aspects. These proportions can be easily either guessed or estimated from a small set.
We propose a novel application of the XR framework for transfer learning in this setup. We present an algorithm (Sec SECREF12 ) that, given a corpus labeled for task A (sentence-level sentiment), learns a classifier for performing task B (aspect-level sentiment) instead, without a direct supervision signal for task B. We note that the label information for task A is only used at training time. Furthermore, due to the stochastic nature of the estimation, the task A labels need not be fully accurate, allowing us to make use of noisy predictions which are assigned by an automatic classifier (Sections SECREF12 and SECREF4 ). In other words, given a medium-sized sentiment corpus with sentence-level labels, and a large collection of un-annotated text from the same distribution, we can train an accurate aspect-level sentiment classifier.
The XR loss allows us to use task A labels for training task B predictors. This ability seamlessly integrates into other semi-supervised schemes: we can use the XR loss on top of a pre-trained model to fine-tune the pre-trained representation to the target task, and we can also take the model trained using XR loss and plentiful data and fine-tune it to the target task using the available small-scale annotated data. In Section SECREF56 we explore these options and show that our XR framework improves the results also when applied on top of a pre-trained Bert-based model BIBREF9 .
Finally, to make the XR framework applicable to large-scale deep-learning setups, we propose a stochastic batched approximation procedure (Section SECREF19 ). Source code is available at https://github.com/MatanBN/XRTransfer.
Lightly Supervised Learning
An effective way to supplement small annotated datasets is to use lightly supervised learning, in which the objective function is supplemented by a set of domain-specific soft-constraints over the model's predictions on unlabeled data. Previous work in lightly-supervised learning focused on training classifiers by using prior knowledge of label proportions BIBREF2 , BIBREF3 , BIBREF10 , BIBREF0 , BIBREF11 , BIBREF12 , BIBREF7 , BIBREF13 , BIBREF14 , BIBREF15 , BIBREF16 , BIBREF8 or prior knowledge of features label associations BIBREF1 , BIBREF17 , BIBREF18 , BIBREF19 , BIBREF20 . In the context of NLP, BIBREF17 suggested to use distributional similarities of words to train sequence models for part-of-speech tagging and a classified ads information extraction task. BIBREF19 used background lexical information in terms of word-class associations to train a sentiment classifier. BIBREF21 , BIBREF22 suggested to exploit the bilingual correlations between a resource rich language and a resource poor language to train a classifier for the resource poor language in a lightly supervised manner.
Expectation Regularization (XR)
Expectation Regularization (XR) BIBREF0 is a lightly supervised learning method, in which the model is trained to fit the conditional probabilities of labels given features. In the context of NLP, XR was used by BIBREF20 to train twitter-user attribute prediction using hundreds of noisy distributional expectations based on census demographics. Here, we suggest using XR to train a target task (aspect-level sentiment) based on the output of a related source-task classifier (sentence-level sentiment).
The main idea of XR is moving from a fully supervised situation in which each data-point INLINEFORM0 has an associated label INLINEFORM1 , to a setup in which sets of data points INLINEFORM2 are associated with corresponding label proportions INLINEFORM3 over that set.
Formally, let INLINEFORM0 be a set of data points, INLINEFORM1 be a set of INLINEFORM2 class labels, INLINEFORM3 be a set of sets where INLINEFORM4 for every INLINEFORM5 , and let INLINEFORM6 be the label distribution of set INLINEFORM7 . For example, INLINEFORM8 would indicate that 70% of data points in INLINEFORM9 are expected to have class 0, 20% are expected to have class 1 and 10% are expected to have class 2. Let INLINEFORM10 be a parameterized function with parameters INLINEFORM11 from INLINEFORM12 to a vector of conditional probabilities over labels in INLINEFORM13 . We write INLINEFORM14 to denote the probability assigned to the INLINEFORM15 th event (the conditional probability of INLINEFORM16 given INLINEFORM17 ).
A typical objective when training on fully labeled data of INLINEFORM0 pairs is to maximize the likelihood of the labeled data using the cross-entropy loss, INLINEFORM1
Instead, in XR our data comes in the form of pairs INLINEFORM0 of sets and their corresponding expected label proportions, and we aim to optimize INLINEFORM1 to fit the label distribution INLINEFORM2 over INLINEFORM3 , for all INLINEFORM4 .
As counting the number of predicted class labels over a set INLINEFORM0 leads to a non-differentiable objective, BIBREF0 suggest to relax it and use instead the model's posterior distribution INLINEFORM1 over the set: DISPLAYFORM0 DISPLAYFORM1
where INLINEFORM0 indicates the INLINEFORM1 th entry in INLINEFORM2 . Then, we would like to set INLINEFORM3 such that INLINEFORM4 and INLINEFORM5 are close. BIBREF0 suggest to use KL-divergence for this. KL-divergence is composed of two parts: INLINEFORM6 INLINEFORM7
Since INLINEFORM0 is constant, we only need to minimize INLINEFORM1 ; the loss function therefore becomes: DISPLAYFORM0
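The display equations referenced above are lost in this extraction (the DISPLAYFORM placeholders). A reconstruction consistent with the surrounding prose, in notation chosen here for readability, is the following, where INLINEFORM placeholders correspond to the set-level posterior, the KL decomposition, and the resulting XR loss:

```latex
% \tilde{p}_j is the expected label distribution of set U_j and p_\theta the classifier.
\begin{align}
\hat{q}_j[i] &= \frac{1}{|U_j|} \sum_{x \in U_j} p_\theta(y = i \mid x)
  \qquad \text{(relaxed, differentiable label proportions over } U_j\text{)} \\
D_{\mathrm{KL}}\!\left(\tilde{p}_j \,\Vert\, \hat{q}_j\right)
  &= -H(\tilde{p}_j) \;-\; \sum_{i=1}^{|Y|} \tilde{p}_j[i] \log \hat{q}_j[i] \\
\mathcal{L}_{\mathrm{XR}}(\theta)
  &= -\sum_{j} \sum_{i=1}^{|Y|} \tilde{p}_j[i] \log \hat{q}_j[i]
  \qquad \text{(the entropy term is constant, so only the cross-entropy is minimized)}
\end{align}
```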
Notice that computing INLINEFORM0 requires summation over INLINEFORM1 for the entire set INLINEFORM2 , which can be prohibitive. We present batched approximation (Section SECREF19 ) to overcome this.
BIBREF0 find that XR might find a degenerate solution. For example, in a three-class classification task, where INLINEFORM0 , it might find a solution such that INLINEFORM1 for every instance; as a result, every instance will be classified the same. To avoid this, BIBREF0 suggest penalizing flat distributions by using a temperature coefficient T, as follows: DISPLAYFORM0
where z is a feature vector and W and b are the linear classifier's parameters.
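The displayed equation itself is again lost; one plausible form, consistent with the description of a temperature coefficient applied to a linear classifier over the feature vector z (the exact placement of T is an assumption here, not confirmed by the text), is:

```latex
% With T > 1 the output distribution is sharpened, discouraging the flat
% degenerate solution described above.
\begin{equation}
p_\theta(y = i \mid x) \;=\;
  \frac{\exp\!\big(T\,(W z + b)_i\big)}
       {\sum_{k=1}^{|Y|} \exp\!\big(T\,(W z + b)_k\big)}
\end{equation}
```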
Aspect-based Sentiment Classification
In the aspect-based sentiment classification (ABSC) task, we are given a sentence and an aspect, and need to determine the sentiment that is expressed towards the aspect. For example the sentence “Excellent food, although the interior could use some help.“ has two aspects: food and interior, a positive sentiment is expressed about the food, but a negative sentiment is expressed about the interior. A sentence INLINEFORM0 , may contain 0 or more aspects INLINEFORM1 , where each aspect corresponds to a sub-sequence of the original sentence, and has an associated sentiment label (Neg, Pos, or Neu). Concretely, we follow the task definition in the SemEval-2015 and SemEval-2016 shared tasks BIBREF23 , BIBREF24 , in which the relevant aspects are given and the task focuses on finding the sentiment label of the aspects.
While sentence-level sentiment labels are relatively easy to obtain, aspect-level annotations are much scarcer, as demonstrated by the small datasets of the SemEval shared tasks.
Transfer-training between related tasks with XR
Algorithm (Stochastic Batched XR). Inputs: a dataset INLINEFORM0 , batch size INLINEFORM1 , a differentiable classifier INLINEFORM2 . While not converged: sample a random pair INLINEFORM3 random( INLINEFORM4 ), sample a random subset INLINEFORM5 random-choice( INLINEFORM6 , INLINEFORM7 ), compute the loss INLINEFORM12 (eq. (4)), then compute gradients and update INLINEFORM13 .
Consider two classification tasks over a shared input space, a source task INLINEFORM0 from INLINEFORM1 to INLINEFORM2 and a target task INLINEFORM3 from INLINEFORM4 to INLINEFORM5 , which are related through a conditional distribution INLINEFORM6 . In other words, a labeling decision for task INLINEFORM7 induces an expected label distribution over the task INLINEFORM8 . For a set of datapoints INLINEFORM9 that share a source label INLINEFORM10 , we expect to see a target label distribution of INLINEFORM11 .
Given a large unlabeled dataset INLINEFORM0 , a small labeled dataset for the target task INLINEFORM1 , classifier INLINEFORM2 (or sufficient training data to train one) for the source task, we wish to use INLINEFORM3 and INLINEFORM4 to train a good classifier INLINEFORM5 for the target task. This can be achieved using the following procedure.
Apply INLINEFORM0 to INLINEFORM1 , resulting in a noisy source-side labels INLINEFORM2 for the target task.
Estimate the conditional probability INLINEFORM0 table using MLE estimates over INLINEFORM1 INLINEFORM2
where INLINEFORM0 is a counting function over INLINEFORM1 .
Apply INLINEFORM0 to the unlabeled data INLINEFORM1 resulting in labels INLINEFORM2 . Split INLINEFORM3 into INLINEFORM4 sets INLINEFORM5 according to the labeling induced by INLINEFORM6 : INLINEFORM7
Use Algorithm SECREF12 to train a classifier for the target task using input pairs INLINEFORM0 and the XR loss.
In words, by using XR training, we use the expected label proportions over the target task given predicted labels of the source task, to train a target-class classifier.
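A small sketch of the proportion-estimation step described above (step 2 of the procedure), assuming the noisy source labels and the gold target labels are available as parallel lists; names and class labels are illustrative.

```python
from collections import Counter, defaultdict

def estimate_proportions(source_labels, target_labels, classes=("Pos", "Neg", "Neu")):
    """MLE estimate of the conditional table P(target label | source label).

    source_labels[i] is the (possibly noisy) source-task label assigned to
    example i, and target_labels[i] is its gold target-task label."""
    counts = defaultdict(Counter)
    for s, t in zip(source_labels, target_labels):
        counts[s][t] += 1
    return {s: {t: c[t] / sum(c.values()) for t in classes}
            for s, c in counts.items()}
```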
Stochastic Batched Training for Deep XR
BIBREF0 and following work take the base classifier INLINEFORM0 to be a logistic regression classifier, for which they manually derive gradients for the XR loss and train with LBFGS BIBREF25 . However, nothing precludes us from using an arbitrary neural network instead, as long as it culminates in a softmax layer.
One complicating factor is that the computation of INLINEFORM0 in equation ( EQREF5 ) requires a summation over INLINEFORM1 for the entire set INLINEFORM2 , which in our setup may contain hundreds of thousands of examples, making gradient computation and optimization impractical. We instead propose a stochastic batched approximation in which, instead of requiring that the full constraint set INLINEFORM3 match the expected label posterior distribution, we require that sufficiently large random subsets of it match the distribution. At each training step we compute the loss and update the gradient with respect to a different random subset. Specifically, in each training step we sample a random pair INLINEFORM4 , sample a random subset INLINEFORM5 of INLINEFORM6 of size INLINEFORM7 , and compute the local XR loss of set INLINEFORM8 : DISPLAYFORM0
where INLINEFORM0 is computed by summing over the elements of INLINEFORM1 rather than of INLINEFORM2 in equations ( EQREF5 –2). The stochastic batched XR training algorithm is given in Algorithm SECREF12 . For large enough INLINEFORM3 , the expected label distribution of the subset is the same as that of the complete set.
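The following is a minimal PyTorch sketch of one such update, assuming `model` maps a batch of token-id tensors to unnormalized class scores, and that `sets` and `proportions` are keyed by the source-side labels. The default subset size, the smoothing constant and the per-example forward passes are simplifications for illustration.

```python
import random
import torch
import torch.nn.functional as F

def stochastic_xr_step(model, optimizer, sets, proportions, k=450):
    """One update of the stochastic batched XR approximation (sketch)."""
    source_label = random.choice(list(sets.keys()))   # sample a random pair
    pool = sets[source_label]
    batch = random.sample(pool, min(k, len(pool)))    # random subset of size k
    probs = torch.stack([F.softmax(model(x.unsqueeze(0)).squeeze(0), dim=-1)
                         for x in batch])
    q_hat = probs.mean(dim=0)                         # averaged posterior over the subset
    p_tilde = torch.tensor(proportions[source_label], dtype=q_hat.dtype)
    loss = -(p_tilde * torch.log(q_hat + 1e-12)).sum()  # cross-entropy part of the KL
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```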
Application to Aspect-based Sentiment
We demonstrate the procedure given above by training Aspect-based Sentiment Classifier (ABSC) using sentence-level sentiment signals.
Relating the classification tasks
We observe that while the sentence-level sentiment does not determine the sentiment of individual aspects (a positive sentence may contain negative remarks about some aspects), it is very predictive of the proportion of sentiment labels of the fragments within a sentence. Positively labeled sentences are likely to have more positive aspects and fewer negative ones, and vice-versa for negatively-labeled sentences. While these proportions may vary on the individual sentence level, we expect them to be stable when aggregating fragments from several sentences: when considering a large enough sample of fragments that all come from positively labeled sentences, we expect the different samples to have roughly similar label proportions to each other. This situation is ideally suited for performing XR training, as described in section SECREF12 .
The application to ABSC is almost straightforward, but is complicated a bit by the decomposition of sentences into fragments: each sentence level decision now corresponds to multiple fragment-level decisions. Thus, we apply the sentence-level (task A) classifier INLINEFORM0 on the aspect-level corpus INLINEFORM1 by applying it on the sentence level and then associating the predicted sentence labels with each of the fragments, resulting in fragment-level labeling. Similarly, when we apply INLINEFORM2 to the unlabeled data INLINEFORM3 we again do it at the sentence level, but the sets INLINEFORM4 are composed of fragments, not sentences: INLINEFORM5
We then apply algorithm SECREF12 as is: at each step of training we sample a source label INLINEFORM0 Pos,Neg,Neu INLINEFORM1 , sample INLINEFORM2 fragments from INLINEFORM3 , and use the XR loss to fit the expected fragment-label proportions over these INLINEFORM4 fragments to INLINEFORM5 . Figure FIGREF21 illustrates the procedure.
Classification Architecture
We model the ABSC problem by associating each (sentence,aspect) pair with a sentence-fragment, and constructing a neural classifier from fragments to sentiment labels. We heuristically decompose a sentence into fragments. We use the same BiLSTM based neural architecture for both sentence classification and fragment classification.
We now describe the procedure we use to associate a sentence fragment with each (sentence, aspect) pair. The shared task data associates each aspect with a pivot-phrase INLINEFORM0 , where a pivot phrase INLINEFORM1 is defined as a pre-determined sequence of words that is contained within the sentence. For a sentence INLINEFORM2 , a set of pivot phrases INLINEFORM3 and a specific pivot phrase INLINEFORM4 , we consult the constituency parse tree of INLINEFORM5 and look for tree nodes that satisfy the following conditions:
The node governs the desired pivot phrase INLINEFORM0 .
The node governs either a verb (VB, VBD, VBN, VBG, VBP, VBZ) or an adjective (JJ, JJR, JJS), which is different than any INLINEFORM0 .
The node governs a minimal number of pivot phrases from INLINEFORM0 , ideally only INLINEFORM1 .
We then select the highest node in the tree that satisfies all conditions. The span governed by this node is taken as the fragment associated with aspect INLINEFORM0 . The decomposition procedure is demonstrated in Figure FIGREF22 .
When aspect-level information is given, we take the pivot-phrases to be the requested aspects. When aspect-level information is not available, we take each noun in the sentence to be a pivot-phrase.
Our classification model is a simple 1-layer BiLSTM encoder (a concatenation of the last states of a forward and a backward running LSTMs) followed by a linear-predictor. The encoder is fed either a complete sentence or a sentence fragment.
Main Results
Table TABREF44 compares these baselines to three XR conditions.
The first condition, BiLSTM-XR-Dev, performs XR training on the automatically-labeled sentence-level dataset. The only access it has to aspect-level annotation is for estimating the proportions of labels for each sentence-level label, which is done based on the validation set of SemEval-2015 (i.e., 20% of the train set). The XR setting is very effective: without using any in-task data, this model already surpasses all other models, both supervised and semi-supervised, except for the BIBREF35 , BIBREF34 models, which achieve higher F1 scores. We note that in contrast to XR, the competing models have complete access to the supervised aspect-based labels. The second condition, BiLSTM-XR, is similar, but now the model is allowed to estimate the conditional label proportions based on the entire aspect-based training set (the classifier still does not have direct access to the labels beyond the aggregate proportion information). This improves results further, showing the importance of accurately estimating the proportions. Finally, in BiLSTM-XR+Finetuning, we follow the XR training with fully supervised fine-tuning on the small labeled dataset, using the attention-based model of BIBREF35 . This achieves the best results, surpassing the semi-supervised BIBREF35 baseline on accuracy and matching it on F1.
We report significance tests for the robustness of the method under random parameter initialization. Our reported numbers are averaged over five random initializations. Since the datasets are unbalanced w.r.t. the label distribution, we report both accuracy and macro-F1.
The XR training is also more stable than the other semi-supervised baselines, achieving substantially lower standard deviations across different runs.
Further experiments
In each experiment in this section we estimate the proportions using the SemEval-2015 train set.
How does the XR training scale with the amount of unlabeled data? Figure FIGREF54 a shows the macro-F1 scores on the entire SemEval-2016 dataset, with different unlabeled corpus sizes (measured in number of sentences). An unannotated corpus of INLINEFORM0 sentences is sufficient to surpass the results of the INLINEFORM1 sentence-level trained classifier, and more unannotated data further improves the results.
Our method requires a sentence level classifier INLINEFORM0 to label both the target-task corpus and the unlabeled corpus. How does the quality of this classifier affect the overall XR training? We vary the amount of supervision used to train INLINEFORM1 from 0 sentences (assigning the same label to all sentences), to 100, 1000, 5000 and 10000 sentences. We again measure macro-F1 on the entire SemEval 2016 corpus.
The results in Figure FIGREF54 b show that when using the prior distributions of aspects (0), the model struggles to learn from this signal: it learns mostly to predict the majority class, and hence reaches a very low F1 score of 35.28. The more data given to the sentence-level classifier, the better the potential results will be when training with our method using the classifier's labels: with classifiers trained on 100, 1000, 5000 and 10000 labeled sentences, we get F1 scores of 53.81, 58.84, 61.81 and 65.58 respectively. Improvements in the source task classifier's quality clearly contribute to the target task accuracy.
The Stochastic Batched XR algorithm (Algorithm SECREF12 ) samples a batch of INLINEFORM0 examples at each step to estimate the posterior label distribution used in the loss computation. How does the size of INLINEFORM1 affect the results? We use INLINEFORM2 fragments in our main experiments, but smaller values of INLINEFORM3 reduce GPU memory load and may train better in practice. We tested our method with varying values of INLINEFORM4 on a sample of INLINEFORM5 , using batches that are composed of fragments of 5, 25, 100, 450, 1000 and 4500 sentences. The results are shown in Figure FIGREF54 c. Setting INLINEFORM6 results in low scores. Setting INLINEFORM7 yields a better F1 score, but with high variance across runs. For INLINEFORM8 fragments the results begin to stabilize; we also see a slight decrease in F1-scores with larger batch sizes. We attribute this drop, despite the better gradient estimation, to the general trend of larger batch sizes being harder to train with stochastic gradient methods.
Pre-training, Bert
The XR training can be performed also over pre-trained representations. We experiment with two pre-training methods: (1) pre-training by training the BiLSTM model to predict the noisy sentence-level predictions. (2) Using the pre-trained Bert representation BIBREF9 . For (1), we compare the effect of pre-train on unlabeled corpora of sizes of INLINEFORM0 , INLINEFORM1 and INLINEFORM2 sentences. Results in Figure FIGREF54 d show that this form of pre-training is effective for smaller unlabeled corpora but evens out for larger ones.
For the Bert experiments, we use the Bert-base model with INLINEFORM1 sets, 30 epochs for XR training or sentence-level fine-tuning, and 15 epochs for aspect-based fine-tuning. For each training method we evaluated the model on the dev set after each epoch and chose the best model. We compare the following setups:
-Bert INLINEFORM0 Aspect Based Finetuning: pretrained bert model finetuned to the aspect based task.
-Bert INLINEFORM0 : A pretrained bert model finetuned to the sentence level task on the INLINEFORM1 sentences, and tested by predicting fragment-level sentiment.
-Bert INLINEFORM0 INLINEFORM1 INLINEFORM2 Aspect Based Finetuning: pretrained bert model finetuned to the sentence level task, and finetuned again to the aspect based one.
-Bert INLINEFORM0 XR: pretrained bert model followed by XR training using our method.
-Bert INLINEFORM0 XR INLINEFORM1 Aspect Based Finetuning: pretrained bert followed by XR training and then fine-tuned to the aspect level task.
The results are presented in Table TABREF55 . As before, aspect-based fine-tuning is beneficial for both SemEval-16 and SemEval-15. Training a BiLSTM with XR surpasses the pre-trained Bert models, and using XR training on top of the pre-trained Bert models substantially increases the results even further.
Discussion
We presented a transfer learning method based on expectation regularization (XR), and demonstrated its effectiveness for training aspect-based sentiment classifiers using sentence-level supervision. The method achieves state-of-the-art results for the task, and is also effective for improving on top of a strong pre-trained Bert model. The proposed method provides an additional data-efficient tool in the modeling arsenal, which can be applied on its own or together with another training method, in situations where there is a conditional relation between the labels of a source task for which we have supervision and those of a target task for which we do not.
While we demonstrated the approach on the sentiment domain, the required conditional dependence between task labels is present in many situations. Other possible applications of the method include training language identification of tweets given geo-location supervision (knowing the geographical region gives a prior on the languages spoken), training predictors for renal failure from textual medical records given a classifier for diabetes (there is a strong correlation between the two conditions), training a political affiliation classifier from social media tweets based on age-group classifiers, zip-code information, or social-status classifiers (there are known correlations between all of these and political affiliation), training hate-speech detection based on emotion detection, and so on.
Acknowledgements
The work was supported in part by The Israeli Science Foundation (grant number 1555/15). | DISPLAYFORM0 |
b65b1c366c8bcf544f1be5710ae1efc6d2b1e2f1 | b65b1c366c8bcf544f1be5710ae1efc6d2b1e2f1_0 | Q: What were the non-neural baselines used for the task?
Text: Introduction
While producing a sentence, humans combine various types of knowledge to produce fluent output—various shades of meaning are expressed through word selection and tone, while the language is made to conform to underlying structural rules via syntax and morphology. Native speakers are often quick to identify disfluency, even if the meaning of a sentence is mostly clear.
Automatic systems must also consider these constraints when constructing or processing language. Strong enough language models can often reconstruct common syntactic structures, but are insufficient to properly model morphology. Many languages implement large inflectional paradigms that mark both function and content words with varying levels of morphosyntactic information. For instance, Romanian verb forms inflect for person, number, tense, mood, and voice; meanwhile, Archi verbs can take on thousands of forms BIBREF0. Such complex paradigms produce large inventories of words, all of which must be producible by a realistic system, even though a large percentage of them will never be observed over billions of lines of linguistic input. Compounding the issue, good inflectional systems often require large amounts of supervised training data, which is infeasible in many of the world's languages.
This year's shared task is concentrated on encouraging the construction of strong morphological systems that perform two related but different inflectional tasks. The first task asks participants to create morphological inflectors for a large number of under-resourced languages, encouraging systems that use highly-resourced, related languages as a cross-lingual training signal. The second task welcomes submissions that invert this operation in light of contextual information: Given an unannotated sentence, lemmatize each word, and tag them with a morphosyntactic description. Both of these tasks extend upon previous morphological competitions, and the best submitted systems now represent the state of the art in their respective tasks.
Tasks and Evaluation ::: Task 1: Cross-lingual transfer for morphological inflection
Annotated resources for the world's languages are not distributed equally—some languages simply have more as they have more native speakers willing and able to annotate more data. We explore how to transfer knowledge from high-resource languages that are genetically related to low-resource languages.
The first task iterates on last year's main task: morphological inflection BIBREF1. Instead of giving some number of training examples in the language of interest, we provided only a limited number in that language. To accompany it, we provided a larger number of examples in either a related or unrelated language. Each test example asked participants to produce some other inflected form when given a lemma and a bundle of morphosyntactic features as input. The goal, thus, is to perform morphological inflection in the low-resource language, having hopefully exploited some similarity to the high-resource language. Models which perform well here can aid downstream tasks like machine translation in low-resource settings. All datasets were resampled from UniMorph, which makes them distinct from past years.
The mode of the task is inspired by BIBREF2, who fine-tune a model pre-trained on a high-resource language to perform well on a low-resource language. We do not, though, require that models be trained by fine-tuning. Joint modeling or any number of methods may be explored instead.
Tasks and Evaluation ::: Task 1: Cross-lingual transfer for morphological inflection ::: Example
The model will have access to type-level data in a low-resource target language, plus a high-resource source language. We give an example here of Asturian as the target language with Spanish as the source language.
Tasks and Evaluation ::: Task 1: Cross-lingual transfer for morphological inflection ::: Evaluation
We score the output of each system in terms of its predictions' exact-match accuracy and the average Levenshtein distance between the predictions and their corresponding true forms.
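As a reference point, the two metrics can be computed as in the following sketch (a standard dynamic-programming edit distance; the exact normalization and tie-handling of the official scorer may differ).

```python
def levenshtein(a: str, b: str) -> int:
    """Edit distance between two strings via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                   # deletion
                           cur[j - 1] + 1,                # insertion
                           prev[j - 1] + (ca != cb)))     # substitution
        prev = cur
    return prev[-1]

def evaluate(predictions, gold):
    """Exact-match accuracy and average Levenshtein distance over a test set."""
    assert len(predictions) == len(gold)
    acc = sum(p == g for p, g in zip(predictions, gold)) / len(gold)
    avg_dist = sum(levenshtein(p, g) for p, g in zip(predictions, gold)) / len(gold)
    return acc, avg_dist
```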
Tasks and Evaluation ::: Task 2: Morphological analysis in context
Although inflection of words in a context-agnostic manner is a useful evaluation of the morphological quality of a system, people do not learn morphology in isolation.
In 2018, the second task of the CoNLL–SIGMORPHON Shared Task BIBREF1 required submitting systems to complete an inflectional cloze task BIBREF3 given only the sentential context and the desired lemma; for instance, given a sentence context that calls for the plural of the lemma dog, a successful system would predict the plural form “dogs”. Likewise, a Spanish word form ayuda may be a feminine noun or a third-person verb form, which must be disambiguated by context.
This year's task extends the second task from last year. Rather than inflect a single word in context, the task is to provide a complete morphological tagging of a sentence: for each word, a successful system will need to lemmatize and tag it with a morphsyntactic description (MSD).
Context is critical—depending on the sentence, identical word forms realize a large number of potential inflectional categories, which will in turn influence lemmatization decisions. If the sentence were instead “The barking dogs kept us up all night”, “barking” is now an adjective, and its lemma is also “barking”.
Data ::: Data for Task 1 ::: Language pairs
We presented data in 100 language pairs spanning 79 unique languages. Data for all but four languages (Basque, Kurmanji, Murrinhpatha, and Sorani) are extracted from English Wiktionary, a large multi-lingual crowd-sourced dictionary with morphological paradigms for many lemmata. 20 of the 100 language pairs are either distantly related or unrelated; this allows speculation into the relative importance of data quantity and linguistic relatedness.
Data ::: Data for Task 1 ::: Data format
For each language, the basic data consists of triples of the form (lemma, feature bundle, inflected form), as in tab:sub1data. The first feature in the bundle always specifies the core part of speech (e.g., verb). For each language pair, separate files contain the high- and low-resource training examples.
All features in the bundle are coded according to the UniMorph Schema, a cross-linguistically consistent universal morphological feature set BIBREF8, BIBREF9.
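A minimal reader for such files might look as follows, assuming one tab-separated triple per line with the features joined by ';' as in the UniMorph Schema; the column order shown follows the description above and may differ from the released files.

```python
def read_triples(path):
    """Read (lemma, feature bundle, inflected form) triples from a text file."""
    triples = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if not line:
                continue  # skip blank lines between entries
            lemma, features, form = line.split("\t")
            triples.append((lemma, features.split(";"), form))
    return triples
```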
Data ::: Data for Task 1 ::: Extraction from Wiktionary
For each of the Wiktionary languages, Wiktionary provides a number of tables, each of which specifies the full inflectional paradigm for a particular lemma. As in the previous iteration, tables were extracted using a template annotation procedure described in BIBREF10.
Data ::: Data for Task 1 ::: Sampling data splits
From each language's collection of paradigms, we sampled the training, development, and test sets as in 2018. Crucially, while the data were sampled in the same fashion, the datasets are distinct from those used for the 2018 shared task.
Our first step was to construct probability distributions over the (lemma, feature bundle, inflected form) triples in our full dataset. For each triple, we counted how many tokens the inflected form has in the February 2017 dump of Wikipedia for that language. To distribute the counts of an observed form over all the triples that have this token as its form, we follow the method used in the previous shared task BIBREF1, training a neural network on unambiguous forms to estimate the distribution over all, even ambiguous, forms. We then sampled 12,000 triples without replacement from this distribution. The first 100 were taken as training data for low-resource settings. The first 10,000 were used as high-resource training sets. As these sets are nested, the highest-count triples tend to appear in the smaller training sets.
The final 2000 triples were randomly shuffled and then split in half to obtain development and test sets of 1000 forms each. The final shuffling was performed to ensure that the development set is similar to the test set. By contrast, the development and test sets tend to contain lower-count triples than the training set.
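The split logic described above can be summarized by the following sketch, which operates on the 12,000 triples already sampled from the frequency-based distribution (the sampling step itself is omitted).

```python
import random

def make_splits(sampled_triples):
    """Derive the nested train sets and the dev/test split from 12,000 sampled triples."""
    low_resource_train = sampled_triples[:100]      # nested inside the high-resource set
    high_resource_train = sampled_triples[:10000]
    tail = sampled_triples[10000:12000]
    random.shuffle(tail)                            # final shuffle so dev resembles test
    dev, test = tail[:1000], tail[1000:]
    return low_resource_train, high_resource_train, dev, test
```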
Data ::: Data for Task 1 ::: Other modifications
We further adopted some changes to increase compatibility. Namely, we corrected some annotation errors created while scraping Wiktionary for the 2018 task, and we standardized Romanian t-cedilla and t-comma to t-comma. (The same was done with s-cedilla and s-comma.)
Data ::: Data for Task 2
Our data for task 2 come from the Universal Dependencies treebanks BIBREF11, which provides pre-defined training, development, and test splits and annotations in a unified annotation schema for morphosyntax and dependency relationships. Unlike the 2018 cloze task which used UD data, we require no manual data preparation and are able to leverage all 107 monolingual treebanks. As is typical, data are presented in CoNLL-U format, although we modify the morphological feature and lemma fields.
Data ::: Data for Task 2 ::: Data conversion
The morphological annotations for the 2019 shared task were converted to the UniMorph schema BIBREF10 according to BIBREF12, who provide a deterministic mapping that increases agreement across languages. This also moves the part of speech into the bundle of morphological features. We do not attempt to individually correct any errors in the UD source material. Further, some languages received additional pre-processing. In the Finnish data, we removed morpheme boundaries that were present in the lemmata (e.g., puhe#kieli $\mapsto $ puhekieli `spoken+language'). Russian lemmata in the GSD treebank were presented in all uppercase; to match the 2018 shared task, we lowercased these. In development and test data, all fields except for form and index within the sentence were struck.
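The two language-specific fixes amount to something like the following sketch (illustrative only; the treebank check is an assumption based on the description above).

```python
def preprocess_lemma(lemma: str, language: str, treebank: str = "") -> str:
    """Apply the language-specific lemma fixes described above."""
    if language == "Finnish":
        lemma = lemma.replace("#", "")      # puhe#kieli -> puhekieli
    if language == "Russian" and treebank == "GSD":
        lemma = lemma.lower()               # lemmata were provided in all uppercase
    return lemma
```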
Baselines ::: Task 1 Baseline
We include four neural sequence-to-sequence models mapping lemmata to inflected word forms: soft attention BIBREF13, non-monotonic hard attention BIBREF14, monotonic hard attention and a variant with offset-based transition distribution BIBREF15. Neural sequence-to-sequence models with soft attention BIBREF13 have dominated previous SIGMORPHON shared tasks BIBREF16. BIBREF14 instead models the alignment between characters in the lemma and the inflected word form explicitly with hard attention and learns this alignment and transduction jointly. BIBREF15 shows that enforcing strict monotonicity with hard attention is beneficial in tasks such as morphological inflection where the transduction is mostly monotonic. The encoder is a biLSTM while the decoder is a left-to-right LSTM. All models use multiplicative attention and have roughly the same number of parameters. In the model, a morphological tag is fed to the decoder along with target character embeddings to guide the decoding. During the training of the hard attention model, dynamic programming is applied to marginalize all latent alignments exactly.
Baselines ::: Task 2 Baselines ::: Non-neural
BIBREF17: The Lemming model is a log-linear model that performs joint morphological tagging and lemmatization. The model is globally normalized with the use of a second order linear-chain CRF. To efficiently calculate the partition function, the choice of lemmata is pruned with the use of pre-extracted edit trees.
Baselines ::: Task 2 Baselines ::: Neural
BIBREF18: This is a state-of-the-art neural model that also performs joint morphological tagging and lemmatization, but also accounts for the exposure bias with the application of maximum likelihood (MLE). The model stitches the tagger and lemmatizer together with the use of jackknifing BIBREF19 to expose the lemmatizer to the errors made by the tagger model during training. The morphological tagger is based on a character-level biLSTM embedder that produces the embedding for a word, and a word-level biLSTM tagger that predicts a morphological tag sequence for each word in the sentence. The lemmatizer is a neural sequence-to-sequence model BIBREF15 that uses the decoded morphological tag sequence from the tagger as an additional attribute. The model uses hard monotonic attention instead of standard soft attention, along with a dynamic programming based training scheme.
Results
The SIGMORPHON 2019 shared task received 30 submissions—14 for task 1 and 16 for task 2—from 23 teams. In addition, the organizers' baseline systems were evaluated.
Results ::: Task 1 Results
Five teams participated in the first Task, with a variety of methods aimed at leveraging the cross-lingual data to improve system performance.
The University of Alberta (UAlberta) performed a focused investigation on four language pairs, training cognate-projection systems from external cognate lists. Two methods were considered: one which trained a high-resource neural encoder-decoder, and projected the test data into the HRL, and one that projected the HRL data into the LRL, and trained a combined system. Results demonstrated that certain language pairs may be amenable to such methods.
The Tuebingen University submission (Tuebingen) aligned source and target to learn a set of edit-actions with both linear and neural classifiers that independently learned to predict action sequences for each morphological category. Adding in the cross-lingual data only led to modest gains.
AX-Semantics combined the low- and high-resource data to train an encoder-decoder seq2seq model; optionally also implementing domain adaptation methods to focus later epochs on the target language.
The CMU submission first attends over a decoupled representation of the desired morphological sequence before using the updated decoder state to attend over the character sequence of the lemma. Secondly, in order to reduce the bias of the decoder's language model, they hallucinate two types of data that encourage common affixes and character copying. Simply allowing the model to learn to copy characters for several epochs significantly outperforms the task baseline, while further improvements are obtained through fine-tuning. When making use of an adversarial language discriminator, cross-lingual gains are highly correlated with linguistic similarity, while augmenting the data with hallucinated forms and multiple related target languages further improves the model.
The system from IT-IST also attends separately to tags and lemmas, using a gating mechanism to interpolate the importance of the individual attentions. By combining the gated dual-head attention with a SparseMax activation function, they are able to jointly learn stem and affix modifications, improving significantly over the baseline system.
The relative system performance is described in tab:sub2team, which shows the average per-language accuracy of each system. The table reflects the fact that some teams submitted more than one system (e.g. Tuebingen-1 & Tuebingen-2 in the table).
Results ::: Task 2 Results
Nine teams submitted system papers for Task 2, with several interesting modifications to either the baseline or other prior work that led to modest improvements.
Charles-Saarland achieved the highest overall tagging accuracy by leveraging multi-lingual BERT embeddings fine-tuned on a concatenation of all available languages, effectively transporting the cross-lingual objective of Task 1 into Task 2. Lemmas and tags are decoded separately (with a joint encoder and separate attention); lemmas are predicted as a sequence of edit-actions, while tags are calculated jointly. (There is no splitting of tags into features; tags are atomic.)
CBNU instead lemmatize using a transformer network, while performing tagging with a multilayer perceptron with biaffine attention. Input words are first lemmatized, and then pipelined to the tagger, which produces atomic tag sequences (i.e., no splitting of features).
The team from Istanbul Technical University (ITU) jointly produces lemmatic edit-actions and morphological tags via a two level encoder (first word embeddings, and then context embeddings) and separate decoders. Their system slightly improves over the baseline lemmatization, but significantly improves tagging accuracy.
The team from the University of Groningen (RUG) also uses separate decoders for lemmatization and tagging, but uses ELMo to initialize the contextual embeddings, leading to large gains in performance. Furthermore, joint training on related languages further improves results.
CMU approaches tagging differently than the multi-task decoding we've seen so far (baseline is used for lemmatization). Making use of a hierarchical CRF that first predicts POS (that is subsequently looped back into the encoder), they then seek to predict each feature separately. In particular, predicting POS separately greatly improves results. An attempt to leverage gold typological information led to little gain in the results; experiments suggest that the system is already learning the pertinent information.
The team from Ohio State University (OHIOSTATE) concentrates on predicting tags; the baseline lemmatizer is used for lemmatization. To that end, they make use of a dual decoder that first predicts features given only the word embedding as input; the predictions are fed to a GRU seq2seq, which then predicts the sequence of tags.
The UNT HiLT+Ling team investigates a low-resource setting of the tagging, by using parallel Bible data to learn a translation matrix between English and the target language, learning morphological tags through analogy with English.
The UFAL-Prague team extends their submission from the UD shared task (multi-layer LSTM), replacing the pretrained embeddings with BERT, to great success (first in lemmatization, 2nd in tagging). Although they predict complete tags, they use the individual features to regularize the decoder. Small gains are also obtained from joining multi-lingual corpora and ensembling.
CUNI–Malta performs lemmatization as operations over edit actions with LSTM and ReLU. Tagging is a bidirectional LSTM augmented by the edit actions (i.e., two-stage decoding), predicting features separately.
The Edinburgh system is a character-based LSTM encoder-decoder with attention, implemented in OpenNMT. It can be seen as an extension of the contextual lemmatization system Lematus BIBREF20 to include morphological tagging, or alternatively as an adaptation of the morphological re-inflection system MED BIBREF21 to incorporate context and perform analysis rather than re-inflection. Like these systems it uses a completely generic encoder-decoder architecture with no specific adaptation to the morphological processing task other than the form of the input. In the submitted version of the system, the input is split into short chunks corresponding to the target word plus one word of context on either side, and the system is trained to output the corresponding lemmas and tags for each three-word chunk.
Several teams relied on external resources to improve their lemmatization and feature analysis. Several teams made use of pre-trained embeddings. CHARLES-SAARLAND-2 and UFALPRAGUE-1 used pretrained contextual embeddings (BERT) provided by Google BIBREF22. CBNU-1 used a mix of pre-trained embeddings from the CoNLL 2017 shared task and fastText. Further, some teams trained their own embeddings to aid performance.
Future Directions
In general, the application of typology to natural language processing BIBREF23, BIBREF24 provides an interesting avenue for multilinguality. Further, our shared task was designed to only leverage a single helper language, though many may exist with lexical or morphological overlap with the target language. Techniques like those of BIBREF25 may aid in designing universal inflection architectures. Neither task this year included unannotated monolingual corpora. Using such data is well-motivated from an L1-learning point of view, and may affect the performance of low-resource data settings.
In the case of inflection an interesting future topic could involve departing from orthographic representation and using more IPA-like representations, i.e. transductions over pronunciations. Different languages, in particular those with idiosyncratic orthographies, may offer new challenges in this respect.
Only one team tried to learn inflection in a multilingual setting—i.e. to use all training data to train one model. Such transfer learning is an interesting avenue of future research, but evaluation could be difficult. Whether any cross-language transfer is actually being learned, or whether having more data simply biases the networks toward copying strings, is something evaluation will need to disentangle.
Creating new data sets that accurately reflect learner exposure (whether L1 or L2) is also an important consideration in the design of future shared tasks. One pertinent facet of this is information about inflectional categories—often the inflectional information is insufficiently prescribed by the lemma, as with the Romanian verbal inflection classes or nominal gender in German.
As we move toward multilingual models for morphology, it becomes important to understand which representations are critical or irrelevant for adapting to new languages; this may be probed in the style of BIBREF27, and it can be used as a first step toward designing systems that avoid catastrophic forgetting as they learn to inflect new languages BIBREF28.
Future directions for Task 2 include exploring cross-lingual analysis—in stride with both Task 1 and BIBREF29—and leveraging these analyses in downstream tasks.
Conclusions
The SIGMORPHON 2019 shared task provided a type-level evaluation on 100 language pairs in 79 languages and a token-level evaluation on 107 treebanks in 66 languages, of systems for inflection and analysis. On task 1 (low-resource inflection with cross-lingual transfer), 14 systems were submitted, while on task 2 (lemmatization and morphological feature analysis), 16 systems were submitted. All used neural network models, completing a trend in past years' shared tasks and other recent work on morphology.
In task 1, gains from cross-lingual training were generally modest, with gains positively correlating with the linguistic similarity of the two languages.
In the second task, several methods were implemented by multiple groups, with the most successful systems implementing variations of multi-headed attention, multi-level encoding, multiple decoders, and ELMo and BERT contextual embeddings.
We have released the training, development, and test sets, and expect these datasets to provide a useful benchmark for future research into learning of inflectional morphology and string-to-string transduction.
Acknowledgments
MS has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 771113). | The Lemming model in BIBREF17 |
bd3ccb63fd8ce5575338d7332e96def7a3fabad6 | bd3ccb63fd8ce5575338d7332e96def7a3fabad6_0 | Q: Which publicly available NLU dataset is used?
Text: Introduction
Research in Conversational AI (also known as Spoken Dialogue Systems) has applications ranging from home devices to robotics, and has a growing presence in industry. A key problem in real-world Dialogue Systems is Natural Language Understanding (NLU) – the process of extracting structured representations of meaning from user utterances. In fact, the effective extraction of semantics is an essential feature, being the entry point of any Natural Language interaction system. Apart from challenges given by the inherent complexity and ambiguity of human language, other challenges arise whenever the NLU has to operate over multiple domains. In fact, interaction patterns, domain, and language vary depending on the device the user is interacting with. For example, chit-chatting and instruction-giving for executing an action are different processes in terms of language, domain, syntax and interaction schemes involved. And what if the user combines two interaction domains: “play some music, but first what's the weather tomorrow”?
In this work, we present HERMIT, a HiERarchical MultI-Task Natural Language Understanding architecture, designed for effective semantic parsing of domain-independent user utterances, extracting meaning representations in terms of high-level intents and frame-like semantic structures. With respect to previous approaches to NLU for SDS, HERMIT stands out for being a cross-domain, multi-task architecture, capable of recognising multiple intents/frames in an utterance. HERMIT also shows better performance than current state-of-the-art commercial systems. Such a novel combination of requirements is discussed below.
Introduction ::: Cross-domain NLU
A cross-domain dialogue agent must be able to handle heterogeneous types of conversation, such as chit-chatting, giving directions, entertaining, and triggering domain/task actions. A domain-independent and rich meaning representation is thus required to properly capture the intent of the user. Meaning is modelled here through three layers of knowledge: dialogue acts, frames, and frame arguments. Frames and arguments can be in turn mapped to domain-dependent intents and slots, or to Frame Semantics' BIBREF0 structures (i.e. semantic frames and frame elements, respectively), which allow handling of heterogeneous domains and language.
Introduction ::: Multi-task NLU
Deriving such a multi-layered meaning representation can be approached through a multi-task learning approach. Multi-task learning has found success in several NLP problems BIBREF1, BIBREF2, especially with the recent rise of Deep Learning. Thanks to the possibility of building complex networks, handling more tasks at once has been proven to be a successful solution, provided that some degree of dependence holds between the tasks. Moreover, multi-task learning allows the use of different datasets to train sub-parts of the network BIBREF3. Following the same trend, HERMIT is a hierarchical multi-task neural architecture which is able to deal with the three tasks of tagging dialogue acts, frame-like structures, and their arguments in parallel. The network, based on self-attention mechanisms, seq2seq bi-directional Long-Short Term Memory (BiLSTM) encoders, and CRF tagging layers, is hierarchical in the sense that information output from earlier layers flows through the network, feeding following layers to solve downstream dependent tasks.
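A highly simplified PyTorch sketch of such a hierarchy is shown below: three stacked BiLSTM layers, one per task, where each layer also receives the states of the previous one. Per-token linear scorers stand in for the CRF output layers, the self-attention blocks are omitted, and all dimensions and wiring are assumptions rather than the published HERMIT implementation.

```python
import torch
import torch.nn as nn

class HierarchicalTagger(nn.Module):
    """Sketch of a hierarchical multi-task tagger: dialogue acts, frames,
    and frame arguments are predicted by successive BiLSTM layers, with
    earlier-layer states fed forward to the later tasks."""
    def __init__(self, emb_dim, hidden, n_da, n_frame, n_arg):
        super().__init__()
        self.l1 = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.l2 = nn.LSTM(emb_dim + 2 * hidden, hidden, bidirectional=True, batch_first=True)
        self.l3 = nn.LSTM(emb_dim + 2 * hidden, hidden, bidirectional=True, batch_first=True)
        self.out_da = nn.Linear(2 * hidden, n_da)       # dialogue-act scores per token
        self.out_frame = nn.Linear(2 * hidden, n_frame) # frame scores per token
        self.out_arg = nn.Linear(2 * hidden, n_arg)     # frame-argument scores per token

    def forward(self, emb):                             # emb: (batch, seq, emb_dim)
        h1, _ = self.l1(emb)
        h2, _ = self.l2(torch.cat([emb, h1], dim=-1))   # lower-layer states flow upward
        h3, _ = self.l3(torch.cat([emb, h2], dim=-1))
        return self.out_da(h1), self.out_frame(h2), self.out_arg(h3)
```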
Introduction ::: Multi-dialogue act and -intent NLU
Another degree of complexity in NLU is represented by the granularity of knowledge that can be extracted from an utterance. Utterance semantics is often rich and expressive: approximating meaning to a single user intent is often not enough to convey the required information. As opposed to the traditional single-dialogue act and single-intent view in previous work BIBREF4, BIBREF5, BIBREF6, HERMIT operates on a meaning representation that is multi-dialogue act and multi-intent. In fact, it is possible to model an utterance's meaning through multiple dialogue acts and intents at the same time. For example, the user would be able both to request tomorrow's weather and listen to his/her favourite music with just a single utterance.
A further requirement is that for practical application the system should be competitive with the state of the art: we evaluate HERMIT's effectiveness by running several empirical investigations. We perform a robust test on a publicly available NLU-Benchmark (NLU-BM) BIBREF7 containing 25K cross-domain utterances with a conversational agent. The results obtained show a performance higher than well-known off-the-shelf tools (i.e., Rasa, Dialogflow, LUIS, and Watson). The contribution of the different network components is then highlighted through an ablation study. We also test HERMIT on the smaller Robotics-Oriented MUltitask Language UnderStanding (ROMULUS) corpus, annotated with Dialogue Acts and Frame Semantics. HERMIT produces promising results for application in a real scenario.
Related Work
Much research on Natural (or Spoken, depending on the input) Language Understanding has been carried out in the area of Spoken Dialogue Systems BIBREF8, where the advent of statistical learning has led to the application of many data-driven approaches BIBREF9. In recent years, the rise of deep learning models has further improved the state-of-the-art. Recurrent Neural Networks (RNNs) have proven to be particularly successful, especially uni- and bi-directional LSTMs and Gated Recurrent Units (GRUs). The use of such deep architectures has also fostered the development of joint classification models of intents and slots. Bi-directional GRUs are applied in BIBREF10, where the hidden state of each time step is used for slot tagging in a seq2seq fashion, while the final state of the GRU is used for intent classification. The application of attention mechanisms in a BiLSTM architecture is investigated in BIBREF5, while the work of BIBREF11 explores the use of memory networks BIBREF12 to exploit encoding of historical user utterances to improve the slot-filling task. Seq2seq with self-attention is applied in BIBREF13, where the classified intent is also used to guide a special gated unit that contributes to the slot classification of each token.
One of the first attempts to jointly detect domains in addition to intent-slot tagging is the work of BIBREF4. An utterance's syntax is encoded through a Recursive NN, and it is used to predict the joint domain-intent classes. Syntactic features extracted from the same network are used in the per-word slot classifier. The work of BIBREF6 applies the same idea as BIBREF10, this time using a context-augmented BiLSTM, and performing domain-intent classification as a single joint task. As in BIBREF11, the history of user utterances is also considered in BIBREF14, in combination with a dialogue context encoder. A two-layer hierarchical structure made of a combination of BiLSTM and BiGRU is used for joint classification of domains and intents, together with slot tagging. BIBREF15 apply multi-task learning to the dialogue domain. Dialogue state tracking, dialogue act and intent classification, and slot tagging are jointly learned. Dialogue states and user utterances are encoded to provide hidden representations, which jointly affect all the other tasks.
Many previous systems are trained and compared over the ATIS (Airline Travel Information Systems) dataset BIBREF16, which covers only the flight-booking domain. Some of them also use bigger, not publicly available datasets, which appear to be similar to the NLU-BM in terms of number of intents and slots, but they cover no more than three or four domains. Our work stands out for its more challenging NLU setting, since we are dealing with a higher number of domains/scenarios (18), intents (64) and slots (54) in the NLU-BM dataset, and dialogue acts (11), frames (58) and frame elements (84) in the ROMULUS dataset. Moreover, we propose a multi-task hierarchical architecture, where each layer is trained to solve one of the three tasks. Each of these is tackled with a seq2seq classification using a CRF output layer, as in BIBREF3.
The NLU problem has been studied also on the Interactive Robotics front, mostly to support basic dialogue systems, with few dialogue states and tailored for specific tasks, such as semantic mapping BIBREF17, navigation BIBREF18, BIBREF19, or grounded language learning BIBREF20. However, the designed approaches, either based on formal languages or data-driven, have never been shown to scale to real-world scenarios. The work of BIBREF21 takes a step forward in this direction. Their model still deals with the single `pick and place' domain, covering no more than two intents, but it is trained on several thousand examples, making it able to manage more unstructured language. An attempt to manage a higher number of intents, as well as more variable language, is represented by the work of BIBREF22, where Frame Semantics alone is applied to represent user intents, with no Dialogue Acts.
Jointly parsing dialogue acts and frame-like structures
The identification of Dialogue Acts (henceforth DAs) is required to drive the dialogue manager to the next dialogue state. General frame structures (FRs) provide a reference framework to capture user intents, in terms of required or desired actions that a conversational agent has to perform. Depending on the level of abstraction required by an application, these can be interpreted as more domain-dependent paradigms like intents, or as shallower representations, such as semantic frames, as conceived in FrameNet BIBREF23. From this perspective, semantic frames represent a versatile abstraction that can be mapped over an agent's capabilities, also allowing the system to be easily extended with new functionalities without requiring the definition of new ad-hoc structures. Similarly, frame arguments (ARs) act as slots in a traditional intent-slots scheme, or as frame elements for semantic frames.
In our work, the whole process of extracting a complete semantic interpretation as required by the system is tackled with a multi-task learning approach across DAs, FRs, and ARs. Each of these tasks is modelled as a seq2seq problem, where a task-specific label is assigned to each token of the sentence according to the IOB2 notation BIBREF24, with “B-” marking the Beginning of a chunk, “I-” marking tokens Inside a chunk, and “O” assigned to any token that does not belong to any chunk. Task labels are drawn from the set of classes defined for DAs, FRs, and ARs. Figure TABREF5 shows an example of the tagging layers over the sentence Where can I find Starbucks?, where Frame Semantics has been selected as the underlying reference theory.
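A possible IOB2 annotation of this example is sketched below; since the figure is not reproduced here, the specific dialogue-act, frame and frame-element labels are assumptions chosen for illustration.

```python
# Illustrative IOB2 tagging of the three layers for "Where can I find Starbucks?"
# (labels are assumed for illustration, not taken from the original figure).
tokens = ["Where", "can", "I", "find", "Starbucks", "?"]

dialogue_acts  = ["B-Req_info", "I-Req_info", "I-Req_info",
                  "I-Req_info", "I-Req_info", "I-Req_info"]
frames         = ["O", "O", "O", "B-Locating", "I-Locating", "O"]
frame_elements = ["O", "O", "B-Cognizer", "O", "B-Sought_entity", "O"]

for layer in (dialogue_acts, frames, frame_elements):
    assert len(layer) == len(tokens)  # one task-specific label per token
```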
Jointly parsing dialogue acts and frame-like structures ::: Architecture description
The central motivation behind the proposed architecture is that there is a dependence among the three tasks of identifying DAs, FRs, and ARs. The relationship between tagging frames and arguments appears more evident, as also developed in theories like Frame Semantics – although it is defined independently by each theory. However, some degree of dependence also holds between the DAs and FRs. For example, the FrameNet semantic frame Desiring, expressing a desire of the user for an event to occur, is more likely to be used in the context of an Inform DA, which indicates that the user is notifying the agent of some information, rather than in an Instruction. This is clearly visible in interactions like “I'd like a cup of hot chocolate” or “I'd like to find a shoe shop”, where the user is actually notifying the agent about one of their desires.
In order to reflect such inter-task dependence, the classification process is tackled here through a hierarchical multi-task learning approach. We designed a multi-layer neural network, whose architecture is shown in Figure FIGREF7, where each layer is trained to solve one of the three tasks, namely labelling dialogue acts ($DA$ layer), semantic frames ($FR$ layer), and frame elements ($AR$ layer). The layers are arranged in a hierarchical structure that allows the information produced by earlier layers to be fed to downstream tasks.
The network is mainly composed of three BiLSTM BIBREF25 encoding layers. A sequence of input words is initially converted into an embedded representation through an ELMo embeddings layer BIBREF26, and is fed to the $DA$ layer. The embedded representation is also passed forward through shortcut connections BIBREF1, and concatenated with both the outputs of the $DA$ and $FR$ layers. Self-attention layers BIBREF27 are placed after the $DA$ and $FR$ BiLSTM encoders. Where $w_t$ is the input word at time step $t$ of the sentence $\textbf {\textrm {w}} = (w_1, ..., w_T)$, the architecture can be formalised by:
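In a compact sketch (the notation here is ours and may differ from the authors' original formulation), the layer computation can be written as: $e_t = \mathrm{ELMo}(w_t)$; $\textbf {\textrm {s}}^{DA} = \mathrm{BiLSTM}_{DA}(e_1, ..., e_T)$ and $\textbf {\textrm {a}}^{DA} = \mathrm{SelfAtt}(\textbf {\textrm {s}}^{DA})$; $\textbf {\textrm {s}}^{FR} = \mathrm{BiLSTM}_{FR}(e_1 \oplus a_1^{DA}, ..., e_T \oplus a_T^{DA})$ and $\textbf {\textrm {a}}^{FR} = \mathrm{SelfAtt}(\textbf {\textrm {s}}^{FR})$; $\textbf {\textrm {s}}^{AR} = \mathrm{BiLSTM}_{AR}(e_1 \oplus a_1^{FR}, ..., e_T \oplus a_T^{FR})$.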
where $\oplus $ represents the vector concatenation operator, $e_t$ is the embedding of the word at time $t$, and $\textbf {\textrm {s}}^{L} = (s_1^L, ..., s_T^L)$ is the embedded sequence output of each $L$ layer, with $L = \lbrace DA, FR, AR\rbrace $. Given an input sentence, the final sequence of labels $\textbf {y}^L$ for each task is computed through a CRF tagging layer, which operates on the output of the $DA$ and $FR$ self-attention, and of the $AR$ BiLSTM embedding, so that:
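A hedged sketch of this tagging step (again in our own notation, not necessarily the authors') is: $\textbf {\textrm {y}}^{DA} = \mathrm{CRF}_{DA}(\textbf {\textrm {a}}^{DA})$, $\textbf {\textrm {y}}^{FR} = \mathrm{CRF}_{FR}(\textbf {\textrm {a}}^{FR})$ and $\textbf {\textrm {y}}^{AR} = \mathrm{CRF}_{AR}(\textbf {\textrm {s}}^{AR})$.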
where $\textbf {a}^{DA}$, $\textbf {a}^{FR}$ are the attended embedded sequences. Due to the shortcut connections, layers in the upper levels of the architecture can rely both on the direct word embeddings and on the hidden representation $a_t^L$ computed by a previous layer. Operationally, the latter carries task-specific information which, combined with the input embeddings, helps in stabilising the classification of each CRF layer, as shown by our experiments. The network is trained by minimising the sum of the individual negative log-likelihoods of the three CRF layers, while at test time the most likely sequence is obtained through Viterbi decoding over the output scores of each CRF layer.
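A minimal pseudocode sketch of the hierarchical forward pass and of this training objective follows; BiLSTM, SelfAttention and CRF are placeholders for any standard implementations (e.g., the Keras layers mentioned in the experimental setup below), and all names are our own rather than the authors'.

```python
# Sketch of the hierarchical multi-task forward pass (not the authors' code).
# The da/fr/ar dicts are assumed to hold callables: "bilstm", "attention", "crf".
import numpy as np

def concat(a, b):
    """Token-wise concatenation of two (T, d) sequences."""
    return np.concatenate([a, b], axis=-1)

def hermit_forward(elmo_embeddings, da, fr, ar):
    """elmo_embeddings: (T, 1024) array of ELMo vectors for one sentence."""
    e = elmo_embeddings
    s_da = da["bilstm"](e)                 # DA encoder
    a_da = da["attention"](s_da)           # self-attention over DA states
    s_fr = fr["bilstm"](concat(e, a_da))   # shortcut: embeddings + attended DA output
    a_fr = fr["attention"](s_fr)
    s_ar = ar["bilstm"](concat(e, a_fr))   # shortcut: embeddings + attended FR output
    # Each CRF layer is assumed to return (negative log-likelihood, decoded tags).
    return {"DA": da["crf"](a_da), "FR": fr["crf"](a_fr), "AR": ar["crf"](s_ar)}

def joint_loss(outputs):
    """Training objective: sum of the three CRF negative log-likelihoods."""
    return sum(nll for nll, _ in outputs.values())
```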
Experimental Evaluation
In order to assess the effectiveness of the proposed architecture and compare against existing off-the-shelf tools, we run several empirical evaluations.
Experimental Evaluation ::: Datasets
We tested the system on two datasets that differ in size and in the complexity of the language they address.
Experimental Evaluation ::: Datasets ::: NLU-Benchmark dataset
The first (publicly available) dataset, NLU-Benchmark (NLU-BM), contains $25,716$ utterances annotated with targeted Scenario, Action, and involved Entities. For example, “schedule a call with Lisa on Monday morning” is labelled to contain a calendar scenario, where the set_event action is instantiated through the entities [event_name: a call with Lisa] and [date: Monday morning]. The Intent is then obtained by concatenating scenario and action labels (e.g., calendar_set_event). This dataset consists of multiple home assistant task domains (e.g., scheduling, playing music), chit-chat, and commands to a robot BIBREF7.
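The construction of the intent label from such an annotation can be sketched as follows; the field names are assumptions based on the description above.

```python
# Deriving the intent label from an NLU-BM style annotation (field names assumed).
example = {
    "text": "schedule a call with Lisa on Monday morning",
    "scenario": "calendar",
    "action": "set_event",
    "entities": [
        {"type": "event_name", "span": "a call with Lisa"},
        {"type": "date", "span": "Monday morning"},
    ],
}

intent = f"{example['scenario']}_{example['action']}"
assert intent == "calendar_set_event"
```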
Experimental Evaluation ::: Datasets ::: ROMULUS dataset
The second dataset, ROMULUS, is composed of $1,431$ sentences, for each of which dialogue acts, semantic frames, and corresponding frame elements are provided. This dataset is being developed for modelling user utterances to open-domain conversational systems for robotic platforms that are expected to handle different interaction situations/patterns – e.g., chit-chat, command interpretation. The corpus is composed of different subsections, addressing heterogeneous linguistic phenomena, ranging from imperative instructions (e.g., “enter the bedroom slowly, turn left and turn the lights off”) to complex requests for information (e.g., “good morning I want to buy a new mobile phone is there any shop nearby?”) or open-domain chit-chat (e.g., “nope thanks let's talk about cinema”). A considerable number of utterances in the dataset ($\approx $$70\%$) are collected through Human-Human Interaction studies in the robotics domain, though a small portion has been synthetically generated to balance the frame distribution.
Note that while the NLU-BM is designed to have at most one intent per utterance, sentences are here tagged following the IOB2 sequence labelling scheme (see example of Figure TABREF5), so that multiple dialogue acts, frames, and frame elements can be defined at the same time for the same utterance. For example, three dialogue acts are identified within the sentence [good morning]$_{\textsc {Opening}}$ [I want to buy a new mobile phone]$_{\textsc {Inform}}$ [is there any shop nearby?]$_{\textsc {Req\_info}}$. As a result, though smaller, the ROMULUS dataset provides a richer representation of the sentence's semantics, making the tasks more complex and challenging. These observations are highlighted by the statistics in Table TABREF13, that show an average number of dialogue acts, frames and frame elements always greater than 1 (i.e., $1.33$, $1.41$ and $3.54$, respectively).
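For instance, the dialogue-act layer of the example above would be tagged roughly as in the following sketch; the tokenisation and exact label spans are our assumptions, following the bracketing shown in the text.

```python
# IOB2 dialogue-act tags for the example sentence (tokenisation assumed).
tokens = ["good", "morning", "I", "want", "to", "buy", "a", "new",
          "mobile", "phone", "is", "there", "any", "shop", "nearby", "?"]
da_tags = ["B-Opening", "I-Opening",
           "B-Inform", "I-Inform", "I-Inform", "I-Inform", "I-Inform",
           "I-Inform", "I-Inform", "I-Inform",
           "B-Req_info", "I-Req_info", "I-Req_info", "I-Req_info",
           "I-Req_info", "I-Req_info"]
assert len(tokens) == len(da_tags)  # three dialogue-act chunks in one utterance
```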
Experimental Evaluation ::: Experimental setup
All the models are implemented with Keras BIBREF28 and Tensorflow BIBREF29 as backend, and run on a Titan Xp. Experiments are performed in a 10-fold setting, using one fold for tuning and one for testing. However, since HERMIT is designed to operate on dialogue acts, semantic frames and frame elements, the best hyperparameters are obtained over the ROMULUS dataset via a grid search using early stopping, and are applied also to the NLU-BM models. This guarantees fairness towards the other systems, which do not perform any fine-tuning on the training data. We make use of pre-trained 1024-dim ELMo embeddings BIBREF26 as word vector representations without re-training the weights.
Experimental Evaluation ::: Experiments on the NLU-Benchmark
This section shows the results obtained on the NLU-Benchmark (NLU-BM) dataset provided by BIBREF7, by comparing HERMIT to off-the-shelf NLU services, namely: Rasa, Dialogflow, LUIS and Watson. In order to apply HERMIT to NLU-BM annotations, these have been aligned so that Scenarios are treated as DAs, Actions as FRs and Entities as ARs.
To make our model comparable with other approaches, we reproduced the same folds as in BIBREF7, where a resized version of the original dataset is used. Table TABREF11 shows some statistics of the NLU-BM and its reduced version. Moreover, micro-averaged Precision, Recall and F1 are computed following the original paper to ensure consistency. TP, FP and FN of intent labels are obtained as in any other multi-class task. An entity is instead counted as TP if there is an overlap between the predicted and the gold span, and their labels match.
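A minimal sketch of this matching criterion and of the micro-averaged scores (our own formulation, not the official evaluation code) follows.

```python
# Micro-averaged precision/recall/F1 with span-overlap entity matching (sketch).
def overlaps(a, b):
    """True if two inclusive (start, end) token spans share at least one position."""
    return a[0] <= b[1] and b[0] <= a[1]

def entity_counts(pred, gold):
    """pred/gold: lists of (label, (start, end)). Returns (tp, fp, fn)."""
    tp = sum(1 for pl, ps in pred
             if any(pl == gl and overlaps(ps, gs) for gl, gs in gold))
    fp = len(pred) - tp
    fn = sum(1 for gl, gs in gold
             if not any(pl == gl and overlaps(ps, gs) for pl, ps in pred))
    return tp, fp, fn

def micro_f1(tp, fp, fn):
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```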
Experimental results are reported in Table TABREF21. The statistical significance is evaluated through the Wilcoxon signed-rank test. When looking at the intent F1, HERMIT performs significantly better than Rasa $[Z=-2.701, p = .007]$ and LUIS $[Z=-2.807, p = .005]$. On the contrary, the improvements w.r.t. Dialogflow $[Z=-1.173, p = .241]$ do not seem to be significant. This is probably due to the high variance obtained by Dialogflow across the 10 folds. Watson is by a significant margin the most accurate system in recognising intents $[Z=-2.191, p = .028]$, especially due to its Precision score.
The hierarchical multi-task architecture of HERMIT seems to contribute strongly to entity tagging accuracy. In fact, in this task it performs significantly better than Rasa $[Z=-2.803, p = .005]$, Dialogflow $[Z=-2.803, p = .005]$, LUIS $[Z=-2.803, p = .005]$ and Watson $[Z=-2.805, p = .005]$, with F1 improvements ranging from $7.08$ to $35.92$ points.
Following BIBREF7, we then evaluated a metric that combines intent and entities, computed by simply summing up the two confusion matrices (Table TABREF23). Results highlight the contribution of the entity tagging task, where HERMIT outperforms the other approaches. Paired-samples t-tests were conducted to compare the HERMIT combined F1 against the other systems. The statistical analysis shows a significant improvement over Rasa $[Z=-2.803, p = .005]$, Dialogflow $[Z=-2.803, p = .005]$, LUIS $[Z=-2.803, p = .005]$ and Watson $[Z=-2.803, p = .005]$.
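The combined score simply pools the intent and entity counts before micro-averaging; a minimal sketch with toy counts (for illustration only) is given below.

```python
# Combined intent + entity score: pool the two confusion counts, then micro-average.
def combined_f1(intent_counts, entity_counts):
    """Each argument is a (tp, fp, fn) triple of counts over the test fold."""
    tp, fp, fn = (i + e for i, e in zip(intent_counts, entity_counts))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

print(combined_f1((90, 10, 12), (150, 30, 25)))  # toy counts, for illustration only
```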
Experimental Evaluation ::: Experiments on the NLU-Benchmark ::: Ablation study
In order to assess the contributions of the HERMIT's components, we performed an ablation study. The results are obtained on the NLU-BM, following the same setup as in Section SECREF16.
Results are shown in Table TABREF25. The first row refers to the complete architecture, while – SA shows the results of HERMIT without the self-attention mechanism. Then, from the latter we further remove shortcut connections (– SA/CN) and CRF taggers (– SA/CRF). The last row (– SA/CN/CRF) shows the results of a simple architecture, without self-attention, shortcuts, and CRF. Though the differences are not statistically significant, the contribution of the individual architectural components can be observed. The contribution of self-attention is distributed across all the tasks, with a small inclination towards the upstream ones. This means that while the entity tagging task is mostly lexicon-independent, it is easier to identify pivoting keywords for predicting the intent, e.g. the verb “schedule” triggering the calendar_set_event intent. The impact of shortcut connections is more evident on entity tagging. In fact, the effect provided by shortcut connections is that the information flowing throughout the hierarchical architecture allows higher layers to encode richer representations (i.e., original word embeddings + latent semantics from the previous task). Conversely, the presence of the CRF tagger affects mainly the lower levels of the hierarchical architecture. This is probably not due to their position in the hierarchy, but to the way the tasks have been designed. In fact, while the span of an entity is expected to cover a few tokens, in intent recognition (i.e., a combination of Scenario and Action recognition) the span always covers all the tokens of an utterance. The CRF therefore preserves the consistency of the IOB2 sequence structure. However, HERMIT seems to be the most stable architecture, both in terms of standard deviation and task performance, with a good balance between intent and entity recognition.
Experimental Evaluation ::: Experiments on the ROMULUS dataset
In this section we report the experiments performed on the ROMULUS dataset (Table TABREF27). Together with the evaluation metrics used in BIBREF7, we report the span F1, computed using the CoNLL-2000 shared task evaluation script, and the Exact Match (EM) accuracy of the entire sequence of labels. It is worth noting that the EM Combined score is computed as the conjunction of the three individual predictions – i.e., a match occurs only when all three sequences are correct.
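A minimal sketch of these exact-match scores (our own formulation) is given below.

```python
# Exact Match (EM) accuracy per task, and the Combined EM over the three tasks.
def exact_match(pred_seqs, gold_seqs):
    """pred_seqs/gold_seqs: lists of label sequences, one per sentence."""
    hits = sum(p == g for p, g in zip(pred_seqs, gold_seqs))
    return hits / len(gold_seqs)

def combined_exact_match(pred, gold):
    """pred/gold: dicts mapping task ("DA", "FR", "AR") to lists of label sequences."""
    n = len(gold["DA"])
    hits = sum(all(pred[t][i] == gold[t][i] for t in ("DA", "FR", "AR"))
               for i in range(n))
    return hits / n
```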
Results in terms of EM reflect the complexity of the different tasks, motivating their position within the hierarchy. Specifically, dialogue act identification is the easiest task ($89.31\%$) with respect to frame ($82.60\%$) and frame element ($79.73\%$), due to the shallow semantics it aims to capture. However, when looking at the span F1, its score ($89.42\%$) is lower than the frame element identification task ($92.26\%$). What happens is that even though the label set is smaller, dialogue act spans are supposed to be longer than frame element ones, sometimes covering the whole sentence. Frame elements, instead, are often only one or two tokens long, which contributes to increasing span-based metrics. Frame identification is the most complex task for several reasons. First, many frame spans are interlaced or even nested; this contributes to increasing the network entropy. Second, while the dialogue act label is highly related to syntactic structures, frame identification is often subject to the inherent ambiguity of language (e.g., get can evoke both Commerce_buy and Arriving). We also report the metrics in BIBREF7 for consistency. For dialogue act and frame tasks, scores provide just the extent to which the network is able to detect those labels. In fact, the metrics do not consider any span information, which is essential to solve and evaluate our tasks. However, the frame element scores are comparable to the benchmark, since the task is very similar.
Overall, getting back to the combined EM accuracy, HERMIT seems to be promising, with the network being able to reproduce all three gold sequences for almost $70\%$ of the cases. This result gives an idea of the architecture's behaviour over the entire pipeline.
Experimental Evaluation ::: Discussion
The experimental evaluation reported in this section provides different insights. The proposed architecture addresses the problem of NLU in wide-coverage conversational systems, modelling semantics through multiple Dialogue Acts and Frame-like structures in an end-to-end fashion. In addition, its hierarchical structure, which reflects the complexity of the single tasks, allows providing rich representations across the whole network. In this respect, we can affirm that the architecture successfully tackles the multi-task problem, with results that are promising in terms of usability and applicability of the system in real scenarios.
However, a thorough evaluation in the wild must be carried out, to assess to what extent the system is able to handle complex spoken language phenomena, such as repetitions, disfluencies, etc. To this end, a real scenario evaluation may open new research directions, by addressing new tasks to be included in the multi-task architecture. This is supported by the scalable nature of the proposed approach. Moreover, following BIBREF3, corpora providing different annotations can be exploited within the same multi-task network.
We also empirically showed how the same architectural design could be applied to a dataset addressing similar problems. In fact, a comparison with off-the-shelf tools shows the benefits provided by the hierarchical structure, with overall performance better than any current solution. An ablation study has been performed, assessing the contribution provided by the different components of the network. The results show how the shortcut connections help in the more fine-grained tasks, successfully encoding richer representations. CRFs help when longer spans are being predicted, which are more frequent in the upstream tasks.
Finally, the seq2seq design allowed us to obtain a multi-label approach, enabling the identification of multiple spans in the same utterance that might evoke different dialogue acts/frames. This represents a novelty for NLU in conversational systems, as such a problem has always been tackled as single-intent detection. However, the seq2seq approach also carries some limitations, especially on the Frame Semantics side. In fact, label sequences are linear structures, not suitable for representing nested predicates, a tough and common problem in Natural Language. For example, in the sentence “I want to buy a new mobile phone”, the [to buy a new mobile phone] span represents both the Desired_event frame element of the Desiring frame and a Commerce_buy frame at the same time. At the time of writing, we are working on modelling nested predicates through the application of bilinear models.
Future Work
We have started integrating a corpus of 5M sentences of real users chit-chatting with our conversational agent, though at the time of writing they represent only $16\%$ of the current dataset.
As already pointed out in Section SECREF28, there are some limitations in the current approach that need to be addressed. First, we have to assess the network's capability in handling typical phenomena of spontaneous spoken language input, such as repetitions and disfluencies BIBREF30. This may open new research directions, by including new tasks to identify/remove any kind of noise from the spoken input. Second, the seq2seq scheme does not deal with nested predicates, a common aspect of Natural Language. To the best of our knowledge, there is no architecture that implements an end-to-end network for FrameNet based semantic parsing. Following previous work BIBREF2, one of our future goals is to tackle such problems through hierarchical multi-task architectures that rely on bilinear models.
Conclusion
In this paper we presented HERMIT NLU, a hierarchical multi-task architecture for the semantic parsing of sentences for cross-domain spoken dialogue systems. The problem is addressed using a seq2seq model employing BiLSTM encoders and self-attention mechanisms, followed by CRF tagging layers. We evaluated HERMIT on the 25K-sentence NLU-Benchmark, where it outperforms state-of-the-art NLU tools such as Rasa, Dialogflow, LUIS and Watson, even without specific fine-tuning of the model.
Acknowledgement
This research was partially supported by the European Union's Horizon 2020 research and innovation programme under grant agreement No. 688147 (MuMMER project). | ROMULUS dataset, NLU-Benchmark dataset |
7c794fa0b2818d354ca666969107818a2ffdda0c | 7c794fa0b2818d354ca666969107818a2ffdda0c_0 | Q: What metrics other than entity tagging are compared?
Text: [verbatim duplicate of the HERMIT paper reproduced above] | We also report the metrics in BIBREF7 for consistency, we report the span F1, Exact Match (EM) accuracy of the entire sequence of labels, metric that combines intent and entities
1ef5fc4473105f1c72b4d35cf93d312736833d3d | 1ef5fc4473105f1c72b4d35cf93d312736833d3d_0 | Q: Do they provide decision sequences as supervision while training models?
Text: Introduction
Many machine reading comprehension (MRC) datasets have been released in recent years BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4 to benchmark a system's ability to understand and reason over natural language. Typically, these datasets require an MRC model to read through a document to answer a question about information contained therein.
The supporting document is, more often than not, static and fully observable. This raises concerns, since models may find answers simply through shallow pattern matching; e.g., syntactic similarity between the words in questions and documents. As pointed out by BIBREF5, for questions starting with when, models tend to predict the only date/time answer in the supporting document. Such behavior limits the generality and usefulness of MRC models, and suggests that they do not learn a proper `understanding' of the intended task. In this paper, to address this problem, we shift the focus of MRC data away from `spoon-feeding' models with sufficient information in fully observable, static documents. Instead, we propose interactive versions of existing MRC tasks, whereby the information needed to answer a question must be gathered sequentially.
The key idea behind our proposed interactive MRC (iMRC) is to restrict the document context that a model observes at one time. Concretely, we split a supporting document into its component sentences and withhold these sentences from the model. Given a question, the model must issue commands to observe sentences in the withheld set; we equip models with actions such as Ctrl+F (search for token) and stop for searching through partially observed documents. A model searches iteratively, conditioning each command on the input question and the sentences it has observed previously. Thus, our task requires models to `feed themselves' rather than spoon-feeding them with information. This casts MRC as a sequential decision-making problem amenable to reinforcement learning (RL).
As an initial case study, we repurpose two well-known, related corpora with different difficulty levels for our interactive MRC task: SQuAD and NewsQA. Table TABREF2 shows some examples of a model performing interactive MRC on these datasets. Naturally, our reframing makes the MRC problem harder; however, we believe the added demands of iMRC more closely match web-level QA and may lead to deeper comprehension of documents' content.
The main contributions of this work are as follows:
We describe a method to make MRC datasets interactive and formulate the new task as an RL problem.
We develop a baseline agent that combines a top performing MRC model and a state-of-the-art RL optimization algorithm and test it on our iMRC tasks.
We conduct experiments on several variants of iMRC and discuss the significant challenges posed by our setting.
Related Works
Skip-reading BIBREF6, BIBREF7, BIBREF8 is an existing setting in which MRC models read partial documents. Concretely, these methods assume that not all tokens in the input sequence are useful, and therefore learn to skip irrelevant tokens based on the current input and their internal memory. Since skipping decisions are discrete, the models are often optimized by the REINFORCE algorithm BIBREF9. For example, the structural-jump-LSTM proposed in BIBREF10 learns to skip and jump over chunks of text. In a similar vein, BIBREF11 designed a QA task where the model reads streaming data unidirectionally, without knowing when the question will be provided. Skip-reading approaches are limited in that they only consider jumping over a few consecutive tokens and the skipping operations are usually unidirectional. Based on the assumption that a single pass of reading may not provide sufficient information, multi-pass reading methods have also been studied BIBREF12, BIBREF13.
Compared to skip-reading and multi-turn reading, our work enables an agent to jump through a document in a more dynamic manner, in some sense combining aspects of skip-reading and re-reading. For example, it can jump forward, backward, or to an arbitrary position, depending on the query. This also distinguishes the model we develop in this work from ReasoNet BIBREF13, where an agent decides when to stop unidirectional reading.
Recently, BIBREF14 propose DocQN, which is a DQN-based agent that leverages the (tree) structure of documents and navigates across sentences and paragraphs. The proposed method has been shown to outperform vanilla DQN and IR baselines on the TriviaQA dataset. The main differences between our work and DocQN include: iMRC does not depend on extra meta information of documents (e.g., title, paragraph title) for building document trees as in DocQN; our proposed environment is partially observable, and thus an agent is required to explore and memorize the environment via interaction; and the action space in our setting (especially for the Ctrl+F command as defined in a later section) is arguably larger than the tree sampling action space in DocQN.
Closely related to iMRC is work by BIBREF15, in which the authors introduce a collection of synthetic tasks to train and test information-seeking capabilities in neural models. We extend that work by developing a realistic and challenging text-based task.
Broadly speaking, our approach is also linked to the optimal stopping problem in the literature on Markov decision processes (MDPs) BIBREF16, where at each time-step the agent either continues or stops and accumulates reward. Here, we reformulate conventional QA tasks through the lens of optimal stopping, in hopes of improving over the shallow matching behaviors exhibited by many MRC systems.
iMRC: Making MRC Interactive
We build the iSQuAD and iNewsQA datasets based on SQuAD v1.1 BIBREF0 and NewsQA BIBREF1. Both original datasets share similar properties. Specifically, every data-point consists of a tuple, $\lbrace p, q, a\rbrace $, where $p$ represents a paragraph, $q$ a question, and $a$ is the answer. The answer is a word span defined by head and tail positions in $p$. NewsQA is more difficult than SQuAD because it has a larger vocabulary, more difficult questions, and longer source documents.
We first split every paragraph $p$ into a list of sentences $\mathcal {S} = \lbrace s_1, s_2, ..., s_n\rbrace $, where $n$ stands for number of sentences in $p$. Given a question $q$, rather than showing the entire paragraph $p$, we only show an agent the first sentence $s_1$ and withhold the rest. The agent must issue commands to reveal the hidden sentences progressively and thereby gather the information needed to answer question $q$.
An agent decides when to stop interacting and output an answer, but the number of interaction steps is limited. Once an agent has exhausted its step budget, it is forced to answer the question.
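To make the construction concrete, the following minimal sketch shows how a $\lbrace p, q, a\rbrace $ tuple could be turned into a partially observable game; the naive sentence splitter and the default step budget are illustrative assumptions, not the exact preprocessing used to build iSQuAD and iNewsQA.

```python
import re

def make_imrc_game(paragraph, question, answer, max_steps=20):
    """Convert a {p, q, a} tuple into a partially observable iMRC game.

    Only the first sentence is revealed to the agent; the remaining
    sentences must be requested through interaction within `max_steps`.
    """
    # Naive sentence splitter, used purely for illustration.
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", paragraph) if s.strip()]
    game = {
        "sentences": sentences,   # hidden from the agent
        "question": question,
        "answer": answer,         # used only for evaluation and reward shaping
        "current_idx": 0,         # the agent starts at the first sentence
        "steps_left": max_steps,
    }
    return game, sentences[0]     # the initial observation is s_1
```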
iMRC: Making MRC Interactive ::: Interactive MRC as a POMDP
As described in the previous section, we convert MRC tasks into sequential decision-making problems (which we will refer to as games). These can be described naturally within the reinforcement learning (RL) framework. Formally, tasks in iMRC are partially observable Markov decision processes (POMDP) BIBREF17. An iMRC data-point is a discrete-time POMDP defined by $(S, T, A, \Omega , O, R, \gamma )$, where $\gamma \in [0, 1]$ is the discount factor and the other elements are described in detail below.
Environment States ($S$): The environment state at turn $t$ in the game is $s_t \in S$. It contains the complete internal information of the game, much of which is hidden from the agent. When an agent issues an action $a_t$, the environment transitions to state $s_{t+1}$ with probability $T(s_{t+1} | s_t, a_t)$. In this work, transition probabilities are either 0 or 1 (i.e., deterministic environment).
Actions ($A$): At each game turn $t$, the agent issues an action $a_t \in A$. We will elaborate on the action space of iMRC in the action space section.
Observations ($\Omega $): The text information perceived by the agent at a given game turn $t$ is the agent's observation, $o_t \in \Omega $, which depends on the environment state and the previous action with probability $O(o_t|s_t)$. In this work, observation probabilities are either 0 or 1 (i.e., noiseless observation).
Reward Function ($R$): Based on its actions, the agent receives rewards $r_t = R(s_t, a_t)$. Its objective is to maximize the expected discounted sum of rewards $E \left[\sum _t \gamma ^t r_t \right]$.
iMRC: Making MRC Interactive ::: Action Space
To better describe the action space of iMRC, we split an agent's actions into two phases: information gathering and question answering. During the information gathering phase, the agent interacts with the environment to collect knowledge. It answers questions with its accumulated knowledge in the question answering phase.
Information Gathering: At step $t$ of the information gathering phase, the agent can issue one of the following four actions to interact with the paragraph $p$, where $p$ consists of $n$ sentences and where the current observation corresponds to sentence $s_k,~1 \le k \le n$:
previous: jump to $ \small {\left\lbrace \begin{array}{ll} s_n & \text{if $k = 1$,}\\ s_{k-1} & \text{otherwise;} \end{array}\right.} $
next: jump to $ \small {\left\lbrace \begin{array}{ll} s_1 & \text{if $k = n$,}\\ s_{k+1} & \text{otherwise;} \end{array}\right.} $
Ctrl+F $<$query$>$: jump to the sentence that contains the next occurrence of “query”;
stop: terminate information gathering phase.
Question Answering: We follow the output format of both SQuAD and NewsQA, where an agent is required to point to the head and tail positions of an answer span within $p$. Assume that at step $t$ the agent stops interacting and the observation $o_t$ is $s_k$. The agent points to a head-tail position pair in $s_k$.
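The four information-gathering commands above define a small, deterministic transition function. Below is an illustrative sketch of such a step function, building on the game dictionary from the earlier sketch; the command encoding and return convention are assumptions made for readability, and Ctrl+F is implemented as exact substring matching, consistent with the query-type discussion below.

```python
def step(game, command, query=None):
    """Apply one information-gathering command and return (observation, done).

    `command` is one of "previous", "next", "ctrl_f", "stop";
    `query` is the search token used with "ctrl_f".
    """
    sentences = game["sentences"]
    k, n = game["current_idx"], len(game["sentences"])
    game["steps_left"] -= 1

    if command == "previous":              # wraps around to the last sentence
        k = n - 1 if k == 0 else k - 1
    elif command == "next":                # wraps around to the first sentence
        k = 0 if k == n - 1 else k + 1
    elif command == "ctrl_f" and query is not None:
        for offset in range(1, n + 1):     # jump to the next occurrence of `query`
            j = (k + offset) % n
            if query in sentences[j]:
                k = j
                break
    game["current_idx"] = k
    done = (command == "stop") or (game["steps_left"] <= 0)
    return sentences[k], done
```

Once `done` is True, the agent points to a head and a tail position inside the current observation to produce its answer.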
iMRC: Making MRC Interactive ::: Query Types
Given the question “When is the deadline of AAAI?”, as a human, one might try searching “AAAI” on a search engine, follow the link to the official AAAI website, then search for keywords “deadline” or “due date” on the website to jump to a specific paragraph. Humans have a deep understanding of questions because of their significant background knowledge. As a result, the keywords they use to search are not limited to what appears in the question.
Inspired by this observation, we study 3 query types for the Ctrl+F $<$query$>$ command.
One token from the question: the setting with smallest action space. Because iMRC deals with Ctrl+F commands by exact string matching, there is no guarantee that all sentences are accessible from question tokens only.
One token from the union of the question and the current observation: an intermediate level where the action space is larger.
One token from the dataset vocabulary: the action space is huge (see Table TABREF16 for statistics of SQuAD and NewsQA). It is guaranteed that all sentences in all documents are accessible through these tokens.
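A hedged sketch of how the three candidate sets for the Ctrl+F query token could be built; the whitespace tokenizer is an assumption and is only meant to illustrate the relative sizes of the action spaces.

```python
def ctrl_f_candidates(question, observation, vocab, query_type):
    """Return the set of tokens the agent may use as a Ctrl+F query."""
    question_tokens = set(question.split())
    if query_type == "question":
        return question_tokens                             # smallest action space
    if query_type == "question+observation":
        return question_tokens | set(observation.split())  # intermediate
    if query_type == "vocabulary":
        return set(vocab)                                   # huge (100k+ tokens)
    raise ValueError(f"unknown query type: {query_type}")
```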
iMRC: Making MRC Interactive ::: Evaluation Metric
Since iMRC involves both MRC and RL, we adopt evaluation metrics from both settings. First, as a question answering task, we use $\text{F}_1$ score to compare predicted answers against ground-truth, as in previous works. When there exist multiple ground-truth answers, we report the max $\text{F}_1$ score. Second, mastering multiple games remains quite challenging for RL agents. Therefore, we evaluate an agent's performance during both its training and testing phases. During training, we report training curves averaged over 3 random seeds. During test, we follow common practice in supervised learning tasks, where we report the agent's test performance corresponding to its best validation performance.
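For reference, the $\text{F}_1$ measure used here is the standard SQuAD-style token-overlap score; a minimal sketch is given below, omitting the usual normalization (lower-casing, punctuation and article stripping) for brevity.

```python
from collections import Counter

def token_f1(prediction, ground_truth):
    pred_tokens, gold_tokens = prediction.split(), ground_truth.split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

def max_f1(prediction, ground_truths):
    # With multiple ground-truth answers, the maximum F1 is reported.
    return max(token_f1(prediction, gt) for gt in ground_truths)
```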
Baseline Agent
As a baseline, we propose QA-DQN, an agent that adopts components from QANet BIBREF18 and adds an extra command generation module inspired by LSTM-DQN BIBREF19.
As illustrated in Figure FIGREF6, the agent consists of three components: an encoder, an action generator, and a question answerer. More precisely, at a game step $t$, the encoder reads observation string $o_t$ and question string $q$ to generate attention aggregated hidden representations $M_t$. Using $M_t$, the action generator outputs commands (defined in previous sections) to interact with iMRC. If the generated command is stop or the agent is forced to stop, the question answerer takes the current information at game step $t$ to generate head and tail pointers for answering the question; otherwise, the information gathering procedure continues.
In this section, we describe the high-level model structure and training strategies of QA-DQN. We refer readers to BIBREF18 for detailed information. We will release datasets and code in the near future.
Baseline Agent ::: Model Structure
In this section, we use game step $t$ to denote one round of interaction between an agent with the iMRC environment. We use $o_t$ to denote text observation at game step $t$ and $q$ to denote question text. We use $L$ to refer to a linear transformation. $[\cdot ;\cdot ]$ denotes vector concatenation.
Baseline Agent ::: Model Structure ::: Encoder
The encoder consists of an embedding layer, two stacks of transformer blocks (denoted as encoder transformer blocks and aggregation transformer blocks), and an attention layer.
In the embedding layer, we aggregate both word- and character-level embeddings. Word embeddings are initialized by the 300-dimension fastText BIBREF20 vectors trained on Common Crawl (600B tokens), and are fixed during training. Character embeddings are initialized by 200-dimension random vectors. A convolutional layer with 96 kernels of size 5 is used to aggregate the sequence of characters. We use a max pooling layer on the character dimension, then a multi-layer perceptron (MLP) of size 96 is used to aggregate the concatenation of word- and character-level representations. A highway network BIBREF21 is used on top of this MLP. The resulting vectors are used as input to the encoding transformer blocks.
Each encoding transformer block consists of four convolutional layers (with shared weights), a self-attention layer, and an MLP. Each convolutional layer has 96 filters, each kernel's size is 7. In the self-attention layer, we use a block hidden size of 96 and a single head attention mechanism. Layer normalization and dropout are applied after each component inside the block. We add positional encoding into each block's input. We use one layer of such an encoding block.
At a game step $t$, the encoder processes text observation $o_t$ and question $q$ to generate context-aware encodings $h_{o_t} \in \mathbb {R}^{L^{o_t} \times H_1}$ and $h_q \in \mathbb {R}^{L^{q} \times H_1}$, where $L^{o_t}$ and $L^{q}$ denote length of $o_t$ and $q$ respectively, $H_1$ is 96.
Following BIBREF18, we use a context-query attention layer to aggregate the two representations $h_{o_t}$ and $h_q$. Specifically, the attention layer first uses two MLPs to map $h_{o_t}$ and $h_q$ into the same space, with the resulting representations denoted as $h_{o_t}^{\prime } \in \mathbb {R}^{L^{o_t} \times H_2}$ and $h_q^{\prime } \in \mathbb {R}^{L^{q} \times H_2}$, in which, $H_2$ is 96.
Then, a tri-linear similarity function is used to compute the similarities between each pair of $h_{o_t}^{\prime }$ and $h_q^{\prime }$ items:
where $\odot $ indicates element-wise multiplication and $w$ is a trainable parameter vector of size 96.
We apply softmax to the resulting similarity matrix $S$ along both dimensions, producing $S^A$ and $S^B$. Information in the two representations is then aggregated as
where $h_{oq}$ is the aggregated observation representation.
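The attention equations themselves are not reproduced above. The sketch below follows the standard BiDAF/QANet formulation of the tri-linear similarity and the two-way aggregation, and should be read as an assumed reconstruction; in particular, the weight dimensions are illustrative rather than the exact ones used in the model.

```python
import numpy as np

def softmax(x, axis):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def context_query_attention(h_o, h_q, w):
    """h_o: (L_o, H) observation encoding, h_q: (L_q, H) question encoding,
    w: (3 * H,) trainable vector of the tri-linear similarity."""
    L_o, L_q = h_o.shape[0], h_q.shape[0]
    S = np.empty((L_o, L_q))
    for i in range(L_o):
        for j in range(L_q):
            # S[i, j] = w . [h_o_i ; h_q_j ; h_o_i * h_q_j]
            S[i, j] = w @ np.concatenate([h_o[i], h_q[j], h_o[i] * h_q[j]])
    S_A = softmax(S, axis=1)        # attend over question tokens
    S_B = softmax(S, axis=0)        # attend over observation tokens
    A = S_A @ h_q                   # observation-to-question attention
    B = S_A @ S_B.T @ h_o           # question-to-observation attention
    h_oq = np.concatenate([h_o, A, h_o * A, h_o * B], axis=-1)
    return h_oq
```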
On top of the attention layer, a stack of aggregation transformer blocks is used to further map the observation representations to action representations and answer representations. The configuration parameters are the same as the encoder transformer blocks, except there are two convolution layers (with shared weights), and the number of blocks is 7.
Let $M_t \in \mathbb {R}^{L^{o_t} \times H_3}$ denote the output of the stack of aggregation transformer blocks, in which $H_3$ is 96.
Baseline Agent ::: Model Structure ::: Action Generator
The action generator takes $M_t$ as input and estimates Q-values for all possible actions. As described in the previous section, when an action is a Ctrl+F command, it is composed of two tokens (the token “Ctrl+F” and the query token). Therefore, the action generator consists of three MLPs:
Here, $L_{shared} \in \mathbb {R}^{95 \times 150}$; $L_{action}$ has an output size of 4 or 2 depending on the number of actions available; and the output size of $L_{ctrlf}$ is the same as the dataset's vocabulary size (depending on the query type setting, we mask out words in the vocabulary that are not query candidates). The overall Q-value is simply the sum of the two components:
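A small sketch of how the two Q-value heads can be combined into a single command value; the ReLU non-linearity and the exact tensor shapes are assumptions, and invalid query tokens are handled with a simple additive mask.

```python
import numpy as np

def command_q_values(state_repr, W_shared, W_action, W_ctrlf, ctrlf_mask):
    """state_repr: pooled vector derived from M_t.

    Returns Q-values over the base actions and over every Ctrl+F query token.
    The Q-value of the composed command "Ctrl+F token_i" is
    q_action[CTRL_F] + q_ctrlf[i].
    """
    shared = np.maximum(0.0, W_shared @ state_repr)    # shared MLP (assumed ReLU)
    q_action = W_action @ shared                        # (num_actions,)
    q_ctrlf = W_ctrlf @ shared                          # (vocab_size,)
    q_ctrlf = np.where(ctrlf_mask, q_ctrlf, -1e9)       # mask non-candidate tokens
    return q_action, q_ctrlf
```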
Baseline Agent ::: Model Structure ::: Question Answerer
Following BIBREF18, we append two extra stacks of aggregation transformer blocks on top of the encoder to compute head and tail positions:
Here, $M_{head}$ and $M_{tail}$ are outputs of the two extra transformer stacks, $L_0$, $L_1$, $L_2$ and $L_3$ are trainable parameters with output size 150, 150, 1 and 1, respectively.
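The pointer equations are not spelled out above; a QANet-style reconstruction, which should be treated as an assumption rather than the authors' exact parameterization, is sketched here.

```python
import numpy as np

def answer_pointers(M, M_head, M_tail, L0, L1, L2, L3):
    """M, M_head, M_tail: (L_o, H) outputs of the aggregation stacks.

    L0 and L1 map concatenated features to 150 dimensions; L2 and L3 map
    them to a single logit per token position (matching the stated sizes)."""
    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    head_feats = np.maximum(0.0, np.concatenate([M, M_head], axis=-1) @ L0)  # (L_o, 150)
    tail_feats = np.maximum(0.0, np.concatenate([M, M_tail], axis=-1) @ L1)  # (L_o, 150)
    p_head = softmax(head_feats @ L2)   # distribution over head positions
    p_tail = softmax(tail_feats @ L3)   # distribution over tail positions
    return int(p_head.argmax()), int(p_tail.argmax())
```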
Baseline Agent ::: Memory and Reward Shaping ::: Memory
In iMRC, some questions may not be easily answerable based only on observation of a single sentence. To overcome this limitation, we provide an explicit memory mechanism to QA-DQN. Specifically, we use a queue to store strings that have been observed recently. The queue has a limited number of slots (we use queues with 1, 3, or 5 slots in this work). This prevents the agent from simply issuing next commands until it has observed the environment fully, in which case our task would degenerate to the standard MRC setting. The memory slots are reset episodically.
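The explicit memory can be realized with a fixed-size queue; a minimal sketch, assuming the concatenated memory text is what the encoder reads:

```python
from collections import deque

class ObservationMemory:
    """Keeps the most recently observed sentences; the oldest are evicted."""

    def __init__(self, num_slots=3):          # 1, 3 or 5 slots in our experiments
        self.slots = deque(maxlen=num_slots)

    def push(self, observation):
        self.slots.append(observation)

    def as_text(self):
        return " ".join(self.slots)            # what the encoder consumes

    def reset(self):                           # called at the start of each episode
        self.slots.clear()
```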
Baseline Agent ::: Memory and Reward Shaping ::: Reward Shaping
Because the question answerer in QA-DQN is a pointing model, its performance relies heavily on whether the agent can find and stop at the sentence that contains the answer. We design a heuristic reward to encourage and guide this behavior. In particular, we assign a reward if the agent halts at game step $k$ and the answer is a sub-string of $o_k$ (if larger memory slots are used, we assign this reward if the answer is a sub-string of the memory at game step $k$). We denote this reward as the sufficient information reward, since, if an agent sees the answer, it should have a good chance of having gathered sufficient information for the question (although this is not guaranteed).
Note this sufficient information reward is part of the design of QA-DQN, whereas the question answering score is the only metric used to evaluate an agent's performance on the iMRC task.
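A sketch of the heuristic reward, assuming the check is performed against the concatenated memory text from the sketch above:

```python
def sufficient_information_reward(memory_text, ground_truth_answers):
    """1.0 if any ground-truth answer string occurs in what the agent has
    observed when it stops, 0.0 otherwise."""
    return 1.0 if any(a in memory_text for a in ground_truth_answers) else 0.0
```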
Baseline Agent ::: Memory and Reward Shaping ::: Ctrl+F Only Mode
As mentioned above, an agent might bypass Ctrl+F actions and explore an iMRC game only via next commands. We study this possibility in an ablation study, where we limit the agent to the Ctrl+F and stop commands. In this setting, an agent is forced to explore by means of search queries.
Baseline Agent ::: Training Strategy
In this section, we describe our training strategy. We split the training pipeline into two parts for easy comprehension. We use Adam BIBREF22 as the step rule for optimization in both parts, with the learning rate set to 0.00025.
Baseline Agent ::: Training Strategy ::: Action Generation
iMRC games are interactive environments. We use an RL training algorithm to train the interactive information-gathering behavior of QA-DQN. We adopt the Rainbow algorithm proposed by BIBREF23, which integrates several extensions to the original Deep Q-Learning algorithm BIBREF24. Rainbow exhibits state-of-the-art performance on several RL benchmark tasks (e.g., Atari games).
During game playing, we use a mini-batch of size 10 and push all transitions (observation string, question string, generated command, reward) into a replay buffer of size 500,000. We do not compute losses directly using these transitions. After every 5 game steps, we randomly sample a mini-batch of 64 transitions from the replay buffer, compute loss, and update the network.
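A schematic and heavily simplified training loop matching the schedule above; the environment batch, the acting policy and the Rainbow update are abstracted behind assumed interfaces, since reproducing them faithfully is beyond the scope of a sketch.

```python
import random

def train_action_generator(env_batch, agent, rainbow_update,
                           replay_buffer, max_buffer=500_000,
                           update_every=5, replay_sample_size=64):
    """Play a mini-batch of games, store transitions, and update periodically.

    `rainbow_update` stands in for one Rainbow loss/optimizer step
    (Adam with learning rate 0.00025)."""
    step = 0
    observations, questions = env_batch.reset()          # mini-batch of 10 games
    while not env_batch.all_done():
        commands = agent.act(observations, questions)     # derived from Q-values
        next_observations, rewards = env_batch.step(commands)
        for o, q, c, r in zip(observations, questions, commands, rewards):
            if len(replay_buffer) >= max_buffer:
                replay_buffer.pop(0)                       # drop the oldest transition
            replay_buffer.append((o, q, c, r))
        observations = next_observations
        step += 1
        if step % update_every == 0 and len(replay_buffer) >= replay_sample_size:
            batch = random.sample(replay_buffer, replay_sample_size)
            rainbow_update(batch)                          # one Q-learning update
```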
Detailed hyper-parameter settings for action generation are shown in Table TABREF38.
Baseline Agent ::: Training Strategy ::: Question Answering
Similarly, we use another replay buffer to store question answering transitions (observation string when interaction stops, question string, ground-truth answer).
Because both iSQuAD and iNewsQA are converted from datasets that provide ground-truth answer positions, we can leverage this information and train the question answerer with supervised learning. Specifically, we only push question answering transitions when the ground-truth answer is in the observation string. For each transition, we convert the ground-truth answer head- and tail-positions from the SQuAD and NewsQA datasets to positions in the current observation string. After every 5 game steps, we randomly sample a mini-batch of 64 transitions from the replay buffer and train the question answerer using the Negative Log-Likelihood (NLL) loss. We use a dropout rate of 0.1.
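The key implementation detail is remapping the gold answer span onto the current observation string before computing the NLL loss; a sketch of that remapping, using whitespace tokenization and substring search as simplifying assumptions:

```python
def make_qa_transition(observation, question, answer_text):
    """Return (observation, question, head, tail), or None when the
    ground-truth answer does not occur in the observation string."""
    char_start = observation.find(answer_text)
    if char_start == -1:
        return None                      # only push transitions containing the answer
    tokens = observation.split()
    head, seen = 0, 0
    for i, tok in enumerate(tokens):     # convert character offset to token index
        if char_start < seen + len(tok):
            head = i
            break
        seen += len(tok) + 1             # +1 for the whitespace separator
    # Assumes the answer aligns with token boundaries, which holds for most spans.
    tail = head + len(answer_text.split()) - 1
    return observation, question, head, tail
```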
Experimental Results
In this study, we focus on three factors and their effects on iMRC and the performance of the QA-DQN agent:
different Ctrl+F strategies, as described in the action space section;
enabled vs. disabled next and previous actions;
different memory slot sizes.
Below we report the baseline agent's training performance followed by its generalization performance on test data.
Experimental Results ::: Mastering Training Games
It remains difficult for RL agents to master multiple games at the same time. In our case, each document-question pair can be considered a unique game, and there are hundreds of thousands of them. Therefore, as is common practice in the RL literature, we study an agent's training curves.
Due to space limitations, we select several representative settings to discuss in this section and provide QA-DQN's training and evaluation curves for all experimental settings in the Appendix. We also provide the agent's sufficient information rewards (i.e., whether the agent stopped at a state where the observation contains the answer) during training in the Appendix.
Figure FIGREF36 shows QA-DQN's training performance ($\text{F}_1$ score) when next and previous actions are available. Figure FIGREF40 shows QA-DQN's training performance ($\text{F}_1$ score) when next and previous actions are disabled. Note that all training curves are averaged over 3 runs with different random seeds and all evaluation curves show the one run with max validation performance among the three.
From Figure FIGREF36, we can see that the three Ctrl+F strategies show similar difficulty levels when next and previous are available, although QA-DQN works slightly better when selecting a word from the question as query (especially on iNewsQA). However, from Figure FIGREF40 we observe that when next and previous are disabled, QA-DQN shows a significant advantage when selecting a word from the question as query. This may be due to the fact that, when an agent must use Ctrl+F to navigate within documents, the set of question words forms a much smaller action space than in the other two settings. In the 4-action setting, an agent can rely on issuing next and previous actions to reach any sentence in a document.
The effect of action space size on model performance is particularly clear when using a dataset's entire vocabulary as query candidates in the 2-action setting. From Figure FIGREF40 (and figures with sufficient information rewards in the Appendix) we see QA-DQN has a hard time learning in this setting. As shown in Table TABREF16, both datasets have a vocabulary size of more than 100k. This is much larger than in the other two settings, where on average the length of questions is around 10. This suggests that methods with better sample efficiency are needed to act in more realistic problem settings with huge action spaces.
Experiments also show that a larger memory slot size always helps. Intuitively, with a memory mechanism (either implicit or explicit), an agent could make the environment closer to fully observed by exploring and memorizing observations. Presumably, a larger memory may further improve QA-DQN's performance, but considering the average number of sentences in each iSQuAD game is 5, a memory with more than 5 slots will defeat the purpose of our study of partially observable text environments.
Not surprisingly, QA-DQN performs worse in general on iNewsQA, in all experiments. As shown in Table TABREF16, the average number of sentences per document in iNewsQA is about 6 times more than in iSQuAD. This is analogous to games with larger maps in the RL literature, where the environment is partially observable. A better exploration (in our case, jumping) strategy may help QA-DQN to master such harder games.
Experimental Results ::: Generalizing to Test Set
To study QA-DQN's ability to generalize, we select the best performing agent in each experimental setting on the validation set and report their performance on the test set. The agent's test performance is reported in Table TABREF41. In addition, to support our claim that the challenging part of iMRC tasks is information seeking rather than answering questions given sufficient information, we also report the $\text{F}_1$ score of an agent when it has reached the piece of text that contains the answer, which we denote as $\text{F}_{1\text{info}}$.
From Table TABREF41 (and validation curves provided in the appendix) we can observe that QA-DQN's performance during evaluation matches its training performance in most settings. $\text{F}_{1\text{info}}$ scores are consistently higher than the overall $\text{F}_1$ scores, and they have much less variance across different settings. This supports our hypothesis that information seeking plays an important role in solving iMRC tasks, whereas question answering given the necessary information is relatively straightforward. This also suggests that an interactive agent that can better navigate to important sentences is very likely to achieve better performance on iMRC tasks.
Discussion and Future Work
In this work, we propose and explore the direction of converting MRC datasets into interactive environments. We believe interactive, information-seeking behavior is desirable for neural MRC systems when knowledge sources are partially observable and/or too large to encode in their entirety — for instance, when searching for information on the internet, where knowledge is by design easily accessible to humans through interaction.
Despite being restricted, our proposed task presents major challenges to existing techniques. iMRC lies at the intersection of NLP and RL, which is arguably less studied in existing literature. We hope to encourage researchers from both NLP and RL communities to work toward solving this task.
For our baseline, we adopted an off-the-shelf, top-performing MRC model and RL method. Either component can be replaced straightforwardly with other methods (e.g., to utilize a large-scale pretrained language model).
Our proposed setup and baseline agent presently use only a single word with the query command. However, a host of other options should be considered in future work. For example, multi-word queries with fuzzy matching are more realistic. It would also be interesting for an agent to generate a vector representation of the query in some latent space. This vector could then be compared with precomputed document representations (e.g., in an open domain QA dataset) to determine what text to observe next, with such behavior tantamount to learning to do IR.
As mentioned, our idea for reformulating existing MRC datasets as partially observable and interactive environments is straightforward and general. Almost all MRC datasets can be used to study interactive, information-seeking behavior through similar modifications. We hypothesize that such behavior can, in turn, help in solving real-world MRC problems involving search. | No |
5f9bd99a598a4bbeb9d2ac46082bd3302e961a0f | 5f9bd99a598a4bbeb9d2ac46082bd3302e961a0f_0 | Q: What are the models evaluated on?
| They evaluate F1 score and agent's test performance on their own built interactive datasets (iSQuAD and iNewsQA)
b2fab9ffbcf1d6ec6d18a05aeb6e3ab9a4dbf2ae | b2fab9ffbcf1d6ec6d18a05aeb6e3ab9a4dbf2ae_0 | Q: How do they train models in this setup?
Text: Introduction
Many machine reading comprehension (MRC) datasets have been released in recent years BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4 to benchmark a system's ability to understand and reason over natural language. Typically, these datasets require an MRC model to read through a document to answer a question about information contained therein.
The supporting document is, more often than not, static and fully observable. This raises concerns, since models may find answers simply through shallow pattern matching; e.g., syntactic similarity between the words in questions and documents. As pointed out by BIBREF5, for questions starting with when, models tend to predict the only date/time answer in the supporting document. Such behavior limits the generality and usefulness of MRC models, and suggests that they do not learn a proper `understanding' of the intended task. In this paper, to address this problem, we shift the focus of MRC data away from `spoon-feeding' models with sufficient information in fully observable, static documents. Instead, we propose interactive versions of existing MRC tasks, whereby the information needed to answer a question must be gathered sequentially.
The key idea behind our proposed interactive MRC (iMRC) is to restrict the document context that a model observes at one time. Concretely, we split a supporting document into its component sentences and withhold these sentences from the model. Given a question, the model must issue commands to observe sentences in the withheld set; we equip models with actions such as Ctrl+F (search for token) and stop for searching through partially observed documents. A model searches iteratively, conditioning each command on the input question and the sentences it has observed previously. Thus, our task requires models to `feed themselves' rather than spoon-feeding them with information. This casts MRC as a sequential decision-making problem amenable to reinforcement learning (RL).
As an initial case study, we repurpose two well known, related corpora with different difficulty levels for our interactive MRC task: SQuAD and NewsQA. Table TABREF2 shows some examples of a model performing interactive MRC on these datasets. Naturally, our reframing makes the MRC problem harder; however, we believe the added demands of iMRC more closely match web-level QA and may lead to deeper comprehension of documents' content.
The main contributions of this work are as follows:
We describe a method to make MRC datasets interactive and formulate the new task as an RL problem.
We develop a baseline agent that combines a top performing MRC model and a state-of-the-art RL optimization algorithm and test it on our iMRC tasks.
We conduct experiments on several variants of iMRC and discuss the significant challenges posed by our setting.
Related Works
Skip-reading BIBREF6, BIBREF7, BIBREF8 is an existing setting in which MRC models read partial documents. Concretely, these methods assume that not all tokens in the input sequence are useful, and therefore learn to skip irrelevant tokens based on the current input and their internal memory. Since skipping decisions are discrete, the models are often optimized by the REINFORCE algorithm BIBREF9. For example, the structural-jump-LSTM proposed in BIBREF10 learns to skip and jump over chunks of text. In a similar vein, BIBREF11 designed a QA task where the model reads streaming data unidirectionally, without knowing when the question will be provided. Skip-reading approaches are limited in that they only consider jumping over a few consecutive tokens and the skipping operations are usually unidirectional. Based on the assumption that a single pass of reading may not provide sufficient information, multi-pass reading methods have also been studied BIBREF12, BIBREF13.
Compared to skip-reading and multi-turn reading, our work enables an agent to jump through a document in a more dynamic manner, in some sense combining aspects of skip-reading and re-reading. For example, it can jump forward, backward, or to an arbitrary position, depending on the query. This also distinguishes the model we develop in this work from ReasoNet BIBREF13, where an agent decides when to stop unidirectional reading.
Recently, BIBREF14 propose DocQN, which is a DQN-based agent that leverages the (tree) structure of documents and navigates across sentences and paragraphs. The proposed method has been shown to outperform vanilla DQN and IR baselines on the TriviaQA dataset. The main differences between our work and DocQN include: iMRC does not depend on extra meta information of documents (e.g., title, paragraph title) for building document trees as in DocQN; our proposed environment is partially observable, and thus an agent is required to explore and memorize the environment via interaction; the action space in our setting (especially for the Ctrl+F command as defined in a later section) is arguably larger than the tree sampling action space in DocQN.
Closely related to iMRC is work by BIBREF15, in which the authors introduce a collection of synthetic tasks to train and test information-seeking capabilities in neural models. We extend that work by developing a realistic and challenging text-based task.
Broadly speaking, our approach is also linked to the optimal stopping problem in the literature on Markov decision processes (MDPs) BIBREF16, where at each time-step the agent either continues or stops and accumulates reward. Here, we reformulate conventional QA tasks through the lens of optimal stopping, in hopes of improving over the shallow matching behaviors exhibited by many MRC systems.
iMRC: Making MRC Interactive
We build the iSQuAD and iNewsQA datasets based on SQuAD v1.1 BIBREF0 and NewsQA BIBREF1. Both original datasets share similar properties. Specifically, every data-point consists of a tuple, $\lbrace p, q, a\rbrace $, where $p$ represents a paragraph, $q$ a question, and $a$ is the answer. The answer is a word span defined by head and tail positions in $p$. NewsQA is more difficult than SQuAD because it has a larger vocabulary, more difficult questions, and longer source documents.
We first split every paragraph $p$ into a list of sentences $\mathcal {S} = \lbrace s_1, s_2, ..., s_n\rbrace $, where $n$ stands for number of sentences in $p$. Given a question $q$, rather than showing the entire paragraph $p$, we only show an agent the first sentence $s_1$ and withhold the rest. The agent must issue commands to reveal the hidden sentences progressively and thereby gather the information needed to answer question $q$.
An agent decides when to stop interacting and output an answer, but the number of interaction steps is limited. Once an agent has exhausted its step budget, it is forced to answer the question.
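A minimal sketch of this conversion is shown below; the sentence splitter (NLTK's sent_tokenize), the field names, and the default step budget are illustrative assumptions rather than the exact iSQuAD/iNewsQA preprocessing.

```python
# Requires the NLTK "punkt" models: nltk.download("punkt")
from nltk.tokenize import sent_tokenize

def make_interactive(paragraph, question, answer, max_steps=20):
    sentences = sent_tokenize(paragraph)   # S = {s_1, ..., s_n}
    return {
        "sentences": sentences,            # withheld from the agent
        "question": question,
        "answer": answer,                  # used only for evaluation / reward shaping
        "cursor": 0,                       # the agent starts at s_1
        "steps_left": max_steps,           # forced to answer once exhausted
    }

game = make_interactive("First sentence. Second sentence.", "Q?", "answer")
print(game["sentences"][game["cursor"]])   # only s_1 is observable at first
```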
iMRC: Making MRC Interactive ::: Interactive MRC as a POMDP
As described in the previous section, we convert MRC tasks into sequential decision-making problems (which we will refer to as games). These can be described naturally within the reinforcement learning (RL) framework. Formally, tasks in iMRC are partially observable Markov decision processes (POMDP) BIBREF17. An iMRC data-point is a discrete-time POMDP defined by $(S, T, A, \Omega , O, R, \gamma )$, where $\gamma \in [0, 1]$ is the discount factor and the other elements are described in detail below.
Environment States ($S$): The environment state at turn $t$ in the game is $s_t \in S$. It contains the complete internal information of the game, much of which is hidden from the agent. When an agent issues an action $a_t$, the environment transitions to state $s_{t+1}$ with probability $T(s_{t+1} | s_t, a_t)$. In this work, transition probabilities are either 0 or 1 (i.e., a deterministic environment).
Actions ($A$): At each game turn $t$, the agent issues an action $a_t \in A$. We will elaborate on the action space of iMRC in the action space section.
Observations ($\Omega $): The text information perceived by the agent at a given game turn $t$ is the agent's observation, $o_t \in \Omega $, which depends on the environment state and the previous action with probability $O(o_t|s_t)$. In this work, observation probabilities are either 0 or 1 (i.e., noiseless observation).
Reward Function ($R$): Based on its actions, the agent receives rewards $r_t = R(s_t, a_t)$. Its objective is to maximize the expected discounted sum of rewards $E \left[\sum _t \gamma ^t r_t \right]$.
iMRC: Making MRC Interactive ::: Action Space
To better describe the action space of iMRC, we split an agent's actions into two phases: information gathering and question answering. During the information gathering phase, the agent interacts with the environment to collect knowledge. It answers questions with its accumulated knowledge in the question answering phase.
Information Gathering: At step $t$ of the information gathering phase, the agent can issue one of the following four actions to interact with the paragraph $p$, where $p$ consists of $n$ sentences and where the current observation corresponds to sentence $s_k,~1 \le k \le n$:
previous: jump to $ \small {\left\lbrace \begin{array}{ll} s_n & \text{if $k = 1$,}\\ s_{k-1} & \text{otherwise;} \end{array}\right.} $
next: jump to $ \small {\left\lbrace \begin{array}{ll} s_1 & \text{if $k = n$,}\\ s_{k+1} & \text{otherwise;} \end{array}\right.} $
Ctrl+F $<$query$>$: jump to the sentence that contains the next occurrence of “query”;
stop: terminate information gathering phase.
Question Answering: We follow the output format of both SQuAD and NewsQA, where an agent is required to point to the head and tail positions of an answer span within $p$. Assume that at step $t$ the agent stops interacting and the observation $o_t$ is $s_k$. The agent points to a head-tail position pair in $s_k$.
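The four information-gathering commands above can be sketched as a single environment step function. The wrap-around behavior of previous/next and the exact-match semantics of Ctrl+F follow the definitions above; starting the Ctrl+F search from the sentence after the current one is an assumption, and all names are illustrative.

```python
def step(game, command, query=None):
    # `game` is the dict built in the conversion sketch above.
    n = len(game["sentences"])
    k = game["cursor"]
    game["steps_left"] -= 1
    if command == "previous":
        game["cursor"] = n - 1 if k == 0 else k - 1
    elif command == "next":
        game["cursor"] = 0 if k == n - 1 else k + 1
    elif command == "ctrl+f":
        # Jump to the next sentence containing an exact occurrence of `query`,
        # searching forward from the sentence after the current one and wrapping.
        for offset in range(1, n + 1):
            j = (k + offset) % n
            if query in game["sentences"][j]:
                game["cursor"] = j
                break
    done = command == "stop" or game["steps_left"] <= 0
    observation = game["sentences"][game["cursor"]]
    return observation, done   # the answer phase (head/tail pointing) is not modeled here
```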
iMRC: Making MRC Interactive ::: Query Types
Given the question “When is the deadline of AAAI?”, as a human, one might try searching “AAAI” on a search engine, follow the link to the official AAAI website, then search for keywords “deadline” or “due date” on the website to jump to a specific paragraph. Humans have a deep understanding of questions because of their significant background knowledge. As a result, the keywords they use to search are not limited to what appears in the question.
Inspired by this observation, we study 3 query types for the Ctrl+F $<$query$>$ command.
One token from the question: the setting with smallest action space. Because iMRC deals with Ctrl+F commands by exact string matching, there is no guarantee that all sentences are accessible from question tokens only.
One token from the union of the question and the current observation: an intermediate level where the action space is larger.
One token from the dataset vocabulary: the action space is huge (see Table TABREF16 for statistics of SQuAD and NewsQA). It is guaranteed that all sentences in all documents are accessible through these tokens.
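The three candidate sets above can be built as in the following sketch; whitespace tokenization is a simplification of the actual preprocessing.

```python
def query_candidates(question, observation, vocab, setting):
    # Whitespace tokenization is a simplification of the real preprocessing.
    if setting == "question":
        return set(question.split())
    if setting == "question+observation":
        return set(question.split()) | set(observation.split())
    if setting == "vocab":
        return set(vocab)   # >100k candidates for SQuAD/NewsQA
    raise ValueError(setting)
```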
iMRC: Making MRC Interactive ::: Evaluation Metric
Since iMRC involves both MRC and RL, we adopt evaluation metrics from both settings. First, as a question answering task, we use $\text{F}_1$ score to compare predicted answers against ground-truth, as in previous works. When there exist multiple ground-truth answers, we report the max $\text{F}_1$ score. Second, mastering multiple games remains quite challenging for RL agents. Therefore, we evaluate an agent's performance during both its training and testing phases. During training, we report training curves averaged over 3 random seeds. During test, we follow common practice in supervised learning tasks where we report the agent's test performance corresponding to its best validation performance.
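The token-level $\text{F}_1$ with a max over multiple ground-truth answers can be computed as below; the answer normalization used by the official SQuAD evaluation script (lowercasing, punctuation and article stripping) is omitted for brevity.

```python
from collections import Counter

def f1_score(prediction, ground_truth):
    pred_tokens = prediction.split()
    gold_tokens = ground_truth.split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

def max_f1(prediction, ground_truths):
    return max(f1_score(prediction, gt) for gt in ground_truths)
```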
Baseline Agent
As a baseline, we propose QA-DQN, an agent that adopts components from QANet BIBREF18 and adds an extra command generation module inspired by LSTM-DQN BIBREF19.
As illustrated in Figure FIGREF6, the agent consists of three components: an encoder, an action generator, and a question answerer. More precisely, at a game step $t$, the encoder reads observation string $o_t$ and question string $q$ to generate attention aggregated hidden representations $M_t$. Using $M_t$, the action generator outputs commands (defined in previous sections) to interact with iMRC. If the generated command is stop or the agent is forced to stop, the question answerer takes the current information at game step $t$ to generate head and tail pointers for answering the question; otherwise, the information gathering procedure continues.
In this section, we describe the high-level model structure and training strategies of QA-DQN. We refer readers to BIBREF18 for detailed information. We will release datasets and code in the near future.
Baseline Agent ::: Model Structure
In this section, we use game step $t$ to denote one round of interaction between an agent and the iMRC environment. We use $o_t$ to denote the text observation at game step $t$ and $q$ to denote the question text. We use $L$ to refer to a linear transformation. $[\cdot ;\cdot ]$ denotes vector concatenation.
Baseline Agent ::: Model Structure ::: Encoder
The encoder consists of an embedding layer, two stacks of transformer blocks (denoted as encoder transformer blocks and aggregation transformer blocks), and an attention layer.
In the embedding layer, we aggregate both word- and character-level embeddings. Word embeddings are initialized by the 300-dimension fastText BIBREF20 vectors trained on Common Crawl (600B tokens), and are fixed during training. Character embeddings are initialized by 200-dimension random vectors. A convolutional layer with 96 kernels of size 5 is used to aggregate the sequence of characters. We use a max pooling layer on the character dimension, then a multi-layer perceptron (MLP) of size 96 is used to aggregate the concatenation of word- and character-level representations. A highway network BIBREF21 is used on top of this MLP. The resulting vectors are used as input to the encoding transformer blocks.
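A PyTorch sketch of this embedding layer is given below, assuming pre-tokenized word and character indices; the highway network and the exact MLP configuration are omitted, so this is an approximation of the described layer rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class EmbeddingLayer(nn.Module):
    def __init__(self, word_vectors, n_chars):
        super().__init__()
        self.word_emb = nn.Embedding.from_pretrained(word_vectors, freeze=True)  # 300-d fastText
        self.char_emb = nn.Embedding(n_chars, 200)
        self.char_conv = nn.Conv1d(200, 96, kernel_size=5, padding=2)
        self.proj = nn.Linear(300 + 96, 96)   # stand-in for the size-96 MLP

    def forward(self, word_ids, char_ids):
        # word_ids: (B, L); char_ids: (B, L, W) character indices per word
        w = self.word_emb(word_ids)                          # (B, L, 300)
        B, L, W = char_ids.shape
        c = self.char_emb(char_ids).view(B * L, W, 200)
        c = self.char_conv(c.transpose(1, 2))                # (B*L, 96, W)
        c = c.max(dim=2).values.view(B, L, 96)               # max pool over characters
        return torch.relu(self.proj(torch.cat([w, c], dim=-1)))  # (B, L, 96)
```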
Each encoding transformer block consists of four convolutional layers (with shared weights), a self-attention layer, and an MLP. Each convolutional layer has 96 filters, each with a kernel size of 7. In the self-attention layer, we use a block hidden size of 96 and a single-head attention mechanism. Layer normalization and dropout are applied after each component inside the block. We add positional encoding to each block's input. We use one layer of such an encoding block.
At a game step $t$, the encoder processes text observation $o_t$ and question $q$ to generate context-aware encodings $h_{o_t} \in \mathbb {R}^{L^{o_t} \times H_1}$ and $h_q \in \mathbb {R}^{L^{q} \times H_1}$, where $L^{o_t}$ and $L^{q}$ denote length of $o_t$ and $q$ respectively, $H_1$ is 96.
Following BIBREF18, we use a context-query attention layer to aggregate the two representations $h_{o_t}$ and $h_q$. Specifically, the attention layer first uses two MLPs to map $h_{o_t}$ and $h_q$ into the same space, with the resulting representations denoted as $h_{o_t}^{\prime } \in \mathbb {R}^{L^{o_t} \times H_2}$ and $h_q^{\prime } \in \mathbb {R}^{L^{q} \times H_2}$, in which, $H_2$ is 96.
Then, a tri-linear similarity function is used to compute the similarities between each pair of $h_{o_t}^{\prime }$ and $h_q^{\prime }$ items:
where $\odot $ indicates element-wise multiplication and $w$ is a trainable parameter vector of size 96.
We apply softmax to the resulting similarity matrix $S$ along both dimensions, producing $S^A$ and $S^B$. Information in the two representations is then aggregated as
where $h_{oq}$ is the aggregated observation representation.
On top of the attention layer, a stack of aggregation transformer blocks is used to further map the observation representations to action representations and answer representations. The configuration parameters are the same as the encoder transformer blocks, except there are two convolution layers (with shared weights), and the number of blocks is 7.
Let $M_t \in \mathbb {R}^{L^{o_t} \times H_3}$ denote the output of the stack of aggregation transformer blocks, in which $H_3$ is 96.
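Since the similarity and aggregation equations are not reproduced in this excerpt, the sketch below assumes the standard QANet/BiDAF trilinear similarity and aggregation; note that the text describes $w$ as a size-96 vector, so the authors' exact parameterization may differ, and the projection of the concatenated output back to the block hidden size is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextQueryAttention(nn.Module):
    def __init__(self, hidden=96):
        super().__init__()
        self.w = nn.Linear(3 * hidden, 1, bias=False)  # trilinear similarity weights (assumed form)

    def forward(self, h_o, h_q):
        # h_o: (B, Lo, H) observation encodings; h_q: (B, Lq, H) question encodings
        B, Lo, H = h_o.shape
        Lq = h_q.size(1)
        o = h_o.unsqueeze(2).expand(B, Lo, Lq, H)
        q = h_q.unsqueeze(1).expand(B, Lo, Lq, H)
        S = self.w(torch.cat([o, q, o * q], dim=-1)).squeeze(-1)  # (B, Lo, Lq)
        S_A = F.softmax(S, dim=2)   # softmax over question tokens
        S_B = F.softmax(S, dim=1)   # softmax over observation tokens
        a = S_A @ h_q                                   # (B, Lo, H)
        b = S_A @ S_B.transpose(1, 2) @ h_o             # (B, Lo, H)
        return torch.cat([h_o, a, h_o * a, h_o * b], dim=-1)  # aggregated h_oq
```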
Baseline Agent ::: Model Structure ::: Action Generator
The action generator takes $M_t$ as input and estimates Q-values for all possible actions. As described in the previous section, when an action is a Ctrl+F command, it is composed of two tokens (the token “Ctrl+F” and the query token). Therefore, the action generator consists of three MLPs:
Here, $L_{shared} \in \mathbb {R}^{95 \times 150}$; $L_{action}$ has an output size of 4 or 2 depending on the number of actions available; the output size of $L_{ctrlf}$ equals the dataset's vocabulary size (depending on the query type setting, we mask out words in the vocabulary that are not query candidates). The overall Q-value is simply the sum of the two components:
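A schematic version of this two-headed action generator is sketched below; mean-pooling $M_t$ over the sequence dimension and the masking value are assumptions made for brevity.

```python
import torch.nn as nn

class ActionGenerator(nn.Module):
    def __init__(self, hidden=96, n_actions=4, vocab_size=30000):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(hidden, 150), nn.ReLU())
        self.action_head = nn.Linear(150, n_actions)  # previous / next / Ctrl+F / stop
        self.ctrlf_head = nn.Linear(150, vocab_size)  # Q-values over query tokens

    def forward(self, M_t, query_mask):
        # M_t: (B, L, H); query_mask: (B, vocab_size), 1 for valid query candidates
        h = self.shared(M_t.mean(dim=1))                      # (B, 150), mean-pooled (assumed)
        q_action = self.action_head(h)                        # (B, n_actions)
        q_ctrlf = self.ctrlf_head(h).masked_fill(query_mask == 0, -1e9)
        # For a Ctrl+F action, the overall Q-value is the sum of the two components.
        return q_action, q_ctrlf
```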
Baseline Agent ::: Model Structure ::: Question Answerer
Following BIBREF18, we append two extra stacks of aggregation transformer blocks on top of the encoder to compute head and tail positions:
Here, $M_{head}$ and $M_{tail}$ are outputs of the two extra transformer stacks, $L_0$, $L_1$, $L_2$ and $L_3$ are trainable parameters with output size 150, 150, 1 and 1, respectively.
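A sketch of the pointer heads, following the output sizes listed above; the two extra aggregation transformer stacks that produce $M_{head}$ and $M_{tail}$ are elided, and the exact composition of $L_0$–$L_3$ is an assumption.

```python
import torch.nn as nn
import torch.nn.functional as F

class QuestionAnswerer(nn.Module):
    def __init__(self, hidden=96):
        super().__init__()
        self.head_scorer = nn.Sequential(nn.Linear(hidden, 150), nn.ReLU(), nn.Linear(150, 1))
        self.tail_scorer = nn.Sequential(nn.Linear(hidden, 150), nn.ReLU(), nn.Linear(150, 1))

    def forward(self, M_head, M_tail):
        # M_head, M_tail: (B, L, H) outputs of the two extra aggregation stacks
        p_head = F.log_softmax(self.head_scorer(M_head).squeeze(-1), dim=-1)
        p_tail = F.log_softmax(self.tail_scorer(M_tail).squeeze(-1), dim=-1)
        return p_head, p_tail  # log-distributions over token positions
```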
Baseline Agent ::: Memory and Reward Shaping ::: Memory
In iMRC, some questions may not be easily answerable based only on observation of a single sentence. To overcome this limitation, we provide an explicit memory mechanism to QA-DQN. Specifically, we use a queue to store strings that have been observed recently. The queue has a limited number of slots (we use queues of size [1, 3, 5] in this work). This prevents the agent from simply issuing next commands until the entire environment has been observed, in which case our task would degenerate to the standard MRC setting. The memory slots are reset episodically.
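Such a memory can be implemented as a fixed-size FIFO queue; concatenating the slots into the text fed to the encoder is an assumption, since the excerpt does not specify how the memory is consumed.

```python
from collections import deque

memory = deque(maxlen=3)                 # a 3-slot memory; sizes 1 and 5 are also studied
memory.append("sentence observed at step t")
memory.append("sentence observed at step t+1")
current_memory = " ".join(memory)        # concatenated view fed to the encoder (assumed)
memory.clear()                           # memory slots are reset episodically
```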
Baseline Agent ::: Memory and Reward Shaping ::: Reward Shaping
Because the question answerer in QA-DQN is a pointing model, its performance relies heavily on whether the agent can find and stop at the sentence that contains the answer. We design a heuristic reward to encourage and guide this behavior. In particular, we assign a reward if the agent halts at game step $k$ and the answer is a sub-string of $o_k$ (if larger memory slots are used, we assign this reward if the answer is a sub-string of the memory at game step $k$). We denote this reward as the sufficient information reward, since, if an agent sees the answer, it should have a good chance of having gathered sufficient information for the question (although this is not guaranteed).
Note this sufficient information reward is part of the design of QA-DQN, whereas the question answering score is the only metric used to evaluate an agent's performance on the iMRC task.
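The sufficient information reward reduces to a substring check, e.g.:

```python
def sufficient_info_reward(answer, observation, memory=None):
    # 1 if the gold answer string appears in the current observation
    # (or anywhere in the memory, when more than one slot is used).
    text = " ".join(memory) if memory else observation
    return 1.0 if answer in text else 0.0
```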
Baseline Agent ::: Memory and Reward Shaping ::: Ctrl+F Only Mode
As mentioned above, an agent might bypass Ctrl+F actions and explore an iMRC game only via next commands. We study this possibility in an ablation study, where we limit the agent to the Ctrl+F and stop commands. In this setting, an agent is forced to explore by means of search queries.
Baseline Agent ::: Training Strategy
In this section, we describe our training strategy. We split the training pipeline into two parts for easy comprehension. We use Adam BIBREF22 as the step rule for optimization in both parts, with the learning rate set to 0.00025.
Baseline Agent ::: Training Strategy ::: Action Generation
iMRC games are interactive environments. We use an RL training algorithm to train the interactive information-gathering behavior of QA-DQN. We adopt the Rainbow algorithm proposed by BIBREF23, which integrates several extensions to the original Deep Q-Learning algorithm BIBREF24. Rainbow exhibits state-of-the-art performance on several RL benchmark tasks (e.g., Atari games).
During game playing, we use a mini-batch of size 10 and push all transitions (observation string, question string, generated command, reward) into a replay buffer of size 500,000. We do not compute losses directly using these transitions. After every 5 game steps, we randomly sample a mini-batch of 64 transitions from the replay buffer, compute loss, and update the network.
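The interaction-and-update cycle can be sketched as below; compute_rainbow_loss is a placeholder for the Rainbow loss, which is not reproduced here, and the parallel playing of 10 games is abstracted away.

```python
import random
from collections import deque

replay = deque(maxlen=500_000)   # replay buffer of size 500,000
BATCH_SIZE, UPDATE_EVERY = 64, 5

def train_step(game_step, transition, network, optimizer):
    # transition = (observation_str, question_str, command, reward)
    replay.append(transition)
    if game_step % UPDATE_EVERY == 0 and len(replay) >= BATCH_SIZE:
        batch = random.sample(replay, BATCH_SIZE)
        loss = compute_rainbow_loss(network, batch)  # placeholder for the Rainbow loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```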
Detailed hyper-parameter settings for action generation are shown in Table TABREF38.
Baseline Agent ::: Training Strategy ::: Question Answering
Similarly, we use another replay buffer to store question answering transitions (observation string when interaction stops, question string, ground-truth answer).
Because both iSQuAD and iNewsQA are converted from datasets that provide ground-truth answer positions, we can leverage this information and train the question answerer with supervised learning. Specifically, we only push question answering transitions when the ground-truth answer is in the observation string. For each transition, we convert the ground-truth answer head- and tail-positions from the SQuAD and NewsQA datasets to positions in the current observation string. After every 5 game steps, we randomly sample a mini-batch of 64 transitions from the replay buffer and train the question answerer using the Negative Log-Likelihood (NLL) loss. We use a dropout rate of 0.1.
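A sketch of this supervised update, reusing the log-distributions produced by the question answerer sketch above; names are illustrative.

```python
import torch.nn.functional as F

def qa_loss(p_head_log, p_tail_log, head_gold, tail_gold):
    # p_*_log: (B, L) log-softmax outputs from the question answerer sketch above;
    # head_gold, tail_gold: (B,) gold positions re-indexed into the current observation.
    return F.nll_loss(p_head_log, head_gold) + F.nll_loss(p_tail_log, tail_gold)
```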
Experimental Results
In this study, we focus on three factors and their effects on iMRC and the performance of the QA-DQN agent:
different Ctrl+F strategies, as described in the action space section;
enabled vs. disabled next and previous actions;
different memory slot sizes.
Below we report the baseline agent's training performance followed by its generalization performance on test data.
Experimental Results ::: Mastering Training Games
It remains difficult for RL agents to master multiple games at the same time. In our case, each document-question pair can be considered a unique game, and there are hundreds of thousands of them. Therefore, as is common practice in the RL literature, we study an agent's training curves.
Due to space limitations, we select several representative settings to discuss in this section and provide QA-DQN's training and evaluation curves for all experimental settings in the Appendix. We provide the agent's sufficient information rewards (i.e., whether the agent stopped at a state where the observation contains the answer) during training in the Appendix as well.
Figure FIGREF36 shows QA-DQN's training performance ($\text{F}_1$ score) when next and previous actions are available. Figure FIGREF40 shows QA-DQN's training performance ($\text{F}_1$ score) when next and previous actions are disabled. Note that all training curves are averaged over 3 runs with different random seeds and all evaluation curves show the one run with max validation performance among the three.
From Figure FIGREF36, we can see that the three Ctrl+F strategies show similar difficulty levels when next and previous are available, although QA-DQN works slightly better when selecting a word from the question as the query (especially on iNewsQA). However, from Figure FIGREF40 we observe that when next and previous are disabled, QA-DQN shows a significant advantage when selecting a word from the question as the query. This may be due to the fact that when an agent must use Ctrl+F to navigate within documents, the set of question words constitutes a much smaller action space than in the other two settings. In the 4-action setting, an agent can rely on issuing next and previous actions to reach any sentence in a document.
The effect of action space size on model performance is particularly clear when using a dataset's entire vocabulary as query candidates in the 2-action setting. From Figure FIGREF40 (and the figures with sufficient information rewards in the Appendix) we see that QA-DQN has a hard time learning in this setting. As shown in Table TABREF16, both datasets have a vocabulary size of more than 100k. This is much larger than in the other two settings, where on average the length of questions is around 10. This suggests that methods with better sample efficiency are needed to act in more realistic problem settings with huge action spaces.
Experiments also show that a larger memory slot size always helps. Intuitively, with a memory mechanism (either implicit or explicit), an agent could make the environment closer to fully observed by exploring and memorizing observations. Presumably, a larger memory may further improve QA-DQN's performance, but considering the average number of sentences in each iSQuAD game is 5, a memory with more than 5 slots will defeat the purpose of our study of partially observable text environments.
Not surprisingly, QA-DQN performs worse in general on iNewsQA, in all experiments. As shown in Table TABREF16, the average number of sentences per document in iNewsQA is about 6 times more than in iSQuAD. This is analogous to games with larger maps in the RL literature, where the environment is partially observable. A better exploration (in our case, jumping) strategy may help QA-DQN to master such harder games.
Experimental Results ::: Generalizing to Test Set
To study QA-DQN's ability to generalize, we select the best performing agent in each experimental setting on the validation set and report their performance on the test set. The agent's test performance is reported in Table TABREF41. In addition, to support our claim that the challenging part of iMRC tasks is information seeking rather than answering questions given sufficient information, we also report the $\text{F}_1$ score of an agent when it has reached the piece of text that contains the answer, which we denote as $\text{F}_{1\text{info}}$.
From Table TABREF41 (and the validation curves provided in the Appendix) we can observe that QA-DQN's performance during evaluation matches its training performance in most settings. $\text{F}_{1\text{info}}$ scores are consistently higher than the overall $\text{F}_1$ scores, and they have much less variance across different settings. This supports our hypothesis that information seeking plays an important role in solving iMRC tasks, whereas question answering given the necessary information is relatively straightforward. This also suggests that an interactive agent that can better navigate to important sentences is very likely to achieve better performance on iMRC tasks.
Discussion and Future Work
In this work, we propose and explore the direction of converting MRC datasets into interactive environments. We believe interactive, information-seeking behavior is desirable for neural MRC systems when knowledge sources are partially observable and/or too large to encode in their entirety — for instance, when searching for information on the internet, where knowledge is by design easily accessible to humans through interaction.
Despite being restricted, our proposed task presents major challenges to existing techniques. iMRC lies at the intersection of NLP and RL, which is arguably less studied in existing literature. We hope to encourage researchers from both NLP and RL communities to work toward solving this task.
For our baseline, we adopted an off-the-shelf, top-performing MRC model and RL method. Either component can be replaced straightforwardly with other methods (e.g., to utilize a large-scale pretrained language model).
Our proposed setup and baseline agent presently use only a single word with the query command. However, a host of other options should be considered in future work. For example, multi-word queries with fuzzy matching are more realistic. It would also be interesting for an agent to generate a vector representation of the query in some latent space. This vector could then be compared with precomputed document representations (e.g., in an open domain QA dataset) to determine what text to observe next, with such behavior tantamount to learning to do IR.
As mentioned, our idea for reformulating existing MRC datasets as partially observable and interactive environments is straightforward and general. Almost all MRC datasets can be used to study interactive, information-seeking behavior through similar modifications. We hypothesize that such behavior can, in turn, help in solving real-world MRC problems involving search. | Thus, our task requires models to `feed themselves' rather than spoon-feeding them with information. This casts MRC as a sequential decision-making problem amenable to reinforcement learning (RL). |
e9cf1b91f06baec79eb6ddfd91fc5d434889f652 | e9cf1b91f06baec79eb6ddfd91fc5d434889f652_0 | Q: What commands does their setup provide to models seeking information?
Text: Introduction
Many machine reading comprehension (MRC) datasets have been released in recent years BIBREF0, BIBREF1, BIBREF2, BIBREF3, BIBREF4 to benchmark a system's ability to understand and reason over natural language. Typically, these datasets require an MRC model to read through a document to answer a question about information contained therein.
The supporting document is, more often than not, static and fully observable. This raises concerns, since models may find answers simply through shallow pattern matching; e.g., syntactic similarity between the words in questions and documents. As pointed out by BIBREF5, for questions starting with when, models tend to predict the only date/time answer in the supporting document. Such behavior limits the generality and usefulness of MRC models, and suggests that they do not learn a proper `understanding' of the intended task. In this paper, to address this problem, we shift the focus of MRC data away from `spoon-feeding' models with sufficient information in fully observable, static documents. Instead, we propose interactive versions of existing MRC tasks, whereby the information needed to answer a question must be gathered sequentially.
The key idea behind our proposed interactive MRC (iMRC) is to restrict the document context that a model observes at one time. Concretely, we split a supporting document into its component sentences and withhold these sentences from the model. Given a question, the model must issue commands to observe sentences in the withheld set; we equip models with actions such as Ctrl+F (search for token) and stop for searching through partially observed documents. A model searches iteratively, conditioning each command on the input question and the sentences it has observed previously. Thus, our task requires models to `feed themselves' rather than spoon-feeding them with information. This casts MRC as a sequential decision-making problem amenable to reinforcement learning (RL).
As an initial case study, we repurpose two well known, related corpora with different difficulty levels for our interactive MRC task: SQuAD and NewsQA. Table TABREF2 shows some examples of a model performing interactive MRC on these datasets. Naturally, our reframing makes the MRC problem harder; however, we believe the added demands of iMRC more closely match web-level QA and may lead to deeper comprehension of documents' content.
The main contributions of this work are as follows:
We describe a method to make MRC datasets interactive and formulate the new task as an RL problem.
We develop a baseline agent that combines a top performing MRC model and a state-of-the-art RL optimization algorithm and test it on our iMRC tasks.
We conduct experiments on several variants of iMRC and discuss the significant challenges posed by our setting.
Related Works
Skip-reading BIBREF6, BIBREF7, BIBREF8 is an existing setting in which MRC models read partial documents. Concretely, these methods assume that not all tokens in the input sequence are useful, and therefore learn to skip irrelevant tokens based on the current input and their internal memory. Since skipping decisions are discrete, the models are often optimized by the REINFORCE algorithm BIBREF9. For example, the structural-jump-LSTM proposed in BIBREF10 learns to skip and jump over chunks of text. In a similar vein, BIBREF11 designed a QA task where the model reads streaming data unidirectionally, without knowing when the question will be provided. Skip-reading approaches are limited in that they only consider jumping over a few consecutive tokens and the skipping operations are usually unidirectional. Based on the assumption that a single pass of reading may not provide sufficient information, multi-pass reading methods have also been studied BIBREF12, BIBREF13.
Compared to skip-reading and multi-turn reading, our work enables an agent to jump through a document in a more dynamic manner, in some sense combining aspects of skip-reading and re-reading. For example, it can jump forward, backward, or to an arbitrary position, depending on the query. This also distinguishes the model we develop in this work from ReasoNet BIBREF13, where an agent decides when to stop unidirectional reading.
Recently, BIBREF14 propose DocQN, which is a DQN-based agent that leverages the (tree) structure of documents and navigates across sentences and paragraphs. The proposed method has been shown to outperform vanilla DQN and IR baselines on the TriviaQA dataset. The main differences between our work and DocQN include: iMRC does not depend on extra meta information of documents (e.g., title, paragraph title) for building document trees as in DocQN; our proposed environment is partially observable, and thus an agent is required to explore and memorize the environment via interaction; the action space in our setting (especially for the Ctrl+F command as defined in a later section) is arguably larger than the tree sampling action space in DocQN.
Closely related to iMRC is work by BIBREF15, in which the authors introduce a collection of synthetic tasks to train and test information-seeking capabilities in neural models. We extend that work by developing a realistic and challenging text-based task.
Broadly speaking, our approach is also linked to the optimal stopping problem in the literature on Markov decision processes (MDPs) BIBREF16, where at each time-step the agent either continues or stops and accumulates reward. Here, we reformulate conventional QA tasks through the lens of optimal stopping, in hopes of improving over the shallow matching behaviors exhibited by many MRC systems.
iMRC: Making MRC Interactive
We build the iSQuAD and iNewsQA datasets based on SQuAD v1.1 BIBREF0 and NewsQA BIBREF1. Both original datasets share similar properties. Specifically, every data-point consists of a tuple, $\lbrace p, q, a\rbrace $, where $p$ represents a paragraph, $q$ a question, and $a$ is the answer. The answer is a word span defined by head and tail positions in $p$. NewsQA is more difficult than SQuAD because it has a larger vocabulary, more difficult questions, and longer source documents.
We first split every paragraph $p$ into a list of sentences $\mathcal {S} = \lbrace s_1, s_2, ..., s_n\rbrace $, where $n$ stands for number of sentences in $p$. Given a question $q$, rather than showing the entire paragraph $p$, we only show an agent the first sentence $s_1$ and withhold the rest. The agent must issue commands to reveal the hidden sentences progressively and thereby gather the information needed to answer question $q$.
An agent decides when to stop interacting and output an answer, but the number of interaction steps is limited. Once an agent has exhausted its step budget, it is forced to answer the question.
iMRC: Making MRC Interactive ::: Interactive MRC as a POMDP
As described in the previous section, we convert MRC tasks into sequential decision-making problems (which we will refer to as games). These can be described naturally within the reinforcement learning (RL) framework. Formally, tasks in iMRC are partially observable Markov decision processes (POMDP) BIBREF17. An iMRC data-point is a discrete-time POMDP defined by $(S, T, A, \Omega , O, R, \gamma )$, where $\gamma \in [0, 1]$ is the discount factor and the other elements are described in detail below.
Environment States ($S$): The environment state at turn $t$ in the game is $s_t \in S$. It contains the complete internal information of the game, much of which is hidden from the agent. When an agent issues an action $a_t$, the environment transitions to state $s_{t+1}$ with probability $T(s_{t+1} | s_t, a_t)$. In this work, transition probabilities are either 0 or 1 (i.e., a deterministic environment).
Actions ($A$): At each game turn $t$, the agent issues an action $a_t \in A$. We will elaborate on the action space of iMRC in the action space section.
Observations ($\Omega $): The text information perceived by the agent at a given game turn $t$ is the agent's observation, $o_t \in \Omega $, which depends on the environment state and the previous action with probability $O(o_t|s_t)$. In this work, observation probabilities are either 0 or 1 (i.e., noiseless observation).
Reward Function ($R$): Based on its actions, the agent receives rewards $r_t = R(s_t, a_t)$. Its objective is to maximize the expected discounted sum of rewards $E \left[\sum _t \gamma ^t r_t \right]$.
iMRC: Making MRC Interactive ::: Action Space
To better describe the action space of iMRC, we split an agent's actions into two phases: information gathering and question answering. During the information gathering phase, the agent interacts with the environment to collect knowledge. It answers questions with its accumulated knowledge in the question answering phase.
Information Gathering: At step $t$ of the information gathering phase, the agent can issue one of the following four actions to interact with the paragraph $p$, where $p$ consists of $n$ sentences and where the current observation corresponds to sentence $s_k,~1 \le k \le n$:
previous: jump to $ \small {\left\lbrace \begin{array}{ll} s_n & \text{if $k = 1$,}\\ s_{k-1} & \text{otherwise;} \end{array}\right.} $
next: jump to $ \small {\left\lbrace \begin{array}{ll} s_1 & \text{if $k = n$,}\\ s_{k+1} & \text{otherwise;} \end{array}\right.} $
Ctrl+F $<$query$>$: jump to the sentence that contains the next occurrence of “query”;
stop: terminate information gathering phase.
Question Answering: We follow the output format of both SQuAD and NewsQA, where an agent is required to point to the head and tail positions of an answer span within $p$. Assume that at step $t$ the agent stops interacting and the observation $o_t$ is $s_k$. The agent points to a head-tail position pair in $s_k$.
iMRC: Making MRC Interactive ::: Query Types
Given the question “When is the deadline of AAAI?”, as a human, one might try searching “AAAI” on a search engine, follow the link to the official AAAI website, then search for keywords “deadline” or “due date” on the website to jump to a specific paragraph. Humans have a deep understanding of questions because of their significant background knowledge. As a result, the keywords they use to search are not limited to what appears in the question.
Inspired by this observation, we study 3 query types for the Ctrl+F $<$query$>$ command.
One token from the question: the setting with smallest action space. Because iMRC deals with Ctrl+F commands by exact string matching, there is no guarantee that all sentences are accessible from question tokens only.
One token from the union of the question and the current observation: an intermediate level where the action space is larger.
One token from the dataset vocabulary: the action space is huge (see Table TABREF16 for statistics of SQuAD and NewsQA). It is guaranteed that all sentences in all documents are accessible through these tokens.
iMRC: Making MRC Interactive ::: Evaluation Metric
Since iMRC involves both MRC and RL, we adopt evaluation metrics from both settings. First, as a question answering task, we use $\text{F}_1$ score to compare predicted answers against ground-truth, as in previous works. When there exist multiple ground-truth answers, we report the max $\text{F}_1$ score. Second, mastering multiple games remains quite challenging for RL agents. Therefore, we evaluate an agent's performance during both its training and testing phases. During training, we report training curves averaged over 3 random seeds. During test, we follow common practice in supervised learning tasks where we report the agent's test performance corresponding to its best validation performance.
Baseline Agent
As a baseline, we propose QA-DQN, an agent that adopts components from QANet BIBREF18 and adds an extra command generation module inspired by LSTM-DQN BIBREF19.
As illustrated in Figure FIGREF6, the agent consists of three components: an encoder, an action generator, and a question answerer. More precisely, at a game step $t$, the encoder reads observation string $o_t$ and question string $q$ to generate attention aggregated hidden representations $M_t$. Using $M_t$, the action generator outputs commands (defined in previous sections) to interact with iMRC. If the generated command is stop or the agent is forced to stop, the question answerer takes the current information at game step $t$ to generate head and tail pointers for answering the question; otherwise, the information gathering procedure continues.
In this section, we describe the high-level model structure and training strategies of QA-DQN. We refer readers to BIBREF18 for detailed information. We will release datasets and code in the near future.
Baseline Agent ::: Model Structure
In this section, we use game step $t$ to denote one round of interaction between an agent and the iMRC environment. We use $o_t$ to denote the text observation at game step $t$ and $q$ to denote the question text. We use $L$ to refer to a linear transformation. $[\cdot ;\cdot ]$ denotes vector concatenation.
Baseline Agent ::: Model Structure ::: Encoder
The encoder consists of an embedding layer, two stacks of transformer blocks (denoted as encoder transformer blocks and aggregation transformer blocks), and an attention layer.
In the embedding layer, we aggregate both word- and character-level embeddings. Word embeddings are initialized by the 300-dimension fastText BIBREF20 vectors trained on Common Crawl (600B tokens), and are fixed during training. Character embeddings are initialized by 200-dimension random vectors. A convolutional layer with 96 kernels of size 5 is used to aggregate the sequence of characters. We use a max pooling layer on the character dimension, then a multi-layer perceptron (MLP) of size 96 is used to aggregate the concatenation of word- and character-level representations. A highway network BIBREF21 is used on top of this MLP. The resulting vectors are used as input to the encoding transformer blocks.
Each encoding transformer block consists of four convolutional layers (with shared weights), a self-attention layer, and an MLP. Each convolutional layer has 96 filters, each with a kernel size of 7. In the self-attention layer, we use a block hidden size of 96 and a single-head attention mechanism. Layer normalization and dropout are applied after each component inside the block. We add positional encoding to each block's input. We use one layer of such an encoding block.
At a game step $t$, the encoder processes text observation $o_t$ and question $q$ to generate context-aware encodings $h_{o_t} \in \mathbb {R}^{L^{o_t} \times H_1}$ and $h_q \in \mathbb {R}^{L^{q} \times H_1}$, where $L^{o_t}$ and $L^{q}$ denote length of $o_t$ and $q$ respectively, $H_1$ is 96.
Following BIBREF18, we use a context-query attention layer to aggregate the two representations $h_{o_t}$ and $h_q$. Specifically, the attention layer first uses two MLPs to map $h_{o_t}$ and $h_q$ into the same space, with the resulting representations denoted as $h_{o_t}^{\prime } \in \mathbb {R}^{L^{o_t} \times H_2}$ and $h_q^{\prime } \in \mathbb {R}^{L^{q} \times H_2}$, in which, $H_2$ is 96.
Then, a tri-linear similarity function is used to compute the similarities between each pair of $h_{o_t}^{\prime }$ and $h_q^{\prime }$ items:
where $\odot $ indicates element-wise multiplication and $w$ is a trainable parameter vector of size 96.
We apply softmax to the resulting similarity matrix $S$ along both dimensions, producing $S^A$ and $S^B$. Information in the two representations is then aggregated as
where $h_{oq}$ is the aggregated observation representation.
On top of the attention layer, a stack of aggregation transformer blocks is used to further map the observation representations to action representations and answer representations. The configuration parameters are the same as the encoder transformer blocks, except there are two convolution layers (with shared weights), and the number of blocks is 7.
Let $M_t \in \mathbb {R}^{L^{o_t} \times H_3}$ denote the output of the stack of aggregation transformer blocks, in which $H_3$ is 96.
Baseline Agent ::: Model Structure ::: Action Generator
The action generator takes $M_t$ as input and estimates Q-values for all possible actions. As described in the previous section, when an action is a Ctrl+F command, it is composed of two tokens (the token “Ctrl+F” and the query token). Therefore, the action generator consists of three MLPs:
Here, $L_{shared} \in \mathbb {R}^{95 \times 150}$; $L_{action}$ has an output size of 4 or 2 depending on the number of actions available; the output size of $L_{ctrlf}$ equals the dataset's vocabulary size (depending on the query type setting, we mask out words in the vocabulary that are not query candidates). The overall Q-value is simply the sum of the two components:
Baseline Agent ::: Model Structure ::: Question Answerer
Following BIBREF18, we append two extra stacks of aggregation transformer blocks on top of the encoder to compute head and tail positions:
Here, $M_{head}$ and $M_{tail}$ are outputs of the two extra transformer stacks, $L_0$, $L_1$, $L_2$ and $L_3$ are trainable parameters with output size 150, 150, 1 and 1, respectively.
Baseline Agent ::: Memory and Reward Shaping ::: Memory
In iMRC, some questions may not be easily answerable based only on observation of a single sentence. To overcome this limitation, we provide an explicit memory mechanism to QA-DQN. Specifically, we use a queue to store strings that have been observed recently. The queue has a limited number of slots (we use queues of size [1, 3, 5] in this work). This prevents the agent from simply issuing next commands until the entire environment has been observed, in which case our task would degenerate to the standard MRC setting. The memory slots are reset episodically.
Baseline Agent ::: Memory and Reward Shaping ::: Reward Shaping
Because the question answerer in QA-DQN is a pointing model, its performance relies heavily on whether the agent can find and stop at the sentence that contains the answer. We design a heuristic reward to encourage and guide this behavior. In particular, we assign a reward if the agent halts at game step $k$ and the answer is a sub-string of $o_k$ (if larger memory slots are used, we assign this reward if the answer is a sub-string of the memory at game step $k$). We denote this reward as the sufficient information reward, since, if an agent sees the answer, it should have a good chance of having gathered sufficient information for the question (although this is not guaranteed).
Note this sufficient information reward is part of the design of QA-DQN, whereas the question answering score is the only metric used to evaluate an agent's performance on the iMRC task.
Baseline Agent ::: Memory and Reward Shaping ::: Ctrl+F Only Mode
As mentioned above, an agent might bypass Ctrl+F actions and explore an iMRC game only via next commands. We study this possibility in an ablation study, where we limit the agent to the Ctrl+F and stop commands. In this setting, an agent is forced to explore by means of search queries.
Baseline Agent ::: Training Strategy
In this section, we describe our training strategy. We split the training pipeline into two parts for easy comprehension. We use Adam BIBREF22 as the step rule for optimization in both parts, with the learning rate set to 0.00025.
Baseline Agent ::: Training Strategy ::: Action Generation
iMRC games are interactive environments. We use an RL training algorithm to train the interactive information-gathering behavior of QA-DQN. We adopt the Rainbow algorithm proposed by BIBREF23, which integrates several extensions to the original Deep Q-Learning algorithm BIBREF24. Rainbow exhibits state-of-the-art performance on several RL benchmark tasks (e.g., Atari games).
During game playing, we use a mini-batch of size 10 and push all transitions (observation string, question string, generated command, reward) into a replay buffer of size 500,000. We do not compute losses directly using these transitions. After every 5 game steps, we randomly sample a mini-batch of 64 transitions from the replay buffer, compute loss, and update the network.
Detailed hyper-parameter settings for action generation are shown in Table TABREF38.
Baseline Agent ::: Training Strategy ::: Question Answering
Similarly, we use another replay buffer to store question answering transitions (observation string when interaction stops, question string, ground-truth answer).
Because both iSQuAD and iNewsQA are converted from datasets that provide ground-truth answer positions, we can leverage this information and train the question answerer with supervised learning. Specifically, we only push question answering transitions when the ground-truth answer is in the observation string. For each transition, we convert the ground-truth answer head- and tail-positions from the SQuAD and NewsQA datasets to positions in the current observation string. After every 5 game steps, we randomly sample a mini-batch of 64 transitions from the replay buffer and train the question answerer using the Negative Log-Likelihood (NLL) loss. We use a dropout rate of 0.1.
Experimental Results
In this study, we focus on three factors and their effects on iMRC and the performance of the QA-DQN agent:
different Ctrl+F strategies, as described in the action space section;
enabled vs. disabled next and previous actions;
different memory slot sizes.
Below we report the baseline agent's training performance followed by its generalization performance on test data.
Experimental Results ::: Mastering Training Games
It remains difficult for RL agents to master multiple games at the same time. In our case, each document-question pair can be considered a unique game, and there are hundreds of thousands of them. Therefore, as is common practice in the RL literature, we study an agent's training curves.
Due to space limitations, we select several representative settings to discuss in this section and provide QA-DQN's training and evaluation curves for all experimental settings in the Appendix. We provide the agent's sufficient information rewards (i.e., whether the agent stopped at a state where the observation contains the answer) during training in the Appendix as well.
Figure FIGREF36 shows QA-DQN's training performance ($\text{F}_1$ score) when next and previous actions are available. Figure FIGREF40 shows QA-DQN's training performance ($\text{F}_1$ score) when next and previous actions are disabled. Note that all training curves are averaged over 3 runs with different random seeds and all evaluation curves show the one run with max validation performance among the three.
From Figure FIGREF36, we can see that the three Ctrl+F strategies show similar difficulty levels when next and previous are available, although QA-DQN works slightly better when selecting a word from the question as the query (especially on iNewsQA). However, from Figure FIGREF40 we observe that when next and previous are disabled, QA-DQN shows a significant advantage when selecting a word from the question as the query. This may be due to the fact that when an agent must use Ctrl+F to navigate within documents, the set of question words constitutes a much smaller action space than in the other two settings. In the 4-action setting, an agent can rely on issuing next and previous actions to reach any sentence in a document.
The effect of action space size on model performance is particularly clear when using a dataset's entire vocabulary as query candidates in the 2-action setting. From Figure FIGREF40 (and the figures with sufficient information rewards in the Appendix) we see that QA-DQN has a hard time learning in this setting. As shown in Table TABREF16, both datasets have a vocabulary size of more than 100k. This is much larger than in the other two settings, where on average the length of questions is around 10. This suggests that methods with better sample efficiency are needed to act in more realistic problem settings with huge action spaces.
Experiments also show that a larger memory slot size always helps. Intuitively, with a memory mechanism (either implicit or explicit), an agent could make the environment closer to fully observed by exploring and memorizing observations. Presumably, a larger memory may further improve QA-DQN's performance, but considering the average number of sentences in each iSQuAD game is 5, a memory with more than 5 slots will defeat the purpose of our study of partially observable text environments.
Not surprisingly, QA-DQN performs worse in general on iNewsQA, in all experiments. As shown in Table TABREF16, the average number of sentences per document in iNewsQA is about 6 times more than in iSQuAD. This is analogous to games with larger maps in the RL literature, where the environment is partially observable. A better exploration (in our case, jumping) strategy may help QA-DQN to master such harder games.
Experimental Results ::: Generalizing to Test Set
To study QA-DQN's ability to generalize, we select the best performing agent in each experimental setting on the validation set and report their performance on the test set. The agent's test performance is reported in Table TABREF41. In addition, to support our claim that the challenging part of iMRC tasks is information seeking rather than answering questions given sufficient information, we also report the $\text{F}_1$ score of an agent when it has reached the piece of text that contains the answer, which we denote as $\text{F}_{1\text{info}}$.
From Table TABREF41 (and the validation curves provided in the Appendix) we can observe that QA-DQN's performance during evaluation matches its training performance in most settings. $\text{F}_{1\text{info}}$ scores are consistently higher than the overall $\text{F}_1$ scores, and they have much less variance across different settings. This supports our hypothesis that information seeking plays an important role in solving iMRC tasks, whereas question answering given the necessary information is relatively straightforward. This also suggests that an interactive agent that can better navigate to important sentences is very likely to achieve better performance on iMRC tasks.
Discussion and Future Work
In this work, we propose and explore the direction of converting MRC datasets into interactive environments. We believe interactive, information-seeking behavior is desirable for neural MRC systems when knowledge sources are partially observable and/or too large to encode in their entirety — for instance, when searching for information on the internet, where knowledge is by design easily accessible to humans through interaction.
Despite being restricted, our proposed task presents major challenges to existing techniques. iMRC lies at the intersection of NLP and RL, which is arguably less studied in existing literature. We hope to encourage researchers from both NLP and RL communities to work toward solving this task.
For our baseline, we adopted an off-the-shelf, top-performing MRC model and RL method. Either component can be replaced straightforwardly with other methods (e.g., to utilize a large-scale pretrained language model).
Our proposed setup and baseline agent presently use only a single word with the query command. However, a host of other options should be considered in future work. For example, multi-word queries with fuzzy matching are more realistic. It would also be interesting for an agent to generate a vector representation of the query in some latent space. This vector could then be compared with precomputed document representations (e.g., in an open domain QA dataset) to determine what text to observe next, with such behavior tantamount to learning to do IR.
As mentioned, our idea for reformulating existing MRC datasets as partially observable and interactive environments is straightforward and general. Almost all MRC datasets can be used to study interactive, information-seeking behavior through similar modifications. We hypothesize that such behavior can, in turn, help in solving real-world MRC problems involving search. | previous, next, Ctrl+F $<$query$>$, stop |
6976296126e4a5c518e6b57de70f8dc8d8fde292 | 6976296126e4a5c518e6b57de70f8dc8d8fde292_0 | Q: What models do they propose?
Text: Introduction
Social media platforms such as Facebook, Twitter or Reddit have empowered individuals' voices and facilitated freedom of expression. However, they have also been a breeding ground for hate speech and other types of online harassment. Hate speech is defined in legal literature as speech (or any form of expression) that expresses (or seeks to promote, or has the capacity to increase) hatred against a person or a group of people because of a characteristic they share, or a group to which they belong BIBREF0. Twitter develops this definition in its hateful conduct policy as promoting violence against, or directly attacking or threatening, other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease.
In this work we focus on hate speech detection. Due to the inherent complexity of this task, it is important to distinguish hate speech from other types of online harassment. In particular, although it might be offensive to many people, the sole presence of insulting terms does not itself signify or convey hate speech. Conversely, hate speech may denigrate or threaten an individual or a group of people without the use of any profanities. People from the African-American community, for example, often use the term nigga online, in everyday language, without malicious intent, to refer to folks within their community, and the word cunt is often used in non hate speech publications without any sexist purpose. The goal of this work is not to discuss whether racial slurs, such as nigga, should be pursued. The goal is to distinguish between publications using offensive terms and publications attacking communities, which we call hate speech.
Modern social media content usually includes images and text. Some of these multimodal publications are only hate speech because of the combination of the text with a certain image. That is because, as we have stated, the presence of offensive terms does not itself signify hate speech, and the presence of hate speech is often determined by the context of a publication. Moreover, users authoring hate speech tend to intentionally construct publications where the text alone is not enough to determine that they are hate speech. This happens especially on Twitter, where multimodal tweets are formed by an image and a short text, which in many cases is not enough to judge them. In those cases, the image might give extra context to make a proper judgement. Fig. FIGREF5 shows some such examples in MMHS150K.
The contributions of this work are as follows:
We propose the novel task of hate speech detection in multimodal publications, collect, annotate and publish a large scale dataset.
We evaluate state-of-the-art multimodal models on this specific task and compare their performance with unimodal detection. Even though images prove to be useful for hate speech detection, the proposed multimodal models do not outperform unimodal textual models.
We study the challenges of the proposed task, and open the field for future research.
Related Work ::: Hate Speech Detection
The literature on detecting hate speech on online textual publications is extensive. Schmidt and Wiegand BIBREF1 recently provided a good survey of it, where they review the terminology used over time, the features used, the existing datasets and the different approaches. However, the field lacks a consistent dataset and evaluation protocol to compare proposed methods. Saleem et al. BIBREF2 compare different classification methods detecting hate speech in Reddit and other forums. Wassem and Hovy BIBREF3 worked on hate speech detection on twitter, published a manually annotated dataset and studied its hate distribution. Later Wassem BIBREF4 extended the previous published dataset and compared amateur and expert annotations, concluding that amateur annotators are more likely than expert annotators to label items as hate speech. Park and Fung BIBREF5 worked on Wassem datasets and proposed a classification method using a CNN over Word2Vec BIBREF6 word embeddings, showing also classification results on racism and sexism hate sub-classes. Davidson et al. BIBREF7 also worked on hate speech detection on twitter, publishing another manually annotated dataset. They test different classifiers such as SVMs and decision trees and provide a performance comparison. Malmasi and Zampieri BIBREF8 worked on Davidson's dataset improving his results using more elaborated features. ElSherief et al. BIBREF9 studied hate speech on twitter and selected the most frequent terms in hate tweets based on Hatebase, a hate expression repository. They propose a big hate dataset but it lacks manual annotations, and all the tweets containing certain hate expressions are considered hate speech. Zhang et al. BIBREF10 recently proposed a more sophisticated approach for hate speech detection, using a CNN and a GRU BIBREF11 over Word2Vec BIBREF6 word embeddings. They show experiments in different datasets outperforming previous methods. Next, we summarize existing hate speech datasets:
[noitemsep,leftmargin=*]
RM BIBREF10: Formed by $2,435$ tweets discussing Refugees and Muslims, annotated as hate or non-hate.
DT BIBREF7: Formed by $24,783$ tweets annotated as hate, offensive language or neither. In our work, offensive language tweets are considered as non-hate.
WZ-LS BIBREF5: A combination of Wassem datasets BIBREF4, BIBREF3 labeled as racism, sexism, neither or both that make a total of $18,624$ tweets.
Semi-Supervised BIBREF9: Contains $27,330$ general hate speech Twitter tweets crawled in a semi-supervised manner.
Although modern social media publications often include images, few contributions exist that exploit visual information. Zhong et al. BIBREF12 worked on classifying Instagram images as potential cyberbullying targets, exploiting both the image content, the image caption and the comments. However, their visual information processing is limited to the use of features extracted by a pre-trained CNN, the use of which does not achieve any improvement. Hosseinmardi et al. BIBREF13 also address the problem of detecting cyberbullying incidents on Instagram exploiting both textual and image content. But, again, their visual information processing is limited to using the features of a pre-trained CNN, and the improvement when using visual features for cyberbullying classification is only 0.01%.
Related Work ::: Visual and Textual Data Fusion
A typical task in multimodal visual and textual analysis is to learn an alignment between feature spaces. To do that, usually a CNN and an RNN are trained jointly to learn a joint embedding space from aligned multimodal data. This approach is applied in tasks such as image captioning BIBREF14, BIBREF15 and multimodal image retrieval BIBREF16, BIBREF17. On the other hand, instead of explicitly learning an alignment between two spaces, the goal of Visual Question Answering (VQA) is to merge both data modalities in order to decide which answer is correct. This problem requires modeling very precise correlations between the image and the question representations. The VQA task requirements are similar to our hate speech detection problem in multimodal publications, where we have a visual and a textual input and we need to combine both sources of information to understand the global context and make a decision. We thus take inspiration from the VQA literature for the tested models. Early VQA methods BIBREF18 fuse textual and visual information by feature concatenation. Later methods, such as Multimodal Compact Bilinear pooling BIBREF19, utilize bilinear pooling to learn multimodal features. An important limitation of these methods is that the multimodal features are fused in the later model stages, so the textual and visual relationships are modeled only in the last layers. Another limitation is that the visual features are obtained by representing the output of the CNN as a one-dimensional vector, which loses the spatial information of the input images. In a recent work, Gao et al. BIBREF20 propose a feature fusion scheme to overcome these limitations. They learn convolution kernels from the textual information –which they call question-guided kernels– and convolve them with the visual information in an earlier stage to get the multimodal features. Margffoy-Tuay et al. BIBREF21 use a similar approach to combine visual and textual information, but they address a different task: instance segmentation guided by natural language queries. We draw inspiration from these latest feature fusion works to build the models for hate speech detection.
The MMHS150K dataset
Existing hate speech datasets contain only textual data. Moreover, a reference benchmark does not exist. Most of the published datasets are crawled from Twitter and distributed as tweet IDs but, since Twitter removes reported user accounts, a significant amount of their hate tweets is no longer accessible. We create a new manually annotated multimodal hate speech dataset formed by $150,000$ tweets, each one of them containing text and an image. We call the dataset MMHS150K, and make it available online. In this section, we explain the dataset creation steps.
The MMHS150K dataset ::: Tweets Gathering
We used the Twitter API to gather real-time tweets from September 2018 until February 2019, selecting the ones containing any of the 51 Hatebase terms that are more common in hate speech tweets, as studied in BIBREF9. We filtered out retweets, tweets containing less than three words and tweets containing porn related terms. From that selection, we kept the ones that included images and downloaded them. Twitter applies hate speech filters and other kinds of content control based on its policy, although the supervision is based on users' reports. Therefore, as we are gathering tweets from real-time posting, the content we get has not yet passed any filter.
The MMHS150K dataset ::: Textual Image Filtering
We aim to create a multimodal hate speech database where all the instances contain visual and textual information that we can later process to determine if a tweet is hate speech or not. But a considerable amount of the images of the selected tweets contain only textual information, such as screenshots of other tweets. To ensure that all the dataset instances contain both visual and textual information, we remove those tweets. To do that, we use TextFCN BIBREF22, BIBREF23 , a Fully Convolutional Network that produces a pixel wise text probability map of an image. We set empirical thresholds to discard images that have a substantial total text probability, filtering out $23\%$ of the collected tweets.
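To make the filtering step concrete, the sketch below (in Python, not part of the original pipeline) shows how a pixel-wise text probability map such as the one produced by TextFCN could be thresholded; the 0.25 cutoff and the `text_map` field name are illustrative assumptions, not the values used for MMHS150K.

```python
import numpy as np

def contains_mostly_text(text_prob_map: np.ndarray, threshold: float = 0.25) -> bool:
    """Decide whether an image is 'textual' given a pixel-wise text probability
    map with values in [0, 1], such as the one produced by TextFCN.

    The mean probability is used as a proxy for the total amount of text;
    the 0.25 threshold is an illustrative value, not the one used in the paper.
    """
    return float(text_prob_map.mean()) > threshold

def filter_textual_images(tweets):
    """Keep only tweets whose image is not dominated by rendered text.
    `tweets` is assumed to be a list of dicts with a precomputed "text_map" entry."""
    return [t for t in tweets if not contains_mostly_text(t["text_map"])]
```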
The MMHS150K dataset ::: Annotation
We annotate the gathered tweets using the crowdsourcing platform Amazon Mechanical Turk. There, we give the workers the definition of hate speech and show some examples to make the task clearer. We then show the tweet text and image and ask them to classify it into one of 6 categories: No attacks to any community, racist, sexist, homophobic, religion-based attacks or attacks to other communities. Each one of the $150,000$ tweets is labeled by 3 different workers to mitigate discrepancies among workers.
We received a lot of valuable feedback from the annotators. Most of them had understood the task correctly, but they were worried because of its subjectivity. This is indeed a subjective task, highly dependent on the annotator's convictions and sensitivity. However, we expect to get cleaner annotations the stronger the attack is, and these are the publications we are most interested in detecting. We also detected that several users annotate tweets as hate speech just by spotting slurs. As already said previously, just the use of particular words can be offensive to many people, but this is not the task we aim to solve. We have not included in our experiments those HITs that were completed in less than 3 seconds, understanding that it takes more time to grasp the multimodal context and make a decision.
We perform majority voting over the three annotations to get each tweet's category. In the end, we obtain $112,845$ not hate tweets and $36,978$ hate tweets. The latter are divided into $11,925$ racist, $3,495$ sexist, $3,870$ homophobic, 163 religion-based hate and $5,811$ other hate tweets (Fig. FIGREF17). In this work, we do not use hate sub-categories, and stick to the hate / not hate split. We separate balanced validation ($5,000$) and test ($10,000$) sets. The remaining tweets are used for training.
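As a minimal illustration of the labeling step, the following Python sketch performs the majority vote and the mapping to the binary hate / not hate split; the label strings and the tie-breaking behavior of `Counter` are assumptions, since the paper does not specify them.

```python
from collections import Counter

HATE_LABELS = {"racist", "sexist", "homophobic", "religion", "other_hate"}

def majority_vote(annotations):
    """Collapse the three worker annotations of a tweet into a single label.
    `annotations` is a list such as ["racist", "not_hate", "racist"]; ties are
    broken by whichever label Counter returns first (an assumption)."""
    label, _ = Counter(annotations).most_common(1)[0]
    return label

def to_binary(label):
    """Map the 6-category label to the hate / not hate split used in the paper."""
    return "hate" if label in HATE_LABELS else "not_hate"

print(to_binary(majority_vote(["racist", "not_hate", "racist"])))  # -> hate
```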
We also experimented using hate scores for each tweet computed given the different votes by the three annotators instead of binary labels. The results did not present significant differences to those shown in the experimental part of this work, but the raw annotations will be published nonetheless for further research.
As far as we know, this dataset is the biggest hate speech dataset to date, and the first multimodal hate speech dataset. One of its challenges is to distinguish between tweets that use the same key offensive words but do or do not constitute an attack on a community (hate speech). Fig. FIGREF18 shows the percentage of hate and not hate tweets for the top keywords.
Methodology ::: Unimodal Treatment ::: Images.
All images are resized such that their shortest side has 500 pixels. During training, online data augmentation is applied as random cropping of $299\times 299$ patches and mirroring. We use a CNN as the image feature extractor, namely an ImageNet BIBREF24 pre-trained Google Inception v3 architecture BIBREF25. The fine-tuning process of the Inception v3 layers aims to modify its weights to extract the features that, combined with the textual information, are optimal for hate speech detection.
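A hedged PyTorch/torchvision sketch of this preprocessing and feature-extraction step is shown below; the ImageNet normalization constants and the use of `nn.Identity` to expose the 2048-d pooled features are standard torchvision practice, not details taken from the paper.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

# Data augmentation as described: resize the shortest side to 500 px,
# then take a random 299x299 crop with horizontal mirroring.
train_transform = transforms.Compose([
    transforms.Resize(500),
    transforms.RandomCrop(299),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# ImageNet pre-trained Inception v3 used as a 2048-d feature extractor.
cnn = models.inception_v3(pretrained=True)
cnn.fc = nn.Identity()   # drop the ImageNet classifier head
cnn.eval()               # shown in eval mode; fine-tuning would keep train mode
                         # and also handle the auxiliary logits

with torch.no_grad():
    dummy_batch = torch.randn(4, 3, 299, 299)   # stand-in for a batch of tweet images
    visual_features = cnn(dummy_batch)          # -> shape (4, 2048)
```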
Methodology ::: Unimodal Treatment ::: Tweet Text.
We train a single-layer LSTM with a 150-dimensional hidden state for hate / not hate classification. The input dimensionality is set to 100 and GloVe BIBREF26 embeddings are used as word input representations. Since our dataset is not big enough to train a GloVe word embedding model, we used a pre-trained model that has been trained on two billion tweets. This ensures that the model will be able to produce word embeddings for slang and other words typically used on Twitter. To process the tweet text before generating the word embeddings, we use the same pipeline as the model authors, which includes generating symbols to encode Twitter special interactions such as user mentions (@user) or hashtags (#hashtag). To encode the tweet text and input it later to the multimodal models, we use the LSTM hidden state after processing the last tweet word. Since the LSTM has been trained for hate speech classification, it extracts the most useful information for this task from the text, which is encoded in the hidden state after inputting the last tweet word.
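The following PyTorch sketch shows one plausible implementation of this text encoder, assuming the GloVe lookup is done beforehand; hyperparameters follow the description above, while padding and vocabulary handling are omitted.

```python
import torch
import torch.nn as nn

class TweetTextEncoder(nn.Module):
    """Single-layer LSTM over 100-d (GloVe-style) word embeddings whose last
    hidden state is used both for hate / not hate classification and as the
    150-d text representation fed to the multimodal models."""

    def __init__(self, embedding_dim=100, hidden_dim=150, num_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(embedding_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, word_embeddings):
        # word_embeddings: (batch, seq_len, 100), already looked up from GloVe
        _, (h_n, _) = self.lstm(word_embeddings)
        text_repr = h_n[-1]              # (batch, 150): hidden state after the last word
        return self.classifier(text_repr), text_repr

encoder = TweetTextEncoder()
logits, text_repr = encoder(torch.randn(4, 20, 100))  # dummy batch: 4 tweets, 20 words each
```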
Methodology ::: Unimodal Treatment ::: Image Text.
The text in the image can also contain important information to decide if a publication is hate speech or not, so we extract it and also input it to our model. To do so, we use Google Vision API Text Detection module BIBREF27. We input the tweet text and the text from the image separately to the multimodal models, so it might learn different relations between them and between them and the image. For instance, the model could learn to relate the image text with the area in the image where the text appears, so it could learn to interpret the text in a different way depending on the location where it is written in the image. The image text is also encoded by the LSTM as the hidden state after processing its last word.
Methodology ::: Multimodal Architectures
The objective of this work is to build a hate speech detector that leverages both textual and visual data and detects hate speech publications based on the context given by both data modalities. To study how the multimodal context can boost the performance compared to an unimodal context we evaluate different models: a Feature Concatenation Model (FCM), a Spatial Concatenation Model (SCM) and a Textual Kernels Model (TKM). All of them are CNN+RNN models with three inputs: the tweet image, the tweet text and the text appearing in the image (if any).
Methodology ::: Multimodal Architectures ::: Feature Concatenation Model (FCM)
The image is fed to the Inception v3 architecture and the 2048-dimensional feature vector after the last average pooling layer is used as the visual representation. This vector is then concatenated with the 150-dimensional vectors of the LSTM last-word hidden states of the image text and the tweet text, resulting in a 2348-dimensional feature vector. This vector is then processed by three fully connected layers of decreasing dimensionality $(2348, 1024, 512)$, with corresponding batch normalization and ReLU layers, until the dimensions are reduced to two, the number of classes, in the last classification layer. The FCM architecture is illustrated in Fig. FIGREF26.
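A minimal PyTorch sketch of the FCM is given below; placing batch normalization and ReLU after each hidden fully connected layer is our reading of the description above, not necessarily the authors' exact layout.

```python
import torch
import torch.nn as nn

class FCM(nn.Module):
    """Feature Concatenation Model: concatenates the 2048-d Inception v3 vector
    with the two 150-d LSTM text encodings and classifies with three FC layers."""

    def __init__(self):
        super().__init__()
        dims = [2048 + 150 + 150, 1024, 512]          # 2348 -> 1024 -> 512
        layers = []
        for d_in, d_out in zip(dims[:-1], dims[1:]):
            layers += [nn.Linear(d_in, d_out), nn.BatchNorm1d(d_out), nn.ReLU()]
        self.mlp = nn.Sequential(*layers, nn.Linear(dims[-1], 2))  # final 512 -> 2 classes

    def forward(self, img_feat, tweet_text_feat, img_text_feat):
        x = torch.cat([img_feat, tweet_text_feat, img_text_feat], dim=1)  # (batch, 2348)
        return self.mlp(x)

model = FCM()
logits = model(torch.randn(4, 2048), torch.randn(4, 150), torch.randn(4, 150))
```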
Methodology ::: Multimodal Architectures ::: Spatial Concatenation Model (SCM)
Instead of using the last feature vector before classification in Inception v3 as the visual representation, in the SCM we use the $8\times 8\times 2048$ feature map after the last Inception module. Then we concatenate the 150-dimensional vectors encoding the tweet text and the tweet image text at each spatial location of that feature map. The resulting multimodal feature map is processed by two Inception-E blocks BIBREF28. After that, dropout and average pooling are applied and, as in the FCM model, three fully connected layers are used to reduce the dimensionality until the classification layer.
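The key operation of the SCM, tiling the two 150-d text vectors over the $8\times 8$ spatial grid and concatenating them with the visual feature map, can be sketched as follows with channels-first PyTorch tensors; the Inception-E blocks and the classifier are omitted.

```python
import torch

def spatial_concat(feature_map, tweet_text_feat, img_text_feat):
    """Concatenate the two 150-d text vectors at every spatial location of the
    8x8x2048 Inception feature map. Returns a (batch, 2348, 8, 8) multimodal
    feature map that would then be processed by the Inception-E blocks."""
    b, _, h, w = feature_map.shape
    tt = tweet_text_feat.view(b, -1, 1, 1).expand(-1, -1, h, w)  # (b, 150, 8, 8)
    it = img_text_feat.view(b, -1, 1, 1).expand(-1, -1, h, w)    # (b, 150, 8, 8)
    return torch.cat([feature_map, tt, it], dim=1)

mm_map = spatial_concat(torch.randn(4, 2048, 8, 8),
                        torch.randn(4, 150), torch.randn(4, 150))  # -> (4, 2348, 8, 8)
```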
Methodology ::: Multimodal Architectures ::: Textual Kernels Model (TKM)
The TKM design, inspired by BIBREF20 and BIBREF21, aims to capture interactions between the two modalities more expressively than concatenation models. As in the SCM we use the $8\times 8\times 2048$ feature map after the last Inception module as the visual representation. From the 150-dimensional vector encoding the tweet text, we learn $K_t$ text-dependent kernels using independent fully connected layers that are trained together with the rest of the model. The resulting $K_t$ text-dependent kernels will have dimensionality of $1\times 1\times 2048$. We do the same with the feature vector encoding the image text, learning $K_{it}$ kernels. The textual kernels are convolved with the visual feature map in the channel dimension at each spatial location, resulting in an $8\times 8\times (K_t+K_{it})$ multimodal feature map, and batch normalization is applied. Then, as in the SCM, the 150-dimensional vectors encoding the tweet text and the tweet image text are concatenated at each spatial location. The rest of the architecture is the same as in the SCM: two Inception-E blocks, dropout, average pooling and three fully connected layers until the classification layer. The number of tweet textual kernels $K_t$ and tweet image textual kernels $K_{it}$ is set to $K_t = 10$ and $K_{it} = 5$. The TKM architecture is illustrated in Fig. FIGREF29.
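A possible implementation of the textual-kernel step is sketched below; generating all $K$ kernels with a single linear layer and applying them with an einsum-based per-sample $1\times 1$ convolution is one reasonable reading of the description, not necessarily the authors' exact code.

```python
import torch
import torch.nn as nn

class TextualKernels(nn.Module):
    """Generates K text-dependent 1x1x2048 kernels from a 150-d text encoding and
    convolves them with the 8x8x2048 visual feature map (a per-sample 1x1
    convolution, implemented here with einsum), followed by batch normalization."""

    def __init__(self, text_dim=150, visual_channels=2048, num_kernels=10):
        super().__init__()
        self.num_kernels = num_kernels
        self.kernel_gen = nn.Linear(text_dim, num_kernels * visual_channels)
        self.bn = nn.BatchNorm2d(num_kernels)

    def forward(self, visual_map, text_feat):
        b, c, h, w = visual_map.shape                          # (b, 2048, 8, 8)
        kernels = self.kernel_gen(text_feat).view(b, self.num_kernels, c)
        # channel-wise dot product at every spatial location
        responses = torch.einsum('bkc,bchw->bkhw', kernels, visual_map)
        return self.bn(responses)                              # (b, K, 8, 8)

tkm_tweet = TextualKernels(num_kernels=10)   # K_t = 10 for the tweet text
tkm_imgtx = TextualKernels(num_kernels=5)    # K_it = 5 for the image text
out = tkm_tweet(torch.randn(4, 2048, 8, 8), torch.randn(4, 150))  # -> (4, 10, 8, 8)
```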
Methodology ::: Multimodal Architectures ::: Training
We train the multimodal models with a Cross-Entropy loss over Softmax activations and an ADAM optimizer with an initial learning rate of $1e-4$. Our dataset suffers from a high class imbalance, so we weight each sample's contribution to the loss to fully compensate for it. One of the goals of this work is to explore how each of the inputs contributes to the classification and to prove that the proposed model can learn concurrences between visual and textual data that are useful to improve the hate speech classification results on multimodal data. To do that, we train different models where all or only some inputs are available. When an input is not available, we set it to zeros, and we do the same when an image has no text.
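The sketch below illustrates the weighted cross-entropy setup with Adam on dummy data; the inverse-frequency weighting formula and the stand-in linear model are assumptions used only to keep the example self-contained.

```python
import torch
import torch.nn as nn

# Stand-in for any of the multimodal models above; a single linear layer over the
# concatenated (2048 + 150 + 150)-d features keeps the sketch self-contained.
model = nn.Linear(2348, 2)

# Class weights inversely proportional to class frequency, so the loss compensates
# for the imbalance (counts taken from the dataset statistics reported above).
counts = torch.tensor([112_845.0, 36_978.0])          # [not hate, hate]
class_weights = counts.sum() / (2.0 * counts)

criterion = nn.CrossEntropyLoss(weight=class_weights)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative step on dummy data; a real run would iterate over a DataLoader.
features = torch.randn(8, 2348)
labels = torch.randint(0, 2, (8,))

optimizer.zero_grad()
loss = criterion(model(features), labels)
loss.backward()
optimizer.step()
```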
Results
Table TABREF31 shows the F-score, the Area Under the ROC Curve (AUC) and the mean accuracy (ACC) of the proposed models when different inputs are available. $TT$ refers to the tweet text, $IT$ to the image text and $I$ to the image. It also shows results for the LSTM, for the Davidson method proposed in BIBREF7 trained with MMHS150K, and for random scores. Fig. FIGREF32 shows the Precision vs Recall plot and the ROC curve (which plots the True Positive Rate vs the False Positive Rate) of the different models.
First, notice that given the subjectivity of the task and the discrepancies between annotators, getting optimal scores in the evaluation metrics is virtually impossible. However, a system with relatively low metric scores can still be very useful for hate speech detection in a real application: it will fire on publications for which most annotators agree they are hate, which are often the stronger attacks. The proposed LSTM for detecting hate speech when only text is available gets similar results to the method presented in BIBREF7, which we trained with MMHS150K and the same splits. However, rather than substantially advancing the state of the art on hate speech detection in textual publications, our key purpose in this work is to introduce and work on its detection in multimodal publications. We use the LSTM because it provides a strong representation of the tweet texts.
The FCM trained only with images gets decent results, considering that in many publications the images might not give any useful information for the task. Fig. FIGREF33 shows some representative examples of the top hate and not hate scored images of this model. Many hate tweets are accompanied by demeaning nudity images that are sexist or homophobic. Other racist tweets are accompanied by images caricaturing black people. Finally, memes are also typically used in hate speech publications. The top scored images for not hate are portraits of people belonging to minorities. This is due to the use of slurs inside these communities without an offensive intention, such as the word nigga inside the African-American community or the word dyke inside the lesbian community. These results show that images can be effectively used to discriminate between offensive and non-offensive uses of those words.
Although the model trained only with images proves that they are useful for hate speech detection, the proposed multimodal models are not able to improve the detection compared to the textual models. Besides the different architectures, we have tried different training strategies, such as initializing the CNN weights with a model already trained solely with MMHS150K images or using dropout to force the multimodal models to use the visual information. Eventually, though, these models end up using almost only the text input for the prediction and producing very similar results to those of the textual models. The proposed multimodal models, such as TKM, have shown good performance in other tasks, such as VQA. Next, we analyze why they do not perform well in this task and with this data:
[noitemsep,leftmargin=*]
Noisy data. A major challenge of this task is the discrepancy between annotations due to subjective judgement. Although this also affects detection using only text, its repercussion is bigger in more complex tasks, such as detection using images or multimodal detection.
Complexity and diversity of multimodal relations. Hate speech multimodal publications employ a lot of background knowledge which makes the relations between visual and textual elements they use very complex and diverse, and therefore difficult to learn by a neural network.
Small set of multimodal examples. Fig. FIGREF5 shows some of the challenging multimodal hate examples that we aimed to detect. But although we have collected a big dataset of $150K$ tweets, the subset of multimodal hate examples in it is still too small to learn the complex multimodal relations needed to identify multimodal hate.
Conclusions
In this work we have explored the task of hate speech detection on multimodal publications. We have created MMHS150K, to our knowledge the biggest available hate speech dataset, and the first one composed of multimodal data, namely tweets formed by image and text. We have trained different textual, visual and multimodal models with that data, and found out that, despite the fact that images are useful for hate speech detection, the multimodal models do not outperform the textual models. Finally, we have analyzed the challenges of the proposed task and dataset. Given that most of the content in Social Media nowadays is multimodal, we truly believe in the importance of pushing this research forward. The code used in this work is available in . | Feature Concatenation Model (FCM), Spatial Concatenation Model (SCM), Textual Kernels Model (TKM)
53640834d68cf3b86cf735ca31f1c70aa0006b72 | 53640834d68cf3b86cf735ca31f1c70aa0006b72_0 | Q: Are all tweets in English?
Text: Introduction
Social Media platforms such as Facebook, Twitter or Reddit have empowered individuals' voices and facilitated freedom of expression. However, they have also been a breeding ground for hate speech and other types of online harassment. Hate speech is defined in legal literature as speech (or any form of expression) that expresses (or seeks to promote, or has the capacity to increase) hatred against a person or a group of people because of a characteristic they share, or a group to which they belong BIBREF0. Twitter develops this definition in its hateful conduct policy as promoting violence against, or directly attacking or threatening, other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease.
In this work we focus on hate speech detection. Due to the inherent complexity of this task, it is important to distinguish hate speech from other types of online harassment. In particular, although it might be offensive to many people, the sole presence of insulting terms does not itself signify or convey hate speech. Conversely, hate speech may denigrate or threaten an individual or a group of people without the use of any profanities. People from the African-American community, for example, often use the term nigga online, in everyday language, without malicious intentions to refer to folks within their community, and the word cunt is often used in non-hate-speech publications without any sexist purpose. The goal of this work is not to discuss whether racial slurs, such as nigga, should be pursued. The goal is to distinguish between publications using offensive terms and publications attacking communities, which we call hate speech.
Modern social media content usually includes images and text. Some of these multimodal publications are only hate speech because of the combination of the text with a certain image. That is because, as we have stated, the presence of offensive terms does not itself signify hate speech, and the presence of hate speech is often determined by the context of a publication. Moreover, users authoring hate speech tend to intentionally construct publications where the text is not enough to determine that they are hate speech. This happens especially on Twitter, where multimodal tweets are formed by an image and a short text, which in many cases is not enough to judge them. In those cases, the image might give extra context to make a proper judgement. Fig. FIGREF5 shows some such examples from MMHS150K.
The contributions of this work are as follows:
[noitemsep,leftmargin=*]
We propose the novel task of hate speech detection in multimodal publications, collect, annotate and publish a large scale dataset.
We evaluate state-of-the-art multimodal models on this specific task and compare their performance with unimodal detection. Even though images prove to be useful for hate speech detection, the proposed multimodal models do not outperform unimodal textual models.
We study the challenges of the proposed task, and open the field for future research.
Related Work ::: Hate Speech Detection
The literature on detecting hate speech on online textual publications is extensive. Schmidt and Wiegand BIBREF1 recently provided a good survey of it, where they review the terminology used over time, the features used, the existing datasets and the different approaches. However, the field lacks a consistent dataset and evaluation protocol to compare proposed methods. Saleem et al. BIBREF2 compare different classification methods detecting hate speech in Reddit and other forums. Wassem and Hovy BIBREF3 worked on hate speech detection on twitter, published a manually annotated dataset and studied its hate distribution. Later Wassem BIBREF4 extended the previous published dataset and compared amateur and expert annotations, concluding that amateur annotators are more likely than expert annotators to label items as hate speech. Park and Fung BIBREF5 worked on Wassem datasets and proposed a classification method using a CNN over Word2Vec BIBREF6 word embeddings, showing also classification results on racism and sexism hate sub-classes. Davidson et al. BIBREF7 also worked on hate speech detection on twitter, publishing another manually annotated dataset. They test different classifiers such as SVMs and decision trees and provide a performance comparison. Malmasi and Zampieri BIBREF8 worked on Davidson's dataset improving his results using more elaborated features. ElSherief et al. BIBREF9 studied hate speech on twitter and selected the most frequent terms in hate tweets based on Hatebase, a hate expression repository. They propose a big hate dataset but it lacks manual annotations, and all the tweets containing certain hate expressions are considered hate speech. Zhang et al. BIBREF10 recently proposed a more sophisticated approach for hate speech detection, using a CNN and a GRU BIBREF11 over Word2Vec BIBREF6 word embeddings. They show experiments in different datasets outperforming previous methods. Next, we summarize existing hate speech datasets:
[noitemsep,leftmargin=*]
RM BIBREF10: Formed by $2,435$ tweets discussing Refugees and Muslims, annotated as hate or non-hate.
DT BIBREF7: Formed by $24,783$ tweets annotated as hate, offensive language or neither. In our work, offensive language tweets are considered as non-hate.
WZ-LS BIBREF5: A combination of Wassem datasets BIBREF4, BIBREF3 labeled as racism, sexism, neither or both that make a total of $18,624$ tweets.
Semi-Supervised BIBREF9: Contains $27,330$ general hate speech Twitter tweets crawled in a semi-supervised manner.
Although modern social media publications often include images, few contributions exist that exploit visual information. Zhong et al. BIBREF12 worked on classifying Instagram images as potential cyberbullying targets, exploiting both the image content, the image caption and the comments. However, their visual information processing is limited to the use of features extracted by a pre-trained CNN, the use of which does not achieve any improvement. Hosseinmardi et al. BIBREF13 also address the problem of detecting cyberbullying incidents on Instagram exploiting both textual and image content. But, again, their visual information processing is limited to using the features of a pre-trained CNN, and the improvement when using visual features for cyberbullying classification is only 0.01%.
Related Work ::: Visual and Textual Data Fusion
A typical task in multimodal visual and textual analysis is to learn an alignment between feature spaces. To do that, usually a CNN and an RNN are trained jointly to learn a joint embedding space from aligned multimodal data. This approach is applied in tasks such as image captioning BIBREF14, BIBREF15 and multimodal image retrieval BIBREF16, BIBREF17. On the other hand, instead of explicitly learning an alignment between two spaces, the goal of Visual Question Answering (VQA) is to merge both data modalities in order to decide which answer is correct. This problem requires modeling very precise correlations between the image and the question representations. The VQA task requirements are similar to our hate speech detection problem in multimodal publications, where we have a visual and a textual input and we need to combine both sources of information to understand the global context and make a decision. We thus take inspiration from the VQA literature for the tested models. Early VQA methods BIBREF18 fuse textual and visual information by feature concatenation. Later methods, such as Multimodal Compact Bilinear pooling BIBREF19, utilize bilinear pooling to learn multimodal features. An important limitation of these methods is that the multimodal features are fused in the later model stages, so the textual and visual relationships are modeled only in the last layers. Another limitation is that the visual features are obtained by representing the output of the CNN as a one-dimensional vector, which loses the spatial information of the input images. In a recent work, Gao et al. BIBREF20 propose a feature fusion scheme to overcome these limitations. They learn convolution kernels from the textual information –which they call question-guided kernels– and convolve them with the visual information in an earlier stage to get the multimodal features. Margffoy-Tuay et al. BIBREF21 use a similar approach to combine visual and textual information, but they address a different task: instance segmentation guided by natural language queries. We draw inspiration from these latest feature fusion works to build the models for hate speech detection.
The MMHS150K dataset
Existing hate speech datasets contain only textual data. Moreover, a reference benchmark does not exist. Most of the published datasets are crawled from Twitter and distributed as tweet IDs but, since Twitter removes reported user accounts, a significant amount of their hate tweets is no longer accessible. We create a new manually annotated multimodal hate speech dataset formed by $150,000$ tweets, each one of them containing text and an image. We call the dataset MMHS150K, and make it available online. In this section, we explain the dataset creation steps.
The MMHS150K dataset ::: Tweets Gathering
We used the Twitter API to gather real-time tweets from September 2018 until February 2019, selecting the ones containing any of the 51 Hatebase terms that are more common in hate speech tweets, as studied in BIBREF9. We filtered out retweets, tweets containing less than three words and tweets containing porn related terms. From that selection, we kept the ones that included images and downloaded them. Twitter applies hate speech filters and other kinds of content control based on its policy, although the supervision is based on users' reports. Therefore, as we are gathering tweets from real-time posting, the content we get has not yet passed any filter.
The MMHS150K dataset ::: Textual Image Filtering
We aim to create a multimodal hate speech database where all the instances contain visual and textual information that we can later process to determine if a tweet is hate speech or not. But a considerable amount of the images of the selected tweets contain only textual information, such as screenshots of other tweets. To ensure that all the dataset instances contain both visual and textual information, we remove those tweets. To do that, we use TextFCN BIBREF22, BIBREF23 , a Fully Convolutional Network that produces a pixel wise text probability map of an image. We set empirical thresholds to discard images that have a substantial total text probability, filtering out $23\%$ of the collected tweets.
The MMHS150K dataset ::: Annotation
We annotate the gathered tweets using the crowdsourcing platform Amazon Mechanical Turk. There, we give the workers the definition of hate speech and show some examples to make the task clearer. We then show the tweet text and image and ask them to classify it into one of 6 categories: No attacks to any community, racist, sexist, homophobic, religion-based attacks or attacks to other communities. Each one of the $150,000$ tweets is labeled by 3 different workers to mitigate discrepancies among workers.
We received a lot of valuable feedback from the annotators. Most of them had understood the task correctly, but they were worried because of its subjectivity. This is indeed a subjective task, highly dependent on the annotator's convictions and sensitivity. However, we expect to get cleaner annotations the stronger the attack is, and these are the publications we are most interested in detecting. We also detected that several users annotate tweets as hate speech just by spotting slurs. As already said previously, just the use of particular words can be offensive to many people, but this is not the task we aim to solve. We have not included in our experiments those HITs that were completed in less than 3 seconds, understanding that it takes more time to grasp the multimodal context and make a decision.
We perform majority voting over the three annotations to get each tweet's category. In the end, we obtain $112,845$ not hate tweets and $36,978$ hate tweets. The latter are divided into $11,925$ racist, $3,495$ sexist, $3,870$ homophobic, 163 religion-based hate and $5,811$ other hate tweets (Fig. FIGREF17). In this work, we do not use hate sub-categories, and stick to the hate / not hate split. We separate balanced validation ($5,000$) and test ($10,000$) sets. The remaining tweets are used for training.
We also experimented using hate scores for each tweet computed given the different votes by the three annotators instead of binary labels. The results did not present significant differences to those shown in the experimental part of this work, but the raw annotations will be published nonetheless for further research.
As far as we know, this dataset is the biggest hate speech dataset to date, and the first multimodal hate speech dataset. One of its challenges is to distinguish between tweets that use the same key offensive words but do or do not constitute an attack on a community (hate speech). Fig. FIGREF18 shows the percentage of hate and not hate tweets for the top keywords.
Methodology ::: Unimodal Treatment ::: Images.
All images are resized such that their shortest side has 500 pixels. During training, online data augmentation is applied as random cropping of $299\times 299$ patches and mirroring. We use a CNN as the image feature extractor, namely an ImageNet BIBREF24 pre-trained Google Inception v3 architecture BIBREF25. The fine-tuning process of the Inception v3 layers aims to modify its weights to extract the features that, combined with the textual information, are optimal for hate speech detection.
Methodology ::: Unimodal Treatment ::: Tweet Text.
We train a single-layer LSTM with a 150-dimensional hidden state for hate / not hate classification. The input dimensionality is set to 100 and GloVe BIBREF26 embeddings are used as word input representations. Since our dataset is not big enough to train a GloVe word embedding model, we used a pre-trained model that has been trained on two billion tweets. This ensures that the model will be able to produce word embeddings for slang and other words typically used on Twitter. To process the tweet text before generating the word embeddings, we use the same pipeline as the model authors, which includes generating symbols to encode Twitter special interactions such as user mentions (@user) or hashtags (#hashtag). To encode the tweet text and input it later to the multimodal models, we use the LSTM hidden state after processing the last tweet word. Since the LSTM has been trained for hate speech classification, it extracts the most useful information for this task from the text, which is encoded in the hidden state after inputting the last tweet word.
Methodology ::: Unimodal Treatment ::: Image Text.
The text in the image can also contain important information to decide if a publication is hate speech or not, so we extract it and also input it to our model. To do so, we use Google Vision API Text Detection module BIBREF27. We input the tweet text and the text from the image separately to the multimodal models, so it might learn different relations between them and between them and the image. For instance, the model could learn to relate the image text with the area in the image where the text appears, so it could learn to interpret the text in a different way depending on the location where it is written in the image. The image text is also encoded by the LSTM as the hidden state after processing its last word.
Methodology ::: Multimodal Architectures
The objective of this work is to build a hate speech detector that leverages both textual and visual data and detects hate speech publications based on the context given by both data modalities. To study how the multimodal context can boost the performance compared to an unimodal context we evaluate different models: a Feature Concatenation Model (FCM), a Spatial Concatenation Model (SCM) and a Textual Kernels Model (TKM). All of them are CNN+RNN models with three inputs: the tweet image, the tweet text and the text appearing in the image (if any).
Methodology ::: Multimodal Architectures ::: Feature Concatenation Model (FCM)
The image is fed to the Inception v3 architecture and the 2048-dimensional feature vector after the last average pooling layer is used as the visual representation. This vector is then concatenated with the 150-dimensional vectors of the LSTM last-word hidden states of the image text and the tweet text, resulting in a 2348-dimensional feature vector. This vector is then processed by three fully connected layers of decreasing dimensionality $(2348, 1024, 512)$, with corresponding batch normalization and ReLU layers, until the dimensions are reduced to two, the number of classes, in the last classification layer. The FCM architecture is illustrated in Fig. FIGREF26.
Methodology ::: Multimodal Architectures ::: Spatial Concatenation Model (SCM)
Instead of using the last feature vector before classification in Inception v3 as the visual representation, in the SCM we use the $8\times 8\times 2048$ feature map after the last Inception module. Then we concatenate the 150-dimensional vectors encoding the tweet text and the tweet image text at each spatial location of that feature map. The resulting multimodal feature map is processed by two Inception-E blocks BIBREF28. After that, dropout and average pooling are applied and, as in the FCM model, three fully connected layers are used to reduce the dimensionality until the classification layer.
Methodology ::: Multimodal Architectures ::: Textual Kernels Model (TKM)
The TKM design, inspired by BIBREF20 and BIBREF21, aims to capture interactions between the two modalities more expressively than concatenation models. As in the SCM we use the $8\times 8\times 2048$ feature map after the last Inception module as the visual representation. From the 150-dimensional vector encoding the tweet text, we learn $K_t$ text-dependent kernels using independent fully connected layers that are trained together with the rest of the model. The resulting $K_t$ text-dependent kernels will have dimensionality of $1\times 1\times 2048$. We do the same with the feature vector encoding the image text, learning $K_{it}$ kernels. The textual kernels are convolved with the visual feature map in the channel dimension at each spatial location, resulting in an $8\times 8\times (K_t+K_{it})$ multimodal feature map, and batch normalization is applied. Then, as in the SCM, the 150-dimensional vectors encoding the tweet text and the tweet image text are concatenated at each spatial location. The rest of the architecture is the same as in the SCM: two Inception-E blocks, dropout, average pooling and three fully connected layers until the classification layer. The number of tweet textual kernels $K_t$ and tweet image textual kernels $K_{it}$ is set to $K_t = 10$ and $K_{it} = 5$. The TKM architecture is illustrated in Fig. FIGREF29.
Methodology ::: Multimodal Architectures ::: Training
We train the multimodal models with a Cross-Entropy loss over Softmax activations and an ADAM optimizer with an initial learning rate of $1e-4$. Our dataset suffers from a high class imbalance, so we weight each sample's contribution to the loss to fully compensate for it. One of the goals of this work is to explore how each of the inputs contributes to the classification and to prove that the proposed model can learn concurrences between visual and textual data that are useful to improve the hate speech classification results on multimodal data. To do that, we train different models where all or only some inputs are available. When an input is not available, we set it to zeros, and we do the same when an image has no text.
Results
Table TABREF31 shows the F-score, the Area Under the ROC Curve (AUC) and the mean accuracy (ACC) of the proposed models when different inputs are available. $TT$ refers to the tweet text, $IT$ to the image text and $I$ to the image. It also shows results for the LSTM, for the Davidson method proposed in BIBREF7 trained with MMHS150K, and for random scores. Fig. FIGREF32 shows the Precision vs Recall plot and the ROC curve (which plots the True Positive Rate vs the False Positive Rate) of the different models.
First, notice that given the subjectivity of the task and the discrepancies between annotators, getting optimal scores in the evaluation metrics is virtually impossible. However, a system with relatively low metric scores can still be very useful for hate speech detection in a real application: it will fire on publications for which most annotators agree they are hate, which are often the stronger attacks. The proposed LSTM for detecting hate speech when only text is available gets similar results to the method presented in BIBREF7, which we trained with MMHS150K and the same splits. However, rather than substantially advancing the state of the art on hate speech detection in textual publications, our key purpose in this work is to introduce and work on its detection in multimodal publications. We use the LSTM because it provides a strong representation of the tweet texts.
The FCM trained only with images gets decent results, considering that in many publications the images might not give any useful information for the task. Fig. FIGREF33 shows some representative examples of the top hate and not hate scored images of this model. Many hate tweets are accompanied by demeaning nudity images that are sexist or homophobic. Other racist tweets are accompanied by images caricaturing black people. Finally, memes are also typically used in hate speech publications. The top scored images for not hate are portraits of people belonging to minorities. This is due to the use of slurs inside these communities without an offensive intention, such as the word nigga inside the African-American community or the word dyke inside the lesbian community. These results show that images can be effectively used to discriminate between offensive and non-offensive uses of those words.
Although the model trained only with images proves that they are useful for hate speech detection, the proposed multimodal models are not able to improve the detection compared to the textual models. Besides the different architectures, we have tried different training strategies, such as initializing the CNN weights with a model already trained solely with MMHS150K images or using dropout to force the multimodal models to use the visual information. Eventually, though, these models end up using almost only the text input for the prediction and producing very similar results to those of the textual models. The proposed multimodal models, such as TKM, have shown good performance in other tasks, such as VQA. Next, we analyze why they do not perform well in this task and with this data:
[noitemsep,leftmargin=*]
Noisy data. A major challenge of this task is the discrepancy between annotations due to subjective judgement. Although this also affects detection using only text, its repercussion is bigger in more complex tasks, such as detection using images or multimodal detection.
Complexity and diversity of multimodal relations. Hate speech multimodal publications employ a lot of background knowledge which makes the relations between visual and textual elements they use very complex and diverse, and therefore difficult to learn by a neural network.
Small set of multimodal examples. Fig. FIGREF5 shows some of the challenging multimodal hate examples that we aimed to detect. But although we have collected a big dataset of $150K$ tweets, the subset of multimodal hate examples in it is still too small to learn the complex multimodal relations needed to identify multimodal hate.
Conclusions
In this work we have explored the task of hate speech detection on multimodal publications. We have created MMHS150K, to our knowledge the biggest available hate speech dataset, and the first one composed of multimodal data, namely tweets formed by image and text. We have trained different textual, visual and multimodal models with that data, and found out that, despite the fact that images are useful for hate speech detection, the multimodal models do not outperform the textual models. Finally, we have analyzed the challenges of the proposed task and dataset. Given that most of the content in Social Media nowadays is multimodal, we truly believe in the importance of pushing this research forward. The code used in this work is available in . | Unanswerable